# Predicting Nanorobot Shapes via Generative Models
Emma Benjaminson
Mechanical Eng.
Carnegie Mellon University
Pittsburgh, PA 15213
<EMAIL_ADDRESS>
Rebecca E. Taylor
Mechanical Eng.
Biomedical Eng.
Electrical & Computer Eng.
Carnegie Mellon University
Pittsburgh, PA 15213
<EMAIL_ADDRESS>
Matthew Travers
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
<EMAIL_ADDRESS>
###### Abstract
The field of DNA nanotechnology has made it possible to assemble, with high
yields, different structures that have actionable properties. For example,
researchers have created components that can be actuated, used to sense (e.g.,
changes in pH), or to store and release loads. An exciting next step is to
combine these components into multifunctional nanorobots that could
potentially perform complex tasks, such as swimming to a target location in
the human body, detecting an adverse reaction, and then releasing a drug load
to stop it. However, as we start to assemble more complex nanorobots, the yield of the
desired nanorobot begins to decrease as the number of possible component
combinations increases. Therefore, the ultimate goal of this work is to
develop a predictive model to maximize yield. However, training predictive
models typically requires a large dataset. For the nanorobots we are
interested in assembling, this will be difficult to collect. This is because
high-fidelity data, which allows us to exactly characterize the shape and size
of individual structures, is very time-consuming to collect, whereas low-
fidelity data is readily available but only captures bulk statistics for
different processes. Therefore, this work combines low- and high-fidelity data
to train a generative model using a two-step process. We first use a
relatively small, high-fidelity dataset to train a generative model. Then at
run time, the model takes low-fidelity data and uses it to approximate the
high-fidelity content. We do this by biasing the model towards samples with
specific properties as measured by low-fidelity data. In this work we bias our
distribution towards a desired node degree of a graphical model that we take
as a surrogate representation of the nanorobots that this work will ultimately
focus on. We have not yet accumulated a high-fidelity dataset of nanorobots,
so we leverage the MolGAN architecture [1] and the QM9 small molecule dataset
[2-3] to demonstrate our approach.
## 1 Introduction
The field of DNA nanotechnology has generated a rich array of nanoscale
designs for applications such as moving through fluids [4], sensing changes in
pH [5-6] and storing drug loads [7]. We hypothesize that we could combine
components with individual capabilities into multifunctional nanorobots that
could potentially perform complex tasks, such as swimming through the human body to a
target site, detecting an adverse reaction, and then releasing a large drug
load to stop the reaction from continuing. We could build these nanorobots by
combining multiple species of functional components in solution under specific
manufacturing conditions, such as temperature and ion concentration. However,
research suggests that as we build more complex nanorobots, the yields will
decrease [8]. The engineering challenge here is to learn to optimize the
manufacturing conditions to maximize yield, and we can accelerate this
learning process with a predictive model.
However, training predictive models typically requires a large dataset, which
will be difficult to collect in this application. This is because the
dimensions of nanoscale objects are below the diffraction limit of light, so
we must use time-consuming techniques like superresolution microscopy, atomic
force microscopy (AFM) or transmission electron microscopy (TEM) to image the
nanorobots directly. The alternative to using these high-fidelity methods is
to use low-fidelity characterization techniques like spectrophotometry.
Spectrophotometry uses light to characterize sample composition and can be
used to detect the number of unhybridized DNA strands in solution, which can
be correlated to how many connected neighbors a component has. If we represent
the structure of a nanorobot as a graph, we may be able to relate the
spectrophotometry data to the average node degree of that graph. Therefore,
while spectrophotometry data is faster to collect than AFM or TEM, it can only
tell us how many connections there are between components in a nanorobot, but
not the types of components present in the nanorobot, nor its overall
topology. Note that spectrophotometry techniques are not the only low-fidelity
tools available to us; we could also use gel electrophoresis or fluorescence
microscopy, among others. In future work, we will conduct experiments to understand
what characteristics we can reliably observe about our nanorobots from low-
fidelity data, and we will use those metrics to bias our model’s output using
the same approach as the one described here.
We propose to bridge the gap between time-consuming imaging techniques and
fast, yet low-fidelity characterizations by combining them to train a
generative model in a two-step process. We believe that generative
models are an appropriate choice because they can learn to represent a
distribution and make novel predictions by biasing the learned distribution
towards desired characteristics. Our two-step training process, as shown in
Figure 1, will first train the model on a relatively small dataset of high-
fidelity data. (Based on work in [1], we estimate we will only need a few
thousand example nanorobots.) Once the model has learned to approximate the
distribution of nanorobot shapes, the second step will be to collect only low-
fidelity data for subsequent experiments. This data will then be used to bias
the distribution towards nanorobots with a desired average node degree.
Figure 1: An overview of our two-step training approach that uses high-
fidelity (AFM) data and low-fidelity (spectrophotometry data) to train a GAN
with an additional reward network. (Credit to Ying Liu for AFM image.)
While there are many examples of generative models that incorporate a
mechanism for adding bias to their outputs [1, 9-14], we selected the MolGAN
architecture presented by De Cao and Kipf for our preliminary experiments [1].
The MolGAN framework was designed to learn a distribution of small molecules
(maximum 9 atoms) from the QM9 dataset [2-3], represented as graphs. MolGAN
linearly combines the typical loss function for the generator with a
reinforcement learning (RL) objective in order to bias the generator to output
novel molecules that meet certain criteria, such as solubility or
druglikeliness (a term coined in [15]) [1].
We present our preliminary implementation of a generative model, as inspired
by MolGAN, using a novel reward objective. We compare our model’s performance
to the original MolGAN using the same QM9 dataset employed by De Cao and Kipf
[1]. Note that we use the QM9 dataset [2-3] as a surrogate for our DNA
nanorobots because we do not yet have a dataset of nanorobot structures. Since
we intend to represent our nanorobots as graphs, the QM9 dataset is a good
surrogate because it also uses a graph-based representation. This allows us to
test our novel reward objective, which is designed to bias the generator
towards returning graphs with a desired average node degree. In this work we
explore two aspects of the model’s performance that will be of particular
importance moving forwards: (1) how well we can bias our model’s output and
(2) how diverse the generated samples are when we are biasing the model.
## 2 Key Concepts from Prior Work
In this section, we present key concepts from the literature that we used in
our implementation. First, we define a graph as a set of nodes, $V$, connected
by a set of edges, $E$ which together represent a graph, $G=(V,E)$. Following
De Cao and Kipf, we can encode information about the nodes and edges in one-
hot tensors. For example, the nodes are described by a feature matrix, $X$,
which has size $N\times T$, where $N$ is the number of nodes in the graph, and
$T$ is the number of different types of nodes. Similarly, the edges can be
described by an adjacency tensor, $A$, of size $N\times N\times B$, where $B$
is the number of different edge types [1].
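As a concrete (and entirely invented) illustration of this encoding, the following NumPy sketch builds $X$ and $A$ for a toy 3-node graph; the node types, edge types, and the graph itself are our assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical example: 3 nodes with T = 2 node types and B = 2 edge types.
N, T, B = 3, 2, 2

# Feature matrix X (N x T): one one-hot row per node.
# Nodes 0 and 1 are type 0; node 2 is type 1.
node_types = [0, 0, 1]
X = np.zeros((N, T))
X[np.arange(N), node_types] = 1

# Adjacency tensor A (N x N x B): one-hot over edge types.
# Edge (0, 1) has type 0; edge (1, 2) has type 1. The graph is undirected.
A = np.zeros((N, N, B))
for i, j, b in [(0, 1, 0), (1, 2, 1)]:
    A[i, j, b] = A[j, i, b] = 1

print(X.shape, A.shape)  # (3, 2) (3, 3, 2)
```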
### 2.1 Wasserstein Generative Adversarial Networks with Gradient Penalty
(WGAN-GP)
The MolGAN architecture uses a variant on the original GAN [16] that minimizes
the Wasserstein-1 distance instead of the Kullback-Leibler divergence; this
variant is called a WGAN [17]. The advantage of the WGAN is that it is less
prone to mode collapse than the original GAN because the Wasserstein-1
distance function is much less susceptible to the vanishing gradient problem
[17].
The Wasserstein-1 distance function is intractable to compute, so it is
approximated using the Kantorovich-Rubinstein duality [17-18]. This
approximation requires a K-Lipschitz function, i.e., one whose gradient norm
is, by definition, bounded. Gulrajani et al. suggested using a gradient
penalty to enforce this constraint, giving rise to the WGAN-GP [19]. This WGAN-GP model is used in
MolGAN, and allows us to define a new loss function for the discriminator as
follows [1]:
$L\big{(}\textbf{x}^{(i)},G_{\theta}(\textbf{z}^{(i)});\phi\big{)}=-D_{\phi}(\textbf{x}^{(i)})+D_{\phi}(G_{\theta}(\textbf{z}^{(i)}))+\alpha\bigg{(}||\nabla_{\hat{\textbf{x}}^{(i)}}D_{\phi}(\hat{\textbf{x}}^{(i)})||-1\bigg{)}^{2}$
(1)
where $D_{\phi}$ is the discriminator, $G_{\theta}$ is the generator, and
$\hat{\textbf{x}}^{(i)}$ is a linear combination of real and fake samples,
$\hat{\textbf{x}}^{(i)}=\epsilon\mathbf{x}^{(i)}+(1-\epsilon)G_{\theta}(\mathbf{z}^{(i)})$.
The random seed is $\textbf{z}^{(i)}\sim\mathcal{N}(0,1)$ and
$\epsilon\sim\mathcal{U}(0,1)$ [1]. We set $\alpha=10$ following Gulrajani et
al.’s recommendation [19].
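To make Eq. (1) concrete, here is a minimal NumPy sketch of the interpolated sample and the gradient-penalty term for a single example, using a toy linear function as a stand-in for the discriminator $D_{\phi}$; the stand-in model and all dimensions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the discriminator: D(x) = w . x, so grad_x D(x) = w.
w = rng.normal(size=4)
def D(x):
    return w @ x
def grad_D(x):
    return w  # gradient of a linear function is constant

x_real = rng.normal(size=4)                # real sample x^(i)
x_fake = rng.normal(size=4)                # generated sample G(z^(i))
eps = rng.uniform()                        # epsilon ~ U(0, 1)
x_hat = eps * x_real + (1 - eps) * x_fake  # interpolated sample x_hat^(i)

alpha = 10.0  # gradient-penalty weight, following Gulrajani et al. [19]
penalty = alpha * (np.linalg.norm(grad_D(x_hat)) - 1) ** 2

# Full critic loss for this single sample, term by term as in Eq. (1):
loss = -D(x_real) + D(x_fake) + penalty
```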
### 2.2 Relational Graph Convolutional Networks (R-GCN)
We use graph convolution in both the discriminator and reward networks as a
means of finding salient graph features that help each network to categorize
graphs as real or fake (in the case of the discriminator) and to attach the
correct score to each graph for meeting the design criteria (in the case of
the reward function). In MolGAN, De Cao and Kipf use a variant of the
Relational Graph Convolutional Network (R-GCN) by Schlichtkrull et al. [20-21]
to obtain the following convolution function [1]:
$H_{i}^{(l+1)}=\tanh\bigg{(}\sum_{j=1}^{N}\sum_{y=1}^{Y}\frac{\tilde{A}_{ijy}}{|\mathcal{N}_{i}|}f_{y}^{(l)}(H_{j}^{(l)},x_{i})+f_{s}^{(l)}(H_{i}^{(l)},x_{i})\bigg{)}$
(2)
$H_{i}^{(l+1)}$ represents the output of a graph convolution layer. The first
term sums up the graph convolution for every node, $x_{i}$, in the previous
layer, $H_{j}^{(l)}$, with its neighbors in $\tilde{A}$, a modified version of
the adjacency matrix. Dividing by $|\mathcal{N}_{i}|$ (the number of neighbors
of the $i$-th node) normalizes the output. The function $f_{y}^{(l)}$ is from
a family of affine functions for every edge type, and it performs the graph
convolution step in the R-GCN. Similarly, $f_{s}$ is a function that adds
self-loops to the output of each layer [20].
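A minimal NumPy sketch of Eq. (2), with single linear maps standing in for the affine families $f_{y}^{(l)}$ and $f_{s}^{(l)}$ and an invented random graph; all sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes: N nodes, Y edge types, input/output feature dimensions.
N, Y, d_in, d_out = 4, 2, 3, 5
A = rng.integers(0, 2, size=(N, N, Y)).astype(float)  # toy adjacency tensor
H = rng.normal(size=(N, d_in))                        # node features H^(l)
W = rng.normal(size=(Y, d_in, d_out))                 # one linear map per f_y
W_s = rng.normal(size=(d_in, d_out))                  # self-connection map f_s

# |N_i|: number of neighbors of each node, counted over all edge types
# (clipped at 1 to avoid division by zero for isolated nodes).
n_neighbors = np.maximum(A.sum(axis=(1, 2)), 1)

# Eq. (2): propagate transformed neighbor features per edge type, normalize
# by |N_i|, add the self-connection term, and squash with tanh.
msg = sum(A[:, :, y] @ (H @ W[y]) for y in range(Y))
H_next = np.tanh(msg / n_neighbors[:, None] + H @ W_s)

print(H_next.shape)  # (4, 5)
```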
We must aggregate the output of the final layer of the R-GCN into a scalar
value for the discriminator and the reward networks. We use a soft-attention
mechanism to obtain an "attention glimpse," i.e. a scalar, as follows [1]:
$H_{\mathcal{G}}=\tanh\bigg{(}\sum_{v\in\mathcal{V}}\sigma(i(H_{v}^{(l)},x_{v}))\odot\tanh(j(H_{v}^{(l)},x_{v}))\bigg{)}$
(3)
where $H_{\mathcal{G}}$ is the scalar output of the soft-attention mechanism
as applied to the entire graph, $\mathcal{G}$. Two different multilayer
perceptrons are represented by $i$ and $j$, $\sigma$ is the sigmoid activation
function, and $\odot$ is element-wise multiplication [1].
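A sketch of Eq. (3) in NumPy, with single linear maps to one dimension standing in for the MLPs $i$ and $j$ so the result is a scalar; these stand-ins are our simplification, not the networks used in MolGAN.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

N, d = 4, 5
H = rng.normal(size=(N, d))  # final-layer node embeddings H_v^(l)

# Toy stand-ins for the two MLPs i and j: single linear maps to a scalar.
w_i, w_j = rng.normal(size=d), rng.normal(size=d)

# Eq. (3): gate each node's contribution with a sigmoid "attention" weight,
# multiply element-wise, sum over all nodes, and squash with tanh.
H_G = np.tanh(np.sum(sigmoid(H @ w_i) * np.tanh(H @ w_j)))
```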
### 2.3 Deep Deterministic Policy Gradient (DDPG) and the Generator Loss
Function
De Cao and Kipf framed the graph generation process as a one-step RL episode
where the actor (i.e. the generator) must choose an action (i.e. an output
graph) from a high-dimensional action space. Presented this way, the deep
deterministic policy gradient (DDPG) algorithm [22] is an appropriate method
of learning to optimize the actor’s policy because it is capable of operating
in high-dimensional spaces with neural networks. De Cao and Kipf cast the
generator as the policy of the MolGAN agent, and the reward function serves as
the critic in this paradigm. The authors use a streamlined version of the DDPG
approach, omitting experience replay and target networks, because the data are
i.i.d. in this setting [1].
Ultimately, we can write the loss function for the generator as follows [1]:
$L(\mathbf{z}^{(i)};\theta)=\lambda\big{(}-D_{\phi}(G_{\theta}(z))\big{)}+(1-\lambda)\big{(}-\hat{R}_{\psi}(G_{\theta}(z))\big{)}$
(4)
where $\hat{R}_{\psi}$ is the output of the reward network, and $\lambda$ is a
hyperparameter for tuning the trade-off between the WGAN component and the RL
component of the generator’s loss [1].
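Eq. (4) itself reduces to a simple weighted combination; the sketch below (our illustration) shows the trade-off that $\lambda$ controls.

```python
def generator_loss(d_score, r_score, lam):
    """Eq. (4): trade off the WGAN term against the RL (reward) term.

    d_score: discriminator output D_phi(G_theta(z))
    r_score: reward-network output R_hat_psi(G_theta(z))
    lam:     lambda in [0, 1]; lam = 1 recovers a pure WGAN generator loss,
             lam = 0 optimizes the reward objective alone.
    """
    return lam * (-d_score) + (1 - lam) * (-r_score)

# At lambda = 0 only the reward term matters (the setting used in Sec. 3.3).
print(generator_loss(d_score=0.7, r_score=0.9, lam=0.0))  # -0.9
```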
## 3 Results
### 3.1 Novel Reward Objective
The MolGAN architecture uses a neural network to approximate a specific reward
function provided by the user. This function approximator provides a gradient
that the generator can follow to maximize reward. The reward network must
minimize the mean squared error between its predictions and the actual outputs
from the reward function ($R$) for both real and fake data [1]:
$L\big{(}G_{\theta}(\textbf{z}^{(i)}),\mathbf{x}^{(i)};\psi\big{)}=\bigg{(}\hat{R}_{\psi}\big{(}G_{\theta}(\textbf{z}^{(i)})\big{)}-R\big{(}G_{\theta}(\textbf{z}^{(i)})\big{)}\bigg{)}^{2}+\bigg{(}\hat{R}_{\psi}(\mathbf{x}^{(i)})-R(\mathbf{x}^{(i)})\bigg{)}^{2}$
(5)
In this paper we created a novel reward function that computes the average
node degree for every graph in a set. The function assigns a score in the
range [0, 1] based on how closely the average node degree for a set of graphs
matches the desired value, similar to the MolGAN approach [1].
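A sketch of such a reward function, assuming the same $N\times N\times B$ adjacency encoding introduced in Section 2; the paper does not specify the exact scoring curve, so the $1/(1+|\text{error}|)$ form used here is our assumption, chosen only because it lies in [0, 1] and peaks when the batch-average degree equals the target.

```python
import numpy as np

def average_node_degree(A):
    """Average node degree of one graph given its adjacency tensor A (N x N x B)."""
    degrees = A.sum(axis=(1, 2))  # per-node edge count over all edge types
    return degrees.mean()

def degree_reward(graphs, d_target):
    """Score a batch of graphs in [0, 1] by how closely the batch-average
    node degree matches d_target. The 1/(1 + |error|) curve is our assumption;
    the paper states only that the score lies in [0, 1]."""
    avg_d = np.mean([average_node_degree(A) for A in graphs])
    return 1.0 / (1.0 + abs(avg_d - d_target))

# Toy batch: two 3-node graphs with a single edge type.
A1 = np.zeros((3, 3, 1))
A1[0, 1, 0] = A1[1, 0, 0] = 1            # one edge -> average degree 2/3
A2 = np.zeros((3, 3, 1))
for i, j in [(0, 1), (1, 2), (0, 2)]:
    A2[i, j, 0] = A2[j, i, 0] = 1        # triangle -> average degree 2

print(degree_reward([A1, A2], d_target=2))  # ≈ 0.6
```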
### 3.2 Implementation
Table 1: Effect of reward function objective ($d$) on output bias and sample
diversity. The bolded values in the MolGAN data indicate the maximum values
for each score.

**Original MolGAN [1]**

| Objective | % Unique | Druglikeliness | Synthesizability | Solubility |
|---|---|---|---|---|
| Druglikeliness | 2.2 | **0.62** | 0.59 | 0.53 |
| Synthesizability | 2.1 | 0.53 | **0.95** | 0.68 |
| Solubility | 2.0 | 0.44 | 0.22 | **0.89** |
| All Simultaneously | 2.3 | 0.51 | 0.82 | 0.69 |

**Our Results**

| Objective | % Unique | Average $d$ |
|---|---|---|
| $d=1$ | 0.1 | 1.1 (average over 3 trials) |
| $d=2$ | 0.8 | 2.0 |
| $d=4$ | 0.7 | 4.0 |
| $d=6$ | 0.7 | 6.0 (average over 3 trials) |
| Baseline $(\lambda=1.0)$ | 12.8 | 1.4 |
We conducted a series of experiments with our novel reward function to
investigate two key questions: (1) how well did our novel reward function bias
the output of our generator towards the desired average node degree? And (2)
how diverse was our output sample set for varying values of $\lambda$?
In accordance with the test conditions established by De Cao and Kipf, in each
experiment we trained the appropriate version of our model for 300 epochs on a
5K random sample from the QM9 dataset [2-3] (we also performed a validation
run on an additional 1664 samples).¹ We pretrained the model for the first 150
epochs (i.e. $\lambda=1.0$) and then, for the second half of the training
process, set $\lambda$ to the appropriate value. In the results shown below,
we computed statistics for 6400 sampled graphs, after repeating for 5 trials
and averaging, unless otherwise stated.

¹ We wrote our implementation in TensorFlow v2.3.0 [23]. We chose RMSProp [24]
($\alpha=$ 1e-3, $\rho=0.9$, momentum $=0$) instead of Adam [25] as our
optimizer, following the observation by Arjovsky et al. that momentum-based
optimizers tend to perform worse than momentum-free ones when training a WGAN
[17].
### 3.3 Effect of Reward Function on Output Bias
First, we varied the desired average node degree, $d$, in our reward function
and trained our model to bias its outputs to meet this objective. For all of
these experiments, we set $\lambda=0$ for the second half of training. This is
following the precedent set by De Cao and Kipf, who found that $\lambda=0$
maximized the generator’s scores from the reward function. We trained the
model to target values of $d=\\{1,2,4,6\\}$ and report the resulting average
node degree for each value of $d$ in Table 1. We also report the percentage of
unique (non-isomorphic) graphs. Our results are compared to those from De Cao
and Kipf in [1] and [26]. Figure 2 shows samples of graphs generated by the
models.
### 3.4 Effect of $\lambda$ on Output Diversity
Second, we investigated the effect that $\lambda$ had on the diversity of
samples from the generator. We measured this as the percentage of graphs from
a set that were unique (n = 6400 for each of 5 trials unless otherwise stated)
after training the model with values of
$\lambda=\\{1.0,0.75,0.5,0.25,0.05,0.0\\}$. We set $d=2$ for all trials and
also report the resulting average node degree. The results are shown in Table
2, and we compare them to De Cao and Kipf’s findings in [1],² including their
model’s performance with respect to solubility, which they were attempting to
optimize with their reward function. We also compare to an additional dataset
describing sample diversity and the quantitative estimate of druglikeliness
(QED) [15] provided by De Cao in [26].

² De Cao and Kipf computed this as the ratio of all non-isomorphic graphs to
all valid graphs. Here we consider all graphs valid, so we calculated the
percentage of all non-isomorphic graphs in a set of 6400 samples from the
original dataset.
Table 2: Effect of $\lambda$ on output bias and sample diversity.

| $\lambda$ | % Unique (MolGAN [1]) | Solubility (MolGAN [1]) | % Unique (MolGAN [26]) | QED (MolGAN [26]) | % Unique (Ours) | Average $d$ (Ours) |
|---|---|---|---|---|---|---|
| 0.0 | 2.3 | 0.86 | 3.16 | 0.61 | 0.8 | 2.0 |
| 0.05 | 2.5 | 0.67 | - | - | 2.3 | 2.0 |
| 0.25 | 1.9 | 0.65 | 10.16 | 0.61 | 6.3 | 2.0 |
| 0.5 | 1.8 | 0.48 | 31.29 | 0.56 | 8.9 | 2.0 |
| 0.75 | 2.5 | 0.57 | 64.35 | 0.51 | 9.8 | 2.1 |
| 1.0 | 2.5 | 0.54 | 63.91 | 0.50 | 12.8 | 1.4 |
## 4 Discussion
In our first experiment, we observed that our novel reward function was able
to bias the average node degree of the sample graphs obtained after training.
The average $d$ values reported agreed closely with the desired values. The
added bias also greatly reduced the percentage of graphs that were non-
isomorphic, from $12.8\%$ to less than $1\%$ for most values of $d$. These
results suggest that although the model can be biased towards graphs with
specific characteristics, it comes at a cost of greatly reduced output sample
diversity.
In reviewing Figure 2, it is clear that as $d$ increased, the sample graphs
also showed increasing numbers of connections between nodes. Interestingly,
the model seems to have also been biased towards using more fluorine atoms in
the $d=4$ test. The $d=2$ samples seem the most closely related to the QM9
dataset, and show indications that the model had learned to form benzene-like
rings.
Figure 2: Sample graphs for every value of $d$ tested in this work, compared
to some sample graphs from the QM9 dataset. (All graphs were generated using
NetworkX [27].)
The variation of our results with added bias disagreed with the trend that De
Cao and Kipf found in their original paper [1]. Specifically, they found that
bias did not significantly affect the uniqueness values as compared to tests
conducted without added bias [1]. Perhaps our objective (a specific average
node degree) is a more stringent constraint than De Cao and Kipf’s objectives
of druglikeliness, synthesizability and solubility.
In our second experiment, we found that as we decreased $\lambda$, we saw a
corresponding decrease in the percentage of unique graphs in our sample set.
This suggests that as we pursued the RL objective more heavily (i.e. as
$\lambda$ decreased), the generator experienced mode collapse and only output
a few unique graphs that met the desired average node degree criteria. This
result disagrees with the findings presented by De Cao and Kipf in [1], but
agrees with findings in [26]. In [1], the $\%$ unique score did not vary with
$\lambda$, but in [26] $\%$ uniqueness decreased along with $\lambda$.
Interestingly, we also observed that as we weighted the RL objective more
heavily, the average node degree values converged on $2.0$.
## 5 Conclusions
In this preliminary investigation, we used a novel reward function to bias the
output of a generative model. The results presented suggest that this approach
is capable of generating novel samples that are similar to the training
dataset and also possess desired characteristics as dictated by the reward
function. We intend to pursue this approach further to predict the shape of
nanorobots formed with DNA nanotechnology components, using only low-fidelity
data.
However, our data also showed reduced sample diversity with added bias, which
could potentially limit our ability to accurately predict all the possible
nanorobots we would see in vitro. In future work we will investigate
alternative approaches to biasing the output of a GAN so that we can preserve
our ability to output graphs with specific characteristics while expanding the
diversity of the output.
## Broader Impact
This paper is intended to benefit other researchers who wish to apply
generative models to their own research by demonstrating a new approach. It is
always possible that our model and our approach to adding bias could be used
to tune the output graphs for discriminatory or dangerous purposes. There is
limited risk of bias in the QM9 dataset [2-3] that we used, because it is a
collection of small organic molecules (up to 9 heavy atoms), and so should not
contain any human-based discriminatory bias.
## Acknowledgments and Disclosure of Funding
The authors have no competing financial interests to disclose. This material
is based on work supported by the National Science Foundation under Grant No.
1739308 and by the National Science Foundation Graduate Research Fellowship
Program under Grant No. DGE1745016, awarded to Emma Benjaminson. Any opinions,
findings, and conclusions or recommendations expressed in this material are
those of the authors and do not necessarily reflect the views of the National
Science Foundation. Thank you to members of the Microsystems and
Mechanobiology Lab and the Biorobotics Lab for many valuable discussions.
Thank you also to Nicola De Cao and Faruk Ahmed for their personal
correspondence in conjunction with this work.
## References
[1] De Cao, N. & Kipf, T. (2018) MolGAN: An implicit generative model for
small molecular graphs. arXiv Prepr. arXiv:1805.11973.
[2] Ruddigkeit, L., van Deursen, R., Blum, L. C. & Reymond, J.-L. (2012)
Enumeration of 166 billion organic small molecules in the chemical universe
database GDB-17. Journal of Chemical Information and Modeling
52(11):2864–2875.
[3] Ramakrishnan, R., Dral, P. O., Rupp, M. & von Lilienfeld, O. A. (2014)
Quantum chemistry structures and properties of 134 kilo molecules. Scientific
Data 1, 140022.
[4] Dreyfus, R., Baudry, J., Roper, M. L., Fermigier, M., Stone, H. A. &
Bibette, J. (2005) Microscopic artificial swimmers. Nature, 437(6):2–5.
[5] Narayanaswamy, N., Chakraborty, K., Saminathan, A., Zeichner, E., Leung,
K. H., Devany, J. & Krishnan, Y. (2019) A pH-correctable, DNA-based
fluorescent reporter for organellar calcium. Nature Methods, 16(1):95–102.
[6] Liu, Y., Kumar, S. & Taylor, R. E. (2018) Mix-and-match nanobiosensor
design: Logical and spatial programming of biosensors using self-assembled DNA
nanostructures. Wiley Interdisciplinary Reviews: Nanomedicine and
Nanobiotechnology, pp. e1518.
[7] Douglas, S. M., Bachelet, I. & Church, G. M. (2012) A logic-gated
nanorobot for targeted transport of molecular payloads. Science,
335(6070):831–834.
[8] Tikhomirov, G., Petersen, P. & Qian, L. (2017) Fractal assembly of
micrometre-scale DNA origami arrays with arbitrary patterns. Nature,
552(7683):67–71.
[9] Radford, A., Metz, L. & Chintala, S. (2016) Unsupervised representation
learning with deep convolutional generative adversarial networks. 4th
International Conference on Learning Representations, ICLR 2016 - Conference
Track Proceedings.
[10] Zhu*, J., Park*, T., Isola, P. & Efros, A.A. (2017) Unpaired Image-to-
Image Translation using Cycle-Consistent Adversarial Networks. IEEE
International Conference on Computer Vision (ICCV). (* indicates equal
contributions)
[11] Bojchevski, A., Shchur, O., Zügner, D. & Günnemann, S. (2018) NetGAN:
Generating Graphs via Random Walks. 35th International Conference on Machine
Learning, ICML 2018. 2:973–988.
[12] Mirza, M. & Osindero, S. (2014) Conditional Generative Adversarial Nets.
arXiv Prepr. arXiv:1411.1784.
[13] Kang, S. & Cho, K. (2019) Conditional Molecular Design with Deep
Generative Models. Journal of Chemical Information and Modeling 59(1):43–52.
[14] Guimaraes, G. L., Sanchez-Lengeling, B., Outeiral, C., Farias, P. L. C. &
Aspuru-Guzik, A. (2017) Objective-Reinforced Generative Adversarial Networks
(ORGAN) for Sequence Generation Models. arXiv Prepr. arXiv:1705.10843.
[15] Bickerton, G. R., Paolini, G. V., Besnard, J., Muresan, S. & Hopkins, A.
L. (2012) Quantifying the chemical beauty of drugs. Nature Chemistry, 4(2):90.
[16] Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D.,
Ozair, S., Courville, A. & Bengio, Y. (2014) Generative Adversarial Networks.
Proceedings of the 27th International Conference on Neural Information
Processing Systems 2:2672-2680.
[17] Arjovsky, M., Chintala, S. & Bottou, L. (2017) Wasserstein Generative
Adversarial Networks. Proceedings of the 34th International Conference on
Machine Learning, PMLR 70:214-223.
[18] Villani, C. (2009) Optimal Transport: Old and New. Grundlehren der
mathematischen Wissenschaften. Springer, Berlin.
[19] Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V. & Courville, A.
(2017) Improved Training of Wasserstein GANs. Proceedings of the 31st
International Conference on Neural Information Processing Systems, pp.
5769–5779.
[20] Schlichtkrull, M., Kipf, T. N., Bloem, P., van den Berg, R., Titov, I. &
Welling, M. (2018) Modeling Relational Data with Graph Convolutional Networks.
Gangemi A. et al. (eds) The Semantic Web. ESWC 2018. Lecture Notes in Computer
Science, vol 10843. Springer, Cham.
https://doi.org/10.1007/978-3-319-93417-4_38
[21] Kipf, T. N. & Welling, M. (2016) Semi-Supervised Classification with
Graph Convolutional Networks. 5th International Conference on Learning
Representations - Conference Track Proceedings.
[22] Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa,
Y., Silver, D. & Wierstra, D. (2015) Continuous control with deep
reinforcement learning. arXiv preprint arXiv:1509.02971.
[23] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin,
M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R.,
Moore, S., Murray, D. G., Steiner, B., Tucker, P., Vasudevan, V., Warden, P.,
Wicke, M., Yu, Y. & Zheng, X. (2016) TensorFlow: A system for large-scale
machine learning. 12th USENIX Symposium on Operating Systems Design and
Implementation (OSDI 16), USENIX Association, pp. 265-283
[24] Tieleman, T. & Hinton, G. (2012) Lecture 6.5—RmsProp: Divide the gradient
by a running average of its recent magnitude. COURSERA: Neural Networks for
Machine Learning.
[25] Kingma, D. P. & Ba, J. (2014) Adam: A method for stochastic optimization.
CoRR, abs/1412.6980.
[26] De Cao, N. (2019) Deep Generative Models for Molecular Graphs. Presented
at Machine Learning for Physics and the Physics of Learning 2019, Workshop I:
From Passive to Active: Generative and Reinforcement Learning with Physics,
hosted by the Institute for Pure & Applied Mathematics, UCLA.
[27] Hagberg, A. A., Schult, D. A. & Swart, P. J. (2008) Exploring network
structure, dynamics, and function using NetworkX. Proceedings of the 7th
Python in Science Conference (SciPy2008), Gäel Varoquaux, Travis Vaught, and
Jarrod Millman (Eds), (Pasadena, CA USA), pp. 11–15.
# State of the Art: Content-based and Hybrid Phishing Detection
Luis Felipe Castaño
<EMAIL_ADDRESS>
Eduardo Fidalgo
<EMAIL_ADDRESS>
Enrique Alegre
<EMAIL_ADDRESS>
Deisy Chaves
<EMAIL_ADDRESS>
Manuel Sánchez-Paniagua
<EMAIL_ADDRESS>
###### Abstract
Phishing attacks have evolved and increased over time and, for this reason,
the task of distinguishing between a legitimate site and a phishing site is
more and more difficult, fooling even the most expert users. The main
proposals focused on addressing this problem can be divided into four
approaches: list-based, URL-based, content-based, and hybrid. In this state of
the art, the most recent techniques using web content-based and hybrid
approaches for Phishing Detection are reviewed and compared.
Keywords: Phishing Detection, Content-based Features, Hybrid Features, Deep
learning, Machine learning
## 1 Introduction
Phishing is a crime that uses social engineering to steal consumers’ personal
identity data and financial account credentials, taking advantage of
unsuspecting victims by making them believe that they are dealing with a
legitimate and trusted party, through the use of deceptive email addresses,
and email messages. These are designed to lead consumers to Websites that
trick recipients into divulging financial data such as usernames and
passwords. Phishing attacks have evolved and increased over time: the Anti-
Phishing Working Group (APWG) reported a rise in phishing attacks in the third
quarter of 2020, detecting around 200,000 phishing sites in September alone.
APWG also reported that from 2016 until the third quarter of 2020, the share
of phishing attacks hosted on HTTPS increased from 10% to 80% [1], fooling
even the most expert users.
The first approaches to detecting phishing websites used the URL, which made
it possible to classify websites even when they were not reachable; however,
as phishing techniques evolved, so did the techniques used to detect them.
Currently, the approaches for detecting phishing webpages can be classified as
follows: list-based, URL-based, content-based, hybrid, and image-based.
List-based approaches can be applied in two ways: the first is allowing only
sites registered in a white-list, as proposed by Jain et al. [2] in 2016; the
second is blocking the sites that appear on a black-list [3]. However, these
solutions need to be constantly updated, and an attacker can bypass them just
by making small changes to the URL.
URL-based approaches exhaustively analyze the URL components (protocol,
domain, TLD, path), looking for variations in order to distinguish between a
legitimate website and a phishing website; proposals include [4, 5, 6, 7].
These methods typically compare legitimate landing webpages with reported
phishing webpages for training and testing. However, Sanchez-Paniagua et al.
[8] recently proposed comparing legitimate login webpages with reported
phishing ones, since this corresponds to the real problem: determining whether
a website's login form is legitimate or phishing.
Content-based approaches use website content, such as HTML, CSS, and
JavaScript, as input for algorithms, either as raw data or as a vector of
features extracted with an NLP technique, to identify patterns among phishing
websites. Hybrid approaches, on the other hand, use both URL and content-based
data for the same purpose. These two approaches are covered in depth in the
next section.
Image-based approaches compare screenshots of legitimate and phishing webpages
to compute their visual similarity. Among the techniques in this type of
approach is the method proposed by Gangwar et al. [9], which extracts a
fingerprint from the website that can be compared against a previously
collected dataset of phishing fingerprints.
This document is organized as follows. Section 2 reviews the state of the art in content-based and hybrid phishing detection. Finally, we draw conclusions in Section 3.
## 2 Literature Review
Content-based and hybrid methods can be divided into two groups. The first comprises methods with automatic features, where the raw data is fed directly into an artificial intelligence algorithm, leaving the task of determining the important features to the algorithm itself. The second group comprises methods with handcrafted features, which obtain a feature vector using an NLP technique and then pass this vector to a classification algorithm.
### 2.1 Automatic Features
Opara et al. [10] proposed character-embedding and string-embedding techniques to represent the features of each HTML document; this representation is then fed into a Convolutional Neural Network (CNN) to model semantic dependencies. They collected their own data from Alexa and PhishTank, reporting two datasets: the first, with 23,000 legitimate websites and 2,300 phishing websites, used for training, and the second, with 24,000 legitimate websites and 2,400 phishing websites, used for testing; these datasets are not available. They obtained an accuracy of 98.00% and an F1 score of 97.00%. Finally, they argue that automatic feature selection solves the problems of current approaches, because the accuracy of existing models depends on how comprehensive the feature set is and how robust it remains against future attacks, and because the handcrafted approach requires substantial feature engineering.
Recently, Opara et al. [11] proposed a technique that also uses automatic features, this time taking URL and HTML embeddings as input to a deep learning algorithm. The URL and HTML strings are tokenized using a character corpus that includes punctuation marks, and the tokenized data is then processed into a character embedding matrix. Using the datasets from their previous work [10], they report an accuracy of 98.00% and an F1 score of 98.00%.
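To make the character-embedding idea concrete, the following hypothetical sketch tokenises a string at the character level and looks up each index in an embedding table; the vocabulary, sequence length, and embedding dimension are illustrative choices, not the values used by Opara et al., and the table is randomly initialised here whereas in a real model it would be a trainable layer.

```python
import string
import numpy as np

# Character vocabulary including punctuation; index 0 is reserved for
# padding, and unknown characters also map to 0 in this sketch.
VOCAB = string.ascii_lowercase + string.digits + string.punctuation
CHAR_TO_IDX = {c: i + 1 for i, c in enumerate(VOCAB)}

def tokenize(text: str, max_len: int = 200) -> np.ndarray:
    idx = [CHAR_TO_IDX.get(c, 0) for c in text.lower()[:max_len]]
    idx += [0] * (max_len - len(idx))      # pad to a fixed length
    return np.array(idx, dtype=np.int64)

# Randomly initialised embedding table, used only to show the shapes involved.
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(VOCAB) + 1, 32))

tokens = tokenize("<a href='http://evil.tld'>login</a>")
embedded = embedding_table[tokens]         # shape (200, 32), the CNN input
```

The resulting (sequence length × embedding dimension) matrix is what a convolutional layer would slide over to learn local character patterns.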
### 2.2 Handcrafted Features
Adebowale et al. [12] proposed an Adaptive Neuro-Fuzzy Inference System trained on images, frames, and text from each webpage, extracting 35 features. They used the University of Huddersfield web phishing public dataset [13] for training and testing, achieving an accuracy of 98.30% and an F1 score of 98.28%.
Yang et al. [14] proposed a stack of algorithms for phishing detection. First, a CNN-LSTM algorithm detects phishing using URL features. Its output is then joined with webpage code features and webpage text features, and finally all of these features are fed to an XGBoost algorithm for the final classification, achieving an accuracy of 99.41% and an F1 score of 99.00%. They collected their own data from PhishTank and dmoztools.net, using 1,021,758 phishing webpages and 989,021 legitimate webpages; this dataset was not released.
Rao et al. [15] implemented a system using three types of features: URL obfuscation features, third-party-based features, and hyperlink-based features. These features feed a Random Forest algorithm that achieves 99.55% accuracy on data collected from Alexa and PhishTank, with 1,407 legitimate webpages and 2,119 phishing webpages.
Ozker et al. [16] studied the use of multiple machine learning algorithms. They identified 58 different features in the HTML content and then applied several machine learning methods: Naive Bayes, Random Forest, Support Vector Machine, Logistic Regression, K-Nearest Neighbors, Decision Tree, Multilayer Perceptron, and XGBoost. Random Forest showed the best results, with an accuracy of 97.91% and an F1 score of 98.00%. They collected their own data from PhishTank, using 8,353 phishing webpages and 5,438 legitimate webpages; this dataset was not released.
Li et al. [17] proposed a two-layer stacking algorithm whose first layer consists of three base models: Gradient Boosting Decision Tree (GBDT), XGBoost, and LightGBM, trained with a strategy similar to K-fold cross-validation. They then combine the original input features with the outputs of the first stacking layer to form the final features, which are used to train a GBDT model that predicts phishing webpages. For their tests, they collected and released a dataset called the 50K Image Phishing Detection Dataset, reporting an accuracy of 97.30%. It was not possible to find the dataset at the time of this review, although according to the authors it was released.
Alotaibi et al. [18] proposed a voting algorithm for feature selection, followed by the AdaBoost and LightGBM ensemble methods to detect phishing websites. They tested on two public datasets, "Phishing Dataset for Machine Learning: Feature Evaluation" [19] and "Phishing Websites Data Set" [13], reporting an accuracy of 97.05% and an F1 score of 97.35%.
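The majority-vote idea behind such feature selection can be sketched as follows; the selector names below are illustrative assumptions, not the exact criteria used in [18]. Each selector emits a boolean mask over the feature set, and a feature survives when at least half of the selectors vote for it.

```python
import numpy as np

def majority_vote_select(masks: list) -> np.ndarray:
    """Keep a feature when at least half of the selectors vote for it."""
    votes = np.sum(masks, axis=0)
    return votes >= (len(masks) + 1) // 2

# Three hypothetical selectors, each voting over four features.
mask_variance   = np.array([True,  True,  False, True])
mask_chi2       = np.array([True,  False, False, True])
mask_importance = np.array([True,  True,  False, False])

selected = majority_vote_select([mask_variance, mask_chi2, mask_importance])
# votes per feature: [3, 2, 0, 2] -> kept where votes >= 2
```

The surviving feature subset would then be passed to the downstream ensemble classifiers.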
**Automatic Features**

Method | Type | Technique | Year | Dataset | N Legitimate | N Phishing | Accuracy (%) | F1 Score (%)
---|---|---|---|---|---|---|---|---
Opara et al. [10] | Content | CNN | 2019 | Collected data | 24,000 | 2,400 | 98.00 | 97.00
Opara et al. [11] | Hybrid | Deep learning | 2020 | Collected data | 47,000 | 4,700 | 98.00 | 98.00

**Handcrafted Features**

Method | Type | Technique | Year | Dataset | N Legitimate | N Phishing | Accuracy (%) | F1 Score (%)
---|---|---|---|---|---|---|---|---
Adebowale et al. [12] | Hybrid | Neuro-Fuzzy Inference | 2019 | [13] | 7,262 | 3,793 | 98.30 | 98.28
Yang et al. [14] | Hybrid | CNN - LSTM - XGBoost | 2019 | Collected data | 22,390 | 22,445 | 99.41 | 99.00
Rao et al. [15] | Hybrid | Random Forest | 2019 | Collected data | 1,407 | 2,119 | 99.55 | -
Ozker et al. [16] | Content | Random Forest | 2020 | Collected data | 5,438 | 8,353 | 97.91 | 98.00
Li et al. [17] | Hybrid | Stacking model | 2019 | 50K PD [17] | 28,320 | 24,789 | 97.30 | -
Alotaibi et al. [18] | Hybrid | AdaBoost and LightGBM | 2020 | [19] / [13] | 5,000/7,262 | 5,000/3,793 | 97.05 | 97.35

Table 1: Comparative table of the methods presented in Section 2 (content-based and hybrid approaches).
## 3 Discussion and Conclusions
Although the best reported result is presented in the method proposed by Rao
et al. [15], in the table summary 1 it can be seen that the dataset used for
testing is small compared to other methods and also they use third-party
features, making it dependent on external services. The method proposed by
Yang et al. [14] is the one that reports the second-highest accuracy and the
best F1 score value. It was trained with the largest and most balanced dataset
reported in the reviewed methods. However, this method uses third-party
features and the complete list of selected features is not clear enough. On
other hand, the technique proposed by Opara et al. [11] in the section of
automatic features 2.1 shows good results, their method was trained with an
unbalanced dataset as can be seen in table summary 1, which can easily be
improved.
Most of the reviewed works use a hybrid approach, and some of them compare the results obtained using the content alone against those achieved using both, reporting an increase in performance when both HTML and URL content are used. This is why we believe a hybrid method is the better option. For these reasons, and given the changing nature of phishing, we have decided to explore hybrid techniques with automatic features, as described in section 2.1, taking as a baseline the method proposed by Opara et al. [11] in 2020.
## Acknowledgment
This work was supported by the framework agreement between the University of
León and INCIBE (Spanish National Cybersecurity Institute) under Addendum 01.
## References
* [1] Anti-Phishing Working Group. Phishing Activity Trends Report, 3rd Quarter 2020, pages 1–12, 2020. Available at https://docs.apwg.org/reports/apwg_trends_report_q3_2020.pdf Accessed: Jan. 14, 2021.
* [2] Ankit Kumar Jain and B. B. Gupta. A novel approach to protect against phishing attacks at client side using auto-updated white-list. Eurasip Journal on Information Security, 2016(1):1–11, dec 2016.
* [3] Pawan Prakash, Manish Kumar, Ramana Rao Kompella, and Minaxi Gupta. PhishNet: Predictive blacklisting to detect phishing attacks. In Proceedings - IEEE INFOCOM, 2010.
* [4] Ozgur Koray Sahingoz, Ebubekir Buber, Onder Demir, and Banu Diri. Machine learning based phishing detection from URLs. Expert Systems with Applications, 117:345–357, mar 2019.
* [5] Mahmood Moghimi and Ali Yazdian Varjani. New rule-based phishing detection method. Expert Systems with Applications, 53:231–242, jul 2016.
* [6] M. Somesha, Alwyn Roshan Pais, Routhu Srinivasa Rao, and Vikram Singh Rathour. Efficient deep learning techniques for the detection of phishing websites. Sadhana - Academy Proceedings in Engineering Sciences, 45(1):1–18, dec 2020.
* [7] Erzhou Zhu, Yuyang Chen, Chengcheng Ye, Xuejun Li, and Feng Liu. OFS-NN: An Effective Phishing Websites Detection Model Based on Optimal Feature Selection and Neural Network. IEEE Access, 7:73271–73284, 2019.
* [8] M. Sánchez-Paniagua, E. Fidalgo, V. González-Castro, and E. Alegre. Impact of current phishing strategies in machine learning models for phishing detection. In Álvaro Herrero, Carlos Cambra, Daniel Urda, Javier Sedano, Héctor Quintián, and Emilio Corchado, editors, 13th International Conference on Computational Intelligence in Security for Information Systems (CISIS 2020), pages 87–96, Cham, 2021. Springer International Publishing.
* [9] Abhishek Gangwar, Eduardo Fidalgo, Enrique Alegre, and Víctor González-Castro. Phishfingerprint: A practical approach for phishing web page identity retrieval based on visual cues. In International Conference of Applications of Intelligent Systems, Las Palmas de Gran Canaria, 2018.
* [10] Chidimma Opara, Bo Wei, and Yingke Chen. HTMLPhish: Enabling Phishing Web Page Detection by Applying Deep Learning Techniques on HTML Analysis. Proceedings of the International Joint Conference on Neural Networks, aug 2019.
* [11] Chidimma Opara, Yingke Chen, and Bo Wei. Look Before You Leap: Detecting Phishing Web Pages by Exploiting Raw URL And HTML Characteristics. arXiv preprint arXiv:2011.04412, nov 2020.
* [12] Moruf A Adebowale, Khin T Lwin, Erika Sanchez, and M Alamgir Hossain. Intelligent web-phishing detection and protection scheme using integrated features of images, frames and text. Expert Systems with Applications, 115:300–313, 2019.
* [13] Rami M Mohammad, Fadi Thabtah, and Lee McCluskey. Phishing websites features. School of Computing and Engineering, University of Huddersfield, 2015. Available at http://archive.ics.uci.edu/ml/datasets/Phishing+Websites Accessed: Jan. 14, 2021.
* [14] Peng Yang, Guangzhen Zhao, and Peng Zeng. Phishing website detection based on multidimensional features driven by deep learning. IEEE Access, 7:15196–15209, 2019.
* [15] Routhu Srinivasa Rao and Alwyn Roshan Pais. Detection of phishing websites using an efficient feature-based machine learning framework. Neural Computing and Applications, 31(8):3851–3873, aug 2019.
* [16] Uğur Ozker and Ozgur Koray Sahingoz. Content based phishing detection with machine learning. In 2020 International Conference on Electrical Engineering (ICEE), pages 1–6. IEEE, 2020.
* [17] Yukun Li, Zhenguo Yang, Xu Chen, Huaping Yuan, and Wenyin Liu. A stacking model using URL and HTML features for phishing webpage detection. Future Generation Computer Systems, 94:27–39, may 2019.
* [18] Bandar Alotaibi and Munif Alotaibi. Consensus and majority vote feature selection methods and a detection technique for web phishing. Journal of Ambient Intelligence and Humanized Computing, 1:3, may 2020.
* [19] Choon Lin Tan. Phishing dataset for machine learning: Feature evaluation. Mendeley Data, 1:2018, 2018.
# An Environmental Monitoring Network for Quantum Gas Experiments and Devices
T J Barrett1, W Evans1, A Gadge1,2, S Bhumbra1, S Sleegers1, R Shah1, J Fekete1, F Oručević1 and P Krüger1

1 School of Mathematical and Physical Sciences, University of Sussex, Brighton BN1 9QH, UK

2 Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 761001, Israel

<EMAIL_ADDRESS>
###### Abstract
The field of quantum technology is approaching a level of maturity, recently
demonstrated in space-borne experiments and in-field measurements, which would
allow for adoption by non-specialist users. Parallel advancements made in
microprocessor-based electronics and database software can be combined to
create robust, versatile and modular experimental monitoring systems. Here, we
describe a monitoring network used across a number of cold atom laboratories
with a shared laser system. The ability to diagnose malfunction, unexpected or
unintended behaviour and passively collect data for key experimental
parameters, such as vacuum chamber pressure, laser beam power, or resistances
of important conductors, significantly reduces debugging time. This allows for
efficient control over a number of experiments and remote control when access
is limited.
### Introduction
Recent developments in quantum technologies that exploit the unique properties
of cold atomic clouds, such as gravimeters [1] and navigational accelerometers
[2, 3], have been focused on producing miniature, field-based and remote
systems. The challenging environmental conditions these systems are exposed
to, as seen in space-borne experiments [4], can be mitigated using automated
control sequences, with evolutionary algorithms and machine learning protocols
becoming increasingly common [5, 6, 7, 8]. The rapid resolution of problems is
essential in inter-dependent networks [9] or in isolated remote systems where
performance data may only be accessed infrequently, such as marine-based
systems [10, 11].
Ultracold atom clouds are extremely sensitive to external environmental
conditions due to having energy distributions on the nanokelvin scale. Typical
laboratory-based systems experience drifts and instability caused by
environmental effects such as thermal disturbances and acoustic noise, which
can affect a number of experimental components including mirrors, laser paths,
coil resistances and power supplies used to generate magnetic fields. To
mitigate these effects, it is possible to actively correct for individual
issues with local feedback systems on parameters such as the polarisation of
light beams [12], or the currents used to produce magnetic fields [13], for
example.
Often, environmental measurements are only collected after an error has occurred, to retrospectively determine the underlying cause. This can be cumbersome, and events may not be easy to reproduce, causing further operational delays. Here, we present a laboratory monitoring network used to autonomously record a number of experimental parameters continually in the background across a suite of ultracold atom laboratories, and to visualise them in real time, which allows for a fast response to unexpected errors. The
ability to efficiently process and record environmental data will be crucial
as quantum technology devices become more complex, for example using Bose-
Einstein condensates instead of thermal clouds, or as they are exposed to more
challenging environmental conditions where changes are unpredictable. The
described system employs a number of measurement nodes to record a wide
variety of relevant parameters, such as temperatures, vacuum chamber
pressures, laser power levels, and magnetic fields, as shown in Fig. 1 (a).
The sensors are networked together and data are recorded in an external time-
series database, which is then accessed through a series of end-user
dashboards supported on an internet platform, an example of which is shown in
Fig. 1 (b).
Fig. 1: (a) A diagram showing the systems that are needed for the monitoring,
visualisation and control of an ultracold atom experiment. (b) Example of a
custom dashboard in a web browser interface used to assess performance of the
experiments at any given time.
Our network contributes to an emerging ecosystem of laboratory monitoring
systems that provide both measurement and management of environmental data
[14]. These database systems must be scalable, flexible and time-stamped for
access in order to help users solve problems faster, and ideally remotely,
facilitating experimental progress. The breadth of hardware and software tools developed in the open-source community means that monitoring systems can be designed for a specific use, and implementation is straightforward. Improvements in the capability of programmable electronics
[15, 16, 17] and expanding database infrastructure can be exploited for use in
cold atom laboratories [18]. Microprocessor approaches have been implemented
for control systems [19, 20, 21], locking systems [22, 23] and for
environmental management [24]. Furthermore, such monitoring hardware can be
integrated into feedback systems [25] and used with management software [24].
A similar advance has occurred in field programmable gate array (FPGA)-based
hardware and has been followed by the development of FPGA-based control
systems for atomic physics experiments [26, 27]. Additional advances in control
systems [28] have allowed for the automation of experimental feedback [29],
optimised control of complex sequences [30] and even citizen-science
participation [31, 32]. Remote control of laboratory experiments is common in
situations such as particle physics [33] and teaching laboratories to expand
use of key equipment [34, 35]. Furthermore, as quantum technology approaches
maturity, commercial systems are becoming available for remote non-specialist
users through online portals [36, 37].
### Data acquisition
The monitoring system presented here has two types of measurement nodes. In
the first type of node, data is ‘pushed’ by the node to the database, as part
of the analysis protocol. In atomic physics experiments this often means the
acquisition of a series of images of the atomic cloud [38] to determine
parameters such as atom number, cloud temperature and trap frequency. Once
these values are calculated, they are pushed to the database within the
analysis script. The second type of measurement node is a device that takes
data when triggered via a local area network (LAN); here, data is ‘pulled’ from
the node by the collector device. Microprocessor hardware is used to read
sensor values locally, which allows each node to be configured to suit a range of requirements and to standardise data readings to conform to the database format. Current microprocessor hardware includes transimpedance amplifiers and
photodiodes that can be used to measure laser beam powers, Hall sensors for
measuring magnetic field strengths and serial interfaces for communicating
with devices such as vacuum gauges.
The existing ecosystem of microprocessor hardware is accessible, easy to
implement, has community support available and elements can be integrated into
custom devices to reduce the number of total components. For example, in the
system presented here, a single printed circuit board (PCB) was designed to
consolidate a number of thermocouple amplifiers and transimpedance amplifiers
that are used to record the temperature at various locations and laser beam
powers at several points along the beam path.
The combination of different measurements is easily programmed and can be
carried out sequentially over each measurement node. Once the node receives a
request for data from the network hub it collects data from each of the
attached sensors, collates them to a location stamped string and returns this
string via the LAN. There are over 100 sensors compiling environmental data
across the devices in the system presented here. Measurements are collected
over a wired network to avoid wireless microwave noise interfering with the
experiment; for instance, Wi-Fi signals at $2.4\text{\,}\mathrm{GHz}$ and $5.0\text{\,}\mathrm{GHz}$ are close to the hyperfine ground-state splittings of alkali metals. The data are transferred via user datagram protocol (UDP) at intervals
of $20\text{\,}\mathrm{s}$, which was measured to have a $100\%$ transfer
efficiency over a week of operation. This time resolution is used for
monitoring environmental data but can be reduced to millisecond time steps
with the current microprocessor hardware or to the nanosecond limit of the
time-series database (TSDB) software. To preserve the regularity of
communication, each device is programmed with a watchdog script to reset if
communication is lost.
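The pull cycle described above can be sketched as follows; the node addresses, the 'READ' request, and the reply format are assumptions for illustration, not the authors' exact protocol.

```python
import socket
from typing import Optional

# Hypothetical node addresses on the wired LAN.
NODES = [("192.168.0.11", 5005), ("192.168.0.12", 5005)]

def poll_node(addr, timeout: float = 2.0) -> Optional[str]:
    """Request one location-stamped reading string from a node over UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(b"READ", addr)
        try:
            data, _ = sock.recvfrom(1024)
            return data.decode()
        except socket.timeout:
            return None    # the node's watchdog script will reset it

def parse_reading(reply: str):
    """Split e.g. 'Lab03/Dev01;T1=21.6;T2=22.8' into location and values."""
    location, _, fields = reply.partition(";")
    pairs = dict(f.split("=") for f in fields.split(";"))
    return location, {k: float(v) for k, v in pairs.items()}
```

A collector loop would call `poll_node` for each entry in `NODES` every 20 s and forward the parsed readings towards the database.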
### Database architecture
Once the individual measurements have been taken at local nodes in each
laboratory, it is important that they are reliably imported to a TSDB for
storage. In this network, a central ‘collector’ microprocessor receives data
transfer strings via the LAN and sends them to a Raspberry Pi using serial
communication. Python programming scripts are used to parse the data string
into separate variables, standardise the format, and enter them into a
database server. The database operates on a backed-up network drive running a
TSDB software, InfluxDB [39], using an application programming interface (API)
through a hypertext transfer protocol (HTTP) endpoint.
Fig. 2: A schematic showing the architecture of the monitoring system. On the
left, environmental parameters are continually measured in both the local cold
atom experiments and the shared laser system, and sent to the database via the
system nodes and collector. Additional parameters are calculated through
analysis of absorption images when they arrive, and are pushed directly to the
database. The server hosts a time-series database, which stores all the data
and periodically copies it to back-up storage. The stored measurements can be
accessed through a visualisation engine, allowing for simple analysis,
customised dashboards and alert systems for end-users.
The TSDB software is optimised to efficiently write, compress, and store data,
whilst also keeping it readily available for fast querying. This is important
when maintaining historical data over years of use. The format is easily
scalable and data points can be tagged and grouped for simple management.
InfluxDB query language is simple, easy to integrate into existing analysis
code, and similar to existing database languages such as structured query
language (SQL). The software time-stamps all data on entry, ensures they are
backed up regularly and is capable of down sampling historic data to preserve
storage space.
A system of ten measurement devices, capturing an average of ten measurements
each at 20 s intervals, requires ~6.25 GB of storage space per year. Each data
entry is stored with the form:
```
<measurement>,<tag_key1>=<tag_value>,<tag_key2>=<tag_value> <field_key>=<field_value>
```

for example:

```
temperature,RoomID=Lab03,DevID=Dev01 T1=21.6,T2=22.8,T3=25.2
```
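For illustration, such an entry can be assembled in Python as an InfluxDB line-protocol string (the measurement name, comma-separated tags, comma-separated fields, and a nanosecond timestamp); the tag and field names simply mirror the example above.

```python
import time
from typing import Optional

def to_line_protocol(measurement: str, tags: dict, fields: dict,
                     ts_ns: Optional[int] = None) -> str:
    """Build one InfluxDB line-protocol entry with a nanosecond timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

line = to_line_protocol(
    "temperature",
    {"RoomID": "Lab03", "DevID": "Dev01"},
    {"T1": 21.6, "T2": 22.8, "T3": 25.2},
    ts_ns=1609459200000000000,
)
```

The resulting string can be posted to the database's HTTP write endpoint; if the timestamp is omitted, the server stamps the point on entry.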
Each monitoring node in a network system can be adapted to suit the necessary
local measurements. This leads to a complex data set which must be processed
during collection at the sensor hub.
Data visualisation programs are useful for interpretation of the stored data
by general users without detailed knowledge of the database scheme. Here, we
use an open-source tool called Grafana [40], which has native support for
direct integration with a TSDB. This combination of tools has been used for an
expansive range of complex systems, from beekeeping [41] to financial
monitoring [42]. The visualisation tool allows users to build customised
dashboards to display relevant metrics in a web browser from a remote location
with an internet connection. This includes dashboards for different
combinations of measurements from several nodes across a network of
laboratories, used here in one case for beam powers at several points along
the chain of a shared laser system to ensure stability at each point. Users
can quickly look back at charts of data over any time period or see the
overall system health at a glance rather than setting up additional time
consuming measurements following an error.
For critical measurements the system can be programmed to send an alert to
users if a threshold is reached, for example if the temperature of a vacuum
system rises too quickly during a bake out process which can damage
components. The ability to continuously assess a range of experimental
parameters with one central database simplifies maintenance procedures which,
if ignored, can cause large delays in experiments.
### Distributed laser system
One example of the type of experimental infrastructure that can benefit from
integration with a monitoring network is a collection of shared laser
frequency sources. In our system, a pair of tapered amplifier lasers are used
to provide the two central laser frequencies employed to a suite of three
atomic physics laboratories, where this seed laser light is then locally
amplified. Each laser is locked to an atomic reference in the central
laboratory, split into individual paths for each experiment and then
distributed to the laboratories via single-mode polarisation-maintaining
optical fibers. The ability to monitor properties of the light at several
points along each path allows for fast debugging when necessary.
The optical power of the laser beams is monitored both in the central laser
source laboratory and in the individual science laboratories to assess the
stability and detect changes in fibre coupling efficiencies. This is also
important for the protection of the local amplifier chips which can be damaged
if operated when the seed power is measured to be above or below a given
threshold. Therefore, watchdog safety circuits, which switch off the amplifier
if seed power deviates outside a preset range, have been implemented measuring
both the input seed powers with in-fibre photodiodes and amplifier output
powers. Additionally, these circuits are integrated with interlock safety
systems in each individual laboratory. The monitoring system is supplemented
with local active feedback on the amplifier current, which is able to mitigate
small drifts in the laser properties due to polarisation drift in the delivery
fibres [43, 44] and provide stable output power. This demonstrates the
interaction between the monitoring system and local hardware to provide
suitable protection and feedback.
The ability to monitor laser powers at each point in the chain of this system,
using a centralised dashboard, significantly reduces the time taken to rectify
issues with coupling and frequency stability. This allows for simultaneous
monitoring of a number of laboratories by a single user which paves the way
for implementation of distributed networks of quantum technology devices, such
as a network of gravimeters for gravitation-wave detection [45].
### Integration with experimental data
Current quantum technology devices have focused on using thermal atomic clouds
but there are schemes which employ Bose-Einstein condensates (BECs) for a
greater sensitivity in gravimetry [46, 47] or a finer resolution in microscopy
[48, 49]. To generate BECs, we evaporatively cool a magnetically trapped cloud
of atoms using radio-frequency radiation to a temperature below a critical
limit, typically mere hundreds of nanokelvin, which requires precise control of the atoms over an experimental cycle lasting from a few seconds to a minute. These systems are extremely sensitive to the external environment, which, while making it challenging to produce BECs, also provides substantial benefits for quantum sensors.
Experiments designed to produce a BEC involve complex parameter spaces with
one laboratory in this system using eight laser frequencies, combined pairwise
into eight beams, delivered to two different vacuum regions, using fifteen
power supply currents and three distinct stages of applied radio-frequency
radiation. The centralised database of the network system can be accessed
independently to download specific time series. These data can then be
manipulated and analysed alongside experimental data locally allowing for
simple debugging across a large variable set. The database can store
information with nanosecond resolution if necessary but the data can be down-
sampled and viewed on different timescales to show both short-term and long-
term variations.
Fig. 3: (a) A collection of environmental measurements taken from the time-
series database and experimental parameters calculated from image analysis
during a repeated experimental run over two and a half hours. (b) An example
correlation matrix calculated for the monitoring data above showing a Pearson
coefficient. There are strong correlations between the cloud position in both
horizontal (H) and vertical (V) directions with the power in the imaging beam.
However, there are only strong correlations between the cloud V position and
the output power of the local laser amplifier, showing that the cloud position
is more sensitive to the laser power in this direction. The strongest
correlations with atom number are seen in the local laser output power and
central laser laboratory temperature, which highlight problems due to
fluctuations in seed power introduced during fibre coupling.
An example showing the collated raw monitoring system data from an atom
trapping experiment is presented in Fig. 3(a), along with a correlation matrix
constructed to analyse these data in (b). At this point, there are a variety
of signal processing techniques which can be applied to characterise the
correlations between two signals. The linear dependence between the variables
can be characterised using the Pearson coefficient, which is a scaled
covariance, as displayed in Fig. 3(b). This technique highlights some large
correlations that can be used to debug some experimental problems. For
example, both the horizontal (H) and vertical (V) cloud positions are strongly
correlated with the power in the imaging beam, used to probe the atomic
sample, which in turn is derived from the main laser. The atom number has its
largest correlations with local laser output power and temperature in the
central laser room which can be traced to fibre coupling optical misalignment
and polarisation drifts.
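A minimal sketch of this correlation step, using synthetic stand-ins for the monitored series; the signal names and the assumed linear dependence are illustrative, and in practice the series would be queried from the time-series database and aligned on timestamps.

```python
import numpy as np

# Synthetic stand-ins for three monitored time series.
rng = np.random.default_rng(1)
imaging_power = rng.normal(1.0, 0.02, 500)
cloud_v_pos = 3.0 * imaging_power + rng.normal(0.0, 0.01, 500)  # assumed dependence
room_temp = rng.normal(21.0, 0.5, 500)                          # independent

# Stack the series row-wise and compute the matrix of Pearson coefficients.
signals = np.vstack([imaging_power, cloud_v_pos, room_temp])
pearson = np.corrcoef(signals)   # 3x3 symmetric matrix, ones on the diagonal
```

Entries near ±1 flag strongly coupled variables, as for the imaging power and cloud position in Fig. 3(b), while entries near zero indicate independence.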
A time-lagged linear dependence can be determined by using cross-correlations
to find phase differences. Similarly, frequency responses and correlations in
frequency space can be determined by comparing each signal’s power spectral
density (PSD) from a discrete Fourier analysis. Furthermore, the analysis of
such a wide data set can be extended using machine learning algorithms and
principal component analysis [50]. These techniques can be used to identify a
more abstract analysis that best describes the entire system. Here we chose to
maintain the original variables throughout the analysis to simplify the
debugging process.
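For completeness, the time-lagged analysis can be sketched on synthetic data: `np.correlate` in 'full' mode returns the cross-correlation at every relative offset, and the location of the maximum gives the delay between the two signals. The signals and the known lag below are illustrative assumptions.

```python
import numpy as np

# Synthetic pair of signals where y is x delayed by a known number of samples.
rng = np.random.default_rng(2)
n, true_lag = 1000, 15
x = rng.normal(size=n)
y = np.roll(x, true_lag) + 0.1 * rng.normal(size=n)

# Full cross-correlation of the mean-subtracted signals; the offset of the
# maximum gives the delay of y relative to x.
xc = np.correlate(x - x.mean(), y - y.mean(), mode="full")
delay = (n - 1) - int(np.argmax(xc))   # samples by which y lags behind x

# A simple power spectral density estimate for frequency-space comparison.
psd = np.abs(np.fft.rfft(x - x.mean())) ** 2 / n
```

The same two operations, applied pairwise to the database series, recover lagged dependencies and shared spectral features between monitored parameters.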
Fig. 4: (a) The atom number of a magnetic trap in one experiment in the network is plotted with a measurement of the vacuum chamber pressure and the ambient room temperature. Measurements are shown for two $8\,$hour windows, first with the local air-conditioning on and then with the air-conditioning off. The first measurement shows a laboratory temperature of $20.1\pm 1.2\,^{\circ}\mathrm{C}$, a pressure of $(1.22\pm 0.05)\times 10^{-10}\,$mbar and an atom number of $(2.22\pm 0.55)\times 10^{6}$. The second measurement shows a temperature of $25.1\pm 0.3\,^{\circ}\mathrm{C}$, a pressure of $(2.01\pm 0.04)\times 10^{-10}\,$mbar and an atom number of $(4.81\pm 0.33)\times 10^{6}$. (b) A pair of absorption images of atomic clouds is shown from within each 8 hour window, indicated by the labels A and B. The images for each of the two times depict both a magnetically trapped cloud (above the transition temperature, $T_{c}$), and the corresponding Bose-Einstein condensate that can be produced by further evaporative cooling.
An example subset of the data from the monitoring system used to resolve
experimental difficulties is shown in Fig. 4. The data captured with the
monitoring system allow for continuous comparison of the experiment under
different environmental conditions. This figure shows the increase in atom
number due to the better optimisation made possible by more stable
environmental conditions. Here, the standard deviation of the atom number
decreases from $5.5\times 10^{5}$ to $3.3\times 10^{5}$, an improvement of
$39\,\%$, when the temperature is stabilised, and the standard deviation of
the vacuum chamber pressure decreases from $5.1\times 10^{-12}\,$mbar to
$3.8\times 10^{-12}\,$mbar. This shows the benefit of having access to regular
environmental data to improve experimental performance by optimising the
system to the most stable background. The ability to autonomously monitor a
series of experimental variables including power supply currents and sample
resistances is key to allow experimental development even with limited
laboratory access for debugging, as for in-field devices or up-scaled quantum
technologies. Using this system allowed one laboratory to advance an
experiment from using thermal clouds to producing BECs without physical access
to the laboratory during the Covid-19 pandemic in 2020.
### Conclusion
The accessible and flexible monitoring system presented here provides a useful
blueprint for replication in similar cold atom laboratories. The ability to
continually measure key experimental parameters and centralise data storage
across a network of experiments paves the way for consolidated control and
shared infrastructure, demonstrated by the distributed laser system. The time-
stamped database system allows a wide range of measurements from various
sources to be aggregated into one place, in a standardised format and
presented to end users with versatile dashboards. This reduces the time needed
for problem solving from days or hours down to minutes or seconds and eases
the progression of experiments, shown here by the remote creation of a Bose-
Einstein condensate in an experiment which did not previously have one. The
data management infrastructure presented here can be employed in a wide range
of quantum setups, including large networks of atomic experiments or remote
systems where access is limited, inside a laboratory context or in the field.
## References
* [1] Vincent Ménoret, Pierre Vermeulen, Nicolas Le Moigne, Sylvain Bonvalot, Philippe Bouyer, Arnaud Landragin and Bruno Desruelle “Gravity measurements below $10^{-9}$ g with a transportable absolute quantum gravimeter” In _Scientific Reports_ 8.1 Nature Publishing Group, 2018, pp. 12300 DOI: 10.1038/s41598-018-30608-1
* [2] Pierrick Cheiney, Lauriane Fouché, Simon Templier, Fabien Napolitano, Baptiste Battelier, Philippe Bouyer and Brynle Barrett “Navigation-Compatible Hybrid Quantum Accelerometer Using a Kalman Filter” In _Physical Review Applied_ 10 American Physical Society, 2018, pp. 034030 DOI: 10.1103/PhysRevApplied.10.034030
* [3] Carlos L. Garrido Alzar “Compact chip-scale guided cold atom gyrometers for inertial navigation: Enabling technologies and design study” In _AVS Quantum Science_ 1.1, 2019, pp. 014702 DOI: 10.1116/1.5120348
* [4] Dennis Becker et al. “Space-borne Bose–Einstein condensation for precision interferometry” In _Nature_ 562.7727, 2018, pp. 391–395 DOI: 10.1038/s41586-018-0605-1
* [5] P. Wigley et al. “Fast machine-learning online optimization of ultra-cold-atom experiments” In _Scientific Reports_ 6.1, 2016, pp. 25890 DOI: 10.1038/srep25890
* [6] I. Geisel, K. Cordes, J. Mahnke, S. Jöllenbeck, J. Ostermann, J. Arlt, W. Ertmer and C. Klempt “Evolutionary optimization of an experimental apparatus” In _Applied Physics Letters_ 102.21, 2013, pp. 214105 DOI: 10.1063/1.4808213
* [7] A. Tranter, H. Slatyer, M. Hush, A. Leung, J. Everett, K. Paul, P. Vernaz-Gris, P. Lam, B. Buchler and G. Campbell “Multiparameter optimisation of a magneto-optical trap using deep learning” In _Nature Communications_ 9.1, 2018, pp. 4360 DOI: 10.1038/s41467-018-06847-1
* [8] A J Barker, H Style, K Luksch, S Sunami, D Garrick, F Hill, C J Foot and E Bentine “Applying machine learning optimization methods to the production of a quantum gas” In _Machine Learning: Science and Technology_ 1.1 IOP Publishing, 2020, pp. 015007 DOI: 10.1088/2632-2153/ab6432
* [9] B. Canuel et al. “Technologies for the ELGAR large scale atom interferometer array” In _preprint_ , 2020 arXiv: http://arxiv.org/abs/2007.04014
* [10] Y. Bidel, N. Zahzam, C. Blanchard, A. Bonnin, M. Cadoret, A. Bresson, D. Rouxel and M.. Lequentrec-Lalancette “Absolute marine gravimetry with matter-wave interferometry” In _Nature Communications_ 9 Springer US, 2018, pp. 627 DOI: 10.1038/s41467-018-03040-2
* [11] Aleksandar Zatezalo, Vladan Vuletić, Paul Baker and T. Poling “Bose-Einstein interferometry and its applications to precision undersea navigation” In _2008 IEEE/ION Position, Location and Navigation Symposium_ , 2008, pp. 940–950 DOI: 10.1109/PLANS.2008.4570036
* [12] Ariya Hidayat, Benjamin Koch, Hongbin Zhang, Vitali Mirvoda, Manfred Lichtinger, David Sandel and Reinhold Noé “High-speed endless optical polarization stabilization using calibrated waveplates and field-programmable gate array-based digital controller” In _Optics Express_ 16.23, 2008, pp. 18984 DOI: 10.1364/oe.16.018984
* [13] R. Thomas and N. Kjærgaard “A digital feedback controller for stabilizing large electric currents to the ppm level for Feshbach resonance studies” In _Review of Scientific Instruments_ 91.3 AIP Publishing, LLC, 2020, pp. 034705 DOI: 10.1063/1.5128935
* [14] Akshay Gurdita, Heather Vovko and Mark Ungrin “A simple and low-cost monitoring system to investigate environmental conditions in a biological research laboratory” In _PLoS ONE_ 11.1, 2016, pp. e0147140 DOI: 10.1371/journal.pone.0147140
* [15] James P. Grinias, Jason T. Whitfield, Erik D. Guetschow and Robert T. Kennedy “An inexpensive, open-source USB Arduino data acquisition device for chemical instrumentation” In _Journal of Chemical Education_ 93.7, 2016, pp. 1316–1319 DOI: 10.1021/acs.jchemed.6b00262
* [16] Štěpánka Kubínová and Jan Šlégr “ChemDuino: Adapting Arduino for Low-Cost Chemical Measurements in Lecture and Laboratory” In _Journal of Chemical Education_ 92.10, 2015, pp. 1751–1753 DOI: 10.1021/ed5008102
* [17] K. Zachariadou, K. Yiasemides and N. Trougkakos “A low-cost computer-controlled Arduino-based educational laboratory system for teaching the fundamentals of photovoltaic cells” In _European Journal of Physics_ 33.6, 2012, pp. 1599–1610 DOI: 10.1088/0143-0807/33/6/1599
* [18] Matthew Chilcott and Niels Kjærgaard “Low-cost Wireless Condition Monitoring for an Ultracold Atom Machine” In _Internet of Things_ 13 Elsevier B.V., 2021, pp. 100345 DOI: 10.1016/j.iot.2020.100345
* [19] Bola S. Malek, Zachary Pagel, Xuejian Wu and Holger Müller “Embedded control system for mobile atom interferometers” In _Review of Scientific Instruments_ 90 AIP Publishing, LLC, 2019, pp. 073103 DOI: 10.1063/1.5083981
* [20] E. Eyler “A single-chip event sequencer and related microcontroller instrumentation for atomic physics research” In _Review of Scientific Instruments_ 82, 2011, pp. 013105 DOI: 10.1063/1.3523426
* [21] Wei Deng, Peter Yun, Yi Zhang, Jiehua Chen and Sihong Gu “Embedded Control System for Atomic Clock” In _Knowledge Discovery and Data Mining_ Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, pp. 61–67 DOI: 10.1007/978-3-642-27708-5˙9
* [22] E. Eyler “Instrumentation for laser physics and spectroscopy using 32-bit microcontrollers with an Android tablet interface” In _Review of Scientific Instruments_ 84.10, 2013, pp. 103101 DOI: 10.1063/1.4821986
* [23] K. Huang, H. Le Jeannic, J. Ruaudel, O. Morin and J. Laurat “Microcontroller-based locking in optics experiments” In _Review of Scientific Instruments_ 85.12, 2014, pp. 123112 DOI: 10.1063/1.4903869
* [24] Linying Jiang, Zhiliang Zhu, Hailong Li and Zhenhua Guo “Embedded laboratory environment monitor system” In _2009 WASE International Conference on Information Engineering, ICIE 2009_ 2 IEEE, 2009, pp. 197–201 DOI: 10.1109/ICIE.2009.179
* [25] Reetam Mondal and Sagarika Pal “Microcontroller based temperature monitoring and closed loop control to study the reaction of controlled variable with respect to load changes” In _Sensors and Transducers_ 153.6, 2013, pp. 148–154
* [26] Sébastien Bourdeauducq et al. “ARTIQ 1.0.” In _Zenodo. 10.5281/zenodo.51303._, 2016 DOI: 10.5281/ZENODO.1492176
* [27] Elia Perego, Marco Pomponio, Amelia Detti, Lucia Duca, Carlo Sias and Claudio E. Calosso “A scalable hardware and software control apparatus for experiments with hybrid quantum systems” In _Review of Scientific Instruments_ 89, 2018, pp. 113116 DOI: 10.1063/1.5049120
* [28] Aviv Keshet and Wolfgang Ketterle “A distributed, graphical user interface based, computer control system for atomic physics experiments” In _Review of Scientific Instruments_ 84, 2013, pp. 015105 DOI: 10.1063/1.4773536
* [29] P.. Starkey, C.. Billington, S.. Johnstone, M. Jasperse, K. Helmerson, L.. Turner and R.. Anderson “A scripted control system for autonomous hardware-timed experiments” In _Review of Scientific Instruments_ 84, 2013, pp. 085111 DOI: 10.1063/1.4817213
* [30] S. Amri, R. Corgier, D. Sugny, E.. Rasel, N. Gaaloul and E. Charron “Optimal control of the transport of Bose-Einstein condensates with atom chips” In _Scientific Reports_ 9 Nature Publishing Group, 2019, pp. 5346 DOI: 10.1038/s41598-019-41784-z
* [31] Robert Heck et al. “Remote optimization of an ultracold atoms experiment by experts and citizen scientists” In _Proceedings of the National Academy of Sciences of the United States of America_ 115.48, 2018, pp. E11231–E11237 DOI: 10.1073/pnas.1716869115
* [32] J S Laustsen, R Heck, O Elíasson, J J Arlt, J F Sherson and C A Weidner “Remote multi-user control of the production of Bose-Einstein condensates for research and education” In _preprint_ , 2021 arXiv: http://arxiv.org/abs/2101.11398
* [33] H. Krause, E. Deveney, N. Jones, C. Vane, S. Datz, H. Knudsen, P. Grafström and R. Schuch “Novel apparatus and methods for performing remotely controlled particle-solid interaction experiments at CERN” In _Nuclear Instruments and Methods in Physics Research, Section B: Beam Interactions with Materials and Atoms_ 124.1, 1997, pp. 128–139 DOI: 10.1016/S0168-583X(97)00098-0
* [34] A. Hyder, S.. Choi and D. Schaefer “Remotely controlled laboratory experiments: Creation and examples” In _2010 IEEE Systems and Information Engineering Design Symposium, SIEDS10_ , 2010, pp. 62–67 DOI: 10.1109/SIEDS.2010.5469676
* [35] Galiya Sharafutdinova, Michael Cvetanovski, Daniel Walton, Ben Guest, John Foster, George Piszczuk and John Furst “Distance learning laboratory: A remote Atomic and Nuclear Physics experiment” In _Australian Physics_ 50.2, 2013, pp. 55–59
* [36] ColdQuanta Inc. “Albert: Quantum Matter on the Cloud”, 2020 URL: https://coldquanta.com/albertcloud
* [37] IBM Quantum Team “IBM Quantum Experience”, 2020 URL: https://quantum-computing.ibm.com
* [38] David A. Smith, Simon Aigner, Sebastian Hofferberth, Michael Gring, Mauritz Andersson, Stefan Wildermuth, Peter Krüger, Stephan Schneider, Thorsten Schumm and Jörg Schmiedmayer “Absorption imaging of ultracold atoms on atom chips” In _Optics Express_ 19.9, 2011, pp. 8471 DOI: 10.1364/oe.19.008471
* [39] InfluxData Inc. “InfluxDB: Purpose-Built Open Source Time Series Database” URL: https://www.influxdata.com
* [40] Grafana Labs “Grafana: The Open Observability Platform” URL: https://grafana.com
* [41] The Hiveeyes Developers “The Hiveeyes Project”, 2011 URL: https://hiveeyes.org
* [42] ayoungprogrammer “Grafana Finance”, 2017 URL: https://github.com/ayoungprogrammer/grafana-finance
* [43] Martin Kyselak, Filip Dvorak, Jan Maschke and Cestmir Vlcek “Phase shift response of birefringent PANDA fiber after the end of thermal exposure during recovery to ambient temperature” In _Optical and Quantum Electronics_ 52.10, 2020, pp. 422 DOI: 10.1007/s11082-020-02539-7
* [44] W. Eickhoff “Temperature sensing by mode–mode interference in birefringent optical fibers” In _Optics Letters_ 6.4, 1981, pp. 204 DOI: 10.1364/ol.6.000204
* [45] L. Badurina et al. “AION: An atom interferometer observatory and network” In _Journal of Cosmology and Astroparticle Physics_ 2020.5, 2020 DOI: 10.1088/1475-7516/2020/05/011
* [46] D N Aguilera et al. “STE-QUEST—test of the universality of free fall using cold atom interferometry” In _Classical and Quantum Gravity_ 31.11 IOP Publishing, 2014, pp. 115010 DOI: 10.1088/0264-9381/31/11/115010
* [47] Savas Dimopoulos, Peter W. Graham, Jason M. Hogan and Mark A. Kasevich “Testing general relativity with atom interferometry” In _Physical Review Letters_ 98.11, 2007, pp. 1–4 DOI: 10.1103/PhysRevLett.98.111102
* [48] Stephan Wildermuth, Sebastian Hofferberth, Igor Lesanovsky, Elmar Haller, L. Andersson, Sönke Groth, Israel Bar-Joseph, Peter Krüger and Jörg Schmiedmayer “Bose-Einstein condensates: Microscopic magnetic-field imaging” In _Nature_ 435.7041, 2005, pp. 440 DOI: 10.1038/435440a
* [49] M. Gierling, P. Schneeweiss, G. Visanescu, P. Federsel, M. Häffner, D. Kern, T. Judd, A. Günther and J. Fortágh “Cold-atom scanning probe microscopy” In _Nature Nanotechnology_ 6.7, 2011, pp. 446–451 DOI: 10.1038/nnano.2011.80
* [50] Stephen R. Segal, Quentin Diot, Eric A. Cornell, Alex A. Zozulya and Dana Z. Anderson “Revealing buried information: Statistical processing techniques for ultracold-gas image analysis” In _Physical Review A - Atomic, Molecular, and Optical Physics_ 81.5, 2010, pp. 1–9 DOI: 10.1103/PhysRevA.81.053601
# BCN2BRNO: ASR System Fusion for Albayzin 2020 Speech to Text Challenge
###### Abstract
This paper describes the joint effort of BUT and Telefónica Research on the
development of Automatic Speech Recognition systems for the Albayzin 2020
Challenge. We compare approaches based on either hybrid or end-to-end models.
In hybrid modelling, we explore the impact of a SpecAugment [1, 2] layer on
performance. For end-to-end modelling, we use a convolutional neural network
with gated linear units (GLUs). The performance of such a model is also
evaluated with an additional n-gram language model to improve word error
rates. We further inspect source separation methods to extract speech from
noisy environments (i.e., TV shows). More precisely, we assess the effect of
using a neural-based music separator named Demucs [3]. A fusion of our best
systems achieved 23.33 % WER in the official Albayzin 2020 evaluations. Aside
from the techniques used in our final submitted systems, we also describe our
efforts in retrieving high-quality transcripts for training.
Index Terms: fusion, end-to-end model, hybrid model, semi-supervised,
automatic speech recognition, convolutional neural network.
## 1 Introduction
The Albayzin 2020 challenge is a continuation of the Albayzin 2018 challenges
[4], with evaluations for the following tasks: Speech to Text, Speaker
Diarization and Identity Assignment, Multimodal Diarization and Scene
Description, and Search on Speech. The target domain of the series is
broadcast TV and radio content, with shows in a notable variety of Spanish
accents.
This paper describes the BCN2BRNO team’s Automatic Speech Recognition (ASR)
system for the IberSPEECH-RTVE 2020 Speech to Text Transcription Challenge, a
joint collaboration between the Speech@FIT research group, Telefónica Research
(TID) and Universitat Pompeu Fabra (UPF). Our goal is to develop two distinct
ASR systems, one based on a hybrid model [5] and the other on an end-to-end
approach [6], and to let them complement each other through a joint fusion.
We submitted one primary system and one contrastive system. The primary system
– Fusion B – is a word-level ROVER fusion of hybrid ASR models and end-to-end
models. It achieved 23.33 % WER on the official evaluation dataset. However,
the same result was accomplished by the contrastive system – Fusion A – a
fusion which comprises only hybrid ASR models. In this paper we describe both
ASR systems, plus a post-evaluation analysis and experiments that led to a
better performance of the primary fusion. We also discuss the effect of speech
enhancement techniques like background music removal and speech denoising.
## 2 Data
The Albayzin 2020 challenge comes with two databases: _RTVE2018_ and
_RTVE2020_. The RTVE2018 is the main source of training and development data,
while the RTVE2020 database is used for the final evaluation of submitted
systems. RTVE2018 database [7] comprises 15 different TV programs broadcast
between 2015 and 2018 by the Spanish public television Radiotelevisión
Española (RTVE). The programs contain a great variety of speech scenarios from
read speech to spontaneous speech, live broadcast, political debates, etc.
They also cover different Spanish accents, including Latin-American ones. The
database is partitioned into 4 different subsets: train, dev1, dev2 and test.
The database consists of $569$ hours of audio data, from which $468$ hours are
provided with subtitles (train set), and $109$ hours are human-revised (dev1,
dev2 and test sets). Both hybrid and end-to-end models utilize dev1 and train
sets for training, while dev2 and test sets serve as validation datasets.
RTVE2020 database [8] consists of TV shows of different genres broadcast by
RTVE from 2018 to 2019. It includes more than $70$ hours of audio, all of
which has been manually annotated.
In addition, three Linguistic Data Consortium (LDC) corpora were used for
training the language model in the hybrid ASR system: _Fisher Spanish Speech_,
_CALLHOME Spanish Speech_ and _Spanish Gigaword Third Edition_.
The Fisher Spanish Speech [9] corpus comprises spontaneous telephone speech
from $136$ native Caribbean and non-Caribbean Spanish speakers with full
orthographic transcripts. The recordings consist of $819$ telephone
conversations lasting up to $12$ minutes each.
CALLHOME Spanish Speech [10] corpus consists of $120$ telephone conversations
between Spanish native speakers lasting less than 30 minutes. Spanish Gigaword
Third Edition [11] is an extensive database of Spanish newswire text data
acquired by the LDC. It includes reports, news, news briefs, etc. collected
from 1994 through Dec 2010. We also downloaded the text data from Spanish
Wikipedia.
The end-to-end model is trained on Fisher Spanish Speech, Mozilla’s Common
Voice Spanish corpus and Telefónica’s in-house Call Center data (23 hours).
Mozilla’s Common Voice Spanish [12] is an open-source dataset that consists of
recordings of volunteer contributors pronouncing scripted sentences, recorded
at a 48 kHz sampling rate. The sentences come from original contributor
donations and public-domain movie scripts. The version of the Common Voice
corpus used for this work is 5.1, which contains 521 hours of recorded speech.
However, we kept only speech validated by the contributors, amounting to 290
hours.
### 2.1 Transcript retrieval
The training data from the RTVE2018 database include many hours of subtitled
speech. However, the captions contain frequent errors. In most cases the
captions are shifted by a few seconds, so a segment with a correct transcript
corresponds to a different portion of audio. This phenomenon also occurs in
the human-revised development and test sets. Another problem with subtitled
speech is “partly-said” captions, i.e. misspelled or unspoken words in the
transcription.
Since the training procedure of the hybrid ASR is quite error-prone in the
case of misaligned labels, we decided to apply the transcript retrieval
technique developed by Manohar et al. [13]: the closed captions related to the
same audio, i.e., the whole TV show, are first concatenated according to the
original timeline. This creates a small text corpus containing a few hundred
words. The text corpus is used to train a biased $N$-gram language model (LM)
with $N=7$, so the model is biased only towards the currently processed
captions. During decoding, the weight of the acoustic model (AM) is
significantly smaller than the weight of the LM, because we expect the
captions to occur in the hypotheses. Then, the “winning” path is retrieved
from the hypothesis lattice as the path with minimum edit cost w.r.t. the
original transcript. Finally, the retrieved transcripts are segmented using
the CTMs obtained from the oracle alignment (the previous step). More details
can be found in [14, 13].
Table 1: 2-pass transcript retrieval (hours of audio).

Cleaning | Train | Dev1 | Dev2 | Test
---|---|---|---|---
Original | $468$ | $60.6$ | $15.2$ | $36.8$
1-pass | $99.4$ | $21$ | $7.5$ | -
2-pass | $234.2$ | $55.1$ | $14.3$ | $33.7$
Recovered | $50\,\%$ | $91\,\%$ | $94\,\%$ | $92\,\%$
The transcript retrieval technique is applied twice. First, we train an
initial ASR system on out-of-domain data, e.g., Fisher and CALLHOME, and use
it for the first pass of transcript retrieval. Then, a new system is trained
from scratch on the already cleaned data and the whole process of transcript
retrieval is repeated. Table 1 shows how this 2-pass cleaning recovers almost
all of the manually annotated development data and half of the subtitled
training data.
Figure 1: Amount of cleaned audio per TV-show, in hours.
Figure 1 depicts how many hours have been recovered for individual TV
programs. It also shows how the data is distributed in the database. Most
speech comes from the La-Mañana (LM) TV program, yet most of its data was
discarded after the 2-pass cleaning, because this particular show proved quite
challenging for our ASR model.
## 3 Hybrid speech recognition
### 3.1 Acoustic Model
In all our experiments, the acoustic model was based on a hybrid Deep Neural
Network – Hidden Markov Model architecture trained in Kaldi [15]. The NN part
of the model contains 6 convolutional layers followed by 19 TDNN layers with
semi-orthogonal factorization [5] (CNN-TDNNf). The input consists of 40-dim
MFCCs concatenated with speaker-dependent 100-dim i-vectors. The whole model
is trained using the LF-MMI objective function with bi-phone acoustic units as
targets.
In order to make our NN model training more robust, we introduced a feature
dropout layer into the architecture. This prevents the model from overfitting
on the training data; in fact, it turns the overfitting problem into an
underfitting one and thus leads to slower convergence during training. We
compensate for this by increasing the number of epochs from 6 to 8. This
technique is also known as spectral augmentation: it was first suggested for
multi-stream hybrid NN models in [1] and fully examined in [2].
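A minimal sketch of such a feature dropout layer, zeroing random time and frequency bands of the input features (the mask widths below are illustrative, not the values used in this system):

```python
import numpy as np

def spec_augment(features, rng, max_time=20, max_freq=8, n_masks=2):
    """Zero out random time and frequency bands of a (frames x bins)
    feature matrix, a simplified on-the-fly feature dropout."""
    feats = features.copy()
    n_frames, n_bins = feats.shape
    for _ in range(n_masks):
        # Random time mask of width 0..max_time frames.
        t = rng.integers(0, max_time + 1)
        t0 = rng.integers(0, max(1, n_frames - t))
        feats[t0:t0 + t, :] = 0.0
        # Random frequency mask of width 0..max_freq bins.
        f = rng.integers(0, max_freq + 1)
        f0 = rng.integers(0, max(1, n_bins - f))
        feats[:, f0:f0 + f] = 0.0
    return feats

rng = np.random.default_rng(0)
mfcc = rng.normal(size=(300, 40))   # toy stand-in for a 300-frame utterance
masked = spec_augment(mfcc, rng)
```

Because the masks are redrawn every epoch, the model never sees the same corrupted features twice, which is what discourages overfitting.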
### 3.2 Language Model
We trained three different $3$-gram language models: Alb, Wiki and Giga; the
names indicate which text corpus was used during training. The Albayzin LM was
trained on the dev1 and train sets from RTVE2018. This text mixture contains
$80$ thousand unique words in $0.5$ million sentences. Such a small training
text is not enough to train an $N$-gram LM that generalizes well, so we also
included larger text corpora: Wikipedia and Spanish Gigaword. These databases
were further processed to remove unrelated text such as advertisements, emoji,
URLs, etc. This resulted in more than $2.5$ million clean sentences from
Wikipedia and $20$ million sentences from Spanish Gigaword. We experimented
with $4$ interpolation combinations: Alb, Alb+Wiki, Alb+Giga and
Alb+Wiki+Giga.
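The effect of interpolation can be illustrated with a toy example: linearly mixing the per-word probabilities of two LMs and keeping the mixture weight that minimizes dev-set perplexity (the probabilities below are invented):

```python
import math

def interpolate(p_a, p_b, lam):
    """Linearly interpolate two language-model probabilities."""
    return lam * p_a + (1.0 - lam) * p_b

def perplexity(probs):
    """Perplexity of a word sequence from its per-word probabilities."""
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

# Invented per-word probabilities of one dev sentence under two LMs.
alb = [0.20, 0.05, 0.30, 0.10]
wiki = [0.10, 0.15, 0.10, 0.25]

# Sweep the interpolation weight and keep the one with lowest perplexity.
best_ppl, best_lam = min(
    (perplexity([interpolate(a, b, lam) for a, b in zip(alb, wiki)]), lam)
    for lam in [i / 10 for i in range(11)]
)
```

Because the sweep includes the weights 0 and 1, the interpolated LM can never be worse on the tuning data than either component alone.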
Our vocabulary consists of words from the RTVE2018 database and from the
Santiago lexicon (https://www.openslr.org/34/). The pronunciation of Spanish
words was extracted using the public TTS model E-speak [16]. The vocabulary
was then extended by auxiliary labels for noise, music and overlapped speech.
The final lexicon contains around $110$ thousand words.
### 3.3 Voice Activity Detection
Voice activity detection (VAD) was applied to the evaluation data in order to
segment the audio into smaller chunks. The VAD is based on a feed-forward
neural network with two outputs. It expects 15-dimensional filterbank features
with 3 additional Kaldi pitch features [17] as input. Features are normalized
with cepstral mean normalization. More details can be found in [18].
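A crude sketch of the final segmentation step: thresholding per-frame speech posteriors into chunks, dropping runs too short to be useful. The threshold and minimum length are illustrative; the real VAD operates on the NN outputs described above:

```python
def speech_segments(posteriors, threshold=0.5, min_frames=3):
    """Turn per-frame speech posteriors into (start, end) frame
    segments (end exclusive), discarding runs shorter than
    `min_frames`."""
    segments = []
    start = None
    for i, p in enumerate(posteriors + [0.0]):   # sentinel flushes the last run
        if p >= threshold and start is None:
            start = i
        elif p < threshold and start is not None:
            if i - start >= min_frames:
                segments.append((start, i))
            start = None
    return segments

chunks = speech_segments([0.1, 0.9, 0.8, 0.7, 0.2, 0.6, 0.9, 0.95, 0.8, 0.1])
```

Each returned frame range is then cut from the audio and decoded independently.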
## 4 End-to-end speech recognition
### 4.1 Acoustic Model
The end-to-end acoustic model is based on the convolutional architecture
proposed by Collobert et al. [6] that uses gated linear units (GLUs). Using
GLUs in convolutional approaches helps avoid vanishing gradients by providing
a linear path for them while keeping high performance. Concretely, we used the
model from wav2letter’s Wall Street Journal (WSJ) recipe. This model has
approximately 17M parameters, with dropout applied after each of its 17
layers. The WSJ dataset contains around 80 hours of audio recordings,
considerably less than our data ($\sim$600 hours). The LibriSpeech recipe
($\sim$1000 hours) provides a deeper GLU-based ConvNet architecture; however,
we decided to use the WSJ one in order to reduce computational time and allow
more hyper-parameter fine-tuning of the network.
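The GLU itself is simple to state. A minimal NumPy sketch of one (non-convolutional) GLU layer, where the sigmoid-gated product keeps a linear, non-saturating path for gradients:

```python
import numpy as np

def glu(x, w_a, w_b):
    """Gated linear unit: GLU(x) = (x @ w_a) * sigmoid(x @ w_b).
    The linear half preserves a non-vanishing gradient path, while
    the sigmoid half gates which activations pass through."""
    linear = x @ w_a
    gate = 1.0 / (1.0 + np.exp(-(x @ w_b)))
    return linear * gate

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 16))      # 5 frames, 16 input channels
w_a = rng.normal(size=(16, 8))    # linear projection
w_b = rng.normal(size=(16, 8))    # gate projection
y = glu(x, w_a, w_b)
```

In the convolutional model, `x @ w_a` and `x @ w_b` are replaced by two halves of a convolution's output channels.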
All data samples are resampled to 16 kHz, and the system is trained with the
wav2letter++ framework. Mel-frequency spectral coefficients (MFSCs) are
extracted from raw audio using 80 filterbanks, and the system is trained with
the Auto Segmentation criterion (ASG) [6] and a batch size of 4. The learning
rate starts at 5.6 and is decreased down to 0.4 after 30 epochs, at which
point training is stopped since no significant WER gains are achieved. From
epochs 22 to 28 the system is also trained with the same train set, but adding
the RTVE2018 train and dev1 samples with the background music removed by the
Demucs module [3]. The last two epochs, from epoch 28 to epoch 30, additionally
incorporate samples with the background noise removed by Demucs and denoised
by a neural denoiser [19]. In this way the training data are augmented with
samples free of background music and noise, helping the network cope with
difficult acoustic conditions. Moreover, the network becomes more likely to
generalize over the audio artifacts introduced by the denoiser and music
separator networks, which is useful when using these tools to clean test
audio.
### 4.2 Language Model
Regarding the lexicon, we extract it from the train and validation
transcripts, plus the Sala lexicon [20]. The resulting lexicon is
grapheme-based, with 271k words. We use the standard Spanish alphabet as
tokens, plus the ”ç” letter from Catalan and the vowels with diacritical
marks, making a total of 37 tokens.
The LM is a 5-gram model trained with KenLM [21] using only transcripts from
the training sets: RTVE2018 train and dev1, plus Common Voice, Fisher and Call
Center. The resulting LM is referred to in this paper as _Alb+Others_.
Fine-tuning of the decoder hyperparameters is done via grid search on the
RTVE2018 dev2 set. The best results are achieved with an LM weight of 2.25, a
word score of 2.25 and a silence score of -0.35. This same configuration is
then applied to the evaluation datasets from RTVE2018 and RTVE2020.
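The grid search can be sketched as follows. Here `decode_score` is a stand-in for running the actual decoder on dev2 and measuring WER; its shape is invented purely so the example has a well-defined minimum:

```python
import itertools

def decode_score(lm_weight, word_score, silence_score):
    """Stand-in for decoding RTVE2018 dev2 with one configuration and
    measuring WER. This surrogate is invented for illustration; its
    minimum sits at the configuration reported in the text."""
    return (20.8 + abs(lm_weight - 2.25) + abs(word_score - 2.25)
            + abs(silence_score + 0.35))

grid = itertools.product(
    [1.75, 2.0, 2.25, 2.5],    # LM weight
    [1.75, 2.0, 2.25, 2.5],    # word insertion score
    [-0.55, -0.35, -0.15],     # silence score
)
best = min(grid, key=lambda cfg: decode_score(*cfg))
```

Exhaustive search is affordable here because each axis has only a handful of candidate values and each evaluation is a single decoding pass.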
## 5 Experiments
### 5.1 Data cleaning
Data cleaning by means of 2-pass transcript retrieval is the technique that
improves the performance of our models the most. Table 1 shows the amount of
data recovered by each pass, and Table 2 the resulting WER: the $2$nd pass
improved the accuracy by almost $2\,\%$ absolute in terms of WER. We also ran
a $3$rd pass, but it did not help any further, retrieving just $3$ hours of
additional cleaned data from the original transcripts. Note that we could not
train the models on the original subtitles, since these contained wrong
timestamps.
Table 2: Effect of multi-pass transcript cleaning evaluated on the RTVE2018 test set.

AM | LM | Training data | Test WER [%]
---|---|---|---
CNN-TDNNf | Alb | 1-pass | 17.2
CNN-TDNNf | Alb | 2-pass | 15.5
CNN-TDNNf | Alb | 3-pass | 15.5
### 5.2 Speech Enhancement
It is very common to find background music in TV programs, which can confuse
our recognizer if it is prominent. This led us to the idea of processing the
audio through a music source separator called Demucs [3]. It separates the
original audio into voice, bass, drums and other components. By keeping only
the voice component, we managed to largely remove the background music while
maintaining relatively good quality in the original voice.
We enhanced both validation sets in order to assess possible WER reductions.
As seen in Table 4, this approach yielded a small increase in WER. We also
tried applying a specialized denoiser [19] after background music removal, but
the WER on dev2 increased by an absolute 1.6% compared to the original system
without enhancement. Since neither of these two approaches (Demucs and
Demucs+Denoiser) provided WER improvements at first, we did not apply them to
the end-to-end model used in the fusion. However, the end-to-end, end-to-end +
Demucs and end-to-end + Demucs + Denoiser models were submitted as separate
systems by the UPF-TID team; see Table 5 for details.
Our hypothesis is that not all samples contain background music. Speech
enhancement of already clean samples is detrimental because it causes a slight
degradation of the signal. Hence, we evaluated the effect of applying music
source separation only to samples within certain SNR ranges, measured with the
WADA-SNR algorithm [22]. As shown in Table 3, applying music separation on the
RTVE dataset is optimal for SNR ranges between -5 and 5 (or 8) dB.
Looking at Figure 2, the best improvements are found for TV shows with higher
WER (thus harder/noisier speech), e.g., AV, where the speakers are in a car
most of the time, or LM and DH, where music and speech often overlap. Other
shows see smaller benefits, since they already contain good-quality audio. The
exception is the AFI show, which is reported to have poor-quality audio, so
further audio degradation from Demucs might cause worse performance.
Figure 2: Variation of the mean WER per TV show between using Demucs-cleaned or original samples on RTVE’s 2018 test set. Negative values represent Demucs improvements. Note that only samples with SNR between -5 and 8 are enhanced.

Table 3: WER impact of cleaning speech signals within certain SNR ranges using a music source separator. The end-to-end ConvNet GLU model is used without LM; the percentage of cleaned samples is reported.

SNR range | Cleaned samples 2018 [%] | Cleaned samples 2020 [%] | Test WER 2018 [%] | Test WER 2020 [%]
---|---|---|---|---
$(-\infty,\infty)$ | 100 | 100 | 37.50 | 53.53
$(-\infty,10)$ | 25.97 | 34.22 | -0.05 | -0.87
$(-5, 10)$ | 25.84 | 31.33 | -0.05 | -0.88
$(-5, 5)$ | 5.14 | 11.88 | -0.07 | -1.03
$(-5, 8)$ | 14.95 | 22.11 | -0.08 | -0.97
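The SNR-gated enhancement reduces to a simple rule. In this sketch, `separate_music` stands in for the Demucs voice-stem extraction and `snr_db` for the WADA-SNR estimate of the sample:

```python
def maybe_enhance(samples, snr_db, separate_music, lo=-5.0, hi=8.0):
    """Apply the music source separator only when the estimated SNR
    falls inside [lo, hi] dB; cleaner samples pass through untouched
    to avoid needless signal degradation."""
    if lo <= snr_db <= hi:
        return separate_music(samples)
    return samples

# Toy separator halving the signal; real use would call Demucs.
halve = lambda s: [v / 2 for v in s]
enhanced = maybe_enhance([1.0, -1.0], snr_db=2.0, separate_music=halve)
untouched = maybe_enhance([1.0, -1.0], snr_db=20.0, separate_music=halve)
```

Gating on SNR is what lets the noisy shows benefit from separation without degrading the already clean ones.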
### 5.3 Spectral augmentation
Table 4 compares models with and without spectral augmentation. The technique
helps quite significantly: all models with the feature dropout layer
outperformed their counterparts, with a fairly constant $0.4\,\%$ absolute WER
improvement on the RTVE2018 test set and around $0.6\,\%$ on the RTVE2018 dev2
set.
### 5.4 Model fusion
We also fused the outputs of our best systems to further improve performance.
The overall results of the systems considered for the fusion are shown in
Table 4. Since the models with spectral augmentation performed significantly
better, we decided to fuse only these systems. We analyzed two different
approaches: a pure hybrid model fusion (Fusion A) and a hybrid plus end-to-end
model fusion (Fusion B).
Considering that the end-to-end model does not provide word-level timestamps,
we had to force-align its transcripts with the hybrid ASR system in order to
obtain CTM output. The word-level fusion itself was done with the ROVER
toolkit [23]. Fusion B, which includes the end-to-end model, performed
slightly better than its counterpart Fusion A, despite the fact that the
end-to-end model achieved worse standalone results. This supports the idea
that a fusion can benefit from different modeling approaches.
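A heavily simplified sketch of the voting idea behind ROVER, assuming the hypotheses have already been aligned slot-by-slot (real ROVER builds a word transition network, performs the alignment itself, and can weight votes by confidence):

```python
from collections import Counter

def vote(aligned_hyps):
    """Majority vote over per-slot aligned hypotheses. `None` marks a
    deletion (no word in that slot); if `None` wins a slot, the fused
    output emits nothing there."""
    fused = []
    for slot in zip(*aligned_hyps):
        word, _ = Counter(slot).most_common(1)[0]
        if word is not None:
            fused.append(word)
    return fused

# Three invented aligned hypotheses for the same utterance.
hyps = [
    ["hola", "buenos", "dias", None],
    ["hola", "buenas", "dias", "a"],
    ["hola", "buenos", "dias", None],
]
fused = vote(hyps)
```

Because errors of the hybrid and end-to-end systems tend to fall in different slots, the vote can recover words that any single system got wrong.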
## 6 Final systems
Table 5 shows the results on the RTVE2020 test set. For the end-to-end ConvNet
GLU model, performance drops by around 15% absolute WER compared with previous
results on the development sets. Since the TV shows in those sets are also
present in the training dataset, our hypothesis is that the model slightly
overfits to them. Therefore, when facing the different acoustic conditions,
voices, background noises and music present in the RTVE2020 test set, the WER
noticeably increases. Enhancing the test samples with Demucs or with
Demucs+Denoiser yields a worse WER score, probably due to an inherent
degradation of the signal. A deeper analysis of more efficient ways to apply
such enhancements was presented in Section 5.2.
Also, note that the submitted systems had a leak of the dev2 STM transcripts
into the LM, causing hyperparameter overfitting during LM tuning. This
degraded the WER of all end-to-end systems, yielding WERs of 41.4%, 42.3% and
58.6%. Table 5 also displays the results of the same systems with the leakage
and LM tuning corrected in a post-evaluation analysis.
Table 4: Overall results on the RTVE2018 dataset with various language models and fusions. WER [%] is reported on the dev2 and test sets.
# | AM | LM | Dev2 | Test
---|---|---|---|---
1 | CNN-TDNNf | Alb | 14.1 | 15.5
2 | CNN-TDNNf | Alb + Wiki | 13.6 | 14.9
3 | CNN-TDNNf | Alb + Giga | 13.6 | 15.1
4 | CNN-TDNNf | Alb + Wiki + Giga | 13.5 | 15.0
5 | CNN-TDNNf + SpecAug | Alb | 13.4 | 15.0
6 | CNN-TDNNf + SpecAug | Alb + Wiki | 12.9 | 14.5
7 | CNN-TDNNf + SpecAug | Alb + Giga | 13.0 | 14.7
8 | CNN-TDNNf + SpecAug | Alb + Wiki + Giga | 12.9 | 14.6
9 | ConvNet GLU | None | 36.1 | 37.5
10 | ConvNet GLU | Alb + Others | 20.8 | 20.7
11 | ConvNet GLU + Demucs | None | 36.4 | 37.5
12 | ConvNet GLU + Demucs | Alb + Others | 21.1 | 20.8
13 | Fusion A _(rows 5-8)_ | | 12.9 | 13.7
14 | Fusion B _(rows 5-8 and 10)_ | | 12.8 | 13.3
Table 5: Official and post-evaluation final results on the RTVE2020 eval set for the submitted systems. ConvNet GLU was the primary system of the UPF-TID team; + Demucs and + Demucs + Denoiser were its first and second contrastive systems, respectively.
Model | Official WER [%] | Post-eval WER [%]
---|---|---
CNN-TDNNf | - | 24.3
\+ SpecAug | - | 23.5
ConvNet GLU | 41.4 | 36.2
\+ Demucs | 42.3 | 37.9
\+ Demucs + Denoiser | 58.6 | 40.0
Fusion A | 23.33 | 23.38
Fusion B | 23.33 | 23.24
## 7 Conclusions
In this paper we described two different ASR model architectures and their
fusion. We focused on improving the original subtitled data in order to train
our models on high-quality target labels. We also improved the $N$-gram
language model by incorporating publicly available text data from Wikipedia
and the Spanish Gigaword corpus from LDC, and successfully incorporated
spectral augmentation into our AM architecture. Our best system achieved
$13.3\%$ and $23.24\%$ WER on the RTVE2018 and RTVE2020 test sets, respectively.
The performance of our hybrid system can be further improved by using lattice
fusion with Minimum Bayes Risk decoding [24]. Another avenue for improvement
is RNN-LM lattice rescoring. Our end-to-end model shows relatively competitive
performance on the RTVE2018 test set in comparison with its hybrid
counterpart. However, its performance on RTVE2020 exposes that the model was
not able to generalize well, since this database turns out to contain slightly
different acoustic conditions. Despite this fact, the model still managed to
improve the results in the final fusion with hybrid systems. An exploration of
background music removal shows that it yields the best results for lower SNR
ranges, thus having a different impact depending on the acoustic conditions of
each TV show.
## References
* [1] S. H. R. Mallidi and H. Hermansky, “A Framework for Practical Multistream ASR,” in _Interspeech 2016, 17th Annual Conference of the International Speech Communication Association, San Francisco, CA, USA, September 8-12, 2016_ , N. Morgan, Ed. ISCA, 2016, pp. 3474–3478.
* [2] D. S. Park, W. Chan, Y. Zhang, C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le, “SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition,” in _Interspeech 2019, 20th Annual Conference of the International Speech Communication Association, Graz, Austria, 15-19 September 2019_. ISCA, 2019, pp. 2613–2617.
* [3] A. Défossez, N. Usunier, L. Bottou, and F. Bach, “Music source separation in the waveform domain,” _arXiv preprint arXiv:1911.13254_ , 2019.
* [4] E. Lleida, A. Ortega, A. Miguel, V. Bazán-Gil, C. Pérez, M. Gómez, and A. de Prada, “Albayzin 2018 evaluation: the iberspeech-rtve challenge on speech technologies for spanish broadcast media,” _Applied Sciences_ , vol. 9, no. 24, p. 5412, 2019.
* [5] D. Povey, G. Cheng, Y. Wang, K. Li, H. Xu, M. Yarmohammadi, and S. Khudanpur, “Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks,” in _Proceedings of Interspeech_ , 09 2018, pp. 3743–3747.
* [6] R. Collobert, C. Puhrsch, and G. Synnaeve, “Wav2letter: an end-to-end convnet-based speech recognition system,” _CoRR_ , vol. abs/1609.03193, 2016. [Online]. Available: http://arxiv.org/abs/1609.03193
* [7] E. Lleida, A. Ortega, A. Miguel, V. Bazán, C. Pérez, M. Zotano, and A. De Prada, “RTVE2018 Database Description,” 2018. [Online]. Available: http://catedrartve.unizar.es/reto2018/RTVE2018DB.pdf
* [8] E. Lleida, A. Ortega, A. Miguel, V. Bazán-Gil, C. Pérez, M. Góméz, and A. De Prada, “RTVE2020 Database Description,” 2020. [Online]. Available: http://catedrartve.unizar.es/reto2020/RTVE2020DB.pdf
* [9] D. Graff, S. Huang, I. Cartagena, K. Walker, and C. Cieri, “Fisher Spanish Speech,” _LDC2010S01. DVD. Philadelphia: Linguistic Data Consortium_ , 2010.
* [10] A. Canavan and G. Zipperlen, “CALLHOME Spanish Speech,” _LDC96S35. Web Download. Philadelphia: Linguistic Data Consortium_ , 1996.
* [11] Ângelo Mendonça, D. Jaquette, D. Graff, and D. DiPersio, “Spanish Gigaword Third Edition,” _LDC2011T12. Web Download. Philadelphia: Linguistic Data Consortium_ , 2011.
* [12] R. Ardila, M. Branson, K. Davis, M. Henretty, M. Kohler, J. Meyer, R. Morais, L. Saunders, F. M. Tyers, and G. Weber, “Common voice: A massively-multilingual speech corpus,” 2019.
* [13] V. Manohar, D. Povey, and S. Khudanpur, “JHU Kaldi system for Arabic MGB-3 ASR challenge using diarization, audio-transcript alignment and transfer learning,” in _2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)_ , vol. 2018-. IEEE, 2017, pp. 346–352.
* [14] M. Kocour, “Automatic Speech Recognition System Continually Improving Based on Subtitled Speech Data,” Diploma thesis, Brno University of Technology, Faculty of Information Technology, Brno, 2019. Technical supervisor: Dr. Ing. Jordi Luque Serrano; supervisor: Doc. Dr. Ing. Jan Černocký.
* [15] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, J. Silovsky, G. Stemmer, and K. Vesely, “The Kaldi Speech Recognition Toolkit,” in _IEEE 2011 Workshop on Automatic Speech Recognition and Understanding_. IEEE Signal Processing Society, Dec. 2011, IEEE Catalog No.: CFP11SRW-USB.
* [16] J. Duddington and R. Dunn, “eSpeak text to speech,” 2012. [Online]. Available: http://espeak.sourceforge.net
* [17] P. Ghahremani, B. BabaAli, D. Povey, K. Riedhammer, J. Trmal, and S. Khudanpur, “A pitch extraction algorithm tuned for automatic speech recognition,” in _Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on_. Florence, Italy: IEEE, May 2014.
* [18] O. Plchot, P. Matějka, O. Novotný, S. Cumani, A. D. Lozano, J. Slavíček, M. S. Diez, F. Grézl, O. Glembek, M. V. Kamsali, A. Silnova, L. Burget, L. Ondel, S. Kesiraju, and A. J. Rohdin, “Analysis of but-pt submission for nist lre 2017,” in _Proceedings of Odyssey 2018 The Speaker and Language Recognition Workshop_ , 2018, pp. 47–53.
* [19] A. Defossez, G. Synnaeve, and Y. Adi, “Real time speech enhancement in the waveform domain,” 2020.
* [20] A. Moreno, O. Gedge, H. Heuvel, H. Höge, S. Horbach, P. Martin, E. Pinto, A. Rincón, F. Senia, and R. Sukkar, “Speechdat across all america: Sala ii,” 2002.
* [21] K. Heafield, “Kenlm: Faster and smaller language model queries,” in _Proceedings of the Sixth Workshop on Statistical Machine Translation_ , ser. WMT ’11. USA: Association for Computational Linguistics, 2011, p. 187–197.
* [22] C. Kim and R. M. Stern, “Robust signal-to-noise ratio estimation based on waveform amplitude distribution analysis,” in _Ninth Annual Conference of the International Speech Communication Association_ , 2008.
* [23] J. G. Fiscus, “A post-processing system to yield reduced word error rates: Recognizer Output Voting Error Reduction (ROVER),” in _1997 IEEE Workshop on Automatic Speech Recognition and Understanding Proceedings_ , 1997, pp. 347–354.
* [24] P. Swietojanski, A. Ghoshal, and S. Renals, “Revisiting hybrid and GMM-HMM system combination techniques,” in _IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2013, Vancouver, BC, Canada, May 26-31, 2013_ , 2013.
# Speech Enhancement for Wake-Up-Word detection in Voice Assistants
###### Abstract
Keyword spotting, and in particular Wake-Up-Word (WUW) detection, is a very
important task for voice assistants. A very common issue of voice assistants
is that they are easily activated by background noise like music, TV or
background speech that accidentally triggers the device. In this paper, we
propose a Speech Enhancement (SE) model adapted to the task of WUW detection
that aims at increasing the recognition rate and reducing false alarms in
the presence of these types of noise. The SE model is a fully-convolutional
denoising auto-encoder at waveform level and is trained using log-Mel
spectrogram and waveform reconstruction losses together with the BCE loss of a
simple WUW classification network. A new database has been purposely prepared
for the task of recognizing the WUW in challenging conditions, containing
negative samples that are phonetically very similar to the keyword. The
database is extended with public databases and exhaustive data augmentation
to simulate different noises and environments. The results obtained by
concatenating the SE model with simple and state-of-the-art WUW detectors show
that the SE does not have a negative impact on the recognition rate in quiet
environments, while increasing the performance in the presence of noise,
especially when the SE and WUW detector are trained jointly end-to-end.
Index Terms: keyword spotting, speech enhancement, wake-up-word, deep
learning, convolutional neural network
## 1 Introduction
Voice interaction with devices is becoming ubiquitous. Most devices use a
trigger word detector as a mechanism to avoid excessive usage of resources:
the Speech-To-Text tool runs only when needed, at the start of a conversation.
It is key to start a conversation only when the user is addressing the device,
otherwise the user experience is noticeably degraded. Thus, the wake-up-word
detection system must be robust enough to avoid wake-ups with TV, music,
speech and sounds that do not contain the key phrase.
A common approach to reduce the impact of this type of noise on the system is
the adoption of speech enhancement algorithms. Speech enhancement is the task
of improving the perceptual intelligibility and quality of speech by removing
background noise [1]. Its main applications are in the field of mobile and
internet communications [2] and hearing aids [3], but SE has also been applied
successfully to automatic speech recognition systems [4, 5, 6].
Traditional SE methods involve a characterization step of the noise spectrum,
which is then used to try to reduce the noise in the regenerated speech
signal. Examples of these approaches are spectral subtraction [3], Wiener
filtering [7] and subspace algorithms [8]. One of the main drawbacks of the
classical approaches is that they are not very robust against non-stationary
noises or other types of noise that can mask speech, like background speech.
In recent years, Deep Learning approaches have been widely applied to SE at
the waveform level [9, 10] and spectral level [6, 11]. In the first case, a
common architecture falls within the encoder-decoder paradigm. In [12], the
authors proposed a fully convolutional generative adversarial network
structured as an auto-encoder with U-Net-like skip connections. Other recent
work [13] proposes a similar architecture at the waveform level that includes
an LSTM between the encoder and the decoder and is trained directly with a
regression loss combined with a spectrogram-domain loss.
Inspired by these recent models, we propose a similar SE auto-encoder
architecture in the time domain that is optimized not only by minimizing
waveform and Mel-spectrogram regression losses, but also by including a
task-dependent classification loss provided by a simple WUW classifier acting
as a Quality-Net [14]. This last term serves as a task-dependent objective
quality measure that trains the model to enhance important speech features
that might otherwise be degraded.
## 2 Speech Enhancement
Speech enhancement is interesting for trigger-phrase detection since it
removes noise that could falsely trigger the device while at the same time
improving speech quality and intelligibility for better detection. In this
work, we tackle the most common noisy environments where voice assistants are
used: TV, music, background conversations, office noise and living room
noise. Some of these types of background noise, such as TV and background
conversations, are the most likely to trigger the voice assistant and are
also the most challenging to remove.
Figure 1: End-to-end SE model at waveform level concatenated with a
classifier. Log-Mel Spectrogram and waveform reconstruction losses of the SE
model can be used together with the BCE loss of the classifier as a linear
combination to train the model.
### 2.1 Model
Our model has a fully-convolutional denoising auto-encoder architecture with
skip connections (Fig. 1), which has proven to be very effective in SE tasks
[12], working end-to-end at waveform level. In training, we input a noisy
audio $\bm{x}\in\mathbb{R}^{T}$, comprised of clean speech signal
$\bm{y}\in\mathbb{R}^{T}$ and background noise $\bm{n}\in\mathbb{R}^{T}$ so
that $\bm{x}=\lambda\bm{y}+(1-\lambda)\bm{n}$.
The encoder compresses the input signal and expands the number of channels. It
is composed of six convolutional blocks (ConvBlock1D), each consisting of a
convolutional layer, followed by an instance normalization and a rectified
linear unit (ReLU). Kernel size $K=4$ and stride $S=2$ are used, except in the
first layer where $K=7$ and $S=1$. The compressed signal goes through an
intermediate stage where the shape is preserved, consisting of three residual
blocks (ResBlock1D), each formed by two ConvBlock1D with $K=3$ and $S=1$ where
a skip connection is added from the input of the residual block to its output.
The last stage of the SE model is the decoder, where the original shape of the
raw audio is recovered at the output. Its architecture follows the inverse
structure of the encoder, where deconvolutional blocks (DeconvBlock1D) replace
the convolutional layers of the ConvBlock1D with transposed convolutional
layers. Skip connections from the encoder blocks to the decoder blocks are
also used to ensure low-level detail when reconstructing the waveform.
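The compression performed by the encoder can be checked with a small shape calculator. The paper does not state the padding values, so the ones below (3 for the first layer, 1 for the strided layers) are assumptions chosen to keep the arithmetic clean:

```python
def conv1d_out_len(n: int, kernel: int, stride: int, pad: int) -> int:
    # Standard 1-D convolution output-length formula.
    return (n + 2 * pad - kernel) // stride + 1

def encoder_out_len(n: int) -> int:
    # First ConvBlock1D: K=7, S=1 (assumed padding 3 preserves the length),
    # followed by five ConvBlock1D with K=4, S=2 (assumed padding 1 halves it).
    n = conv1d_out_len(n, kernel=7, stride=1, pad=3)
    for _ in range(5):
        n = conv1d_out_len(n, kernel=4, stride=2, pad=1)
    return n
```

Under these assumptions, one second of 16 kHz audio (16000 samples) is compressed to 500 time steps, a 32x reduction; the ResBlock1D stage preserves this shape and the mirrored decoder inverts it.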
We use a regression loss (L1) at the raw waveform level together with another
L1 loss over the log-Mel spectrogram, as proposed in [15], to reconstruct a
"cleaned" signal $\hat{\bm{y}}$ at the output. Finally, we include the
classification loss (BCE loss) when training the SE model jointly with the
classifier or when concatenating a pretrained classifier at its output.
Thus, we also try to optimize the SE model to the specific task of WUW
classification. Our final loss function is defined as a linear combination of
the three losses:
$L_{T}=\alpha L_{raw}(\bm{y},\hat{\bm{y}})+\beta
L_{spec}(S(\bm{y}),S(\hat{\bm{y}}))+\gamma L_{BCE}$ (1)
where $\alpha$, $\beta$ and $\gamma$ are hyperparameters weighting each loss
term and $S(\cdot)$ denotes the log-Mel spectrogram of the signal, which is
computed using 512 FFT bins, a window of 20 ms with a 10 ms shift, and 40
filters in the Mel scale.
## 3 Methodology
### 3.1 Databases
The database used for the experiments presented here consists of WUW samples
labeled as positive, and other non-WUW samples labeled as negative. Since the
chosen keyword is "OK Aura", which triggers Telefónica's home assistant,
Aura, positive samples are drawn from the company's in-house databases. Some
of the negative samples have also been recorded in these databases, but we
additionally include speech and other types of acoustic events from external
datasets, so the models gain robustness from additional data. All data used
is detailed in this section.
#### 3.1.1 OK Aura Database
In a first round, around 4300 WUW samples from 360 speakers were recorded,
resulting in 2.8 hours of audio. Office ambient noise was recorded as well,
with the aim of having samples for noise data augmentation. A second data
collection round was carried out in order to study and improve some sensitive
cases where WUW modules typically underperform. This dataset contains rich
metadata about positive and negative utterances, such as room distance,
speech accent, emotion, age or gender. Furthermore, the negative utterances
contain words phonetically similar to "OK Aura", since these are the most
ambiguous for a classifier to recognize. Detailed information about data
acquisition is given in the following subsection.
#### 3.1.2 Data acquisition
A web-based Jotform form was designed for data collection. The form is open
and still receiving input data from volunteers, so readers are also invited to
contribute to the dataset (https://form.jotform.com/201694606537056). Up to
the date of this work, 1096 samples from 80 speakers have been recorded,
amounting to 1.2 hours of audio. Volunteers are asked to pronounce various
scripted utterances at a close distance and also at two meters from the
device microphone. The similarity levels are the following:
1. Exact WUW, in an isolated manner: OK Aura.
2. Exact WUW, in a context: Perfecto, voy a mirar qué dan hoy. OK Aura. ("Perfect, I'll check what's on today. OK Aura.")
3. Contains "Aura": Hay un aura de paz y tranquilidad. ("There is an aura of peace and tranquility.")
4. Contains "OK": OK, a ver qué ponen en la tele. ("OK, let's see what's on TV.")
5. Contains word units similar to "Aura": Hola Laura. ("Hi Laura.")
6. Contains word units similar to "OK": Prefiero el hockey al baloncesto. ("I prefer hockey to basketball.")
7. Contains word units similar to "OK Aura": Porque Laura, ¿qué te pareció la película? ("Because Laura, what did you think of the movie?")
#### 3.1.3 External data
General negative examples were randomly chosen from the publicly available
Spanish Common Voice corpus [16], which currently holds over 300 hours of
validated audio. However, we keep a 10:1 ratio between negative and positive
samples, since this ratio was shown to yield good results in [17], thus
avoiding larger ratios that lead to increased computational times. We have
therefore used a Common Voice partition consisting of 55 h for training, 7 h
for development and 7 h for testing.
Background noises were selected from various public datasets according to
different use-case scenarios: living room background noise (HOME-LIVINGB)
from the QUT-NOISE database [18], TV audio from the IberSpeech-RTVE Challenge
[19], and music (https://freemusicarchive.org/) and conversations
(http://www.podcastsinspanish.org/) from free libraries.
#### 3.1.4 Data processing
All audio samples are monaural signals stored in Waveform Audio File Format
(WAV) with a sampling rate of 16 kHz. The collected speech data was processed
with a Speech Activity Detection (SAD) module producing timestamps where
speech occurs. For this purpose the pyannote.audio tool [20] was used,
trained on the AMI corpus [21]. This allowed us to use only the valid speech
segments of the audio we collected.
We mainly used two features to train the models: Mel-Frequency Cepstral
Coefficients (MFCCs) and the log-Mel spectrogram. The MFCCs were constructed
by first filtering the audio with a band-pass filter (20 Hz to 8 kHz) and
then extracting the first thirteen coefficients with a 100-millisecond window
size and a frame shift of 50 milliseconds. The procedure to extract the
log-Mel spectrogram ($S(\cdot)$) is detailed in Section 2.1.
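A self-contained numpy sketch of the log-Mel front-end with the stated parameters (512 FFT bins, 20 ms window, 10 ms shift, 40 Mel filters at 16 kHz) might look as follows; the Hann window and the log floor of 1e-10 are implementation assumptions not specified here:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr=16000, n_fft=512, n_mels=40):
    # Triangular filters evenly spaced on the Mel scale from 0 Hz to sr/2.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):
            fb[i, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel_spectrogram(x, sr=16000, n_fft=512, win_ms=20, hop_ms=10, n_mels=40):
    win = int(sr * win_ms / 1000)   # 320 samples at 16 kHz
    hop = int(sr * hop_ms / 1000)   # 160 samples at 16 kHz
    window = np.hanning(win)
    frames = [
        np.abs(np.fft.rfft(x[s:s + win] * window, n=n_fft)) ** 2
        for s in range(0, len(x) - win + 1, hop)
    ]
    power = np.array(frames).T                      # (n_fft//2+1, n_frames)
    return np.log(mel_filterbank(sr, n_fft, n_mels) @ power + 1e-10)
```

For one second of 16 kHz audio this yields a 40-band spectrogram with 99 frames.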
Train, development and test partitions are split ensuring that neither
speakers nor background noises are repeated between partitions, maintaining
approximately an 80-10-10 proportion, respectively. The total data, combining
internal and external datasets, consists of 50,737 non-WUW samples and 4,651
WUW samples.
### 3.2 Data augmentation
Several Room Impulse Responses (RIRs) were created based on the Image Source
Method (ISM) [22], for a room of dimensions $(L_{x},L_{y},L_{z})$ where $2\leq
L_{x}\leq 4.5$, $2\leq L_{y}\leq 5.5$, $2.5\leq L_{z}\leq 4$ meters, with
microphone and source randomly located at any $(x,y)$ point within a height of
$0.5\leq z\leq 2$ meters. All original TV and music recordings were convolved
with different RIRs to simulate the signal picked up by the device microphone
in the room.
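We do not reproduce the full ISM simulation here, but the final convolution step can be sketched in numpy, with an exponentially decaying noise burst as a crude stand-in for an ISM-simulated RIR (the 0.4 s decay time is an illustrative assumption):

```python
import numpy as np

def toy_rir(sr=16000, rt60=0.4, rng=None):
    # Exponentially decaying white noise: amplitude falls by ~60 dB over rt60
    # seconds, a crude stand-in for an ISM-simulated room impulse response.
    rng = rng or np.random.default_rng()
    n = int(sr * rt60)
    t = np.arange(n) / sr
    envelope = np.exp(-6.9 * t / rt60)   # ln(1e-3) ~= -6.9
    return envelope * rng.standard_normal(n)

def apply_rir(dry, rir):
    # Convolve the dry recording with the impulse response, keeping the
    # original length (the tail of the reverberation is truncated).
    return np.convolve(dry, rir)[: len(dry)]
```

Convolving a unit impulse simply reproduces the RIR, which makes the function easy to sanity-check.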
Adding background noise to clean speech signals is the main data augmentation
technique used in the training stage. We use background noises of different
scenarios (TV, music, background conversations, office noise and living room
noise) and a wide range of SNRs to improve the performance of the models
against noisy environments. In each epoch, we create different noisy samples
by randomly selecting a sample of background noise for each speech event and
combining them with a randomly chosen SNR in a specified range.
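A minimal numpy version of this augmentation step (assuming speech and noise clips of equal length) scales the noise to hit a target SNR drawn uniformly from the configured range:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    # Scale the noise so that 10*log10(P_speech / P_noise) equals snr_db.
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise

def augment(speech, noise_bank, snr_range=(-10.0, 50.0), rng=None):
    # Per epoch, pair each speech event with a random noise clip and a
    # random SNR inside the configured range.
    rng = rng or np.random.default_rng()
    noise = noise_bank[rng.integers(len(noise_bank))]
    snr_db = rng.uniform(*snr_range)
    return mix_at_snr(speech, noise, snr_db)
```

By construction, the achieved SNR of the mixed signal matches the requested one exactly.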
### 3.3 Wake-Up Word Detection Models
To assess the quality of the trained SE models, we use several trigger-word
detection classifiers and report the impact of the SE module on WUW
classification performance. The WUW classifiers used here are: LeNet, a
well-known standard classifier that is easy to optimize [23]; `Res15`,
`Res15-narrow` and `Res8`, based on a reimplementation by Tang and Lin [24] of
Sainath and Parada's Convolutional Neural Networks (CNNs) for keyword spotting
[25], using residual learning techniques with dilated convolutions [26];
`SGRU` and `SGRU2`, two Recurrent Neural Network (RNN) models based on the
open-source tool Mycroft Precise [27], a lightweight wake-up-word detection
tool implemented in TensorFlow, of which these are two larger variations that
we have implemented in PyTorch; and `CNN-FAT2019`, a CNN architecture adapted
from a kernel [28] in Kaggle's FAT 2019 competition [29], which has shown good
performance in tasks like audio tagging and the detection of gender, identity
and speech events from pulse signals [30].
Figure 2: Macro F1-score box plot for different SNR ranges. Classifiers trained with low noise ($[5,30]$ dB SNR).
Figure 3: Macro F1-score box plot for different SNR ranges. Classifiers trained with a very wide range of noise ($[-10,50]$ dB SNR).
### 3.4 Training
Speech signals and background noises are combined randomly following the
procedure explained in Section 3.2 within a given SNR range. The SE model is
trained to cover a wide SNR range of $[-10,50]$ dB, whereas WUW models are
trained to cover two scenarios: a classifier trained with the same SNR range
as the SE model, and a classifier less aware of noise, with a narrower SNR
range of $[5,30]$ dB. This way, it is possible to study the impact of the SE
model depending on whether the classifier has been trained with more or less
noise.
Data imbalance is addressed by balancing the classes in each batch using a
weighted sampler. We use a fixed window length of 1.5 seconds based on the
annotated timestamps for our collected database, and random cuts for the rest
of the Common Voice samples.
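The fixed 1.5-second windowing with random cuts can be sketched as follows (zero-padding clips shorter than the window is our assumption, not stated in the text):

```python
import numpy as np

def random_window(x, sr=16000, duration=1.5, rng=None):
    # Return a fixed-length window: a random crop of long clips,
    # zero-padding for clips shorter than the window (24000 samples).
    n = int(sr * duration)
    if len(x) <= n:
        return np.pad(x, (0, n - len(x)))
    rng = rng or np.random.default_rng()
    start = int(rng.integers(0, len(x) - n))
    return x[start:start + n]
```

Every training example thus has the same shape regardless of the original clip length.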
All the models are trained with early stopping based on the validation loss
with 10 epochs of patience. We use the Adam optimizer with a learning rate of
0.001 and a batch size of 50. Loss (1) allows training the models in multiple
ways, and we define different SE models and classifiers based on the loss
configuration used:
a) Classifier: we remove the auto-encoder from the architecture (Fig. 1) and train any of the classifiers using the noisy audio as input: $\alpha=\beta=0$ and $\gamma=1$.
b) SE model (SimpleSE): we remove the classifier from the architecture and optimize the auto-encoder based on the reconstruction losses only: $\alpha=\beta=1$ and $\gamma=0$.
c) SE model + frozen classifier (FrozenSE): the classifier's operations are dropped from the backward graph for gradient calculation, optimizing only the SE model for a given pretrained classifier (LeNet): $\alpha=\beta=\gamma=1$.
d) SE model + classifier (JointSE): the auto-encoder and LeNet are trained jointly using all three losses: $\alpha=\beta=\gamma=1$.
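The four configurations differ only in the weights of loss (1); a small helper makes the mapping explicit (in the FrozenSE case the classifier's parameters are additionally excluded from the backward graph, which this scalar helper does not capture):

```python
# (alpha, beta, gamma) weights of loss (1) for each training configuration.
LOSS_WEIGHTS = {
    "classifier": (0.0, 0.0, 1.0),  # a) BCE only, no auto-encoder
    "simple_se":  (1.0, 1.0, 0.0),  # b) reconstruction losses only
    "frozen_se":  (1.0, 1.0, 1.0),  # c) all losses, classifier frozen
    "joint_se":   (1.0, 1.0, 1.0),  # d) all losses, trained jointly
}

def total_loss(l_raw, l_spec, l_bce, mode):
    # Combine the three loss terms according to the chosen configuration.
    alpha, beta, gamma = LOSS_WEIGHTS[mode]
    return alpha * l_raw + beta * l_spec + gamma * l_bce
```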
### 3.5 Tests
All the models take as input windows of 1.5 seconds of audio, to ensure that
common WUW utterances fit fully within them, since the average "OK Aura" is
about 0.8 seconds long. Therefore, we perform an atomic test evaluating
whether a single window contains the WUW or not. Both negative and positive
samples are assigned a background noise sample with which they are combined
at a random SNR within certain ranges, as described in Section 3.4.
Given the output scores of the models, the threshold to decide whether a test
sample is a WUW is chosen as the one yielding the biggest difference between
the true and false positive rates, based on Youden's J statistic [31]. Once
the threshold is set, the macro F1-score is computed in order to balance
WUW/non-WUW proportions in the results. We average these scores across all
WUW classifiers described in Section 3.3, for every SNR range.
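A numpy sketch of this threshold selection sweeps every observed score as a candidate threshold and keeps the one maximizing $J = \mathrm{TPR} - \mathrm{FPR}$:

```python
import numpy as np

def youden_threshold(scores, labels):
    # Sort scores descending; after accepting the top-k scores as positive,
    # TPR and FPR are cumulative counts over the sorted labels.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-scores)
    s, y = scores[order], labels[order]
    tpr = np.cumsum(y) / y.sum()
    fpr = np.cumsum(1 - y) / (len(y) - y.sum())
    j = tpr - fpr                    # Youden's J statistic
    return s[np.argmax(j)]           # classify as WUW when score >= threshold
```

Samples scoring at or above the returned threshold are classified as WUW.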
## 4 Results
Figure 2 illustrates the improvement in WUW detection in noisy scenarios
obtained by concatenating our FrozenSE model with all WUW classifiers
described in Section 3.3 trained with low noise ($[5,30]$ dB SNR), as found in
simple voice assistant systems. Applying SE in quiet scenarios maintains
fairly good results, and improves them in lower SNR ranges.
If we train the classifiers with more data augmentation ($[-10,50]$ dB SNR),
the baseline classifier results for noisier scenarios improve. Results using
FrozenSE do not degrade, but the improvement in the ranges of severe noise is
not as large as in Figure 2 (see Figure 3).
In Section 3.4 we defined the parameters of the loss function (1) to train a
classifier (case a), and different approaches to train the SE model, either
standalone (b, c) or in conjunction with the classifier (d). Figure 4 shows
that JointSE performs better than all the other cases in almost every SNR
range. From 40 dB to 10 dB SNR, the results are very similar for the four
models. In contrast, in the noisiest ranges the classifier without an SE model
is the worst performer, followed by the SimpleSE case, where only the waveform
and spectral reconstruction losses are used. The FrozenSE case, which includes
the classification loss in the training stage, improves the results for the
wake-up-word detection task. However, the best results are obtained with the
JointSE case, where the SE model and LeNet are trained jointly using all three
losses.
Figure 4: Comparison of different training methods for the SE models and LeNet
classifier, in terms of the macro F1-score for different SNR ranges. All
models trained in the range of $[-10,50]$ dB SNR.
We compared the WUW detection results of our JointSE with other
state-of-the-art SE models (SEGAN [12] and Denoiser [13]), followed by a
classifier (data-augmented LeNet), in different noise scenarios. Table 1
shows that when the models are trained together with the task loss, the
results in our setup are better than with other more powerful but more
general SE models, since there is no mismatch between the SE model and the
classifier in the end-to-end system, and it is also more adapted to common
home noises. JointSE improves the detection over the no-SE case, especially
in scenarios with background conversations, loud office noise or loud TV
(see Table 2).
Table 1: Macro F1-score enhancing the noisy audios with SOTA SE models and
using a LeNet as a classifier.
SNR $[dB]$ | | No SE | SEGAN | Denoiser | JointSE
---|---|---|---|---|---
$[20,10]$ | Clean | $0.980$ | $0.964$ | $0.980$ | $0.990$
$[10,0]$ | Noisy | $0.969$ | $0.940$ | $0.955$ | $0.972$
$[0,-10]$ | Very noisy | $0.869$ | $0.798$ | $0.851$ | $0.902$
Table 2: Macro F1-score percentage difference between JointSE and LeNet
without any SE module, for different background noise types. Positive values
mean that the JointSE score is bigger than the single LeNet’s.
SNR $[dB]$ | | Music | TV | Office | Living Room | Conversations
---|---|---|---|---|---|---
$[20,10]$ | Clean | $1.0$ | $-0.9$ | $1.4$ | $0.4$ | $2.3$
$[10,0]$ | Noisy | $0.0$ | $-1.2$ | $0.8$ | $0.4$ | $1.9$
$[0,-10]$ | Very noisy | $0.5$ | $3.9$ | $11.2$ | $3.1$ | $3.8$
## 5 Conclusions
In this paper we proposed an SE model adapted to the task of WUW detection in
voice assistants for the home environment. The SE model is a
fully-convolutional denoising auto-encoder at the waveform level, trained
using log-Mel spectrogram and waveform regression losses together with a
task-dependent WUW classification loss. Results show that for clean and
slightly noisy conditions, SE in general does not bring a substantial
improvement over a classifier trained with proper data augmentation, but in
very noisy conditions SE does improve the performance, especially when the SE
model and WUW detector are trained jointly end-to-end.
## References
* [1] P. C. Loizou, _Speech enhancement: theory and practice_. CRC press, 2013.
# Discovery of a low-energy excess in cosmic-ray iron: evidence of past
supernova activity in the Local Bubble
M. J. Boschini (INFN, Milano-Bicocca, Milano, Italy; CINECA, Segrate, Milano, Italy)
S. Della Torre (INFN, Milano-Bicocca, Milano, Italy)
M. Gervasi (INFN, Milano-Bicocca, Milano, Italy; Physics Department, University of Milano-Bicocca, Milano, Italy)
D. Grandi (INFN, Milano-Bicocca, Milano, Italy; Physics Department, University of Milano-Bicocca, Milano, Italy)
G. Jóhannesson (Science Institute, University of Iceland, Dunhaga 3, IS-107 Reykjavik, Iceland; NORDITA, Roslagstullsbacken 23, 106 91 Stockholm, Sweden)
G. La Vacca (INFN, Milano-Bicocca, Milano, Italy; Physics Department, University of Milano-Bicocca, Milano, Italy)
N. Masi (INFN, Bologna, Italy; Physics Department, University of Bologna, Bologna, Italy)
I. V. Moskalenko (Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305; Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, CA 94305)
S. Pensotti (INFN, Milano-Bicocca, Milano, Italy; Physics Department, University of Milano-Bicocca, Milano, Italy)
T. A. Porter (Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305; Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, CA 94305)
L. Quadrani (INFN, Bologna, Italy; Physics Department, University of Bologna, Bologna, Italy)
P. G. Rancoita (INFN, Milano-Bicocca, Milano, Italy)
D. Rozza (INFN, Milano-Bicocca, Milano, Italy; Physics Department, University of Milano-Bicocca, Milano, Italy)
M. Tacconi (INFN, Milano-Bicocca, Milano, Italy; Physics Department, University of Milano-Bicocca, Milano, Italy)
###### Abstract
Since its launch, the Alpha Magnetic Spectrometer – 02 (AMS-02) has delivered
outstanding quality measurements of the spectra of cosmic-ray (CR) species,
$\bar{p}$, $e^{\pm}$, and nuclei, 1H– 8O, 10Ne, 12Mg, 14Si, which resulted in
a number of breakthroughs. One of the latest long awaited surprises is the
spectrum of 26Fe just published by AMS-02. Because of the large fragmentation
cross section and large ionization energy losses, most of CR iron at low
energies is local, and may harbor some features associated with relatively
recent supernova (SN) activity in the solar neighborhood. Our analysis of the
new iron spectrum together with Voyager 1 and ACE-CRIS data reveals an
unexpected bump in the iron spectrum and in the Fe/He, Fe/O, and Fe/Si ratios
at 1–2 GV, while a similar feature in the spectra of He, O, Si, and in their
ratios is absent, hinting at a local source of low-energy CRs. The excess fits
well with recent discoveries of radioactive 60Fe deposits in terrestrial and
lunar samples, and in CRs. We provide an updated local interstellar spectrum
(LIS) of iron in the energy range from 1 MeV nucleon-1 to $\sim$10 TeV
nucleon-1. Our calculations employ the GalProp–HelMod framework, which has
proved to be a reliable tool in deriving the LIS of CR $\bar{p}$, $e^{-}$, and nuclei
$Z\leq 28$.
cosmic rays — diffusion — elementary particles — interplanetary medium — ISM:
general — Sun: heliosphere
## 1 Introduction
A new era of precise astrophysical measurements began about a decade ago
with the launch of the Payload for Antimatter Matter Exploration and Light-
nuclei Astrophysics (PAMELA, Picozza et al., 2007; Adriani et al., 2014). It
was followed by a continuing series of launches of unique instrumentation,
such as the Fermi Large Area Telescope (_Fermi_ -LAT, Atwood et al., 2009),
the Alpha Magnetic Spectrometer – 02 (AMS-02, Aguilar et al., 2013), NUCLEON
experiment (Grebenyuk et al., 2019a, b), CALorimetric Electron Telescope –
(CALET, Adriani et al., 2019; Torii et al., 2019; Maestro et al., 2020), DArk
Matter Particle Explorer mission – (DAMPE, Chang et al., 2017; DAMPE
Collaboration et al., 2017; An et al., 2019), and Cosmic-Ray Energetics and
Mass investigation – (ISS-CREAM, Seo et al., 2014). These experiments are
operating in the high-energy and very-high-energy domains.
Meanwhile, understanding the origin of cosmic rays (CRs) and our interstellar
environment is impossible without connecting high energy measurements with
data from low-energy experiments, such as the Cosmic Ray Isotope Spectrometer
onboard the Advanced Composition Explorer (ACE-CRIS, Israel et al., 2018;
Binns et al., 2016) operating at the L1 Lagrange point for more than two
decades, and the Voyager 1, 2 spacecraft (Stone et al., 1977), the grandparents
of the current instrumentation. The latter are providing unique data on the
elemental spectra and composition at the interstellar reaches of the solar
system (Stone et al., 2013; Cummings et al., 2016; Stone et al., 2019),
currently at 149 AU and 124 AU from the Sun, respectively.
These instruments and their experimental teams deliver CR data with unmatched
precision and continue to surprise us with new-found features in an energy
range that was deemed well-studied. One of the latest long-awaited
surprises is the spectrum of iron published by AMS-02 (Aguilar et al., 2021).
This is an expected surprise because CR iron is quite different from lighter
species whose spectra have been recently published: 1H– 8O, 10Ne, 12Mg, 14Si
(Aguilar et al., 2014, 2015a, 2015b, 2016a, 2016b, 2017, 2018a, 2018b, 2019a,
2019b, 2020). Because of the large fragmentation cross section of iron and
large ionization energy losses at low energies, most of low-energy CR iron is
local, coming from relatively recent supernova (SN) activity in the solar
neighborhood, apart from lighter species that may come from more distant
sources. The injection of iron is quite different from the injection of
lighter species—it is injected into the interstellar space only during the SN
explosion, while lighter elements are abundant in pre-supernova winds of
massive stars. Besides, it has a lower charge-to-mass ratio $(Z/A)_{\rm
Fe}\approx 0.46$ compared to $(Z/A)_{\rm He,C,O,Si}\approx 1/2$ for lighter
species and $(Z/A)_{p}=1$ for protons, and thus its acceleration in SN shock
should be somewhat different from other nuclei. It is, therefore, natural to
expect that the spectrum of CR iron should be somewhat different from other
species, as we have emphasized numerous times (e.g., Boschini et al., 2020a).
In a recent paper (Boschini et al., 2020a) we put forward a set of predicted
spectra for all CR nuclei 1H– 28Ni, including those that are not yet published
by AMS-02. Our predicted spectra for those species were based on Voyager 1
(Cummings et al., 2016) and ACE-CRIS data at low energies, while at high
energies we used ATIC-2 (Panov et al., 2009), CREAM (Ahn et al., 2008),
NUCLEON (Grebenyuk et al., 2019a, b), CALET (Adriani et al., 2019), and DAMPE
(An et al., 2019). In the intermediate range we used the HEAO-3-C2 data
(Engelmann et al., 1990) that correspond to the aerogel counter of HEAO-3-C2
experiment. Our comparison with the published AMS-02 spectra of 1H– 8O, 10Ne,
12Mg, 14Si has shown that the HEAO-3-C2 data in the middle range from
2.65–10.6 GeV nucleon-1, the so-called “plateau” corresponding to the aerogel
counter, agree with the AMS-02 data quite well while at lower and higher
energies the systematic deviations are large (see a detailed discussion in
Boschini et al., 2020a). However, the newly measured precise spectrum of iron
just published by AMS-02 (Aguilar et al., 2021) is indeed harboring a surprise
that provides an exciting opportunity to gain insight into the origin and
evolution of the local interstellar medium.
In this paper we discuss the peculiarities of the measured iron spectrum.
Our calculations and interpretation employ the GalProp (available from
http://galprop.stanford.edu) – HelMod (http://www.helmod.org/) framework, which
has proved to be a reliable tool in deriving the LIS of CR species (Boschini et
al., 2019, 2020a).
## 2 Calculations
Table 1: Best-fit propagation parameters for I- and P-scenarios

Parameter | Units | Best Value | Error
---|---|---|---
$z_{h}$ | kpc | 4.0 | 0.6
$D_{0}(R=4\ {\rm GV})$ | cm2 s-1 | $4.3\times 10^{28}$ | 0.7
$\delta$ (a) | | 0.415 | 0.025
$V_{\rm Alf}$ | km s-1 | 30 | 3
$dV_{\rm conv}/dz$ | km s-1 kpc-1 | 9.8 | 0.8

(a) The P-scenario assumes a break in the diffusion coefficient with index $\delta_{1}=\delta$ below the break and index $\delta_{2}=0.15\pm 0.03$ above the break at $R=370\pm 25$ GV (for details see Boschini et al., 2020b).
Table 2: The injection spectrum of iron

| Nucleus | $\gamma_{0}$ | $R_{0}$ (GV) | $s_{0}$ | $\gamma^{\prime}$ | $R^{\prime}$ (GV) | $s^{\prime}$ | $\gamma_{1}$ | $R_{1}$ (GV) | $s_{1}$ | $\gamma_{2}$ | $R_{2}$ (GV) | $s_{2}$ | $\gamma_{3}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Old | 26Fe | 0.27 | 1.04 | 0.18 | $\cdots$ | $\cdots$ | $\cdots$ | 1.99 | 7.00 | 0.20 | 2.51 | 355 | 0.17 | 2.19
New | 26Fe | 0.95 | 2.00 | 0.20 | 3.62 | 2.94 | 0.10 | 2.05 | 17.0 | 0.18 | 2.452 | 355 | 0.17 | 2.23
Note. — The primary abundance of 56Fe is increased from 515 to 577 (c.f.
Tables 2, 3 in Boschini et al., 2020a), with relative abundances of isotopes
of Fe after propagation tuned to ACE-CRIS data.
In this work we are using the same CR propagation model with distributed
reacceleration and convection that was used in our previous analyses (for more
details see Boschini et al., 2017, 2018a, 2018b, 2020a, 2020b). The latest
versions of the GalProp code for Galactic propagation of CRs and the HelMod
code for heliospheric propagation are described in detail in a recent paper by
Boschini et al. (2020a), see also references therein.
The values of propagation parameters along with their confidence limits are
derived from the best available CR data using the Markov Chain Monte Carlo
(MCMC) routine. Five main propagation parameters, that affect the overall
shape of CR spectra, were left free in the scan using GalProp running in the
2D mode: the Galactic halo half-width $z_{h}$, the normalization of the
diffusion coefficient $D_{0}$ at the reference rigidity $R=4$ GV and the index
of its rigidity dependence $\delta$, the Alfvén velocity $V_{\rm Alf}$, and
the gradient of the convection velocity $dV_{\rm conv}/dz$ ($V_{\rm conv}=0$
in the plane, $z=0$). Their best-fit values tuned to the AMS-02 data are
listed in Table 1 and are the same as obtained in Boschini et al. (2020a). The
radial size of the Galaxy does not significantly affect the values of
propagation parameters and was set to 20 kpc. Besides, we introduced a factor
$\beta^{\eta}$ in the diffusion coefficient, where $\beta=v/c$, and $\eta$ was
left free. The best fit value of $\eta=0.70$ improves the agreement at low
energies, and slightly affects the choice of injection indices ${\gamma}_{0}$
and ${\gamma}_{1}$. A detailed discussion of the injection (I) and propagation
(P) scenarios of the 350 GV break can be found in Vladimirov et al. (2012) and
Boschini et al. (2020a).
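As a concrete illustration, the rigidity dependence of the diffusion coefficient can be evaluated from the Table 1 best-fit values. This is a minimal sketch: the plain single power-law form below the $\sim$370 GV break is an assumption (the full GalProp parameterization differs in detail across the I- and P-scenarios), and the helper names are our own.

```python
import math

# Table 1 best-fit values from the paper
D0 = 4.3e28      # cm^2 s^-1, normalization at the reference rigidity
R_REF = 4.0      # GV
DELTA = 0.415    # rigidity index below the ~370 GV break
ETA = 0.70       # best-fit exponent of the beta^eta factor

M_U = 0.931      # GeV, mass per nucleon (atomic mass unit approximation)

def beta(R, Z, A):
    """v/c for a nucleus of charge Z and mass number A at rigidity R (GV)."""
    p = Z * R                               # total momentum in GeV/c
    E = math.sqrt(p**2 + (A * M_U)**2)      # total energy in GeV
    return p / E

def diffusion_coefficient(R, Z=26, A=56):
    """D(R) = beta^eta * D0 * (R/R_REF)^delta (assumed single power law)."""
    return beta(R, Z, A)**ETA * D0 * (R / R_REF)**DELTA

# Iron at 2 GV vs 10 GV: diffusion is markedly slower at low rigidity,
# which is part of why low-energy CR iron stays local
print(diffusion_coefficient(2.0))
print(diffusion_coefficient(10.0))
```

The $\beta^{\eta}$ factor mainly matters below a few GV, where $\beta$ for iron drops noticeably below unity; at high rigidity the expression reduces to the familiar $D_0 (R/4\,{\rm GV})^{\delta}$.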
The corresponding B/C ratio also remains the same (see Fig. 4 in Boschini et
al., 2020a), and compares well with all available measurements: Voyager 1
(Cummings et al., 2016), ACE-CRIS
(http://www.srl.caltech.edu/ACE/ASC/level2/cris_l2desc.html), AMS-02
(Aguilar et al., 2018a), ATIC-2 (Panov et al., 2009), CREAM (Ahn et al., 2008,
2009), and NUCLEON (Grebenyuk et al., 2019b).
In this calculation we are tuning only the injection spectrum of iron to match
the new data from AMS-02 at high energies and ACE-CRIS and Voyager 1 at low
energies; the old and new injection parameters are shown in Table 2. Compared
to the old injection spectrum we have to add another break at low rigidities
at $R^{\prime}=2.94$ GV and a very steep spectrum with index
$\gamma^{\prime}=3.62$ between $R_{0}=2.00$ GV and $R^{\prime}=2.94$ GV. The
obtained agreement with data is good, but the new Fe spectrum is quite
different from what was expected based on the fit that employs the HEAO-3-C2
data (Boschini et al., 2020a).
## 3 Results
Figure 1: A fit to the new spectrum of iron by AMS-02 (Aguilar et al., 2021).
In the top panel, only AMS-02 data are shown. The middle panel shows the
quality of the fit, the relative difference between the calculations and the
data set. The bottom panel shows two fits as compared to the data sets by
HEAO-3-C2 (Engelmann et al., 1990) or by AMS-02 (Aguilar et al., 2021) in the
intermediate range, while at low and very-high energies we use Voyager 1
(Cummings et al., 2016), ACE-CRIS, CREAM (Ahn et al., 2008), and NUCLEON
(Grebenyuk et al., 2019a, b) data. The gray lines show the LIS tuned to AMS-02
data (solid line) and the old LIS tuned to the “plateau” middle range of the
HEAO-3-C2 data (dashed line). The red line shows the LIS modulated
appropriately to the solar activity during the AMS-02 data taking, while solid
blue line is the LIS derived from the AMS-02 data and modulated appropriately
to the HEAO-3 flight. The two green lines correspond to two ACE-CRIS data
taking periods, where the modulated spectra are almost independent of which
data are used at high energies, AMS-02 or HEAO-3-C2; the dashed and solid lines
overlap.

Figure 2: The LIS of He, O, Si, and Fe compared to Voyager
1 data (Cummings et al., 2016). The solid black line shows the updated Fe LIS
tuned to AMS-02 data, while dashed line shows the previous LIS (Boschini et
al., 2020a) based on the “plateau” middle range of the HEAO-3-C2 data.
Figure 3: The calculated ratios of primary species, Fe/He, Fe/O, Fe/Si, are
compared with Voyager 1 (Cummings et al., 2016), ACE-CRIS, and AMS-02 data
(Aguilar et al., 2017, 2021, 2020). Also shown are HEAO-3-C2 data (Engelmann
et al., 1990). The gray line shows the LIS ratios tuned to AMS-02 data, and
the red line shows the modulated LIS ratios. The dashed gray line shows the
LIS ratios tuned to HEAO-3-C2 data (Boschini et al., 2020a) based on the
“plateau” middle range of the HEAO-3-C2 data. For the Fe/He ratio, only Fe was
tuned to the HEAO-3-C2 data. In all plots, the Voyager 1, ACE-CRIS, and
HEAO-3-C2 data are converted from kinetic energy per nucleon to rigidity
assuming $A/Z=2$. The shaded area shows the ratios modulated to the
appropriate level (ACE-CRIS, HEAO-3-C2) with the width corresponding to
1$\sigma$ error.
Figure 4: The calculated ratios of primary species, He/O and Si/O, are
compared with Voyager 1 (Cummings et al., 2016), ACE-CRIS, HEAO-3-C2, and
AMS-02 data (Aguilar et al., 2015b, 2017, 2020). The Voyager 1, ACE-CRIS, and
HEAO-3-C2 data are converted from kinetic energy per nucleon to rigidity
assuming $A/Z=2$.
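The kinetic-energy-per-nucleon-to-rigidity conversion stated in the figure captions can be sketched numerically. The $A/Z=2$ value is the assumption from the captions; the helper name and the rounded nucleon mass are our own choices.

```python
import math

M_U = 0.931  # GeV, mass per nucleon (atomic mass unit approximation)

def rigidity_from_ekn(ekn, a_over_z=2.0):
    """Convert kinetic energy per nucleon (GeV/n) to rigidity (GV).

    Per-nucleon momentum is p = sqrt(Ek^2 + 2*Ek*m); total rigidity
    R = p*A/Z scales with the charge-to-mass ratio, here A/Z = 2.
    """
    return a_over_z * math.sqrt(ekn * (ekn + 2.0 * M_U))

# The 1-2 GV bump region corresponds to roughly these kinetic energies:
print(rigidity_from_ekn(0.13))  # ~1 GV
print(rigidity_from_ekn(0.4))   # ~2 GV
```

This makes clear why the low-energy Voyager 1 and ACE-CRIS points, measured in kinetic energy per nucleon, land in the 1–2 GV window once converted.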
Fig. 1 shows the fit to the iron spectrum by AMS-02 (Aguilar et al., 2021). In
the top panel, only AMS-02 data are shown. The quality of the fit is good with
the residuals being $\lesssim$5% in the whole range (middle panel). The bottom
panel shows two fits as compared to the data sets by HEAO-3-C2 (Engelmann et
al., 1990) or by AMS-02 (Aguilar et al., 2021) in the intermediate range,
while at low and very-high energies we use Voyager 1 (Cummings et al., 2016),
ACE-CRIS, CREAM (Ahn et al., 2008), and NUCLEON (Grebenyuk et al., 2019a, b)
data. The gray lines show the LIS tuned to AMS-02 data (solid line) and the
previous LIS tuned to the “plateau” middle range of the HEAO-3-C2 data (dashed
line), while colored lines show the LIS modulated appropriately to the periods
of ACE-CRIS, AMS-02, and HEAO-3 data taking. The solid blue line in Fig. 1
shows where the correct spectrum of HEAO-3-C2 should be if we assume the iron
LIS based on AMS-02 data. One can see that matching the Voyager 1 and
modulated ACE-CRIS data, on one hand, and modulated AMS-02 data, on the other
hand, requires a sharp spectral steepening in the transition region of the LIS
between $\sim$2 GV and $\sim$3 GV and a similar feature in the injection
spectrum as well (see Table 2).
The difference in the spectral shapes of He, O, Si, on one hand, and Fe, on the
other, is also obvious (Fig. 2). This difference becomes more apparent
when one looks at the ratios. A comparison of the ratios of iron to lighter
primary species, Fe/He, Fe/O, Fe/Si, with AMS-02 data shows excellent
agreement in the rigidity range from 2 GV to 2 TV (Fig. 3). If the ratios are
extended to lower energies where they compare with Voyager 1 and ACE-CRIS
data, one can see a clear bump at $\sim$1–2 GV. The middle plot of the ratio
Fe/O, where we use Voyager 1, ACE-CRIS, and AMS-02 data plotted vs. kinetic
energy per nucleon without converting to rigidity, proves that fitting all
data simultaneously is impossible without a bump. Therefore, the bumps appear
in every ratio, Fe/He, Fe/O, Fe/Si, while a similar bump in the He/O and Si/O
ratios is absent, see Fig. 4.
Interestingly, the bump appears in the energy gap between ACE-CRIS and AMS-02,
and is the result of tuning the Fe spectrum to all three data sets, Voyager 1,
ACE-CRIS, AMS-02, each with an appropriate heliospheric modulation level. The
anomaly would go unnoticed if the data of each instrument were taken separately.
## 4 Discussion
The first-ever accurate measurement of the Fe spectrum in the rigidity range
from 2 GV to 2 TV (Aguilar et al., 2021) is a major milestone. For the first time it
allows us to analyze the spectra of primary species, He, O, Fe, whose atomic
and mass numbers, fragmentation cross sections, and ionization energy losses
are considerably different. The appearance of the bump in the Fe/He, Fe/O, and
Fe/Si ratios, while lacking a similar feature in the He/O and Si/O ratios,
implies a local source of low-energy CRs. Because of the fast ionization
losses of iron nuclei in the ISM, CR iron from distant sources is suppressed
and this gives the local source such a prominence. In the case of lighter CR
nuclei, whose ionization losses are not that fast, the local component is
mixed up with old Galactic CRs coming from distant sources and thus a
prominent excess does not form.
The likely sources of the excess CR iron are old SN remnants. In fact, the
evidence of past SN activity in the local ISM is abundant. The Local
Bubble is a low density region of the size of $\sim$200 pc around the Sun
filled with hot H i gas that itself was formed in a series of SN explosions
(e.g., Sfeir et al., 1999; Frisch et al., 2011). There are multiple reports of
an excess of radioactive 60Fe found in the deep ocean sediments (Knie et al.,
1999, 2004; Ludwig et al., 2016; Wallner et al., 2016), in lunar regolith
samples (Cook et al., 2009; Fimiani et al., 2012, 2014), and more recently in
the Antarctic snow (Koll et al., 2019). Such deposits can be made by SN
explosions in the solar neighborhood. Recent observation of 60Fe (the half-
life $\tau_{1/2}\sim 2.6$ Myr, Rugel et al., 2009) in CRs by ACE-CRIS
spacecraft (Binns et al., 2016) implies that the low-energy CRs from the most
recent SN are still around.
It is hard to establish the number of SN events and their exact timing, but
it is clear that there could be several events during the last $\sim$10 Myr at
distances of up to 100 parsecs (Wallner et al., 2016). The most recent SN
events in the solar neighborhood were 1.5–3.2 Myr and 6.5–8.7 Myr ago (Fry et
al., 2015; Wallner et al., 2016). The measured spread (width) of the signal is
$\sim$1.5 Myr (Fry et al., 2015), which is too long for a single event. Each
of these events could, in principle, consist of several consecutive SN
explosions separated by some 100 kyr, as an estimated time spread for a single
SN, located at $\sim$100 pc from the Earth, is just $\sim$100–400 kyr and the
travel time is $\sim$200 kyr. A detailed modeling by Breitschwerdt et al.
(2016) indicates two SNe at distances of 90–100 pc, the closest having occurred
2.3 Myr ago and the second-closest about 1.5 Myr ago, both with stellar
masses of $\sim$9$M_{\odot}$.
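A quick decay estimate, using the half-life quoted above, shows why 60Fe from these recent events is still detectable while contributions from much older events have largely decayed away. The function name is our own illustrative choice.

```python
T_HALF = 2.6  # Myr, 60Fe half-life (Rugel et al. 2009)

def fe60_surviving_fraction(t_myr):
    """Fraction of 60Fe remaining a time t_myr after ejection."""
    return 0.5 ** (t_myr / T_HALF)

# Event ages discussed in the text:
print(fe60_surviving_fraction(2.3))  # closest event, ~2.3 Myr ago: >50% left
print(fe60_surviving_fraction(8.7))  # oldest end of the 6.5-8.7 Myr event: ~10%
```

Even after 8.7 Myr, roughly a tenth of the original 60Fe survives, consistent with deposits from both event windows being measurable in sediments and CRs.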
The new-found excess in the Fe spectrum below $\sim$2 GV therefore falls in
line with the other 60Fe excesses. However, this is the first time that an
excess has been found in the spectrum of an element that is dominated by stable
species, most notably 56Fe. Such an excess has implications for the spectra of
fragmentation products of iron. However, it is too early to make predictions
for them, as we have to await confirmation of similar features in the
spectra of sub-Fe species.
## 5 HEAO-3-C2 data
In the previous paper we relied on the HEAO-3-C2 data (Engelmann et al., 1990)
in the middle range, a so-called “plateau” region, to substitute the missing
AMS-02 data. While it worked well for light and middle-range nuclei, it does
not work as well for Fe, as we learned after the AMS-02 data were published
(Aguilar et al., 2021). Surprisingly, all HEAO-3-C2 data points for Fe overlap
with AMS-02 data (Fig. 1), but the solar modulation levels during the HEAO-3
flight and AMS-02 data taking are dramatically different. The former was
launched on September 20, 1979, operated until May 29, 1981, and took data
during solar maximum conditions, while the AMS-02 data were taken from
2011–2019, i.e., through almost the entire Cycle 24, during which the solar
activity was moderate even during the polarity reversal of the solar magnetic
field. Therefore, the LISs derived from these two datasets are also quite
different.
The solid blue line in Fig. 1 shows where the correct spectrum of HEAO-3-C2
should be if we assume the iron LIS based on AMS-02 data. Here we discuss a
possible reason for such a discrepancy.
The AMS-02 experiment uses modern technology, its response was thoroughly
simulated and tested, it also has several independent systems that allow for
data cross checks. Therefore, it is rather unlikely to have a serious flaw. On
the other hand, the HEAO-3-C2 experiment was built in the 1970s, at the same
time as Voyager 1, 2. There were no prelaunch detector response
simulations, many systems went untested, and the calibrations were done after
the launch. In particular, the rigidity calibration was done using the
geomagnetic field, which was not known in detail at that time
(for a discussion see Boschini et al., 2020a). Meanwhile, our analysis has
shown that the middle range of the HEAO-3-C2 data, a so-called “plateau,”
agrees well with AMS-02 data for light and medium nuclei. There must be some
reason why the situation with iron is different.
While it is difficult to point to a single specific reason for such a
difference, we think that an incorrect evaluation of the probability of
nuclear fragmentation is the most likely cause. In the data analysis, the charge
changing cross section was assumed to be energy-independent (Engelmann et al.,
1990),
$\sigma=57.3(A_{t}^{1/3}+A_{p}^{1/3}-0.83)^{2},$ (1)
with parameters tuned to the measurements of charge changing reactions of 12C,
16O, and 56Fe beams with He target in the energy range from 540 MeV nucleon-1
to 1600 MeV nucleon-1 (Ferrando et al., 1988). Here $A_{p}$ and $A_{t}$ are
the mass numbers of the projectile and target nuclei, correspondingly.
Meanwhile, the average grammage for particles passing through the instrument is
16 g cm-2 and the “mean atomic number is $Z_{t}=30$.” The mean atomic number of
the material $Z_{t}$ is likely a typo; it would perhaps be more correct to
assume that it is the mean mass number $A_{t}=30$ rather than the atomic number.
In any case, the instrumental grammage is significant and the loss of heavy CR
nuclei due to the fragmentation was considerable. Therefore, the accuracy of a
charge changing cross section is crucial for heavy species while it is less
essential for light nuclei. The adopted scaling formula Eq. (1) was tuned to a
rather light He target and extrapolated to an $A_{t}=30$ target. We believe it
is this correction for nuclear fragmentation for Fe that makes it different
from lighter species and is the most likely reason for the disagreement with
the AMS-02 data in the HEAO-3-C2 “plateau” energy range 2.65–10.6 GeV
nucleon-1. If our conclusion is correct, we should see a gradual increase in
the discrepancy between the HEAO-3-C2 “plateau” data and AMS-02 data as the
mass number increases from 14Si to 26Fe even though the agreement is good for
all species from 4Be to 14Si.
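To see how much the instrumental grammage matters, one can estimate the fraction of nuclei traversing the instrument without a charge-changing interaction, using Eq. (1). The mb units for Eq. (1) and the simple exponential-attenuation treatment are our assumptions for this back-of-the-envelope sketch.

```python
import math

N_A = 6.022e23          # Avogadro's number, mol^-1
A_T = 30                # assumed mean mass number of the instrument material
GRAMMAGE = 16.0         # g cm^-2, average traversed by particles (from the paper)

def sigma_cc_mb(a_p, a_t=A_T):
    """Energy-independent charge-changing cross section of Eq. (1), in mb (assumed)."""
    return 57.3 * (a_t**(1/3) + a_p**(1/3) - 0.83)**2

def survival_fraction(a_p):
    """Fraction of projectiles crossing the instrument without fragmenting."""
    sigma_cm2 = sigma_cc_mb(a_p) * 1e-27   # mb -> cm^2
    lam = A_T / (N_A * sigma_cm2)          # interaction length, g cm^-2
    return math.exp(-GRAMMAGE / lam)

print(survival_fraction(56))  # 56Fe: roughly half the nuclei fragment
print(survival_fraction(28))  # 28Si: a noticeably smaller loss
```

Under these assumptions roughly half of the incident Fe nuclei fragment inside the instrument, versus about 40% for Si, so the fragmentation correction is indeed far more critical for iron, and any error in it propagates directly into the reported flux.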
## 6 Conclusion
Using the combined data of AMS-02 (Aguilar et al., 2021), ACE-CRIS, and Voyager
1 (Cummings et al., 2016), we found an unexpected bump in the spectrum of CR
iron and the Fe/He, Fe/O, and Fe/Si ratios, which is revealed as a sharp
steepening in the range 2–3 GV necessary to connect ACE-CRIS and AMS-02 data.
Meanwhile, a similar feature in the spectra of He, O, and Si and their ratios
is absent. Taken independently, the data of each instrument do not show any
feature.
The new-found excess in the Fe spectrum below $\sim$2 GV therefore falls in
line with the other 60Fe excesses, and is likely connected to the past SN
activity in the Local Bubble. However, this is the first time that an excess
has been found in the spectrum of an element that is dominated by stable species,
most notably 56Fe. Such an excess has implications for the spectra of
fragmentation products of iron. It is also important to measure the spectra of
other heavy CR species, specifically in the middle rigidity domain
$\lesssim$10 GV, to see if a similar spectral feature is present.
We also discuss the reason for the discrepancy between AMS-02 and
HEAO-3-C2 data. We argue that the likely reason is the incorrect estimate of
fragmentation of CR iron nuclei inside the instrument. Given a significant
instrumental grammage $\sim$16 g cm-2 and its high mean mass number, the loss
of heavy nuclei due to the fragmentation was considerable. Therefore, the
accuracy of a charge changing cross section is crucial for heavy species while
it is not so essential for light nuclei. If our conclusion is correct, we
should see a gradual increase in the discrepancy between the HEAO-3-C2
“plateau” data and AMS-02 data as the mass number increases from 14Si to 26Fe
even though the agreement is good for all species from 4Be to 14Si.
Special thanks to Pavol Bobik, Giuliano Boella, Karel Kudela, Marian Putis,
and Mario Zannoni for their continuous support of the HelMod project and many
useful suggestions. This work is supported by ASI (Agenzia Spaziale Italiana)
through a contract ASI-INFN No. 2019-19-HH.0 and by ESA (European Space
Agency) through a contract 4000116146/16/NL/HK. Igor V. Moskalenko and Troy A.
Porter acknowledge support from NASA Grant No. NNX17AB48G. We thank the ACE
CRIS instrument team and the ACE Science Center for providing the ACE data.
This research has made use of the SSDC Cosmic rays database (Di Felice et al.,
2017) and LPSC Database of Charged Cosmic Rays (Maurin et al., 2014).
## References
* Adriani et al. (2014) Adriani, O., Barbarino, G. C., Bazilevskaya, G. A., et al. 2014, Phys. Rep., 544, 323, doi: 10.1016/j.physrep.2014.06.003
* Adriani et al. (2019) Adriani, O., Akaike, Y., Asano, K., et al. 2019, Phys. Rev. Lett., 122, 181102, doi: 10.1103/PhysRevLett.122.181102
* Aguilar et al. (2013) Aguilar, M., Alberti, G., Alpat, B., et al. 2013, Phys. Rev. Lett., 110, 141102, doi: 10.1103/PhysRevLett.110.141102
* Aguilar et al. (2014) Aguilar, M., Aisa, D., Alpat, B., et al. 2014, Phys. Rev. Lett., 113, 221102, doi: 10.1103/PhysRevLett.113.221102
* Aguilar et al. (2015a) —. 2015a, Phys. Rev. Lett., 114, 171103, doi: 10.1103/PhysRevLett.114.171103
* Aguilar et al. (2015b) —. 2015b, Phys. Rev. Lett., 115, 211101, doi: 10.1103/PhysRevLett.115.211101
* Aguilar et al. (2016a) Aguilar, M., Ali Cavasonza, L., Alpat, B., et al. 2016a, Phys. Rev. Lett., 117, 091103, doi: 10.1103/PhysRevLett.117.091103
* Aguilar et al. (2016b) Aguilar, M., Ali Cavasonza, L., Ambrosi, G., et al. 2016b, Phys. Rev. Lett., 117, 231102, doi: 10.1103/PhysRevLett.117.231102
* Aguilar et al. (2017) Aguilar, M., Ali Cavasonza, L., Alpat, B., et al. 2017, Phys. Rev. Lett., 119, 251101, doi: 10.1103/PhysRevLett.119.251101
* Aguilar et al. (2018a) Aguilar, M., Ali Cavasonza, L., Ambrosi, G., et al. 2018a, Phys. Rev. Lett., 120, 021101, doi: 10.1103/PhysRevLett.120.021101
* Aguilar et al. (2018b) Aguilar, M., Ali Cavasonza, L., Alpat, B., et al. 2018b, Phys. Rev. Lett., 121, 051103, doi: 10.1103/PhysRevLett.121.051103
* Aguilar et al. (2019a) Aguilar, M., Ali Cavasonza, L., Ambrosi, G., et al. 2019a, Phys. Rev. Lett., 122, 041102, doi: 10.1103/PhysRevLett.122.041102
* Aguilar et al. (2019b) Aguilar, M., Ali Cavasonza, L., Alpat, B., et al. 2019b, Phys. Rev. Lett., 122, 101101, doi: 10.1103/PhysRevLett.122.101101
* Aguilar et al. (2020) Aguilar, M., Ali Cavasonza, L., Ambrosi, G., et al. 2020, Phys. Rev. Lett., 124, 211102, doi: 10.1103/PhysRevLett.124.211102
* Aguilar et al. (2021) Aguilar, M., Cavasonza, L. A., Allen, M. S., et al. 2021, Phys. Rev. Lett., 126, 041104, doi: 10.1103/PhysRevLett.126.041104
* Ahn et al. (2008) Ahn, H. S., Allison, P. S., Bagliesi, M. G., et al. 2008, Astroparticle Physics, 30, 133, doi: 10.1016/j.astropartphys.2008.07.010
* Ahn et al. (2009) Ahn, H. S., Allison, P., Bagliesi, M. G., et al. 2009, ApJ, 707, 593, doi: 10.1088/0004-637X/707/1/593
* An et al. (2019) An, Q., Asfandiyarov, R., Azzarello, P., et al. 2019, Science Advances, 5, doi: 10.1126/sciadv.aax3793
* Atwood et al. (2009) Atwood, W. B., Abdo, A. A., Ackermann, M., et al. 2009, ApJ, 697, 1071, doi: 10.1088/0004-637X/697/2/1071
* Binns et al. (2016) Binns, W. R., Israel, M. H., Christian, E. R., et al. 2016, Science, 352, 677, doi: 10.1126/science.aad6004
* Boschini et al. (2019) Boschini, M. J., Della Torre, S., Gervasi, M., La Vacca, G., & Rancoita, P. G. 2019, Advances in Space Research, 64, 2459, doi: 10.1016/j.asr.2019.04.007
* Boschini et al. (2017) Boschini, M. J., Della Torre, S., Gervasi, M., et al. 2017, ApJ, 840, 115, doi: 10.3847/1538-4357/aa6e4f
* Boschini et al. (2018a) —. 2018a, ApJ, 854, 94, doi: 10.3847/1538-4357/aaa75e
* Boschini et al. (2018b) —. 2018b, ApJ, 858, 61, doi: 10.3847/1538-4357/aabc54
* Boschini et al. (2020a) —. 2020a, ApJS, 250, 27, doi: 10.3847/1538-4365/aba901
* Boschini et al. (2020b) Boschini, M. J., Torre, S. D., Gervasi, M., et al. 2020b, ApJ, 889, 167, doi: 10.3847/1538-4357/ab64f1
* Breitschwerdt et al. (2016) Breitschwerdt, D., Feige, J., Schulreich, M. M., et al. 2016, Nature, 532, 73, doi: 10.1038/nature17424
* Chang et al. (2017) Chang, J., Ambrosi, G., An, Q., et al. 2017, Astroparticle Physics, 95, 6, doi: 10.1016/j.astropartphys.2017.08.005
* Cook et al. (2009) Cook, D. L., Berger, E., Faestermann, T., et al. 2009, in Lunar and Planetary Science Conference, Lunar and Planetary Science Conference, 1129
* Cummings et al. (2016) Cummings, A. C., Stone, E. C., Heikkila, B. C., et al. 2016, ApJ, 831, 18, doi: 10.3847/0004-637X/831/1/18
* DAMPE Collaboration et al. (2017) DAMPE Collaboration, Ambrosi, G., An, Q., et al. 2017, Nature, 552, 63, doi: 10.1038/nature24475
* Di Felice et al. (2017) Di Felice, V., Pizzolotto, C., D’Urso, D., et al. 2017, in International Cosmic Ray Conference, Vol. 301, 35th International Cosmic Ray Conference (ICRC2017), 1073
* Engelmann et al. (1990) Engelmann, J. J., Ferrando, P., Soutoul, A., et al. 1990, A&A, 233, 96
* Ferrando et al. (1988) Ferrando, P., Webber, W. R., Goret, P., et al. 1988, Phys. Rev. C, 37, 1490, doi: 10.1103/PhysRevC.37.1490
* Fimiani et al. (2012) Fimiani, L., Cook, D. L., Faestermann, T., et al. 2012, in Lunar and Planetary Science Conference, Lunar and Planetary Science Conference, 1279
* Fimiani et al. (2014) Fimiani, L., Cook, D. L., Faestermann, T., et al. 2014, in Lunar and Planetary Science Conference, Lunar and Planetary Science Conference, 1778
* Frisch et al. (2011) Frisch, P. C., Redfield, S., & Slavin, J. D. 2011, ARA&A, 49, 237, doi: 10.1146/annurev-astro-081710-102613
* Fry et al. (2015) Fry, B. J., Fields, B. D., & Ellis, J. R. 2015, ApJ, 800, 71, doi: 10.1088/0004-637X/800/1/71
* Grebenyuk et al. (2019a) Grebenyuk, V., Karmanov, D., Kovalev, I., et al. 2019a, Advances in Space Research, 64, 2546, doi: 10.1016/j.asr.2019.10.004
* Grebenyuk et al. (2019b) —. 2019b, Advances in Space Research, 64, 2559, doi: 10.1016/j.asr.2019.06.030
* Israel et al. (2018) Israel, M. H., Lave, K. A., Wiedenbeck, M. E., et al. 2018, ApJ, 865, 69, doi: 10.3847/1538-4357/aad867
* Knie et al. (2004) Knie, K., Korschinek, G., Faestermann, T., et al. 2004, Phys. Rev. Lett., 93, 171103, doi: 10.1103/PhysRevLett.93.171103
* Knie et al. (1999) —. 1999, Phys. Rev. Lett., 83, 18, doi: 10.1103/PhysRevLett.83.18
* Koll et al. (2019) Koll, D., Korschinek, G., Faestermann, T., et al. 2019, Phys. Rev. Lett., 123, 072701, doi: 10.1103/PhysRevLett.123.072701
* Ludwig et al. (2016) Ludwig, P., Bishop, S., Egli, R., et al. 2016, Proceedings of the National Academy of Science, 113, 9232, doi: 10.1073/pnas.1601040113
* Maestro et al. (2020) Maestro, P., Adriani, O., Akaike, Y., et al. 2020, Physics of Atomic Nuclei, 82, 766, doi: 10.1134/S1063778819660384
* Maurin et al. (2014) Maurin, D., Melot, F., & Taillet, R. 2014, A&A, 569, A32, doi: 10.1051/0004-6361/201321344
* Panov et al. (2009) Panov, A. D., Adams, J. H., Ahn, H. S., et al. 2009, Bulletin of the Russian Academy of Sciences, Physics, 73, 564, doi: 10.3103/S1062873809050098
* Picozza et al. (2007) Picozza, P., Galper, A. M., Castellini, G., et al. 2007, Astroparticle Physics, 27, 296, doi: 10.1016/j.astropartphys.2006.12.002
* Rugel et al. (2009) Rugel, G., Faestermann, T., Knie, K., et al. 2009, Phys. Rev. Lett., 103, 072502, doi: 10.1103/PhysRevLett.103.072502
* Seo et al. (2014) Seo, E. S., Anderson, T., Angelaszek, D., et al. 2014, Advances in Space Research, 53, 1451, doi: 10.1016/j.asr.2014.01.013
* Sfeir et al. (1999) Sfeir, D. M., Lallement, R., Crifo, F., & Welsh, B. Y. 1999, A&A, 346, 785
* Stone et al. (2019) Stone, E. C., Cummings, A. C., Heikkila, B. C., & Lal, N. 2019, Nature Astronomy, 3, 1013, doi: 10.1038/s41550-019-0928-3
* Stone et al. (2013) Stone, E. C., Cummings, A. C., McDonald, F. B., et al. 2013, Science, 341, 150, doi: 10.1126/science.1236408
* Stone et al. (1977) Stone, E. C., Vogt, R. E., McDonald, F. B., et al. 1977, Space Sci. Rev., 21, 355, doi: 10.1007/BF00211546
* Torii et al. (2019) Torii, S., Marrocchesi, P. S., & Calet Collaboration. 2019, Advances in Space Research, 64, 2531, doi: 10.1016/j.asr.2019.04.013
* Vladimirov et al. (2012) Vladimirov, A. E., Jóhannesson, G., Moskalenko, I. V., & Porter, T. A. 2012, ApJ, 752, 68, doi: 10.1088/0004-637X/752/1/68
* Wallner et al. (2016) Wallner, A., Feige, J., Kinoshita, N., et al. 2016, Nature, 532, 69, doi: 10.1038/nature17196
## Appendix A Analytical parameterization and numerical tables of the iron LIS
Here we provide an analytical parameterization of the iron LIS:
$$F_{\rm Fe}(R)=\begin{cases}R^{-2.7}\left[a-bR+cR^{2}-dR^{3}+f\tanh\left(\tanh\left(RG(-gR)\right)\right)-hG(iR^{2})\right],&R\leq 2.8\,{\rm GV},\\ R^{-2.7}\left[l-mR-n\tilde{R}+o\frac{\tilde{R}}{R^{2}}-\frac{p}{\tilde{R}}+\frac{q}{R^{2}\tilde{R}}+r\log\left(s+R\right)+\frac{t}{\tilde{R}\log\left(s+R\right)}\right],&R>2.8\,{\rm GV},\end{cases}\qquad{\rm(A1)}$$
where $R$ is the particle rigidity in GV, the values of the fitting parameters from $a$ to $t$ are given in Table 3, and the functions $\tilde{R}$ and $G(x)$ are defined as
$$\tilde{R}=\log R,\qquad G(x)=e^{-x^{2}}.$$
The analytical representation, Eq. (A1), is also complemented by numerical
tables calculated for the I-scenario, which tabulate the LIS in rigidity $R$
(Table 4) and in kinetic energy $E_{\rm kin}$ per nucleon (Table 5).
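The rigidity and kinetic-energy grids of Tables 4 and 5 are related by the usual mapping for a fully stripped nucleus. The sketch below is our own code, not from the paper; we assume ${}^{56}$Fe ($A=56$, $Z=26$) and a nucleon mass $m\approx 0.9315$ GeV, choices which reproduce the tabulated grid points:

```python
import math

A, Z = 56, 26      # assumed mass and charge numbers for iron
M_NUC = 0.9315     # GeV per nucleon (atomic mass unit); our assumption

def rigidity_from_ekin(ekin):
    """Rigidity R in GV from kinetic energy per nucleon in GeV/n."""
    return (A / Z) * math.sqrt(ekin * (ekin + 2.0 * M_NUC))

def ekin_from_rigidity(R):
    """Kinetic energy per nucleon in GeV/n from rigidity R in GV."""
    return math.sqrt((Z * R / A) ** 2 + M_NUC ** 2) - M_NUC
```

For instance, the first Table 5 grid point, $E_{\rm kin}=10^{-3}$ GeV/n, maps to $R\approx 9.299\times 10^{-2}$ GV, the first Table 4 grid point.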
Table 3: Parameters of the analytical fit to the iron LIS

Param | 26Fe | Param | 26Fe | Param | 26Fe | Param | 26Fe
---|---|---|---|---|---|---|---
a | 2.8224e+0 | g | 2.3448e+0 | m | 5.9700e-8 | q | 1.6412e+2
b | 3.3660e-1 | h | 2.8171e+0 | n | 5.6550e+0 | r | 6.5309e+0
c | 6.1955e-1 | i | 3.8226e-1 | o | 2.4807e+2 | s | 2.0316e+2
d | 1.2735e-1 | l | 2.6334e+1 | p | 1.3610e+2 | t | 1.5300e+2
f | 2.3980e-1 | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
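Equation (A1) with the Table 3 parameters can be evaluated directly. The sketch below is our own code, not from the paper; we interpret $\log$ as the natural logarithm, which reproduces the Table 4 values to better than about 1%:

```python
import math

# Fitted parameters from Table 3 (R in GV)
a, b, c, d = 2.8224e+0, 3.3660e-1, 6.1955e-1, 1.2735e-1
f, g, h, i = 2.3980e-1, 2.3448e+0, 2.8171e+0, 3.8226e-1
l, m, n, o = 2.6334e+1, 5.9700e-8, 5.6550e+0, 2.4807e+2
p, q, r, s, t = 1.3610e+2, 1.6412e+2, 6.5309e+0, 2.0316e+2, 1.5300e+2

def G(x):
    return math.exp(-x * x)

def iron_lis(R):
    """Iron LIS differential intensity, Eq. (A1), in (m^2 s sr GV)^-1."""
    if R <= 2.8:
        bracket = (a - b * R + c * R**2 - d * R**3
                   + f * math.tanh(math.tanh(R * G(-g * R)))
                   - h * G(i * R**2))
    else:
        Rt = math.log(R)  # we read "log" as the natural logarithm
        bracket = (l - m * R - n * Rt + o * Rt / R**2 - p / Rt
                   + q / (R**2 * Rt) + r * math.log(s + R)
                   + t / (Rt * math.log(s + R)))
    return R**-2.7 * bracket
```

For example, `iron_lis(1.0)` gives about 0.545 and `iron_lis(10.0)` about 1.60e-2, consistent with the neighboring Table 4 entries.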
Table 4: $Z=26$ – Iron LIS

Rigidity (GV) | Differential intensity | Rigidity (GV) | Differential intensity | Rigidity (GV) | Differential intensity | Rigidity (GV) | Differential intensity | Rigidity (GV) | Differential intensity
---|---|---|---|---|---|---|---|---|---
9.299e-02 | 5.252e-03 | 7.508e-01 | 3.968e-01 | 1.039e+01 | 1.475e-02 | 5.430e+02 | 6.616e-07 | 3.413e+04 | 1.376e-11
9.758e-02 | 5.891e-03 | 7.892e-01 | 4.239e-01 | 1.127e+01 | 1.234e-02 | 5.977e+02 | 5.152e-07 | 3.759e+04 | 1.069e-11
1.024e-01 | 6.587e-03 | 8.297e-01 | 4.510e-01 | 1.224e+01 | 1.029e-02 | 6.580e+02 | 4.013e-07 | 4.139e+04 | 8.297e-12
1.075e-01 | 7.365e-03 | 8.724e-01 | 4.780e-01 | 1.330e+01 | 8.542e-03 | 7.243e+02 | 3.126e-07 | 4.558e+04 | 6.441e-12
1.128e-01 | 8.233e-03 | 9.175e-01 | 5.043e-01 | 1.448e+01 | 7.060e-03 | 7.974e+02 | 2.436e-07 | 5.019e+04 | 4.999e-12
1.183e-01 | 9.204e-03 | 9.651e-01 | 5.300e-01 | 1.576e+01 | 5.812e-03 | 8.779e+02 | 1.898e-07 | 5.527e+04 | 3.880e-12
1.242e-01 | 1.029e-02 | 1.015e+00 | 5.547e-01 | 1.718e+01 | 4.766e-03 | 9.665e+02 | 1.479e-07 | 6.086e+04 | 3.012e-12
1.303e-01 | 1.150e-02 | 1.069e+00 | 5.778e-01 | 1.873e+01 | 3.889e-03 | 1.064e+03 | 1.152e-07 | 6.702e+04 | 2.337e-12
1.368e-01 | 1.285e-02 | 1.125e+00 | 5.987e-01 | 2.045e+01 | 3.162e-03 | 1.172e+03 | 8.974e-08 | 7.380e+04 | 1.813e-12
1.435e-01 | 1.435e-02 | 1.184e+00 | 6.164e-01 | 2.233e+01 | 2.561e-03 | 1.290e+03 | 6.989e-08 | 8.126e+04 | 1.407e-12
1.506e-01 | 1.603e-02 | 1.248e+00 | 6.309e-01 | 2.440e+01 | 2.065e-03 | 1.420e+03 | 5.443e-08 | 8.948e+04 | 1.092e-12
1.581e-01 | 1.790e-02 | 1.315e+00 | 6.415e-01 | 2.668e+01 | 1.657e-03 | 1.564e+03 | 4.238e-08 | 9.854e+04 | 8.468e-13
1.659e-01 | 1.998e-02 | 1.386e+00 | 6.476e-01 | 2.920e+01 | 1.325e-03 | 1.722e+03 | 3.300e-08 | 1.085e+05 | 6.569e-13
1.741e-01 | 2.230e-02 | 1.461e+00 | 6.486e-01 | 3.196e+01 | 1.057e-03 | 1.896e+03 | 2.570e-08 | 1.195e+05 | 5.095e-13
1.827e-01 | 2.488e-02 | 1.542e+00 | 6.436e-01 | 3.500e+01 | 8.402e-04 | 2.087e+03 | 2.001e-08 | 1.316e+05 | 3.952e-13
1.918e-01 | 2.775e-02 | 1.627e+00 | 6.317e-01 | 3.835e+01 | 6.665e-04 | 2.298e+03 | 1.559e-08 | 1.449e+05 | 3.065e-13
2.013e-01 | 3.093e-02 | 1.718e+00 | 6.120e-01 | 4.204e+01 | 5.275e-04 | 2.531e+03 | 1.215e-08 | 1.596e+05 | 2.377e-13
2.112e-01 | 3.447e-02 | 1.816e+00 | 5.839e-01 | 4.610e+01 | 4.166e-04 | 2.787e+03 | 9.464e-09 | 1.757e+05 | 1.843e-13
2.217e-01 | 3.839e-02 | 1.919e+00 | 5.472e-01 | 5.057e+01 | 3.283e-04 | 3.068e+03 | 7.374e-09 | 1.935e+05 | 1.429e-13
2.327e-01 | 4.273e-02 | 2.031e+00 | 5.031e-01 | 5.549e+01 | 2.583e-04 | 3.379e+03 | 5.744e-09 | 2.131e+05 | 1.108e-13
2.442e-01 | 4.754e-02 | 2.150e+00 | 4.539e-01 | 6.091e+01 | 2.029e-04 | 3.720e+03 | 4.474e-09 | $\cdots$ | $\cdots$
2.563e-01 | 5.286e-02 | 2.277e+00 | 4.025e-01 | 6.688e+01 | 1.592e-04 | 4.097e+03 | 3.485e-09 | $\cdots$ | $\cdots$
2.690e-01 | 5.873e-02 | 2.414e+00 | 3.521e-01 | 7.344e+01 | 1.247e-04 | 4.511e+03 | 2.713e-09 | $\cdots$ | $\cdots$
2.823e-01 | 6.521e-02 | 2.561e+00 | 3.051e-01 | 8.068e+01 | 9.766e-05 | 4.967e+03 | 2.113e-09 | $\cdots$ | $\cdots$
2.964e-01 | 7.234e-02 | 2.720e+00 | 2.633e-01 | 8.864e+01 | 7.640e-05 | 5.470e+03 | 1.645e-09 | $\cdots$ | $\cdots$
3.111e-01 | 8.019e-02 | 2.891e+00 | 2.274e-01 | 9.741e+01 | 5.972e-05 | 6.023e+03 | 1.280e-09 | $\cdots$ | $\cdots$
3.265e-01 | 8.881e-02 | 3.075e+00 | 1.974e-01 | 1.071e+02 | 4.665e-05 | 6.632e+03 | 9.964e-10 | $\cdots$ | $\cdots$
3.428e-01 | 9.826e-02 | 3.275e+00 | 1.723e-01 | 1.177e+02 | 3.639e-05 | 7.303e+03 | 7.754e-10 | $\cdots$ | $\cdots$
3.598e-01 | 1.086e-01 | 3.490e+00 | 1.508e-01 | 1.294e+02 | 2.836e-05 | 8.042e+03 | 6.033e-10 | $\cdots$ | $\cdots$
3.777e-01 | 1.199e-01 | 3.724e+00 | 1.319e-01 | 1.423e+02 | 2.209e-05 | 8.855e+03 | 4.693e-10 | $\cdots$ | $\cdots$
3.966e-01 | 1.322e-01 | 3.978e+00 | 1.151e-01 | 1.565e+02 | 1.720e-05 | 9.751e+03 | 3.651e-10 | $\cdots$ | $\cdots$
4.163e-01 | 1.456e-01 | 4.254e+00 | 1.002e-01 | 1.721e+02 | 1.339e-05 | 1.074e+04 | 2.839e-10 | $\cdots$ | $\cdots$
4.371e-01 | 1.600e-01 | 4.554e+00 | 8.683e-02 | 1.894e+02 | 1.042e-05 | 1.182e+04 | 2.208e-10 | $\cdots$ | $\cdots$
4.590e-01 | 1.756e-01 | 4.880e+00 | 7.502e-02 | 2.083e+02 | 8.110e-06 | 1.302e+04 | 1.717e-10 | $\cdots$ | $\cdots$
4.820e-01 | 1.925e-01 | 5.236e+00 | 6.465e-02 | 2.292e+02 | 6.311e-06 | 1.434e+04 | 1.335e-10 | $\cdots$ | $\cdots$
5.061e-01 | 2.105e-01 | 5.625e+00 | 5.552e-02 | 2.522e+02 | 4.911e-06 | 1.579e+04 | 1.037e-10 | $\cdots$ | $\cdots$
5.315e-01 | 2.299e-01 | 6.049e+00 | 4.752e-02 | 2.775e+02 | 3.821e-06 | 1.738e+04 | 8.062e-11 | $\cdots$ | $\cdots$
5.582e-01 | 2.505e-01 | 6.513e+00 | 4.056e-02 | 3.054e+02 | 2.973e-06 | 1.914e+04 | 6.265e-11 | $\cdots$ | $\cdots$
Note. — Differential intensity units: $(\mathrm{m^{2}\,s\,sr\,GV})^{-1}$.
Table 5: $Z=26$ – Iron LIS

$E_{\rm kin}$ (GeV/n) | Differential intensity | $E_{\rm kin}$ (GeV/n) | Differential intensity | $E_{\rm kin}$ (GeV/n) | Differential intensity | $E_{\rm kin}$ (GeV/n) | Differential intensity | $E_{\rm kin}$ (GeV/n) | Differential intensity
---|---|---|---|---|---|---|---|---|---
1.000e-03 | 2.434e-01 | 6.309e-02 | 2.428e+00 | 3.981e+00 | 3.246e-02 | 2.512e+02 | 1.428e-06 | 1.585e+04 | 2.969e-11
1.101e-03 | 2.602e-01 | 6.948e-02 | 2.483e+00 | 4.384e+00 | 2.708e-02 | 2.766e+02 | 1.112e-06 | 1.745e+04 | 2.305e-11
1.213e-03 | 2.773e-01 | 7.651e-02 | 2.531e+00 | 4.827e+00 | 2.253e-02 | 3.046e+02 | 8.663e-07 | 1.922e+04 | 1.790e-11
1.335e-03 | 2.954e-01 | 8.425e-02 | 2.571e+00 | 5.315e+00 | 1.866e-02 | 3.354e+02 | 6.748e-07 | 2.116e+04 | 1.389e-11
1.470e-03 | 3.148e-01 | 9.277e-02 | 2.602e+00 | 5.853e+00 | 1.540e-02 | 3.693e+02 | 5.257e-07 | 2.330e+04 | 1.079e-11
1.619e-03 | 3.353e-01 | 1.022e-01 | 2.623e+00 | 6.446e+00 | 1.266e-02 | 4.067e+02 | 4.096e-07 | 2.566e+04 | 8.371e-12
1.783e-03 | 3.572e-01 | 1.125e-01 | 2.635e+00 | 7.098e+00 | 1.036e-02 | 4.478e+02 | 3.192e-07 | 2.825e+04 | 6.497e-12
1.963e-03 | 3.805e-01 | 1.239e-01 | 2.637e+00 | 7.816e+00 | 8.450e-03 | 4.931e+02 | 2.487e-07 | 3.111e+04 | 5.042e-12
2.162e-03 | 4.052e-01 | 1.364e-01 | 2.627e+00 | 8.607e+00 | 6.864e-03 | 5.430e+02 | 1.937e-07 | 3.426e+04 | 3.912e-12
2.381e-03 | 4.315e-01 | 1.502e-01 | 2.602e+00 | 9.478e+00 | 5.555e-03 | 5.980e+02 | 1.508e-07 | 3.773e+04 | 3.035e-12
2.622e-03 | 4.593e-01 | 1.654e-01 | 2.564e+00 | 1.044e+01 | 4.476e-03 | 6.585e+02 | 1.175e-07 | 4.155e+04 | 2.355e-12
2.887e-03 | 4.889e-01 | 1.822e-01 | 2.513e+00 | 1.149e+01 | 3.590e-03 | 7.251e+02 | 9.147e-08 | 4.575e+04 | 1.827e-12
3.179e-03 | 5.202e-01 | 2.006e-01 | 2.447e+00 | 1.266e+01 | 2.870e-03 | 7.985e+02 | 7.122e-08 | 5.038e+04 | 1.417e-12
3.501e-03 | 5.533e-01 | 2.209e-01 | 2.367e+00 | 1.394e+01 | 2.287e-03 | 8.793e+02 | 5.546e-08 | 5.548e+04 | 1.099e-12
3.855e-03 | 5.884e-01 | 2.432e-01 | 2.270e+00 | 1.535e+01 | 1.818e-03 | 9.682e+02 | 4.318e-08 | 6.109e+04 | 8.524e-13
4.245e-03 | 6.256e-01 | 2.678e-01 | 2.156e+00 | 1.690e+01 | 1.442e-03 | 1.066e+03 | 3.364e-08 | 6.727e+04 | 6.611e-13
4.675e-03 | 6.648e-01 | 2.949e-01 | 2.024e+00 | 1.861e+01 | 1.141e-03 | 1.174e+03 | 2.621e-08 | 7.408e+04 | 5.127e-13
5.148e-03 | 7.061e-01 | 3.248e-01 | 1.873e+00 | 2.049e+01 | 9.009e-04 | 1.293e+03 | 2.042e-08 | 8.157e+04 | 3.975e-13
5.669e-03 | 7.498e-01 | 3.577e-01 | 1.705e+00 | 2.257e+01 | 7.098e-04 | 1.424e+03 | 1.591e-08 | 8.983e+04 | 3.082e-13
6.242e-03 | 7.957e-01 | 3.938e-01 | 1.525e+00 | 2.485e+01 | 5.583e-04 | 1.568e+03 | 1.239e-08 | 9.892e+04 | 2.390e-13
6.874e-03 | 8.440e-01 | 4.337e-01 | 1.339e+00 | 2.736e+01 | 4.385e-04 | 1.726e+03 | 9.654e-09 | $\cdots$ | $\cdots$
7.569e-03 | 8.947e-01 | 4.776e-01 | 1.158e+00 | 3.013e+01 | 3.440e-04 | 1.901e+03 | 7.519e-09 | $\cdots$ | $\cdots$
8.335e-03 | 9.479e-01 | 5.259e-01 | 9.886e-01 | 3.318e+01 | 2.695e-04 | 2.093e+03 | 5.855e-09 | $\cdots$ | $\cdots$
9.179e-03 | 1.004e+00 | 5.791e-01 | 8.372e-01 | 3.654e+01 | 2.110e-04 | 2.305e+03 | 4.559e-09 | $\cdots$ | $\cdots$
1.011e-02 | 1.062e+00 | 6.377e-01 | 7.069e-01 | 4.023e+01 | 1.650e-04 | 2.539e+03 | 3.549e-09 | $\cdots$ | $\cdots$
1.113e-02 | 1.122e+00 | 7.022e-01 | 5.982e-01 | 4.431e+01 | 1.290e-04 | 2.795e+03 | 2.762e-09 | $\cdots$ | $\cdots$
1.226e-02 | 1.186e+00 | 7.733e-01 | 5.092e-01 | 4.879e+01 | 1.008e-04 | 3.078e+03 | 2.150e-09 | $\cdots$ | $\cdots$
1.350e-02 | 1.251e+00 | 8.515e-01 | 4.365e-01 | 5.373e+01 | 7.859e-05 | 3.390e+03 | 1.673e-09 | $\cdots$ | $\cdots$
1.486e-02 | 1.319e+00 | 9.377e-01 | 3.757e-01 | 5.916e+01 | 6.124e-05 | 3.733e+03 | 1.302e-09 | $\cdots$ | $\cdots$
1.637e-02 | 1.389e+00 | 1.033e+00 | 3.236e-01 | 6.515e+01 | 4.770e-05 | 4.110e+03 | 1.013e-09 | $\cdots$ | $\cdots$
1.802e-02 | 1.462e+00 | 1.137e+00 | 2.785e-01 | 7.174e+01 | 3.714e-05 | 4.526e+03 | 7.877e-10 | $\cdots$ | $\cdots$
1.985e-02 | 1.536e+00 | 1.252e+00 | 2.393e-01 | 7.900e+01 | 2.891e-05 | 4.984e+03 | 6.126e-10 | $\cdots$ | $\cdots$
2.185e-02 | 1.612e+00 | 1.379e+00 | 2.050e-01 | 8.699e+01 | 2.250e-05 | 5.489e+03 | 4.764e-10 | $\cdots$ | $\cdots$
2.406e-02 | 1.689e+00 | 1.518e+00 | 1.753e-01 | 9.580e+01 | 1.751e-05 | 6.044e+03 | 3.704e-10 | $\cdots$ | $\cdots$
2.650e-02 | 1.767e+00 | 1.672e+00 | 1.496e-01 | 1.055e+02 | 1.363e-05 | 6.656e+03 | 2.879e-10 | $\cdots$ | $\cdots$
2.918e-02 | 1.846e+00 | 1.841e+00 | 1.274e-01 | 1.162e+02 | 1.060e-05 | 7.329e+03 | 2.238e-10 | $\cdots$ | $\cdots$
3.213e-02 | 1.925e+00 | 2.027e+00 | 1.082e-01 | 1.279e+02 | 8.250e-06 | 8.071e+03 | 1.739e-10 | $\cdots$ | $\cdots$
3.539e-02 | 2.003e+00 | 2.233e+00 | 9.171e-02 | 1.409e+02 | 6.419e-06 | 8.887e+03 | 1.352e-10 | $\cdots$ | $\cdots$
Note. — Differential intensity units: $(\mathrm{m^{2}\,s\,sr\,GeV/n})^{-1}$.
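Values between the grid points of Tables 4 and 5 can be obtained by power-law (log-log) interpolation. A minimal sketch (the function name and tolerances are ours, not from the paper):

```python
import math

def loglog_interp(x, xs, ys):
    """Power-law interpolation between tabulated (x, y) grid points.

    Assumes xs is increasing; fits y = y0 * (x/x0)**slope on each interval,
    i.e. linear interpolation in log-log space.
    """
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            slope = math.log(y1 / y0) / math.log(x1 / x0)
            return y0 * (x / x0) ** slope
    raise ValueError("x outside the tabulated range")
```

For example, the adjacent Table 4 entries (10.39 GV, 1.475e-2) and (11.27 GV, 1.234e-2) give roughly 1.30e-2 $(\mathrm{m^{2}\,s\,sr\,GV})^{-1}$ at 11 GV.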
# Borel Vizing’s Theorem for 2-Ended Groups
Felix Weilacher
###### Abstract
We show that Vizing’s Theorem holds in the Borel context for graphs induced by
actions of 2-ended groups, and ask whether it holds more generally for
everywhere two ended Borel graphs.
## 1 Introduction
For a graph $G$ on a set $X$, let $\chi^{\prime}(G)$ denote the edge chromatic
number of $G$. That is, the smallest cardinal $k$ such that there exists a
function assigning each edge in $G$ to an element of $k$ such that any two
edges incident on the same vertex are assigned different elements. Such a
function is called a $k$-edge coloring of $G$, and the elements of $k$ are
called colors. If $G$ is a Borel graph on a standard Borel space $X$, let
$\chi_{B}^{\prime}(G)$ denote the Borel edge chromatic number of $G$. That is,
the smallest cardinal $k$ as above, but where only Borel (as functions)
colorings are allowed.
A classical theorem of Vizing states that if $G$ is a graph of maximum degree
$d\in\omega$, then $\chi^{\prime}(G)\leq d+1$. Note the trivial lower bound
$\chi^{\prime}(G)\geq d$, so that Vizing’s theorem implies
$\chi^{\prime}(G)\in\\{d,d+1\\}$. We are interested in generalizations of this
theorem to the Borel context. Marks has shown [1] that the direct
generalization fails, but on the other hand, Grebik and Pikhurko have shown
[2] that the generalization holds if ‘Borel’ is weakened to ‘$\mu$-measurable’
for some Borel probability measure $\mu$ on $X$ which is $G$-invariant.
Recently, Weilacher has shown [4] that some combinatorial bounds which hold in
the measurable context but not generally in the Borel context can still be
salvaged in the Borel context with an additional assumption: that every
connected component of $G$ has two ends. In the spirit of this, we show in
this note the following:
###### Theorem 1.
Let $\Gamma$ be a marked group with two ends, say with $d$ generators. Let $G$
be the shift graph of $\Gamma$, so that $G$ is $d$-regular. Then
$\chi_{B}^{\prime}(G)\leq d+1$.
One purpose of this note is to pose the question of whether the assumption in
Theorem 1 that $G$ be generated by a group action is necessary, which seems to
be open.
###### Problem 1.
Let $G$ be a Borel graph of maximum degree $d$ such that every connected
component of $G$ has two ends. Is $\chi^{\prime}_{B}(G)\leq d+1$?
## 2 Proof
In this section we prove Theorem 1. The proof is very simple and intuitive,
but it takes some time to write down all of the details.
###### Proof.
Fix a two ended marked group $\Gamma$ with symmetric generating set $S$ of
size $d$. It is well known that since $\Gamma$ has two ends, there is a finite
normal subgroup $\Delta\leq\Gamma$ such that $\Gamma/\Delta\cong\mathbb{Z}$ or
$D_{\infty}=\langle a,b\mid a^{2}=b^{2}=\textnormal{id}\rangle$. Let us start
with the former case for ease of notation. The latter case can be handled in
the same way and will be addressed at the end of this section.
Partition $S$ as $S=\bigsqcup_{n\in\mathbb{Z}}S_{n}$, where
$S_{n}=\\{\gamma\in S\mid\overline{\gamma}=n\\}$ and $\overline{\gamma}$
denotes the image of $\gamma$ in the quotient $\Gamma/\Delta$ identified with
$\mathbb{Z}$. Note that $S_{-n}=S_{n}^{-1}=\\{\gamma^{-1}\mid\gamma\in
S_{n}\\}$ for each $n$. Let $d_{n}=|S_{n}|$ for each $n$, so that
$\sum_{n}d_{n}=d$.
Let $G$ be the shift graph of $\Gamma$ with vertex set $X$. Let $Y$ be the
standard Borel space of $\Delta$-orbits of $X$. The action of $\Gamma$ on $X$
descends to an action of $\mathbb{Z}$ on $Y$. Let $H$ denote the Borel
multigraph on $Y$ defined by placing an edge between $y$ and $n\cdot y$ for
each $\gamma\in S_{n}$ for $n>0$ and $y\in Y$. In other words, the number of
edges between $y$ and $n\cdot y$ in $H$ is always $d_{n}$ for $n\neq 0$. Let
$k=\sum_{n>0}d_{n}$, so that $H$ is $2k$-regular. We claim that $H$ admits a
Borel $2k+1$-edge coloring. Since $H$ is generated by an action of
$\mathbb{Z}$, it suffices to just prove our main Theorem when
$\Gamma=\mathbb{Z}$ and multiplicity for the generators is allowed:
###### Lemma 1.
Theorem 1 holds when $\Gamma=\mathbb{Z}$, even when multiplicity for
generators is allowed.
It should be noted that Vizing’s Theorem fails in general for multigraphs
(although there is a generalization which still holds), so this lemma is
somewhat surprising. We now prove it:
###### Proof.
Keeping in line with the notation established so far, let $H$ be the shift
graph of $\mathbb{Z}$ with our generating set with vertex set $Y$. Let
$0<n_{1}\leq n_{2}\leq\cdots\leq n_{k}$ list the positive generators with
multiplicity, so that $H$ is $2k$-regular.
Let $G^{\prime}$ be the graph on $Y$ induced by the action of $\mathbb{Z}$
with usual generators $\pm 1$.
We first show the following: Let $A\subset Y$ be a Borel subset of our shift
graph such that the induced subgraph $G^{\prime}\upharpoonright A$ has
connected components all of size at least $2$, and such that $A$ is recurrent.
That is, for each $x\in Y$, there are $m,l>0$ such that $m\cdot x\in A$ and
$(-l)\cdot x\in A$. Then there is a Borel 3-edge coloring of $G^{\prime}$, say
using the colors 1,2, and 3, such that the color 3 only appears on edges
between points in $A$.
First, by the recurrence of $A$, we can clearly find a Borel
$G^{\prime}$-independent recurrent set $B\subset Y$ such that if $x,y\in B$
are distinct points in the same connected component of $G^{\prime}$, the
unique path between them in $G^{\prime}$ passes through $A$. Suppose $x,y\in
B$ such that $y=N\cdot x$ for some $N>0$ and there are no points of $B$
between $x$ and $y$ in the graph $G^{\prime}$.
We need to color the edges between $x$ and $y$. If $N$ is even, we color the
edge $(m\cdot x,(m+1)\cdot x)$ with the color 1 for $m$ even and the color 2
for $m$ odd for $0\leq m<N$. If $N$ is odd, let $0<M<N-1$ be minimal with the
property that $M\cdot x\in A$. This exists by definition of $B$, and then
$(M+1)\cdot x\in A$ by definition of $A$. Accordingly, we color the edge
$(m\cdot x,(m+1)\cdot x)$ with the color 1 for $m$ even and the color 2 for
$m$ odd for $0\leq m<M$, then color the edge $(M\cdot x,(M+1)\cdot x)$ with
the color 3, then color the edge $(m\cdot x,(m+1)\cdot x)$ with the color $1$
for $m$ odd and the color 2 for $m$ even for $M<m<N$. Note that for each $x\in
B$, $(x,1\cdot x)$ has color 1 and $(-1\cdot x,x)$ has color 2, so we do
indeed end up with a coloring. Furthermore, the color 3 was clearly only ever
used for edges between points of $A$.
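To make the case analysis concrete, the coloring of a single segment between consecutive points of $B$ can be written out as follows (our own illustrative code: index $m$ labels the edge $(m\cdot x,(m+1)\cdot x)$, and `in_A[m]` flags whether $m\cdot x\in A$):

```python
def color_segment(N, in_A):
    """Colors for the N consecutive G'-edges between B-points x and N.x.

    Returns a list of N colors from {1, 2, 3}; color 3 is used at most once,
    and only on an edge whose endpoints both lie in A. The first edge always
    gets color 1 and the last color 2, so segments glue together consistently.
    """
    if N % 2 == 0:
        return [1 if m % 2 == 0 else 2 for m in range(N)]
    # N odd: minimal 0 < M < N-1 with M.x in A (exists by the choice of B);
    # then (M+1).x is in A as well, since components of G'|A have size >= 2.
    M = next(m for m in range(1, N - 1) if in_A[m])
    colors = [1 if m % 2 == 0 else 2 for m in range(M)]          # alternate 1,2
    colors.append(3)                                             # edge (M, M+1)
    colors += [1 if m % 2 == 1 else 2 for m in range(M + 1, N)]  # parity flipped
    return colors
```

One can check directly that adjacent edges always receive distinct colors.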
Now, returning to our original goal, we begin by partitioning $Y$ into $k$
Borel recurrent sets $A_{1},\ldots,A_{k}$ such that for each $i$, the
connected components of $G^{\prime}\upharpoonright A_{i}$ each have size at
least $2n_{k}$. By a result from [3], we can start by finding a Borel maximal
$2n_{k}$-discrete set $B$. In particular $B$ will be recurrent. We can then
partition $B$ into $k$ many recurrent sets $B_{1},\ldots,B_{k}$ using, for
example, that same result. Now, for each $i$ and each $x\in B_{i}$, there will
be a smallest $N>0$ such that $N\cdot x\in B$. We then include $m\cdot x\in
A_{i}$ for each $0\leq m<N$. This clearly works.
Now, fix one of our generators $n_{i}$. Consider only the edges in $H$
corresponding to this generator, and call the resulting (simple) graph
$H_{i}$. Abusing language slightly, observe that since $n_{i}\leq n_{k}$, our
set $A_{i}$ will be recurrent for the graph $H_{i}$ (more precisely, for each
$y\in Y$, there are $m,l>0$ such that $mn_{i}\cdot y$ and $(-l)n_{i}\cdot y\in
A_{i}$), and all the connected components of $H_{i}\upharpoonright A_{i}$ will
have size at least 2. It follows from the statement of paragraph 3 of this
proof that we can Borel edge color $H_{i}$, say using the colors $2i,2i+1$,
and $2k+1$, such that the color $2k+1$ is only used for edges between vertices
in $A_{i}$. Do so for each $i$. Now, the sets of colors we used for each
$H_{i}$ were disjoint, save for the color $2k+1$. This was only used to color
edges between points in $A_{i}$, though, so since the $A_{i}$’s are pairwise
disjoint, this will not cause any conflicts. Thus in the end, we have a Borel
edge coloring using the colors $1,2,\ldots,2k+1$, as desired.
∎
We now return to our proof of Theorem 1 in the case
$\Gamma/\Delta\cong\mathbb{Z}$. Fix a Borel $2k+1$-edge coloring $c$ of $H$,
say using the colors $1,2,\ldots,2k+1$.
Let $\gamma\in S_{n}$ for some $n>0$. For each $\Delta$-orbit $y$, $\gamma$
corresponds to an edge from $y$ to $n\cdot y$ in $H$. Suppose $c$ assigns the
color $l$ to that edge. Then let us give the edges $(x,\gamma\cdot x)$ in $G$
the color $l$ for each $x\in y$. Of course, since $x\neq
x^{\prime}\Rightarrow\gamma\cdot x\neq\gamma\cdot x^{\prime}$, this does not
cause any conflicts. Also, since $c$ was a coloring of $H$, doing this for all
$\gamma\in S\setminus S_{0}$ does not cause any conflicts.
It remains to color the edges corresponding to generators in $S_{0}$. These
are the edges within each $\Delta$-orbit. For every such orbit $y$, the
induced subgraph $G\upharpoonright y$ is $d_{0}$-regular, so by Vizing’s
theorem it can be $d_{0}+1$-edge colored, say with the colors
$2k+2,\ldots,2k+d_{0}+2=d+2$. Since there are only finitely many such
colorings for each orbit, we may choose one of them for each orbit in a Borel
fashion. We have now $d+2$-edge colored our graph $G$ in a Borel fashion.
Finally, for each $\Delta$-orbit $y$, there must be some color
$l\in\\{1,\ldots,2k+1\\}$ which does not appear on any edges incident to $y$
in our coloring $c$ of $H$, since $H$ is $2k$-regular. This means that, in our
$d+2$-edge coloring above, none of the edges incident to a point in $y$ have
the color $l$. Therefore, in the $d_{0}+1$-coloring of $y$ we have, we may
replace the color $d+2$ with the color $l$ without causing any new conflicts.
Doing so, we improve our coloring to a $d+1$-edge coloring, and so are done.
Finally, let us address the case $\Gamma/\Delta\cong D_{\infty}$. The argument
which showed the sufficiency of Lemma 1 was completely general, so here it
suffices to show
###### Lemma 2.
Theorem 1 holds when $\Gamma=D_{\infty}$, even when multiplicity for
generators is allowed.
This can be proved similarly to Lemma 1. If $\gamma\in D_{\infty}$ is an order
two element, no two of the edges it corresponds to share a vertex, so they can
be Borel colored with a single color. Otherwise, $\gamma$ has infinite order, so
the edges corresponding to $\gamma$ and $\gamma^{-1}$ can be Borel 3-colored.
As in the proof of Lemma 1, the third color here can be the same for every
such $\gamma$, and used sparsely enough for each $\gamma$ so that there is no
conflict in the end.
Thus, Theorem 1 is proved.
∎
## References
* [1] A. Marks, A determinacy approach to Borel combinatorics. J. Amer. Math. Soc. 29 (2016), 579-600.
* [2] J. Grebík, O. Pikhurko, Measurable Versions of Vizing’s Theorem. Adv. Math 374 (2020).
* [3] A.S. Kechris, S. Solecki, and S. Todorcevic, Borel chromatic numbers, Adv. Math., 141 (1999), 1-44.
* [4] F. Weilacher, Descriptive chromatic numbers of locally finite and everywhere two ended graphs. Groups Geom. Dyn. Forthcoming.
# General-Purpose OCR Paragraph Identification by Graph Convolutional Neural
Networks
Renshen Wang, Yasuhisa Fujii and Ashok C. Popat

R. Wang, Y. Fujii and A. C. Popat are with Google Research, Mountain View, CA. Manuscript received January 29, 2021.
###### Abstract
Paragraphs are an important class of document entities. We propose a new
approach for paragraph identification by spatial graph convolutional neural
networks (GCN) applied on OCR text boxes. Two steps, namely line splitting and
line clustering, are performed to extract paragraphs from the lines in OCR
results. Each step uses a $\beta$-skeleton graph constructed from bounding
boxes, where the graph edges provide efficient support for graph convolution
operations. With only pure layout input features, the GCN model size is
3$\sim$4 orders of magnitude smaller compared to R-CNN based models, while
achieving comparable or better accuracies on PubLayNet and other datasets.
Furthermore, the GCN models show good generalization from synthetic training
data to real-world images, and good adaptivity for variable document styles.
###### Index Terms:
Optical character recognition, document layout, graph convolutional neural
network.
This work has been submitted to the IEEE for possible publication. Copyright
may be transferred without notice, after which this version may no longer be
accessible.
## 1 Introduction
Document image understanding is a task to recognize, structure, and understand
the contents of document images, and is a key technology to digitally process
and consume such document images. If we regard any images containing
structured text as document images, they are ubiquitous and can be found in
numerous applications. Document image understanding enables the conversion of
such documents into a digital format with rich structure and semantic
information and makes them available for subsequent information tasks.
A document can be represented by its semantic structure and physical structure
[1]. The task to recover the semantic structure is called logical layout
analysis [2] or semantic structure extraction [3] while the task to recover
the physical structure is called geometric (physical, or structural) layout
analysis [2]. These tasks are critical subproblems of document image
understanding.
A paragraph is a semantic unit of writing consisting of one or more sentences
that usually develops one main idea. Paragraphs are basic constituents of
semantic structure and thus paragraph boundary estimation (or paragraph
estimation, for short) is an important building block of logical layout
analysis. Moreover, paragraphs are often appropriate as processing units for
various downstream tasks such as translation and information extraction
because they are self-contained and have rich semantic information. Therefore,
developing a generic paragraph estimation algorithm is of great interest by
itself.
Figure 1: Examples of paragraphs in printed text. Paragraphs may have complex
shapes when wrapped around figures or other types of document entities.
Paragraphs are usually rendered in a geometric layout structure according to
broadly accepted typographical rules. For example, a paragraph can be rendered
as a series of text lines that
* are placed with uniform vertical spacing between adjacent lines;
* start with a line where one of the following is true:
  a. The line is indented. (An indented paragraph.)
  b. The line starts with a bullet symbol or number, with all the subsequent lines indented to be left-justified, flush with the first. (A list item.)
  c. The vertical spacing above the first line is significantly larger than the uniform spacing between subsequent lines. (A block paragraph.)
As such, there are usually clear visual cues to identify paragraphs. (Note: a semantic paragraph can span multiple text columns or pages. In this paper, we only look for physical paragraphs, where lines of contiguous indices are always physically proximate. Moreover, we regard stand-alone text spans such as titles and headings as single-line paragraphs.) Nevertheless, the task of estimating paragraphs often remains non-trivial, as shown in Fig. 1.
Previous studies have attempted to develop a paragraph estimation method by
defining handcrafted rules based on careful observations [4, 5, 6, 7] or by
learning an object detection model to identify the regions of paragraphs from
an image [3, 8]. For the former approaches, it is usually challenging to
define a robust set of heuristics even for a limited domain, and hence
machine-learning-based solutions are generally preferable. The latter approaches tend to have difficulty dealing with diverse aspect ratios and text shapes, as well as the wide range of degradations observed in real-world applications, such as image skews and perspective distortions.
In this paper, we propose to apply graph convolutional neural networks (GCNs)
in a post-processing step of an optical character recognition (OCR) system for
paragraph estimation. Modern OCR engines can detect and recognize texts with a
very high recall for documents in a variety of conditions. Indeed, as will be
shown in the experiments, our generic OCR system can detect and recognize
texts with a higher recall than a specialized image-based paragraph detector,
indicating little risk of missing correct paragraph boundaries by restricting
to the possibilities generated by the OCR engine. That motivates us to employ
the post-processing strategy rather than a pre-processing or an entangled
approach. Recent advancements in graph neural (convolutional) networks [9, 10]
have enabled deep learning on non-Euclidean data. GCNs can learn spatial
relationships among entities combining information from multiple sources and
provide a natural way to learn the non-linear mapping from OCR results to
paragraphs.
More specifically, we design two classifiers based on GCNs — one for line
splitting and one for line clustering. A word graph is constructed for the
first stage and a line graph is constructed for the second stage. Both graphs
are constructed based on the $\beta$-skeleton algorithm [11] that produces a
graph with good connectivity and sparsity.
To fully utilize the models’ capability, it is desirable to have a diverse set
of document styles in the training data. We create synthetic data sets from
web pages where the page styles are randomly modified in the web scraping
engine. By leveraging open web sites like Wikipedia [12] for source material
to render in randomized styles, we have access to practically unlimited
document data.
We evaluate the 2-step models on both the PubLayNet [8] and our own datasets.
We show that GCN based models can be small and efficient by taking OCR
produced bounding boxes as input, and are also capable of generating highly
accurate results. Moreover, with synthesized training data from a browser-
based rendering engine, these models can be a step towards a reverse rendering
engine that recovers comprehensive layout structure from document images.
This paper is organized as follows: Section 2 reviews related work. Section 3
presents our proposed method, where the graph construction method and the
details of each step of the algorithms are described. Section 4 explains
training data generation methods with web scraping. Experimental setups and
results are given in Section 5. Section 6 concludes the paper with suggestions
for future work.
## 2 Related Work
OCR layout analysis ("layout" for short) comprises a large variety of problems that have been studied from different angles. There is pre-recognition layout analysis, e.g. [13], which finds text lines as the input of recognition, and post-recognition layout analysis, e.g. [14], which finds higher-level entities based on OCR recognition results. We list selected studies that are most relevant to our problem in the following subsections.
### 2.1 Geometric and Rule-based Approaches
Multi-column text, often with small column gaps, needs to be identified before paragraphs can be found. Early studies proposed geometric methods [6, 15] and rule-based methods [4, 5, 7]. Both categories include algorithms that find column gaps by searching for whitespace [15] or text alignment [7].
Limitations of these approaches include susceptibility to input noise and false positive column boundaries, especially with monospace font families. Our production layout analyzer (in current use in products and services at the time of this writing) has been using a simpler rule-based heuristic algorithm that splits lines at white spaces significantly larger than the others. This simplification gives it a performance (computational and memory cost) advantage, but also hampers its ability to handle dense text columns.
Figure 2: Example of multiple short paragraphs densely packed and rotated into
a non axis-aligned direction. The right side shows the region proposal boxes
for object detection models.
### 2.2 Image Based Detection
The PubLayNet paper [8] provides a large dataset for multiple types of
document entities, as well as two object detection models F-RCNN [16] and
M-RCNN [17] trained to detect these entities. Both show good metrics in
evaluations, but also with some disadvantages on detecting paragraphs:
* Cost: Object detection models are typically large in size and expensive in computation. When used together with an OCR engine to retrieve text paragraphs, it seems wasteful to bypass the OCR results and attempt to detect paragraphs independently.
* Quality: Paragraph bounding boxes may have high aspect ratios and are sometimes tightly packed, making it difficult for Faster R-CNN detection. In Fig. 2, several short paragraphs are printed with dense text and rotated by 45 degrees. The region proposals required to detect all the paragraphs are highly overlapped, so some detections will be dropped by non-maximum suppression (NMS). Rotational R-CNN models [18] can mitigate this issue by inclined NMS, but further increase the computational cost while still facing a more difficult task with rotated or warped inputs.
### 2.3 Page Segmentation
Page segmentation models [3, 19, 20] classify every part of the image to
certain types of objects such as text, table, image and background. Sometimes
the shapes of paragraphs can be revealed by the “text” part of the
segmentation. However, when text is dense and paragraphs are indentation based
without variation in line spacings, individual paragraphs cannot be easily
extracted from large connected text regions. On the other hand, when text is
sparse and appears as a lot of separate small components, paragraphs are not
obvious in the segmentation result either.
### 2.4 Graph Neural Network for Table Detection
A graph neural network approach is proposed in [14] to detect tables in
invoice documents. It shows that tabular structures can be detected based
purely on structural information by graph neural networks.
Limitations of this approach include graph construction and graph
representation. First, the visibility graph is built by only connecting pairs
of pre-defined entities that are vertically or horizontally visible, which
requires the input image to be free of skews and distortions. Second, the adjacency
matrix learned by the GNN is $O(n^{2})$ in size and hence inefficient for
large inputs. A general-purpose post-OCR model will need to overcome these
limitations to accommodate all types of input images and achieve high
computational efficiency.
## 3 Paragraph Estimation with Graph Convolutional Neural Networks
A paragraph consists of a set of text lines, which are usually produced in the
output of OCR systems [21, 13]. If text lines are given by OCR systems, we can
consider a bottom-up approach to cluster text lines into paragraphs for the
paragraph estimation task.
The detected lines provide rudimentary layout information but may not match
the true text lines. For example, in Fig. 3, the lower section of the page consists of two text columns placed close to each other. The line detector
might be confused by the tiny spacing and find wrong lines spanning both
columns. These lines need to be split in the middle before being clustered
into paragraphs.
Line splitting and clustering are non-trivial tasks for general-purpose
paragraph estimation – the input images can be skewed or warped, and the
layout styles can vary among different types of documents, e.g. newspapers,
books, signs, web pages, handwritten letters, etc. Even though the concept of
paragraph is mostly consistent across all document categories, the appearance
of a paragraph can differ by many factors such as word spacing, line spacing,
indentation, text flowing around figures, etc. Such variations make it
difficult, if not impossible, to have a straightforward algorithm that
identifies all the paragraphs.
In order to address erroneous line detection and solve the non-trivial split
and clustering problem, we design a paragraph identification method as a
2-step process after the main OCR engine produces line and word bounding
boxes. Both steps use a graph convolutional neural network (GCN) that takes
input features from bounding boxes in the OCR result, together with a
$\beta$-skeleton graph [11] constructed from these boxes. Neither the original
image nor text transcriptions are included in the input, so the models are
small, fast, and entirely focused on the layout structure.
* Step 1: Line splitting. Raw text lines from OCR line detectors may cross multiple columns, and thus need to be split into shorter lines. A GCN node classifier predicts splitting points in lines.
* Step 2: Line clustering. After step 1 produces refined lines, they are clustered into paragraphs. A GCN edge classifier predicts clustering operations on pairs of neighboring lines.
The following subsections describe these steps in detail. In addition, we
discuss the possibility of an alternative one-step process.
Figure 3: Example of a double-column document image and its paragraphs. The
left side shows all the lines found by an OCR text line detector, and the
right side shows the paragraphs formed by clustered text lines within each
column. Figure 4: Comparison among different types of graphs constructed on an
example set of boxes.
### 3.1 $\bm{\beta}$-skeleton on Boxes
A graph is a key part of the GCN model input. We want a graph with high
connectivity for effective message passing in graph convolutions, while also
being sparse for computational efficiency.
Visibility graphs have been used in previous studies [22, 14], where edges are
made by “lines-of-sight”. They are not considered suitable for our models
because the lines may create excessive edges. Fig. 4(a) shows the visibility
graph built on two rows of boxes, where any pairs of boxes on different rows
are connected. This means word connections between text lines may get
$O(n^{2})$ number of edges. If we limit the lines-of-sight to be axis aligned
like Fig. 4(b), then the graph becomes too sparse, even producing disconnected
components in some cases.
By changing “lines-of-sight” into “balls-of-sight”, we get a $\beta$-skeleton
graph [11] with $\beta=1$. In such a graph, two boxes are connected if they
can both touch a circle that does not intersect with any other boxes. It
provides a good balance between connectivity and sparsity. As shown in Fig.
4(c), a $\beta$-skeleton graph does not have too many connections between rows of boxes. With $\beta=1$, it is a subgraph of a Delaunay triangulation [23] with the number of edges bounded by $O(n)$. Yet, it provides good connectivity
within any local cluster of boxes, and the whole graph is guaranteed to be one
connected component.
Figure 5: Building a $\beta$-skeleton on boxes from $\beta$-skeleton on
points. Left side: intersecting boxes are first connected with edges of length
0. Right side: Non-internal peripheral points are connected with
$\beta$-skeleton edges which are then collapsed into box edges. Edge lengths
are approximate.
The original $\beta$-skeleton graph is constructed on a point set. To apply it
to bounding boxes, we use an algorithm illustrated in Fig. 5 and described in
the following steps, where $n$ used for complexity analysis is the number of
boxes. We assume the length and width of all the input boxes are bounded by a
constant.
1. For each box, pick a set of peripheral points at a pre-set density, and pick a set of internal points along the longitudinal middle line. (The middle line points are added so that no edges can go through the boxes.)
2. Build a Delaunay triangulation graph $G_{D}$ on all the points. (Time complexity $O(n\log n)$ [23].)
3. Find all the "internal" points that are inside at least one of the boxes. (Time complexity $O(n)$ by traversing along $G_{D}$'s edges inside each box starting from any peripheral point. Internal points are marked grey in Fig. 5.)
4. Add an edge of length 0 for each pair of intersecting boxes (containing each other's peripheral points).
5. Pick $\beta$-skeleton edges from $G_{D}$, where for each edge $e=(v_{1},v_{2})$, both its vertices $v_{1}$, $v_{2}$ are non-internal points and the circle with $v_{1}v_{2}$ as diameter does not cover any other point. If there is such a point set $V_{c}$ covered by the circle, then the point $v_{3}\in V_{c}$ closest to $\overline{v_{1}v_{2}}$ must be a neighbor of either $v_{1}$ or $v_{2}$ (in Delaunay triangulation graphs). Finding such a $v_{3}$ takes $O(\log n)$ time for each edge, since $G_{D}$ produced in step 2 has edges sorted at each point.
6. Keep only the shortest edge for each pair of boxes as the $\beta$-skeleton edge.
The overall time complexity of this box based $\beta$-skeleton graph
construction is $O(n\log n)$, dominated by Delaunay triangulation. There are
pathological cases where step 4 will need $O(n^{2})$ time, e.g. all the $n$
boxes contain a common overlapping point. But these cases are easily excluded
from OCR results. The total number of edges is bounded by $O(n)$ as in
$G_{D}$, so the graph convolution layers have linear time operations.
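The core geometric test behind the $\beta$-skeleton with $\beta=1$ can be illustrated on plain points (the construction above additionally handles boxes, peripheral points, and Delaunay pruning). This is a naive $O(n^{3})$ sketch for intuition, not the $O(n\log n)$ algorithm:

```python
def gabriel_edges(points):
    """Naive construction of a beta-skeleton with beta=1 (a Gabriel graph):
    points p and q are connected iff the circle with segment pq as its
    diameter contains no other point."""
    edges = []
    for i, p in enumerate(points):
        for j in range(i + 1, len(points)):
            q = points[j]
            cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2       # circle center
            r2 = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / 4  # radius squared
            if all((x - cx) ** 2 + (y - cy) ** 2 >= r2
                   for k, (x, y) in enumerate(points) if k not in (i, j)):
                edges.append((i, j))
    return edges
```

For three collinear points, only adjacent pairs are connected, since the middle point falls inside the circle spanned by the outer pair.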
Figure 6: Overview of the line splitting model. In the output, line start
nodes are marked green and line end nodes are marked orange. Figure 7:
Overview of the line clustering model. In the output, positive edges are
marked pink.
### 3.2 Message Passing on Graphs
We use spatial-based graph convolutional neural networks (GCNs) [10, 24] for
both tasks of line splitting and line clustering, since both can leverage the
local spatial feature aggregation and combinations across graph edges (more
details in subsections 3.3 and 3.4 below).
Our graph convolutional neural network resembles the message passing neural
network (MPNN) [25] and GraphSage [26]. We use the term “message passing
phase” from [25] to describe the graph level operations in our models. In this
phase, repeated steps of “message passing” are performed based on a message
function $M$ and node update function $U$. At step $t$, a message
$M(h_{v}^{t},h_{w}^{t})$ is passed along every edge $e_{vw}$ in the graph
where $h_{v}^{t}$ and $h_{w}^{t}$ are the hidden states of node $v$ and $w$.
Let $N(v)$ denote the neighbors of node $v$ in the graph; the aggregated message received by $v$ via average pooling is
$m_{v}^{t+1}=\frac{\sum_{w\in N(v)}M(h_{v}^{t},h_{w}^{t})}{|N(v)|}$ (1)
and the updated hidden state is
$h_{v}^{t+1}=U(h_{v}^{t},m_{v}^{t+1})$ (2)
Alternatively, we can use attention weighted pooling [27] to enhance message
aggregation. Consequently, the model is also called a graph attention network
(GAT), where calculation of $m_{v}^{t+1}$ is replaced by
$m_{v}^{t+1}=\frac{\sum_{w\in
N(v)}\exp(e_{vw}^{t})M(h_{v}^{t},h_{w}^{t})}{\sum_{w\in
N(v)}\exp(e_{vw}^{t})}$ (3)
and $e_{vw}^{t}$ is computed from a shared attention mechanism
$e_{vw}^{t}=a(h_{v}^{t},h_{w}^{t})$ (4)
For $a$, we use the self-attention mechanism introduced in [28].
In our GCN models, the message passing steps are applied on the
$\beta$-skeleton graph constructed on OCR bounding boxes, so that structural
information can be passed around local vicinities along graph edges, and
potentially be combined and extracted into useful signals. Both the line
splitting step and the line clustering step rely on this mechanism to make
predictions on graph nodes or edges.
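As a minimal sketch of Eqs. (1)-(2), one message passing step can be written as follows, with plain callables standing in for the learned message function $M$ and update function $U$:

```python
import numpy as np

def message_passing_step(h, edges, M, U):
    """One message passing step per Eqs. (1)-(2): average-pool messages
    M(h_v, h_w) over the neighbors of each node, then update hidden states
    with U. `h` is (num_nodes, d); `edges` lists undirected (v, w) pairs;
    M and U are stand-ins for learned networks."""
    n, d = h.shape
    msg_sum = np.zeros((n, d))
    deg = np.zeros(n)
    for v, w in edges:
        msg_sum[v] += M(h[v], h[w])  # message from w to v
        msg_sum[w] += M(h[w], h[v])  # message from v to w
        deg[v] += 1
        deg[w] += 1
    m = msg_sum / np.maximum(deg, 1)[:, None]  # Eq. (1), average pooling
    return U(h, m)                             # Eq. (2), node update
```

With $M(h_v, h_w) = h_w$ and $U(h, m) = (h + m)/2$, one step simply averages each node with its neighborhood mean, which illustrates how alignment signals diffuse along $\beta$-skeleton edges.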
### 3.3 Splitting Lines
As in [6, 7], if multi-column text blocks are present in a document page,
splitting lines across columns is a necessary first step for finding
paragraphs. Here we have the same objective but a different input with
available OCR bounding boxes for each word and symbol. Image processing can be
skipped to accelerate computations.
Note that the horizontal spacings between words are not a reliable signal for this task: when the typography alignment of the text is "justified," i.e.
the text falls flush with both sides, these word spacings may be stretched to
fill the full column width. In Fig. 3, the bottom left line has word spacings
larger than the column gap. This is common in documents with tightly packed
text such as newspapers.
We use the GCN model shown in Fig. 6 to predict the splitting points, or tab-
stops. Each graph node is a word bounding box. Graph edges are the
$\beta$-skeleton edges built as described in section 3.1. The model output
contains two sets of node classification results – whether each word is a
“line start” and whether it is a “line end”. This model is expected to work
well for difficult cases like dense text columns with “justified” alignment by
aggregating signals from words in multiple lines surrounding the potential
splitting point.
Fig. 8 shows a zoomed-in area of Fig. 3 with a $\beta$-skeleton graph
constructed from the word bounding boxes. Since words are aligned on either
side of the two text columns, a set of words with their left edges all aligned
are likely on the left side of a column, i.e. these words are line starts.
Similarly, a set of words with right edges all aligned are likely on the right
side and are line ends. The $\beta$-skeleton edges are guaranteed to connect
aligned words in neighboring lines, since aligned words have the shortest
distance between the two lines and there is nothing in between to block the
connection. Thus, the alignment signal can be passed around in the message
passing steps and be effectively learned by the GCN model. Moreover, the two
sets of words beside the column gap are also connected with $\beta$-skeleton
edges crossing the two columns, so the signals can be mutually strengthened.
Figure 8: Line splitting signal from word box alignment propagating through
$\beta$-skeleton edges. The resulting predictions are equivalent to tab-stop
detection. Figure 9: Example of paragraph line clustering by indentations.
Light blue edges indicate the $\beta$-skeleton constructed on line bounding
boxes, and pink edges indicate that the connected lines are clustered into
paragraphs.
### 3.4 Clustering Lines
After splitting all the lines into “true” lines, the remaining task is to
cluster them into paragraphs. Again we use a graph convolutional neural
network, but now each graph node is a line bounding box, and the output is
edge classification similar to link prediction in [29, 30]. We define a
positive edge to be one that connects two consecutive lines in the same
paragraph. Note that it is possible to have non-consecutive lines in the same
paragraph being connected by a $\beta$-skeleton edge. Such edges are defined
as negative to make the task easier to learn.
Fig. 7 is an overview of the line clustering model. It looks similar to the
line splitting model in Fig. 6, except the input consists of line bounding
boxes, and the output predictions are on graph edges instead of nodes. An
additional “node-to-edge” step is necessary to enable edge classification with
node-level output from the graph convolution steps. It works in a similar way
as the first half of a graph convolution step, with node aggregation replaced
by edge aggregation:
$m^{\prime}_{e=(v,w)}=\frac{M^{\prime}(h_{v},h_{w})+M^{\prime}(h_{w},h_{v})}{2}$
(5)
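A sketch of the node-to-edge aggregation in Eq. (5); `M_prime` is a stand-in for the learned network, and the symmetric average makes the edge feature independent of node order:

```python
import numpy as np

def node_to_edge(h, edges, M_prime):
    """Eq. (5): compute a symmetric feature for each graph edge (v, w)
    from the node hidden states, so the edge prediction does not depend
    on which endpoint is listed first."""
    return np.stack([(M_prime(h[v], h[w]) + M_prime(h[w], h[v])) / 2
                     for v, w in edges])
```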
The model predicts whether two lines belong to the same paragraph on each pair
of lines connected with a $\beta$-skeleton edge. The predictions are made from
multiple types of context.
* Indentation: For example, in Fig. 9, which is zoomed in from Fig. 3, each new paragraph starts with an indented line, so the edge connecting the fourth and fifth lines in the left column is predicted as non-clustering.
* Vertical spacing: "Block paragraphs" are separated by extra vertical spacing, which is common in web pages. Line spacing signals are passed around in graph convolutions to detect vertical space variations.
* List items: The first line of a list item is usually outdented with a bullet point or a number, and the first word after it is flush with the following lines. So list items can be detected in a similar way as indentation-based paragraphs.
Besides the three common types listed above, we may have other forms of
paragraphs such as mailing addresses, computer source code or other customized
structures. The model can be trained on different types of layout data.
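The positive edge predictions naturally induce paragraphs as connected components of the line graph. A union-find sketch of this final grouping step (our illustration, not necessarily the authors' implementation):

```python
def edges_to_paragraphs(num_lines, positive_edges):
    """Group lines into paragraphs from positive edge predictions using
    union-find: lines joined (transitively) by positive edges end up in
    the same paragraph, and every line belongs to exactly one paragraph."""
    parent = list(range(num_lines))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for v, w in positive_edges:
        parent[find(v)] = find(w)

    groups = {}
    for line in range(num_lines):
        groups.setdefault(find(line), []).append(line)
    return sorted(groups.values())
```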
### 3.5 Possibility of Clustering Words
If a 1-step model could cluster the words directly into paragraphs, it would be preferable to the 2-step GCN models described above. A single model is not only faster to train and run, but also avoids cascading errors, where a mistake in the first step propagates to the second.
However, there is significant difficulty for a 1-step GCN model to work on the
paragraph word clustering problem. First, word based GCN models may not have
good signal aggregation for line level features because of the limited number
of graph convolution layers. The oversmoothing effect [31, 32] limits the
depth of the network, i.e. the number of “message passes”. With
$\beta$-skeleton graphs mostly consisting of local connections, the “receptive
field” on each graph node is small and often cannot cover a whole line. For
instance, a word at the end of a line has no information on whether this line
is indented. In a general purpose paragraph model where the input can be noisy
and deformed, this limitation can severely affect model performance.
While it is possible to extend the receptive fields by adding non-local edges
in the graph, or employing residual connections and dilated convolutions [32]
in the model, it is non-trivial to build a scalable and effective solution.
This is an interesting topic for further research, but not the focus of this
paper.
## 4 Synthetic Training Data from Web
A large set of diverse and high quality annotated data is a necessity for
training deep neural networks. Such datasets are not readily available for
paragraphs and layout-related tasks. The PubLayNet dataset [8] is a very large annotated set, but it lacks style diversity, as all the pages are from publications.
Therefore, we largely rely on automated training data generation [33]. By
taking advantage of high quality and publicly available web documents, as well
as a powerful rendering engine used in modern browsers, we can generate
synthetic training data with a web scraper.
Figure 10: Training data examples from web scraping with randomized style
changes and data augmentation. Green boxes indicate line ground truth labels
and yellow boxes indicate multi-line paragraph ground truth labels. (Yellow
paragraph boxes are for visualization purpose only. Paragraph ground truth
labels are represented by sets of line numbers in order to prevent ambiguity
when there are overlaps between paragraph bounding boxes.)
### 4.1 Scraping Web Pages with Modified Styles
Web pages are a good source of document examples. Wikipedia [12] is well known
to host a great number of high quality articles with free access.
We use a browser-based web scraper to retrieve a list of Wikipedia pages,
where each result includes the image rendered in the browser as well as the
HTML DOM (document object model) tree of that page. The DOM tree contains the
complete document structure and detailed locations of all the rendered
elements, from which we can reconstruct the ground truth line bounding boxes.
Each line bounding box is an axis-aligned rectangle covering a line of text.
For paragraph ground truth, the HTML tag $<$p$>$ conveniently indicates a
paragraph node, and all the text lines under this node belong to the same
paragraph.
TABLE I: Sample web script code for changing paragraph styles. Random combinations of these changes are used in training data synthesis.

| Style Change | Script Sample |
|---|---|
| Single-column to double-column | div.style.columnCount = 2; |
| Vertical spacing to indentation | div.style.textIndent = 30px; div.style.marginTop = 0; div.style.marginBottom = 0; |
| Typography alignment | div.style.textAlign = "right"; |
| Text column width | div.style.width = 50%; |
| Horizontal text block position | div.style.marginLeft = 20%; |
| Line height/spacing | div.style.lineHeight = 150%; |
| Font | div.style.fontFamily = "times"; |
One issue of using web page data directly for document layout is the lack of
diversity in document styles. Almost all web pages use vertical spacing to
separate paragraphs, and multi-column text is rare. Fortunately, modern web
browsers support extensions that can run script code on web pages to change
their CSS styles. For example, to generate double-column text for a certain
division of a page, we can use “div.style.columnCount = 2.”
Table I lists a few examples of web script code for changing paragraph styles.
Such pieces are randomly picked and combined in our training data pipeline.
Parameters such as column count and alignment type are also randomized. Thus,
the total combinations give a great diversity of styles to simulate various
types of documents to be encountered in the real world.
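A hypothetical sketch of how such random combinations might be generated; the candidate snippets and parameter ranges below are illustrative assumptions, not the production set:

```python
import random

# Candidate style mutations in the spirit of Table I. The specific
# parameter values are illustrative assumptions only.
STYLE_CHANGES = [
    lambda r: f"div.style.columnCount = {r.choice([2, 3])};",
    lambda r: f"div.style.textIndent = '{r.choice([20, 30, 40])}px'; "
              "div.style.marginTop = 0; div.style.marginBottom = 0;",
    lambda r: f"div.style.textAlign = '{r.choice(['left', 'right', 'justify'])}';",
    lambda r: f"div.style.width = '{r.choice([50, 70, 90])}%';",
    lambda r: f"div.style.lineHeight = '{r.choice([100, 150, 200])}%';",
    lambda r: f"div.style.fontFamily = '{r.choice(['times', 'arial', 'monospace'])}';",
]

def random_style_script(seed=None):
    """Pick a random subset of style changes with randomized parameters
    and emit one script snippet to inject into the scraped page."""
    rng = random.Random(seed)
    chosen = rng.sample(STYLE_CHANGES, k=rng.randint(1, len(STYLE_CHANGES)))
    return "\n".join(change(rng) for change in chosen)
```

Seeding the generator makes each synthesized page reproducible while still covering a large style space.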
### 4.2 Data Augmentation
A general-purpose OCR engine must accommodate all types of input images,
including photos of text taken at different camera angles. Our model should be
able to handle the same variations in input, so data augmentation is needed to
transform the rectilinear data scraped from web pages into photo-like data.
To emulate the effect of camera angles on a page, we need two types of
geometric transformation: rotation and perspective projection. Again, we use
randomized parameters in each transformation to diversify our data. Each data
point gets a random projection followed by a random rotation, applied to both
the image and ground truth boxes. Fig. 10 shows two training examples with
data augmentation.
* The left one has Arial font, dense text lines, paragraphs separated by indentation, and the camera placed near the upper-left corner.
* The right one has Monospace font, sparse text lines, paragraphs separated by vertical spacing, and the camera placed near the lower-right corner.
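The geometric part of this augmentation, a random perspective projection followed by a rotation applied to the ground truth boxes, can be sketched with a plain homography; the parameter conventions here are our own assumptions:

```python
import numpy as np

def augment_boxes(boxes, angle_deg, persp):
    """Apply a perspective projection followed by a rotation to ground
    truth boxes of shape (N, 4, 2), mirroring the transform applied to
    the page image. `persp` holds the two perspective terms of the
    homography; the exact parameterization is illustrative."""
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0],
                    [np.sin(a),  np.cos(a), 0],
                    [0, 0, 1]])
    proj = np.array([[1, 0, 0],
                     [0, 1, 0],
                     [persp[0], persp[1], 1]])
    H = rot @ proj  # projection first, then rotation
    pts = np.concatenate([boxes.reshape(-1, 2),
                          np.ones((boxes.size // 2, 1))], axis=1)
    out = pts @ H.T
    out = out[:, :2] / out[:, 2:3]  # perspective divide
    return out.reshape(boxes.shape)
```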
Note that we do not need pixel-level augmentation (imaging noise, illumination
variation, compression artifacts, etc.) for the training of our GCN models,
because these models only take bounding box features from the OCR engine
output, and are decoupled from the input image. Even when real input images
look very different from the training data images, the bounding boxes from a
robust OCR engine can still be consistent. It is assumed that the OCR engine
has been trained to be robust to pixel-level degradation, as is the case in
the present work.
### 4.3 Sequential 2-Step Training by Web Synthetic Data
We train the two GCN models in sequence, where the line clustering input
depends on the line splitting model. For each model, the classification ground
truth labels are computed from matching OCR output to the shapes of ground
truth (GT) lines. The GT lines are denoted by $\{L_{1},L_{2},\ldots,L_{n}\}$. Each
GT line is a rectangle from the web rendering engine, and is transformed into
a quadrilateral in data augmentation.
In the line splitting model, the graph nodes are the OCR word boxes, and the
output labels are node classifications on whether each node is a “line start”
and whether it is a “line end”. These labels can be computed by the following
two steps.
* For each OCR word $w_{i}$, allocate it to the ground truth line with maximum intersection area with the word bounding box.
* For each GT line $L_{j}$, sort the words allocated to this line along its longitudinal axis. The word at the left end is a line start, the word at the right end is a line end, and the remaining words are negative on both.
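The two labeling steps above can be sketched as follows, simplified to axis-aligned `(x0, y0, x1, y1)` boxes (the real pipeline matches augmented quadrilaterals):

```python
def rect_intersection(a, b):
    """Intersection area of two axis-aligned boxes (x0, y0, x1, y1)."""
    w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    return w * h

def split_labels(ocr_words, gt_lines):
    """(is_line_start, is_line_end) labels per OCR word: allocate each
    word to the GT line with maximum box intersection, then mark the
    leftmost and rightmost word of each line."""
    alloc = [max(range(len(gt_lines)),
                 key=lambda j: rect_intersection(w, gt_lines[j]))
             for w in ocr_words]
    labels = [[False, False] for _ in ocr_words]
    for j in range(len(gt_lines)):
        members = sorted((i for i in range(len(ocr_words)) if alloc[i] == j),
                         key=lambda i: ocr_words[i][0])  # sort by left edge
        if members:
            labels[members[0]][0] = True   # line start
            labels[members[-1]][1] = True  # line end
    return labels
```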
In the line clustering model, the graph nodes are the line boxes after the
line splitting process, and the output labels are edge classifications on
whether an edge connects two lines that are adjacent lines in the same
paragraph. For each $\beta$-skeleton edge that connects a pair of OCR line
boxes ($l_{i}$, $l_{j}$), we find the corresponding pair of GT line boxes
($L_{i^{\prime}}$, $L_{j^{\prime}}$) by the same maximum intersection area as
the step above. The edge label is positive if $L_{i^{\prime}}$ and $L_{j^{\prime}}$ belong to the same paragraph and $\left|i^{\prime}-j^{\prime}\right|=1$. (The condition $\left|i^{\prime}-j^{\prime}\right|=1$ means no skip-line positive edges, which makes the task easier to learn. For datasets without line-level ground truth labels, this condition is replaced by "there is no path in the $\beta$-skeleton graph shorter than edge ($l_{i}$, $l_{j}$)".)
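The edge labeling rule, in sketch form (GT line indices and paragraph membership assumed given by the matching step above):

```python
def cluster_edge_label(i_gt, j_gt, paragraph_of):
    """Label for a beta-skeleton edge whose two OCR lines matched GT lines
    i_gt and j_gt: positive iff both GT lines are in the same paragraph
    AND consecutive (|i' - j'| = 1), so skip-line edges within a
    paragraph stay negative."""
    return paragraph_of[i_gt] == paragraph_of[j_gt] and abs(i_gt - j_gt) == 1
```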
The line clustering model input is generated from line splitting on the OCR
results. While there remains some risk of cascading of errors, line clustering
is able to correct some mistakes in the previous line splitting step.
Specifically,
* For "under-splitting", i.e. when an OCR line covers multiple GT lines, there is no way to correct it by clustering, and the training example is discarded.
* For "over-splitting", i.e. when multiple OCR lines match the same GT line, the line clustering model can cluster the over-split short lines into the same paragraph and recover the original lines. See the second picture in Fig. 15 as an example. The sequential training steps enable this error correction.
It is worth noting that the ground truth labels associated with table elements are treated as "don't-care" and assigned weight 0 in training. The reason is that tables have very different structures from paragraphed text, and the two types of entities often produce contradicting labels within the current GCN framework. Using GCNs for table detection as in [14] is another interesting topic, but outside the scope of this paper.
## 5 Experiments
We experiment with the 2-step GCN models and evaluate the end-to-end
performance on both the open PubLayNet dataset and our own annotated sets.
In the end-to-end flow, the line splitting model and the line clustering model
work in a sequential order. It takes an OCR result page as input, and produces
a set of paragraphs each containing a set of lines, and every line in the page
belongs to exactly one paragraph.
### 5.1 Setups
We use the OCR engine behind the Google Cloud Vision API DOCUMENT_TEXT_DETECTION feature (https://cloud.google.com/vision/docs/fulltext-annotations), version 2020, for all the pre-layout detection and recognition tasks. Setup details are elaborated as follows.
#### 5.1.1 Data
We use 3 datasets in our evaluations: PubLayNet from [8], the web synthetic set described in Section 4, and a human annotated set with real-world images.
* PubLayNet contains a large number of document images with ground truth annotations: 340K in the training set and 12K in the development/validation set. The testing set ground truth has not been released at the time of this writing, so here we use the development set for testing.
* For web synthetic, we scrape 100K Wikipedia [12] pages in English for image based model training and testing at a 90/10 split. For GCN models, 10K pages are enough. An additional 10K pages in Chinese, Japanese and Korean are scraped to train the omni-script GCN models.
* We also use a human annotated set with real-world images: 25K in English for training and a few hundred for testing in each available language. The images are collected from books, documents or objects with printed text, and then sent to a team of raters who draw the ground truth polygons for all the paragraphs. Example images are shown in Fig. 14, 15 and 16.
#### 5.1.2 Models and Hyperparameters
The GCN models are built as in Fig. 6 and Fig. 7, each carrying 8 steps of
graph convolutions with 4-head self-attention weighted pooling [27, 28].
At the models’ input, each graph node’s feature is a vector encoding the
bounding box of its word or line. The first five values are
width $w$, height $h$, rotation angle $\alpha$, $\cos\alpha$ and $\sin\alpha$.
Then for each of its 4 corners $(x_{p},y_{p})$, we add 6 values $[x_{p},$
$x_{p}\cos\alpha,$ $x_{p}\sin\alpha,$ $y_{p},$ $y_{p}\cos\alpha,$
$y_{p}\sin\alpha]$. For line clustering, an additional $w_{1}$ indicating the
first word’s width is added to each line for better context of line breaks and
list items. These values provide the starting point for feature crossings and
combinations in graph convolutions. This low dimension of model input enables
lightweight and efficient computation.
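The feature construction described above can be sketched as follows. This is a minimal illustration with assumed argument conventions (angle in radians, corners as coordinate pairs), not the exact production code:

```python
import math

def box_features(w, h, alpha, corners, first_word_width=None):
    """Build a GCN node feature vector from a rotated bounding box.

    w, h:      box width and height
    alpha:     rotation angle (radians)
    corners:   the four (x, y) corner coordinates
    Returns 5 + 4*6 = 29 values; for line clustering, the first word's
    width is appended as a 30th value.
    """
    c, s = math.cos(alpha), math.sin(alpha)
    feats = [w, h, alpha, c, s]
    for (x, y) in corners:
        # 6 values per corner, pre-crossed with the rotation terms
        feats += [x, x * c, x * s, y, y * c, y * s]
    if first_word_width is not None:
        feats.append(first_word_width)
    return feats
```

Pre-crossing each coordinate with $\cos\alpha$ and $\sin\alpha$ gives the graph convolutions rotation-aware terms to combine, while keeping the input dimension small.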
Model inference latency is very low, under 20 milliseconds for an input graph
of around 1500 nodes (on a 12-core 3.7GHz Xeon CPU), since each GCN model is
less than 130KB in size with 32-bit floating point parameters. In fact, the
computation bottleneck is on the $\beta$-skeleton construction which can take
$\sim$50 milliseconds for the same graph. Compared to the main OCR process,
the overall GCN latency is small, and the $O(n\log n)$ complexity ensures
scalability.
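For reference, the $\beta$-skeleton neighborhood criterion can be illustrated with a naive construction. The sketch below assumes $\beta=1$, where the criterion reduces to the Gabriel graph; it runs in $O(n^3)$ and is only meant to show the geometry, not the efficient $O(n\log n)$ construction used in the system:

```python
import itertools

def gabriel_edges(points):
    """Naive beta-skeleton for beta = 1 (the Gabriel graph): points p and q
    are connected iff no third point lies strictly inside the circle whose
    diameter is the segment pq."""
    edges = []
    for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
        d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        ok = True
        for k, r in enumerate(points):
            if k in (i, j):
                continue
            # r lies inside the diameter circle iff the angle p-r-q is
            # obtuse, i.e. |pr|^2 + |rq|^2 < |pq|^2
            pr2 = (p[0] - r[0]) ** 2 + (p[1] - r[1]) ** 2
            rq2 = (r[0] - q[0]) ** 2 + (r[1] - q[1]) ** 2
            if pr2 + rq2 < d2:
                ok = False
                break
        if ok:
            edges.append((i, j))
    return edges
```

For three collinear points, the middle point blocks the long edge between the outer two, which is the locality property that keeps the graph sparse.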
We cannot claim that GCN models have better latency than image based R-CNN
models, because image models can run in parallel with the OCR engine when
resources allow. Instead, the small size of GCN models makes them easy to
deploy as a lightweight, low-cost and energy-efficient post-OCR layout
analysis step.
#### 5.1.3 Evaluation Metrics
While the classification tasks are evaluated by precision and recall, the end-
to-end performance is measured by IoU based metrics such as the COCO
mAP@IoU[.50:.95] used in [8], so the results are comparable.
The average precision (AP) for mAP is usually calculated on a precision-recall
curve. Since our models produce binary predictions instead of detection boxes,
we have only one output set of paragraph bounding boxes, i.e. only one point
on the precision-recall curve. So $AP=\mathrm{precision}\times\mathrm{recall}$.
We introduce another metric F1var using variable IoU thresholds, which is more
suitable for paragraph evaluations. In Fig. 11, a single-line paragraph has a
lower IoU even though it is correctly detected, while a 4-line detection (in
red) has a higher IoU with a missed line. This is caused by boundary errors at
character scale rather than at paragraph scale. This error is larger for post-
OCR methods since the OCR engine is not trained to fit paragraph boxes. If we
have line-level ground truth in each paragraph, and adjust IoU thresholds
$T_{iou}$ by
$T_{iou}=\min(1-\frac{1}{1+\#lines},0.95)$ (6)
the single-line paragraph will have IoU threshold 0.5, the 5-line one will
have IoU threshold 0.833, and both cases in Fig. 11 can be more reasonably
scored.
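Equation (6) is straightforward to implement; a small helper reproducing the thresholds quoted above:

```python
def variable_iou_threshold(num_lines):
    """Per-paragraph IoU threshold from Eq. (6):
    T_iou = min(1 - 1/(1 + #lines), 0.95).

    Short paragraphs get a lenient threshold (0.5 for a single line),
    approaching the cap of 0.95 as the line count grows."""
    return min(1.0 - 1.0 / (1.0 + num_lines), 0.95)
```

A matched paragraph counts as correct under F1var if its IoU with the ground truth box exceeds this per-paragraph threshold, rather than a single fixed value.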
Figure 11: Paragraph detection example from PubLayNet [8]. Red boxes are
different from ground truth in terms of enclosed words. A single-line correct
detection has lower IoU than a multi-line detection missing a line,
necessitating variable IoU thresholds in evaluations.
Both PubLayNet [8] and our web synthetic set have line level ground truth to
support this metric. For the human annotated set without line annotations, we
fall back to a fixed IoU threshold of 0.5.
#### 5.1.4 Baselines
The 2-step GCN models are compared against image based models and the
heuristic algorithm in our production system.
The image models include Faster R-CNN and Mask R-CNN used in [8], which work
on the PubLayNet data with non-augmented images. For broader testing on
augmented datasets, we train a Faster R-CNN model with an additional
quadrilateral output to indicate rotated boxes, denoted by “F-RCNN-Q” in
following subsections. This model uses a ResNet-101 [34] backbone and is
$\sim$200MB in size, smaller than the two models in [8] but still 3 orders of
magnitude larger than the GCN models.
For reference, the baseline heuristic algorithm takes the OCR recognized text
lines as input and generates paragraphs by the following steps.
1. Within each line, find white spaces between words that are significantly
wider than average, and split the line by these spaces into shorter lines.
2. For each line, repeatedly cluster nearby lines into its block by a distance
threshold, until no more proximate lines can be found.
3. Within each block, merge lines that are roughly placed on the same straight
line.
4. Within each block, create a new paragraph for each indented line.
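The first heuristic step can be sketched as follows. The `factor` parameter is a hypothetical stand-in for one of the production heuristic’s hand-tuned parameters:

```python
def split_line_by_wide_spaces(word_boxes, factor=2.0):
    """Heuristic step 1: split an OCR line at word gaps significantly
    wider than the line's average gap.

    word_boxes: (x_left, x_right) extents of words, sorted left to right.
    Returns a list of shorter lines (each a list of word boxes)."""
    if len(word_boxes) < 2:
        return [word_boxes]
    gaps = [word_boxes[i + 1][0] - word_boxes[i][1]
            for i in range(len(word_boxes) - 1)]
    avg = sum(gaps) / len(gaps)
    lines, current = [], [word_boxes[0]]
    for gap, box in zip(gaps, word_boxes[1:]):
        if gap > factor * avg:      # "significantly wider than average"
            lines.append(current)
            current = []
        current.append(box)
    lines.append(current)
    return lines
```

A fixed `factor` is exactly the kind of hand-tuned parameter that makes this baseline brittle under style variations, which motivates replacing it with a learned model.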
These rule-based steps were intended to handle multi-column text pages, but
the fixed, hand-tuned parameters make them inflexible to style variations.
Replacing them with the machine-learned GCN models proposed here greatly
enhances the algorithm’s performance and adaptivity.
### 5.2 GCN Classification Accuracies
We first check the metrics of the GCN classification tasks on various training
sets. Precision and recall scores of the binary classification tasks are shown
in Table II. Data augmentation is not applied to the PubLayNet data because
of the low resolution of its images, while the web synthetic data is tried
both with and without augmentation.
The human annotated training set is added to train the line clustering GCN
model, but not the line splitting model because it lacks dense, multi-column
text pages. Therefore, only the line clustering scores are shown for the
combined set in Table II. The scores on the annotated set are significantly
lower because of the diverse and noisy nature of the data source.
We also compare the $\beta$-skeleton graph with the two types of “line-of-
sight” graphs in Fig. 4. Since the edges are very different among these
graphs, Table III only compares node classification scores trained on the
augmented web synthetic set. When average pooling is used in graph
convolutions, the free “line-of-sight” graph in Fig. 4(a) achieves the best
scores. However, the $O(n^{2})$ size of the graph scales poorly and causes
out-of-memory errors when training with attention weighted pooling within our
environment. In practical use, the $\beta$-skeleton graph therefore offers the
best trade-off between accuracy and scalability for our purpose.
### 5.3 PubLayNet Evaluations
The PubLayNet dataset has five types of layout elements: text, title, list,
figure and table. For our task, we take text and title bounding boxes as
paragraph ground truth, and set all other types as “don’t-care” for both
training and testing.
TABLE II: Precision/recall pairs of the two GCN models’ classification tasks during training with different datasets. The $\beta$-skeleton graph is used for all tasks.

Dataset | Line start | Line end | Edge clustering
---|---|---|---
PubLayNet | 0.998/0.992 | 0.992/0.990 | 0.994/0.997
Web synthetic | 0.995/0.996 | 0.994/0.997 | 0.978/0.980
Augmented web synthetic | 0.988/0.986 | 0.990/0.987 | 0.958/0.966
Combined set: augmented web synthetic | - | - | 0.949/0.953
Combined set: human annotated | - | - | 0.901/0.912
TABLE III: Precision/recall pairs of the line splitting model using different types of graphs on the augmented web synthetic set. Both average pooling and attention weighted pooling are tested for message aggregation in graph convolutions.

Graph type | Pooling method | Line start | Line end
---|---|---|---
$\beta$-skeleton | Average | 0.982/0.978 | 0.981/0.978
$\beta$-skeleton | Attention | 0.988/0.986 | 0.990/0.987
Line-of-sight | Average | 0.983/0.985 | 0.984/0.988
Line-of-sight | Attention | -/- | -/-
Axis-aligned line-of-sight | Average | 0.972/0.974 | 0.970/0.971
Axis-aligned line-of-sight | Attention | 0.973/0.973 | 0.964/0.978
Figure 12: Representative PubLayNet examples of paragraphs by OCR followed by
GCN line splitting and line clustering.
Figure 13: Paragraph errors in PubLayNet examples caused by various types of
failures including OCR detection, line splitting and line clustering. (a)
Under splitting. (b) Over splitting. (c) Clustering errors for normal text and
math equations. (d) Clustering error across table boundary line.
Table IV shows that F-RCNN-Q matches the mAP scores in [8]. The GCN models are
worse in this metric because there is only one point in the precision-recall
curve, and the OCR engine is not trained to produce bounding boxes that match
the ground truth. In the bottom row of Table IV, “OCR + Ground Truth” is
computed by clustering OCR words into paragraphs based on ground truth boxes,
which is the upper bound for all post-OCR methods. For mAP scores, even the
upper bound is lower than the scores of image based models. However, if we
measure by F1var scores defined in subsection 5.1.3, OCR + GCNs can match
image based models with a slight advantage.
TABLE IV: Paragraph mAP@IoU[.50:.95] score and F1var comparison on the PubLayNet development set. Numbers in the first 2 rows are from [8].

Model | Training Set | mAP | F1var
---|---|---|---
F-RCNN [8] | PubLayNet training | 0.910 | -
M-RCNN [8] | PubLayNet training | 0.916 | -
F-RCNN-Q | PubLayNet training | 0.914 | 0.945
Tesseract [13] | - | 0.571 | 0.707
OCR + Heuristic | - | 0.302 | 0.364
OCR + GCNs | Augmented web synthetic | 0.748 | 0.867
OCR + GCNs | PubLayNet training | 0.842 | 0.959
OCR + Ground Truth | - | 0.892 | 0.997
The high F1var score of “OCR + Ground Truth” also shows that the OCR engine we
use has very high recall on text detection. The score falls short of 1 mostly
because of ground truth variations: a small fraction of single-line paragraphs
have IoU lower than 0.5.
Fig. 12 shows some GCN produced examples where all the paragraphs are
correctly identified. Errors made by the GCN models (or the OCR engine) are
shown in Fig. 13 with four examples:
* (a) Under splitting – the top line (marked red) should have been split into
two. This usually causes a large IoU drop and cannot be recovered by line
clustering.
* (b) Over splitting.
* (c) Clustering errors among text lines, and also on a math equation together
with detection errors.
* (d) A table annotation is clustered with a table cell across a boundary
line, because our models do not take image features and ignore non-text lines.
### 5.4 Synthetic Dataset Evaluations
The synthetic dataset from web scraping can give a more difficult test for
these models by its aggressive style variations.
Figure 14: Representative examples of real-world images with OCR followed by
GCN line splitting and line clustering. Blue boxes: words; green boxes: lines;
yellow boxes: paragraphs; pink line segments: positive line clustering
predictions.
Figure 15: Paragraph errors in real-world images. (a) Under splitting. (b)
Over splitting. (c) Over clustering table elements.
In Table V, we can see the F1var score of the image based F-RCNN-Q model
decreases sharply as the task difficulty increases. On the synthetic dataset
where the images are augmented with rotations and projections as in Fig. 10,
detection essentially breaks down, not only from the non-max suppression drops
shown in Fig. 2, but also from much worse box predictions.
TABLE V: Paragraph F1var score comparison across different types of models and datasets.

Model | Training/Testing Set | F1var
---|---|---
F-RCNN-Q | PubLayNet training/dev | 0.945
F-RCNN-Q | Web synthetic | 0.722
F-RCNN-Q | Augmented web synthetic | 0.547
OCR + GCNs | PubLayNet training/dev | 0.959
OCR + GCNs | Web synthetic | 0.830
OCR + GCNs | Augmented web synthetic | 0.827
In contrast, the GCN models are much less affected by data augmentations and
layout style variations. In particular, the F1var score change between the
augmented and non-augmented datasets is minimal. So GCN models will have an
even greater advantage when input images are non-axis-aligned.
### 5.5 Real-World Dataset Evaluations
The human annotated dataset can potentially show the models’ performance in
real-world applications. Since the annotated set is relatively small, the
F-RCNN-Q model needs to be pre-trained on other paragraph sets, while the GCN
models are small enough that the line clustering model can be trained entirely
on the paragraph annotations. Evaluation metric for this set is F1-score with
a fixed IoU threshold of 0.5.
TABLE VI: Paragraph F1-scores tested on the real-world test set with paragraph annotations. A fixed IoU threshold of 0.5 is used since there is no line-level ground truth to support variable thresholds.

Model | Training Data | F1
---|---|---
F-RCNN-Q | Augmented web synthetic | 0.030
F-RCNN-Q | Annotated data (pre-trained on PubLayNet) | 0.607
OCR + Heuristic | - | 0.602
OCR + GCNs | Augmented web synthetic | 0.614
OCR + GCNs | Annotated data | 0.671
OCR + GCNs | Combined set | 0.671
OCR + Ground Truth | - | 0.960
Table VI shows comparisons across different models and different training
sets. All the models should handle image rotations and perspective
transformations, so we only compare models trained on the augmented web
synthetic set or the human annotated set. First, we can see that Faster R-CNN
trained from synthetic web rendered pages does not work at all for real-world
images, whereas the GCN models can generalize well from synthetic training
data.
Also note that most of the annotated images are nearly axis-aligned, so the
GCN models will yield even greater advantage if the images are rotated or
taken with varied camera angles.
Fig. 14 and Fig. 15 show six examples of OCR + GCNs produced paragraphs. The
successful examples in Fig. 14 are all difficult cases for heuristic and
detection based approaches but are handled well by the GCN models. The image
on the right shows the effectiveness of training with augmented web synthetic
data, as there are no similar images in the annotated set. Error examples
produced by GCN are shown in Fig. 15:
* (a) Under splitting: the caption under the top-right picture is not split
from the paragraph on the left, causing downstream errors.
* (b) Over splitting: two lines in the middle are mistakenly split, but the
short line segments are then clustered back into the same paragraph, resulting
in a correct final output.
* (c) Over clustering table elements: since tables are “don’t-care” regions in
the training data, the GCN models trained with paragraph data may take table
elements as sparse text lines and incorrectly cluster them together. A table
detector may help to filter out these lines for paragraphs.
To verify the robustness of the GCN models for language and script diversity,
we test them on a multi-language evaluation set. The models are trained with
both synthetic and human annotated data in English, and additional synthetic
data from Wikipedia pages in Chinese, Japanese and Korean. No other Latin
language data is needed as the English data is sufficient to represent the
layout styles.
Table VII shows the F1-scores across multiple languages. F-RCNN-Q is not
evaluated for the three Asian languages, because we do not have suitable
training data, and Table VI indicates that synthetic training data is not
useful for this model. The GCN models produce the best results in almost all
the languages tried, once again showing good generalizability.
TABLE VII: Paragraph F1-scores tested on the multi-language evaluation set.

Language | OCR + Heuristic | F-RCNN-Q | OCR + GCNs | OCR + Ground Truth
---|---|---|---|---
English | 0.429 | 0.513 | 0.544 | 0.890
French | 0.438 | 0.557 | 0.553 | 0.885
German | 0.427 | 0.538 | 0.566 | 0.873
Italian | 0.455 | 0.545 | 0.556 | 0.862
Spanish | 0.449 | 0.597 | 0.616 | 0.885
Chinese | 0.370 | - | 0.485 | 0.790
Japanese | 0.398 | - | 0.487 | 0.772
Korean | 0.400 | - | 0.547 | 0.807
The GCN models are also flexible in handling text lines written in vertical
directions, which are common in Japanese and Chinese, and also appear in
Korean. Although we don’t have much training data with vertical lines, the
bounding box structures of lines and symbols in these languages remain the
same when the lines are written vertically, as if they were written
horizontally while the image is rotated clockwise by 90 degrees. Fig. 16 shows
such an example. Since our models are trained to handle all rotation angles,
such paragraphs can be correctly identified.
## 6 Conclusions and Future Work
We demonstrate that GCN models can be powerful and efficient for the task of
paragraph estimation. Provided with a good OCR engine, they can match image
based models with far lower training data and computation requirements, and
significantly outperform them on non-axis-aligned inputs with complex layout
styles. The graph convolutions in these models give them unique advantages in
dealing with different levels of page elements and their relations.
Future work includes model performance improvement through both training data
and model architectures. Training data can be made more realistic by tuning
the web scraping pipeline and adding more complex degradation transformations
such as wrinkling effects on document pages. Also, alternative model
architectures and graph structures mentioned in subsection 3.5 may improve
quality and performance.
Another aspect of the future work is to extend the GCN models’ capability to
identify more types of entities and extract document structural information
such as reading order. Some entities like titles and list items are similar to
paragraphs, while some others like tables and document sections are not
straightforward to handle with our proposed models. Image based CNNs may be
needed with their outputs used as node or edge features in the GCN model, so
that non-text components in the document (e.g. checkboxes, table grid lines)
can be captured. In addition, reading order among entities is a necessary step
if we want to identify semantic paragraphs that span across multiple
columns/pages.
Figure 16: Example of paragraphs from text lines with vertical writing
direction.
## Acknowledgments
The authors would like to thank Chen-Yu Lee, Chun-Liang Li, Michalis Raptis,
Sandeep Tata and Siyang Qin for their helpful reviews and feedback, and to
thank Alessandro Bissacco, Hartwig Adam and Jake Walker for their general
leadership support in the overall project effort.
## References
* [1] W. Horak, “Office document architecture and office document interchange formats: Current status of international standardization,” _Computer_ , vol. 18, pp. 50–60, October 1985.
* [2] R. Cattoni, T. Coianiz, S. Messelodi, and C. M. Modena, “Geometric layout analysis techniques for document image understanding: A review,” IRST, Trento, Italy, Tech. Rep. 9703-09, 1998.
* [3] X. Yang, E. Yumer, P. Asente, M. Kraley, D. Kifer, and C. L. Giles, “Learning to extract semantic structure from documents using multimodal fully convolutional neural networks,” in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2017.
* [4] D. Niyogi and S. N. Srihari, “A rule-based system for document understanding,” in _Proceedings of the Fifth AAAI National Conference on Artificial Intelligence_ , ser. AAAI’86. AAAI Press, 1986, p. 789–793.
* [5] S. N. Srihari and G. W. Zack, “Document image analysis,” in _Proceedings of the 8th International Conference on Pattern Recognition_. AAAI Press, 1986, p. 434–436.
* [6] T. M. Breuel, “High performance document layout analysis,” in _Proceedings of the Symposium on Document Image Understanding Technology_ , Greenbelt, MD, 2003, pp. 209–218.
* [7] R. W. Smith, “Hybrid page layout analysis via tab-stop detection,” in _10th International Conference on Document Analysis and Recognition, ICDAR 2009, Barcelona, Spain, 26-29 July 2009_. IEEE Computer Society, 2009, pp. 241–245. [Online]. Available: https://doi.org/10.1109/ICDAR.2009.257
* [8] X. Zhong, J. Tang, and A. Jimeno-Yepes, “Publaynet: Largest dataset ever for document layout analysis,” in _2019 International Conference on Document Analysis and Recognition, ICDAR 2019, Sydney, Australia, September 20-25, 2019_. IEEE, 2019, pp. 1015–1022. [Online]. Available: https://doi.org/10.1109/ICDAR.2019.00166
* [9] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, “The graph neural network model,” _IEEE Transactions on Neural Networks_ , vol. 20, no. 1, pp. 61–80, 2009.
* [10] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu, “A comprehensive survey on graph neural networks,” _IEEE Transactions on Neural Networks and Learning Systems_ , pp. 1–21, 2020.
* [11] D. G. Kirkpatrick and J. D. Radke, “A framework for computational morphology,” _Machine Intelligence and Pattern Recognition_ , vol. 2, pp. 217–248, 1985. [Online]. Available: https://doi.org/10.1016/B978-0-444-87806-9.50013-X
* [12] “Wikipedia, the free encyclopedia.” [Online]. Available: https://en.wikipedia.org
* [13] R. Smith, “An overview of the Tesseract OCR engine,” in _Proc. 9th IEEE Intl. Conf. on Document Analysis and Recognition (ICDAR)_ , 2007, pp. 629–633.
* [14] P. Riba, A. Dutta, L. Goldmann, A. Fornés, O. R. Terrades, and J. Lladós, “Table detection in invoice documents by graph neural networks,” in _2019 International Conference on Document Analysis and Recognition, ICDAR 2019, Sydney, Australia, September 20-25, 2019_. IEEE, 2019, pp. 122–127. [Online]. Available: https://doi.org/10.1109/ICDAR.2019.00028
* [15] T. M. Breuel, “An algorithm for finding maximal whitespace rectangles at arbitrary orientations for document layout analysis,” in _Seventh International Conference on Document Analysis and Recognition, 2003. Proceedings._ , 2003, pp. 66–70 vol.1.
* [16] S. Ren, K. He, R. B. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” _IEEE Trans. Pattern Anal. Mach. Intell._ , vol. 39, no. 6, pp. 1137–1149, 2017. [Online]. Available: https://doi.org/10.1109/TPAMI.2016.2577031
* [17] K. He, G. Gkioxari, P. Dollár, and R. B. Girshick, “Mask R-CNN,” _IEEE Trans. Pattern Anal. Mach. Intell._ , vol. 42, no. 2, pp. 386–397, 2020. [Online]. Available: https://doi.org/10.1109/TPAMI.2018.2844175
* [18] Y. Jiang, X. Zhu, X. Wang, S. Yang, W. Li, H. Wang, P. Fu, and Z. Luo, “R2 cnn: Rotational region cnn for arbitrarily-oriented scene text detection,” in _2018 24th International Conference on Pattern Recognition (ICPR)_ , 2018, pp. 3610–3615.
* [19] J. Lee, H. Hayashi, W. Ohyama, and S. Uchida, “Page segmentation using a convolutional neural network with trainable co-occurrence features,” in _2019 International Conference on Document Analysis and Recognition, ICDAR 2019, Sydney, Australia, September 20-25, 2019_. IEEE, 2019, pp. 1023–1028. [Online]. Available: https://doi.org/10.1109/ICDAR.2019.00167
* [20] A. L. L. M. Maia, F. D. Julca-Aguilar, and N. S. T. Hirata, “A machine learning approach for graph-based page segmentation,” in _31st SIBGRAPI Conference on Graphics, Patterns and Images, SIBGRAPI 2018, Paraná, Brazil, October 29 - Nov. 1, 2018_. IEEE Computer Society, 2018, pp. 424–431. [Online]. Available: https://doi.org/10.1109/SIBGRAPI.2018.00061
* [21] Y. Fujii, K. Driesen, J. Baccash, A. Hurst, and A. C. Popat, “Sequence-to-label script identification for multilingual OCR,” in _14th IAPR International Conference on Document Analysis and Recognition, ICDAR 2017, Kyoto, Japan, November 9-15, 2017_. IEEE, 2017, pp. 161–168. [Online]. Available: https://doi.org/10.1109/ICDAR.2017.35
* [22] B. L. Davis, B. S. Morse, S. Cohen, B. L. Price, and C. Tensmeyer, “Deep visual template-free form parsing,” in _2019 International Conference on Document Analysis and Recognition, ICDAR 2019, Sydney, Australia, September 20-25, 2019_. IEEE, 2019, pp. 134–141. [Online]. Available: https://doi.org/10.1109/ICDAR.2019.00030
* [23] M. d. Berg, O. Cheong, M. v. Kreveld, and M. Overmars, _Computational Geometry: Algorithms and Applications_ , 3rd ed. Santa Clara, CA, USA: Springer-Verlag TELOS, 2008.
* [24] D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams, “Convolutional networks on graphs for learning molecular fingerprints,” in _Advances in Neural Information Processing Systems_ , C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, Eds., vol. 28. Curran Associates, Inc., 2015, pp. 2224–2232.
* [25] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl, “Neural message passing for quantum chemistry,” in _Proceedings of the 34th International Conference on Machine Learning - Volume 70_ , ser. ICML’17. JMLR.org, 2017, p. 1263–1272.
* [26] W. L. Hamilton, Z. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” in _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA_ , I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, and R. Garnett, Eds., 2017, pp. 1024–1034. [Online]. Available: http://papers.nips.cc/paper/6703-inductive-representation-learning-on-large-graphs
* [27] P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, “Graph attention networks,” in _6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018_ , 2018. [Online]. Available: https://openreview.net/forum?id=rJXMpikCZ
* [28] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” _CoRR_ , vol. abs/1706.03762, 2017. [Online]. Available: http://arxiv.org/abs/1706.03762
* [29] D. Liben-Nowell and J. Kleinberg, “The link-prediction problem for social networks,” _Journal of the American Society for Information Science and Technology_ , vol. 58, no. 7, p. 1019–1031, May 2007.
* [30] M. Zhang and Y. Chen, “Link prediction based on graph neural networks,” in _Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada_ , 2018, pp. 5171–5181. [Online]. Available: http://papers.nips.cc/paper/7763-link-prediction-based-on-graph-neural-networks
* [31] Q. Li, Z. Han, and X. Wu, “Deeper insights into graph convolutional networks for semi-supervised learning,” in _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018_ , S. A. McIlraith and K. Q. Weinberger, Eds. AAAI Press, 2018, pp. 3538–3545.
* [32] G. Li, M. Müller, A. K. Thabet, and B. Ghanem, “Deepgcns: Can gcns go as deep as cnns?” in _2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019_. IEEE, 2019, pp. 9266–9275. [Online]. Available: https://doi.org/10.1109/ICCV.2019.00936
* [33] T. Kanungo and R. M. Haralick, “An automatic closed-loop methodology for generating character groundtruth for scanned documents,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 21, no. 2, pp. 179–183, 1999.
* [34] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2016, pp. 770–778.
* [35] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” _Proceedings of the IEEE_ , vol. 86, no. 11, pp. 2278–2324, 1998.
* [36] C. Clausner, A. Antonacopoulos, and S. Pletschacher, “ICDAR2019 competition on recognition of documents with complex layouts - RDCL2019,” in _2019 International Conference on Document Analysis and Recognition, ICDAR 2019, Sydney, Australia, September 20-25, 2019_. IEEE, 2019, pp. 1521–1526. [Online]. Available: https://doi.org/10.1109/ICDAR.2019.00245
# An elementary 158 km long quantum network connecting room temperature
quantum memories
Dounan Du, Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA
Paul Stankus, Brookhaven National Laboratory, Upton, NY 11973, USA
Olli-Pentti Saira, Brookhaven National Laboratory, Upton, NY 11973, USA
Mael Flament, Qunnect Inc., 141 Flushing Av. Suite 1110, Brooklyn, NY 11205, USA
Steven Sagona-Stophel, Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA
Mehdi Namazi, Qunnect Inc., 141 Flushing Av. Suite 1110, Brooklyn, NY 11205, USA
Dimitrios Katramatos, Brookhaven National Laboratory, Upton, NY 11973, USA
Eden Figueroa, Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA; Brookhaven National Laboratory, Upton, NY 11973, USA
###### Abstract
First-generation long-distance quantum repeater networks require quantum
memories capable of interfacing with telecom photons to perform quantum-
interference-mediated entanglement generation operations. The ability to
demonstrate these interconnections using real-life fiber connections in a
long-distance setting is paramount to realize a scalable quantum internet.
Here we address these significant challenges by observing Hong-Ou-Mandel (HOM)
interference between indistinguishable telecom photons produced in two
independent room temperature quantum memories, separated by a distance of 158
km. We obtained interference visibilities after long-distance propagation of
$V=(38\pm 2)\%$ for single-photon-level experimental inputs.
This first-of-its-kind quantum network prototype connecting quantum
laboratories in Stony Brook University and Brookhaven National Laboratory is
envisioned to evolve into a large-scale memory-assisted entanglement
distribution quantum network, the basis for inter-city quantum communication.
## I Introduction
Quantum information technologies, in which quantum mechanics’ physical laws
govern the control and transmission of information, show significant promise
for revolutionary new capabilities that are fundamentally different from what
is possible with classical technology alone Acin2018 . Any quantum processing
that takes place across multiple locations will require the preservation,
transport, and sharing of quantum states across significant distances. Thus a
widespread quantum network can usher in a new age of quantum information
processing, with never-before-seen capabilities in communication, sensing, and
computing Simon2017 . Prime examples of such quantum network protocols are
Measurement-Device-Independent Quantum Key Distribution (MDI-QKD) CurtisLo2002
and entanglement distribution using quantum repeaters Lloyd2001 .
Over the last decade, the experimental development of such networks’
components has been widely pursued. Prototypes have demonstrated elementary
quantum functionality, such as direct entanglement distribution Hensen2016 ;
Valivarthi2016 ; Sun2016 ; Valivarthi2020 , quantum-state transfer
Puigibert2020 ; Cao2020 , and short-distance interference-mediated
entanglement generation between light-matter quantum nodes Slodicka2013 . As
we move forward in implementing large quantum networks, we can list a set of
general properties that an extended network should have in order to be
practical Loock2020 . These follow the discussions presented, for example,
in the DOE-ASCR workshop “Quantum Networks for Open Science” ASCRQN2019 , the
NSF “Quantum InterConnects (QuICs) for Next-Generation Information
Technologies” NSFQINT2019 and the DOE “Quantum Internet Blueprint Workshop”
DOEQINT2020 .
* •
Portability. The essential components of each node, particularly quantum
memories and frequency converters, should all be portable, meaning that they
can be manufactured centrally and then deployed in the field with minimal
installation complexity.
* •
Room temperature operation. Any system built across a wide area and utilizing
a wide variety of locations benefits greatly from equipment with minimal
environmental demands, in particular if all the components can work at room
temperature without the need for cryogenic partitions or consumables.
* •
Agnostic/signal independence. Just as the operation of a digital data network
is essentially independent of what data is being carried over it, the quantum
network should have the capability to transport different kinds of quantum
states. Such a network is termed agnostic, i.e., not fixed to any one specific
computational basis.
* •
Transparent. The quantum network will also be greatly simplified if it is
transparent, i.e., all-optical, with no optical-to-electrical-to-optical
conversions across which it would be difficult to maintain entanglement at
room temperature.
* •
Compatible with telecom infrastructure. The building of a quantum network will
be greatly advanced if it can take advantage of the physical channels,
particularly optical fibers, that have already been highly developed and
widely deployed for data networks. In the case of using entangled photon
pairs, this will require a photon-photon transduction capability between the
wavelengths used by the sources, the memories, and existing telecom industry
fiber channels.
One of the most promising avenues to build first-generation quantum networks
is the use of atomic-ensemble-based optical quantum memories. These systems
have recently been shown to operate with high efficiency Wang2019 and long storage
times Yang2016 . Furthermore, elementary demonstrations of small quantum
networks have also been performed recently JWP2019 ; JWP2019_2 . This work has
focused on implementing the DLCZ protocol DLCZ2001 , by using Class 1
memories, which emit one photon first and another photon later on-demand,
relying on quantum interference to generate entanglement Cabrillo1999 . Room-
temperature quantum technology offers a very exciting solution to build these
Class 1 memory-based quantum repeater networks Novikova2012 . Recent studies
have demonstrated the potential of room-temperature quantum devices for this
purpose, including the demonstration of noiseless room-temperature quantum
memories for single-photons and polarization qubits Namazi2017 ;
Finkelstein2018 ; Kaczmarek2018 with coherence times of several seconds in
ambient conditions Katz2018 . Furthermore, preliminary quantum networks using
room-temperature memories are already available Namazi2017_2 ; Flament2019 .
Here, we demonstrate the implementation of a Class 1 room-temperature long-
distance quantum network of two quantum light-matter interfaces and benchmark
the interconnectivity between its components at telecom wavelengths. We
interconnected the quantum devices in a configuration akin to the one needed
to perform the Cabrillo entanglement scheme Cabrillo1999 . This configuration
requires two sources of telecom qubits (herein called Alice and Bob)
generating single-photon level pulses. We have chosen Alice and Bob to be two
room-temperature quantum memories performing transduction between the 795 and
1324 nm wavelengths Radnaev2010 . After transmission in long-distance fiber
links, the qubits are sent to the measurement station, where we investigate
the indistinguishability of the two single-photon wave-packets produced in the
memory using Hong-Ou-Mandel (HOM) interference HOM . We measure the two-
detector coincidence rate as a function of the relative arrival time between
the qubits.
We characterize our elementary long-distance quantum network in two different
configurations. First, we show Mach-Zehnder interferometry of the memory-
produced qubits in an interferometer with two 70 km long arms. Next, we
demonstrate the degree of indistinguishability of the telecom polarization
qubits transduced in two independent quantum memories using HOM interference
experiments, with one arm of the interferometer being 70 km and the second
being 88 km, in order to achieve phase independence between the qubits.
## II Telecom Compatible Quantum Memories
The first step in our procedure is to demonstrate that our rubidium vapor
quantum memory platform is compatible with telecom operation. This desired
property has been demonstrated in proof of principle experiments with
cryogenically cooled crystals Saglamyurek2015 ; Jin2015 ; Falamarzi2019 , but
not yet at room temperature.
### II.1 Two-photon absorption and telecom laser rubidium stabilization
Stabilization of a 1367 nm diode laser to the rubidium transitions is achieved
in the following manner. We apply an optical-optical double resonance (OODR)
scheme to obtain the $\rm{}^{85}Rb$ spectrum in the vicinity of $\rm 1.3\mu
m$. Figure 2a shows the energy scheme. The $\rm 5S_{1/2}\rightarrow 5P_{3/2}$
transition ($\ket{1}\rightarrow\ket{3}$) is at $\rm 780nm$ and the $\rm
5P_{3/2}\rightarrow 6S_{1/2}$ transition ($\ket{3}\rightarrow\ket{4}$) is at
$\rm 1367nm$. A $\rm 780nm$ pump laser is used to increase the population in
the $\rm 5P_{3/2}$ level. The pump laser is locked to the $\rm
5S_{1/2}F=3\rightarrow 5P_{3/2}F^{\prime}=4$ transition via a Doppler-free
saturated absorption spectroscopy (SAS) system. The $\rm 1367nm$ laser is
counterpropagating with the pump laser beam in a $\rm 7cm$ long Rubidium
reference cell. The cell is maintained at $\rm 60^{\circ}C$. An InGaAs
balanced amplified photodetector is used to obtain the spectrum signal. We
scan the $\rm 1367nm$ laser while keeping the pump laser locked to obtain the
OODR spectrum. We generate the error signal by modulating the $\rm 1367nm$
laser at $\rm 100kHz$ and use a standard locking loop, as shown in Figure 1.
Figure 1: The laser stabilization setup. SAS: saturated absorption
spectroscopy setup; LD: laser diode; BS: beam splitter; LF: long pass filter;
BPD: balanced amplified photodetector; FG1: function generator generating
100kHz laser current modulation signal; FG2: function generator used to
generate 10Hz signal for laser Piezo scanning; LPF: low-pass filter; OSC:
monitor oscilloscope; PI: proportional integral controller.
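The error-signal generation in Fig. 1 is a standard lock-in (frequency-modulation) scheme, and its logic can be sketched numerically. The line shape, linewidth, and modulation depth below are illustrative placeholders, not the measured OODR parameters:

```python
import numpy as np

def lorentzian(delta, gamma):
    """Illustrative absorption profile vs detuning delta (units of gamma)."""
    return 1.0 / (1.0 + (delta / gamma) ** 2)

def error_signal(delta0, gamma=6.0, mod_depth=0.5, f_mod=100e3, n=4000):
    """Demodulated (lock-in) signal for a laser centered at detuning delta0.

    The laser frequency is modulated at f_mod (100 kHz in the setup); mixing
    the photodetector signal with the modulation reference and low-pass
    filtering (here, averaging over full periods) yields a dispersive error
    signal that crosses zero at line center.
    """
    t = np.linspace(0.0, 10.0 / f_mod, n, endpoint=False)  # 10 full periods
    delta = delta0 + mod_depth * np.sin(2 * np.pi * f_mod * t)
    pd = lorentzian(delta, gamma)            # photodetector voltage
    ref = np.sin(2 * np.pi * f_mod * t)      # reference from FG1
    return 2.0 * np.mean(pd * ref)           # mix + low-pass

on_res = error_signal(0.0)                    # ~0 on resonance
below, above = error_signal(-2.0), error_signal(+2.0)  # opposite signs
```

The PI controller then steers the laser so as to hold this demodulated signal at zero.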
### II.2 Telecom Frequency Conversion in Rubidium (Theory and Simulation)
Figure 2: a. Diamond atomic scheme used for frequency conversion and for the
two-photon absorption spectroscopy. In our setup $\rm\ket{1}=^{85}Rb:5S_{1/2}F=3$,
$\rm\ket{2}=^{85}Rb:5P_{1/2}F=2$, $\rm\ket{3}=^{85}Rb:5P_{3/2}F=4$ and
$\rm\ket{4}=^{85}Rb:6S_{1/2}F=1$. b. A simulation of the frequency conversion
process using the simulation procedure described below. The two pump fields
are CW, while the $\rm 795nm$ signal is modulated to have a Gaussian envelope
using an AOM. We assume that the photodetector is placed at the output
port of the Rubidium cell.
A 4-level system in a diamond configuration is used for frequency conversion,
see Figure 2a. The signal field ($795nm$) is the input, which encodes the
qubit information. The near-resonance Pump I ($780nm$) and Pump II ($1367nm$)
are strong CW pump fields. The telecom field ($1324nm$) is the generated
field, which encodes the same qubit information as the signal field. The pump
fields are given as
$\vec{E}_{I}^{phy}(z,t)=\vec{E}_{I}(z)e^{-i(\omega_{I}t-\vec{k}_{I}\cdot\vec{z})}+c.c.\\\
$ (1)
$\vec{E}_{II}^{phy}(z,t)=\vec{E}_{II}(z)e^{-i(\omega_{II}t-\vec{k}_{II}\cdot\vec{z})}+c.c.$
(2)
Here the $\vec{E}_{I}^{phy}(z,t)$ is the physical field that can be measured.
The absolute value of $\vec{E}_{I}(z)$ is half of that of the physical field.
$|\vec{E}_{I}(z)|=\frac{1}{2}|\vec{E}_{I}^{phy}(z,t)|$ (3)
Generally $\vec{E}_{I}(z)$ is complex to include phase shifts. In this model
we assume it to be real to simplify the modeling. Since the signal and telecom
fields encode qubits, they are at the single-photon level. They are given as
$\hat{\vec{E}}_{s}^{phy}(z,t)=\vec{e}_{s}(\frac{\hbar\omega_{s}}{2V\epsilon_{0}})^{\frac{1}{2}}\hat{\psi}_{s}(z,t)e^{-i(\omega_{s}t-\vec{k}_{s}\cdot\vec{z})}+h.c.\\\
$ (4)
$\hat{\vec{E}}_{t}^{phy}(z,t)=\vec{e}_{t}(\frac{\hbar\omega_{t}}{2V\epsilon_{0}})^{\frac{1}{2}}\hat{\psi}_{t}(z,t)e^{-i(\omega_{t}t-\vec{k}_{t}\cdot\vec{z})}+h.c.\\\
$ (5)
Here $\hat{\psi}(z,t)$ is the wave envelope.
This model works on a reduced 4-level Hilbert space. The Hamiltonian is
$\hat{H}=\hat{H}_{0}+\hat{V}$ (6)
Here $\hat{H}_{0}$ is the free Hamiltonian of the four-level system without
the interaction with the fields. If we use the free-Hamiltonian energy
eigenstates $\ket{n}$ as our basis, the free Hamiltonian has the form
$\hat{H}_{0}=\begin{bmatrix}\hbar\omega_{1}&0&0&0\\\ 0&\hbar\omega_{2}&0&0\\\
0&0&\hbar\omega_{3}&0\\\ 0&0&0&\hbar\omega_{4}\\\ \end{bmatrix}$ (7)
The incident fields cause a perturbation $\hat{V}$ of the form
$\hat{V}=-\vec{\hat{d}}\cdot\vec{E}^{phy}$ (8)
where
$\vec{E}^{phy}=\vec{E}^{phy}_{I}+\vec{E}^{phy}_{II}+\vec{E}^{phy}_{s}+\vec{E}^{phy}_{t}$
(9)
and $\vec{\hat{d}}$ is the dipole operator. In our diamond configuration,
the nonzero elements of the dipole operator are
$\vec{\hat{d}}=\begin{bmatrix}0&\vec{d}_{12}&\vec{d}_{13}&0\\\
\vec{d}_{21}&0&0&\vec{d}_{24}\\\ \vec{d}_{31}&0&0&\vec{d}_{34}\\\
0&\vec{d}_{42}&\vec{d}_{43}&0\\\ \end{bmatrix}$ (10)
To proceed, the rotating-wave approximation is used to simplify the
Hamiltonian. First, we decompose the $E$ field into positive- and
negative-rotating parts. The Pump I physical field can be written as
$\vec{E}_{I}^{phy}(z,t)=\vec{E}_{I}^{+}(z,t)+\vec{E}_{I}^{-}(z,t)$ (11)
where
$\vec{E}_{I}^{+}(z,t)\equiv\vec{E}_{I}(z)e^{-i(\omega_{I}t-\vec{k}_{I}\cdot\vec{z})}$
(12)
and $\vec{E}_{I}^{-}(z,t)$ equals its complex conjugate. A similar
decomposition can be done with the dipole operator. We write the dipole
operator from equation (10) as
$\begin{split}\vec{\hat{d}}=&\vec{d}_{12}\ket{1}\bra{2}+\vec{d}_{21}\ket{2}\bra{1}+\\\
&\vec{d}_{13}\ket{1}\bra{3}+\vec{d}_{31}\ket{3}\bra{1}+\\\
&\vec{d}_{24}\ket{2}\bra{4}+\vec{d}_{42}\ket{4}\bra{2}+\\\
&\vec{d}_{34}\ket{3}\bra{4}+\vec{d}_{43}\ket{4}\bra{3}\end{split}$ (13)
We can adjust the energy scale so that the energy of level 1 is the zero
energy reference point. In the interaction picture, the first operator on the
right becomes
$\vec{d}_{12}\ket{1}\bra{2}\rightarrow
U^{\dagger}\vec{d}_{12}\ket{1}\bra{2}U=\vec{d}_{12}\ket{1}\bra{2}e^{-i\omega_{21}t}$
(14)
It contains a positively rotating phase factor, which means its expectation
value contains the same phase factor, so we write
$\begin{split}\vec{d}_{12}^{+}&\equiv\vec{d}_{12}\ket{1}\bra{2}\\\
\vec{d}_{12}^{-}&\equiv\vec{d}_{21}\ket{2}\bra{1}=\vec{d}_{12}\ket{2}\bra{1}\end{split}$
(15)
Notice that we assume the matrix elements of the dipole operator are real. We
can achieve this by choosing the phase factors of the energy eigenkets. This
is always possible because we have $4$ unknown phase factors and $4$
equations (the difference of the corresponding phase factors is zero), one for
each dipole matrix element from the upper-right triangular section of the
matrix. Therefore a solution always exists. With our decomposition of the
fields and the dipole operator, we rewrite the perturbation given by equation (8) as
$\begin{split}\hat{V}=-&(\vec{d}_{12}^{+}+\vec{d}_{12}^{-}+\vec{d}_{13}^{+}+\vec{d}_{13}^{-}+\vec{d}_{24}^{+}+\vec{d}_{24}^{-}+\vec{d}_{34}^{+}+\vec{d}_{34}^{-})\cdot\\\
&[\vec{E}_{I}^{+}(z,t)+\vec{E}_{I}^{-}(z,t)+\vec{E}_{II}^{+}(z,t)+\vec{E}_{II}^{-}(z,t)\\\
&\vec{E}_{s}^{+}(z,t)+\vec{E}_{s}^{-}(z,t)+\vec{E}_{t}^{+}(z,t)+\vec{E}_{t}^{-}(z,t)]\end{split}$
(16)
Notice that the dipole operator rotates at the frequency corresponding to the
energy difference between levels, while each field rotates at the frequency of
its laser. We then make two approximations. First, only the slowly rotating
terms, oscillating at the small laser detuning frequencies, make a significant
contribution in the Schrödinger equation; second, terms oscillating at the
large frequency differences between other pairs of levels contribute little.
Hence laser Pump I only couples levels 1 and 3, and similarly for the other
fields. With the above approximations, we can rewrite equation (16) as
$\begin{split}\hat{V}=-[&\vec{d}_{12}^{-}\cdot\vec{E}_{s}^{+}(z,t)+\vec{d}_{13}^{-}\cdot\vec{E}_{I}^{+}(z,t)+\\\
&\vec{d}_{24}^{-}\cdot\vec{E}_{t}^{+}(z,t)+\vec{d}_{34}^{-}\cdot\vec{E}_{II}^{+}(z,t)+h.c]\end{split}$
(17)
For the numerical simulation, we transform the system into the rotating frame.
In the Schrödinger picture, we rewrite the perturbation from equation (17) in
matrix form
$\hat{V}=-\leavevmode\resizebox{359.90538pt}{}{$\begin{bmatrix}0&\vec{d}_{12}\cdot\vec{E}_{s}(z)e^{i(\omega_{s}t-\vec{k}_{s}\cdot\vec{z})}&\vec{d}_{13}\cdot\vec{E}_{I}(z)e^{i(\omega_{I}t-\vec{k}_{I}\cdot\vec{z})}&0\\\
\vec{d}_{12}\cdot\vec{E}_{s}(z)e^{-i(\omega_{s}t-\vec{k}_{s}\cdot\vec{z})}&0&0&\vec{d}_{24}\cdot\vec{E}_{t}(z)e^{i(\omega_{t}t-\vec{k}_{t}\cdot\vec{z})}\\\
\vec{d}_{13}\cdot\vec{E}_{I}(z)e^{-i(\omega_{I}t-\vec{k}_{I}\cdot\vec{z})}&0&0&\vec{d}_{34}\cdot\vec{E}_{II}(z)e^{i(\omega_{II}t-\vec{k}_{II}\cdot\vec{z})}\\\
0&\vec{d}_{24}\cdot\vec{E}_{t}(z)e^{-i(\omega_{t}t-\vec{k}_{t}\cdot\vec{z})}&\vec{d}_{34}\cdot\vec{E}_{II}(z)e^{-i(\omega_{II}t-\vec{k}_{II}\cdot\vec{z})}&0\end{bmatrix}$}$
(18)
Notice that the matrix elements rotate at the laser frequencies, which are of
the order of hundreds of terahertz. Simulating the model directly would
require time steps shorter than the laser period, demanding excessive
computational resources. We therefore need to eliminate the time dependence of
the elements, which is achieved by going into the co-rotating frame of the lasers.
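The rough arithmetic behind this claim, assuming roughly ten time steps per oscillation period and dynamics on the ~100 MHz scale (an illustrative order of magnitude, not a measured value):

```python
# Order-of-magnitude comparison of the time steps needed to integrate the
# lab-frame Hamiltonian of Eq. (18) versus the rotating-frame one, assuming
# ~10 steps per oscillation period.
c = 2.998e8                       # speed of light, m/s
nu_telecom = c / 1324e-9          # optical frequency of the 1324nm field, Hz
dt_lab = 0.1 / nu_telecom         # lab frame must resolve the optical cycle
dynamics_scale = 200e6            # detunings/Rabi frequencies, Hz (assumed)
dt_rot = 0.1 / dynamics_scale     # rotating frame resolves only these
speedup = dt_rot / dt_lab         # ~10^6 fewer steps per unit of time
```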
To understand the co-rotating frame, let us recall the interaction picture. In
the interaction picture, the states and operators are transformed by a unitary
operator $U$, which gives each energy eigenstate a rotating phase factor,
$e^{-i\omega_{n}t}\ket{n}$, where $\omega_{n}$ is the transition frequency of
level $n$ relative to the reference level, usually the ground state.
The transformation to the rotating frame is very similar to the transformation
to the interaction picture, except that the unitary operator brings a phase
factor rotating at the corresponding laser frequency instead of the atomic
transition frequency. In this 4-level system, we can choose level 4 as our
reference level, so that the unitary operator is
$U_{R}^{\dagger}=\begin{bmatrix}e^{-i[(\omega_{I}+\omega_{II})t-(\vec{k}_{I}+\vec{k}_{II})\cdot\vec{z}]}&0&0&0\\\
0&e^{-i(\omega_{t}t-\vec{k}_{t}\cdot\vec{z})}&0&0\\\
0&0&e^{-i(\omega_{II}t-\vec{k}_{II}\cdot\vec{z})}&0\\\ 0&0&0&1\end{bmatrix}$
(19)
Here the time-dependent phase factors are enough to remove the time
dependence. However, it is convenient to also remove the $z$ dependence of the
Hamiltonian. The trick is to ensure that the $\vec{k}$ terms always have the
opposite sign to the $t$ terms in the phase factor.
Equation (19) is the Hermitian conjugate of the operator $U_{R}$ that
corresponds to $U$ in the interaction-picture transformation; it is the
Hermitian conjugate that acts on the states
$\ket{\tilde{n}}=U_{R}^{\dagger}\ket{n}$ (20)
where the tilde means this is the ket in the rotating frame. The Hamiltonian
transforms in such a way as to preserve the Schrödinger equation for the
rotating-frame states.
$\tilde{H}=U_{R}^{\dagger}HU_{R}+i\hbar(\partial_{t}U_{R}^{\dagger})U_{R}$
(21)
We want to preserve the Schrödinger equation because the master equation of
the density matrix is based on the Schrödinger equation.
The Hamiltonian in rotating frame is
$\tilde{H}=\hbar\begin{bmatrix}\omega_{1}+\omega_{II}+\omega_{I}&-e^{i\phi}\Omega_{s}&-\Omega_{I}&0\\\
-e^{-i\phi}\Omega_{s}&\omega_{2}+\omega_{t}&0&-\Omega_{t}\\\
-\Omega_{I}&0&\omega_{3}+\omega_{II}&-\Omega_{II}\\\
0&-\Omega_{t}&-\Omega_{II}&\omega_{4}\end{bmatrix}$ (22)
where
$\phi=(\vec{k}_{I}+\vec{k}_{II}-\vec{k}_{s}-\vec{k}_{t})\cdot\vec{z}+(\omega_{s}+\omega_{t}-\omega_{I}-\omega_{II})t$
(23)
and
$\begin{split}&\Omega_{I}(z)\equiv\frac{\vec{E}_{I}(z)\cdot\vec{d}_{13}}{\hbar}\\\
&\Omega_{II}(z)\equiv\frac{\vec{E}_{II}(z)\cdot\vec{d}_{34}}{\hbar}\end{split}$
(24)
are the laser Rabi frequencies,
$\begin{split}&\Omega_{s}(z)\equiv\frac{1}{\hbar}(\frac{\hbar\omega_{s}}{2V\epsilon_{0}})^{\frac{1}{2}}\vec{e}_{s}\cdot\vec{d}_{12}\\\
&\Omega_{t}(z)\equiv\frac{1}{\hbar}(\frac{\hbar\omega_{t}}{2V\epsilon_{0}})^{\frac{1}{2}}\vec{e}_{t}\cdot\vec{d}_{24}\end{split}$
(25)
are the single photon Rabi frequencies.
We can further simplify the Hamiltonian. First, the generated telecom photon
must satisfy energy and momentum conservation,
$\omega_{s}+\omega_{t}=\omega_{I}+\omega_{II}$ (26)
$\vec{k}_{s}+\vec{k}_{t}=\vec{k}_{I}+\vec{k}_{II}$ (27)
Using these relations, we find that the phase factor in equation (23) is $\phi=0$.
Second, the calculation is not changed if we shift the reference energy
level. This is achieved by adding multiples of the identity operator to
$\tilde{H}$. Notice that we can do this even though we are in the rotating
frame rather than the Schrödinger picture: the transformation given by
equation (21) does not change the diagonal elements of the original $H$, up to
constants introduced by the second term on the right of equation (21). If we
add $\alpha I$ to $\tilde{H}$, it is the same as adding $\alpha I$ to $H$
first and then performing the transformation given by equation (21). Here, we
want to express the diagonal elements in terms of the laser detunings, so we
add $-(\omega_{1}+\omega_{I}+\omega_{II})\hat{I}$ to $\tilde{H}$
$\tilde{H}=\hbar\begin{bmatrix}0&-\Omega_{s}&-\Omega_{I}&0\\\
-\Omega_{s}&-\Delta_{s}&0&-\Omega_{t}\\\
-\Omega_{I}&0&-\Delta_{I}&-\Omega_{II}\\\
0&-\Omega_{t}&-\Omega_{II}&-\Delta_{41}\end{bmatrix}$ (28)
where $\Delta_{s}=\omega_{s}-(\omega_{2}-\omega_{1})$,
$\Delta_{I}=\omega_{I}-(\omega_{3}-\omega_{1})$,
$\Delta_{II}=\omega_{II}-(\omega_{4}-\omega_{3})$ are the one-photon detunings
and $\Delta_{41}=\Delta_{I}+\Delta_{II}$ is the two-photon detuning.
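For concreteness, the rotating-frame Hamiltonian of equation (28) is straightforward to assemble as a numerical matrix; the Rabi frequencies and detunings below are placeholders rather than experimental values:

```python
import numpy as np

def diamond_hamiltonian(Om_s, Om_I, Om_II, Om_t, D_s, D_I, D_II):
    """Eq. (28) in units of hbar; all frequencies share one angular unit."""
    D_41 = D_I + D_II                     # two-photon detuning
    return -np.array([[0.0,  Om_s, Om_I,  0.0],
                      [Om_s, D_s,  0.0,   Om_t],
                      [Om_I, 0.0,  D_I,   Om_II],
                      [0.0,  Om_t, Om_II, D_41]])

# Placeholder values: weak single-photon couplings, strong CW pumps.
H = diamond_hamiltonian(Om_s=0.01, Om_I=5.0, Om_II=8.0, Om_t=0.01,
                        D_s=0.0, D_I=0.0, D_II=200.0)
# H is real symmetric, and only the diamond couplings 1-2, 1-3, 2-4, 3-4
# appear; the 1-4 and 2-3 elements are zero.
```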
From equation (20) the density matrix in the rotating frame is
$\tilde{\rho}=U_{R}^{\dagger}\rho U_{R}$ (29)
or, expressing $\rho$ in terms of $\tilde{\rho}$,
$\rho=U_{R}\tilde{\rho}U_{R}^{\dagger}$ (30)
where $\rho$ is the original density matrix in the Schrödinger picture. The
evolution of the density matrix in the rotating frame is governed by the
master equation
$\dot{\tilde{\rho}}=-i[\tilde{H},\tilde{\rho}]+\sum_{a>0}(L_{a}\tilde{\rho}L_{a}^{\dagger}-\frac{1}{2}L_{a}^{\dagger}L_{a}\tilde{\rho}-\frac{1}{2}\tilde{\rho}L_{a}^{\dagger}L_{a})$
(31)
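A minimal sketch of integrating equation (31), written for generic collapse operators $L_a$ and checked on a two-level decay toy model; the decay channel and rate here are illustrative, not those of the rubidium system:

```python
import numpy as np

def lindblad_rhs(rho, H, Ls):
    """Right-hand side of Eq. (31) for collapse operators Ls (hbar = 1)."""
    d = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        d += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return d

# Toy check: a two-level atom decaying |2> -> |1> at rate gamma relaxes as
# exp(-gamma * t); here we Euler-step the master equation to t = 1.
gamma = 1.0
L21 = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)
H2 = np.zeros((2, 2), dtype=complex)
rho = np.array([[0, 0], [0, 1]], dtype=complex)   # start in excited state
dt = 1e-3
for _ in range(1000):
    rho = rho + dt * lindblad_rhs(rho, H2, [L21])
excited_pop = rho[1, 1].real                      # close to exp(-1)
```

The Lindblad form guarantees that the trace of $\tilde{\rho}$ is preserved, which the Euler stepping above inherits step by step.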
The polarization of the medium is
$\vec{P}=n\text{Tr}(\rho\vec{\hat{d}})$ (32)
and the light propagation is governed by
$(\nabla^{2}-\frac{1}{c^{2}}\frac{\partial^{2}}{\partial
t^{2}})\vec{E}=\mu_{0}\frac{\partial^{2}\vec{P}}{\partial t^{2}}$ (33)
Substituting Eqs. (4), (5), and (32) into Eq. (33) and matching the terms with
the same rotating frequency gives
$(\frac{\partial}{\partial t}+c\frac{\partial}{\partial
z})\vec{\psi}_{s}(z,t)=i\frac{\omega_{s}nV}{\epsilon_{0}}\tilde{\rho}_{21}\vec{d}_{12}$
(34) $(\frac{\partial}{\partial t}+c\frac{\partial}{\partial
z})\vec{\psi}_{t}(z,t)=i\frac{\omega_{t}nV}{\epsilon_{0}}\tilde{\rho}_{42}\vec{d}_{24}$
(35)
These equations couple the atomic states to the field envelopes. Together
with Eq. (31), the three equations fully describe the conversion process. We
conduct a numerical simulation to investigate the frequency response of the
conversion process. In the simulation we apply several simplifications: we
only consider an effective four-level system, ignoring the effects of all
hyperfine structure, and we completely ignore the Doppler effect from the
motion of the atoms. We simulate a $\rm 7cm$ long cell with a beam diameter of
$\rm 0.29mm$ and an atomic density of $4\times 10^{17}/m^{3}$. We set the
$780nm$ and $1367nm$ pumps to be CW light and the signal $795nm$ to be a
Gaussian pulse. We found that the conversion efficiency, defined as the ratio
of the converted $1324nm$ intensity to the incident $795nm$ light intensity,
is very sensitive to the choice of the dipole moments. Since we ignore all the
hyperfine levels, we do not expect an accurate conversion efficiency from the
numerical simulation. Figure 2b shows a typical conversion process from the
numerical simulation. We calculate the conversion efficiency as a function of
the detunings of the two pump fields; the results are shown in Fig. 3.
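The temporal dynamics of Fig. 2b can be sketched in a thin-medium toy version of the model: rather than integrating the propagation equations (34)-(35) along $z$, the generated telecom envelope at the cell output is taken proportional to the coherence $\tilde{\rho}_{42}$. All rates, Rabi frequencies, and detunings below are illustrative placeholders, not the experimental values:

```python
import numpy as np

def lindblad_step(rho, H, Ls, dt):
    """One Euler step of the master equation, Eq. (31), with hbar = 1."""
    d = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        d += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return rho + dt * d

def hamiltonian(Om_s, Om_I=5.0, Om_II=5.0, Om_t=0.0):
    """Eq. (28) on resonance, with a time-dependent signal Rabi frequency."""
    return -np.array([[0, Om_s, Om_I, 0],
                      [Om_s, 0, 0, Om_t],
                      [Om_I, 0, 0, Om_II],
                      [0, Om_t, Om_II, 0]], dtype=complex)

g = 1.0                                  # made-up spontaneous decay rate
def lower(i, j):
    """Collapse operator sqrt(g)|i><j| between 1-indexed levels."""
    L = np.zeros((4, 4), dtype=complex)
    L[i - 1, j - 1] = np.sqrt(g)
    return L
Ls = [lower(1, 2), lower(1, 3), lower(2, 4), lower(3, 4)]

dt, T = 5e-4, 10.0
ts = np.arange(0.0, T, dt)
rho = np.zeros((4, 4), dtype=complex)
rho[0, 0] = 1.0                          # atoms start in |1>
P_t = []                                 # telecom power proxy |rho_42|^2
for t in ts:
    Om_s = 0.1 * np.exp(-((t - 5.0) / 1.0) ** 2)   # Gaussian signal pulse
    rho = lindblad_step(rho, hamiltonian(Om_s), Ls, dt)
    P_t.append(abs(rho[3, 1]) ** 2)
P_t = np.array(P_t)
t_peak = ts[np.argmax(P_t)]              # output peaks near the input pulse
```

As in Fig. 2b, the generated-field proxy vanishes before the signal pulse arrives and peaks while the pulse overlaps the CW pumps.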
Figure 3: Simulation results. a. Conversion output power as a function of the
two pump field detunings. Here $\rm\Delta_{I}$ is the detuning of the $\rm
780nm$ pump and $\rm\Delta_{II}$ that of the $\rm 1367nm$ pump. The colorbar
is in arbitrary units. In this plot $\rm\Delta_{s}=0MHz$. b. Conversion output
power as a function of the signal detuning. In this plot $\rm\Delta_{I}=0MHz$
and $\rm\Delta_{II}=200MHz$. The conversion power in both figures is in
arbitrary units because it is highly dependent on the dipole moment values.
### II.3 Telecom Frequency Conversion in Rubidium (Experiments and
Characterization)
Figure 4: The frequency conversion setup. An AOM is used to modulate the field
envelope of the $\rm 795nm$ signal. The $\rm 795nm$ signal is first combined
with the $\rm 780nm$ pump, and then with the $\rm 1367nm$ pump by a long pass
filter. We use a half-wave plate and a quarter-wave plate before the second
Glan laser polarizer to compensate the birefringence introduced by the
Rubidium cell. An $\rm InGaAs$ avalanche photodetector is used to detect the
generated $\rm 1324nm$ field.
The frequency conversion setup is shown in Fig. 4. The two pump fields and the
signal $795nm$ beam propagate collinearly. The $795nm$ and the $1367nm$
fields are H polarized and the $780nm$ light is V polarized. We use a $71.8mm$
long Thorlabs Rubidium reference cell (GC25075-RB) maintained at a temperature
of $\rm 100^{\circ}C$. The cell contains $\rm 5-10mg$ of natural-abundance
vapor of $\rm{}^{85}Rb$ and $\rm{}^{87}Rb$. The attenuation of the $1367nm$
pump field is achieved by three processes. First, the generated $1324nm$ field
is polarized orthogonally to the $1367nm$ pump field, and a pair of Glan laser
polarizers is used to filter out the H-polarized $1367nm$ field. Second, a
band-pass filter ($1320nm$ center wavelength, FWHM $12nm$) is used. Finally,
the pumps are slightly misaligned from the signal to provide a small spatial
separation between the $1367nm$ pump and the generated $1324nm$ beam.
We first measure the frequency conversion efficiency by locking the $795nm$
laser to the Rubidium saturated absorption spectrum at the
$\rm{}^{85}Rb:5S_{1/2}F=3\rightarrow 5P_{1/2}F^{\prime}=2$ transition and the
$780nm$ laser at $\rm{}^{85}Rb:5S_{1/2}F=3\rightarrow 5P_{3/2}F^{\prime}=4$.
We then lock the $1367nm$ laser to the $\rm 5P_{3/2}F^{\prime}=4\rightarrow
6S_{1/2}F^{\prime}=3$ transition using the two-photon absorption spectrum. We
can also modulate the $795nm$ light into pulse envelopes using the AOM. The
conversion efficiency is measured to be around $0.1\%$. We then lock the
$795nm$ laser to the $\rm{}^{85}Rb:5S_{1/2}F=3\rightarrow
5P_{1/2}F^{\prime}=2$ transition, scan the $780nm$ pump across the
$\rm{}^{85}Rb:5S_{1/2}F=3\rightarrow 5P_{3/2}F^{\prime}=4$ transition at 10.1
Hz, and scan the $1367nm$ pump across the $\rm 5P_{3/2}F^{\prime}=4\rightarrow
6S_{1/2}F^{\prime}=3$ transition at 10 Hz. We record the reading of the
photodetector on an oscilloscope and obtain the conversion intensity as a
function of the pump field detunings, as shown in Figure 5.
Figure 5: The measured conversion intensity as a function of the pump field
laser Piezo voltage offsets. The colorbar is in arbitrary units. In a coarse
approximation, the Piezo voltage offset can be regarded as linear in the
frequency detuning. However, a more in-depth calibration would be needed to
translate the voltage offsets to frequency detunings. The zero point of the
voltage offset does not necessarily correspond to the zero-detuning position.
The graininess is due to the limited density of data points.
## III Long Distance Quantum Network Infrastructure
### III.1 Long Distance Fiber Infrastructure
A practical quantum network, used to interconnect quantum systems over large
distances in real-world configurations, needs to co-exist and be compatible to
the largest degree possible with existing classical telecommunication network
infrastructure. This is the litmus test for bringing quantum networking
technology out of the confines of a laboratory without the requirement of a
new infrastructure that can be prohibitively costly. Furthermore, the co-
existence of quantum and classical signals is an open research topic. While it
is feasible to create fully transparent, all-optical, quantum networks
utilizing the available dark fiber infrastructure and use separate channels
for quantum and classical traffic, extensive research is required to determine
if and how it would be possible to have quantum and classical signals share
the same optical fibers while avoiding significant cross-talk and reliably
maintaining the quantum information over long distances.
The SBU/BNL quantum network prototype interconnects quantum laboratories at
SBU and BNL, taking advantage of the existing optical fiber infrastructure
within the campuses and commercially available long-distance optical fiber
links. There are four single-mode fiber strands available that are used for
quantum and classical signals, enabling a wide range of experiments. We
utilize dedicated dark fibers for the quantum signals while all classical
signals are carried in separate classical-signal-only fibers, except for the
polarization reference frames, which co-propagate with the qubits. The long-
distance fiber link provides connectivity between the main networking hubs of
the two campuses, with an average attenuation of approx. 22.7dB at 1310nm. It
consists of a long segment and a shorter one, comprising multiple fusion-
spliced shorter segments, patched together in a co-location room near the SBU
campus. At SBU and BNL, the long-distance fibers are patched to the local
infrastructure fibers that interconnect the various campus buildings. At both
SBU and BNL, these fibers comprise multiple segments, fusion and/or
mechanically spliced, as well as fiber patches between distribution frames. At
SBU, the fiber path runs from the university’s main networking hub (ECC
building) to the Physics building’s QIT lab. At BNL, the fiber path also runs
from the main networking hub (ITD building) to the QIST lab in the
Instrumentation Division building. In addition to the overall physical length
one must also account for the patch fibers within the networking hubs and the
patch fibers from the fiber panels within each quantum lab to the quantum
memories and detectors. Overall, the physical fiber length from Alice to Bob
through Charlie is estimated at approximately 140km; however, the total
measured loss is 23dB, corresponding to a perfect fiber length (with a typical
loss of 0.35dB/km at 1310nm) to over 160km.
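The link-budget bookkeeping used above is the standard dB arithmetic; a quick sketch, using the attenuation figures quoted in the text:

```python
# dB bookkeeping for a fiber link: total loss in dB -> photon survival
# probability, and attenuation coefficient x length -> total loss in dB.
def transmission(loss_db):
    """Fraction of photons surviving a link with the given total loss."""
    return 10 ** (-loss_db / 10)

def loss_db(length_km, alpha_db_per_km):
    """Total loss of a uniform fiber span."""
    return length_km * alpha_db_per_km

t_hub = transmission(22.7)    # hub-to-hub link: ~0.5% photon survival
t_path = transmission(23.0)   # full Alice-Charlie-Bob path
```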
Figure 6: Overview of the testbed on Long Island, New York. It consists of 4
commercially available fibers (rented from Crown Castle) connecting the main
fiber exchange points in Stony Brook University (ECC Engineering Building) and
Brookhaven National Laboratory (SDCC Building). From ECC, several fiber
strands connect to the Quantum Internet Laboratory (QIT) in the Physics and
Astronomy Building. From SDCC, several strands connect to the Quantum
Information Science and Technology Laboratory (QIST) in the Instrumentation
Building. Two quantum memories and frequency conversion setups (Alice and Bob)
are located in the SBU QIT laboratory and are connected independently to the
network. The interference setup and telecom-compatible single-photon nanowire
detectors (Charlie station) are located in the QIST laboratory in BNL. The
other two fibers are used to transport classical timing triggers, network
status, and sequencing information.
### III.2 Polarization Compensation
Since in quantum networking applications the quantum information is often
encoded in the state of polarization (SOP) of photons, a particular bottleneck
in these long-distance applications is polarization fluctuations, which are
rarely of concern in classical telecommunication networks. If fibers were
perfect (e.g. polarization-maintaining), the signal SOP would remain mostly
constant, eliminating the need for compensation. However, millions of miles of
deployed non-polarization-maintaining fiber infrastructure already exist and
should be utilized given the requirements of entanglement communication. In
these optical fibers the SOP of the propagating light varies along the fiber
length due to random birefringence induced by thermal changes, mechanical
stress, fiber core irregularities, and other changing environmental and
material inhomogeneities, influencing both polarization axes simultaneously.
These fluctuations can also lead to variations in the optical path length,
resulting in fluctuations and drift of power. We have established dynamic
compensation of polarization drift to maintain quantum information integrity.
Here we use a quantum-state compensator for flying photons to maintain their
polarization, ensuring that the transmitted qubits’ purity is optimal at their
destination/measurement station.
changed using compensation retardation plates or by actively modifying the
birefringence in the optical fiber, as was performed here. Our experiment uses
a prototype of a machine learning enhanced polarization compensating device
built by Qunnect Inc (Qu-APC module). The module requires a triggered string
of reference photons interspersed between the qubit carrying photons. The
method has the advantage of measuring the polarization of photons at the same
transmission wavelength as the qubits. The polarization controller itself
consists of three motorized rotating paddles with fiber loops acting as wave-
plates on each of the Alice and Bob branches to rotate the light’s
polarization-state with feedback from a polarimeter capable of remapping the
light’s SOP down to -70dB. The retardation plates effectively rotate around
the optical beam (fiber) with respect to each other, inducing strain on the
fiber, and changing the polarization of the photons travelling through.
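The SOP tracked by the compensator is conveniently described by the normalized Stokes vector on the Poincaré sphere. A minimal sketch of the mapping from a Jones vector $(E_H, E_V)$ to Stokes parameters (note that the sign convention for $S_3$ varies between references):

```python
import numpy as np

def stokes(jones):
    """Normalized Stokes vector (S1, S2, S3) of a Jones vector (E_H, E_V)."""
    Eh, Ev = jones
    S0 = abs(Eh) ** 2 + abs(Ev) ** 2        # total intensity
    S1 = (abs(Eh) ** 2 - abs(Ev) ** 2) / S0
    S2 = 2 * (np.conj(Eh) * Ev).real / S0
    S3 = 2 * (np.conj(Eh) * Ev).imag / S0   # sign convention varies
    return np.array([S1, S2, S3])

s_H = stokes([1.0, 0.0])                          # H -> (1, 0, 0)
s_D = stokes([1 / np.sqrt(2), 1 / np.sqrt(2)])    # diagonal -> (0, 1, 0)
s_C = stokes([1 / np.sqrt(2), -1j / np.sqrt(2)])  # circular -> (0, 0, +/-1)
```

A drift in the fiber birefringence corresponds to a rotation of this vector on the sphere, which is what the compensating paddles undo.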
Figure 7: Principle of operation for polarization compensation. The initial
qubit polarization is represented by the green vector mapped onto a sphere. As
the H qubit (left) transmits through the fiber, fluctuations in the
birefringence cause the polarization to drift (defined as the product of the 3
Stokes vectors shown in black) from its initial state (trending towards red).
The compensation device modulates the fiber stress to compensate for this
drift and restore the initial state (right).
Using rotating wave-plates allows to easily correct for the random
polarization drifts by letting the state randomly “walk-off” within a specific
interval before correcting for it. Therefore, at every acquisition cycle,
Alice sends a “calibration start” trigger along with CW polarized light at
-65bB (receiving end). Polarization compensation setup then adjusts both
branches to the same chosen polarization at the measurement station. Once the
two have converged within an acceptable margin of the desired state, it sends
back to Alice a “calibration termination”. Then Alice confirms as she sends
qubits back instead of CW light (corresponding to the “calibration stop”). The
more frequent the calibration is applied, the less time it takes to converge,
but the less network uptime is available. For optimal results and maximal
network uptime, the re-calibration was re-applied every 5 minutes by which
time the drift was too large to continue the interference with sufficient
visibility. Figure 7 shows the effect of the polarization compensation on an
interference observed after the 68km propagation in both branches.
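The walk-off-then-correct strategy involves a trade-off between drift tolerance and network uptime: the slower the drift (or the looser the threshold), the less often the link must go down for recalibration. The toy model below sketches this trade-off; all names and parameter values are illustrative, not measured values from the experiment.

```python
import math
import random

def simulate_uptime(drift_rate=0.01, threshold=0.9, recal_time=30.0,
                    dt=1.0, total=3600.0, seed=0):
    """Toy model of the walk-off-then-correct strategy: the polarization
    error angle performs a random walk; whenever the compensation score
    (overlap with the target state) drops below `threshold`, the link goes
    down for `recal_time` seconds while the controller re-converges."""
    rng = random.Random(seed)
    angle, t, downtime = 0.0, 0.0, 0.0
    while t < total:
        if abs(math.cos(angle)) < threshold:  # drift too large: recalibrate
            downtime += recal_time
            t += recal_time
            angle = 0.0                       # compensation restores the state
        else:
            angle += rng.gauss(0.0, drift_rate * math.sqrt(dt))
            t += dt
    return 1.0 - downtime / total             # fraction of time the link is up
```

In this model a slower drift rate directly yields a larger usable fraction of the hour, mirroring the 2–5 minute free-drift window reported below.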
Figure 8: Top: State compensation for out-of-cycle polarization basis
realignment showing the three normalized Stokes vectors S1, S2, and S3 as a
function of time. Here the Qu-APC device is active and cycling its
compensation routine. Grey periods correspond to when the compensation system
is active (network down-time), whereas periods in between correspond to free
drift. Bottom: Compensation score indicating the state of the drift and the
expected need for recalibration. We observe that without compensation, on
average, the available window for experiments is on the order of 2–5 minutes
before the drift degrades the results (red). With the calibration above
threshold (green), the post-stabilized visibility at the interferometer was
roughly constant at 88%.
### III.3 Charlie station at Brookhaven
At the Charlie station in BNL, a series of additional wave-plates are used to
compensate for polarization rotation caused by propagation in the lab optical
fibers after the compensation apparatus. Then the pulses retrieved from the
two paths interfere at a $\rm 50:50$ non-polarizing beamsplitter (NPBS), and then go to two
single-photon detectors placed in another adjacent laboratory through 10m
light-shielded fibers. These detectors generate a signal every time they
record a hit. Data is then analyzed to calculate the coincidence rate between
the two output arms of the interferometer.
## IV Long Distance Quantum Interference Experiments
Having quantum memories operating at telecom wavelengths and a long-distance
fiber infrastructure that does not degrade the initial quantum states allows
us to connect the two quantum light-matter interfaces. We now show quantum
interference experiments using the large interferometer connecting SBU and
BNL. Firstly, we present a single photon (SP) Mach-Zehnder interferometer
demonstrating the preservation of a quantum state transmitted over a 140 km
network. Secondly, we present a two-photon (TP) Hong-Ou-Mandel interference
experiment, showing the indistinguishability of the telecom photons produced
in the quantum memories and the possibility to create entanglement between
them. We note that the first experiments connecting telecom-operational light-
matter interfaces and telecom optical links have been reported recently Bock2018
; Leent2020 ; Yu2020 . However, the experiments presented here cover an
unprecedented distance, one order of magnitude larger than anything attempted
before.
Figure 9: Basic schematic diagrams of the long-distance interference
experiments. Top: The 1367 nm coherent beam at SBU is split and sent down the
two long fiber runs; for single-photon running the initial beam is heavily
attenuated. At the receiving end at BNL the two lines are combined through a
beam splitter whose outputs go to two detectors. The paths are equal length
for the MZ configuration, and an extra 18 km loop is added to one path for the
HOM configuration. Bottom: The input 795 nm beam is split and then sent to two
quantum memories, where it is combined with 1367 nm and 780 nm beams (not
shown) and converted to 1324 nm light, which is then sent on to the two long
fiber runs.
## V Single Photon (SP) Mach-Zehnder interferometry
The simplest experiment to demonstrate interference after transmission on two
long-distance links is to split a low-intensity beam at the transmitting
station and then re-combine the two halves through a splitter at the receiving
station.
The basic arrangement is shown in Figure 9 and corresponds to a Mach-Zehnder
interferometer with one spatial mode in the fibers.
With a narrow-band laser source we can write the initial state $I$ of the
field before the initial symmetric beamsplitter as a coherent, polarized,
quasi-monochromatic state $\ket{\alpha}_{I,s}$ where $s$ labels the
polarization of the occupied mode. We can then write the state of the two
modes $A$ and $B$ after the splitter as
$\Psi_{AB}=\frac{1}{\sqrt{2}}\ket{\alpha}_{A,s}\,\otimes\,\frac{1}{\sqrt{2}}\ket{\alpha}_{B,s}$
(36)
After transmission the state of the modes at points $C$ and $D$ are still
coherent but attenuated
$\Psi_{CD}=\frac{t_{AC}}{\sqrt{2}}\ket{\alpha}_{C,p}\,\otimes\,\frac{e^{i\phi}\;t_{BD}}{\sqrt{2}}\ket{\alpha}_{D,q}$
(37)
where (i) the intensities along the $A\rightarrow C$ and $B\rightarrow D$ are
reduced by (real) transmission coefficients $T_{AC}={t_{AC}}^{2}$ and
$T_{BD}={t_{BD}}^{2}$, respectively; (ii) the factor $e^{i\phi}$ encodes any
phase difference resulting from any length differences between the two long
fibers; and (iii) the polarizations of the modes occupied after transmission
are denoted $p$ and $q$; in the absence of polarization compensation the
transforms $s\rightarrow p$ and $s\rightarrow q$ will generally be random.
The state in the modes $E$ and $F$ after the final beamsplitter then becomes
$\Psi_{EF}=\left(\frac{t_{AC}}{2}\ket{\alpha}_{E,p}+\frac{e^{i\phi}\;t_{BD}}{2}\ket{\alpha}_{E,q}\right)\,\otimes\,\left(\frac{t_{AC}}{2}\ket{\alpha}_{F,p}-\frac{e^{i\phi}\;t_{BD}}{2}\ket{\alpha}_{F,q}\right)$
(38)
From standard photo-detection theory we can now calculate the rates of
detectable photons in $E$ and $F$ at any specific time as
$R_{E}(t)=\frac{|\alpha|^{2}}{2}\frac{T_{AC}+T_{BD}}{2}(1+\mathcal{V}\cos\phi(t))\qquad R_{F}(t)=\frac{|\alpha|^{2}}{2}\frac{T_{AC}+T_{BD}}{2}(1-\mathcal{V}\cos\phi(t))$
(39)
where $|\alpha|^{2}$ is the original beam intensity in photons per coherence
time, and $(T_{AC}+T_{BD})/2$ is the average transmission through the two fiber
runs. This is the standard Mach-Zehnder interference for a macroscopic beam in
a single spatial mode, where the intensities at the two outputs are directly
anti-correlated. It is interesting to note that even though the entangled
state is travelling through both fibers, the loss in photon intensity
corresponds to only the average loss in a single fiber and not the product of
two losses.
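As a quick numerical check of Equation 39, the short sketch below (function name is ours) evaluates the two output rates and confirms that they are anti-correlated in the phase while their sum depends only on the average transmission:

```python
import math

def mz_rates(alpha2, T_AC, T_BD, V, phi):
    """Mach-Zehnder output rates of Eq. (39); `alpha2` is |alpha|^2,
    the beam intensity in photons per coherence time.
    Returns the pair (R_E, R_F)."""
    pref = 0.5 * alpha2 * 0.5 * (T_AC + T_BD)
    return pref * (1 + V * math.cos(phi)), pref * (1 - V * math.cos(phi))
```

Sweeping `phi` moves counts from one output to the other, but the total rate stays fixed at $|\alpha|^{2}(T_{AC}+T_{BD})/4$ per output pair, reflecting that the loss is the average single-fiber loss.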
Here $\mathcal{V}$ is the visibility of the M-Z interference pattern
$\mathcal{V}\equiv\frac{2}{t_{AC}/t_{BD}+t_{BD}/t_{AC}}\;|\langle
p|q\rangle|=\frac{1}{\cosh(\log(t_{AC}/t_{BD}))}\;|\langle p|q\rangle|$ (40)
The first factor relates to the balance between the transmission losses along
the two fiber runs, and the second factor is the magnitude of the overlap
between the two final polarization states $p$ and $q$. If the beams are
balanced on arrival and the polarization states are the same then the
visibility will be unity; if either condition is not satisfied the
visibility will decrease.
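The two factors in Equation 40 can be evaluated directly, and the identity $2/(r+1/r)=1/\cosh(\log r)$ is easy to verify numerically. A minimal sketch (function name is ours):

```python
import math

def mz_visibility(t_AC, t_BD, overlap):
    """Eq. (40): imbalance factor 2/(r + 1/r), with r = t_AC/t_BD,
    multiplied by the polarization overlap |<p|q>|."""
    r = t_AC / t_BD
    return (2.0 / (r + 1.0 / r)) * overlap
```

For balanced arms and matched polarizations the visibility is unity; a 0.8/0.6 amplitude imbalance alone already reduces it to 0.96.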
### V.1 Single-photon interference
The anti-correlated intensities described in Equation 39 are true for any
coherent state, with any intensity $|\alpha|^{2}$. However, for beams at the
single-photon level the M-Z interferometer can be seen as transporting an
entangled state along macroscopic distances.
The coherent beam state as defined by Glauber can be expanded in a Fock basis
as
$\ket{\alpha}=e^{-|\alpha|^{2}/2}\sum_{n}\frac{\alpha^{n}}{\sqrt{n!}}\ket{n}$
(41)
with the average occupation number of state $\ket{\alpha}$ being $\langle
n\rangle=|\alpha|^{2}$. For a purely monochromatic beam this would be the
number of photons in the mode over infinite time; for a quasi-monochromatic
beam it can be taken to represent the number of photons in the beam per
interval of the coherence time, as above.
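The Poissonian photon-number statistics implied by Equation 41 can be tabulated directly; for a heavily attenuated beam essentially all of the weight sits in the $n=0$ and $n=1$ terms, which is what justifies truncating the expansion at leading order. A short sketch (function name is ours):

```python
import math

def fock_probs(n_mean, nmax=20):
    """Photon-number distribution P(n) = e^{-<n>} <n>^n / n! of a
    Glauber coherent state with mean photon number <n> = |alpha|^2
    per coherence time."""
    return [math.exp(-n_mean) * n_mean**n / math.factorial(n)
            for n in range(nmax + 1)]
```

With $\langle n\rangle = 0.05$, for example, the $n\le 1$ terms carry more than 99.8% of the probability.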
For a weak, highly attenuated initial beam $|\alpha|^{2}\ll 1$ and so we can
expand Equation 36 as
$\Psi_{AB}=\frac{e^{-|\alpha|^{2}/2}}{\sqrt{2}}\left(\ket{0}_{A,s}+\alpha\ket{1}_{A,s}+\mathcal{O}(\alpha^{2})\right)\,\otimes\,\frac{e^{-|\alpha|^{2}/2}}{\sqrt{2}}\left(\ket{0}_{B,s}+\alpha\ket{1}_{B,s}+\mathcal{O}(\alpha^{2})\right)$
(42)
Limiting our observations to the cases where at least one photon is detected
effectively post-selects against the $\ket{0}_{A,s}\otimes\ket{0}_{B,s}$
branch of the wavefunction. Expanding the reduced wavefunction to leading
order in $\alpha$ we have
$\Psi_{AB}=\frac{\alpha}{\sqrt{2}}\left(\ket{1}_{A,s}\otimes\ket{0}_{B,s}\,+\,\ket{0}_{A,s}\otimes\ket{1}_{B,s}\right)$
(43)
Equation 43 is then an entangled state, a superposition between a single
photon having gone down the $A$ channel and the $B$ channel. If we can observe
an interference between the two terms at the receiving end then we confirm
that the entangled superposition is preserved in transit, i.e. no “which-way”
information was available to the outside world.
If the state $\Psi_{AB}$ in Eq. 43 propagates through the fiber
transmission and the beamsplitter without decoherence or interaction with the
outside world, then as in Sec. V we can derive the (complementary)
probabilities of seeing a photon at $E$ versus $F$, conditioned on a photon
having been observed at one or the other:
$P_{E}(t)=\frac{1}{2}(1+\mathcal{V}\cos\phi(t))\qquad P_{F}(t)=\frac{1}{2}(1-\mathcal{V}\cos\phi(t))$ (44)
Here the visibility $\mathcal{V}$ is the same as defined in Eq. 40. If we can
show that these probabilities are observed in the single-photon limit then it
will confirm that transmission over the long fibers preserves the entangled
state in Equation 43. By contrast, if the state were to de-cohere, de-polarize
or otherwise interact non-reversibly with the environment then the
interference would not manifest and the observation probabilities would reduce
to $P_{E}=P_{F}=1/2$.
It may not be straightforward to measure the probabilities of Eq. 44 directly
if the photon rates are low and the phase difference $\phi(t)$ is changing
quickly. We can instead create a correlation observable, following the simple
idea that if photons are, say, generally being steered toward output $E$, then
two photons observed close together in time will both tend to register at $E$;
this will generate a net suppression in observing $E$-$F$ pairs which are
close together in time.
With individual photon detection we measure the standard normalized
correlation function
$G^{(2)}(\Delta t)\equiv\frac{\langle P_{E}(t)P_{F}(t+\Delta
t)\rangle_{t}}{\langle P_{E}(t)\rangle_{t}\,\langle P_{F}(t+\Delta
t)\rangle_{t}}$ (45)
With Equation 44 this becomes for the Mach-Zehnder configuration
$G^{(2)}(\Delta t)=1-\mathcal{V}^{2}\langle\cos(\phi(t))\cos(\phi(t+\Delta t))\rangle_{t}=\begin{cases}1-\mathcal{V}^{2}/2&\textrm{for }\Delta t\rightarrow 0\\ 1&\textrm{for large }\Delta t\end{cases}$ (46)
and so if the coherence between the two branches of the superposition is
preserved then we would expect to observe a dip feature in the $G^{(2)}(\Delta
t)$, with a maximum depth of $1/2$ for unit visibility; and then a return to
$G^{(2)}=1$ when $\Delta t$ is much larger than the time scale on which
$\phi(t)$ is fluctuating. It should be noted that this anti-correlation
between two photon detections is not a two-photon interference effect; it is
rather the correlation in time between two single-photon interference effects.
We will examine true two-photon interference in Section VI below.
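The shape of $G^{(2)}(\Delta t)$ in Equation 46 can be reproduced with a toy model of the slowly diffusing fiber phase; the sketch below (all names and parameters are illustrative, not the measured trace) evaluates the correlator for a random-walk $\phi(t)$ and recovers the dip to $1-\mathcal{V}^{2}/2$ at small lag and the return to 1 at lags long compared with the phase correlation time:

```python
import math
import random

def g2_curve(V, lags, tau=500, n=100_000, seed=1):
    """Evaluate G2(dt) = 1 - V^2 <cos phi(t) cos phi(t+dt)> (Eq. 46) for
    a phase phi(t) modeled as a random walk that diffuses by ~pi over
    `tau` time steps -- a toy stand-in for the slow fiber phase drift."""
    rng = random.Random(seed)
    sigma = math.pi / math.sqrt(tau)
    phi, c = 0.0, []
    for _ in range(n):
        phi += rng.gauss(0.0, sigma)
        c.append(math.cos(phi))
    out = {}
    for lag in lags:
        corr = sum(c[i] * c[i + lag] for i in range(n - lag)) / (n - lag)
        out[lag] = 1.0 - V * V * corr
    return out
```

With $\mathcal{V}=0.9$ the model dips toward $1-0.9^{2}/2\approx 0.6$ at small $\Delta t$, matching the depth of the measured feature in Figure 10.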
Figure 10: Measured $G^{(2)}$ correlation function between single-photon
detections at the two outputs of the final beamsplitter in the Mach-Zehnder
configuration. Here we are observing 1324 nm photons which have been produced
by a 795 nm photon being split and then converted in two quantum memories, as
in Figure 9. The dip follows the expectation from Equation 46, with a
visibility of about $\mathcal{V}=0.9$ and a feature width of about a
millisecond.
Figure 10 shows the $G^{(2)}(\Delta t)$ correlation function between single-
photon hits detected at the two output channels of the final beamsplitter, in
the M-Z configuration, as a function of the time separation between the
detected hits. A clear anti-correlation is seen for small $\Delta t$, and a
return to $G^{(2)}=1$ at large $\Delta t$, exactly as expected from Equation
46. The feature’s width measures the timescale on which the induced phase
difference $\phi(t)$ is varying, here on the order of a millisecond. These
data were taken with polarization optimization at the receiving end. The dip
has a minimum value of about $G^{(2)}=0.6$, indicating a very high
visibility of $\mathcal{V}\sim 0.9$ and so a correspondingly good match
between the final polarizations.
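Inverting the small-$\Delta t$ limit of Equation 46 gives the visibility quoted here: a dip minimum of $G^{(2)}\approx 0.6$ corresponds to $\mathcal{V}=\sqrt{2(1-0.6)}\approx 0.89$. A one-line sketch (function name is ours):

```python
import math

def visibility_from_dip(g2_min):
    """Invert the dt -> 0 limit of Eq. (46), G2_min = 1 - V^2/2,
    to recover the Mach-Zehnder visibility from the dip depth."""
    return math.sqrt(2.0 * (1.0 - g2_min))
```

The maximum possible dip, $G^{(2)}=0.5$, corresponds to unit visibility.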
The interference results in Figure 10 were measured with 1324 nm light, as
shown in the lower panel of Figure 9. Thus, in this experiment the first
entangled state of the type shown in Equation 43 is present with the 795 nm
photons at the initial mode points $I$ and $J$. That superposition state is
then put through frequency conversion to 1324 nm light in the two quantum
memory vapor cells, to emerge at mode points $A$ and $B$ before being sent
onto the long fiber runs. The fact that the Mach-Zehnder interference is
observed with high visibility at the receiving end indicates that the
entangled superposition is well preserved through both the conversion process
and the long-distance transmission stage.
## VI Two-photon (TP) Hong-Ou-Mandel interferometry
To demonstrate that our memories are capable of being entangled by
measurement, we must demonstrate that the quantum information is preserved
after frequency conversion, retrieval and long distance transmission. The
application of the Cabrillo entanglement scheme requires that the temporal
envelope, optical frequency, and polarization of both photon streams produced
in the Alice and Bob stations remain indistinguishable at the input of the
interference experiment. Therefore, we calibrate the relevant parameters of
the quantum memories to ensure that Alice and Bob have identical FWM
bandwidths and conversion efficiencies. Two independent filtering systems (to
filter out the auxiliary frequency conversion fields) located after the atomic
interfaces, each consisting of several consecutive wavelength filters, are
calibrated to have identical transmissions for Alice and Bob. After carefully
matching all the auxiliary field parameters, we proceed to frequency convert
qubits at the few-photon level in the two memories and couple their output
into the long distance fibers.
Figure 11: Correlation $G^{(2)}(\Delta t)$ measurements between single-photon
detections of 1324 nm light produced through frequency conversion, as per the
lower panel of Figure 9. These were taken in the HOM configuration with the
extra delay loop on one path; this delay is much longer than the coherence
time of the 1324 nm beam and so ensures that the photons arriving at the
receiver are mutually incoherent. We observe the Hong-Ou-Mandel (HOM)
interference, manifested as a depletion of observed pairs at the beamsplitter
output ports, when the detection time difference is on the order of the beam
coherence time. Upper: The magnitude of the HOM dip is small when the
polarization compensation is deliberately set badly. Middle: Without any
polarization adjustment the HOM dip reaches a minimum of $G^{(2)}=0.75$,
which is typical for a random alignment. Lower: With optimization in
polarization compensation the HOM minimum is as low as $G^{(2)}=0.65$
indicating a high degree of indistinguishability between the arriving photons.
The mean number of photons per pulse at the output of each memory is measured
to be $\rm\langle n\rangle_{Alice}=20$ and $\rm\langle n\rangle_{Bob}=20$. As
indicated in Fig. 9, in this experiment we add an 18 km long delay on the BNL
side in order to scramble the phase coherence and measure the second order
correlation. The losses through the two long-distance setups are measured to
be $\rm\approx 23dB$ for one link and $\approx 26dB$ for the longer arm. We
obtain average photon numbers after long distance propagation of $\rm\langle
n\rangle_{Alice}\approx 0.1$ and $\rm\langle n\rangle_{Bob}\approx 0.06$ at
the nanowire detectors in BNL. The HOM coincidence rate is measured versus the
arrival time of the photons in the two detectors. The coincidences within a
temporal region of interest (ROI) are post-selected with a width of $0.1\mu
s$. We observe the desired modulation in the coincidence rate, exhibiting a
minimum for the initial identical polarization and reaching a maximum
corresponding to uncorrelated photons beyond the coherence time of the FWM
process. The interference visibility is measured to be $V=(38\pm 2.0)\%$. We
conclude that our memories do not significantly affect the
indistinguishability of the photons.
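The photon-number bookkeeping above is simple dB arithmetic, and the quoted visibility follows from the depth of the coincidence dip under one common convention. Both are sketched below; the function names are ours, and the visibility convention $V=(C_{\max}-C_{\min})/C_{\max}$ is an assumption, not stated explicitly in the text.

```python
def attenuate(n_mean, loss_db):
    """Mean photon number per pulse after a link with the given loss in dB."""
    return n_mean * 10 ** (-loss_db / 10)

def hom_visibility(c_min, c_max):
    """HOM visibility from the coincidence dip, using the convention
    V = (C_max - C_min) / C_max."""
    return (c_max - c_min) / c_max
```

Starting from $\langle n\rangle = 20$ at each memory output, 23 dB and 26 dB of link loss give roughly the 0.1 and 0.06 photons per pulse quoted at the detectors.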
## VII Discussion and Outlook
Our experimental scheme reflects the original idea of the Cabrillo scheme
Cabrillo1999 , where entanglement is produced based on a measurement
projection, here applied over a long-distance 158 km network. In our case, the two
atomic quantum memories are initially prepared in the $\ket{1}$ state
(ground state in Fig. 2). The 795 nm weak excitation pulse, together with the
FWM pump beams excites the atoms to state $\ket{4}$ (excited state in Fig. 2).
The atoms can then decay to the intermediate state $\ket{2}$ by spontaneous
emission (creating the 1324 nm photons), with a small probability given by the
efficiency of the FWM process. After the photons’ propagation in the long
distance links, the detection of one photon in either detector after the HOM
interference projects the state of the two memories to
$\ket{\phi}=\frac{1}{\sqrt{2}}(\ket{12}+e^{i\theta}\ket{21})$. The phase
$\theta$ is due to the phase difference of the weak excitation pulses applied
to the two memories and the memory-detector path difference. Future
experiments will aim to store this entangled state of the memories. An
important constraint will be to have long coherence times on the order of
hundreds of microseconds, allowing for the long-distance transmission of the
telecom photons and of the heralding information. With these
conditions it will be possible to verify the entanglement created between the
memories by retrieving their state into an entangled photonic state and
reconstructing its density matrix.
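The heralded state $\ket{\phi}$ is maximally entangled for any heralding phase $\theta$, as a quick concurrence check confirms; the helper names below are ours.

```python
import cmath
import math

def heralded_state(theta):
    """Amplitudes of |phi> = (|12> + e^{i theta}|21>)/sqrt(2) in the
    two-memory basis {|11>, |12>, |21>, |22>}."""
    s = 1.0 / math.sqrt(2.0)
    return [0.0, s, s * cmath.exp(1j * theta), 0.0]

def concurrence(psi):
    """Concurrence C = 2|a11*a22 - a12*a21| of a pure two-qubit state;
    C = 1 indicates maximal entanglement."""
    a11, a12, a21, a22 = psi
    return 2.0 * abs(a11 * a22 - a12 * a21)
```

The phase $\theta$ only rotates the state within the single-excitation Bell family; it does not reduce the entanglement, which is why stabilizing (or measuring) it suffices for the repeater protocol.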
## VIII Acknowledgements
This work is supported by the Department Of Energy ASCR grant: “Inter-campus
network enabled by atomic quantum repeater nodes”, the Department Of Energy
CESER grant: “A Prototype for Ultimate Secure Transmission” and the Brookhaven
National Laboratory LDRD grant, “Building a quantum repeater prototype
connecting BNL to NYC”.
## IX References
* (2) A. Acin, et al., The quantum technologies roadmap: a European community view, New J. Phys. 20 080201 (2018).
* (3) C. Simon. Towards a global quantum network, Nat. Phot. 11, 678–680 (2017).
* (4) H. Lo, M. Curty, and B. Qi. Measurement-Device-Independent Quantum Key Distribution, Phys. Rev. Lett. 108, 130503 (2012).
* (5) S. Lloyd, M. S. Shahriar, J. H. Shapiro, and P. R. Hemmer. Long Distance, Unconditional Teleportation of Atomic States via Complete Bell-state Measurements, Phys. Rev. Lett. 87, 167903 (2001).
* (6) B. Hensen, N. Kalb, M.S. Blok, A. E. Dréau, A. Reiserer, R. F. L. Vermeulen, R. N. Schouten, M. Markham, D. J. Twitchen, K. Goodenough, D. Elkouss, S. Wehner, T. H. Taminiau, and R. Hanson. Loophole-free Bell test using electron spins in diamond: second experiment and additional analysis, Sci. Rep. 6, 30289 (2016).
* (7) R. Valivarthi, M. G. Puigibert, Q. Zhou, G. H. Aguilar, V. B. Verma, F. Marsili, M. D. Shaw, S. W. Nam, D. Oblak, and W. Tittel. Quantum teleportation across a metropolitan fiber network, Nat. Phot. 10, 676–680 (2016).
* (8) Q. Sun, Y. Mao, S. Chen, W. Zhang, Y. Jiang, Y. Zhang, W. Zhang, S. Miki, T. Yamashita, H. Terai, X. Jiang, T. Chen, L. You, X. Chen, Z. Wang, J. Fan, Q. Zhang, and J.W. Pan. Quantum teleportation with independent sources and prior entanglement distribution over a network, Nat. Phot. 10, 671–675 (2016).
* (9) R. Valivarthi et al., Teleportation Systems Toward a Quantum Internet, Phys. Rev. X Quantum 1, 020317 (2020).
* (10) M. Puigibert, Entanglement and nonlocality between disparate solid-state quantum memories mediated by photons, Phys. Rev. Research 2, 013039 (2020).
* (11) M. Cao et al., Efficient reversible entanglement transfer between light and quantum memories, Optica 7, 1440 (2020).
* (12) L. Slodicka et al., Atom-Atom Entanglement by Single-Photon Detection, Phys. Rev. Lett. 110, 083603 (2013).
* (13) P. van Loock et al., Extending Quantum Links: Modules for Fiber- and Memory-Based Quantum Repeaters, Adv. Quantum Technol. 3, 1900141 (2020).
* (14) Quantum Networks for Open Science Workshop, Office of Advanced Scientific Computing Research, Department of Energy, DOI: 10.2172/1499146 (2020).
* (15) D. Awschalom, et al., Development of Quantum InterConnects (QuICs) for Next-Generation Information Technologies, arXiv:1912.06642 (2019).
* (16) Report of the DOE Quantum Internet Blueprint Workshop, From Long-distance Entanglement to Building a Nationwide Quantum Internet (2020).
* (17) Y. Wang, J. Li, S. Zhang, K. Su, Y. Zhou, K. Liao, S. Du, H. Yan, and S. Zhu. Efficient quantum memory for single-photon polarization qubits, Nat. Phot. 13, 346-351, (2019).
* (18) S. Yang, X. Wang, X. Bao, and J.W. Pan. An efficient quantum light-matter interface with sub-second lifetime. Nat. Phot. 10, 381–384 (2016).
* (19) B. Jing, X. Wang, Y. Yu, P. F. Sun, Y. Jiang, S. J. Yang, W. H. Jiang, X. Y. Luo, J. Zhang, X. Jiang, X. H. Bao, and J. W. Pan. Entanglement of three quantum memories via interference of three single photons, Nat. Phot. 13, 210-213 (2019).
* (20) Y. Yu, F. Ma, X. Luo, B. Jing, P. Sun, R. Fang, C. Yang, H. Liu, M. Zheng, X. Xie, W. Zhang, L. You, Z. Wang, T. Chen, Q. Zhang, X. Bao, and J. W. Pan. arXiv:1903.11284 (2019).
* (21) L.M. Duan, M. D. Lukin, J. I. Cirac, and P. Zoller. Long-distance quantum communication with atomic ensembles and linear optics. Nature, 414, 6862 (2001).
* (22) C. Cabrillo, J. Dirac, P. Fernandez, and P. Zoller. Creation of entangled states of distant atoms by interference. Phys. Rev. A 59, 1025–1033 (1999).
* (23) I. Novikova, R. Walsworth, and Y. Xiao. Electromagnetically induced transparency-based slow and stored light in warm atoms. Las. Phot. Rev. 6, 333–353 (2012). C. Kupchak, T. Mittiga, B. Jordaan, M. Namazi, C. Nölleke, and E. Figueroa. Room-Temperature Single-photon level Memory for Polarization States, Sci. Rep. 5, 7658 (2015).
* (24) M. Namazi, C. Kupchak, B. Jordaan, R. Shahrokhshahi, and E. Figueroa. Ultralow-Noise Room-Temperature Quantum Memory for Polarization Qubits, Phys. Rev. App. 8, 034023 (2017).
* (25) R. Finkelstein, E. Poem, O. Michel, O. Lahad, and O. Firstenberg. Fast, noise-free memory for photon synchronization at room temperature. Sci. Adv. 4, eaap8598 (2018).
* (26) K. T. Kaczmarek, P. M. Ledingham, B. Brecht, S. E. Thomas, G. S. Thekkadath, O. Lazo-Arjona, J. H. D. Munns, E. Poem, A. Feizpour, D. J. Saunders, J. Nunn, and I. A. Walmsley. High-speed noise-free optical quantum memory. Phys. Rev. A 97, 042316 (2018).
* (27) O. Katz, and O. Firstenberg. Light storage for one second in room-temperature alkali vapor. Nat. Comm. 9, 2074 (2018).
* (28) M. Namazi, G. Vallone, B. Jordaan, C. Goham, R. Shahrokhshahi, P. Villoresi, and E. Figueroa. Free-Space Quantum Communication with a Portable Quantum Memory. Phys. Rev. App. 8, 064013 (2017).
* (29) M. Flament et al., Hong-Ou-Mandel interference of polarization qubits stored in independent room-temperature quantum memories, arXiv:1808.07015v2 (2019).
* (30) A. G. Radnaev, Y.O. Dudin, R. Zhao, H. H. Jen, S. D. Jenkins, A. Kuzmich, and T. A. B. Kennedy, “A quantum memory with telecom wavelength conversion,” Nature Physics 6, 894 (2010).
* (31) C. K. Hong, Z. Y. Ou, and L. Mandel. Measurement of Subpicosecond Time Intervals Between Two-Photon By Interference, Phys. Rev. Lett. 59, 18 2044–2046 (1987).
* (32) E. Saglamyurek et al., Quantum storage of entangled telecom-wavelength photons in an erbium-doped optical fibre, Nature Photonics 9, 83 (2015).
* (33) J. Jin, et al., Telecom-Wavelength Atomic Quantum Memory in Optical Fiber for Heralded Polarization Qubits, Phys. Rev. Lett. 115, 140501 (2015).
* (34) M. Falamarzi et al., Storage and Reemission of Heralded Telecommunication-Wavelength Photons Using a Crystal Waveguide, Phys. Rev. Applied 11, 054056 (2019).
* (35) M. Bock et al., High-fidelity entanglement between a trapped ion and a telecom photon via quantum frequency conversion, Nature Communications 9, 1998 (2018).
* (36) T. van Leent et al., Long-Distance Distribution of Atom-Photon Entanglement at Telecom Wavelength, Phys. Rev. Lett. 124, 010510 (2020).
* (37) Y. Yu et al., Entanglement of two quantum memories via fibres over dozens of kilometres, Nature 578, 240 (2020).
††thanks: These authors contributed equally to this work.
# Many-body quantum teleportation via operator spreading in the traversable
wormhole protocol
Thomas Schuster Department of Physics, University of California, Berkeley,
California 94720 USA Bryce Kobrin Department of Physics, University of
California, Berkeley, California 94720 USA Materials Sciences Division,
Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Ping Gao
Center for Theoretical Physics, Massachusetts Institute of Technology,
Cambridge, MA 02139, USA Iris Cong Department of Physics, Harvard
University, Cambridge, Massachusetts 02138, USA Emil T. Khabiboulline
Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA
Norbert M. Linke Joint Quantum Institute and Department of Physics,
University of Maryland, College Park, MD 20742 USA Mikhail D. Lukin
Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA
Christopher Monroe Joint Quantum Institute and Department of Physics,
University of Maryland, College Park, MD 20742 USA Beni Yoshida Perimeter
Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada Norman
Y. Yao Department of Physics, University of California, Berkeley, California
94720 USA Materials Sciences Division, Lawrence Berkeley National Laboratory,
Berkeley, CA 94720, USA
(August 27, 2024)
###### Abstract
By leveraging shared entanglement between a pair of qubits, one can teleport a
quantum state from one particle to another. Recent advances have uncovered an
intrinsically _many-body_ generalization of quantum teleportation, with an
elegant and surprising connection to gravity. In particular, the teleportation
of quantum information relies on many-body dynamics, which originate from
strongly-interacting systems that are holographically dual to gravity; from
the gravitational perspective, such quantum teleportation can be understood as
the transmission of information through a traversable wormhole. Here, we
propose and analyze a new mechanism for many-body quantum teleportation—dubbed
peaked-size teleportation. Intriguingly, peaked-size teleportation utilizes
precisely the same type of quantum circuit as traversable wormhole
teleportation, yet has a completely distinct microscopic origin: it relies
upon the spreading of local operators under generic thermalizing dynamics and
not gravitational physics. We demonstrate the ubiquity of peaked-size
teleportation, both analytically and numerically, across a diverse landscape
of physical systems, including random unitary circuits, the Sachdev-Ye-Kitaev
model (at high temperatures), one-dimensional spin chains and a bulk theory of
gravity with stringy corrections. Our results pave the way towards using many-
body quantum teleportation as a powerful experimental tool for: ($i$)
characterizing the size distributions of operators in strongly-correlated
systems and ($ii$) distinguishing between generic and intrinsically
gravitational scrambling dynamics. To this end, we provide a detailed
experimental blueprint for realizing many-body quantum teleportation in both
trapped ions and Rydberg atom arrays; effects of decoherence and experimental
imperfections are analyzed.
Figure 1: (a) Teleportation protocol, proceeding from bottom to top. To
teleport, a subset of the left qubits are measured in the $\hat{O}_{i}$ basis,
and operations $\hat{V}_{i}=e^{igo_{i}\hat{O}_{i}/K}$ conditioned on the
measurement results $o_{i}$ are performed on the right (purple). (b) The
protocol hosts two mechanisms of teleportation: peaked-size (red) and
gravitational (blue). The channel capacity of peaked-size teleportation
decreases with increasing time, while its fidelity decreases with decreasing
temperature. At high temperature and late times, it is equivalent to
teleportation in the HPR protocol (red diamond). Gravitational teleportation
occurs at low temperatures in systems dual to semiclassical gravity (e.g. the
SYK model), and exhibits the same channel capacity but _higher_ fidelity
compared to peaked-size teleportation. Increasing the strength of stringy
corrections to the gravity theory interpolates between gravitational and
peaked-size teleportation. (c) The two mechanisms display distinct time
profiles for the teleportation fidelity. In systems dual to gravity (top), the
fidelity features a single $\mathcal{O}(1)$ peak near the scrambling time
(gravitational, blue), and a late time revival (peaked-size, red) to a
fidelity suppressed by the two-point function $G_{\beta}$ Maldacena _et al._
(2017). In generic thermalizing systems (bottom), the fidelity oscillates
between 0 and $G_{\beta}$ with phase proportional to the operator size, may
subsequently decay if the size distribution is no longer peaked, and revives at
late times.
Quantum teleportation leverages entanglement to transmit quantum information
between distant locations Bennett _et al._ (1993); Barrett _et al._ (2004);
Riebe _et al._ (2004); Olmschenk _et al._ (2009); Ren _et al._ (2017).
Typically, one thinks about teleportation in the context of a few, well-
controlled degrees of freedom. For example, two distant observers might share
a pair of maximally entangled qubits (i.e. an EPR pair), enabling a
measurement by one observer to teleport an unknown quantum state to the other.
Recently, a confluence of seminal results has unveiled several novel instances
of teleportation in strongly-interacting, _many-body_ systems Gao _et al._
(2017); Maldacena _et al._ (2017); Bao _et al._ (2018); Maldacena and Qi
(2018); Yoshida and Kitaev (2017); Yoshida and Yao (2019); Landsman _et al._
(2019a); Blok _et al._ (2020); Brown _et al._ (2019); Nezami _et al._
(2021); Gao and Jafferis (2019). Similar to conventional quantum
teleportation, these protocols utilize shared entanglement as well as
measurement and classical communication. However, they differ from
conventional quantum teleportation in a few key aspects. Most notably, prior
to teleportation, the initial quantum state is _scrambled_ by the application
of a many-body unitary. At first glance, this coexistence of
scrambling—broadly speaking, the increasing complexity of initially simple
quantum information under many-body time dynamics Sekino and Susskind (2008);
Shenker and Stanford (2014); Roberts _et al._ (2015); Maldacena _et al._
(2016); Hosur _et al._ (2016)—and teleportation might seem counterintuitive.
Indeed, one often thinks of teleportation as a directed quantum channel moving
information between two specific locations; in contrast, scrambling disperses
quantum information across all of the degrees of freedom in a system. The most
natural way to reconcile these two perspectives is through the language of
quantum error correction Hayden and Preskill (2007): by encoding, via
scrambling, one observer’s local information into non-local correlations
across a many-body system, one can in fact teleport this information with
access only to any few of the system’s qubits.
The most notable example of many-body teleportation is the so-called
traversable wormhole (TW) protocol, discovered in the context of quantum
gravity Gao _et al._ (2017); Maldacena _et al._ (2017); Bao _et al._
(2018); Brown _et al._ (2019); Nezami _et al._ (2021); Gao and Jafferis
(2019). From the bulk gravitational perspective, this protocol consists of a
particle traveling from one side of a wormhole geometry to the other; the
wormhole is rendered traversable by the application of a two-sided boundary
coupling. In the boundary theory, the wormhole geometry corresponds to a
highly entangled thermofield double (TFD) state shared between two copies of a
many-body system, and the coupling is implemented via measurement and feed-
forward operations [Fig. 1(a)]. Crucially, for this bulk-boundary
correspondence to hold, the Hamiltonian describing the boundary system must
exhibit “coherent” gravitational scrambling dynamics—this is realized, for
example, in the Sachdev-Ye-Kitaev (SYK) model at low temperatures.
Interestingly, recent work has uncovered a number of instances of many-body
teleportation _without_ gravitational dynamics. For example, teleportation in
the TW protocol was recently demonstrated analytically in the SYK model at
_high_ temperatures Gao and Jafferis (2019), and numerically in chaotic spin
chains at late times Brown _et al._ (2019); Nezami _et al._ (2021); in both
cases, the microscopic mechanism for teleportation remains an outstanding
puzzle. In addition to the TW protocol, an alternate many-body teleportation
protocol was introduced in the context of the Hayden-Preskill variant of the
black hole information paradox Hayden and Preskill (2007); Yoshida and Kitaev
(2017). This so-called Hayden-Preskill recovery (HPR) protocol allows for
many-body teleportation via _generic_ scrambling dynamics. Although the two
protocols bear some structural similarity, the HPR protocol is exponentially
less efficient for teleporting multiple qubits. Understanding the precise relationship between these protocols thus remains an essential open question.
## I Summary of results
Here, we provide a framework that unifies all known examples of many-body
teleportation outside the gravitational regime. The microscopic mechanism
underlying this framework relies upon the growth of operators under scrambling
time evolution. For the purposes of teleportation, this growth is naturally
characterized by the size distribution of the operators Roberts _et al._
(2018); Qi and Streicher (2019); Qi _et al._ (2019), which provides a finer-
grained measure of quantum information scrambling compared to more
conventional quantities such as out-of-time-ordered correlators (OTOCs)
Shenker and Stanford (2014); Maldacena _et al._ (2016); Larkin and
Ovchinnikov (1969). Our teleportation mechanism—dubbed _peaked-size
teleportation_ —occurs in the TW protocol when the size distributions of
operators are tightly peaked, a situation that arises for a wide variety of
many-body dynamics.
Our results demonstrate that generic many-body dynamics can function as a
powerful encoding procedure for the communication of quantum information. In
particular, peaked-size teleportation represents an entanglement-assisted
decoding scheme that is both exponentially more efficient than the HPR
protocol, and applicable to a vastly broader class of encoding dynamics
compared to gravitational teleportation Bennett _et al._ (1999); Brun _et
al._ (2006). Furthermore, our work paves the way toward utilizing the TW
protocol as a robust experimental tool for characterizing the growth of
operators in strongly interacting systems, and as a litmus test for
identifying intrinsically gravitational scrambling dynamics.
_Analyzing the teleportation circuit_ (Section II)—To set the scene for
peaked-size teleportation, we first determine the conditions for information
recovery in the TW protocol [Fig. 1(a)]. In this protocol, locally encoded
quantum information is inserted into one side of an entangled thermofield
double (TFD) state and teleported to the other side through a combination of
($i$) unitary evolution of each side individually, and ($ii$) a simple two-
sided coupling that acts on a large subsystem of each side. The coupling is
quite flexible in form, and corresponds to evolution, $e^{igV}$, under the
two-sided interaction
$V=\frac{1}{K}\sum_{i=1}^{K}O_{i,l}O^{*}_{i,r}$ (1)
where $O_{i}$ are _any_ generic local operators applied to the left ($l$) and
right ($r$) side of the system. This coupling can be performed either as a
quantum gate, or through local measurements of $O_{i}$ on the left side,
followed by classical communication and feed-forward operations on the right
side [Fig. 1(a)].
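The structure of the coupling in Eq. (1) is simple enough to assemble explicitly for a few qubits. The following is a minimal numerical sketch (the helper `coupling_V` and the choice $O_{i}=Z_{i}$ are illustrative assumptions, not part of the protocol):

```python
import numpy as np

def coupling_V(ops, n):
    """V = (1/K) sum_i O_{i,l} (x) O*_{i,r} for single-qubit operators O_i
    acting on distinct sites i of an n-qubit system [Eq. (1)]."""
    D = 2**n
    V = np.zeros((D * D, D * D), dtype=complex)
    for i, O in enumerate(ops):
        factors = [np.eye(2)] * n
        factors[i] = O
        Oi = factors[0]
        for f in factors[1:]:
            Oi = np.kron(Oi, f)  # embed O at site i of one copy of the system
        V += np.kron(Oi, Oi.conj())
    return V / len(ops)

Z = np.array([[1, 0], [0, -1]], dtype=complex)
V = coupling_V([Z, Z], n=2)          # K = 2 coupled qubits, O_i = Z_i
assert np.allclose(V, V.conj().T)    # V is Hermitian
```

Hermiticity of $V$ guarantees that $e^{igV}$ is unitary, consistent with applying the coupling either as a quantum gate or via measurement and feed-forward.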
We begin by rigorously connecting the quantum teleportation fidelity to two-
sided correlation functions of the form Maldacena _et al._ (2017):
$C_{Q}(t)\equiv\bra{\text{TFD}}Q_{r}(-t)e^{igV}Q_{l}(t)\ket{\text{TFD}}$ (2)
where $Q(\pm t)$ is a time-evolved operator initially acting on the qubit(s)
to be teleported. We isolate two conditions on these correlators that, when
combined together, are necessary and sufficient for teleportation to succeed:
1. The magnitudes of the correlators must be maximal for every $Q$.
2. The phases of the correlators must be the same for every $Q$.
The first of these conditions is naturally satisfied if the TFD state is at
infinite temperature, in which case it reduces to an extensive set of
maximally entangled EPR pairs. On the other hand, meeting the second condition
requires that the coupling acts non-trivially on the operators $Q$.
_Relation to operator spreading_ (Section III)—A central theme of our work is
relating the correlation functions, by virtue of the coupling $V$, to the size
distribution of the time-evolved operator $Q(t)$. Specifically, writing $Q(t)$
as a sum over Pauli strings, $Q(t)=\sum_{R}c_{R}R$, we may define the size
distribution of $Q(t)$ as
$P(\mathcal{S})=\sum_{\mathcal{S}[R]=\mathcal{S}}|c_{R}|^{2}$ (3)
where the sum is over Pauli strings, $R$, of size $\mathcal{S}$. Crucially,
the coupling $V$, by probing correlations between the two sides of the doubled
Hilbert space, directly measures the operator size Qi and Streicher (2019).
_Peaked-size quantum teleportation_ (Section IV, V, VI)—The peaked-size
teleportation mechanism succeeds whenever the size distributions of time-
evolved operators, $Q(t)$, are tightly peaked about their average size. In
this case, the exponentiated coupling, $e^{igV}$, applies approximately the
same phase, proportional to the size, to each coefficient $c_{R}$, and
therefore to the entire operator, $Q(t)$. We show that these applied phases
can be used to align the phases of all correlators $C_{Q}$, achieving our
second requirement for successful teleportation.
Interestingly, we find that peaked-size teleportation is naturally implemented
across a broad landscape of interacting, many-body dynamics. To this end, we
find it helpful to distinguish two broad regimes of dynamics: those at late
times (after the scrambling time) and intermediate times (before the
scrambling time). At late times, peaked-size teleportation occurs because
operators become fully delocalized across the system, thus their size
distribution is peaked about a typical, extensive value. In this regime, as in
the HPR protocol, peaked-size teleportation is limited to transmitting only a
single qubit. Motivated by this observation, we demonstrate a surprising
relation between the HPR and TW protocols—despite their disparate origins, a
simple modification of the coupling in the TW protocol renders it _identical_
to the HPR protocol in the high temperature limit.
| | Peaked-size, late times | Peaked-size, intermediate times | Gravitational, intermediate times
---|---|---|---
Physical setting | all thermalizing systems | generic thermalizing systems, e.g. spin chains, 0D RUCs, high-$T$ SYK | systems with a semiclassical gravity dual (e.g. low-$T$ SYK)
Teleportation time | $t\gtrsim t_{s}$ | $t\lesssim t_{s}$ | $t\lesssim t_{s}$
Channel capacity | $1$ qubit | $\sim K$ qubits | $\sim K$ qubits
Fidelity per qubit | $\sim G_{\beta}$ | $\sim G_{\beta}$ | $\sim 1$
Coupling strength | $g\sim\pi\text{ mod }2\pi$ | $g\sim\pi N/\mathcal{S}\text{ mod }2\pi$ | $g\sim e^{-t}/G_{N}$
Table 1: Comparison of peaked-size teleportation at intermediate and late
times with gravitational teleportation. The three regimes differ in their
teleportation fidelity, channel capacity, and the coupling strength $g$, which
optimizes teleportation. Both peaked-size and gravitational teleportation
exhibit a regime before the scrambling time $t_{s}$, where multiple qubits can
be teleported.
At intermediate times, peaked-size teleportation exhibits substantially richer
behavior. We show that generic systems in $\geq$1D naturally satisfy the
peaked-size condition due to thermalization within the bulk of a time-evolved
operator’s light cone. In contrast, the size distributions of operators in 0D,
or all-to-all coupled, systems are not intrinsically peaked; however, they can
be sufficiently narrowed by encoding the quantum information before insertion
into the teleportation circuit. Interestingly, this resolves a previous
puzzle, namely, the observation of teleportation in the Sachdev-Ye-Kitaev
(SYK) model Sachdev and Ye (1993); Kitaev (2015); Maldacena and Stanford
(2016); Kitaev and Suh (2018) _outside_ the gravitational regime Gao and
Jafferis (2019). We substantiate these claims through extensive numerical and
analytic studies in a variety of physical models: random unitary circuits
(RUCs) in dimensions $d=0,1,$ and $2$ Nahum _et al._ (2018), the SYK model,
and experimentally relevant spin chain Hamiltonians Bernien _et al._ (2017).
These studies explicitly demonstrate how the profile of operator size
distributions in strongly-interacting systems controls the success and failure
of many-body teleportation.
Through these examples, we investigate the capabilities of peaked-size
teleportation as a quantum channel. In particular, we focus on two
features—the channel capacity and the teleportation fidelity—as a function of
evolution time and temperature [Fig. 1(b)]. Before the scrambling time,
peaked-size teleportation is capable of sending multiple qubits with no
increase in complexity; strikingly, it does so with the _same_ asymptotic
channel capacity as gravitational teleportation. This represents an
exponential improvement in efficiency over previous instances of non-
gravitational teleportation, i.e. in the HPR protocol Yoshida and Kitaev
(2017). Nevertheless, the fidelity of peaked-size teleportation is limited at
low temperatures compared to gravity. This limit arises from the reduced
entanglement of the TFD state (at low temperatures), and is naturally overcome
by the unique structure of gravitational scrambling Gao _et al._ (2017);
Maldacena _et al._ (2017); Brown _et al._ (2019); Nezami _et al._ (2021).
_Experimental implementations and implications_ (Section VIII)—Following this
logic, we demonstrate that many-body quantum teleportation functions as a
powerful diagnostic tool for scrambling dynamics in near-term quantum
simulators, enabling one to distinguish between generic thermalizing systems
and gravitational dynamics. To this end, we provide detailed blueprints for
realizing the protocol in two complementary experimental platforms—Rydberg
atom arrays Maller _et al._ (2015); Labuhn _et al._ (2016); Bernien _et
al._ (2017); Graham _et al._ (2019); Madjarov _et al._ (2020); Wilson _et
al._ (2018) and trapped ions Blatt and Wineland (2008); Monroe and Kim (2013);
Ballance _et al._ (2016); Gaebler _et al._ (2016); Cetina _et al._ (2020).
Specifically, the observation of a high teleportation fidelity at low
temperatures in an experimental setting would be a tantalizing indicator of
gravitational scrambling dynamics. In addition, gravitational dynamics exhibit
unique qualitative features as a function of both evolution time and protocol
parameters [Fig. 1(c), Table 1]. More broadly, our analysis suggests that the
TW protocol can also provide insights into many-body dynamics outside the
gravitational regime. In particular, we demonstrate that the fidelity of
peaked-size teleportation serves as an efficient probe of higher moments of
operator size distributions Qi _et al._ (2019).
_Quantum gravity interpretation of peaked-size teleportation_ (Section
VII)—Finally, while we have thus far indicated a sharp distinction between
models with gravitational duals and generic thermalizing systems, this is not
always the case. In particular, our results show that varying the temperature
of the SYK model provides a continuous interpolation between gravitational
teleportation at low temperature and peaked-size teleportation at high
temperature. In the dual picture, perturbing away from the low temperature
limit corresponds to adding _stringy_ corrections to the gravity theory
Shenker and Stanford (2015); Maldacena and Stanford (2016); Gu and Kitaev
(2019). Following this intuition, we show that teleportation in a gravity
theory with strong stringy corrections Maldacena _et al._ (2017) bears a
remarkable qualitative similarity to peaked-size teleportation, thus providing
a first step towards a bulk understanding of this phenomenon.
## II Analyzing the teleportation protocol
To begin, let us describe the teleportation circuit depicted in Fig. 1(a) and
redrawn below in greater detail:
[Circuit diagram: the full teleportation protocol, cf. Fig. 1(a)] (5)
The protocol begins by preparing two copies of the system in an entangled TFD
state. At infinite temperature, the TFD state is simply an extensive set of
EPR pairs between the left and right systems:
[Diagram: $\ket{\text{EPR}}$, an extensive set of EPR pairs between the left and right systems] (7)
where we have decomposed each system into two subsystems, A and
$\bar{\text{A}}$, for future reference, and each dot represents a
normalization factor given by the inverse square root of the subsystem’s
dimension. The finite temperature TFD state is obtained by applying the square
root of the density matrix to either side of $\ket{\text{EPR}}$:
[Diagram: the TFD state, obtained by applying $\rho^{1/2}$ to one side of $\ket{\text{EPR}}$] (9)
where $\rho^{1/2}\equiv e^{-\beta H/2}/\tr(e^{-\beta H})^{1/2}$, and
$\beta=1/T$ is the inverse temperature.
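This construction can be made concrete with a minimal numerical sketch (the randomly drawn Hermitian $H$ below is an arbitrary illustrative choice; any Hamiltonian works):

```python
import numpy as np

# Minimal sketch: |TFD> = sqrt(D) (rho^{1/2} (x) 1) |EPR> on a D-dimensional system
D, beta = 8, 1.0
rng = np.random.default_rng(0)
M = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
H = (M + M.conj().T) / 2                               # random Hermitian "Hamiltonian"

evals, evecs = np.linalg.eigh(H)
rho_half = (evecs * np.exp(-beta * evals / 2)) @ evecs.conj().T
rho_half /= np.sqrt(np.exp(-beta * evals).sum())       # so that tr(rho) = 1

epr = np.eye(D).reshape(D * D) / np.sqrt(D)            # |EPR> = (1/sqrt D) sum_i |i>_l |i>_r
tfd = np.sqrt(D) * (np.kron(rho_half, np.eye(D)) @ epr)
assert np.isclose(np.linalg.norm(tfd), 1.0)            # the TFD state is normalized
```

At $\beta=0$, $\rho^{1/2}=\mathbbm{1}/\sqrt{D}$ and the construction returns $\ket{\text{EPR}}$ itself, as expected.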
The remainder of the protocol proceeds as described in Section I and depicted
in Eq. (5). As shown in the latter, we denote the teleported qubit(s) as
subsystem A and the coupled qubits as subsystem C (chosen independently from
A). In addition, we include a simple decoding operator, $D$, applied at the
end of the circuit before state recovery.
### II.1 Requirements for successful teleportation
We now introduce heuristic arguments for when teleportation succeeds in this
protocol. This will culminate in the two requirements for teleportation listed
in Section I. In Section IV, we derive these conditions more formally by
providing exact relations between the two-sided correlators in Eq. (2) and the
teleportation fidelity.
To begin, let us first recall that a fundamental property of the EPR state is
that an operator acting on the left side is equivalent to its transpose acting
on the right:
[Diagram: the EPR identity $O_{l}\ket{\text{EPR}}=O_{r}^{T}\ket{\text{EPR}}$] (11)
While this property does not generally apply to the TFD state at finite
temperature, it does hold for the unitary $U=e^{-iHt}$ since this commutes
with the density matrix. Using this, we can re-express the teleportation
circuit as:
[Circuit diagram: the teleportation circuit, re-expressed by sliding the unitary across the TFD state] (13)
Next, since teleportation should occur regardless of the state of the swapped-out subsystem, we can imagine measuring this leg to be in some state $\ket{\phi}$ (at infinite temperature, using Eq. (11), $\ket{\phi}$ can be understood as the counterpart of $\ket{\psi}$, to be teleported from right to left instead of left to right):
[Circuit diagram: the teleportation circuit with the swapped-out leg projected onto $\ket{\phi}$] (15)
This reformulation makes it clear that teleportation depends on the action of the coupling on states of the form $Q_{A,l}(t)\ket{\text{TFD}}$, where $Q_{A}=\outerproduct{\psi}{\phi}$ and we denote time-evolved operators as $Q_{A}(t)\equiv UQ_{A}U^{\dagger}$ (traditionally, this would be considered reverse time evolution and denoted $Q_{A}(-t)$; for brevity, we have flipped the sign of $t$ throughout the text).
The following identity, if true for all operators $Q_{A}$ on A, would
guarantee successful teleportation for all states:
[Diagram: the identity equating $e^{igV}Q_{A,l}(t)\ket{\text{TFD}}$, up to a phase $e^{i\theta_{Q}}$, to the decoded operator $\tilde{Q}_{A}$ acting from the right] (17)
Here $\theta_{Q}$ is an overall phase and we represent conjugation by the
decoding operator as $\tilde{Q}_{A}\equiv D^{\dagger}Q_{A}D$. One can verify
this explicitly by plugging the RHS of the above equality into Eq. (15).
To quantify whether this equality holds, we should measure the inner product between the two states. This inner product (for simplicity of notation and consistency with previous works Gao _et al._ (2017); Maldacena _et al._ (2017); Gao and Jafferis (2019), we assume from here on that the unitary is symmetric, $U^{T}=U,\,U^{\dagger}=U^{*}$),
[Diagram: the inner product of Eq. (17), i.e. the two-sided correlator $C_{Q}$ modified by the decoding operator] (19)
is precisely the two-sided correlation function introduced in Eq. (2), now
modified to include the decoding operator. In particular, if the inner product
is maximal for any operator $Q_{A}$, then Eq. (17) always holds and
teleportation succeeds with perfect fidelity for all initial states.
In practice, it is sufficient to evaluate the correlators for a complete basis
of operators on subsystem A (e.g. the Pauli operators). In this case, we now
have _two_ requirements on the operator correlators, as listed in Section I:
($i$) all correlators must have maximal magnitude, i.e. equal to 1, and ($ii$)
all correlators must have the _same phase_ —if two operators both individually
obey Eq. (17) but with different phases, their sum will not.
At infinite temperature, owing to Eq. (11), the first requirement is satisfied
even in the absence of the coupling, for any symmetric or antisymmetric
operator. However, to satisfy the second requirement, the role of the coupling
$e^{igV}$ must be to apply a $Q_{A}$-dependent overall phase. In the following
Section, we analyze the action of the coupling and show precisely when such an
overall phase occurs.
## III Connection to operator size
In this Section, we outline the connection between the coupling $V$ and the
operator size when $V$ acts on states of the form:
$Q_{A,l}(t)\ket{\text{TFD}}=Q_{A,l}(t)\rho^{1/2}_{l}\ket{\text{EPR}}.$ (20)
This connection has been noted in a number of previous works, focusing
primarily on a specific bilinear coupling in fermionic systems Roberts _et
al._ (2018); Qi and Streicher (2019); Lin _et al._ (2019); Susskind (2019);
Lin and Susskind (2019); Brown _et al._ (2019); Nezami _et al._ (2021). In
the following, we extend this notion to bosonic systems and more generic
couplings. We then show that the action of the exponentiated coupling,
$e^{igV}$, is particularly simple—it applies an overall phase—whenever
operator size distributions are tightly peaked.
### III.1 Coupling measures size
In bosonic qudit systems, we define the size of a Pauli string as its number
of non-identity elements. For instance, the Pauli string
$\mathbbm{1}\otimes X\otimes\mathbbm{1}\otimes\mathbbm{1}\otimes Z\otimes
X\otimes\mathbbm{1}$ (21)
has size 3. A more general operator can be written as a sum of Pauli strings,
$R$:
$Q_{A}(t)\rho^{1/2}=\sum_{R}c_{R}(t)R,$ (22)
and possesses a corresponding _size distribution_. We define this
distribution, $P(\mathcal{S})$, as
$P(\mathcal{S})=\sum_{\mathcal{S}[R]=\mathcal{S}}|c_{R}(t)|^{2}.$ (23)
The distribution is normalized,
$\sum_{\mathcal{S}}P(\mathcal{S})=\sum_{R}|c_{R}(t)|^{2}=\tr(\rho)=1.$ (24)
We note that, at finite temperature, the coefficients $c_{R}(t)$ will
generally be complex. The role of their phases is discussed in Section VII.
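These definitions can be checked with a brute-force numerical sketch (the helper `size_distribution` is illustrative: it expands an operator in the Pauli basis and bins $|c_{R}|^{2}$ by string size):

```python
import numpy as np
from itertools import product

PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(labels):
    R = np.array([[1.0 + 0j]])
    for l in labels:
        R = np.kron(R, PAULIS[l])
    return R

def size_distribution(Q, n):
    """P(S) of Eq. (23): expand Q in Pauli strings, c_R = tr(R Q)/2^n,
    and bin |c_R|^2 by the number of non-identity letters."""
    P = np.zeros(n + 1)
    for labels in product("IXYZ", repeat=n):
        c = np.trace(pauli_string(labels) @ Q) / 2**n
        P[sum(l != "I" for l in labels)] += abs(c) ** 2
    return P

# The string 1 (x) X (x) Z has size 2, so P is a delta function at S = 2
Q = pauli_string("IXZ")
P = size_distribution(Q, 3)
assert np.isclose(P[2], 1.0) and np.isclose(P.sum(), 1.0)
```

For a single Pauli string the distribution is a delta function; generic time-evolved operators produce a spread of weights across sizes.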
The operator size is naturally measured in the doubled Hilbert space
associated with the state in Eq. (20). In bosonic qudit systems, the _exact_
measure of the operator size is a sum of individual EPR projectors on each
qudit $i$:
$V_{s}=\frac{1}{N}\sum_{i=1}^{N}P_{\text{EPR},i}=\frac{1}{Nd^{2}}\sum_{i=1}^{N}\sum_{P_{i}}P_{i,l}P_{i,r}^{*},$
(25)
where $d$ is the local qudit dimension, $N$ is the number of qudits, and
$P_{i}$ form a complete basis of single-qudit operators (e.g. for qubits
$P_{i}\in\\{\mathbbm{1},X,Y,Z\\}$). This coupling is clearly of the form used
in the teleportation protocol [Eq. (1)]. Its eigenstates are
$R_{l}\ket{\text{EPR}}$ for Pauli strings $R$, with eigenvalues determined by
the size, $\mathcal{S}[R]$, of the Pauli string:
$V_{s}R_{l}\ket{\text{EPR}}=\bigg{(}1-\frac{\mathcal{S}[R]}{N}\bigg{)}R_{l}\ket{\text{EPR}}.$
(26)
Acting on a general time-evolved operator, we have
$V_{s}Q_{A,l}(t)\ket{\text{TFD}}=d^{N/2}\sum_{R}\bigg{(}1-\frac{\mathcal{S}[R]}{N}\bigg{)}c_{R}(t)R_{l}\ket{\text{EPR}}.$
(27)
The expectation value of $V$ in the state $Q_{A,l}(t)\ket{\text{TFD}}$
therefore measures the _average_ of the size distribution, which we denote as
$\mathcal{S}[Q_{A}\rho^{1/2}]$, or, when the context is clear, simply
$\mathcal{S}$.
We can derive Eq. (26) by first analyzing the action of a single EPR
projector, $P_{\text{EPR},i}$. Writing each Pauli string as a tensor product
of single-qudit Paulis, $R=\bigotimes_{j=1}^{N}R_{j}$, we find
$P_{\text{EPR},i}R_{l}\ket{\text{EPR}}=\delta_{R_{i},\mathbbm{1}}R_{l}\ket{\text{EPR}},$
(28)
since
$\bra{\text{EPR}_{i}}R_{i,l}\ket{\text{EPR}_{i}}=\tr_{i}(R_{i})/d_{i}=\delta_{R_{i},\mathbbm{1}}$.
A single EPR projector thus acts as a binary variable, giving eigenvalue $1$
or $0$ if a given Pauli string is, or is not, the identity on the designated
qudit. The full coupling is a sum of these binary variables over all qudits,
and therefore counts the total number of non-identity elements in the Pauli
string, i.e. the operator size.
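Eq. (26) can be verified directly for a few qubits; the following sketch builds $V_{s}$ from Eq. (25) and checks its eigenvalue on a size-2 Pauli string (the system size and choice of string are arbitrary illustrative choices):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(P, i, n):
    """Embed the single-qubit operator P at site i of an n-qubit system."""
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, P if j == i else I2)
    return out

n, D = 3, 2**3
epr = np.eye(D).reshape(D * D) / np.sqrt(D)

# V_s = (1/(n d^2)) sum_i sum_P P_{i,l} P*_{i,r}  [Eq. (25), qubits: d = 2]
V = np.zeros((D * D, D * D), dtype=complex)
for i in range(n):
    for P in (I2, X, Y, Z):
        V += np.kron(site_op(P, i, n), site_op(P.conj(), i, n))
V /= n * 4

# A Pauli string of size S = 2 acting on the left copy: R = X (x) Z (x) 1
R_left = np.kron(site_op(X, 0, n) @ site_op(Z, 1, n), np.eye(D))
state = R_left @ epr
assert np.allclose(V @ state, (1 - 2 / n) * state)  # eigenvalue 1 - S/N, Eq. (26)
```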
We now turn to more general couplings of the form in Eq. (1). First, as a
trivial but useful modification, we can remove the identity operators from
$V_{s}$. These constitute a fraction $1/d^{2}$ of the complete basis, $P_{i}$,
summed in Eq. (25). Removing these terms simply renormalizes the eigenvalues
of the coupling:
$\begin{split}\bigg{(}\frac{1}{N(d^{2}-1)}&\sum_{i=1}^{N}\sum_{P_{i}\neq\mathbbm{1}}P_{i,l}P_{i,r}^{*}\bigg{)}R_{l}\ket{\text{EPR}}\\\
&=\bigg{[}1-\bigg{(}\frac{1}{1-1/d^{2}}\bigg{)}\frac{\mathcal{S}[R]}{N}\bigg{]}R_{l}\ket{\text{EPR}},\end{split}$
(29)
where the left side sum is now over $N(d^{2}-1)$ non-identity operators.
Second, we consider omitting some of the non-identity operators $P_{i}$ at each site. Intuitively, under thermalizing dynamics, if an operator has spread to some qudit $i$, it should not matter which Pauli operator we use to probe the operator’s presence. For example, for qubits, a typical Pauli string of $Q_{A}(t)\rho^{1/2}$ is equally likely to commute with $Z_{i}$ as with $X_{i}$ or $Y_{i}$; thus, coupling only through $Z_{i}$ operators suffices to measure a thermalized operator’s size.
Third, we expect even more general couplings, composed of $O_{i}$ that are
local but not necessarily Pauli operators, to behave similarly. Specifically,
each individual coupling, $O_{i,l}O_{i,r}$, will asymptote to two different
expectation values before and after the time-evolved operator has spread to
the support of $O_{i}$. Before, the coupling will maintain its expectation
value in the unperturbed TFD state,
$\text{tr}(O_{i}\rho^{1/2}O^{\dagger}_{i}\rho^{1/2})$. After, the spread of
$Q_{A}(t)$ will disrupt the two-sided correlations in the TFD state that give
rise to this initial expectation value, and the coupling will instead
asymptote to its value in two thermal states,
$\tr(O_{i}\rho)\cdot\tr(O_{i}\rho)$. As before, the sum of many terms, each
behaving as above, leads to an approximate measure of operator size.
Lastly, we consider the case where the coupling is restricted to act only on some subsystem C, consisting of $K$ qudits (for simplicity, we assume a single coupling per qudit in C). The coupling now measures the number of non-identity elements of a Pauli string within C—we denote this as the $K$-_size_, $\mathcal{S}_{K}$, of the Pauli string. The eigenvalues of the coupling are the same as those in Eq. (29), with the replacement $\mathcal{S}/N\rightarrow\mathcal{S}_{K}/K$. For a typical Pauli operator, we
expect the $K$-size distribution of an operator to be similar to its full size
distribution when $K$ is large and the coupled qubits are distributed
randomly. In particular, we expect the average $K$-size, $\mathcal{S}_{K}$, to
be proportional to the average size, $\mathcal{S}$,
$\frac{\mathcal{S}_{K}}{K}\approx\frac{\mathcal{S}}{N}.$ (30)
For simplicity, we will make this substitution, when appropriate, in the
remainder of the work. However, if $C$ is a spatially local subsystem (instead
of a random subsystem), then this replacement will be modified depending on
the spatial extent of the operator.
As a final remark, we note that the operator size distribution is directly
related to out-of-time-order correlators (OTOCs), a more familiar quantity for
probing operator growth Shenker and Stanford (2014); Maldacena _et al._
(2016); Larkin and Ovchinnikov (1969); Qi and Streicher (2019). In particular,
the average size is equal to a sum of OTOCs between $Q_{A}$ and $O_{i}$
Roberts _et al._ (2018); Qi and Streicher (2019),
[Diagram: the average size $\mathcal{S}$ expressed as a sum of OTOCs between $Q_{A}$ and the $O_{i}$] (32)
To see this, one simply slides $O_{i}^{*}$ and each $\sqrt{\rho}$ to the left
side, and uses the fact that
$\bra{\text{EPR}}A_{l}\ket{\text{EPR}}=\tr(A)/d^{N}$. Higher moments of the
size distribution can also be probed by OTOCs, now between $Q_{A}$ and various
products of the $O_{i}$, e.g. $O_{i}O_{j}$ for the size width. We discuss
these relations further, paying particular attention to subtleties that arise at finite
temperature, in Section VII.
### III.2 Peaked-size distributions
We now turn to the action of the exponentiated coupling, $e^{igV}$. This
applies a phase to each Pauli string of $Q_{A}(t)\rho^{1/2}$ determined by the
string’s size.
A particularly simple situation occurs when the operator’s size distribution
is _tightly peaked_ about its average size. In this regime, each Pauli string
gains approximately the same phase, and so the action of the coupling reduces
to applying a $Q_{A}$-dependent overall phase,
$e^{igV}Q_{A,l}(t)\ket{\text{TFD}}\approx
e^{ig\expectationvalue{V}_{Q}}Q_{A,l}(t)\ket{\text{TFD}},$ (33)
where the applied phase is proportional to the average $K$-size [see Eq.
(29)],
$\begin{split}g\expectationvalue{V}_{Q}&=g\bra{\text{TFD}}Q^{\dagger}_{A,l}(t)VQ_{A,l}(t)\ket{\text{TFD}}\\\
&\approx g-g_{d}\frac{\mathcal{S}_{K}[Q_{A}(t)\rho^{1/2}]}{K},\\\ \end{split}$
(34)
and we define $g_{d}\equiv g/(1-1/d^{2})$ for convenience.
Corrections to this behavior are controlled by higher moments of the size distribution. In particular, the leading-order correction is equal to the $K$-size variance, $\delta\mathcal{S}_{K}^{2}/K^{2}=\expectationvalue{V^{2}}_{Q}-\expectationvalue{V}_{Q}^{2}$, multiplied by $g^{2}/2$:
$\begin{split}\expectationvalue{e^{igV}}_{Q}=&\expectationvalue{1+igV-\frac{1}{2}g^{2}V^{2}+\ldots}_{Q}\\\
=&\bigg{(}1+ig\expectationvalue{V}_{Q}-\frac{1}{2}g^{2}\expectationvalue{V}_{Q}^{2}+\ldots\bigg{)}\\\
&-\frac{1}{2}g^{2}\bigg{(}\expectationvalue{V^{2}}_{Q}-\expectationvalue{V}_{Q}^{2}\bigg{)}+\ldots\\\
=&\exp\big(ig\expectationvalue{V}_{Q}\big)-\frac{1}{2}g^{2}\,\delta\mathcal{S}_{K}^{2}/K^{2}+\ldots\end{split}$
(35)
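The leading-order suppression in Eq. (35) can be checked by sampling a narrow size distribution (a Gaussian profile, with mean 50 and width 2 on $N=100$ sites, is assumed here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
g, N = 0.3, 100
sizes = rng.normal(loc=50.0, scale=2.0, size=400000)  # tightly peaked size samples
v = 1 - sizes / N                                     # eigenvalues of V, cf. Eq. (26)

exact = np.mean(np.exp(1j * g * v))                   # <e^{igV}>_Q
peaked = np.exp(1j * g * np.mean(v))                  # e^{ig<V>_Q}, the peaked-size phase
# Leading correction: the magnitude is reduced by ~ exp(-(g^2/2) Var(V))
assert abs(exact - peaked * np.exp(-0.5 * g**2 * np.var(v))) < 1e-4
```

For a narrow distribution the correction is tiny, so the exponentiated coupling indeed acts as an overall phase to excellent approximation.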
The $K$-size variance receives contributions from two sources: the variance of
the full size distribution, $\delta\mathcal{S}^{2}$, and a statistical error
from sampling only $K$ of $N$ qubits for the $K$-size. If the $K$ qubits are
distributed randomly, these errors scale as
$\delta\mathcal{S}_{K}\sim\delta\mathcal{S}\cdot(K/N)$ and
$\delta\mathcal{S}_{K}\sim\sqrt{\mathcal{S}_{K}}\approx\sqrt{\mathcal{S}K/N}$,
respectively (see Appendix E for a detailed derivation of the latter). These
are small compared to the average $K$-size whenever
$\delta\mathcal{S}\ll\mathcal{S}$ and $1\ll\mathcal{S}_{K}$.
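The statistical scaling $\delta\mathcal{S}_{K}\sim\sqrt{\mathcal{S}K/N}$ can be illustrated with a simple sampling experiment (the model below, a Pauli string with exactly $\mathcal{S}$ randomly placed non-identity sites, is an illustrative assumption; the scaling holds up to $(1-\mathcal{S}/N)(1-K/N)$ corrections):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, S, trials = 1000, 50, 50, 20000
k_sizes = np.empty(trials)
for t in range(trials):
    # a Pauli string with exactly S non-identity sites, placed at random
    occupied = np.zeros(N, dtype=bool)
    occupied[rng.choice(N, size=S, replace=False)] = True
    # K-size: non-identity sites among K randomly chosen coupled qubits
    k_sizes[t] = occupied[rng.choice(N, size=K, replace=False)].sum()

assert abs(k_sizes.mean() - S * K / N) < 0.1          # <S_K> ~= S K / N
assert abs(k_sizes.std() - np.sqrt(S * K / N)) < 0.2  # dS_K ~ sqrt(S K / N) for S, K << N
```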
In Appendix A, we go beyond these leading order corrections and provide
quantitative bounds on when the peaked-size approximation in Eq. (33) is
valid. In general, we can strictly prove that this approximation holds
whenever there is a parametric separation between an asymptotic size width,
defined in the Appendix, and the average size.
## IV Peaked-size teleportation
We are now ready to demonstrate how the coupling leads to teleportation of a
quantum state when size distributions are tightly peaked. We first demonstrate
this in its simplest context: teleportation of a single qubit at infinite
temperature. We then show that the fidelity of peaked-size teleportation is
generally suppressed at finite temperature. For ease of reading, we relegate
rigorous results supporting each of the above arguments to the end of the
Section.
### IV.1 Single-qubit teleportation
To analyze teleportation of a single qubit, we turn to the two-sided
correlators in Eq. (19), with $Q_{A}\in\\{\mathbbm{1},X,Y,Z\\}$ running over
the single-qubit Pauli operators. We recall that the requirements for
teleportation are for all $C_{Q}$ to have ($i$) maximal magnitude and ($ii$)
the same phase.
The first requirement is naturally satisfied at infinite temperature even _before coupling and decoding_, but the second is not. In particular, the four correlators with $D=\mathbbm{1},\,g=0$ are:
$Q_{A}$ | $C_{Q}$ $(D=\mathbbm{1},\;g=0)$
---|---
$\mathbbm{1}$ | $+1$
$X$ | $+1$
$Y$ | $-1$
$Z$ | $+1$
where the left entries are qubit operators, $Q_{A}$, and the right entries are
the correlators, $C_{Q}$. The correlators have maximal magnitude because each
operator can be transferred perfectly from left to right using Eq. (11).
However, the $Y$ operator picks up an overall minus sign during this process,
since $Y^{T}=-Y$, and so the correlator phases are not aligned. One can verify
the resulting teleportation fidelity is indeed trivial. Our goal will be to
show that the action of the coupling in Eq. (33), as well as a simple decoding
operation, are sufficient to align the four phases.
To begin, we assume that all time-evolved Pauli operators have a tightly
peaked size distribution and that the average size $\mathcal{S}$ is the same
for all non-identity operators. From Eqs. (33-34), we have that the coupling
applies a total phase difference $g_{d}\mathcal{S}/N$ between the thermofield
double state (the identity operator; size zero) and all perturbed states
(time-evolved Pauli operators; size $\mathcal{S}$). Our table of correlator
phases is thus modified to:
$Q_{A}$ | $C_{Q}$ $(D=\mathbbm{1},\;g=0)$ | $C_{Q}$ $(D=\mathbbm{1},\;g\neq 0)$
---|---|---
$\mathbbm{1}$ | $+1$ | $e^{ig_{d}\mathcal{S}/N}$
$X$ | $+1$ | $+1$
$Y$ | $-1$ | $-1$
$Z$ | $+1$ | $+1$
We again do not achieve perfect phase alignment. However, we can now correct
the misaligned phases using the decoding operator, $D=Y$. This applies an
additional minus sign to the $X$ and $Z$ correlators:
$Q_{A}$ | $C_{Q}$ $(D=\mathbbm{1},\;g=0)$ | $C_{Q}$ $(D=\mathbbm{1},\;g\neq 0)$ | $C_{Q}$ $(D=Y,\;g\neq 0)$
---|---|---|---
$\mathbbm{1}$ | $+1$ | $e^{ig_{d}\mathcal{S}/N}$ | $e^{ig_{d}\mathcal{S}/N}$
$X$ | $+1$ | $+1$ | $-1$
$Y$ | $-1$ | $-1$ | $-1$
$Z$ | $+1$ | $+1$ | $+1$
The correlator phases are now aligned whenever
$g_{d}\,\frac{\mathcal{S}}{N}=\pi\text{ mod }2\pi,$ (36)
leading to perfect teleportation at these values.
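As an illustrative check (not part of the original analysis), the phase bookkeeping above can be reproduced numerically: we tabulate the phase $\theta_{Q}$ for each of the four single-qubit Paulis and evaluate the fidelity of Eq. (46). A minimal sketch in Python, assuming a tightly peaked size $\mathcal{S}$ so the coupling contributes a single relative phase $g_{d}\mathcal{S}/N$ between the identity and the non-identity operators:

```python
import numpy as np

def epr_fidelity(phases):
    # Eq. (46): F_EPR = |(1/d_A^2) sum_Q e^{i theta_Q}|^2 over d_A^2 = 4 Paulis
    return abs(np.mean(np.exp(1j * np.array(phases)))) ** 2

def correlator_phases(coupling_phase, decode_with_Y=True):
    # theta_Q for Q in {I, X, Y, Z}:
    #  - transposition: Y^T = -Y contributes pi to the Y phase
    #  - decoding D = Y anticommutes with X and Z, contributing pi to each
    #  - the coupling shifts the identity (size 0) relative to size-S operators
    phases = {"I": coupling_phase, "X": 0.0, "Y": np.pi, "Z": 0.0}
    if decode_with_Y:
        phases["X"] += np.pi
        phases["Z"] += np.pi
    return list(phases.values())

print(epr_fidelity(correlator_phases(0.0, decode_with_Y=False)))  # 0.25 (trivial)
print(epr_fidelity(correlator_phases(np.pi)))                     # 1.0 (perfect)
```

Without coupling or decoding the phases reproduce the trivial table above; with $D=Y$ the fidelity reaches unity exactly at $g_{d}\mathcal{S}/N=\pi$, reproducing Eq. (36).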
### IV.2 Finite temperature teleportation
There are two important modifications to peaked-size teleportation at finite
temperature. First, the relevant notion of operator size is modified Qi and
Streicher (2019). In particular, in the peaked-size regime, the difference in
phase applied between the identity and non-identity Pauli operators is
modified to
$\mathcal{S}[Q_{A}(t)]\rightarrow\mathcal{S}[Q_{A}(t)\rho^{1/2}]-\mathcal{S}[\rho^{1/2}].$
(37)
Second, the maximal fidelity of peaked-size teleportation is reduced at finite
temperature. In particular, when sizes are tightly peaked, the two-sided
correlators factorize into a constant magnitude multiplied by an overall phase:
$\begin{split}C_{Q}&=\bra{\text{TFD}}\tilde{Q}^{\dagger}_{A,r}Q_{A,l}\ket{\text{TFD}}e^{i(g-g_{d}\mathcal{S}_{K}[Q_{A}(t)\rho^{1/2}]/K)}\\\
&=G_{\beta}(Q_{A})\cdot e^{i\theta_{Q}}\\\ \end{split}$ (38)
where $\theta_{Q}$ combines the effects of transposition, coupling, and
decoding, and the correlator magnitude corresponds to an imaginary-time
Green’s function,
$G_{\beta}(Q_{A})\equiv\text{tr}(Q_{A}^{\dagger}\,\rho^{1/2}Q_{A}\,\rho^{1/2})\leq
1.$ (39)
This Green’s function is unity at infinite temperature and generically
decreases at finite temperatures, due to the reduced entanglement of the TFD
state. This violates the maximal magnitude requirement for teleportation, and
therefore leads to a corresponding decrease in the teleportation fidelity.
The astute reader will recall that finite temperature teleportation is known
to succeed with higher fidelities than $G_{\beta}$ in the case of
gravitational teleportation Gao _et al._ (2017); Maldacena _et al._ (2017);
Gao and Jafferis (2019); this is a signature of physics outside the peaked-
size regime, which we connect to in Section VII.
### IV.3 Rigorous expressions for teleportation fidelity
We now derive formal expressions of the teleportation fidelity for $n$
teleported qubits as a function of the correlator phases. To do so, we
consider a variant of the protocol where instead of teleporting a quantum
state we attempt to distill an EPR pair:
[Circuit diagram: the EPR-distillation variant of the teleportation protocol.] (40)
The fidelity of EPR distillation is precisely related to the average fidelity
of state teleportation Yoshida and Yao (2019),
$F_{\text{EPR}}=[(d_{A}+1)\langle F_{\psi}\rangle-1]/d_{A}$, where
$d_{A}=2^{n}$ is the dimension of subsystem A when teleporting $n$ qubits.
We calculate the teleportation fidelity by Pauli decomposing the SWAP operator
as $\text{SWAP}=\sum_{Q_{A}}Q_{A}\otimes Q_{A}^{\dagger}/d_{A}$. This gives:
[Circuit diagram: $F_{\text{EPR}}$ with the SWAP operator Pauli-decomposed], (42)
or
$\begin{split}F_{\text{EPR}}=&\frac{1}{d_{A}^{4}}\sum_{Q_{1},Q_{2}}\bra{\text{TFD}}Q_{2,l}(t)\,e^{-igV}\,\tilde{Q}_{2,r}^{\dagger}(-t)\\\
&\times\tilde{Q}_{1,r}^{\dagger}(-t)\,e^{igV}\,Q_{1,l}(t)\ket{\text{TFD}}.\end{split}$
(43)
Similar expressions for teleportation of quantum states are contained in
Appendix C.
In general, the teleportation fidelity and two-sided correlators are related
only by a lower bound (under special circumstances, namely large-$N$ models,
one may be able to factorize the above expression in terms of correlators of
the form Eq. (19) Gao and Jafferis (2019)):
$F_{\text{EPR}}\geq\bigg{|}\frac{1}{d_{A}^{2}}\sum_{Q_{A}}C_{Q}\bigg{|}^{2}.$
(44)
This is obtained diagrammatically by inserting an identity,
$\mathbbm{1}=D_{r}U_{r}U_{r}^{\dagger}D_{r}^{\dagger}$, followed by the
projector $\outerproduct{\text{TFD}}{\text{TFD}}$, into the center of Eq.
(42):
[Circuit diagram: Eq. (42) with the identity $\mathbbm{1}=D_{r}U_{r}U_{r}^{\dagger}D_{r}^{\dagger}$ and the projector $\outerproduct{\text{TFD}}{\text{TFD}}$ inserted, yielding the correlator lower bound.] (45)
A similar bound was obtained in Ref. Brown _et al._ (2019); Nezami _et al._
(2021), conditional on certain assumptions about operators’ size
distributions.
At infinite temperature _in the peaked-size regime_ , we have
$C_{Q}=e^{i\theta_{Q}}$ and the fidelity is equal to the lower bound:
$\begin{split}F_{\text{EPR}}=\frac{1}{d_{A}^{4}}\sum_{Q_{1},Q_{2}}e^{i(\theta_{Q_{1}}-\theta_{Q_{2}})}=\bigg{|}\frac{1}{d_{A}^{2}}\sum_{Q_{A}}e^{i\theta_{Q}}\bigg{|}^{2}.\\\
\end{split}$ (46)
The sum is over $d_{A}^{2}$ terms, and is unity only when all the operators’
phases are the same. At finite temperature in the peaked-size regime, we
instead find
$\begin{split}F_{\text{EPR}}&=\frac{1}{d_{A}^{4}}\sum_{Q_{1},Q_{2}}e^{i(\theta_{Q_{1}}-\theta_{Q_{2}})}\tr(Q_{2}^{\dagger}Q_{1}^{\dagger}\,\rho^{1/2}Q_{1}Q_{2}\,\rho^{1/2})\\\
&\leq\frac{1}{d_{A}^{2}}\sum_{Q_{A}}G_{\beta}(Q_{A}).\end{split}$ (47)
The maximum fidelity is again achieved when the phases align; however, its
value is equal to the sum of various imaginary time Green’s functions, i.e.
the correlator magnitudes [Eq. (39)].
## V Late times
We now introduce the simplest physical example of peaked-size teleportation:
teleportation in any scrambling system after the scrambling time. There are
two distinguishing features of this regime: ($i$) the circuit can only
teleport a single qubit, i.e. the channel capacity is one, and ($ii$) as for
all peaked-size teleportation, the teleportation fidelity is suppressed at low
temperatures. We also demonstrate that this regime of peaked-size
teleportation, as well as the full quantum circuit implementing the TW
protocol, are equivalent to HPR teleportation of a single qubit. In Section
VI, we will demonstrate that the limited late time channel capacity can be
overcome at intermediate times in many scrambling systems.
### V.1 Teleportation at late times
At late times, the dynamics of a scrambling system can be approximated by a
Haar random unitary Hayden and Preskill (2007); Roberts and Yoshida (2017). In
this case, each time-evolved operator, $Q_{A}(t)$, becomes a sum of random
Pauli strings, each with probability $1/d^{2}$ to be the identity at any
individual site. As a result, time-evolved operators have an average size,
$\mathcal{S}\approx(1-1/d^{2})N,$ (48)
and a size width,
$\delta\mathcal{S}\sim\sqrt{N},$ (49)
where the scaling is based on the central limit theorem. The $K$-size
distribution takes the same form, replacing $N$ with $K$, and is tightly
peaked as long as $K$ is large (specifically, $g\delta\mathcal{S}_{K}/K\approx
g/\sqrt{K}\ll 1$).
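The scalings of Eqs. (48-49) can be checked with a quick Monte Carlo sketch (an illustration, not the paper's numerics): sample random Pauli strings in which each site is independently non-identity with probability $1-1/d^{2}$, and measure the mean and width of the resulting size distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 400, 20000

# At late times, a scrambled operator behaves like a random Pauli string:
# each site is non-identity with probability 1 - 1/d^2 = 3/4 for qubits.
sizes = rng.binomial(N, 3 / 4, size=samples)   # size = # non-identity sites

mean, width = sizes.mean(), sizes.std()
print(mean / N)   # ~ 3/4, matching S ≈ (1 - 1/d^2) N  [Eq. (48)]
print(width)      # ~ sqrt(3N/16), i.e. O(sqrt(N))     [Eq. (49)]
```

The width here is just the binomial standard deviation $\sqrt{N\cdot\frac{3}{4}\cdot\frac{1}{4}}$, consistent with the central-limit scaling quoted above.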
For simplicity, we will focus on late time teleportation at infinite
temperature; finite temperature modifications follow according to Section
IV.2. Using Eqs. (33-34), we find that the coupling applies a relative phase
$e^{ig}$ between the identity operator (size zero) and all non-identity Pauli
operators (size above) Maldacena _et al._ (2017):
$\begin{split}e^{igV}\ket{\text{EPR}}&=e^{ig}\ket{\text{EPR}}\\\
e^{igV}Q_{A,l}(t)\ket{\text{EPR}}&=Q_{A,l}(t)\ket{\text{EPR}}.\end{split}$
(50)
The lack of an applied phase for non-identity Pauli operators corresponds to
the vanishing of $\expectationvalue{V}_{Q}$ at late times, when OTOCs have
decayed to zero [see Eq. (34)]. From Section IV.1, we see that whenever
$g=\pi\text{ mod }2\pi,$ (51)
single-qubit teleportation succeeds.
A brief argument shows that late time teleportation of higher-dimensional
quantum states is not possible. Consider teleportation of a $d$-dimensional
qudit, with a basis of states $\ket{i}$, $i=0,\ldots,d-1$. The qudit Pauli
operators are generated by the ‘clock’ and ‘shift’ operators:
$Z\ket{i}=e^{i\omega i}\ket{i}$, with $\omega=2\pi/d$, and $X\ket{i}=\ket{i+1\,\text{mod}\,d}$.
The two generators obey the commutation relation, $XZ=e^{-i\omega}ZX$. After
transposition, each Pauli operator, $X^{p}Z^{q}$, becomes
$(X^{p}Z^{q})^{T}=Z^{T,q}X^{T,p}=Z^{q}X^{-p}=e^{-ipq\omega}X^{-p}Z^{q}.$ (52)
Meanwhile, late time dynamics ensure that the coupling applies an overall
phase only to the identity operator. For teleportation to be successful, we
would therefore require a decoding operation, $D$, that acts as
$DX^{-p}Z^{q}D^{\dagger}\sim X^{p}Z^{q}$. Suppose there was such an operator,
and consider its action on the generators: $DXD^{\dagger}=X^{-1}$ and
$DZD^{\dagger}=Z$. The above action implies that commuting the two generators
gives a different phase before and after decoding:
$DXZD^{\dagger}=e^{-i\omega}DZXD^{\dagger}=e^{-i\omega}ZX^{-1}$ and
$DXZD^{\dagger}=X^{-1}Z=e^{+i\omega}ZX^{-1}$. This is a contradiction whenever
$e^{+i\omega}\neq e^{-i\omega}$, i.e. whenever $d>2$.
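The contradiction above is easy to verify explicitly with the clock and shift matrices. A short sketch for $d=3$ (illustrative only):

```python
import numpy as np

d = 3
omega = 2 * np.pi / d
# Qudit clock and shift generators: Z|i> = e^{i omega i}|i>, X|i> = |i+1 mod d>
Z = np.diag(np.exp(1j * omega * np.arange(d)))
X = np.roll(np.eye(d), 1, axis=0)

# Commutation relation: X Z = e^{-i omega} Z X
assert np.allclose(X @ Z, np.exp(-1j * omega) * (Z @ X))

# A decoder with D X D^† = X^{-1}, D Z D^† = Z would have to send
# XZ -> e^{-i omega} Z X^{-1} (commuting first, then decoding each factor),
# but directly, X^{-1} Z = e^{+i omega} Z X^{-1}. The two disagree for d > 2:
Xinv = np.linalg.inv(X)
lhs = np.exp(-1j * omega) * (Z @ Xinv)
rhs = Xinv @ Z
print(np.allclose(lhs, rhs))  # False for d = 3: no such decoder exists
```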
### V.2 Equivalence to HPR protocol
We now turn to the equivalence between peaked-size teleportation and
teleportation in the HPR protocol. The latter was originally introduced to
recover information in the Hayden-Preskill thought experiment Hayden and
Preskill (2007); Yoshida and Kitaev (2017), and is reviewed in detail in
Appendix B.
Here, we restrict our attention to teleportation in the deterministic variant
of the protocol, of a single qubit at infinite temperature Yoshida and Kitaev
(2017); Landsman _et al._ (2019a). The protocol takes the form:
[Circuit diagram: the deterministic HPR protocol for teleporting a single qubit.] (54)
where $P_{\text{EPR}}$ projects onto an EPR pair between subsystems C on the
left and right sides.
The equivalence between this protocol and the TW protocol [Eq. (5)] is
manifest, with the only difference being the locality of the coupling.
Specifically, the HPR coupling is of the same general form as the TW coupling
[Eq. (1)]:
$gV\equiv\pi
P_{\text{EPR}}=\frac{\pi}{d_{C}^{2}}\sum_{P_{C}}P_{C,l}\,P^{*}_{C,r},$ (55)
where the sum is over a complete basis of $d_{C}^{2}$ Pauli operators on C.
However, the operators $P_{C}$ are typically non-local across C, whereas the
coupling considered in the TW protocol was restricted to local operators. As a
consequence, the HPR coupling functions as a binary variable measuring whether
or not an operator has support on subsystem C (see Section III). In contrast,
the TW coupling measures the operator size within C, which takes an
approximately continuous range of values when C is large. Crucially, at late
times under scrambling dynamics, the effect of both couplings will be the
same: to apply an overall phase to non-identity operators.
A few additional remarks are in order. First, while the leading order effect
of the HPR and TW couplings is the same, they lead to different finite-size
corrections. In particular, in a fully scrambled system, the variance in the
phases applied by the HPR coupling is equal to the probability of a random
Pauli string not having support on C, which is suppressed exponentially in the
size of C, i.e. $1/d_{C}^{2}$. On the other hand, the variance in phases
applied by the TW coupling is suppressed only polynomially, by $\sim g^{2}/K$
[Eq. (49)]. These enhanced phase fluctuations are relevant for finite-size
implementations of the TW protocol, as discussed further in Section VIII.
Second, it has previously been shown that an extended version of the HPR
protocol allows for teleportation of _multiple_ qubits at late times Yoshida
and Kitaev (2017). Because of the equivalence between the protocols, this
extension would also allow for multi-qubit teleportation via the peaked-size
mechanism. However, the enhanced channel capacity comes with a trade-off: the
circuit complexity (measured by the number of applications of the unitary $U$)
grows exponentially in the number of qubits to be teleported. As we will see
in the following Section, this limitation can be overcome by peaked-size
teleportation at intermediate times (in the original TW protocol), owing to
the locality of the TW coupling.
## VI Intermediate times
Figure 2: Operator sizes and peaked-size teleportation in 1D, 2D, and 0D
RUCs. (a-b) In 1D and 2D RUCs, sizes grow ballistically in time, while the
size width grows with a slower power of $t$ and matches predictions from the
KPZ universality class. For $\mathcal{O}(1)$ values of the coupling strength
$g$, this leads to an oscillatory fidelity at all intermediate times, and
saturation to $1$ at late times (for teleportation of a single qubit and odd
values of $g$). (c) In contrast, in 0D all-to-all coupled RUCs, both the size
and size width grow exponentially in time, although they can be parametrically
separated using a $p$-body encoding. The teleportation fidelity reflects this
growth, displaying a distinct three-regime profile for $g\gg 1$. At early
times and late times, peaked-size teleportation succeeds and the fidelity
oscillates as in 1D and 2D. Between these, no teleportation occurs because the
size width has grown too large, $g\delta\mathcal{S}/N\gtrsim 1$.
We now turn to analyze the behavior of peaked-size teleportation at
intermediate times, i.e. before the scrambling time. In this regime, multiple
qubits can be teleported given a certain condition on the growth of time-
evolved operators, namely when the overlap of their size distributions is
sufficiently small.
We explicitly demonstrate that this condition is satisfied, and multi-qubit
teleportation is possible, in a wide variety of physical systems at infinite
temperature. These include random unitary circuits (RUCs) in $\geq$1D, for
which peaked sizes naturally occur due to local thermalization within each
operator’s light cone, and time-evolved operators are non-overlapping due to
spatial locality. More surprisingly, we show that multi-qubit peaked-size
teleportation can also be achieved in ‘fast scrambling’, all-to-all coupled
systems, including 0D random unitary circuits and the SYK model (at infinite
temperature) Hayden and Preskill (2007); Sekino and Susskind (2008). In this
case, operators are _not_ spatially separated at any nonzero time;
nonetheless, the overlap of their size distributions remains
_probabilistically_ small at sufficiently early times.
Finally, we consider the channel capacity—i.e. the maximum number of qubits
that can be teleported—of peaked-size teleportation in all-to-all coupled
systems. This is an essential question for comparing the capabilities of
peaked-size teleportation with those of gravitational teleportation in
traversable wormholes Maldacena _et al._ (2017). Remarkably, we provide
analytic and numerical evidence that the channel capacity of peaked-size
teleportation in 0D RUCs, a quite simple microscopic system, is asymptotically
equivalent to that of the gravitational mechanism!
### VI.1 Multi-qubit teleportation: additive operator sizes
We begin with a few simple examples of multi-qubit teleportation to build
intuition. First, consider a unitary $U$ that factorizes as
$U=U_{1}\otimes\cdots\otimes U_{n}$, where each $U_{i}$ acts on a disjoint
subsystem. If we insert $n$ qubits individually into the $n$ different
subsystems, then the entire protocol decouples into $n$ independent channels
and there is no restriction on sending multiple qubits. This trivial example
relies on the fact that $U$ does not scramble information across the entire
system but only within each disjoint subsystem.
A similar situation occurs even when the dynamics are not factorizable, as
long as the teleported qubits are in causally separated regions. For example,
consider a $(d\geq 1)$-dimensional system with short-range interactions, where
the inserted qubits are spatially separated. At intermediate times, the time-
evolved qubit operators will have support within a local ‘light cone’ about
their initial location, but will continue to act on disjoint subsystems. This
scenario is therefore no different from the previous example and multi-qubit
teleportation remains possible, as long as ($i$) the size distributions of
each operator is tightly peaked, ($ii$) the coupling $V$ has support within
each qubit’s light cone, and ($iii$) the light cones of each qubit are non-
overlapping. This final requirement constrains the number of qubits that can
be sent at a given time $t$. In particular, the light cone of each operator
will have a radius $v_{B}t$ where $v_{B}$ is the butterfly velocity. The
maximum number of non-overlapping light cones—equal to the total number of
qubits $n$ that can be teleported—is therefore $n\lesssim N/(v_{B}t)^{d}$,
where $N$ is the total system volume.
More formally, we can analyze the success of $n$-qubit teleportation using the
two-sided correlators, $C_{Q}$. We are concerned with $n$-qubit operators
$Q(t)=Q_{1}(t)\ldots Q_{n}(t)$, where each $Q_{i}\in\\{I,X,Y,Z\\}$ is a
single-qubit Pauli on the $i^{\text{th}}$ teleported qubit. We work at
infinite temperature and assume that sizes are tightly peaked. Teleportation
therefore succeeds whenever all correlators have the same phase.
Inspired by the example of $n$ decoupled protocols, we will take the decoding
operator to be the tensor product, $D=Y\otimes\ldots\otimes Y$. The
combination of transposition and conjugation by $D$ thus applies a minus sign
to every single-qubit non-identity Pauli operator. An additional phase is
applied by coupling proportional to the size of each operator. For example,
for $n=2$ qubits, we have:
$\mathbbm{1}\otimes\mathbbm{1}$ | $1$
---|---
$Q_{1}\otimes\mathbbm{1}$ | $-1\times e^{ig_{d}\mathcal{S}_{1}/N}$
$\mathbbm{1}\otimes Q_{2}$ | $-1\times e^{ig_{d}\mathcal{S}_{2}/N}$
$Q_{1}\otimes Q_{2}$ | $(-1)^{2}\times e^{ig_{d}\mathcal{S}_{12}/N}$
where $\mathcal{S}_{i}$ and $\mathcal{S}_{ij}$ are shorthand for
$\mathcal{S}[Q_{i}(t)]$ and $\mathcal{S}[Q_{i}(t)Q_{j}(t)]$. In order for all
correlators to have the same phase, we require that
$g_{d}\mathcal{S}_{1}/N=g_{d}\mathcal{S}_{2}/N=\pi\text{ mod }2\pi$, and that
the operator sizes _add_ , such that $e^{ig_{d}\mathcal{S}_{12}/N}\approx
e^{ig_{d}(\mathcal{S}_{1}+\mathcal{S}_{2})/N}=e^{i(\pi+\pi)}=1$.
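A quick numerical sanity check of this phase table (illustrative only; the values of $N$, $\mathcal{S}_{1}$, $\mathcal{S}_{2}$ are arbitrary choices satisfying $g_{d}\mathcal{S}_{i}/N=\pi$):

```python
import numpy as np

# Phases for the four two-qubit correlator classes: one factor of (-1) per
# non-identity single-qubit Pauli (transposition plus D = Y ⊗ Y), times
# e^{i g_d S/N} for the joint operator size. Choose g_d S_i/N = pi and
# assume sizes add: S_12 = S_1 + S_2 [Eq. (56)].
N, S1, S2 = 600, 300, 300
g_d = np.pi * N / S1                  # chosen so g_d * S1 / N = pi
S12 = S1 + S2                         # size additivity

phases = [0.0,                        # I ⊗ I
          np.pi + g_d * S1 / N,       # Q1 ⊗ I
          np.pi + g_d * S2 / N,       # I ⊗ Q2
          2 * np.pi + g_d * S12 / N]  # Q1 ⊗ Q2
F = abs(np.mean(np.exp(1j * np.array(phases)))) ** 2
print(F)  # 1.0: all four classes share the same phase (mod 2*pi)
```

If the sizes failed to add, the $Q_{1}\otimes Q_{2}$ entry would pick up a stray phase and the fidelity would drop below unity.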
These requirements generalize straightforwardly to $n$ qubits. Specifically,
teleportation succeeds whenever the single-qubit operator sizes obey
$g_{d}\mathcal{S}_{i}/N=\pi\text{ mod }2\pi$ and the multi-qubit operator
sizes _add_ under operator multiplication:
$\begin{split}\mathcal{S}[Q_{1}(t)&Q_{2}(t)\ldots Q_{n}(t)]\\\
&\approx\mathcal{S}[Q_{1}(t)]+\mathcal{S}[Q_{2}(t)]+\ldots+\mathcal{S}[Q_{n}(t)].\end{split}$
(56)
This latter requirement implies that the phases applied by the coupling,
$e^{igV}$, factorize, and allows the $n$ qubits to be teleported ‘in parallel’
as in the previous simple examples.
The size addition requirement naturally bounds the channel capacity in terms
of the number of couplings, $K$. Specifically, the $K$-size takes integer
values between $1$ and $K$. However, the requirement that all three single-
qubit Pauli operators have the same $K$-size increases the minimum $K$-size to
$2$. From Eq. (56), this implies that an $n$-qubit operator has a $K$-size of
at least $2n$, which is only possible if
$2n\leq K.$ (57)
Indeed, this strict upper bound can also be understood from an information
theoretic perspective: teleporting $n$ qubits requires an increase of $2n$ in
the mutual information between the left and right sides of the system. Each of
the $K$ classical bits sent increases the mutual information by at most $1$,
so at least $2n$ bits are required.
### VI.2 $\geq$1D random unitary circuits
As a first concrete example of intermediate time peaked-size teleportation, we
consider a random unitary circuit (RUC) applied to a lattice of $N$ qubits in
one or higher dimensions. At each time step, pairs of neighboring qubits are
evolved via independent Haar random unitaries arranged in a ‘brick-layer’
fashion [Fig. 2(a,b)]. Operator growth in such systems has been studied at
great length, and is believed to be a good model for many aspects of
information scrambling under Hamiltonian dynamics Nahum _et al._ (2018); Von
Keyserlingk _et al._ (2018); Khemani _et al._ (2018); Rakovszky _et al._
(2018); Li _et al._ (2018); Skinner _et al._ (2019).
A key property of Haar random unitary circuits is that the expectation values
of many circuit quantities can be computed by replacing the Haar random
unitaries with randomly chosen _Clifford_ unitaries, thereby enabling
efficient classical simulation Dankert _et al._ (2009); Nahum _et al._
(2018). Generally, this equivalence holds for any quantity that contains no
more than _two_ copies each of $U$ and $U^{\dagger}$ (e.g. the Renyi-2
entropy, or the OTOC); for systems of qubits, it in fact holds for up to three
copies Webb (2015); Kueng and Gross (2015); Zhu (2017). From Eq. (43), we see
that the teleportation fidelity contains three copies of $U$ and
$U^{\dagger}$, so the average fidelity is efficiently simulable666For higher-
dimensional qudits, while we cannot efficiently simulate the teleportation
fidelity, we can still calculate the correlators Eq. (19), which lower bound
the fidelity via Eq. (44).. Moreover, by definition, the size distributions of
operators under Clifford dynamics are perfectly tightly-peaked, since a Pauli
operator $Q_{A}$ evolved under a Clifford unitary remains a Pauli string.
Hence, the teleportation fidelity can be computed using the simplified
expression given in Eq. (46).
In practice, we compute the average EPR fidelity for teleporting $n$ qubits
via the following steps. First, we choose a particular $U$ by sampling each
2-qubit unitary from a uniform distribution of 2-qubit Clifford unitaries.
Second, we determine the $K$-size of $UQ_{A}U^{\dagger}$ for each $n$-qubit
Pauli operator, $Q_{A}$, or, if $n$ is large, for a random subset of these
operators; such simulations can be performed efficiently with a time cost that
scales linearly with the circuit depth. Third, we compute the fidelity for a
given coupling $g$ using Eq. (46), with the phases
$\theta_{Q}=g\mathcal{S}_{K}/K+\pi\mathcal{S}[Q_{A}(0)]$, where the latter
term captures the fact that decoding and transposition apply a minus sign for
each non-identity element of the initial $Q_{A}$. Lastly, we average the EPR
fidelity over multiple realizations of $U$.
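Steps two through four can be sketched as follows. The $K$-sizes below are hypothetical stand-ins for the output of a Clifford simulation, not actual circuit data:

```python
import numpy as np

def fidelity_from_sizes(k_sizes, initial_weights, g, K):
    # Eq. (46) with theta_Q = g*S_K/K + pi*S[Q_A(0)]: the second term is the
    # minus sign from transposition/decoding per non-identity tensor factor
    # of the initial n-qubit Pauli operator Q_A.
    thetas = g * np.asarray(k_sizes) / K + np.pi * np.asarray(initial_weights)
    return abs(np.mean(np.exp(1j * thetas))) ** 2

# Toy stand-in for step two (hypothetical numbers): suppose each non-identity
# single-qubit Pauli has K-size K/2 at some time t. For n = 1 qubit:
K = 100
k_sizes = [0, 50, 50, 50]         # K-sizes of I, X, Y, Z evolved under U(t)
initial_weights = [0, 1, 1, 1]    # non-identity factors in Q_A(0)
F_aligned = fidelity_from_sizes(k_sizes, initial_weights, 2 * np.pi, K)
F_misaligned = fidelity_from_sizes(k_sizes, initial_weights, np.pi, K)
print(F_aligned, F_misaligned)    # fidelity is 1.0 when g*S_K/K = pi mod 2*pi
```

Averaging this quantity over circuit realizations, with $K$-sizes read off from the simulated $UQ_{A}U^{\dagger}$, gives the curves in Fig. 2.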
The results of these simulations for $n=1$ qubit in 1D and 2D are shown in
Fig. 2(a,b). As expected, the average operator size grows ballistically,
$\mathcal{S}\propto t^{d}$, until the operator’s light cone reaches the edge
of the system, at which point the size saturates to $3N/4$. While the behavior
of the size width is more complex, in both dimensionalities it grows more
slowly than the average size. This implies that the size distribution is
tightly peaked and the teleportation fidelity can be approximated by
$F=\frac{1}{2}[1-\cos(g\mathcal{S})]$. We verify that the time profile of the
fidelity follows this prediction, and nearly perfect fidelity is achieved when
$g\mathcal{S}=\pi\mod 2\pi$. In Appendix D, we verify that teleportation of
$n>1$ qubits is also possible at intermediate times, as long as their light
cones do not overlap.
Figure 3: Probing operator size width in a 1D RUC. (top) The size width
initially grows as $t^{1/2}$ and reaches a peak at the scrambling time
$t^{*}\sim N=10000$. (bottom) We probe this behavior by measuring the
teleportation fidelity with a large coupling $g=71\pi\sim\sqrt{N}$. The
fidelity exhibits a distinct decay-revival profile, controlled by whether the
size width has exceeded the threshold $g\delta\mathcal{S}/N\approx 1$: nearly
perfect fidelity initially, power law decay to a trivial fidelity at
intermediate times, and partial revival at late times.
_Probing the size width—_ Let us now turn to the time profile of the size
width, which exhibits a _peak_ near the scrambling time in both 1D and 2D.
Qualitatively, this behavior arises from fact that the size width receives
contributions from two sources: the interior of the light cone, and the
boundary of the light cone. Within the light cone, we expect a $\geq$1D system
with a small local Hilbert space to ‘locally thermalize’ as the operator
spreads. This implies that the bulk’s contribution to the size width scales as
$\delta\mathcal{S}_{\textrm{bulk}}\propto\sqrt{\mathcal{S}}\propto t^{d/2}$
and saturates at the scrambling time. Second, the size width also receives
contributions from the light cone’s boundary, which has not yet thermalized.
At late times, the boundary of the light cone reaches the edge of the system
and these additional contributions subside, leading to the peak in the size
width at the scrambling time.
To quantify these effects, we note that the growth of operators in $\geq$1D
RUCs is predicted to fall in the Kardar–Parisi–Zhang (KPZ) universality class
Kardar _et al._ (1986); Nahum _et al._ (2018). In 1D, fluctuations in
operator size near the boundary have been verified numerically to have a
growing width $\sim\\!t^{\alpha}$ with the KPZ growth exponent $\alpha=1/2$.
This implies that the contribution of the boundary to the size width is
$\delta S_{\textrm{boundary}}\propto t^{1/2}$, and the full width is
$\displaystyle\delta
S=\left\\{\begin{array}[]{cc}(\alpha_{\textrm{bulk}}+\alpha_{\textrm{boundary}})t^{1/2},&\hskip
14.22636ptt\lesssim t_{\textrm{scr}}\\\
\alpha_{\textrm{bulk}}t_{\textrm{scr}}^{1/2},&\hskip 14.22636ptt\gtrsim
t_{\textrm{scr}}\end{array}\right.$ (60)
We note that the maximum size width relative to the late-time size width is a
constant set by
$(\alpha_{\textrm{bulk}}+\alpha_{\textrm{boundary}})/\alpha_{\textrm{bulk}}$.
Comparing the size width of multiple system sizes, we observe excellent
agreement with predicted scalings over a wide range of system sizes (Appendix
D).
The time profile of the size width is directly observable in the peaked-size
teleportation fidelity if we scale $g\sim t_{\text{scr}}^{1/2}\sim N^{1/2}$.
In particular, by setting $N/g$ to lie between the maximum size width and the
late time size width, we observe a distinct decay-revival profile for the
teleportation fidelity (Fig. 3). At early times, we observe successful
teleportation with an oscillating fidelity. The fidelity decays slowly, as a
power law in time, as it receives corrections proportional to the growing size
variance $\sim g^{2}\delta\mathcal{S}^{2}/N^{2}$. After the scrambling time,
we see a revival in the teleportation fidelity as the size width narrows. The
lack of a parametric separation between the maximum and late time size widths
means that late time teleportation will also have some finite error for this
value of $g$.
In 2D, the scaling of the size width also matches predictions from the KPZ
universality class. In this case, the width of the boundary scales as $\sim
t^{\alpha}$, with $\alpha=1/3$ Nahum _et al._ (2018). However, to calculate
the boundary’s contribution to the size width, one must take into account two
additional considerations. First, the boundary is 1-dimensional, so its length
trivially grows in time as $\sim t$. Second, fluctuations of the boundary are
expected to have a finite correlation length, $\xi\sim t^{1/z}$, where $z=3/2$
is the KPZ dynamic exponent Corwin (2012). Thus, the boundary can be modeled
as $n_{\xi}\sim t/\xi=t^{1/3}$ uncorrelated regions, each of length $\xi$.
Each region contributes $\sim\xi t^{\alpha}$ to the size width; adding the
uncorrelated contributions from all regions yields a total size width
$\delta\mathcal{S}\sim\sqrt{n_{\xi}}\,\xi\,t^{\alpha}=t^{1/6+2/3+1/3}=t^{7/6}$.
The time profile of the size width in 2D is thus given by
$\displaystyle\delta
S=\left\\{\begin{array}[]{cc}\beta_{\textrm{bulk}}t+\beta_{\textrm{boundary}}t^{7/6},&\hskip
14.22636ptt\lesssim t_{\textrm{scr}}\\\
\beta_{\textrm{bulk}}t_{\textrm{scr}},&\hskip 14.22636ptt\gtrsim
t_{\textrm{scr}}\end{array}\right.$ (63)
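The exponent counting behind the boundary term can be tallied explicitly (a trivial but error-prone bit of arithmetic):

```python
from fractions import Fraction

# KPZ inputs in 2D: boundary fluctuations ~ t^alpha with alpha = 1/3,
# correlation length xi ~ t^(1/z) with z = 3/2, so the boundary splits into
# n_xi ~ t / xi ~ t^(1/3) uncorrelated regions, each contributing ~ xi * t^alpha.
alpha = Fraction(1, 3)
xi_exp = Fraction(2, 3)               # 1/z with z = 3/2
n_xi_exp = 1 - xi_exp                 # t / xi ~ t^(1/3)

# delta S ~ sqrt(n_xi) * xi * t^alpha
exponent = n_xi_exp / 2 + xi_exp + alpha
print(exponent)  # 7/6
```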
We confirm these scalings in our numerics (Fig. 2(b) and Appendix D). Notably,
the size width is now dominated by the boundary contribution at intermediate
times, such that the ratio of the maximum size width to the late time size
width scales as $t^{1/6}_{\textrm{scr}}\sim N^{1/12}$. As in 1D, one can probe
this behavior using the peaked-size teleportation fidelity, now with $g\sim
N/t_{\text{scr}}^{7/6}\sim N^{5/12}$. We emphasize that in 2D, the scaling of
the size width is determined by _correlations_ between different points on the
light-cone boundary. This goes beyond the behavior studied in previous works
on RUCs, which focus on quantities probed by local OTOCs.
### VI.3 0D random unitary circuits
We now turn to random unitary circuits in zero dimensions, a prototypical
model for ‘fast scramblers’ Hayden and Preskill (2007); Sekino and Susskind
(2008). We construct these circuits as follows: at each time-step, we
partition the $N$ qubits into randomly chosen pairs, and apply independent
Haar random 2-qubit unitaries to each pair. As before, such circuits are
efficiently simulable via Clifford numerics.
_Peaked sizes_ —In all-to-all coupled systems, operators are generally
expected to grow exponentially in time, $\mathcal{S}\sim e^{\lambda t}$, where
$\lambda$ is the Lyapunov exponent. The reason is simple: at each time step,
every term in an operator—rather than just those on a ‘light-cone’
boundary—has a fixed probability of spreading under random pairwise unitaries.
A somewhat less intuitive expectation is that the size width also generally
grows exponentially. One way of understanding this is by imagining two
realizations of the dynamics: in one realization the initial operator doubles
at the first time and in the other it does not. In effect, the latter system
now lags behind the former by one time step, $\Delta t$, and the difference in
their sizes at later times will be exponentially magnified, to $e^{\lambda
t}(1-e^{-\lambda\Delta t})$.
The lack of separation between the size and size width seems to preclude the
possibility of peaked-size teleportation at intermediate times. Nevertheless,
we can engineer such a separation by _encoding_ the information of each input
qubit into $p$-body operators, with $p\gg 1$. As an example, consider encoding
a single qubit into $p=5$ qubit operators via
$\begin{split}E(X\otimes\mathbbm{1}\otimes\mathbbm{1}\otimes\mathbbm{1}\otimes\mathbbm{1})E^{\dagger}&=Z\otimes
X\otimes X\otimes Y\otimes Z\\\
E(Y\otimes\mathbbm{1}\otimes\mathbbm{1}\otimes\mathbbm{1}\otimes\mathbbm{1})E^{\dagger}&=Y\otimes
Z\otimes Z\otimes X\otimes Y\\\
E(Z\otimes\mathbbm{1}\otimes\mathbbm{1}\otimes\mathbbm{1}\otimes\mathbbm{1})E^{\dagger}&=X\otimes
Y\otimes Y\otimes Z\otimes X,\\\ \end{split}$ (64)
Here, $E$ is a Clifford unitary encoding operation that conjugates state
insertion and decoding [explicitly, replacing $U\rightarrow
UE,U^{*}\rightarrow U^{*}E^{*}$, and $U^{T}\rightarrow E^{T}U^{T}$ in Fig.
1(a)]. The success of teleportation is now dependent on the size distributions
of time-evolved $p$-body operators, $Q_{A}(t)=UEPE^{\dagger}U^{\dagger}$,
where $P$ runs over the initial unencoded single-qubit Pauli operators. As we
will soon verify explicitly, before the scrambling time the support of each of
the $p$ operators composing $Q_{A}$ will be approximately non-overlapping, so
that their size distributions will convolve. Thus, the total operator size is
multiplied by a factor of $p$ but, through the central limit theorem, the size
width is multiplied only by $\sqrt{p}$.
In more detail, consider the size growth of an operator, $Q_{A}$, with initial
size $\mathcal{S}_{0}=p$. During a single time step, each qubit $i$ in the
support of $Q_{A}(t)$ is paired with another random qubit; for simplicity, we
assume the second qubit is outside the support of $Q_{A}(t)$, which should be
valid at times well before the scrambling time. Under random two-qubit
Clifford time-evolution, $Q_{A}(t)$ grows to have support on both qubits with
probability $\eta=1-2(d^{2}-1)/(d^{4}-1)$ ($9/15$ for qubits). The operator
size, $\mathcal{S}_{t}$, therefore grows stochastically in time, according to
$\begin{split}\mathcal{S}_{t+1}&=\mathcal{S}_{t}+\sum_{i=1}^{\mathcal{S}_{t}}s_{i}\\\
&=\mathcal{S}_{t}+\text{Bi}_{t}(\mathcal{S}_{t},\eta)\\\
&\approx(1+\eta)\mathcal{S}_{t}+\sqrt{\mathcal{S}_{t}\eta(1-\eta)}\,\mathcal{N}_{t}(0,1)\end{split}$
(65)
where each $s_{i}$ is a binary random variable that increases the size by $1$
with probability $\eta$ and by $0$ with probability $1-\eta$, and
$\text{Bi}_{t}(\mathcal{S}_{t},\eta)$ denotes the binomial distribution with
$\mathcal{S}_{t}$ trials and probability $\eta$, which we can approximate as a
normal distribution,
$\mathcal{N}_{t}(\eta\mathcal{S}_{t},\sqrt{\mathcal{S}_{t}\eta(1-\eta)})$. The
size at arbitrary times can be written as a sum of random variables drawn at
each time step:
$\begin{split}\mathcal{S}_{t}\approx&(1+\eta)^{t}p\\\
&+\sqrt{\eta(1-\eta)}\sum_{t^{\prime}=0}^{t-1}(1+\eta)^{t-t^{\prime}-1}\sqrt{\mathcal{S}_{t^{\prime}}}\,\mathcal{N}_{t^{\prime}}(0,1)\\\
\end{split}$ (66)
from which we see that the average size grows exponentially in time with
Lyapunov exponent $e^{\lambda}=1+\eta$. Deviations arise at each time step
$t^{\prime}$, with typical magnitude
$(1+\eta)^{t-t^{\prime}-1}\sqrt{\mathcal{S}_{t^{\prime}}}\approx(1+\eta)^{t-1-t^{\prime}/2}\sqrt{p}$.
Since this decays exponentially in $t^{\prime}$, we can approximate the total
variation, $\delta\mathcal{S}_{t}$, as the largest term in the sum
($t^{\prime}=0$), which has magnitude
$\delta\mathcal{S}_{t}\sim(1+\eta)^{t-1}\sqrt{p}\approx\frac{\mathcal{S}_{t}}{\sqrt{p}}.$
(67)
As anticipated, the size width is dominated by early time errors that have
exponentially grown in time, so that the ratio of the size width to the size
remains constant at $\sim 1/\sqrt{p}$.
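The recursion in Eq. (65) and the resulting $\sim 1/\sqrt{p}$ relative width can be checked with a short Monte Carlo sketch (all parameters are illustrative, smaller than those used in the paper's numerics):

```python
import numpy as np

rng = np.random.default_rng(1)

eta, p, steps, trials = 9 / 15, 100, 5, 5000  # eta = 9/15 for qubits

# Eq. (65): at each step, every qubit in the support recruits one new qubit
# with probability eta, so S_{t+1} = S_t + Binomial(S_t, eta).
S = np.full(trials, p, dtype=np.int64)
for _ in range(steps):
    S = S + rng.binomial(S, eta)

mean, width = S.mean(), S.std()
print(mean / (p * (1 + eta) ** steps))  # ~ 1: exponential growth (1+eta)^t
print(width / mean * np.sqrt(p))        # O(1): relative width ~ 1/sqrt(p)
```

The first ratio confirms the Lyapunov growth $e^{\lambda}=1+\eta$; the second stays of order one as $p$ is varied, consistent with Eq. (67).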
To support these claims, we numerically simulate the time-evolved size
distribution of operators with an initial size $p\approx 1000$ [Fig. 2(c)]. As
expected, we observe that the average size grows exponentially as $\sim
pe^{\lambda t}$ and saturates at a timescale $t^{*}\sim\log(N/p)$. Moreover,
the size width grows at the same exponential rate but its magnitude is
suppressed by a factor of $\sqrt{p}$ compared to the average size.
To verify that this allows for teleportation, we next compute the fidelity for
teleporting a single qubit, in the regime $g\gg 1$. As shown in Fig. 2(c),
teleportation occurs with near perfect fidelity beginning at $t\approx
t^{*}-\log({gp})$, corresponding to $g\mathcal{S}/N\approx 1$. Thereafter, the
teleportation fidelity decreases exponentially in time, consistent with the
increase of the size width. At time $t\approx t^{*}-\log({g\sqrt{p}})$,
teleportation stops succeeding entirely, since the size width has reached the
limit $g\delta\mathcal{S}/N\approx 1$. Finally, at late times $t\approx
t^{*}-\log(p)$, the fidelity revives as the system becomes fully scrambled and
the operator size width narrows to $\delta\mathcal{S}\sim\sqrt{\mathcal{S}}$.
_Size addition—_ We now turn to the possibility of teleporting multiple qubits in
0D RUCs. Within the peaked-size regime, this reduces to the question of
whether operator sizes add according to Eq. (56). Satisfying this requirement
in all-to-all coupled systems is not as trivial as in $\geq 1$D, since time-
evolved operators typically act on overlapping subsystems at any finite time.
Nevertheless, we now provide a simple argument for why size addition holds
despite this.
To do so, we model each time-evolved Pauli operator $Q_{i}(t)$ as an
independent random Pauli string of size $\mathcal{S}[Q_{i}]$. Consider two
such strings, $P_{1}$ and $P_{2}$, with support on regions $A_{1}$ and $A_{2}$
and sizes $\mathcal{S}[P_{1}]=|A_{1}|$ and $\mathcal{S}[P_{2}]=|A_{2}|$. The
size of the product, $P_{1}P_{2}$, is the size of the union $A_{1}\cup A_{2}$,
minus the number of sites where the two strings overlap and have the same
single-qubit Pauli operator. This occurs with probability $1/(d^{2}-1)=1/3$ at
each site in the region $A_{1}\cap A_{2}$, giving
$\begin{split}\mathcal{S}[P_{1}P_{2}]&\approx|A_{1}\cup
A_{2}|-\frac{1}{3}|A_{1}\cap A_{2}|\\\
&=\mathcal{S}[P_{1}]+\mathcal{S}[P_{2}]-\frac{4}{3}|A_{1}\cap A_{2}|.\\\
\end{split}$ (68)
The deviation from the simple additive rule
$\mathcal{S}[P_{1}P_{2}]=\mathcal{S}[P_{1}]+\mathcal{S}[P_{2}]$ is thus
controlled by $|A_{1}\cap A_{2}|$.
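Eq. (68) can be checked directly by multiplying random Pauli strings. The sketch below (illustrative sizes, assuming uniformly random supports and Pauli letters) compares the empirical product size against the prediction with the mean overlap $\overline{|A_{1}\cap A_{2}|}=|A_{1}||A_{2}|/N$:

```python
import numpy as np

rng = np.random.default_rng(2)

N, a1, a2, trials = 30, 10, 10, 20_000

sizes = np.empty(trials)
for i in range(trials):
    # Random Pauli strings: 0 = identity; 1, 2, 3 = X, Y, Z on a random support.
    P1 = np.zeros(N, dtype=int)
    P2 = np.zeros(N, dtype=int)
    P1[rng.choice(N, a1, replace=False)] = rng.integers(1, 4, a1)
    P2[rng.choice(N, a2, replace=False)] = rng.integers(1, 4, a2)
    # The product is non-identity on a site unless both letters are identity,
    # or both are the same non-identity letter (which cancels up to a phase).
    nontrivial = ((P1 != 0) | (P2 != 0)) & ~((P1 == P2) & (P1 != 0))
    sizes[i] = nontrivial.sum()

# Eq. (68), with |A1 ∩ A2| replaced by its mean a1*a2/N for random supports:
predicted = a1 + a2 - (4 / 3) * a1 * a2 / N
print(sizes.mean(), predicted)
```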
For $n$-qubit teleportation, one must consider the combined size,
$\mathcal{S}[P_{1}\ldots P_{m}]$, of $m$ independent Pauli strings, where $m$
takes a typical value $m\approx 3n/4$ (a typical $n$-qubit operator has non-
identity support on $3n/4$ qubits). In general, this quantity will receive
corrections from $\binom{m}{k}$ different $k$-way intersections of the
strings, for all $2\leq k\leq m$. For random Pauli strings, the expected size
of these intersections scales as $\overline{|A_{1}\cap\ldots\cap
A_{k}|}=\prod_{i=1}^{k}\frac{|A_{i}|}{N}\sim\mathcal{S}^{k}/N^{k-1}$, where
$\mathcal{S}$ is the typical size of a single Pauli string. For a given $k$,
the correction to size addition will be the sum of $\binom{m}{k}\sim m^{k}$
different intersections and therefore scales as
$m\mathcal{S}(m\mathcal{S}/N)^{k-1}$. These corrections can be neglected if
they are small compared to the total size; this occurs when $m\mathcal{S}\ll
N$, which corresponds to a timescale much less than the scrambling time.
To demonstrate this claim, we numerically simulate the teleportation protocol
with $n>1$ qubits in the regime $1\ll p,np\ll K$ [Fig. 4]. Analogous to
single-qubit teleportation, the teleportation fidelity exhibits oscillations
beginning at $t\approx t^{*}-\log({gp})$, and vanishes at $t\approx
t^{*}-\log({g\sqrt{pn}})$ due to the growth of the combined size width.
However, in contrast to the single-qubit case, teleportation of multiple
qubits is not possible at late times, $t\gtrsim t^{*}-\log({gpn})$, as
predicted in Section V. Interestingly, between these two regimes, we observe a
partial revival of the fidelity: this indicates that the operator size widths
begin to narrow before the additive condition is completely invalidated.
_Error analysis—_ While we have confirmed that multi-qubit teleportation can
be achieved in certain ideal limits, a key question remains: how does the
maximum number of qubits that can be teleported scale as a function of $K$,
i.e. what is the protocol’s channel capacity? To answer this question, we now
estimate how deviations from these ideal limits lead to errors in peaked-size
teleportation and ultimately constrain the channel capacity. Throughout this
discussion, we assume that the size, $\mathcal{S}$, is extensive, but $K$ is
not; this is the natural regime for probing the channel capacity of the
protocol at intermediate times, and is the physical scenario in the context of
traversable wormholes Maldacena _et al._ (2017).
In summary, we identify four distinct sources of error in the multi-qubit
teleportation fidelity, $F=1-\epsilon$:
1. Errors due to finite $p$: $\epsilon\sim ng^{2}\mathcal{S}_{K}^{2}/K^{2}p$
2. Errors due to finite $K$: $\epsilon\sim ng^{2}\mathcal{S}_{K}/K^{2}$
3. Errors due to imperfect size addition: $\epsilon\sim\big{[}n^{2}g^{2}\mathcal{S}_{K}^{4}/K^{4}+\ldots\big{]}$, where ellipses indicate higher orders in $(n\mathcal{S}_{K}/K)^{2}$
4. Errors due to fluctuations in size addition: $\epsilon\sim\big{[}n^{2}g^{2}\mathcal{S}_{K}^{2}/K^{3}+\ldots\big{]}$, where ellipses indicate higher orders in $n\mathcal{S}_{K}/K$
We discuss each of these errors in detail below.
The first and second sources of error are due to imperfectly peaked $K$-size
distributions. The $K$-size width receives contributions from finite-$p$
corrections, $\sim\mathcal{S}_{K}/\sqrt{p}$, and finite-$K$ corrections,
$\sim\sqrt{\mathcal{S}_{K}}$ [see the discussion below Eq. (35)]. To translate
these into errors in the teleportation fidelity, we multiply the size width by
$g/K$ and take the square. This gives fidelity errors $\sim
g^{2}\mathcal{S}_{K}^{2}/pK^{2}$ and $\sim g^{2}\mathcal{S}_{K}/K^{2}$ per
teleported qubit.
The third and fourth sources of error arise from imperfect size addition. This
leads both to ‘systematic’ errors, due to the average overlap of operators, as
well as ‘sampling’ errors, due to random fluctuations in this overlap. We
begin with the systematic errors: as we recall, the size addition of $m$ time-
evolved operators receives corrections from $k$-way overlaps of the operators,
each scaling as $\sim m\mathcal{S}_{K}(m\mathcal{S}_{K}/K)^{k-1}$, for $2\leq
k\leq m$ (rescaling our previous results to the $K$-size instead of the size).
The nonlinear dependence on $m$ indicates that sizes do not add perfectly.
Nevertheless, when teleporting an $n$-qubit initial state for large $n$, we
can correct for the above effect at leading order by using a linear
approximation for $m^{k}$ about its typical value, $(3n/4)^{k}$. This leads to
an effectively smaller operator size, which can be observed in the reduced
frequency of the fidelity oscillations for 10-qubit teleportation compared to
1-,3-qubit teleportation in Fig. 4(a). The leading errors after this shift are
quadratic in $\delta m\equiv m-3n/4$, which has a typical magnitude $\delta
m\sim\sqrt{n}$. Multiplying by $g/K$ and taking the square, we therefore find
multi-qubit fidelity errors
$\sim(g\mathcal{S}_{K}/K)^{2}(n\mathcal{S}_{K}/K)^{2k-2}$; at leading order
$k=2$, this gives $\sim n^{2}g^{2}\mathcal{S}_{K}^{4}/K^{4}$.
Finally, each intersection above is subject to additional random fluctuations
about its average value. When operator sizes are much smaller than the system
size, we can treat each intersection as arising from a binomial process, in
which case fluctuations are proportional to the square root of the
intersection’s average size (see Appendix E for a detailed accounting). These
add in quadrature for $\sim n^{k}$ overlaps, producing a total fidelity error
$\sim(g^{2}/K)(n\mathcal{S}_{K}/K)^{k}$.
_Channel capacity—_ To define the channel capacity of the teleportation
protocol, we fix a per qubit error threshold $\epsilon_{\text{th}}$, and
determine the maximum number of qubits that can be sent while maintaining a
multi-qubit fidelity above this threshold, $F\geq 1-n\,\epsilon_{\text{th}}$. (We
note that this definition of channel capacity differs from more conventional
definitions Nielsen and Chuang (2002); we do not expect this difference to
qualitatively affect the scaling of the channel capacity with $K$, as the
fidelity drops off steeply above the capacity [Fig. 4(b)].) We are interested in
how the channel capacity scales with the number of couplings, $K$, while
allowing both $g$ and $\mathcal{S}_{K}$ (determined by the evolution time) to
vary.
In 0D RUCs, all errors increase with $g$, so it is optimal to set $g$ to its
minimal value, $g\mathcal{S}/N=\pi$. This gives a per qubit error
$\begin{split}\frac{\epsilon}{n}\sim&\frac{1}{p}+\frac{1}{\mathcal{S}_{K}}+\bigg{[}\frac{n\mathcal{S}_{K}^{2}}{K^{2}}+\ldots\bigg{]}+\bigg{[}\frac{n^{2}}{K}+\ldots\bigg{]}.\end{split}$
(69)
The first term is negligible in the large $p$ limit and so we will neglect it
from here on.
We minimize the remaining terms with respect to $\mathcal{S}_{K}$. There are
two relevant regimes. For $n\lesssim\sqrt{K}$, the minimum is determined
entirely by the leading order contributions in $n\mathcal{S}_{K}/K$ to the
error (i.e. neglecting the ellipses). Taking the derivative and setting it to
zero, we find the minimum at $\mathcal{S}^{(1)}_{K}\sim K^{2/3}/n^{1/3}$. As
we increase $n$, the optimal size approaches the value
$\mathcal{S}_{K}^{(2)}\sim K/n$. At this point, size addition errors of all
orders (i.e. the ellipses) become large, and so the true minimum becomes fixed
just below $\mathcal{S}^{(2)}_{K}$. The crossover between the two minima
occurs at $n\sim\sqrt{K}$, at which
$\mathcal{S}^{(1)}_{K}\sim\mathcal{S}^{(2)}_{K}$.
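The location of the first minimum can be recovered numerically from the truncated error expression (a sketch with arbitrary illustrative $K$ and $n$; the $1/p$ term and the ellipses in Eq. (69) are dropped, as in the text):

```python
import numpy as np

# Per-qubit error from Eq. (69), dropping the 1/p term and the ellipses:
#   eps/n = 1/S + n*S^2/K^2 + n^2/K.
# Setting d(eps)/dS = 0 gives S^3 = K^2/(2n), i.e. S ~ K^{2/3}/n^{1/3}.
K, n = 1e6, 10.0          # illustrative values, with n well below sqrt(K)

S = np.logspace(1, 6, 200_000)
eps_per_qubit = 1 / S + n * S**2 / K**2 + n**2 / K

S_opt = S[np.argmin(eps_per_qubit)]
S_pred = (K**2 / (2 * n)) ** (1 / 3)
print(S_opt / S_pred)     # ~ 1
```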
The above minima give two distinct scalings for the per qubit error and thus
the channel capacity. The first minimum has a per qubit error
$\epsilon^{(1)}/n\sim(n/K^{2})^{1/3}$, which gives rise to a superlinear
channel capacity, $n\lesssim\epsilon_{\text{th}}^{3}K^{2}$. However, as we
increase $K$, this capacity eventually surpasses the value $\sqrt{K}$. Above
this, the optimal size is given by the second minimum, which has an error
$\epsilon^{(2)}/n\sim n/K$, and thus the channel features an asymptotically
linear capacity,
$n\lesssim\epsilon_{\text{th}}K.$ (70)
This is a stronger instance of the strict general bound Eq. (57). Intuitively,
this channel capacity arises because the individual $K$-sizes must be large,
$\mathcal{S}_{K}\gg 1$, for the $K$-size to be tightly peaked, while at the same
time the combined $K$-size must be much smaller than $K$, $n\mathcal{S}_{K}\ll
K$, for the $K$-sizes to add; hence $n\ll K$.
We test this scaling numerically by simulating the teleportation protocol and
measuring the per qubit fidelity, $F$, as a function of $n$ and $K$.
Specifically, for each value of $K$, we sweep the number of qubits $n$ and
determine the maximum number of qubits that can be sent before the infidelity exceeds a
threshold, $1-F=\epsilon_{\textrm{th}}$. These results are shown in Fig. 4(b)
and exhibit a clear linear trend across two orders of magnitude, confirming
our prediction of a linear channel capacity.
A few final remarks are in order. First, while in principle the per qubit
fidelity can be calculated by taking the $n^{\text{th}}$ root of the full
$n$-body fidelity, this approach is numerically unstable for large $n$. Thus,
we instead compute the fidelity of a _single_ qubit, while trying to send
multiple qubits, using an approach derived in Appendix D. This amounts to
performing a sum analogous to Eq. (46), but only including pairs of $Q_{1}$
and $Q_{2}$ that are equal on all sites except for one.
Second, the range of system parameters that lie within the linear scaling
regime is ultimately constrained by the finite total system size, $N=10^{8}$.
In particular, to maximize the linear scaling regime, we choose $p=101$ and
$\epsilon_{\textrm{th}}=0.07$. The former ensures that finite-$p$ errors are
negligible, while the latter allows the number of qubits at the threshold to
be large enough to access the $n\gtrsim\sqrt{K}$ regime but small enough that
the operators are initially dilute, i.e. $n\ll N/p$.
Figure 4: Teleportation of multiple qubits in 0D RUCs. (a) Teleportation
fidelity as a function of time for teleportation of $n=1,3,10$ qubits. The
multi-qubit fidelity decays and revives earlier than the single-qubit
fidelity, since multi-qubit operators both have a larger size width and
saturate the system size earlier. At late times, teleportation of multiple
qubits is not possible and the fidelity is trivial. (b) The teleportation
fidelity per qubit $F$ decreases as the number of qubits is increased (inset),
due to errors in size addition. By setting a threshold for the minimal
accepted fidelity (dashed line), we observe a linear scaling of the channel
capacity $n_{\text{max}}$ as a function of the number of coupled qubits $K$.
### VI.4 Large-$q$ SYK model: infinite temperature
We now demonstrate peaked-size teleportation in a 0D Hamiltonian system, the
large-$q$ SYK model, at infinite temperature. While teleportation at low
temperatures in the SYK model is known to succeed via the gravitational
mechanism, teleportation at infinite temperature was discovered only recently
Gao and Jafferis (2019). In addition to showing that this mechanism is in fact
peaked-size teleportation, we also find that, remarkably, _all_ qualitative
aspects of this teleportation match those of 0D RUCs.
The large-$q$ SYK model is defined by the Hamiltonian Maldacena and Stanford
(2016); Qi and Streicher (2019):
$H=i^{q/2}\sum_{1\leq j_{1}<\ldots<
j_{q}}J_{j_{1},\ldots,j_{q}}\psi_{j_{1}}\ldots\psi_{j_{q}},$ (71)
where $\psi_{i}$ are Majorana fermions,
$\\{\psi_{i},\psi_{j}\\}=2\delta_{ij}$, and the couplings are drawn
independently from a Gaussian distribution with zero mean and a variance
$\langle J_{j_{1},\ldots,j_{q}}^{2}\rangle=J^{2}/2q\binom{N-1}{q-1}$.
To construct the teleportation protocol for the SYK model, we first define the
$N$-fermion EPR state,
$\psi_{j,l}\ket{\text{FEPR}}\equiv-i\psi_{j,r}\ket{\text{FEPR}},\,\,\,\forall\,j=1,\ldots,N$
(72)
From this, the TFD state is obtained as before,
$\ket{\text{TFD}}\equiv e^{-\beta H_{l}/2}\ket{\text{FEPR}}.$ (73)
For the two-sided coupling, we consider the simple bilinear interaction,
$V=\frac{1}{2qN}\sum_{j=1}^{N}i\psi_{j,l}\psi_{j,r},$ (74)
which measures the size of operators in the Majorana string basis, divided by
$qN$ Roberts _et al._ (2018); Qi and Streicher (2019).
As in 0D RUCs, the size and size width of time-evolved operators in the SYK
model increase exponentially in time, and exhibit a large separation only when
initially encoded in $p$-body operators. To see this, we can generalize
previous computations of size distributions in the large-$q$ SYK model Qi and
Streicher (2019) to initial $p$-body operators,
$\psi=\psi_{1}\psi_{2}\ldots\psi_{p}$; this relies on the factorization of SYK
correlation functions in the large-$N$ limit Gao and Jafferis (2019). After
the relaxation time ($t\gtrsim 1/J$), but before the scrambling time
($t\lesssim\log(N/p)/J$), the size and size width are:
$\mathcal{S}\approx\frac{p}{2}e^{2Jt},\,\,\,\,\,\,\,\,\delta\mathcal{S}\approx\frac{\sqrt{2qp}}{4}e^{2Jt}.$
(75)
The scaling $\delta\mathcal{S}\sim\mathcal{S}/\sqrt{p}$ matches that found for
0D RUCs; in particular, ensuring a large separation between the size and size
width requires $p\gg 1$.
This large separation suggests that peaked-size teleportation is possible at
early times in the large-$p$ limit. To verify this, we analyze the two-sided
correlator, which is given by Gao _et al._ (2017)
$\begin{split}C_{\psi}(t)&=\langle
e^{-igV}\psi_{r}(-t)e^{igV}\psi_{l}(t)\rangle\\\
&=\bigg{(}\frac{1}{1+i\frac{g}{N}\frac{1}{4}e^{2Jt}}\bigg{)}^{2p/q}\\\
\end{split}$ (76)
at infinite temperature before the scrambling time. (The inclusion of
$e^{-igV}$ in the correlator applies a phase $e^{-ig}$ to the bra on the left
side, which conveniently subtracts off the constant term when relating $V$ to the
operator size.) For large $p$ and early times, we can approximate the correlator as
$\begin{split}C_{\psi}(t)&\approx\exp(-i\frac{g}{qN}\frac{p}{2}e^{2Jt}),\\\
\end{split}$ (77)
using $(1+ix)^{m}\approx e^{imx}$, valid when
$mx^{2}\equiv\frac{2p}{q}\big{(}\frac{g}{N}\frac{1}{4}e^{2Jt}\big{)}^{2}\ll
1$.
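The approximation can be checked numerically against the exact correlator, Eq. (76) (illustrative parameters, chosen so that $mx^{2}\ll 1$ over the whole time window):

```python
import numpy as np

# Illustrative parameters; the approximation requires m*x^2 << 1.
p, q, N, g, J = 1000, 4, 1e8, 10.0, 1.0
t = np.linspace(0.0, 4.0, 50)

m = 2 * p / q
x = (g / N) * np.exp(2 * J * t) / 4

exact = (1 / (1 + 1j * x)) ** m                                     # Eq. (76)
approx = np.exp(-1j * (g / (q * N)) * (p / 2) * np.exp(2 * J * t))  # Eq. (77)

print(np.max(m * x**2), np.max(np.abs(exact - approx)))  # both small
```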
Crucially, as expected for peaked-size teleportation, the correlator in this
regime corresponds to an overall phase equal to the average operator size, Eq.
(75), multiplied by $g/qN$. This indicates that teleportation succeeds with
nearly maximal fidelity beginning when $g\mathcal{S}/qN\approx 1$.
Teleportation continues to succeed until the above approximation breaks down,
which occurs when the size width, $\delta\mathcal{S}$, becomes of order
$(g/qN)^{-1}$. As for all scrambling systems, the two-sided correlator is
expected to revive at late times, $t\gtrsim\log(N/p)/J$, at which point the
sizes saturate the entire system Gao _et al._ (2017); Maldacena _et al._
(2017) (see Section V); this is not reflected in Eq. (76), which is valid only
before the scrambling time.
## VII Beyond peaked-size teleportation
In this Section, we seek to understand the interplay between peaked-size and
gravitational teleportation. A central theme in this understanding is a
comparison between the size distribution introduced in Section III, and the
_winding size distribution_ introduced in Ref. Brown _et al._ (2019); Nezami
_et al._ (2021).
To illustrate the distinction between these distributions, consider a time-
evolved Majorana fermion operator, decomposed in a basis of Majorana strings,
$\chi$:
$\psi(t)\rho^{1/2}=\sum_{\chi}c_{\chi}\chi.$ (78)
From this decomposition, we define the size distribution,
$P(\mathcal{S})=\sum_{\chi\,:\,\mathcal{S}[\chi]=\mathcal{S}}|c_{\chi}|^{2},$
(79)
and the winding size distribution Brown _et al._ (2019); Nezami _et al._
(2021),
$f(\mathcal{S})=\sum_{\chi\,:\,\mathcal{S}[\chi]=\mathcal{S}}c_{\chi}^{2},$
(80)
where $\mathcal{S}[\chi]$ is the size of the string $\chi$. Note that the size
distribution is real-valued, while the winding size distribution may be
complex.
The teleportation correlators are, in fact, directly related to the winding
size distribution:
$C_{\psi}(t)=-i\sum_{\mathcal{S}=0}^{\infty}e^{-ig\mathcal{S}/qN}f(\mathcal{S}),$
(81)
which can be derived by explicitly plugging Eq. (78) into the teleportation
correlator. The size distribution, by contrast, is related to “one-sided”
correlation functions, e.g. Eq. (32), where both instances of the time-evolved
operator appear on the same side of the TFD state.
Despite this distinction, we have so far been able to analyze teleportation
using the size distribution, as opposed to the winding size distribution,
because the two are equal in two circumstances. The first is at infinite
temperature, where the coefficients $c_{\chi}$ are real because $\psi(t)$ is
Hermitian. The second has been precisely our focus: when size distributions
are perfectly tightly peaked, in which case both distributions approach a
delta function.
In what follows, we describe several scenarios in which the distinction
between the two distributions becomes relevant. First, we analyze the general
behavior of the teleportation correlator at early times in large-$N$ systems;
this analysis relies on the connection between the teleportation correlator
and OTOCs and demonstrates that the correlator deviates from the peaked-size
prediction whenever the OTOC contains an imaginary part. Second, we review
recent results showing that this deviation eventually leads to low temperature
correlators with $\mathcal{O}(1)$ magnitude in models where the winding size
distribution takes a particular form, which is conjectured to be the
microscopic origin of gravitational teleportation Brown _et al._ (2019);
Nezami _et al._ (2021). Third, we return to teleportation in the large-$q$
SYK model and show that this model interpolates between gravitational
teleportation at low temperatures and peaked-size teleportation at high
temperatures. Surprisingly, this interpolation occurs despite the fact that
the large-$p$ encoding ensures a large separation between the size and size
width, i.e. the size distribution naively appears tightly peaked, even at low
temperatures. Finally, motivated by this smooth interpolation, we conclude
this Section by searching for a ‘dual’ description of peaked-size
teleportation in a bulk gravitational theory. In particular, we argue that
strong stringy effects lead to the same qualitative features as peaked-size
teleportation.
### VII.1 Early time teleportation
Let us begin by considering the first moments of the size and winding size
distribution. As we previously saw, the first moment of the former, i.e. the
average size, is measured by a one-sided OTOC [Eq. (32)]. Similarly, the first
moment of the winding size distribution is given by a two-sided OTOC,
$\textrm{OTOC}_{2}$ [two-sided OTOC; the defining expression appears as an embedded diagram, fig-V-OTOC-two, in the original] (83)
which differs from Eq. (32) in terms of the placement of the thermal density
matrix. Just as for the distributions themselves, the two-sided OTOC is equal
to the one-sided OTOC at infinite temperature and when sizes are perfectly
tightly peaked.
The early time behavior of the teleportation correlator is directly dependent
on the first moment of the winding size distribution. We have already encountered
this in the specific case of peaked-size teleportation, for which the two
distributions are equal and the phase of the correlator is given by the
operator size (see Section III). To generalize this relation beyond the
peaked-size regime, we consider two simplifying assumptions. First, we focus on 0D
large-$N$ systems, e.g. the SYK model, with a $p$-body initial encoding. In
such systems, the teleportation correlator factorizes into a product of
single-body correlators (up to $1/N$ corrections):
$\begin{split}C_{\psi}(t)&=\left<e^{-igV}\psi_{r}(-t)e^{igV}\psi_{l}(t)\right>\\\
&\approx\bigg{[}\langle
e^{-igV}\psi_{1,r}(-t)e^{igV}\psi_{1,l}(t)\rangle\bigg{]}^{p},\\\ \end{split}$
(84)
where $\psi_{1}$ is a single-body operator.
Second, generalizing Eq. (35), we consider sufficiently early times to work at
leading order in $g$:
$\begin{split}C_{\psi}(t)\approx&\,e^{-igp\expectationvalue{V}}\bigg{[}\langle\psi_{1,r}\psi_{1,l}\rangle+ig\langle\psi_{1,r}\,V\,\psi_{1,l}\rangle+\ldots\bigg{]}^{p}\\\
\approx&\,e^{-igp\expectationvalue{V}}\langle\psi_{1,r}\psi_{1,l}\rangle^{p}\bigg{[}\exp(igp\frac{\langle\psi_{1,r}\,V\,\psi_{1,l}\rangle}{\langle\psi_{1,r}\psi_{1,l}\rangle})+\ldots\bigg{]}\\\
=&\,(-iG_{\beta})^{p}\exp(-i\frac{gp}{2q}\bigg{[}\frac{\text{OTOC}_{2}}{G_{\beta}}-G_{\beta}\bigg{]})+\ldots\\\
=&\,(-iG_{\beta})^{p}\exp(-i\frac{gp}{2qN}G_{\beta}\mathcal{F}_{2}(t))+\ldots\end{split}$
(85)
where
$G_{\beta}=i\langle\psi_{1,r}\psi_{1,l}\rangle=\tr(\rho^{1/2}\psi_{1}\rho^{1/2}\psi_{1})$
is the imaginary time Green’s function, and $\mathcal{F}_{2}(t)$ is the first-
order, connected component of the two-sided OTOC [Eq. (83)],
$\begin{split}\textrm{OTOC}_{2}&\approx
G_{\beta}^{2}\left(1+\frac{1}{N}\mathcal{F}_{2}(t)+\cdots\right).\end{split}$
(86)
Similar to Eq. (35), the leading correction to Eq. (85) is $\sim
pg^{2}[\expectationvalue{V^{2}}_{\psi}-\expectationvalue{V}_{\psi}^{2}/G_{\beta}]$,
and the approximation holds when this is small.
In chaotic systems, the connected OTOC grows exponentially in time
$\mathcal{F}_{2}(t)\sim e^{\lambda t}$ with a prefactor that is, in general,
complex. From Eq. (85), the real part of $\mathcal{F}_{2}(t)$ causes rapid
phase oscillations in the correlator, while the imaginary part
increases/decreases the correlator magnitude, depending on the sign of the
coupling $g$. As we have seen, the former is the sole effect at infinite
temperature and when sizes are tightly peaked; in such cases, the OTOC is real
and equal to the operator size. Outside of these regimes, the real part of
$\mathcal{F}_{2}(t)$—i.e. the phase of the teleportation correlator—is
fundamentally distinct from the first moment of the size distribution. Rather,
recent work has shown that $\text{Re}\\{\mathcal{F}_{2}(t)\\}$ is computable
via a ladder diagram identity and is physically interpreted as a ‘branching
time’ Gu and Kitaev (2019).
On the other hand, the imaginary part of $\mathcal{F}_{2}(t)$ is dominant in
systems with a gravity dual Kitaev and Suh (2018); Gu and Kitaev (2019) (as
well as other maximally chaotic systems, e.g. maximally chaotic 2D CFTs with a
large central charge Gao and Liu (2018)). It leads to a growth (or decay) in
the magnitude of the correlator, which is not possible in peaked-size
teleportation. This opens the door to $\mathcal{O}(1)$ teleportation
fidelities even at low temperatures Brown _et al._ (2019); Nezami _et al._
(2021) (see the following Section). Interpolating between these two limits, it
has been conjectured that the prefactor of $\mathcal{F}_{2}(t)$ is
proportional to $e^{i\lambda\beta/4}$ Kitaev and Suh (2018); Gu and Kitaev
(2019). This would imply that the imaginary part is dominant if and only if
$\lambda\approx 2\pi/\beta$, i.e. the system approaches the bound on chaos.
### VII.2 Gravitational teleportation and the size-winding mechanism
We now move beyond early times and provide a brief review of how the
correlator can achieve its maximal magnitude, $1$, even at finite
temperatures. This occurs via the ‘size winding’ phenomenon introduced in Ref.
Brown _et al._ (2019); Nezami _et al._ (2021) as the microscopic mechanism
for gravitational teleportation. As we emphasize in Section II, maximizing the
magnitude of the correlators is necessary for high fidelity teleportation, but
it is not sufficient: we must also align the correlator phases, for every
operator on the subspace to be teleported.
To begin, note that the winding size distribution is normalized to the two-
point function, $G_{\beta}\leq 1$, in contrast to the size distribution, which
is normalized to $1$. From Eq. (80), we see that this norm being less than one
implies that the phases of the coefficients $c_{\chi}$ are not perfectly
aligned for different strings $\chi$. It is convenient to separate this
misalignment into two classes: first, when coefficients of strings of the same
size $\mathcal{S}$ are misaligned, which manifests in the magnitude of
$f(\mathcal{S})$ being less than maximal for a given $\mathcal{S}$, and
second, when the phases of $f(\mathcal{S})$ for different sizes $\mathcal{S}$
do not align with each other.
We focus on the latter case and, more specifically, consider an ansatz in
which the coefficients’ phases _wind_ with the size Brown _et al._ (2019);
Nezami _et al._ (2021):
$\begin{split}c_{\chi}=e^{i\alpha\mathcal{S}[\chi]}|c_{\chi}|.\end{split}$
(87)
In this case, the coupling of the teleportation protocol, by applying a phase
that is also proportional to the size, can serve to unwind the phases of
$f(\mathcal{S})$ at the value $g/qN=2\alpha$ [see Eq. (80)]. This increases
the teleportation correlator magnitude from its initial value, $G_{\beta}$, to
unity. Although seemingly artificial, we next show that this ansatz holds
exactly for the SYK model at low temperatures.
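The unwinding mechanism is easy to see numerically: for any distribution obeying the winding ansatz, the magnitude of the correlator, Eq. (81), is maximized exactly when the applied phase cancels the winding. A sketch with an arbitrary illustrative profile (the envelope shape and all parameters are hypothetical):

```python
import numpy as np

# A winding size distribution obeying the ansatz of Eq. (87):
# f(n) ∝ n^{a-1} e^{-gamma*n} e^{i*2*alpha*n}; a, gamma, alpha are illustrative.
a, gamma, alpha = 3.0, 0.01, 0.05
n = np.arange(1, 5001)
f = n ** (a - 1) * np.exp(-gamma * n) * np.exp(2j * alpha * n)
f /= np.abs(f).sum()      # normalize so a fully unwound sum has magnitude 1

# The coupling applies e^{-i*theta*n} to each size sector [cf. Eq. (81), with
# theta playing the role of g/qN]; the magnitude peaks when the winding is undone.
thetas = np.linspace(0.0, 0.2, 2001)
mags = [abs(np.sum(np.exp(-1j * th * n) * f)) for th in thetas]
print(thetas[np.argmax(mags)])   # = 2*alpha = 0.1
```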
### VII.3 Large-$q$ SYK model: finite temperature
We now turn to explore the interplay between peaked-size and gravitational
teleportation in an explicit example: the large-$q$ SYK model at finite
temperature and large-$p$ encoding. Despite the fact that this model features
a large separation between the size and size width, we show that teleportation
is _not_ governed by the peaked-size mechanism at low temperatures, due to the
presence of strong size winding.
To begin, let us consider the finite-temperature teleportation correlator,
given by Gao and Jafferis (2019):
$\begin{split}C_{\psi}(t)&=(-iG_{\beta})^{p}\bigg{(}\frac{1}{1-\frac{g}{N}\frac{J}{2\lambda}e^{\lambda
t}\sin(\lambda\beta/4)+i\frac{g}{N}\frac{1}{4}e^{\lambda
t}}\bigg{)}^{2p/q},\\\ \end{split}$ (88)
where
$G_{\beta}^{p}=i^{p}\expectationvalue{\psi_{r}\psi_{l}}=(\lambda/2J)^{2p/q}$
is the $p$-body two-point function, and the Lyapunov exponent $\lambda$ is
the solution of $\beta\lambda=2\beta J\cos(\lambda\beta/4)$, which
interpolates between $2\pi/\beta$ at low temperatures and $2J$ at high
temperatures. At infinite temperature, the correlator reduces to Eq. (76), and
follows our expectations for peaked-size teleportation (see Section VI.4). At
low temperatures, where the model is known to possess a gravitational dual
Kitaev (2015); Maldacena and Stanford (2016); Kitaev and Suh (2018), the
correlator behaves substantially differently; most notably, its magnitude
increases from $G_{\beta}^{p}$ at time zero to unity when $gJe^{\lambda
t}/2\lambda N=1$ [illustrated in Fig. 1(c)].
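This behavior can be reproduced directly from Eq. (88); the sketch below uses illustrative parameters and obtains the Lyapunov exponent by bisection on its defining equation:

```python
import numpy as np

# Magnitude of the finite-temperature correlator, Eq. (88); parameters illustrative.
J, beta, N, g, p, q = 1.0, 50.0, 1e6, 100.0, 10, 4

# Lyapunov exponent: unique root of lam = 2*J*cos(lam*beta/4) on [0, 2*pi/beta].
lo, hi = 0.0, 2 * np.pi / beta
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mid - 2 * J * np.cos(mid * beta / 4) < 0:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

t = np.linspace(0.0, 100.0, 4001)
G = (lam / (2 * J)) ** (2 / q)                 # single-body two-point function
denom = (1
         - (g / N) * (J / (2 * lam)) * np.exp(lam * t) * np.sin(lam * beta / 4)
         + 1j * (g / N) * np.exp(lam * t) / 4)
C_mag = G**p * np.abs(1 / denom) ** (2 * p / q)

t_star = np.log(2 * lam * N / (g * J)) / lam   # where g*J*e^{lam*t}/(2*lam*N) = 1
print(C_mag[0], C_mag.max(), t[np.argmax(C_mag)] - t_star)
```

At these low-temperature parameters the magnitude starts near $G_{\beta}^{p}\ll 1$ and rises to nearly unity at the predicted time.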
From this correlator, we can verify the two predictions made in Sections VII.1
and VII.2: ($i$) the early time behavior is governed by the two-sided OTOC,
and ($ii$) the size winding mechanism is responsible for the $\mathcal{O}(1)$
peak in the correlator magnitude at low temperatures. To see the former, we
expand the correlator at leading order:
$\begin{split}C_{\psi}(t)&\approx(-iG_{\beta})^{p}\exp(-\frac{igp}{2qN}\bigg{[}i\frac{2J}{\lambda}e^{\lambda
t}\sin(\lambda\beta/4)+e^{\lambda t}\bigg{]}).\end{split}$ (89)
Indeed, the term in the exponent is directly proportional to the connected
piece of the two-sided OTOC Gu and Kitaev (2019),
$\mathcal{F}_{2}(t)=i\frac{2J}{\lambda}e^{\lambda
t}\sin(\lambda\beta/4)+e^{\lambda t},\\\ $ (90)
matching Eq. (85). (More precisely, the correlator in Eq. (89) is missing a
factor of $G_{\beta}^{p}$ compared to Eq. (85); this same mismatch is noted in
Ref. Qi and Streicher (2019), and is attributed to the large-$q$ limit
utilized for the calculation, since in this limit $G_{\beta}$ approaches 1.)
At high temperatures this OTOC is equal to the operator size [Eq. (75)],
resulting in phase oscillations, whereas at low temperatures the OTOC rotates
to become predominantly imaginary, leading to an exponential growth in the
correlator magnitude.
Next, to understand the role of size winding, we must analyze the full winding
size distribution. We can derive this distribution by expanding the
teleportation correlator in powers of $e^{-ig/qN}$ to match Eq. (81). To do
so, it is convenient to consider the exact correlator (before a $g/N\ll 1$
approximation) Qi and Streicher (2019); Gao and Jafferis (2019):
$C_{\psi}(t)=(-iG_{\beta})^{p}\bigg(\frac{e^{-ig/2N}}{1+i(1-e^{-ig/N})[\frac{J}{2\lambda}\sin(\lambda\beta/4)+\frac{i}{4}]e^{\lambda t}}\bigg)^{2p/q}.$ (91)
Rewriting this correlator using the Taylor expansion,
$\bigg(\frac{1}{1+(1-e^{-\mu})x}\bigg)^{2p/q}=\frac{1}{(1+x)^{2p/q}}\sum_{n=0}^{\infty}e^{-n\mu}\binom{n+\frac{2p}{q}-1}{n}\frac{1}{(1+1/x)^{n}},$ (92)
and identifying the $n^{\text{th}}$ coefficient with the winding size
distribution, we have:
$f(qn+p)=-\frac{(-iG_{\beta})^{p}}{(1+\frac{J}{2\lambda}e^{\lambda t}e^{-i\lambda\beta/4})^{2p/q}}\binom{n+\frac{2p}{q}-1}{n}\frac{1}{(1+\frac{2\lambda}{J}e^{-\lambda t}e^{i\lambda\beta/4})^{n}}.$ (93)
At intermediate times and large $p$, the distribution takes a particularly
simple form,
$f(qn+p)\approx(-iG_{\beta})^{p}\frac{(\gamma+i2\alpha)^{2p/q}}{\Gamma(\frac{2p}{q})}n^{\frac{2p}{q}-1}e^{-\gamma n}e^{i2\alpha n},$ (94)
where we define the size decay rate, $\gamma$, as
$\gamma=\frac{2\lambda}{J}e^{-\lambda
t}\cos(\lambda\beta/4)=\left(\frac{\lambda}{J}\right)^{2}e^{-\lambda t},$ (95)
and the size winding coefficient, $\alpha$, as
$2\alpha=\frac{2\lambda}{J}e^{-\lambda t}\sin(\lambda\beta/4).$ (96)
The above expression holds when $(2p/q)^{2}\ll n\ll
1/\gamma^{2},1/\alpha^{2}$. Crucially, the distribution follows the size
winding ansatz, $f(n)=|f(n)|e^{i2\alpha n}$. Thus, we recognize that the
maximum in the correlator magnitude occurs when the coupling has unwound the
phases of $f(n)$, at $g/N=2\alpha$, as expected from Section VII.2.
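The unwinding claim can be verified directly. The toy sketch below (illustrative values of $\gamma$, $\alpha$, and the exponent, not the SYK parameters themselves) builds a distribution of the winding form $f(n)=|f(n)|e^{i2\alpha n}$ and scans the coupling phase; by the triangle inequality, the magnitude is maximal exactly when the coupling unwinds the phases:

```python
import cmath, math

# Winding size distribution of the form in Eq. (94): |f(n)| e^{i 2 alpha n}
k_exp, gamma, alpha = 3.0, 0.05, 0.4   # illustrative parameters

def correlator_mag(g, n_max=2000):
    """|sum_n f(n) e^{-i g n}|: the coupling phase winds against f's phase."""
    total = sum(
        (n ** (k_exp - 1)) * math.exp(-gamma * n) * cmath.exp(1j * (2 * alpha - g) * n)
        for n in range(1, n_max)
    )
    return abs(total)

# Scan the coupling: the magnitude peaks where the phases are unwound, g = 2*alpha.
gs = [i * 0.01 for i in range(200)]
best = max(gs, key=correlator_mag)
print(best)  # 0.8 = 2*alpha
```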
The fact that the correlator magnitude increases in time, and moreover reaches
an $\mathcal{O}(1)$ value at low temperatures, is a hallmark of gravitational
teleportation and signals physics outside the peaked-size regime. Naively,
this result is surprising, as we expect the $p$-body encoding to ensure a
peaked size distribution. Indeed, the average size and size width remain
separated by $\sqrt{p}$ at all temperatures Qi and Streicher (2019):
$\mathcal{S}[\psi(t)\rho^{1/2}]-\mathcal{S}[\rho^{1/2}]\approx\frac{p}{2}\left(\frac{2J}{\lambda}\right)^{2}e^{\lambda
t}=\frac{2p}{\gamma},$ (97)
$\delta\mathcal{S}[\psi(t)\rho^{1/2}]\approx\frac{\sqrt{2qp}}{4}\left(\frac{2J}{\lambda}\right)^{2}e^{\lambda
t}=\frac{\sqrt{2qp}}{\gamma}.$ (98)
This demonstrates that our simple intuition, of judging a size distribution to
be tightly peaked if the ratio between the size width and average size is
small, is not always correct. Rather, in Appendix A, we provide a more precise
condition for when peaked-size teleportation holds, and explicitly show that
this condition breaks down for the SYK model at finite temperature (but
remains satisfied at infinite temperature).
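Concretely, combining Eqs. (97) and (98), the width-to-mean ratio is

```latex
\frac{\delta\mathcal{S}[\psi(t)\rho^{1/2}]}{\mathcal{S}[\psi(t)\rho^{1/2}]-\mathcal{S}[\rho^{1/2}]}
  \approx \frac{\sqrt{2qp}/\gamma}{2p/\gamma}
  = \sqrt{\frac{q}{2p}} \ll 1 \qquad (p \gg q),
```

so the distribution looks sharply peaked by this measure at all temperatures, even though peaked-size teleportation nevertheless fails at low temperature.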
Let us now provide intuition for _how_ peaked-size teleportation is modified
by size winding at low temperatures. To this end, we express the SYK
correlator in terms of the winding size distribution parameters:
$C_{\psi}(t)\approx(-iG_{\beta})^{p}\frac{(\gamma+i2\alpha)^{2p/q}}{\Gamma(\frac{2p}{q})}\int_{0}^{\infty}dn\,n^{\frac{2p}{q}-1}e^{-\gamma n}e^{-i[g/N+2\alpha]n}=(-iG_{\beta})^{p}\bigg[\frac{\gamma+i2\alpha}{\gamma+i2\alpha+ig/N}\bigg]^{2p/q}.$ (99)
At early times, this integral can be solved using a saddle-point
approximation. At infinite temperature, the saddle point, $n_{s}$, occurs
precisely at the average size, $n_{s}=(2p/q)/\gamma=\mathcal{S}/q$, giving the
peaked-size correlator,
$C_{\psi}=(-iG_{\beta})^{p}\cdot\exp(-ig\mathcal{S}/qN)$. In contrast, at
finite temperature, the size winding $\alpha$ shifts the saddle point in the
imaginary direction of the complex plane, giving
$n_{s}=(2p/q)/(\gamma+2i\alpha)$ and a correlator
$C_{\psi}=(-iG_{\beta})^{p}\cdot\exp(-ign_{s}/qN)$. From this, we recognize
the saddle point as precisely the two-sided OTOC,
$n_{s}=\frac{p}{2q}\mathcal{F}_{2}(t)$.
The inclusion of the size winding in the low temperature saddle point thus has
two effects. First, it contributes an imaginary part to the OTOC and thereby
increases the magnitude of the teleportation correlator. More subtly, it also
alters the _real_ part of the OTOC. At low temperatures,
$\alpha/\gamma\approx\beta J\gg 1$, and we can approximate the saddle as
$n_{s}\approx(2p/q)/(2i\alpha)+(2p/q\gamma)(\gamma/2\alpha)^{2}$. Recognizing
$\mathcal{S}=2p/\gamma$, we see that the real part of the OTOC now corresponds
to the average size suppressed by the factor $(\gamma/2\alpha)^{2}$.
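For completeness, the saddle-point step can be made explicit. Writing the integrand of Eq. (99) as $e^{S(n)}$ and dropping the small coupling phase ($g/N\to 0$):

```latex
S(n) = \left(\tfrac{2p}{q}-1\right)\ln n - (\gamma + 2i\alpha)\,n,
\qquad
S'(n_{s}) = 0
\;\Longrightarrow\;
n_{s} = \frac{2p/q - 1}{\gamma + 2i\alpha} \approx \frac{2p/q}{\gamma + 2i\alpha},
```

which reduces to $n_{s}=(2p/q)/\gamma=\mathcal{S}/q$ at infinite temperature ($\alpha=0$), as stated above.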
### VII.4 Gravity with stringy effects
While the bulk of this paper approaches teleportation firmly through the lens
of quantum mechanics, we would be remiss not to explore the analog of peaked-
size teleportation in gravitational physics. Specifically, we would like to
ask: is there a teleportation mechanism in gravitational systems that shares
the same features as peaked-size teleportation? Such a connection might seem
surprising, given the prevalence of peaked-size teleportation in quantum
mechanical models with no apparent connection to gravity. Nonetheless, the
smooth blending between gravitational teleportation and peaked-size
teleportation in the SYK model suggests a positive answer.
Here, we demonstrate—in a particular gravitational geometry, AdS$_2$—that an
analog of peaked-size teleportation indeed occurs when strong stringy
corrections are included in the gravitational theory. Intuitively, our results
are consistent with our previous analysis of the SYK model, where, in the dual
gravitational theory, increasing the temperature is known to add stringy
effects Kitaev and Suh (2018).
Our derivation assumes familiarity with the gravitational description of
teleportation in AdS$_2$, a thorough summary of which can be found in the seminal
works of Refs. Gao _et al._ (2017); Maldacena _et al._ (2017). In this
setting, the teleportation correlator can be calculated explicitly by
considering gravitational scattering in a wormhole geometry. We will maintain
our SYK notation, so that $V$ consists of $K$ single-body fermion operators,
$\psi_{i}$, and our input operator is a $p$-body fermion, $\psi$. The
correlator can be solved for by decomposing the fermion operators in a
momentum basis and applying the scattering matrix:
$C^{\text{sc}}_{\psi}(t)=e^{-ig\langle V\rangle}\int dk\,\Psi_{r}(k,t)\Psi_{l}^{*}(k,t)\exp\left(ig\int ds\,e^{i\delta(k,s)}\,i\Psi_{1,r}(s,0)\Psi_{1,l}^{*}(s,0)\right),$ (100)
where $\Psi_{l/r}(k,t)$ is the wavefunction for the $p$-body operator inserted
on the left/right boundary with in-falling momentum $k$ (and similarly
$\Psi_{1,l/r}(s,0)$ for any single-body operator in $V$), and
$e^{i\delta(k,s)}$ is the scattering matrix element between $\psi(t)$ and
$\psi_{1}(0)$. In pure gravity, i.e. in the absence of stringy effects, these
quantities take the form Maldacena _et al._ (2017):
$\Psi_{r}(k,t)\Psi_{l}^{*}(k,t)=\frac{(2ike^{-t})^{2\Delta}e^{-4ike^{-t}}}{i\Gamma(2\Delta)(-k)}\Theta(-k),$ (101)
$\delta(k,s)=G_{N}ks,$ (102)
where we have set $\beta=2\pi$ for convenience, $\Theta(x)$ is the Heaviside
function, and $\Delta=p/q$ is the conformal weight of $\psi$. The single-body
wavefunction, $\Psi_{1}(s,0)$, is obtained by setting $t=0$ and replacing
$\Delta\rightarrow\Delta_{1}=1/q$ (i.e. the conformal weight of a single
fermion).
In the semiclassical limit, we can evaluate the correlator by expanding
$e^{i\delta}$ to linear order in $G_{N}$ Maldacena _et al._ (2017). We find:
$C^{\text{sc}}_{\psi}(t)=\langle\psi_{l}\psi_{r}\rangle\,\frac{(-i)4^{2\Delta}}{\Gamma(2\Delta)}\int^{\infty}_{0}dk\,(-ik)^{2\Delta-1}\exp\left(-i(\tilde{g}G_{N}e^{t}-4)k\right),$ (103)
where $\tilde{g}\equiv g4^{-\Delta_{0}}\Delta_{0}/2$. This expression is
almost identical to the large-$q$ SYK correlator of Eq. (99), setting the size
decay rate to zero, $\gamma=0$, and identifying the momentum $k$ in the
gravitational calculation with the size $n$ in the SYK model Susskind (2018).
Notably, the correlator diverges at the teleportation time,
$4=\tilde{g}G_{N}e^{t}$. In bulk gravity, this divergence is exactly the
light-cone pole between the left and right sides of the traversable wormhole,
and is regulated by including higher order terms in $G_{N}$ or stringy
corrections Maldacena _et al._ (2017).
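The divergence can be seen directly by evaluating the integral in Eq. (103) (schematically, keeping only the dependence on the coefficient in the exponent and regulating with $\operatorname{Im}a<0$):

```latex
\int_{0}^{\infty} dk\, k^{2\Delta-1}\, e^{-i a k}
  = \frac{\Gamma(2\Delta)}{(ia)^{2\Delta}},
\qquad a \equiv \tilde{g}G_{N}e^{t} - 4,
```

so $C^{\text{sc}}_{\psi}(t)\propto a^{-2\Delta}$, which blows up as $a\to 0$, i.e. precisely at $\tilde{g}G_{N}e^{t}=4$.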
While the full effects of stringy scattering in an AdS background are not
known, we will take a phenomenological treatment as in Ref. Maldacena _et
al._ (2017); Shenker and Stanford (2015). Here, the total effect of stringy
corrections is to change the scattering amplitude to
$\delta(k,s)=iG_{N}(-iks)^{\varepsilon},\;0\leq\varepsilon\leq 1,$ (104)
where $\varepsilon$ controls the strength of stringy effects, and varies from
$1$ in pure gravity to $0$ in the highly stringy limit.
Again expanding $e^{i\delta}$ to leading order in $G_{N}$, and Wick rotating
$k\rightarrow-ik$, we can write the correlator as
$C^{\text{stringy}}_{\psi}(t)=\langle\psi_{l}\psi_{r}\rangle\,\frac{4^{2\Delta}}{\Gamma(2\Delta)}\int dk\,k^{2\Delta-1}e^{-4k}\exp(-i^{1+\varepsilon}gG_{N}A_{\varepsilon}k^{\varepsilon}e^{\varepsilon t}),$ (105)
where $A_{\varepsilon}$ is a constant of order 1. Note that the $k$-dependence
in front of the exponential is a Poisson distribution with a saddle point at
$k_{s}\approx\Delta/2$ in the heavy-particle limit, $\Delta=p/q\gg 1$. At
early times, $e^{\varepsilon t}G_{N}\ll 1$, and for strong stringy effects,
$\varepsilon\rightarrow 0$, the change in this saddle point from the
scattering, $g$, is negligible. In these limits, the saddle point
approximation thus gives the correlator:
$C^{\text{stringy}}_{\psi}(t)\approx\langle\psi_{l}\psi_{r}\rangle\exp(-igG_{N}A_{\varepsilon}(\Delta/2)^{\varepsilon}e^{\varepsilon
t}),$ (106)
which has exactly the same form as in peaked-size teleportation [Eq. (38)]!
Specifically, the correlator is equal to the two-point function,
$G_{\beta}=i\langle\psi_{l}\psi_{r}\rangle,$ multiplied by a pure phase.
Tentatively, this suggests interpreting the phase as the operator size in a
dual boundary theory. This size,
$\mathcal{S}/N\sim G_{N}A_{\varepsilon}(\Delta/2)^{\varepsilon}e^{\varepsilon
t},$ (107)
grows exponentially in time with a non-maximal Lyapunov exponent,
$2\pi\varepsilon/\beta$.
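This pure-phase behavior is simple to check numerically. The sketch below (with illustrative values, and the $i^{1+\varepsilon}$ bookkeeping simplified to a real coefficient $c$ multiplying $-ik^{\varepsilon}$) integrates the stringy correlator of Eq. (105) directly and compares against the saddle-point answer $e^{-ic(\Delta/2)^{\varepsilon}}$:

```python
import cmath, math

Delta, eps, c = 25.0, 0.1, 1.0   # illustrative: conformal weight, stringiness, coupling

def stringy_integral(c, dk=1e-3, kmax=60.0):
    """Riemann sum for int_0^kmax k^(2*Delta-1) e^(-4k) e^(-i c k^eps) dk."""
    total = 0j
    for step in range(1, int(kmax / dk)):
        k = step * dk
        # k^(2*Delta-1) * e^(-4k), computed in log space to avoid overflow
        weight = math.exp((2 * Delta - 1) * math.log(k) - 4 * k)
        total += weight * cmath.exp(-1j * c * k ** eps) * dk
    return total

I0, Ic = stringy_integral(0.0), stringy_integral(c)
print(abs(Ic) / abs(I0))                          # close to 1: a pure phase
print(cmath.phase(Ic), -c * (Delta / 2) ** eps)   # phase matches -c (Delta/2)^eps
```

For small $\varepsilon$ and large $\Delta$, the phase $e^{-ick^{\varepsilon}}$ is nearly constant across the width of the $k$-distribution, which is why the magnitude survives the integration.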
A few remarks are in order. First, while in the above treatment the strength
of stringy effects depends on a ‘free’ parameter $\varepsilon$, we expect that
in a UV complete theory $\varepsilon$ would in turn depend on the temperature
(and other physical parameters). In particular, we expect
$\varepsilon\rightarrow 1$ at low temperature in theories that are dual to
pure gravity, and $\varepsilon\rightarrow 0$ at high temperature, where
stringy, UV effects should play an important role. This statement also follows
from the point of view of the boundary field theory, since the scattering
matrix is proportional to an OTOC of the boundary theory, which is real at
infinite temperature.
Second, if we would like to recover the infinite temperature SYK correlator,
Eq. (76), from the scattering computation, choosing a proper $\varepsilon$ as
a function of $\beta$ is not enough. One also needs to modify the
_wavefunction_ of $\psi$, to:
$\Psi_{r}(k,t)\Psi_{l}^{*}(k,t)=\frac{\varepsilon(2ik^{\varepsilon}e^{-\varepsilon
t})^{2\Delta}e^{-4ik^{\varepsilon}e^{-\varepsilon
t}}}{i\Gamma(2\Delta)(-k)}\Theta(-k)$ (108)
Such a wavefunction modification due to UV data should be model dependent, and
it would be interesting to understand how to derive this ‘stringy-corrected’
wavefunction from the bulk point of view. Nevertheless, one particular feature
of the modified wavefunction has a clear motivation from the boundary
perspective. Specifically, Wick rotating Eq. (108), $k\rightarrow-ik$, leads
to a distribution whose width, $\delta k\sim\Delta^{1/\varepsilon}$,
_broadens_ as $\varepsilon\rightarrow 0$. This broadening increases the phase
variations in the exponential of Eq. (105) and results in the decay of the
correlator at the timescale $e^{\varepsilon t}G_{N}/\sqrt{\Delta}\approx 1$
for small $\varepsilon$. From the boundary point of view, this decay
corresponds to the requirement that the size width must be small,
$g\delta\mathcal{S}/N\lesssim 1$, for peaked-size teleportation, as we saw for
0D RUCs and infinite temperature SYK (Section VI). We expect this decay to be
common to many 0D quantum systems at high temperatures, which suggests that
the broadening of the bulk stringy wavefunction as $\varepsilon\rightarrow 0$
might also be a general feature.
Finally, the most obvious effect of a non-unity $\varepsilon$ is to change the
scattering phase, $\delta(k,s)$, from being real-valued to complex. Indeed, in
the strong stringy limit, $\delta(k,s)$ becomes purely imaginary. In general
scattering theory, a complex $\delta$ means that the scattering matrix,
$e^{i\delta}$, is no longer normalized, and implies the existence of inelastic
scattering Shenker and Stanford (2015). Since peaked-size teleportation is
replicated in the limit $\varepsilon\rightarrow 0$, this suggests a more
general relationship between peaked sizes and inelastic scattering. In
Appendix G, we demonstrate that these two phenomena also coincide at infinite
temperature, for arbitrary wavefunctions and scattering amplitudes.
## VIII Experimental proposals
Having illustrated the wide breadth of physics that enters into the
teleportation protocol, in this Section we outline explicitly how one can
probe this physics in the laboratory. We begin with a qualitative guideline on
using the teleportation circuit to probe scrambling physics. Next, we
introduce a ‘one-sided’ implementation of the teleportation circuit, which
eliminates the need to experimentally prepare the thermofield double state.
This one-sided circuit can be efficiently implemented in several state-of-the-
art experimental platforms. The fundamental requirement is the ability to
time-evolve forwards and backwards under many-body scrambling dynamics, which
is well within reach for existing and near-term quantum simulators with $N\sim
10^{2}$ qubits Bernien _et al._ (2017); Arute _et al._ (2019); Yang _et
al._ (2020). As concrete examples, we present two near-term experimental
realizations of our protocol using pristine and replicable atomic qubits:
first with neutral atoms and second with trapped ions.
### VIII.1 Experimental signatures
We begin by outlining the utility of peaked-size teleportation for probing
operator size distributions in experiment. As we have seen, in addition to
measuring more familiar aspects of the size distribution, i.e. the average
size (equivalent to an average OTOC), the teleportation circuit also opens the
door to measuring the size width and size addition of operators.
The simplest experimental signal to be detected is any non-trivial
teleportation fidelity of a single qubit. This signifies that the implemented
unitary is scrambling and has been demonstrated using the HPR coupling
Landsman _et al._ (2019a); Blok _et al._ (2020). As in the HPR protocol, the
success of teleportation is a _robust_ signature of scrambling physics, in the
sense that it cannot arise from decoherence or experimental error, unlike the
decay of OTOCs Yoshida and Yao (2019). Notably, the teleportation protocol
represents a ‘single-shot’ measurement of scrambling in the presence of error,
and does not rely upon any outside estimate of the error.
Once teleportation is established, measuring the fidelity as a function of the
time parameterizing $U$ probes the size distribution of time-evolved local
operators in two ways. First, within the peaked-size regime, oscillations in
the teleportation fidelity as a function of time provide a direct measurement
of the growth in operator size. In particular, setting $g=2\pi n+\pi$, one
expects to see $n$ oscillations in the teleportation fidelity before it
reaches its late time plateau. The peaks in these oscillations give the
operator size as a function of time: $\mathcal{S}=(m/n)(1-1/d^{2})N$ at the
$m^{\text{th}}$ peak.
Second, since peaked-size teleportation relies on the size _width_
$\delta\mathcal{S}$ being small, $g\delta\mathcal{S}/N\lesssim 1$, its success
or failure indicates whether the width has surpassed the tunable value $N/g$.
Dependent on the model and the value of $g$, this leads to an illuminating
three-regime profile for the teleportation fidelity as a function of time:
initial teleportation when size width is small, no teleportation when
$\delta\mathcal{S}\gtrsim N/g$, and late time teleportation once the size
width relaxes to its small late time value [as depicted schematically in Fig.
1(c) and observed numerically in 0D RUCs in Fig. 2(c)]. As outlined in detail
for 0D, 1D, and 2D RUCs in Section VI, the timescales at which these regimes
occur directly probe the scaling of the size width of the distribution.
Moving forward, teleportation of multiple qubits verifies that the sizes of
these qubits add under operator composition, and could explicitly demonstrate
the equivalent channel capacities of peaked-size and gravitational
teleportation. While operator size addition is trivial when the teleported
qubits are causally separated under $U$, determining the requirements for size
addition under more general dynamics—e.g. all-to-all or power-law
interactions—remains an open question.
Finally, the teleportation protocol can be used as a sharp diagnostic for
gravitational physics. Given the presence of peaked-size teleportation in the
same circuit, the mere observation of teleportation, even of multiple qubits,
is in fact not enough to conclude that gravitational scrambling is present.
Instead, we propose to use two experimental signatures that differ starkly
between the two teleportation mechanisms: ($i$) the teleportation fidelity at
low temperature, and ($ii$) the behavior of the teleportation fidelity as a
function of time, $t$, and the coupling strength, $g$. For the former, the
observation of a high teleportation fidelity, $\sim\mathcal{O}(1)$, at low
temperatures strongly suggests the occurrence of gravitational teleportation,
since the fidelity of peaked-size teleportation is limited at such
temperatures by the (small) two-point function, $G_{\beta}$. For the latter,
one observes that the qualitative profile of the teleportation fidelity as a
function of time differs greatly between the two mechanisms (see Fig. 1(c) for
a comparison between the two, and Figs. 2, 3 for additional examples of
peaked-size teleportation). Furthermore, gravitational teleportation works
only for a specific _sign_ of the coupling, $g>0$, while peaked-size
teleportation is even as a function of $g$ Gao _et al._ (2017); Maldacena
_et al._ (2017); Brown _et al._ (2019); Nezami _et al._ (2021).
### VIII.2 One-sided implementation of teleportation circuit
Figure 5: One-sided implementation (right) of the original two-sided
teleportation protocol (left) (replacing $U\rightarrow U^{T}$ for convenience,
compared to Fig. 1). Blue arrows denote the sequence of operations in the one-
sided protocol, while green and red bands mark corresponding sections of the
two implementations.
Turning towards our experimental proposals, we now introduce a one-sided
implementation of the teleportation protocol, at infinite temperature (Fig.
5). This circuit eliminates the need to prepare the highly entangled
thermofield double state, at the cost of a higher depth quantum circuit. We
derive the one-sided implementation from the ‘two-sided’ implementation
[copied in Fig. 5 from Fig. 1(a)] by sliding all operations from the left side
of the thermofield double state to the right side, using Eq. (11). The one-
sided implementation then proceeds as follows.
The initial state of the circuit corresponds to the top left of the two-sided
implementation. Namely, we initialize the $K$ ‘measured’ qubits of subsystem C
in a definite outcome state, $\ket{o_{1}\cdots o_{K}}$. These states should be
drawn from the distribution of measurement outcomes, but when teleporting an
EPR pair at infinite temperature they will be uniformly distributed. For the
$N-K$ unmeasured qubits, we use the resolution of the identity
$\mathbbm{1}\propto\sum_{s}\ket{s}\bra{s}$ to replace the unterminated legs
with an initial product state in the computational basis, $\ket{o_{K+1}\cdots
o_{N}}$. At the end of the computation, one should average over all $2^{N-K}$
states. Finally, we include one additional ancillary qubit for each qubit to
be teleported, whose initial state is sampled over a complete basis
$\ket{\phi}$ for subsystem A. Similar to the unmeasured qubits, this
corresponds to the unterminated leg of the thermofield double state when we
insert the teleported qubit $\ket{\psi}$ in the two-sided implementation.
Having defined an initial pure state, we now implement the circuit starting
from the top left of the two-sided implementation and proceeding counter-
clockwise (Fig. 5). The circuit consists of three successive applications of
$U$ or $U^{\dagger}$, interspersed with a swap gate exchanging the
‘teleported’ subsystem A of $U$ with the ancillary qubit(s), and operations
$\hat{V}_{i}=e^{igo_{i}\hat{O}_{i}/K}$ determined by the initial state of the
‘measured’ qubits. The outcome of the circuit is an EPR measurement between
the ancilla qubit and subsystem A.
Compared to the two-sided implementation, the only practical drawback of the
one-sided implementation is the additional overhead associated with summing
over the initial states of the $N-K$ unmeasured qubits. (Footnote: averaging
over the $K$ measured qubits should not be considered overhead because it
would also be required in the two-sided variant.) Crucially, this
does not yield an exponential overhead as the sum can be approximated through
sampling, requiring a number of realizations that is quadratic in the desired
fidelity. Perhaps even simpler, the same result can be computed ‘physically’ by
initializing the qubits in a maximally mixed state instead. In either case,
for measuring operator size it is most accurate to take $K\approx N$, in which
case the overhead is negligible.
A more philosophical difference is that the one-sided implementation no longer
performs teleportation, but rather prepares an EPR pair from an otherwise
scrambled, many-body system. Specifically, we know that upon swapping out,
subsystem A is maximally entangled with the remaining qubits whenever the
unitary, $U$, is scrambling; the one-sided circuit then acts to distill this
entanglement into an output EPR pair. This connection has been noted in
gravity, where the one-sided protocol can be interpreted as distilling the
partner operators of emitted Hawking radiation Yoshida (2019a, b) or observing
behind the horizon in the SYK model Kourkoulou and Maldacena (2017).
### VIII.3 Implementation with neutral Rydberg atoms
Figure 6: (a) In the proposed analog Rydberg teleportation protocol, qubits
are encoded in a ground state $|g\rangle$ and a Rydberg state $|r\rangle$.
Nearest-neighbor interactions (dark blue) can be time-reversed, but next-
nearest neighbor interactions (light blue) cannot. (b) Numerical results
comparing the teleportation fidelity with perfectly reversed time-evolution
(solid) with the proposed, imperfect time-reversal (dashed), for $N=20$ spins
and $\Omega_{i}=0.9$, $\Delta_{i}=-1.5$, $V_{0}=1$. (c) In the digital
protocol, qubits are encoded in two hyperfine ground states. Time evolution is
performed by alternating layers of controlled-phase gates between nearest
neighbor atoms and single-qubit rotations (red boxes). Insets show possible
pulse sequences to implement the controlled-phase gate and the single-qubit
rotations Jaksch _et al._ (2000).
One particularly promising platform for implementing the traversable wormhole
protocol is a quantum simulator based on neutral alkali or alkaline-earth
atoms held in a reconfigurable and controllable array of optical dipole traps.
Recent experiments have already achieved near-deterministic trapping and
loading of atoms into arbitrary geometries in one, two, and three dimensions
Xia _et al._ (2015); Maller _et al._ (2015); Barredo _et al._ (2018). By
leveraging the strong dipole coupling between atomic Rydberg states, high-
fidelity analog quantum simulations and digital gates have also recently been
demonstrated Maller _et al._ (2015); Labuhn _et al._ (2016); Bernien _et
al._ (2017); Graham _et al._ (2019); Madjarov _et al._ (2020); Wilson _et
al._ (2018). These demonstrations have primarily used two natural schemes of
encoding qubits into neutral atoms:
1.
A qubit can be encoded by choosing an atomic ground state $|g\rangle$ to be
the $|0\rangle$ state, and a highly excited Rydberg state $|r\rangle$ with
principal quantum number $n\gg 1$ as the $|1\rangle$ state [see Fig. 6(a)].
2.
Alternatively, the qubit states can be chosen as two long-lived hyperfine
ground states (for alkali atoms or fermionic alkaline earth atoms) or a ground
state and a metastable clock state (for bosonic alkaline earth atoms), such
that the $|1\rangle$ state can be coupled to a Rydberg state to perform
entangling gates [see Fig. 6(c)].
We will show how both encodings can be used to realize the teleportation
protocol in feasible near-term experiments. We find that the first encoding is
naturally suited to ‘analog’ time-evolution under the native (Ising-type)
Hamiltonian for a Rydberg setup, but is fundamentally limited to smaller
system sizes of $\lesssim 30-35$ qubits (in one spatial dimension) due to the
inability to perfectly time-reverse long-range interactions. On the other
hand, the second encoding is more flexible and allows for digital time-
evolution including RUCs and Floquet dynamics. This time-evolution can be
reversed exactly and is limited only by qubit and gate fidelities. While we
will primarily consider realizations of our protocol in experimental setups
where the neutral atoms are individually trapped in optical tweezers and
undergo (near-)resonant excitation to Rydberg states, we also conclude by
discussing how similar physics can be seen in an optical lattice setup where
the atoms are primarily in ground states $|0\rangle$ and $|1\rangle$, but one
of these states is ‘dressed’ by an off-resonant laser field which couples it
to a Rydberg state Glaetzle _et al._ (2015); Potirniche _et al._ (2017);
Zeiher _et al._ (2017).
_Analog implementation_ —We first consider the encoding where the qubit states
$|0\rangle$ and $|1\rangle$ correspond to a ground state $|g\rangle$ and a
highly excited Rydberg state $|r\rangle$. While neutral atoms are effectively
non-interacting in their ground states, nearby atoms interact strongly via van
der Waals interactions $\propto n^{11}/r^{6}$ if they are both in the Rydberg
state, where $r$ is the distance between the atoms. Thus, if we drive the
transition $|g_{i}\rangle\leftrightarrow|r_{i}\rangle$ at each site $i$ with
tunable Rabi frequency $\Omega_{i}$ and detuning $\Delta_{i}$ [see Fig. 6(b)],
the system will undergo analog time evolution under the Hamiltonian
$H=\sum_{i}\frac{\Omega_{i}}{2}X_{i}+\sum_{i}\Delta_{i}(1-Z_{i})+\sum_{i\neq
j}\frac{V_{ij}}{4}(1-Z_{i})(1-Z_{j})$ (109)
where $X_{i}=|g_{i}\rangle\langle r_{i}|+|r_{i}\rangle\langle g_{i}|$,
$Z_{i}=|g_{i}\rangle\langle g_{i}|-|r_{i}\rangle\langle r_{i}|$, and
$V_{ij}=V_{0}/|i-j|^{6}$ is the van der Waals interaction strength between two
atoms at positions $i$ and $j$.
The Hamiltonian in Eq. (109) is scrambling and exhibits a scrambling time
limited by the smaller of $V_{0}$ and $\Omega_{i}$, $t^{*}\sim
N/\text{min}(V_{0},\Omega_{i})$. To minimize the total evolution time, we set
$|\Omega_{i}|\sim V_{0}$, so that evolution under $H$ for a time
$\sim\\!N/V_{0}$ implements a fully scrambling unitary $U$ in the
teleportation protocol. To implement $U^{\dagger}$, we reverse the sign of the
tunable single-site parameters $\Omega_{i}$ and $\Delta_{i}$, and reverse the
strong nearest-neighbor interactions by conjugating time-evolution via Pauli
operators $X_{i}$ (i.e. applying $\pi$-pulses) on every other site.
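The reversal trick rests on the single identity $XZX=-Z$: conjugating by $X$ on one site of a pair flips the sign of that pair's $ZZ$ coupling, while the single-$Z$ terms it generates are absorbed into the sign-reversed $\Delta_{i}$. A minimal two-site check, with Paulis written as plain nested lists:

```python
# Pauli matrices as plain nested lists (no external dependencies)
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def kron(A, B):
    """Kronecker product of two square matrices."""
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

ZZ = kron(Z, Z)    # nearest-neighbor Ising coupling on two sites
XI = kron(X, I2)   # pi-pulse applied to the first site only

# X Z X = -Z on the pulsed site, so the two-site ZZ coupling changes sign:
flipped = matmul(XI, matmul(ZZ, XI))
print(flipped == [[-v for v in row] for row in ZZ])  # True
```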
In a one-dimensional array, the errors in our implementation will arise from
two main sources: ($i$) the finite lifetime of the Rydberg state, which gives
rise to a nonzero decoherence rate at each of the $N$ sites, and ($ii$) the
weak next-nearest neighbor interactions $\sim\\!V_{0}/2^{6}=V_{0}/64$, which
cannot be time-reversed simultaneously with nearest neighbor interactions. To
estimate the effect of the former, let us consider the specific case of 87Rb
atoms excited to the $70S$ Rydberg state Bernien _et al._ (2017); Labuhn _et
al._ (2016), which has a lifetime $\tau\approx 150$ $\mu\textrm{s}$.
Realistically achievable Rabi frequencies and interaction strengths are of
order $\sim 2\pi\times 10-100$ MHz. The total time to implement the three
scrambling unitaries of the teleportation protocol is thus $\sim
3N/|\Omega_{i}|$; when summed over $N$ qubits and compared to the Rydberg
lifetime, this gives an estimated many-body error $\sim
3N^{2}/|\Omega_{i}|\tau$.
Errors due to imperfect backwards time-evolution are more fundamentally
restrictive to the analog approach. In order to precisely characterize the
effects of these errors on the teleportation fidelity, we perform large-scale
numerical simulations of the teleportation protocol with the Rydberg
Hamiltonian, Eq. (109). We find that for a one-dimensional chain of up to
$N=20$ atoms, the fidelity under our proposed protocol exhibits a $\sim 10\%$
error at the scrambling time compared to the ideal case [Fig. 6(b)]. This is
consistent with a simple estimate, where the error adds coherently over time-
intervals $\delta t\sim 1/V_{0}$ (the local thermalization time), and
incoherently at larger time-scales. Within each $\delta t$, each atom
accumulates an error $\sim(\delta t\,V_{0}/64)^{2}$; summed over $N$ atoms and
total time $3t^{*}\approx 3N\delta t$, this gives a total many-body error
$\sim 3N^{2}/64^{2}$.
Combined with the Rydberg lifetime error, this suggests that near-term
experiments should be able to implement many-body teleportation in systems of
$N\sim 35$ qubits. We note that in higher dimensions, the smaller relative
distance of next-nearest neighbor atoms gives rise to a substantially larger
error contribution from imperfect time-reversal, and the analog protocol is
not suitable beyond very small system sizes.
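The two error estimates above are simple arithmetic, and combining them reproduces the quoted system-size ceiling. The sketch below (order-of-magnitude only, using the representative values $\Omega=2\pi\times 10$ MHz and $\tau=150\ \mu$s from the text) evaluates both at $N=20$ and $N=35$:

```python
import math

def rydberg_errors(N, Omega=2 * math.pi * 10e6, tau=150e-6):
    """Order-of-magnitude many-body error estimates (not exact fidelities)."""
    lifetime = 3 * N**2 / (Omega * tau)  # Rydberg decay across the three unitaries
    reversal = 3 * N**2 / 64**2          # un-reversed next-nearest-neighbor V0/64
    return lifetime, reversal

for N in (20, 35):
    life, rev = rydberg_errors(N)
    print(N, round(life, 3), round(rev, 3))
# At N ~ 35 the combined estimate approaches unity, matching the quoted ceiling.
```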
_Digital implementation_ —To implement the protocol in larger systems and
higher dimensions, we turn to digital time-evolution, using the second type of
qubit encoding (i.e. hyperfine ground states) [Fig. 6(c)]. In this approach,
we envision applying alternating layers of nearest-neighbor controlled-phase
gates and single-qubit rotations. Here, the controlled-phase gates can be
implemented by applying a simple pulse sequence to excite and de-excite qubits
from the $|1\rangle$ state to the $|r\rangle$ state, so that the wavefunction
acquires a $-1$ phase if either of the two qubits are in the $|1\rangle$
state, but not if both qubits are in the $|0\rangle$ state (see Fig. 6(c)
insets) Jaksch _et al._ (2000). As demonstrated in recent experiments Levine
_et al._ (2019), these Rydberg-mediated controlled-phase gates can be
performed in parallel for sufficiently well-separated pairs of qubits, and
non-nearest neighbor interactions can be avoided by slightly reducing the
parallelism within each layer of controlled-phase gates. Single-qubit
rotations can be performed with sufficiently high fidelity such that the
overall circuit fidelity is primarily limited by the entangling gates Xia _et
al._ (2015); Levine _et al._ (2018).
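The phase pattern described above can be written out explicitly. A minimal numpy check (using the basis ordering $|00\rangle,|01\rangle,|10\rangle,|11\rangle$ as our convention) confirms that this pulse sequence yields a standard controlled-phase gate up to single-qubit $Z$ rotations:

```python
import numpy as np

# Gate from the pulse sequence: a -1 phase unless both qubits are in |0>.
G = np.diag([1.0, -1.0, -1.0, -1.0]).astype(complex)

Z = np.diag([1.0, -1.0]).astype(complex)
CZ = np.diag([1.0, 1.0, 1.0, -1.0]).astype(complex)

# Equal to CZ up to single-qubit Z rotations, hence a maximally entangling gate:
print(np.allclose(G, np.kron(Z, Z) @ CZ))  # True
```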
For a generic choice of gates, the circuit will be fully scrambling when $U$
is composed of $\sim N$ layers of controlled-phase gates. The fidelity of the
overall implementation is again limited by the finite lifetime of the Rydberg
state, which is populated for time $\sim 1/V_{0}$ during each controlled-phase
gate. Assuming the same experimental parameters as in the analog case, one
expects to be able to perform approximately $\Omega\tau\sim 2\pi\times
10^{3}-10^{4}$ controlled-phase gates within the decoherence time-scale. Thus,
in the digital approach, one expects that the teleportation protocol can
naturally be implemented for $N\sim 200$ qubits up to the scrambling time.
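One crude way to recover this qubit number is the following back-of-the-envelope sketch (our own assumed bookkeeping, not the authors' derivation): a depth-$N$ brickwork circuit contains roughly $N^2/2$ controlled-phase gates, each with error $\sim 1/(\Omega\tau)$, and demanding a total error of order one bounds $N$:

```python
import math

# Hypothetical estimate: total error ~ (N^2/2 gates) / (Omega*tau gate budget) <~ 1,
# i.e. N <~ sqrt(2 * Omega * tau). The budget values come from the text.
for budget in (2 * math.pi * 1e3, 2 * math.pi * 1e4):
    N_max = math.sqrt(2 * budget)
    print(round(N_max))  # prints 112, then 354 -- bracketing the quoted N ~ 200
```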
Interestingly, the digital approach can also be adapted to experiments using
Rydberg-dressed neutral atoms in an optical lattice Glaetzle _et al._ (2015);
Potirniche _et al._ (2017); Zeiher _et al._ (2017). In such a setup, qubits
are again encoded in hyperfine ground states. Strong Ising-like interactions
are generated by coupling the qubit state $|1\rangle$ to a Rydberg state with
a far-detuned laser field. In this way, the Rydberg interaction gives rise to
an energy shift for two neighboring atoms both in the $|1\rangle$ state.
Analogous to our previous discussion, a simple scrambling unitary could
consist of alternating layers of Rydberg-dressed interactions and single-qubit
rotations. While the total accumulated error in the Rydberg-dressing approach
is comparable to the gate-based protocol, one potential advantage is an
increased tunability of the interactions van Bijnen and Pohl (2014); de
Léséleuc _et al._ (2019).
In all of the above settings, there are three additional ingredients to the
teleportation circuit: ($i$) the ability to ‘swap’ in the qubit $\ket{\phi}$
after the first application of $U$, ($ii$) the single-qubit rotations
$\hat{V}_{i}$, and ($iii$) the final measurement in the EPR basis. In both
digital setups, these are easily accomplished by combining controlled-phase
gates, arbitrary single-qubit rotations, and local measurements. In the analog
setup, we propose to temporarily ‘turn off’ the Hamiltonian by transferring
each Rydberg state $\ket{r}$ to a hyperfine ground state (e.g. the state used
as $\ket{1}$ in the _digital_ protocol) using a resonant laser pulse. Once
this is done, all of the above operations can be performed identically as in
the digital setup. Afterwards, an additional resonant laser pulse returns the
system to the analog encoding.
Figure 7: (a-b) Chain of atomic ions, with qubit states $\ket{0}$, $\ket{1}$
represented by hyperfine ground states. The states are coupled by a pair of
laser beams, one with individual addressing (with strength $g_{1}$, purple)
and one applied globally (with strength $g_{2}$). Each beam is strongly
detuned from an excited state $\ket{e}$ by an amount $\Delta$. The coherent
beatnote between the beams, at frequency $\omega_{0}$, drives stimulated Raman
transitions between the qubit levels with an effective Rabi frequency
$g_{1}g_{2}/2\Delta$, and also modulates the Coulomb interaction between
qubits to give rise to an effective Ising interaction. (a) A two-qubit
entangling gate, $XX_{ij}(\theta)$ (red), is performed by addressing only ions
$i$ and $j$ with the first beam. (b) Half of the qubits are addressed, which
leads to analog time-evolution under the Hamiltonian Eq. (110) (blue) for all
addressed spins. (c) Quantum circuit implementation of the teleportation
protocol at finite temperature. EPR pairs are formed using two-qubit gates.
The TFD state is then prepared via a QAOA approach by iterating multiple times
between two-qubit gates coupling the sides and analog time-evolution on both
sides individually Wu and Hsieh (2019); Zhu _et al._ (2019). The state
$\ket{\psi}$ is inserted either by projectively measuring the designated qubit
and preparing the state, or by digitally swapping in an additional qubit (not
shown). Finally, teleportation is implemented using similar ingredients as
well as feed-forward measurements (purple dotted lines).
### VIII.4 Implementation with trapped ions
A second experimental platform, which naturally enables the implementation of
our proposed many-body teleportation protocol, is arrays of individual trapped
atomic ions Bohnet _et al._ (2016); Vermersch _et al._ (2019); Zhang _et
al._ (2017). Trapped ion qubits feature near-perfect replicability, negligible
idle errors, and the ability to implement both a universal set of
reconfigurable quantum gates Cetina _et al._ (2020) as well as analog long-
range spin Hamiltonians Blatt and Wineland (2008); Monroe and Kim (2013).
Entangling quantum gates have been demonstrated between isolated pairs of
trapped ions with fidelities exceeding $99.9\%$ Ballance _et al._ (2016);
Gaebler _et al._ (2016). Teleportation protocols—including the HPR protocol
Landsman _et al._ (2019a)—involving gate operations, partial measurement and
feedforward operations, have been experimentally realized in a number of
contexts Riebe _et al._ (2004); Barrett _et al._ (2003); Olmschenk _et al._
(2009); Landsman _et al._ (2019a).
Compared to Rydberg atom arrays, trapped ions offer two new regimes for
exploring many-body teleportation. First, trapped ions naturally interact via
a long-range analog Hamiltonian, whose time-evolution can be fully reversed.
Implementing the TW protocol in this setting would provide a window into
operator spreading and size distributions under such long-range dynamics Else
_et al._ (2020); Zhou _et al._ (2020). Second, when operated digitally, the
same long-range interaction enables the preparation of thermofield double
states Martyn and Swingle (2019); Zhu _et al._ (2019); Su (2020); Wu and
Hsieh (2019), paving the way towards a realization of the two-sided TW
protocol at finite temperature.
We begin by outlining the analog and digital forms of time-evolution that are
possible in trapped ion systems. Interactions between qubits typically stem
from state-dependent optical dipole forces that off-resonantly drive motional
sidebands of the qubit Cirac and Zoller (1995); Mølmer and Sørensen (1999).
These phonon sideband operations mediate entanglement and give rise to an
effective Ising coupling. When the optical forces are symmetrically detuned
far from the upper and lower sidebands, the motion is only virtually excited,
resulting in a long-range Ising Hamiltonian [Fig. 7(b)]:
$H=\sum_{i<j}J_{ij}X_{i}X_{j}+B_{z}\sum_{i}Z_{i},$ (110)
where $J_{ij}\approx J_{0}/|i-j|^{\alpha}$, $0<\alpha<3$, and the effective
magnetic field $B_{z}$ can be realized by slightly asymmetrically detuning the
driving field Monroe _et al._ (2019).
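For concreteness, Eq. (110) can be assembled directly for a few ions (a minimal dense-matrix sketch; the values of $J_0$, $\alpha$, and $B_z$ are illustrative placeholders):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on(site, op, N):
    """Embed a single-site operator `op` at position `site` in an N-qubit space."""
    out = np.array([[1.0]], dtype=complex)
    for k in range(N):
        out = np.kron(out, op if k == site else I2)
    return out

def long_range_ising(N, J0=1.0, alpha=1.5, Bz=0.2):
    """Eq. (110): H = sum_{i<j} J0/|i-j|^alpha X_i X_j + Bz sum_i Z_i."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N):
        H += Bz * op_on(i, Z, N)
        for j in range(i + 1, N):
            H += (J0 / abs(i - j)**alpha) * op_on(i, X, N) @ op_on(j, X, N)
    return H

H = long_range_ising(4)
print(np.allclose(H, H.conj().T))  # True: Hermitian, as any Hamiltonian must be
```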
On the other hand, when the optical dipole forces are closer to resonances of
the motional modes, one can mediate interactions significantly faster,
allowing for the execution of rapid, entangling quantum gates between pairs of
illuminated ion qubits [Fig. 7(a)] Zhu _et al._ (2006); Debnath _et al._
(2016). The native entangling gates are based upon Ising interactions between
any selected pair of ions with a tunable interaction angle; in particular,
both $XX_{ij}(\theta)=e^{-i\theta X_{i}X_{j}/2}$ and $YY_{ij}(\theta)=e^{-i\theta
Y_{i}Y_{j}/2}$ gates are available and $\theta=\pi/2$ naturally creates an EPR
pair Landsman _et al._ (2019b); Wright _et al._ (2019). Typical entangling
and single qubit operations have durations of $\sim 100$ $\mu$s and $\sim 5$
$\mu$s, respectively, while decoherence time-scales are on the order of $\sim
1000$ s Ballance _et al._ (2016); Gaebler _et al._ (2016).
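The claim that $\theta=\pi/2$ creates an EPR pair is easy to verify directly (a numpy sketch; since $(X\otimes X)^2=\mathbb{1}$, the gate has a simple closed form):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
XX = np.kron(X, X)

def xx_gate(theta):
    """XX_ij(theta) = exp(-i theta X_i X_j / 2); (X.X)^2 = 1 gives a closed form."""
    return np.cos(theta / 2) * np.eye(4) - 1j * np.sin(theta / 2) * XX

psi = xx_gate(np.pi / 2) @ np.array([1, 0, 0, 0], dtype=complex)  # act on |00>
# psi = (|00> - i|11>)/sqrt(2); its one-qubit reduced state is maximally mixed.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_1 = np.trace(rho, axis1=1, axis2=3)   # trace out the second qubit
print(np.allclose(rho_1, np.eye(2) / 2))  # True -> maximally entangled
```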
Let us now describe an implementation of the one-sided TW protocol [Fig. 5].
We first focus on the ability to implement both $U$ and its inverse
$U^{\dagger}$. For analog time-evolution (Eq. 110), $U^{\dagger}$ can be
implemented by changing the sign of the detuning Gärttner _et al._ (2017),
while for digital time-evolution, one can directly invert and reverse the
ordering of the quantum gates.
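For the digital case, the inversion is purely bookkeeping (a sketch with random stand-in gates: conjugate-transpose each gate and reverse the order of application):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    """Haar-ish random d x d unitary via QR decomposition (a stand-in gate)."""
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

gates = [random_unitary(4) for _ in range(6)]              # applied in order
U = np.linalg.multi_dot(gates[::-1])                       # U = g_6 ... g_1
U_dag = np.linalg.multi_dot([g.conj().T for g in gates])   # g_1^+ ... g_6^+
print(np.allclose(U_dag @ U, np.eye(4)))  # True: the evolution is fully reversed
```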
The one-sided protocol also requires the ability to locally address a sub-
extensive number of individual qubits. In particular, a subset $K$ of the
qubits, which are initially prepared in a product state,
$|o_{1},\ldots,o_{K}\rangle$, must each be rotated by
$\hat{V}_{i}=e^{igo_{i}\hat{O}_{i}/K}$ at a later time. These rotations can be
achieved by taking $\hat{O}_{i}=\hat{Z}_{i}$ and individually addressing the
target ions using an auxiliary “poke” laser beam Smith _et al._ (2016); Zhang
_et al._ (2017).
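With $\hat{O}_{i}=\hat{Z}_{i}$, each $\hat{V}_{i}$ is simply a diagonal phase rotation on one ion (a sketch; the values of $g$, $o_{i}$, and $K$ are placeholders):

```python
import numpy as np

def v_rotation(g, o_i, K):
    """V_i = exp(i g o_i Z / K) for O_i = Z: a single-qubit phase rotation."""
    phi = g * o_i / K
    # exp(i phi Z) is diagonal since Z = diag(1, -1).
    return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

V = v_rotation(g=0.7, o_i=-1, K=8)
print(np.allclose(V @ V.conj().T, np.eye(2)))  # True: unitary, as required
```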
Following the first application of $U$, one must swap out the qubit(s)
corresponding to the teleported subsystem. This swap can be implemented either
digitally by applying a SWAP-gate, or physically, by exchanging the two ions
via a modulation of the ion trap’s axial fields Hensinger _et al._ (2006);
Monroe and Kim (2013); Kaufmann _et al._ (2017).
Extending this implementation to the two-sided protocol [Fig. 1(a)] at
_infinite temperature_ is straightforward. Initialization into EPR pairs can
be accomplished via simple Ising gates at the input of the circuit [Fig.
7(a,c)], while time-evolution can again take the form of either digital
quantum gates [Fig. 7(a)] or analog Hamiltonian dynamics. To separately
implement analog dynamics on the two sides of the system, one would illuminate
only _half_ of the ion chain at any given time [Fig. 7(b)]; this has the added
benefit of avoiding unwanted coupling between the left and right sides, but
implies that the time-evolution must be performed serially [Fig. 7(c)].
Finally, in the two-sided protocol, one must perform projective measurements
on $K$ qubits that feed-forward to the conditional rotations, $\hat{V}_{i}$.
These partial measurements can be accomplished by using multiple ion species
(i.e. different elements or isotopes) Barrett _et al._ (2003), or
alternatively, this entire procedure can be replaced with a specific
interaction, $e^{igV}$, between the two sides; this interaction is naturally
realized via an $XX_{ij}(\theta)$ gate with $\theta=2g/K$.
Implementing the two-sided protocol at _finite temperature_ requires all of
the above ingredients, in addition to the ability to prepare an entangled
thermofield double state. Interestingly, the TFD state can be prepared
variationally using the quantum approximate optimization algorithm (QAOA) Wu
and Hsieh (2019); Zhu _et al._ (2019), and requires no additional
experimental capabilities beyond those already necessary for the TW protocol.
The optimization step within a QAOA-based TFD preparation relies on a cost
function that requires one to measure the entanglement entropy between the two
sides Wu and Hsieh (2019); Zhu _et al._ (2019). While challenging, this can
in principle be experimentally realized by either using several copies of the
system Daley _et al._ (2012); Abanin and Demler (2012); Johri _et al._
(2017) or via randomized measurements Elben _et al._ (2018), both of which
have been demonstrated in trapped ion experiments Linke _et al._ (2018);
Brydges _et al._ (2019).
## IX Outlook
In this work, we developed a unified framework for understanding many-body
teleportation from the perspective of operator growth under scrambling
dynamics. The unifying concept within this framework is the size distribution
of time-evolved operators Roberts _et al._ (2018); Qi and Streicher (2019);
Qi _et al._ (2019); Brown _et al._ (2019); Nezami _et al._ (2021): these
form the backbone of peaked-size teleportation, and provide a more fine-
grained measure of operator growth compared to the average operator size (as
given by the expectation value of OTOCs).
Our work suggests several future directions for applying and building upon
this framework. First, while we have studied the size distributions in 0D and
$\geq 1$D RUCs, it would be interesting to extend this analysis to a multitude
of other physical systems, where one expects to find qualitatively distinct
behavior. These include long-range interacting systems Else _et al._ (2018);
Tran _et al._ (2020), interacting and non-interacting integrable systems Qi
_et al._ (2019), $\geq 1$D systems with a large on-site Hilbert space Gu _et
al._ (2017), 0D systems with sparse couplings Bentsen _et al._ (2019), and
systems with conserved quantities Khemani _et al._ (2018).
Another set of open questions concerns the notion of operator size at finite
temperature. In systems with peaked size distributions, we found that the
phase of the two-sided teleportation correlator was directly proportional to
the conventional definition of operator size Qi and Streicher (2019).
Surprisingly, we observed that this relationship did not hold in the finite
temperature SYK model; rather, the phase was given by the real part of the
two-sided OTOC. Unlike the conventional size, this OTOC is not UV divergent,
and is thus expected to be inherently independent of the microscopic Hilbert
space. Recent work has shown that its real part isolates an incoherent
component of operator spreading in large-$N$ models Gu and Kitaev (2019);
further work is needed to establish and expand this framework. Related to
these considerations, one may hope to better understand the bulk analogue of
operator size in theories dual to gravity with strong stringy effects. While
we have seen that stringy effects can mimic peaked-size teleportation,
developing a physical interpretation of this correspondence would be extremely
exciting.
Third, we have shown that a promising application of the teleportation
protocol is to distinguish between different classes of scrambling dynamics.
In particular, we have focused on two classes of scramblers—generic
thermalizing systems and those with gravitational duals—and demonstrated that
the key distinction between them is their teleportation fidelity at low
temperatures. It is intriguing to ask whether the fidelity increase associated
with gravitational teleportation may also occur in other systems, without a
gravitational dual. For instance, recently the teleportation correlator
magnitude was observed to increase slightly above $G_{\beta}$ in non-local
random Hamiltonian systems Brown _et al._ (2019); Nezami _et al._ (2021);
generalizing this to other physical models would be of tremendous interest.
One may also wonder what role an extensive low temperature entropy—a key
feature of the SYK model Maldacena and Stanford (2016)—plays in the
teleportation process. In particular, how well can systems with extensive low
temperature entropy but no known gravitational dual teleport Salberger _et
al._ (2017); Alexander _et al._ (2018)? We conjecture that an extensive
entropy would allow one to _locally_ encode each qubit into low-energy degrees
of freedom (i.e. operators with an $\mathcal{O}(1)$ two-point function), since
one would only require $\mathcal{O}(1)$ qubits on the left side of the TFD in
order to have one qubit of mutual information with the right side. Such an
encoding would allow low temperature teleportation with perfect fidelity if
operator sizes were peaked, naturally motivating the study of operator size
distributions in such models.
From an experimental perspective, using teleportation as a near-term
diagnostic of scrambling necessitates a greater understanding of the effects
of experimental imperfections on the protocol. Most significantly, we have
already noted that the teleportation fidelity acts as a robust indicator of
scrambling in the presence of experimental error, in the sense that it can
rise to unity only when scrambling is present Yoshida and Yao (2019). However,
it is unclear how other signatures of the protocol, e.g. the oscillations and
decay/revival profile of the teleportation fidelity, are affected by errors,
and to what extent these effects can be mitigated through post-processing
Swingle and Halpern (2018).
As an additional experimental direction, we note that one can precisely
measure the size distribution of operators via small modifications to the
teleportation protocol, in two distinct ways. In the first, one directly
measures the operator size as a quantum mechanical observable, in which case
the probability distribution of measurement outcomes equals the size
distribution. In the two-sided protocol, this is achieved by measuring the
coupling, $V$, after state insertion and time-evolution. This can also be
generalized to the one-sided protocol, for infinite temperature and classical
couplings. A second approach is to measure the _characteristic function_ of
the size distribution (i.e. its Fourier transform). At infinite temperature,
this is precisely equal to the correlator, Eq. (2), whose real part can be
isolated from the teleportation fidelity of different initial states (see
Appendix C) Roberts _et al._ (2018); Qi and Streicher (2019). Measuring the
correlator’s imaginary part requires one to replace state insertion, e.g. the
projection $(1+Z)/2$, with a unitary operation, $(1+iZ)/\sqrt{2}$, in the
teleportation protocol. At finite temperature, this procedure naturally probes
the winding size distribution (see Section VII.1). To obtain the size
distribution, one should instead measure a ‘one-sided’ correlator, which
corresponds to performing the same protocol but measuring the final quantum
state on the left, not right, side of the TFD state.
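As a toy illustration of the characteristic-function approach (using a hypothetical Gaussian size distribution, not data from any model in the text):

```python
import numpy as np

# Hypothetical peaked size distribution P(l): Gaussian with mean 50, width 5.
sizes = np.arange(0, 201)
P = np.exp(-(sizes - 50.0)**2 / (2 * 5.0**2))
P /= P.sum()

def characteristic_fn(theta):
    """Fourier transform of the size distribution: c(theta) = sum_l P(l) e^{i theta l}."""
    return np.sum(P * np.exp(1j * theta * sizes))

theta = 0.01
c = characteristic_fn(theta)
# For a peaked distribution the magnitude stays near 1 and the phase slope
# measures the mean operator size -- the 'peaked-size' regime of the text.
print(round(abs(c), 3), round(np.angle(c) / theta, 1))  # 0.999 50.0
```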
Finally, our work has broad implications in the context of quantum coding
theory, where many-body teleportation can be understood as an especially
generic method of entanglement-assisted quantum error correction (EAQEC).
Indeed, the setup for EAQEC is identical to that of the teleportation
protocol: two parties, Alice and Bob, share entanglement (the TFD state),
Alice applies an encoding circuit to her share of qubits (the left unitary,
$U$), and decoding is achieved by teleporting Alice’s quantum state to Bob’s
share of qubits (via the coupling, $V$, and unitaries on right). Crucially,
because the coupling acts on a subsystem of Alice’s qubits, this scheme
protects against any errors on her remaining qubits; moreover, which subsystem
the coupling acts on is arbitrary, so the encoding protects against errors on
_any_ subsystem of $N-K$ qubits. Previous schemes for EAQEC have focused
primarily on encodings via Clifford unitaries. In contrast, many-body
teleportation, and more specifically peaked-size teleportation, succeeds for a
_vastly_ broader class of encoding procedures—i.e. many-body time
dynamics—indicating that naturally occurring, strongly interacting dynamics
offer novel methods of EAQEC. These observations suggest a number of potential
future directions, including theoretically characterizing the quantum
communication capacity of the teleportation protocol for different classes of
dynamics, and experimentally realizing EAQEC in quantum simulators.
_Note added_ : After this work had been completed, we learned of an
independent investigation of gravitational many-body teleportation by Nezami,
Lin, Brown, Gharibyan, Leichenauer, Salton, Susskind, Swingle, and Walker,
which will appear in the same arXiv posting.
_Acknowledgments_ : We gratefully acknowledge discussions with Sepehr Nezami,
Yingfei Gu, Xiangyu Cao, Jaewon Kim, Yimu Bao, Hannes Pichler, Alexander
Keesling, Harry Levine, Geoffrey Pennington, Maxwell Block, Sagar Vijay, and
Daniel Jafferis. This work was supported by the U.S. Department of Energy
through the Quantum Information Science Enabled Discovery (QuantISED) for High
Energy Physics (KA2401032) and through the GeoFlow Grant No. DE-SC0019380.
This research used resources of the National Energy Research Scientific
Computing Center, a U.S. Department of Energy Office of Science User Facility
operated under Contract No. DE-AC02-05CH11231. The exact dynamical simulations
used the dynamite Python frontend dyn , which supports a matrix-free
implementation of Krylov subspace methods based on the PETSc and SLEPc
packages. T.S. acknowledges support from the National Science Foundation
Graduate Research Fellowship Program under Grant No. DGE 1752814. P.G.
acknowledges support by the US Department of Energy grants DE-SC0018944 and
DE-SC0019127, and also the Simons foundation as a member of the _It from
Qubit_ collaboration. I.C. acknowledges support from the Alfred Spector and
Rhonda Kost Fellowship of the Hertz Foundation and the Department of Defense
through the National Defense Science and Engineering Graduate Fellowship
Program. E.T.K. acknowledges support from the National Science Foundation
Graduate Research Fellowship Program under Grant Nos. DGE1144152 and
DGE1745303. N.M.L. acknowledges support from the Maryland—Army-Research-Lab
Quantum Partnership under Grant No. W911NF1920181.
## References
* Maldacena _et al._ (2017) J. Maldacena, D. Stanford, and Z. Yang, Fortschritte der Physik 65, 1700034 (2017).
* Bennett _et al._ (1993) C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Physical review letters 70, 1895 (1993).
* Barrett _et al._ (2004) M. Barrett, J. Chiaverini, T. Schaetz, J. Britton, W. Itano, J. Jost, E. Knill, C. Langer, D. Leibfried, R. Ozeri, _et al._ , Nature 429, 737 (2004).
* Riebe _et al._ (2004) M. Riebe, H. Häffner, C. Roos, W. Hänsel, J. Benhelm, G. Lancaster, T. Körber, C. Becher, F. Schmidt-Kaler, D. James, _et al._ , Nature 429, 734 (2004).
* Olmschenk _et al._ (2009) S. Olmschenk, D. N. Matsukevich, P. Maunz, D. Hayes, L.-M. Duan, and C. Monroe, Science 323, 486 (2009).
* Ren _et al._ (2017) J.-G. Ren, P. Xu, H.-L. Yong, L. Zhang, S.-K. Liao, J. Yin, W.-Y. Liu, W.-Q. Cai, M. Yang, L. Li, _et al._ , Nature 549, 70 (2017).
* Gao _et al._ (2017) P. Gao, D. L. Jafferis, and A. C. Wall, 2017, 151 (2017).
* Bao _et al._ (2018) N. Bao, A. Chatwin-Davies, J. Pollack, and G. N. Remmen, 2018, 71 (2018).
* Maldacena and Qi (2018) J. Maldacena and X.-L. Qi, arXiv preprint arXiv:1804.00491 (2018).
* Yoshida and Kitaev (2017) B. Yoshida and A. Kitaev, arXiv preprint arXiv:1710.03363 (2017).
* Yoshida and Yao (2019) B. Yoshida and N. Y. Yao, Physical Review X 9, 011006 (2019).
* Landsman _et al._ (2019a) K. A. Landsman, C. Figgatt, T. Schuster, N. M. Linke, B. Yoshida, N. Y. Yao, and C. Monroe, Nature 567, 61 (2019a).
* Blok _et al._ (2020) M. S. Blok, V. V. Ramasesh, T. Schuster, K. O’Brien, J. M. Kreikebaum, D. Dahlen, A. Morvan, B. Yoshida, N. Y. Yao, and I. Siddiqi, to appear (2020).
* Brown _et al._ (2019) A. R. Brown, H. Gharibyan, S. Leichenauer, H. W. Lin, S. Nezami, G. Salton, L. Susskind, B. Swingle, and M. Walter, arXiv preprint arXiv:1911.06314 (2019).
* Nezami _et al._ (2021) S. Nezami, H. W. Lin, A. R. Brown, H. Gharibyan, S. Leichenauer, G. Salton, L. Susskind, B. Swingle, and M. Walter, to appear (2021).
* Gao and Jafferis (2019) P. Gao and D. L. Jafferis, arXiv preprint arXiv:1911.07416 (2019).
* Sekino and Susskind (2008) Y. Sekino and L. Susskind, 2008, 065 (2008).
* Shenker and Stanford (2014) S. H. Shenker and D. Stanford, 2014, 67 (2014).
* Roberts _et al._ (2015) D. A. Roberts, D. Stanford, and L. Susskind, 2015, 51 (2015).
* Maldacena _et al._ (2016) J. Maldacena, S. H. Shenker, and D. Stanford, 2016, 106 (2016).
* Hosur _et al._ (2016) P. Hosur, X.-L. Qi, D. A. Roberts, and B. Yoshida, 2016, 4 (2016).
* Hayden and Preskill (2007) P. Hayden and J. Preskill, 2007, 120 (2007).
* Roberts _et al._ (2018) D. A. Roberts, D. Stanford, and A. Streicher, 2018, 122 (2018).
* Qi and Streicher (2019) X.-L. Qi and A. Streicher, 2019, 12 (2019).
* Qi _et al._ (2019) X.-L. Qi, E. J. Davis, A. Periwal, and M. Schleier-Smith, arXiv preprint arXiv:1906.00524 (2019).
* Larkin and Ovchinnikov (1969) A. Larkin and Y. N. Ovchinnikov, Sov Phys JETP 28, 1200 (1969).
* Bennett _et al._ (1999) C. H. Bennett, P. W. Shor, J. A. Smolin, and A. V. Thapliyal, Phys. Rev. Lett. 83, 3081 (1999).
* Brun _et al._ (2006) T. Brun, I. Devetak, and M.-H. Hsieh, science 314, 436 (2006).
* Sachdev and Ye (1993) S. Sachdev and J. Ye, Physical review letters 70, 3339 (1993).
* Kitaev (2015) A. Kitaev, “A simple model of quantum holography,” (2015).
* Maldacena and Stanford (2016) J. Maldacena and D. Stanford, Physical Review D 94, 106002 (2016).
* Kitaev and Suh (2018) A. Kitaev and S. J. Suh, 2018, 183 (2018).
* Nahum _et al._ (2018) A. Nahum, S. Vijay, and J. Haah, Physical Review X 8, 021014 (2018).
* Bernien _et al._ (2017) H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletic, and M. D. Lukin, Nature 551, 579 (2017).
* Maller _et al._ (2015) K. M. Maller, M. T. Lichtman, T. Xi, Y. Sun, M. J. Piotrowicz, A. W. Carr, L. Isenhower, and M. Saffman, Phys. Rev. A 92, 022336 (2015).
* Labuhn _et al._ (2016) H. Labuhn, D. Barredo, S. Ravets, S. de Léséleuc, M. Macri, T. Lahaye, and A. Browaeys, Nature 534, 667 (2016).
* Graham _et al._ (2019) T. M. Graham, M. Kwon, B. Grinkemeyer, Z. Marra, X. Jiang, M. T. Lichtman, Y. Sun, M. Ebert, and M. Saffman, Phys. Rev. Lett. 123, 230501 (2019).
* Madjarov _et al._ (2020) I. S. Madjarov, J. P. Covey, A. L. Shaw, J. Choi, A. Kale, A. Cooper, H. Picher, V. Schkolnik, J. R. Williams, and M. Endres, Nat. Phys. (2020).
* Wilson _et al._ (2018) J. Wilson, S. Saskin, Y. Meng, S. Ma, R. Dilip, A. Burgers, and J. Thompson, arXiv preprint arXiv:1912.08754 (2018).
* Blatt and Wineland (2008) R. Blatt and D. Wineland, Nature 453, 1008 (2008).
* Monroe and Kim (2013) C. Monroe and J. Kim, Science 339, 1164 (2013).
* Ballance _et al._ (2016) C. J. Ballance, T. P. Harty, N. M. Linke, M. A. Sepiol, and D. M. Lucas, Phys. Rev. Lett. 117, 060504 (2016).
* Gaebler _et al._ (2016) J. P. Gaebler, T. R. Tan, Y. Lin, Y. Wan, R. Bowler, A. C. Keith, S. Glancy, K. Coakley, E. Knill, D. Leibfried, and D. J. Wineland, Phys. Rev. Lett. 117, 060505 (2016).
* Cetina _et al._ (2020) M. Cetina, L. N. Egan, C. A. Noel, M. L. Goldman, A. R. Risinger, D. Zhu, D. Biswas, and C. Monroe, arXiv preprint arXiv:2007.06768 (2020).
* Shenker and Stanford (2015) S. H. Shenker and D. Stanford, 2015, 132 (2015).
* Gu and Kitaev (2019) Y. Gu and A. Kitaev, 2019, 75 (2019).
* Lin _et al._ (2019) H. W. Lin, J. Maldacena, and Y. Zhao, arXiv preprint arXiv:1904.12820 (2019).
* Susskind (2019) L. Susskind, arXiv preprint arXiv:1904.12819 (2019).
* Lin and Susskind (2019) H. W. Lin and L. Susskind, arXiv preprint arXiv:1911.02603 (2019).
* Roberts and Yoshida (2017) D. A. Roberts and B. Yoshida, 2017, 121 (2017).
* Von Keyserlingk _et al._ (2018) C. Von Keyserlingk, T. Rakovszky, F. Pollmann, and S. L. Sondhi, Physical Review X 8, 021013 (2018).
* Khemani _et al._ (2018) V. Khemani, A. Vishwanath, and D. A. Huse, Physical Review X 8, 031057 (2018).
* Rakovszky _et al._ (2018) T. Rakovszky, F. Pollmann, and C. von Keyserlingk, Physical Review X 8, 031058 (2018).
* Li _et al._ (2018) Y. Li, X. Chen, and M. P. Fisher, Physical Review B 98, 205136 (2018).
* Skinner _et al._ (2019) B. Skinner, J. Ruhman, and A. Nahum, Physical Review X 9, 031009 (2019).
* Dankert _et al._ (2009) C. Dankert, R. Cleve, J. Emerson, and E. Livine, Physical Review A 80, 012304 (2009).
* Webb (2015) Z. Webb, arXiv preprint arXiv:1510.02769 (2015).
* Kueng and Gross (2015) R. Kueng and D. Gross, ArXiv abs/1510.02767 (2015).
* Zhu (2017) H. Zhu, Phys. Rev. A 96, 062336 (2017).
* Kardar _et al._ (1986) M. Kardar, G. Parisi, and Y.-C. Zhang, Physical Review Letters 56, 889 (1986).
* Corwin (2012) I. Corwin, Random matrices: Theory and applications 1, 1130001 (2012).
* Nielsen and Chuang (2002) M. A. Nielsen and I. Chuang, “Quantum computation and quantum information,” (2002).
* Gao and Liu (2018) P. Gao and H. Liu, arXiv preprint arXiv:1810.01444 (2018).
* Susskind (2018) L. Susskind, arXiv preprint arXiv:1802.01198 (2018).
* Arute _et al._ (2019) F. Arute, K. Arya, R. Babbush, D. Bacon, J. C. Bardin, R. Barends, R. Biswas, S. Boixo, F. G. Brandao, D. A. Buell, _et al._ , Nature 574, 505 (2019).
* Yang _et al._ (2020) B. Yang, H. Sun, R. Ott, H.-Y. Wang, T. V. Zache, J. C. Halimeh, Z.-S. Yuan, P. Hauke, and J.-W. Pan, Nature 587, 392 (2020).
* Yoshida (2019a) B. Yoshida, arXiv preprint arXiv:1910.11346 (2019a).
* Yoshida (2019b) B. Yoshida, 2019, 132 (2019b).
* Kourkoulou and Maldacena (2017) I. Kourkoulou and J. Maldacena, arXiv preprint arXiv:1707.02325 (2017).
* Jaksch _et al._ (2000) D. Jaksch, J. I. Cirac, P. Zoller, S. L. Rolston, R. Cote, and M. D. Lukin, Phys. Rev. Lett. 85, 2208 (2000).
* Xia _et al._ (2015) T. Xia, M. Lichtman, K. Maller, A. W. Carr, M. J. Piotrowicz, L. Isenhower, and M. Saffman, Phys. Rev. Lett. 114, 100503 (2015).
* Barredo _et al._ (2018) D. Barredo, V. Lienhard, S. De Leseleuc, T. Lahaye, and A. Browaeys, Nature 561, 79 (2018).
* Glaetzle _et al._ (2015) A. W. Glaetzle, M. Dalmonte, R. Nath, C. Gross, I. Bloch, and P. Zoller, Phys. Rev. Lett. 114, 173002 (2015).
* Potirniche _et al._ (2017) I.-D. Potirniche, A. C. Potter, M. Schleier-Smith, A. Vishwanath, and N. Yao, Phys. Rev. Lett. 119, 123601 (2017).
* Zeiher _et al._ (2017) J. Zeiher, J.-y. Choi, A. Rubio-Abadal, T. Pohl, R. van Bijnen, I. Bloch, and C. Gross, Phys. Rev. X 7, 041063 (2017).
* (76) Our parallelized dynamics code is available open-source as the package dynamite: https://dynamite.readthedocs.io/ DOI:10.5281/zenodo.3606826.
* Levine _et al._ (2019) H. Levine, A. Keesling, G. Semeghini, A. Omran, T. T. Wang, S. Ebadi, H. Bernien, M. Greiner, V. Vuletic, H. Pichler, and M. D. Lukin, arXiv preprint arXiv:1908.06101 (2019).
* Levine _et al._ (2018) H. Levine, A. Keesling, A. Omran, H. Bernien, S. Schwartz, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletic, and M. D. Lukin, Phys. Rev. Lett. 121, 123603 (2018).
* van Bijnen and Pohl (2014) R. van Bijnen and T. Pohl, arXiv preprint arXiv:1411.3118 (2014).
* de Léséleuc _et al._ (2019) S. de Léséleuc, V. Lienhard, P. Scholl, D. Barredo, S. Weber, N. Lang, H. P. Büchler, T. Lahaye, and A. Browaeys, Science 365, 775 (2019).
* Wu and Hsieh (2019) J. Wu and T. H. Hsieh, Phys. Rev. Lett. 123, 220502 (2019).
* Zhu _et al._ (2019) D. Zhu, S. Johri, N. M. Linke, K. A. Landsman, N. H. Nguyen, C. H. Alderete, A. Y. Matsuura, T. H. Hsieh, and C. Monroe, arXiv preprint arXiv:1906.02699 (2019).
* Bohnet _et al._ (2016) J. G. Bohnet, B. C. Sawyer, J. W. Britton, M. L. Wall, A. M. Rey, M. Foss-Feig, and J. J. Bollinger, Science 352, 1297 (2016).
* Vermersch _et al._ (2019) B. Vermersch, A. Elben, L. M. Sieberer, N. Y. Yao, and P. Zoller, Physical Review X 9, 021061 (2019).
* Zhang _et al._ (2017) J. Zhang, G. Pagano, P. W. Hess, A. Kyprianidis, P. Becker, H. Kaplan, A. V. Gorshkov, Z.-X. Gong, and C. Monroe, Nature 551, 601 (2017).
* Barrett _et al._ (2003) M. D. Barrett, B. DeMarco, T. Schaetz, V. Meyer, D. Leibfried, J. Britton, J. Chiaverini, W. M. Itano, B. Jelenković, J. D. Jost, C. Langer, T. Rosenband, and D. J. Wineland, Phys. Rev. A 68, 042302 (2003).
* Else _et al._ (2020) D. V. Else, F. Machado, C. Nayak, and N. Y. Yao, Phys. Rev. A 101, 022333 (2020).
* Zhou _et al._ (2020) T. Zhou, S. Xu, X. Chen, A. Guo, and B. Swingle, Phys. Rev. Lett. 124, 180601 (2020).
* Martyn and Swingle (2019) J. Martyn and B. Swingle, Physical Review A 100, 032107 (2019).
* Su (2020) V. P. Su, arXiv preprint arXiv:2009.04488 (2020).
* Cirac and Zoller (1995) J. I. Cirac and P. Zoller, Phys. Rev. Lett. 74, 4091 (1995).
* Mølmer and Sørensen (1999) K. Mølmer and A. Sørensen, Phys. Rev. Lett. 82, 1835 (1999).
* Monroe _et al._ (2019) C. Monroe, W. Campbell, L.-M. Duan, Z.-X. Gong, A. Gorshkov, P. Hess, R. Islam, K. Kim, G. Pagano, P. Richerme, _et al._ , arXiv preprint arXiv:1912.07845 (2019).
* Zhu _et al._ (2006) S.-L. Zhu, C. Monroe, and L.-M. Duan, Phys. Rev. Lett. 97, 050505 (2006).
* Debnath _et al._ (2016) S. Debnath, N. M. Linke, C. Figgatt, K. A. Landsman, K. Wright, and C. Monroe, Nature 563, 63 (2016).
* Landsman _et al._ (2019b) K. A. Landsman, Y. Wu, P. H. Leung, D. Zhu, N. M. Linke, K. R. Brown, L. Duan, and C. Monroe, Phys. Rev. A 100, 022332 (2019b).
* Wright _et al._ (2019) K. Wright _et al._ , Nat. Commun. 10, 5464 (2019).
* Gärttner _et al._ (2017) M. Gärttner, J. G. Bohnet, A. Safavi-Naini, M. L. Wall, J. J. Bollinger, and A. M. Rey, Nature Physics 13, 781 (2017).
* Smith _et al._ (2016) J. Smith, A. Lee, P. Richerme, B. Neyenhuis, P. Hess, P. Hauke, M. Heyl, D. Huse, and C. Monroe, Nature Physics 12, 907 (2016).
* Hensinger _et al._ (2006) W. Hensinger, S. Olmschenk, D. Stick, D. Hucul, M. Yeo, M. Acton, L. Deslauriers, C. Monroe, and J. Rabchuk, Applied Physics Letters 88, 034101 (2006).
* Kaufmann _et al._ (2017) H. Kaufmann, T. Ruster, C. T. Schmiegelow, M. A. Luda, V. Kaushal, J. Schulz, D. von Lindenfels, F. Schmidt-Kaler, and U. G. Poschinger, Phys. Rev. A 95, 052319 (2017).
* Daley _et al._ (2012) A. Daley, H. Pichler, J. Schachenmayer, and P. Zoller, Phys. Rev. Lett. 109, 020505 (2012).
* Abanin and Demler (2012) D. A. Abanin and E. Demler, Phys. Rev. Lett. 109, 020504 (2012).
* Johri _et al._ (2017) S. Johri, D. S. Steiger, and M. Troyer, Phys. Rev. B 96, 195136 (2017).
* Elben _et al._ (2018) A. Elben, B. Vermersch, M. Dalmonte, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. 120, 050406 (2018).
* Linke _et al._ (2018) N. M. Linke, S. Johri, C. Figgatt, K. A. Landsman, A. Y. Matsuura, and C. Monroe, Phys. Rev. A 98, 052334 (2018).
* Brydges _et al._ (2019) T. Brydges, A. Elben, P. Jurcevic, B. Vermersch, C. Maier, B. P. Lanyon, P. Zoller, R. Blatt, and C. F. Roos, Science 364, 260 (2019), https://science.sciencemag.org/content/364/6437/260.full.pdf .
* Else _et al._ (2018) D. V. Else, F. Machado, C. Nayak, and N. Y. Yao, arXiv preprint arXiv:1809.06369 (2018).
* Tran _et al._ (2020) M. C. Tran, C.-F. Chen, A. Ehrenberg, A. Y. Guo, A. Deshpande, Y. Hong, Z.-X. Gong, A. V. Gorshkov, and A. Lucas, arXiv preprint arXiv:2001.11509 (2020).
* Gu _et al._ (2017) Y. Gu, X.-L. Qi, and D. Stanford, J. High Energy Phys. 2017, 125 (2017).
* Bentsen _et al._ (2019) G. Bentsen, Y. Gu, and A. Lucas, Proceedings of the National Academy of Sciences 116, 6689 (2019).
* Salberger _et al._ (2017) O. Salberger, T. Udagawa, Z. Zhang, H. Katsura, I. Klich, and V. Korepin, Journal of Statistical Mechanics: Theory and Experiment 2017, 063103 (2017).
* Alexander _et al._ (2018) R. N. Alexander, G. Evenbly, and I. Klich, arXiv preprint arXiv:1806.09626 (2018).
* Swingle and Halpern (2018) B. Swingle and N. Y. Halpern, Phys. Rev. A 97, 062113 (2018).
## Appendix A Precise bound for the peaked size regime
As in the main text, we decompose a time-evolved finite-temperature operator
into a sum of Pauli strings:
$Q_{A}(t)\rho^{1/2}=\sum_{R}c_{R}(t)R$ (111)
In this basis, for qubit systems the correlator takes the form
$C_{Q}=\bra{\text{TFD}}\tilde{Q}_{A,r}^{\dagger}(-t)e^{igV}Q_{A,l}(t)\ket{\text{TFD}}=e^{ig+i\pi\mathcal{S}[Q_{A}(t=0)]}\sum_{R}e^{-ig\mathcal{S}[R]/N}c_{R}^{2}(t)=e^{ig+i\pi\mathcal{S}[Q_{A}(t=0)]}\sum_{n}e^{ign/N}f(n)$
(112)
where again $\tilde{Q}_{A,r}^{\dagger}=DQ_{A,r}^{\dagger}D^{\dagger}$ for the
decoding operation $D=Y\otimes\ldots\otimes Y$, and we use
$\bra{\text{TFD}}\tilde{Q}_{A,r}^{\dagger}(-t)=e^{i\pi\mathcal{S}[Q_{A}]}\bra{\text{EPR}}Q_{A,l}(t)\rho^{1/2}$
for qubit Pauli operators $Q_{A}$. Here we define the winding size
distribution Brown _et al._ (2019); Nezami _et al._ (2021)
$f(n)\equiv\sum_{R:\mathcal{S}[R]=n}c_{R}^{2}(t).$ (113)
At finite temperature, this size wavefunction is distinct from the size
distribution:
$P(n)\equiv\sum_{R:\mathcal{S}[R]=n}|c_{R}(t)|^{2},$ (114)
which is a real, normalized probability distribution probed by the _one-sided_
correlator Qi and Streicher (2019)
$\bra{\text{TFD}}Q_{A,l}^{\dagger}(t)e^{igV}Q_{A,l}(t)\ket{\text{TFD}}=e^{ig}\sum_{R}e^{-ig\mathcal{S}[R]/N}|c_{R}|^{2}(t)=\sum_{n}e^{ign/N}P(n).$
(115)
Nevertheless, the size distribution bounds the size wavefunction magnitude via
the triangle inequality:
$|f(n)|\leq P(n),$ (116)
with equality achieved when all Pauli operators of size $n$ contribute the
same phase to $f(n)$.
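As a toy numerical illustration of Eq. (116), the following sketch uses randomly chosen, purely illustrative coefficients $c_{R}$ at one fixed size $n$ (in the physical setting these would come from expanding $Q_{A}(t)$):

```python
import cmath
import random

random.seed(0)

# Illustrative coefficients c_R for Pauli strings of one fixed size n.
c = [random.uniform(0.1, 1.0) * cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
     for _ in range(50)]

f_n = sum(ci ** 2 for ci in c)       # winding size distribution, Eq. (113)
P_n = sum(abs(ci) ** 2 for ci in c)  # size distribution, Eq. (114)
assert abs(f_n) <= P_n               # triangle inequality, Eq. (116)

# Equality holds when every string contributes the same phase (here: zero):
c_real = [abs(ci) for ci in c]
assert abs(sum(ci ** 2 for ci in c_real)
           - sum(abs(ci) ** 2 for ci in c_real)) < 1e-12
```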
The average size and size variance are easily found from the size distribution
as
$\mathcal{S}=\int_{0}^{\infty}\,dn\,n\,P(n),\,\,\,\,\,\,\delta\mathcal{S}^{2}+\mathcal{S}^{2}=\int_{0}^{\infty}\,dn\,n^{2}\,P(n)$
(117)
where we work in the continuum limit replacing sums over the size by integrals
for simplicity. We now define the _asymptotic size width with error_
$\varepsilon$ as the minimal width $W_{\varepsilon}$ about the average size
such that
$1-\int_{\mathcal{S}-W_{\varepsilon}}^{\mathcal{S}+W_{\varepsilon}}\,dn\,P(n)\leq\varepsilon,$
(118)
i.e. a fraction $1-\varepsilon$ of the size distribution’s support is
contained in the interval
$I=[\mathcal{S}-W_{\varepsilon},\mathcal{S}+W_{\varepsilon}]$ (the lower limit
of the integral should be bounded by zero; for simpler notation we’ll deal
with this by instead defining $P(n)=f(n)=0$ for $n<0$). We can now separate
the correlator into two pieces, one arising from sizes in the interval $I$ and
the other from the interval’s complement
$\bar{I}=(-\infty,\mathcal{S}-W_{\varepsilon}]\cup[\mathcal{S}+W_{\varepsilon},\infty)$:
$C_{Q}=\int_{I}\,dn\,f(n)e^{ign/N}+R$ (119)
where the remainder $R=\int_{\bar{I}}\,dn\,f(n)e^{ign/N}$ is bounded in
magnitude by $\varepsilon$:
$\begin{split}|R|&=\left|\int_{\bar{I}}\,dn\,f(n)e^{ign/N}\right|\\\
&\leq\int_{\bar{I}}\,dn\,\left|f(n)e^{ign/N}\right|\\\
&\leq\int_{\bar{I}}\,dn\,\left|P(n)\right|\\\ &\leq\varepsilon\\\ \end{split}$
(120)
Peaked size teleportation occurs in the regime where $gW_{\varepsilon}/N\ll
1$. In this limit, we can expand
$e^{ign/N}=e^{ig\mathcal{S}/N}\left[1+E(n)\right]$ (121)
where the deviation for $n\in I$ is bounded by
$|E(n)|\leq\text{max}_{n\in
I}\bigg{|}1-e^{ig(n-\mathcal{S})/N}\bigg{|}=\bigg{|}\sin(gW_{\varepsilon}/N)\bigg{|},$
(122)
which holds as long as $gW_{\varepsilon}/N\leq\pi/2$. We then have
$\begin{split}C_{Q}&=\int_{I}\,dn\,f(n)e^{ig\mathcal{S}/N}\left[1+E(n)\right]+R\\\
&=e^{ig\mathcal{S}/N}G_{\beta}(Q_{A})+R+R^{\prime}+R^{\prime\prime}\\\
\end{split}$ (123)
where
$G_{\beta}(Q_{A})=\int_{0}^{\infty}dn\,f(n)=\tr(Q_{A}^{\dagger}\rho^{1/2}Q_{A}\rho^{1/2})$
is the imaginary time two-point function, and the error
$R^{\prime}=e^{ig\mathcal{S}/N}\int_{I}\,dn\,f(n)E(n)$ is bounded by
$\begin{split}|R^{\prime}|&=\left|\int_{I}\,dn\,f(n)E(n)\right|\\\
&\leq\int_{I}\,dn\,|f(n)||E(n)|\\\
&\leq\bigg{|}\sin(gW_{\varepsilon}/N)\bigg{|}\int_{I}\,dn\,|f(n)|\\\
&\leq\bigg{|}\sin(gW_{\varepsilon}/N)\bigg{|}\end{split}$ (124)
and the second error $R^{\prime\prime}=G_{\beta}(Q_{A})-\int_{I}\,dn\,f(n)$ is
bounded by
$|R^{\prime\prime}|=\left|G_{\beta}(Q_{A})-\int_{I}\,dn\,f(n)\right|=\left|\int_{\bar{I}}\,dn\,f(n)\right|\leq\varepsilon.$
(125)
We therefore conclude that whenever $gW_{\varepsilon}/N\leq\pi/2$, the
deviation of $C_{Q}$ from the peaked size value is controlled by the upper
bound
$\left|C_{Q}-e^{ig\mathcal{S}/N}G_{\beta}(Q_{A})\right|\leq
2\varepsilon+\bigg{|}\sin(gW_{\varepsilon}/N)\bigg{|}\equiv\mathcal{B}.$ (126)
Practically speaking, the lowest value of $g$ for successful peaked-size
teleportation is $g\mathcal{S}/N=\pi$. Therefore, for a given size
distribution, we can guarantee that peaked-size teleportation is possible if
we find $\varepsilon$ such that $\mathcal{B}\ll G_{\beta}(Q_{A})$, i.e. the
error in the correlator is small compared to the correlator magnitude.
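This bound is straightforward to check numerically. The sketch below takes an artificial Gaussian size distribution with $f(n)=P(n)$ real and positive (all parameter values are assumptions for the demonstration) and verifies the inequality of Eq. (126):

```python
import cmath
import math

N = 200                      # illustrative system size
S_mean, sigma = 150.0, 8.0   # illustrative mean size and width
g = math.pi * N / S_mean     # lowest useful coupling: g*S/N = pi

ns = range(N + 1)
P = [math.exp(-(n - S_mean) ** 2 / (2 * sigma ** 2)) for n in ns]
norm = sum(P)
P = [p / norm for p in P]    # take f(n) = P(n), real and positive

# Correlator and its peaked-size approximation, cf. Eqs. (112) and (123)
C_Q = sum(p * cmath.exp(1j * g * n / N) for n, p in zip(ns, P))
peaked = cmath.exp(1j * g * S_mean / N) * sum(P)   # G_beta = 1 here

# Smallest width W capturing all but eps of the distribution, Eq. (118)
eps = 1e-3
W = 0
while 1 - sum(p for n, p in zip(ns, P) if abs(n - S_mean) <= W) > eps:
    W += 1

B = 2 * eps + abs(math.sin(g * W / N))  # the bound of Eq. (126)
assert g * W / N <= math.pi / 2
assert abs(C_Q - peaked) <= B
```

Here the bound is loose (the actual deviation is set by $g^{2}\delta\mathcal{S}^{2}/N^{2}$, far below $\mathcal{B}$), consistent with the discussion above.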
We illustrate this with some examples, in the few cases where we can exactly
solve for operators’ full size distribution. First, consider a thermalized
system at late times, which we will approximate by setting the size
distribution of $Q_{A}(t)$ to be that of a random Pauli string. The size is then
binomially distributed, which for large $N$ approaches a Gaussian with mean
$\mathcal{S}=3N/4$ and variance $\delta\mathcal{S}^{2}=3N/16$:
$P(n)=\binom{N}{n}(3/4)^{n}(1/4)^{N-n}\approx\frac{1}{\sqrt{2\pi}\delta\mathcal{S}}\exp\left(-(n-\mathcal{S})^{2}/2\delta\mathcal{S}^{2}\right).$ (127)
We therefore have
$1-\int_{\mathcal{S}-W_{\varepsilon}}^{\mathcal{S}+W_{\varepsilon}}\,dn\,P(n)=\text{erfc}\left(\frac{W_{\varepsilon}}{\sqrt{2}\delta\mathcal{S}}\right)=\varepsilon.$
(128)
The error function decays exponentially in its argument, so even for
exponentially small $\varepsilon$ we require only
$W_{\varepsilon}=A\delta\mathcal{S}$ for some constant $A\sim\mathcal{O}(1)$.
Setting $g$ equal to its minimal value, $g\mathcal{S}/N=\pi$, we have both
$\varepsilon\ll 1$ and $\big{|}\sin(gW_{\varepsilon}/N)\big{|}\approx
A\delta\mathcal{S}/\mathcal{S}\sim 1/\sqrt{N}\ll 1$, and so peaked size
teleportation is guaranteed.
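These statements are easy to verify from the exact binomial distribution; a minimal numerical sketch (the system size $N$ is an arbitrary illustrative choice):

```python
import math

N = 400  # illustrative number of qubits

# Size of a random Pauli string: each site is non-identity with
# probability 3/4, so the size is binomial(N, 3/4), cf. Eq. (127).
P = [math.comb(N, n) * (3 / 4) ** n * (1 / 4) ** (N - n) for n in range(N + 1)]

mean = sum(n * p for n, p in enumerate(P))
var = sum(n * n * p for n, p in enumerate(P)) - mean ** 2
assert abs(mean - 3 * N / 4) < 1e-6   # S = 3N/4
assert abs(var - 3 * N / 16) < 1e-4   # dS^2 = 3N/16

# The weight outside a window W = A*dS is already tiny for A = 4:
W = 4 * math.sqrt(var)
tail = sum(p for n, p in enumerate(P) if abs(n - mean) > W)
assert tail < 1e-3
```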
We can also use this method to guarantee peaked-size teleportation in the
large-$q$ SYK model at infinite temperature. Here, the generating function for
the size distribution is Qi and Streicher (2019)
$\sum_{n}P(n)e^{-\mu n}=\frac{e^{-\mu p}}{(1+(1-e^{-\mu
q})\sinh^{2}Jt)^{2p/q}}=\sum_{n}\frac{\Delta_{n}}{n!}x^{n}(1-x)^{\Delta}e^{-\mu(qn+p)}$
(129)
where we define
$\Delta_{n}\equiv\frac{\Gamma(\Delta+n)}{\Gamma(\Delta)},\quad
x\equiv\frac{\sinh^{2}Jt}{1+\sinh^{2}Jt},\quad\Delta\equiv 2p/q.$ (130)
From this, we can identify the size distribution:
$P(qn+p)=\frac{\Delta_{n}}{n!}x^{n}(1-x)^{\Delta}.$ (131)
The size and size width are
$\mathcal{S}=\overline{n}=\sum_{n}n\frac{\Delta_{n}}{n!}x^{n}(1-x)^{\Delta}=\frac{\Delta
x}{1-x},\qquad\delta\mathcal{S}=\sqrt{\overline{n^{2}}-\overline{n}^{2}}=\frac{\sqrt{\Delta
x}}{1-x}.$ (132)
Therefore, the ratio of size width to average size is
$\delta\mathcal{S}/\mathcal{S}=\sqrt{\frac{x}{\Delta}}\frac{1}{1+x},$ (133)
which approaches zero when $p\rightarrow\infty$ ($\Delta\rightarrow\infty$).
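The moments quoted in Eq. (132) are those of a negative-binomial distribution and can be checked numerically; in this sketch the values of $\Delta$ and $x$ are illustrative, and the distribution is built from its term-by-term recurrence:

```python
import math

Delta, x = 6.0, 0.4  # illustrative values; Delta = 2p/q

# Build P(qn+p) = (Delta_n / n!) x^n (1-x)^Delta via
# P(n+1)/P(n) = x (Delta + n) / (n + 1)
P = [(1 - x) ** Delta]
for n in range(150):
    P.append(P[-1] * x * (Delta + n) / (n + 1))

assert abs(sum(P) - 1) < 1e-12                 # normalized
nbar = sum(n * p for n, p in enumerate(P))
var = sum(n * n * p for n, p in enumerate(P)) - nbar ** 2

assert abs(nbar - Delta * x / (1 - x)) < 1e-9                       # Eq. (132), mean
assert abs(math.sqrt(var) - math.sqrt(Delta * x) / (1 - x)) < 1e-9  # Eq. (132), width
```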
To apply the upper bound Eq. (126), we need to integrate (i.e. sum) the tail
of the size distribution in order to compute its asymptotic width. In this
example, the discrete tail can be summed explicitly and we define
$I(k)\equiv\sum_{n=k}^{\infty}P(qn+p)=\sum_{n=k}^{\infty}\frac{\Delta_{n}}{n!}x^{n}(1-x)^{\Delta}=\frac{B_{x}(k,\Delta)}{B(k,\Delta)}$
(134)
where $B_{x}(a,b)$ and $B(a,b)$ are the incomplete and ordinary beta functions,
respectively. Let us take $k=\bar{n}(1\pm\eta)$ for some small $\eta$
representing the asymptotic width
$W_{\varepsilon}=\bar{n}\eta q.$ (135)
This width corresponds to an error
$\varepsilon=1-I(\bar{n}(1-\eta))+I(\bar{n}(1+\eta)).$ (136)
Taking $g\mathcal{S}/N=\pi$, the upper bound is
$\displaystyle\mathcal{B}$
$\displaystyle=2[1-I(\bar{n}(1-\eta))+I(\bar{n}(1+\eta))]+\sin\frac{2\pi\eta
x}{1+x}$ $\displaystyle=2\left(1-\frac{B_{x}(\frac{\Delta
x(1-\eta)}{1-x},\Delta)}{B(\frac{\Delta
x(1-\eta)}{1-x},\Delta)}+\frac{B_{x}(\frac{\Delta
x(1+\eta)}{1-x},\Delta)}{B(\frac{\Delta
x(1+\eta)}{1-x},\Delta)}\right)+\sin\frac{2\pi\eta x}{1+x}.$ (137)
At infinite temperature, $G_{\beta}(Q_{A})=1$, so we need to show that the minimum
of $\mathcal{B}$ tends to zero as $\Delta\rightarrow\infty$.
For early times, $\sinh Jt\sim\mathcal{O}(1)$ and $1-x$ is an order-one number;
taking the $\Delta\rightarrow\infty$ limit, we get
$\frac{B_{x}(\frac{\Delta x(1-\eta)}{1-x},\Delta)}{B(\frac{\Delta
x(1-\eta)}{1-x},\Delta)}\rightarrow 1,\qquad\frac{B_{x}(\frac{\Delta
x(1+\eta)}{1-x},\Delta)}{B(\frac{\Delta x(1+\eta)}{1-x},\Delta)}\rightarrow 0$
(138)
The bound becomes
$\mathcal{B}\rightarrow\sin\frac{2\pi\eta x}{1+x}$ (139)
This means that, in this limit, the integrated probability between $\bar{n}(1-\eta)$
and $\bar{n}(1+\eta)$ is 1 for any finite $\eta$. One can thus take
$\eta\rightarrow 0$ at a rate slower than $1/\Delta$ in order to
make the bound vanish. This computation applies for $x\in(0,1)$, which means
that peaked-size behavior always holds at early times. This is physically
reasonable, as the operator has not yet been scrambled extensively. However,
since the size is small at such early times, in order for teleportation to
work we must choose $g\sim N$.
For intermediate times, such that $\sinh^{2}Jt\sim N$ and $\Delta\ll N\sim
1/(1-x)$, we must take the $x\rightarrow 1$ limit first. Using the fact that
$\frac{B_{x}(\frac{\Delta x(1-\eta)}{1-x},\Delta)}{B(\frac{\Delta
x(1-\eta)}{1-x},\Delta)}=1-\frac{(1-x)^{\Delta}x^{\frac{\Delta
x(1-\eta)}{1-x}}\Gamma(\frac{\Delta(1-x\eta)}{1-x})}{\Gamma(\frac{\Delta
x(1-\eta)}{1-x})\Gamma(1+\Delta)}F(1,\frac{\Delta(1-x\eta)}{1-x};\Delta+1;1-x)$
(140)
where $F$ is the Gauss hypergeometric function. In the $x\rightarrow 1$ limit,
the right portion of Eq. (140) tends to
$F(1,\frac{\Delta(1-x\eta)}{1-x};\Delta+1;1-x)\rightarrow{}_{1}F_{1}(1;\Delta+1;\Delta(1-\eta))=\Delta^{1-\Delta}e^{\Delta(1-\eta)}(1-\eta)^{-\Delta}(\Gamma(\Delta)-\Gamma(\Delta,\Delta(1-\eta)))$
(141)
where $\Gamma(a,x)$ is the upper incomplete gamma function. Meanwhile, the left
portion of the second term of Eq. (140) gives
$\frac{(1-x)^{\Delta}x^{\frac{\Delta
x(1-\eta)}{1-x}}\Gamma(\frac{\Delta(1-x\eta)}{1-x})}{\Gamma(\frac{\Delta
x(1-\eta)}{1-x})\Gamma(1+\Delta)}\rightarrow\frac{\Delta^{\Delta}(1-\eta)^{\Delta}e^{-\Delta(1-\eta)}}{\Gamma(1+\Delta)}$
(142)
under $x\rightarrow 1$. Combining the two, we have
$\lim_{x\rightarrow 1}\frac{B_{x}(\frac{\Delta
x(1-\eta)}{1-x},\Delta)}{B(\frac{\Delta
x(1-\eta)}{1-x},\Delta)}=\frac{\Gamma(\Delta,\Delta(1-\eta))}{\Gamma(\Delta)}.$
(143)
It follows that the upper bound is
$\mathcal{B}=2\left(1-\frac{\Gamma(\Delta,\Delta(1-\eta))}{\Gamma(\Delta)}+\frac{\Gamma(\Delta,\Delta(1+\eta))}{\Gamma(\Delta)}\right)+\sin\pi\eta$
(144)
This function has a unique minimum for $\eta\in[0,1/2]$ and this minimum
decreases as $\Delta$ increases. Taking the derivative with respect to $\eta$, we
get
$\displaystyle\partial_{\eta}\mathcal{B}$
$\displaystyle=\pi\cos\pi\eta-\frac{2\Delta^{\Delta}}{\Gamma(\Delta)}\left[(1+\eta)^{\Delta-1}e^{-\Delta(1+\eta)}+(1-\eta)^{\Delta-1}e^{-\Delta(1-\eta)}\right]$
$\displaystyle\rightarrow\pi\cos\pi\eta-\sqrt{\frac{2\Delta}{\pi}}\left[(1+\eta)^{\Delta-1}e^{-\Delta\eta}+(1-\eta)^{\Delta-1}e^{\Delta\eta}\right]$
(145)
where in the second step we have taken the large-$\Delta$ limit. Solving
$\partial_{\eta}\mathcal{B}=0$ in this limit, we find the minimum at
$\eta\approx\sqrt{\frac{1}{\Delta}\log\frac{8\Delta}{\pi^{3}}}\rightarrow 0$
(146)
which in turn shows that $\mathcal{B}$ tends to zero in this limit. This proves
that at infinite temperature, teleportation exactly matches the peaked-size
prediction for both early and intermediate times. For late times
$t\gg\frac{1}{2J}\log N$ the size distribution above breaks down, as can be
seen since $P(n)$ is dominated by some $n>N$, which is unphysical since $N$ is
the total number of fermions.
In contrast, we can also show that the above bound does _not_ apply at low
temperatures for large-$q$ SYK, as expected from the main text. At low
temperature, the upper bound $\mathcal{B}$ needs to be much smaller than the
two-sided correlation function $G_{\beta}(Q_{A})\sim(\beta J)^{-2\Delta}$ in
order to guarantee peaked-size teleportation. The low temperature size
distribution is essentially the same as at infinite temperature, requiring
only the replacement
$x\rightarrow\frac{\sinh^{2}\pi t/\beta}{(\pi/\beta J)^{2}+\sinh^{2}\pi
t/\beta}\in[0,1]$ (147)
and adding $e^{-\mu N\delta_{\beta}}$ to the distribution, which shifts the
initial size by a constant amount $N\delta_{\beta}$ (accounting for the size
of the thermal density matrix). Following a similar computation to above, one
can show that $\mathcal{B}$ still asymptotes to zero, but now with a _slower_
speed than $G_{\beta}(Q_{A})$. For example, in the early time and large
$\Delta$ limits, $\mathcal{B}\sim\exp(-\Delta C(x,\eta))/\sqrt{\Delta}$ where
$C(x,\eta)$ is order 1, while $G_{\beta}(Q_{A})\sim\exp(-2\Delta\log(\beta
J))$ is exponentially smaller for large $\beta J$. Therefore, the upper bound
$\mathcal{B}$ fails to guarantee peaked-size teleportation. This is consistent
with the fact that the correlation function $C_{Q}(t)$ in Eq. (91) at low
temperature is far from being a pure phase.
## Appendix B The Hayden-Preskill recovery protocol
In this Section we review the HPR protocol following Refs. Yoshida and Kitaev
(2017); Yoshida and Yao (2019) and derive its equivalence to the TW protocol
in the case of infinite temperature teleportation of a single qubit. This
single-qubit variant of the HPR protocol was experimentally implemented in
Ref. Landsman _et al._ (2019a), although an explicit derivation of its
quantum circuit was not provided.
There are two variants of the HPR protocol: a probabilistic variant, which
teleports successfully only with some finite probability, and a deterministic
variant, which uses an analog of Grover’s search algorithm and succeeds with
unit probability, but involves a more complex decoding operation. Both
protocols take the general form,
[Circuit diagram: fig-YK-either-state] (149)
shown for teleportation of a quantum state $\ket{\psi}$ (the generalization to
EPR teleportation is straightforward). We now outline the interpretation of
each aspect of the above protocol in the context of the Hayden-Preskill
thought experiment. For consistency with past literature, we have used
_different_ subsystem labels than those introduced in the main text, most notably,
subsystem D now denotes the coupled qubits, and subsystem C denotes its
complement. Subsystem B represents an eternal black hole that is maximally
entangled with its past Hawking radiation subsystem B’, as represented by a
dimension $d_{B}=d_{B}^{\prime}$ EPR pair between the two subsystems.
Subsystem A contains the initial state $\ket{\psi}$ of an observer Alice’s
diary. Upon falling into the black hole, the diary’s information is scrambled
by the unitary time-evolution $U$ acting on the left subsystem $l\equiv$ AB
$=$ CD. Far from destroying the information of Alice’s diary, scrambling by
$U$ in fact allows an outside observer Bob to decode the diary if he has
access to _any_ few qubits of new Hawking radiation D, along with the past
Hawking radiation B’ and an ancillary EPR pair between A’ and R’, where
$d_{A}^{\prime}=d_{A}$. This decoding relies on OTOCs between subsystems A and
D being minimal, a general feature of thermalizing time-evolution after the
scrambling time. The decoding begins by applying the conjugate time-evolution
$U^{*}$ to the right system $r\equiv$ A’B’ $=$ C’D’, followed by coupling the
black hole and its past radiation with $W=e^{i\pi P_{\text{EPR},D}}$, and
finally by time-evolving the right side under $U^{T}$ and a single-qubit $Y$
gate.
#### B.0.1 Probabilistic decoding: intuition
Although our main focus will be on the deterministic teleportation protocol,
we review the probabilistic protocol here for completeness, and as a
convenient platform to introduce the intuition connecting operator spreading
to the success of teleportation. The decoding operation of the probabilistic
HPR protocol consists of projection onto EPR pairs on subsystems D, D’:
[Circuit diagram: fig-YK-probabilistic-state] (151)
Perfect teleportation requires $d_{D}\geq d_{A}$, and succeeds with
probability $1/d_{A}^{2}$ when $U$ is maximally scrambling. The non-unity
success probability signifies that the decoding protocol becomes exponentially
more complex with the number of qubits to be teleported.
To provide intuition for the protocol’s success, we analyze the action of EPR
projection on the initial states $Q_{A,l}(t)\ket{\text{TFD}}$. We write
$Q_{A}(t)$ as a sum of Pauli strings $R$ on the entire system:
$Q_{A}(t)=\sum_{R}c_{R}(t)R.$ (152)
Denoting the EPR projector on subsystems D, D’ as $P_{\text{EPR},D}$ and
writing each Pauli string as a tensor product $R=R_{C}\otimes R_{D}$ of Paulis
on subsystems C and D, we have
$P_{\text{EPR},D}R_{l}\ket{\text{EPR}}=\delta_{R_{D},\mathbbm{1}}R_{l}\ket{\text{EPR}},$
(153)
since
$\bra{\text{EPR}_{D,D^{\prime}}}R_{D,l}\ket{\text{EPR}_{D,D^{\prime}}}=\tr_{D}(R_{D})/d_{D}=\delta_{R_{D},\mathbbm{1}}$.
Perfect teleportation is achieved when all input Pauli operators on subsystem
A have spread to subsystem D, such that every Pauli string $R$ composing
$Q_{A}(t)$ has non-identity support on subsystem D, for all non-identity
$Q_{A}$. In this situation, the EPR projector has eigenvalue 1 on the
thermofield double state and eigenvalue 0 in _all_ perturbed states:
$P_{\text{EPR},D}\ket{\text{EPR}}=\ket{\text{EPR}}\,\,\,\,,\,\,\,\,P_{\text{EPR},D}\,Q_{A,l}(t)\ket{\text{EPR}}=0.$
(154)
However, this is no different than projecting onto EPR pairs between
subsystems A and A’ before time-evolution by $U_{l}U^{*}_{r}$! This projection
would, of course, have an action
$P_{\text{EPR}}\ket{\text{EPR}}=\ket{\text{EPR}}\,\,\,\,,\,\,\,\,P_{\text{EPR}}\,Q_{A,l}\ket{\text{EPR}}=\frac{\tr(Q_{A})}{d_{A}}\ket{\text{EPR}}=0.$
(155)
(155)
Expressed diagrammatically, this equivalence is:
[Circuit diagram: fig-YK-probabilistic-state-equality] (157)
for all initial states $\psi$. However, performing EPR projection between
subsystems A, A’ before time-evolution is precisely the standard quantum
teleportation protocol, applied to subsystems A, A’, and R’. The scrambling
dynamics of $U$ allow one to perform this teleportation via coupling _any_
subsystem D of the system’s qubits.
#### B.0.2 Deterministic decoding
After scrambling, the probability of successful EPR projection on subsystem D,
$O(1/d_{A}^{2})$, is exponentially small in the size of subsystem A, the state
to be teleported. In contrast to standard teleportation, non-successful EPR
projection (i.e. projection onto a different maximally entangled state, not
$\ket{\text{EPR}_{D,D^{\prime}}}$) _cannot_ be corrected via an additional
decoding operation. This exponential decrease in success probability is
overcome in the deterministic HPR protocol, which uses an analog of Grover’s
search algorithm to search for an EPR pair between subsystems D, D’. The
protocol requires $O(d_{A})$ steps for completion, again exponential in the
number of qubits to be teleported (albeit with half the exponent of the
probabilistic decoding).
Grover’s search algorithm involves two operations: the first applies a minus
sign to the state one is searching for, and the second applies a minus sign to
the system’s initial state. We will search for an EPR pair on subsystem $D$,
so for the first step we apply $W_{D}\equiv 1-2P_{\text{EPR},D}=e^{i\pi
P_{\text{EPR},D}}$:
[Circuit diagram: fig-WD] (159)
In the second step, we flip the sign of the initial state (the time-evolved
EPR pair between A’ and the reference qubit R’) by applying
$\widetilde{W}_{A}\equiv U^{*}W_{A}U^{T}$:
[Circuit diagram: fig-WA] (161)
where $W_{A}=1-2P_{\text{EPR},A}$ acts on A’, R’ to apply a minus sign if the
two are in an EPR pair.
The entire Grover protocol is identical to the probabilistic protocol, but
with EPR measurement replaced by repeated applications of the two above steps
until the EPR pair is found. Displaying, for instance, only the first two
iterations:
[Circuit diagram: fig-YK-Grover-manysteps-state] (163)
After $O(d_{A})$ iterations, the state $\ket{\psi}$ is found on subsystem R’.
#### B.0.3 Single-qubit deterministic decoding
Two important simplifications arise in the deterministic HPR protocol in the
case of single-qubit teleportation, $d_{A}=2$. The first is that the Grover
operator $W_{A}$ is equal to a SWAP operator composed with single-qubit $Y$
operations. To see this, we expand $W_{A}$ in terms of Pauli operators:
$\begin{split}W_{A}&=1-2P_{\text{EPR},A}\\\
&=1-\frac{2}{d_{A}^{2}}\sum_{P_{A}}P_{A,l}\,P^{*}_{A,r}\\\
&=\frac{1}{2}-\frac{1}{2}X_{l}X_{r}+\frac{1}{2}Y_{l}Y_{r}-\frac{1}{2}Z_{l}Z_{r}\\\
&=\frac{1}{2}Y_{l}\left[1+X_{l}X_{r}+Y_{l}Y_{r}+Z_{l}Z_{r}\right]Y_{l}\\\
&=Y_{l}\,(\text{SWAP})\,Y_{l},\\\ \end{split}$ (164)
where we used the Pauli decomposition for the swap operator between two
$d_{A}$-dimensional systems:
$\text{SWAP}=\frac{1}{d_{A}}\sum_{P_{A}}P_{A,l}P^{\dagger}_{A,r}.$ (165)
Expressed graphically, we have
[Circuit diagram: fig-WA-SWAP] (167)
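Equation (164) can also be verified by direct matrix multiplication. A self-contained sketch with explicit $4\times 4$ matrices (plain Python, no external libraries):

```python
# Verify W_A = 1 - 2 P_EPR = Y_l * SWAP * Y_l for a single qubit, Eq. (164).
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def kron(A, B):
    # Tensor product of two 2x2 matrices -> 4x4 (left factor = subsystem l)
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def conj(A):
    return [[a.conjugate() for a in row] for row in A]

# P_EPR = (1/d_A^2) sum_P  P_l (x) P*_r,  with d_A = 2
paulis = [I2, X, Y, Z]
terms = [kron(P, conj(P)) for P in paulis]
P_epr = [[sum(T[i][j] for T in terms) / 4 for j in range(4)] for i in range(4)]

W_A = [[(i == j) - 2 * P_epr[i][j] for j in range(4)] for i in range(4)]

SWAP = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
Y_l = kron(Y, I2)
rhs = matmul(Y_l, matmul(SWAP, Y_l))

assert all(abs(W_A[i][j] - rhs[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```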
The second simplification is that Grover’s search for an EPR pair D, D’
succeeds after only one step; this is a general result for Grover’s search in
a $d_{D}^{2}=4$-dimensional database Nielsen and Chuang (2002). It implies
that the Grover protocol can teleport one qubit through the circuit:
[Circuit diagram: fig-YK-Grover-onestep-state] (169)
If we only care about the fidelity of the teleported state, we can neglect the
final application of $U^{*}$. Performing the SWAP gate explicitly, and
neglecting the action of the final $Y$ operator on R’, we have:
[Circuit diagram: fig-YK-Grover-onestep-state-simplified-WD] (171)
This exact circuit has been performed in a trapped-ion experiment Landsman _et
al._ (2019a). We now make a small cosmetic adjustment, and move the reference
qubit R’ from the far right to the far left,
[Circuit diagram: fig-YK-Grover-onestep-state-Rtoleft] (173)
Sliding $U^{*}$ to the left side using Eq. (11), we have:
[Circuit diagram: fig-YK-Grover-onestep-state-slide] (175)
This is the same circuit appearing in the teleportation protocol of Refs. Brown
_et al._ (2019); Nezami _et al._ (2021), modulo the precise form of the
coupling. In the case of EPR teleportation, we would instead have
[Circuit diagram: fig-YK-Grover-onestep-EPR-slide] (177)
where subsystems R’ and A’ are in an EPR pair when teleportation is
successful. This is the circuit appearing in Ref. Gao and Jafferis (2019),
modulo the form of the coupling as well as the $Y$ decoding operation. The
lack of a $Y$ decoding operation for fermionic teleportation is discussed in
Appendix F.
## Appendix C State teleportation fidelity
We begin by quickly demonstrating the relation between the correlator of the
time-evolved operator $\outerproduct{\psi}{\phi}$ and the success of state
teleportation. We do so by inserting the resolution of the identity
$\frac{1}{d_{A}}\sum_{\ket{\phi}}\outerproduct{\phi}{\phi}=\mathbbm{1}$ into
the ancillary qubit leg of the diagram for the state teleportation fidelity.
We find:
[Circuit diagram: fig-state-fidelity] (179)
Plugging Eq. (17) into this diagram clearly provides unit teleportation
fidelity. When teleportation is successful, each of the $d_{A}$ terms of the
sum must succeed individually, so the right input state $\ket{\phi}$ will not
affect the success of the teleportation.
As with EPR distillation, we can relate the state teleportation fidelity to
correlators of Pauli operators by decomposing the SWAP operator.
Diagrammatically,
[Circuit diagram: fig-state-fidelity-swap] (181)
and in equation form,
$F_{\psi}=\frac{1}{d_{A}^{2}}\sum_{P_{1},P_{2}}\bra{\psi}P_{2}^{\dagger}P_{1}^{\dagger}\ket{\psi}\cdot\bra{\text{TFD}}P_{2,l}(t)e^{-igV}\outerproduct{\psi}{\psi}_{r}\\!(-t)\,\,e^{igV}P_{1,l}(t)\ket{\text{TFD}}.$
(182)
When the correlators are maximal with phases $e^{i\theta_{P}}$, we can
simplify this expression as
$\begin{split}F_{\psi}&\approx\frac{1}{d_{A}^{2}}\sum_{P_{1},P_{2}}\bra{\psi}P_{2}^{\dagger}P_{1}^{\dagger}\ket{\psi}\cdot\bra{\text{TFD}}P_{2,r}(-t)\outerproduct{\psi}{\psi}_{r}\\!(-t)\,\,P_{1,r}(-t)\ket{\text{TFD}}\\\
&=\frac{1}{d_{A}^{2}}\sum_{P_{1},P_{2}}e^{i(\theta_{P_{1}}-\theta_{P_{2}})}\cdot\bra{\psi}P_{2}^{\dagger}P_{1}^{\dagger}\ket{\psi}\cdot\tr\bigg(\rho\,P_{2}\outerproduct{\psi}{\psi}P_{1}\bigg)\\\
&=\frac{1}{d_{A}^{2}}\sum_{P_{1},P_{2}}e^{i(\theta_{P_{1}}-\theta_{P_{2}})}\cdot\tr\bigg(P_{1}^{\dagger}\outerproduct{\psi}{\psi}P_{2}^{\dagger}\bigg)\cdot\tr\bigg(\rho\,P_{2}\outerproduct{\psi}{\psi}P_{1}\bigg).\\\
\end{split}$ (183)
As expected, when the phases $e^{i\theta_{P}}$ are the same for all operators,
this gives unit fidelity:
$\begin{split}F_{\psi}&=\frac{1}{d_{A}^{2}}\sum_{P_{1},P_{2}}\tr(P_{1}^{\dagger}\outerproduct{\psi}{\psi}P_{2}^{\dagger})\cdot\tr(\rho\,P_{2}\outerproduct{\psi}{\psi}P_{1})\\\
&=\frac{1}{d_{A}}\sum_{P_{1}}\tr(P_{1}^{\dagger}\outerproduct{\psi}{\psi}\outerproduct{\psi}{\psi}P_{1}\rho)\\\
&=\tr(\outerproduct{\psi}{\psi})\tr(\rho)\\\ &=1,\\\ \end{split}$ (184)
using properties of Pauli operators as a 1-design. Differing phases
$e^{i\theta_{P}}$ cause the terms in the sum to interfere with each other,
giving lower fidelity. At finite temperature, the fidelity of peaked-size
teleportation is again limited. For instance, if $\ket{\psi}$ is a single-
qubit eigenstate of the Pauli $Z$ operator, we have:
$\begin{split}F_{\psi}&=\frac{1}{2^{2}}\sum_{P_{1},P_{2}}\bra{\psi}P_{2}P_{1}\ket{\psi}\cdot\bra{\text{TFD}}P_{2,l}(t)e^{-igV}[Y\outerproduct{\psi}{\psi}\\!Y]_{r}(-t)\,\,e^{igV}P_{1,l}(t)\ket{\text{TFD}}\\\
&=\frac{1}{2^{2}}\sum_{P_{1},P_{2}}\bra{\psi}P_{2}P_{1}\ket{\psi}\cdot\tr(\outerproduct{\psi}{\psi}\,\rho^{1/2}P_{2}P_{1}\rho^{1/2})\\\
&=\sum_{P}\bra{\psi}P\ket{\psi}\cdot\tr(\outerproduct{\psi}{\psi}\,\rho^{1/2}P\rho^{1/2})\\\
&=2\tr(\outerproduct{\psi}{\psi}\,\rho^{1/2}\outerproduct{\psi}{\psi}\rho^{1/2})\\\
&\approx\frac{1}{2}\tr((\mathbbm{1}+Z)\,\rho^{1/2}(\mathbbm{1}+Z)\rho^{1/2})\\\
&\approx\frac{1}{2}+\frac{1}{2}G(t^{\prime}-t+i\beta/2)+\expectationvalue{Z}_{\beta},\\\
\end{split}$ (185)
where $\expectationvalue{Z}_{\beta}=\tr(Z\rho)$, which averages to zero for
different initial states $\ket{\psi}$.
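The 1-design step in Eq. (184) rests on completeness of the Pauli basis, $\frac{1}{d_{A}}\sum_{P}\tr(P^{\dagger}M)P=M$; a quick single-qubit numerical check of this identity (with an arbitrary random operator $M$):

```python
import random

random.seed(1)

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]
paulis = [I2, X, Y, Z]

def tr_dag_prod(A, B):
    # tr(A^dagger B) for 2x2 matrices
    return sum(A[k][i].conjugate() * B[k][i] for i in range(2) for k in range(2))

# Random single-qubit operator M
M = [[complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
     for _ in range(2)]

# Reconstruct M from its Pauli coefficients: (1/d) sum_P tr(P^dag M) P, d = 2
rec = [[sum(tr_dag_prod(P, M) * P[i][j] for P in paulis) / 2
        for j in range(2)] for i in range(2)]

assert all(abs(rec[i][j] - M[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```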
### C.1 Rydberg simulations
For numerical or experimental simulations, it is often convenient to measure
teleportation state fidelity rather than EPR fidelity to avoid the inclusion
of ancilla qubits. We followed this approach for the Rydberg numerics
presented in Fig. 6. In particular, we utilized the one-sided protocol shown
in Fig. 5(b) except with EPR measurement replaced by measurement of a two-
qubit state $\ket{\psi}\otimes\ket{\psi^{*}}$. To compute the average state
fidelity, we simulated the protocol for all single-qubit states in a 2-design,
i.e. $\ket{\psi}\in\\{\ket{0},\ket{1},\ket{0}\pm\ket{1},\ket{0}\pm
i\ket{1}\\}$ (up to normalization). In addition, we sampled uniformly over $\sim 100$ initial states
in the computational basis to account for the $K$ ‘measured’ and $N-K$
‘unmeasured’ qubits. Time evolution was performed using Krylov subspace
methods, which are amenable to parallelization and generally more efficient
than exact diagonalization.
Additional results showing finite-size effects on the fidelity are shown in
Fig. 8. We observe that the fidelity generally increases for larger systems
but decreases for larger values of $g$. This is consistent with our error
analysis, where we showed that the error scales as $g^{2}\delta\mathcal{S}^{2}/N^{2}$. In
particular, at late times, we expect the size distribution to approach a
binomial and the error to scale as $g^{2}/N$. Finally, let us comment on the
crossing of the curves observed in Fig. 8(a). At early times, smaller systems
acquire a larger phase because the phase scales as $g\mathcal{S}/N$, where $g$
and $\mathcal{S}$ are independent of size; however, at late times, larger
systems saturate to a higher fidelity due to their narrower size
distributions, i.e. $\delta\mathcal{S}/\mathcal{S}\sim 1/\sqrt{N}$. Thus, the
curves intersect near the scrambling time as they transition between these two
regimes.
Figure 8: Finite-size scaling of the Rydberg simulations (a) as a function of
time with $g=\pi$, and (b) as a function of coupling strength $g$ with $t=12$.
The system was evolved under the Rydberg Hamiltonian, Eq. (109), with the same
system parameters as in Fig. 6. We observe that the maximum fidelity increases
for larger systems, owing to the narrowing of the size distribution.
## Appendix D Random unitary circuit numerics
Here we provide additional details and numerical data from our random unitary
circuit simulations.
### D.1 Algorithm
As discussed in the main text, our approach relies on the fact that the EPR
fidelity is simulable using a unitary 3-design. Thus, we can replace each
Haar-random gate in the circuit with a random _Clifford_ gate, whose action on
a Pauli operator can be efficiently simulated with a classical computer. More
specifically, our algorithm consists of the following three ingredients.
First, following a standard approach, we represent an $n$-qubit Pauli
operator as a binary string $v=x_{1}x_{2}\cdots x_{n}z_{1}z_{2}\cdots z_{n}$
of length $2n$:
$Q=\prod_{i=1}^{n}Q_{i}(x_{i},z_{i})$ (186)
where $Q_{i}(0,0)=I_{i}$, $Q_{i}(1,0)=X_{i}$, $Q_{i}(0,1)=Z_{i}$, and
$Q_{i}(1,1)=Y_{i}$ denote individual Pauli operators within the Pauli string.
Note that, for our purposes, the overall phase of $Q$ is irrelevant.
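For concreteness, the binary representation above can be sketched as follows (a minimal illustration with hypothetical helper names, ignoring the overall phase as noted):

```python
# Map each single-qubit Pauli to its (x, z) bit pair, per Eq. (186).
PAULI_TO_BITS = {"I": (0, 0), "X": (1, 0), "Z": (0, 1), "Y": (1, 1)}
BITS_TO_PAULI = {v: k for k, v in PAULI_TO_BITS.items()}

def encode(pauli_string):
    """'XIZY' -> (x bits, z bits), each a list of length n."""
    x = [PAULI_TO_BITS[q][0] for q in pauli_string]
    z = [PAULI_TO_BITS[q][1] for q in pauli_string]
    return x, z

def decode(x, z):
    return "".join(BITS_TO_PAULI[(xi, zi)] for xi, zi in zip(x, z))

def size(x, z):
    """Operator size S[Q]: number of non-identity sites."""
    return sum(1 for xi, zi in zip(x, z) if (xi, zi) != (0, 0))

x, z = encode("XIZY")
assert decode(x, z) == "XIZY"
assert size(x, z) == 3
```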
Second, we consider circuits composed of random 2-qubit Clifford unitaries,
whose layout depends on the dimension. Each of the 2-qubit unitaries is
sampled uniformly from the set of 2-qubit Clifford unitaries. While efficient
procedures exist to perform this sampling directly, in practice it is
sufficient to pre-compute and enumerate the entire 2-qubit set (consisting of
11520 unitaries). Acting with a 2-qubit unitary on a Pauli operator corresponds to
applying a map to the relevant components of $v$, e.g. a unitary with support
on the $j$th and $k$th qubits updates the values of
$(x_{j},z_{j},x_{k},z_{k})$.
Third, for a given circuit $U$, we calculate the EPR fidelity using [Eq.
(46)]:
$F_{\text{EPR}}=\bigg{|}\frac{1}{d_{A}^{2}}\sum_{Q_{A}}e^{i\theta_{Q_{A}}}\bigg{|}^{2}$ (187)
where
$\theta_{Q_{A}}=g\mathcal{S}_{K}[UQ_{A}U^{\dagger}]/K+\pi\mathcal{S}[Q_{A}].$
(188)
This amounts to determining the size of Pauli operators after evolving under
the Clifford circuit. When $A$ consists of a few qubits, we iterate over all
Pauli operators $Q_{A}$ explicitly; otherwise we compute the sum for $\sim
100$ randomly selected operators. To compute the average EPR fidelity, we
sample $\sim 100$ realizations of $U$.
The time complexity of computing the EPR fidelity scales linearly with the
number of 2-qubit gates in the circuit and does not otherwise depend on the
number of qubits $n$. This is in contrast to most other observables in the
context of Clifford circuits, which require one to compute the evolution of
$n$ distinct operators (the stabilizer generators). It is therefore feasible
to simulate individual realizations of up to $10^{8}$ qubits within a day on
standard single-core processors. Moreover, upon determining the distribution
of sizes for a particular operator in a particular circuit, we can compute the
teleportation fidelity for arbitrary values of $g$ with no additional
computational cost.
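The three ingredients above can be sketched in a minimal Python simulation for a 1D brickwork circuit. Two simplifications relative to the algorithm described in the text (so this is an illustrative sketch, not the paper's implementation): phases are dropped entirely, which is harmless here, and rather than sampling uniformly from the 11520 two-qubit Cliffords we compose short random words in the generators H, S, and CNOT, which is not exactly uniform but illustrates the bookkeeping:

```python
import cmath
import math
import random

def size_K(x, z, K):
    """K-size S_K[Q]: number of non-identity sites among the first K qubits."""
    return sum((x[i], z[i]) != (0, 0) for i in range(K))

# Phase-free ("symplectic") conjugation rules for Clifford generators:
# H swaps X and Z; S maps X -> Y; CNOT spreads X from control to target
# and Z from target to control.
def H(x, z, i):
    x[i], z[i] = z[i], x[i]

def S(x, z, i):
    z[i] ^= x[i]

def CNOT(x, z, c, t):
    x[t] ^= x[c]
    z[c] ^= z[t]

def random_two_qubit_gate(x, z, i, j):
    # Crude stand-in for a uniformly random 2-qubit Clifford: a short
    # random word in the generators (the paper enumerates all 11520).
    for _ in range(random.randint(4, 12)):
        g = random.randrange(4)
        if g == 0:
            H(x, z, random.choice((i, j)))
        elif g == 1:
            S(x, z, random.choice((i, j)))
        elif g == 2:
            CNOT(x, z, i, j)
        else:
            CNOT(x, z, j, i)

def brickwork(x, z, n, depth):
    for layer in range(depth):
        for i in range(layer % 2, n - 1, 2):
            random_two_qubit_gate(x, z, i, i + 1)

def f_epr(n, K, g, depth, seed=42):
    """EPR fidelity of Eqs. (187)-(188) for a single-qubit region A."""
    total = 0j
    for xa, za, s_QA in [(0, 0, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]:
        x, z = [0] * n, [0] * n
        x[0], z[0] = xa, za
        random.seed(seed)          # same circuit realization for each Q_A
        brickwork(x, z, n, depth)
        theta = g * size_K(x, z, K) / K + math.pi * s_QA
        total += cmath.exp(1j * theta)
    return abs(total / 4) ** 2     # 1/d_A^2 = 1/4 for a single qubit

# Sanity checks: at g = 0 the four phases are e^{i pi S[Q_A]} = +1, -1, -1, -1,
# giving the no-coupling EPR fidelity of exactly 1/4, and the fidelity is
# always a probability.
assert abs(f_epr(n=12, K=6, g=0.0, depth=8) - 0.25) < 1e-12
assert 0.0 <= f_epr(n=12, K=6, g=math.pi, depth=8) <= 1.0 + 1e-9
```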
Figure 9: Extended data for average operator size and size width in 1D (a)
and 2D (b) RUCs. The average size grows ballistically $\sim t^{d}$ (dashed
line) and saturates at $t_{\textrm{scr}}\sim L\sim N^{1/d}$. The size width
matches the predictions from the KPZ universality class (dashed lines) and
allows us to extract the prefactors in Eqs. (60) and (63). In particular, we
determine $\alpha_{\textrm{bulk}}$ and $\beta_{\textrm{bulk}}$ from the
saturation values (light gray), and $\alpha_{\textrm{boundary}}$ and
$\beta_{\textrm{boundary}}$ from the initial growth rate (dark gray).
### D.2 Extended data for 1D and 2D RUCs
_Size distribution—_ The average size and size width for time-evolved
operators in 1D and 2D are shown in Fig. 9. These results match the functional
forms predicted by the KPZ universality class [Eqs. (60) and (63)] and allow
us to extract
$\\{\alpha_{\textrm{bulk}},\alpha_{\textrm{boundary}},\beta_{\textrm{bulk}},\beta_{\textrm{boundary}}\\}=\\{0.47,0.18,1.2,4.5\\}$.
_Multiple qubits—_ In Fig. 10, we show the many-body fidelity for teleporting
multiple qubits that are evenly spaced in 1D. At early times ($t<1300$), the
light cones of the time-evolved operators do not overlap and the system
factorizes into $n$ independent channels, allowing for high fidelity multi-
qubit teleportation. At later times, the sizes of the time-evolved operators
begin to overlap and, as anticipated from our arguments on size addition, the
teleportation fidelity becomes highly suppressed. We note, however, one
exception: when only adjacent light cones have overlapped (i.e.
$1300<t<2600$), we observe order-one fidelities for certain values of $g$.
This occurs whenever the multi-qubit size is a multiple of $2\pi K/g$ off from
the size addition value, e.g.
$\mathcal{S}[Q_{1}(t)Q_{2}(t)]=\mathcal{S}[Q_{1}(t)]+\mathcal{S}[Q_{2}(t)]-2\pi
m(K/g)$, where $m$ is an integer. Therefore, strictly speaking, it is
possible to satisfy the conditions for many-body teleportation without size
addition; nevertheless, it is a non-generic effect that requires finely tuned
values of $g$ and evenly spaced input qubits.
Figure 10: Teleporting multiple qubits ($n=5$) in 1D. (a) Teleportation is
achieved with high fidelity for $t\leq 1300$; this corresponds to the regime
in which the light cones of the operators are non-overlapping, as indicated by
the average operator size (b). Surprisingly, order-one fidelity can also occur
for $1300<t<2600$ (when adjacent light cones have overlapped) but only for
certain values of $g$.
### D.3 Channel capacity for 0D RUCs
The channel capacity corresponds to the maximum number of qubits that can be
teleported with a fixed _fidelity per qubit_. One way to estimate this would
be to take the $n$-th root of the many-body EPR fidelity; however, in
practice, this approach is numerically unstable for large $n$. Instead, we
consider a modified protocol where one only measures the fidelity of one of
the $n$ teleported qubits, and neglects the state of the unmeasured qubits. At
infinite temperature, this fidelity is computed by:
$\displaystyle F^{m}_{\textrm{EPR}}$
$\displaystyle=\frac{1}{d_{A}^{4}}\sum_{Q_{1},Q_{2}}\bra{\text{TFD}}Q_{2,l}^{\dagger}(t)\,e^{-igV}\,\tilde{Q}_{2,r}^{m}(-t)\tilde{Q}_{1,r}^{m\dagger}(-t)\,e^{igV}\,Q_{1,l}(t)\ket{\text{TFD}}\,\cdot\tr(Q_{1}^{u\dagger}Q_{2}^{u})$
$\displaystyle=\frac{1}{d_{m}^{4}d_{u}^{2}}\sum_{Q_{1},Q_{2}}e^{i(\theta_{Q_{1}}-\theta_{Q_{2}})}\delta_{Q_{1}^{u},Q_{2}^{u}}$
(189)
where $Q=Q^{m}\otimes Q^{u}$ and $d_{A}=d_{m}d_{u}$, such that $Q^{m}$ acts on
the measured qubit, and $Q^{u}$ acts on the unmeasured qubits. This can be
derived diagrammatically via
$\displaystyle\centering{\begin{array}[]{c}\vbox{\vskip 1.52998pt\hbox{\hskip
1.01999pt\includegraphics[scale={.51}]{fig-state-fidelity-swap-EPR-
singlequbit}}}\end{array}}\centering\@add@centering\@add@centering$ (191)
We see that computing this fidelity is nearly identical to computing the full
many-body fidelity, except we sample only over pairs of Pauli operators
$(Q_{1},Q_{2})$ which are identical on every qubit except for the first.
Our procedure for determining the channel capacity is shown in Fig. 11. First,
we maximize the fidelity with respect to the evolution time $t$ and the
coupling strength $g$: the maximum fidelity occurs during the first peak of
its time profile, and is non-monotonic with respect to $g$. Next, we vary the
number of input qubits $n$ (for a fixed number of coupled qubits $K$) and
calculate the point at which the infidelity crosses the threshold value
$1-F=0.07$. Our results from this procedure demonstrate that the channel capacity
follows a linear trend in $K$ across two orders of magnitude, in agreement
with our analytical predictions.
Figure 11: Procedure for determining the channel capacity in 0D RUCs. (a-b)
For each $n$ and $K$, we compute the optimal fidelity as a function of (a) the
evolution time and (b) coupling strength $g$. Typical data are shown for
$n=38$ and $K=9000$. (c) By fitting the optimal fidelity as a function of $n$
with a linear fit in log space, we determine the maximum number of qubits that
can be teleported within a threshold fidelity, i.e. $1-F\leq 0.07$ (dashed
line). The intercepts approximately collapse with respect to $n/K$, indicating
that the channel capacity is linear in $K$.
## Appendix E Random circuit calculations
Here we provide more detailed calculations of the size overlap and $K$-size
distribution of random Pauli operators of a fixed size. The former is relevant
to 0D RUCs, as the vanishingly small overlap of random Pauli strings with size
much less than the system size underlies the circuit’s ability to teleport
multiple qubits at intermediate times. The latter is applicable to all systems
when the $K$ coupled qubits are chosen randomly, and quantifies the width
introduced to the $K$-size by this random sampling. In the appropriate limits,
these calculations reproduce the intuitive binomial scalings we argued for in
the main text.
### E.1 Distribution of the overlap of two random Pauli strings
Our primary result is a calculation of the probability distribution of the
overlap $p$ of two randomly sampled Pauli strings of fixed length
$R_{1},R_{2}$, in a system of $N$ qubits. This probability is computed exactly
from various factorials:
$P[p]=\frac{C^{N}_{p}C^{N-p}_{R_{1}-p}C^{N-R_{1}}_{R_{2}-p}}{C^{N}_{R_{1}}C^{N}_{R_{2}}}=\frac{1}{p!}\frac{R_{1}!}{(R_{1}-p)!}\frac{R_{2}!}{(R_{2}-p)!}\frac{(N-R_{1})!(N-R_{2})!}{N!(N-R_{1}-R_{2}+p)!}$
(192)
The numerator computes the number of distinct configurations with Pauli
strings of size $R_{1}$, $R_{2}$ and overlap $p$, while the denominator
computes the number of distinct Pauli strings of size $R_{1}$, $R_{2}$
regardless of the overlap. We are interested in the case where all variables
are extensive (scale with $N$), but $N\gg R_{1},R_{2}\gg p$. We will proceed
by applying Stirling’s approximation to each term above, which holds as long
as all quantities are large compared to 1. For instance, for dummy variables
$n,k$, we have:
$\frac{n!}{(n-k)!}\approx\sqrt{\frac{n}{n-k}}\frac{n^{n}}{(n-k)^{n-k}}e^{-k}=n^{k}\left(1-\frac{k}{n}\right)^{-n+k-1/2}e^{-k}$
(193)
or, taking the logarithm,
$\log\frac{n!}{(n-k)!}\approx
k\log(n)-\left(n-k+\frac{1}{2}\right)\log(1-\frac{k}{n})-k.$ (194)
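Equation (194) can be sanity-checked numerically against the exact log-factorial ratio via `math.lgamma`; the values of $n$ and $k$ below are chosen purely for illustration:

```python
import math

def log_ratio_exact(n, k):
    """log( n! / (n-k)! ) via log-gamma, accurate to machine precision."""
    return math.lgamma(n + 1) - math.lgamma(n - k + 1)

def log_ratio_stirling(n, k):
    """Eq. (194): k log n - (n - k + 1/2) log(1 - k/n) - k."""
    return k * math.log(n) - (n - k + 0.5) * math.log1p(-k / n) - k

n, k = 10**6, 10**3
exact = log_ratio_exact(n, k)
approx = log_ratio_stirling(n, k)

# Both values are ~1.4e4; the absolute error of the Stirling-based
# approximation is of order k/(12 n^2), i.e. negligible here.
assert abs(exact - approx) < 1e-6
```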
We will apply this to a few pairs of factorials in our original expression for
$P[p]$. For convenience, we only keep track of the $p$-dependence of the
probability, and neglect overall constants which serve to normalize the
distribution. Anticipating that the average $p$ will be $R_{1}R_{2}/N$, we
expand $p=R_{1}R_{2}/N+\delta$ and work to second order in $\delta$. At the
end we will show that this is justified. We have:
$\displaystyle\log\frac{R_{1}!}{(R_{1}-p)!}$ $\displaystyle\approx
p\log(R_{1})-\left(R_{1}-\frac{R_{1}R_{2}}{N}+\frac{1}{2}\right)\log(1-\frac{R_{2}}{N}-\frac{\delta}{R_{1}})-\frac{R_{1}R_{2}}{N}-\delta$
(195)
Expanding the logarithm using
$\log(1-y-x)\approx\log(1-y)-\frac{x}{1-y}-\frac{1}{2}\frac{x^{2}}{(1-y)^{2}}+O(x^{3})$ (196)
we have
$\displaystyle\log\frac{R_{1}!}{(R_{1}-p)!}$ $\displaystyle\approx
p\log(R_{1})-\left(R_{1}-\frac{R_{1}R_{2}}{N}-\delta+\frac{1}{2}\right)\left[\log(1-\frac{R_{2}}{N})-\frac{\delta/R_{1}}{1-R_{2}/N}-\frac{(\delta/R_{1})^{2}}{(1-R_{2}/N)^{2}}\right]-\delta+\ldots$
(197) $\displaystyle\approx
p\log(R_{1})+\delta\log(1-\frac{R_{2}}{N})-\frac{1}{2}\delta^{2}\left[\frac{1}{R_{1}}\frac{1}{1-R_{2}/N}\right]+O(\delta/R)+O(\delta^{3}/R^{2})+\ldots.$
This gives
$\displaystyle\log\frac{R_{1}!}{(R_{1}-p)!}\frac{R_{2}!}{(R_{2}-p)!}$
$\displaystyle\approx
p\log(R_{1}R_{2})+\delta\log\left((1-\frac{R_{2}}{N})(1-\frac{R_{1}}{N})\right)-\frac{1}{2}\delta^{2}\left[\frac{1}{R_{1}}\frac{1}{1-R_{2}/N}+\frac{1}{R_{2}}\frac{1}{1-R_{1}/N}\right]$
(198) $\displaystyle\,\,\,\,\,\,\,+O(\delta/R)+O(\delta^{3}/R^{2})+\ldots.$
The last piece is
$\displaystyle\log\frac{N!}{(N-R_{1}-R_{2}+p)!}$
$\displaystyle\approx-p\log(N)-\left(N-R_{1}-R_{2}+\frac{R_{1}R_{2}}{N}+\delta+\frac{1}{2}\right)\log(1-\frac{R_{1}}{N}-\frac{R_{2}}{N}+\frac{R_{1}R_{2}}{N^{2}}+\frac{\delta}{N})+\delta+\ldots$
(199)
$\displaystyle\approx-p\log(N)-\left(N-R_{1}-R_{2}+\frac{R_{1}R_{2}}{N}+\delta+\frac{1}{2}\right)\times$
$\displaystyle\,\,\,\,\,\,\left[\log\left((1-\frac{R_{1}}{N})(1-\frac{R_{2}}{N})\right)+\frac{\delta/N}{(1-\frac{R_{1}}{N})(1-\frac{R_{2}}{N})}-\frac{\delta^{2}/N^{2}}{(1-\frac{R_{1}}{N})^{2}(1-\frac{R_{2}}{N})^{2}}\right]+\delta+\ldots$
$\displaystyle\approx-p\log(N)-\delta\left[\log\left((1-\frac{R_{1}}{N})(1-\frac{R_{2}}{N})\right)\right]-\frac{1}{2}\delta^{2}\left[\frac{1}{N}\frac{1}{(1-R_{1}/N)(1-R_{2}/N)}\right]+$
$\displaystyle\,\,\,\,\,\,\,\,\,O(\delta/N)+O(\delta^{3}/N^{2})$
Combining these together, we have
$\begin{split}\log P[p]\approx&-\log(p!)+p\log(\frac{R_{1}R_{2}}{N})-\\\
&\frac{1}{2}\delta^{2}\left[\frac{1}{R_{1}}\frac{1}{1-R_{2}/N}+\frac{1}{R_{2}}\frac{1}{1-R_{1}/N}+\frac{1}{N}\frac{1}{(1-R_{1}/N)(1-R_{2}/N)}\right]+O(\delta/R)+O(\delta^{3}/R^{2}).\\\
\end{split}$ (200)
Exponentiating,
$P[p]\approx\frac{1}{p!}\left(\frac{R_{1}R_{2}}{N}\right)^{p}\exp\left(-\frac{1}{2}\left(p-\frac{R_{1}R_{2}}{N}\right)^{2}\left[\frac{R_{1}R_{2}}{R_{1}+R_{2}}+O(1/N)\right]^{-1}+O(\delta/R)+O(\delta^{3}/R^{2})\right).$
(201)
The first two terms are precisely a Poisson distribution, which has mean
$R_{1}R_{2}/N$ and width $\sqrt{R_{1}R_{2}/N}$. The exponential is a Gaussian
with the same mean $R_{1}R_{2}/N$, and a larger width
$\sqrt{R_{1}R_{2}/(R_{1}+R_{2})}$. The smaller width determines the width of
the product of the two functions, so we conclude:
$\expectationvalue{p}=\frac{R_{1}R_{2}}{N},\,\,\,\,\,\,\,\,\,\expectationvalue{p^{2}}-\expectationvalue{p}^{2}\approx\frac{R_{1}R_{2}}{N}.$
(202)
This is what we would expect for drawing $p$ random sites out of $N$, where
each site has independent probability $R_{i}/N$ of being in either Pauli
string. The width is subextensive, $\delta\sim\varepsilon\sqrt{N}$, justifying
our neglect of the higher-order terms along the way.
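Since Eq. (192) is a hypergeometric distribution, the mean and variance quoted in Eq. (202) can be checked with exact rational arithmetic; the values of $N$, $R_{1}$, $R_{2}$ below are chosen only for illustration:

```python
from fractions import Fraction
from math import comb

def P(p, N, R1, R2):
    """Eq. (192), evaluated exactly as a rational number."""
    num = comb(N, p) * comb(N - p, R1 - p) * comb(N - R1, R2 - p)
    den = comb(N, R1) * comb(N, R2)
    return Fraction(num, den)

N, R1, R2 = 10_000, 100, 100
probs = [P(p, N, R1, R2) for p in range(min(R1, R2) + 1)]
mean = sum(p * w for p, w in enumerate(probs))
var = sum(p * p * w for p, w in enumerate(probs)) - mean ** 2

assert sum(probs) == 1                        # normalization (Vandermonde)
assert mean == Fraction(R1 * R2, N)           # <p> = R1 R2 / N, exactly
assert abs(float(var) - R1 * R2 / N) < 0.05   # variance ~ R1 R2 / N
```

The mean is exactly $R_{1}R_{2}/N$, while the variance only approaches $R_{1}R_{2}/N$ in the regime $N\gg R_{1},R_{2}$, consistent with the approximations made above.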
### E.2 Distribution of the $K$-size
Here we consider the distribution of the $K$-size for a random Pauli string of
fixed total size $\mathcal{S}$. This is in fact an identical problem to
calculating the overlap: the $K$-size is the overlap of the $K$ coupled qubits
with the $\mathcal{S}$ qubits acted on by the operator of interest. We should
just replace $R_{1}\rightarrow K$, $R_{2}\rightarrow\mathcal{S}$,
$p\rightarrow n$ above, where $n$ is the $K$-size. This is confirmed by
comparing the factorial expressions:
$\displaystyle P[n]$
$\displaystyle=\frac{C^{\mathcal{S}}_{n}C^{N-\mathcal{S}}_{K-n}}{C^{N}_{K}}=\frac{1}{n!}\frac{\mathcal{S}!}{(\mathcal{S}-n)!}\frac{K!}{(K-n)!}\frac{(N-\mathcal{S})!(N-K)!}{N!(N-\mathcal{S}-K+n)!}$
(203)
where the numerator computes the number of distinct configurations with $n$
qubits overlapping the Pauli operator support of size $\mathcal{S}$ and $K-n$
qubits not overlapping, and the denominator computes the number of distinct
configurations of the $K$ coupled qubits. There are two regimes of interest:
when $K$ and $\mathcal{S}$ are both extensive, and when $\mathcal{S}$ is
extensive but $K$ is not. The former provides a more accurate measure of the
full operator size ($K\rightarrow N$), while the latter is relevant for
probing the channel capacity. Both regimes share the same mean $K$-size
$\mathcal{S}_{K}$ and $K$-size width $\delta\mathcal{S}_{K}$:
$\mathcal{S}_{K}\equiv\expectationvalue{n}=\frac{\mathcal{S}K}{N},\,\,\,\,\,\,\,\,\,\delta\mathcal{S}_{K}^{2}\equiv\expectationvalue{n^{2}}-\expectationvalue{n}^{2}\approx\frac{\mathcal{S}K}{N}=\mathcal{S}_{K}.$
(204)
This matches our prediction in the main text, which was based on a simple
scenario of picking $K$ sites, each with a $\mathcal{S}/N$ chance of being in
the support of the Pauli operator.
## Appendix F Teleportation of fermions
Here we generalize the teleportation protocol to Majorana fermion systems, as
discussed in the main text for the SYK model. This involves a few small
modifications, stemming from ($i$) a different definition of fermionic EPR
(FEPR) pairs, ($ii$) a different relation between FEPR projector and the SWAP
gate, and ($iii$) modified expectations for operator spreading in Majorana
systems such as SYK.
Consider two complex fermions $\chi_{l}$ and $\chi_{r}$, decomposed into pairs
of Majorana fermions via $\chi_{l}=\psi_{l}^{1}+i\psi_{l}^{2}$,
$\chi_{r}=\psi_{r}^{1}+i\psi_{r}^{2}$. The number operators of the original
fermions are Majorana bilinears, e.g.
$i\psi_{l}^{1}\psi_{l}^{2}=2\hat{N}_{l}-1=(-1)^{\hat{N}_{l}}.$ We define a
single FEPR pair as the positive eigenstate of $i\psi_{l}^{1}\psi_{r}^{1}$ and
$i\psi_{l}^{2}\psi_{r}^{2}$. In the number operator basis of the original
complex fermions, this is the maximally entangled state
$(\ket{10}-i\ket{01})/\sqrt{2}$. Multiple fermion EPR pairs are formed as a
tensor product of single FEPR pairs.
This definition leads to some simple relations when ‘sliding’ fermion
operators around FEPR bras and kets in diagrammatic calculations. We have:
$\begin{split}\psi_{l}^{j}\ket{\text{FEPR}}&=i\psi_{r}^{j}\ket{\text{FEPR}}\\\
\bra{\text{FEPR}}\psi_{l}^{j}&=-i\bra{\text{FEPR}}\psi_{r}^{j},\\\
\end{split}$ (205)
diagrammatically,
$\displaystyle\centering{\begin{array}[]{c}\vbox{\vskip 1.34999pt\hbox{\hskip
0.9pt\includegraphics[scale={.45}]{fig-fermion-
slide.pdf}}}\end{array}}\centering\@add@centering\@add@centering$ (207)
As in bosonic systems, the thermofield double state is obtained by applying
$\rho^{1/2}$ to one side, $\ket{\text{TFD}}=\rho_{l}^{1/2}\ket{\text{FEPR}}$.
Since the SYK Hamiltonian is composed of 4-fermion terms, we have
$\begin{split}H_{l}\ket{\text{TFD}}=(i)^{4}H_{r}\ket{\text{TFD}}=H_{r}\ket{\text{TFD}}.\\\
\end{split}$ (208)
As in bosonic systems, the coupling for Majorana systems [Eq. (74)] measures
the size of Majorana strings.
There are two options for teleportation in fermionic systems. First, we could
teleport an ordinary bosonic qubit by encoding it into Majorana fermion
operators, for instance:
$\begin{split}X&\equiv i\psi_{1}\psi_{2}\\\ Y&\equiv i\psi_{2}\psi_{3}\\\
Z&\equiv i\psi_{1}\psi_{3}.\\\ \end{split}$ (209)
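That these bilinears satisfy the Pauli algebra can be verified in a small Jordan-Wigner representation. This is a sketch under the assumption that the Majoranas are normalized so that $\psi_{i}^{2}=1$ (so the bilinears square to the identity); the explicit matrices below are one standard choice, not taken from the text:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Jordan-Wigner matrices for three Majorana operators on two qubits,
# with psi_i^2 = 1 and {psi_i, psi_j} = 2 delta_ij.
psi1 = np.kron(X, I2)
psi2 = np.kron(Y, I2)
psi3 = np.kron(Z, X)

Xe = 1j * psi1 @ psi2   # encoded X
Ye = 1j * psi2 @ psi3   # encoded Y
Ze = 1j * psi1 @ psi3   # encoded Z

I4 = np.eye(4)
for P in (Xe, Ye, Ze):
    assert np.allclose(P @ P, I4)            # P^2 = 1
assert np.allclose(Xe @ Ye, 1j * Ze)         # XY = iZ
assert np.allclose(Xe @ Ye + Ye @ Xe, 0)     # {X, Y} = 0
```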
At infinite temperature before coupling, each of the above operators has a
correlator equal to $-1$, which is exactly the result for bosonic systems, but
without a need for the decoding operation $Y$. At late times, the coupling
$e^{igV}$ applies a relative phase between the identity and non-identity
Paulis, giving correlator phases:
$\mathbbm{1}$ | $e^{ig\expectationvalue{V}}$
---|---
$i\psi_{1}\psi_{2}$ | $-1$
$i\psi_{2}\psi_{3}$ | $-1$
$i\psi_{1}\psi_{3}$ | $-1$
When $g\expectationvalue{V}=\pi$ all correlators have the same phase, and
peaked-size teleportation succeeds with perfect fidelity at infinite
temperature. At intermediate times, peaked-size teleportation of multiple
bosonic qubits will succeed just as in bosonic systems.
The second option is to send a fermionic qubit, for instance by inserting half
of an ancillary FEPR pair. Here we begin with intermediate times, and discuss
a modification necessary for late time teleportation afterwards. We represent
a single complex fermion with two Majorana operators $\psi_{1},\psi_{2}$, and
suppose that the operators’ size distributions are tightly peaked, and the
size of $i\psi_{1}\psi_{2}$ is twice that of the individual Majorana sizes,
denoted $\mathcal{S}$ (this assumption of size addition is appropriate in all-
to-all coupled systems, e.g. SYK, but would not necessarily hold for e.g. a 1D
Majorana chain). The relevant operator correlators after coupling are:
$\mathbbm{1}$ | $1$
---|---
$\psi_{1}$ | $-i\cdot e^{ig\mathcal{S}}$
$\psi_{2}$ | $-i\cdot e^{ig\mathcal{S}}$
$i\psi_{1}\psi_{2}$ | $-1\cdot e^{ig2\mathcal{S}}$
At $g\mathcal{S}=\pi/2$ we have perfect teleportation. This generalizes
straightforwardly to multiple fermionic qubits: a $p$-fermion operator will
gain a phase $i^{p}$ from sliding across the FEPR pair, and a phase
$e^{igp\mathcal{S}}$ from coupling.
At late times, the sizes of initial single-body and two-body Majorana
operators are equal, since they have saturated the size of the system, and the
above operator correlators do not have the same phase. We now show that an
alteration of the encoding procedure can rectify this and lead to perfect late
time teleportation. This alteration is explained by the HPR protocol, and we
derive it by reexamining the equivalence between the HPR and TW protocols in
the case of fermionic qubits. Here, the relevant difference between bosons and
fermions is that the fermionic SWAP gate is _not_ related to the Grover search
operation $1-2P_{\text{FEPR}}$ by single-qubit rotations. Since fermions gain
a minus sign upon exchange, the fermionic SWAP gate takes the form
$\text{SWAP}_{F}=\begin{pmatrix}1&0&0&0\\\ 0&0&1&0\\\ 0&1&0&0\\\ 0&0&0&-1\\\
\end{pmatrix}=\frac{i\psi_{1,l}\psi_{2,l}+i\psi_{1,r}\psi_{2,r}+i\psi_{1,l}\psi_{2,r}-i\psi_{2,l}\psi_{1,r}}{2}.$
(210)
This is a _two-qubit_ controlled-phase (CZ) gate away from
$1-2P_{\text{FEPR}}$:
$1-2P_{\text{FEPR}}=\begin{pmatrix}1&0&0&0\\\ 0&0&i&0\\\ 0&-i&0&0\\\
0&0&0&1\\\
\end{pmatrix}=\frac{1-i\psi_{1,l}\psi_{1,r}-i\psi_{2,l}\psi_{2,r}-(i\psi_{1,l}\psi_{1,r})(i\psi_{2,l}\psi_{2,r})}{2}=\text{CZ}\cdot\text{SWAP}_{F},$
(211)
where the CZ gate is defined as
$\begin{split}\text{CZ}&=\begin{pmatrix}1&0&0&0\\\ 0&i&0&0\\\ 0&0&-i&0\\\
0&0&0&-1\\\
\end{pmatrix}=(1+i)\frac{\psi_{1,l}\psi_{2,l}+i\psi_{1,r}\psi_{2,r}}{2}\\\
&=\exp(i\frac{\pi}{4})\cdot\exp(-i\frac{\pi}{2}[i\psi_{1,l}\psi_{2,l}])\cdot\exp(i\frac{\pi}{4}[i\psi_{1,l}\psi_{2,l}][i\psi_{1,r}\psi_{2,r}]).\\\
\end{split}$ (212)
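Since $1-2P_{\text{FEPR}}$, $\text{SWAP}_{F}$, and CZ are explicit $4\times 4$ matrices, the relation between them can be checked directly, building the Grover operator from the FEPR state $(\ket{10}-i\ket{01})/\sqrt{2}$ defined above. A minimal numpy check (note the operator ordering: CZ is applied after $\text{SWAP}_{F}$, i.e. the matrix product $\text{CZ}\cdot\text{SWAP}_{F}$, matching Eq. (215)):

```python
import numpy as np

# Basis ordering |00>, |01>, |10>, |11>;  |FEPR> = (|10> - i|01>)/sqrt(2).
fepr = np.array([0, -1j, 1, 0]) / np.sqrt(2)
grover = np.eye(4) - 2 * np.outer(fepr, fepr.conj())

swap_f = np.array([[1, 0, 0, 0],
                   [0, 0, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, -1]], dtype=complex)
cz = np.diag([1, 1j, -1j, -1])

# Grover search operator = CZ . SWAP_F (CZ applied after the fermionic SWAP).
assert np.allclose(cz @ swap_f, grover)
```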
The single-fermion $\exp(-i\frac{\pi}{2}[i\psi_{1,l}\psi_{2,l}])$ gate occurs
on the swapped-out fermion and may be neglected. Inserting this in place of
the second Grover search operation gives the appropriate teleportation
protocol:
$\displaystyle\centering{\begin{array}[]{c}\vbox{\vskip 1.80002pt\hbox{\hskip
1.20001pt\includegraphics[scale={.6}]{fig-state-fidelity-swap-EPR-fermions-
CZ}}}\end{array}}\centering\@add@centering\@add@centering$ (214)
In the second diagram we have slid the action of each side of the CZ gate such
that the gate acts at the same time and on the same qubits as the initial SWAP
gate.
We can relate the fidelity of teleportation to operator correlators by
decomposing the encoding gate as
$\text{CZ}\cdot\text{SWAP}_{F}=\frac{1}{2}\sum_{j=1}^{4}S^{L}_{j,l}S^{R}_{j,r}$
(215)
where we define the operators:
$j$ | $S^{L}_{j}$ | $S^{R}_{j}$ | $S^{R}_{j}S^{L}_{j}$
---|---|---|---
1 | $\mathbbm{1}$ | $\mathbbm{1}$ | $\mathbbm{1}$
2 | $i\psi_{1}\psi_{2}$ | $i\psi_{1}\psi_{2}$ | $\mathbbm{1}$
3 | $i\psi_{1}$ | $\psi_{1}$ | $i\mathbbm{1}$
4 | $i\psi_{2}$ | $\psi_{2}$ | $i\mathbbm{1}$
according to Eq. (211). The final column displays the product
$S^{L}_{j}S^{R}_{j}$, where both act on the same qubit, which will be useful
shortly. We find a fidelity:
$\displaystyle\centering{\begin{array}[]{c}\vbox{\vskip 1.80002pt\hbox{\hskip
1.20001pt\includegraphics[scale={.6}]{fig-state-fidelity-swap-EPR-
fermions}}}\end{array}}\centering\@add@centering\@add@centering$ (217)
In the peaked-size regime with correlator phases $\theta_{R,j}$, we have
$\begin{split}F_{\text{EPR}}&=\frac{1}{2^{4}}\sum_{j,k}\bra{\text{TFD}}S_{R,j,l}(t)\,e^{-igV}\,[S_{L,j,r}S_{L,k,r}^{\dagger}](-t^{\prime})\,e^{igV}\,S_{R,k,l}^{\dagger}(t)\ket{\text{TFD}}\\\
&=\frac{1}{2^{4}}\sum_{j,k}\exp(-i[\theta_{R,j}-\theta_{R,k}])\tr(S_{R,j}(t-t^{\prime})\,\rho^{1/2}\,S_{L,j}(0)\,S_{L,k}^{\dagger}(0)\,\rho^{1/2}\,S_{R,k}^{\dagger}(t-t^{\prime}))\\\
\end{split}$ (218)
At infinite temperature, late times, and $g\expectationvalue{V}=\pi$, we have
correlator phases $\theta_{R,j}=0$ for the identity and the bosonic two-body
operator, and $\theta_{R,j}=\pi/2$ for the single-body fermionic operators, and find perfect
teleportation fidelity:
$\begin{split}F_{\text{EPR}}&=\frac{1}{2^{4}}\sum_{j,k}\exp(-i[\theta_{R,j}-\theta_{R,k}])\tr(S_{R,j}\,S_{L,j}\,S_{L,k}^{\dagger}\,S_{R,k}^{\dagger})\\\
&=\frac{1}{2^{4}}\sum_{j,k}\exp(-i[\theta_{R,j}-\theta_{R,k}])\cdot
i^{F_{j}}\cdot(-i)^{F_{k}}\cdot\tr(i\psi_{1}\psi_{2}\,i\psi_{1}\psi_{2})\\\
&=\frac{1}{2^{4}}\sum_{j,k}\exp(-i[\theta_{R,j}-\theta_{R,k}])\cdot
i^{F_{j}}\cdot(-i)^{F_{k}}\\\ &=\frac{1}{2^{4}}\sum_{j,k}(-i)^{F_{j}}\cdot
i^{F_{k}}\cdot i^{F_{j}}\cdot(-i)^{F_{k}}\\\ &=1,\\\ \end{split}$ (219)
where we define $F_{j}=1$ if $S_{L/R,j}$ is fermionic, and $0$ if bosonic.
We note that for state, as opposed to EPR, teleportation, the above CZ gate
turns out not to be necessary. Since coherent superpositions of different
fermion parity cannot be created by physical Hamiltonians, which are bilinear
in fermionic operators, we should only consider teleporting states of definite
fermion parity. The CZ gate applies only an overall phase on these states, and
so does not affect the success of teleportation.
We can also briefly analyze the low temperature results of Ref. Gao and
Jafferis (2019) through the lens of operator correlator phases. Here, state
teleportation is found to succeed perfectly at low temperatures only when the
initial operators are encoded in $p$-body Majoranas, with $p=q/2+1$, despite
the operator correlators having maximal magnitude for any value of $p$. At the
semiclassical gravity teleportation time, the correlators have phases:
$\mathbbm{1}$ | $1$
---|---
$\psi_{1}$ | $i^{p}(i)^{2p/q}$
$\psi_{2}$ | $i^{p}(i)^{2p/q}$
$i\psi_{1}\psi_{2}$ | $(-1)^{p}(i)^{4p/q}$
For single-body Majoranas, $p=1$, the correlators clearly do not have the same
phase—in fact, their phases are nearly identical to their phases at infinite
temperature with no coupling—so state teleportation is not possible. When
$p=q/2+1$, in the large-$q$ limit, these phases are $1,\pm 1,\pm 1,1$,
respectively, where the sign is determined by whether $p=1,3\text{ mod }4$.
When the sign is odd, it can be corrected via the decoding operation
$i\psi_{1}\psi_{2}=(-1)^{N}$, which applies a minus sign when conjugating
fermionic operators. Either case can therefore achieve perfect teleportation.
## Appendix G Teleportation and inelastic scattering at infinite temperature
In Section VII.4, we found that strong stringy corrections to a bulk theory of
gravity led to peaked-size teleportation as well as a deeply inelastic
scattering amplitude. We will now demonstrate that these two phenomena—peaked-
size teleportation and inelastic scattering—also coincide at infinite
temperature, for arbitrary functional forms of the wavefunctions and
scattering amplitudes. As we argued before, for a UV complete theory of
quantum gravity, strong stringy (and in general deep inelastic) effects are
expected to dominate only at high temperatures, $\beta\rightarrow 0$.
At infinite temperature, the form of the correlator is constrained by the
equality
$C_{\psi}(t;g)^{*}=-C_{\psi}(t;-g).$ (220)
This implies that $C_{\psi}(t)$ can be written as a real function of $ig$
multiplied by the two-point function:
$C_{\psi}(t)=\langle\psi_{l}\psi_{r}\rangle e^{-F(ig,t)}.$ (221)
When $g=0$, $C_{\psi}(t)$ is equal to $\langle\psi_{l}\psi_{r}\rangle$,
implying
$F(ig)=igf_{1}(t)+O(g^{2}),$ (222)
where $f_{1}(t)$ is a real function. Therefore, at this order in $g$, the
infinite temperature correlator is simply the two-point function multiplied by
a pure phase, matching peaked-size teleportation [Eq. (38)].
To justify that higher order terms in $g$ are subleading, we need an
additional assumption: that the wavefunction of $\psi(t)$ has a saddle point
at some momentum $k$. This is analogous to the boundary assumption that
operator sizes are tightly peaked. At early times, this saddle will not be
significantly changed by the coupling, since the derivative of the scattering
matrix with respect to $k$ will be suppressed by $G_{N}$, and at early times
the time-dependence of the wavefunction will not be strong enough to
compensate for this suppression (for example, in semiclassical AdS2, we
observed competition between $e^{2\pi t/\beta}$ and $1/G_{N}$). In such cases,
it is easy to see that Eq. (100) becomes $\langle\psi_{l}\psi_{r}\rangle$
times a pure phase linear in $g$, with higher powers of $g$ suppressed by
$G_{N}$.
Infinite temperature also implies purely inelastic scattering, i.e. the
scattering amplitude, $e^{i\delta}=1-S(k,s)$, is automatically real. To see
this, we first rewrite the correlator in terms of the in-falling momentum
operators, $\hat{P}$ and $\hat{K}$, for $\psi_{1}$ and $\psi(t)$ respectively.
For instance, for the former we have:
$\begin{split}\Psi_{1,r}(s,0)\Psi_{1,l}^{*}(s,0)&=\langle\psi_{1,l}(0)|s\rangle\langle
s|\psi_{1,r}(0)\rangle\\\
&=\int\frac{da}{2\pi}\langle\psi_{1,l}(0)e^{-ia\hat{P}}\psi_{1,r}(0)\rangle
e^{ias}.\end{split}$ (223)
As $\psi(t)$ and $\psi_{1}$ are in principle independent operators, we have
$[\hat{K},\hat{P}]=0$. Using this, we can rewrite Eq. (100) as
$C_{\psi}(t)=\langle\psi_{r}(-t)\exp\left(-igS(\hat{K},\hat{P})i\psi_{1,l}\psi_{1,r}\right)\psi_{l}(t)\rangle.$
(224)
Taking the complex conjugate gives
$\begin{split}C_{\psi}(t)^{*}&=\langle\psi_{l}(t)\exp\left(igS(\hat{K},\hat{P})^{*}(-i)\psi_{1,r}\psi_{1,l}\right)\psi_{r}(-t)\rangle\\\
&=-\langle\psi_{r}(-t)\exp\left(igS(\hat{K},\hat{P})^{*}i\psi_{1,l}\psi_{1,r}\right)\psi_{l}(t)\rangle\end{split}$
(225)
where we used the fact that $\hat{K}$, $\hat{P}$ are
Hermitian and that at infinite temperature
$\psi_{l}(t)\ket{\text{TFD}}=\psi_{r}(-t)\ket{\text{TFD}}$. Combining this
with Eq. (220) then enforces $S(\hat{K},\hat{P})^{*}=S(\hat{K},\hat{P})$, i.e.
purely inelastic scattering.
DESY 21-010
IFT-UAM/CSIC-20-143
FTUAM-20-21
An even lighter QCD axion
Luca Di Luzio${}^{a}$, Belen Gavela${}^{b,\,c}$, Pablo Quilez${}^{a}$, Andreas Ringwald${}^{a}$
${}^{a}$Deutsches Elektronen-Synchrotron DESY,
Notkestraße 85, D-22607 Hamburg, Germany
${}^{b}$Departamento de Física Teórica, Universidad Autónoma de Madrid,
Cantoblanco, 28049, Madrid, Spain
${}^{c}$Instituto de Física Teórica, IFT-UAM/CSIC,
Cantoblanco, 28049, Madrid, Spain
###### Abstract
We explore whether the axion which solves the strong CP problem can naturally
be much lighter than the canonical QCD axion. The $Z_{\cal N}$ symmetry
proposed by Hook, with ${\cal N}$ mirror and degenerate worlds coexisting in
Nature and linked by the axion field, is considered in terms of generic
effective axion couplings. We show that the total potential is safely
approximated by a single cosine in the large ${\cal N}$ limit, and we
determine the analytical formula for the exponentially suppressed axion mass.
The resulting universal enhancement of all axion interactions relative to
those of the canonical QCD axion has a strong impact on the prospects of
axion-like particle experiments such as ALPS II, IAXO and many others:
experiments searching for generic axion-like particles have in fact discovery
potential to solve the strong CP problem. The finite density axion potential
is also analyzed and we show that the $Z_{\cal N}$ asymmetric background of
high-density stellar environments already sets significant model-independent
constraints: $3\leq{\cal N}\lesssim 47$ for an axion scale $f_{a}\lesssim
2.4\times 10^{15}$ GeV, with tantalizing discovery prospects for any value of
$f_{a}$ and down to ${\cal N}\sim 9$ with future neutron star and
gravitational wave data, down to the ultra-light mass region. In addition, two
specific ultraviolet $Z_{\cal N}$ completions are developed: a composite axion
one and a KSVZ-like model with improved Peccei-Quinn quality.
E-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
###### Contents
1 Introduction
2 Down-tuning the axion mass
  2.1 The $Z_{2}$ case
  2.2 $Z_{\cal N}$ axion
  2.3 Axion potential in the large ${\cal N}$ limit
3 UV completions and alternative scenarios
  3.1 KSVZ $Z_{\cal N}$ axion
  3.2 Composite $Z_{\cal N}$ axion
  3.3 Ultra-light QCD axions
  3.4 A heavier-than-QCD axion
4 Experimental probes of down-tuned axions
  4.1 Axion coupling to photons
  4.2 Finite density constraints on $f_{a}$
5 Conclusions
A Holomorphicity properties of $Z_{\cal N}$ axion potential
B Fourier series of the $Z_{\cal N}$ axion potential
C Analytical axion mass dependence from hypergeometric functions
## 1 Introduction
The axion experimental program is in a blooming phase, with several new
experiments and detection concepts promising the exploration of regions of
parameter space thought to be unreachable until a decade ago. Many of those
experiments are simply prototypes, awaiting the jump to become ‘big experiments’, or, in the case of more consolidated techniques, are still far from saturating their full physics potential. Nonetheless, they sometimes reach sensitivities which go well beyond astrophysical limits, albeit often still far from the customary QCD axion window.
On the other hand, since axion couplings are inherently ultraviolet (UV) dependent, such early-stage experiments already provide valuable probes of the QCD axion parameter space. Imagine for definiteness that ALPS II were to detect a signal in 2021: would it be possible to interpret it as an axion that solves the strong CP problem? Since the strong CP problem is one of the strongest motivations for new physics, if an axion-like particle (ALP) were ever discovered, there or elsewhere, it would be compelling to explore whether it has something to do with the strong CP problem. This work explores whether
wide regions in the ALP parameter space, well outside the traditional QCD
axion band, may correspond to solutions of the strong CP problem. This is a
question of profound theoretical and experimental relevance.
In axion solutions to the strong CP problem111That is, via a global chiral
${\rm U}(1)$ symmetry, exact although hidden (aka spontaneously broken) at the
classical level and explicitly broken by instanton effects at the quantum
level [1, 2]. both the axion mass and the couplings to ordinary matter scale
as $1/f_{a}$, where $f_{a}$ is the axion decay constant, denoting the scale of
new physics. The precise relation between mass and decay constant depends on
the characteristics of the strong interacting sector of the theory. When QCD
is the only confining group to which the axion $a$ couples, in which case we
denote the axion mass as $m_{a}^{\rm QCD}$, they are necessarily linked by the
relation [3, 4]
$m_{a}^{\rm QCD}=\frac{\sqrt{\chi_{\rm QCD}}}{f_{a}}\simeq
m_{\pi}\,f_{\pi}\,\frac{\sqrt{m_{u}\,m_{d}}}{m_{u}+m_{d}}\,\frac{1}{f_{a}}\,,$
(1.1)
where $\chi_{\rm QCD},m_{\pi},f_{\pi},m_{u}$ and $m_{d}$ denote respectively
the QCD topological susceptibility, the pion mass, its decay constant, and the
up and down quark masses. Equation (1.1) is completely model-independent as long as QCD is the only source of the axion mass, and it defines the “canonical
QCD axion”, also often called “invisible axion”. For this axion the
$aG_{\mu\nu}\tilde{G}^{\mu\nu}$ coupling to the gluon strength $G_{\mu\nu}$ is
directly responsible for the axion mass, since the only source of explicit
breaking of the global axial Peccei-Quinn (PQ) symmetry ${\rm U}(1)_{\rm PQ}$
is its QCD anomaly. The strength of other axion couplings to Standard Model
(SM) fields is instead model-dependent: it varies with the matter content of
the UV complete axion model.
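As a quick numerical illustration (not part of the original text), Eq. (1.1) can be evaluated directly; the pion mass, pion decay constant and quark-mass ratio below are standard illustrative values, not quantities quoted in this paper:

```python
# Hedged numerical check of Eq. (1.1). The inputs (pion mass, pion decay
# constant, z = m_u/m_d) are illustrative standard values, not from this paper.
import numpy as np

m_pi = 134.98e6   # neutral pion mass [eV]
f_pi = 92.1e6     # pion decay constant [eV]
z = 0.48          # m_u/m_d, cf. Eq. (2.9)

def m_a_QCD(f_a):
    # Eq. (1.1): m_a = m_pi f_pi sqrt(m_u m_d)/(m_u + m_d) / f_a
    #          = m_pi f_pi sqrt(z)/(1 + z) / f_a     (all in eV)
    return m_pi * f_pi * np.sqrt(z) / (1.0 + z) / f_a

print(m_a_QCD(1e21) * 1e6)  # f_a = 1e12 GeV = 1e21 eV  ->  ~5.8 micro-eV
```

This reproduces the familiar micro-eV-scale canonical axion mass at $f_a\sim 10^{12}$ GeV.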
In recent years there have been many attempts to enlarge the canonical QCD
axion window, by considering UV completions of the axion effective Lagrangian
which departed from the minimal DFSZ [5, 6] and KSVZ [7, 8] constructions.
Most approaches actually focussed on the possibility of modifying the Wilson coefficient of specific axion-SM effective operators [9, 10, 11, 12, 13, 14, 15]. That is, the size of the coupling coefficients, at fixed $f_{a}$, is modified. This has for example made it possible to populate new regions of the parameter space by moving the axion band vertically in the axion mass versus coupling plane, see Fig. 1 left. The results are then “channel specific”: different couplings $c$ are modified differently.
Figure 1: Different approaches to enlarge the parameter space of axions that
solve the strong CP problem. The canonical QCD axion relation is represented
by the lower black line in the $\{m_{a},c/f_{a}\}$ parameter space, where
$c$ denotes a generic effective axion coupling. Vertical displacements,
possible within pure QCD axion models (i.e. $m_{a}=m_{a}^{\rm QCD}$), are
depicted on the left. Horizontal displacements (via enlarged strong gauge
sectors) are illustrated on the right for the case of a lighter than usual
axion to be explored here.
The parameter space of solutions can be alternatively changed by varying the
axion mass at fixed $f_{a}$. This corresponds to horizontal displacements of
the canonical axion band in the parameter space, see right panel in Fig. 1. It
always requires that the magnitude of the relation between the axion mass
$m_{a}$ and $1/f_{a}$ departs from that in Eq. (1.1): the confining sector of
the SM must be enlarged beyond QCD. New instanton sources give then additional
contributions to the right-hand side of Eq. (1.1). The practical consequence
is a universal modification of the parameter space of all axion couplings at a
given $m_{a}$, at variance with the vertical displacement scenarios. This
feature could a priori allow for the two mechanisms in Fig. 1 to be
distinguished.222For instance, via the measurement of the axion coupling to
the neutron electric dipole moment (nEDM) operator at CASPER-electric [16,
17], in case the axion would also account for dark matter (DM). The axion-to-
nEDM coupling directly follows from the $m_{a}$–$f_{a}$ relation and so it is
unmodified in standard approaches to axion coupling enhancements (left panel
in Fig. 1) that still rely on Eq. (1.1).
Examples of horizontal enlargement of the parameter space towards the right of
the canonical QCD axion band are heavy axion models that solve the strong CP
problem at low scales (e.g. $f_{a}\sim$ TeV) [18, 19, 20, 21, 22, 23, 24, 25,
26, 27, 28, 29, 30, 31, 32, 33, 34]. The present work explores instead left
horizontal shifts: true axions that solve the strong CP problem with $m_{a}\ll
m_{a}^{\rm QCD}$. This avenue is more challenging, since it requires a new
source of PQ breaking aligned with QCD, whose contribution to the axion mass
needs to _almost_ cancel that from QCD without relying on fine-tunings.
A possible mechanism to achieve this lighter-than-usual true axion in a
technically natural way was recently put forth by Hook [35], in terms of a
discrete $Z_{\cal N}$ symmetry. ${\cal N}$ mirror and degenerate worlds would
coexist in Nature, linked by an axion field333This setup for ${\cal N}=2$ had
previously led instead to an enhancement of the axion mass [36], because the
axion field was assumed to be invariant under the $Z_{2}$ transformation.
which implements non-linearly the $Z_{\cal N}$ symmetry. One of those worlds
is our SM one. All the confining sectors contribute now to the right-hand side
of Eq. (1.1), conspiring by symmetry to suppress the axion mass without
spoiling the solution to the strong CP problem. The direct consequence is, for fixed $f_{a}$, an ${\cal N}$-dependent reduced axion mass in spite of all confining scales being equal to $\Lambda_{\rm QCD}$. In other words, for a given value of $m_{a}$ a universal enhancement of all axion interactions follows, relative to those of the canonical QCD axion. In this paper, we
expand on the mathematical properties of the implementation of the $Z_{\cal
N}$ symmetry and determine the analytic form of the exponential suppression of
the axion mass and its potential in the large ${\cal N}$ limit. The
phenomenological analysis of the number of possible mirror worlds ${\cal N}$
will be next carried out with present and projected data.
The study will also explore the $Z_{\cal N}$ axion potential at finite
density, to confront present constraints and prospects from very dense stellar
objects and gravitational waves. It has been recently pointed out in [37, 38]
that a generic reduced-mass axion leads to strong effects on those systems,
raising the effective mass in the dense media. In the scenario considered
here, a stellar background made only of SM matter is by nature $Z_{\cal
N}$-asymmetric: we will show analytically how such an asymmetric background
breaks the cancellations which guaranteed an exponentially suppressed axion
mass for the $Z_{\cal N}$ symmetric vacuum potential. Limits on the number of
possible worlds will be obtained in turn.
The theoretical framework to be used throughout the work described above is
that of effective axion couplings. Nevertheless, two concrete UV completions
of the $Z_{\cal N}$ scenario under consideration will be developed as well: a
model à la KSVZ [7, 39], and a composite model à la Choi-Kim [40, 41]. The status of the Peccei-Quinn (PQ) quality problem will also be addressed.
An important remark is that we will consider in this paper experiments that
can test the solution to the strong CP problem without further assumptions.
Indeed, it is most relevant to get a clear panorama on the strong CP problem
by itself, given its fundamental character. In particular, we will not discuss
axion or ALP experiments that do rely on the assumption that the DM of the
Universe may be constituted by axions. The cosmological evolution of the axion
field in the $Z_{\cal N}$ scenario under discussion and its contribution to
the DM relic abundance departs drastically from the standard case, and it is
discussed in a companion paper [42].
The structure of the present paper can be easily inferred from the Table of
Contents.
## 2 Down-tuning the axion mass
In Ref. [35] it was shown how to naturally down-tune the axion mass from its
natural QCD value in Eq. (1.1), exploiting the analyticity structure of the
QCD axion potential in the presence of a $Z_{\cal N}$ symmetry. For
pedagogical purposes, before turning to the generic $Z_{\cal N}$ case we
analyze the (unsuccessful) case of a $Z_{2}$ symmetry: the SM plus one
degenerate mirror world linked by an axion which realizes the symmetry non-
linearly.
### 2.1 The $Z_{2}$ case
Figure 2: $Z_{2}$ axion potential. The mirror contribution to the axion
potential $V_{\text{SM}^{\prime}}(a/f_{a})$ (in green) partially cancels that
of the SM, $V_{\text{SM}}(a/f_{a})$ (in blue), leading to a total shallower
potential $V_{2}(a/f_{a})$ (in orange). The total potential has a maximum at $a/f_{a}=0$ and thus this $Z_{2}$ axion does not solve the SM strong CP problem.
Consider the SM plus a complete copy SM′, related via a $Z_{2}$ symmetry which
exchanges each SM field with its mirror counterpart, while the axion field is
shifted by $\pi$:
$\displaystyle Z_{2}:\quad$
$\displaystyle\text{SM}\longleftrightarrow\text{SM}^{\prime}$ (2.1)
$\displaystyle a\longrightarrow a+\pi f_{a}\,.$ (2.2)
The Lagrangian, including the anomalous effective couplings of the axion to SM
fields, then reads
$\mathcal{L}=\mathcal{L}_{\rm{SM}}+\mathcal{L}_{\rm{SM^{\prime}}}+\frac{\alpha_{s}}{8\pi}\,\Big{(}\frac{a}{f_{a}}-\theta\Big{)}G\widetilde{G}+\frac{\alpha_{s}}{8\pi}\,\Big{(}\frac{a}{f_{a}}-\theta+\pi\Big{)}G^{\prime}\widetilde{G}^{\prime}+\dots\,,$
(2.3)
where $\theta$ parametrizes the anomalous QCD coupling, $\alpha_{s}$ is the QCD fine-structure constant, the Lorentz indices of the field strength $G_{\mu\nu}$ have been suppressed, and the dots stand for possible $Z_{2}$-symmetric portals between the two mirror worlds (see Sect. 2.2.1).
Without loss of generality, we can perform a uniform shift in $a$ such that
the $\theta$ term in Eq. (2.3) is set to zero. Therefore, the effective
$\theta$-parameter of the SM corresponds to $\theta_{\text{eff}}\equiv\langle
a\rangle/f_{a}$, where $\langle a\rangle$ denotes the vacuum expectation value
(vev) of the axion field.
In the case of an exact $Z_{2}$ symmetry, all couplings and masses of the mirror world and the SM would coincide, with the exception of the effective $\theta$-parameter. It is this difference (namely the $\pi$ shift in the effective $\theta$-parameters of the SM and its mirror) that is responsible for displaced contributions to the total axion potential, with destructive interference effects. Were the QCD axion potential a simple cosine, the
total potential would vanish because the two contributions (from QCD and
mirror QCD) would have exactly the same size but opposite sign, i.e.
$\propto\text{cos}({a}/{f_{a}})$ and
$\propto\text{cos}({a}/{f_{a}}+\pi)=-\text{cos}({a}/{f_{a}})$ respectively.
However, for the true chiral axion potential [43, 44, 45] the exact
cancellation disappears and a residual potential –and thus a non-zero axion
mass– remains, which at leading chiral order reads (keeping only two flavours)
$V_{2}(a)=-\frac{m_{\pi}^{2}f_{\pi}^{2}}{m_{u}+m_{d}}\left\\{\sqrt{m_{u}^{2}+m_{d}^{2}+2m_{u}m_{d}\cos\left(\frac{a}{f_{a}}\right)}+\sqrt{m_{u}^{2}+m_{d}^{2}-2m_{u}m_{d}\cos\left(\frac{a}{f_{a}}\right)}\right\\}\,.$
(2.4)
This $Z_{2}$-symmetric world would not solve the strong CP problem, though, because $a/f_{a}=0$ is a maximum of the axion potential, as illustrated in Fig. 2. Indeed, as already pointed out in Ref. [35], $a/f_{a}=0$ is a minimum of the potential only for odd values of ${\cal N}$, while it is a maximum for even ${\cal N}$. Therefore, the simplest viable axion model that solves the strong CP problem with a reduced axion mass incorporates a $Z_{3}$ symmetry.
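The maximum at the origin can be seen with a two-line finite-difference check of Eq. (2.4); this is a sketch with the overall scale set to one (units of $\Sigma_0 m_d^2$) and $z=0.48$ from Eq. (2.9):

```python
# Finite-difference check that theta = a/f_a = 0 is a *maximum* of the Z_2
# potential, Eq. (2.4). Overall scale set to 1; z = m_u/m_d = 0.48.
import numpy as np

z = 0.48

def V2(theta):
    return -(np.sqrt(1 + z**2 + 2*z*np.cos(theta))
             + np.sqrt(1 + z**2 - 2*z*np.cos(theta)))

eps = 1e-3
curv = (V2(eps) - 2*V2(0.0) + V2(-eps)) / eps**2  # ~ V2''(0)
print(curv)  # negative curvature: theta = 0 is a maximum, as in Fig. 2
```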
### 2.2 $Z_{\cal N}$ axion
We consider now ${\cal N}$ copies of the SM that are interchanged under a
$Z_{\cal N}$ symmetry which is non-linearly realized by the axion field:
$\displaystyle Z_{\cal N}:\quad$
$\displaystyle\text{SM}_{k}\longrightarrow\text{SM}_{k+1\,(\text{mod}\,{\cal
N})}$ (2.5) $\displaystyle a\longrightarrow a+\frac{2\pi k}{{\cal N}}f_{a}\,,$
(2.6)
with $k=0,\ldots,{\cal N}-1$. One of those worlds will be our SM one. The most
general Lagrangian implementing this symmetry describes ${\cal N}$ mirror
worlds whose couplings take exactly the same values as in the SM, with the
exception of the effective $\theta$-parameter: for each copy the effective
$\theta$ value is shifted by $2\pi/{\cal N}$ with respect to that in the
neighbour $k$ sector,
$\mathcal{L}=\sum_{k=0}^{{\cal
N}-1}\left[\mathcal{L}_{\text{SM}_{k}}+\frac{\alpha_{s}}{8\pi}\left(\theta_{a}+\frac{2\pi
k}{{\cal N}}\right)G_{k}\widetilde{G}_{k}\right]\,+\dots$ (2.7)
where $\mathcal{L}_{\text{SM}_{k}}$ denotes exact copies of the SM total
Lagrangian excluding the strong anomalous coupling, and the dots stand for
$Z_{\cal N}$-symmetric portal couplings that may connect those different
sectors (to be discussed in Sect. 2.2.1). In this equation $\theta_{a}\equiv
a/f_{a}$ is the angular axion field defined in the interval $[-\pi,\pi)$, and
a universal (equal for all $k$ sectors) bare theta parameter has been set to
zero via an overall shift of the axion field. The SM is identified from now on
with the $k=0$ sector: to ease the notation, the label $k=0$ on SM quantities will often be dropped below. Each QCD${}_{k}$ sector contributes to the $\theta_{a}$ potential, which in the 2-flavour leading-order chiral expansion reads
$V_{\cal N}(\theta_{a})=-A\sum_{k=0}^{{\cal
N}-1}\sqrt{1+z^{2}+2z\cos{\left(\theta_{a}+\frac{2\pi k}{{\cal
N}}\right)}}\,,$ (2.8)
where
$z\equiv m_{u}/m_{d}\approx 0.48\,,\qquad
A\equiv\Sigma_{0}m_{d}\approx\chi_{0}(1+z)/z\,,$ (2.9)
and
$\Sigma_{0}\equiv-\left\langle\overline{u}u\right\rangle=-\left\langle\overline{d}d\right\rangle=m_{\pi}^{2}f_{\pi}^{2}/(m_{u}+m_{d})$
(2.10)
denotes the chiral condensate [44], while $\chi_{0}\approx(75\
\text{MeV})^{4}$ is the zero temperature QCD topological susceptibility [46,
45]. Alternatively, the total $Z_{\cal N}$ axion potential can be written as
$\displaystyle V_{\cal N}(\theta_{a})=-m_{\pi}^{2}f_{\pi}^{2}\sum_{k=0}^{{\cal
N}-1}\sqrt{1-\beta\sin^{2}\left(\frac{\theta_{a}}{2}+\frac{\pi k}{{\cal
N}}\right)}\,,$ (2.11)
where $\beta\equiv 4m_{u}m_{d}/(m_{u}+m_{d})^{2}=4z/(1+z)^{2}$.
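The equivalence of the two forms, Eqs. (2.8) and (2.11), follows from the half-angle identity $1+z^2+2z\cos\theta=(1+z)^2\big(1-\beta\sin^2(\theta/2)\big)$ together with $A(1+z)=m_\pi^2 f_\pi^2$. A minimal numerical sketch (values illustrative) confirms it:

```python
# Sanity check (not in the paper): the two forms of the Z_N potential,
# Eqs. (2.8) and (2.11), are identical, using A(1+z) = m_pi^2 f_pi^2.
import numpy as np

z, N = 0.48, 7
beta = 4*z / (1 + z)**2
theta = np.linspace(-np.pi, np.pi, 201)
k = np.arange(N)[:, None]

# Eq. (2.8), in units of m_pi^2 f_pi^2 (so A -> 1/(1+z))
V_a = -np.sum(np.sqrt(1 + z**2 + 2*z*np.cos(theta + 2*np.pi*k/N)), axis=0) / (1 + z)
# Eq. (2.11), same units
V_b = -np.sum(np.sqrt(1 - beta*np.sin(theta/2 + np.pi*k/N)**2), axis=0)

print(np.max(np.abs(V_a - V_b)))  # agrees to machine precision
```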
For any ${\cal N}$, $\theta_{a}=0$ is an extremum of the axion potential. Indeed, using the property $\sin({2\pi({\cal N}-k)}/{{\cal N}})=-\sin\left({2\pi k}/{{\cal N}}\right)$ it is straightforward to see that
$\left.\frac{\partial V_{\cal N}(\theta_{a})}{\partial\theta_{a}}\right|_{\theta_{a}=0}=m_{\pi}^{2}f_{\pi}^{2}\,\frac{\beta}{4}\sum_{k=0}^{{\cal N}-1}\frac{\sin\left(\frac{2\pi k}{{\cal N}}\right)}{\sqrt{1-\beta\sin^{2}\left(\frac{\pi k}{{\cal N}}\right)}}=0\,.$
(2.12)
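The vanishing of the sum in Eq. (2.12) can be verified numerically for arbitrary ${\cal N}$ (a sketch, with $z=0.48$):

```python
# Direct numerical check of the sum in Eq. (2.12): the gradient of V_N at
# theta_a = 0 vanishes for any N (z = 0.48, beta as defined in the text).
import numpy as np

z = 0.48
beta = 4*z / (1 + z)**2

def grad_sum(N):
    k = np.arange(N)
    return np.sum(np.sin(2*np.pi*k/N) / np.sqrt(1 - beta*np.sin(np.pi*k/N)**2))

vals = [abs(grad_sum(N)) for N in (2, 3, 5, 8, 21)]
print(vals)  # all at machine-precision level: theta_a = 0 is an extremum
```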
The same holds for any $\theta_{a}=2\pi n/{\cal N}$ with $n\in\mathbb{Z}$,
because of the periodicity of the potential. For ${\cal N}$ odd the potential
$V(\theta_{a})$ has ${\cal N}$ minima located at
$\theta_{a}=\{\pm 2\pi\ell/{\cal N}\}\qquad\qquad\text{for}\quad\ell=0,1,\dots,\frac{{\cal N}-1}{2}\,,$
(2.13)
which includes the origin $\theta_{a}=0$, while for ${\cal N}$ even the origin
becomes a maximum. This result –valid for any ${\cal N}$– can be shown for
instance using the exact Fourier series expansion of the potential in Eqs.
(2.8)-(2.11) (see final part of Appendix C). It follows that ${\cal N}$ odd is
required in order to solve the SM strong CP problem (albeit with a $1/{\cal
N}$ tuning in the cosmological evolution [35, 42]). The $k\neq 0$ worlds have
instead non-zero effective $\theta$-parameters: $\theta_{k}\equiv 2\pi k/{\cal
N}$ for $\left\langle\theta_{a}\right\rangle=0$, see Eq. (2.7). A typical
shape of the axion potential for ${\cal N}=3$ is illustrated in Fig. 3.
Figure 3: $Z_{3}$ axion potential. The contributions from the ${\cal N}=3$
worlds partially cancel each other, leading to an exponentially small total
potential $V_{{\cal N}=3}(\theta_{a})$ (in blue) that exhibits a minimum in
$\theta_{a}=0$.
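The odd/even parity statement below Eq. (2.13) can be checked by finite differences on the exact potential (scale set to one, $z=0.48$; a sketch, not from the paper):

```python
# Numerical check: theta_a = 0 is a minimum of V_N for odd N and a maximum
# for even N (overall scale set to 1, z = 0.48).
import numpy as np

z = 0.48

def V_N(theta, N):
    k = np.arange(N)
    return -np.sum(np.sqrt(1 + z**2 + 2*z*np.cos(theta + 2*np.pi*k/N)))

def curvature(N, eps=1e-3):
    return (V_N(eps, N) - 2*V_N(0.0, N) + V_N(-eps, N)) / eps**2

for N in (2, 3, 4, 5):
    print(N, "minimum" if curvature(N) > 0 else "maximum")
```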
The different effective $\theta_{k}$ values translate into slightly different pion masses in each mirror world, $m_{\pi}(\theta_{k})$. At quadratic order in $m_{\pi}$, a reduction factor of up to $\sim\sqrt{3}$ results [45],
$m_{\pi}^{2}(\theta_{k})=m_{\pi}^{2}\,\sqrt{1-\frac{4m_{u}m_{d}}{\left(m_{u}+m_{d}\right)^{2}}\sin^{2}\left(\frac{\pi
k}{\mathcal{N}}\right)}\,.$ (2.14)
Interestingly, nuclear physics would be drastically different in the different
mirror copies. In particular, a new scalar pion ($\pi_{k}$) to nucleon
($N_{k}$) coupling is generated in all worlds but the SM one (see e.g. Refs.
[47, 48]):
$\mathcal{L}_{\chi\text{PT}}\supset c_{+}\sum_{k=0}^{{\cal
N}-1}\frac{m_{u}m_{d}\sin\theta_{k}}{[m_{u}^{2}+m_{d}^{2}+2m_{u}m_{d}\cos\theta_{k}]^{1/2}}\frac{\pi_{k}^{a}}{f_{\pi}}\overline{N}_{k}\tau^{a}N_{k}\,,$
(2.15)
where $c_{+}$ is an $\mathcal{O}(1)$ low-energy constant of the baryon chiral
Lagrangian. Its impact on the cosmological histories of the mirror worlds is
discussed in Ref. [42] for the $Z_{\cal N}$ scenario under discussion.
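Plugging numbers into Eq. (2.14) (with $z=0.48$ from Eq. (2.9)) makes the size of the mirror-pion mass shifts concrete:

```python
# Evaluating Eq. (2.14) numerically: mirror-world pion masses relative
# to the SM one (z = 0.48 as in Eq. (2.9)).
import numpy as np

z = 0.48
beta = 4*z / (1 + z)**2

def m_pi_ratio(k, N):
    # m_pi(theta_k) / m_pi from Eq. (2.14)
    return (1 - beta*np.sin(np.pi*k/N)**2)**0.25

print(m_pi_ratio(1, 3))   # k = 1 mirror world of N = 3: ~0.77
print((1 - beta)**0.25)   # maximal reduction (sin^2 -> 1): ~1/sqrt(3)
```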
Overall, for our world to be the one with vanishing effective $\theta$, the $\sim 10$ orders of magnitude of tuning required by the SM strong CP problem has been traded for a $1/{\cal N}$ adjustment, while ${\cal N}$ could a priori be as low as ${\cal N}=3$.444Although we work in the exact $Z_{\cal N}$ limit,
cosmological considerations require the temperature of the SM thermal bath to
be higher than that of the other $k\neq 0$ sectors [49, 50, 51]. Mechanisms to
achieve these different temperatures will be discussed in Ref. [42].
#### 2.2.1 Renormalizable portals to the SM
Renormalizable portals between the SM and its mirror copies (left implicit in
Eq. (2.7)) are allowed by the $Z_{\cal N}$ symmetry. In the following, we
classify for completeness the portal operators connecting the different $k$
sectors.
#### Higgs portals
The most general $Z_{\cal N}$ symmetric scalar potential for the Higgs
doublets $H_{k}$ of the different mirror worlds includes terms of the form
$\displaystyle{\cal V}(H_{k})$ $\displaystyle\supset\sum_{i=1}^{({\cal
N}-1)/2}\kappa_{i}\sum_{k=0}^{{\cal
N}-1}\left(\left|H_{k}\right|^{2}-\frac{v^{2}}{2}\right)\left(\left|H_{k+i}\right|^{2}-\frac{v^{2}}{2}\right)\Big{|}_{\,(\text{mod}\,{\cal
N})}\,,$ (2.16)
where $v$ denotes the Higgs vev and $\kappa_{i}$ are dimensionless parameters.
Note that the $Z_{\cal N}$-symmetric mixings between different worlds may
include next-neighbour, next-to-next neighbour etc. interactions. All
$\kappa_{i\geq 1}$ terms provide renormalizable portals between the mirror
Higgs copies ($H_{k\neq 0}$) and the SM Higgs ($H_{k=0}$).
#### Kinetic mixing
Terms mixing the ${\rm U}(1)^{k}_{Y}$ hypercharge field strengths of mirror
worlds are a priori also allowed by the $Z_{\cal N}$ symmetry,
$\mathcal{L}\supset\sum_{i=1}^{({\cal N}-1)/2}\epsilon_{i}\sum_{k=0}^{{\cal
N}-1}F^{\mu\nu}_{k}F_{\mu\nu,\,k+i}\big{|}_{\,(\text{mod}\,{\cal N})}\,,$
(2.17)
where $F^{\mu\nu}_{k}$ denote here the $k$-hypercharge field strengths and $\epsilon_{i}$ are free dimensionless parameters.
The above renormalizable portals are subject to strong cosmological
constraints, as discussed in Ref. [42]. This can suggest a ‘naturalness’ issue
for the Higgs and the kinetic portal couplings, as they cannot be forbidden in
terms of internal symmetries. Nevertheless, such small couplings may be
technically natural because of an enhanced Poincaré symmetry [52, 53]: in the
limit where non-renormalizable interactions are neglected, the $\kappa_{i\neq
0}$ and $\epsilon_{i\neq 0}\to 0$ limit corresponds to an enhanced
$\mathcal{P}^{\cal N}$ symmetry (namely an independent space-time Poincaré
transformation $\mathcal{P}$ in each sector). Those couplings are then
protected from receiving radiative corrections other than those induced by the
explicit $\mathcal{P}^{\cal N}$ breaking due to gravitational and axion-
mediated interactions, which are presumably small. In addition, other terms in
the scalar potential which depend on the details of the UV completion of the
$Z_{\cal N}$ axion scenario may be present and strongly constrained; an
example is given below in Sect. 3.1.
### 2.3 Axion potential in the large ${\cal N}$ limit
It is non-trivial to sum the series which defines the axion potential, Eq.
(2.11). However, the presence of the $Z_{\cal N}$ symmetry allows for the
application of powerful mathematical tools related to its Fourier
decomposition and holomorphicity properties, that lead to simplified
expressions in the large ${\cal N}$ limit.
#### 2.3.1 Holomorphicity bounds and convergence of Riemannian sums
As first noticed in Ref. [35], the fact that the potential in Eq. 2.11
corresponds to a Riemann sum allows one to express it as an integral plus
subleading terms,
$V_{\mathcal{N}}\left(\theta_{a}\right)=\sum_{k=0}^{{\cal
N}-1}V\left(\theta_{a}+\frac{2\pi k}{\mathcal{N}}\right)=\frac{{\cal
N}}{2\pi}\int_{0}^{2\pi}V(x)dx+\mathcal{O}({\cal N}^{0})\,,$ (2.18)
where the definition of each single-world potential,
$V\left(\theta_{a}+\frac{2\pi k}{\mathcal{N}}\right)$, can be read off Eq.
(2.8). Most importantly, the integral does not depend on the field
$\theta_{a}$ and the amplitude of the axion potential is thus solely contained
in the subleading terms. The latter are nothing but the error $E$ committed in
approximating the Riemann sum by an integral,
$\displaystyle E_{{\cal N}}(V)=\int_{0}^{2\pi}V(x)dx-\frac{2\pi}{{\cal
N}}\sum_{k=0}^{{\cal N}-1}V\left(\theta_{a}+\frac{2\pi k}{{\cal N}}\right)\,.$
(2.19)
Powerful theorems exist that describe the fast convergence of this
approximation. It can be shown, applying complex analysis, that if some
conditions are satisfied the convergence of the rectangular rule is
exponential (see e.g. Section 3 in Ref. [54]). More precisely, if
$V(\theta_{a})$ is a $2\pi$-periodic function and it can be extended to a
holomorphic function $V(w)$ in a rectangle from 0 to $2\pi$ and from $-ib$ to
$+ib$, then the error of the rectangular rule is constrained as
$\displaystyle\left|E_{{\cal N}}(V)\right|\leq\frac{4\pi M}{e^{{\cal
N}b}-1}\,,$ (2.20)
where $M$ is an upper limit on $V(w)$ in the rectangular region defined above.
As a consequence, the axion mass will be exponentially suppressed for large
${\cal N}$. More in detail, let us apply the theorem to the second derivative
of the potential,
$\displaystyle V^{\prime\prime}(\theta_{a})=\frac{m_{\pi}^{2}f_{\pi}^{2}}{2}\frac{z}{1+z}\frac{2\left(1+z^{2}\right)\cos\left(\theta_{a}\right)+z\left[3+\cos\left(2\theta_{a}\right)\right]}{\left[1+z^{2}+2z\cos\left(\theta_{a}\right)\right]^{3/2}}\,,$
(2.21)
which can be extended in the complex plane to a holomorphic function until the expression under the square root vanishes. Indeed, this function has branch points at555This result coincides with that in Ref. [35], which defines $a=\log(c+\sqrt{c^{2}-1})$, for $c=\frac{\left(m_{u}+m_{d}\right)^{2}}{2m_{u}m_{d}}-1$. Note that the variable $a$ can be simplified as $a=\log(m_{d}/m_{u})=-\log z$.
$\displaystyle w_{cut}=\pi\pm i\log z\,.$ (2.22)
Naively, it is tempting to apply the theorem with $b=\log(1/z)$ in Eq. (2.20). This is not possible though, since $V^{\prime\prime}(w)$ is not bounded in the rectangular region, due to a divergence at the branch point. As we show in Appendix A, it is possible to optimize the bound obtained above on the axion mass ($m_{a}^{2}=V^{\prime\prime}(0)/f_{a}^{2}$) by allowing a departure from $\log(1/z)$, $b=\log(1/z)-\Delta b$, which leads to
$\displaystyle\Delta b=\frac{3}{2}\frac{1}{{\cal N}}\,,$ (2.23)
where the factor $3/2$ stems from the order of the divergence of Eq. 2.21 at the branch point $w_{cut}$. Implementing this result in Eq. 2.20, it follows that
$\displaystyle m_{a}^{2}f^{2}_{a}\leq\left|E_{{\cal
N}}(V^{\prime\prime})\right|\leq\pi
m_{\pi}^{2}f_{\pi}^{2}\sqrt{\frac{1-z}{1+z}}\left(\frac{2}{3}\right)^{3/2}{\cal
N}^{3/2}\frac{1}{e^{-3/2}z^{-{\cal N}}-1}\,.$ (2.24)
In Fig. 4 we compare this analytical bound with the numerical result: our
analytical bound captures the correct dependence on ${\cal N}$ of the $Z_{\cal
N}$ axion mass,
$m_{a}^{2}f^{2}_{a}\propto
m_{\pi}^{2}f_{\pi}^{2}\,\sqrt{\frac{1-z}{1+z}}\,{\cal N}^{3/2}\,z^{\cal N}\,,$
(2.25)
although it misses the overall constant factor, which will be determined analytically below. Nevertheless, the discussion above
has the two-fold interest of determining the correct exponential suppression
and of being very general, as it only relies on the holomorphicity structure
of the potential, and not on the specific form it takes. As a consequence, the
exponential suppression of the axion mass is not spoiled when considering the
subleading chiral corrections to Eq. 2.11.
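The exponential convergence of the rectangular rule, Eqs. (2.19)-(2.20), can be made tangible with a short numerical sketch for the single-world potential (illustrative, $z=0.48$):

```python
# Illustration of the exponential convergence of the rectangular rule,
# Eqs. (2.19)-(2.20): the Riemann-sum error for the single-world potential
# drops roughly like z^N (up to powers of N). z = 0.48; scale set to 1.
import numpy as np
from scipy.integrate import quad

z = 0.48
V = lambda x: -np.sqrt(1 + z**2 + 2*z*np.cos(x)) / (1 + z)
I, _ = quad(V, 0, 2*np.pi)

def E(N, theta=0.0):
    # Error of the N-point rectangular rule, Eq. (2.19)
    k = np.arange(N)
    riemann = (2*np.pi/N) * np.sum(V(theta + 2*np.pi*k/N))
    return abs(I - riemann)

errs = [E(N) for N in (5, 10, 15)]
print(errs)  # each step of 5 in N shrinks the error by roughly z^5 ~ 0.03
```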
#### 2.3.2 Fourier expansion: axion mass from hypergeometric functions
It is possible to gain further physical insight on the origin of the
cancellations in the potential by constructing its Fourier series expansion.
As shown in Appendix B, the Fourier series of any scalar potential respecting
the $Z_{\cal N}$ shift symmetry only receives contributions from modes that
are multiples of ${\cal N}$. Moreover, if the potential can be written as a
sum of shifted contributions, as it is the case for the $Z_{\cal N}$ axion
under discussion –see Eq. 2.18– then the Fourier series of the total potential
$V_{{\cal N}}(\theta_{a})$ can be easily obtained in terms of the Fourier
series of a single $V(\theta_{a})$ term, leading to
$\displaystyle V_{\mathcal{N}}\left(\theta_{a}\right)=2{\cal
N}\sum_{t=1}^{\infty}\hat{V}(t{\cal N})\cos(t\,{\cal N}\theta_{a})\,,$ (2.26)
where $\hat{V}(n)$ denotes the coefficient of the Fourier series of the single-world potential $V(\theta_{a})$,
$\displaystyle\hat{V}(n)=-\frac{m_{\pi}^{2}f_{\pi}^{2}}{2\pi(1+z)}\int_{0}^{2\pi}\cos(nt)\,\sqrt{1+z^{2}+2z\cos\left(t\right)}\,dt\,.$
(2.27)
It is convenient to express this integral in terms of the Gauss hypergeometric
function (see Appendix C and Ref. [55] for conventions and relevant
properties),
$\displaystyle\hat{V}(n)=(-1)^{n+1}\frac{m_{\pi}^{2}f_{\pi}^{2}}{1+z}\,z^{n}\,\frac{\Gamma(n-1/2)}{\Gamma(-1/2)\,n!}\,{}_{2}F_{1}\left(-1/2,\,n-1/2;\,n+1;\,z^{2}\right)\,.$ (2.30)
As shown in Appendix C, in the large $n$ limit (the relevant one at large ${\cal N}$, since only modes with $n=t\,{\cal N}$ contribute) this expression further simplifies to
$\displaystyle\hat{V}(n)\simeq\,(-1)^{n}\,\frac{m_{\pi}^{2}f_{\pi}^{2}}{2\sqrt{\pi}}\,\sqrt{\frac{1-z}{1+z}}n^{-3/2}\,z^{n}\,,$
(2.31)
leading to the following expression for the total potential
$\displaystyle V_{\mathcal{N}}\left(\theta_{a}\right)$
$\displaystyle\simeq\frac{m_{\pi}^{2}f_{\pi}^{2}}{\sqrt{\pi}}\,\sqrt{\frac{1-z}{1+z}}{\cal
N}^{-1/2}\,\sum_{t=1}^{\infty}\,(-1)^{t\,{\cal N}}\,t^{-3/2}\,z^{t\,{\cal
N}}\cos(t\,{\cal N}\theta_{a})$
$\displaystyle\simeq\frac{m_{\pi}^{2}f_{\pi}^{2}}{\sqrt{\pi}}\,\sqrt{\frac{1-z}{1+z}}{\cal
N}^{-1/2}\,\,(-1)^{{\cal N}}\,z^{{\cal N}}\cos({\cal N}\theta_{a})\,,$ (2.32)
where in the second line we have kept only the first mode in the expansion, as
the higher modes are exponentially suppressed with respect to it. The total
potential is thus safely approximated by a single cosine. It trivially follows
from Eq. (2.32) that $\theta_{a}=0$ is a minimum of the total potential for
${\cal N}$ odd, and a maximum for ${\cal N}$ even. Here and throughout this work, purely constant terms in the potential are omitted, as they have no impact on the axion mass.
Eq. (2.32) can be rewritten as
$V_{\mathcal{N}}\left(\theta_{a}\right)\simeq-\frac{m_{a}^{2}f_{a}^{2}}{{\cal
N}^{2}}\,\cos({\cal N}\theta_{a})\,,$ (2.33)
where the $Z_{\cal N}$ axion mass $m_{a}$ in the large ${\cal N}$ limit is
finally given by a compact and analytical formula,
$m_{a}^{2}\,f^{2}_{a}\simeq\frac{m_{\pi}^{2}f_{\pi}^{2}}{\sqrt{\pi}}\,\sqrt{\frac{1-z}{1+z}}\,{\cal
N}^{3/2}\,z^{\cal N}\,.$ (2.34)
The overall coefficient is thus determined, in addition to exhibiting the
$z^{\cal N}$ exponential suppression of the potential and the specific ${\cal
N}$ dependence previously argued in Eq. 2.25 from holomorphicity arguments. In
summary, in the large ${\cal N}$ limit the axion mass is reduced with respect
to that of the QCD axion by a factor
$\displaystyle\bigg{(}\frac{m_{a}}{m_{a}^{\rm QCD}}\bigg{)}^{2}\simeq\frac{1}{\sqrt{\pi}}\sqrt{1-z^{2}}(1+z)\,{\cal N}^{3/2}z^{{\cal N}-1}\,\,,$ (2.35)
Figure 4: Comparison between different evaluations of the axion mass as a function of ${\cal N}$. Our large-${\cal N}$ analytical result in Eq. 2.35 (green curve) provides a remarkably good approximation to the numerical evaluation (dots).
where $m_{a}^{\rm QCD}$ denotes the mass of the canonical QCD axion as given
in Eq. (1.1).666Note that, although ${\cal N}=1$ denotes the SM world,
$m_{a}^{\rm QCD}$ does not correspond to ${\cal N}=1$ in Eq. (2.34), because
the latter is only valid in the large ${\cal N}$ limit. This ratio is
illustrated in Fig. 4, which compares the numerical behaviour with: $a)$ the
analytical dependence previously proposed in Ref. [35]; $b)$ that from the
holomorphicity bound in Eq. (2.24); $c)$ the full analytical result in Eq.
(2.35). Our analytical results improve on previous ones by Hook in several respects: $i)$ the explicit determination of the exponential behavior controlled by $z^{{\cal N}}\sim 2^{-{\cal N}}$; $ii)$ the improved ${\cal N}$ dependence from the factor ${\cal N}^{3/2}$; $iii)$ the $z$-dependence of the axion mass in $\sqrt{\frac{1-z}{1+z}}$; $iv)$ the determination of the prefactor $1/\sqrt{\pi}$.
In practice, the large ${\cal N}$ results in Eqs. (2.32)-(2.35) turn out to be
an excellent approximation already for ${\cal N}=3$.
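The quality of the large-${\cal N}$ formula can be checked numerically by comparing Eq. (2.34) with the curvature of the exact potential at the origin (a sketch, with $z=0.48$ and everything in units of $m_\pi^2 f_\pi^2$):

```python
# Comparing the analytic large-N mass formula, Eq. (2.34), with a direct
# numerical evaluation of the curvature of the exact potential at theta = 0
# (z = 0.48, units of m_pi^2 f_pi^2).
import numpy as np

z = 0.48

def m2f2_numerical(N, eps=1e-3):
    k = np.arange(N)
    V = lambda th: -np.sum(np.sqrt(1 + z**2 + 2*z*np.cos(th + 2*np.pi*k/N))) / (1 + z)
    return (V(eps) - 2*V(0.0) + V(-eps)) / eps**2   # = m_a^2 f_a^2

def m2f2_analytic(N):
    return np.sqrt((1 - z)/(1 + z)) / np.sqrt(np.pi) * N**1.5 * z**N  # Eq. (2.34)

for N in (3, 5, 7, 9):
    print(N, m2f2_numerical(N) / m2f2_analytic(N))  # ratios close to 1
```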
## 3 UV completions and alternative scenarios
Up to this point, the analysis has been largely independent from the precise
UV completion of the $Z_{\cal N}$ axion scenario. For the sake of
illustration, in this section we provide two UV completions of the axion
effective Lagrangian in Eq. (2.7). We also briefly discuss an alternative
implementation of the $Z_{\cal N}$ symmetry in which the resulting axion is
heavier than usual (rather than lighter).
### 3.1 KSVZ $Z_{\cal N}$ axion
Consider ${\cal N}$ copies of vector-like Dirac fermions $\mathcal{Q}_{k}$
($k=0,\ldots,{\cal N}-1$) transforming in the fundamental representation of
$\text{QCD}_{k}$, together with a gauge singlet complex scalar $\mathcal{S}$. The action
of the $Z_{\cal N}$ symmetry on these fields is postulated to be
$\displaystyle Z_{\cal N}:\ \mathcal{Q}_{k}$
$\displaystyle\to\mathcal{Q}_{k+1\,(\text{mod}\,{\cal N})}\,,$ (3.1)
$\displaystyle\quad\mathcal{S}$ $\displaystyle\to e^{2\pi i/{\cal
N}}\mathcal{S}\,,$ (3.2)
while the SM Lagrangian and its copies obey Eq. (2.5) under $Z_{\cal N}$. The
most general Lagrangian containing the new degrees of freedom then reads
$\mathcal{L}_{\rm UV}=|\partial_{\mu}\mathcal{S}|^{2}+\sum_{k=0}^{{\cal
N}-1}\left[\overline{\mathcal{Q}}_{k}i\not{\mathcal{D}}\mathcal{Q}_{k}+ye^{2\pi
ik/{\cal
N}}\mathcal{S}\overline{\mathcal{Q}}_{k}P_{R}\mathcal{Q}_{k}+\text{h.c.}\right]-{\cal
V}(\mathcal{S},H_{k})\,,$ (3.3)
where $P_{R}\equiv(1+\gamma_{5})/2$. It exhibits an accidental ${\rm
U}(1)_{\rm PQ}$ symmetry
$\displaystyle{\rm U}(1)_{\rm PQ}:\ \mathcal{Q}_{k}$ $\displaystyle\to
e^{-i\gamma_{5}\frac{\alpha}{2}}\mathcal{Q}_{k}\,,$ (3.4)
$\displaystyle\quad\mathcal{S}$ $\displaystyle\to e^{i\alpha}\mathcal{S}\,,$
(3.5)
that is spontaneously broken by the vev of $\mathcal{S}$, $v_{\mathcal{S}}$,
via a proper ‘Mexican-hat’ potential ${\cal V}(\mathcal{S},H_{k})$, whose
structure is discussed below. Decomposing the $\mathcal{S}$ field in a polar
basis,
$\mathcal{S}=\frac{1}{\sqrt{2}}(v_{\mathcal{S}}+\rho)e^{i\frac{a}{v_{\mathcal{S}}}}\,,$
(3.6)
in terms of canonically normalized radial ($\rho$) and axion modes, the latter
can be rotated away from the Yukawa term in Eq. (3.3) via an axion-dependent
axial transformation
$\mathcal{Q}_{k}\to e^{-i\gamma_{5}\left(\frac{a}{2v_{\mathcal{S}}}+\frac{\pi
k}{{\cal N}}\right)}\mathcal{Q}_{k}\,.$ (3.7)
The heavy quarks, with real mass
$m_{\mathcal{Q}_{k}}=\frac{y_{\mathcal{S}}v_{\mathcal{S}}}{\sqrt{2}}$, can
next be integrated out in order to obtain the low-energy axion effective field
theory.^7 Note that the $k$-dependent phases were also crucially removed from
the Yukawas, in order to properly integrate out the heavy $\mathcal{Q}_{k}$
fields. Because the transformation in Eq. (3.7) is $\text{QCD}_{k}$ anomalous,
with anomaly factor $2N_{k}=1$, the resulting axion effective Lagrangian is
given by
$\delta\mathcal{L}_{\rm UV}=\sum_{k=0}^{{\cal
N}-1}\frac{\alpha_{s}}{8\pi}\left(\frac{a}{v_{\mathcal{S}}}+\frac{2\pi
k}{{\cal N}}\right)G_{k}\widetilde{G}_{k}\,,$ (3.8)
which yields precisely Eq. (2.7), after the identification
$v_{\mathcal{S}}=f_{a}$.
Furthermore, the presence of the singlet scalar $\mathcal{S}$ introduces new
scalar portals between the SM and its mirror worlds, in addition to the
generic ones in Eq. (2.16). The scalar potential in the latter equation should
thus be enlarged by
${\cal V}(H_{k})\longrightarrow{\cal V}(\mathcal{S},H_{k})={\cal
V}(H_{k})+\delta{\cal V}\,,$ (3.9)
with
$\displaystyle\delta{\cal V}$
$\displaystyle=\lambda_{\mathcal{S}}\left(\left|\mathcal{S}\right|^{2}-\frac{f_{a}^{2}}{2}\right)^{2}+\kappa_{\mathcal{S}}\left(\left|\mathcal{S}\right|^{2}-\frac{f_{a}^{2}}{2}\right)\sum_{k=0}^{{\cal
N}-1}\left(\left|H_{k}\right|^{2}-\frac{v^{2}}{2}\right)\,.$ (3.10)
Note that, because the Higgs vev $v$ is the same in all $k$ sectors due to the
unbroken $Z_{\cal N}$ symmetry, the required hierarchy of scales is obtained
with a single fine-tuning between $v$ and $f_{a}$, as for elementary canonical
QCD axions.
It is also possible to choose the representations of the $\mathcal{Q}_{k}$
fields to transform non-trivially under the $\text{electroweak}_{k}$ gauge
groups, so that they could e.g. mix with $\text{SM}_{k}$ quarks in a $Z_{\cal
N}$ invariant way and decay efficiently in the early Universe, thus avoiding
possible issues with colored/charged stable relics in the SM sector [9, 11].
Depending on the
$\mathcal{Q}_{k}$ quantum numbers, this would change in turn the value of the
electromagnetic-to-QCD anomaly ratio of the PQ current, usually denoted as
$E/N$, which enters the axion-photon coupling.
#### 3.1.1 Peccei-Quinn quality
The threat posed to traditional QCD axion models by non-perturbative quantum
gravitational corrections [56, 57, 58, 59, 60, 61, 62, 63, 64, 65] may also
affect the models discussed here, as $f_{a}$ is not very far from the Planck
scale. These contributions are usually parametrized via effective operators,
suppressed by powers of the Planck mass, that could explicitly violate the PQ
symmetry and thus spoil the solution to the strong CP problem [56, 57, 58,
59].^8 UV sources of PQ breaking can be avoided in some invisible axion
constructions within a variety of extra assumptions or frameworks [66, 67, 68,
69, 70, 71, 72, 73, 74, 75, 76, 77, 78], or be arguably negligible under
certain conditions [79].
Figure 5: Parameter space in the $\\{{\cal N},f_{a}\\}$ plane that is free
from the PQ quality problem, within the KSVZ-like UV completion of the
reduced-mass $Z_{\cal N}$ axion, for the PQ-breaking parameter values
indicated in the text. The PQ protected region has a sizable overlap with the
regions of parameter space where the $Z_{\cal N}$ axion can account for the
total DM relic density, see Ref. [42].
In the context of the KSVZ $Z_{\cal N}$ axion model above, the exponentially
small axion mass could seem to worsen this threat, increasing the sensitivity
to explicit PQ-breaking effective operators. Interestingly, promoting the
in-built $Z_{\cal N}$ symmetry to a gauge symmetry leads to an accidental
${\rm U}(1)_{\rm PQ}$ invariance which, for large ${\cal N}$, is efficiently
protected from those extra sources of explicit breaking. Indeed, the
lowest-dimensional
PQ-violating operator in the scalar potential compatible with the $Z_{\cal N}$
symmetry is $\mathcal{S}^{\cal N}$, leading to an explicitly PQ-breaking
contribution to the potential of the form
$V_{\rm PQ-break.}=c\frac{\mathcal{S}^{\cal N}}{M_{\rm Pl}^{{\cal
N}-4}}+\text{h.c.}\supset\frac{\left|c\right|}{2^{{\cal
N}/2-1}}\frac{f_{a}^{\cal N}}{M_{\rm Pl}^{{\cal N}-4}}\cos\left({\cal
N}\theta_{a}+\delta\right)\,,$ (3.11)
where $M_{\rm Pl}=1.22\times 10^{19}$ GeV is the Planck mass and $c$ is a
dimensionless coefficient with phase $\delta\equiv\text{Arg}\ c$. Considering
now $V_{\cal N}(\theta_{a})+V_{\rm PQ-break.}$, expanding for small
$\theta_{a}$ the axion potential $V_{\cal N}(\theta_{a})\approx V_{\cal
N}(0)+\frac{1}{2}m^{2}_{a}f^{2}_{a}\theta_{a}^{2}$, and solving the tadpole
equation, the induced effective $\theta$ parameter in the SM sector reads
$\displaystyle\left\langle\theta_{a}\right\rangle$
$\displaystyle\simeq\frac{\left|c\right|{\cal N}f_{a}^{\cal N}M_{\rm
Pl}^{4}\sin\delta}{2^{{\cal N}/2-1}m^{2}_{a}f_{a}^{2}M_{\rm Pl}^{\cal
N}-\left|c\right|{\cal N}^{2}f_{a}^{\cal N}M_{\rm Pl}^{4}\cos\delta}$
$\displaystyle\simeq
2\sqrt{\pi}\left|c\right|\sin\delta\sqrt{\frac{1+z}{1-z}}\frac{M_{\rm
Pl}^{4}}{m_{\pi}^{2}f_{\pi}^{2}}\frac{1}{\sqrt{{\cal
N}}}\left(\frac{f_{a}}{\sqrt{2}zM_{\rm Pl}}\right)^{\cal N}\,,$ (3.12)
where $m^{2}_{a}$ from Eq. (2.34) has been used, and in the last step we
neglected the second term in the denominator of the first line of Eq. (3.12):
this is always justified in the $\left\langle\theta_{a}\right\rangle\lesssim
10^{-10}$ regime.
In summary, unlike the customary ad-hoc $Z_{{\cal N}}$ protection mechanism
for the standard KSVZ axion, in the $Z_{{\cal N}}$ axion scenario under
discussion the discrete symmetry is already present by construction. Note that
the scaling with ${\cal N}$ is slightly different as compared to the standard
KSVZ axion, due to the enhancement factor $1/z^{\cal N}$. But eventually the
$\left(f_{a}/M_{\rm Pl}\right)^{\cal N}$ suppression dominates and provides an
efficient protection mechanism, even though the axion mass is exponentially
suppressed. For the sake of an estimate, Fig. 5 shows the regions in the
$\\{{\cal N},f_{a}\\}$ plane that saturate the nEDM bound for
$\left|c\right|=1$ and $\sin\delta=1$.
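To make the scaling of Eq. (3.12) concrete, the following back-of-the-envelope sketch (all inputs are illustrative assumptions: $|c|=\sin\delta=1$, $z=0.48$, and a sample $f_{a}=10^{10}$ GeV) finds the smallest ${\cal N}$ for which the induced $\left\langle\theta_{a}\right\rangle$ drops below the nEDM bound:

```python
import math

z    = 0.48          # assumed m_u/m_d
M_Pl = 1.22e22       # Planck mass in MeV
m_pi = 135.0         # MeV
f_pi = 92.1          # MeV

def theta_induced(N, f_a_GeV):
    """Estimate of <theta_a> from Eq. (3.12), with |c| = sin(delta) = 1."""
    f_a = f_a_GeV * 1e3  # GeV -> MeV
    prefactor = 2 * math.sqrt(math.pi) * math.sqrt((1 + z) / (1 - z))
    return (prefactor * M_Pl**4 / (m_pi**2 * f_pi**2)
            / math.sqrt(N) * (f_a / (math.sqrt(2) * z * M_Pl))**N)

# Smallest (odd) number of worlds passing the nEDM bound for f_a = 1e10 GeV:
N_safe = next(N for N in range(3, 101, 2) if theta_induced(N, 1e10) < 1e-10)
```

For this choice the protection sets in around ${\cal N}\simeq 11$: the $(f_{a}/M_{\rm Pl})^{\cal N}$ factor suppresses $\left\langle\theta_{a}\right\rangle$ by many orders of magnitude per additional world, which is the qualitative content of Fig. 5.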
### 3.2 Composite $Z_{\cal N}$ axion
It is also possible to construct a UV completion of the $Z_{\cal N}$ scenario
which corresponds to a dynamical (composite) axion à la Kim-Choi [40, 41],
without extending its exotic fermionic content. In the original version of
that model, the SM fields are not charged under the PQ symmetry while two
exotic massless quarks, $\psi$ and $\chi$, transform under an extra confining
“axi-color” group ${\rm SU}(\widetilde{N})_{a}$ and one of them, $\psi$, is
also a triplet of QCD. Upon confinement of the axi-color group at a large
scale $\Lambda_{a}\sim f_{a}\gg\Lambda_{\mathrm{QCD}}$, pseudo-Goldstone
bosons composed of the exotic quarks emerge. All but one of them are coloured
under QCD and become safely heavy. The light remaining one is the composite
axion, whose mass obeys the usual formula for QCD axions Eq. (1.1).
We implement the Kim-Choi idea within our $Z_{\cal N}$ framework without
increasing the number of massless exotic fermion representations. The
fermion $\psi$ is simply extended to be now a triplet under all
$\text{QCD}_{k}$ mirror sectors, see Table 1. The axion field will thus be
unique and will couple to all anomalous terms.
$\displaystyle\begin{array}[]{c|c|c|c|c|c|c}&{\rm SU}(\widetilde{N})_{a}&{\rm
SU}(3)_{c,\,0}&\dots&{\rm SU}(3)_{c,\,k}&\dots&{\rm SU}(3)_{c,\,{\cal N}-1}\\
\hline\psi&\Box&\mathbf{3}&\dots&\mathbf{3}&\dots&\mathbf{3}\\
\chi&\Box&\mathbf{1}&\dots&\mathbf{1}&\dots&\mathbf{1}\end{array}$ (3.16)
Table 1: Exotic fermionic sector of the $Z_{\cal N}$ composite axion model.
Upon ${\rm SU}(\widetilde{N})_{a}$ confinement at the large scale of order
$f_{a}$, the $\text{QCD}_{k}$ couplings $\alpha^{k}_{s}$ can be neglected, and
therefore a large global flavor symmetry arises in the exotic fermionic
sector: ${\rm SU}(3^{\cal N}+1)_{L}\times{\rm SU}(3^{\cal N}+1)_{R}\times{\rm
U}(1)_{V}$.^9 The ${\rm U}(1)_{A}$ of the exotic sector is explicitly broken
by the ${\rm SU}(\widetilde{N})_{a}$ anomaly. This symmetry is spontaneously
broken down to ${\rm SU}(3^{\cal N}+1)_{L+R}\times{\rm U}(1)_{V}$ by the
exotic fermion condensates. Among the resulting Goldstone bosons, the
$\text{QCD}_{k}$ singlet corresponds to the composite axion. Its associated PQ
current reads (with $f_{\mathrm{PQ}}\equiv\widetilde{N}f_{a}$)
$\displaystyle
j_{\mathrm{PQ}}^{\mu}=\overline{\psi}\gamma^{\mu}\gamma^{5}\psi-3^{\cal
N}\overline{\chi}\gamma^{\mu}\gamma^{5}\chi\equiv
f_{\mathrm{PQ}}\partial^{\mu}a\,,$ (3.17)
which corresponds to the only element of the Cartan sub-algebra of ${\rm
SU}(3^{\cal N}+1)$ that has a vanishing anomaly coefficient with respect to
${\rm SU}(\widetilde{N})_{a}$, but a non-vanishing one with respect to all the
$\text{QCD}_{k}$ gauge groups.
Without further elements the model would be viable, but all mirror worlds
would have the same $\theta$-parameter: a heavier than usual axion would
result. A simple $Z_{\cal N}$ implementation which leads instead to relatively
shifted potentials, and thus to a reduced axion mass, is to have a relative
phase between the arguments of the determinants of the quark mass matrices of
neighbouring mirror worlds,
$\displaystyle\arg\left(\mbox{det}\,\left(Y_{u}\,Y_{d}\right)\right)_{k+1}=\arg\left(\mbox{det}\,\left(Y_{u}\,Y_{d}\right)\right)_{k}+\frac{2\pi}{{\cal
N}}\,,$ (3.18)
where $Y_{u}$ and $Y_{d}$ denote the Yukawa matrices for the up and down quark
sectors, respectively. One of the many possible $Z_{\cal N}$ charge
assignments for the quarks that yield Eq. (3.18) is that in which only the
right-handed up quarks would transform as^10 (a factor of $1/3$ in
the phase takes into account that there are 3 fermion families)
$\displaystyle Z_{\cal N}:U_{R}^{k}$ $\displaystyle\to e^{i{2\pi}/(3{\cal
N})}U_{R}^{k+1}\,,$ (3.19)
corresponding to a Yukawa quark Lagrangian of the form
$\displaystyle\mathcal{L}_{Y}=-\sum_{k=0}^{{\cal N}-1}\big{\\{}e^{i{2\pi
k}/(3{\cal
N})}\overline{Q}_{L}Y_{u}\widetilde{H}U_{R}+\overline{Q}_{L}Y_{d}HD_{R}\big{\\}}_{k}+\text{h.c.}\,.$
(3.20)
The resulting low-energy axion effective field theory is then the desired one,
as in Eq. (2.7).
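The charge assignment can be checked explicitly: with 3 families, each up-type Yukawa of world $k$ carries the phase $e^{2\pi ik/(3{\cal N})}$, so $\arg\det(Y_{u}Y_{d})_{k}$ grows by $2\pi/{\cal N}$ per world, as required by Eq. (3.18). A toy numerical verification (diagonal unit Yukawas are an illustrative assumption, not the paper's choice):

```python
import cmath
import math

N = 5                      # illustrative number of worlds
two_pi = 2 * math.pi

for k in range(N):
    # Toy Y_u of world k: phase * identity_{3x3}, so its determinant is phase**3.
    det_k = cmath.exp(2j * math.pi * k / (3 * N)) ** 3
    # arg det(Y_u Y_d)_k should equal 2 pi k / N (mod 2 pi), cf. Eq. (3.18):
    diff = (cmath.phase(det_k) - two_pi * k / N) % two_pi
    assert min(diff, two_pi - diff) < 1e-12
```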
In this $Z_{\cal N}$ composite axion model only the exotic fermions are
charged under the PQ symmetry, while the $Z_{\cal N}$ charges are carried
solely by SM quarks. This means that the $Z_{\cal N}$ and PQ symmetries are
not directly linked. As a consequence, gauging the in-built $Z_{\cal N}$
symmetry would not soften the PQ quality problem, contrary to the case of the
KSVZ $Z_{\cal N}$ axion model discussed above. Our $Z_{\cal N}$
composite axion model is then subject to the usual PQ quality threat. Standard
softening solutions often applied to composite axion models could be explored,
e.g. those based on a chiral gauging of the global symmetry of the coset space
or on introducing a moose structure [66, 69, 73, 80].
### 3.3 Ultra-light QCD axions
The term ultra-light axions usually refers to the mass range
$m_{a}\in\left[10^{-33},10^{-10}\right]$ eV (with the extrema of the interval
corresponding respectively to an axion Compton wavelength of the size of the
Hubble horizon and to the Schwarzschild radius of a stellar mass black hole).
As a theoretical motivation for ultra-light axions, the so-called string
Axiverse [81] is often invoked, according to which a plenitude of ultra-light
axions populating mass regions down to the Hubble scale $10^{-33}$ eV is a
generic prediction of String Theory, although without a direct reference to
the solution of the strong CP problem.111111See e.g. Ref. [82] for an ultra-
light scalar field whose mass is protected by a discrete $Z_{N}$ symmetry but
does not solve the strong CP problem. On the other hand, according to the
usual QCD mass vs. $f_{a}$ relation, Eq. (1.1), axion masses below the peV
scale correspond to axion decay constants larger than the Planck mass, and
hence are never entertained within canonical QCD axion models. The $Z_{\cal
N}$ axion framework discussed in the present work, in contrast, allows one to
populate the sub-peV axion mass region while keeping sub-Planckian axion decay
constants, with the added advantage of providing a direct solution to the
strong CP problem. As shown in Sec. 4.2, the tantalizing prospects for testing
the $Z_{\cal N}$ scenario, through observational data on very dense stellar
objects and gravitational waves, can sweep through the discovery region of the
ultra-light axion range.
### 3.4 A heavier-than-QCD axion
A remark is in order regarding the $Z_{\cal N}$ charge of the axion in the
different sectors. If the implementation of the $Z_{\cal N}$ symmetry were
such that the ${\cal N}$ world replicas are degenerate but the axion field is
exactly the same in all of them, that is, if Eqs. (2.5)–(2.6) were replaced by
$\displaystyle Z_{\cal N}:\quad\text{SM}_{k}$
$\displaystyle\longrightarrow\text{SM}_{k+1\,(\text{mod}\,{\cal N})}$ (3.21)
$\displaystyle a$ $\displaystyle\longrightarrow a\,,$ (3.22)
the potentials of the different mirror worlds would not be relatively shifted
but exactly superpose. The axion would then be a factor $\sqrt{\cal N}$
heavier than the usual QCD axion in Eq. (1.1). This scenario was proposed in
Ref. [36] for a $Z_{2}$ symmetry with just one mirror world degenerate with
the SM, but its generalization to ${\cal N}$ copies is trivial. Such a
heavier-than-QCD axion solution is viable, and it would transform the ALP
arena to the right of the canonical QCD axion band into solutions to the SM
strong CP problem. The axion $Z_{\cal N}$ charge assignment explored
throughout this work, Eq. (2.6), results instead in lighter-than-QCD axions,
that is, solutions located to the left of the QCD axion band. Note that this
option has a comparatively much larger impact: a natural exponential
suppression of the axion mass, $\propto z^{\cal N}$, as a byproduct of the
cancellations between the mirror potentials, Eq. (2.34), instead of the mild
$\sqrt{\cal N}$ enhancement just discussed.
All in all, to explore the right-hand side region of the QCD axion band for
solutions to the strong CP problem, other heavy axion scenarios proposed in
the literature seem more efficient and appealing (e.g. those with mirror
worlds much heavier than the SM, or scenarios with novel confining scales much
larger than $\Lambda_{\rm QCD}$, as mentioned in Sect. 1).
Figure 6: Limits on the axion-photon coupling as a function of the axion mass.
Laboratory constraints [83, 84, 85, 86, 87, 88, 89, 90] and astrophysical
bounds [91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101] are shown in blue and
green, respectively. Projected sensitivities appear in translucent colors
delimited by dashed lines. The orange oblique lines represent the theoretical
prediction for the $Z_{\cal N}$ axion-photon coupling, assuming $E/N=0$, for
different (odd) numbers of worlds ${\cal N}$. These lines are solid for the
regions of the parameter space in which the KSVZ UV completion of the $Z_{\cal
N}$ axion is free from the PQ quality problem, and dashed otherwise. The secondary
vertical axis shows the corresponding axion decay constant $f_{a}$ if $E/N=0$
is assumed. Supplementary constraints in case the axion is assumed to account
for DM can be found in Ref. [42]. Axion limits adapted from Ref. [102].
## 4 Experimental probes of down-tuned axions
The $Z_{\cal N}$ axion with reduced mass can provide a solution to the SM
strong CP problem, independently of whether it accounts or not for the DM
content of the Universe. It is hence interesting to get a perspective on the
experimental panorama that does not rely on the supplementary assumption that
the axion may be the DM particle: all experimental bounds and prospects below
will be independent of that hypothesis. On the other hand, Ref. [42] will
focus on experimental probes that do rely on it.
### 4.1 Axion coupling to photons
From an experimental point of view, a highly relevant axion coupling is that
to photons, defined via the Lagrangian term
$\delta\mathcal{L}=\frac{1}{4}g_{a\gamma}aF\tilde{F}$ as [103, 45]
$g_{a\gamma}=\frac{\alpha}{2\pi f_{a}}(E/N-1.92(4))\,,$ (4.1)
where $E$ and $N$ denote model-dependent anomalous electromagnetic and strong
contributions, respectively. Fig. 6 shows the parameter space of the reference
$Z_{\cal N}$ axion model (with $E/N=0$) in the coupling vs. mass plane.
Predictions for the axion-photon coupling are obtained by rescaling the
$Z_{\cal N}$ axion mass in Eq. (2.34) for different values of ${\cal N}$.
Present axion limits and projected sensitivities are displayed as filled and
transparent areas, respectively.
The yellow band depicts the canonical QCD axion solution, which obeys the
well-known relation in Eq. (1.1). The oblique lines indicate instead the
$Z_{\cal N}$ lighter axion solutions to the strong CP problem, as a function
of the number of mirror worlds ${\cal N}$, see Eq. (2.34). Note that the
overall effect of a reduced mass axion is simply a shift towards the left of
the parameter space: each of those oblique lines can be considered to be the
center of a displaced yellow band. It is particularly enticing that
experiments set a priori to only hunt for ALPs may in fact be targeting
solutions to the strong CP problem.
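The displaced-band picture can be reproduced with a small sketch (the canonical normalization $m_{a}^{\rm QCD}\simeq 5.7\,\mu\text{eV}\times(10^{12}\,\text{GeV}/f_{a})$ and $z=0.48$ are assumed inputs here, not values taken from the paper's numerics):

```python
import math

alpha = 1 / 137.036
z = 0.48

def g_agamma(f_a_GeV, E_over_N=0.0):
    """Axion-photon coupling in GeV^-1, Eq. (4.1)."""
    return alpha / (2 * math.pi * f_a_GeV) * (E_over_N - 1.92)

def m_a_eV(f_a_GeV, N):
    """Z_N axion mass: canonical QCD value rescaled by Eq. (2.35) (valid for N >= 3)."""
    m_qcd = 5.7e-6 * (1e12 / f_a_GeV)   # eV, assumed canonical normalization
    suppression_sq = (math.sqrt(1 - z**2) * (1 + z)
                      * N**1.5 * z**(N - 1) / math.sqrt(math.pi))
    return m_qcd * math.sqrt(suppression_sq)

# At fixed f_a the coupling g_agamma is unchanged while the mass falls steeply
# with N: each oblique line in Fig. 6 is a horizontally displaced QCD band.
```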
### 4.2 Finite density constraints on $f_{a}$
This subsection summarizes the model-independent constraints on $f_{a}$ for
the $Z_{\cal N}$ scenario under discussion. The result of the analysis is
illustrated in Fig. 8. Interestingly, apart from the usual constraints
stemming from the SN1987A [104] and black hole superradiance measurements
[105, 106, 107, 108] (depicted in purple), novel bounds apply to the
exceptionally light $Z_{\cal N}$ axion due to finite density effects. Indeed,
it has been recently pointed out in Refs. [37, 38] that finite density media
may have a strong impact on the physics of very light axions or ALPs. In those
media, the minimum of the total potential may be shifted to $\pi$. This has a
number of phenomenological consequences that span from the modification of the
nuclear processes in stellar objects due to $\theta\sim\mathcal{O}(1)$, to
modifications in the orbital decay of binary systems (and subsequently in the
emitted gravitational waves).
For the scenario considered here, the important point is that a background
made only of ordinary matter breaks the $Z_{\cal N}$ symmetry. This hampers
the symmetry-induced cancellations in the potential which led to a reduced-
mass axion in vacuum: the effective axion mass will be larger within a dense
medium.
We will first elaborate on the $Z_{\cal N}$ axion potential in a nuclear
medium. Following Refs. [37, 109], one can compute the finite density effects
on the axion potential by considering the quark condensates in a medium made
of non-relativistic neutrons and protons [110, 111]. Applying the Hellmann-
Feynman theorem, the quark condensate at a finite density $n_{N}$ of a given
nucleon $N$ can be expressed as
$\displaystyle\langle\overline{q}q\rangle_{n_{N}}=\langle\overline{q}q\rangle_{0}\left(1-\frac{\sigma_{N}n_{N}}{m_{\pi}^{2}f_{\pi}^{2}}\right)\,,$
(4.2)
where
$\langle\overline{q}q\rangle_{0}=\frac{1}{2}\left(\left\langle\overline{u}u\right\rangle+\left\langle\overline{d}d\right\rangle\right)\equiv-\Sigma_{0}$
is the quark condensate in vacuum –see Eq. (2.10)– and $\sigma_{N}$ is defined
by
$\displaystyle\sigma_{N}\equiv m_{q}\frac{\partial M_{N}}{\partial m_{q}}\,,$
(4.3)
Figure 7: Example of the in-medium potential dependence as a function of the
nuclear density for ${\cal N}=5$. For large densities (light green) the total
potential develops a minimum in $\theta_{a}\sim\pi$.
where $m_{q}\equiv\frac{1}{2}(m_{u}+m_{d})$ and $M_{N}$ is the mass of the
nucleon $N$. Because the $Z_{\cal N}$ potential is proportional to the quark
condensate, see Eq. (2.8), we can simply obtain the potential within a SM
nuclear medium $V^{f.d.}_{\mathcal{N}}\left(\theta_{a},n_{N}\right)$ by
weighting the SM (i.e. $k=0$) contribution in the vacuum potential by the
factor in Eq. (4.2), that is,
$\displaystyle V^{f.d.}_{\mathcal{N}}\left(\theta_{a},n_{N}\right)$
$\displaystyle\simeq\left(1-\frac{\sigma_{N}n_{N}}{m_{\pi}^{2}f_{\pi}^{2}}\right)V(\theta_{a})+\sum_{k=1}^{{\cal
N}-1}V(\theta_{a}+2\pi k/{\cal N})$ (4.4)
$\displaystyle=-\frac{\sigma_{N}n_{N}}{m_{\pi}^{2}f_{\pi}^{2}}V(\theta_{a})+\sum_{k=0}^{{\cal
N}-1}V(\theta_{a}+2\pi k/{\cal N})\xrightarrow{{\cal N}\gg
1\,\,}-\frac{\sigma_{N}n_{N}}{m_{\pi}^{2}f_{\pi}^{2}}V(\theta_{a})\,.$
In the last step of these expressions the large ${\cal N}$ limit has been
taken, which allowed us to neglect the term corresponding to the exponentially
reduced axion potential in vacuum (see Eq. (2.32)). This shows that, if the
nucleon density is large enough, the $Z_{\cal N}$ asymmetric background spoils
the cancellations among the mirror world contributions to the potential, in
such a way that the total potential in matter is proportional to minus the SM
one in vacuum $V(\theta_{a})$. Therefore, the minimum of the potential is
located at $\theta_{a}=\pi$. More precisely,
$\displaystyle
V^{f.d.}_{\mathcal{N}}\left(\theta_{a},n_{N}\right)\xrightarrow{{\cal N}\gg
1\,\,}\frac{m_{\pi}^{2}f_{\pi}^{2}}{1+z}\bigg{[}\frac{\sigma_{N}n_{N}}{m_{\pi}^{2}f_{\pi}^{2}}\sqrt{1+z^{2}+2z\cos{\left(\theta_{a}\right)}}\,-\frac{\mathcal{N}^{-1/2}z^{\mathcal{N}}}{\sqrt{\pi}}\sqrt{1-z^{2}}\cos\left(\mathcal{N}\theta_{a}\right)\bigg{]}\,,$
(4.5)
which requires
$\frac{\sigma_{N}n_{N}}{m_{\pi}^{2}f_{\pi}^{2}}\gg z^{\cal N}$ (4.6)
for the minimum to sit at $\theta_{a}=\pi$. This is illustrated in Fig. 7.
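The migration of the minimum can be checked directly with a short numerical sketch of Eq. (4.4) (the choices ${\cal N}=5$, $z=0.48$ and $\epsilon\equiv\sigma_{N}n_{N}/m_{\pi}^{2}f_{\pi}^{2}=0.3$ are illustrative values satisfying Eq. (4.6), not the paper's inputs):

```python
import math

z, N = 0.48, 5   # illustrative; Eq. (4.6) needs eps >> z**N ~ 0.025

def V(theta):
    """Single-sector vacuum potential, in units of m_pi^2 f_pi^2."""
    return -math.sqrt(1 + z**2 + 2 * z * math.cos(theta)) / (1 + z)

def V_fd(theta, eps):
    """Eq. (4.4): eps weights the SM (k = 0) sector relative to the mirrors."""
    return (1 - eps) * V(theta) + sum(V(theta + 2 * math.pi * k / N)
                                      for k in range(1, N))

grid = [i * math.pi / 2000 for i in range(2001)]      # theta_a in [0, pi]
theta_dense = min(grid, key=lambda t: V_fd(t, 0.3))   # dense medium
assert abs(theta_dense - math.pi) < 0.01              # minimum migrates to pi
# In vacuum (eps = 0) theta_a = 0 remains a global minimum:
assert V_fd(0.0, 0.0) <= min(V_fd(t, 0.0) for t in grid) + 1e-12
```

This reproduces the qualitative behaviour of Fig. 7: once the density term dominates, the total potential in matter tracks minus the SM vacuum potential and its minimum sits at $\theta_{a}=\pi$.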
A large value of the $\theta$ parameter inside dense stellar objects is rich
in physical consequences, which translates into strong constraints for the
$Z_{\cal N}$ scenario. As it was pointed out in Ref. [37],
$\theta\sim\mathcal{O}(1)$ inside the solar core is excluded due to the
increased proton-neutron mass difference (which would prohibit the neutrino
line corresponding to the Be7-Li7 mass difference observed by Borexino [112]).
Similarly, for $\theta\sim\pi$ in nearby neutron stars (NS),
$\mathrm{Co}^{56}$ would be lighter than $\mathrm{Fe}^{56}$ [47, 48] and
therefore $\mathrm{Fe}^{56}$ could have been depleted due to its $\beta$-decay
to $\mathrm{Co}^{56}$. The presence of iron in the surface of neutron stars
and its implications in terms of the allowed $\theta$ values could be explored
through dedicated X-ray measurements [113]. The corresponding current and
projected constraints that were derived in Ref. [37] (within the simplifying
assumption $z=1$) are translated here to the $Z_{\cal N}$ scenario and further
generalized for any $z$.
A conservative criterion consistent with $\theta=\pi$ inside the medium is to
impose that the axion becomes tachyonic at $\theta_{a}=0$, i.e.
$m_{T}^{2}>0$, where $m_{T}^{2}$ is defined by
$\displaystyle-
m_{T}^{2}\equiv\frac{d^{2}V^{f.d.}_{\mathcal{N}}}{d^{2}a}\Bigg{|}_{\theta_{a}=0}=\frac{m_{\pi}^{2}f_{\pi}^{2}}{f_{a}^{2}}\left[\frac{1}{\sqrt{\pi}}\,\sqrt{\frac{1-z}{1+z}}\,{\cal
N}^{3/2}\,z^{\cal
N}-\frac{\sigma_{N}n_{N}}{m_{\pi}^{2}f_{\pi}^{2}}\frac{z}{(1+z)^{2}}\right]\,.$
(4.7)
Requiring instead the curvature at the origin to remain positive, so that no
such tachyonic instability develops, a limit on the number of worlds allowed
by the stellar bounds above directly follows:
${\cal N}\lesssim 47\,,$ (4.8)
where the most recent estimation of $\sigma_{N}$ has been used.^12 We
employ here $\sigma_{N}\simeq 59\,\mathrm{MeV}$, which is in agreement with
recent determinations based on Roy-Steiner equations,
$\sigma_{N}=59.1(3.5)\,\mathrm{MeV}$ [114], and ChPT estimates,
$\sigma_{N}=59(7)\,\mathrm{MeV}$ [115]. This bound does not apply for the
whole range of $f_{a}$, though, because the argument only makes physical sense
as long as the reduced Compton wavelength of the axion is smaller than the
stellar object, $r_{core}\gtrsim 1/m_{a}^{f.d.}$, where $m_{a}^{f.d.}\sim
1/f_{a}$ is the effective axion mass in the medium,
$\displaystyle\big{(}m_{a}^{f.d.}\big{)}^{2}=\frac{d^{2}V^{f.d.}_{\mathcal{N}}}{d^{2}a}\Bigg{|}_{\theta_{a}=\pi}=\frac{m_{\pi}^{2}f_{\pi}^{2}}{f_{a}^{2}}\left[\frac{\sigma_{N}n_{N}}{m_{\pi}^{2}f_{\pi}^{2}}\frac{z}{1-z^{2}}-\frac{1}{\sqrt{\pi}}\,\sqrt{\frac{1-z}{1+z}}\,{\cal
N}^{3/2}\,z^{\cal N}\right]\,.$ (4.9)
For the case of the Sun, $r_{core}\sim 139{,}000$ km implies $f_{a}\lesssim
2.4\times 10^{15}\text{ GeV}$ for the observational constraints to apply.
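Both numbers can be reproduced at back-of-the-envelope level (the solar-core density $\sim 150\,\mathrm{g/cm^{3}}$ and the core radius are assumed round inputs, so only rough agreement with ${\cal N}\lesssim 47$ and $f_{a}\lesssim 2.4\times 10^{15}$ GeV should be expected):

```python
import math

z      = 0.48                      # assumed m_u/m_d
m_pi   = 135.0                     # MeV
f_pi   = 92.1                      # MeV
sigmaN = 59.0                      # MeV, as in footnote 12
hbarc  = 197.327                   # MeV fm

# Solar-core nucleon density, ~150 g/cm^3 (assumed round number), in MeV^3:
n_core = 150.0 / 1.6726e-24 * 1e-39 * hbarc**3
eps    = sigmaN * n_core / (m_pi**2 * f_pi**2)   # density parameter of Eq. (4.4)

def vacuum_term(N):
    """First (vacuum) term in the bracket of Eq. (4.7)."""
    return math.sqrt((1 - z) / (1 + z)) * N**1.5 * z**N / math.sqrt(math.pi)

# Largest N for which theta_a = 0 is not yet tachyonic in the solar core:
N_max = max(N for N in range(3, 200) if vacuum_term(N) > eps * z / (1 + z)**2)

# Compton-wavelength ceiling, Eq. (4.9) with the density term dominating:
m_fa_sq = sigmaN * n_core * z / (1 - z**2)       # (m_a^{f.d.} f_a)^2 in MeV^4
r_core  = 1.39e8 * 1e15 / hbarc                  # ~139,000 km in MeV^-1
f_a_max = math.sqrt(m_fa_sq) * r_core * 1e-3     # GeV
```

With these inputs the tachyonic condition flips near ${\cal N}\approx 47$ and the Compton-wavelength ceiling lands within a factor of a few of the quoted $f_{a}$ bound; both are sensitive to the assumed core density.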
Finally, the area in parameter space excluded is depicted in dark blue in Fig.
8. Analogously, the future sensitivity prospects from neutron star data are
depicted in shaded blue.^13 Our results are analogous to those in Eq. (1.7)
of Ref. [37], with their generic parameter $\epsilon$ identified as
$\epsilon={m_{a}^{2}}/{m_{a}^{2}({\cal
N}=1)}\simeq{\pi}^{-1/2}\sqrt{1-z^{2}}(1+z)\,{\cal N}^{3/2}z^{{\cal N}-1}$.
Note that the location of the QCD axion line, as well as our projected
exclusion regions for neutron stars and gravitational waves, are shifted
towards the left by a factor of five with respect to those in Refs. [37, 38].
Figure 8: Model-independent constraints on the axion scale $f_{a}$ versus
axion mass, from astrophysical data. Regions presently excluded are depicted
in solid colors, while the translucent regions circumscribed by dotted lines
are projections. The orange oblique lines indicate the theoretical prediction
of the reduced-mass $Z_{\cal N}$ QCD axion scenario, as a function of ${\cal
N}$: they are solid where the KSVZ $Z_{\cal N}$ axion is free from the PQ
quality problem, and dashed otherwise. Additional constraints which apply if
the axion is assumed to account as well for DM are discussed in Ref. [42].
Even stronger bounds may be established by relaxing the requirement stemming
from Eq. (4.7). Indeed, as can be seen in Fig. 8, long before the mass at
$\theta_{a}=0$ becomes tachyonic, the absolute minimum of the potential
corresponds to $\theta\sim\mathcal{O}(1)$. Therefore one could constrain
larger masses or smaller ${\cal N}$ values in the $Z_{\cal N}$ scenario than
those obtained above. This would require, however, a dedicated analysis to
ensure that the axion field would fall into the absolute minimum, so as to
overcome the potential barrier; this development lies beyond the scope of the
present work.
The fact that the position of the minimum of the axion potential depends on
the nuclear density of the medium not only modifies the effective
$\theta$-parameter inside stellar objects but may also source a long-range
force between them, as studied in Refs. [37, 38]. This new axion-sourced
long-range force can be constrained by measurements of double-pulsar or
neutron star (NS)-pulsar binaries [116, 117, 118].
Moreover, the existence of these axionic long-range forces would modify the
gravitational wave signal emitted by NS-NS mergers or black hole-NS mergers
which will be probed in the future by LIGO/VIRGO and Advanced LIGO [38, 119].
The projected constraints from Ref. [38] are shown in green in Fig. 8. It is
striking that the whole ultra-light DM region, including the so-called “fuzzy
dark matter” region ($m_{a}\sim 10^{-22}$ eV) [120], will be within
observational reach in the next decades, for a wide range of ${\cal N}$
values.
## 5 Conclusions
An axion which solves the strong CP problem may be much lighter than the
canonical QCD axion, down to the range of ultra-light axions, provided Nature
has a $Z_{\cal N}$ symmetry implemented via ${\cal N}$ degenerate world
copies, one of which is our SM. The axion field realizes the symmetry non-
linearly, which leads to exponential cancellations among the contributions
from each mirror copy to the total axion potential. For large ${\cal N}$, we
have shown that the total axion potential is given by a single cosine and we
determined analytically the –exponentially suppressed– dependence of the axion
mass on the number of mirror worlds, using the properties of hypergeometric
functions and the Fourier expansion. In practice, the formula in Eq. (2.34)
gives an excellent approximation even down to ${\cal N}=3$. We have also
improved the holomorphicity bounds previously obtained.
We compared next the predictions of the theory with present and future data
from experiments which do not rely on the additional assumption that an axion
abundance may explain the DM content of the Universe. It is particularly
enticing that experiments set a priori to hunt only for ALPs may in fact be
targeting solutions to the strong CP problem. For instance, ALPS II is shown
to be able to probe the $Z_{\cal N}$ scenario here discussed down to ${\cal
N}\sim 25$ for a large enough axion-photon coupling, while IAXO and BabyIAXO
may test the whole ${\cal N}$ landscape for values of that coupling even
smaller, see Fig. 6. In turn, Fermi SN data can only reach ${\cal N}\gtrsim
43$ but are sensitive to smaller values of the coupling.
Highly dense stellar bodies allow one to set even stronger bounds in wide
regions of the parameter space. These exciting limits have an added value:
they avoid model-dependent assumptions about the axion couplings to SM
particles, because they rely exclusively on the anomalous axion-gluon
interaction needed to solve the strong CP problem. A dense medium of
ordinary matter is a background that breaks the $Z_{\cal N}$ symmetry. This
hampers the symmetry-induced cancellations in the total axion potential: the
axion becomes heavier inside dense media and the minimum of the potential is
located at $\theta_{a}=\pi$. From present solar data we obtain the bound
${\cal N}\lesssim 47$ provided $f_{a}\lesssim 2.4\times 10^{15}\text{ GeV}$,
while larger ${\cal N}$ values are allowed for smaller $f_{a}$. Furthermore,
we showed that projected neutron star and pulsar data should allow one to test
the scenario down to ${\cal N}\sim 9$ –and possibly even below– for the whole
range of $f_{a}$, see Fig. 8. Moreover, gravitational wave data from NS-NS
and BH-NS mergers by LIGO/VIRGO and Advanced LIGO will allow all values of
${\cal N}$ to be probed for the remaining $f_{a}$ range, up to the Planck
scale and including the ultra-light axion mass range.
These analytical and phenomenological results have been derived within the
model-independent framework of effective couplings. Nevertheless, for the sake
of illustration, we have developed two examples of UV completed models. One is
a $Z_{\cal N}$ KSVZ model, which is shown to enjoy an improved PQ quality
behaviour: its $Z_{\cal N}$ and PQ symmetries are linked and, in consequence,
gauging $Z_{\cal N}$ considerably alleviates the PQ quality problem. The other UV
completion considered in this paper is a $Z_{\cal N}$ version of the composite
axion à la Kim-Choi. While this model is viable, its PQ quality is not
improved with respect to the usual situation, because its $Z_{\cal N}$ and PQ
symmetries are independent.
This work is intended to be a proof of concept that a much-lighter-than-usual
axion is a viable solution to the strong CP problem, with spectacular
prospects of being tested by near-future data. It also highlights that
experiments searching for generic ALPs in fact have the potential to discover
the solution to the strong CP problem.
The down-tuned axion considered here could also explain the DM content of the
Universe in certain regions of the parameter space. The impact of such a light
axion on the cosmological history is significant and it will be discussed in a
separate paper [42].
## Acknowledgments
We thank Gonzalo Alonso-Álvarez, Victor Enguita, Mary K. Gaillard, Yann
Gouttenoire, Benjamin Grinstein, Lam Hui, David B. Kaplan, D. Marsh, V. Mehta,
Philip Sørensen and Neal Weiner for illuminating discussions. M.B.G. and P.Q.
are indebted for hospitality to the Theory Department of Columbia University
in New York, where the initial stage of their work took place. The work of
L.D.L. is supported by the Marie Skłodowska-Curie Individual Fellowship grant
AXIONRUSH (GA 840791). L.D.L., P.Q. and A.R. acknowledge support by the
Deutsche Forschungsgemeinschaft under Germany’s Excellence Strategy - EXC
2121 Quantum Universe - 390833306. M.B.G. acknowledges support from the
“Spanish Agencia Estatal de Investigación” (AEI) and the EU “Fondo Europeo de
Desarrollo Regional” (FEDER) through the projects FPA2016-78645-P and
PID2019-108892RB-I00/AEI/10.13039/501100011033. M.B.G. and P. Q. acknowledge
support from the European Union’s Horizon 2020 research and innovation
programme under the Marie Sklodowska-Curie grant agreements 690575 (RISE
InvisiblesPlus) and 674896 (ITN ELUSIVES), as well as from the Spanish
Research Agency (Agencia Estatal de Investigación) through the grant IFT
Centro de Excelencia Severo Ochoa SEV-2016-0597. This project has received
funding/support from the European Union’s Horizon 2020 research and innovation
programme under the Marie Sklodowska-Curie grant agreement No 860881-HIDDeN.
## Appendix A Holomorphicity properties of $Z_{\cal N}$ axion potential
In order to determine the parameter $b$ in Eq. 2.20, which controls the
exponential suppression of the axion mass, it is necessary to study the region
in the complex plane where the extension of the potential $V(w)$ is
holomorphic. As the plots in Fig. 9 illustrate, both the potential and its
second derivative have branch cuts starting at $w_{cut}=\pi\pm i\log z$.
However, the second derivative $V^{\prime\prime}(w)$ diverges at the branch
point and thus $b$ cannot be extended all the way to $|\log z|$. In order to
optimize the bound on the axion mass we allow $b$ to depart from $|\log z|$,
$b=|\log z+\Delta b\,|=|\log z|-\Delta b$ for $\Delta b>0$ (note that $\log
z<0$). Taking into account that $V^{\prime\prime}(w)$ for
small $\Delta b$ can be approximated by
$\displaystyle V^{\prime\prime}\big{(}\pi+i(\log z+\Delta b)\big{)}\simeq-
m_{\pi}^{2}f_{\pi}^{2}\sqrt{\frac{1-z}{1+z}}\left[\frac{1}{4}\frac{1}{\left(\Delta
b\right)^{3/2}}+\mathcal{O}\big{(}\Delta b^{-1/2}\big{)}\right]\,,$ (A.1)
the procedure amounts to minimizing the function $B(\Delta b)$ that determines
the bound $\left|E_{\mathcal{N}}(V)\right|\leq B(\Delta b)$ (see Eq. 2.20),
$\displaystyle B(\Delta b)\equiv\frac{4\pi M(\Delta b)}{e^{{\cal N}|\log
z+\Delta b\,|}-1}=\pi
m_{\pi}^{2}f_{\pi}^{2}\sqrt{\frac{1-z}{1+z}}\,\frac{1}{\left(\Delta
b\right)^{3/2}}\frac{1}{e^{{\cal N}|\log z+\Delta b\,|}-1}\,.$ (A.2)
The requirement $\frac{dB(\Delta b)}{d\left(\Delta b\right)}=0$ shows that the
bound is optimized for
$\displaystyle\Delta b=\frac{3}{2}\frac{1}{{\cal N}}\,,$ (A.3)
where the factor $3/2$ comes from the order of the divergence in Eq. A.1.
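The optimization above is easy to verify numerically. The following sketch (an illustrative check, not part of the original derivation; the positive overall constant $\pi\,m_{\pi}^{2}f_{\pi}^{2}\sqrt{(1-z)/(1+z)}$ is dropped since it does not move the minimum, and the values ${\cal N}=21$, $z=0.48$ are chosen purely for illustration) minimizes $B(\Delta b)$ of Eq. A.2 and compares the result with Eq. A.3:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# B(Delta b) of Eq. (A.2), up to the positive overall constant
# pi * m_pi^2 * f_pi^2 * sqrt((1-z)/(1+z)), which does not move the minimum.
def B(db, N, z):
    L = abs(np.log(z))                      # distance |log z| to the branch point
    return db**-1.5 / np.expm1(N * (L - db))

N, z = 21, 0.48                             # illustrative values; z = m_u/m_d
res = minimize_scalar(B, bounds=(1e-4, abs(np.log(z)) - 1e-4),
                      args=(N, z), method="bounded")
print(res.x, 1.5 / N)                       # numerical minimum vs. Delta b = 3/(2N)
```

For these parameters the exponential in the denominator is large, so the numerical minimum reproduces $\Delta b=3/(2{\cal N})$ to better than a per mille.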
Figure 9: Representation of the complex functions $V(w)$ (left) and
$V^{\prime\prime}(w)$ (right). Colors represent the phase of the corresponding
complex function and the brightness represents the modulus. The singularities
can be clearly spotted: branch cuts starting from $w_{cut}=\pi\pm i\log z\,$
in both functions, and divergences at those same points for
$V^{\prime\prime}(w)$.
## Appendix B Fourier series of the $Z_{\cal N}$ axion potential
We show here that the coefficients of the Fourier series of any $Z_{\cal N}$
symmetric potential, such as the $Z_{\cal N}$ axion potential in Eq. 2.8,
vanish unless the corresponding Fourier mode is a multiple of ${\cal N}$.
Moreover it will be shown that, when the potential is expressed as
$V_{\mathcal{N}}\left(\theta_{a}\right)=\sum_{k=0}^{{\cal
N}-1}V\left(\theta_{a}+\frac{2\pi k}{\mathcal{N}}\right)\,,$ (B.1)
the non-vanishing coefficients of the Fourier series can be expressed in terms
of the Fourier transformation of a single term in the sum Eq. B.1.
Let us denote by $\hat{V}_{{\cal N}}(n)$ the coefficients of the Fourier
series of the total potential,
$V_{\mathcal{N}}\left(\theta_{a}\right)\equiv\sum_{n=-\infty}^{\infty}e^{in\theta_{a}}\hat{V}_{{\cal
N}}(n)\,,$ (B.2)
and by $\hat{V}_{{2\pi k}/{\mathcal{N}}}(n)$ the coefficients of the Fourier
series of each of the terms in the sum in Eq. (B.1),
$\displaystyle V\left(\theta_{a}+\frac{2\pi
k}{\mathcal{N}}\right)\equiv\sum_{n=-\infty}^{\infty}\hat{V}_{{2\pi
k}/{\mathcal{N}}}(n)e^{in\theta_{a}}\,.$ (B.3)
We will stick to the notation that omits the subscript for the first world
($k=0$), $\hat{V}_{0}(n)\equiv\hat{V}(n)$,
$\displaystyle
V(\theta_{a})=\sum_{n=-\infty}^{\infty}\hat{V}(n)e^{in\theta_{a}}\quad\text{
with }\quad\hat{V}(n)=\frac{1}{2\pi}\int_{0}^{2\pi}V(x)e^{-inx}dx\,.$ (B.4)
Each term in the sum in Eqs. (B.1) and (B.3) corresponds to the $k=0$
function with its argument shifted by $\frac{2\pi k}{\mathcal{N}}$.
The shift property of the Fourier series relates the Fourier coefficients of
the shifted functions $\hat{V}_{{2\pi k}/{\mathcal{N}}}(n)$ to that of the
original function,
$\displaystyle\hat{V}_{{2\pi k}/{\mathcal{N}}}(n)=e^{in\frac{2\pi
k}{\mathcal{N}}}\hat{V}(n)\,.$ (B.5)
Substituting this expression in Eq. (B.3), and inserting the latter in Eq.
(B.1), it follows that the total potential can be written as
$V_{\mathcal{N}}\left(\theta_{a}\right)=\sum_{k=0}^{{\cal
N}-1}\sum_{n=-\infty}^{\infty}\hat{V}_{{2\pi
k}/{\mathcal{N}}}(n)e^{in\theta_{a}}=\sum_{n=-\infty}^{\infty}\hat{V}(n)e^{in\theta_{a}}\,\sum_{k=0}^{{\cal
N}-1}e^{in\frac{2\pi k}{\mathcal{N}}}\,.$ (B.6)
Comparing this expression with Eq. (B.2), it follows that the coefficients of
the Fourier series for the total potential are given by
$\displaystyle\hat{V}_{{\cal N}}(n)=\hat{V}(n)\sum_{k=0}^{{\cal
N}-1}e^{in\frac{2\pi k}{\mathcal{N}}}\,.$ (B.7)
Interestingly, these coefficients vanish unless $n$ is a multiple of ${\cal
N}$:
$\displaystyle\text{If }n\,(\text{mod}\,{\cal N})\neq 0\,$
$\displaystyle\implies\,\sum_{k=0}^{{\cal N}-1}e^{in\frac{2\pi
k}{\mathcal{N}}}=0\,\implies\hat{V}_{\mathcal{N}}(n)=0\,,$ (B.8)
$\displaystyle\text{If }n\,(\text{mod}\,{\cal N})=0\,$
$\displaystyle\implies\,\sum_{k=0}^{{\cal N}-1}e^{in\frac{2\pi
k}{\mathcal{N}}}={\cal N}\implies\hat{V}_{{\cal N}}(n)={\cal N}\hat{V}(n)\,.$
(B.9)
To sum up, the Fourier series of the total potential $V_{{\cal N}}(\theta)$
can be easily obtained in terms of the Fourier series of a single term
$V(\theta)$ and it only receives contributions from the modes that are
multiples of ${\cal N}$. In our case of interest the potential is real and
even; this translates into
$\displaystyle V_{\mathcal{N}}\left(\theta_{a}\right)=2{\cal
N}\sum_{t=1}^{\infty}\hat{V}(t{\cal N})\cos(t\,{\cal N}\theta_{a})\,,$ (B.10)
where the factor of two comes from the negative modes and the constant (i.e.
$\theta_{a}$-independent) term has been omitted.
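The selection rule just derived can be checked numerically. The sketch below (an illustrative check, not part of the original text; the overall factor of the single-world potential is set to one, and ${\cal N}=5$, $z=0.48$ are example values) constructs $V_{\cal N}$ from Eq. B.1 and extracts its Fourier coefficients with an FFT:

```python
import numpy as np

z, N = 0.48, 5                    # illustrative values; z = m_u/m_d
V = lambda th: -np.sqrt(1 + z**2 + 2*z*np.cos(th))   # single-world shape

th = np.arange(4096) * 2*np.pi / 4096
VN = sum(V(th + 2*np.pi*k/N) for k in range(N))      # Eq. (B.1)

# Fourier coefficients  hat{V}(n) = (1/2pi) int_0^{2pi} V e^{-i n th} d th
hatV  = np.fft.fft(V(th)) / th.size
hatVN = np.fft.fft(VN)    / th.size

# Modes that are not multiples of N vanish (Eq. B.8) ...
assert all(abs(hatVN[m]) < 1e-12 for m in range(1, 40) if m % N)
# ... and the surviving modes obey hat{V}_N(tN) = N hat{V}(tN) (Eq. B.9)
assert np.allclose(hatVN[[N, 2*N]], N * hatV[[N, 2*N]])
print(abs(hatVN[N]), abs(hatVN[N + 1]))  # first is finite, second vanishes
```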
## Appendix C Analytical axion mass dependence from hypergeometric functions
We show here that the Fourier series coefficients of the single world axion
potential in Eq. 2.27,
$\displaystyle\hat{V}(n)=-\frac{m_{\pi}^{2}f_{\pi}^{2}}{2\pi(1+z)}\int_{0}^{2\pi}\cos(nt)\sqrt{1+z^{2}+2z\cos\left(t\right)}\,dt\,,$
(C.1)
can be written for large Fourier modes, $n\gg 1$, as a simple analytical
formula that exponentially decays with $n$. Moreover, by applying the result
in Appendix B, it will be shown that the potential for the $Z_{\cal N}$ axion
approaches a single cosine and a simple formula for the $Z_{\cal N}$ axion
mass follows.
Let us start by relating the Fourier series decomposition of the single world
potential in Eq. C.1 with the Gauss hypergeometric functions (see for example
Eq. (9.112) in Ref. [55]),
$\,{}_{2}F_{1}\left(\begin{array}[]{c}p,\,n+p\\\
n+1\end{array}\bigg{|}\,w^{2}\right)=\frac{w^{-n}}{2\pi}\frac{\Gamma(p)n!}{\Gamma(p+n)}\int_{0}^{2\pi}\frac{\cos(nt)\,dt}{\left(1-2w\cos
t+w^{2}\right)^{p}}\,.$ (C.4)
Via the identification $w=-z$ and $p=-1/2$, $\hat{V}(n)$ can be written as
$\displaystyle\hat{V}(n)=(-1)^{n+1}\frac{m_{\pi}^{2}f_{\pi}^{2}}{1+z}\,z^{n}\,\frac{\Gamma(n-1/2)}{\Gamma(-1/2)\,n!}\,\,_{2}F_{1}\left(\begin{array}[]{c}-1/2,\,n-1/2\\\
n+1\end{array}\bigg{|}z^{2}\right)\,.$ (C.7)
For convenience, the hypergeometric function can be also expressed as (see Eq.
(9.131) from Ref. [55])
$\,{}_{2}F_{1}\left(\begin{array}[]{c}\alpha,\,\beta\\\
\gamma\end{array}\bigg{|}\,w\right)=(1-w)^{-\alpha}\,\,_{2}F_{1}\left(\begin{array}[]{c}\alpha,\,\gamma-\beta\\\
\gamma\end{array}\bigg{|}\,\frac{w}{w-1}\right)\,,$ (C.12)
so that
$\,{}_{2}F_{1}\left(\begin{array}[]{c}-1/2,\,n-1/2\\\
n+1\end{array}\bigg{|}\,z^{2}\right)=\left(1-z^{2}\right)^{1/2}\,_{2}F_{1}\left(\begin{array}[]{c}-1/2,\,3/2\\\
n+1\end{array}\bigg{|}\,\frac{z^{2}}{z^{2}-1}\right)\,.$ (C.17)
The relation in Eq. (C.7) is exact. However, only the modes $n$ which are
multiples of ${\cal N}$ contribute to the potential (see Appendix B), and
therefore it is pertinent to focus on the large $n$ limit. While the limit of
the Gauss hypergeometric function when one or more of its parameters become
large is difficult to compute in general, some asymptotic expansions of the
hypergeometric function are known in the literature. In particular, following
Ref. [121, 122],
$\displaystyle\lim_{\gamma\to\infty}\,{}_{2}F_{1}\left(\begin{array}[]{c}\alpha,\,\beta\\\
\gamma\end{array}\bigg{|}\,w\right)=1+\frac{\alpha\beta}{\gamma}w+\mathcal{O}\left((w/\gamma)^{2}\right)\,.$
(C.20)
In turn, the prefactor in Eq. C.7 behaves in the large $n$ limit as
$\displaystyle\frac{\Gamma(n-1/2)}{\Gamma(-1/2)\,n!}\simeq-\frac{1}{2\sqrt{\pi}}\,n^{-3/2}\,.$
(C.21)
Putting all this together, it follows that, in the large $n$ limit, the
coefficient of the Fourier series for a single world is given by
$\displaystyle\hat{V}(n)=\,(-1)^{n}\,\frac{m_{\pi}^{2}f_{\pi}^{2}}{2\sqrt{\pi}}\,\sqrt{\frac{1-z}{1+z}}n^{-3/2}\,z^{n}\,,$
(C.22)
which in turn implies in this limit that the total $Z_{\cal N}$ potential in
Eq. B.10 can be written as
$\displaystyle V_{\mathcal{N}}\left(\theta_{a}\right)=2{\cal
N}\sum_{t=1}^{\infty}\,(-1)^{t\,{\cal
N}}\,\frac{m_{\pi}^{2}f_{\pi}^{2}}{2\sqrt{\pi}}\,\sqrt{\frac{1-z}{1+z}}\left(t\,{\cal
N}\right)^{-3/2}\,z^{t\,{\cal N}}\cos(t\,{\cal N}\theta_{a})\,.$ (C.23)
This expression allows us to understand several properties of the total
potential. Firstly, it can now be shown that the total potential approaches a
single cosine in the large ${\cal N}$ limit, since all the other modes are
then exponentially suppressed with respect to the first one,
$\displaystyle\lim_{{\cal N}\to\infty}\bigg{|}\frac{\hat{V}_{\cal N}(t\,{\cal
N})}{\hat{V}_{\cal N}({\cal N})}\bigg{|}=\lim_{{\cal
N}\to\infty}\bigg{|}\frac{\hat{V}(t\,{\cal N})}{\hat{V}({\cal
N})}\bigg{|}=t^{-3/2}z^{(t-1){\cal N}}\longrightarrow 0\,,$ (C.24)
and thus the potential reads
$\displaystyle V_{\mathcal{N}}\left(\theta_{a}\right)\xrightarrow{{\cal N}\gg
1\,\,}\frac{m_{\pi}^{2}f_{\pi}^{2}}{\sqrt{\pi}}\,\sqrt{\frac{1-z}{1+z}}\,{\cal
N}^{-1/2}\,(-1)^{{\cal N}}\,z^{{\cal N}}\cos({\cal N}\theta_{a})\,.$ (C.25)
Secondly, we can also obtain an analytical expression for the axion mass that
confirms the dependence obtained from the holomorphicity arguments in Section
2.3.1, and completes the expression with the correct prefactor,
$m_{a}^{2}f^{2}_{a}\simeq\frac{m_{\pi}^{2}f_{\pi}^{2}}{\sqrt{\pi}}\,\sqrt{\frac{1-z}{1+z}}\,{\cal
N}^{3/2}\,z^{\cal N}\,.$ (C.26)
Finally, it is now trivial to show that the potential in the large ${\cal N}$
limit has ${\cal N}$ minima (maxima) located at
$\theta_{a}=\{\pm\,2\pi\ell/{\cal N}\}$ for $\ell=0,1,\dots,\frac{{\cal
N}-1}{2}$, for odd (even) ${\cal N}$.
The results above assumed the large ${\cal N}$ limit. However, note that the
conclusion about the location of the extrema is true for any ${\cal N}$. This
can be easily seen after obtaining the exact Fourier expansion of the
$Z_{\cal N}$ axion potential in Eq. 2.8, which reads
$\displaystyle
V_{\mathcal{N}}\left(\theta_{a}\right)=-m_{\pi}^{2}f_{\pi}^{2}\,\mathcal{N}\sum_{t=1}^{\infty}(-1)^{t{\cal
N}+1}\sum_{\ell=t{\cal
N}}^{\infty}\frac{(2\ell)!(2\ell)!}{2^{4\ell-1}(2\ell-1)(\ell!)^{2}(\ell-t{\cal
N})!(\ell+t{\cal
N})!}\beta^{\ell}\cos\left(t\,\mathcal{N}\theta_{a}\right)\,.$ (C.27)
For even ${\cal N}$, the sign factor $(-1)^{t{\cal N}+1}=-1$ for every $t$,
so that, combined with the overall minus sign, the coefficient of each cosine
is positive and $\theta_{a}=0$ is trivially a maximum. For odd ${\cal N}$,
instead, the $(-1)^{t\,{\cal N}+1}$ factor alternates in sign, but the $t=1$
term dominates the expansion (it is exponentially larger in magnitude than the
$t=2$ term of opposite sign), and its negative cosine coefficient makes
$\theta_{a}=0$ a minimum. The periodicity of the potential extends these
conclusions to the location of all extrema.
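Both the asymptotic coefficient of Eq. C.22 and the location of the extrema can be cross-checked numerically. The sketch below (illustrative only, not part of the original text; $m_{\pi}^{2}f_{\pi}^{2}$ is set to one, $z=0.48$ and ${\cal N}=5$ are example values, and the exact coefficients are computed with the trapezoidal rule, which converges exponentially for smooth periodic integrands, cf. Ref. [54]) compares the Fourier integral of Eq. C.1, with the $1/2\pi$ normalization of Eq. B.4, against Eq. C.22, and then locates the minima of $V_{\cal N}$:

```python
import numpy as np

z = 0.48                                    # z = m_u/m_d; m_pi^2 f_pi^2 set to 1

# Exact Fourier coefficient of the single-world potential, computed with
# the trapezoidal rule (1/2pi normalization of Eq. B.4).
def hatV_exact(n, M=4096):
    t = np.arange(M) * 2*np.pi / M
    return -np.mean(np.cos(n*t) * np.sqrt(1 + z**2 + 2*z*np.cos(t))) / (1 + z)

# Large-n asymptotic formula, Eq. (C.22).
def hatV_asym(n):
    return ((-1)**n / (2*np.sqrt(np.pi)) * np.sqrt((1 - z)/(1 + z))
            * n**-1.5 * z**n)

for n in (8, 16, 24):
    print(n, hatV_exact(n) / hatV_asym(n))  # ratio approaches 1 as n grows

# Interior minima of V_N for N = 5 sit at theta = 2*pi*l/N, l = 1,...,4.
N = 5
th = np.linspace(0, 2*np.pi, 200001)
v = sum(-np.sqrt(1 + z**2 + 2*z*np.cos(th + 2*np.pi*k/N)) for k in range(N))
i = np.where((v[1:-1] < v[:-2]) & (v[1:-1] < v[2:]))[0] + 1
print(th[i])
```

The ratio printed in the loop differs from one by $\mathcal{O}(1/n)$ corrections, as expected from the subleading terms dropped in Eqs. C.20 and C.21.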
## References
* [1] R. D. Peccei and H. R. Quinn, “CP Conservation in the Presence of Instantons,” Phys. Rev. Lett. 38 (1977) 1440–1443. [,328(1977)].
* [2] R. D. Peccei and H. R. Quinn, “Constraints Imposed by CP Conservation in the Presence of Instantons,” Phys. Rev. D16 (1977) 1791–1797.
* [3] S. Weinberg, “A New Light Boson?,” Phys. Rev. Lett. 40 (1978) 223–226.
* [4] F. Wilczek, “Problem of Strong $P$ and $T$ Invariance in the Presence of Instantons,” Phys. Rev. Lett. 40 (1978) 279–282.
* [5] A. R. Zhitnitsky, “On Possible Suppression of the Axion Hadron Interactions. (In Russian),” Sov. J. Nucl. Phys. 31 (1980) 260. [Yad. Fiz.31,497(1980)].
* [6] M. Dine, W. Fischler, and M. Srednicki, “A Simple Solution to the Strong CP Problem with a Harmless Axion,” Phys. Lett. B104 (1981) 199–202.
* [7] J. E. Kim, “Weak Interaction Singlet and Strong CP Invariance,” Phys. Rev. Lett. 43 (1979) 103.
* [8] M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, “Can Confinement Ensure Natural CP Invariance of Strong Interactions?,” Nucl. Phys. B166 (1980) 493–506.
* [9] L. Di Luzio, F. Mescia, and E. Nardi, “Redefining the Axion Window,” Phys. Rev. Lett. 118 no. 3, (2017) 031801, arXiv:1610.07593 [hep-ph].
* [10] M. Farina, D. Pappadopulo, F. Rompineve, and A. Tesi, “The photo-philic QCD axion,” JHEP 01 (2017) 095, arXiv:1611.09855 [hep-ph].
* [11] L. Di Luzio, F. Mescia, and E. Nardi, “Window for preferred axion models,” Phys. Rev. D96 no. 7, (2017) 075003, arXiv:1705.05370 [hep-ph].
* [12] P. Agrawal, J. Fan, M. Reece, and L.-T. Wang, “Experimental Targets for Photon Couplings of the QCD Axion,” JHEP 02 (2018) 006, arXiv:1709.06085 [hep-ph].
* [13] G. Marques-Tavares and M. Teo, “Light axions with large hadronic couplings,” JHEP 05 (2018) 180, arXiv:1803.07575 [hep-ph].
* [14] L. Di Luzio, M. Giannotti, E. Nardi, and L. Visinelli, “The landscape of QCD axion models,” Phys. Rept. 870 (2020) 1–117, arXiv:2003.01100 [hep-ph].
* [15] L. Darmé, L. Di Luzio, M. Giannotti, and E. Nardi, “Selective enhancement of the QCD axion couplings,” arXiv:2010.15846 [hep-ph].
* [16] D. Budker, P. W. Graham, M. Ledbetter, S. Rajendran, and A. Sushkov, “Proposal for a Cosmic Axion Spin Precession Experiment (CASPEr),” Phys. Rev. X4 no. 2, (2014) 021030, arXiv:1306.6089 [hep-ph].
* [17] D. F. Jackson Kimball et al., “Overview of the Cosmic Axion Spin Precession Experiment (CASPEr),” arXiv:1711.08999 [physics.ins-det].
* [18] V. A. Rubakov, “Grand unification and heavy axion,” JETP Lett. 65 (1997) 621–624, arXiv:hep-ph/9703409 [hep-ph].
* [19] Z. Berezhiani, L. Gianfagna, and M. Giannotti, “Strong CP problem and mirror world: The Weinberg-Wilczek axion revisited,” Phys. Lett. B500 (2001) 286–296, arXiv:hep-ph/0009290 [hep-ph].
* [20] L. Gianfagna, M. Giannotti, and F. Nesti, “Mirror world, supersymmetric axion and gamma ray bursts,” JHEP 10 (2004) 044, arXiv:hep-ph/0409185.
* [21] S. D. H. Hsu and F. Sannino, “New solutions to the strong CP problem,” Phys. Lett. B605 (2005) 369–375, arXiv:hep-ph/0408319 [hep-ph].
* [22] A. Hook, “Anomalous solutions to the strong CP problem,” Phys. Rev. Lett. 114 no. 14, (2015) 141801, arXiv:1411.3325 [hep-ph].
* [23] H. Fukuda, K. Harigaya, M. Ibe, and T. T. Yanagida, “Model of visible QCD axion,” Phys. Rev. D92 no. 1, (2015) 015021, arXiv:1504.06084 [hep-ph].
* [24] C.-W. Chiang, H. Fukuda, M. Ibe, and T. T. Yanagida, “750 GeV diphoton resonance in a visible heavy QCD axion model,” Phys. Rev. D93 no. 9, (2016) 095016, arXiv:1602.07909 [hep-ph].
* [25] S. Dimopoulos, A. Hook, J. Huang, and G. Marques-Tavares, “A Collider Observable QCD Axion,” JHEP 11 (2016) 052, arXiv:1606.03097 [hep-ph].
* [26] T. Gherghetta, N. Nagata, and M. Shifman, “A Visible QCD Axion from an Enlarged Color Group,” Phys. Rev. D93 no. 11, (2016) 115010, arXiv:1604.01127 [hep-ph].
* [27] A. Kobakhidze, “Heavy axion in asymptotically safe QCD,” arXiv:1607.06552 [hep-ph].
* [28] P. Agrawal and K. Howe, “Factoring the Strong CP Problem,” JHEP 12 (2018) 029, arXiv:1710.04213 [hep-ph].
* [29] P. Agrawal and K. Howe, “A Flavorful Factoring of the Strong CP Problem,” JHEP 12 (2018) 035, arXiv:1712.05803 [hep-ph].
* [30] M. K. Gaillard, M. B. Gavela, R. Houtz, P. Quilez, and R. Del Rey, “Color unified dynamical axion,” Eur. Phys. J. C78 no. 11, (2018) 972, arXiv:1805.06465 [hep-ph].
* [31] M. A. Buen-Abad and J. Fan, “Dynamical axion misalignment with small instantons,” JHEP 12 (2019) 161, arXiv:1911.05737 [hep-ph].
* [32] A. Hook, S. Kumar, Z. Liu, and R. Sundrum, “High Quality QCD Axion and the LHC,” Phys. Rev. Lett. 124 no. 22, (2020) 221801, arXiv:1911.12364 [hep-ph].
* [33] C. Csáki, M. Ruhdorfer, and Y. Shirman, “UV Sensitivity of the Axion Mass from Instantons in Partially Broken Gauge Groups,” JHEP 04 (2020) 031, arXiv:1912.02197 [hep-ph].
* [34] T. Gherghetta and M. D. Nguyen, “A Composite Higgs with a Heavy Composite Axion,” arXiv:2007.10875 [hep-ph].
* [35] A. Hook, “Solving the Hierarchy Problem Discretely,” Phys. Rev. Lett. 120 no. 26, (2018) 261802, arXiv:1802.10093 [hep-ph].
* [36] M. Giannotti, “Mirror world and axion: Relaxing cosmological bounds,” Int. J. Mod. Phys. A 20 (2005) 2454–2458, arXiv:astro-ph/0504636.
* [37] A. Hook and J. Huang, “Probing axions with neutron star inspirals and other stellar processes,” JHEP 06 (2018) 036, arXiv:1708.08464 [hep-ph].
* [38] J. Huang, M. C. Johnson, L. Sagunski, M. Sakellariadou, and J. Zhang, “Prospects for axion searches with Advanced LIGO through binary mergers,” Phys. Rev. D 99 no. 6, (2019) 063013, arXiv:1807.02133 [hep-ph].
* [39] M. A. Shifman, Instantons in Gauge Theories. 1994.
* [40] J. E. Kim, “A Composite Invisible Axion,” Phys. Rev. D31 (1985) 1733.
* [41] K. Choi and J. E. Kim, “Dynamical Axion,” Phys. Rev. D32 (1985) 1828.
* [42] L. Di Luzio, B. Gavela, P. Quilez, and A. Ringwald, “Dark Matter from an even lighter QCD axion: trapped misalignment (To appear),” arXiv:2102.01082 [hep-ph].
* [43] P. Di Vecchia and G. Veneziano, “Chiral Dynamics in the Large n Limit,” Nucl. Phys. B171 (1980) 253–272.
* [44] H. Leutwyler and A. V. Smilga, “Spectrum of Dirac operator and role of winding number in QCD,” Phys. Rev. D 46 (1992) 5607–5632.
* [45] G. Grilli di Cortona, E. Hardy, J. Pardo Vega, and G. Villadoro, “The QCD axion, precisely,” JHEP 01 (2016) 034, arXiv:1511.02867 [hep-ph].
* [46] S. Borsanyi et al., “Calculation of the axion mass based on high-temperature lattice quantum chromodynamics,” Nature 539 no. 7627, (2016) 69–71, arXiv:1606.07494 [hep-lat].
* [47] L. Ubaldi, “Effects of theta on the deuteron binding energy and the triple-alpha process,” Phys. Rev. D81 (2010) 025011, arXiv:0811.1599 [hep-ph].
* [48] D. Lee, U.-G. Meißner, K. A. Olive, M. Shifman, and T. Vonk, “$\theta$ -dependence of light nuclei and nucleosynthesis,” Phys. Rev. Res. 2 no. 3, (2020) 033392, arXiv:2006.12321 [hep-ph].
* [49] Z. Berezhiani, D. Comelli, and F. L. Villante, “The Early mirror universe: Inflation, baryogenesis, nucleosynthesis and dark matter,” Phys. Lett. B503 (2001) 362–375, arXiv:hep-ph/0008105 [hep-ph].
* [50] Z. Berezhiani, “Mirror world and its cosmological consequences,” Int. J. Mod. Phys. A 19 (2004) 3775–3806, arXiv:hep-ph/0312335.
* [51] R. Foot, “Mirror dark matter: Cosmology, galaxy structure and direct detection,” Int. J. Mod. Phys. A 29 (2014) 1430013, arXiv:1401.3965 [astro-ph.CO].
* [52] R. Foot, A. Kobakhidze, K. L. McDonald, and R. R. Volkas, “Poincare protection for a natural electroweak scale,” Phys. Rev. D 89 no. 11, (2014) 115018, arXiv:1310.0223 [hep-ph].
* [53] R. R. Volkas, A. J. Davies, and G. C. Joshi, “Naturalness of the invisible axion model,” Phys. Lett. B215 (1988) 133–138.
* [54] L. N. Trefethen and J. A. C. Weideman, “The exponentially convergent trapezoidal rule,” SIAM Review 56 no. 3, (2014) 385–458. https://doi.org/10.1137/130932132.
* [55] I. S. Gradshteyn and I. M. Ryzhik, “8–9 Special Functions,” in Table of Integrals, Series, and Products, pp. 904–1080. Academic Press, 1980. http://www.sciencedirect.com/science/article/pii/B9780122947605500209.
* [56] R. Holman, S. D. H. Hsu, T. W. Kephart, E. W. Kolb, R. Watkins, and L. M. Widrow, “Solutions to the Strong CP Problem in a World with Gravity,” Phys. Lett. B282 (1992) 132–136, arXiv:hep-ph/9203206 [hep-ph].
* [57] M. Kamionkowski and J. March-Russell, “Planck scale physics and the Peccei-Quinn mechanism,” Phys. Lett. B 282 (1992) 137–141, arXiv:hep-th/9202003.
* [58] S. M. Barr and D. Seckel, “Planck scale corrections to axion models,” Phys. Rev. D 46 (1992) 539–549.
* [59] S. Ghigna, M. Lusignoli, and M. Roncadelli, “Instability of the Invisible Axion,” Phys. Lett. B283 (1992) 278–281.
* [60] H. M. Georgi, L. J. Hall, and M. B. Wise, “Grand Unified Models with an Automatic Peccei-Quinn Symmetry,” Nucl. Phys. B192 (1981) 409–416.
* [61] S. B. Giddings and A. Strominger, “Loss of Incoherence and Determination of Coupling Constants in Quantum Gravity,” Nucl. Phys. B307 (1988) 854–866.
* [62] S. R. Coleman, “Why There is Nothing Rather Than Something: a Theory of the Cosmological Constant,” Nucl. Phys. B310 (1988) 643–668.
* [63] G. Gilbert, “Wormhole Induced Proton Decay,” Nucl. Phys. B328 (1989) 159–170.
* [64] S.-J. Rey, “The Axion Dynamics in Wormhole Background,” Phys. Rev. D39 (1989) 3185.
* [65] J. Alvey and M. Escudero, “The axion quality problem: global symmetry breaking and wormholes,” JHEP 01 (2021) 032, arXiv:2009.03917 [hep-ph].
* [66] L. Randall, “Composite axion models and Planck scale physics,” Phys. Lett. B284 (1992) 77–80.
* [67] B. A. Dobrescu, “The Strong CP Problem Versus Planck Scale Physics,” Phys. Rev. D55 (1997) 5826–5833, arXiv:hep-ph/9609221 [hep-ph].
* [68] D. Butter and M. K. Gaillard, “The Axion mass in modular invariant supergravity,” Phys. Lett. B612 (2005) 304–310, arXiv:hep-th/0502100 [hep-th].
* [69] M. Redi and R. Sato, “Composite Accidental Axions,” JHEP 05 (2016) 104, arXiv:1602.05427 [hep-ph].
* [70] H. Fukuda, M. Ibe, M. Suzuki, and T. T. Yanagida, “A ”gauged” $U(1)$ Peccei–Quinn symmetry,” Phys. Lett. B771 (2017) 327–331, arXiv:1703.01112 [hep-ph].
* [71] L. Di Luzio, E. Nardi, and L. Ubaldi, “Accidental Peccei-Quinn symmetry protected to arbitrary order,” Phys. Rev. Lett. 119 no. 1, (2017) 011801, arXiv:1704.01122 [hep-ph].
* [72] H. Fukuda, M. Ibe, M. Suzuki, and T. T. Yanagida, “Gauged Peccei-Quinn symmetry – A case of simultaneous breaking of SUSY and PQ symmetry,” JHEP 07 (2018) 128, arXiv:1803.00759 [hep-ph].
* [73] M. Ibe, M. Suzuki, and T. T. Yanagida, “$B-L$ as a Gauged Peccei-Quinn Symmetry,” JHEP 08 (2018) 049, arXiv:1805.10029 [hep-ph].
* [74] B. Lillard and T. M. P. Tait, “A High Quality Composite Axion,” arXiv:1811.03089 [hep-ph].
* [75] M. Ardu, L. Di Luzio, G. Landini, A. Strumia, D. Teresi, and J.-W. Wang, “Axion quality from the (anti)symmetric of SU($\mathcal{N}$),” JHEP 11 (2020) 090, arXiv:2007.12663 [hep-ph].
* [76] L. Di Luzio, “Accidental SO(10) axion from gauged flavour,” JHEP 11 (2020) 074, arXiv:2008.09119 [hep-ph].
* [77] Q. Bonnefoy, P. Cox, E. Dudas, T. Gherghetta, and M. D. Nguyen, “Flavoured Warped Axion,” arXiv:2012.09728 [hep-ph].
* [78] Q. Bonnefoy, E. Dudas, and S. Pokorski, “Axions in a highly protected gauge symmetry model,” Eur. Phys. J. C 79 no. 1, (2019) 31, arXiv:1804.01112 [hep-ph].
* [79] R. Alonso and A. Urbano, “Wormholes and masses for Goldstone bosons,” arXiv:1706.07415 [hep-ph].
* [80] M. B. Gavela, M. Ibe, P. Quilez, and T. T. Yanagida, “Automatic Peccei–Quinn symmetry,” Eur. Phys. J. C79 no. 6, (2019) 542, arXiv:1812.08174 [hep-ph].
* [81] A. Arvanitaki, S. Dimopoulos, S. Dubovsky, N. Kaloper, and J. March-Russell, “String Axiverse,” Phys. Rev. D 81 (2010) 123530, arXiv:0905.4720 [hep-th].
* [82] D. Brzeminski, Z. Chacko, A. Dev, and A. Hook, “A Time-Varying Fine Structure Constant from Naturally Ultralight Dark Matter,” arXiv:2012.02787 [hep-ph].
* [83] K. Ehret et al., “New ALPS Results on Hidden-Sector Lightweights,” Phys. Lett. B 689 (2010) 149–155, arXiv:1004.1313 [hep-ex].
* [84] CAST Collaboration, S. Andriamonje et al., “An Improved limit on the axion-photon coupling from the CAST experiment,” JCAP 04 (2007) 010, arXiv:hep-ex/0702006.
* [85] CAST Collaboration, V. Anastassopoulos et al., “New CAST Limit on the Axion-Photon Interaction,” Nature Phys. 13 (2017) 584–590, arXiv:1705.02290 [hep-ex].
* [86] M. Betz, F. Caspers, M. Gasior, M. Thumm, and S. Rieger, “First results of the CERN Resonant Weakly Interacting sub-eV Particle Search (CROWS),” Phys. Rev. D 88 no. 7, (2013) 075014, arXiv:1310.8098 [physics.ins-det].
* [87] I. Shilon, A. Dudarev, H. Silva, and H. ten Kate, “Conceptual Design of a New Large Superconducting Toroid for IAXO, the New International AXion Observatory,” IEEE Trans. Appl. Supercond. 23 no. 3, (2013) 4500604, arXiv:1212.4633 [physics.ins-det].
* [88] OSQAR Collaboration, R. Ballou et al., “New exclusion limits on scalar and pseudoscalar axionlike particles from light shining through a wall,” Phys. Rev. D 92 no. 9, (2015) 092002, arXiv:1506.08082 [hep-ex].
* [89] F. Della Valle, A. Ejlli, U. Gastaldi, G. Messineo, E. Milotti, R. Pengo, G. Ruoso, and G. Zavattini, “The PVLAS experiment: measuring vacuum magnetic birefringence and dichroism with a birefringent Fabry–Perot cavity,” Eur. Phys. J. C 76 no. 1, (2016) 24, arXiv:1510.08052 [physics.optics].
* [90] R. Bähre et al., “Any light particle search II —Technical Design Report,” JINST 8 (2013) T09001, arXiv:1302.5647 [physics.ins-det].
* [91] D. Wouters and P. Brun, “Constraints on Axion-like Particles from X-Ray Observations of the Hydra Galaxy Cluster,” Astrophys. J. 772 (2013) 44, arXiv:1304.0989 [astro-ph.HE].
* [92] M. D. Marsh, H. R. Russell, A. C. Fabian, B. P. McNamara, P. Nulsen, and C. S. Reynolds, “A New Bound on Axion-Like Particles,” JCAP 12 (2017) 036, arXiv:1703.07354 [hep-ph].
* [93] C. S. Reynolds, M. D. Marsh, H. R. Russell, A. C. Fabian, R. Smith, F. Tombesi, and S. Veilleux, “Astrophysical limits on very light axion-like particles from Chandra grating spectroscopy of NGC 1275,” arXiv:1907.05475 [hep-ph].
* [94] M. Meyer and T. Petrushevska, “Search for Axionlike-Particle-Induced Prompt $\gamma$-Ray Emission from Extragalactic Core-Collapse Supernovae with the $Fermi$ Large Area Telescope,” Phys. Rev. Lett. 124 no. 23, (2020) 231101, arXiv:2006.06722 [astro-ph.HE]. [Erratum: Phys.Rev.Lett. 125, 119901 (2020)].
* [95] M. Meyer, M. Giannotti, A. Mirizzi, J. Conrad, and M. Sánchez-Conde, “Fermi Large Area Telescope as a Galactic Supernovae Axionscope,” Phys. Rev. Lett. 118 no. 1, (2017) 011103, arXiv:1609.02350 [astro-ph.HE].
* [96] Fermi-LAT Collaboration, M. Ajello et al., “Search for Spectral Irregularities due to Photon–Axionlike-Particle Oscillations with the Fermi Large Area Telescope,” Phys. Rev. Lett. 116 no. 16, (2016) 161101, arXiv:1603.06978 [astro-ph.HE].
* [97] H.E.S.S. Collaboration, A. Abramowski et al., “Constraints on axionlike particles with H.E.S.S. from the irregularity of the PKS 2155-304 energy spectrum,” Phys. Rev. D 88 no. 10, (2013) 102003, arXiv:1311.3148 [astro-ph.HE].
* [98] A. Ayala, I. Domínguez, M. Giannotti, A. Mirizzi, and O. Straniero, “Revisiting the bound on axion-photon coupling from Globular Clusters,” Phys. Rev. Lett. 113 no. 19, (2014) 191302, arXiv:1406.6053 [astro-ph.SR].
# bajes: Bayesian inference of multimessenger astrophysical data
Methods and application to gravitational-waves
Matteo Breschi Theoretisch-Physikalisches Institut, Friedrich-Schiller-
Universität Jena, 07743, Jena, Germany Rossella Gamba Theoretisch-
Physikalisches Institut, Friedrich-Schiller-Universität Jena, 07743, Jena,
Germany Sebastiano Bernuzzi Theoretisch-Physikalisches Institut, Friedrich-
Schiller-Universität Jena, 07743, Jena, Germany
###### Abstract
We present bajes, a parallel and lightweight framework for Bayesian inference
of multimessenger transients. bajes is a modular Python package with minimal external dependencies, adaptable to a wide range of Bayesian models and sampling methods. We describe the general workflow and
the parameter estimation pipeline for compact-binary-coalescence
gravitational-wave transients. The latter is validated against injections of
binary black hole and binary neutron star waveforms, including confidence
interval tests that demonstrate the inference is well-calibrated. Binary
neutron star postmerger injections are also studied using a network of five
detectors made of LIGO, Virgo, KAGRA and Einstein Telescope. Postmerger
signals will be detectable for sources at ${\lesssim}80\,$Mpc, with Einstein
Telescope contributing over 90% of the total signal-to-noise ratio. As a full
scale application, we re-analyze the GWTC-1 black hole transients using the
effective-one-body TEOBResumS approximant, and reproduce selected results with
other approximants. bajes inferences are consistent with previous results; the
direct comparison of bajes and bilby analyses of GW150914 shows a maximum
Jensen-Shannon divergence of $5.2{\times}10^{-4}$. GW170817 is re-analyzed
using TaylorF2 with 5.5PN point-mass and 7.5PN tides, TEOBResumSPA, and
IMRPhenomPv2_NRTidal with different cutoff-frequencies of $1024\,$Hz and
$2048\,$Hz. We find that the former choice minimizes systematics on the
reduced tidal parameter, while a larger amount of tidal information is gained
with the latter choice. bajes can perform these analyses in about 1 day using
128 CPUs.
###### pacs:
04.25.D-, 04.30.Db, 95.30.Sf, 95.30.Lz, 97.60.Jd
## I Introduction
Bayesian inference has become a standard method for the analysis of
astrophysical and cosmological events, e.g. Abbott _et al._ (2019a); Del
Pozzo _et al._ (2018); Wang (2019); Abbott _et al._ (2020a); Hortúa _et
al._ (2020), since it offers a generic statistical framework to rigorously
test hypotheses against observational information. Given a set of parametric
models (hypothesis) and assumptions on the parameters (prior), Bayesian
inference allows one to infer the parameters in terms of probability
distributions, and also to select the best-fitting model among competing
hypotheses. In particular, Bayesian methods are central tools used in
gravitational-wave (GW) astronomy to determine the source properties of an
observed signal Veitch and Vecchio (2010); Veitch _et al._ (2015); Abbott
_et al._ (2020b) and the related applications. Some examples are tests of
General Relativity Abbott _et al._ (2019b); Breschi _et al._ (2019a),
astrophysical population studies Abbott _et al._ (2020c), inferences of the
neutron star matter properties Abbott _et al._ (2018a) and cosmology Abbott
_et al._ (2017a, 2019c). Furthermore, Bayesian inference offers the optimal
framework to combine different observational datasets from multimessenger
astronomical observations, like GW170817 and the electromagnetic (EM)
counterparts Abbott _et al._ (2017b, c); Savchenko _et al._ (2017); Pian
_et al._ (2017); Smartt _et al._ (2017); Tanvir _et al._ (2017); Tanaka _et
al._ (2017); Villar _et al._ (2017). Multimessenger inference of
astrophysical phenomena such as binary neutron star mergers (BNS) is a
fundamental resource to clarify the mechanism at the origin of the radiation,
to obtain accurate inferences on the properties of the source, and to improve
theoretical models gaining information from observational data (e.g. Radice
and Dai, 2019; Dietrich _et al._ , 2020; Breschi _et al._ , 2021a).
In recent years, many efforts have been made by the scientific community in
the development of sophisticated parameter inference tools for astronomical
observations. In the context of GW data analysis, LALSuite LIGO Scientific
Collaboration (2018a) is the official software provided by the LIGO-Virgo
collaboration Aasi _et al._ (2015); Acernese _et al._ (2015) and it offers a
variegated framework for generic analysis of GW data. Other mature software
for parameter estimation of GW transients are also available; some example are
the semi-analytical integrator RIFT Pankow _et al._ (2015); Lange _et al._
(2018), the user-friendly library bilby Ashton _et al._ (2019) and the
inference module of the pycbc package Biwer _et al._ (2019). Bayesian
software for parameter inference of other astrophysical transients have also
been developed, such as tools for high-energy photons from compact objects and
galaxy clusters Nasa High Energy Astrophysics Science Archive Research Center
(2014) (Heasarc); Center ; Olamaie _et al._ (2015); neutrino radiation
Aartsen _et al._ (2020); supernovae transients Shariff _et al._ (2016);
Hinton _et al._ (2019); Möller and de Boissière (2019); pulsar arrival
timings Edwards _et al._ (2006); Lentati _et al._ (2013); Luo _et al._
(2021); and for cosmological inferences Lewis and Bridle (2002); Das and
Souradeep (2014); Ishida _et al._ (2015); Handley _et al._ (2015). Current
pipelines for the analysis of astrophysical and cosmological observations are
targeted to specific applications. However, within a multi-messenger
framework, it is essential to develop a flexible pipeline capable of combining
different datasets and physical models. This issue can be tackled by allowing the
infrastructure to merge different Bayesian models, extending the considered
parameter space and generalizing the definition of the likelihood function.
This implies the use of large amounts of data and computationally expensive
models. It follows that efficient parallelization techniques and well-
calibrated proposal methods are necessary to optimize the performance of such
a flexible pipeline.
In this work, we present bajes [baIEs], a Python package for Bayesian
inference developed at Friedrich-Schiller-Universität Jena. Our goal is to provide a simple, complete and reliable implementation capable of robustly performing Bayesian inference on arbitrary sets of data, with specific functionalities for multimessenger astrophysics. The software is designed to be state-of-the-art, simple to use and lightweight, with minimal dependencies on
external libraries. The paper is structured as follows. In Sec. II, we recall
the basic concepts of Bayesian theory of probability. In Sec. III, we describe
the design and the workflow of the bajes software. Sec. IV describes the tools
and the methods implemented in bajes for the analysis of GW transients,
including the available templates. In Sec. V, we describe the GW pipeline and
the Bayesian framework of the GW model. In Sec. VI, we present a survey of
injection studies and validation tests performed with artificial binary merger
signals. Sec. VII shows the results of the bajes pipeline on the GW events
observed by the LIGO-Virgo interferometers Abbott _et al._ (2019a, d).
Finally, we conclude in Sec. VIII. The paper concludes with Appendices on the
implemented sampling methods, the proposal methods and a simple use example.
## II Bayesian inference
The task of a Bayesian inference is the formulation and the computation of
conditional probabilities. It is possible to classify this topic into two main
problems: parameter estimation (PE) and model selection (MS). With PE we mean
the evaluation of the characteristic distribution for the parameters that
define the model of interest. On the other hand, with MS we refer to the
discrimination between competing models in light of the data, comparing the
suitability of different assumptions directly on the observation. In order to
discuss how these tasks are achieved, in the following sections we recall the
basic concepts of Bayesian theory of probability. By convention, we label the
natural logarithm as $\log$ throughout the paper.
### II.1 Bayes' theorem
Given a set of observed data d and a set of parameters $\bm{\theta}$ that characterize our model within some background hypothesis $H$, it is possible to estimate the posterior distribution for $\bm{\theta}$ using Bayes'
theorem MacKay (2002); Gelman _et al._ (2004); Sivia and Skilling (2006),
$p(\bm{\theta}|\textbf{d},H)=\frac{p(\textbf{d}|\bm{\theta},H)\,p(\bm{\theta}|H)}{p(\textbf{d}|H)}\,,$
(1)
where $p(\textbf{d}|\bm{\theta},H)$ is the likelihood function,
$p(\bm{\theta}|H)$ is the prior distribution and $p(\textbf{d}|H)$ is the
evidence. The likelihood function describes the probability of observing the
data d given $\bm{\theta}$ and assuming that the hypothesis $H$ is true.
Therefore, it encodes the observational information and it predicts the
agreement between the observed data d and the expected outcome for every given
sample $\bm{\theta}$ of the parameter space. The prior distribution
$p(\bm{\theta}|H)$ depicts the knowledge on the parameters before performing
the estimation. Usually, the functional form of this term is chosen in
accordance with geometrical and/or physically motivated arguments. The
term $p(\textbf{d}|H)$ is labeled as evidence and it represents the
probability of observing the data d given the hypothesis $H$. The evidence is
also called marginalized likelihood since, according to the marginalization
rule, it can be expressed as
$p(\textbf{d}|H)=\int_{\bm{\Theta}}p(\textbf{d}|\bm{\theta},H)\,p(\bm{\theta}|H)\,\text{d}\bm{\theta}\,,$
(2)
where the integral is extended over the entire parameter space $\bm{\Theta}$.
Subsequently, the posterior distribution $p(\bm{\theta}|\textbf{d},H)$
represents the probability of the parameters $\bm{\theta}$ in light of the
data, updated by our a priori information. The knowledge of
$p(\bm{\theta}|\textbf{d},H)$ allows us to compute the expectation of the
statistical quantities of interest. For example, the mean values
$E[\bm{\theta}]$ can be estimated as
$E[\bm{\theta}]=\int_{\bm{\Theta}}\bm{\theta}\,p(\bm{\theta}|\textbf{d},H)\,\text{d}\bm{\theta}\,,$
(3)
and, analogously, it is possible to infer the expectation of a generic
function of the parameters $\bm{\theta}$,
$E\left[f(\bm{\theta})\right]=\int_{\bm{\Theta}}f(\bm{\theta})\,p(\bm{\theta}|\textbf{d},H)\,\text{d}\bm{\theta}\,.$
(4)
From this it follows that, in order to perform PE, we have to
introduce a prior distribution $p(\bm{\theta}|H)$ and a likelihood function
$p(\textbf{d}|\bm{\theta},H)$; then, the properties of the model are encoded
in the posterior distribution $p(\bm{\theta}|\textbf{d},H)$ that can be
computed imposing Eq. (1).
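For a low-dimensional parameter space, Eqs. (1)–(3) can be evaluated directly on a grid, before resorting to the stochastic techniques of Sec. II.4. A minimal sketch (illustrative only, not part of bajes), inferring the mean of Gaussian data with known unit variance under a flat prior:

```python
import numpy as np

# Synthetic data: 50 draws from a Gaussian with unknown mean and known sigma = 1.
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=50)

# Grid over the parameter theta with a flat (unnormalized) prior p(theta|H).
theta = np.linspace(-5.0, 5.0, 2001)
dtheta = theta[1] - theta[0]
log_prior = np.zeros_like(theta)

# Log-likelihood log p(d|theta,H) at each grid point.
log_like = np.array([-0.5 * np.sum((data - t) ** 2) for t in theta])

# Posterior via Bayes' theorem, Eq. (1); the evidence, Eq. (2), is the normalizer.
log_post = log_like + log_prior
post = np.exp(log_post - log_post.max())
post /= post.sum() * dtheta

# Posterior mean, Eq. (3); with a flat prior this matches the sample mean.
mean = np.sum(theta * post) * dtheta
print(mean)
```

With a flat prior and Gaussian likelihood, the posterior mean coincides with the sample mean of the data, which makes the grid approximation easy to check.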
### II.2 Model selection
In Eq. (1), for a fixed set of assumptions $H$, the evidence acts like a
normalization constant; however, this quantity plays a crucial role in the
context of MS. If we are interested in comparing two competing hypotheses,
$H_{A}$ and $H_{B}$, quantifying which one better explains the data, in
the Bayesian framework it is natural to introduce the odds ratio,
$\mathcal{O}_{A}^{B}=\frac{p(H_{B}|\textbf{d})}{p(H_{A}|\textbf{d})}=\frac{p(H_{B})}{p(H_{A})}\,\frac{p(\textbf{d}|H_{B})}{p(\textbf{d}|H_{A})}\,.$
(5)
The term $p(H_{i}|\textbf{d})$ represents the posterior probability for the
$i$-th hypothesis given the observed data and the ratio $\mathcal{O}_{A}^{B}$
encodes the extent to which the data favor one model over the other. Assuming that the two hypotheses are equiprobable, $p(H_{B})=p(H_{A})$, it is possible to
reduce the computation to the ratio of the evidences, also known as Bayes'
factor,
$\mathcal{B}_{A}^{B}=\frac{p(\textbf{d}|H_{B})}{p(\textbf{d}|H_{A})}\,.$ (6)
If $\mathcal{B}_{A}^{B}<1$ then the hypothesis $A$ is preferred by the data,
otherwise $B$ is favored if $\mathcal{B}_{A}^{B}>1$. However, this rule is not
always straightforward, since the estimation of the Bayes' factor might suffer from uncertainties Mattei (2020); Yao _et al._ (2019). Then, in a realistic
scenario, more stringent bounds are required in order to prefer a hypothesis
Kass and Raftery (1995).
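As a concrete toy illustration of Eqs. (5)–(6) (our own sketch, not a bajes routine): comparing a zero-mean Gaussian hypothesis against one with a free mean, with the evidence of the latter computed on a grid via Eq. (2).

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(1.0, 1.0, size=200)   # generated with a nonzero mean
n = len(data)

# H_A: Gaussian with mean fixed to 0. No free parameters, so the evidence
# p(d|H_A) is just the likelihood of the data.
log_Z_A = -0.5 * np.sum(data ** 2) - 0.5 * n * np.log(2 * np.pi)

# H_B: Gaussian with a free mean mu and a flat prior on [-3, 3]. The
# evidence p(d|H_B) is the prior-weighted integral of the likelihood, Eq. (2).
mu = np.linspace(-3.0, 3.0, 4001)
dmu = mu[1] - mu[0]
log_like = np.array([-0.5 * np.sum((data - m) ** 2) for m in mu]) \
           - 0.5 * n * np.log(2 * np.pi)
m0 = log_like.max()
log_Z_B = m0 + np.log(np.sum(np.exp(log_like - m0) / 6.0) * dmu)

# Log Bayes' factor, Eq. (6): positive values favor H_B over H_A.
log_B = log_Z_B - log_Z_A
print(log_B > 0)   # True: the data were drawn with mean 1
```

Note how the evidence automatically penalizes the extra free parameter of $H_{B}$ (the Occam factor), yet the fit improvement here is large enough that $H_{B}$ is still favored.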
### II.3 Joint distributions
Let us assume that we performed two independent observations, $\textbf{d}_{1}$
and $\textbf{d}_{2}$, and each of them can be modeled using two sets of
parameters, respectively $\bm{\theta}_{1}$ and $\bm{\theta}_{2}$. In general,
it is possible to apply Bayes' theorem, Eq. (1), separately to every set of
measurements. However, if the two events are not independent (e.g. the same
physical process observed by two different observatories), the joint posterior
distribution can be written as
$p(\bm{\theta}_{1},\bm{\theta}_{2}|\textbf{d}_{1},\textbf{d}_{2},H)=\frac{p(\textbf{d}_{1},\textbf{d}_{2}|\bm{\theta}_{1},\bm{\theta}_{2},H)\,p(\bm{\theta}_{1},\bm{\theta}_{2}|H)}{p(\textbf{d}_{1},\textbf{d}_{2}|H)}\,,$
(7)
where $p(\textbf{d}_{1},\textbf{d}_{2}|\bm{\theta}_{1},\bm{\theta}_{2},H)$ is
the joint likelihood function and $p(\bm{\theta}_{1},\bm{\theta}_{2}|H)$ is
the joint prior distribution. If the employed sets of parameters
$\bm{\theta}_{1}$ and $\bm{\theta}_{2}$ are independent, the joint
probabilities simply correspond to the product of the single probability
terms. However, if $\bm{\theta}_{1}$ and $\bm{\theta}_{2}$ correlate, the
problem could easily become non-trivial; e.g. the intersection between the two
parameter spaces $\bm{\Theta}_{1}\cap\bm{\Theta}_{2}$ could be non-empty, or
the values of the two sets of parameters could be related to each other, $\bm{\theta}_{i}\equiv\bm{\theta}_{i}(\bm{\theta}_{j})$, or unexpected correlations might appear when a larger parameter space is introduced.
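The simplest well-behaved case of Eq. (7) is a single shared parameter with conditionally independent datasets, so the joint likelihood factorizes. A toy sketch (not bajes code; the instrument noise levels are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
# The same physical process observed by two instruments with different noise:
d1 = rng.normal(3.0, 1.0, size=40)    # observatory 1, noise sigma = 1.0
d2 = rng.normal(3.0, 0.5, size=40)    # observatory 2, noise sigma = 0.5

theta = np.linspace(0.0, 6.0, 3001)   # shared parameter, flat joint prior
dtheta = theta[1] - theta[0]

# The datasets are conditionally independent given theta, so the joint
# likelihood in Eq. (7) factorizes:
# log p(d1,d2|theta,H) = log p(d1|theta,H) + log p(d2|theta,H).
log_like = (np.array([-0.5 * np.sum((d1 - t) ** 2) for t in theta])
            + np.array([-0.5 * np.sum(((d2 - t) / 0.5) ** 2) for t in theta]))

post = np.exp(log_like - log_like.max())
post /= post.sum() * dtheta
joint_mean = np.sum(theta * post) * dtheta
print(joint_mean)   # a precision-weighted combination of the two sample means
```

The joint posterior mean is the inverse-variance-weighted combination of the two sample means, so the lower-noise instrument dominates the estimate, as expected.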
### II.4 Samplers
In a realistic scenario, the form of the likelihood function is not always
analytically determinable and the parameter space usually has a large number
of dimensions. For these reasons, the evaluation of the posterior distribution
and the estimation of its integral are performed with stochastic techniques.
In particular, sampling methods have proven to be reliable and generic tools
for the analysis of non-analytical forms and multi-dimensional parameter
spaces Allison and Dunkley (2013); Veitch _et al._ (2015); Tichy (2015);
Handley _et al._ (2015), capable of giving robust and stable results.
Markov-chain Monte Carlo (MCMC) methods are suitable tools to perform PE,
exploring the parameter space through random-walks and collecting samples
along the evolution. Subsequently, the posterior distribution is estimated
using the set of independent samples of the parameter space. Nested sampling Skilling (2006), on the other hand, is an advanced algorithm capable of extracting posterior samples and performing an accurate estimation of the evidence, which is the key quantity for MS. In order to solve this task, each sample is assumed to be the representative of an isoprobability contour. Then, the evidence is
computed as the sum of the likelihood values weighted by the respective prior volumes, which are estimated by means of Bayesian calculations. The details of the
sampling methods are discussed in App. A and App. B.
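The MCMC idea can be sketched with a minimal random-walk Metropolis loop (illustrative only; bajes uses the more sophisticated samplers and proposal methods described in App. A–C):

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(1.0, 1.0, size=100)

def log_post(theta):
    # Flat prior plus Gaussian log-likelihood with known unit variance.
    return -0.5 * np.sum((data - theta) ** 2)

# Random-walk Metropolis: propose a Gaussian jump and accept it with
# probability min(1, p(theta')/p(theta)); the visited states form the chain.
theta, lp = 0.0, log_post(0.0)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

samples = np.array(chain[5000:])   # discard burn-in
print(samples.mean())              # close to the sample mean of the data
```

The histogram of `samples` approximates the posterior; as noted above, the quality of the result depends on the proposal scale, the correlation length of the chain and the discarded burn-in.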
## III Design of the code
bajes is a pure Python software that aims to provide a versatile and robust
framework for generic Bayesian inference within a simple and clear approach.
In order to achieve this task, the software relies on a modular, composable architecture and implements logically specialized objects. Furthermore, we choose to keep a lightweight setup with minimal dependencies on external
libraries. These properties make bajes a simple and general tool with a wide
range of applicability. The body of the bajes software consists of two components: the inf module, which implements the Bayesian logic, and the obs module, which contains the tools to manage and process
physical data.
The inf module is the Bayesian skeleton of the software and contains the
methods required to instantiate the model to be inferred. Sec. III.1 describes
the tools implemented in bajes.inf and the general workflow of the module. The
Bayesian approach consists of three fundamental stages Gelman (2008):
formulating a model, comparing the model with the data and inferring the
properties of the model. The goal of the inf module is to provide a flexible
and general interface capable of adapting itself to a broad variety of problems.
This structure promotes usability and applicability, supplying a comprehensive
and unique architecture that allows the user to tackle specific problems. In
order to reinforce these concepts, the software is developed with intuitive handling and simplicity of use in mind: providing only the necessary basic information, the user can easily set up a full Bayesian analysis. App. D
shows an example of a practical PE analysis with bajes tools.
In the context of data analysis, the statistical infrastructure has to be
flanked by the physical characterization of the experimental data with the
purpose of defining a full Bayesian model. This is necessary in order to
connect the statistical properties with the actual observable quantities.
Obviously, the specific physical model to be used depends on the nature of the
analyzed data, on the assumptions made to build the model, and, in general,
different physical events will require tailored treatments and specialized
tools. To address this, bajes provides the obs module, which is a container of
methods needed to characterize and handle specific physical observations. This
module is designed for the analysis of GWs and EM counterparts.
Currently, bajes.obs includes two sub-modules: gw and kn. The gw module
contains the tools necessary to deal with GW analysis and it is described in
Sec. IV. The kn module supplies a framework for the analysis of kilonova light curves generated by BNS mergers, following our earlier work Breschi
_et al._ (2021a). The implementation of the kn module is inspired by the
approach presented in Ref. Perego _et al._ (2017); Breschi _et al._ (2021a)
and it will be detailed in a follow-up work.
### III.1 Workflow
Figure 1: Schematic representation of the workflow described in Sec. III.1.
The scheme highlights the three stages of the Bayesian formalism: formulation
of the model, fitting the model to the data and inference of the model
properties. The gray box constituted by likelihood function and prior
assumptions represents the Bayesian model to be inferred. The dashed back-propagating line refers to the case in which the analysis is iterated with an improved description.
In order to fulfill the generic tasks of Bayesian inference, bajes provides
the tools necessary to instantiate a Bayesian model Gelman _et al._ (2020)
and to extract the statistical quantities of interest, such as posterior
distribution or evidence.
The bajes.inf module supplies Prior and Likelihood objects, and their combination defines the Bayesian model. The Prior provides the implementation of a generic prior distribution: this object can be instantiated with a list of Parameter objects, which define the parameter space of the model. It is
also possible to introduce constant and variable quantities to be carried
along the sampling. Subsequently, it is possible to introduce a likelihood
function by customizing the log_like method of a Likelihood object.
Furthermore, bajes implements JointPrior and JointLikelihood objects in
order to handle joint posterior distributions. Additionally, planned
extensions include hierarchical models (e.g. Hinton _et al._ , 2019; Loredo
and Hendry, 2019) and Bayesian methods to deal with error propagation.
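The composition of these objects can be sketched with a stripped-down toy version (the class names echo the text, but the signatures and internals are our own assumptions for illustration, not the actual bajes API):

```python
import numpy as np

class Parameter:
    """A named, bounded sampling variable (hypothetical, for illustration)."""
    def __init__(self, name, lower, upper):
        self.name, self.lower, self.upper = name, lower, upper

class Prior:
    """Flat prior over a list of Parameter objects."""
    def __init__(self, params):
        self.params = params
    def log_prior(self, x):
        inside = all(p.lower <= xi <= p.upper for p, xi in zip(self.params, x))
        return 0.0 if inside else -np.inf

class Likelihood:
    """The user customizes log_like, mirroring the pattern described above."""
    def __init__(self, data):
        self.data = data
    def log_like(self, x):
        (mu,) = x
        return -0.5 * np.sum((self.data - mu) ** 2)

# The Bayesian model is the combination of the two objects.
rng = np.random.default_rng(4)
prior = Prior([Parameter("mu", -5.0, 5.0)])
like = Likelihood(rng.normal(1.0, 1.0, size=20))
log_posterior = lambda x: prior.log_prior(x) + like.log_like(x)
print(log_posterior([1.0]) > log_posterior([4.0]))   # True: data centered near 1
```

The separation keeps the statistical logic (the prior bounds, the posterior composition) independent of the physics (the body of `log_like`), which is the design principle the text describes.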
Once the Bayesian model is defined, it is possible to fit the model to the
data and estimate its properties. The inf module provides a Sampler method that
wraps the provided model and initializes the requested sampling algorithm. As
mentioned in Sec. II.4, the sampler explores the parameter space aiming to
reconstruct the posterior distribution. In order to conduct an accurate
analysis, the Sampler objects rely on auxiliary functions such as the proposal
methods described in App. C: these are statistical techniques that aim to
extract independent samples and to guide the sampler towards more likely regions of the parameter space (see App. A and App. B).
When the Sampler completes the analysis, the algorithm returns the conditioned
probability of interest and the properties of the model can be inferred. This
information allows us to compute the statistical quantities that
characterize the model and make it possible to test competing hypotheses and
verify different assumptions. Moreover, from these results it is possible to
understand the limits of the adopted description; the model can then be improved and the workflow iterated with the revised model, improving the understanding and the modeling of the observed event. Figure 1
shows a schematic representation of the described workflow.
### III.2 Parallelisation
Figure 2: Scaling plot for the bajes pipeline (see Sec. V). Top panel: The
square markers are the measured speedup factors with respect to the serial
execution-time. The execution-times for the PTMCMC (orange squares) algorithm
are estimated performing $10^{3}$ iterations with $128\times 4$ tempered
chains. The nested sampling execution-times (pink squares for multi-threading
and green squares for MPI) are computed employing the dynesty software with
1024 live points and tolerance equal to 0.1. The blue solid line is the ideal
linear scaling and the yellow solid line is the theoretical scaling of nested
sampling. Bottom panel: Same quantities discussed in the previous panel
expressed in terms of efficiency.
By default, bajes analyses can be performed taking advantage of the parallel-threading multiprocess Python library. However, with this method the number of available processes is strictly limited by the size of a single machine, which could be a strong limitation for non-trivial problems. For this reason,
the bajes software implements a customized method for multi-nodes
communication based on the message passing interface (MPI) protocol.
For ideal scaling, the execution-time of a machine computation is inversely
proportional to the number of central processing units (CPUs), which leads to a linear increase of the speedup. However, in a realistic scenario, the
scaling performance of sampling techniques is affected by unavoidable serial computational steps (e.g. temperature swapping in parallel chains and live-point collection in nested sampling) and by the continuous
exchange of information between different processes, required to adapt the
evolution of the algorithm.
MCMC and nested sampling algorithms require separate treatments. The
performances of MCMC sampling are typically quantified in terms of proposal
acceptance and correlation length of the chains Patil _et al._ (2010);
Solonen _et al._ (2012); Foreman-Mackey _et al._ (2013); Veitch _et al._
(2015); Li and Marzouk (2014); Vousden _et al._ (2015), and generally the
overall execution-time is determined by several contributions, such as the
total number of chains, the complexity of the parameter space and the employed
proposal techniques. Estimations of MCMC execution-times Šukys and Kattwinkel
(2017); Robert _et al._ (2018); Morzfeld _et al._ (2019) have shown that the
efficiency drastically decreases for an increasing number of parallel chains.
On the other hand, the parallelization performance of nested sampling is well studied Feroz and Hobson (2008); Handley _et al._ (2015); Brewer and
Foreman-Mackey (2016); Higson _et al._ (2018); Smith _et al._ (2020) and the
theoretical speedup factor $S_{\rm NS}$ of this algorithm is expressed as a
sub-linear scaling,
$S_{\rm NS}(n_{\rm live},n_{\rm cpu})=n_{\rm
live}\cdot\log\left(1+\frac{n_{\rm cpu}}{n_{\rm live}}\right)\,.$ (8)
For $n_{\rm live}\gg n_{\rm cpu}$, the values predicted by Eq. (8) are
comparable with a linear trend.
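Eq. (8) can be evaluated directly to see how quickly the nested-sampling speedup departs from the linear trend (a short sketch using $n_{\rm live}=1024$, the number of live points employed in the scaling tests of this section):

```python
import numpy as np

def speedup_ns(n_live, n_cpu):
    # Theoretical nested-sampling speedup, Eq. (8).
    return n_live * np.log(1.0 + n_cpu / n_live)

n_live = 1024   # as in the scaling tests of this section
for n_cpu in (1, 16, 128, 1024):
    print(n_cpu, round(speedup_ns(n_live, n_cpu), 1))
# 1 -> 1.0, 16 -> 15.9, 128 -> 120.6, 1024 -> 709.8:
# nearly linear for n_cpu << n_live, degrading to n_live*log(2) at n_cpu = n_live.
```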
Figure 2 shows the measured speedup factors in the execution-time as a
function of the number of CPUs for different samplers and different
parallelization methods. The speedup factors are computed with respect to the
execution-time measured from the serial job. The execution-times are estimated
from the bajes GW pipeline (see Sec. V) using GW150914 Abbott _et al._
(2016a) as target signal (see Sec. VII.1). Moreover, Figure 2 shows the
respective efficiency rates, that encode the deviation from the ideal linear
scaling. The efficiency is computed as the percentage ratio between the
measured speedup factor and the number of employed CPUs. Ideal linear scaling
has an efficiency of 100%.
For the parallel-tempered MCMC (PTMCMC) algorithm implemented in bajes, we
estimate the speedup performing $10^{3}$ iterations with $128\times 4$
tempered chains, while, for nested sampling, we employ the dynesty software
with 1024 live points and tolerance equal to 0.1. The PTMCMC is not optimal in
terms of execution-time scaling, mainly due to the serial swapping routine.
However, it gives acceptable scaling performance with efficiency $\gtrsim
40\%$ up to $n_{\rm cpu}\simeq 16$ using multi-threading methods. The results
with MPI are worse than with multi-threading due to the data communication.
Regarding the nested sampling, for a very small number of processes, roughly
$n_{\rm cpu}\lesssim 2$, the multi-threading method gives more efficient
results, since the MPI protocol requires additional time for data
communication. For an increasing number of CPUs, roughly $n_{\rm cpu}\gtrsim
6$, the two parallelization methods give comparable results. However, the
strength of MPI parallelization is the capability of accessing multiple CPUs
located in different physical machines: the MPI interface implemented in bajes
gives an efficiency greater than 70% up to $n_{\rm cpu}\simeq 100$, which is the typical size of a standard PE job.
### III.3 Dependencies
The bajes software is compatible with Python 3.7 or higher versions, and it
can be easily installed using the standard Python setup tools. It relies on
essential modules, preferring those that can be easily installed via pip. The
bajes software mandatory requires the following libraries: numpy Harris _et
al._ (2020) for algebraic computations and array managing, scipy Virtanen _et
al._ (2020) for advanced mathematical tools, and astropy Price-Whelan _et
al._ (2018) is invoked for astrometric computations. However, in order to
execute the pipeline supplied with bajes (see Sec. V), other libraries might
be required: gwpy Macleod _et al._ ; Macleod _et al._ (2019) is used to
access the GWOSC archive Abbott _et al._ (2019d); Trovato (2020); LIGO
Scientific and Virgo Collaboration (a), matplotlib Hunter (2007) and corner
Foreman-Mackey (2016) are employed for plotting. Moreover, if the MPI
parallelisation is requested, the software needs the installation of the mpi4py
library Dalcín _et al._ (2005); Dalcin _et al._ (2008, 2011).
In order to perform the sampling, bajes implements a PTMCMC algorithm based on
ptemcee Vousden _et al._ (2015) or can use additional external packages. In
particular, we interface with the MCMC sampler emcee Foreman-Mackey _et al._ (2013) and
the nested sampling algorithms of cpnest Pozzo and Veitch and dynesty Speagle
(2020).
## IV Gravitational-waves
The bajes.obs.gw module contains the functions and the tools needed to deal
with gravitational-wave (GW) transients, which are mainly based on signal-processing and matched-filtering techniques Abbott _et al._ (2020b).
Ground-based GW detectors measure the local perturbations of the spacetime as
time series. Then, if a physical GW transient is believed to be recorded
in the data, the detector output $d(t)$ is assumed to be the sum of the noise
contribution $n(t)$ and the GW strain $s(t)$, such as
$d(t)=n(t)+s(t)\,.$ (9)
If $n(t)$ satisfies the conditions of Gaussianity and stationarity, and if we have a template $h(t)$ able to reproduce the real signal $s(t)$, then it is possible to filter out the noise contribution, revealing the presence of a
signal in the observed data. It is useful to introduce the inner product
between two time-domain series, $a(t)$ and $b(t)$, as
$(a|b)=4\Re\int_{0}^{\infty}\frac{a^{*}(f)\,b(f)}{S_{n}(f)}\,\text{d}f\,,$
(10)
where $a(f)$ is the Fourier transform of the time series $a(t)$,
$a(f)=\int_{-\infty}^{+\infty}a(t)\,e^{-2\pi{\rm i}ft}\,\text{d}t\,,$ (11)
and analogously for $b(f)$, while $S_{n}(f)$ is the power spectral density
(PSD) of the noise $n(t)$.
In order to perform inference on GW data, it is necessary to provide an
infrastructure capable of processing data segments, characterizing the noise contamination, localizing sources and detectors, and generating waveform templates. In the following sections, we discuss how these tasks are achieved
by the obs.gw module.
### IV.1 Time and frequency series
A realistic portion of data measured by an interferometer is a time series
with constant sampling rate $F_{\rm s}$ and finite duration $T$. The
restriction to a finite domain leads us to approximate Eq. (10) and Eq. (11) numerically, taking advantage of the fast Fourier transform (FFT) algorithm Brigham and Morrow (1967). Within this framework, the time step
$\Delta t=1/F_{\rm s}$ and the duration $T$ of the time-domain series are
reflected in the spacing $\Delta f=1/T$ of the frequency bins and in the
maximum frequency accessible from the data $f_{\rm Nyq}=F_{\rm s}/2$, known as
the Nyquist frequency. Then, we can approximate Eq. (10) as
$(a|b)\approx\frac{4}{T}\,\Re\sum_{i}\frac{a^{*}(f_{i})\,b(f_{i})}{S_{n}(f_{i})}\,,$
(12)
where $f_{i}=i\cdot\Delta f$ and the index $i$ takes integer values from 0 to
$F_{\rm s}T/2$. Generally, this summation is performed on a restricted
frequency band, identified by lower and upper cutoff frequencies,
$f_{\rm min}$ and $f_{\rm max}$ respectively, in order to neglect irrelevant
portions of the spectrum. From Eq. (10), or its approximation Eq. (12), it is
possible to introduce the signal-to-noise ratio (SNR) as
$\rho=\frac{(d|h)}{\sqrt{(h|h)}}\,,$ (13)
which estimates the power of the signal $h(t)$ contained in the data $d(t)$,
weighted by the variance of the background noise.
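As an illustration, Eq. (12) and Eq. (13) translate directly into a few lines of numpy. The helper names below are ours, not part of the bajes API, and the arrays are assumed to be one-sided frequency-domain series already restricted to the band $[f_{\rm min},f_{\rm max}]$:

```python
import numpy as np

def inner_product(af, bf, psd, T):
    """Discrete inner product of Eq. (12): (a|b) = 4/T * Re sum a* b / Sn.

    af, bf : frequency-domain series on the same frequency grid;
    psd    : the noise PSD Sn evaluated on that grid;
    T      : duration of the time-domain segment.
    """
    return 4.0 / T * np.real(np.sum(np.conj(af) * bf / psd))

def snr(df, hf, psd, T):
    """Matched-filter SNR of Eq. (13): rho = (d|h) / sqrt((h|h))."""
    return inner_product(df, hf, psd, T) / np.sqrt(inner_product(hf, hf, psd, T))
```

For $d=h$ the SNR reduces to the optimal value $\sqrt{(h|h)}$, which provides a quick sanity check of the implementation.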
bajes implements a Series object designed to manage time-domain and frequency-
domain series. This instance stores the input series and computes the FFT (or
inverse FFT) of the given data, in order to provide both representations of
the series. The Series object also supplies an interface capable of performing
tapering, filtering and padding of the input series: we make use of the Tukey
window for the tapering, while the filtering is performed using a Butterworth
filter. Furthermore, the Series object implements summation and multiplication
between objects of the same type, defined in the frequency domain, and
contains methods to compute inner products and SNRs.
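A minimal sketch of the conditioning steps named above (Tukey tapering followed by Butterworth filtering), built from standard scipy tools rather than the actual Series interface, whose signatures we do not reproduce here:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.signal.windows import tukey

def condition(strain, fs, flow=20.0, alpha=0.1):
    """Taper the segment edges with a Tukey window, then high-pass it
    with a 4th-order Butterworth filter at flow (Hz).

    alpha is the fraction of the window occupied by the cosine tapers.
    """
    win = tukey(len(strain), alpha=alpha)
    b, a = butter(4, flow / (0.5 * fs), btype="highpass")
    # filtfilt applies the filter forward and backward, giving zero phase shift
    return filtfilt(b, a, strain * win)
```

The tapering suppresses spectral leakage at the segment edges, while the high-pass filter removes the low-frequency content below the analysis band.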
### IV.2 Noise characterisation
As shown in Eq. (9), the measured data $d(t)$ are intrinsically related to
the noise fluctuations $n(t)$. The noise of a GW detector is represented by
stochastic fluctuations Abbott _et al._ (2020b) that propagate to the output.
The primary noise sources in a ground-based interferometer are gravity
gradients and seismic motions Losurdo _et al._ (2001); Daw _et al._ (2004);
Accadia _et al._ (2012), thermal agitation Harry _et al._ (2002); Somiya
(2012), quantum perturbations Lyons _et al._ (2000); Buonanno _et al._
(2003), and internal optomechanical resonances Nachman _et al._ (1997); Bond
_et al._ (2017). Moreover, the time series recorded by a GW detector are also
affected by external non-gravitational signals Coughlin _et al._ (2018), such
as the AC power grid transmission, responsible for the 60 Hz peak in the LIGO
detectors and the 50 Hz peak in Virgo. The noise fluctuations are assumed to be Gaussian
distributed and stationary on a relatively large time-scale Abbott _et al._
(2020b). Then, the PSD describes the distribution of the noise power over the
frequency components, and it can be computed as
$E\Big{[}|n(f)|^{2}\Big{]}=\frac{T}{2}\,S_{n}(f)\,,$ (14)
where the expectation is computed with the temporal average over a period $T$.
In other words, the PSD characterizes the uncertainty of the observed data in
the frequency domain as the variance of the associated noise spectrum. The
amplitude spectral density (ASD) is usually defined as the square root of the
PSD, $\sqrt{S_{n}(f)}$.
The PSD is a key quantity for the evaluation of the inner product in Eq. (10), since it
describes the weight of each frequency component. It follows that a full
characterization of the noise sources and a proper estimation of its
contribution are essential in order to perform accurate measurements of GW
transients. A general tool for estimating the PSD is Welch's method Welch
(1967), which averages the spectral densities computed on chunks
of the full noise segment. However, this is not the only technique suitable
for this task Blais (1996); Cornish and Littenberg (2015).
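The following snippet illustrates Welch's method with scipy.signal.welch on white noise, for which the expected one-sided PSD is flat, $S_{n}=2\sigma^{2}/F_{\rm s}$. This is an independent illustration, not the bajes implementation:

```python
import numpy as np
from scipy.signal import welch

fs = 4096.0
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, int(64 * fs))   # 64 s of unit-variance white noise

# Welch's method: average the periodograms of 4 s chunks
# (Hann window and 50% overlap by default)
freqs, psd = welch(noise, fs=fs, nperseg=int(4 * fs))

# for white noise of variance sigma^2, the one-sided PSD is Sn = 2 sigma^2 / fs
expected = 2.0 / fs
```

Note that the frequency resolution of the estimate is set by the chunk length (here $\Delta f = 0.25$ Hz), not by the full segment duration.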
For data analysis purposes, bajes implements a Noise object. This component
can estimate the PSD of a given time-domain noise series using Welch's method,
generate artificial segments of stationary, Gaussian noise from a given PSD,
and provides methods for PSD interpolation. Figure 3 shows a comparison of
artificial noise segments
generated with bajes and with the pycbc routines Usman _et al._ (2016); Biwer
_et al._ (2019); Nitz _et al._ (2018, 2021), where the total length of the
artificial segment is 1024 s. The histograms and the PSDs show that the
generated noise fluctuations respect the conditions of Gaussianity and
stationarity with a frequency spectrum described by the requested PSD.
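A minimal sketch of how a stationary, Gaussian noise segment can be drawn from a given PSD, following Eq. (14); this is our own illustration, and the actual interface of the Noise object may differ:

```python
import numpy as np

def colored_noise(psd_func, T, fs, rng):
    """Generate a Gaussian, stationary noise segment with one-sided PSD Sn(f).

    Each frequency bin receives an independent complex Gaussian with
    E[|n(f)|^2] = (T/2) Sn(f), cf. Eq. (14); the inverse FFT (times fs, to
    match the continuum convention of Eq. (11)) maps back to the time domain.
    """
    n = int(T * fs)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    sigma = np.sqrt(T * psd_func(freqs) / 4.0)    # std of each quadrature
    nf = sigma * (rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size))
    return np.fft.irfft(nf, n=n) * fs
```

For a flat PSD $S_{n}=2\sigma^{2}/F_{\rm s}$ the resulting time series has variance $\sigma^{2}$, which provides a direct consistency check.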
Figure 3: Top left panel: Comparison of artificial noise segment produced
using bajes (blue) and pycbc (orange). The segments are computed using LIGO
design sensitivity P1200087 Abbott _et al._ (2018b) with a lower cutoff-
frequency at 20 Hz and a sampling rate of 4096 Hz. The panel shows a 2 s
segment extracted from a segment with total duration of $T=1024~{}{\rm s}$.
Top right panel: Histograms of the time-domain samples computed using the
whole artificial segment of length $T=1024~{}{\rm s}$. Bottom panel: ASDs
reconstructed from the artificial noise segments using the Welch's method. The
spectra are computed using the whole segments with a chunk length of 4 s and
an overlap fraction of 99%. The black dashed line represents the original ASD.
### IV.3 Ground-based interferometers
The current ground-based observatories for GW detection are L-shaped Michelson
interferometers with Fabry-Perot cavities Dooley (2015); Akutsu _et al._
(2019). Each arm has a length of $L\approx 3{-}4~{}{\rm km}$, depending on the
actual detector Aasi _et al._ (2015); Acernese _et al._ (2015), and contains
two mirrors acting as test masses. The detector performs
measurements of the gravitational strain as a difference in length of the
orthogonal arms,
$d(t)=\frac{\Delta L(t)}{L}\,,$ (15)
where $\Delta L=\Delta L_{x}-\Delta L_{y}$ is the difference between the
displacements along the two orthogonal arms. The projection of the signal on
the detector can be computed from the GW polarization components
$h_{+,\times}$ as
$h(t)=D_{ij}h_{ij}(t)=F_{+}h_{+}(t)+F_{\times}h_{\times}(t)\,,$ (16)
where $D_{ij}$ is the detector tensor, which depends on the geometry of
the interferometer, while $F_{+,\times}$ are the antenna pattern functions for
each polarization. The antenna patterns $F_{+,\times}$ characterize the
detector sensitivity in the different regions of the sky for every given time
Raymond and Farr (2014).
Generally, in standard observing conditions, the observations of GW transients
are performed simultaneously by a worldwide network of ground-based
interferometers. Thanks to this, it is possible to correlate strains observed
independently in different locations, improving the estimation of the
astrophysical origin of the transients Nitz _et al._ (2019); Pankow _et al._
(2018).
In order to project the expected signal on the detector and to estimate the
light travel time between the detectors of the network, it is necessary to
localize each GW observatory in a fixed frame. For these tasks, bajes provides
a Detector class able to characterize a ground-based interferometer. This
object is identified with the coordinates of the site of the interferometer
(latitude, longitude and altitude) and the angles of the two arms (azimuth and
tilt). It is also possible to initialize the Detector object with precomputed
detector configurations using the two-character string identifying a ground-based
interferometer, e.g. H1 for LIGO-Hanford Aasi _et al._ (2015), L1 for LIGO-
Livingston Aasi _et al._ (2015), V1 for Virgo Acernese _et al._ (2015), G1
for GEO600 Luck _et al._ (2010); Dooley (2015), K1 for KAGRA Aso _et al._
(2013); Akutsu _et al._ (2020) and ET for Einstein Telescope (configuration
D) Punturo _et al._ (2010); Hild _et al._ (2011).
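As an illustration of the geometric task delegated to the Detector class, the snippet below computes the arrival-time difference of a plane wavefront between two sites. It uses a spherical-Earth approximation and hypothetical helper names, not the bajes implementation (which would additionally build the propagation direction from the sky position and the GPS time):

```python
import numpy as np

C_SI = 299792458.0     # speed of light [m/s]
R_EARTH = 6.371e6      # mean Earth radius [m]; spherical-Earth approximation

def site_position(lat, lon, alt=0.0):
    """Earth-fixed Cartesian position of a site, from latitude, longitude
    (radians) and altitude (metres)."""
    r = R_EARTH + alt
    return r * np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

def time_delay(pos1, pos2, direction):
    """Arrival-time difference of a plane wavefront at pos2 relative to pos1,
    given the (unit) propagation direction of the wave."""
    return np.dot(pos2 - pos1, direction) / C_SI
```

The delay is bounded in magnitude by the light travel time along the baseline, e.g. about 10 ms between the two LIGO sites.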
### IV.4 Waveform templates
The last ingredient necessary to complete the framework is a waveform
template, i.e. a model able to compute the expected representation of the
signal $h(t,\bm{\theta})$ (or ${h}(f,\bm{\theta})$) for every given set of
parameters $\bm{\theta}$. The Waveform object is a class that accesses the
methods available in the bajes.obs.gw.approx sub-module, and computes the
expected GW polarization components for every given set of parameters. The
frequency-domain waveform in Eq. (16) can be written in terms of the amplitude
$A(f)$ and the phase $\phi(f)$,
$h(f)=A(f)\,e^{-{\rm i}\big{[}\phi_{0}+2\pi t_{0}f+\phi(f)\big{]}}\,,$ (17)
where $\phi_{0}$ and $t_{0}$ are respectively phase and time references.
bajes directly implements and interfaces with all the most advanced GW
templates for quasi-circular compact binary mergers, and includes state-of-the-art
templates for eccentric and hyperbolic binary black hole (BBH) mergers
Chiaramello and Nagar (2020); Nagar _et al._ (2020a) as well as for BNS
postmerger Breschi _et al._ (2019b). In particular:
* •
TaylorF2: The post-Newtonian (PN) framework Blanchet _et al._ (1990); Faye
_et al._ (2012); Levi and Steinhoff (2016a, b); Mishra _et al._ (2015); Levi
and Steinhoff (2016c) represents a milestone for the description of the two-
body problem. This approach solves the dynamical evolution of a compact binary
with a perturbative method assuming low velocities and weak field
approximations, which are reflected in the condition $v=(\pi\,GMf)^{1/3}\ll
c$, where $v$ is the characteristic velocity in the binary, $M=m_{1}+m_{2}$ is
the total mass and $f$ is the GW frequency. The exact analytic solution of the
gravitational radiation emitted by a point-particle compact binary is known up
to the 3.5PN order. bajes also implements a pseudo-5.5PN accurate description
of the point mass baseline, as derived in Ref. Messina _et al._ (2019). Pure
tidal corrections are implemented up to 7.5PN Vines _et al._ (2011); Damour
_et al._ (2012) and include the recently computed tail terms (6.5, 7.5 PN) of
Henry _et al._ (2020), whereas spin-quadrupole terms are included up to 3.5PN
Nagar _et al._ (2019a).
* •
TEOBResumS: The effective-one-body (EOB) formalism Buonanno and Damour (1999,
2000); Damour _et al._ (2000); Damour (2001); Damour _et al._ (2008, 2015);
Bini _et al._ (2019, 2020a, 2020b) is a Hamiltonian approach that reduces
the two-body problem to a single orbiting object in an effective potential.
TEOBResumS Nagar _et al._ (2018) is an EOB approximant for spinning
coalescing compact binaries Damour and Nagar (2014); Nagar _et al._ (2017)
(black holes or neutron stars) inspiralling along generic orbits. It includes
tidal effects Bini and Damour (2014); Bernuzzi _et al._ (2015a); Akcay _et
al._ (2019) and subdominant modes Nagar _et al._ (2019b, 2020b) up to
$\ell=5$, as well as a description of precession-induced modulations up to
merger Akcay _et al._ (2020). When considering systems evolving along quasi-
circular orbits, the computational performance of the model is enhanced by
means of a post-adiabatic description of the early inspiral Nagar and Rettegno
(2019). Moreover, the model includes the implementations of hyperbolic
encounters Nagar _et al._ (2020a), eccentric mergers Chiaramello and Nagar
(2020) and a frequency domain approximation valid for quasi-circular BNS
coalescences, labeled as TEOBResumSPA Gamba _et al._ (2020a). The model is
publicly available at teo and used in bajes from the provided Python
interface.
* •
NRPM: BNS merger remnants are expected to be loud sources of GWs Takami _et
al._ (2015); Clark _et al._ (2016); Radice _et al._ (2018a), that convey
unique information on the equation of state of hot matter at extreme densities
Radice _et al._ (2017, 2018b); Abbott _et al._ (2018a). NRPM Breschi _et
al._ (2019b) is an analytical model based on numerical relativity BNS
simulations. The model is tuned on a set of simulations covering the range
$q\leq 1.5$ and $\Lambda_{1,2}\gtrsim 400$. For smaller values of the tidal
parameters, the model is identically zero, since no post-merger signal is
expected in these cases due to prompt black-hole formation Zappa
_et al._ (2018); Agathos _et al._ (2020); Bernuzzi _et al._ (2020). NRPM is
directly implemented in bajes and it can be attached to the TEOBResumS pre-
merger template, obtaining a complete model for the gravitational radiation
expected from BNS coalescences.
* •
MLGW: Machine learning tools can be employed to construct accurate
representations of GW signals. The mlgw package Stefano Schmidt and Walter Del
Pozzo (2020); Schmidt _et al._ (2020) takes advantage of these methods to
generate fast and reliable GW templates for BBH coalescences. The model is
composed of contributions extracted with principal component analysis and a
linear combination of regression models, which maps the orbital parameters of
the black holes to the reduced representation of the wave. A complete model
includes two principal component models, for both phase and amplitude of the
wave, and a mixture of regression models for each of the principal components
considered. The algorithm is trained on time-domain models and tested only for
aligned-spin BBHs. Currently, the released software provides the
representations of EOB templates, TEOBResumS Nagar _et al._ (2018) and
SEOBNRv4 Bohé _et al._ (2017).
* •
GWSurrogate: The templates provided by the gwsurrogate package Blackman _et
al._ implement fast waveforms based on reduced-order models Field _et al._
(2014) trained on numerical relativity simulations. The NRSur7dq4 model Varma
_et al._ (2019a) is a precessing extension of the model presented in Ref.
Blackman _et al._ (2017) trained on a set of simulations with $q\leq 4$ and
$\chi_{1},\chi_{2}\leq 0.8$ that contains all higher-order modes with
$\ell\leq 4$. On the other hand, NRHybSur3dq8 Varma _et al._ (2019b) and its
tidal version Barkett _et al._ (2020) are calibrated using hybridized
waveforms in order to increase the number of orbits of the training templates,
improving the quality of the approximation. This model is tuned over a wider
range of mass ratios, $q\leq 8$, but it does not include precession
contributions.
* •
LALSimulation: The LIGO Algorithm Library LALSuite LIGO Scientific
Collaboration (2018a) is the official LIGO-Virgo Collaboration software and it
provides the largest variety of waveform template models. bajes implements the
waveform generator of LALSimulation, a module of LALSuite. For the results of
this paper, we make use of this implementation in order to employ IMRPhenomPv2
approximant Husa _et al._ (2016); Khan _et al._ (2016) and its tidal
extension, IMRPhenomPv2_NRTidal Dietrich _et al._ (2017). A list of all the
approximants available through LALSimulation can be found at LIGO Scientific
Collaboration (2018b).
## V Pipeline
bajes provides a customizable and automated pipeline for the analysis of GW
transients and EM counterparts. In this section, we discuss the model
implemented to perform PE analyses of GW transients with bajes.
In the context of GW data analysis, we introduce the working hypotheses that
define the employed Bayesian model. We call signal hypothesis $H_{\rm S}$ the
assumption that the data contain a non-vanishing GW transient, i.e.
$d(t)=n(t)+s(t)$ with $s(t)\neq 0$. On the other hand, the noise hypothesis
$H_{\rm N}$ is the condition in which the recorded data correspond to a pure
noise realization, i.e. $d(t)=n(t)$. Then, under the signal hypothesis, a GW
signal emitted by a quasi-circular compact binary coalescence (CBC) can be
fully characterized, according to General Relativity, by a set of 17
parameters. Precisely,
$\bm{\theta}_{\rm
cbc}=\\{m_{1},m_{2},\bm{\chi}_{1},\bm{\chi}_{2},\Lambda_{1},\Lambda_{2},D_{L},\iota,\alpha,\delta,\psi,t_{0},\phi_{0}\\}$
(18)
where:
* •
$m_{1,2}$ refer to the detector-frame masses of the two objects, taken with
the convention $m_{1}{\geq}\,m_{2}$;
* •
$\bm{\chi}_{1,2}$ are the dimensionless spin vectors,
$\bm{\chi}_{i}=\frac{c\,\textbf{S}_{i}}{Gm_{i}^{2}}\,,\quad\,i=1,2\,,$ (19)
where $\textbf{S}_{1,2}$ are the spins of the two objects, $c$ is the speed of
light and $G$ is the gravitational constant.
* •
$\Lambda_{1,2}$ are the dimensionless tidal polarizability parameters that
encode the quadrupolar deformability of the $i$-th object under the effect of
an external force,
$\Lambda_{i}=\frac{2}{3}\,k_{2,i}\,\left(\frac{c^{2}\,R_{i}}{Gm_{i}}\right)^{5}\,,\quad\,i=1,2\,,$
(20)
where $k_{2,i}$ and $R_{i}$ are respectively the second Love number and the
radius of the $i$-th object ($k_{2}$ is identically zero for black holes).
* •
$D_{L}$ is the luminosity distance of the source from the observer;
* •
$\iota$ is the angle between the line of sight and the total angular momentum
of the system; it takes values in the range $[0,\pi]$;
* •
$\\{\alpha,\delta\\}$ are respectively right ascension and declination angles
that identify the sky position of the source;
* •
$\psi$ is the polarisation angle; it takes values in the range $[0,\pi]$;
* •
$\\{t_{0},\phi_{0}\\}$ are respectively reference time and reference phase.
The sampling is performed in terms of the chirp mass $\mathcal{M}$ and the mass
ratio $q$,
$\mathcal{M}=\frac{(m_{1}m_{2})^{3/5}}{(m_{1}+m_{2})^{1/5}}\,,\quad
q=\frac{m_{1}}{m_{2}}\geq 1\,,$ (21)
instead of the single mass components, since these variables appear less correlated for
this type of signal Veitch and Vecchio (2010); Veitch _et al._ (2015). For
spinning binary mergers, we define the effective spin parameter $\chi_{\rm
eff}$ as
$\chi_{\rm eff}=\frac{m_{1}\chi_{1,z}+m_{2}\chi_{2,z}}{m_{1}+m_{2}}\,,$ (22)
which encodes the aligned-spin contribution and can be measured with narrower
uncertainties than the single spin components Ng _et al._ (2018).
Furthermore, in the context of BNS mergers, it is useful to introduce the
reduced tidal parameter $\tilde{\Lambda}$,
$\tilde{\Lambda}=\frac{16}{13}\left[\frac{(m_{1}+12m_{2})m_{1}^{4}\Lambda_{1}}{M^{5}}+(1\leftrightarrow
2)\right]\,$ (23)
and the asymmetric tidal parameter ${\delta\tilde{\Lambda}}$,
${\delta\tilde{\Lambda}}=\left[1-\frac{7996}{1319}\frac{m_{2}}{m_{1}}-\frac{11005}{1319}\left(\frac{m_{2}}{m_{1}}\right)^{2}\right]\frac{m_{1}^{6}\Lambda_{1}}{M^{6}}-(1\leftrightarrow
2)\,,$ (24)
where $M=m_{1}+m_{2}$. The tidal parameters ${\tilde{\Lambda}}$ and
${\delta\tilde{\Lambda}}$ are respectively proportional to the leading order
and the next-to-leading-order tidal contributions of the PN expansion.
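The reparametrizations of Eqs. (21)-(23) are straightforward to implement; the following helpers are ours, not the bajes API, and reproduce e.g. $\chi_{\rm eff}=0.2$ for $q=2$, $\chi_{1,z}=0.3$, $\chi_{2,z}=0$:

```python
import numpy as np

def chirp_mass(m1, m2):
    """Chirp mass of Eq. (21)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def chi_eff(m1, m2, chi1z, chi2z):
    """Effective spin of Eq. (22): mass-weighted aligned-spin combination."""
    return (m1 * chi1z + m2 * chi2z) / (m1 + m2)

def lambda_tilde(m1, m2, l1, l2):
    """Reduced tidal parameter of Eq. (23), leading-order PN tidal combination."""
    M = m1 + m2
    return 16.0 / 13.0 * ((m1 + 12.0 * m2) * m1 ** 4 * l1
                          + (m2 + 12.0 * m1) * m2 ** 4 * l2) / M ** 5
```

For equal masses and equal tidal parameters, $\tilde{\Lambda}$ reduces to the common value $\Lambda_{1}=\Lambda_{2}$, a useful sanity check.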
In GW analyses, it is common to label $\bm{\theta}_{\rm
int}=\\{m_{1},m_{2},\bm{\chi}_{1},\bm{\chi}_{2},\Lambda_{1},\Lambda_{2}\\}$ as
intrinsic parameters, since they affect the physical dynamics of the system,
while the extrinsic parameters $\bm{\theta}_{\rm
ext}=\\{D_{L},\iota,\alpha,\delta,\psi,t_{0},\phi_{0}\\}$ are related to the
observed signal by simple geometrical arguments. Moreover, it is possible
to include in the pipeline further parameters in order to take into account
the calibration of the input quantities, such as calibration envelopes and PSD
uncertainties. For a detailed discussion about these topics, see Sec. V.4.
In the following sections, we present the implementations available in the
bajes GW pipeline.
### V.1 Data segments
The default GW routine implemented in bajes is designed for binary mergers
analyses. In general, the pipeline is able to access the open-source
database of GWOSC Abbott _et al._ (2019d); Trovato (2020); LIGO Scientific
and Virgo Collaboration (a), to all the data released with GWTC-1 Abbott _et
al._ (2019a) and to the expected PSD curves for current and next-generation
detectors Punturo _et al._ (2010); Hild _et al._ (2011).
The input data to be analyzed by the pipeline can be gathered in different
ways. The GW pipeline provides a customized injection generator capable of
producing artificial data given a prescribed set of parameters and the detector
sensitivity curves. Moreover, the bajes pipeline allows access to the
observational data recorded by the LIGO-Virgo interferometers Abbott _et al._
(2018b); Aasi _et al._ (2015); Acernese _et al._ (2015) from the GWOSC,
specifying the central value of the GPS time and the duration of the segment.
Once the data are gathered, the pipeline initializes the Likelihood
function and the Prior with the requested parameters belonging to the set
$\bm{\theta}_{\rm cbc}$, and it passes these arguments to the requested
sampler which performs the Bayesian inference.
### V.2 Prior distributions
The prior distribution for the masses is chosen flat in the components
$\\{m_{1},m_{2}\\}$, which can be written in terms of $\\{\mathcal{M},q\\}$ as
$p(\mathcal{M},q|H_{\rm
S})=\frac{\mathcal{M}}{\Pi_{\mathcal{M}}\Pi_{q}}\,\left(\frac{1+q}{q^{3}}\right)^{2/5}\,,$
(25)
where $\Pi_{\mathcal{M}}$ and $\Pi_{q}$ are the prior volumes limited by the
bounds $[\mathcal{M}_{\rm min},\mathcal{M}_{\rm max}]$ in chirp mass and
$[1,q_{\rm max}]$ in mass ratio,
$\begin{split}\Pi_{\mathcal{M}}&=\frac{1}{2}\left(\mathcal{M}^{2}_{\rm
max}-\mathcal{M}_{\rm min}^{2}\right)\,,\\\
\Pi_{q}&=5\left[\frac{2^{2/5}\sqrt{\pi}\,\Gamma\left(\frac{4}{5}\right)}{\Gamma\left(\frac{3}{10}\right)}-\frac{{}_{2}F_{1}\left(-\frac{2}{5},-\frac{1}{5},\frac{4}{5},-q_{\rm
max}\right)}{q_{\rm max}^{1/5}}\right]\,,\\\ \end{split}$ (26)
where ${}_{2}F_{1}(a,b,c;z)$ is the hypergeometric function and $\Gamma(x)$ is
the Euler function.
The spin vectors can be written in the polar frame of the binary as
$\bm{\chi}_{i}=\\{\chi_{i},\vartheta_{i},\varphi_{i}\\}$, where $\chi_{i}$ is
the spin magnitude, $\vartheta_{i}$ is the tilt angle between $\bm{\chi}_{i}$
and the orbital angular momentum L of the binary, and $\varphi_{i}$ is the
azimuthal angle. The prior distribution for these quantities is
specified by the maximum value of spin magnitude $\chi_{\rm max}\geq 0$ and it
can be chosen among the following:
* •
Isotropic prior with precessing spins: the prior on the angular components
$\\{\vartheta_{i},\varphi_{i}\\}$ is isotropic over the solid angle, while the
spin magnitude is uniformly distributed in the range $[0,\chi_{\rm max}]$,
$p(\chi_{i},\vartheta_{i},\varphi_{i}|H_{\rm
S})=\frac{\sin\vartheta_{i}}{4\pi\chi_{\rm max}}\,.$ (27)
* •
Isotropic prior with aligned spins: this case is identical to the isotropic
one except for the assumption of aligned spins, $\vartheta_{i}=0,\pi$. The
$xy$ components of the spin vectors are marginalized, obtaining the form Lange
_et al._ (2018)
$p(\chi_{i,z}|H_{\rm S})=\frac{1}{2\chi_{\rm max}}\log\left|\frac{\chi_{\rm
max}}{\chi_{i,z}}\right|\,.$ (28)
* •
Volumetric prior with precessing spins: the distribution is taken uniform in
all Cartesian components, i.e. flat over the sphere with radius $\chi_{\rm
max}$. This prior can be written as
$p(\chi_{i},\vartheta_{i},\varphi_{i}|H_{\rm
S})=\frac{3}{4\pi}\,\frac{\chi_{i}^{2}\sin\vartheta_{i}}{\chi^{3}_{\rm
max}}\,.$ (29)
* •
Volumetric prior with aligned spins: the same as the volumetric case but with
aligned components; the marginalization over the $xy$ components leads to the form
$p(\chi_{i,z}|H_{\rm S})=\frac{3}{4}\,\frac{\chi^{2}_{\rm
max}-\chi_{i,z}^{2}}{\chi^{3}_{\rm max}}\,.$ (30)
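Sampling from these priors is simple; for instance, the volumetric prior with precessing spins of Eq. (29) corresponds to drawing spin vectors uniformly inside the sphere of radius $\chi_{\rm max}$. A sketch (the function name is ours):

```python
import numpy as np

def sample_volumetric_spins(chi_max, size, rng):
    """Draw spin vectors uniformly inside the sphere of radius chi_max (Eq. 29).

    chi = chi_max * u^(1/3) gives p(chi) proportional to chi^2; a uniform
    cos(theta) gives the isotropic sin(theta) measure; phi is uniform in [0, 2*pi).
    """
    chi = chi_max * rng.uniform(size=size) ** (1.0 / 3.0)
    cos_t = rng.uniform(-1.0, 1.0, size=size)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=size)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    return np.column_stack([chi * sin_t * np.cos(phi),
                            chi * sin_t * np.sin(phi),
                            chi * cos_t])
```

For a uniform ball of radius $R$ one has $E[\chi^{2}]=3R^{2}/5$ and a symmetric $\chi_{z}$ distribution, which provides a quick statistical check of the sampler.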
The prior distribution for the sky position parameters $\\{\alpha,\delta\\}$
is taken isotropic over the entire solid angle, such that $\alpha\in[0,2\pi]$
and $\delta\in[-\pi/2,+\pi/2]$,
$p(\alpha,\delta|H_{\rm S})=\frac{\cos\delta}{4\pi}\,,$ (31)
and analogously for the inclination $\iota$ in the range $[0,\pi]$,
$p(\iota|H_{\rm S})=\frac{\sin\iota}{2}\,.$ (32)
Regarding the luminosity distance, the prior support is specified by the
bounds $[D_{\rm min},D_{\rm max}]$, and the analytic form of the
prior can be chosen among the following:
* •
Volumetric prior: a general analysis assumes that the source is uniformly
distributed within the sphere centred around the detectors; then
$p(D_{L}|H_{\rm S})=\frac{3D_{L}^{2}}{D_{\rm max}^{3}-D_{\rm min}^{3}}\,.$
(33)
* •
Comoving-volumetric prior: in order to take into account the cosmological
expansion of the Universe, a prior uniformly distributed over the comoving
volume $V_{C}$ is a more suitable physically-motivated choice. Within this
assumption, the prior on the luminosity distance can be written as
$p(D_{L}|H_{\rm S})\propto\frac{\text{d}V_{C}}{\text{d}D_{L}}\,.$ (34)
The luminosity distance $D_{L}$ and the comoving volume $V_{C}$ are related
through the redshift $z$ for a fixed cosmological model; by default, bajes
acquires the values of the cosmological constants from Ref. Aghanim _et al._
(2020).
* •
Source-frame prior: as shown in Ref. Abbott _et al._ (2019c, 2020a), Eq. (34)
does not take into account contributions due to time dilatation. Then, we can
introduce a prior distribution uniformly distributed in the source-frame
volume as
$p(D_{L}|H_{\rm
S})\propto\frac{1}{1+z}\cdot\frac{\text{d}V_{C}}{\text{d}D_{L}}\,,$ (35)
where the factor $(1+z)^{-1}$ converts the merger rate from the source frame
to the detector frame.
* •
Log-uniform prior: it could be useful to introduce a uniform prior in $\log
D_{L}$, then
$p(D_{L}|H_{\rm S})=\frac{D_{L}^{-1}}{\log\left(D_{\rm max}/D_{\rm
min}\right)}\,.$ (36)
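The volumetric prior of Eq. (33) admits closed-form inverse-CDF sampling, as sketched below (an illustration, not the bajes implementation):

```python
import numpy as np

def sample_volumetric_distance(d_min, d_max, size, rng):
    """Inverse-CDF sampling of the volumetric prior p(D) ~ D^2 of Eq. (33).

    The CDF is (D^3 - d_min^3) / (d_max^3 - d_min^3), which inverts in
    closed form: D = cbrt(d_min^3 + u * (d_max^3 - d_min^3)), u ~ U(0,1).
    """
    u = rng.uniform(size=size)
    return np.cbrt(d_min ** 3 + u * (d_max ** 3 - d_min ** 3))
```

For $D_{\rm min}=0$ the prior mean is $3D_{\rm max}/4$, a quick check on the draws.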
For the remaining parameters, i.e. $\\{\psi,t_{0},\phi_{0}\\}$ and
$\\{\Lambda_{1},\Lambda_{2}\\}$ (if required), their prior distributions are
taken uniform within the provided bounds. Then, the overall prior is the
product of the priors of the single parameters.
### V.3 Likelihood function
The key ingredient of the inference is the likelihood function, which encodes
the capability of a given model $h(t,\bm{\theta}_{\rm cbc})$ to match the
observed data $d(t)$. For Gaussian and stationary noise $n(t)$, we expect the
mean of the noise fluctuations to be centered around zero, with a variance
described by the PSD in the frequency domain, i.e.
$p(n|H_{\rm N})\propto e^{-\frac{1}{2}(n|n)}\,.$ (37)
It follows that, under the signal hypothesis $H_{\rm S}$ and supposing that
we have a template $h(t,\bm{\theta}_{\rm cbc})$ capable of reproducing the
real signal $s(t)$ for a given set of $\bm{\theta}_{\rm cbc}$, the
likelihood function can be written in terms of the frequency-domain residuals between
the recorded data and the template, with the product defined in Eq. (10),
$p(d|\bm{\theta}_{\rm cbc},H_{\rm
S})=\frac{1}{\mathcal{N}}\,e^{-\frac{1}{2}(d-h|d-h)}\,,$ (38)
where $\mathcal{N}$ is the normalisation constant, which can be expressed in
terms of the PSD using the numerical approximation of Eq. (12),
$\mathcal{N}\approx\prod_{i}\sqrt{\frac{\pi\,T\,S_{n}(f_{i})}{2}}\,.$ (39)
Then, the Bayes factor of the signal hypothesis against the noise hypothesis
can be computed as
$\mathcal{B}^{\rm S}_{\rm N}=\frac{p(d|H_{\rm S})}{p(d|H_{\rm
N})}=\int_{\bm{\Theta}}\exp\left[\big{(}d\big{|}h(\bm{\theta}_{\rm
cbc})\big{)}-\frac{1}{2}\big{(}h(\bm{\theta}_{\rm
cbc})\big{|}h(\bm{\theta}_{\rm cbc})\big{)}\right]\,p(\bm{\theta}_{\rm
cbc}|H_{\rm S})\,\text{d}\bm{\theta}_{\rm cbc}\,.$ (40)
Combining Eq. (38) with Eq. (17), it is possible to write the explicit
dependence of the likelihood on the reference parameters
$\\{\phi_{0},t_{0}\\}$. Since these values have no physical relevance, we
marginalize the posterior distribution over $\\{\phi_{0},t_{0}\\}$
in order to increase the efficiency of the sampling. The marginalization over
$\phi_{0}$ can be computed analytically, and the solution can be written in
terms of the modified Bessel function of the first kind Veitch and Del Pozzo .
For the time shift $t_{0}$, the computation is semi-analytical, since the
likelihood is evaluated on an equally-spaced grid resorting to the
FFT computation Farr .
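The Gaussian likelihood of Eq. (38) and its analytic $\phi_{0}$ marginalization can be sketched as follows. The function names are ours, the normalisation constant $\mathcal{N}$ is omitted, and the $\phi_{0}$ average of $\exp[(d|h)]$ yields the modified Bessel function $I_{0}(|\langle d,h\rangle|)$ of the complex overlap:

```python
import numpy as np
from scipy.special import i0e

def overlap(af, bf, psd, T):
    """Complex-valued overlap 4/T * sum a* b / Sn; Eq. (12) keeps its real part."""
    return 4.0 / T * np.sum(np.conj(af) * bf / psd)

def logl(df, hf, psd, T):
    """Gaussian log-likelihood of Eq. (38), up to the normalisation constant."""
    r = df - hf
    return -0.5 * np.real(overlap(r, r, psd, T))

def logl_phase_marginalized(df, hf, psd, T):
    """phi_0-marginalised log-likelihood, up to the same normalisation."""
    dh = np.abs(overlap(df, hf, psd, T))
    dd = np.real(overlap(df, df, psd, T))
    hh = np.real(overlap(hf, hf, psd, T))
    # log I0(x) computed stably via the exponentially scaled Bessel function i0e
    return np.log(i0e(dh)) + dh - 0.5 * (dd + hh)
```

By construction, the marginalized value is invariant under a constant phase rotation of the template, and it never exceeds the maximum-likelihood value since $I_{0}(x)\leq e^{x}$.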
### V.4 Additional implementations
In order to perform accurate and reliable inferences of GW transients, the
pipeline requires refinements and auxiliary control systems. In this section
we discuss some of the additional tools implemented in the GW pipeline
supplied with bajes.
#### V.4.1 Calibration envelopes
The necessity of calibration envelopes Vitale _et al._ (2012) arises due to
imperfect knowledge of the interferometer response to differential arm length
changes Abbott _et al._ (2017d); Cahillane _et al._ (2017); Acernese _et
al._ (2018), which affects the transfer functions of the detector components,
introducing systematic errors that propagate to the recorded data. These
uncertainties are estimated by inspecting the detector control systems and
propagating the measurements into a frequency-dependent probability
distribution. Subsequently, the information on calibration errors must be
taken into account when inferring the astrophysical parameters of GW signals.
In order to achieve this task, it is useful to introduce two auxiliary
functions $\delta A(f)$ and $\delta\phi(f)$ that characterise respectively the
amplitude and the phase uncertainties of the measured data segments. Then, the
calibration envelopes $\\{\delta A(f),\delta\phi(f)\\}$ can be taken into
account in the Bayesian model as
$h(f)\to\big{[}1+\delta A(f)\big{]}\,e^{{\rm i}\delta\phi(f)}\,h(f)\,.$ (41)
This procedure is accomplished by specifying the values of the calibration
envelopes at predefined logarithmically-spaced frequency nodes $f_{j}$ and
linearly interpolating them over the frequency axis of interest. The calibration
parameters $\\{\delta A(f_{j}),\delta\phi(f_{j})\\}$ are introduced in the
sampling and estimated along with the signal parameters. The prior for
calibration envelopes $\\{\delta A(f_{j}),\delta\phi(f_{j})\\}$ is a
multivariate normal distribution with variance specified by the measured
calibration errors.
#### V.4.2 PSD uncertainties
The usage of a fixed estimated PSD might generate biases due to non-stationary
effects and unaccounted slow variations in the noise spectrum. It then becomes
necessary to take into account the uncertainty of the PSD estimate during
the inference of the properties of a GW signal. For this reason, the pipeline
allows the inclusion of PSD uncertainty weights $\eta_{j}$ Veitch
_et al._ (2015): the Fourier domain is divided into predefined logarithmically-spaced
bins $[f_{j},f_{j+1}]$ and the weights are included such that
$S_{n}(f)\to\eta_{j}\,S_{n}(f)\,,\quad{\rm for}\,f_{j}\leq f<f_{j+1}\,,$ (42)
where $\eta_{j}$ is taken to be constant within the respective frequency bin.
The full set of $\\{\eta_{j}\\}$ parameters, one for every frequency bin, is
introduced in the sampling and estimated along with the signal parameters. The
prior distribution for each PSD uncertainty weight $\eta_{j}$ is taken to be
normal with unit mean and variance $1/N_{j}$, where $N_{j}$ is the number of
data samples enclosed in the bin $[f_{j},f_{j+1}]$. This scheme has been shown
to improve the robustness of the GW inference Littenberg _et al._ (2013) and
offers a flexible model capable of quantifying the differences between the
estimated PSD and the spectrum of the analyzed data.
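A sketch of the weighting scheme of Eq. (42), with hypothetical helper names (the bin edges $f_{j}$ are assumed to be provided):

```python
import numpy as np

def apply_psd_weights(psd, freqs, f_edges, eta):
    """Apply Eq. (42): rescale Sn(f) by the constant weight eta_j of the
    (log-spaced) bin [f_j, f_{j+1}) that contains each frequency."""
    # searchsorted locates the bin index of each frequency; clip handles
    # frequencies outside the outermost edges
    idx = np.clip(np.searchsorted(f_edges, freqs, side="right") - 1,
                  0, len(eta) - 1)
    return np.asarray(eta)[idx] * psd
```

Setting all $\eta_{j}=1$ recovers the original PSD estimate, consistently with the unit-mean prior on the weights.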
## VI Injection studies
In this section, we show the results of a set of injection studies performed
with the bajes pipeline, carried out in order to test the sampling routines
within the GW infrastructure. An injection is a simulated GW signal that has been
added into a time-domain segment. For our studies, we generate artificial
noise segments according to a prescribed PSD, assuming Gaussian and
stationary fluctuations, as discussed in Sec. IV.2. Subsequently, the
artificial signal $h(t)$ is simulated according to the input parameters
$\bm{\theta}_{\rm cbc}$, projected on the requested detectors and included in
the data segment. Finally, the data are analyzed by the bajes pipeline
using the framework described above. The sensitivity curves employed
for these studies correspond to the noise spectra expected at design
sensitivity for current ground-based detector Aasi _et al._ (2015); Abbott
_et al._ (2018b); Harry (2010); Acernese _et al._ (2015); Aso _et al._
(2013); Akutsu _et al._ (2019) and for next-generation interferometers
Punturo _et al._ (2010); Hild _et al._ (2011). The properties of the
injected signals are described in the following paragraphs depending on the
particular kind of source.
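The injection procedure described above, $d=n+F_{+}h_{+}+F_{\times}h_{\times}$ (Eqs. (9) and (16)), reduces to a few lines; this is an illustrative sketch, not the bajes injection generator:

```python
import numpy as np

def inject(noise, hp, hx, fp, fx, start):
    """Add the projected signal h = F+ h+ + Fx hx (Eq. 16) into a
    time-domain noise segment (Eq. 9), beginning at sample index `start`.

    hp, hx : time-domain polarizations, already sampled at the segment rate;
    fp, fx : antenna-pattern values of the detector at the injection time.
    """
    d = noise.copy()
    s = fp * hp + fx * hx
    d[start:start + len(s)] += s
    return d
```

In practice the start index is chosen so that the merger lands at the requested GPS time within the segment.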
Table 1: Recovered parameters for the BBH injection studies. The signal
has been injected in H1+L1 using design sensitivity curves, with an overall
network SNR of 14. The data has been analyzed with the bajes PTMCMC sampling.
The reported values correspond to the medians with the 90% credible regions.
The last column reports the estimated logarithmic Bayes' factor and the
associated standard deviation.
Approximant | $\mathcal{M}$ | $q$ | $\chi_{1,z}$ | $\chi_{2,z}$ | $\chi_{\rm eff}$ | $D_{L}$ | $\iota$ | $\alpha$ | $\delta$ | $\log\mathcal{B}^{\rm S}_{\rm N}$ |
---|---|---|---|---|---|---|---|---|---|---
| $[{\rm M_{\odot}}]$ | | | | | $[{\rm Gpc}]$ | $[{\rm rad}]$ | $[{\rm rad}]$ | $[{\rm rad}]$ | |
Injected | 30.0 | 2.0 | 0.3 | 0.0 | 0.2 | 3.0 | 0.0 | 0.372 | 0.811 | – |
TEOBResumS | ${30.63}^{+1.84}_{-1.64}$ | ${1.56}^{+0.78}_{-0.49}$ | ${0.24}^{+0.36}_{-0.41}$ | ${0.17}^{+0.59}_{-0.63}$ | ${0.23}^{+0.14}_{-0.15}$ | ${2.49}^{+1.29}_{-1.17}$ | ${0.82}^{+2.05}_{-0.63}$ | ${2.17}^{+4.05}_{-2.13}$ | ${0.21}^{+1.10}_{-1.09}$ | ${73.29}^{+2.64}_{-2.64}$ |
IMRPhenomPv2 | ${30.34}^{+1.77}_{-1.69}$ | ${1.50}^{+0.83}_{-0.46}$ | ${0.17}^{+0.43}_{-0.43}$ | ${0.17}^{+0.57}_{-0.60}$ | ${0.19}^{+0.14}_{-0.16}$ | ${2.47}^{+1.30}_{-1.22}$ | ${0.80}^{+2.06}_{-0.62}$ | ${2.34}^{+3.88}_{-2.31}$ | ${0.17}^{+1.13}_{-1.05}$ | ${73.16}^{+2.66}_{-2.66}$ |
NRSur7dq4 | ${30.35}^{+1.75}_{-1.60}$ | ${1.56}^{+0.82}_{-0.53}$ | ${0.18}^{+0.37}_{-0.39}$ | ${0.17}^{+0.59}_{-0.63}$ | ${0.19}^{+0.56}_{-0.63}$ | ${2.49}^{+1.25}_{-1.23}$ | ${0.84}^{+2.02}_{-0.64}$ | ${2.11}^{+4.12}_{-2.07}$ | ${0.21}^{+1.10}_{-1.09}$ | ${73.11}^{+2.72}_{-2.72}$ |
#### VI.0.1 Binary black holes
The first example corresponds to an aligned-spin BBH coalescence with
intrinsic parameters {$\mathcal{M}=30~{}{\rm M_{\odot}}$, $q=2$,
$\chi_{1,z}=0.3$, $\chi_{2,z}=0.$} located at a luminosity distance
$D_{L}=3~{}{\rm Gpc}$ with inclination angle $\iota=0$. The signal is injected
such that the merger occurs at GPS time 1126259462.0 with a sampling rate of
4096 Hz and a signal length of $16~{}{\rm s}$. The data are analyzed using two
detectors, H1 and L1, with LIGO design sensitivity P1200087 Aasi _et al._
(2015); Abbott _et al._ (2018b); Harry (2010). The sky location of the source
corresponds to the position of maximum sensitivity for the detector H1,
$\\{\alpha=0.372,\delta=0.811\\}$. The injected signal is generated with
TEOBResumS waveform model (employing only the dominant mode) with a network
SNR of 14, corresponding to 11 in H1 and 9 in L1.
The recovery of the posterior distribution is performed with PTMCMC sampling
with 8 tempered ensembles and 128 chains per ensemble. Moreover, we requested
$8{\times}10^{3}$ burn-in iterations. The injected strain is analyzed in the
frequency domain from 20 Hz to 1 kHz employing three different templates:
TEOBResumS, IMRPhenomPv2 and NRSur7dq4. The likelihood is marginalized over
reference time and phase. We set the chirp mass prior in $[23,37]\,{\rm
M_{\odot}}$ and the mass ratio in $[1,8]$. The spins are kept aligned with an
isotropic prior in the range $[-0.9,+0.9]$ for every component. We employ a
volumetric prior for the luminosity distance with support $[100~{}{\rm
Mpc},5~{}{\rm Gpc}]$, and the prior distributions for the remaining parameters
are chosen according to Sec. V.2.
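The volumetric distance prior, $p(D_{L})\propto D_{L}^{2}$, can be sampled directly by inverting its cumulative distribution; a quick sketch using the BBH prior bounds quoted above:

```python
import numpy as np

def sample_volumetric_distance(u, d_min, d_max):
    """Inverse-CDF sampling of p(D_L) proportional to D_L^2 on
    [d_min, d_max]: the CDF is (D^3 - d_min^3) / (d_max^3 - d_min^3)."""
    return np.cbrt(d_min ** 3 + u * (d_max ** 3 - d_min ** 3))

u = np.random.default_rng(1).uniform(size=100_000)
d_L = sample_volumetric_distance(u, 100.0, 5000.0)  # Mpc, BBH prior support
```

This prior weights distances by the enclosed comoving-like volume, so most prior mass sits near the upper bound of the support.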
Figure 4: Sampler acceptances during the BBH injection study performed with
TEOBResumS described in Sec. VI.0.1. The blue line is the proposal acceptance
of the untempered ensemble averaged over the chains, while the yellow and
green lines are, respectively, the proposal and swapping acceptances of the
whole sampler averaged over all the tempered ensembles. The solid lines represent
the median values and the shadowed areas are the 90% credible regions. The
vertical black line is the requested last burn-in iteration.
Figure 4 shows the acceptance fractions for the analysis performed with
TEOBResumS waveform model. The inferences performed with other approximants
showed similar behaviours. The untempered ensemble required fewer than $10^{4}$
iterations to converge to the maximum-posterior value. After the requested
burn-in, the untempered ensemble showed an average acceptance of $15\%$ and,
averaging over all the tempered ensembles, the sampler advanced with a global
proposal acceptance of ${\sim}45\%$ and a global swapping acceptance of
${\sim}50\%$. The final autocorrelation length (ACL) of the untempered
ensemble corresponds to a lag of 70 iterations, and the sampler collected a
total of $1.5{\times}10^{4}$ independent posterior samples.
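The ACL-based thinning can be sketched as follows. The windowing criterion and the toy AR(1) chain below are illustrative assumptions, not the estimator used internally by the sampler.

```python
import numpy as np

def autocorr_length(x, c=5.0):
    """Integrated autocorrelation time of a 1-D chain `x`, using the
    standard self-consistent window (smallest lag m with m >= c * tau)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = x.size
    f = np.fft.rfft(x, n=2 * n)                # zero-padded FFT
    acf = np.fft.irfft(f * np.conj(f))[:n].real
    acf /= acf[0]
    tau = 2.0 * np.cumsum(acf) - 1.0
    for m in range(1, n):
        if m >= c * tau[m]:
            return tau[m]
    return tau[-1]

# toy AR(1) chain with known tau = (1 + rho) / (1 - rho) = 19
rng = np.random.default_rng(2)
rho, n = 0.9, 50_000
chain = np.empty(n)
chain[0] = rng.standard_normal()
for t in range(1, n):
    chain[t] = rho * chain[t - 1] + rng.standard_normal()

acl = max(1, int(np.ceil(autocorr_length(chain))))
independent = chain[::acl]      # thinned, roughly independent samples
```

Thinning by the ACL is what turns the raw chain output into the quoted count of independent posterior samples.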
Table 1 shows the recovered median values and Figure 5 shows the recovered
marginalized posterior distribution for some exemplary parameters. The
marginalized posterior distributions enclose the injected values within the 90%
credible intervals for all the waveform approximants. The estimated evidences
slightly prefer the TEOBResumS waveform, in accordance with the injected template.
However, these values lie in the same range for all the analyses, so the
model selection is not fully resolved. This is due to the large uncertainties
associated with the PTMCMC evidence estimation and to the relatively
low SNR of the injected signal. For the latter reason, it is also not possible
to reveal systematic differences between the different approximants Abbott
_et al._ (2017e), and the results of the employed templates are largely
consistent with each other.
Figure 5: Posterior distributions for $\\{q,\chi_{\rm eff}\\}$ and
$\\{D_{L},\iota\\}$ recovered from the injection studies performed on a BBH
signal with two interferometers (H1+L1) at design sensitivities with network
SNR of 14. The artificial signal has been generated with TEOBResumS model and
the injected parameters are marked with black lines and squares. The contours
represent the 50% (thick) and the 90% (thin) credible regions. The recovery
has been performed with four different approximants analysing the frequency
range from 20 Hz to 1 kHz. The estimation of the luminosity distance is
affected by the degeneracy with the inclination angle Abbott _et al._
(2016b), due to the correlations in the strain amplitude for aligned-spin
sources.
#### VI.0.2 Binary neutron stars inspiral
Table 2: Recovered parameters from the inspiralling BNS injection studies.
The signal has been injected in H1+L1 using design sensitivity curves, with an
overall network SNR of 20. The data have been analyzed with the nested sampling
provided by dynesty. The reported values correspond to the medians with the
90% credible regions. The last column reports the estimated logarithmic Bayes'
factor and the associated standard deviation.
Approximant | $\mathcal{M}$ | $q$ | $\chi_{\rm eff}$ | ${\tilde{\Lambda}}$ | $D_{L}$ | $\iota$ | $\alpha$ | $\delta$ | $\log\mathcal{B}^{\rm S}_{\rm N}$ |
---|---|---|---|---|---|---|---|---|---
| $[{\rm M_{\odot}}]$ | | | | $[{\rm Mpc}]$ | $[{\rm rad}]$ | $[{\rm rad}]$ | $[{\rm rad}]$ | |
Injected | 1.188 | 1.00 | 0.00 | 600 | 120 | 0.00 | 0.372 | 0.811 | – |
TEOBResumSPA | ${1.1880}^{+0.0002}_{-0.0002}$ | ${1.20}^{+0.42}_{-0.17}$ | ${0.01}^{+0.05}_{-0.01}$ | ${435}^{+305}_{-248}$ | ${113}^{+19}_{-38}$ | ${0.60}^{+1.98}_{-0.42}$ | ${0.48}^{+0.26}_{-0.12}$ | ${0.86}^{+0.20}_{-0.21}$ | ${564.6}^{+0.3}_{-0.3}$ |
TaylorF2 | ${1.1880}^{+0.0005}_{-0.0001}$ | ${1.28}^{+1.16}_{-0.25}$ | ${0.01}^{+0.10}_{-0.01}$ | ${392}^{+415}_{-260}$ | ${106}^{+26}_{-38}$ | ${0.82}^{+1.91}_{-0.56}$ | ${0.75}^{+3.51}_{-0.33}$ | ${0.81}^{+0.28}_{-1.56}$ | ${564.3}^{+0.3}_{-0.3}$ |
IMRPhenomPv2NRT | ${1.1880}^{+0.0002}_{-0.0001}$ | ${1.25}^{+0.36}_{-0.21}$ | ${0.01}^{+0.02}_{-0.01}$ | ${316}^{+304}_{-215}$ | ${111}^{+20}_{-50}$ | ${0.76}^{+1.98}_{-0.55}$ | ${0.63}^{+3.54}_{-0.21}$ | ${0.79}^{+0.24}_{-1.49}$ | ${563.9}^{+0.3}_{-0.3}$ |
In this section we analyse an inspiralling nonspinning BNS merger with
intrinsic parameters {$\mathcal{M}=1.188~{}{\rm M_{\odot}}$, $q=1$,
$\Lambda_{1}=\Lambda_{2}=600$}. The source is located at a luminosity distance
$D_{L}=120~{}{\rm Mpc}$ with inclination angle $\iota=0$. The signal is
injected such that the merger occurs at GPS time 1126259462.0 with a sampling
rate of 4096 Hz and a signal length of 128 s. The data are analyzed using two
detectors, H1 and L1, with LIGO design sensitivity P1200087 Aasi _et al._
(2015); Abbott _et al._ (2018b); Harry (2010). The sky location of the source
corresponds to the position of maximum sensitivity for the detector H1,
$\\{\alpha=0.372,\delta=0.811\\}$. The injected signal is generated with
TEOBResumS waveform model with a network SNR of 20, corresponding to 15 in H1
and 13 in L1.
The recovery of the posterior distribution is performed employing the nested
sampling algorithm with 1024 live points and a tolerance equal to $0.1$.
Furthermore, we set respectively the minimum and the maximum number of
iterations for every MCMC sub-chain to 32 and $4{\times}10^{3}$. The injected
strain is analyzed in the frequency domain from 20 Hz to 1 kHz employing three
different templates: TEOBResumSPA, IMRPhenomPv2_NRTidal and TaylorF2 including
5.5PN point-mass corrections and 6PN tidal contributions. We set the chirp
mass prior in $[0.8,2]\,{\rm M_{\odot}}$ and the mass ratio in $[1,4]$. The
spins are kept aligned with an isotropic prior in the range $[-0.9,+0.9]$ for
every component. The tidal parameters are extracted in the range $[0,5000]$.
We employ a volumetric prior for the luminosity distance with support
$[10,400]\,{\rm Mpc}$, and the prior distributions for the remaining parameters
are chosen according to Sec. V.2.
Figure 6 shows the number of iterations of the MCMC sub-chains employed to
propose a new point as a function of the prior mass $X$ (see App. B for the
definition). The actual values extracted from the sampler have been smoothed
with a Savitzky-Golay filter for visualisation purposes. The shadowed region
shows the difference between the actual MCMC-chain lengths and the filtered
estimation. Large values of the MCMC-chain length (say $\gtrsim 250$) characterize
the more expensive steps, where the proposal method requires more iterations
in order to propose a new sample and estimate the boundaries of the current
contour. During the initial stages ($X{\approx}1$), the boundaries defined by
the current live points were comparable with the prior and the sampler
required many iterations in order to propose new samples. Subsequently, for
$X\lesssim 10^{-4}$, the sampler identified the region that encloses the
majority of the posterior volume and the algorithm advanced spending
${\sim}100$ iterations to propose a new sample. The length of the MCMC-chain
slightly increases again during the latest stages, since the sampler has to
reach the bulk of the posterior distribution restricting the boundaries to a
neighborhood of the maximum-likelihood values.
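The smoothing used for Figure 6 can be reproduced with SciPy's Savitzky-Golay filter. The synthetic chain lengths, the window length, and the polynomial order below are illustrative choices, not the settings used for the figure.

```python
import numpy as np
from scipy.signal import savgol_filter

# hypothetical raw MCMC sub-chain lengths, clipped below at the
# requested minimum of 32 iterations
rng = np.random.default_rng(3)
raw = np.clip(rng.poisson(100, size=2000), 32, None).astype(float)

# Savitzky-Golay: local least-squares polynomial fit in a sliding window
smooth = savgol_filter(raw, window_length=101, polyorder=3)
scatter = raw - smooth      # analogous to the shaded region of Figure 6
```

The difference between the raw and filtered series is what the shadowed region in the figure visualizes.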
Table 2 shows the recovered median values and Figure 7 shows the marginalized
posterior distribution for some exemplary parameters. The recovered values are
in agreement with the properties of the injected signal: the posterior
distribution encloses the injected sample for all the parameters within the 50%
credible region, with a small bias in the maximum-posterior value for
the reduced tidal parameter ${\tilde{\Lambda}}$, corresponding to roughly
${\sim}150$. However, this behavior is expected Wade _et al._ (2014); Abbott
_et al._ (2019e); Dudi _et al._ (2018); Samajdar and Dietrich (2019); Gamba
_et al._ (2020b), considering that we employed an upper cutoff-frequency of 1
kHz 111For typical BNS, the information on the tidal parameters is gathered in
the frequency range above 800 Hz Damour _et al._ (2012); Gamba _et al._ (2020b).
combined with the large aligned-spin prior 222Large spin effects can mitigate
the tidal contributions, leading to an underestimate of the tidal parameters
Samajdar and Dietrich (2019).. The estimated evidences slightly prefer the
TEOBResumSPA approximant, in accordance with the injected model; however, due to
the low SNR, they do not show any strong preference.
Figure 6: Length of the MCMC sub-chains during the nested sampling performed
with TEOBResumSPA described in Sec. VI.0.2. The actual values extracted from
the sampler have been smoothed with a Savitzky-Golay filter for visualisation
purposes. The shadowed region shows the difference between the actual lengths
and the filtered estimation. The values are lower-bounded by the requested
minimum value, set equal to 32.
Figure 7: Posterior distributions for $\\{q,{\tilde{\Lambda}}\\}$ and
$\\{\alpha,\delta\\}$ recovered from the injection studies performed on an
inspiralling BNS signal with two interferometers (H1+L1) at design
sensitivities with network SNR of 20. The artificial signal has been generated
with TEOBResumS model and the injected parameters are marked with black lines
and squares. The recovery has been performed with three different approximants
analysing the frequency range from 20 Hz to 1 kHz. The degeneracy in the sky
location is due to the correlations between the longitudinal and latitudinal
angles, which jointly determine the times of arrival at the different
interferometers; it can be removed by introducing a third detector Abbott
_et al._ (2017f).
#### VI.0.3 Binary neutron stars postmerger
Table 3: Recovered parameters from the BNS postmerger injection survey. The
reported values correspond to the medians with the 90% credible regions.
SNR | $\mathcal{M}$ | ${\tilde{\Lambda}}$ | $f_{2}$ | $R_{\rm max}$ | $\log\mathcal{B}^{\rm S}_{\rm N}$
---|---|---|---|---|---
| $[{\rm M_{\odot}}]$ | | $[{\rm kHz}]$ | $[{\rm km}]$ |
Injected | 1.188 | 600 | 2.94 | 10.8 | –
8 | ${2.2}^{+0.8}_{-1.5}$ | ${2700}^{+1800}_{-1800}$ | ${0.8}^{+2.1}_{-0.3}$ | ${27}^{+10}_{-19}$ | ${0.2}^{+0.2}_{-0.2}$
9 | ${1.3}^{+1.6}_{-0.7}$ | ${2350}^{+1900}_{-1700}$ | ${1.7}^{+1.2}_{-1.1}$ | ${14}^{+22}_{-6}$ | ${2.6}^{+1.3}_{-1.3}$
10 | ${0.8}^{+2.2}_{-0.2}$ | ${2830}^{+1630}_{-1910}$ | ${2.7}^{+0.3}_{-2.0}$ | ${8.9}^{+2.8}_{-0.9}$ | ${11.0}^{+4.8}_{-4.8}$
12 | ${0.8}^{+2.1}_{-0.2}$ | ${1860}^{+2570}_{-1410}$ | ${2.9}^{+0.7}_{-2.3}$ | ${9.0}^{+2.7}_{-0.8}$ | ${13.9}^{+6.3}_{-6.3}$
16 | ${0.78}^{+0.43}_{-0.18}$ | ${1780}^{+2640}_{-1220}$ | ${2.93}^{+0.02}_{-0.39}$ | ${8.9}^{+2.0}_{-0.8}$ | ${45.2}^{+4.2}_{-4.2}$
32 | ${0.80}^{+0.45}_{-0.12}$ | ${1730}^{+730}_{-1220}$ | ${2.93}^{+0.02}_{-0.03}$ | ${9.0}^{+2.1}_{-0.6}$ | ${271}^{+29}_{-29}$
Figure 8: Marginalized posterior distributions for
$\\{f_{2},{\tilde{\Lambda}},R_{\rm max}\\}$ recovered in the BNS postmerger
injection survey using a five-detector network H1+L1+V1+K1+ET at design
sensitivities varying the luminosity distance and locating the source in the
position of maximum sensitivity for ET. The injected signals have been
generated with the NRPM template and recovered with the same model.
Figure 9:
Frequency-domain amplitude strains in the ET detector for the analysis at SNR
16 corresponding to a luminosity distance of 40 Mpc. The plotted quantities
correspond to the injected signal (black), the full artificial data strain
(grey) and the recovered template (blue), where the solid line is the mean
value and the shadowed area is the 90% credible region.
We perform a set of BNS postmerger injections using a five-detector network
which includes: H1 and L1 with LIGO design sensitivity P1200087 Aasi _et al._
(2015); Abbott _et al._ (2018b); Harry (2010), V1 with Virgo design
sensitivity P1200087 Acernese _et al._ (2015), K1 with KAGRA design
sensitivity T1600593 Aso _et al._ (2013); Akutsu _et al._ (2019) and the
third-generation interferometer ET (configuration D) P1600143 Punturo _et
al._ (2010); Hild _et al._ (2011). The injected signals are generated with
NRPM and correspond to the postmerger radiations of non-spinning BNSs with
intrinsic parameters {$\mathcal{M}=1.188~{}{\rm M_{\odot}}$, $q=1$,
$\Lambda_{1}=\Lambda_{2}=600$}. The signal is injected such that the merger
occurs at GPS time 1126259462.0 with a sampling rate of 8192 Hz and a signal
length of 4 s. The sky location of the source corresponds to the position of
maximum sensitivity for the detector ET, $\\{\alpha=2.640,\delta=0.762\\}$.
The signals are injected at different luminosity distances in order to
simulate different SNRs, spanning the range from 8 to 32, which corresponds to
$D_{L}\in[20,80]~{}{\rm Mpc}$. We observe that, for this kind of signal, ET
is the most relevant detector, contributing more than 90% of the squared
network SNR in all the analyzed cases.
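The statement about ET's weight can be made precise via the optimal matched-filter SNR, which accumulates in quadrature over the network; a minimal sketch (the per-detector numbers below are toy values, not the injected ones):

```python
import numpy as np

def optimal_snr(h, psd, df):
    """Optimal matched-filter SNR: rho^2 = 4 * df * sum(|h(f)|^2 / S(f))."""
    return np.sqrt(4.0 * df * np.sum(np.abs(h) ** 2 / psd))

# the network SNR adds in quadrature; a detector's "weight" is its
# squared-SNR fraction (toy values, not the injected ones)
snrs = {"H1": 2.0, "L1": 2.5, "V1": 1.5, "K1": 1.0, "ET": 9.0}
network_snr = np.sqrt(sum(r ** 2 for r in snrs.values()))
weights = {k: (r / network_snr) ** 2 for k, r in snrs.items()}
```

Because the weights are squared-SNR fractions, a single detector that is several times more sensitive in the relevant band quickly dominates the network.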
For these studies, we employ the PTMCMC sampler using 8 tempered ensembles
with 128 chains and $2\times 10^{4}$ burn-in iterations. The injected strain
is analyzed in the frequency domain from 1 kHz to 4 kHz employing NRPM
waveforms 333 The same template model is used for both injection and recovery,
in order to avoid noise contributions different from the simulated detector
noise.. We set the chirp mass prior in $[0.8,2]\,{\rm M_{\odot}}$ and the mass
ratio in $[1,1.5]$. The spins are kept fixed to zero for every component. The
tidal parameters are extracted in the range $[0,5000]$. We employ a volumetric
prior for the luminosity distance with support $[5~{}{\rm Mpc},400~{}{\rm
Mpc}]$, and the prior distributions for the remaining parameters are chosen
according to Sec. V.2.
As shown in Ref. Bernuzzi _et al._ (2015b), the postmerger GW radiation of a
long-lived BNS merger is characterized by a main frequency peak in the Fourier
domain ($f_{2}$) that can be parametrized with quasiuniversal (EOS-
insensitive) relations involving the tidal polarizability parameters. The NRPM
model is constructed using these relations that allow one to constrain
${\tilde{\Lambda}}$ from postmerger observations and at the same time map the
properties of a postmerger signal into the inspiral parameters of the binary
Breschi _et al._ (2019b). Furthermore, numerical relativity simulations
Bauswein _et al._ (2012); Breschi _et al._ (2019b) have shown that the
postmerger frequency peak $f_{2}$ can be related to the radius $R_{\rm max}$
of the maximum mass configuration of a non-rotating neutron star. For the
injected sources, we get $f_{2}=2.94~{}{\rm kHz}$ and $R_{\rm max}=10.8~{}{\rm
km}$.
Table 3 shows the recovered median values, Figure 8 shows the marginalized
posterior distributions for $f_{2}$ and ${\tilde{\Lambda}}$ recovered
during the survey described above, and Figure 9 presents the postmerger
signal recovered in the ET detector for the case with SNR 16. The
Bayes' factors show evidence of a signal starting from SNR 9; however, in order to
estimate $f_{2}$ with an accuracy of $O(0.1~{\rm kHz})$, the method requires
an SNR ${\gtrsim}12$. The values estimated from the marginalized
posteriors agree with the injected properties of the signal within the 90%
credible intervals; however, it is interesting to observe that, due to the
correlations induced by the EOS-insensitive relations, the sampler explores
non-trivial degeneracy between the intrinsic parameters. The same behavior has
been shown in Ref. Breschi _et al._ (2019b). These correlations strongly
affect the estimation of ${\tilde{\Lambda}}$, and PE of a postmerger signal is
only capable of imposing an upper bound on this parameter. For example, with
SNR 32 it would be possible to constrain the value of ${\tilde{\Lambda}}$ with
an uncertainty of ${\sim}10^{4}$, which is much larger than
the uncertainties coming from the analysis of the inspiral data (see Tab. 2).
Nevertheless, the observation of a postmerger signal would extraordinarily
extend our knowledge regarding neutron star matter Bauswein _et al._ (2014);
Radice _et al._ (2017); Agathos _et al._ (2020), allowing us to verify our
current models and to constrain the extreme-density properties of the EOS,
such as the radius of the maximum-mass star $R_{\rm max}$, and to infer
softening effects at high densities Bauswein _et al._ (2019); Breschi _et
al._ (2019b). These constraints can be further improved with the inclusion of
the inspiral information within a full inspiral-merger-postmerger analysis of
the observed BNS signal.
#### VI.0.4 Confidence interval test
Figure 10: Fraction of events found within a confidence interval for the test
described in Sec. VI.0.4, for which a set of 100 BBH injections was used. For
every parameter, the label shows the $p$-value of the KS test. The recovered
$p$-values are consistent with a uniform distribution, with a combined
$p$-value of 58%.
The confidence interval (CI) test has become a standard control check to
verify the reliability of a GW pipeline (e.g. Veitch _et al._ , 2015; Del
Pozzo _et al._ , 2018; Romero-Shaw _et al._ , 2020), since it ensures that
the recovered probability distributions are truly representative of the
inferred confidence levels. For each parameter employed in the analyses, the
CI test measures the fraction of true values that fall below a given
credible level; if the algorithm is well-calibrated, we expect these two
quantities to be proportional. The test is performed using a large set of
injected signals with parameters extracted from a population prior
$\mathfrak{p}(\bm{\theta}_{\rm cbc})$. We then conduct PE analyses in order
to recover the posterior distributions for every injection, and the CIs are
estimated from the posterior distributions by determining the quantiles under
which the true parameters lie. The population prior
$\mathfrak{p}(\bm{\theta}_{\rm cbc})$ is used as the input prior distribution for
the analysis of the injected signals 444We observe that, in order to perform an
accurate test, the employed prior has to be a good representation of the
population distribution. In our case this is ensured by definition, since the
injected signals are extracted from the employed prior
$\mathfrak{p}(\bm{\theta}_{\rm cbc})$..
For our CI test, we inject 100 BBH signals employing the prior distribution
used for the parameters of GW150914 (see Sec. VII.1), which includes 9
parameters. The signals are generated using the MLGW waveform template and
injected in the two LIGO detectors, H1 and L1, at design sensitivity using
segments of duration 8 s. The analyzed frequency range goes from 20 Hz to 1
kHz. We use the nested sampling provided by dynesty with 1024 live points and
a tolerance equal to 0.1. The likelihood function is marginalized over reference
phase and time shift. Figure 10 shows the recovered fractions of events found
within an increasing confidence level. The fraction of events is expected to be
uniformly distributed if the prior distribution is a good approximation of
the underlying injected population distribution.
For each parameter, we compute the $p$-value of the Kolmogorov-Smirnov (KS) test,
quantifying the consistency with uniformly distributed events. The results are
shown between round brackets in Figure 10. From these results, we estimate the
combined $p$-value quantifying the probability that the individual $p$-values
are extracted from a uniform distribution. We estimate an overall $p$-value of
$58\%$, consistent with analogous computations performed on a set of 9 random
uniformly-distributed samples. Furthermore, we compute the Jensen-Shannon (JS)
divergence between the distribution of the fraction of events and a
uniform distribution. The results lie around $2\times 10^{-3}$ for all the
parameters, in agreement with analogous estimations performed on a set of 100
uniformly-distributed random samples. These results confirm that the pipeline
is well-calibrated.
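The calibration checks above can be sketched with SciPy. The uniform draws stand in for the credible levels of a well-calibrated pipeline; this is an illustrative sketch, not the pipeline's internal code.

```python
import numpy as np
from scipy import stats

def ks_uniform_pvalue(credible_levels):
    """KS p-value testing whether the per-injection credible levels
    (posterior quantiles of the true values) are uniform on [0, 1]."""
    return stats.kstest(credible_levels, "uniform").pvalue

rng = np.random.default_rng(4)
# one p-value per parameter; uniform draws mimic a calibrated pipeline
pvals = [ks_uniform_pvalue(rng.uniform(size=100)) for _ in range(9)]

# combine the 9 per-parameter p-values into a single one (Fisher's method)
combined = stats.combine_pvalues(pvals, method="fisher").pvalue
```

A single combined $p$-value far from zero, as in the 58% quoted above, indicates no detectable departure from uniformity across the parameter set.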
## VII LIGO-Virgo transients
In this section, we apply the bajes pipeline to the GW transients Abbott _et
al._ (2019a) observed by the LIGO-Virgo interferometers Abbott _et al._
(2018b); Aasi _et al._ (2015); Acernese _et al._ (2015). For all the
analyses, the data are extracted from the GWOSC archive Abbott _et al._
(2019d); Trovato (2020); LIGO Scientific and Virgo Collaboration (a) with a
sampling rate of 4096 Hz, resorting to the gwpy facilities. The analyzed
strains are centered around the nominal GPS time. Subsequently, the strains
are windowed and transformed into the Fourier domain, using the tools described
in Sec. IV. PSD curves and calibration uncertainties are taken from the
official LIGO-Virgo data release of GWTC-1 Abbott _et al._ (2019a); LIGO
Scientific and Virgo Collaboration (b, c). We use the nested sampling
implemented in dynesty, employing 2048 live points with a tolerance equal to
0.1. With these settings, we collect ${\sim}5\times 10^{4}$ samples from each
PE analysis. The measured quantities reported in the text correspond to the
median values and to the 90% credible regions, except when explicitly
mentioned.
We note that the prior assumptions employed in bajes differ slightly from
those of the official LIGO-Virgo analysis. In the latter, the sampling is
performed imposing additional bounds on the mass components Abbott _et al._
(2016a, c, 2017b, 2018a); while, in bajes, the samples are extracted from the
entire square defined by the $\\{\mathcal{M},q\\}$ bounds. Moreover, the
strains analyzed by the bajes pipeline are slightly shifted in time with
respect to the official LIGO-Virgo segments due to different reference
conventions.
### VII.1 GW150914
Figure 11: Waveform template reconstructed from the analysis of GW150914 with
TEOBResumS compared with the LIGO-Hanford (top panel) and LIGO-Livingston
(bottom panel) data. The black lines are the whitened strains recorded by the
LIGO inteferometers, where we applied a band-pass filter in the frequency
window $[20~{}{\rm Hz},480~{}{\rm Hz}]$ only for visualization purposes. The
colored lines are the median recovered template projected on the respective
detector (orange for LIGO-Hanford and green for LIGO-Livingston) and the
shadowed areas represent the 90% credible regions. The estimated network SNR
of the signal corresponds to 22.
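The whitening and band-passing used for this visual comparison can be sketched as follows. The scaling convention inside `whiten`, the flat placeholder PSD, and the toy strain are assumptions for illustration, not the processing applied to the real data.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def whiten(strain, psd, dt):
    """Whiten a time-domain strain by dividing its rFFT by the
    amplitude spectral density (one common normalization)."""
    hf = np.fft.rfft(strain)
    asd = np.sqrt(psd / (2.0 * dt))
    return np.fft.irfft(hf / asd, n=strain.size)

def bandpass(strain, fs, f_lo=20.0, f_hi=480.0, order=4):
    """Zero-phase Butterworth band-pass, as used for visualization."""
    sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, strain)

fs = 4096.0
t = np.arange(0, 8.0, 1.0 / fs)                  # 8 s segment
strain = np.sin(2 * np.pi * 200.0 * t)           # toy in-band component
strain += 0.5 * np.sin(2 * np.pi * 1500.0 * t)   # out-of-band component
psd = np.ones(t.size // 2 + 1)                   # placeholder flat PSD
view = bandpass(whiten(strain, psd, 1.0 / fs), fs)
```

Whitening flattens the noise spectrum while the zero-phase band-pass suppresses content outside the $[20~{\rm Hz},480~{\rm Hz}]$ window without shifting the signal in time.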
Figure 12: Posterior distributions for the intrinsic (left) and extrinsic
(right) parameters of GW150914 estimated with bajes pipeline employing
TEOBResumS waveform model with aligned spin components. All higher-order modes
up to $\ell=5$ (with $m=\pm\ell$) have been used to generate the waveform
models. The chirp mass is expressed in Solar masses ${\rm M_{\odot}}$, the
luminosity distance is expressed in megaparsec (Mpc), while the angles
$\\{\iota,\alpha,\delta\\}$ are in radians. We report the median value and
the 90% credible regions for each parameter and the contours represent the 50%
and the 90% credible regions.
In this section, we discuss the results obtained from the analysis of the
first GW transient observed by the LIGO interferometers, GW150914 Abbott _et
al._ (2016a, d, b, e). For all the discussed cases, the analyzed strains
correspond to the GWTC-1 release Abbott _et al._ (2019a) of LIGO-Hanford and
LIGO-Livingston data centered around GPS time 1126259462 with a sampling rate
of 4096 Hz and a duration of 8 s. We set the lower cutoff-frequency to 20 Hz
and the highest frequency to 1 kHz. The employed prior is isotropic in spin
components and volumetric in luminosity distance, and it spans the ranges
$\mathcal{M}\in[12,45]~{}{\rm M_{\odot}}$, $q\in[1,8]$,
$\chi_{1,2}\in[0,0.99]$ and $D_{L}\in[100,5000]~{}{\rm Mpc}$. We include 8
spectral nodes for the calibration of the analyzed strains.
We first discuss the PE analysis employing the TEOBResumS model with aligned
spins, including all high-order modes up to $\ell=5$ with $m=\pm\ell$, i.e.
$(2,\pm 2)$, $(3,\pm 3)$, $(4,\pm 4)$, $(5,\pm 5)$. Figure 11 shows the
recovered waveform template compared with the whitened strains recorded by the
LIGO interferometers, and Figure 12 shows the recovered posterior
distribution. We estimated a network SNR of 23. The results are consistent
with similar studies performed with the same approximant Nagar _et al._
(2018, 2020b), recovering the signal of a non-spinning equal-mass BBH merger
with $\mathcal{M}=31.57^{+0.82}_{-0.70}~{}{\rm M_{\odot}}$. The inference of
the extrinsic parameters shows a source located at a luminosity distance of
${\sim}490~{}{\rm Mpc}$ and the area covered by the 90% isoprobability level
of the sky location posterior distributions corresponds to ${\sim}670~{}{\rm
deg}^{2}$. The estimated Bayes' factor corresponds to $\log\mathcal{B}^{\rm
S}_{\rm N}=267.8\pm 0.2$, where the uncertainty is given by the standard
deviation.
We repeat the PE analysis with IMRPhenomPv2 template employing precessing spin
components. In Figure 13, the marginalized posterior distribution of the
recovered masses is compared with the official LIGO-Virgo posterior samples
released with GWTC-1 Abbott _et al._ (2019a), computed with the LALInference
routines Veitch _et al._ (2015); LIGO Scientific Collaboration (2018a) using
the same waveform approximant. The two analyses are consistent with each
other, recovering a BBH signal with chirp mass
$\mathcal{M}=31.00^{+1.52}_{-1.49}~{}{\rm M_{\odot}}$ and mass ratio well
constrained around the equal mass case, $q=1.18^{+0.36}_{-0.17}$. The
inference of the effective spin parameter is consistent with zero and the
posterior distribution of the spin components does not show evidence of
precession, in agreement with Ref. Abbott _et al._ (2016c, b, d). The
extrinsic parameters also show an overall good agreement with previous estimations
performed with the same approximant, locating the source at a luminosity
distance of $D_{L}=458^{+123}_{-169}$ Mpc with a posterior sky-location area
of ${\sim}640~{}{\rm deg}^{2}$ at the 90% credible level.
The main difference between the TEOBResumS and IMRPhenomPv2 posteriors is
the uncertainty on the mass-ratio parameter, for which TEOBResumS admits a
maximum value of $1.28$ at the 90% credible level, while the IMRPhenomPv2
posterior reaches $1.94$ at the same confidence. However, this disagreement
is mainly due to the different spin assumptions employed for the two analyses
Abbott _et al._ (2016c, b). Moreover, the posterior distribution recovered
with IMRPhenomPv2 shows slightly smaller $\mathcal{M}$ and larger $D_{L}$
compared with the TEOBResumS inference, as shown also in Ref. Nagar _et al._
(2018).
Figure 13: Posterior distributions of the detector-frame mass components
$\\{m_{1},m_{2}\\}$ recovered in analysis of GW150914 with IMRPhenomPv2 (blue
line). The results are compared with the official LIGO-Virgo posterior samples
released with GWTC-1 Abbott _et al._ (2019a) (yellow line) computed using the
LALInference routines Veitch _et al._ (2015); LIGO Scientific Collaboration
(2018a). The central panel shows the 50% and the 90% credible regions.
Finally, we verify the compatibility of the recovered posterior distribution
against existing GW pipelines. In particular, we employ the bilby pipeline
Ashton _et al._ (2019); Smith _et al._ (2020); Romero-Shaw _et al._ (2020)
in order to estimate the posterior distributions of GW150914, using the same
prior assumptions and settings discussed above. We observe that GW150914 is a
suitable candidate to test the statistical significance of the results and the
agreement between the pipelines: due to the loudness of this event
(corresponding to an SNR $>20$), the overall impact of statistical noise
fluctuations on the recovered posterior distribution is expected to be less
significant than for the other BBH mergers presented in GWTC-1 Abbott
_et al._ (2019a). Figure 14 shows the probability-probability (PP) plot of the
marginalized posterior distributions recovered for every parameter. A PP plot
compares the cumulative distributions estimated with two methods, plotting one
against the other. Then, if two probability distributions are identical, the
associated PP plot is represented by a bisector line. In our case, the results
coming from the two pipelines are largely consistent between each other, with
observed deviation fully consistent with statistical fluctuations. This fact
is confirmed by the $p$-values computed between the marginalized distributions
of each parameter: the values are comparable to or larger than $0.4$ except
for $\vartheta_{2}$ and $D_{L}$ parameters, for which we respectively get
$p$-values of 0.15 and 0.11. Furthermore, the legend in Figure 14 shows the
square root of the JS divergence of the marginalized posterior distributions
for the employed parameters: the largest recovered value corresponds to
$5.2\times 10^{-4}$ bit for the tilt angle $\vartheta_{2}$.
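The JS-divergence comparison between the two pipelines' marginal posteriors can be sketched by histogramming the sample sets on a common grid; a minimal sketch in which the Gaussian draws, the function name, and the binning are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def js_between_samples(a, b, bins=64):
    """Square root of the Jensen-Shannon divergence between two 1-D
    posterior sample sets, estimated on a shared histogram grid."""
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    pa, _ = np.histogram(a, bins=bins, range=(lo, hi))
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi))
    return jensenshannon(pa, pb, base=2)   # base 2 for bit-based units

rng = np.random.default_rng(5)
same = js_between_samples(rng.normal(size=50_000), rng.normal(size=50_000))
shifted = js_between_samples(rng.normal(size=50_000),
                             rng.normal(1.0, size=50_000))
```

Identical underlying distributions yield values near zero (limited only by sampling noise and binning), while a genuine offset between the marginals produces a clearly larger value.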
Figure 14: PP plot for the marginalized posterior distributions of GW150914
parameters. On the $x$-axis, the cumulative posterior probabilities estimated
with the bilby pipeline and on the $y$-axis the same quantities computed with
the bajes pipeline. Different colors refer to different parameters and the
legend shows also the square root of the Jensen–Shannon divergence.
### VII.2 GW170817
Figure 15: Posterior distributions for the intrinsic (left) and extrinsic
(right) parameters of GW170817 estimated with bajes pipeline employing
TEOBResumSPA waveform model with aligned spin components. The chirp mass is
expressed in Solar masses ${\rm M_{\odot}}$, the luminosity distance is
expressed in megaparsec Mpc, while the angles $\\{\iota,\alpha,\delta\\}$ are
in radians. We report the median value and the 90% credible levels for each
parameter and the contours represent the 50% and the 90% credible regions.
Figure 16: Posterior distributions for the intrinsic parameters of GW170817
estimated with bajes pipeline employing TaylorF2 (5.5PN + 7.5PN) waveform
model with aligned spin components. The chirp mass is expressed in Solar
masses ${\rm M_{\odot}}$. We report the median value and the 90% credible
levels for each parameter and the contours represent the 50% and the 90%
credible regions. Figure 17: Posterior distributions for the reduced tidal
parameter ${\tilde{\Lambda}}$ of GW170817 estimated using the bajes pipeline
with different waveform approximants and different upper cutoff-frequencies.
The posterior samples have been reweighted to a uniform prior distribution and
the plot shows the original employed prior (gray line). Solid lines refer to
the results with $f_{\rm max}=2~{}{\rm kHz}$, while dashed lines are estimated
with $f_{\rm max}=1~{}{\rm kHz}$. TEOBResumSPA (blue line) and TaylorF2
(yellow line) samples are computed employing aligned spins, while precessing
spin components were included for IMRPhenomPv2_NRTidal (green line).
Figure 18: Comparison between the posterior distributions for the tidal
parameters $\\{{\tilde{\Lambda}},{\delta\tilde{\Lambda}}\\}$ estimated with
the bajes pipeline (blue lines) with the official LIGO-Virgo released samples
Abbott _et al._ (2019e) (yellow lines) computed with the LALInference
routines Veitch _et al._ (2015); LIGO Scientific Collaboration (2018a). The
analyses are performed with IMRPhenomPv2_NRTidal waveform approximant. The
left panel shows the results with low-spin prior ($\chi_{\rm max}=0.05$),
while the right panel presents the high-spin prior results ($\chi_{\rm
max}=0.89$). For both panels, the central plots show the 50% and 90% credible
regions. We estimate the JS divergences for the marginalized distributions,
finding the values of $8.9\times 10^{-5}$ bit and $5.3\times 10^{-5}$ bit
respectively for ${\tilde{\Lambda}}$ and ${\delta\tilde{\Lambda}}$ for the
high-spin prior case. Regarding the low-spin prior studies, the JS divergences
correspond to $8.3\times 10^{-5}$ bit and $4.7\times 10^{-5}$ bit respectively
for ${\tilde{\Lambda}}$ and ${\delta\tilde{\Lambda}}$.
We analyze the LIGO-Virgo data corresponding to GW170817 Abbott _et al._
(2017b, 2018a, 2019e, 2018c), the first GW signal observed from a BNS merger.
The employed data correspond to the GWTC-1 release Abbott _et al._ (2019a) of
LIGO-Virgo data centered around GPS time 1187008857 with a sampling rate of
4096 Hz and a duration of 128 s, analyzing the frequency range from 20 Hz to 2
kHz. The employed prior is isotropic in spin components and volumetric in
luminosity distance, and it spans the ranges $\mathcal{M}\in[1.18,1.21]~{}{\rm
M_{\odot}}$, $q\in[1,8]$, $\chi_{1,2}\in[0,0.89]$, $\Lambda_{1,2}\in[0,5000]$
and $D_{L}\in[5,75]~{}{\rm Mpc}$. We include 4 spectral nodes for the
calibrations of the analyzed segments.
Figure 15 shows the posterior distributions for the parameters recovered
employing TEOBResumSPA approximant (with $\ell=2$ and $|m|=\ell$) with aligned
spins. The recovered detector-frame chirp mass corresponds to
$\mathcal{M}=1.1977^{+0.0002}_{-0.0002}~{}{\rm M_{\odot}}$ and the mass ratio
lies around the equal-mass case, $q=1.56^{+0.38}_{-0.27}$. The posterior
does not show evidence of spin contributions, consistently with Ref. Abbott _et
al._ (2019e), with an estimated effective spin of $\chi_{\rm
eff}={0.04}^{+0.06}_{-0.03}$. The primary tidal parameter $\Lambda_{1}$ is
constrained to be $\lesssim 950$ at the 90% confidence level, while the
secondary component $\Lambda_{2}$ is more broadly distributed over the prior.
The recovered tidal parameter posterior estimates a value of
${\tilde{\Lambda}}={607}^{+477}_{-356}$, in rough agreement with previous
estimations obtained from EOB models Abbott _et al._ (2019a); Gamba _et al._
(2020a). The asymmetric tidal parameter ${\delta\tilde{\Lambda}}$ shows a
posterior distribution centered around non-zero values,
${\delta\tilde{\Lambda}}=92^{+200}_{-258}$; however, the hypothesis
${\delta\tilde{\Lambda}}=0$ is confidently included in the posterior support,
corresponding to the $27^{\rm th}$ percentile. Moreover, the measured
${\tilde{\Lambda}}$ is overall consistent with independent estimations coming
from the analysis of the EM counterpart AT2017gfo Radice _et al._ (2018b);
Breschi _et al._ (2021a). Regarding the extrinsic parameters, we recovered a
luminosity distance of $D_{L}=36.7^{+6.2}_{-8.0}~{}{\rm Mpc}$ and a sky
location at $\\{\alpha={3.42}^{+0.02}_{-0.02}~{}{\rm
rad},\delta={-0.36}^{+0.04}_{-0.04}~{}{\rm rad}\\}$. The estimation of the
extrinsic parameters is generally consistent with previous estimations Abbott
_et al._ (2019e, a); Gamba _et al._ (2020a).
Furthermore, the analysis of GW170817 is repeated with a TaylorF2 waveform
template that includes for the first time 5.5PN point-mass corrections Messina
_et al._ (2019) and 7.5PN tidal contributions Vines _et al._ (2011); Damour
_et al._ (2012). This analysis is performed using the same prior assumptions
described above and the posterior distribution for the intrinsic parameters is
shown in Figure 16. The mass parameters recover the values
$\mathcal{M}={1.1976}^{+0.0003}_{-0.0002}~{}{\rm M_{\odot}}$ and
$q=1.42^{+0.79}_{-0.36}$, while the effective spin $\chi_{\rm
eff}={0.02}^{+0.07}_{-0.04}$. The estimated luminosity distance is
$D_{L}={37.1}^{+10.3}_{-12.1}~{}{\rm Mpc}$. The inference of the tidal
components roughly coincides with the estimations coming from the analogous
analysis with PN templates Abbott _et al._ (2019e, a). The primary tidal
component is constrained to $\Lambda_{1}<730$ at the 90% credible level, while
the secondary is more broadly distributed over the prior. The reduced tidal
parameter is measured to be ${\tilde{\Lambda}}={404}^{+701}_{-246}$ and the
asymmetric tidal term ${\delta\tilde{\Lambda}}$ is well constrained around
zero, with an uncertainty of ${\sim}150$ at the 90% confidence level. The median
values of the sky location angles coincide with the estimates obtained with
TEOBResumSPA.
The GW170817 PE studies are repeated with IMRPhenomPv2_NRTidal template
employing precessing spin components with high-spins prior ($\chi_{\rm
max}=0.89$) and low-spins prior ($\chi_{\rm max}=0.05$), in order to compare
our results with the official LIGO-Virgo posterior samples presented in Ref.
Abbott _et al._ (2019e). The recovered posterior distributions for the mass
parameters for the low-spin case show a chirp mass of
$1.1975^{+0.0002}_{-0.0002}~{}{\rm M_{\odot}}$ and the mass ratio is
constrained below $1.46$ at the 90% credible region. Regarding the high-spin
prior analysis, we recovered a chirp mass of
$1.1976^{+0.0002}_{-0.0002}~{}{\rm M_{\odot}}$ and the mass ratio favors more
asymmetric values, $q=1.49^{+0.35}_{-0.32}$. Focusing on the tidal parameters,
Figure 18 shows the comparison of the posterior distribution in the
$\\{{\tilde{\Lambda}},{\delta\tilde{\Lambda}}\\}$ plane: the marginalized
distributions and the 90% credible regions coming from bajes are largely
consistent with the official LIGO-Virgo samples, with estimated JS divergences
below $10^{-4}$.
Finally, Figure 17 shows the recovered reduced tidal parameters
${\tilde{\Lambda}}$, where the posterior distributions have been reweighted to
a uniform prior distribution. The figure includes the results computed employing
the same waveform models and using a smaller upper cutoff-frequency $f_{\rm
max}=1~{}{\rm kHz}$. The main differences between the analyses with different
$f_{\rm max}$ lie in the tidal sector. Overall, the recovered
tidal parameters with $f_{\rm max}=2~{}{\rm kHz}$ appear more constrained with
respect to the cases with $f_{\rm max}=1~{}{\rm kHz}$. This behavior is
expected considering that the tidal information is gathered in high-frequency
regimes Damour _et al._ (2012); Gamba _et al._ (2020b). On the other hand,
the choice of $f_{\rm max}=2~{}{\rm kHz}$ enhances multimodal and asymmetric
behaviors in the posterior distribution of the reduced tidal parameter and
systematic effects appear to be more relevant between different template
families, as well known from previous studies Dai _et al._ (2018); Narikawa
_et al._ (2019); Gamba _et al._ (2020b). The differences in the
${\tilde{\Lambda}}$ parameters can be traced back to the modeling choices of the
employed approximants (see Abbott _et al._ , 2019e; Messina _et al._ , 2019;
Abbott _et al._ , 2019a). The results estimated with TEOBResumSPA at
$1~{}{\rm kHz}$ show a posterior distribution slightly shifted toward lower
values with respect to the analysis at $2~{}{\rm kHz}$, consistently with what
has been observed in the BNS injection study (Sec. VI.0.2). The posterior
distributions for EOB and PN approximants show a good agreement with $f_{\rm
max}=1~{}{\rm kHz}$.
### VII.3 EOB catalog
Table 4: Prior and posterior information for the analyses of the BBH events of
GWTC-1 with TEOBResumS. The GPS time refers to the central value of the time
axis. For all studies, we assume aligned spins with isotropic prior. The
inferred values refer to the medians of the marginalized posterior
distributions and the uncertainties are 90% credible regions, except for the
log-Bayes' factors $\log\mathcal{B}^{\rm S}_{\rm N}$, for which we report the
standard deviations.
Data Information | Prior bounds | Inferred values
---|---|---
Event | GPS time | Duration | $\mathcal{M}$ | $q$ | $\chi_{1,2}$ | $D_{L}$ | $\mathcal{M}$ | $q$ | $\chi_{1,z}$ | $\chi_{2,z}$ | $\chi_{\rm eff}$ | $D_{L}$ | $\log\mathcal{B}^{\rm S}_{\rm N}$
| $[{\rm s}]$ | $[{\rm s}]$ | $[{\rm M_{\odot}}]$ | | | $[{\rm Mpc}]$ | $[{\rm M_{\odot}}]$ | | | | | $[{\rm Mpc}]$ |
GW150914 | 1126259462 | 8 | $[12,45]$ | $[1,8]$ | $[0,0.99]$ | $[100,5000]$ | ${31.9}^{+1.1}_{-1.5}$ | ${1.20}^{+0.29}_{-0.17}$ | ${0.07}^{+0.39}_{-0.28}$ | ${0.00}^{+0.40}_{-0.42}$ | ${0.05}^{+0.10}_{-0.13}$ | ${471}^{+167}_{-231}$ | $267.1^{+0.2}_{-0.2}$
GW151012 | 1128678900 | 16 | $[12,45]$ | $[1,8]$ | $[0,0.99]$ | $[100,5000]$ | ${18.3}^{+1.8}_{-1.0}$ | ${1.86}^{+2.86}_{-0.76}$ | ${0.05}^{+0.33}_{-0.28}$ | ${0.11}^{+0.53}_{-0.48}$ | ${0.09}^{+0.22}_{-0.17}$ | ${1039}^{+627}_{-626}$ | $16.0^{+0.2}_{-0.2}$
GW151226 | 1135136350 | 16 | $[6.5,15]$ | $[1,8]$ | $[0,0.99]$ | $[50,5000]$ | ${9.71}^{+0.07}_{-0.07}$ | ${2.04}^{+1.59}_{-0.93}$ | ${0.38}^{+0.23}_{-0.24}$ | ${0.16}^{+0.58}_{-0.47}$ | ${-0.05}^{+0.45}_{-0.61}$ | ${490}^{+222}_{-240}$ | $36.4^{+0.2}_{-0.2}$
GW170104 | 1167559936 | 16 | $[12,45]$ | $[1,8]$ | $[0,0.99]$ | $[100,5000]$ | ${25.6}^{+1.8}_{-2.1}$ | ${1.56}^{+0.86}_{-0.46}$ | ${0.00}^{+0.29}_{-0.32}$ | ${-0.04}^{+0.40}_{-0.40}$ | ${-0.03}^{+0.21}_{-0.25}$ | ${1069}^{+423}_{-446}$ | $58.4^{+0.2}_{-0.2}$
GW170608 | 1180922494 | 16 | $[5,10]$ | $[1,8]$ | $[0,0.99]$ | $[50,5000]$ | ${8.49}^{+{0.05}}_{-{0.04}}$ | ${1.48}^{+{1.22}}_{-{0.42}}$ | ${0.06}^{+{0.30}}_{-{0.28}}$ | ${0.03}^{+{0.58}}_{-{0.39}}$ | ${0.06}^{+{0.27}}_{-{0.09}}$ | ${298}^{+{146}}_{-{128}}$ | $80.3^{+0.2}_{-0.2}$
GW170729 | 1187529256 | 4 | $[25,175]$ | $[1,8]$ | $[0,0.99]$ | $[100,7000]$ | ${51.4}^{+9.1}_{-9.6}$ | ${1.84}^{+{0.95}}_{-{0.77}}$ | ${0.47}^{+{0.39}}_{-{0.48}}$ | ${-0.05}^{+{0.83}}_{-{0.36}}$ | ${0.30}^{+{0.28}}_{-{0.28}}$ | ${2495}^{+{1600}}_{-{1300}}$ | $27.1^{+0.2}_{-0.2}$
GW170809 | 1185389807 | 16 | $[12,45]$ | $[1,8]$ | $[0,0.99]$ | $[100,5000]$ | ${30.3}^{+{2.3}}_{-{2.0}}$ | ${1.45}^{+{0.72}}_{-{0.39}}$ | ${0.07}^{+{0.33}}_{-{0.27}}$ | ${0.17}^{+{0.58}}_{-{0.21}}$ | ${0.17}^{+{0.24}}_{-{0.21}}$ | ${999}^{+{473}}_{-{483}}$ | $41.8^{+0.2}_{-0.2}$
GW170814 | 1186302519 | 16 | $[12,45]$ | $[1,8]$ | $[0,0.99]$ | $[100,5000]$ | ${26.8}^{+{1.3}}_{-{1.0}}$ | ${1.29}^{+{0.52}}_{-{0.26}}$ | ${0.07}^{+{0.39}}_{-{0.28}}$ | ${0.02}^{+{0.49}}_{-{0.38}}$ | ${0.08}^{+{0.16}}_{-{0.12}}$ | ${540}^{+{224}}_{-{189}}$ | $99.6^{+0.2}_{-0.2}$
GW170818 | 1186741861 | 16 | $[12,45]$ | $[1,8]$ | $[0,0.99]$ | $[100,5000]$ | ${31.8}^{+{3.4}}_{-{2.9}}$ | ${1.48}^{+{0.96}}_{-{0.43}}$ | ${-0.08}^{+{0.27}}_{-{0.35}}$ | ${0.00}^{+{0.45}}_{-{0.36}}$ | ${-0.06}^{+{0.33}}_{-{0.27}}$ | ${1190}^{+{594}}_{-{438}}$ | $29.7^{+0.2}_{-0.2}$
GW170823 | 1187058327 | 4 | $[25,175]$ | $[1,8]$ | $[0,0.99]$ | $[100,7000]$ | ${37.4}^{+{5.5}}_{-{5.1}}$ | ${1.57}^{+{0.94}}_{-{0.51}}$ | ${-0.01}^{+{0.42}}_{-{0.33}}$ | ${0.06}^{+{0.56}}_{-{0.55}}$ | ${0.03}^{+{0.30}}_{-{0.28}}$ | ${1690}^{+{1081}}_{-{880}}$ | $39.5^{+0.2}_{-0.2}$
As a full-scale application, we reproduce the analyses of the BBH mergers
published in GWTC-1 Abbott _et al._ (2019a) employing the bajes pipeline and
the time-domain EOB waveform model TEOBResumS, including only the dominant
$(2,2)$ mode. Table 4 shows the priors used for each event, where the nominal
GPS time refers to the central value of the analyzed time axis. For all the
studies, the analyzed frequency range goes from 20 Hz to 1 kHz and we assume
aligned spin components with isotropic prior distribution and volumetric prior
for the luminosity distance. The prior distributions for the other parameters
are chosen according to Sec. V.2. For these studies, we employ 8
calibration envelope nodes for each detector, the phase $\phi_{0}$ is
marginalized during the likelihood evaluations and the time-shift parameter
$t_{0}$ is sampled from a $2~{}{\rm s}$ window centered around the nominal GPS
time.
Figure 19 shows the posterior distributions marginalized in the mass
components and spin magnitudes planes for all the analyzed BBH events. The
mass components are expressed in the source-frame of the binaries assuming the
cosmological model presented in Ref. Aghanim _et al._ (2020). The detector-
frame mass components $m_{i}$ can be estimated in the source-frame of the
binary as
$m_{i}^{\rm src}=\frac{m_{i}}{1+z}\,,\quad i=1,2\,,$ (43)
where $z$ is the cosmological redshift of the source. In general, the
recovered mass parameters show a predominance of equal-mass binaries, with the
mass ratio well constrained to $q\lesssim 3$, except for the low-mass binary
GW151012, which admits values of $q\simeq 5$ at the 90% credible level. The
recovered mass components are distributed between $7~{}{\rm M_{\odot}}$ and
$70~{}{\rm M_{\odot}}$, with an abundance in the range $[20~{}{\rm
M_{\odot}},50~{}{\rm M_{\odot}}]$. In terms of spin contributions, the most
interesting events are GW151226, whose posterior distribution excludes the
non-spinning case at 90% confidence level, consistently with Ref. Abbott _et
al._ (2019a); Schmidt _et al._ (2020), and GW170729, which recovers an
effective spin of $\chi_{\rm eff}\approx{0.3}$ and admits values up to
$\chi_{1}\gtrsim 0.9$. The other GW transients show mitigated spin
contributions, with $\chi_{1}\lesssim 0.5$ at the 90% credible level.
Generally, the posterior distributions for the secondary spin component
$\chi_{2}$ are more broadly distributed and less informative than those for
the primary component $\chi_{1}$.
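The conversion of Eq. (43) can be sketched by inverting the luminosity-distance relation of a flat $\Lambda$CDM cosmology. The snippet below is an illustrative stand-in: the Planck-like parameter values ($H_0$, $\Omega_m$) and the simple midpoint-rule integration are our own assumptions, not the cosmology routines used for the results above.

```python
import math

# Assumed flat-LambdaCDM parameters (Planck-like values; illustrative only).
H0 = 67.7            # Hubble constant in km/s/Mpc
OMEGA_M = 0.31       # matter density parameter
C_KM_S = 299792.458  # speed of light in km/s

def luminosity_distance(z, n_steps=2000):
    """D_L(z) = (1+z) * (c/H0) * integral_0^z dz'/E(z') for a flat universe."""
    E = lambda zp: math.sqrt(OMEGA_M * (1 + zp) ** 3 + (1 - OMEGA_M))
    dz = z / n_steps
    integral = sum(dz / E((i + 0.5) * dz) for i in range(n_steps))  # midpoint rule
    return (1 + z) * (C_KM_S / H0) * integral  # in Mpc

def redshift_from_distance(d_l_mpc, z_max=1.0, tol=1e-8):
    """Invert the monotonic D_L(z) relation by bisection on [0, z_max]."""
    lo, hi = 0.0, z_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if luminosity_distance(mid) < d_l_mpc:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def source_frame_mass(m_detector, d_l_mpc):
    """Eq. (43): m_src = m_det / (1 + z)."""
    return m_detector / (1 + redshift_from_distance(d_l_mpc))
```

For a GW150914-like event with $D_L\approx 470$ Mpc, the inferred redshift is $z\approx 0.1$, so the source-frame chirp mass is roughly 10% smaller than the detector-frame value.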
Furthermore, Fig. 20 shows the posterior distributions of final masses $M_{\rm
f}^{\rm src}$ (estimated in the source-frame) and final spin magnitudes
$\chi_{\rm f}$ expected for the merger remnants. The values are computed
resorting to NR formulae presented in Ref. Jiménez-Forteza _et al._ (2017),
calibrated with aligned-spin BBH simulations. The majority of the recovered
final spins $\chi_{\rm f}$ lie around ${\sim}0.67$ due to the moderate spin
contributions of the observed mergers. Regarding the extrinsic parameters,
Fig. 21 shows the posterior distributions of the sky location. The sky maps
for GW170814 and GW170818 show a slightly more pronounced bimodality compared with
the results presented in GWTC-1 Abbott _et al._ (2019a). On the other hand,
the 90% credible region for GW170104 appears to be more constrained. Note that
the aligned-spin assumption affects the overall SNR and, in turn, the recovered
posterior distributions Dal Canton _et al._ (2014, 2015). The measurements
for the sky locations of the other events do not show considerable deviations
from the GWTC-1 estimations Abbott _et al._ (2019a). Fig. 22 shows the
correlations of the luminosity distance $D_{L}$ with the inclination angle
$\iota$ and with the total mass $M=m_{1}+m_{2}$ (estimated in the detector
frame). The luminosity distances are in agreement with the GWTC-1 estimations,
while the inclination angles show slightly wider posterior supports due to the
degeneracy introduced by the aligned-spins assumption Apostolatos _et al._
(1994).
For the GW150914 case, we can compare the analysis with higher-order modes
(presented in Sec. VII.1) with the results estimated using only the dominant
mode. First of all, the results with $\ell\leq 5$ show narrower uncertainties,
consistently with the inclusion of a larger amount of physical information
Calderón Bustillo _et al._ (2017); Breschi _et al._ (2019a); Mills and
Fairhurst (2021). On the other hand, the estimated Bayes' factors do not show
strong evidence in favor of or against the inclusion of higher-order modes, as
expected for this kind of source; higher-order modes are expected to be more
relevant for large mass ratios and edge-on binaries Varma _et al._ (2014);
Calderón Bustillo _et al._ (2015). Overall, the median values of the
recovered parameters are consistent with each other, except for the mass
ratio, which appears more constrained around $q=1$ when higher-order modes are
included.
In conclusion, we have shown that TEOBResumS can be effectively applied to BBH
signals, obtaining robust and consistent results Nagar _et al._ (2018). The
main limitation of the presented results comes from the aligned-spins
assumption, which introduces degeneracies with other parameters and can induce
biases in population inferences Stevenson _et al._ (2017); Farr _et al._
(2017); Tiwari _et al._ (2018). We are planning to extend the presented
catalog including precessing spin terms Akcay _et al._ (2020) and verifying
the importance of eccentric contributions Chiaramello and Nagar (2020). In
terms of computational cost, TEOBResumS shows an overall good behavior,
performing the analysis of a typical BBH signal (of length $8~{}{\rm s}$) in
${\sim}14~{}{\rm hours}$ on 32 CPUs.
Figure 19: Marginalized posterior distribution for the source-frame mass
components $\\{m_{1},m_{2}\\}$ (left) and the spins $\\{\chi_{1},\chi_{\rm
eff}\\}$ (right) of the BBH mergers presented in GWTC-1 Abbott _et al._
(2019a). The PE studies have been performed with TEOBResumS model. The masses
are expressed in the binary source-frame employing the cosmological model
proposed in Ref. Aghanim _et al._ (2020). The contours refer to the 90%
credible regions. Figure 20: Marginalized posterior distribution for the final
masses and spins $\\{M_{\rm f}^{\rm src},\chi_{\rm f}\\}$ for the remnants of
the BBH mergers presented in GWTC-1 Abbott _et al._ (2019a). The analyses are
performed using the bajes pipeline and TEOBResumS waveform approximant. The
values have been computed from the posterior samples employing NR formulae
presented in Ref. Jiménez-Forteza _et al._ (2017). The final masses are
expressed in the source-frame of the binary. The contours refer to the 90%
credible regions. Figure 21: Marginalized posterior distribution for right
ascension and declination angles $\\{\alpha,\delta\\}$ (represented using a
Mollweide projection) for the BBH mergers presented in GWTC-1 Abbott _et al._
(2019a). The analyses are performed using the bajes pipeline and TEOBResumS
waveform approximant. The right ascension $\alpha$ is expressed in hours,
while the declination $\delta$ is reported in degrees. The contours refer to
the 90% credible regions.
Figure 22: Marginalized posterior distribution for inclination angle and
luminosity distance $\\{\iota,D_{L}\\}$ (left) and for luminosity distance and
total mass $\\{D_{L},M\\}$ (right) for the BBH mergers presented in GWTC-1
Abbott _et al._ (2019a). The analyses are performed using the bajes pipeline
and TEOBResumS waveform approximant. The contours refer to the 90% credible
regions.
## VIII Conclusion
In this paper we presented bajes, a parallel, lightweight infrastructure for
Bayesian inference, whose main application is the data analysis of
gravitational-wave and multimessenger transients. bajes is implemented in
Python and comes with a versatile framework for Bayesian inference and
different state-of-the-art samplers. Furthermore, it provides methods for the
analysis of GW and EM transients emitted by compact binary coalescences. We
benchmarked bajes by means of injection-recovery experiments with BBH merger,
BNS inspiral-merger and postmerger signals. The injection studies and
statistical tests show that the bajes pipeline is well calibrated and it
provides robust results, within the expected statistical fluctuations.
The injections of BNS postmerger signals also offered the first detectability
study with a five-detector network including LIGO, Virgo, KAGRA and third-
generation ET Punturo _et al._ (2010); Hild _et al._ (2011). We find that BNS
postmerger signals will be detectable for optimally oriented sources located at
${\lesssim}80~{}{\rm Mpc}$. This result is largely due to the ET sensitivity
Punturo _et al._ (2010); Hild _et al._ (2011), which contributes 90% of the
SNR. According to recent population studies Abbott _et al._ (2020c) and using
the distance threshold estimated from our survey with the third-generation
network (${\sim}$80 Mpc), the detection rate of these sources is expected to
be $0.5{-}2$ events per year. As discussed in Sec. VI.0.3, the detection of
such a transient, combined with the knowledge of EOS-insensitive relations,
can reveal essential properties of the nuclear matter at high densities,
improving significantly the EOS constraints Breschi _et al._ (2019b).
We demonstrated the reliability of bajes in analyzing the observational data
recorded by the LIGO-Virgo interferometers Abbott _et al._ (2019a, d). The
posterior distributions for the parameters of GW transients computed with the
bajes pipeline are in agreement with the results from other GW pipelines Veitch _et
al._ (2015); Lange _et al._ (2018). The direct comparison of the bajes
results on GW150914 with the ones obtained with the bilby Ashton _et al._
(2019); Smith _et al._ (2020); Romero-Shaw _et al._ (2020) pipeline shows a
maximum JS divergence of $5.2{\times}10^{-4}$ bit for the tilt angle
$\vartheta_{2}$ and the marginalized posterior distributions are largely
consistent with each other. Furthermore, the analyses of GW150914 with
TEOBResumS approximant slightly emphasize the relevance of higher-order modes
for improving the accuracy of the binary properties estimations.
We performed PE studies on GW170817 using TEOBResumSPA, IMRPhenomPv2_NRTidal
and, for the first time, TaylorF2 including 5.5PN point-mass contributions
and 7.5PN tidal terms. The novel analysis with the extended PN model shows a
good agreement with previous estimations performed with the same template
family Abbott _et al._ (2019a, e). Using IMRPhenomPv2_NRTidal template, we
found full consistency with previous results and the official LIGO-Virgo
posterior samples Abbott _et al._ (2019a, e). Posterior distributions for the
reduced tidal parameter ${\tilde{\Lambda}}$ recovered with upper cutoff-
frequency $f_{\rm max}=2~{}{\rm kHz}$ show larger systematic biases between
different waveform templates compared with the $1~{}{\rm kHz}$ analyses.
However, using a larger $f_{\rm max}$, it is possible to take into account a
larger amount of tidal information, which leads to more constrained
measurements. The tidal parameter ${\tilde{\Lambda}}$ estimated with
TEOBResumSPA shows slightly larger values compared with the measurements of TaylorF2
and IMRPhenomPv2_NRTidal with $f_{\rm max}=2~{}{\rm kHz}$. The results of EOB
and PN models are in overall good agreement when $f_{\rm max}=1~{}{\rm kHz}$ is
employed, and also agree with the independent analysis of AT2017gfo Breschi _et al._
(2021a). We note that parallelization methods are key for the PE of BNS
signals associated with long data segments ${\gtrsim}100\,$s. The bajes runs
discussed in Sec. VII.2 were efficiently performed on 128 CPUs with total
execution-time of ${\sim}$1 day.
Future work will present the validation and the application of bajes to
multimessenger analyses, including EM counterparts like kilonovae and
$\gamma$-ray burst (e.g. Hayes _et al._ , 2020; Breschi _et al._ , 2021a).
Moreover, we are implementing reduced-order-quadrature Canizares _et al._
(2015); Smith _et al._ (2016); Morisaki and Raymond (2020) and relative-
binning Zackay _et al._ (2018); Dai _et al._ (2018) methods in order to speed
up the likelihood evaluations in the GW studies. Inferences on the properties of
neutron star matter will be supported with the inclusion of a parametrized EOS
sampling method Read _et al._ (2009); Raithel _et al._ (2016). Moreover,
future bajes releases will include an extended set of nested samplers, in
particular algorithms based on machine learning (e.g. Albert, 2020) and
efficiently parallelizable routines (e.g. Buchner, 2021).
bajes is publicly available at
https://github.com/matteobreschi/bajes
and contributions from the community are welcome. The posterior samples
presented in Sec. VII.3 and the configuration files to reproduce the runs are
available on Zenodo Breschi _et al._ (2021b).
###### Acknowledgements.
The authors would like to thank Walter Del Pozzo for useful discussions. M.B.
and S.B. acknowledge support by the EU H2020 under ERC Starting Grant, no.
BinGraSp-714626. R.G. and M.B. acknowledge support from the Deutsche
Forschungsgemeinschaft (DFG) under Grant No. 406116891 within the Research
Training Group RTG 2522/1. The computational experiments were performed on the
ARA cluster at the Friedrich Schiller University Jena supported in part by DFG
grants INST 275/334-1 FUGG and INST 275/363-1 FUGG, and ERC Starting Grant,
no. BinGraSp-714626. This research has made use of data obtained from the
Gravitational Wave Open Science Center LIGO Scientific and Virgo Collaboration
(a), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the
Virgo Collaboration. LIGO is funded by the U.S. National Science Foundation.
Virgo is funded by the French Centre National de Recherche Scientifique
(CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the
Dutch Nikhef, with contributions by Polish and Hungarian institutes.
## Appendix A MCMC and PTMCMC
A generic Markov-chain Monte Carlo (MCMC) algorithm explores the parameter
space by moving toward regions of increasing probability and
returns a set of independent samples representative of the target probability
density. The MCMC sampler implemented in bajes is based on emcee Foreman-
Mackey _et al._ (2013): this routine represents a simple and complete
implementation of Metropolis-Hastings sampling that takes advantage of
parallel chains.
The MCMC algorithm can be summarized as follows. An arbitrary number of
chains, say $n_{\rm chain}$, are initialized with as many random prior samples.
For each chain, the last stored sample is evolved and a new sample
$\bm{\theta}^{*}$ is proposed according to predefined proposal methods (see
App. C). The new sample $\bm{\theta}^{*}$ is accepted with probability
${\rm
min}\left[\,1,\,\frac{p(\bm{\theta}^{*}|\textbf{d},H)}{p(\bm{\theta}_{i}|\textbf{d},H)}\,\frac{q(\bm{\theta}_{i}|\bm{\theta}^{*})}{q(\bm{\theta}^{*}|\bm{\theta}_{i})}\right]\,,$
(44)
where $\bm{\theta}_{i}$ is the last sample of the chain and
$q(\bm{\theta}_{i}|\bm{\theta}_{j})$ is the proposal density function computed
between $\bm{\theta}_{i}$ and $\bm{\theta}_{j}$. This procedure is iterated
for every chain of the ensemble and samples are collected during the
exploration. Note that, according to the prescription of Eq. (44), the probability
of the proposed sample is not required to be strictly greater than that of the
current sample. The initial exploration is called burn-in, during which the chains
randomly explore the surrounding prior volume. The algorithm spends these
iterations in order to localize the maximum-probability region. After a
certain amount of iterations, depending on the complexity of the parameter
space, the chains can converge and the actual collection of posterior samples
starts. Subsequently, when the algorithm reaches the stopping condition, the
burn-in portion is removed, the samples from different chains are joined
together, the autocorrelation length (ACL) is computed in order to estimate
the effective number of independent posterior samples, and the final set of
samples is extracted from the joined set according to the value of the ACL. The
stopping criterion implemented in bajes for the MCMC algorithms is defined by
the requested number of output samples $n_{\rm out}$,
$\frac{\left(i-n_{\rm burn}\right)\,n_{\rm chain}}{{\rm ACL}}\geq n_{\rm
out}\,,$ (45)
where $i$ is the current iteration, $n_{\rm chain}$ is the total number of
chains, $n_{\rm burn}$ is the number of iterations required for burn-in and
ACL is computed on the set of post-burn-in samples. (We are planning to
modify the current MCMC stopping condition by implementing the Gelman-Rubin
diagnostic test Gelman and Rubin (1992); Brooks and Gelman (1998).)
The MCMC algorithm requires a lightweight setup and it is fast and
versatile. However, when the parameter space becomes large or the
distributions are multimodal, this method can suffer from several issues:
insufficient or inaccurate exploration of the parameter space, chains
trapped in local maxima, or an ensemble unable to reach
convergence. These issues can be mitigated by resorting to a large number of
parallel chains or to specific proposal methods (see App. C). Moreover, given
a set of posterior samples, it is possible to estimate the evidence using the
approximation
$p(\textbf{d}|H)\approx\frac{1}{n_{\rm samples}}\sum_{i=1}^{n_{\rm
samples}}p(\bm{\theta}_{i}|\textbf{d},H)\,,$ (46)
where the index $i=1,\dots,n_{\rm samples}$ runs over the posterior samples.
However, in general, Eq. (46) does not provide accurate estimates of the
evidence, since the MCMC algorithm is not designed to thoroughly inspect the
whole parameter space.
On the other hand, the parallel tempering MCMC (PTMCMC) Earl and Deem (2005);
Swendsen and Wang (1986); Neal (1996) performs improved exploration of the
parameter space and provides a more accurate estimation of the evidence
integral compared to standard MCMC techniques. The PTMCMC sampler implemented
in bajes is inspired by ptemcee Vousden _et al._ (2015). The PTMCMC
introduces an inverse temperature coefficient $\beta=1/T\in[0,1]$ in the
computation of posterior distribution, such that
$p_{\beta}(\bm{\theta}|\textbf{d},H)\propto\big{[}p(\textbf{d}|\bm{\theta},H)\big{]}^{\beta}p(\bm{\theta}|H)\,.$
(47)
The set of all chains is grouped in equally-populated sub-ensembles and a
different value of $\beta$ is associated with each tempered ensemble. The
default $\beta$ ladder is geometrically-spaced in the range $[0,1]$. The
algorithm proceeds as the usual MCMC for every chain using the tempered
posterior distribution Eq. (47). For $T=1$, the tempered posterior is
identical to the original one and low-temperature chains will move toward
regions with large likelihood values, focusing on the estimation of the bulk
of the distribution. However, the contribution of the likelihood function is
mitigated by increasing values of $T$, up to the limit $T\to\infty$, where the
posterior is identically equal to the prior. Then, high-temperature chains
will be able to freely explore the majority of the prior support, inspecting the
tails of the targeted posterior distribution and providing a good coverage of
the entire prior volume. Furthermore, the algorithm proposes swaps between
consecutive pairs of chains, received with acceptance
${\rm
min}\left[\,1,\,\left(\frac{p(\textbf{d}|\bm{\theta}_{i},H)}{p(\textbf{d}|\bm{\theta}_{j},H)}\right)^{\beta_{j}-\beta_{i}}\right]\,,$
(48)
where $\bm{\theta}_{i}$ and $\beta_{i}$ are respectively the last sample and the inverse temperature of the $i$-th chain, and analogously for $j$. If the swap is accepted, the positions of the two samples are exchanged between the selected chains. This procedure allows the information of the high-temperature chains to propagate to the low-temperature ones and vice versa, improving the mixing of the samples. Another key feature of parallel tempering is that it satisfies the detailed-balance condition Sambridge (2013), required for the convergence of the MCMC chains.
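A minimal sketch of the swap step of Eq. (48), working with log-likelihood values for numerical stability (an illustration, not the bajes implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Swap acceptance of Eq. (48) between chains i and j: the log of the
# acceptance ratio is (beta_j - beta_i) * (log L_i - log L_j).
def try_swap(loglike_i, loglike_j, beta_i, beta_j):
    log_alpha = (beta_j - beta_i) * (loglike_i - loglike_j)
    return np.log(rng.uniform()) < min(0.0, log_alpha)
```

When `try_swap` returns True, the last samples of the two chains are exchanged, as described in the text.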
Finally, the stopping criterion Eq. (45) is evaluated on the untempered chains; when it is satisfied, the sampler stops and the posterior samples are collected using only the chains of the $T=1$ sub-ensemble. Furthermore, using the auxiliary coefficient $\beta$ and thermodynamic integration Goggans and Chi (2004); Lartillot and Philippe (2006), it is possible to write the evidence as
$\log p(\textbf{d}|H)=\int_{0}^{1}E_{\beta}\big{[}\log p(\textbf{d}|\bm{\theta},H)\big{]}\,\text{d}\beta\,,$ (49)
where the expectation value is estimated using the tempered posterior, i.e.
$E_{\beta}\left[f(\bm{\theta})\right]=\int_{\bm{\Theta}}f(\bm{\theta})\,p_{\beta}(\bm{\theta}|\textbf{d},H)\,\text{d}\bm{\theta}\,,$ (50)
where $p_{\beta}(\bm{\theta}|\textbf{d},H)$ is the normalized tempered posterior of Eq. (47).
Eq. (49) can be estimated through numerical integration. The terms $E_{\beta}\left[\log p(\textbf{d}|\bm{\theta},H)\right]$ are estimated over the initial $\beta$ ladder as sample means over the tempered posterior samples, in analogy with Eq. (46), and the integral is approximated using the trapezoidal rule. The PTMCMC thus represents an improved version of the standard MCMC technique that aims to provide more accurate estimates of the evidence. However, the accuracy of the estimate Eq. (49) strongly depends on the number of employed temperatures: in complex situations, the total number of chains needed to accurately estimate the evidence can exceed the number of available processes, affecting the efficiency of the sampler Gupta _et al._ (2018).
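As an illustrative sketch (not the bajes implementation), the estimate of Eq. (49) can be written in a few lines; loglike_per_beta is a hypothetical mapping from each $\beta$ of the ladder to the array of log-likelihood values of its tempered posterior samples.

```python
import numpy as np

# Thermodynamic-integration estimate of Eq. (49): E_beta[log L] is taken as
# the sample mean of the log-likelihoods of each tempered ensemble, and the
# beta integral is evaluated with the trapezoidal rule.
def thermodynamic_log_evidence(loglike_per_beta):
    betas = np.array(sorted(loglike_per_beta))
    means = np.array([np.mean(loglike_per_beta[b]) for b in betas])
    dbeta = np.diff(betas)
    return np.sum(0.5 * (means[1:] + means[:-1]) * dbeta)
```

As a sanity check, for a constant log-likelihood $\log L=c$ the estimate returns exactly $c$, since $E_{\beta}[\log L]=c$ for every $\beta$.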
## Appendix B Nested sampling
Nested sampling Skilling (2006); Sivia and Skilling (2006) is a Bayesian technique designed to accurately estimate the evidence integral that, in addition, provides a set of posterior samples as a final product of the algorithm. The strength of this technique is its ability to succeed even for high-dimensional parameter spaces or multimodal distributions. Nested sampling computes the evidence by identifying nested isoprobability contours and estimating the amount of prior volume enclosed by each level. The main advantage of this technique is the reduction of the multidimensional integral Eq. (2) to a one-dimensional problem Chopin and Robert (2010); Betancourt _et al._ (2011), by introducing the variable
$X(\lambda)=\int_{p(\textbf{d}|\bm{\theta},H)>\lambda}p(\bm{\theta}|H)\,\text{d}\bm{\theta}\,.$
(51)
The quantity $X(\lambda)$ is usually labeled prior mass: it is the cumulative prior volume covering all likelihood values greater than $\lambda$. The prior mass takes values in the range $[0,1]$, where $X=1$ corresponds to the entire prior volume. Then, we can rewrite the likelihood as a function of the prior mass, i.e. $p(\textbf{d}|X(\lambda),H)=\lambda$, from which it follows that
$p(\textbf{d}|H)=\int_{0}^{1}p(\textbf{d}|X,H)\,\text{d}X\,.$ (52)
Eq. (52) has a further advantage: by definition, the likelihood $p(\textbf{d}|X,H)$ is a monotonically decreasing function of $X$, so that, for $X\to 0$, the likelihood tends to its maximum value. Accomplishing the transformation $\bm{\theta}\to X$ amounts to dividing the unit prior-mass range into small bins and sorting them by likelihood.
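This sorting can be made concrete with a toy Monte Carlo sketch (illustrative choices throughout, not part of any sampler): drawing samples from the prior, each sample occupies an equal share of prior mass, and sorting the likelihood values in decreasing order approximates the monotone function $p(\textbf{d}|X,H)$ on the grid $X_{k}=k/N$, turning Eq. (52) into a simple one-dimensional sum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: standard-normal likelihood, uniform prior on [-5, 5],
# so the analytic evidence is Z = 1/10 (up to negligible tail truncation).
N = 200000
theta = rng.uniform(-5.0, 5.0, N)
L = np.exp(-0.5 * theta**2) / np.sqrt(2.0 * np.pi)

# L as a monotonically decreasing function of the prior mass X_k = k/N
L_of_X = np.sort(L)[::-1]
# Eq. (52) as a 1D sum over X with uniform spacing dX = 1/N
Z = np.sum(L_of_X) / N
print(np.log(Z))  # close to log(1/10) ~ -2.303
```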
A standard nested-sampling routine requires an input number of live points $n_{\rm live}$ and a positive real number $\zeta$ representing the final tolerance of the computation. The live points are samples of the parameter space that are evolved during the routine: starting from a set of $n_{\rm live}$ initial samples (usually extracted from the prior distribution), the live point with the lowest likelihood value, say $\bm{\theta}_{i}$, is discarded and replaced with a new point $\bm{\theta}^{*}$ extracted from the prior distribution that satisfies the relation $p(\textbf{d}|\bm{\theta}^{*},H)>p(\textbf{d}|\bm{\theta}_{i},H)$. The new point $\bm{\theta}^{*}$ is usually proposed using internal MCMC routines (see App. C). The procedure is repeated, discarding the lowest-likelihood live point at every iteration, so that the algorithm starts by inspecting the entire prior volume ($X_{0}=1$) and moves toward lower values of the prior mass,
$0<X_{i}<\dots<X_{2}<X_{1}<X_{0}=1\,,$ (53)
up to the most likely value(s), where the majority of the likelihood volume is located. At the $n$-th iteration, the evidence is approximated from Eq. (52) using the trapezoidal rule,
$p_{n}(\textbf{d}|H)\approx\frac{1}{2}\sum_{i=1}^{n}\left(X_{i-1}-X_{i+1}\right)\,p(\textbf{d}|\bm{\theta}_{i},H)\,,$
(54)
where $X_{i}$ is estimated with the expectation value Betancourt _et al._
(2011),
$E[X_{i}]=e^{-i/n_{\rm live}}\,.$ (55)
From Eq. (55), we deduce that the average volume occupied by a live point corresponds to the $n_{\rm live}$-th part of the current prior mass. Hence, increasing $n_{\rm live}$, the sampler performs a finer-grained inspection of the prior volume. In the nested-sampling context, the $i$-th extracted sample is taken as the representative element of the respective isoprobability level of the likelihood and, since the algorithm accepts strictly increasing likelihood values, each level is guaranteed to be nested in the previous one. The overall evidence is then computed by summing the likelihood contributions from each nested level, weighted by the expected difference in prior mass according to Eq. (55). The specific stopping condition depends on the requested sampler. In general, if the algorithm has converged to the global maximum-likelihood value, at the $n$-th iteration the evidence is expected to vary at most by
$\Delta_{n}=\max_{i\leq n}\big{[}p(\textbf{d}|\bm{\theta}_{i},H)\big{]}\cdot
X_{n}\,,$ (56)
where $\max_{i\leq n}[p(\textbf{d}|\bm{\theta}_{i},H)]$ is the maximum likelihood discovered up to the $n$-th iteration. The general stopping criterion then requires that the estimated evidence is not expected to change by more than a factor $e^{\zeta}$, i.e.
$\zeta\geq\log\left(1+\frac{\Delta_{n}}{p_{n}(\textbf{d}|H)}\right)\,.$ (57)
When the stopping condition is satisfied, the sampler stops and includes the contributions of the remaining live points in the overall evidence. The posterior distribution can then be reconstructed from the chain of collected samples, weighting each point according to its posterior probability,
$p(\bm{\theta}_{k}|\textbf{d},H)\approx\frac{\left(X_{k-1}-X_{k+1}\right)}{2}\,\frac{p(\textbf{d}|\bm{\theta}_{k},H)}{p(\textbf{d}|H)}\,,$
(58)
where the index $k$ runs over the extracted samples Hol _et al._ (2006).
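The loop described above can be illustrated with a toy one-dimensional sketch (not the bajes code): a standard-normal likelihood with a uniform prior on $[-5,5]$, so that $\log p(\textbf{d}|H)=\log(1/10)$. New live points are drawn by brute-force rejection from the prior, which is viable only in low dimension; real implementations use the MCMC sub-chains of App. C. The shrinkage follows Eq. (55), the loop stops with the criterion of Eqs. (56)-(57), and simple rectangle weights $(X_{i-1}-X_{i})$ replace the trapezoidal weights of Eq. (54) for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_like(x):
    # standard-normal log-likelihood
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

n_live, zeta = 100, 0.01
live = rng.uniform(-5.0, 5.0, n_live)
logL = log_like(live)
logZ, logX_prev = -np.inf, 0.0

for i in range(1, 10**6):
    k = np.argmin(logL)                        # worst live point
    logX = -i / n_live                         # E[log X_i], cf. Eq. (55)
    logw = np.log(np.exp(logX_prev) - np.exp(logX))
    logZ = np.logaddexp(logZ, logw + logL[k])  # accumulate the evidence sum
    logX_prev = logX
    # stopping criterion of Eqs. (56)-(57)
    if np.log1p(np.exp(logL.max() + logX - logZ)) <= zeta:
        break
    # replace the worst point with a prior draw above the threshold
    x = rng.uniform(-5.0, 5.0)
    while log_like(x) <= logL[k]:
        x = rng.uniform(-5.0, 5.0)
    live[k], logL[k] = x, log_like(x)

# contribution of the remaining live points, each with mass X_n / n_live
logZ = np.logaddexp(logZ, np.logaddexp.reduce(logL) + logX_prev - np.log(n_live))
print(logZ)  # analytic value: log(1/10) ~ -2.30
```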
The nested-sampling routine offers a much better architecture for evidence estimation than MCMC techniques. In general, the estimated log-evidence carries a statistical uncertainty inversely proportional to $n_{\rm live}$ due to the marginalization over the prior mass, while numerical errors are dominated by the use of point estimates and by the length of the MCMC sub-chains $n_{\rm MCMC}$ used to propose new samples, as shown in Ref. Veitch and Vecchio (2010). This inefficiency can be suppressed by estimating the ACL of the MCMC sub-chains and proposing a new sample independent of the previous one. Note that the estimation of the posterior weights in Eq. (58) is also affected by statistical and numerical uncertainties Higson _et al._ (2018).
The cpnest software Pozzo and Veitch represents an exemplary implementation of standard nested sampling: the code is designed to interface cleanly with user-defined models and its sampling methods can be easily customized. On the other hand, dynesty Speagle (2020) takes advantage of flexible bounding methods Buchner (2014, 2017), which aim to define isoprobability contours in order to exclude the least likely regions of the parameter space, improving the robustness of the algorithm. Moreover, dynesty provides an implementation of dynamic nested sampling Higson _et al._ (2019): this technique allocates an adaptive number of live points at each iteration $i$, i.e. $n_{\rm live}\equiv n_{\rm live}(i)$. Since the change in prior volume at a given iteration depends on the number of live points, as shown in Eq. (55), the possibility of varying $n_{\rm live}$ gives the algorithm the freedom to control the effective resolution of the sampling as a function of prior mass, adapting it to the shape of the posterior in real time and improving the evaluation of the posterior density. Since the architecture of dynamic nested sampling differs from the standard one, it requires modified methods to compute the evidence and estimate the posterior. By default, the importance of each sample in the evidence computation is proportional to the amount of posterior density enclosed in the prior mass probed by that point.
## Appendix C Proposal methods
The exploration of the parameter space is defined by proposal methods, which aim to move a sample toward a more likely position in the parameter space, ideally independent of the previous one. The efficiency of the proposals determines the acceptance rate of new samples, and it affects the final ACL and consequently the efficiency of the whole sampler. These tools are therefore fundamental for the progress of the chains, and a broad and varied combination of proposal methods is needed to accurately inspect the entire parameter space. For this reason, bajes implements an assorted combination of proposal methods.
Before discussing the specific proposals implemented in bajes, we observe that a generic proposal method requires the introduction of a proposal distribution $q$ in order to satisfy detailed balance. A proposal distribution $q(\bm{\theta}^{*}|\bm{\theta}_{i})$ quantifies the probability of proposing $\bm{\theta}^{*}$ given $\bm{\theta}_{i}$. A symmetric proposal is such that $q(\bm{\theta}^{*}|\bm{\theta}_{i})=q(\bm{\theta}_{i}|\bm{\theta}^{*})$ for every pair $(\bm{\theta}^{*},\bm{\theta}_{i})$, so that the proposal ratio cancels in the acceptance probability. The interested reader may consult Refs. Hastings (1970); Roberts and Rosenthal (2007) for details on Markovian processes.
* •
Prior proposal: A new point is extracted from the prior distribution by generating a sample uniformly distributed over a hyper-cube and projecting it into the current parameter space according to the prescribed prior. This method can show low acceptance on long time scales, especially for complex posterior distributions. However, it improves the exploration of the parameter space and does not require expensive computations.
* •
Stretch proposal: This method is introduced in Ref. Goodman and Weare (2010) and it makes use of an auxiliary random sample $\bm{\theta}_{a}$ extracted from the history of the chains. A new sample $\bm{\theta}^{*}$ is proposed from the initial position $\bm{\theta}_{i}$ according to
$\bm{\theta}^{*}=\bm{\theta}_{a}+\xi\,(\bm{\theta}_{i}-\bm{\theta}_{a})\,,$
(59)
where $\xi$ is a scale factor. As pointed out in Ref. Christen and Fox (2010), this proposal is symmetric if the probability density of the scale factor $g(\xi)$ satisfies the condition $g(1/\xi)=\xi\,g(\xi)$. The bajes implementation adopts the settings presented in Ref. Foreman-Mackey _et al._ (2013). In order to satisfy detailed balance, this proposal method requires an acceptance factor,
$\frac{q(\bm{\theta}_{i}|\bm{\theta}^{*})}{q(\bm{\theta}^{*}|\bm{\theta}_{i})}=\xi^{n_{\rm dim}-1}\,,$ (60)
where $n_{\rm dim}$ is the number of dimensions of the parameter space. Eq. (60) is computed by conditioning the target distribution on the trajectory described by Eq. (59). The method adapts well to arbitrary shapes of the distribution Goodman and Weare (2010), but it can become inefficient in multimodal cases.
* •
Random walk proposal: This method is introduced in Ref. Goodman and Weare (2010). The new sample $\bm{\theta}^{*}$ is extracted from a multivariate normal distribution centered on the initial sample $\bm{\theta}_{i}$ and with covariance defined using a subset of $N$ auxiliary points randomly extracted from the history of the chains. By default, the bajes walk proposal uses $N=25$. This symmetric method is efficient with unimodal distributions and it can reduce the correlation between the samples. However, it becomes inefficient for complex posterior distributions, since it cannot arbitrarily adapt its shape.
* •
Replacement proposal: This method is introduced in Ref. Goodman and Weare (2010). A subset of $N$ auxiliary points $\bar{\bm{\Theta}}\equiv\\{\bar{\bm{\theta}}_{k}\\}$ is randomly chosen from the history of the chains and used to identify a probability distribution $p(\bm{\theta}|\bar{\bm{\Theta}})$ from which the new samples are extracted. The idea is to estimate $p(\bm{\theta}|\bar{\bm{\Theta}})$ such that it approximates the target distribution, increasing the acceptance. Moreover, the estimation can be refined and adapted during the exploration of the parameter space. In order to estimate $p(\bm{\theta}|\bar{\bm{\Theta}})$, the replacement proposal implemented in bajes employs a Gaussian kernel density estimation with $N=25$. However, this method does not access unexplored regions of the parameter space, possibly leading to highly correlated chains. Furthermore, this method is not symmetric and its proposal distribution is given by $p(\bm{\theta}|\bar{\bm{\Theta}})$.
* •
Differential evolution proposal: This method is introduced in Ref. Nelson _et al._ (2013) and it aims to solve problems due to multimodal distributions using a differential move based on the information of the explored samples. Two auxiliary random samples $\\{\bm{\theta}_{a,b}\\}$ are extracted from the history of the chains and a new sample $\bm{\theta}^{*}$ is proposed from $\bm{\theta}_{i}$ as
$\bm{\theta}^{*}=\bm{\theta}_{i}+\gamma\left(\bm{\theta}_{a}-\bm{\theta}_{b}\right)\,,$
(61)
where $\gamma$ is a scale factor whose value is randomly extracted every time a new sample is proposed. The differential evolution proposal of bajes assigns $\gamma=1$ with 50% probability, in order to improve the mixing between different modes. The remaining 50% of the time, the scale factor is extracted from a normal distribution such that $\gamma\sim{\rm N}(0,2.38/\sqrt{2n_{\rm dim}})$, where $n_{\rm dim}$ is the dimension of the parameter space. This choice has been shown to increase the acceptance of the algorithm DEp ; Roberts and Rosenthal (2001). In general, differential evolution is capable of capturing linear correlations and improving the mixing between different modes; however, it can perform poorly in more complicated scenarios.
* •
Eigenvector proposal: This method computes the covariance from the history of the chains and estimates its eigenvectors. The new point is then proposed by moving the initial sample along a random eigenvector of the covariance, with a scale prescribed by the corresponding eigenvalue. As shown in Ref. Veitch _et al._ (2015), this method can improve the efficiency of the sampler and decrease the correlation of the chains.
* •
Ensemble slice proposal: This method has been introduced in Ref. Karamanis and Beutler (2020) and it represents an ensemble-based improvement of the standard slice proposal Neal (2003). Let us call $\bm{\theta}_{i}$ and $p_{i}$ respectively the initial sample and its probability. The method extracts a value $y\sim{\rm U}(0,p_{i})$ and estimates a direction $\bm{\eta}$ in the parameter space, resorting to the information of the ensemble samples and using differential and Gaussian estimations. Then, the initial sample is moved along the slice defined by the direction $\bm{\eta}$, and a new point $\bm{\theta}^{*}$ is proposed when the associated probability value $p^{*}$ is greater than $y$. With respect to standard slice sampling Neal (2003), this method takes advantage of adaptive scale factors, refined during the evolution of the sampler, that increase the efficiency of the proposal and drastically reduce the correlation between the collected samples. On the other hand, the ensemble slice proposal requires multiple likelihood evaluations; when the likelihood is computationally expensive, this can affect the computational cost of the whole algorithm.
* •
GW-targeted proposal: As discussed in Refs. Veitch and Vecchio (2010); Veitch _et al._ (2015), generic posterior distributions of GW signal parameters show many multimodalities and large correlations between different parameters. Moreover, their shape is usually elongated and irregular. In order to explore the parameter space properly and efficiently, it is therefore useful to inform the proposal method with the structures expected in the posterior distribution and to take advantage of known analytical dependencies on the extrinsic parameters. The GW-targeted proposal implemented in bajes employs most of the GW-specific methods introduced in Refs. Veitch and Vecchio (2010); Veitch _et al._ (2015), such as sky reflection, Gibbs sampling for the luminosity distance, and specific moves exploring the phase-polarization and the distance-inclination correlations.
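Several of the history-based moves above can be sketched compactly. The snippet below illustrates, under simplifying assumptions (a plain NumPy chain history; none of this is the bajes implementation), the stretch move of Eqs. (59)-(60) with the scale density of Ref. Foreman-Mackey _et al._ (2013), the walk move, and the differential-evolution move with the $\gamma$ prescription described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stretch move, Eqs. (59)-(60): xi is drawn from g(xi) ~ 1/sqrt(xi) on
# [1/a, a], which satisfies g(1/xi) = xi g(xi); a = 2 is a common default.
def stretch_move(theta_i, theta_a, a=2.0):
    xi = (1.0 + (a - 1.0) * rng.uniform())**2 / a
    theta_star = theta_a + xi * (theta_i - theta_a)
    log_hastings = (np.size(theta_i) - 1.0) * np.log(xi)  # log of Eq. (60)
    return theta_star, log_hastings

# Walk move: Gaussian step centered on theta_i, with covariance estimated
# from N auxiliary points of the chain history (N = 25 in the text).
def walk_move(theta_i, history, n_aux=25):
    idx = rng.choice(len(history), size=n_aux, replace=False)
    cov = np.atleast_2d(np.cov(np.asarray(history)[idx], rowvar=False))
    return rng.multivariate_normal(np.atleast_1d(theta_i), cov)

# Differential-evolution move, Eq. (61): gamma = 1 half of the time
# (mode jumping), otherwise gamma ~ N(0, 2.38/sqrt(2 n_dim)).
def de_move(theta_i, theta_a, theta_b):
    n_dim = np.size(theta_i)
    if rng.uniform() < 0.5:
        gamma = 1.0
    else:
        gamma = rng.normal(0.0, 2.38 / np.sqrt(2.0 * n_dim))
    return theta_i + gamma * (theta_a - theta_b)
```

In a full sampler, each proposed point is then accepted or rejected with the Metropolis-Hastings rule, including the log_hastings correction where the move is not symmetric.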
## Appendix D Simple example
Figure 23: Posterior distribution for the parameters $\\{\theta_{1},\theta_{2},\theta_{3}\\}$ discussed in App. D. The blue lines correspond to the results obtained with the PTMCMC sampler, while the yellow and green lines refer respectively to the cpnest and dynesty results. The marginalized contours are the 50% and 90% credible regions.
In this section we show a simple example of Bayesian inference performed with the samplers implemented in bajes. We use a three-dimensional parameter space $\bm{\theta}=\\{\theta_{1},\theta_{2},\theta_{3}\\}$ bounded to $\theta_{i}\in[-8,+8]$ for $i=1,2,3$, with a uniform prior distribution. For the sake of simplicity, we employ a fully analytical likelihood function; however, we include multimodalities and non-regular shapes in order to test the behaviour of the samplers. Introducing the auxiliary variables
$r_{\pm}=\sqrt{\theta_{1}^{2}+(\theta_{2}\mp 2)^{2}}\,,\quad\zeta=\theta_{3}-\sqrt{1+\theta_{2}^{2}}\,,$ (62)
we define the likelihood as
$p(d|\bm{\theta},H)=\left[e^{-5(r_{+}-2)^{2}}+e^{-5(r_{-}-2)^{2}}\right]e^{-5\zeta^{2}}\,.$
(63)
To give an idea, the isoprobability contours described by this function roughly resemble the union of two toroidal surfaces, where the second is identical to the first up to a rotation of $\pi/2$. The likelihood function in Eq. (63) can be numerically integrated using a quadrature rule, resulting in the evidence $\log p(d|H)\approx-5.5583$ with an error of the order of $O(10^{-5})$.
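This number can be reproduced with a simple brute-force grid quadrature (a sketch; the grid spacing is an illustrative choice): the likelihood of Eq. (63) is summed over a uniform grid on $[-8,8]^{3}$ and divided by the prior volume $16^{3}$.

```python
import numpy as np

# Riemann sum of the likelihood of Eq. (63) over a uniform grid; the
# grid step h = 0.1 comfortably resolves the Gaussian ridges (sigma ~ 0.32).
h = 0.1
x = np.arange(-8.0, 8.0 + h / 2, h)
t1, t2, t3 = np.meshgrid(x, x, x, indexing="ij", sparse=True)

rp = np.sqrt(t1**2 + (t2 - 2.0)**2)
rm = np.sqrt(t1**2 + (t2 + 2.0)**2)
zt = t3 - np.sqrt(1.0 + t2**2)
L = (np.exp(-5.0 * (rp - 2.0)**2) + np.exp(-5.0 * (rm - 2.0)**2)) \
    * np.exp(-5.0 * zt**2)

# evidence = integral of L times the uniform prior density 1/16^3
log_Z = np.log(L.sum() * h**3 / 16.0**3)
print(log_Z)  # close to -5.5583
```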
In order to infer the described model, bajes provides a user-friendly and simple-to-use interface for generic Bayesian inference. To define the prior distributions, it is sufficient to write a prior configuration file specifying the names of the parameters and their bounds. For the case discussed above, we can write the following prior.ini file.
Algorithm 1 prior.ini
1:[x1]
2:min=-8
3:max=+8
4:
5:[x2]
6:min=-8
7:max=+8
8:
9:[x3]
10:min=-8
11:max=+8
Then, the likelihood function can be written in an auxiliary Python file defining a log_like method. This method is imported by the bajes routine and used to evaluate the log-likelihood for each sample. We observe that the only argument of the customized log_like method has to be a dictionary whose keywords are specified by the prior file. This procedure easily allows the user to make use of the bajes inference while introducing arbitrary external data or packages. Following our example, we write the following code.
Algorithm 2 like.py
1:from math import sqrt, exp, log
2:
3:def log_like(p):
4: rp = sqrt(p['x1']**2 + (p['x2'] - 2)**2)
5: rm = sqrt(p['x1']**2 + (p['x2'] + 2)**2)
6: zt = p['x3'] - sqrt(1 + p['x2']**2)
7: G1 = exp(-5*(rp - 2)**2)
8: G2 = exp(-5*(rm - 2)**2)
9: return log(G1 + G2) - 5*zt**2
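As a quick sanity check of the dictionary convention (evaluated directly, outside of bajes), the point $\bm{\theta}=(0,0,1)$ lies on both likelihood rings ($r_{+}=r_{-}=2$, $\zeta=0$), so the log-likelihood there is $\log 2$:

```python
from math import sqrt, exp, log

# log_like as defined in like.py, repeated here to be self-contained
def log_like(p):
    rp = sqrt(p['x1']**2 + (p['x2'] - 2)**2)
    rm = sqrt(p['x1']**2 + (p['x2'] + 2)**2)
    zt = p['x3'] - sqrt(1 + p['x2']**2)
    G1 = exp(-5*(rp - 2)**2)
    G2 = exp(-5*(rm - 2)**2)
    return log(G1 + G2) - 5*zt**2

# both Gaussians equal 1 and the zeta term vanishes: log(1 + 1) = log 2
print(log_like({'x1': 0.0, 'x2': 0.0, 'x3': 1.0}))  # 0.693...
```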
Once the model is defined, the PE job can be submitted with the command python -m bajes -p prior.ini -l like.py (the full list of input arguments can be visualized with the command python -m bajes --help).
For our exercise we employ three samplers: the PTMCMC, the nested sampling of cpnest, and the dynamic nested sampling of dynesty. The PTMCMC algorithm estimated a log-evidence equal to $-6.4\pm 5.0$, where the reported uncertainty is the standard deviation. The estimate agrees with the numerical result; however, its uncertainty is of the same order as the measurement itself. This reflects the inability of MCMC methods to meticulously integrate over the features of the targeted parameter space. On the other hand, cpnest estimated a log-evidence equal to $-5.50\pm 0.09$, and the dynamic nested sampler of dynesty returned $-5.58\pm 0.14$. These results highlight the strength of nested sampling with respect to MCMC techniques in the evidence evaluation. Figure 23 shows the marginalized posterior distributions extracted from the posterior samples.
## References
* Abbott _et al._ (2019a) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. X9, 031040 (2019a), arXiv:1811.12907 [astro-ph.HE] .
* Del Pozzo _et al._ (2018) W. Del Pozzo, C. Berry, A. Ghosh, T. Haines, L. Singer, and A. Vecchio, Mon. Not. Roy. Astron. Soc. 479, 601 (2018), arXiv:1801.08009 [astro-ph.IM] .
* Wang (2019) D. Wang, Int. J. Mod. Phys. D 28, 1950106 (2019), arXiv:1804.04845 [astro-ph.CO] .
* Abbott _et al._ (2020a) R. Abbott _et al._ (LIGO Scientific, Virgo), (2020a), arXiv:2010.14527 [gr-qc] .
* Hortúa _et al._ (2020) H. J. Hortúa, R. Volpi, D. Marinelli, and L. Malagò, Physical Review D 102 (2020), 10.1103/physrevd.102.103509.
* Veitch and Vecchio (2010) J. Veitch and A. Vecchio, Phys.Rev. D81, 062003 (2010), arXiv:0911.3820 [astro-ph.CO] .
* Veitch _et al._ (2015) J. Veitch _et al._ , Phys. Rev. D91, 042003 (2015), arXiv:1409.7215 [gr-qc] .
* Abbott _et al._ (2020b) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Class. Quant. Grav. 37, 055002 (2020b), arXiv:1908.11170 [gr-qc] .
* Abbott _et al._ (2019b) B. P. Abbott _et al._ (LIGO Scientific, Virgo), (2019b), arXiv:1903.04467 [gr-qc] .
* Breschi _et al._ (2019a) M. Breschi, R. O'Shaughnessy, J. Lange, and O. Birnholtz, (2019a), arXiv:1903.05982 [gr-qc] .
* Abbott _et al._ (2020c) R. Abbott _et al._ (LIGO Scientific, Virgo), (2020c), arXiv:2010.14533 [astro-ph.HE] .
* Abbott _et al._ (2018a) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. Lett. 121, 161101 (2018a), arXiv:1805.11581 [gr-qc] .
* Abbott _et al._ (2017a) B. P. Abbott _et al._ (LIGO Scientific, VINROUGE, Las Cumbres Observatory, DLT40, Virgo, 1M2H, MASTER), Nature (2017a), 10.1038/nature24471, arXiv:1710.05835 [astro-ph.CO] .
* Abbott _et al._ (2019c) B. Abbott _et al._ (LIGO Scientific, Virgo), (2019c), arXiv:1908.06060 [astro-ph.CO] .
* Abbott _et al._ (2017b) B. P. Abbott _et al._ (Virgo, LIGO Scientific), Phys. Rev. Lett. 119, 161101 (2017b), arXiv:1710.05832 [gr-qc] .
* Abbott _et al._ (2017c) B. P. Abbott _et al._ (Virgo, Fermi-GBM, INTEGRAL, LIGO Scientific), Astrophys. J. 848, L13 (2017c), arXiv:1710.05834 [astro-ph.HE] .
* Savchenko _et al._ (2017) V. Savchenko _et al._ , Astrophys. J. 848, L15 (2017), arXiv:1710.05449 [astro-ph.HE] .
* Pian _et al._ (2017) E. Pian _et al._ , Nature (2017), 10.1038/nature24298, arXiv:1710.05858 [astro-ph.HE] .
* Smartt _et al._ (2017) S. J. Smartt _et al._ , Nature (2017), 10.1038/nature24303, arXiv:1710.05841 [astro-ph.HE] .
* Tanvir _et al._ (2017) N. R. Tanvir _et al._ , Astrophys. J. 848, L27 (2017), arXiv:1710.05455 [astro-ph.HE] .
* Tanaka _et al._ (2017) M. Tanaka _et al._ , Publ. Astron. Soc. Jap. (2017), 10.1093/pasj/psx121, arXiv:1710.05850 [astro-ph.HE] .
* Villar _et al._ (2017) V. A. Villar _et al._ , Astrophys. J. 851, L21 (2017), arXiv:1710.11576 [astro-ph.HE] .
* Radice and Dai (2019) D. Radice and L. Dai, Eur. Phys. J. A55, 50 (2019), arXiv:1810.12917 [astro-ph.HE] .
* Dietrich _et al._ (2020) T. Dietrich, M. W. Coughlin, P. T. Pang, M. Bulla, J. Heinzel, L. Issa, I. Tews, and S. Antier, (2020), arXiv:2002.11355 [astro-ph.HE] .
* Breschi _et al._ (2021a) M. Breschi, A. Perego, S. Bernuzzi, W. Del Pozzo, V. Nedora, D. Radice, and D. Vescovi, (2021a), arXiv:2101.01201 [astro-ph.HE] .
* LIGO Scientific Collaboration (2018a) LIGO Scientific Collaboration, ``LIGO Algorithm Library - LALSuite,'' free software (GPL) (2018a).
* Aasi _et al._ (2015) J. Aasi _et al._ (LIGO Scientific), Class. Quant. Grav. 32, 074001 (2015), arXiv:1411.4547 [gr-qc] .
* Acernese _et al._ (2015) F. Acernese _et al._ (VIRGO), Class. Quant. Grav. 32, 024001 (2015), arXiv:1408.3978 [gr-qc] .
* Pankow _et al._ (2015) C. Pankow, P. Brady, E. Ochsner, and R. O'Shaughnessy, Phys. Rev. D92, 023002 (2015), arXiv:1502.04370 [gr-qc] .
* Lange _et al._ (2018) J. Lange, R. O'Shaughnessy, and M. Rizzo, (2018), arXiv:1805.10457 [gr-qc] .
* Ashton _et al._ (2019) G. Ashton _et al._ , Astrophys. J. Suppl. 241, 27 (2019), arXiv:1811.02042 [astro-ph.IM] .
* Biwer _et al._ (2019) C. Biwer, C. D. Capano, S. De, M. Cabero, D. A. Brown, A. H. Nitz, and V. Raymond, Publ. Astron. Soc. Pac. 131, 024503 (2019), arXiv:1807.10312 [astro-ph.IM] .
* Nasa High Energy Astrophysics Science Archive Research Center (2014) (Heasarc) Nasa High Energy Astrophysics Science Archive Research Center (Heasarc), ``HEAsoft: Unified Release of FTOOLS and XANADU,'' (2014), ascl:1408.004 .
* (34) N. H. E. A. S. A. R. Center, https://heasarc.gsfc.nasa.gov/docs/software.html, HEASARC software.
* Olamaie _et al._ (2015) M. Olamaie, F. Feroz, K. J. B. Grainge, M. P. Hobson, J. S. Sanders, and R. D. E. Saunders, Mon. Not. Roy. Astron. Soc. 446, 1799 (2015), arXiv:1310.1885 [astro-ph.CO] .
* Aartsen _et al._ (2020) M. G. Aartsen _et al._ (IceCube), Astrophys. J. 893, 102 (2020), arXiv:1909.08623 [astro-ph.HE] .
* Shariff _et al._ (2016) H. Shariff, X. Jiao, R. Trotta, and D. A. van Dyk, Astrophys. J. 827, 1 (2016), arXiv:1510.05954 [astro-ph.CO] .
* Hinton _et al._ (2019) S. R. Hinton, T. M. Davis, A. G. Kim, D. Brout, C. B. D’Andrea, R. Kessler, J. Lasker, C. Lidman, E. Macaulay, A. Möller, and et al., The Astrophysical Journal 876, 15 (2019).
* Möller and de Boissière (2019) A. Möller and T. de Boissière, Monthly Notices of the Royal Astronomical Society 491, 4277–4293 (2019).
* Edwards _et al._ (2006) R. T. Edwards, G. B. Hobbs, and R. N. Manchester, Mon. Not. Roy. Astron. Soc. 372, 1549 (2006), arXiv:astro-ph/0607664 .
* Lentati _et al._ (2013) L. Lentati, P. Alexander, M. P. Hobson, F. Feroz, R. van Haasteren, K. J. Lee, and R. M. Shannon, Monthly Notices of the Royal Astronomical Society 437, 3004–3023 (2013).
* Luo _et al._ (2021) J. Luo, S. Ransom, P. Demorest, P. S. Ray, A. Archibald, M. Kerr, R. J. Jennings, M. Bachetti, R. van Haasteren, C. A. Champagne, J. Colen, C. Phillips, J. Zimmerman, K. Stovall, M. T. Lam, and F. A. Jenet, ``Pint: A modern software package for pulsar timing,'' (2021), arXiv:2012.00074 [astro-ph.IM] .
* Lewis and Bridle (2002) A. Lewis and S. Bridle, Phys. Rev. D 66, 103511 (2002), arXiv:astro-ph/0205436 .
* Das and Souradeep (2014) S. Das and T. Souradeep, JCAP 07, 018 (2014), arXiv:1403.1271 [astro-ph.CO] .
* Ishida _et al._ (2015) E. E. O. Ishida, S. D. P. Vitenti, M. Penna-Lima, J. Cisewski, R. S. de Souza, A. M. M. Trindade, E. Cameron, and V. C. Busti (COIN), Astron. Comput. 13, 1 (2015), arXiv:1504.06129 [astro-ph.CO] .
* Handley _et al._ (2015) W. J. Handley, M. P. Hobson, and A. N. Lasenby, Monthly Notices of the Royal Astronomical Society 453, 4385–4399 (2015).
* Abbott _et al._ (2019d) R. Abbott _et al._ (LIGO Scientific, Virgo), (2019d), arXiv:1912.11716 [gr-qc] .
* MacKay (2002) D. J. C. MacKay, _Information Theory, Inference and Learning Algorithms_ (Cambridge University Press, USA, 2002).
* Gelman _et al._ (2004) A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin, _Bayesian Data Analysis_ , 2nd ed. (Chapman and Hall/CRC, 2004).
* Sivia and Skilling (2006) D. S. Sivia and J. Skilling, _Data Analysis - A Bayesian Tutorial_ , 2nd ed., Oxford Science Publications (Oxford University Press, 2006).
* Mattei (2020) P.-A. Mattei, ``A parsimonious tour of bayesian model uncertainty,'' (2020), arXiv:1902.05539 [stat.ME] .
* Yao _et al._ (2019) J. Yao, W. Pan, S. Ghosh, and F. Doshi-Velez, CoRR abs/1906.09686 (2019), arXiv:1906.09686 .
* Kass and Raftery (1995) R. E. Kass and A. E. Raftery, Journal of the American Statistical Association 90, 773 (1995), https://amstat.tandfonline.com/doi/pdf/10.1080/01621459.1995.10476572 .
* Allison and Dunkley (2013) R. Allison and J. Dunkley, Monthly Notices of the Royal Astronomical Society 437, 3918–3928 (2013).
* Tichy (2015) M. C. Tichy, Physical Review A 91 (2015), 10.1103/physreva.91.022316.
* Skilling (2006) J. Skilling, Bayesian Anal. 1, 833 (2006).
* Gelman (2008) A. Gelman, Bayesian Anal. 3, 445 (2008).
* Perego _et al._ (2017) A. Perego, D. Radice, and S. Bernuzzi, Astrophys. J. 850, L37 (2017), arXiv:1711.03982 [astro-ph.HE] .
# MICROSCOPE’s constraint on a short-range fifth force
Joel Bergé (1), Martin Pernot-Borràs (1,2), Jean-Philippe Uzan (3,4), Philippe Brax (5),
Ratana Chhun (1), Gilles Métris (6), Manuel Rodrigues (1), Pierre Touboul (1)

(1) DPHY, ONERA, Université Paris Saclay, F-92322 Châtillon, France
(2) Sorbonne Université, CNRS, Institut d’Astrophysique de Paris, IAP, F-75014 Paris, France
(3) CNRS, Institut d’Astrophysique de Paris, IAP, F-75014 Paris, France
(4) Institut Lagrange de Paris, 98 bis, Bd Arago, 75014 Paris, France
(5) Institut de Physique Théorique, Université Paris-Saclay, CEA, CNRS, F-91191 Gif-sur-Yvette Cedex, France
(6) Université Côte d’Azur, Observatoire de la Côte d’Azur, CNRS, IRD, Géoazur, 250 avenue Albert Einstein, F-06560 Valbonne, France

<EMAIL_ADDRESS>
###### Abstract
The MICROSCOPE experiment was designed to test the weak equivalence principle
in space, by comparing the low-frequency dynamics of cylindrical “free-
falling” test masses controlled by electrostatic forces. We use data taken
during technical sessions aimed at estimating the electrostatic stiffness of
MICROSCOPE’s sensors to constrain a short-range Yukawa deviation from
Newtonian gravity. We take advantage of the fact that in the limit of small
displacements, the gravitational interaction (both Newtonian and Yukawa-like)
between nested cylinders is linear, and thus simply characterised by a
stiffness. By measuring the total stiffness of the forces acting on a test
mass as it moves, and comparing it with the theoretical electrostatic
stiffness (expected to dominate), it is a priori possible to infer constraints
on the Yukawa potential parameters. However, we find that measurement
uncertainties are dominated by the gold wires used to control the electric
charge of the test masses, though their related stiffness is indeed smaller
than the expected electrostatic stiffness. Moreover, we find a non-zero,
unaccounted-for stiffness that depends on the instrument’s electric
configuration, hinting at the presence of patch-field effects. Combined with
significant uncertainties in the electrostatic model, these effects allow only
poor constraints on the Yukawa potential. This is not surprising, as MICROSCOPE was
not designed for this measurement, but this analysis is the first step to new
experimental searches for non-Newtonian gravity in space.
Submitted to: Class. Quantum Grav.

July 2020
Keywords: Experimental Gravitation, Modified Gravity, Yukawa potential,
Electrostatic accelerometer
## 1 Introduction
A hundred years after its invention, Einstein’s theory of General Relativity
(GR) still passes all experimental tests [1], from early tests (the Mercury
perihelion puzzle and the measurement of the gravitational deflection of
stars’ light passing near the Sun by Eddington) to current tests
(gravitational lensing [2, 3], gravitational redshift [4, 5], and the direct
detection of gravitational waves [6]). However, in order to withstand (not so recent)
astrophysical and cosmological observations, GR must be supplemented by dark
matter and dark energy. The former explains the flat rotation curve of
galaxies and their dynamics in clusters [7, 8], while the latter explains the
acceleration of the cosmic expansion [9, 10]. Whether our theory of
gravitation must be revised or the content of our Universe better understood
is still an open discussion [11, 12]. In this article, we adopt the former
possibility.
Theories beyond the standard model propose the existence of new fields and
particles. For instance, string-inspired theories introduce a spin-0 dilaton-
like particle (see e.g. Refs. [13, 14]), while scalar-tensor models modify
GR’s equations via the introduction of a new scalar field (see e.g. Refs. [15,
16, 11]). Although a new very light scalar field should entail the appearance
of a new long-range force incompatible with current Solar System tests, its
existence can be made compatible with experimental constraints by virtue of a
screening mechanism that makes the field’s mass environment-dependent, thereby
hiding it from local experimental tests [13, 17, 18, 19, 20, 21, 22, 23, 24].
Those models can nevertheless have measurable effects, such as an apparent
violation of the equivalence principle (see e.g. Refs. [20, 25]) or a
variation of fundamental constants [26, 27].
Looking for short-range deviations from Newtonian gravity is essential to test
low-energy limits of high-energy alternative theories (such as string theory
or extra dimensions) and is the goal of several experimental efforts (see
Refs. [28, 29, 30] for reviews and references therein, and Refs. [31, 32] for
recent results). While most of them are highly optimised to look for specific
minute signals, we aim, in this article, to search for a short-range deviation
from Newtonian gravity as a byproduct of MICROSCOPE data.
The MICROSCOPE space experiment tested the weak equivalence principle (WEP) to
a record accuracy [33, 34] via the comparison of the acceleration of two test
masses freely falling while orbiting the Earth. If the WEP is violated, a
signal is expected at a frequency defined as the sum of the satellite’s
orbital and spinning frequencies. Since MICROSCOPE orbits the Earth at a 700
km altitude, the experiment is then mostly sensitive to long-ranged (more than
a few hundred kilometers) modifications of gravitation. Its first results thus
allowed us to set new limits on beyond-GR models involving long-range
deviations from Newtonian gravity parametrised by a Yukawa potential, a light
dilaton [35] and a U-boson [36, 37]. Updates of those works are under way
following the final MICROSCOPE results [38, 39].
In this article, we use MICROSCOPE sessions dedicated to the in-flight
characterisation of its instrument to look for short-range deviations of
Newtonian gravity. Although in WEP-test configuration, the MICROSCOPE test
masses are kept almost motionless by the electrostatic measurement apparatus
and are (by design) barely affected by the satellite’s and instrument’s self-
gravity, this is not the case in some technical sessions where they are set in
a sinusoidal motion. In this situation, they are sensitive to other forces
such as the instrument’s electrostatic stiffness and the gravitational force
from other parts of the instrument. Given the geometry of the MICROSCOPE
instrument, we can expect to be sensitive to deviations from Newtonian gravity
on millimetre to decimetre scales once all other environmental
interactions are accounted for (because gravity tests are degenerate with the
environment, as shown e.g. in Ref. [40]). Nevertheless, we must note that
MICROSCOPE was not designed for this experiment, and we cannot expect to
obtain competitive results. This paper is a prospective analysis of this new
experimental concept: its primary intent is to present the idea, to show that it
must be substantially improved before it can test short-range gravity in space,
and hence to discuss its feasibility and limits.
The layout of this paper is as follows. We briefly introduce the Yukawa
deviation from Newtonian gravity in Sect. 2. Section 3 gives a primer about
MICROSCOPE’s experimental concept; we present the equations of the dynamics of
the test masses that are used in the remainder of the paper to measure an
overall stiffness and infer constraints on a Yukawa interaction. Section 4
presents the specific experiment used in this work to measure the stiffness,
and provides an exhaustive account of the forces at play, some of which are
estimated from the data in Sect. 5. We present the data analysis procedure and
the measurement of relevant parameters in Sect. 5. We then provide new (albeit
not competitive) constraints in Sect. 6, before concluding in Sect. 7.
Appendices give a pedagogical derivation of the electrostatic force at play
along MICROSCOPE’s cylinders’ radial axes, and an analytical expression for
the gravitational (both Newtonian and Yukawa) interaction between two
cylinders.
## 2 Yukawa gravity
We parametrise a deviation from Newtonian gravity with a Yukawa potential,
which is simply added to the Newtonian potential. The total gravitational
potential created by a point-mass of mass $M$ at distance $r$ is then
$V(r)=-\frac{GM}{r}\left[1+\alpha\exp\left(-\frac{r}{\lambda}\right)\right],$ (1)
where $G$ is Newton’s gravitational constant, $\alpha$ is the strength of the
Yukawa deviation compared to Newtonian gravity and $\lambda$ is the range of
the corresponding fifth force.
Despite its simplicity, the Yukawa parametrisation is useful as it describes
the fifth force created by a massive scalar field in the Newtonian regime (see
e.g. the Supplemental material of Ref. [35] and references therein). The range
$\lambda$ corresponds to the Compton wavelength of the scalar field, and
$\alpha$ is linked to its scalar charge. Phenomenologically, this charge can
depend on the composition of the interacting bodies in various ways, e.g.
through combinations of their baryon and lepton numbers [28]. In this paper,
we consider composition-independent Yukawa interactions only (thereby, we
assume a universal scalar charge), and we do not relate to any
phenomenological subatomic model, but instead consider only $\alpha$ as the
parameter to constrain.
Many experiments have already provided tight constraints on the strength and
range of such a Yukawa interaction, from sub-millimeter to Solar System scales (e.g. Refs. [29, 28] and
references therein, and Refs. [41, 42, 43, 44, 35] for more recent works). In
this article, we are concerned with ranges between $\lambda\approx
10^{-3}~{}{\rm m}$ and $\lambda\approx 10^{-1}~{}{\rm m}$, corresponding to
the scale of MICROSCOPE’s instrument. The best constraints on the strength of
a Yukawa potential for such ranges are $|\alpha|\leqslant 10^{-3}$ [45, 46,
31].
## 3 MICROSCOPE experiment concept and test masses dynamics
MICROSCOPE was designed as a test of the universality of free-fall, relying on
a simple recipe: drop two test bodies and compare their fall. However, instead
of letting two test bodies freely orbit the Earth and monitoring their
relative drift, MICROSCOPE uses electrostatic forces to maintain two test
masses centered with respect to each other. This is done with a differential
ultrasensitive electrostatic accelerometer, consisting of two coaxial and
concentric cylinders made of different materials. The difference of electric
potentials applied to keep the cylinders in equilibrium is a direct measure of
the difference in their motion.
This section provides a primer about the MICROSCOPE experimental concept. We
start with a description of the capacitive detection and electrostatic control
principle, driving us to a short presentation of the instrument. We then
present a simplified equation of the dynamics of a test mass. Details can be
found in Refs. [47, 48, 49].
### 3.1 Electrostatic measurement principle
The electrostatic control of the test masses relies on two nested control
loops. The first one is inside the payload: each test mass is placed between
pairs of electrodes and its motion with respect to its cage is monitored by
capacitive sensors. It can be kept motionless by applying the electrostatic
force required to compensate all other forces, such that the knowledge of the
applied electrostatic potential provides a measurement of the acceleration
which would affect the test mass with respect to the satellite in the absence
of the control loop. Note that even if there is no net motion with respect to
the satellite, it is common and convenient to call the electrostatic force
divided by the mass the electrostatic acceleration; this definition is used
throughout the paper. The second control loop is included in the satellite’s Drag-
Free and Attitude Control System (DFACS), which aims to counteract external
disturbances via the action of cold gas thrusters. This system also ensures a
very accurate control of the pointing and of the attitude of the satellite
from the measurements of angular position delivered by the stellar sensors and
of the angular acceleration delivered by the instrument itself.
Details about the former control loop can be found in Ref. [48]. Nevertheless,
to inform the discussion of the test masses’ dynamics below, we provide
some elements about the (central to the measurement) detection and action
processes. The former allows for the measurement of the motion of the test
mass, while the latter allows for its electrostatic control and the direct
measurement of its acceleration.
A test mass surrounded by two opposite electrodes can be considered as a
double capacitor with capacitances $C_{1}$ and $C_{2}$. The motion of the test
mass induces a variation in the capacitances; the detector senses it and
outputs a related voltage $V_{\rm det}=2V_{d}(C_{1}-C_{2})/C_{\rm eq}$, where
$V_{d}$ is the potential of the test mass and $C_{\rm eq}$ is the capacitance
of the capacitor formed by the test mass and the electrodes when the test mass
is at the centre of the cage. The capacitances $C_{i}$ depend on the geometry
of the sensor, and therefore their forms differ along the longitudinal and
radial axes of the instrument; nevertheless, it can be shown that at first
order, $V_{\rm det}$ is proportional to the displacement $\delta$ of the test
mass about the center of the cage along all axes [48].
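The first-order proportionality between $V_{\rm det}$ and the displacement $\delta$ can be illustrated with an idealised parallel-plate model of the double capacitor; the gap, area and voltage below are illustrative assumptions, not the flight values:

```python
EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def v_det(delta, gap=600e-6, area=1e-2, v_d=5.0):
    """Detector output V_det = 2 V_d (C1 - C2)/C_eq for a test mass
    displaced by delta between two electrodes (parallel-plate model;
    gap, area and v_d are illustrative assumptions).
    C_eq is the capacitance with the mass centred (delta = 0)."""
    c1 = EPS0 * area / (gap - delta)
    c2 = EPS0 * area / (gap + delta)
    c_eq = EPS0 * area / gap
    return 2.0 * v_d * (c1 - c2) / c_eq

# To first order, V_det ~ 4 V_d delta / gap: doubling a small
# displacement doubles the output voltage.
```

In this model the deviation from linearity is of order $(\delta/e)^{2}$, negligible for nanometre displacements in a 600 $\mu$m gap.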
The control loop digitises the detector output voltage $V_{\rm det}$ and
computes the actuation voltage to apply to the electrodes in order to
compensate for the displacement of the test mass, and recentre it in the cage.
The (restoring) electrostatic force applied by an electrode $i$ is then
$\vv{F}_{{\rm el},i}=\frac{1}{2}(V_{i}-V_{p})^{2}\vv\nabla C,$ (2)
where $\vv\nabla C$ is the spatial gradient of the capacitance. The
(polarisation) potential of the test-mass $V_{p}$ is maintained constant and
the potential $V_{i}$ of the electrode is tuned by the servo-control loop.
This “action” takes the general form
$F_{{\rm el},i}\approx-m\left\{G_{\rm act}V_{e}+\omega_{p}^{2}\left[1+\left(\frac{V_{e}}{V_{p}}\right)^{2}\right]\delta\right\},$
(3)
where the sensitivity factor $G_{\rm act}$ and the stiffness coefficient
$\omega_{p}^{2}$ depend on the geometry of the sensor, $V_{e}$ is the voltage
output from the control loop and applied to the electrodes, and $m$ is the
mass of the test mass.
If $G_{\rm act}$ is known well enough, the acceleration of the test-mass can
be measured through the voltage $V_{e}$ required to apply the restoring force.
This measurement is perturbed by the electrostatic stiffness, which introduces
a bias if the test-mass is not servo-controlled to the equilibrium point.
Nevertheless, the asymmetry in the design of the electrostatic configuration
and the displacement are sufficiently small that this bias can be ignored during
nominal WEP test operations. Instead, in this paper, we use measurement sessions where the
displacement $\delta$ is not negligible, allowing for the estimation of the
electrostatic stiffness.
### 3.2 Instrumental apparatus
The core of MICROSCOPE’s instrument consists of two differential
accelerometers (or Sensor Units – SU), the test masses of which are co-axial
cylinders kept in equilibrium with electrostatic actuation [48]. The test
masses’ materials were chosen carefully so as to maximize a potential
violation of the WEP from a light dilaton [13, 50, 51] and to optimise their
industrial machining: the SUEP (Equivalence Principle test Sensor Unit) test
masses are made of alloys of platinum-rhodium (PtRh10 – 90% Pt, 10% Rh) and
titanium-aluminium-vanadium (TA6V – 90% Ti, 6% Al, 4% V), while the SUREF
(Reference Sensor Unit) test masses are made of the same PtRh10 alloy. For
each sensor, we call “IS1” the internal test mass and “IS2” the external one.
For instance, the internal mass of SUREF is named IS1-SUREF, and the external
mass of SUEP is called IS2-SUEP.
As shown above, the test masses of each SU are controlled electrostatically
with no mechanical contact except for a thin 7 $\mu$m-diameter gold wire used
to fix the masses’ electrical potential to the electronics reference voltage. Two
Front-End Electronics Unit (FEEU) boxes (one per SU) include the capacitive
sensing of masses, the reference voltage sources and the analog electronics to
generate the electrical voltages applied to the electrodes; an Interface
Control Unit (ICU) includes the digital electronics associated with the servo-
loop digital control laws, as well as the interfaces to the satellite’s data
bus.
Fig. 1 shows a cutaway view of one SU, with its two test masses, their
surrounding electrodes-bearing cylinders, cylindrical invar shield, base
plate, upper clamp and vacuum system.
Figure 1: Cutaway view of a MICROSCOPE sensor, with its two test masses, their
surrounding electrodes-bearing cylinders, cylindrical invar shield, base
plate, upper clamp and vacuum system. The volume of a sensor is $348\times
180\times 180~{}{\rm mm}^{3}$; the test masses have length ranging from 43 mm
to 80 mm and outer radii ranging from 39 mm to 69 mm, and are separated from
the electrode-bearing cylinders by 600 $\mu$m gaps [48]. The reference system
is shown on the left of the figure. Figure from Ref. [34, 48].
### 3.3 MICROSCOPE’s test mass measured acceleration
The dynamics of MICROSCOPE’s test masses, as required for testing the WEP, is
discussed at length in Ref. [47]. In particular, Ref. [47] focusses on the
differential motion of two test masses and discriminates between inertial and
gravitational masses. In this section, we summarise the equations pertaining
to the dynamics of a given test mass, and since this article is not concerned
with the WEP, we identify the inertial and gravitational masses (hence some
simplifications with respect to Ref. [47]).
Up to electrostatic parasitic forces (see below), the electrostatic force (3)
corresponds to a “control” acceleration balancing the various contributions
to the dynamics of the test mass,
$\vec{\Gamma}_{\rm cont}=\frac{\vec{F}_{\rm
el}}{m}=\vv{\Delta\Gamma}_{\Earth}+\vec{\Gamma}_{\rm kin}-\frac{\vec{F}_{\rm
loc}}{m}-\frac{\vec{F}_{\rm pa}}{m}+\frac{\vec{F}_{\rm
ext}}{M}+\frac{\vec{F}_{\rm th}}{M}$ (4)
where $m$ and $M$ are the masses of the test mass and of the satellite,
$\vec{F}_{\rm ext}$ are non-gravitational forces affecting the satellite
(atmospheric drag, Solar radiation pressure), $\vec{F}_{\rm th}$ are forces
applied by the thrusters (to compensate for external forces) and $\vec{F}_{\rm
loc}$ and $\vec{F}_{\rm pa}$ are local forces (inside the sensor) that we can
consider individually (e.g. electrostatic stiffness, gold wire stiffness,
self-gravity) or as collective contributions (e.g. electrostatic parasitic
forces), respectively. We denote as $\vv{\Delta\Gamma}_{\Earth}$ the
difference between the Earth gravitational acceleration at the center of the
satellite and that at the center of the test mass. We assume that the test
masses are homogeneous. Moreover, since we are concerned with short-range
Yukawa deviations only, we assume that the Yukawa contribution to the Earth’s
gravity acceleration acting on the test-masses is negligible. Finally, the
second term of the r.h.s. of Eq. (4) contains the contribution from the
satellite’s inertia and from the motion of the test-mass,
$\vv\Gamma_{\rm kin}=[{\rm In}]\vv{P}+2[\Omega]\dot{\vv{P}}+\ddot{\vv{P}},$
(5)
where $\vv{P}$ is the position of the test mass with respect to the center of
the satellite, $[{\rm In}]\equiv[\dot{\Omega}]+[\Omega][\Omega]$ is the
gradient of inertia matrix of the satellite and $[\Omega]$ its angular
velocity.
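The kinematic term (5) can be evaluated directly once the angular velocity and acceleration of the satellite are expressed as skew-symmetric matrices; the rotation rate and offset below are arbitrary illustrative values:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w] such that [w] @ v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def gamma_kin(omega, omega_dot, P, P_dot, P_ddot):
    """Kinematic acceleration of Eq. (5):
    [In] P + 2 [Omega] P' + P'', with [In] = [Omega'] + [Omega][Omega]."""
    Om = skew(omega)
    In = skew(omega_dot) + Om @ Om
    return In @ P + 2.0 * Om @ P_dot + P_ddot

# Arbitrary illustrative state: slow rotation about z, fixed x offset,
# no relative motion of the test mass.
omega = np.array([0.0, 0.0, 1e-3])   # rad/s
P = np.array([0.01, 0.0, 0.0])       # m
g = gamma_kin(omega, np.zeros(3), P, np.zeros(3), np.zeros(3))
# With a constant rotation and a static offset, only the centripetal
# term omega x (omega x P) survives.
```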
Noting that the applied electrostatic force of Eq. (4) is the sum of the
measured “action” force (3) over all electrodes and of parasitic
electrostatic forces,
$\vec{F}_{\rm el}=\vec{F}_{\rm el,meas}+\vec{F}_{\rm elec,par},$ (6)
we show in A that the measured acceleration of a test mass, expressed in the
instrument frame, is
$\vv\Gamma_{\rm
meas|instr}=\vv{B}_{0}+\vv{\Delta\Gamma}_{\Earth{\rm|sat}}+\vv\Gamma_{\rm
kin|sat}-\frac{\vec{F}_{\rm loc|instr}}{m}+\vv{n},$ (7)
where $\vv{n}$ is the measurement noise and $\vv{B}_{0}$ is an overall bias
defined from the local parasitic forces and measurement bias.
## 4 Stiffness: experimental measurement and contributors
A short-range Yukawa fifth force may hide in the local component $\vec{F}_{\rm
loc}$ of the force. As described in the remainder of the paper, our analysis
is based on Eq. (7) and consists in:
* measuring the overall stiffness using dedicated sessions;
* estimating (when possible) or modelling all possible contributors to $\vec{F}_{\rm loc}$, then subtracting them (except for a Yukawa interaction) from the measured overall stiffness;
* extracting constraints on a Yukawa interaction from the residuals.
In this section, we first describe the experimental approach to measure the
overall stiffness (subsection 4.1), before listing (subsection 4.2) and
discussing one by one the contributions to $\vec{F}_{\rm loc}$, including the
Yukawa interaction, in the remaining subsections. The second item of our
programme is completed in this section (contributions that cannot be estimated
and are thus modelled –electrostatic stiffness, thermal effects and Newtonian
gravitational interaction) and in Sect. 5, where contributions that can be
extracted from the data (gold wire and Yukawa interaction) are estimated. The
last item is the subject of Sect. 6.
### 4.1 Electrostatic stiffness measurement sessions
The stiffness is the derivative of the force with respect to position.
Measurement sessions were dedicated to measuring MICROSCOPE’s instrument
stiffness [52], which is expected to be dominated by the electrostatic
stiffness (see Sect. 4.3 and Ref. [48]). The principle of the measurement is
to impart a $f=1/300$ Hz sinusoidal excitation of amplitude $x_{0}=5~{}\mu$m
to the test mass through the electronics control loop. The position of the
test mass is thus forced to be
$x(t)=x_{0}\sin(\omega t+\psi),$ (8)
where $x$ stands for any axis ($x,y,z$) of the instrument (along which we aim
to estimate the electrostatic stiffness), $\omega=2\pi f$ and $\psi$ is a given
phase. The acceleration (7) measured while the test mass is forced in
position is
$\vv\Gamma_{\rm meas}=\vv\Gamma_{\rm
exc}+\vv{B}_{0}+\vv{\Delta\Gamma}_{\Earth}+\vv\Gamma_{\rm
kin}-\frac{\vec{F}_{\rm loc}}{m}+\vv{n},$ (9)
where we dropped the subscripts “$|$instr” and “$|$sat” for simplicity, and
where $\vv\Gamma_{\rm exc}~{}=~{}x_{0}~{}\omega^{2}~{}\sin(\omega t+\psi)$ is
the excitation acceleration imparted to the test mass from the electronics
control loop.
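For the session parameters quoted above ($f=1/300$ Hz, $x_{0}=5~\mu$m), the peak excitation acceleration $x_{0}\omega^{2}$ can be checked in a few lines:

```python
import math

f = 1.0 / 300.0          # excitation frequency [Hz]
x0 = 5e-6                # excitation amplitude [m]
omega = 2.0 * math.pi * f

# Peak excitation acceleration x0 * omega^2, about 2.2e-9 m/s^2
gamma_exc = x0 * omega**2
```

This $\approx 2.2\times 10^{-9}~{\rm ms}^{-2}$ amplitude is consistent with $\vv\Gamma_{\rm exc}$ being a small term in the measured acceleration.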
Two measurements lasting 1750 s were performed for each axis of each test mass
(one measurement per available electrical configuration; see subsection 4.3 and
B). Figure 2 shows the process to measure the stiffness of SUREF’s internal
mass along its $Y$ axis: the acceleration measured by the sensor (red) is
compared to its input position (blue).
Figure 2: Experimental process to measure a MICROSCOPE’s sensor’s stiffness
(here, the stiffness of SUREF’s internal mass is estimated along its $Y$
axis): the test mass is excited in position with a known amplitude of 5
$\mu$m, and we measure its response in acceleration (left panel); the
acceleration is mostly due to the electrostatic stiffness. Right panel: test
mass’ acceleration as a function of its position.
In the remainder of this paper, we subtract the Earth gravity modeled as
described in Refs. [34, 53], as well as the kinematic acceleration inferred
from satellite’s attitude measurements, from the measured acceleration. We
thus deal with the acceleration
$\vv\Gamma\equiv\vv\Gamma_{\rm meas}-\vv{\Delta\Gamma}_{\Earth}-\vv\Gamma_{\rm kin}$ (10)

$=\vv\Gamma_{\rm exc}+\vv{B}_{0}-\frac{\vec{F}_{\rm loc}}{m}+\vv{n}.$ (11)
This acceleration is dominated by the local one $\vec{F}_{\rm loc}/m$, while
the excitation acceleration $\vv\Gamma_{\rm exc}$ is negligible. In Ref. [52],
it was assumed that only the electrostatic stiffness $k_{\epsilon}$ played a
significant role, such that $\vec{F}_{\rm loc}=-k_{\epsilon}\vec{x}$, where
$\vec{x}$ is the displacement of the test mass with respect to its equilibrium
position, and Eq. (11) became (ignoring the quadratic factor)
$\vv\Gamma=\vv\Gamma_{\rm
exc}+\vv{B}_{0}+\frac{k_{\epsilon}}{m}\vec{x}+\vv{n}.$ (12)
Under these assumptions, the electrostatic stiffness is simply the slope of
the $\vv\Gamma-\vec{x}$ relation (up to the factor $m$), as shown in the right
panel of Fig. 2. Chhun et al. [52] used this simple technique to estimate the
electrostatic stiffness on the three axes of each MICROSCOPE’s test mass. They
found significant disagreements with expectations from the theoretical model
summarised in Sect. 4.3, which they explained by model inaccuracies and by the
contribution from the gold wire used to control the charge of the test masses
(Sect. 4.4). We discuss their results in C.
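The slope-fitting procedure of Ref. [52] amounts to a linear regression of acceleration against position. A minimal sketch on synthetic data follows; the stiffness-to-mass ratio, bias, noise level and 4 Hz sampling rate are illustrative assumptions, not measured MICROSCOPE values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Forced position of the test mass, Eq. (8)
f, x0 = 1.0 / 300.0, 5e-6
t = np.arange(0.0, 1750.0, 0.25)   # 1750 s session, 4 Hz sampling (illustrative)
x = x0 * np.sin(2.0 * np.pi * f * t)

# Synthetic measured acceleration, Eq. (12): bias + stiffness term + noise.
# k_eps/m, the bias and the noise level are illustrative assumptions.
k_over_m = 1e-2                    # stiffness over mass [s^-2]
b0 = 3e-8                          # bias [m/s^2]
gamma = b0 + k_over_m * x + 1e-9 * rng.standard_normal(t.size)

# The stiffness (over m) is the slope of the Gamma-x relation.
slope, intercept = np.polyfit(x, gamma, 1)
```

With these settings the fitted slope recovers $k_{\epsilon}/m$ to well below the percent level, since the noise averages down over the session.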
### 4.2 Local forces budget
In what follows, we go beyond the simple assumptions of Ref. [52] and explore
how the same measurement could shed light on short-range non-Newtonian
gravity.
To that goal, we need to take into account all the different local forces
applied to the test mass as it moves inside its electrode cage during sessions
dedicated to the measurement of the stiffness. Our central equation is Eq.
(11), in which the local force is the sum of the following contributors
discussed in this section: electrostatic force $\vv{F}_{\rm el}$ (Sect. 4.3);
force due to the gold wire $\vv{F}_{\rm w}$ (Sect. 4.4); radiation pressure
$\vv{F}_{\rm p}$ (Sect. 4.5); radiometric effect $\vv{F}_{\rm r}$ (Sect. 4.6);
Newtonian $\vv{F}_{\rm N}$ and non-Newtonian (Yukawa) $\vv{F}_{\rm Y}$ gravity
(Sect. 4.8), such that $\vv{F}_{\rm loc}=\vv{F}_{\rm el}+\vv{F}_{\rm
w}+\vv{F}_{\rm p}+\vv{F}_{\rm r}+\vv{F}_{\rm N}+\vv{F}_{\rm Y}$.
### 4.3 Electrostatic force
The electrostatic force used to control the test mass is discussed at length
in Ref. [48]. Here, we shall only state that it consists of a bias
$\vv{b}_{\epsilon}$ and a stiffness $k_{\epsilon}$,
$\vv{F}_{\rm el}=\vv{b}_{\epsilon}-k_{\epsilon}\vv{x}.$ (13)
Those factors depend on the geometry of the test mass and of the electrodes,
and on the electric configuration (voltages applied to the different parts of
the sensor). In particular, the electrostatic stiffness along the $X$-axis is
expected to be zero for all sensors. For completeness, and since this paper
particularly focuses on the stiffness, we provide below the electrostatic
force imparted by the full set of electrodes on the radial axes when the test
mass moves along the $Y$-axis [48]:
$F_{\rm el}(y)\approx-\frac{4\epsilon_{0}S_{y}}{e_{i}^{2}}\frac{\sin(\alpha_{y}/2)}{\alpha_{y}/2}(V^{\prime}_{p_{y}}-V_{p})v_{y}\\
+\frac{2\epsilon_{0}S}{e_{i}^{3}}\left(1+\frac{\sin\alpha_{y}}{\alpha_{y}}\right)[(V^{\prime}_{p_{y}}-V_{p})^{2}+V_{d}^{2}]y\\
+\frac{2\epsilon_{0}S}{e_{i}^{3}}\left(1-\frac{\sin\alpha_{z}}{\alpha_{z}}\right)[(V^{\prime}_{p_{z}}-V_{p})^{2}+V_{d}^{2}]y\\
+\frac{2\pi\epsilon_{0}L_{x}R_{x}}{e_{e}^{3}}[(V^{\prime}_{p_{x}}-V_{p})^{2}+V_{d}^{2}]y\\
+\frac{\pi\epsilon_{0}R_{\phi}L_{\phi}}{e_{e}^{3}}[(V^{\prime}_{p_{\phi}}-V_{p})^{2}+V_{d}^{2}]y,$
(14)
where $\epsilon_{0}$ is the vacuum permittivity, $e_{i}$ ($e_{e}$) is the gap
between the inner (outer) electrode cylinder and the test-mass, and where we
assumed that all control voltages listed in Ref. [48] are small compared to
the $V_{p}$ and $V_{d}$ voltages. Those two voltages describe the electric
configuration. Two configurations are available: high-resolution mode (HRM)
and full-range mode (FRM). They are detailed in Ref. [48] and summarised in B.
The first term of the r.h.s. of Eq. (14) defines the gain of the detector (the
force being proportional to the control voltage $v_{y}$); the other terms
define the stiffness created by the $Y$, $Z$, $X$ and $\phi$ electrodes. In
this equation, $S$ is the surface of the $Y$ and $Z$ electrodes, $R_{x}$ and
$R_{\phi}$ are the inner radius of the $X$ and $\phi$ electrodes, and $L_{x}$
and $L_{\phi}$ are their length. The angles $\alpha_{y}$ and $\alpha_{z}$ are
defined as the angle between the displacement of the test mass and the $Y$ and
$Z$ axes, respectively. D proves the form of the stiffness created by the $Y$
electrodes (second term of the r.h.s. of the equation).
We assessed the accuracy of the stiffness terms of the model (14) with finite
elements simulations. We found it to be biased high: finite elements models
provide an electrostatic stiffness 7% to 10% lower than the model (14).
Nevertheless, in the remainder of this paper, instead of relying on finite
elements simulations, we use Eq. (14) corrected by an 8.5% bias. This allows us
to easily propagate metrology and voltage uncertainties in the electrostatic
stiffness model, without the need to run a time-consuming simulation for each
allowed set of parameters. We then add an extra 3% statistical error to those
uncertainties to reflect the uncertainty on the bias of the model. The 7th
column of Table 2 (denoted $k_{\epsilon,{\rm th}}$) lists the electrostatic
stiffness expected for each test mass of MICROSCOPE.
### 4.4 Gold wire
The electric charge on test masses is controlled via a gold wire linking them
to the spacecraft body. The wire can be modelled as a spring acting on the
test mass with the force
$\vv{F_{w}}=-k_{w}[1+{\rm i}\phi(f)]\vv{x}-\lambda_{w}\dot{\vv{x}},$ (15)
where $\lambda_{w}$ describes the viscous damping of the wire, $k_{w}$ is the
wire stiffness and $\phi(f)$ describes the internal damping; note that $\phi$
can depend on the frequency $f$. The wire quality factor is $Q=1/\phi$.
For a sinusoidal motion of the test mass (along the $j$th axis)
$x_{j}(t)=x_{j0}\sin(\omega t+\psi)$, the force exerted by the gold wire is
the sum of an out-of-phase sinusoidal signal [54] and a (velocity-
proportional) quadrature signal
$F_{w,j}(t)=-k_{w,j}x_{j0}\sin(\omega
t+\psi-\phi)+\lambda_{w}x_{j0}\omega\cos(\omega t+\psi).$ (16)
Thermal dissipation in the wire is the origin of the $f^{-1/2}$ low-frequency
noise that limits MICROSCOPE’s test of the WEP [33]. Using the fluctuation-
dissipation theorem, it can be shown that this acceleration noise is [54, 55]
$\Gamma_{n,w}(f)=\frac{1}{m}\sqrt{\frac{4k_{B}T}{2\pi}\frac{k_{w}}{Q(f)}}f^{-1/2}\,\,{\rm
ms}^{-2}/\sqrt{\rm Hz},$ (17)
where $m$ is the mass of the test mass, $T$ is the temperature and $k_{B}$ is
the Boltzmann constant. This allows for an estimation of the $k_{w}/Q$ ratio
from the spectral density of long measurement sessions (see Sect. 5.2).
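Equation (17) is straightforward to evaluate; the wire stiffness, quality factor, mass and temperature below are illustrative assumptions (for a frequency-independent $\phi$), not the values estimated later in the paper:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant [J/K]

def gamma_noise_wire(f, k_w, Q, m, T):
    """Gold-wire thermal acceleration noise of Eq. (17),
    in m s^-2 / sqrt(Hz), for a frequency-independent Q."""
    return (1.0 / m) * math.sqrt(4.0 * K_B * T * k_w / (2.0 * math.pi * Q)) * f**-0.5

# Illustrative values: k_w = 1e-2 N/m, Q = 100, m = 0.3 kg, T = 280 K.
n1 = gamma_noise_wire(1e-3, 1e-2, 100.0, 0.3, 280.0)
n2 = gamma_noise_wire(4e-3, 1e-2, 100.0, 0.3, 280.0)
# The f^-1/2 dependence means quadrupling the frequency halves the noise.
```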
### 4.5 Radiation pressure
The electrode-bearing cylinders, being at temperature $T$, emit thermal
radiation through photons that eventually hit the test mass and transfer their
momentum to it, thus creating a pressure. A gradient of temperature and a
difference of temperature $\Delta T$ between the electrodes surrounding the
test mass will therefore cause a force directed from the hottest to the
coldest regions [56, 57]:
$\vv{F}_{\rm p}=\frac{16}{3c}S\sigma\Delta TT^{3}\vv{e},$ (18)
where $T$ is the average temperature, $c$ the speed of light, $\sigma$ the
Stefan-Boltzmann constant, $S$ the surface of the test mass, and $\vv{e}$ is
the vector directed from the hottest to the coldest region.
The temperature and its gradient did not evolve in time during the measurement
sessions used in this paper. Six temperature probes are positioned on each
sensor in such a way that we can monitor the temperature and get a glimpse of
its gradient [58]; in the worst case, we noted a 0.003 K evolution of the
temperature during the measurement (while its mean is about 280 K), with all
probes affected by the same evolution, entailing an unmeasurably small
variation of the temperature gradients. Therefore, as far as we are
concerned, we can consider the radiation pressure-induced force as a simple
bias. Given the measured temperatures, an order-of-magnitude estimate leads us
to expect the corresponding acceleration to be at a level of $a_{p}\lesssim
10^{-10}~{}{\rm ms}^{-2}$.
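This order of magnitude can be reproduced from Eq. (18); the surface, temperature difference and mass below are illustrative assumptions in the right ballpark for a MICROSCOPE test mass, not the actual flight values:

```python
C = 2.998e8        # speed of light [m/s]
SIGMA = 5.670e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

def radiation_pressure_accel(S, dT, T, m):
    """Acceleration from the thermal radiation pressure force of Eq. (18)."""
    F = (16.0 / (3.0 * C)) * S * SIGMA * dT * T**3
    return F / m

# Illustrative values: S = 0.02 m^2, dT = 0.01 K, T = 280 K, m = 0.4 kg.
a_p = radiation_pressure_accel(0.02, 0.01, 280.0, 0.4)
# a_p comes out around 1e-11 m/s^2, below the 1e-10 level quoted above.
```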
### 4.6 Radiometer effect
Taking its name from Crookes’ radiometer, originally thought to demonstrate
photon pressure, the radiometer effect is actually a residual gas effect
affecting test masses in rarefied atmospheres whose mean free path exceeds the
size of the container. In this case, equilibrium is not reached when the
pressure is uniform, but when the ratios of pressure to square root of
temperature equal one another [56, 57].
This entails a force on the test mass proportional to the temperature
difference $\Delta T$ across its faces,
$\vv{F}_{\rm r}=\frac{1}{2}PS\frac{\Delta T}{T}\vv{e},$ (19)
where $P$ is the pressure in the container, $S$ the surface of the test mass
orthogonal to the temperature gradient, $T$ the average temperature in the
container and, as before, $\vv{e}$ is the vector directed from the hottest to
the coldest regions.
Even a stationary but non-linear temperature profile can cause a position-
dependent radiometric effect and potentially a stiffness. However, the sparse
temperature measurements in MICROSCOPE sensors do not allow us to go beyond
the linear temperature profile hypothesis, thereby limiting our modelling of
the radiometric effect to a constant acceleration. Order-of-magnitude
estimates give a level of acceleration of the same order as the radiation
pressure, $a_{r}\lesssim 10^{-10}~{}{\rm ms}^{-2}$.
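Similarly, Eq. (19) gives an order of magnitude with illustrative values for the residual pressure, surface, temperature difference and mass (none of which are quoted flight values):

```python
def radiometer_accel(P, S, dT, T, m):
    """Acceleration from the radiometer-effect force of Eq. (19)."""
    F = 0.5 * P * S * dT / T
    return F / m

# Illustrative values: P = 1e-5 Pa, S = 2e-3 m^2 (end face of a hollow
# cylinder), dT = 0.01 K, T = 280 K, m = 0.4 kg.
a_r = radiometer_accel(1e-5, 2e-3, 0.01, 280.0, 0.4)
# Order 1e-12 m/s^2, well below the 1e-10 level quoted above.
```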
### 4.7 Other non-gravitational effects
#### 4.7.1 Residual gas drag
The test mass moves in an imperfect vacuum, so that some drag may be expected.
Order-of-magnitude estimates give a related acceleration of $\approx
10^{-23}~{}{\rm ms}^{-2}$, well below our capacity to detect it [48].
#### 4.7.2 Outgassing
Gas molecules are released from the materials of the instrument’s parts (in
particular the electrode-bearing cylinders) and can impact the test mass and
modify the pressure inside the instrument [56]. However, the vacuum system was
designed, and the materials chosen, such that outgassing can be safely ignored
[48].
#### 4.7.3 Lorentz force
Test masses have a non-zero magnetic moment, and can therefore be affected by
Lorentz forces, either from the Earth magnetic field or local magnetic fields.
The former applies a periodic signal at the orbital frequency and therefore
does not affect the stiffness measurements (besides the fact that the Earth
magnetic field is largely suppressed by MICROSCOPE’s instrument magnetic
shield). Local magnetic fields are more difficult to assess. However, noting
that their effect on the test of the WEP is subdominant [58], we ignore them
in this paper.
#### 4.7.4 Contact-potential differences and patch fields
Inhomogeneous distributions of surface potentials create a force between
charged surfaces. MICROSCOPE’s instrument can be affected by such patch
effects, which act as an additional stiffness dependent on the test masses’
voltages, thus on the electric configuration [59]. It goes beyond the scope of
this article to develop a model of patch effects in MICROSCOPE, and we will
not try to quantify them. Note that they may affect MICROSCOPE only in the
stiffness measurement sessions used in this paper, where test masses are set
in motion; in MICROSCOPE’s test of the WEP, test masses are kept motionless,
and thus less sensitive to patch effects (which act as a bias).
#### 4.7.5 Misalignments and geometrical defects
Very small misalignments between MICROSCOPE’s cylinders can be estimated [58].
As they break the cylindrical symmetry, they can introduce additional terms in
the electrostatic stiffness [60]. However, as we show below, the error budget
in stiffness measurement sessions is largely dominated by the gold wire, so
that we can safely ignore them for the purpose of this paper (thereby
justifying our $[\theta_{j}]={\rm Id}$ assumption in A).
### 4.8 Local gravity
The local gravity force applied to a MICROSCOPE test mass is the sum of the
forces between that test mass and the parts making the corresponding sensor
(Fig. 1):
* seven co-axial cylinders: two silica electrode-bearing cylinders surrounding the test mass, the second test mass, two other silica electrode-bearing cylinders surrounding it, and two cylindrical invar shields;
* four solid cylinders: a silica base plate, an invar base plate, an invar upper clamp, and a vacuum system.
The characteristics of those elements can be found in Ref. [48]. As we show
below, the gravity force is dominated by the closest elements, so that we can
safely neglect the contribution from the other sensor and from the satellite
itself.
Here, we provide a model of the local gravity from first principles, in which
we neglect metrology uncertainties [48]. The gravitational interaction between
two bodies centered on $O_{1}$ and $O_{2}$ is
$\overrightarrow{F}=-\int_{V_{1}}{\rm d}V_{1}\int_{V_{2}}{\rm
d}V_{2}\frac{\partial
V(\overrightarrow{r_{1}}-\overrightarrow{r_{2}})}{\partial
r}\overrightarrow{O_{1}O_{2}},$ (20)
where the 3-dimensional integrals are taken over the volume of the two bodies,
and $\vv{r_{i}}=(x_{i},y_{i},z_{i})$ is the coordinate vector of an
infinitesimal volume element of the $i$th body. Denoting by $\rho_{i}$ the
$i$th body’s density, the Newtonian potential between infinitesimal volumes is
$V_{N}(\overrightarrow{r_{1}}-\overrightarrow{r_{2}})=-\frac{G\rho_{1}\rho_{2}{\rm d}V_{1}{\rm d}V_{2}}{|\overrightarrow{r_{1}}-\overrightarrow{r_{2}}|},$ (21)
and the Yukawa potential of strength $\alpha$ and range $\lambda$ between
infinitesimal volumes is
$V_{Y}(\overrightarrow{r_{1}}-\overrightarrow{r_{2}})=-\alpha\frac{G\rho_{1}\rho_{2}}{|\overrightarrow{r_{1}}-\overrightarrow{r_{2}}|}\exp\left(-\frac{|\overrightarrow{r_{1}}-\overrightarrow{r_{2}}|}{\lambda}\right){\rm
d}V_{1}{\rm d}V_{2}.$ (22)
In the present case, as shown in Fig. 1, all contributions are interactions
between cylinders, either hollow (test masses, electrode-bearing cylinders,
shield) or solid (base plate, upper clamp). For simplicity, we also assume that
the vacuum system is a solid cylinder. Computing the gravitational force
applied to the test mass then boils down to computing the interaction between
perfectly aligned cylinders (as we assumed in Sect. 3), and therefore
computing the 6-dimensional integral (20).
E shows that, in the limit of small displacements with which we are concerned
in this article, the 6D integral (20) can be reduced to a 1D integral
depending on the geometry of the pair of cylinders. In this case, the
gravitational force can be Taylor-expanded, and is dominated by a stiffness
term $K_{1}$. The expressions given below apply both to the Newtonian
($\alpha=1$, $\lambda\rightarrow\infty$) and Yukawa forces. They give the
force exerted by any one of MICROSCOPE’s cylinders on a test mass.
#### 4.8.1 Longitudinal force
In the limit of small displacements $\delta$ of the test mass along the
cylinders’ axis, the force is given by
${\mathcal{F}}_{x}(x_{0},\delta)\approx-16\pi^{2}G\rho\rho^{\prime}\alpha\sum_{i}K_{i}(x_{0})\delta^{i},$
(23)
where $x_{0}$ is the distance between the center of the test mass and the
center of the source cylinder along their longitudinal axis ($x_{0}=0$ when
the source is either an electrode-bearing cylinder or the other test mass, but
$|x_{0}|>0$ if the source is one cylinder of the base or the vacuum system –in
which case the source and the test mass are above each other), and where the
$x$ subscript corresponds to MICROSCOPE’s (longitudinal) $X$-axis but is
referred to as $z$ in the more conventional cylindrical coordinate system used
in E. The $K_{i}$ coefficients depend on the geometry of the test mass –
source pair as follows. If $a$ and $b$ are the inner and outer radii of the
cylinder source, $2\ell$ its height and $\rho$ its density; and if
$a^{\prime}$ and $b^{\prime}$ are the inner and outer radii of the test mass,
$2L$ its height and $\rho^{\prime}$ its density, then:
1.
if the test mass is shorter than the source and they are concentric (which is
the case e.g. of the pair made of the internal test mass and any electrode-
bearing cylinder),
$K_{0}(x_{0})=0,$ (24a)

$K_{1}(x_{0})=\int_{0}^{\infty}\frac{W(k;a^{\prime},b^{\prime})W(k;a,b)}{\kappa k}{\rm e}^{-\kappa\ell}\sinh(\kappa L)\,{\rm d}k,$ (24b)

$K_{2}(x_{0})=0,$ (24c)

$K_{3}(x_{0})=\int_{0}^{\infty}\frac{\kappa}{6}\frac{W(k;a^{\prime},b^{\prime})W(k;a,b)}{k}{\rm e}^{-\kappa\ell}\sinh(\kappa L)\,{\rm d}k,$ (24d)
where we introduced the parameter
$\kappa=\sqrt{k^{2}+1/\lambda^{2}}$ (25)
and the function
$W(k;a,b)=bJ_{1}(kb)-aJ_{1}(ka),$ (26)
where $J_{i}$ are Bessel functions of the first kind.
2.
if the test mass is longer than the source and they are concentric (which is
the case of the pair made of the internal test mass as the source and the
external test mass): the force is formally identical to that of the previous
case, with $\ell$ and $L$ switching their roles.
3.
if the test mass and the source are above each other,
$K_{0}(x_{0})=\frac{x_{0}}{|x_{0}|}\int_{0}^{\infty}\frac{W(k;a^{\prime},b^{\prime})W(k;a,b)}{\kappa^{2}k}{\rm e}^{-\kappa|x_{0}|}\sinh(\kappa\ell)\sinh(\kappa L)\,{\rm d}k,$ (27a)
$K_{1}(x_{0})=-\int_{0}^{\infty}\frac{W(k;a^{\prime},b^{\prime})W(k;a,b)}{\kappa k}{\rm e}^{-\kappa|x_{0}|}\sinh(\kappa\ell)\sinh(\kappa L)\,{\rm d}k,$ (27b)
$K_{2}(x_{0})=\frac{x_{0}}{|x_{0}|}\int_{0}^{\infty}\frac{W(k;a^{\prime},b^{\prime})W(k;a,b)}{2k}{\rm e}^{-\kappa|x_{0}|}\sinh(\kappa\ell)\sinh(\kappa L)\,{\rm d}k,$ (27c)
$K_{3}(x_{0})=-\int_{0}^{\infty}\frac{\kappa}{6k}W(k;a^{\prime},b^{\prime})W(k;a,b)\,{\rm e}^{-\kappa|x_{0}|}\sinh(\kappa\ell)\sinh(\kappa L)\,{\rm d}k.$ (27d)
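The $K_{1}$ integral of Eq. (24b), for instance, is straightforward to evaluate numerically. Below is a minimal pure-Python sketch; the cylinder dimensions, Yukawa range, and truncation of the semi-infinite integral are illustrative choices, not MICROSCOPE's actual geometry.

```python
import math

def bessel_j1(x, n=400):
    # J1(x) = (1/pi) * integral_0^pi cos(t - x sin t) dt,
    # evaluated with the composite Simpson rule (n must be even).
    h = math.pi / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        s += w * math.cos(t - x * math.sin(t))
    return s * h / (3.0 * math.pi)

def W(k, a, b):
    # Eq. (26): W(k; a, b) = b J1(kb) - a J1(ka)
    return b * bessel_j1(k * b) - a * bessel_j1(k * a)

def K1_concentric(a, b, ap, bp, ell, L, lam, kmax=800.0, n=1200):
    # Eq. (24b): K1 = int_0^inf W(k;a',b') W(k;a,b) / (kappa k)
    #                  * exp(-kappa ell) sinh(kappa L) dk,
    # with kappa = sqrt(k^2 + 1/lambda^2) (Eq. 25).  For ell > L the
    # integrand decays like exp(-kappa (ell - L)), so truncating at a
    # finite kmax is safe; a midpoint rule is enough for a demo.
    h = kmax / n
    total = 0.0
    for i in range(n):
        k = (i + 0.5) * h
        kappa = math.sqrt(k * k + 1.0 / lam ** 2)
        total += (W(k, ap, bp) * W(k, a, b) / (kappa * k)
                  * math.exp(-kappa * ell) * math.sinh(kappa * L)) * h
    return total

# Illustrative (made-up) geometry: source radii a, b; test-mass radii
# ap, bp; half-heights ell > L; Yukawa range lam = 0.1 m.
k1_val = K1_concentric(a=0.01, b=0.02, ap=0.004, bp=0.008,
                       ell=0.04, L=0.02, lam=0.1)
```

In practice a library Bessel function and adaptive quadrature would replace the hand-rolled quadratures; the sketch only mirrors the structure of Eqs. (24)-(26).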
#### 4.8.2 Radial force
Similarly, at third order in $\delta/a^{\prime}$, where $\delta$ is the
displacement of the test mass along a radial axis ($Y$ or $Z$), the radial
force created by any one of the other cylinders is
${\mathcal{F}}_{r}(x_{0},\delta)\approx-2\pi^{2}G\rho\rho^{\prime}\alpha(K_{1}(x_{0})\delta+K_{3}(x_{0})\delta^{3}),$
(28)
where the $K_{i}$ coefficients depend on the geometry of the test mass –
source pair:
1.
if the test mass is shorter than the source and they are nested (which is the
case e.g. of the pair made of the internal test mass and any electrode-bearing
cylinder),
$K_{1}(x_{0})=4\int_{0}^{\infty}\frac{kW(k;a^{\prime},b^{\prime})W(k;a,b)}{\kappa^{2}}\left[L-\frac{{\rm e}^{-\kappa\ell}}{\kappa}\sinh(\kappa L)\cosh(\kappa|x_{0}|)\right]{\rm d}k,$ (29a)
$K_{3}(x_{0})=-\int_{0}^{\infty}\frac{k^{3}W(k;a^{\prime},b^{\prime})W(k;a,b)}{\kappa^{2}}\left[L-\frac{{\rm e}^{-\kappa\ell}}{\kappa}\sinh(\kappa L)\cosh(\kappa|x_{0}|)\right]{\rm d}k,$ (29b)
with $x_{0}\approx 0$ in this case.
2.
if the test mass is longer than the source and they are nested (which is the
case of the pair made of the internal test mass as the source and the external
test mass): the force is formally identical to that of the previous case, with
$\ell$ and $L$ switching roles.
3.
if the test mass and the source are above each other,
$K_{1}(x_{0})=4\int_{0}^{\infty}\frac{kW(k;a^{\prime},b^{\prime})W(k;a,b)}{\kappa^{2}}\frac{{\rm e}^{-\kappa|x_{0}|}}{\kappa}\sinh(\kappa\ell)\sinh(\kappa L)\,{\rm d}k,$ (30a)
$K_{3}(x_{0})=-\int_{0}^{\infty}\frac{k^{3}W(k;a^{\prime},b^{\prime})W(k;a,b)}{\kappa^{2}}\frac{{\rm e}^{-\kappa|x_{0}|}}{\kappa}\sinh(\kappa\ell)\sinh(\kappa L)\,{\rm d}k.$ (30b)
#### 4.8.3 Total gravitational force
The gravity force applied to a MICROSCOPE test mass is just the sum of the
Newtonian and Yukawa forces created by the aforementioned instrument’s parts,
$\vv{F}_{\rm g}=({\mathcal{F}}_{{\rm N},x}+{\mathcal{F}}_{{\rm Y},x})\vv{e}_{x}+({\mathcal{F}}_{{\rm N},r}+{\mathcal{F}}_{{\rm Y},r})\vv{e}_{r}$ (31)
$\phantom{\vv{F}_{\rm g}}=\sum_{j}({\mathcal{F}}_{{\rm N},x,j}+{\mathcal{F}}_{{\rm Y},x,j})\vv{e}_{x}+\sum_{j}({\mathcal{F}}_{{\rm N},r,j}+{\mathcal{F}}_{{\rm Y},r,j})\vv{e}_{r},$ (32)
where the $r$ subscript stands for the $Y$ and $Z$ axes, and where the forces
created by the $j$th part of the instrument ${\mathcal{F}}_{{\rm N},x,j}$ and
${\mathcal{F}}_{{\rm Y},x,j}$ are given by Eq. (23) and ${\mathcal{F}}_{{\rm
N},r,j}$ and ${\mathcal{F}}_{{\rm Y},r,j}$ by Eq. (28).
As shown in E, a first-order Taylor expansion of Eqs. (23) and (28) is enough
to precisely account for the gravitational interactions in the present
article, where displacements are limited to $5~{}\mu{\rm m}$. This means that
the local gravitation effectively acts as a stiffness on the test masses. We
thus define the Newtonian and Yukawa, radial and longitudinal stiffnesses by
${\mathcal{F}}_{{\rm N},r}=-k_{{\rm N},r}r,$ (33)
${\mathcal{F}}_{{\rm N},x}=-k_{{\rm N},x}x,$ (34)
${\mathcal{F}}_{{\rm Y},r}=-k_{{\rm Y},r}r,$ (35)
${\mathcal{F}}_{{\rm Y},x}=-k_{{\rm Y},x}x,$ (36)
where $x$ and $r$ are the displacement of the test mass along the longitudinal
and any radial axes of the instrument.
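Numerically, such an effective stiffness is just the negative slope of the force at the centred position. A minimal sketch, with made-up coefficients of the Eq. (28) polynomial form, is:

```python
def stiffness(force, h=1e-8):
    # Linearised response F(x) ~ -k x near x = 0, so k = -dF/dx|_0,
    # estimated here by a central finite difference.
    return -(force(h) - force(-h)) / (2.0 * h)

# Toy radial force of the Eq. (28) form with hypothetical coefficients;
# for |d| <= 5e-6 m the cubic term is ~1e-4 of the linear one, which is
# why a first-order expansion suffices at these displacements.
k1, k3 = 2.5e-3, 1.0e4                     # hypothetical N/m and N/m^3
force_r = lambda d: -(k1 * d + k3 * d**3)
k_eff = stiffness(force_r)                 # recovers ~k1
```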
Figure 3: Newtonian (plain bars) and Yukawa (hashed bars) radial stiffnesses exerted by SUEP’s parts (the external test mass –TM–, the four electrode-bearing cylinders –IS1-int, IS1-ext, IS2-int, IS2-ext–, and the shielding
cylinders) on its internal test mass. The Yukawa potential is set such that
$(\alpha,\lambda)=(1,0.01~{}{\rm m})$. The contribution of the base plates,
upper clamp and vacuum system to the Yukawa interaction is too small to appear
on the plot.
##### Newtonian gravity
The plain bars of Fig. 3 show the Newtonian stiffnesses from all cylinders on
SUEP’s internal test mass along its radial axis. The force between nested
cylinders is destabilising (negative stiffness), whereas the force from the
base plates, upper clamp and vacuum system stabilises the test mass, with the
total radial force being destabilising. It can also be shown that the
Newtonian gravitational interaction along the $X$ (longitudinal) axis acts as
a stabilising stiffness.
Finally, it can be seen from the figure that the contribution from the outer
shield is subdominant. Hence, the contributions from the other differential sensor and from the other parts of the satellite are even smaller, and we ignore them.
The next-to-last column of Table 2 lists the Newtonian gravity stiffness of
the four MICROSCOPE test masses along their radial and longitudinal axes.
##### Yukawa gravity
The hashed bars of Fig. 3 show the Yukawa stiffnesses from all cylinders on
SUEP’s internal test mass along its radial axis, for
$(\alpha,\lambda)=(1,0.01~{}{\rm m})$. It can be noted that only co-axial
cylinders contribute, since the base, upper clamp and vacuum system are more
distant than 0.01 m from the test mass. Similarly, the closest cylinders
provide most of the signal. Note that the Yukawa stiffness of the
closest cylinders is larger than their Newtonian stiffness. This difference
comes from the fact that with $\lambda=1~{}{\rm cm}$, only a restricted part
of the cylinders interact, causing an effect more complex than just an
exponential decay proportional to the Newtonian stiffness.
Fig. 4 shows Yukawa gravity’s stiffness as a function of Yukawa’s range
$\lambda$ for all MICROSCOPE test masses, along their radial (left panel) and
longitudinal (right panel) axes. Starting from the smallest $\lambda$ probed (set by the distance between a test mass and its closest cylinder), the
radial stiffness increases steadily as more and more co-axial cylinders are
within reach of $\lambda$ and contribute to the gravity signal. The stiffness
peaks around $\lambda\approx 0.01~{}{\rm m}$, where the base and upper cylinders start to contribute with an opposite-sign stiffness, thereby decreasing it until the Newtonian regime is reached when $\lambda$ becomes
larger than the sensor’s largest scale. The longitudinal stiffness shows a
similar behaviour, though it changes sign while more and more cylinders
contribute to the signal.
Figure 4: Yukawa stiffness (normalised by $\alpha$) for the four MICROSCOPE
test masses, on the radial (left) and longitudinal (right) axes, as a function
of Yukawa’s range.
Comparing Fig. 4 with Table 2, it is clear that the gravity stiffness (and
therefore, signal) is largely subdominant. We put it to the test in Sect. 5.
### 4.9 Summary: measured acceleration
Taking all the forces above into account, the acceleration (11) of a test mass
measured along the $i$th axis during a stiffness characterisation session is
$\Gamma_{i}(t)=b_{\epsilon,i}+a_{p,i}+a_{r,i}+\frac{m\omega^{2}+k_{\epsilon,i}+k_{N,i}}{m}x_{i0}\sin(\omega t+\psi)+\frac{k_{w,i}}{m}x_{i0}\sin(\omega t+\psi-\phi)+\frac{\lambda_{w,i}}{m}\omega x_{i0}\cos(\omega t+\psi)+\frac{k_{Y,i}(\alpha,\lambda)}{m}x_{i0}\sin(\omega t+\psi),$ (37)
where we singled out the Yukawa gravity contribution and made its stiffness’
dependence on $(\alpha,\lambda)$ explicit, since it is this very dependence
that we aim to constrain in the remainder of this article. The first three
terms of the equation comprise the effect of the measurement bias, radiation
pressure and photometric effect (all three acting as a constant bias) and the
fourth term encompasses the in-phase stiffnesses (excitation, electrostatic
and Newtonian gravity). The last three terms give the effect of the gold wire
(internal and viscous damping), and the Yukawa gravity contribution.
## 5 Data analysis
This section presents least-squares estimates of the parameters introduced in
the previous section. We use the acceleration data introduced in subsection
4.1 to estimate two components of the stiffness (that of the gold wire $k_{w}$
and a linear combination $k_{0}$ of the electrostatic and gravitational
stiffness), the quality factor of the gold wire, and a velocity-dependent
coefficient for each axis of each sensor (Eq. 38 below). Then, in subsection
5.3.3, we subtract theoretical models of the electrostatic and Newtonian
gravity stiffness from the estimated $k_{0}$; any residual should come either
from a Yukawa stiffness or from unaccounted for contributors. We perform the
exercise in the two electrical configurations (HRM and FRM) summarised in B.
### 5.1 Measurement equation
The measurement equation (37) could be used in its original form to extract
the unknown parameters from the data and simultaneously constrain the Yukawa
interaction’s parameters. However, since the Yukawa contribution is expected
to be at most of the order of the Newtonian contribution, which is itself
largely less than the electrostatic stiffness, its parameters have a small
constraining power on the data, so we find it more suitable to first estimate an
overall stiffness, from which we can eventually extract the $(\alpha,\lambda)$
parameters.
Moreover, Eq. (37) requires the estimation of two phases. The first one,
$\psi$, is that of the excitation signal and can be estimated a priori by
fitting the position data, then used as a known parameter in the following
analysis. The second, $\phi$, is the phase-offset induced by the gold wire’s
internal damping. Instead of trying to estimate it from the data (which may be
difficult given the 4 Hz sampling of data, when assuming that the quality
factor of the wire is in the range $Q\approx 1-100$, corresponding to a time
offset less than 2 s), we recast Eq. (37) as
$\Gamma(t)=b+\left(\kappa_{0}+\kappa_{w}\cos\phi\right)\sin(\omega
t+\psi)-\left(\kappa_{w}\sin\phi-\kappa_{\lambda}\right)\cos(\omega t+\psi),$
(38)
where $b\equiv b_{\epsilon}+a_{p}+a_{r}$, $\kappa_{0}\equiv
x_{0}(m\omega^{2}+k_{\epsilon}+k_{N}+k_{Y})/m$, $\kappa_{w}\equiv
x_{0}k_{w}/m$ and $\kappa_{\lambda}\equiv x_{0}\lambda_{w}\omega/m$, and we
dropped the $i$ subscript for simplicity.
Five parameters are left for estimation: $b$, $\kappa_{0}$, $\kappa_{w}$,
$\kappa_{\lambda}$ and $\phi=1/Q$. However, fitting Eq. (38) provides only three independent constraints. While estimating $b$ is easy, the other parameters remain degenerate unless we use some prior knowledge. We show in Sect. 5.2 that we can obtain an independent estimate of
the $\kappa_{w}/Q=\kappa_{w}\phi$ combination.
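With $\psi$ known a priori, fitting Eq. (38) reduces to linear least squares on a bias, an in-phase, and a quadrature column. The sketch below uses synthetic 4 Hz data; all amplitudes, frequencies, and the noise level are purely illustrative, not flight values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data following the recast measurement equation (38):
# Gamma(t) = b + A sin(wt + psi) + B cos(wt + psi) + noise, where
# A = kappa_0 + kappa_w cos(phi) and B = kappa_lambda - kappa_w sin(phi).
omega, psi = 2.0 * np.pi * 1e-3, 0.3            # rad/s, rad (illustrative)
t = np.arange(0.0, 20000.0, 0.25)               # 4 Hz sampling, as in the text
b_true, A_true, B_true = 1e-9, 5e-10, -2e-10    # m/s^2 (illustrative)
gamma = (b_true + A_true * np.sin(omega * t + psi)
         + B_true * np.cos(omega * t + psi)
         + rng.normal(0.0, 1e-11, t.size))

# psi being fixed (pre-fitted from position data), the model is linear:
# solve for (b, A, B) by ordinary least squares.
M = np.column_stack([np.ones_like(t),
                     np.sin(omega * t + psi),
                     np.cos(omega * t + psi)])
b_hat, A_hat, B_hat = np.linalg.lstsq(M, gamma, rcond=None)[0]
```

The estimated sine coefficient plays the role of $\hat{a}_{0}$ and the cosine coefficient that of $\hat{a}_{w}$ in the system of Sect. 5.3.1.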
### 5.2 Gold wire’s $k_{w}/Q$ ratio
As shown in Sect. 4.4, fitting the low-frequency part of the spectral density
of the acceleration measured along a given axis can provide an estimate of the
ratio $k_{w}/Q$ for this axis (from Eq. 17) once temperature data are
available (which is the case for all measurement sessions). Performing this
task for the three linear axes, we can get an estimate of the gold wire
stiffness along each axis, and the orientation of the force due to the wire.
This force is presumably collinear with the wire, although the glue clamping
process may complicate it. Denoting by $\varphi$ the angle between the force and
the test mass longitudinal axis ($X$-axis), and $\theta$ the angle between the
$Y$-axis and the projection of the wire on the ($y,z$) plane (Fig. 5), the
three stiffnesses that can be measured are
$k_{w,x}=|k_{w}|\cos\varphi,$ (39)
$k_{w,y}=|k_{w}|\sin\varphi\cos\theta,$ (40)
$k_{w,z}=|k_{w}|\sin\varphi\sin\theta,$ (41)
from which we can recover the modulus of the stiffness and the orientation of
the wire. Fig. 6 shows the fit corresponding to IS1-SUEP’s $X$-axis from the
session used to estimate the WEP in Ref. [33]. Values obtained for the
internal sensor of both SU are given in Table 1. We checked that estimates
from different sessions are consistent. Note that rigorously, since the drag-
free system is controlled by the external sensor, fitting the internal
sensor’s spectral density only provides information about the sum of the
$k_{w}/mQ$ ratios of both sensors (where $m$ is their mass). Nevertheless,
under the assumptions that their masses are similar (which is enough given the
goals of this article), that their wires have similar $k_{w}/Q$ ratios, and
that their spectral densities are uncorrelated, fitting the internal sensor’s
spectral density indeed provides a constraint on each sensor’s wire’s
$k_{w}/Q$ ratio.
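As a cross-check, plugging IS1-SUEP's per-axis values from Table 1 into Eqs. (39)-(41) recovers the modulus and angles quoted there, within the stated error bars:

```python
import math

# IS1-SUEP per-axis k_w/Q estimates from Table 1 (units: 1e-3 N/m)
kx, ky, kz = 1.14, 0.31, 1.26

k_mod = math.sqrt(kx**2 + ky**2 + kz**2)      # |k_w|/Q, from Eqs. (39)-(41)
phi = math.degrees(math.acos(kx / k_mod))     # angle to the X-axis
theta = math.degrees(math.atan2(kz, ky))      # azimuth in the (y, z) plane
# k_mod ~ 1.73, phi ~ 49 deg, theta ~ 76 deg: consistent with the
# Table 1 entries (1.73 +/- 0.10, 48 +/- 4, 70 +/- 25).
```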
Figure 5: Gold wire and test mass geometry.
Figure 6: Typical (square-root) spectral density of the acceleration measured along the $X$-axis of IS1-SUEP. The $f^{-1/2}$ low-frequency part originates from thermal dissipation in the wire (Eq. 17), and the high-frequency $f^{2}$ increase is due to the capacitive detector’s noise [61]. The orange line is the best fit of the low-frequency part.
Table 1: Gold wire stiffness [$10^{-3}~{\rm N\,m^{-1}}$] and orientation of the force [deg] estimated from long measurement sessions’ acceleration noise spectral density.
| $k_{w,x}/Q$ | $k_{w,y}/Q$ | $k_{w,z}/Q$ | $k_{w}/Q$ | $\varphi$ | $\theta$
---|---|---|---|---|---|---
IS1-SUEP | $1.14\pm 0.20$ | $0.31\pm 0.05$ | $1.26\pm 0.20$ | $1.73\pm 0.10$ | $48\pm 4$ | $70\pm 25$
IS1-SUREF | $0.07\pm 0.01$ | $0.22\pm 0.10$ | $0.66\pm 0.06$ | $0.70\pm 0.08$ | $84\pm 1$ | $66\pm 30$
Willemenot & Touboul [55] used a torsion pendulum to characterise a gold wire
similar to those used by MICROSCOPE. Assuming that the wire is deformed
perpendicular to its principal axis (i.e. in flexion), they give a convenient
scaling to quantify the wire’s stiffness
$k_{w}=2.91\times 10^{-5}\left(\frac{r_{w}}{3.75~\mu{\rm m}}\right)^{4}\left(\frac{1.7~{\rm cm}}{l_{w}}\right)^{3}\left(\frac{E}{7.85\times 10^{10}~{\rm N\,m^{-2}}}\right)~{\rm N/m},$ (42)
where $r_{w}$ is the radius of the wire, $l_{w}$ its length and $E$ its Young modulus. Using MICROSCOPE’s gold wires’ characteristics ($r_{w}=3.5~\mu{\rm m}$, $l_{w}=2.5~{\rm cm}$ and $E=7.85\times 10^{10}~{\rm N\,m^{-2}}$), we expect $k_{w}\approx 9\times 10^{-6}~{\rm N/m}$. Combined with a quality
factor $Q\approx 100$ as measured in Ref. [55], this scaling provides
$k_{w}/Q\approx 10^{-7}~{}{\rm N/m}$, in clear contradiction with the
values estimated from flight data (Table 1).
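For reference, evaluating the scaling of Eq. (42) with these parameters is a two-line computation (the numerical prefactor is assumed to carry units of N/m); it reproduces the quoted order of magnitude:

```python
# Eq. (42) with r_w = 3.5 um, l_w = 2.5 cm, E = 7.85e10 N/m^2
# (the E ratio is then exactly 1):
k_w = 2.91e-5 * (3.5 / 3.75)**4 * (1.7 / 2.5)**3 * 1.0   # N/m
k_w_over_Q = k_w / 100.0        # with Q ~ 100 as in Ref. [55]
# k_w comes out of order 1e-5 N/m and k_w/Q of order 1e-7 N/m, well
# below the in-flight k_w/Q estimates of Table 1 (~1e-3 N/m).
```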
Two explanations can be proposed: (i) the wire does not behave as shown in
Ref. [55] or (ii) its quality factor is much lower than expected. In the
former explanation, the wire may work in compression (i.e. it is deformed
along its principal axis), which potentially increases its stiffness. In the
latter, the mounting process (wires being glued to the test masses) may
decrease the overall quality factor; differences between the glue points in
MICROSCOPE and in Ref. [55] may explain a significant difference of quality
factor.
Assuming that the electrostatic model of the instrument is correct and that
the measured stiffness is dominated by the electrostatic stiffness hints at a
low quality factor $Q\approx 1$. Note however that, even if the quality factor
is really that low, MICROSCOPE’s main results (the test of the WEP) depend only on the $k_{w}/Q$ ratio, and are thus unaffected by the current analysis.
### 5.3 Stiffness estimation
We now come back to Eq. (38), with the aim to estimate the model parameters
for the four sensors, starting with radial axes.
#### 5.3.1 Radial axes ($Y$ and $Z$)
The following assumptions allow us to break the degeneracy between the
parameters mentioned in Sect. 5.1:
* •
for a given sensor and a given axis $j$, the gold wire’s ratio $k_{w,j}/Q$ is
independent of the electrical configuration (HRM or FRM), and can be estimated
as shown in Sect. 5.2 for the internal sensors. We further assume that the gold wires’ mounting is sufficiently uniform that the external sensors’ $k_{w,j}/Q$ ratio equals that of the internal sensor [48].
* •
the ratio $k_{w,j}/Q$ varies from one axis to another, but the quality factor
$Q$ is a true constant for a given sensor. In other words, since $k_{w,j}$ and
$Q$ are degenerate, we assume that only the stiffness depends on the
direction.
* •
by cylindrical symmetry, the total stiffness of the radial axis $j$ ($j=y,z$)
$k_{0,j}~{}=~{}m~{}\omega^{2}~{}+~{}k_{\epsilon,j}~{}+~{}k_{N}~{}+~{}k_{Y}$ is
independent of the axis, and depends on the electrical configuration only
through the electrostatic stiffness $k_{\epsilon}$. This assumption is
reasonable given the estimated metrology uncertainties [48].
Denoting by $\hat{\chi}_{y}=k_{w,y}/Q$ and $\hat{\chi}_{z}=k_{w,z}/Q$ the radial
gold wire’s ratios estimated in Table 1, and combining constraints from the
model (38), where we add the subscripts ‘F’ and ‘H’ for measurements in FRM
and HRM modes, we obtain the following system of equations for a given sensor:
$\hat{a}_{0yF}=\kappa_{0rF}+\kappa_{wy}\cos\phi$ (43a)
$\hat{a}_{0yH}=\kappa_{0rH}+\kappa_{wy}\cos\phi$ (43b)
$\hat{a}_{0zF}=\kappa_{0rF}+\kappa_{wz}\cos\phi$ (43c)
$\hat{a}_{0zH}=\kappa_{0rH}+\kappa_{wz}\cos\phi$ (43d)
$\hat{a}_{wyF}=-\kappa_{wy}\sin\phi+\kappa_{\lambda yF}$ (43e)
$\hat{a}_{wyH}=-\kappa_{wy}\sin\phi+\kappa_{\lambda yH}$ (43f)
$\hat{a}_{wzF}=-\kappa_{wz}\sin\phi+\kappa_{\lambda zF}$ (43g)
$\hat{a}_{wzH}=-\kappa_{wz}\sin\phi+\kappa_{\lambda zH}$ (43h)
$\hat{\chi}_{y}=\frac{k_{wy}}{Q}$ (43i)
$\hat{\chi}_{z}=\frac{k_{wz}}{Q},$ (43j)
where we recall that $\phi=1/Q$ and $\kappa_{wj}=x_{0}k_{wj}/m$ (and similarly for $\kappa_{0r}$, with the subscript $r=y,z$, and for $\kappa_{\lambda}$); the $\hat{a}_{0}$ and $\hat{a}_{w}$ coefficients are the estimates of the sine and cosine coefficients of Eq. (38).
On the one hand, Eqs. (43e-43h) trivially give the velocity-dependent terms as functions of the unknown $Q$ and the estimated $\hat{\chi}_{j}$ and $\hat{a}_{wj}$. On the other hand, Eqs. (43a-43d) and (43i-43j) can be combined to give
$2(\hat{\chi}_{y}-\hat{\chi}_{z})\frac{x_{0}}{m}Q\cos\left(\frac{1}{Q}\right)=\hat{a}_{0yF}+\hat{a}_{0yH}-\hat{a}_{0zF}-\hat{a}_{0zH},$
(44)
thus providing the equation
$x\cos\left(\frac{1}{x}\right)-\xi=0$ (45)
of which $Q$ is a root, where $\xi$ is defined through parameters estimated
from Table 1 and fitting Eq. (38) for the sensor’s two radial axes in each
electrical configuration.
Once $Q$ is estimated, Eqs. (43a-43d) readily provide $\kappa_{0rF}$ and
$\kappa_{0rH}$. Actually, they give two estimates of each, which we checked to
be consistent.
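Since $x\cos(1/x)$ is strictly increasing for $x>2/\pi$, the root of Eq. (45) is unique and easy to bracket; the bisection sketch below round-trips an illustrative $Q=1.5$ (close to Table 2's SUEP IS1 value).

```python
import math

def solve_q(xi, lo=0.7, hi=100.0, iters=200):
    # Bisection root of f(x) = x cos(1/x) - xi (Eq. 45).  f is strictly
    # increasing for x > 2/pi, so the root -- the quality factor Q -- is
    # unique; lo and hi must bracket it (f(lo) < 0 < f(hi)).
    f = lambda x: x * math.cos(1.0 / x) - xi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check: build xi from an illustrative Q = 1.5, then recover it.
xi_demo = 1.5 * math.cos(1.0 / 1.5)
q_hat = solve_q(xi_demo)       # recovers ~1.5
```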
#### 5.3.2 Longitudinal axis $X$
Under the same assumptions, it is then straightforward to estimate the
$X$-axis stiffness from Eq. (38) and Table 1, for a given electrical
configuration (that we do not make explicit in the equations below for
simplicity):
$\displaystyle\kappa_{0x}$
$\displaystyle=\hat{a}_{0x}-\frac{x_{0}}{m}\hat{\chi}_{x}Q\cos\left(\frac{1}{Q}\right)$
(46a) $\displaystyle\kappa_{\lambda x}$
$\displaystyle=\hat{a}_{wx}+\frac{x_{0}}{m}\hat{\chi}_{x}Q\sin\left(\frac{1}{Q}\right).$
(46b)
#### 5.3.3 Results
Our results are listed in Table 2 for each sensor, in their two electrical
configurations. The left group’s four columns show the instrumental
parameters: total in-phase stiffness $k_{0}$, gold wire stiffness $k_{w}$ and
quality factor $Q$, and velocity-dependent coefficient $\lambda_{w}$. The next
two columns give the theoretical electrostatic stiffness $k_{\epsilon,{\rm
th}}$ and the Newtonian gravity stiffness. The last column lists the
difference between the theoretical and the estimated electrostatic stiffness
$\Delta k=\hat{k}_{0}-k_{N}-m\omega^{2}-k_{\epsilon,{\rm th}}$. Error bars
give $1\sigma$ uncertainties.
The electrostatic stiffness estimated in the HRM electrical configuration is
consistent with the theoretical one for most sensors and axes, with at most a
$\approx 2\sigma$ discrepancy. However, we note a significant difference in
the FRM configuration (Fig. 7). Being dependent on the electrical
configuration, this discrepancy hints at the existence of an electric
potential-dependent additional stiffness completely degenerate with the
electrostatic one. Patch effects could be at its origin: the discrepancy being
significant for higher voltages is indeed consistent with the voltage-
dependence of patch effects’ stiffness. Disentangling this puzzle would
require modeling patch effects in MICROSCOPE’s sensors. As this goes far
beyond the scope of this paper, and since our experiment is not competitive
with other short-ranged forces searches (as we discuss below), we let the
question of this discrepancy open. In the remainder of this paper, we thence
use HRM measurements only.
Figure 7: Difference between theoretical electrostatic stiffness and measured
total in-phase stiffnesses corrected for the excitation and Newtonian gravity
stiffnesses, $\Delta k=\hat{k}_{0}-k_{N}-m\omega^{2}-k_{\epsilon,{\rm th}}$,
for all axes (longitudinal and radial) of each sensor, in the HRM (diamonds)
and FRM (squares) electric configurations.
The gold wire’s quality factor is lower than could be expected from Ref. [55].
Nevertheless, $Q$ being close to 1 is consistent with our discussion in Sect.
5.2. Correspondingly, the gold wires’ stiffness is small and negligible
compared to the electrostatic stiffness of the radial axes, as was assumed by
Ref. [52]; however, those stiffnesses remain significantly degenerate in the
longitudinal axis, giving a total stiffness similar to that estimated in Ref.
[52]. Moreover, this degeneracy, which also exists for the radial axes, explains why the total errors are dominated by the uncertainty on the gold
wire’s stiffness ($k_{0}$ and $k_{w}$ are degenerate in the amplitude of Eq.
(38)’s sine, meaning that $\hat{k}_{0}$ errors actually come from those on
$k_{w}/Q$).
Table 2: Estimated model parameters. The left group’s four columns show the instrumental parameters: total in-phase stiffness $k_{0}$, gold wire stiffness $k_{w}$ and quality factor $Q$, and velocity-dependent coefficient $\lambda_{w}$. The next two columns give the theoretical electrostatic stiffness $k_{\epsilon,{\rm th}}$ and the Newtonian gravity stiffness. The last column lists the difference between the theoretical and the estimated electrostatic stiffness $\Delta k=\hat{k}_{0}-k_{N}-m\omega^{2}-k_{\epsilon,{\rm th}}$. Error bars give $1\sigma$ uncertainties.
Sensor | Axis (mode) | $\hat{k}_{0}$ | $\hat{k}_{w}$ | $\hat{Q}$ | $\hat{\lambda}_{w}$ | $k_{\epsilon,{\rm th}}$ | $k_{N}$ | $\Delta k$
---|---|---|---|---|---|---|---|---
| | [$\times 10^{-2}~{}{\rm N/m}$] | [$\times 10^{-2}~{}{\rm N/m}$] | | [$\times 10^{-2}~{}{\rm Ns/m}$] | [$\times 10^{-2}~{}{\rm N/m}$] | [$\times 10^{-8}$ N/m] | [$\times 10^{-2}~{}{\rm N/m}$]
SUEP | $X$ (HRM) | -0.00$\pm$0.12 | 0.16$\pm$0.08 | 1.5$\pm$0.7 | 4.23$\pm$3.82 | 0.00$\pm$0.01 | 1.22 | -0.021$\pm$0.119
IS1 | $Y$ (HRM) | -1.55$\pm$0.06 | 0.04$\pm$0.08 | 1.5$\pm$0.7 | 1.87$\pm$1.87 | -1.57$\pm$0.05 | -0.96 | 0.009$\pm$0.081
| $Z$ (HRM) | -1.65$\pm$0.26 | 0.19$\pm$0.31 | 1.5$\pm$0.7 | 5.50$\pm$7.49 | -1.57$\pm$0.05 | -0.96 | -0.095$\pm$0.260
| $X$ (FRM) | 0.04$\pm$0.12 | 0.16$\pm$0.08 | 1.5$\pm$0.7 | 2.88$\pm$3.89 | 0.00$\pm$0.01 | 1.22 | 0.024$\pm$0.120
| $Y$ (FRM) | -18.85$\pm$0.06 | 0.04$\pm$0.08 | 1.5$\pm$0.7 | 1.35$\pm$1.89 | -14.08$\pm$0.46 | -0.96 | -4.795$\pm$0.460
| $Z$ (FRM) | -18.87$\pm$0.26 | 0.19$\pm$0.31 | 1.5$\pm$0.7 | 4.94$\pm$7.53 | -14.08$\pm$0.46 | -0.96 | -4.809$\pm$0.523
SUEP | $X$ (HRM) | -1.35$\pm$1.01 | 1.41$\pm$1.01 | 12.8$\pm$8.9 | 5.60$\pm$2.93 | 0.00$\pm$0.01 | 1.27 | -1.366$\pm$1.015
IS2 | $Y$ (HRM) | -6.96$\pm$0.73 | 0.40$\pm$0.73 | 13.4$\pm$9.5 | 5.03$\pm$2.38 | -7.01$\pm$0.23 | -0.88 | 0.039$\pm$0.760
| $Z$ (HRM) | -8.20$\pm$2.94 | 1.74$\pm$2.95 | 13.4$\pm$9.5 | 9.65$\pm$9.53 | -7.01$\pm$0.23 | -0.88 | -1.201$\pm$2.949
| $X$ (FRM) | -1.32$\pm$1.02 | 1.41$\pm$1.01 | 12.8$\pm$8.9 | 5.96$\pm$2.97 | 0.00$\pm$0.01 | 1.27 | -1.337$\pm$1.015
| $Y$ (FRM) | -78.47$\pm$0.68 | 0.38$\pm$0.69 | 12.7$\pm$8.6 | 2.25$\pm$2.39 | -107.02$\pm$3.37 | -0.88 | 28.538$\pm$3.440
| $Z$ (FRM) | -78.56$\pm$2.77 | 1.66$\pm$2.78 | 12.7$\pm$8.6 | 7.03$\pm$9.53 | -107.03$\pm$3.37 | -0.88 | 28.456$\pm$4.366
SUREF | $X$ (HRM) | 0.06$\pm$0.09 | 0.02$\pm$0.01 | 2.3$\pm$1.3 | -0.52$\pm$4.35 | 0.00$\pm$0.01 | 23.65 | 0.041$\pm$0.092
IS1 | $Y$ (HRM) | -1.58$\pm$0.08 | 0.05$\pm$0.08 | 2.5$\pm$1.3 | 1.60$\pm$1.33 | -1.81$\pm$0.06 | -12.32 | 0.209$\pm$0.095
| $Z$ (HRM) | -1.71$\pm$0.12 | 0.18$\pm$0.12 | 2.5$\pm$1.3 | 3.80$\pm$1.40 | -1.81$\pm$0.06 | -12.32 | 0.082$\pm$0.133
| $X$ (FRM) | 0.06$\pm$0.09 | 0.02$\pm$0.01 | 2.3$\pm$1.3 | -0.92$\pm$4.51 | 0.00$\pm$0.01 | 23.65 | 0.042$\pm$0.095
| $Y$ (FRM) | -19.25$\pm$0.07 | 0.05$\pm$0.08 | 2.5$\pm$1.3 | 5.72$\pm$1.33 | -27.31$\pm$0.86 | -12.32 | 8.047$\pm$0.862
| $Z$ (FRM) | -19.16$\pm$0.12 | 0.18$\pm$0.12 | 2.5$\pm$1.3 | 7.79$\pm$1.41 | -27.31$\pm$0.86 | -12.32 | 8.134$\pm$0.866
SUREF | $X$ (HRM) | 0.20$\pm$0.35 | 0.20$\pm$0.17 | 28.9$\pm$20.9 | 0.85$\pm$14.57 | 0.00$\pm$0.01 | 5.72 | 0.144$\pm$0.347
IS2 | $Y$ (HRM) | -8.91$\pm$1.09 | 0.68$\pm$1.09 | 33.9$\pm$19.2 | 5.81$\pm$1.44 | -9.09$\pm$0.29 | -3.98 | 0.111$\pm$1.126
| $Z$ (HRM) | -9.56$\pm$1.69 | 2.38$\pm$1.68 | 33.9$\pm$19.2 | 9.34$\pm$2.00 | -9.08$\pm$0.29 | -3.98 | -0.540$\pm$1.710
| $X$ (FRM) | 0.15$\pm$0.34 | 0.20$\pm$0.17 | 28.9$\pm$20.9 | -2.96$\pm$14.45 | 0.00$\pm$0.01 | 5.72 | 0.089$\pm$0.345
| $Y$ (FRM) | -80.24$\pm$1.04 | 0.65$\pm$1.04 | 32.4$\pm$19.3 | 21.79$\pm$1.45 | -110.17$\pm$3.47 | -3.98 | 29.877$\pm$3.629
| $Z$ (FRM) | -80.09$\pm$1.66 | 2.27$\pm$1.66 | 32.4$\pm$19.3 | 24.64$\pm$1.88 | -110.13$\pm$3.47 | -3.98 | 29.988$\pm$3.847
## 6 Constraints on short-ranged Yukawa deviation
In the previous section, we invoked patch effects to account for the non-zero
difference between theoretical electrostatic stiffness and measured total in-
phase stiffness corrected for the excitation and Newtonian gravity
stiffnesses, $\Delta k=\hat{k}_{0}-k_{N}-m\omega^{2}-k_{\epsilon,{\rm th}}$.
Actually, $\Delta k$ also contains the putative Yukawa potential that we aim
to constrain in this paper.
Given the clear dependence of $\Delta k$ on the electric configuration, which cannot be explained by a Yukawa-like gravity interaction, we exclude the evidently biased FRM measurements from our analysis below. Furthermore, as
shown in Fig. 4, a Yukawa potential has a stronger signature on the radial
axes than on the longitudinal one. Therefore, we use only the stiffness
estimated on the radial axes in the HRM configuration to infer constraints on
the Yukawa interaction. Fig. 8 shows the corresponding $\Delta k$ estimate,
together with their weighted average and $1\sigma$ uncertainty (dashed line
and grey area), $\langle\Delta k\rangle=(7.1\pm 6.0)\times 10^{-4}~{}{\rm
N/m}$. Note that SUEP and SUREF behave similarly. This is expected since they are identical, up to their external test masses and small machining errors.
Figure 8: Difference between theoretical electrostatic stiffnesses and
measured total in-phase stiffness corrected for the excitation and Newtonian
gravity stiffnesses, $\Delta k=\hat{k}_{0}-k_{N}-m\omega^{2}-k_{\epsilon,{\rm
th}}$, for the radial axes of each sensor, in the HRM electric configuration.
The dashed line is the $\Delta k$ weighted average and the grey area shows its
$1\sigma$ weighted uncertainty.
The marginal offset from 0 is most likely due to unaccounted-for patch effects and a possibly suboptimal calibration of our electrostatic model. However, as error bars are largely dominated by the gold wires’ contribution, and are significantly larger
than the remaining bias, we use this estimation of $\langle\Delta k\rangle$ to
infer the 95% (2 $\sigma$) upper bound on the Yukawa potential in Fig. 9,
noting that a positive $\langle\Delta k\rangle$ corresponds to a negative
$\alpha$. Note that since our estimated $\Delta k$ is consistent with 0, we
merely consider that a Yukawa interaction can be present within the error
bars; we do not claim that it explains $\Delta k$’s slight offset from 0.
The curves in the lower part of Fig. 9 show the current best upper bounds on a
Yukawa potential, inferred from dedicated torsion balance experiments [62, 41,
45, 31]. Note that the Eöt-Wash group recently updated its constraints [32];
however, since they have been improved below the ranges of $\lambda$
considered here, we do not show them in Fig. 9. Our constraints are clearly
poor compared to the state of the art. It would have been surprising
otherwise, since MICROSCOPE was not designed to look for short-range
deviations from Newtonian gravity. However, our results suggest that, thanks to its non-trivial geometry, an experiment similar to MICROSCOPE, if highly optimised, may provide new constraints on gravity through the measurement of the interaction between several bodies.
Figure 9: 95% confidence contour for a Yukawa potential. The light yellow area
shows the excluded region by various experiments: Irvine [62], Eöt-Wash 2007
[41], HUST 2012 [45], HUST 2020 [31], and the yellow area shows the region
excluded by the current work.
## 7 Conclusion
We used in-flight technical measurements aimed at characterising MICROSCOPE’s instrument to search for short-ranged Yukawa deviations from Newtonian
gravity. MICROSCOPE not being designed for this task, this article serves as a
proposal for a new experimental concept in the search of small-scale
modifications of gravitation, as well as a first proof of concept. The
analysis is based on the estimation of the stiffness of the force undergone by
MICROSCOPE’s test masses as they are set in motion in their cage.
We listed all forces possibly intervening in the measurement, and computed the
total stiffness. We found that estimation uncertainties are dominated by those
coming from the gold wires’ stiffness and quality factor (those wires being
used to control the potential of the test masses). As the electrostatic
stiffness is expected to dominate over other stiffnesses, we compared it with
the estimated total stiffness. We found a non-zero difference depending on the
instrument’s electric configuration, hinting at unaccounted-for patch field
forces. Due to the complexity of their modeling, we removed those measurements
with a significant bias from our inference of the Yukawa potential parameters.
Not surprisingly, our constraints on the Yukawa potential parameter space
($\alpha$, $\lambda$) are not competitive with the published ones, obtained
with dedicated laboratory tests. We find $|\alpha|<10^{4}-10^{6}$ for
$10^{-4}~{}{\rm m}\leqslant\lambda\leqslant 1~{}{\rm m}$, eight orders of
magnitude above the best current upper bounds. Nevertheless, our work can be
the starting point for optimisations to be implemented in the proposed
MICROSCOPE’s follow-up. The gold wire should be replaced by a contactless charge management system, as envisioned for LISA [63, 64]; this replacement
is already planned, since the gold wire is the main limiting factor for
MICROSCOPE’s test of the WEP [33, 34]. Furthermore, patch effects will need to
be either controlled or measurable, for instance by including a Kelvin probe
in the instrument. Finally, a possible Yukawa interaction at ranges
$10^{-4}~{}{\rm m}\leqslant\lambda\leqslant 1~{}{\rm m}$ is expected to have a
strengh $\alpha<10^{-4}$, corresponding to a stiffness seven orders of
magnitude lower than the electrostatic stiffness. Since MICROSCOPE’s
capacitive control and measurement prevents us from using an electrostatic
shield similar to that used by torsion pendulum experiments, a competitive
experimental constrain will thus require a control of the instrument’s
theoretical model of one part in 10 millions. Whether this endeavour is
possible remains an open question. Nevertheless, it could be circumvented by
performing the measurement with several (more than two) voltages switching on
and off different sets of electrodes to empirically determine the geometry
dependence of the electrostatic stiffness.
In the meantime, we use the measurements presented in this paper to provide
new constraints on the chameleon model in a companion paper [65] based on
Refs. [66, 67].
We acknowledge useful discussions with Bruno Christophe and Bernard Foulon,
and thank Vincent Lebat for comments on this article. We acknowledge the
financial support of CNES through the APR program (“GMscope+” and “Microscope
2” projects). MPB is supported by a CNES/ONERA PhD grant. This work uses
technical details of the T-SAGE instrument, installed on the CNES-ESA-ONERA-
CNRS-OCA-DLR-ZARM MICROSCOPE mission. This work is supported in part by the EU
Horizon 2020 research and innovation programme under the Marie
Skłodowska-Curie grant agreement No. 690575. This article is based upon work
related to the COST Action
CA15117 (CANTATA) supported by COST (European Cooperation in Science and
Technology).
## Appendix A Test mass dynamics
Equation (4) is an idealised version of the more realistic description of Ref.
[47]. First, the sensor is not perfectly aligned with the satellite’s frame,
as described by the $[\theta]$ matrix
$\vec{\Gamma}_{\rm cont|instr}=[\theta]\left(\vv{\Delta\Gamma}_{\Earth|\rm
sat}+\vec{\Gamma}_{\rm kin|sat}+\frac{\vec{F}_{\rm
ext|sat}}{M}+\frac{\vec{F}_{\rm th|sat}}{M}\right)-\frac{\vec{F}_{\rm
loc|instr}}{m}-\frac{\vec{F}_{\rm pa|instr}}{m},$ (47)
where the subscripts “$|$instr” and “$|$sat” mean that forces and
accelerations are expressed in the instrument or satellite frame,
respectively.
Moreover, the measured acceleration is given by the control acceleration (47)
affected by the matrix $[A]$ containing the instrument’s scale factors, by
electrostatic parasitic forces (since the applied electrostatic forces are the
sum of the measured and parasitic electrostatic forces $m\vec{\Gamma}_{\rm
cont|instr}=\vec{F}_{\rm el}=\vec{F}_{\rm el,meas}+\vec{F}_{\rm elec,par}$),
by the measurement bias $\vv{b}_{0}$ due to the read-out circuit and by noise
$\vv{n}$:
$\vv\Gamma_{\rm meas|instr}=\vv{b}_{0}+[A]\left(\vv\Gamma_{\rm
cont|instr}-\frac{\vv{F}_{\rm
elec,par|instr}}{m}\right)+K_{2}\left[\vv\Gamma_{\rm
cont|sat}\right]^{2}+\vv{n}.$ (48)
We can then wrap up and write the measured acceleration explicitly:
$\vv\Gamma_{\rm
meas|instr}=\vv{B}_{0}+[A][\theta]\left(\vv{\Delta\Gamma}_{\Earth{\rm|sat}}+\vv\Gamma_{\rm
kin|sat}+\frac{\vv{F}_{\rm ext|sat}}{M}+\frac{\vv{F}_{\rm
th|sat}}{M}\right)-[A]\frac{\vec{F}_{\rm loc|instr}}{m}\\\
+K_{2}\left[\vv\Gamma_{\rm cont|sat}\right]^{2}+\vv{n},$ (49)
where
$\vv{B}_{0}\equiv\vv{b}_{0}-[A]\left(\frac{\vec{F}_{\rm
pa|instr}}{m}+\frac{\vec{F}_{\rm elec,par|instr}}{m}\right)$ (50)
is the overall bias and $K_{2}$ is the quadratic factor accounting for non-
linearities in the electronics.
In this article, following the measurements of Ref. [34], we assume that
$[A]=[\theta]={\rm Id}$ (Identity matrix), that the drag-free perfectly
cancels the external forces and we ignore the quadratic factor (see Refs. [33,
49, 52]), so that our main measurement equation is
$\vv\Gamma_{\rm
meas|instr}=\vv{B}_{0}+\vv{\Delta\Gamma}_{\Earth{\rm|sat}}+\vv\Gamma_{\rm
kin|sat}-\frac{\vec{F}_{\rm loc|instr}}{m}+\vv{n}.$ (51)
## Appendix B Electric configurations
MICROSCOPE can be used with two electric configurations: in the full-range
mode (FRM), voltages are high enough to be able to acquire the test masses,
while the high-resolution mode (HRM), with lower voltages, allows for an
optimal control of the test masses. Tables 3 and 4 summarise the corresponding
voltages (which appear in Eq. 14). See Ref. [48] for details.
Table 3: High-resolution mode (HRM) electric configuration. All voltages are in V.

| | $V_{d}$ | $V_{p}$ | $V^{\prime}_{px}$ | $V^{\prime}_{py/z}$ | $V^{\prime}_{p\phi}$ |
|---|---|---|---|---|---|
| IS1-SUEP | 5 | 5 | -5 | 2.5 | -10 |
| IS2-SUEP | 5 | 5 | 0 | 2.5 | -10 |
| IS1-SUREF | 5 | 5 | -5 | 2.5 | -10 |
| IS2-SUREF | 5 | 5 | -10 | 0 | -10 |
Table 4: Full-range mode (FRM) electric configuration. All voltages are in V.

| | $V_{d}$ | $V_{p}$ | $V^{\prime}_{px}$ | $V^{\prime}_{py/z}$ | $V^{\prime}_{p\phi}$ |
|---|---|---|---|---|---|
| IS1-SUEP | 1 | 42 | 0 | 0 | 0 |
| IS2-SUEP | 1 | 42 | 0 | 0 | 0 |
| IS1-SUREF | 1 | 42 | 0 | 0 | 0 |
| IS2-SUREF | 1 | 42 | 0 | 0 | 0 |
## Appendix C Discussion of the analysis of Chhun et al. [52]
In Ref. [52], Chhun et al. compute the electrostatic stiffness in HRM, using
the same measurement sessions as those used here, with a simple ratio of sine
amplitudes. They neglect the local gravity stiffness, assume a negligible gold
wire stiffness $k_{w}\approx 0$ and no velocity-dependent term
($\lambda_{w}=0$), and fit the position and acceleration as
$\displaystyle x(t)$ $\displaystyle=x_{0}\sin(\omega t+\psi_{x})$ (52)
$\displaystyle\Gamma(t)$ $\displaystyle=\Gamma_{0}\sin(\omega
t+\psi_{\Gamma}),$ (53)
and infer $k_{\epsilon}=m\Gamma_{0}/x_{0}-m\omega^{2}$, with the implicit
assumption that $\psi_{x}=\psi_{\Gamma}$. Table 5 sums up their results.
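This sinusoid fit and the inference $k_{\epsilon}=m\Gamma_{0}/x_{0}-m\omega^{2}$ can be sketched numerically on synthetic data; all numerical values below (mass, frequency, stiffness, amplitude) are illustrative, not MICROSCOPE’s:

```python
import numpy as np

# Illustrative values only, not MICROSCOPE's.
m = 0.3                    # test-mass mass [kg]
omega = 2 * np.pi * 1e-3   # excitation angular frequency [rad/s]
k_true = -1.5e-2           # stiffness to recover [N/m]

t = np.linspace(0.0, 3 * 2 * np.pi / omega, 4000)
x0 = 1e-5                              # excitation amplitude [m]
x = x0 * np.sin(omega * t)             # Eq. (52) with psi_x = 0
gamma = (k_true / m + omega**2) * x    # so that k = m*Gamma0/x0 - m*omega^2

# Fit x(t) = A sin(wt) + B cos(wt) to recover amplitude and phase.
M = np.column_stack([np.sin(omega * t), np.cos(omega * t)])
A, B = np.linalg.lstsq(M, x, rcond=None)[0]
x0_fit, psi_x = np.hypot(A, B), np.arctan2(B, A)

# Signed amplitude of Gamma along the excitation's phase (psi_Gamma = psi_x).
s = np.sin(omega * t + psi_x)
gamma0 = (s @ gamma) / (s @ s)

k_est = m * gamma0 / x0_fit - m * omega**2   # inferred stiffness [N/m]
```

On noiseless synthetic data the inference is exact up to floating-point error; in practice the fit quality limits the accuracy.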
Two important points need to be highlighted. First, the stiffnesses estimated
under the very restrictive assumptions of Ref. [52] are close to (yet
inconsistent with) the expected electrostatic stiffnesses, with an accuracy
ranging from a few percent to a dozen percent, especially on the radial axes
(see Table 2). Second, the stiffnesses estimated on the radial axes are
consistent with each other, thus showing a good degree of cylindrical
symmetry; this symmetry is clearly expected for the electrostatic stiffness,
but may seem accidental for the gold wires. Unless coincidental, these facts
hint at a total stiffness indeed dominated by the electrostatic stiffness,
with negligible other contributors (e.g. the gold wires).
Table 5: Total stiffness (identified as the electrostatic stiffness) measured in Ref. [52]. The expected values can be found in Table 2.

| | $k_{\epsilon,x}~{}[\times 10^{-3}~{}{\rm N/m}]$ | $k_{\epsilon,y}~{}[\times 10^{-2}~{}{\rm N/m}]$ | $k_{\epsilon,z}~{}[\times 10^{-2}~{}{\rm N/m}]$ |
|---|---|---|---|
| IS1-SUEP | $1.396\pm 0.003$ | $-1.494\pm 0.001$ | $-1.478\pm 0.002$ |
| IS2-SUEP | $0.639\pm 0.002$ | $-6.424\pm 0.001$ | $-6.310\pm 0.001$ |
| IS1-SUREF | $0.837\pm 0.003$ | $-1.515\pm 0.000$ | $-1.514\pm 0.000$ |
| IS2-SUREF | $4.424\pm 0.010$ | $-8.170\pm 0.001$ | $-7.144\pm 0.001$ |
It is instructive to consider Eq. (38) in view of the analysis of Ref. [52].
However, instead of assuming that the gold wire has no stiffness, we now
assume that its quality factor $Q\gg 1$ (this is equivalent from the point of
view of MICROSCOPE’s test of the WEP, where only the ratio $k_{w}/Q$ enters
the measurement). Thus assuming $\phi\rightarrow 0$, we re-write Eq. (38) as
(Taylor expanding the sine and cosine at first order in $\phi$)
$\Gamma(t)=\pm\sqrt{\kappa_{0}^{2}+2\kappa_{0}\kappa_{w}+\kappa_{w}^{2}(1+\phi^{2})}\sin\left(\omega
t+\psi-\arctan\frac{\kappa_{w}\phi}{\kappa_{0}+\kappa_{w}}\right),$ (54)
which tends to $\lim_{\phi\to
0}\Gamma(t)=\pm|\kappa_{0}+\kappa_{w}|\sin(\omega t+\psi)$. It is thus clear
that using Eq. (52), Ref. [52] estimates the total stiffness. Nevertheless, a
subtlety remains. Rigorously, the phase in Eq. (54) should be that of the
excitation, $\psi=\psi_{x}$, which may (and does) differ from the phase of the
acceleration $\psi_{\Gamma}$; yet Ref. [52] assumes $\psi_{x}=\psi_{\Gamma}$
(which is consistent with the assumption that the gold wire has zero
stiffness). Unfortunately, the experiment contradicts this assumption (at
least on the radial axes).
Relaxing the $\psi_{x}=\psi_{\Gamma}$ hypothesis of Eq. (52), we find almost
unchanged total stiffnesses (with percent-level modifications), but a small
residual with a $\pi/2$ phase offset remains after removing the best fit from
the acceleration. This remaining small signal may be the sign of a non-zero
contribution of the gold wires. Indeed, Eq. (38) shows that at first order,
the amplitude of this residual signal is proportional to the $k_{w}/Q$ ratio
of the gold wires (when assuming $\lambda_{w}=0$). Alas, this does not teach
us anything about the absolute order of magnitude of either $k_{w}$ or $Q$.
## Appendix D Radial electrostatic stiffness due to the Y electrodes
In this appendix, we give a detailed computation of the electrostatic
stiffness created by MICROSCOPE’s $Y$ electrodes on a given test mass as the
test mass moves along the $Y$-axis (but remains at $z=0$). Although this is
textbook physics, this section allows us to clarify the model of the
electrostatic stiffness. See Ref. [60] for the detailed general case. See Ref.
[48] for details about the geometry involved in this computation. In short, a
given test mass is controlled along its $Y$-axis by two pairs of
diametrically-opposed $Y$ electrodes (at potentials $V_{e+}$ and $V_{e-}$),
complemented by two pairs of $Z$ electrodes, as shown in Fig. 10.
Figure 10: Test mass $Y$- and $Z$-axes control geometry. Upper panel: test
mass (light brown) and inner electrode-bearing silica cylinder, with its two
rings of pairs of electrodes to control the $Y$-axis (red) and the $Z$-axis
(blue). The outer electrode-bearing silica cylinder controls the $X$-axis and
is not shown here (see Ref. [48]). Lower panel: radial cut of a ring of $Y$
and $Z$ electrodes, when the test mass is offset by $y$ along the $Y$-axis,
with $e$ being the gap between the electrodes and the test mass at
equilibrium. The inner cylinder, of external radius $R_{ey}$, carries the
electrodes ($Y$ and $Z$ along the corresponding axes; $Y$ electrodes are shown
at potentials $V_{e+}$ and $V_{e-}$); electrodes are separated by dips of
width $d_{3}$. The test mass (of inner radius $R_{mi}$ and potential $V_{\rm
TM}$) surrounds this inner cylinder and can move around it.
### D.1 Electrostatic force between the plates of a capacitor
At constant potential, the electrostatic force between conductors reads
$F_{\rm elec}=\nabla U$, where $U$ is the electrostatic energy. For a
capacitor,
$U=\frac{1}{2}CV^{2},$ (55)
where $C$ is its capacitance and $V$ the potential difference between its
plates. The electrostatic force created along the $y$-axis is then
$F(y)=\frac{1}{2}\frac{\partial C}{\partial y}V^{2}.$ (56)
### D.2 Capacitance of one $Y$ electrode – test mass pair
Assuming the electrodes lie on an infinite cylinder (a reasonable assumption
since the electrodes are far enough from the edges of the cylinder) and using
Gauss's theorem, it is easy to show that the electric field of an electrode
(of surface charge $\sigma$) at a distance $r$ from the axis of the cylinder
is
$E(r)=\frac{\sigma R_{ey}}{\epsilon_{0}r}.$ (57)
The electric potential of the electrode is thus
$V(r)=\frac{R_{ey}\sigma}{\epsilon_{0}}\ln r.$ (58)
Finally, the capacitance of the electrode-test mass pair is
$C=\frac{Q}{\Delta
V}=\frac{1}{4}\left(2\pi-4\frac{d_{3}}{R_{ey}}\right)\frac{L_{y}\epsilon_{0}}{\ln\frac{R_{mi}}{R_{ey}}},$
(59)
where the charge is
$Q=\sigma
S=\frac{\sigma}{4}\left(2\pi-4\frac{d_{3}}{R_{ey}}\right)R_{ey}L_{y},$ (60)
with $S$ the surface of an electrode (of length $L_{y}$).
Denoting $e\equiv R_{mi}-R_{ey}$ the gap between the cylinder and the test
mass, in the limit $e\ll R_{ey}\sim R_{mi}$, Eq. (59) reads
$C=\frac{1}{4}\left(2\pi-4\frac{d_{3}}{R_{ey}}\right)L_{y}\epsilon_{0}\frac{R_{mi}+R_{ey}}{2e}.$
(61)
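The gap approximation of Eq. (61) can be checked against the exact logarithmic form of Eq. (59); the geometry values below are hypothetical, chosen only so that $e\ll R_{ey}$:

```python
import numpy as np

eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

# Hypothetical geometry (illustrative only, not MICROSCOPE's actual values):
R_ey = 15.0e-3   # electrode cylinder outer radius [m]
e    = 0.6e-3    # electrode / test-mass gap [m]
R_mi = R_ey + e  # test-mass inner radius [m]
L_y  = 40.0e-3   # electrode length [m]
d_3  = 1.0e-3    # dip width between electrodes [m]

geom = 0.25 * (2 * np.pi - 4 * d_3 / R_ey) * L_y * eps0

C_exact  = geom / np.log(R_mi / R_ey)       # Eq. (59)
C_approx = geom * (R_mi + R_ey) / (2 * e)   # Eq. (61), valid for e << R_ey

rel_err = abs(C_approx - C_exact) / C_exact  # O((e/R_ey)^2), here ~1e-4
```

For this geometry the capacitance is of order 10 pF and the approximation error scales as $(e/R_{ey})^{2}$.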
### D.3 $Y$ electrodes electrostatic stiffness
When moving the test mass by an amount $y$ along the $Y$-axis, the
electrostatic force between the electrodes and the test mass is the sum of the
forces between the test mass and the $V_{e+}$ and $V_{e-}$ electrodes,
$F=F_{+}+F_{-}$ (so far we consider only one pair of electrodes).
Those forces are, from Eq. (56),
$F_{+}=\frac{1}{2}\frac{\partial C_{+}}{\partial y}(V_{\rm TM}-V_{e+})^{2},$
(62) $F_{-}=\frac{1}{2}\frac{\partial C_{-}}{\partial y}(V_{\rm
TM}-V_{e-})^{2},$ (63)
with
$C_{\pm}=\frac{1}{4}\left(2\pi-4\frac{d_{3}}{R_{ey}}\right)L_{y}\epsilon_{0}\frac{R_{mi}+R_{ey}}{2(e\pm
y)}.$ (64)
The total force is thus
$F=k^{\prime}\left[-\frac{(V_{\rm TM}-V_{e+})^{2}}{(e+y)^{2}}+\frac{(V_{\rm
TM}-V_{e-})^{2}}{(e-y)^{2}}\right],$ (65)
where
$k^{\prime}\equiv\frac{1}{16}\left(2\pi-4\frac{d_{3}}{R_{ey}}\right)L_{y}\epsilon_{0}(R_{mi}+R_{ey}).$
(66)
Assuming $y\ll e$, the force reads, at first order in $y/e$,
$F=\frac{k^{\prime}}{e^{2}}\left[(V_{\rm
TM}-V_{e-})^{2}\left(1+2\frac{y}{e}\right)-(V_{\rm
TM}-V_{e+})^{2}\left(1-2\frac{y}{e}\right)\right].$ (67)
Keeping only the (stiffness) terms proportional to the displacement $y$ and
expanding the square sums, we get
$F=2\frac{k^{\prime}}{e^{3}}\left[-2(V_{e+}+V_{e-})V_{\rm
TM}+V_{e-}^{2}+V_{e+}^{2}+2V_{\rm TM}^{2}\right]y,$ (68)
with [48]
$\left\\{\begin{array}[]{l}V_{e-}=V^{\prime}_{p}-v_{y}\\\
V_{e+}=V^{\prime}_{p}+v_{y}\\\ V_{\rm
TM}=V_{p}+\sqrt{2}V_{d}\sin\omega_{d}t,\end{array}\right.$ (69)
of which we take the mean value $\langle V_{\rm TM}\rangle=V_{p}$ and $\langle
V_{\rm TM}^{2}\rangle=V_{p}^{2}+V_{d}^{2}$ (and omit the $\langle\dots\rangle$
symbol hereafter), such that the stiffness contribution to the force is
$F=4\frac{k^{\prime}}{e^{3}}\left[(V_{p}-V^{\prime}_{p})^{2}+V_{d}^{2}\right]y.$
(70)
Considering now the two pairs of electrodes, and substituting Eq. (66) for
$k^{\prime}$,
$F=\frac{1}{2}\left(2\pi-4\frac{d_{3}}{R_{ey}}\right)L_{y}\epsilon_{0}\frac{R_{mi}+R_{ey}}{e^{3}}\left[(V_{p}-V^{\prime}_{p})^{2}+V_{d}^{2}\right]y.$
(71)
Since $R_{mi}\approx R_{ey}$, using the expression for the surface of an
electrode (Eq. 60), we find the expression given in Eq. (14), with
$\alpha_{y}=0$.
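A minimal numerical sketch of the stiffness magnitude given by Eq. (71), evaluated for the IS1-SUEP $y$-axis voltages of Tables 3 and 4; the geometry ($R_{ey}$, $L_{y}$, $d_{3}$, $e$) is hypothetical, not MICROSCOPE’s actual dimensions:

```python
import numpy as np

eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

# Hypothetical geometry (illustrative only, NOT MICROSCOPE's dimensions):
R_ey = 15.0e-3   # electrode cylinder outer radius [m]
e    = 0.6e-3    # electrode / test-mass gap [m]
R_mi = R_ey + e  # test-mass inner radius [m]
L_y  = 40.0e-3   # electrode length [m]
d_3  = 1.0e-3    # dip width between electrodes [m]

def k_elec_y(V_p, V_p_prime, V_d):
    """Magnitude of the Y-electrode electrostatic stiffness, Eq. (71).
    The force is destabilising: F = +k*y pushes the mass off-centre."""
    return (0.5 * (2 * np.pi - 4 * d_3 / R_ey) * L_y * eps0
            * (R_mi + R_ey) / e**3
            * ((V_p - V_p_prime)**2 + V_d**2))

# Voltages for IS1-SUEP on the y-axis, from Tables 3 and 4:
k_hrm = k_elec_y(V_p=5.0, V_p_prime=2.5, V_d=5.0)   # high-resolution mode
k_frm = k_elec_y(V_p=42.0, V_p_prime=0.0, V_d=1.0)  # full-range mode
```

With these hypothetical dimensions the HRM stiffness comes out at the $10^{-3}$–$10^{-2}$ N/m level, i.e. the order of magnitude of Table 5, and the FRM stiffness is much larger because of the $V_{p}^{2}$ term.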
## Appendix E Gravitational force between hollow cylinders
Ref. [68] derives the longitudinal $F_{z}(r,z)$ and radial $F_{r}(r,z)$ forces
between two hollow cylinders interacting through a Yukawa gravitational
potential. In this appendix, we use those results and complement them with the
cases at hand in this paper. Note that, contrary to the MICROSCOPE reference
frame used in the main text, we use a more intuitive coordinate frame, where
the $z$-axis is along the main axis of the cylinders, so that the natural
cylindrical system $(r,\varphi,z)$ holds. This is the convention of Ref. [68].
The gravitational force created along the $z$-axis on a unit mass at
$(r,\theta,z)$ by a hollow cylinder of inner and outer radii $a$ and $b$,
height $2\ell$ and density $\rho$ is [68]
$F_{z}(r,z)=-2\pi G\alpha\rho\int_{0}^{\infty}\frac{J_{0}(kr){\rm
d}k}{\kappa}\left[bJ_{1}(kb)-aJ_{1}(ka)\right]\times\left\\{\begin{array}[]{l}h_{2}(z;k)\,\,{\rm
if}\,-\ell\leqslant z\leqslant\ell\\\ h_{1}(z;k)\,\,{\rm if}\,z>\ell\\\
h_{3}(z;k)\,\,{\rm if}\,z<-\ell.\\\ \end{array}\right.$ (72)
where $\kappa$ is defined in Eq. (25), with $\lambda$ the Yukawa interaction
range, $J_{i}$ are Bessel functions of the first kind, and the $h_{i}$
functions, which depend on the altitude of the unit mass, are defined below
(note that $h_{1}$ and $h_{3}$ are interchanged in Ref. [68]). The Newtonian
interaction is straightforward to recover by setting
$\lambda\rightarrow\infty$ (and $\alpha=1$).
The corresponding radial force is given by
$F_{r}(r,z)=-2\pi G\alpha\rho\int_{0}^{\infty}\frac{kJ_{1}(kr){\rm
d}k}{\kappa^{2}}\left[bJ_{1}(kb)-aJ_{1}(ka)\right]\times\left\\{\begin{array}[]{l}h_{4}(z;k)\,\,{\rm
if}\,-\ell\leqslant z\leqslant\ell\\\ h_{1}(z;k)\,\,{\rm if}\,z>\ell\\\
-h_{3}(z;k)\,\,{\rm if}\,z<-\ell\\\ \end{array}\right.$ (73)
The $h_{i}$ functions are defined as
$\begin{array}[]{l}h_{1}(z;k)=\exp[-\kappa(z-\ell)]-\exp[-\kappa(\ell+z)]\\\
h_{2}(z;k)=\exp[-\kappa(\ell-z)]-\exp[-\kappa(\ell+z)]\\\
h_{3}(z;k)=\exp[\kappa(z-\ell)]-\exp[\kappa(\ell+z)]\\\
h_{4}(z;k)=2-\exp[-\kappa(\ell-z)]-\exp[-\kappa(\ell+z)].\end{array}$ (74)
### E.1 Forces on a full cylinder
The forces exerted by the previous cylinder (called the “source”, centered on
$(x,y,z)=(0,0,0)$) on another full cylinder (called the “target”, centered on
$(x_{s},0,z_{s})$) of radius $a^{\prime}$, height $2L$ and density
$\rho^{\prime}$, are obtained by integrating Eqs. (72) and (73) over the
volume of the target (at this point in the computation, we do not require the
geometry to be physically sound, i.e. cylinders may overlap; physical
configurations are considered below):
${\mathcal{F}_{z}}(x_{s},z_{s})=\rho^{\prime}\iint{\rm d}x{\rm d}y\int_{z_{\rm
min}}^{z_{\rm max}}{\rm d}zF_{z}(r,z),$ (75)
and similarly for ${\mathcal{F}_{r}}(x_{s},z_{s})$, where, for convenience, we
express the volume in Cartesian coordinates (though we will quickly return to
cylindrical coordinates below), with $r=\sqrt{x^{2}+y^{2}}$. The $z$-integral
is taken from the base $z_{\rm min}$ to the top $z_{\rm max}$ of the target
cylinder, and the $(x,y)$-integral is taken over the disk section of the
cylinder. We make them explicit below.
#### E.1.1 $z$-integral
$F_{z}(r,z)$ and $F_{r}(r,z)$ depend on $z$ only through the $h_{i}$
functions, so it is enough to compute $H_{i}(k)=\int_{z_{\rm min}}^{z_{\rm
max}}h_{i}(z;k){\rm d}z$. Several cases depending on the position of the
target with respect to the source must be considered:
1. 1.
Target’s $z$-extension fully contained in source’s $z$-extension: in this
case, $z_{s}-L>-\ell$ and $z_{s}+L<\ell$, and only $h_{2}$ and $h_{4}$ are
defined. Their integrals are straightforward to compute, with $z_{\rm
min}=z_{s}-L$ and $z_{\rm max}=z_{s}+L$:
$H_{2}(z_{s},k)=\frac{4{\rm e}^{-\kappa\ell}}{\kappa}\sinh(\kappa
L)\sinh(\kappa z_{s})$ (76)
and
$H_{4}(z_{s},k)=4L-\frac{4{\rm e}^{-\kappa\ell}}{\kappa}\sinh(\kappa
L)\cosh(\kappa z_{s}).$ (77)
2. 2.
Target’s $z$-extension fully covering source’s $z$-extension ($z_{s}-L<-\ell$
and $z_{s}+L>\ell$): in this case, all $h_{i}$ are defined, and
$H_{1}(z_{s},k)=-\frac{2}{\kappa}\left({\rm e}^{-\kappa(z_{s}+L)}-{\rm
e}^{-\kappa\ell}\right)\sinh(\kappa\ell),$ (78)
$H_{3}(z_{s},k)=-\frac{2}{\kappa}\left({\rm e}^{-\kappa\ell}-{\rm
e}^{\kappa(z_{s}-L)}\right)\sinh(\kappa\ell),$ (79)
$H_{4}(z_{s},k)=4\ell-4\frac{{\rm e}^{-\kappa\ell}}{\kappa}\sinh(\kappa\ell)$
(80)
and $H_{2}(z_{s},k)=0$ by symmetry.
3. 3.
Target fully above source ($z_{s}-L>\ell$): in this case, only $h_{1}$ is
defined and
$H_{1}(z_{s},k)=\frac{4{\rm e}^{-\kappa
z_{s}}}{\kappa}\sinh(\kappa\ell)\sinh(\kappa L).$ (81)
4. 4.
Target fully below source ($z_{s}+L<-\ell$): in this case, only $h_{3}$ is
defined and
$H_{3}(z_{s},k)=-\frac{4{\rm e}^{\kappa
z_{s}}}{\kappa}\sinh(\kappa\ell)\sinh(\kappa L).$ (82)
5. 5.
Other cases correspond to the target’s and the source’s $z$-extension
overlapping, with none completely covering the other. Since they are not of
use in MICROSCOPE, we do not consider them here.
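The closed forms above can be verified against direct quadrature of the $h_{i}$ functions of Eq. (74), for arbitrary illustrative parameters:

```python
import numpy as np
from scipy.integrate import quad

kappa, ell = 3.0, 0.2  # arbitrary illustrative values

h1 = lambda z: np.exp(-kappa * (z - ell)) - np.exp(-kappa * (ell + z))
h2 = lambda z: np.exp(-kappa * (ell - z)) - np.exp(-kappa * (ell + z))
h4 = lambda z: 2 - np.exp(-kappa * (ell - z)) - np.exp(-kappa * (ell + z))

# Case 1: target fully inside the source (z_s - L > -ell and z_s + L < ell).
L, z_s = 0.05, 0.1
H2 = 4 * np.exp(-kappa * ell) / kappa * np.sinh(kappa * L) * np.sinh(kappa * z_s)
H4 = 4 * L - 4 * np.exp(-kappa * ell) / kappa * np.sinh(kappa * L) * np.cosh(kappa * z_s)
H2_num = quad(h2, z_s - L, z_s + L)[0]   # Eq. (76) check
H4_num = quad(h4, z_s - L, z_s + L)[0]   # Eq. (77) check

# Case 3: target fully above the source (z_a - L > ell).
z_a = 0.4
H1 = 4 * np.exp(-kappa * z_a) / kappa * np.sinh(kappa * ell) * np.sinh(kappa * L)
H1_num = quad(h1, z_a - L, z_a + L)[0]   # Eq. (81) check
```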
#### E.1.2 $(x,y)$-integral
With no loss of generality, we can set the target cylinder on
$(x_{s},y_{s})=(x_{s},0)$ in the $(x,y)$-plane ($y_{s}\neq 0$ cases are
recovered by a simple rotation). For an arbitrary function $f$,
$\iint{\rm d}x{\rm d}yf(x,y)=\int_{\theta_{-}}^{\theta_{+}}{\rm
d}\theta\int_{R_{-}(\theta)}^{R_{+}(\theta)}f(r,\theta)r{\rm d}r,$ (83)
where the integration boundaries depend on the geometry of the problem. Let us
assume that the disk over which we take the integral is centered on
$(x_{s},y_{s})=(x_{s},0)$ and has a radius $a$ (not to be confused with the
radius of the source, which is of no use here).
1. 1.
$|x_{s}|>a$
This case is illustrated by the left panel of Fig. 11. It is easy to show that
the $\theta$ integral runs from $\theta_{-}=-\arcsin(a/|x_{s}|)$ to
$\theta_{+}=\arcsin(a/|x_{s}|)$. For a given $\theta$ in that domain, the
$r$-integration then runs from $R_{-}(\theta)$ to $R_{+}(\theta)$ which are
solutions of the quadratic equation
$R^{2}-2x_{s}R\cos\theta+x_{s}^{2}-a^{2}=0,$ (84)
and are given by
$R_{\pm}(\theta)=x_{s}\cos\theta\pm\sqrt{a^{2}-x_{s}^{2}\sin^{2}\theta}.$ (85)
Figure 11: $(x,y)$-integration geometry.
2. 2.
$|x_{s}|\leqslant a$
In this case, shown in the right panel of Fig. 11, the $\theta$ boundaries are
trivially $\theta_{-}=0$ and $\theta_{+}=2\pi$. It is also trivial that for a
given $\theta$, $R_{-}(\theta)=0$. Finally, it can be shown that the upper
$r$-boundary is the same as that of the previous case,
$R_{+}(\theta)=x_{s}\cos\theta+\sqrt{a^{2}-x_{s}^{2}\sin^{2}\theta}$.
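A quick sanity check of these boundaries: integrating $f=1$ over the disk must return its area $\pi a^{2}$ in both cases (the values of $a$ and $x_{s}$ below are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

a = 1.3  # disk radius (arbitrary)

def Rp(theta, x_s):  # upper r-boundary, Eq. (85) with the + sign
    return x_s * np.cos(theta) + np.sqrt(a**2 - x_s**2 * np.sin(theta)**2)

def Rm(theta, x_s):  # lower r-boundary, Eq. (85) with the - sign
    return x_s * np.cos(theta) - np.sqrt(a**2 - x_s**2 * np.sin(theta)**2)

# Case |x_s| > a: theta in [-arcsin(a/|x_s|), +arcsin(a/|x_s|)], r in [R-, R+].
xs_far = 2.0
th = np.arcsin(a / abs(xs_far))
area_far = quad(lambda t: 0.5 * (Rp(t, xs_far)**2 - Rm(t, xs_far)**2),
                -th, th)[0]

# Case |x_s| <= a: theta in [0, 2*pi], r in [0, R+].
xs_near = 0.7
area_near = quad(lambda t: 0.5 * Rp(t, xs_near)**2, 0.0, 2 * np.pi)[0]
```

Both quadratures reproduce $\pi a^{2}$ to numerical accuracy.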
#### E.1.3 Longitudinal and radial forces
Noting that the $r$-dependence of the ${\mathcal{F}}_{z}(x_{s},z_{s})$ force
appears only in the $J_{0}$ Bessel function, and using Eq. (72) we re-write
Eq. (75) as
${\mathcal{F}}_{z}(x_{s},z_{s})=-2\pi
G\rho\rho^{\prime}\alpha\int_{0}^{\infty}\frac{K_{z}(k)}{\kappa}\left[bJ_{1}(kb)-aJ_{1}(ka)\right]\left[H_{1}(z_{s},k)+H_{2}(z_{s},k)+H_{3}(z_{s},k)\right]{\rm
d}k,$ (86)
where, with a slight abuse of notation, we sum the $H_{i}$ functions (setting
them to 0 outside their definition range).
Since all cylinders of a given MICROSCOPE’s sensor unit are co-axial, we
consider only the $|x_{s}|\leqslant a$ case in this paper, so that the
$(x,y)$-integration is
$K_{z}(k)=\int_{0}^{2\pi}{\rm d}\theta\int_{0}^{R_{+}(\theta)}rJ_{0}(kr){\rm
d}r,$ (87)
and we note that
$I_{z}(k,\theta)\equiv\int_{0}^{R_{+}(\theta)}rJ_{0}(kr){\rm
d}r=\frac{R_{+}(\theta)J_{1}[kR_{+}(\theta)]}{k}.$ (88)
Similarly, the radial force
${\mathcal{F}}_{r}(x_{s},z_{s})=-2\pi
G\rho\rho^{\prime}\alpha\int_{0}^{\infty}\frac{K_{r}(k)}{\kappa^{2}}\left[bJ_{1}(kb)-aJ_{1}(ka)\right]\left[H_{1}(z_{s},k)+H_{4}(z_{s},k)-H_{3}(z_{s},k)\right]{\rm
d}k,$ (89)
with
$K_{r}(k)=\int_{0}^{2\pi}\cos{\theta}{\rm
d}\theta\int_{0}^{R_{+}(\theta)}krJ_{1}(kr){\rm d}r,$ (90)
and
$\begin{array}[]{lll}I_{r}(k,\theta)&\equiv&\int_{0}^{R_{+}(\theta)}krJ_{1}(kr){\rm
d}r\\\
&=&\frac{\pi}{2}\left\\{R_{+}(\theta)J_{1}[kR_{+}(\theta)]H_{0}[kR_{+}(\theta)]-R_{+}(\theta)J_{0}[kR_{+}(\theta)]H_{1}[kR_{+}(\theta)]\right\\},\end{array}$
(91)
where $H_{0}$ and $H_{1}$ are Struve functions (not to be confused with the
previous $H_{i}$ functions).
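Both closed forms (Eqs. 88 and 91) can be checked against direct quadrature with SciPy, for arbitrary $k$ and $R_{+}$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, struve

k, R = 2.7, 1.4  # arbitrary test values

# Eq. (88): int_0^R r J0(kr) dr = R J1(kR) / k
Iz_num = quad(lambda r: r * j0(k * r), 0.0, R)[0]
Iz_ana = R * j1(k * R) / k

# Eq. (91): int_0^R k r J1(kr) dr
#         = (pi/2) R [J1(kR) H0(kR) - J0(kR) H1(kR)],
# with H0, H1 the Struve functions (scipy.special.struve).
Ir_num = quad(lambda r: k * r * j1(k * r), 0.0, R)[0]
z = k * R
Ir_ana = 0.5 * np.pi * R * (j1(z) * struve(0, z) - j0(z) * struve(1, z))
```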
Without any further assumptions, we cannot integrate Eqs. (87) and (90) over
$\theta$ analytically, and we end up with a 2D integral for the force between
the two cylinders. We show below that in the limit of small displacements, we
can integrate them analytically. Nevertheless, in the general case, the
$\theta$ integrations are easily performed numerically.
#### E.1.4 Small displacements limit: longitudinal force
We assume that the target cylinder (of radius $a^{\prime}$) moves about a
“reference” position ($\bar{x},0,\bar{z}$), with a small displacement $\delta$
along the $z$-axis. Assuming that $\bar{x}\ll a^{\prime}$, at first order in
$\bar{x}/a^{\prime}$,
$I_{z}(k,\theta)\approx
a^{\prime}\frac{J_{1}(ka^{\prime})}{k}+a^{\prime}J_{0}(ka^{\prime})\cos\theta\bar{x},$
(92)
so that
$K_{z}(k)\approx 2\pi\frac{a^{\prime}J_{1}(ka^{\prime})}{k}.$ (93)
Denoting $z_{s}=\bar{z}+\delta$ the altitude of the target’s center, and
expanding the $H_{i}$ functions in the limit of small $\delta$, taking care of
their definition ranges (E.1.1), we find that the longitudinal force created
on the cylinder of radius $a^{\prime}$ is, at third order:
1. 1.
if $\bar{z}\ll(\ell,L)$ and $\ell>L$ (target’s $z$-extension fully covered by
that of the source):
${\mathcal{F}}_{z}(\bar{z},\delta)\approx-16\pi^{2}G\rho\rho^{\prime}\alpha(K_{1}\delta+K_{3}\delta^{3}),$
(94)
where
$K_{1}=\int_{0}^{\infty}\frac{a^{\prime}J_{1}(ka^{\prime})[bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa
k}{\rm e}^{-\kappa\ell}\sinh(\kappa L){\rm d}k$ (95)
and
$K_{3}=\int_{0}^{\infty}\frac{\kappa}{6}\frac{a^{\prime}J_{1}(ka^{\prime})[bJ_{1}(kb)-aJ_{1}(ka)]}{k}{\rm
e}^{-\kappa\ell}\sinh(\kappa L){\rm d}k.$ (96)
2. 2.
if $\bar{z}\ll(\ell,L)$ and $\ell<L$ (source’s $z$-extension fully covered by
that of the target): the force is formally identical to that of the previous
case, with $\ell$ and $L$ switching their roles.
3. 3.
if $|\bar{z}|>\ell+L$ (cylinders above each other):
${\mathcal{F}}_{z}(\bar{z},\delta)\approx
16\pi^{2}G\rho\rho^{\prime}\alpha(K_{0}+K_{1}\delta+K_{2}\delta^{2}+K_{3}\delta^{3}),$
(97)
with
$K_{0}=-\frac{\bar{z}}{|\bar{z}|}\int_{0}^{\infty}\frac{a^{\prime}J_{1}(ka^{\prime})[bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa^{2}k}{\rm
e}^{-\kappa|\bar{z}|}\sinh(\kappa\ell)\sinh(\kappa L){\rm d}k,$ (98)
$K_{1}=\int_{0}^{\infty}\frac{a^{\prime}J_{1}(ka^{\prime})[bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa
k}{\rm e}^{-\kappa|\bar{z}|}\sinh(\kappa\ell)\sinh(\kappa L){\rm d}k,$ (99)
$K_{2}=-\frac{\bar{z}}{|\bar{z}|}\int_{0}^{\infty}\frac{a^{\prime}J_{1}(ka^{\prime})[bJ_{1}(kb)-aJ_{1}(ka)]}{2k}{\rm
e}^{-\kappa|\bar{z}|}\sinh(\kappa\ell)\sinh(\kappa L){\rm d}k,$ (100)
and
$K_{3}=\int_{0}^{\infty}\frac{\kappa}{6k}a^{\prime}J_{1}(ka^{\prime})[bJ_{1}(kb)-aJ_{1}(ka)]{\rm
e}^{-\kappa|\bar{z}|}\sinh(\kappa\ell)\sinh(\kappa L){\rm d}k.$ (101)
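The first-order coefficient $K_{1}$ of Eq. (95) is straightforward to evaluate by quadrature. Since Eq. (25) is not reproduced in this appendix, the sketch below assumes the usual Yukawa kernel $\kappa(k)=\sqrt{k^{2}+\lambda^{-2}}$; all cylinder dimensions and the range $\lambda$ are hypothetical:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

lam = 0.01                                      # Yukawa range lambda [m] (assumed)
kappa = lambda k: np.sqrt(k**2 + 1.0 / lam**2)  # assumed form of Eq. (25)

a, b = 10.0e-3, 20.0e-3      # source inner/outer radii [m] (hypothetical)
a_p = 8.0e-3                 # target radius a' [m] (hypothetical)
ell, L = 40.0e-3, 20.0e-3    # half-heights [m], with ell > L (case 1)

def integrand(k):
    # Integrand of K1, Eq. (95).
    return (a_p * j1(k * a_p) * (b * j1(k * b) - a * j1(k * a))
            / (kappa(k) * k)
            * np.exp(-kappa(k) * ell) * np.sinh(kappa(k) * L))

# exp(-kappa*ell)*sinh(kappa*L) decays like exp(-kappa*(ell - L)), so the
# integral converges; truncate where the integrand is negligible.
K1 = quad(integrand, 0.0, 5.0e3, limit=400)[0]
```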
#### E.1.5 Small displacements limit: radial force
We assume that the target cylinder (of radius $a^{\prime}$) moves about a
“reference” position ($\bar{x},0,\bar{z}$), with a small displacement $\delta$
along the $X$-axis. Assuming that $\bar{x}\ll a^{\prime}$, at third order in
$\delta/a^{\prime}$,
$I_{r}(k,\theta)\approx\frac{\pi
a^{\prime}}{2}[J_{1}(ka^{\prime})H_{0}(ka^{\prime})-J_{0}(ka^{\prime})H_{1}(ka^{\prime})]+[ka^{\prime}J_{1}(ka^{\prime})\cos\theta]\delta\\\
+\frac{k}{2}[ka^{\prime}J_{0}(ka^{\prime})\cos^{2}\theta-
J_{1}(ka^{\prime})\sin^{2}\theta]\delta^{2}\\\
+\frac{k^{2}}{6}[J_{0}(ka^{\prime})\cos(3\theta)-ka^{\prime}J_{1}(ka^{\prime})\cos^{3}\theta]\delta^{3},$
(102)
so that
$K_{r}(k)\approx\pi ka^{\prime}J_{1}(ka^{\prime})\delta-\frac{\pi
k^{2}}{8}ka^{\prime}J_{1}(ka^{\prime})\delta^{3}.$ (103)
The radial force created on the cylinder of radius $a^{\prime}$ is thus, at
third order
${\mathcal{F}}_{r}(\bar{z},\delta)\approx-2\pi^{2}G\rho\rho^{\prime}\alpha(K_{1}\delta+K_{3}\delta^{3}),$
(104)
where, in the definition ranges (E.1.1):
1. 1.
if $\bar{z}\ll(\ell,L)$ and $\ell>L$ (target’s $z$-extension fully covered by
that of the source):
$K_{1}=4\int_{0}^{\infty}\frac{ka^{\prime}J_{1}(ka^{\prime})[bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa^{2}}\left[L-\frac{{\rm
e}^{-\kappa\ell}}{\kappa}\sinh(\kappa L)\cosh(\kappa\bar{z})\right]{\rm d}k$
(105)
and
$K_{3}=-\int_{0}^{\infty}\frac{k^{3}a^{\prime}J_{1}(ka^{\prime})[bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa^{2}}\left[L-\frac{{\rm
e}^{-\kappa\ell}}{\kappa}\sinh(\kappa L)\cosh(\kappa\bar{z})\right]{\rm d}k.$
(106)
2. 2.
if $\bar{z}\ll(\ell,L)$ and $\ell<L$ (source’s $z$-extension fully covered by
that of the target): the force is formally identical to that of the previous
case, with $\ell$ and $L$ switching their roles.
3. 3.
if $|\bar{z}|>\ell+L$ (cylinders above each other):
$K_{1}=4\int_{0}^{\infty}\frac{ka^{\prime}J_{1}(ka^{\prime})[bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa^{2}}\frac{{\rm
e}^{-\kappa|\bar{z}|}}{\kappa}\sinh(\kappa\ell)\sinh(\kappa L){\rm d}k$ (107)
$K_{3}=-\int_{0}^{\infty}\frac{k^{3}a^{\prime}J_{1}(ka^{\prime})[bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa^{2}}\frac{{\rm
e}^{-\kappa|\bar{z}|}}{\kappa}\sinh(\kappa\ell)\sinh(\kappa L){\rm d}k.$ (108)
### E.2 Forces between hollow cylinders
We finally come back to the problem at hand: the gravitational force between
the two hollow cylinders defined at the beginning of this appendix. By virtue
of the superposition principle, it is given by subtracting the forces between
the hollow source cylinder and two full target cylinders of radii $a^{\prime}$
and $b^{\prime}$. Thus, in the limit of small displacements, the longitudinal
and radial forces are formally given by Eqs. (94), (97) and (104), with the
$K_{i}$ coefficients given below (they are obviously identical to those given
in the main text in the MICROSCOPE coordinate system, where the $x$- and
$z$-axes are swapped).
#### E.2.1 Longitudinal force
1. 1.
if $\bar{z}\ll(\ell,L)$ and $\ell>L$ (target’s $z$-extension fully covered by
that of the source):
${\mathcal{F}}_{z}(\bar{z},\delta)\approx-16\pi^{2}G\rho\rho^{\prime}\alpha(K_{1}\delta+K_{3}\delta^{3}),$
(109)
where
$\displaystyle K_{1}$
$\displaystyle=\int_{0}^{\infty}\frac{[b^{\prime}J_{1}(kb^{\prime})-a^{\prime}J_{1}(ka^{\prime})][bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa
k}{\rm e}^{-\kappa\ell}\sinh(\kappa L){\rm d}k$ (110) $\displaystyle K_{3}$
$\displaystyle=\int_{0}^{\infty}\frac{\kappa}{6}\frac{[b^{\prime}J_{1}(kb^{\prime})-a^{\prime}J_{1}(ka^{\prime})][bJ_{1}(kb)-aJ_{1}(ka)]}{k}{\rm
e}^{-\kappa\ell}\sinh(\kappa L){\rm d}k.$ (111)
2. 2.
if $\bar{z}\ll(\ell,L)$ and $\ell<L$ (source’s $z$-extension fully covered by
that of the target): the force is formally identical to that of the previous
case, with $\ell$ and $L$ switching their roles.
3. 3.
if $|\bar{z}|>\ell+L$ (cylinders above each other):
${\mathcal{F}}_{z}(\bar{z},\delta)\approx
16\pi^{2}G\rho\rho^{\prime}\alpha(K_{0}+K_{1}\delta+K_{2}\delta^{2}+K_{3}\delta^{3}),$
(112)
with
$\displaystyle K_{0}$
$\displaystyle=-\frac{\bar{z}}{|\bar{z}|}\int_{0}^{\infty}\frac{[b^{\prime}J_{1}(kb^{\prime})-a^{\prime}J_{1}(ka^{\prime})][bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa^{2}k}{\rm
e}^{-\kappa|\bar{z}|}\sinh(\kappa\ell)\sinh(\kappa L){\rm d}k$ (113)
$\displaystyle K_{1}$
$\displaystyle=\int_{0}^{\infty}\frac{[b^{\prime}J_{1}(kb^{\prime})-a^{\prime}J_{1}(ka^{\prime})][bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa
k}{\rm e}^{-\kappa|\bar{z}|}\sinh(\kappa\ell)\sinh(\kappa L){\rm d}k$ (114)
$\displaystyle K_{2}$
$\displaystyle=-\frac{\bar{z}}{|\bar{z}|}\int_{0}^{\infty}\frac{[b^{\prime}J_{1}(kb^{\prime})-a^{\prime}J_{1}(ka^{\prime})][bJ_{1}(kb)-aJ_{1}(ka)]}{2k}{\rm
e}^{-\kappa|\bar{z}|}\sinh(\kappa\ell)\sinh(\kappa L){\rm d}k$ (115)
$\displaystyle K_{3}$
$\displaystyle=\int_{0}^{\infty}\frac{\kappa}{6k}[b^{\prime}J_{1}(kb^{\prime})-a^{\prime}J_{1}(ka^{\prime})][bJ_{1}(kb)-aJ_{1}(ka)]{\rm
e}^{-\kappa|\bar{z}|}\sinh(\kappa\ell)\sinh(\kappa L){\rm d}k.$ (116)
#### E.2.2 Radial force
${\mathcal{F}}_{r}(\bar{z},\delta)\approx-2\pi^{2}G\rho\rho^{\prime}\alpha(K_{1}\delta+K_{3}\delta^{3}),$
(117)
1. 1.
if $\bar{z}\ll(\ell,L)$ and $\ell>L$ (target’s $z$-extension fully covered by
that of the source):
$\displaystyle K_{1}$
$\displaystyle=4\int_{0}^{\infty}\frac{k[b^{\prime}J_{1}(kb^{\prime})-a^{\prime}J_{1}(ka^{\prime})][bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa^{2}}\left[L-\frac{{\rm
e}^{-\kappa\ell}}{\kappa}\sinh(\kappa L)\cosh(\kappa\bar{z})\right]{\rm d}k$
(118) $\displaystyle K_{3}$
$\displaystyle=-\int_{0}^{\infty}\frac{k^{3}[b^{\prime}J_{1}(kb^{\prime})-a^{\prime}J_{1}(ka^{\prime})][bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa^{2}}\left[L-\frac{{\rm
e}^{-\kappa\ell}}{\kappa}\sinh(\kappa L)\cosh(\kappa\bar{z})\right]{\rm d}k.$
(119)
2. 2.
if $\bar{z}\ll(\ell,L)$ and $\ell<L$ (source’s $z$-extension fully covered by
that of the target): the force is formally identical to that of the previous
case, with $\ell$ and $L$ switching their roles.
3. 3.
if $|\bar{z}|>\ell+L$ (cylinders above each other):
$\displaystyle K_{1}$
$\displaystyle=4\int_{0}^{\infty}\frac{k[b^{\prime}J_{1}(kb^{\prime})-a^{\prime}J_{1}(ka^{\prime})][bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa^{2}}\frac{{\rm
e}^{-\kappa|\bar{z}|}}{\kappa}\sinh(\kappa\ell)\sinh(\kappa L){\rm d}k$ (120)
$\displaystyle K_{3}$
$\displaystyle=-\int_{0}^{\infty}\frac{k^{3}[b^{\prime}J_{1}(kb^{\prime})-a^{\prime}J_{1}(ka^{\prime})][bJ_{1}(kb)-aJ_{1}(ka)]}{\kappa^{2}}\frac{{\rm
e}^{-\kappa|\bar{z}|}}{\kappa}\sinh(\kappa\ell)\sinh(\kappa L){\rm d}k.$ (121)
### E.3 MICROSCOPE gravitational stiffness
Fig. 12 shows the relative difference between the exact expression (73) and
its first-order Taylor expansion (first term of Eq. 117) for the radial force
created by the parts of MICROSCOPE’s SUEP on the inner test mass, when the
test mass moves within the range used to estimate the stiffness in flight. A
first-order approximation provides a $10^{-5}$ accuracy on the gravitational
forces, and can thus be safely used.
Figure 12: Relative difference between the exact expression (73) and its first
order Taylor expansion (first term of Eq. 117) for the radial force created by
the parts of MICROSCOPE’s SUEP on the inner test mass, as a function of the
displacement of the test mass. Left: Newtonian force. Right: Yukawa force, for
$(\alpha,\lambda)=(1,0.01~{}{\rm m})$; only those cylinders which create a
non-negligible Yukawa force allowing for a well-behaved $F_{\rm
stiffness}/F_{\rm exact}$ ratio are shown.
Finally, note that we compared the analytic results of this appendix with
numerical simulations and with the (different) analytic expressions of Ref.
[69], and found good agreement among all methods.
## References
* [1] Will C M 2014 Living Reviews in Relativity 17 4 (Preprint 1403.7377)
* [2] Bartelmann M and Schneider P 2001 Physics Reports 340 291–472 (Preprint astro-ph/9912508)
* [3] Hoekstra H and Jain B 2008 Annual Review of Nuclear and Particle Science 58 99–123 (Preprint 0805.0139)
* [4] Delva P, Puchades N, Schönemann E, Dilssner F, Courde C, Bertone S, Gonzalez F, Hees A, Le Poncin-Lafitte C, Meynadier F, Prieto-Cerdeira R, Sohet B, Ventura-Traveset J and Wolf P 2018 Physical Review Letters 121 231101 (Preprint 1812.03711)
* [5] Herrmann S, Finke F, Lülf M, Kichakova O, Puetzfeld D, Knickmann D, List M, Rievers B, Giorgi G, Günther C, Dittus H, Prieto-Cerdeira R, Dilssner F, Gonzalez F, Schönemann E, Ventura-Traveset J and Lämmerzahl C 2018 Physical Review Letters 121 231102 (Preprint 1812.09161)
* [6] Abbott B P et al. 2016 Phys. Rev. Lett. 116 061102
* [7] Zwicky F 1933 Helvetica Physica Acta 6 110–127
* [8] Rubin V C and Ford Jr W K 1970 Astrophysical Journal 159 379
* [9] Riess A G, Filippenko A V, Challis P, Clocchiatti A, Diercks A, Garnavich P M, Gilliland R L, Hogan C J, Jha S, Kirshner R P, Leibundgut B, Phillips M M, Reiss D, Schmidt B P, Schommer R A, Smith R C, Spyromilio J, Stubbs C, Suntzeff N B and Tonry J 1998 Astronomical Journal 116 1009–1038 (Preprint astro-ph/9805201)
* [10] Perlmutter S, Aldering G, Goldhaber G, Knop R A, Nugent P, Castro P G, Deustua S, Fabbro S, Goobar A, Groom D E, Hook I M, Kim A G, Kim M Y, Lee J C, Nunes N J, Pain R, Pennypacker C R, Quimby R, Lidman C, Ellis R S, Irwin M, McMahon R G, Ruiz-Lapuente P, Walton N, Schaefer B, Boyle B J, Filippenko A V, Matheson T, Fruchter A S, Panagia N, Newberg H J M, Couch W J and Project T S C 1999 Astrophysical Journal 517 565–586 (Preprint astro-ph/9812133)
* [11] Joyce A, Jain B, Khoury J and Trodden M 2015 Physics Reports 568 1–98 (Preprint 1407.0059)
* [12] Joyce A, Lombriser L and Schmidt F 2016 Annual Review of Nuclear and Particle Science 66 95–122 (Preprint 1601.06133)
* [13] Damour T and Polyakov A M 1994 Nucl. Phys. B 423 532–558 (Preprint hep-th/9401069)
* [14] Damour T, Piazza F and Veneziano G 2002 Physical Review Letters 89 081601 (Preprint gr-qc/0204094)
* [15] Damour T and Esposito-Farese G 1992 Classical and Quantum Gravity 9 2093–2176
* [16] Clifton T, Ferreira P G, Padilla A and Skordis C 2012 Physics Reports 513 1–189 (Preprint 1106.2476)
* [17] Vainshtein A I 1972 Physics Letters B 39 393–394
* [18] Damour T and Nordtvedt K 1993 Phys. Rev. Lett. 70 2217–2219
* [19] Khoury J and Weltman A 2004 Phys. Rev. D 69 044026 (Preprint astro-ph/0309411)
* [20] Khoury J and Weltman A 2004 Phys. Rev. Lett. 93 171104 (Preprint astro-ph/0309300)
* [21] Babichev E, Deffayet C and Ziour R 2009 Int. J. Mod. Phys. D 18 2147–2154 (Preprint 0905.2943)
* [22] Hinterbichler K and Khoury J 2010 Phys. Rev. Lett. 104 231301 (Preprint 1001.4525)
* [23] Brax P, Burrage C and Davis A C 2013 JCAP 1 020 (Preprint 1209.1293)
* [24] Burrage C and Sakstein J 2018 Living Reviews in Relativity 21 1 (Preprint 1709.09071)
* [25] Damour T 2012 Classical and Quantum Gravity 29 184001 (Preprint 1202.6311)
* [26] Uzan J P 2003 Rev. Mod. Phys. 75 403 (Preprint hep-ph/0205340)
* [27] Uzan J P 2011 Living Rev. Rel. 14 2 (Preprint 1009.5514)
* [28] Fischbach E and Talmadge C L 1999 The Search for Non-Newtonian Gravity
* [29] Adelberger E G, Heckel B R and Nelson A E 2003 Annual Review of Nuclear and Particle Science 53 77–121 (Preprint hep-ph/0307284)
* [30] Adelberger E G, Gundlach J H, Heckel B R, Hoedl S and Schlamminger S 2009 Progress in Particle and Nuclear Physics 62 102–134
* [31] Tan W H, Du A B, Dong W C, Yang S Q, Shao C G, Guan S G, Wang Q L, Zhan B F, Luo P S, Tu L C and Luo J 2020 Physical Review Letters 124 051301
* [32] Lee J G, Adelberger E G, Cook T S, Fleischer S M and Heckel B R 2020 Physical Review Letters 124 101101 (Preprint 2002.11761)
* [33] Touboul P, Métris G, Rodrigues M, André Y, Baghi Q, Bergé J, Boulanger D, Bremer S, Carle P, Chhun R, Christophe B, Cipolla V, Damour T, Danto P, Dittus H, Fayet P, Foulon B, Gageant C, Guidotti P Y, Hagedorn D, Hardy E, Huynh P A, Inchauspe H, Kayser P, Lala S, Lämmerzahl C, Lebat V, Leseur P, Liorzou F, List M, Löffler F, Panet I, Pouilloux B, Prieur P, Rebray A, Reynaud S, Rievers B, Robert A, Selig H, Serron L, Sumner T, Tanguy N and Visser P 2017 Physical Review Letters 119 231101 (Preprint 1712.01176)
* [34] Touboul P, Métris G, Rodrigues M, André Y, Baghi Q, Bergé J, Boulanger D, Bremer S, Chhun R, Christophe B, Cipolla V, Damour T, Danto P, Dittus H, Fayet P, Foulon B, Guidotti P Y, Hardy E, Huynh P A, Lämmerzahl C, Lebat V, Liorzou F, List M, Panet I, Pires S, Pouilloux B, Prieur P, Reynaud S, Rievers B, Robert A, Selig H, Serron L, Sumner T and Visser P 2019 Classical and Quantum Gravity 36 225006 (Preprint 1909.10598)
* [35] Bergé J, Brax P, Métris G, Pernot-Borràs M, Touboul P and Uzan J P 2018 Physical Review Letters 120 141101 (Preprint 1712.00483)
* [36] Fayet P 2018 Phys. Rev. D 97 055039 (Preprint 1712.00856)
* [37] Fayet P 2019 Phys. Rev. D 99(5) 055043 URL https://link.aps.org/doi/10.1103/PhysRevD.99.055043
* [38] Touboul P in prep Class. Quant. Grav.
* [39] Métris G in prep Class. Quant. Grav.
* [40] Bergé J, Brax P, Pernot-Borràs M and Uzan J P 2018 Classical and Quantum Gravity 35 234001 (Preprint 1808.00340)
* [41] Kapner D J, Cook T S, Adelberger E G, Gundlach J H, Heckel B R, Hoyle C D and Swanson H E 2007 Physical Review Letters 98 021101 (Preprint hep-ph/0611184)
* [42] Masuda M and Sasaki M 2009 Physical Review Letters 102 171101 (Preprint 0904.1834)
* [43] Sushkov A O, Kim W J, Dalvit D A R and Lamoreaux S K 2011 Physical Review Letters 107 171101 (Preprint 1108.2547)
* [44] Klimchitskaya G L and Mostepanenko V M 2014 Gravitation and Cosmology 20 3–9 (Preprint 1403.5778)
* [45] Yang S Q, Zhan B F, Wang Q L, Shao C G, Tu L C, Tan W H and Luo J 2012 Physical Review Letters 108 081101
* [46] Tan W H, Yang S Q, Shao C G, Li J, Du A B, Zhan B F, Wang Q L, Luo P S, Tu L C and Luo J 2016 Physical Review Letters 116 131101
* [47] Touboul P, Rodrigues M, Métris G, Chhun R, Robert A, Baghi Q, Hardy E, Bergé J, Boulanger D, Christophe B, Cipolla V, Foulon B, Guidotti P Y, Huynh P A, Lebat V, Liorzou F, Pouilloux B, Prieur P and Reynaud S 2020 arXiv e-prints arXiv:2012.06472 (Preprint 2012.06472)
* [48] Liorzou F, Touboul P, Rodrigues M, Métris G, André Y, Bergé J, Boulanger D, Bremer S, Chhun R, Christophe B, Danto P, Foulon B, Hagedorn D, Hardy E, Huynh P A, Lämmerzahl C, Lebat V, List M, Löffler F, Rievers B, Robert A and Selig H 2020 arXiv e-prints arXiv:2012.11232 (Preprint 2012.11232)
* [49] Robert A, Cipolla V, Prieur P, Touboul P, Métris G, Rodrigues M, André Y, Bergé J, Boulanger D, Chhun R, Christophe B, Guidotti P Y, Hardy E, Lebat V, Lienart T, Liorzou F and Pouilloux B 2020 arXiv e-prints arXiv:2012.06479 (Preprint 2012.06479)
* [50] Damour T and Donoghue J F 2010 Physical Review D 82 084033 (Preprint 1007.2792)
* [51] Damour T and Donoghue J F 2010 Class. Quant. Grav. 27 202001 (Preprint 1007.2790)
* [52] Chhun R in prep Class. Quant. Grav.
* [53] Bergé J, Baghi Q, Hardy E, Métris G, Robert A, Rodrigues M, Touboul P, Chhun R, Guidotti P Y, Pires S, Reynaud S, Serron L and Travert J M 2020 arXiv e-prints arXiv:2012.06484 (Preprint 2012.06484)
* [54] Saulson P R 1990 Physical Review D 42 2437–2445
* [55] Willemenot E and Touboul P 2000 Review of Scientific Instruments 71 302–309
* [56] Nofrarias i Serra M 2007 Thermal Diagnostics in the LISA Technology Package Experiment Ph.D. thesis Departament de Física Fonamental. Universitat de Barcelona
* [57] Carbone L, Cavalleri A, Ciani G, Dolesi R, Hueller M, Tombolato D, Vitale S and Weber W J 2007 Physical Review D 76 102003 (Preprint 0706.4402)
* [58] Hardy E in prep Class. Quant. Grav.
* [59] Speake C C 1996 Classical and Quantum Gravity 13 A291–A297
* [60] Hudson D 2007 Investigation expérimentale et théorique du prototype du capteur inertiel pour la vérification du principe d’équivalence dans la mission spatiale MICROSCOPE Ph.D. thesis Université Pierre et Marie Curie
* [61] Touboul P 2009 Space Science Reviews 148 455–474
* [62] Hoskins J K, Newman R D, Spero R and Schultz J 1985 Physical Review D 32 3084–3095
* [63] Sumner T J, Shaul D N A, Schulte M O, Waschke S, Hollington D and Araújo H 2009 Classical and Quantum Gravity 26 094006
* [64] Armano M, Audley H, Baird J, Binetruy P, Born M, Bortoluzzi D, Castelli E, Cavalleri A, Cesarini A, Cruise A M, Danzmann K, de Deus Silva M, Diepholz I, Dixon G, Dolesi R, Ferraioli L, Ferroni V, Fitzsimons E D, Freschi M, Gesa L, Gibert F, Giardini D, Giusteri R, Grimani C, Grzymisch J, Harrison I, Heinzel G, Hewitson M, Hollington D, Hoyland D, Hueller M, Inchauspé H, Jennrich O, Jetzer P, Karnesis N, Kaune B, Korsakova N, Killow C J, Lobo J A, Lloro I, Liu L, López-Zaragoza J P, Maarschalkerweerd R, Mance D, Meshksar N, Martín V, Martin-Polo L, Martino J, Martin-Porqueras F, Mateos I, McNamara P W, Mendes J, Mendes L, Nofrarias M, Paczkowski S, Perreur-Lloyd M, Petiteau A, Pivato P, Plagnol E, Ramos-Castro J, Reiche J, Robertson D I, Rivas F, Russano G, Slutsky J, Sopuerta C F, Sumner T, Texier D, Thorpe J I, Vetrugno D, Vitale S, Wanner G, Ward H, Wass P J, Weber W J, Wissel L, Wittchen A and Zweifel P 2018 Physical Review Letters 120 061101
* [65] Pernot-Borràs M, Bergé J, Brax P and Uzan J P in prep Physical Review D
* [66] Pernot-Borràs M, Bergé J, Brax P and Uzan J P 2019 Physical Review D 100 084006 (Preprint 1907.10546)
* [67] Pernot-Borràs M, Bergé J, Brax P and Uzan J P 2020 Physical Review D 101 124056 (Preprint 2004.08403)
* [68] Lockerbie N A, Veryaskin A V and Xu X 1996 Journal of Physics A Mathematical General 29 4649–4663
* [69] Hoyle C D, Kapner D J, Heckel B R, Adelberger E G, Gundlach J H, Schmidt U and Swanson H E 2004 Physical Review D 70 042004 (Preprint hep-ph/0405262)
# Constraints on chameleon gravity from the measurement of the electrostatic
stiffness of the MICROSCOPE mission accelerometers
Martin Pernot-Borràs<EMAIL_ADDRESS>DPHY, ONERA, Université
Paris Saclay, F-92322 Châtillon, France Institut d’Astrophysique de Paris,
CNRS UMR 7095, Université Pierre & Marie Curie - Paris VI, 98 bis Bd Arago,
75014 Paris, France Joel Bergé<EMAIL_ADDRESS>DPHY, ONERA, Université
Paris Saclay, F-92322 Châtillon, France Philippe Brax Institut de Physique
Théorique, Université Paris-Saclay, CEA, CNRS, F-91191 Gif-sur-Yvette Cedex,
France Jean-Philippe Uzan<EMAIL_ADDRESS>Institut d’Astrophysique de Paris,
CNRS UMR 7095, Université Pierre & Marie Curie - Paris VI, 98 bis Bd Arago,
75014 Paris, France Sorbonne Universités, Institut Lagrange de Paris, 98 bis,
Bd Arago, 75014 Paris, France Gilles Métris Université Côte d’Azur,
Observatoire de la Côte d’Azur, CNRS, IRD, Géoazur, 250 avenue Albert
Einstein, F-06560 Valbonne, France Manuel Rodrigues DPHY, ONERA, Université
Paris Saclay, F-92322 Châtillon, France Pierre Touboul DPHY, ONERA,
Université Paris Saclay, F-92322 Châtillon, France
###### Abstract
This article is dedicated to the use of the MICROSCOPE mission’s data to test
the chameleon theory of gravity. We take advantage of the technical sessions
aimed at characterizing the electrostatic stiffness, intrinsic to its
capacitive measurement system, of MICROSCOPE’s instrument. Any discrepancy
between the expected and measured stiffness may result from unaccounted-for
contributors, i.e. extra forces. This work considers the case of chameleon gravity as a
possible contributor. It was previously shown that in situations similar to
these measurement sessions, a chameleon fifth force appears and acts as a
stiffness for small displacements. The magnitude of this new component of the
stiffness is computed over the chameleon’s parameter space. It allows us to
derive constraints by excluding any force inconsistent with the MICROSCOPE
data. As expected –since MICROSCOPE was not designed for the purpose of such
an analysis–, these new bounds are not competitive with state-of-the-art
constraints, but they could be improved by a better estimation of all effects
at play in these sessions. Hence our work illustrates this novel technique as
a new way of constraining fifth forces.
## I Introduction
This article follows up on a series of articles PRD1 ; PRD2 ; PRD3 aiming
to test modified gravity theories with data from the MICROSCOPE mission. This
mission provided the tightest constraint to date on the weak equivalence
principle (WEP) touboul_microscope_2017 ; Touboul_2019 . Its instrument is
based on a couple of accelerometers measuring the differential acceleration of
two cylindrical test masses of different compositions. It contains four test
masses: two cylinders of different composition in the SUEP (Equivalence
Principle test Sensor Unit) sensor unit that is used to perform the WEP test
and two cylinders of same composition in the SUREF (Reference Sensor Unit)
sensor unit used as a reference. In Ref. PRL , we directly used the WEP test
results to improve the current constraints on the existence of unscreened
scalar fifth forces, a massive Yukawa fifth force and a light dilaton field
PhysRevD.82.084033 .
In Ref. CQG2 , we proposed a new way of testing such theories by using
sessions dedicated to measuring the electrostatic stiffness inherent to the
capacitive measurement system of MICROSCOPE. An electrostatic destabilizing
force appears when a test mass is displaced from its rest position: it depends
linearly on this displacement in the limit where the displacement is small. We
call the associated linear factor the stiffness. It has been measured by applying
a sinusoidal displacement of each test mass separately with an amplitude of
$5\,\mu{\rm m}$. The result of this series of tests has been compared to
electrostatic models and a discrepancy has been pinpointed Chhun . In Ref.
CQG2 , we modeled the total stiffness and studied all possible sources of
forces to explain this discrepancy. They consist mainly of: (1) the
satellite’s Newtonian self-gravity and (2) the stiffness of a 7-$\mu{\rm
m}$-thick gold wire used to control the electrical potential of the test
masses, which acts as a spring. We found that the contribution of the former is sub-dominant. After
determining the parameters of the latter to evaluate its contribution to the
stiffness, we found an unexplained residual component that depends on the
electrical configuration, hinting at patch field effects. We nonetheless
considered the possibility that this discrepancy may originate from modified
gravity fifth forces sourced by the satellite and experimental apparatus. We
have already been able to set constraints on a Yukawa-like interaction by
excluding any parameters of the interaction that lead to a stiffness larger
than the discrepancy CQG2 . As expected, since MICROSCOPE was not originally
designed for such a test –leading for instance to a loose estimation of the
gold-wire stiffness–, the constraints are not competitive with state-of-the-art
constraints, but it opens a possible novel way of testing fifth forces and
demonstrates that their effect has to be modeled in detail at each step of
the experiment.
This article aims to extend this analysis to the chameleon gravity model
(khoury_chameleon_2004a, ; khoury_chameleon_2004, ). Unlike the Yukawa model, this
scalar field enjoys a screening mechanism that makes its fifth force more
sensitive to the matter environment and more subtle to compute. We use the
numerical methods developed in Refs. PRD1 ; PRD2 to compute the chameleon
profile associated with a geometry of nested cylinders. In these articles, we
studied the case of dis-centering one of the cylinders and showed that it
should experience a chameleonic force acting as a stiffness for small
displacements. Its magnitude depends on the geometrical parameters of the
cylinders and on the parameters of the chameleon theory. This study was
performed for only two nested cylinders. Here, we extend these methods to
compute the field and the force associated with the geometry of MICROSCOPE’s
instrument with the proper geometrical parameters. Each sensor unit is
composed of eight cylinders: two test-mass cylinders, each of which is
surrounded by two electrode cylinders; and two ferrule cylinders encompassing
all six cylinders Touboul_2019 . The ends of these ferrules are closed by two
“lids” that we do not consider in this study.
This article is organized as follows. In Section II, we detail the methods
used to compute the chameleon stiffness, and more particularly the necessity
of different approximations for the different regimes of the chameleon
gravity. In Section III, we present the constraints obtained by combining the
results of these computations and the analysis of the MICROSCOPE stiffness
measurement sessions from Ref. CQG2 . To finish, in Section IV, we discuss our
results and the limits of this new approach.
## II Chameleon stiffness
### II.1 Methods
We use three different methods to compute the chameleon stiffness depending on
the regimes of the chameleon field. These regimes occur for the MICROSCOPE
geometry for different zones of the chameleon parameter space PRD1 . The
chameleon field is parameterized by three parameters: its coupling constant to
matter $\beta$, and the energy scale $\Lambda$ and index $n$ of its inverse-
power law potential. We can distinguish three main regimes: the screened
regime in which a test mass and the two electrode cylinders surrounding it can
be considered as an isolated system due to the fact that the electrode
cylinders screen the field; a deeply screened regime in which the screening of
the test mass is too deep to compute the profile associated to three cylinders
and instead we need to consider it as two separate pairs of screened
cylinders; and an unscreened regime in which the field penetrates all cylinders
so that all of them must be taken into account when computing the field
profile. Let us detail the computation techniques used in each regime.
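As a compact summary, the choice among the three computation methods can be sketched as a simple dispatch on the ratio between the field's Compton wavelength in a cylinder and the cylinder's thickness; the thresholds below follow the values quoted in the subsections that follow, and the function itself is only an illustrative sketch, not the code used in this work:

```python
def pick_regime(lambda_c, thickness):
    """Select the computation method from the screening ratio.
    lambda_c: Compton wavelength of the field in the cylinder;
    thickness: cylinder wall thickness (same units).
    Thresholds follow the text: unscreened when the Compton wavelength
    exceeds the thickness, deeply screened when it falls below a
    twentieth of it, screened in between."""
    if lambda_c > thickness:
        return "unscreened: 1D resolution, 16 parallel walls"
    if lambda_c < thickness / 20.0:
        return "deeply screened: 1D, pairs of screened walls"
    return "screened: semi-analytic 2D, three nested cylinders"
```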
#### Screened regime.
This regime appears when the Compton wavelengths of the field in the cylinders
are of the order of a twentieth of their thickness. It can be addressed by
using the semi-analytic 2D model we developed in Ref. PRD2 . This method was
initially applied to two cylinders. Here, we modify it to include a third one.
We impose the boundary conditions in the two external cylinders in such a way
that the field must reach the minimum of the potentials associated with their
densities. We displace the central test-mass cylinder and solve for the
field’s multipoles, from which we compute the force.
#### Deeply screened regime.
This regime occurs when the Compton wavelengths are smaller than a twentieth
of the cylinder’s thickness. In this regime the screening of the test mass
makes it impossible to use the previous method: the value of the field reached
deep in the test mass is so close to the value that minimizes its potential
that the difference is smaller than the typical numerical precision of a
computer. We instead use a 1D method, and consider the three cylinders as two
distinct pairs of screened parallel walls. To mimic two opposite sides of the
cylinders, we consider two such systems. This 1D approximation is justified by
the fact that we showed, in Ref. PRD2 , that the chameleonic force computed in
these planar and cylindrical configurations leads to the same order of
magnitude for the acceleration experienced by a test mass. We thus postulate,
for these analogous situations, that the test masses’ accelerations verify
$a_{\rm 2D}=\alpha\,a_{\rm 1D}$, where $\alpha$ is a geometrical factor that
is expected to be of order unity.
From this equality, by using Newton’s law, one can obtain a relation between
the surface force $F_{\rm s,1D}$ experienced by the two walls in a planar
configuration and the force per unit length $F_{\rm l,2D}$ experienced by a
cylinder in the corresponding 2D configuration. The ratio of masses reduces to
the ratio between the wall thickness and the transverse section area of the
cylinder, leading to the relation
$F_{\rm l,2D}\approx\alpha\frac{\pi\left[(d+e)^{2}-d^{2}\right]}{2\,e}F_{\rm
s,1D},$ (1)
where $d$ and $e$ are respectively the internal radius and the thickness of
the test mass cylinder. The value of $\alpha$ is discussed in Fig. 1 and
below.
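Equation (1) can be sketched as a short numerical helper; the function name and the illustrative inputs below are ours and do not correspond to the actual MICROSCOPE geometry:

```python
import math

def force_per_unit_length_2d(f_s_1d, d, e, alpha=0.5):
    """Eq. (1): map the surface force F_s,1D (N/m^2) on a pair of
    parallel walls to the force per unit length F_l,2D (N/m) on the
    corresponding cylinder. d: internal radius of the test-mass
    cylinder (m), e: its thickness (m); alpha is the geometrical
    factor of order unity discussed in the text."""
    return alpha * math.pi * ((d + e)**2 - d**2) / (2.0 * e) * f_s_1d

# Illustrative numbers only (not MICROSCOPE values):
print(force_per_unit_length_2d(1e-9, d=30e-3, e=5e-3))
```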
#### Unscreened regime.
This regime takes place when the Compton wavelengths of the field in the
cylinders are larger than their thicknesses. In this case the boundary
conditions must be set at some distance much larger than the Compton
wavelength associated with the density outside the cylinders. In this regime,
this Compton wavelength is likely to be so large that one must take large
steps in the numerical resolution in this zone, hence losing accuracy in the
result. To overcome this issue we again addressed this regime
with a 1D resolution. In a 1D problem, as discussed in Ref. PRD1 , the
chameleon equation can indeed be integrated once in the region external to the
cylinders to obtain, at the boundary of the external cylinder, a condition
$\phi^{\prime}[\phi(x_{\rm b})]$ giving the field derivative as a function of
the field value, which ensures that the boundary conditions are respected far
from it. We use this condition in a dichotomy (bisection) method to adjust the
initial condition of our numerical scheme. We proceed in the same way as for
the asymmetrical parallel walls of Ref. PRD2 , except that instead of
verifying, for the dichotomy, that the boundary conditions are respected at
some large distance from the cylinders, we check that the aforementioned
condition is satisfied at the boundary of the outer
cylinder. These two conditions are equivalent but the latter allows us to
bypass solving the field in the external region.
Similarly to the previous regime, in this 1D resolution, to mimic two opposite
radial directions of an 8-nested-cylinder configuration, we consider a set of
16 parallel walls. In this 1D configuration a test mass is represented by two
of these walls. We again use Eq. (1) to compute the corresponding 2D force.
Note that due to the symmetry breaking by the shifting of the walls, the
initial conditions cannot be set at the center of the 16 walls but instead at
a slightly shifted location that we determine similarly as in Ref. PRD2 .
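The dichotomy adjustment of the initial condition can be illustrated on a toy 1D problem. The simple integrator, the linear source term used in the example, and the boundary-condition function below are stand-ins of our own devising, not the actual chameleon system solved in the paper:

```python
import numpy as np

def shoot(phi0, x, dVdphi):
    """Integrate phi'' = dVdphi(phi) from x[0], with phi(x[0]) = phi0
    and phi'(x[0]) = 0 (semi-implicit Euler); return (phi, phi')
    at the outer boundary x[-1]."""
    phi, dphi = phi0, 0.0
    for i in range(len(x) - 1):
        h = x[i + 1] - x[i]
        dphi += h * dVdphi(phi)
        phi += h * dphi
    return phi, dphi

def adjust_by_dichotomy(x, dVdphi, bc, lo, hi, tol=1e-12, itmax=200):
    """Bisection on the initial value phi0 so that, at the outer
    boundary, phi' matches the condition bc(phi_b) -- the 1D analogue
    of checking phi'[phi(x_b)] at the boundary of the outer cylinder."""
    def mismatch(phi0):
        phi_b, dphi_b = shoot(phi0, x, dVdphi)
        return dphi_b - bc(phi_b)
    f_lo = mismatch(lo)
    for _ in range(itmax):
        mid = 0.5 * (lo + hi)
        if f_lo * mismatch(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
            f_lo = mismatch(lo)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

For the linear toy source term $dV/d\phi=\phi$, the solution is $\phi(x)=\phi_0\cosh x$, so imposing $\phi'(x_{\rm b})=\sinh(1)$ at $x_{\rm b}=1$ drives the adjusted $\phi_0$ towards 1.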
Figure 1: Result of the comparison of the acceleration experienced by a test
mass in planar and cylindrical configurations in different regimes of
screening quantified by $\frac{e}{2\lambda_{\rm c}}$, where $e$ is the
thickness of the cylinder and $\lambda_{\rm c}$ the Compton wavelength of the
field associated to it.
To evaluate $\alpha$ in Eq. (1), we compare the forces computed in 1D and 2D.
This requires extending the method used in the screened regime to the other
regimes. To overcome the problem encountered in these regimes, we considered
an unrealistic configuration of three cylinders of the same density with an
external vacuum much denser than the vacuum of space. This allows us to avoid the
numerical resolution issue encountered in the unscreened regime. Even if
unrealistic, it allows us to quantify the geometrical factor between planar
and cylindrical geometries, which we expect to be independent of the densities.
As depicted in Fig. 1, the numerical comparison strongly hints at $\alpha=1/2$,
a value reached in most of the screening range but that appears to be smaller
for unscreened situations. We interpret this latter behavior as the 2D method
reaching its limits, and we instead expect $\alpha=1/2$ in this regime as well. This
is justified by the longer Compton wavelength in this regime, which causes the
field’s gradient to vary slowly within the cylinder. By approximating this
gradient by the one obtained in planar situations, one directly obtains Eq. (1)
with $\alpha=1/2$ 111The origin of this value comes from the fact that while
for planar situations all parts of a wall are subjected to a force, for
cylindrical configurations, only the parts of the cylinder that are closer to
the axis of displacement contribute to the acceleration. This is due to the
projection of the force that is mainly radially directed and to the effective
radial displacement of the cylinder that varies with the cylindrical angle..
Hence we choose to generalize this result to all regimes in our present study.
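The footnote’s projection argument for $\alpha=1/2$ can be made explicit. Under the planar-gradient approximation stated above, projecting the mainly radial force onto the displacement axis contributes one factor of $\cos\theta$, and the effective radial displacement varying with the cylindrical angle contributes another; averaging over the angle then gives (a sketch, under these stated approximations)

```latex
\alpha=\frac{1}{2\pi}\int_{0}^{2\pi}\cos^{2}\theta\,{\rm d}\theta=\frac{1}{2}.
```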
### II.2 Results
Figure 2: Scaling of the chameleon fifth force as a function of the
displacement in the range $\delta=1\ldots 10~{}\mu$m for different sets of
parameters ($\Lambda,\beta)$ assuming $n=1$. The force is expressed in
newtons. $\Lambda$ are chosen in the range $10^{-1}-3\times 10^{2}$ eV and
$\beta$ in the range $6-10^{7}$. This shows that $\log F=\log k_{\rm
chameleon}(\Lambda,\beta)+\log\delta$ is a good approximation to the behavior
of the force at small displacements. We use a log-log plot for convenience but
it is easily checked that the slope is unity so that linearity is confirmed.
First, we check numerically that the force is linear for small displacements.
As shown in Ref. PRD2 , this is expected to be the case even though the theory
is non-linear. Figure 2 depicts the behavior of $F(\delta)$ in the range
$\delta=1\ldots 10~{}\mu$m relevant for our study. Besides, we know that by
symmetry $F(0)=0$. Hence it confirms that in this range of displacements it is
safe to model the chameleon fifth force by a stiffness $k_{\rm
chameleon}(\Lambda,\beta)$ (measured in N.m-1) so that
$F=k_{\rm chameleon}(\Lambda,\beta)\times\delta+{\cal O}(\delta^{2}).$ (2)
Even though one can see a small deviation from this linear relation at
$\delta\sim 10~{}\mu$m for the largest values of $\Lambda$, these results
support the linearity assumption in the range of displacements compatible with
the MICROSCOPE data we are using and the parameter space we consider.
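This linearity check can be reproduced in miniature: generate a force with a small quadratic contamination and verify that the log-log slope is unity. The stiffness value and the size of the contamination below are arbitrary placeholders, not MICROSCOPE numbers:

```python
import numpy as np

k = 2.5e-11                             # placeholder stiffness, N/m
delta = np.linspace(1e-6, 10e-6, 20)    # displacements, 1..10 micrometers
F = k * delta * (1.0 + 0.001 * delta / delta.max())  # tiny O(delta^2) term

# Fit log10 F = logk + slope * log10 delta; linearity means slope ~ 1,
# and the intercept recovers the stiffness.
slope, logk = np.polyfit(np.log10(delta), np.log10(F), 1)
print(f"slope = {slope:.3f}, recovered k = {10**logk:.3e} N/m")
```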
Then, we present the numerical results in Fig. 3. We computed the chameleonic
stiffness $k_{\rm chameleon}(\Lambda,\beta)$ experienced by a test mass when
displaced radially by $1\,\mu{\rm m}$. This figure shows the result for SUEP-
IS2, the external test mass of SUEP. We spanned the parameter space
$(\beta,\Lambda)$ for $n=1$; each computation is denoted by a point with a
color code that labels which of the three methods was used. To obtain the
continuous evolution of the stiffness with $(\beta,\Lambda)$, we performed a
linear interpolation of the simulation points in log-scale. We show with the
black solid line, the contour line at which the obtained chameleonic stiffness
equals the 2-$\sigma$ uncertainty on the discrepancy $\Delta k_{\rm MIC}$ on
the stiffness measured in the MICROSCOPE sessions as presented in Ref. CQG2 .
This latter article presents two distinct estimations over two perpendicular
radial axes of the cylinder; the chameleonic stiffness being expected to be
the same over these axes, we choose to average these two estimations and
quadratically average the error bars.
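The combination of the two per-axis estimates described above can be written as a two-line helper. We read "quadratically average" as the root mean square of the two error bars; that reading, and the function itself, are our own sketch:

```python
import math

def combine_axes(k_x, err_x, k_y, err_y):
    """Average the stiffness discrepancies measured along two
    perpendicular radial axes (the chameleonic stiffness is expected
    to be identical on both) and quadratically average their error
    bars (RMS -- our reading of the procedure)."""
    k_mean = 0.5 * (k_x + k_y)
    err_rms = math.sqrt(0.5 * (err_x**2 + err_y**2))
    return k_mean, err_rms
```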
Figure 3: Evolution with the parameters $\beta$ and $\Lambda$ of the
chameleonic stiffness $k_{\rm chameleon}$ of the external test mass of the
sensor unit SUEP from the MICROSCOPE mission for $n=1$. Its magnitude is shown
by the background colors. This function is obtained by linearly interpolating
the data points. These points are the result of the three numerical methods
discussed in the main text, here distinguished by points of different colors.
This is represented in log-scale for $\beta$, $\Lambda$ and $k_{\rm
chameleon}$. The uniform dark blue region corresponds to parameters for which
we are unable to compute the stiffness. The black line is the contour line at
which the stiffness is equal to the measured 2-$\sigma$ uncertainty on the
discrepancy $\Delta k_{\rm MIC}$ in the MICROSCOPE experiment.
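The log-scale linear interpolation used to build this stiffness map can be sketched, on a regular grid, with a plain bilinear rule; the function and the grid in the test are illustrative stand-ins for the actual simulation points:

```python
import numpy as np

def interp_logk(logb_grid, logL_grid, logk_grid, logb, logL):
    """Bilinear interpolation of log10(k_chameleon) on a regular grid
    of (log10 beta, log10 Lambda) simulation points, giving a
    continuous stiffness map from which contour lines can be drawn."""
    i = np.clip(np.searchsorted(logb_grid, logb) - 1, 0, len(logb_grid) - 2)
    j = np.clip(np.searchsorted(logL_grid, logL) - 1, 0, len(logL_grid) - 2)
    tb = (logb - logb_grid[i]) / (logb_grid[i + 1] - logb_grid[i])
    tL = (logL - logL_grid[j]) / (logL_grid[j + 1] - logL_grid[j])
    return ((1 - tb) * (1 - tL) * logk_grid[i, j]
            + tb * (1 - tL) * logk_grid[i + 1, j]
            + (1 - tb) * tL * logk_grid[i, j + 1]
            + tb * tL * logk_grid[i + 1, j + 1])
```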
## III Constraints on the chameleon’s parameters
The results shown in Fig. 3 mean that above the black line the chameleonic
stiffness is too large to explain the observed stiffness residual in
MICROSCOPE. This stiffness could be compatible with these measurements if a
stabilizing stiffness of the same magnitude were to exist. Nevertheless,
standard physics combined with our understanding of the instrument does not
provide any such contribution. Hence we interpret these results as excluding
the existence of a chameleon field for these parameters. Below the black line,
the chameleonic stiffness is within the error bars of the observed discrepancy
so that we cannot exclude its existence.
Note that we have not been able to span the whole parameter space. Our methods
are unable to determine the stiffness for large $\beta$ and $\Lambda$. We
expect this to be caused by the fact that the field magnitude becomes so large
that our numerical precision fails at describing the gradient in the test
mass. Thus, the force vanishes. Nevertheless we can guess the behavior of the
stiffness in these unexplored regions. For very large $\Lambda$, the field
tends to be completely unscreened such that we expect it to converge towards a
flat field providing a lower force. For very large $\beta$, on the contrary,
the field tends to be more screened. At some point we expect the field to be
able to reach the minimum of its potential in the inter-cylinder vacuum gaps,
such that the cylinders would not interact through the scalar field anymore.
In this case the field is equivalent to the field of an infinitely thick
cylinder and gap. Given the inter-cylinder gaps of $600\,\mu{\rm m}$ for
MICROSCOPE, we expect this to happen for $\beta\gtrsim 10^{19}$. We thus
expect the MICROSCOPE constraint to have a rectangular shape.
Figure 4: Constraints on the chameleon model for $n=1$ from the MICROSCOPE
experiment using stiffness measurement sessions: the region excluded at
2-$\sigma$ is above the four lines described in the legend. They correspond to
the different test masses: IS1 (resp. IS2) denotes the internal (resp.
external) test masses of the SUREF and SUEP sensor units. Their constraints
are compared to the current constraints from other experiments denoted with
the colored regions as presented in Refs. (burrage_tests_2018, ; BraxReview,
). They come from atomic interferometry (purple, hamilton_atom-
interferometry_2015 ; jaffe_testing_2017 ), Eöt-Wash group’s torsion balance
experiments (green, (upadhye_dark_2012, ; PhysRevLett.98.021101, )), Casimir
effect measurements (yellow, brax_detecting_2007 ; PhysRevD.75.077101 ),
astrophysics tests (blue, Jain_2013 ; Cabr__2012 ; PhysRevD.97.104055 ) and
lensing (pink, 10.1093/mnras/stw1617 ), precision atomic tests (orange,
PhysRevD.83.035020 ; PhysRevD.82.125020 ), microsphere (blue line,
PhysRevLett.117.101101 ) and neutron interferometry (blue and red point,
LEMMEL2015310 ; PhysRevD.93.062001 ). The horizontal dotted line denotes the
energy scale of dark energy.
We applied the same procedure to the other three test masses. The results are
summarized in Fig. 4. It shows the 2$\sigma$-constraints from each test mass:
the internal mass of each sensor unit is called IS1 and the external one IS2.
We compare the MICROSCOPE constraints to the current constraints summarized in
Refs. burrage_tests_2018 ; BraxReview . They overlap the constraints from atom
interferometry hamilton_atom-interferometry_2015 ; jaffe_testing_2017 ,
torsion balances (upadhye_dark_2012, ; PhysRevLett.98.021101, ) and Casimir
effect experiments brax_detecting_2007 ; PhysRevD.75.077101 . Nevertheless,
they are not competitive with current constraints. This is not surprising
since MICROSCOPE was not designed for this test.
## IV Discussion
The best constraints are obtained from the internal test masses, IS1. This is
explained by a better estimation, by one order of magnitude, of the gold-wire
stiffness CQG2 , leading to a lower residual stiffness. The competitiveness of
the internal masses is nonetheless reduced by their shortness relative to the
external ones Touboul_2019 . We observe that the constraints from the internal
test masses are very similar; their slight difference is caused only by a
slightly different residual stiffness. They indeed experience the same
chameleonic stiffness, consistent with the fact that they have the same
geometrical parameters and composition. This tells us that the effect on the
inner masses of the external test masses, which have different compositions in
the two sensor units, is negligible even in the unscreened regime (upper part
of the constraint).
Comparing the chameleonic forces on the external test masses, which have the
same geometrical parameters but different densities, is interesting for the
phenomenology of a WEP violation. This requires normalizing them by their
masses. Doing so reveals that, in these dis-centered configurations, they each
experience a different acceleration in both the screened and unscreened
regimes. This confirms the ability of the chameleon field to produce an
apparent WEP-violation signal solely as a result of the different densities of
the test masses, through their different screening factors khoury_chameleon_2004 .
This has no direct repercussion on MICROSCOPE's WEP test since: (1) the test
is performed in a situation where the coaxiality of all cylinders is well
controlled Touboul_2019 ; (2) it is performed on a pair of test masses, IS1
and IS2, belonging to the same sensor unit, for which the different
geometrical parameters could also be the source of a differential
acceleration. This dependence of the force on the test masses' densities
nonetheless hints at an apparent chameleonic WEP violation appearing in
MICROSCOPE's WEP test. Note that the common wisdom that chameleons induce an
apparent WEP violation in screened regimes (khoury_chameleon_2004a, ;
khoury_chameleon_2004, ) is not applicable to MICROSCOPE's test of the WEP,
since in this case the satellite itself screens the Earth's chameleon field
PRD1 , preventing any WEP-violation signal at the frequency targeted by
MICROSCOPE. Instead, we expect such a signal to appear in a lightly screened
regime where the Earth's chameleon profile can penetrate the instrument. Of
course, in such a regime the density dependence of the force would be
weakened, but the signal it induces might still be detectable if the precision
of the experiment is high enough. Estimating this effect is beyond the scope
of this article.
We obtained these new constraints from numerical simulations of the chameleon
profiles in the nested-cylinder geometry of the MICROSCOPE experiment. Some
approximations must be discussed. First, when evaluating the chameleonic
stiffness, we used the profiles of infinitely extended cylinders. Since
MICROSCOPE's cylinders are finite, we expect edge effects that would require
3D simulations to quantify, which are beyond the scope of this study.
Nevertheless, we expect these effects to decrease the computed stiffness. We
indeed predict the field to behave as follows. On the one hand, far from the
ends of a cylinder, the transverse profile should be close to that of infinite
cylinders. On the other hand, at the ends, it should be influenced by the two
cylindrical “lids” that close the ends of the electrode cylinders. We expect
the presence of this matter to flatten the chameleon profile in comparison to
the profile of infinite cylinders. This flattening would reduce the gradients
in the test mass at its ends, so our computed stiffness is overestimated and
our constraints would be slightly weakened.
Another assumption is that we computed the profile for a static configuration,
while the stiffness measurement sessions involve a periodic motion of the test
mass. The validity of this quasi-static assumption depends on the relaxation
time of the field in response to a change in the matter distribution. We
expect the assumption to remain valid as long as the motion is slow compared
to the relaxation speed of the field. In analogy with gravitational waves
PhysRevD.57.2061 , and consistently with the discussion in Ref.
burrage_probing_2015 , we expect this speed to be close to the speed of light
for light fields and lower for massive fields. This assumption could thus be
questionable for chameleon parameters yielding the heaviest fields, such as in
the deeply screened regime. Nevertheless, that regime is not accessible to our
methods.
Finally, we idealized the MICROSCOPE geometry by taking into account only the
effect of the instrument, not the influence of MICROSCOPE's satellite. This is
debatable in the regime where the field is unscreened, as the complex geometry
of the satellite could introduce peculiar effects on the chameleonic force.
Nonetheless, given the null effect of the external test masses on the internal
ones, and the modest factor of 100 between the mass of the cylinders and that
of the satellite, we expect the influence of the mass distribution closest to
the test masses, i.e. the electrode cylinders, to be dominant. This has for
instance been demonstrated for a Yukawa fifth force in Ref. CQG2 .
To conclude, this work extends the search for new methods to test chameleon
models in the laboratory PRD3 or in space Berge:2019zjj ; Berge:2018htm .
Here we took advantage of MICROSCOPE's instrumental characterization
measurements to draw constraints on the chameleon field. An unexplained
discrepancy between the measured and expected electrostatic stiffness might
hint at a non-zero chameleonic force. The constraints we obtained are not
competitive with state-of-the-art constraints. This is not a surprise:
MICROSCOPE was not designed for testing short-ranged modified-gravity
theories. The main limitations of this test come from modeling uncertainties
in the theoretical electrostatic stiffness and from the poor knowledge of the
gold-wire characteristics. A better estimation of these physical parameters
would reduce the error bars on the stiffness discrepancy. An alternative,
under study for a next mission battelier2019exploring , is to suppress this
gold wire, as done in LISA Pathfinder PhysRevLett.116.231101 . Besides,
patch-field effects may be the most likely phenomenon to explain the observed
discrepancy in the stiffness measurement CQG2 . Estimating these effects
would reduce this discrepancy and thus improve the sensitivity of the test.
While awaiting these developments, the constraints we have provided are
conservative.
## Acknowledgment
We thank the members of the MICROSCOPE Science Working Group for allowing us
to start this project and encouraging us to pursue it. We acknowledge the
financial support of CNES through the APR program (“GMscope+” project). MPB is
supported by a CNES/ONERA PhD grant. This work uses technical details of the
T-SAGE instrument, installed on the CNES-ESA-ONERA-CNRS-OCA-DLR-ZARM
MICROSCOPE mission. This work is supported in part by the EU Horizon 2020
research and innovation programme under the Marie Sklodowska-Curie grant No. 690575.
This article is based upon work related to the COST Action CA15117 (CANTATA)
supported by COST (European Cooperation in Science and Technology).
## References
* (1) M. Pernot-Borràs, J. Bergé, P. Brax, and J.-P. Uzan, “General study of chameleon fifth force in gravity space experiments,” Phys. Rev. D, vol. 100, p. 084006, Oct 2019.
* (2) M. Pernot-Borràs, J. Bergé, P. Brax, and J.-P. Uzan, “Fifth force induced by a chameleon field on nested cylinders,” Phys. Rev. D, vol. 101, p. 124056, Jun 2020.
* (3) J.-P. Uzan, M. Pernot-Borràs, and J. Bergé, “Effects of a scalar fifth force on the dynamics of a charged particle as a new experimental design to test chameleon theories,” Phys. Rev. D, vol. 102, no. 4, p. 044059, 2020.
* (4) P. Touboul et al., “Microscope mission: First results of a space test of the equivalence principle,” Phys. Rev. Lett., vol. 119, p. 231101, Dec. 2017.
* (5) P. Touboul et al., “Space test of the equivalence principle: first results of the MICROSCOPE mission,” Classical and Quantum Gravity, vol. 36, p. 225006, oct 2019.
* (6) J. Bergé, P. Brax, G. Métris, M. Pernot-Borràs, P. Touboul, and J.-P. Uzan, “Microscope mission: First constraints on the violation of the weak equivalence principle by a light scalar dilaton,” Phys. Rev. Lett., vol. 120, p. 141101, Apr 2018.
* (7) T. Damour and J. F. Donoghue, “Equivalence principle violations and couplings of a light dilaton,” Phys. Rev. D, vol. 82, p. 084033, Oct 2010.
* (8) J. Bergé, M. Pernot-Borràs, J.-P. Uzan, P. Brax, R. Chhun, G. Métris, M. Rodrigues, and P. Touboul, “Microscope’s constraint on a short-range fifth force,” to be submitted to CQG, 2020.
* (9) R. Chhun, Classical and Quantum Gravity, in prep.
* (10) J. Khoury and A. Weltman, “Chameleon Fields: Awaiting Surprises for Tests of Gravity in Space,” Phys. Rev. Lett., vol. 93, p. 171104, Oct. 2004.
* (11) J. Khoury and A. Weltman, “Chameleon cosmology,” Phys. Rev. D, vol. 69, p. 044026, Feb. 2004.
* (12) C. Burrage and J. Sakstein, “Tests of chameleon gravity,” Living Rev Relativ, vol. 21, p. 1, Dec. 2018.
* (13) P. Brax, C. Burrage, and A.-C. Davis, “Laboratory tests of screened modified gravity,” International Journal of Modern Physics D, 06 2018.
* (14) P. Hamilton, M. Jaffe, P. Haslinger, Q. Simmons, H. Müller, and J. Khoury, “Atom-interferometry constraints on dark energy,” Science, vol. 349, pp. 849–851, Aug. 2015. arXiv: 1502.03888.
* (15) M. Jaffe, P. Haslinger, V. Xu, P. Hamilton, A. Upadhye, B. Elder, J. Khoury, and H. Müller, “Testing sub-gravitational forces on atoms from a miniature in-vacuum source mass,” Nature Physics, vol. 13, p. 938, July 2017.
* (16) A. Upadhye, “Dark energy fifth forces in torsion pendulum experiments,” Phys. Rev. D, vol. 86, p. 102003, Nov. 2012.
* (17) D. J. Kapner, T. S. Cook, E. G. Adelberger, J. H. Gundlach, B. R. Heckel, C. D. Hoyle, and H. E. Swanson, “Tests of the gravitational inverse-square law below the dark-energy length scale,” Phys. Rev. Lett., vol. 98, p. 021101, Jan 2007.
* (18) P. Brax, C. van de Bruck, A.-C. Davis, D. F. Mota, and D. Shaw, “Detecting chameleons through Casimir force measurements,” Phys. Rev. D, vol. 76, p. 124034, Dec. 2007.
* (19) R. S. Decca, D. López, E. Fischbach, G. L. Klimchitskaya, D. E. Krause, and V. M. Mostepanenko, “Tests of new physics from precise measurements of the casimir pressure between two gold-coated plates,” Phys. Rev. D, vol. 75, p. 077101, Apr 2007.
* (20) B. Jain, V. Vikram, and J. Sakstein, “Astrophysical tests of modified gravity : constraints from distance indicators in the nearby universe,” The Astrophysical Journal, vol. 779, p. 39, nov 2013.
* (21) A. Cabré, V. Vikram, G.-B. Zhao, B. Jain, and K. Koyama, “Astrophysical tests of gravity: a screening map of the nearby universe,” Journal of Cosmology and Astroparticle Physics, vol. 2012, pp. 034–034, jul 2012.
* (22) V. Vikram, J. Sakstein, C. Davis, and A. Neil, “Astrophysical tests of modified gravity: Stellar and gaseous rotation curves in dwarf galaxies,” Phys. Rev. D, vol. 97, p. 104055, May 2018.
* (23) H. Wilcox, R. C. Nichol, G.-B. Zhao, D. Bacon, K. Koyama, and A. K. Romer, “Simulation tests of galaxy cluster constraints on chameleon gravity,” Monthly Notices of the Royal Astronomical Society, vol. 462, pp. 715–725, 07 2016.
* (24) P. Brax and C. Burrage, “Atomic precision tests and light scalar couplings,” Phys. Rev. D, vol. 83, p. 035020, Feb 2011.
* (25) J. Jaeckel and S. Roy, “Spectroscopy as a test of coulomb’s law: A probe of the hidden sector,” Phys. Rev. D, vol. 82, p. 125020, Dec 2010.
* (26) A. D. Rider, D. C. Moore, C. P. Blakemore, M. Louis, M. Lu, and G. Gratta, “Search for screened interactions associated with dark energy below the $100\text{ }\mu\mathrm{m}$ length scale,” Phys. Rev. Lett., vol. 117, p. 101101, Aug 2016.
* (27) H. Lemmel, P. Brax, A. Ivanov, T. Jenke, G. Pignol, M. Pitschmann, T. Potocar, M. Wellenzohn, M. Zawisky, and H. Abele, “Neutron interferometry constrains dark energy chameleon fields,” Physics Letters B, vol. 743, pp. 310 – 314, 2015.
* (28) K. Li, M. Arif, D. G. Cory, R. Haun, B. Heacock, M. G. Huber, J. Nsofini, D. A. Pushin, P. Saggu, D. Sarenac, C. B. Shahi, V. Skavysh, W. M. Snow, and A. R. Young, “Neutron limit on the strongly-coupled chameleon field,” Phys. Rev. D, vol. 93, p. 062001, Mar 2016.
* (29) C. M. Will, “Bounding the mass of the graviton using gravitational-wave observations of inspiralling compact binaries,” Phys. Rev. D, vol. 57, pp. 2061–2068, Feb 1998.
* (30) C. Burrage, E. J. Copeland, and E. A. Hinds, “Probing dark energy with atom interferometry,” J. Cosmol. Astropart. Phys., vol. 2015, no. 03, p. 042, 2015.
* (31) J. Bergé et al., “The local dark sector. Probing gravitation’s low-acceleration frontier and dark matter in the Solar System neighborhood,” 9 2019.
* (32) J. Bergé, P. Brax, M. Pernot-Borràs, and J.-P. Uzan, “Interpretation of geodesy experiments in non-Newtonian theories of gravity,” Class. Quant. Grav., vol. 35, no. 23, p. 234001, 2018.
* (33) B. Battelier et al., “Exploring the foundations of the universe with space tests of the equivalence principle,” 2019.
* (34) M. Armano et al., “Sub-femto-$g$ free fall for space-based gravitational wave observatories: Lisa pathfinder results,” Phys. Rev. Lett., vol. 116, p. 231101, Jun 2016.
# Decay in norm of transfer operators for semiflows
Ian Melbourne Mathematics Institute, University of Warwick, Coventry, CV4 7AL,
UK Nicolò Paviato Mathematics Institute, University of Warwick, Coventry, CV4
7AL, UK Dalia Terhesiu Mathematisch Instituut, University of Leiden, Niels
Bohrweg 1, 2333 CA Leiden, Netherlands
(9 April 2021)
###### Abstract
We establish exponential decay in Hölder norm of transfer operators applied to
smooth observables of uniformly and nonuniformly expanding semiflows with
exponential decay of correlations.
## 1 Introduction
Exponential decay of correlations is well-understood for large classes of
uniformly and nonuniformly expanding maps, see for example [8, 11, 13, 14, 16,
20, 21, 22, 23, 26]. The typical method of proof is to establish a spectral
gap for the associated transfer operator $L$. Such a spectral gap yields a
decay rate $\|L^{n}v-\int v\|\leq C_{v}e^{-an}$ for $v$ lying in a suitable
function space, where $a$, $C_{v}$ are positive constants. Decay of
correlations is an immediate consequence of such decay for $L^{n}$.
Results on decay of correlations lead to numerous statistical limit theorems.
Although not needed for results such as the central limit theorem, strong norm
control on $L^{n}v$ is often useful for finer statistical properties. For
example, rates of convergence in the central limit theorem [12] and the
associated functional central limit theorem [4] rely heavily on control of
operator norms.
In this paper, we consider norm decay of transfer operators for uniformly and
nonuniformly expanding semiflows. Here, the standard method is to deduce decay
of the correlation function from analyticity of Laplace transforms, bypassing
spectral properties of $L_{t}$, see [10, 15, 19]. As far as we know, the only
result on spectral gaps for transfer operators of semiflows is due to Tsujii
[24]. However, this result is for suspension semiflows over the doubling map
with a $C^{3}$ roof function, where the smoothness of the roof function is
crucial and very restrictive. A similar result for contact Anosov flows is
proved in [25]. Both of the papers [24, 25] obtain spectral gaps for $L_{t}$
acting on a suitable anisotropic Banach space. Apart from these, there are
apparently no previous results on norm decay of transfer operators for
semiflows and flows.
Recently, in [17], we showed that spectral gaps are impossible in Hölder
spaces with exponent greater than $\frac{1}{2}$ (and in any Banach space that
embeds in such a Hölder space). Nevertheless, our aim of controlling the
Hölder norm of $L_{t}v$ for a large class of semiflows and observables $v$
remains viable, and our main result is the first in this direction. We
consider uniformly and nonuniformly expanding semiflows satisfying a
Dolgopyat-type estimate [10]. Such an estimate plays a key role in proving
exponential decay of correlations for the semiflow. Theorem 2.2 below shows
how to use this estimate to prove exponential decay of $L_{t}v$ in a Hölder
norm for smooth mean zero observables satisfying a good support condition.
Apart from the Dolgopyat estimate, the main ingredient is an operator renewal
equation for semiflows [18] which enables consideration of the operator
Laplace transform $\int_{0}^{\infty}e^{-st}L_{t}\,dt$.
The remainder of the paper is organised as follows. In Section 2, we recall
the setup for nonuniformly expanding semiflows with exponential decay of
correlations and state our main result, Theorem 2.2, on decay in norm. In
Section 3, we prove Theorem 2.2.
#### Notation
We use “big O” and $\ll$ notation interchangeably, writing $a_{n}=O(b_{n})$ or
$a_{n}\ll b_{n}$ if there are constants $C>0$, $n_{0}\geq 1$ such that
$a_{n}\leq Cb_{n}$ for all $n\geq n_{0}$.
## 2 Setup and statement of the main result
In this section, we state our result on Hölder norm decay of transfer
operators for uniformly and nonuniformly expanding semiflows.
Let $(Y,d)$ be a bounded metric space with Borel probability measure $\mu$ and
an at most countable measurable partition $\\{Y_{j}\\}$. Let $F:Y\to Y$ be a
measure-preserving transformation such that $F$ restricts to a measure-
theoretic bijection from $Y_{j}$ onto $Y$ for each $j$. Let $g=d\mu/(d\mu\circ
F)$ be the inverse Jacobian of $F$.
Fix $\eta\in(0,1)$. Assume that there are constants $\lambda>1$ and $C>0$ such
that $d(Fy,Fy^{\prime})\geq\lambda d(y,y^{\prime})$ and $|\log g(y)-\log
g(y^{\prime})|\leq Cd(Fy,Fy^{\prime})^{\eta}$ for all $y,y^{\prime}\in Y_{j}$,
$j\geq 1$. In particular, $F$ is a Gibbs-Markov map as in [2] (see also [1,
3]) with ergodic (and mixing) invariant measure $\mu$.
Let $\varphi:Y\to[2,\infty)$ be a piecewise continuous roof function. We
assume that there is a constant $C>0$ such that
$|\varphi(y)-\varphi(y^{\prime})|\leq Cd(Fy,Fy^{\prime})^{\eta}$ (2.1)
for all $y,y^{\prime}\in Y_{j}$, $j\geq 1$. Also, we assume exponential tails,
namely that there exists $\delta_{0}>0$ such that
${\textstyle\sum_{j}}\mu(Y_{j})e^{\delta_{0}|1_{Y_{j}}\varphi|_{\infty}}<\infty.$
(2.2)
Define the suspension $Y^{\varphi}=\\{(y,u)\in
Y\times[0,\infty):u\in[0,\varphi(y)]\\}/\sim$ where
$(y,\varphi(y))\sim(Fy,0)$. The suspension semiflow $F_{t}:Y^{\varphi}\to
Y^{\varphi}$ is given by $F_{t}(y,u)=(y,u+t)$ computed modulo identifications.
We define the ergodic $F_{t}$-invariant probability measure
$\mu^{\varphi}=(\mu\times{\rm Lebesgue})/\bar{\varphi}$ where
$\bar{\varphi}=\int_{Y}\varphi\,d\mu$. (We call such semiflows “nonuniformly
expanding” since they are the continuous-time analogue of maps that are
nonuniformly expanding in the sense of Young [26]. “Uniformly expanding”
semiflows are those with $\varphi$ bounded; they have bounded distortion as
well as uniform expansion.)
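As an informal numerical illustration, not part of the paper's setting, the suspension construction can be sketched for a concrete base map. The choices below (the doubling map as $F$ and a made-up roof function with values in $[2,3]$, respecting $\varphi\geq 2$) are purely for this sketch.

```python
import math

def doubling(y):
    """Toy base map F on Y = [0, 1): the doubling map."""
    return (2.0 * y) % 1.0

def roof(y):
    """Made-up roof function with values in [2, 3], so phi >= 2 as assumed."""
    return 2.5 + 0.5 * math.cos(2.0 * math.pi * y)

def semiflow(y, u, t):
    """Advance (y, u) by time t under the suspension semiflow: u increases at
    unit speed, and (y, roof(y)) is identified with (F y, 0)."""
    u += t
    while u >= roof(y):
        u -= roof(y)
        y = doubling(y)
    return y, u
```

The identification makes the semigroup property $F_{t+s}=F_t\circ F_s$ hold, which can be checked numerically on sample points.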
Let $L_{t}:L^{1}(Y^{\varphi})\to L^{1}(Y^{\varphi})$ denote the transfer
operator corresponding to $F_{t}$ (so
$\int_{Y^{\varphi}}L_{t}v\,w\,d\mu^{\varphi}=\int_{Y^{\varphi}}v\,w\circ
F_{t}\,d\mu^{\varphi}$ for all $v\in L^{1}(Y^{\varphi})$, $w\in
L^{\infty}(Y^{\varphi})$, $t>0$) and let $R_{0}:L^{1}(Y)\to L^{1}(Y)$ denote
the transfer operator for $F$. Recall (see for example [2]) that
$(R_{0}v)(y)={\textstyle\sum_{j}}g(y_{j})v(y_{j})$ where $y_{j}$ is the unique
preimage of $y$ under $F|Y_{j}$, and there is a constant $C>0$ such that
$|g(y)|\leq C\mu(Y_{j}),\qquad|g(y)-g(y^{\prime})|\leq
C\mu(Y_{j})d(Fy,Fy^{\prime})^{\eta},$ (2.3)
for all $y,y^{\prime}\in Y_{j}$, $j\geq 1$.
#### Function space on $Y^{\varphi}$
Let $Y^{\varphi}_{j}=\\{(y,u)\in Y^{\varphi}:y\in Y_{j}\\}$. Fix
$\eta\in(0,1]$, $\delta>0$. For $v:Y^{\varphi}\to{\mathbb{R}}$, define
$|v|_{\delta,\infty}=\sup_{(y,u)\in Y^{\varphi}}e^{-\delta u}|v(y,u)|$ and
$\|v\|_{\delta,\eta}=|v|_{\delta,\infty}+|v|_{\delta,\eta},\qquad|v|_{\delta,\eta}=\sup_{j\geq
1}\sup_{(y,u),(y^{\prime},u)\in Y_{j}^{\varphi},\,y\neq y^{\prime}}e^{-\delta
u}\frac{|v(y,u)-v(y^{\prime},u)|}{d(y,y^{\prime})^{\eta}}.$
Then ${\mathcal{F}}_{\delta,\eta}(Y^{\varphi})$ consists of observables
$v:Y^{\varphi}\to{\mathbb{R}}$ with $\|v\|_{\delta,\eta}<\infty$.
Next, define $\partial_{u}v$ to be the partial derivative of $v$ with respect
to $u$ at points $(y,u)\in Y^{\varphi}$ with $u\in(0,\varphi(y))$ and to be
the appropriate one-sided partial derivative when $u\in\\{0,\,\varphi(y)\\}$.
For $m\geq 0$, define ${\mathcal{F}}_{\delta,\eta,m}(Y^{\varphi})$ to consist
of observables $v:Y^{\varphi}\to{\mathbb{R}}$ such that
$\partial_{u}^{j}v\in{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})$ for
$j=0,1,\dots,m$, with norm
$\|v\|_{\delta,\eta,m}=\max_{j=0,\dots,m}\|\partial_{u}^{j}v\|_{\delta,\eta}$.
###### Definition 2.1
We say that a function $v:Y^{\varphi}\to{\mathbb{R}}$ has good support if
there exists $r>0$ such that $\operatorname{supp}v\subset\\{(y,u)\in
Y\times{\mathbb{R}}:u\in[r,\varphi(y)-r]\\}$.
For functions with good support, $\partial_{u}v$ coincides with the derivative
$\partial_{t}v=\lim_{h\to 0}(v\circ F_{h}-v)/h$ in the flow direction.
Let
${\mathcal{F}}_{\delta,\eta,m}^{0}(Y^{\varphi})=\\{v\in{\mathcal{F}}_{\delta,\eta,m}(Y^{\varphi}):{\textstyle\int}_{Y^{\varphi}}v\,d\mu^{\varphi}=0\\}.$
We write ${\mathcal{F}}_{\delta,\eta}(Y^{\varphi})$ and
${\mathcal{F}}_{\delta,\eta}^{0}(Y^{\varphi})$ when $m=0$.
#### Function space on $Y$
For $v:Y\to{\mathbb{R}}$, define
$\|v\|_{\eta}=|v|_{\infty}+|v|_{\eta},\qquad|v|_{\eta}=\sup_{j\geq
1}\sup_{y,y^{\prime}\in Y_{j},\,y\neq
y^{\prime}}|v(y)-v(y^{\prime})|/d(y,y^{\prime})^{\eta}.$
Let ${\mathcal{F}}_{\eta}(Y)$ consist of observables $v:Y\to{\mathbb{R}}$ with
$\|v\|_{\eta}<\infty$.
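As a rough numerical sketch (ignoring the partition $\\{Y_{j}\\}$ and taking $Y=[0,1]$ with the usual metric, purely for illustration), the $\eta$-Hölder seminorm can be estimated on a grid. For $v(y)=\sqrt{y}$ and $\eta=\frac{1}{2}$ the seminorm equals $1$, attained in the limit $y^{\prime}\to 0$.

```python
eta = 0.5
v = lambda y: y ** 0.5  # sqrt is exactly 1/2-Hölder on [0, 1], with |v|_eta = 1

ys = [i / 200 for i in range(201)]
# crude grid estimate of sup |v(y) - v(y')| / d(y, y')^eta over y != y'
seminorm = max(abs(v(a) - v(b)) / abs(a - b) ** eta
               for a in ys for b in ys if a != b)
```

Since $|\sqrt{y}-\sqrt{y^{\prime}}|=(y-y^{\prime})/(\sqrt{y}+\sqrt{y^{\prime}})$ for $y>y^{\prime}$, every grid ratio is at most $1$, with equality exactly at pairs $(y,0)$.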
#### Dolgopyat estimate
Define the twisted transfer operators
$\widehat{R}_{0}(s):L^{1}(Y)\to
L^{1}(Y),\qquad\widehat{R}_{0}(s)v=R_{0}(e^{-s\varphi}v).$
We assume that there exists $\gamma\in(0,1)$, $\epsilon>0$, $m_{0}\geq 0$,
$A,D>0$ such that
$\displaystyle{\|\widehat{R}_{0}(s)^{n}\|}_{{\mathcal{F}}_{\eta}(Y)\mapsto{\mathcal{F}}_{\eta}(Y)}\leq|b|^{m_{0}}\gamma^{n}$
(2.4)
for all $s=a+ib\in{\mathbb{C}}$ with $|a|<\epsilon$, $|b|\geq D$ and all
$n\geq A\log|b|$. Such an assumption holds in the settings of [5, 6, 7, 10].
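For orientation, the bound in (2.4) decays once $A>m_{0}/\log(1/\gamma)$, since $|b|^{m_{0}}\gamma^{A\log|b|}=|b|^{m_{0}-A\log(1/\gamma)}$. A quick numerical check, with made-up values $m_{0}=\frac{1}{2}$ and $\gamma=0.9$ that are not taken from any of the cited settings:

```python
import math

m0, gamma = 0.5, 0.9                 # illustrative values only
A = 2 * m0 / math.log(1 / gamma)     # twice the threshold m0 / log(1/gamma)

for b in (10.0, 100.0, 1e4):
    n = math.ceil(A * math.log(b))   # any n >= A log|b|
    bound = b ** m0 * gamma ** n
    # with this choice of A: bound <= |b|^(m0 - A log(1/gamma)) = |b|^(-m0)
    assert bound <= b ** (-m0) + 1e-12
```

So with $A$ equal to twice the threshold, the right-hand side of (2.4) is at most $|b|^{-m_{0}}$, which is the kind of smallness exploited in the proof of Lemma 3.1.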
Now we can state our main result on norm decay for $L_{t}$.
###### Theorem 2.2
Under these assumptions, there exists $\epsilon>0$, $m\geq 1$, $C>0$ such that
$\|L_{t}v\|_{\delta,\eta,1}\leq Ce^{-\epsilon
t}\|v\|_{\delta,\eta,m}\quad\text{for all $t>0$}$
for all $v\in{\mathcal{F}}_{\delta,\eta,m}^{0}(Y^{\varphi})$ with good
support.
###### Remark 2.3
Since the norm applied to $v$ is stronger than the norm applied to $L_{t}v$,
Theorem 2.2 does not imply a spectral gap for $L_{t}$. We note that the norm
on ${\mathcal{F}}_{\delta,\eta,1}$ gives no Hölder control in the flow
direction when passing through points of the form $(y,\varphi(y))$. This lack
of control is a barrier to mollification arguments of the type usually used to
pass from smooth observables to Hölder observables. In fact, such arguments
are doomed to fail at the operator level by [17, Theorem 1.1] when
$\eta>\frac{1}{2}$ and hence seem unlikely for any $\eta$.
###### Remark 2.4
Usually, we can take $m_{0}\in(0,1)$ in (2.4), in which case $m=3$ suffices in
Theorem 2.2.
There are numerous simplifications when $\\{Y_{j}\\}$ is a finite partition.
In particular, conditions (2.1) and (2.2) are redundant and we can take
$\delta=0$.
## 3 Proof of Theorem 2.2
Our proof of norm decay is broken into three parts. In Subsection 3.1, we
recall a continuous-time operator renewal equation [18] which enables
estimates of Laplace transforms of transfer operators at the level of $Y$. In
Subsection 3.2, we show how to pass to estimates of Laplace transforms of
$L_{t}$. In Subsection 3.3, we invert the Laplace transform to obtain norm
decay of $L_{t}$.
### 3.1 Operator renewal equation
Let ${\widetilde{Y}}=Y\times[0,1]$ and define
$\widetilde{F}:{\widetilde{Y}}\to{\widetilde{Y}},\qquad\widetilde{F}(y,u)=(Fy,u),$
with transfer operator $\widetilde{R}:L^{1}({\widetilde{Y}})\to
L^{1}({\widetilde{Y}})$. Also, define
$\widetilde{\varphi}:{\widetilde{Y}}\to[2,\infty),\quad\widetilde{\varphi}(y,u)=\varphi(y).$
Define the twisted transfer operators
$\widehat{R}(s):L^{1}({\widetilde{Y}})\to
L^{1}({\widetilde{Y}}),\qquad\widehat{R}(s)v=\widetilde{R}(e^{-s\widetilde{\varphi}}v).$
Let ${\widetilde{Y}}_{j}=Y_{j}\times[0,1]$. For
$v:{\widetilde{Y}}\to{\mathbb{R}}$, define
$\|v\|_{\eta}=|v|_{\infty}+|v|_{\eta},\qquad|v|_{\eta}=\sup_{j\geq
1}\sup_{(y,u),(y^{\prime},u)\in{\widetilde{Y}}_{j},\,y\neq
y^{\prime}}|v(y,u)-v(y^{\prime},u)|/d(y,y^{\prime})^{\eta}.$
Let ${\mathcal{F}}_{\eta}({\widetilde{Y}})$ consist of observables
$v:{\widetilde{Y}}\to{\mathbb{R}}$ with $\|v\|_{\eta}<\infty$. Let
${\mathcal{F}}_{\eta}^{0}({\widetilde{Y}})=\\{v\in{\mathcal{F}}_{\eta}({\widetilde{Y}}):{\textstyle\int}_{\widetilde{Y}}v\,d\tilde{\mu}=0\\}$
where $\tilde{\mu}=\mu\times\operatorname{Leb}_{[0,1]}$.
###### Lemma 3.1
Write $s=a+ib\in{\mathbb{C}}$. There exists $\epsilon>0$, $m_{1}\geq 0$, $C>0$
such that
* (a)
$s\mapsto(I-\widehat{R}(s))^{-1}:{\mathcal{F}}_{\eta}^{0}({\widetilde{Y}})\to{\mathcal{F}}_{\eta}({\widetilde{Y}})$
is analytic on $\\{|a|<\epsilon\\}$;
* (b)
$s\mapsto(I-\widehat{R}(s))^{-1}:{\mathcal{F}}_{\eta}({\widetilde{Y}})\to{\mathcal{F}}_{\eta}({\widetilde{Y}})$
is analytic on $\\{|a|<\epsilon\\}$ except for a simple pole at $s=0$;
* (c)
${\|(I-\widehat{R}(s))^{-1}\|}_{{\mathcal{F}}_{\eta}({\widetilde{Y}})\mapsto{\mathcal{F}}_{\eta}({\widetilde{Y}})}\leq
C|b|^{m_{1}}$ for $|a|\leq\epsilon$, $|b|\geq 1$.
###### Proof.
It suffices to verify these properties for $Z(s)=(I-\widehat{R}_{0}(s))^{-1}$
on $Y$. They immediately transfer to $(I-\widehat{R}(s))^{-1}$ on
${\widetilde{Y}}$ since $(\widehat{R}v)(y,u)=(\widehat{R}_{0}v^{u})(y)$ where
$v^{u}(y)=v(y,u)$.
The arguments for passing from (2.4) to the desired properties for $Z(s)$ are
standard. For completeness, we sketch these details now recalling arguments
from [5]. Define ${\mathcal{F}}_{\eta}(Y)$ with norm $\|\;\|_{\eta}$ by
restricting to $u=0$ (this coincides with the usual Hölder space on $Y$). Let
$A$, $D$, $\epsilon$ and $m_{0}$ be as in (2.4). Increase $A$ and $D$ so that
$D>1$ and $|b|^{m_{0}}\gamma^{[A\log|b|]}\leq\frac{1}{2}$ for $|b|\geq D$.
Suppose that $|a|\leq\epsilon$, $|b|\geq D$. Then
$\|\widehat{R}_{0}(s)^{[A\log|b|]}\|_{\eta}\leq|b|^{m_{0}}\gamma^{[A\log|b|]}\leq\frac{1}{2}$
and $\|(I-\widehat{R}_{0}(s)^{[A\log|b|]})^{-1}\|_{\eta}\leq 2.$
As in [5, Proposition 2.5], we can shrink $\epsilon$ so that
$s\to\widehat{R}_{0}(s)$ is continuous on ${\mathcal{F}}_{\eta}(Y)$ for
$|a|\leq\epsilon$. The simple eigenvalue $1$ for $\widehat{R}_{0}(0)=R_{0}$
extends to a continuous family of simple eigenvalues $\lambda(s)$ for
$|s|\leq\epsilon$. Hence we can choose $\epsilon$ so that
$\frac{1}{2}<\lambda(a)<2$ for $|a|\leq\epsilon$. By [5, Corollary 2.8],
$\|\widehat{R}_{0}(s)^{n}\|_{\eta}\ll|b|\lambda(a)^{n}\leq|b|2^{n}$ for all
$n\geq 1$, $|a|\leq\epsilon$, $|b|\geq D$. Hence
$\displaystyle\|Z(s)\|_{\eta}$
$\displaystyle\leq\big{(}1+\|\widehat{R}_{0}(s)\|_{\eta}+\dots+\|\widehat{R}_{0}(s)^{[A\log|b|]-1}\|_{\eta}\big{)}\|(I-\widehat{R}_{0}(s)^{[A\log|b|]})^{-1}\|_{\eta}$
$\displaystyle\ll(\log|b|)\,|b|\,2^{A\log|b|}\leq|b|^{m_{1}},$
with $m_{1}=1+A\log 2$. This proves analyticity on the region
$\\{|a|<\epsilon,|b|>D\\}$ with the desired estimates for property (c) on this
region.
For $|a|\leq\epsilon$, $|b|\leq D$, we recall arguments from the proof of [5,
Lemma 2.22] (where $\widehat{R}_{0}(s)$ is denoted $Q_{s}$). For $\epsilon$
sufficiently small, the part of spectrum of $\widehat{R}_{0}(s)$ that is close
to $1$ consists only of isolated eigenvalues. Also, the spectral radius of
$\widehat{R}_{0}(s)$ is at most $\lambda(a)$ and $\lambda(a)<1$ for
$a\in[0,\epsilon]$, so $s\mapsto Z(s)$ is analytic on $\\{0<a<\epsilon\\}$.
Suppose that $\widehat{R}_{0}(ib)v=v$ for some $v\in{\mathcal{F}}_{\eta}(Y)$,
$b\neq 0$. Choose $q\geq 1$ such that $q|b|>D$. Since $\widehat{R}_{0}(s)$ is
the $L^{2}$ adjoint of $v\mapsto e^{s\varphi}v\circ F$, we have
$e^{ib\varphi}v\circ F=v$. Hence $e^{iqb\varphi}v^{q}\circ F=v^{q}$ and so
$\widehat{R}_{0}(iqb)v^{q}=v^{q}$. But $\|Z(iqb)v^{q}\|_{\eta}<\infty$, so
$v=0$. Hence $1\not\in\operatorname{spec}\widehat{R}_{0}(ib)$ for all $b\neq
0$. It follows that for all $b\neq 0$ there exists an open set
$U_{b}\subset{\mathbb{C}}$ containing $ib$ such that
$1\not\in\operatorname{spec}\widehat{R}_{0}(s)$ for all $s\in U_{b}$, and so
$s\mapsto Z(s)$ is analytic on $U_{b}$.
Next, we recall that for $s$ near to zero, $\lambda(s)=1+cs+O(s^{2})$ where
$c<0$. Hence $s\mapsto Z(s)$ has a simple pole at zero. It follows that there
exists $\epsilon>0$ such that $s\mapsto Z(s)$ is analytic on
$\\{|a|<\epsilon,|b|<2D\\}$ except for a simple pole at $s=0$. Combining this
with the estimates on $\\{|a|<\epsilon,|b|\geq D\\}$ we have proved properties
(b) and (c) for $Z(s)$.
Finally, the spectral projection $\pi$ corresponding to the eigenvalue
$\lambda(0)=1$ for $\widehat{R}_{0}(0)=R_{0}$ is given by $\pi v=\int_{Y}v\,d\mu$.
Hence the pole disappears on restriction to observables of mean zero, proving
property (a) for $Z(s)$. ∎
Next define
$T_{t}v=1_{\widetilde{Y}}L_{t}(1_{\widetilde{Y}}v),\qquad
U_{t}v=1_{\widetilde{Y}}L_{t}(1_{\\{\widetilde{\varphi}>t\\}}v)$
and
$\qquad\widehat{T}(s)=\int_{0}^{\infty}e^{-st}T_{t}\,dt,\qquad\widehat{U}(s)=\int_{0}^{\infty}e^{-st}U_{t}\,dt.$
By [18, Theorem 3.3], we have the operator renewal equation
$\widehat{T}=\widehat{U}(I-\widehat{R})^{-1}.$
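The scalar analogue of this identity may help fix ideas: for a discrete renewal sequence $u_{n}=\sum_{k}p_{k}u_{n-k}$ with return-time distribution $p$, the generating functions satisfy $U(z)=(1-P(z))^{-1}$, mirroring $\widehat{T}=\widehat{U}(I-\widehat{R})^{-1}$ (with the role of $\widehat{U}$ trivial in the scalar toy). A numerical check with a made-up distribution $p$:

```python
p = [0.0, 0.5, 0.3, 0.2]     # made-up return-time distribution p_1, p_2, p_3
N = 40
u = [1.0] + [0.0] * N        # renewal sequence, u_0 = 1
for n in range(1, N + 1):
    u[n] = sum(p[k] * u[n - k] for k in range(1, min(n, 3) + 1))

z = 0.4                      # evaluate well inside the region of convergence
P = sum(p[k] * z ** k for k in range(len(p)))
U = sum(u[n] * z ** n for n in range(N + 1))
# truncated generating function agrees with 1/(1 - P(z)) up to a tiny tail
assert abs(U - 1.0 / (1.0 - P)) < 1e-9
```

The truncation error is at most $\sum_{n>N}z^{n}\leq z^{N+1}/(1-z)$, which is negligible here since $u_{n}\leq 1$.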
###### Proposition 3.2
There exists $\epsilon>0$, $C>0$ such that
$s\mapsto\widehat{U}(s):{\mathcal{F}}_{\eta}({\widetilde{Y}})\to{\mathcal{F}}_{\eta}({\widetilde{Y}})$
is analytic on $\\{|a|<\epsilon\\}$ and
${\|\widehat{U}(s)\|}_{{\mathcal{F}}_{\eta}({\widetilde{Y}})\mapsto{\mathcal{F}}_{\eta}({\widetilde{Y}})}\leq
C|s|$ for $|a|\leq\epsilon$.
###### Proof.
By [18, Proposition 3.4],
$(U_{t}v)(y,u)=\begin{cases}v(y,u-t)1_{[t,1]}(u)&0\leq t\leq 1\\\
(\widetilde{R}v_{t})(y,u)&t>1\end{cases}$
where $v_{t}(y,u)=1_{\\{t<\varphi(y)<t+1-u\\}}v(y,u-t+\varphi(y))$. Hence
$\widehat{U}(s)=\widehat{U}_{1}(s)+\widehat{U}_{2}(s)$ where
$(\widehat{U}_{1}(s)v)(y,u)=\int_{0}^{u}e^{-st}v(y,u-t)\,dt,\qquad\widehat{U}_{2}(s)v=\int_{1}^{\infty}e^{-st}\widetilde{R}v_{t}\,dt.$
It is clear that $\|\widehat{U}_{1}(s)v\|_{\eta}\leq
e^{\epsilon}\|v\|_{\eta}$. We focus attention on the second term
$(\widehat{U}_{2}(s)v)(y,u)={\textstyle\sum_{j}}g(y_{j})\int_{1}^{\infty}e^{-st}v_{t}(y_{j},u)\,dt={\textstyle\sum_{j}}g(y_{j})\widehat{V}(s)(y_{j},u),$
where $\widehat{V}(s)(y,u)=\int_{u}^{1}e^{s(t-u-\varphi)}v(y,t)\,dt$. Clearly,
$|1_{Y_{j}}\widehat{V}(s)|_{\infty}\leq
e^{\epsilon|1_{Y_{j}}\varphi|_{\infty}}|v|_{\infty}$. Also,
$\widehat{V}(s)(y,u)-\widehat{V}(s)(y^{\prime},u)=I+J,$
where
$\displaystyle I$
$\displaystyle=\int_{u}^{1}(e^{s(t-u-\varphi(y))}-e^{s(t-u-\varphi(y^{\prime}))})v(y,t)\,dt,$
$\displaystyle J$
$\displaystyle=\int_{u}^{1}e^{s(t-u-\varphi(y^{\prime}))}(v(y,t)-v(y^{\prime},t))\,dt.$
For $y,y^{\prime}\in Y_{j}$,
$|I|\leq|v|_{\infty}\int_{u}^{1}e^{\epsilon(|1_{Y_{j}}\varphi|_{\infty}+u-t)}|s||\varphi(y)-\varphi(y^{\prime})|\,dt\ll|s||v|_{\infty}\,e^{\epsilon|1_{Y_{j}}\varphi|_{\infty}}d(Fy,Fy^{\prime})^{\eta}$
by (2.1), and
$|J|\leq\int_{u}^{1}e^{\epsilon(|1_{Y_{j}}\varphi|_{\infty}+u-t)}|v(y,t)-v(y^{\prime},t)|\,dt\leq
e^{\epsilon|1_{Y_{j}}\varphi|_{\infty}}|v|_{\eta}\,d(y,y^{\prime})^{\eta}.$
Hence
$|\widehat{V}(s)(y,u)-\widehat{V}(s)(y^{\prime},u)|_{\eta}\ll|s|e^{\epsilon|1_{Y_{j}}\varphi|_{\infty}}\|v\|_{\eta}\,d(Fy,Fy^{\prime})^{\eta}$.
It follows from the estimates for $1_{Y_{j}}\widehat{V}(s)$ together with
(2.3) that
$\|\widehat{U}_{2}(s)v\|_{\eta}\ll{\textstyle\sum_{j}}|s|\mu(Y_{j})e^{\epsilon|1_{Y_{j}}\varphi|_{\infty}}\|v\|_{\eta}$.
By (2.2), $\|\widehat{U}_{2}(s)v\|_{\eta}\ll|s|\|v\|_{\eta}$ for $\epsilon$
sufficiently small. We conclude that
$\|\widehat{U}(s)v\|_{\eta}\ll|s|\|v\|_{\eta}$. ∎
### 3.2 From $\widehat{T}$ on ${\widetilde{Y}}$ to $\widehat{L}$ on
$Y^{\varphi}$
Lemma 3.1 and Proposition 3.2 yield analyticity and estimates for
$\widehat{T}=\widehat{U}(I-\widehat{R})^{-1}$ on ${\widetilde{Y}}$. In this
subsection, we show how these properties are inherited by
$\widehat{L}(s)=\int_{0}^{\infty}e^{-st}L_{t}\,dt$ on $Y^{\varphi}$.
###### Remark 3.3
The approach in this subsection is similar to that in [9, Section 5] but there
are some important differences. The rationale behind the two-step
decomposition in Propositions 3.4 and 3.5 below is that the discreteness of
the decomposition in Proposition 3.4 simplifies many formulas significantly.
In particular, the previously problematic term $E_{t}$ in [9] becomes
elementary (and vanishes for large $t$ when $\varphi$ is bounded). The
decomposition in Proposition 3.5 remains continuous to simplify the estimates
in Proposition 3.8.
Since the setting in [9] is different (infinite ergodic theory, reinducing), we
keep the exposition here self-contained even where the estimates coincide with
those in [9].
Define
$\displaystyle A_{n}:L^{1}({\widetilde{Y}})\to L^{1}(Y^{\varphi}),$
$\displaystyle(A_{n}v)(y,u)=1_{\\{n\leq u<n+1\\}}(L_{n}v)(y,u),\;n\geq 0,$
$\displaystyle E_{t}:L^{1}(Y^{\varphi})\to L^{1}(Y^{\varphi}),$
$\displaystyle(E_{t}v)(y,u)=1_{\\{[t]+1\leq
u\leq\varphi(y)\\}}(L_{t}v)(y,u),\;t>0.$
###### Proposition 3.4
$\displaystyle L_{t}=\sum_{j=0}^{[t]}A_{j}1_{\widetilde{Y}}L_{t-j}+E_{t}$ for
$t>0$.
###### Proof.
For $y\in Y$, $u\in(0,\varphi(y))$,
$\displaystyle(L_{t}v)(y,u)$ $\displaystyle=\sum_{j=0}^{[t]}1_{\\{j\leq
u<j+1\\}}(L_{t}v)(y,u)+1_{\\{[t]+1\leq u\leq\varphi(y)\\}}(L_{t}v)(y,u)$
$\displaystyle=\sum_{j=0}^{[t]}(A_{j}L_{t-j}v)(y,u)+(E_{t}v)(y,u).$
Now use that $A_{n}=A_{n}1_{\widetilde{Y}}$. ∎
Next, define
$\displaystyle B_{t}:L^{1}(Y^{\varphi})\to L^{1}({\widetilde{Y}}),$
$\displaystyle B_{t}v=1_{\widetilde{Y}}L_{t}(1_{\Delta_{t}}v),$ $\displaystyle
G_{t}:L^{1}(Y^{\varphi})\to L^{1}({\widetilde{Y}}),$ $\displaystyle
G_{t}v=B_{t}(\omega(t)v),$ $\displaystyle H_{t}:L^{1}(Y^{\varphi})\to
L^{1}({\widetilde{Y}}),$ $\displaystyle
H_{t}v=1_{\widetilde{Y}}L_{t}(1_{\Delta^{\prime}_{t}}v),$
for $t>0$, where
$\displaystyle\Delta_{t}$ $\displaystyle=\\{(y,u)\in
Y^{\varphi}:\varphi(y)-t\leq u<\varphi(y)-t+1\\}$
$\displaystyle\Delta^{\prime}_{t}$ $\displaystyle=\\{(y,u)\in
Y^{\varphi}:u<\varphi(y)-t\\},\qquad\omega(t)(y,u)=\varphi(y)-u-t+1.$
###### Proposition 3.5
$\displaystyle
1_{\widetilde{Y}}L_{t}=\int_{0}^{t}T_{t-\tau}B_{\tau}\,d\tau+G_{t}+H_{t}$ for
$t>0$.
###### Proof.
Let $y\in Y$, $u\in[0,\varphi(y)]$. Then
$\displaystyle\int_{0}^{t}1_{\Delta_{\tau}}(y,u)\,d\tau$
$\displaystyle=\int_{0}^{t}1_{\\{\varphi(y)-u\leq\tau\leq\varphi(y)-u+1\\}}\,d\tau$
$\displaystyle=1_{\\{t\geq\varphi(y)-u+1\\}}+1_{\\{\varphi(y)-u\leq
t<\varphi(y)-u+1\\}}(t-\varphi(y)+u)$
$\displaystyle=1-1_{\\{t<\varphi(y)-u+1\\}}+1_{\\{\varphi(y)-u\leq
t<\varphi(y)-u+1\\}}(t-\varphi(y)+u)$
$\displaystyle=1-1_{\Delta_{t}^{\prime}}(y,u)+1_{\Delta_{t}}(y,u)(t-\varphi(y)+u-1).$
Hence
$\int_{0}^{t}1_{\Delta_{\tau}}\,d\tau=1-1_{\Delta_{t}}\omega(t)-1_{\Delta^{\prime}_{t}}.$
It follows that
$\displaystyle\int_{0}^{t}T_{t-\tau}B_{\tau}$ $\displaystyle
v\,d\tau=1_{\widetilde{Y}}\int_{0}^{t}L_{t-\tau}1_{\widetilde{Y}}B_{\tau}v\,d\tau=1_{\widetilde{Y}}\int_{0}^{t}L_{t-\tau}B_{\tau}v\,d\tau$
$\displaystyle=1_{\widetilde{Y}}\int_{0}^{t}L_{t-\tau}L_{\tau}(1_{\Delta_{\tau}}v)\,d\tau=1_{\widetilde{Y}}L_{t}\Big{(}\int_{0}^{t}1_{\Delta_{\tau}}v\,d\tau\Big{)}=1_{\widetilde{Y}}L_{t}v-G_{t}v-H_{t}v$
as required. ∎
We have already defined the Laplace transforms $\widehat{L}(s)$ and
$\widehat{T}(s)$ for $s=a+ib$ with $a>0$. Similarly, define
$\displaystyle\widehat{B}(s)$
$\displaystyle=\int_{0}^{\infty}e^{-st}B_{t}\,dt,\qquad$
$\displaystyle\widehat{E}(s)$
$\displaystyle=\int_{0}^{\infty}e^{-st}E_{t}\,dt,$
$\displaystyle\widehat{G}(s)$
$\displaystyle=\int_{0}^{\infty}e^{-st}G_{t}\,dt,\qquad$
$\displaystyle\widehat{H}(s)$
$\displaystyle=\int_{0}^{\infty}e^{-st}H_{t}\,dt.$
Also, we define the discrete transform
$\displaystyle\widehat{A}(s)=\sum_{n=0}^{\infty}e^{-sn}A_{n}.$
###### Corollary 3.6
$\widehat{L}(s)=\widehat{A}(s)\widehat{T}(s)\widehat{B}(s)+\widehat{A}(s)\widehat{G}(s)+\widehat{A}(s)\widehat{H}(s)+\widehat{E}(s)$
for $a>0$.
###### Proof.
By Proposition 3.4,
$\displaystyle\widehat{L}(s)-\widehat{E}(s)$
$\displaystyle=\int_{0}^{\infty}e^{-st}\sum_{j=0}^{[t]}A_{j}1_{\widetilde{Y}}L_{t-j}\,dt=\sum_{j=0}^{\infty}e^{-sj}A_{j}1_{\widetilde{Y}}\int_{j}^{\infty}e^{-s(t-j)}L_{t-j}\,dt$
$\displaystyle=\widehat{A}(s)1_{\widetilde{Y}}\int_{0}^{\infty}e^{-st}L_{t}\,dt=\widehat{A}(s)1_{\widetilde{Y}}\widehat{L}(s).$
Hence $\widehat{L}=\widehat{A}1_{\widetilde{Y}}\widehat{L}+\widehat{E}$. In
addition, by Proposition 3.5, $\displaystyle
1_{\widetilde{Y}}\widehat{L}=\widehat{T}\widehat{B}+\widehat{G}+\widehat{H}$.
∎
###### Proposition 3.7
Let $\delta>\epsilon>0$. Then there is a constant $C>0$ such that
* (a)
${\|\widehat{A}(s)\|}_{{\mathcal{F}}_{\eta}({\widetilde{Y}})\to{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})}\leq
1$,
* (b)
${\|\widehat{E}(s)\|}_{{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})\to{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})}\leq
C$,
* (c)
${\|\widehat{H}(s)\|}_{{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})\to{\mathcal{F}}_{\eta}({\widetilde{Y}})}\leq
e^{\delta}$,
for $|a|\leq\epsilon$.
###### Proof.
(a) Let $v\in{\mathcal{F}}_{\eta}({\widetilde{Y}})$. Let
$(y,u),\,(y^{\prime},u)\in Y_{j}^{\varphi}$, $j\geq 1$. Since
$(A_{n}v)(y,u)=1_{\\{n\leq u<n+1\\}}v(y,u-n)$,
$(\widehat{A}(s)v)(y,u)=\sum_{n=0}^{\infty}e^{-sn}1_{\\{n\leq
u<n+1\\}}v(y,u-n)=e^{-s[u]}v(y,u-[u]).$
Hence
$|(\widehat{A}(s)v)(y,u)|\leq e^{\epsilon
u}|v|_{\infty},\quad|(\widehat{A}(s)v)(y,u)-(\widehat{A}(s)v)(y^{\prime},u)|\leq
e^{\epsilon u}|v|_{\eta}\,d(y,y^{\prime})^{\eta}.$
That is, $|\widehat{A}(s)v|_{\epsilon,\infty}\leq|v|_{\infty}$,
$|\widehat{A}(s)v|_{\epsilon,\eta}\leq|v|_{\eta}$. Hence
$\|\widehat{A}(s)v\|_{\delta,\eta}\leq\|\widehat{A}(s)v\|_{\epsilon,\eta}\leq\|v\|_{\eta}$.
(b) We take $C=1/(\delta-\epsilon)$. Let
$v\in{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})$. Let $(y,u),\,(y^{\prime},u)\in
Y_{j}^{\varphi}$, $j\geq 1$. Note that $(E_{t}v)(y,u)=1_{\\{[t]+1\leq
u\\}}v(y,u-t)$, so
$(\widehat{E}(s)v)(y,u)=\int_{0}^{\infty}e^{-st}1_{\\{[t]+1\leq
u\\}}v(y,u-t)\,dt.$
Hence
$|(\widehat{E}(s)v)(y,u)|\leq\int_{0}^{\infty}e^{\epsilon
t}|v|_{\delta,\infty}\,e^{\delta(u-t)}\,dt=C|v|_{\delta,\infty}\,e^{\delta
u},$
and
$|(\widehat{E}(s)v)(y,u)-(\widehat{E}(s)v)(y^{\prime},u)|\leq\int_{0}^{\infty}e^{\epsilon
t}|v|_{\delta,\eta}\,d(y,y^{\prime})^{\eta}e^{\delta(u-t)}\,dt=Ce^{\delta
u}|v|_{\delta,\eta}\,d(y,y^{\prime})^{\eta}.$
That is, $|\widehat{E}(s)v|_{\delta,\infty}\leq|v|_{\delta,\infty}$ and
$|\widehat{E}(s)v|_{\delta,\eta}\leq|v|_{\delta,\eta}$.
(c) Let $v\in{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})$. Let
$(y,u),\,(y^{\prime},u)\in{\widetilde{Y}}_{j}$, $j\geq 1$. Then
$(H_{t}v)(y,u)=1_{\\{t<u\\}}v(y,u-t)$ and
$(\widehat{H}(s)v)(y,u)=\int_{0}^{u}e^{-st}v(y,u-t)\,dt$. Hence,
$\displaystyle|\widehat{H}(s)v|_{\infty}\leq
e^{\delta}|v|_{\delta,\infty}\quad\text{and}\quad|(\widehat{H}(s)v)(y,u)-(\widehat{H}(s)v)(y^{\prime},u)|\leq
e^{\delta}|v|_{\delta,\eta}\,d(y,y^{\prime})^{\eta}.$
The result follows. ∎
###### Proposition 3.8
There exist $\delta>\epsilon>0$ and $C>0$ such that
${\|\widehat{B}(s)\|}_{{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})\to{\mathcal{F}}_{\eta}({\widetilde{Y}})}\leq
C|s|\quad\text{and}\quad{\|\widehat{G}(s)\|}_{{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})\to{\mathcal{F}}_{\eta}({\widetilde{Y}})}\leq
C|s|\quad\text{for $|a|\leq\epsilon$.}$
###### Proof.
Let $v\in L^{1}(Y^{\varphi})$, $w\in L^{\infty}({\widetilde{Y}})$. Using that
$F_{t}(y,u)=(Fy,u+t-\varphi(y))$ for $(y,u)\in\Delta_{t}$,
$\displaystyle\int_{\widetilde{Y}}B_{t}v\,w\,d\tilde{\mu}$
$\displaystyle=\bar{\varphi}\int_{Y^{\varphi}}L_{t}(1_{\Delta_{t}}v)\,w\,d\mu^{\varphi}=\bar{\varphi}\int_{Y^{\varphi}}1_{\Delta_{t}}v\,w\circ
F_{t}\,d\mu^{\varphi}$ $\displaystyle=\int_{Y}\int_{0}^{\varphi(y)}1_{\\{0\leq
u+t-\varphi(y)<1\\}}v(y,u)w(Fy,u+t-\varphi)\,du\,d\mu$
$\displaystyle=\int_{Y}\int_{t-\varphi(y)}^{t}1_{\\{0\leq
u<1\\}}v(y,u+\varphi(y)-t)w(Fy,u)\,du\,d\mu$
$\displaystyle=\int_{\widetilde{Y}}v_{t}\,w\circ\widetilde{F}\,d\tilde{\mu}=\int_{\widetilde{Y}}\widetilde{R}v_{t}\,w\,d\tilde{\mu}$
where $v_{t}(y,u)=1_{\\{0<u+\varphi(y)-t<\varphi(y)\\}}v(y,u+\varphi(y)-t)$.
Hence $B_{t}v=\widetilde{R}v_{t}$ and it follows immediately that
$G_{t}v=\widetilde{R}(\omega(t)v)_{t}$. But
$(\omega(t)v)_{t}(y,u)=1_{\\{0<u+\varphi(y)-t<\varphi(y)\\}}(\omega(t)v)(y,u+\varphi(y)-t)=(1-u)v_{t}(y,u),$
so $(G_{t}v)(y,u)=(1-u)(B_{t}v)(y,u)$.
Next, $\widehat{B}(s)v=\widetilde{R}\widehat{V}(s)$ where
$\displaystyle\widehat{V}(s)(y,u)=\int_{0}^{\infty}e^{-st}v_{t}(y,u)\,dt$
$\displaystyle=\int_{u}^{u+\varphi(y)}e^{-st}v(y,u+\varphi(y)-t)\,dt$
$\displaystyle=\int_{0}^{\varphi(y)}e^{-s(\varphi(y)+u-t)}v(y,t)\,dt.$
It is immediate that
$\displaystyle(\widehat{G}(s)v)(y,u)=(1-u)(\widehat{B}(s)v)(y,u).$ (3.1)
Suppose that $\delta>\epsilon>0$ are fixed. Let
$v\in{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})$. Let
$(y,u),\,(y^{\prime},u)\in{\widetilde{Y}}_{j}$, $j\geq 1$. Then
$|\widehat{V}(s)(y,u)|\leq\int_{0}^{\varphi(y)}e^{-a\,(\varphi(y)+u-t)}|v|_{\delta,\infty}\,e^{\delta
t}\,dt\ll e^{\delta\varphi(y)}|v|_{\delta,\infty}$
and so $|1_{Y_{j}}\widehat{V}(s)|_{\infty}\ll
e^{\delta|1_{Y_{j}}\varphi|_{\infty}}|v|_{\delta,\infty}$.
Next, suppose without loss that $\varphi(y^{\prime})\leq\varphi(y)$. Then
$\widehat{V}(s)(y,u)-\widehat{V}(s)(y^{\prime},u)=J_{1}+J_{2}+J_{3}$
where
$\displaystyle J_{1}$
$\displaystyle=\int_{0}^{\varphi(y)}(e^{-s(\varphi(y)+u-t)}-e^{-s(\varphi(y^{\prime})+u-t)})v(y,t)\,dt,$
$\displaystyle J_{2}$
$\displaystyle=\int_{0}^{\varphi(y)}e^{-s(\varphi(y^{\prime})+u-t)}(v(y,t)-v(y^{\prime},t))\,dt,$
$\displaystyle J_{3}$
$\displaystyle=\int_{\varphi(y^{\prime})}^{\varphi(y)}e^{-s(\varphi(y^{\prime})+u-t)}v(y^{\prime},t)\,dt.$
For notational convenience we suppose that $a\in(-\epsilon,0)$ since the range
$a\geq 0$ is simpler. Using (2.1),
$\displaystyle|J_{1}|$
$\displaystyle\leq\int_{0}^{\varphi(y)}e^{\epsilon(|1_{Y_{j}}\varphi|_{\infty}+1-t)}|s||\varphi(y)-\varphi(y^{\prime})||v|_{\delta,\infty}\,e^{\delta
t}\,dt$
$\displaystyle\ll|s|\varphi(y)e^{\delta|1_{Y_{j}}\varphi|_{\infty}}\,d(Fy,Fy^{\prime})^{\eta}|v|_{\delta,\infty}\ll|s|e^{2\delta|1_{Y_{j}}\varphi|_{\infty}}\,d(Fy,Fy^{\prime})^{\eta}|v|_{\delta,\infty},$
$\displaystyle|J_{2}|$
$\displaystyle\leq\int_{0}^{\varphi(y)}e^{\epsilon(|1_{Y_{j}}\varphi|_{\infty}+1-t)}|v|_{\delta,\eta}\,e^{\delta
t}d(y,y^{\prime})^{\eta}\,dt\ll
e^{\delta|1_{Y_{j}}\varphi|_{\infty}}\,d(y,y^{\prime})^{\eta}|v|_{\delta,\eta},$
$\displaystyle|J_{3}|$
$\displaystyle\leq\int_{\varphi(y^{\prime})}^{\varphi(y)}e^{\epsilon(|1_{Y_{j}}\varphi|_{\infty}+1-t)}|v|_{\delta,\infty}\,e^{\delta
t}\,dt\ll
e^{2\delta|1_{Y_{j}}\varphi|_{\infty}}|v|_{\delta,\infty}\,d(Fy,Fy^{\prime})^{\eta}.$
Hence
$|\widehat{V}(s)(y,u)-\widehat{V}(s)(y^{\prime},u)|\ll|s|e^{2\delta|1_{Y_{j}}\varphi|_{\infty}}\|v\|_{\delta,\eta}\,d(Fy,Fy^{\prime})^{\eta}.$
Now, for $(y,u)\in{\widetilde{Y}}$,
$(\widehat{B}(s)v)(y,u)=(\widetilde{R}\widehat{V}(s))(y,u)={\textstyle\sum_{j}}g(y_{j})\widehat{V}(s)(y_{j},u),$
where $y_{j}$ is the unique preimage of $y$ under $F|Y_{j}$. It follows from
the estimates for $\widehat{V}(s)$ together with (2.3) that
$\|\widehat{B}(s)v\|_{\eta}\ll|s|{\textstyle\sum_{j}}\mu(Y_{j})e^{2\delta|1_{Y_{j}}\varphi|_{\infty}}\|v\|_{\delta,\eta}.$
Shrinking $\delta$, the desired estimate for $\widehat{B}$ follows from (2.2).
Finally, the estimate for $\widehat{G}$ follows from (3.1). ∎
###### Proposition 3.9
$\int_{\widetilde{Y}}\widehat{B}(0)v\,d\tilde{\mu}=\bar{\varphi}\int_{Y^{\varphi}}v\,d\mu^{\varphi}$
for $v\in L^{1}(Y^{\varphi})$.
###### Proof.
By the definition of $\widehat{B}$,
$\displaystyle\int_{\widetilde{Y}}$
$\displaystyle\widehat{B}(0)v\,d\tilde{\mu}=\int_{\widetilde{Y}}\int_{0}^{\infty}L_{t}(1_{\Delta_{t}}v)\,dt\,d\tilde{\mu}=\bar{\varphi}\int_{0}^{\infty}\int_{Y^{\varphi}}L_{t}(1_{\Delta_{t}}v)\,d\mu^{\varphi}\,dt$
$\displaystyle=\bar{\varphi}\int_{0}^{\infty}\int_{Y^{\varphi}}1_{\Delta_{t}}v\,d\mu^{\varphi}\,dt=\bar{\varphi}\int_{Y^{\varphi}}\int_{0}^{\infty}1_{\\{\varphi-u<t<\varphi-u+1\\}}v\,dt\,d\mu^{\varphi}=\bar{\varphi}\int_{Y^{\varphi}}v\,d\mu^{\varphi},$
as required. ∎
###### Lemma 3.10
Write $s=a+ib\in{\mathbb{C}}$. There exist $\epsilon>0$, $\delta>0$,
$m_{2}\geq 0$ and $C>0$ such that
* (a)
$s\mapsto\widehat{L}(s):{\mathcal{F}}_{\delta,\eta}^{0}(Y^{\varphi})\to{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})$
is analytic on $\\{|a|<\epsilon\\}$;
* (b)
$s\mapsto\widehat{L}(s):{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})\to{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})$
is analytic on $\\{|a|<\epsilon\\}$ except for a simple pole at $s=0$;
* (c)
$\|\widehat{L}(s)v\|_{\delta,\eta}\leq C|b|^{m_{2}}\|v\|_{\delta,\eta}$ for
$|a|\leq\epsilon$, $|b|\geq 1$,
$v\in{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})$.
###### Proof.
Recall that
$\widehat{L}=\widehat{A}\widehat{T}\widehat{B}+\widehat{A}\widehat{G}+\widehat{A}\widehat{H}+\widehat{E},\qquad\widehat{T}=\widehat{U}(I-\widehat{R})^{-1}$
where $\widehat{U}$, $\widehat{A}$, $\widehat{B}$, $\widehat{G}$,
$\widehat{H}$ and $\widehat{E}$ are analytic by Propositions 3.2, 3.7 and 3.8.
Hence part (b) follows immediately from Lemma 3.1(b). Also, part (c) follows
using Lemma 3.1(c).
By Proposition 3.9,
$\widehat{B}(0)({\mathcal{F}}_{\delta,\eta}^{0}(Y^{\varphi}))\subset{\mathcal{F}}_{\eta}^{0}({\widetilde{Y}})$.
Hence the simple pole at $s=0$ for $(I-\widehat{R})^{-1}\widehat{B}$
disappears on restriction to ${\mathcal{F}}_{\delta,\eta}^{0}(Y^{\varphi})$ by
Lemma 3.1(a). This proves part (a). ∎
### 3.3 Moving the contour of integration
###### Proposition 3.11
Let $m\geq 1$. Let $v\in{\mathcal{F}}_{\delta,\eta,m}(Y^{\varphi})$ with good
support. Then
$\widehat{L}(s)v=\sum_{j=0}^{m-1}(-1)^{j}s^{-(j+1)}\partial_{t}^{j}v+(-1)^{m}s^{-m}\widehat{L}(s)\partial_{t}^{m}v$
for $a>0$.
###### Proof.
Recall that $\operatorname{supp}v\subset\\{(y,u)\in
Y^{\varphi}:u\in[r,\varphi(y)-r]\\}$ for some $r>0$. For $h\in[0,r]$, we can
define $(\Psi_{h}v)(y,u)=v(y,u-h)$ and then $(\Psi_{h}v)\circ F_{h}=v$.
Let $w\in L^{\infty}(Y^{\varphi})$ and write
$\rho_{v,w}(t)=\int_{Y^{\varphi}}v\,w_{t}\,d\mu^{\varphi}$ where $w_{t}=w\circ
F_{t}$. Then for $h\in[0,r]$,
$\rho_{v,w}(t+h)=\int_{Y^{\varphi}}v\,w_{t}\circ
F_{h}\,d\mu^{\varphi}=\int_{Y^{\varphi}}(\Psi_{h}v)\circ F_{h}\,w_{t}\circ
F_{h}\,d\mu^{\varphi}=\int_{Y^{\varphi}}\Psi_{h}v\,w_{t}\,d\mu^{\varphi}.$
Hence
$h^{-1}(\rho_{v,w}(t+h)-\rho_{v,w}(t))=\int_{Y^{\varphi}}h^{-1}(\Psi_{h}v-v)\,w_{t}\,d\mu^{\varphi}$
so
$\rho^{\prime}_{v,w}(t)=-\int_{Y^{\varphi}}\partial_{t}v\,w_{t}\,d\mu^{\varphi}=-\int_{Y^{\varphi}}\partial_{t}v\,w\circ
F_{t}\,d\mu^{\varphi}=-\rho_{\partial_{t}v,w}(t).$
Inductively, $\rho^{(j)}_{v,w}(t)=(-1)^{j}\rho_{\partial_{t}^{j}v,w}(t)$.
Now
$\int_{Y^{\varphi}}\widehat{L}(s)v\,w\,d\mu^{\varphi}=\int_{0}^{\infty}e^{-st}\int_{Y^{\varphi}}L_{t}v\,w\,d\mu^{\varphi}\,dt=\int_{0}^{\infty}e^{-st}\rho_{v,w}(t)\,dt$,
so repeatedly integrating by parts,
$\displaystyle\int_{Y^{\varphi}}\widehat{L}(s)v\,w\,d\mu^{\varphi}$
$\displaystyle=\sum_{j=0}^{m-1}s^{-(j+1)}\rho^{(j)}_{v,w}(0)+s^{-m}\int_{0}^{\infty}e^{-st}\rho^{(m)}_{v,w}(t)\,dt$
$\displaystyle=\sum_{j=0}^{m-1}(-1)^{j}s^{-(j+1)}\rho_{\partial_{t}^{j}v,w}(0)+(-1)^{m}s^{-m}\int_{0}^{\infty}e^{-st}\rho_{\partial_{t}^{m}v,w}(t)\,dt$
$\displaystyle=\int_{Y^{\varphi}}\sum_{j=0}^{m-1}(-1)^{j}s^{-(j+1)}\partial_{t}^{j}v\,w\,d\mu^{\varphi}+(-1)^{m}s^{-m}\int_{0}^{\infty}e^{-st}\rho_{\partial_{t}^{m}v,w}(t)\,dt.$
Finally,
$\int_{0}^{\infty}e^{-st}\rho_{\partial_{t}^{m}v,w}(t)\,dt=\int_{Y^{\varphi}}\widehat{L}(s)\partial_{t}^{m}v\,w\,d\mu^{\varphi}$
and the result follows since $w\in L^{\infty}(Y^{\varphi})$ is arbitrary. ∎
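The repeated integration by parts in this proof is the scalar Laplace identity $\int_{0}^{\infty}e^{-st}\rho\,dt=\sum_{j=0}^{m-1}s^{-(j+1)}\rho^{(j)}(0)+s^{-m}\int_{0}^{\infty}e^{-st}\rho^{(m)}\,dt$. A quick numerical check for $m=2$, purely illustrative (NumPy; $\rho(t)=e^{-t}$ is an arbitrary stand-in for the correlation function $\rho_{v,w}$):

```python
import numpy as np

s = 0.7
t = np.linspace(0.0, 60.0, 600001)
h = t[1] - t[0]
trapezoid = lambda y: h * (y[0]/2 + y[1:-1].sum() + y[-1]/2)

rho = np.exp(-t)                       # rho(0) = 1, rho'(0) = -1, rho'' = rho
lhs = trapezoid(np.exp(-s*t) * rho)    # Laplace transform of rho at s

m = 2
rhs = s**-1 * 1.0                      # j = 0 term: s^{-1} rho(0)
rhs += s**-2 * (-1.0)                  # j = 1 term: s^{-2} rho'(0)
rhs += s**-m * trapezoid(np.exp(-s*t) * rho)   # s^{-m} * Laplace of rho'' (= rho)
assert abs(lhs - rhs) < 1e-5
```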
We can now estimate $\|L_{t}v\|_{\delta,\eta}$.
###### Corollary 3.12
Under the assumptions of Theorem 2.2, there exist $\epsilon>0$, $m_{3}\geq
1$ and $C>0$ such that
$\|L_{t}v\|_{\delta,\eta}\leq Ce^{-\epsilon
t}\|v\|_{\delta,\eta,m_{3}}\quad\text{for all $t>0$}$
for all $v\in{\mathcal{F}}_{\delta,\eta,m_{3}}^{0}(Y^{\varphi})$ with good
support.
###### Proof.
Let $m_{3}=m_{2}+2$. By Lemma 3.10(a),
$\widehat{L}(s):{\mathcal{F}}_{\delta,\eta,m_{3}}^{0}(Y^{\varphi})\to{\mathcal{F}}_{\delta,\eta}(Y^{\varphi})$
is analytic for $|a|\leq\epsilon$. The alternative expression in Proposition
3.11 is also analytic on this region (the apparent singularity at $s=0$ is
removable by the equality with the analytic function $\widehat{L}$). Hence we
can move the contour of integration to $s=-\epsilon+ib$ when computing the
inverse Laplace transform, to obtain
$\displaystyle L_{t}v$
$\displaystyle=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{st}\Big{(}\sum_{j=0}^{m_{3}-1}(-1)^{j}s^{-(j+1)}\partial_{t}^{j}v+(-1)^{m_{3}}s^{-{m_{3}}}\widehat{L}(s)\partial_{t}^{m_{3}}v\Big{)}\,db$
$\displaystyle=\frac{e^{-\epsilon
t}}{2\pi}\sum_{j=0}^{m_{3}-1}(-1)^{j}\partial_{t}^{j}v\int_{-\infty}^{\infty}e^{ibt}s^{-(j+1)}\,db+(-1)^{m_{3}}\frac{e^{-\epsilon
t}}{2\pi}\int_{-\infty}^{\infty}e^{ibt}s^{-{m_{3}}}\widehat{L}(s)\partial_{t}^{m_{3}}v\,db.$
The final term is estimated using Lemma 3.10(b,c):
$\Big{\|}\int_{-\infty}^{\infty}e^{ibt}s^{-{m_{3}}}\widehat{L}(s)\partial_{t}^{m_{3}}v\,db\Big{\|}_{\delta,\eta}\ll\int_{-\infty}^{\infty}(1+|b|)^{-(m_{2}+2)}(1+|b|)^{m_{2}}\|\partial_{t}^{m_{3}}v\|_{\delta,\eta}\,db\ll\|v\|_{\delta,\eta,{m_{3}}}.$
Clearly, the integrals $\int_{-\infty}^{\infty}e^{ibt}s^{-(j+1)}\,db$ converge
absolutely for $j\geq 1$, while the integral for $j=0$ converges as an
improper Riemann integral. Hence altogether we obtain that
$\|L_{t}v\|_{\delta,\eta}\ll e^{-\epsilon t}\|v\|_{\delta,\eta,m_{3}}$. ∎
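The contour shift used above is the scalar principle that, when a Laplace transform is analytic on a strip containing the line $\\{\mathrm{Re}\,s=-\epsilon\\}$, the Bromwich inversion integral may be evaluated on the shifted line, producing the prefactor $e^{-\epsilon t}$. A numerical sketch, purely illustrative, for $f(t)=te^{-t}$, whose transform $\widehat{f}(s)=(s+1)^{-2}$ is analytic on $\\{\mathrm{Re}\,s>-1\\}$:

```python
import numpy as np

def f_hat(s):                      # Laplace transform of f(t) = t e^{-t}
    return 1.0 / (s + 1.0)**2

def bromwich(t, a, B=2000.0, n=400001):
    """(1/2pi) int_{-B}^{B} e^{(a+ib)t} f_hat(a+ib) db along Re s = a."""
    b = np.linspace(-B, B, n)
    h = b[1] - b[0]
    w = np.exp((a + 1j*b) * t) * f_hat(a + 1j*b)
    return (h * (w[0]/2 + w[1:-1].sum() + w[-1]/2)).real / (2*np.pi)

t = 3.0
exact = t * np.exp(-t)
# the line Re s = 0.1 and the shifted line Re s = -0.5 give the same value,
# since no singularity is crossed when moving the contour
assert abs(bromwich(t, 0.1) - exact) < 1e-3
assert abs(bromwich(t, -0.5) - exact) < 1e-3
```

On the shifted line the integrand carries the explicit factor $e^{-0.5t}$, which is the mechanism producing the exponential rate in Corollary 3.12.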
For the proof of Theorem 2.2, it remains to estimate
$\|\partial_{u}L_{t}v\|_{\delta,\eta}$. Recall that the transfer operator
$R_{0}$ for $F$ has weight function $g$. We have the pointwise formula
$(R_{0}^{k}v)(y)=\sum_{F^{k}y^{\prime}=y}g_{k}(y^{\prime})v(y^{\prime})$ where
$g_{k}=g\,\dots\,g\circ F^{k-1}$. Let
$\varphi_{k}=\sum_{j=0}^{k-1}\varphi\circ F^{j}$.
###### Proposition 3.13
Let $v\in L^{1}(Y^{\varphi})$. Then for all $t>0$, $(y,u)\in Y^{\varphi}$,
$(L_{t}v)(y,u)=\sum_{k=0}^{[t/2]}\sum_{F^{k}y^{\prime}=y}g_{k}(y^{\prime})1_{\\{0\leq
u-t+\varphi_{k}(y^{\prime})<\varphi(y^{\prime})\\}}v(y^{\prime},u-t+\varphi_{k}(y^{\prime})).$
###### Proof.
The lap number $N_{t}(y,u)\in[0,t/2]\cap{\mathbb{N}}$ is the unique integer
$k\geq 0$ such that $u+t-\varphi_{k}(y)\in[0,\varphi(F^{k}y))$. In particular,
$F_{t}(y,u)=(F^{N_{t}(y,u)}y,u+t-\varphi_{N_{t}(y,u)}(y))$. For $w\in
L^{\infty}(Y^{\varphi})$,
$\displaystyle\int_{Y^{\varphi}}$ $\displaystyle
L_{t}(1_{\\{N_{t}=k\\}}v)\,w\,d\mu^{\varphi}=\int_{Y^{\varphi}}1_{\\{N_{t}=k\\}}v\,w\circ
F_{t}\,d\mu^{\varphi}$
$\displaystyle=\bar{\varphi}^{-1}\int_{Y}\int_{0}^{\varphi(y)}1_{\\{0\leq
u+t-\varphi_{k}(y)<\varphi(F^{k}y)\\}}v(y,u)\,w(F^{k}y,u+t-\varphi_{k}(y))\,du\,d\mu$
$\displaystyle=\bar{\varphi}^{-1}\int_{Y}\int_{0}^{\varphi(F^{k}y)}1_{\\{0\leq
u-t+\varphi_{k}(y)<\varphi(y)\\}}v(y,u-t+\varphi_{k}(y))\,w(F^{k}y,u)\,du\,d\mu.$
Writing $v_{t,k}^{u}(y)=1_{\\{0\leq
u-t+\varphi_{k}(y)<\varphi(y)\\}}v(y,u-t+\varphi_{k}(y))$ and
$w^{u}(y)=w(y,u)$,
$\displaystyle\int_{Y^{\varphi}}$ $\displaystyle
L_{t}(1_{\\{N_{t}=k\\}}v)\,w\,d\mu^{\varphi}=\bar{\varphi}^{-1}\int_{0}^{\infty}\int_{Y}1_{\\{u<\varphi\circ
F^{k}\\}}v_{t,k}^{u}\,w^{u}\circ F^{k}\,d\mu\,du$
$\displaystyle=\bar{\varphi}^{-1}\int_{0}^{\infty}\int_{Y}1_{\\{u<\varphi\\}}R_{0}^{k}v_{t,k}^{u}\,w^{u}\,d\mu\,du=\int_{Y^{\varphi}}(R_{0}^{k}v_{t,k}^{u})(y)\,w(y,u)\,d\mu^{\varphi}.$
Hence,
$\displaystyle(L_{t}v)(y,u)=\sum_{k=0}^{[t/2]}(L_{t}(1_{\\{N_{t}=k\\}}v))(y,u)=\sum_{k=0}^{[t/2]}(R_{0}^{k}v_{t,k}^{u})(y).$
The result follows from the pointwise formula for $R_{0}^{k}$. ∎
###### Proof of Theorem 2.2
Let $m=m_{3}+1$. By Corollary 3.12,
$\|L_{t}v\|_{\delta,\eta}\ll e^{-\epsilon t}\|v\|_{\delta,\eta,m}$.
Recall that $\partial_{u}$ denotes the ordinary derivative with respect to $u$
at $0<u<\varphi(y)$ and denotes the appropriate one-sided derivative at $u=0$
and $u=\varphi(y)$. Since $v$ has good support, the indicator functions in the
right-hand side of the formula in Proposition 3.13 are constant on the support
of $v$. It follows that $\partial_{u}L_{t}v=L_{t}(\partial_{u}v)$. By
Corollary 3.12,
$\|\partial_{u}L_{t}v\|_{\delta,\eta}=\|L_{t}(\partial_{u}v)\|_{\delta,\eta}\ll
e^{-\epsilon t}\|\partial_{u}v\|_{\delta,\eta,m_{3}}\leq e^{-\epsilon
t}\|v\|_{\delta,\eta,m}.$
Hence, $\|L_{t}v\|_{\delta,\eta,1}\ll e^{-\epsilon t}\|v\|_{\delta,\eta,m}$ as
required. ∎
## References
* [1] J. Aaronson. _An Introduction to Infinite Ergodic Theory_. Math. Surveys and Monographs 50, Amer. Math. Soc., 1997.
* [2] J. Aaronson and M. Denker. Local limit theorems for partial sums of stationary sequences generated by Gibbs-Markov maps. _Stoch. Dyn._ 1 (2001) 193–237.
* [3] J. Aaronson, M. Denker and M. Urbański. Ergodic theory for Markov fibred systems and parabolic rational maps. _Trans. Amer. Math. Soc._ 337 (1993) 495–548.
* [4] M. Antoniou and I. Melbourne. Rate of convergence in the weak invariance principle for deterministic systems. _Comm. Math. Phys._ (2019) 1147–1165.
* [5] V. Araújo and I. Melbourne. Exponential decay of correlations for nonuniformly hyperbolic flows with a $C^{1+\alpha}$ stable foliation, including the classical Lorenz attractor. _Ann. Henri Poincaré_ 17 (2016) 2975–3004.
* [6] A. Avila, S. Gouëzel and J. Yoccoz. Exponential mixing for the Teichmüller flow. _Publ. Math. Inst. Hautes Études Sci._ 104 (2006) 143–211.
* [7] V. Baladi and B. Vallée. Exponential decay of correlations for surface semi-flows without finite Markov partitions. _Proc. Amer. Math. Soc._ 133 (2005) 865–874.
* [8] R. Bowen. _Equilibrium States and the Ergodic Theory of Anosov Diffeomorphisms_. Lecture Notes in Math. 470, Springer, Berlin, 1975.
* [9] H. Bruin, I. Melbourne and D. Terhesiu. Rates of mixing for nonMarkov infinite measure semiflows. _Trans. Amer. Math. Soc._ 371 (2019) 7343–7386.
* [10] D. Dolgopyat. On the decay of correlations in Anosov flows. _Ann. of Math._ 147 (1998) 357–390.
* [11] P. Góra and A. Boyarsky. Absolutely continuous invariant measures for piecewise expanding $C^{2}$ transformation in ${\bf R}^{N}$. _Israel J. Math._ 67 (1989) 272–286.
* [12] S. Gouëzel. Berry-Esseen theorem and local limit theorem for non uniformly expanding maps. _Ann. Inst. H. Poincaré Probab. Statist._ 41 (2005) 997–1024.
* [13] F. Hofbauer and G. Keller. Ergodic properties of invariant measures for piecewise monotonic transformations. _Math. Z._ 180 (1982) 119–140.
* [14] G. Keller. Generalized bounded variation and applications to piecewise monotonic transformations. _Z. Wahrsch. Verw. Gebiete_ 69 (1985) 461–478.
* [15] C. Liverani. On contact Anosov flows. _Ann. of Math._ 159 (2004) 1275–1312.
* [16] C. Liverani. Multidimensional expanding maps with singularities: a pedestrian approach. _Ergodic Theory Dynam. Systems_ 33 (2013) 168–182.
* [17] I. Melbourne, N. Paviato and D. Terhesiu. Nonexistence of spectral gaps in Hölder spaces for continuous time dynamical systems. _Israel J. Math._, to appear.
* [18] I. Melbourne and D. Terhesiu. Operator renewal theory for continuous time dynamical systems with finite and infinite measure. _Monatsh. Math._ 182 (2017) 377–431.
* [19] M. Pollicott. On the rate of mixing of Axiom A flows. _Invent. Math._ 81 (1985) 413–426.
* [20] D. Ruelle. _Thermodynamic Formalism_. Encyclopedia of Math. and its Applications 5, Addison Wesley, Massachusetts, 1978.
* [21] M. Rychlik. Bounded variation and invariant measures. _Studia Math._ 76 (1983) 69–80.
* [22] B. Saussol. Absolutely continuous invariant measures for multidimensional expanding maps. _Israel J. Math._ 116 (2000) 223–248.
* [23] Y. G. Sinaĭ. Gibbs measures in ergodic theory. _Russ. Math. Surv._ 27 (1972) 21–70.
* [24] M. Tsujii. Decay of correlations in suspension semi-flows of angle-multiplying maps. _Ergodic Theory Dynam. Systems_ 28 (2008) 291–317.
* [25] M. Tsujii. Quasi-compactness of transfer operators for contact Anosov flows. _Nonlinearity_ 23 (2010) 1495–1545.
* [26] L.-S. Young. Statistical properties of dynamical systems with some hyperbolicity. _Ann. of Math._ 147 (1998) 585–650.
* [27] L.-S. Young. Recurrence times and rates of mixing. _Israel J. Math._ 110 (1999) 153–188.
LIBPhys-UC, Physics Department, University of Coimbra,
Rua Larga, 3004-516 Coimbra, Portugal
# Electroluminescence yield in pure krypton
R.D.P. Mano, C.A.O. Henriques, F.D. Amaro and C.M.B. Monteiro (corresponding author:<EMAIL_ADDRESS>
###### Abstract
The krypton electroluminescence yield was studied, at room temperature, as a
function of electric field in the gas scintillation gap. A large area
avalanche photodiode has been used to allow the simultaneous detection of the
electroluminescence pulses as well as the direct interaction of x-rays, the
latter being used as a reference for the calculation of the number of charge
carriers produced by the electroluminescence pulses and, thus, the
determination of the number of photons impinging the photodiode. An
amplification parameter of 113 photons per kV per drifting electron and a
scintillation threshold of 2.7 Td (0.7 kV cm-1 bar-1 at 293 K) were obtained,
in good agreement with the simulation data reported in the literature. On the
other hand, the ionisation threshold in krypton was found to be around 13.5 Td
(3.4 kV cm-1 bar-1), lower than the value obtained in the most recent
simulation package. The krypton amplification parameter is about 80% and
140% of those measured for xenon and argon, respectively. The
electroluminescence yield in krypton is of great importance for modeling
krypton-based double-phase or high-pressure gas detectors, which may be used
in future rare event detection experiments.
## 1 Introduction
The electroluminescence yield of gaseous xenon and argon has been studied in
detail, both experimentally (e.g. see 1 ; 2 ; 3 ; 4 ; 5 ; 6 ; 7 and
references therein) and through simulation tools 8 ; 9 ; 10 ; 11 ; 12 . At
present, the main drive for those studies is the ongoing development of dual-
phase 13 ; 14 ; 15 ; 16 ; 17 ; 18 ; 19 and high-pressure gaseous 20 ; 21 ; 22
; 23 optical Time Projection Chambers (TPC), which make use of the secondary
scintillation, - electroluminescence (EL) - processes in the gas for the
amplification of the primary ionisation signals produced by radiation
interaction inside the TPC active volume. The R&D of such TPCs aims at
application to Dark Matter search 13 ; 14 ; 15 ; 16 ; 17 and to neutrino
physics, such as neutrino oscillation 18 ; 19 , double beta decay 20 ; 21 ; 22
and double electron capture 24 detection. The physics behind these rare event
detection experiments is of paramount importance in contemporary particle
physics, nuclear physics and cosmology, justifying the enormous R&D efforts
carried out by the scientific community.
The radioactivity of 85Kr has been a drawback for the use of krypton in rare
event detection experiments, making it the least studied of the noble gases.
To the best of our knowledge, electroluminescence in krypton has only been
studied by simulation 8 ; 10 and no experimental results have been published,
up to now, to benchmark the simulation tools.
Nevertheless, there are two experiments that make use of krypton, namely the
measurement of the double electron capture in 78Kr 25 ; 26 ; 27 ; 28 and the
search for solar hadronic axions emitted in the $M1$ transition of 83Kr nuclei
29 ; 30 ; 31 ; 32 . Moreover, 83Kr has also been proposed for inelastic dark
matter search 33 . The enrichment of a given isotope of a noble gas is
nowadays a mature technique, not significantly expensive, allowing the
reduction of the radioactive isotope content to levels tolerable for a
particular experiment.
The double-electron capture half-life is an important benchmark for nuclear
structure models 34 ; 35 ; 36 ; 37 ; 38 , providing vital experimental
constraints. In addition, it presents a significant step in the search for
neutrinoless double electron capture. The latter can complement the search for
neutrinoless double beta decay. Both would unveil the Majorana nature of the
neutrino, access the absolute neutrino mass and contribute to understanding
the dominance of matter over antimatter by means of leptogenesis. On the other
hand, axions and axion-like particles are potential candidates for the
constituent particles of dark matter, which is the main reason for extensive
axion searches, e.g. see 39 and references therein for detailed theoretical
and experimental reviews.
The rare event search experiments referred to above, having krypton as the
target, have been carried out with gas proportional counters using enriched
krypton 25 ; 26 ; 27 ; 28 ; 29 ; 30 ; 31 ; 32 . Nevertheless, optical TPCs
deliver higher gains with reduced electronic noise and greatly improved energy
resolution when compared to proportional counters 40 ; 41 ; 42 ; 43 . In
addition, the use of a 2D-readout for the EL signal allows the reconstruction
of the topology of the ionisation event 42 ; 43 more effectively than the
complex analysis of the waveform associated with the ionisation events in
the above-mentioned proportional counters. Therefore, the use of optical TPCs
will allow larger sensitive volumes, better event discrimination and a more
effective background reduction than the present proportional counters, hence
having the potential for improved sensitivity and accuracy.
Having this in mind, in this work we present an experimental study of the
electroluminescence yield of pure Kr and compare the obtained results with
those attained from simulation studies 8 ; 9 ; 10 . The setup is described in
section 2, and in section 3 we discuss the methodology that was followed in
order to obtain the absolute EL yield, while in section 4 we present the
obtained results and corresponding discussion, summarising the main
conclusions in section 5.
## 2 Experimental setup
In this work, we used a gas proportional scintillation counter (GPSC) 40 ,
Fig.1,
Figure 1: Schematic of the GPSC with a large-area APD as the photosensor.
irradiated with a 1.5-mm collimated beam of 5.9-keV x-rays emitted from a 55Fe
radioactive source, having a Cr filter to absorb the most part of the 6.4-keV
Mn $K_{\beta}$ line. The primary electron cloud resulting from the x-ray
interactions in the absorption region are driven towards the scintillation
region by a weak electric field, with intensity below the gas excitation
threshold. The intensity of the electric field in the scintillation region is
set above the gas excitation threshold, but below the ionisation threshold, to
prevent electron multiplication. Upon crossing the scintillation region, the
electrons are accelerated by the electric field, gaining enough energy to
excite the gas media by electron impact, leading to an electroluminescence
pulse with a large number of VUV photons as a result of the gas de-excitation
processes. The number of VUV photons is proportional to the number of primary
electrons crossing the scintillation region and, thus, to the incident x-ray
energy. The detector response to x-rays has been studied in detail for Xe [46],
Ar [47] and Kr [48] gas fillings.
The GPSC depicted in Fig.1 has a 2.5-cm deep absorption region, a 0.9-cm deep
scintillation region and is filled with Kr at a pressure of 1.1 bar,
continuously purified through St707 SAES getters [44]. G1 and G2 are meshes,
made out of stainless steel wires, 80-$\mu$m in diameter with 900-$\mu$m
spacing. The radiation window holder and its focusing electrode are made of
stainless steel and are kept at negative voltage, while the stainless steel
G2-holder and the detector body are maintained at ground potential. The
voltage difference between the radiation window and G1 determines the electric
field in the absorption region, while the voltage of G1 determines the
electric field in the scintillation region. A Macor piece isolates the
radiation window holder and the G1 holder and its feedthrough, and is vacuum-
sealed onto the stainless steel using a low-vapour-pressure epoxy. The
electroluminescence pulses are read out by a VUV-sensitive silicon large-area
avalanche photodiode (LAAPD) [45], which has a 16-mm diameter active area. The
LAAPD is vacuum-sealed by compressing its enclosure against the detector
bottom plate, using an indium ring. The LAAPD signals are fed through a low-
noise, 1.5 V/pC, charge pre-amplifier followed by an amplifier with 2 $\mu$s
shaping time and are pulse-height analysed with a multi-channel analyser
(MCA).
## 3 Absolute electroluminescence yield measurement methodology
Most of the 5.9-keV x-rays interact in the absorption region, producing
large-amplitude signals in the LAAPD as a result of the electroluminescence.
Nevertheless, a small fraction of the 5.9-keV x-rays are transmitted through
the gas and interact directly in the LAAPD, producing signals of lower
amplitude than those resulting from the x-ray interactions in the
gas. Fig. 2 depicts a typical pulse-height distribution of the signals at the
LAAPD output, obtained when irradiating the detector with 5.9-keV x-rays. It
includes the Kr electroluminescence peak, the most intense, in the high-
amplitude region; the much less intense peak resulting from the direct
interactions of the x-rays in the LAAPD, in the low-amplitude region; and
the electronic noise tail at the low-amplitude limit. While the amplitude of
the electroluminescence peak depends on both the scintillation-region biasing
and the LAAPD biasing, the amplitude of the events resulting from direct x-ray
interaction in the LAAPD depends only on the LAAPD biasing. In addition, the
latter peak is present even for a null electric field in the scintillation
region and/or when the detector is under vacuum. For pulse-amplitude
measurements, the
pulse-height distributions were fit to Gaussian functions superimposed on a
linear background, from which the Gaussian centroids were determined.
Figure 2: Typical pulse-height distribution obtained for 5.9-keV x-rays and
electric field intensities of 0.34 and 3.4 kV cm$^{-1}$ bar$^{-1}$ in the
absorption and scintillation regions, respectively. The LAAPD bias voltage was
1840 V, corresponding to a gain of $\sim$150.
The presence of the peak of direct x-ray interactions in the LAAPD is of
utmost importance, as the average number of charge carriers produced in the
silicon wafer by the x-ray interactions, $N_{RX}$, is well known and is used
as a reference for the electroluminescence peak. Comparing both pulse heights,
a ratio can be found between the pulse amplitudes resulting from the
electroluminescence and from the direct x-ray absorption in the LAAPD. This
ratio allows a direct quantification of the number of charge carriers produced
in the LAAPD by the electroluminescence pulse and, thus, the number of VUV-
photons impinging on the LAAPD, given its quantum efficiency. The concurrent
detection of the light pulses and the x-rays in the photosensor, under the same
conditions and in the same setup, allows a straightforward measurement of the
number of photons impinging on it. This method has been used for measuring the
Xe and Ar electroluminescence yields in uniform electric fields [1, 2] and in
electron avalanches of GEM, THGEM and Micromegas micropatterned structures
[49, 50]. The number of charge carriers produced in the LAAPD by the
electroluminescence pulse is:
$N_{VUV}=\left(\frac{A_{EL}}{A_{RX}}\right)N_{RX}$ (1)
where $A_{RX}$ and $A_{EL}$ are the amplitudes of the peaks resulting from the
direct x-ray interactions in the LAAPD and from the scintillation produced by
the x-ray interactions in the gas, respectively, as read from the MCA. The
non-linear response of the LAAPD to 5.9-keV x-rays (e.g. see Fig. 20 of [51])
was taken into account by correcting the ratio $A_{EL}$/$A_{RX}$ for this
effect. A correction factor of 0.94 was used for the LAAPD bias voltage of
1840 V employed throughout this work, which corresponds to a LAAPD gain of
about 150. Knowing the quantum
efficiency, QE, of the LAAPD and the optical transparency of G2, T, and the
fraction of the average solid angle, $\Omega_{f}=\Omega/4\pi$ subtended by the
active area of the photosensor relative to the primary electron path, the
total number of photons produced in the electroluminescence pulse is obtained
by:
$N_{total,VUV}=\left(\frac{A_{EL}}{A_{RX}}\right)\frac{N_{RX}}{QE\times
T\times\Omega_{f}}$ (2)
The electroluminescence yield is defined as the number of photons produced per
drifting primary electron per unit path length:
$Y=\left(\frac{A_{EL}}{A_{RX}}\right)\frac{N_{RX}}{QE\times
T\times\Omega_{f}\times N_{e}\times d}$ (3)
where $N_{e}$ is the number of primary electrons produced in Kr by a 5.9-keV
x-ray interaction and $d$ is the scintillation region depth.
As the w-value in silicon is 3.62 eV (e.g. [52] and references therein), the
average number of free electrons produced in the LAAPD by the full absorption
of the 5.9-keV x-rays is $N_{RX}=1.63\times 10^{3}$ electrons. The w-value in
Kr is 24.2 eV [53], yielding $N_{e}$ = 244 electrons. The optical
transmission of the G2 mesh is T = 83$\%$, and the fraction of the average
solid angle has been computed by Monte Carlo simulation [54] to be
$\Omega_{f}$ = 0.215.
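With the detector constants above, the chain from the measured amplitude ratio to the absolute EL yield (Eqs. 1-3) reduces to a few lines of arithmetic. The Python sketch below uses the reported values; the amplitude ratio passed to `el_yield` and the 0.94 non-linearity correction default are illustrative placeholders, not measured data.

```python
# Absolute EL yield from the MCA amplitude ratio (Eqs. 1-3).
E_XRAY = 5900.0      # x-ray energy (eV)
W_SI = 3.62          # w-value in silicon (eV)
W_KR = 24.2          # w-value in Kr (eV)
QE = 0.90            # LAAPD quantum efficiency for Kr EL
T = 0.83             # G2 mesh optical transparency
OMEGA_F = 0.215      # average solid-angle fraction
D = 0.9              # scintillation region depth (cm)

N_RX = E_XRAY / W_SI     # ~1.63e3 charge carriers from direct x-ray absorption
N_e = E_XRAY / W_KR      # ~244 primary electrons in Kr

def el_yield(ratio, nonlin_corr=0.94):
    """Photons per drifting electron per cm (Eq. 3).

    `ratio` is the MCA amplitude ratio A_EL/A_RX, corrected here by the
    LAAPD non-linearity factor (0.94 at 1840 V bias)."""
    n_total = ratio * nonlin_corr * N_RX / (QE * T * OMEGA_F)  # Eq. 2
    return n_total / (N_e * D)                                 # Eq. 3

print(round(N_RX), round(N_e))   # 1630 244
```

The correction factor multiplies the ratio before the solid-angle and efficiency terms, mirroring the order in which the quantities enter Eqs. 2 and 3.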
At atmospheric pressures, the electroluminescence of Kr consists of a narrow
line peaking at 148 nm with 5-nm FWHM [55], called the second continuum; the
emissions in the visible and IR regions are below a few percent of that in the
VUV range [55, 56], so their contribution can be considered negligible. The
processes leading to emission in the second continuum can be schematized as
$e^{-}+Kr\rightarrow e^{-}+Kr^{*}$,
$Kr^{*}+2Kr\rightarrow Kr^{*}_{2}+Kr$,
$Kr^{*}_{2}\rightarrow 2Kr+h\nu$.
Electron impact on Kr atoms produces excited atoms, which through three-body
collisions form excited excimers, $Kr^{*}_{2}$, that decay by emitting one VUV
photon, $h\nu$. This emission corresponds to transitions of the singlet and
triplet bound molecular states, from vibrationally relaxed levels, to the
repulsive ground state.
The LAAPD QE for Kr EL was measured to be 0.90 [57]. According to the
manufacturer, the LAAPD fabrication technology is well established and quite
reproducible, so the behaviour observed for individual LAAPDs is expected to be
representative of any of these devices. In fact, the authors of [58] measured
the relative QE of $\sim$600 LAAPDs and obtained an approximately Gaussian
distribution with a FWHM of $\sim$0.1. Therefore, we have considered an
uncertainty of $\pm$0.08 ($\sim 2\sigma$) for the LAAPD QE, this being the
major source of uncertainty in our measurements.
## 4 Experimental results and discussion
In Fig.3 we depict the reduced electroluminescence yield, Y/N, i.e. the
electroluminescence yield divided by the number density of the gas, as a
function of reduced electric field, E/N, in the scintillation region. The data
was taken using a constant electric field of 0.36 kV/cm in the absorption
region, while varying the electric field in the scintillation region. Three
independent runs were performed, several days apart, with room temperature
between 25 and 26 °C, showing good reproducibility of the
experimental results. The reduced EL yield exhibits an approximately linear
trend with the reduced electric field, a behaviour similar to that of Ar and Xe
and of mixtures of Xe with He or with molecular additives [1-12]. For each run,
Fig. 3 also depicts a linear fit superimposed on the experimental data,
excluding the two data points taken at the highest reduced electric fields,
where secondary ionisation is already non-negligible. Simulation results from
[8, 10] are also depicted for comparison.
To the best of our knowledge, there are no other experimental or simulation
results in the literature.
Figure 3: Krypton reduced electroluminescence yield as a function of reduced
electric field for three different runs (this work) and the respective linear
fits to the data below 14 Td, as well as data obtained from Monte Carlo
simulation [8, 10].
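The linear fits of Fig. 3 amount to ordinary least squares on the points below the 14-Td charge-multiplication onset. A minimal pure-Python sketch follows; the data points here are synthetic, generated from the fitted slope and intercept of Eq. 4, not the measured runs.

```python
# Least-squares line Y/N = a*(E/N) + b, fitted only below the
# charge-multiplication onset (14 Td), as done for Fig. 3.
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Synthetic data: a linear region plus two super-linear points above 14 Td.
en = [4.0, 6.0, 8.0, 10.0, 12.0, 14.5, 15.5]
yn = [0.113 * x - 0.30 for x in en[:5]] + [1.45, 1.65]

linear = [(x, y) for x, y in zip(en, yn) if x < 14]
a, b = fit_line([p[0] for p in linear], [p[1] for p in linear])
print(round(a, 3), round(b, 2))   # 0.113 -0.3
```

Discarding the two highest-field points before fitting is what keeps the extracted slope and intercept free of the multiplication-induced super-linearity.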
The reduced electroluminescence yield can be approximately represented as
$\mathrm{Y/N\ (10^{-17}\ photons\ electron^{-1}\ cm^{2}\ atom^{-1})=0.113\
E/N-0.30}$ (4)
where E/N is given in Td ($10^{-17}$ V cm$^{2}$ atom$^{-1}$). This equation can
also be expressed as a function of pressure, for a given temperature used to
convert the gas number density into pressure. At room temperature, Eq. 4 can be
expressed as
$\mathrm{Y/p\ (photons\ electron^{-1}\ cm^{-1}\ bar^{-1})=113\ E/p-74}$ (5)
where E/p is given in kV cm$^{-1}$ bar$^{-1}$.
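The conversion from Eq. 4 to Eq. 5 follows from the ideal-gas number density at the working temperature; the sketch below reproduces the coefficients, assuming T = 293 K for "room temperature".

```python
# Convert Eq. 4 (per-atom, Td units) into Eq. 5 (per-bar, kV/cm units).
K_B = 1.380649e-23                  # Boltzmann constant (J/K)
T_GAS = 293.0                       # assumed room temperature (K)
n_per_bar = 1.0e5 / (K_B * T_GAS)   # atoms per m^3 per bar (1 bar = 1e5 Pa)
n_cm3 = n_per_bar * 1e-6            # atoms per cm^3 per bar

# 1 Td = 1e-17 V cm^2, so E/p in kV cm^-1 bar^-1 per Td:
td_to_kvcmbar = 1e-17 * n_cm3 * 1e-3
print(round(td_to_kvcmbar, 3))      # ~0.247 (2.7 Td -> ~0.7; 14 Td -> ~3.5)

# Slope: 0.113 photons/V is simply 113 photons/kV.
# Intercept: -0.30e-17 photons cm^2 atom^-1 -> photons cm^-1 bar^-1:
intercept_per_bar = -0.30e-17 * n_cm3
print(round(intercept_per_bar))     # ~ -74
```

Note that the slope conversion is purely V-to-kV, because the density factor cancels between Y/N and E/N; only the intercept picks up the number density.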
The slope of the linear dependence denotes the scintillation amplification
parameter, i.e., the number of photons produced per drifting electron and per
volt, and is in good agreement with what has been obtained by Monte Carlo
simulations for room temperature [8, 10]. In addition, the excitation
threshold for Kr, defined as the extrapolation of the linear trend to zero
scintillation, 2.7 Td (0.7 kV cm$^{-1}$ bar$^{-1}$ at 293 K), is also in good
agreement with the values presented in the literature [8, 10, 59]. On the other
hand, at a reduced electric field of 14 Td (3.5 kV cm$^{-1}$ bar$^{-1}$) the
experimental data depart from the linear trend, denoting the presence of a non-
negligible amount of charge multiplication in the scintillation region.
Therefore, from the experimental data we can conclude that the Kr ionisation
threshold should be around 13.5 Td (3.4 kV cm$^{-1}$ bar$^{-1}$). This value is
above that obtained from the MC simulation of [8] and lower than foreseen by
the most recent simulation toolkit [10], demonstrating the importance of the
data obtained in this work for the development of future optical TPCs based on
Kr filling.
For comparison, Fig. 4 depicts the experimental results obtained with the
present method for the electroluminescence yield in Xe [1], Kr (this work) and
Ar [3], along with the respective simulation results obtained with the most
recent simulation package for EL production in noble gases [10]. The
scintillation amplification parameters in Kr and Ar are about 80$\%$ and
60$\%$, respectively, of that for Xe.
Figure 4: Reduced electroluminescence yield in Xe, Kr and Ar as a function of
reduced electric field. Data points correspond to experimental data from [1],
this work and [3], while the curves correspond to simulation data [10].
## 5 Conclusions
We have performed experimental studies on the reduced electroluminescence
yield of pure Kr at room temperature and compared the results with those
obtained by Monte Carlo simulation. For the experimental measurements we used a
gas
proportional scintillation counter (GPSC), having a VUV-sensitive large area
avalanche photodiode (LAAPD) for the scintillation readout. We used 5.9-keV
x-rays to induce electroluminescence in the GPSC or to interact directly in
the LAAPD. The concurrent detection of the electroluminescence pulses and the
x-rays in the photosensor under the same conditions and in the same setup,
allows to use the number of charge carriers produced by the x-rays interacting
directly in the LAAPD as a reference for determining the number of charge
carriers produced by the electroluminescence in the LAAPD, allowing a
straightforward measurement of the number of photons impinging on it. The
reduced electroluminescence yield exhibits a linear dependence on the reduced
electric field, with an amplification parameter of 113 photons per kV per
electron, and a scintillation threshold of 2.7 Td (0.7 kV cm$^{-1}$ bar$^{-1}$
at 293 K), in good agreement with the simulation data present in the
literature. Above 14 Td (3.5 kV cm$^{-1}$ bar$^{-1}$) the reduced
electroluminescence yield departs from the linear trend, showing a faster
increase due to the additional scintillation produced by extra secondary
electrons resulting from the onset of charge multiplication. The Kr
amplification parameter is about 80$\%$ and 140$\%$ of that measured for Xe and
Ar, respectively.
###### Acknowledgements.
This work is funded by FEDER, through the Programa Operacional Factores de
Competitividade - COMPETE and by National funds through FCT - Fundação para a
Ciência e Tecnologia in the frame of project PTDC/FIS/NUC/1534/2014 and
UID/FIS/04559/2020 (LIBPhys).
## References
* (1) C.M.B. Monteiro, et al., _Secondary Scintillation Yield in Pure Xenon_ , _J. Instrum._ 2 (2007) P05001.
* (2) E.D.C. Freitas, et al., _Secondary scintillation yield in high-pressure xenon gas for neutrinoless double beta decay $(0\nu\beta\beta)$ search_, _Phys. Lett. B_ 684 (2010) 205.
* (3) C.M.B. Monteiro, et al., _Secondary scintillation yield in pure argon_ , _Phys. Lett. B_ 668 (2008) 167.
* (4) A. Buzulutskov et al., _Revealing neutral bremsstrahlung in two-phase argon electroluminescence_ , _Astropart. Phys._ 103 (2018) 29.
* (5) C.A.O. Henriques et al. (NEXT Collaboration), _Secondary scintillation yield of xenon with sub-percent levels of $CO_{2}$ additive for rare-event detection_, _Phys. Lett. B_ 773 (2017) 663.
* (6) C.A.O. Henriques et al. (NEXT Collaboration), _Electroluminescence TPCs at the thermal diffusion limit_ , _JHEP_ 1 (2019) 027.
* (7) A.F.M. Fernandes et al., (NEXT Collaboration), _Electroluminescence Yield in low-diffusion Xe-He gas mixtures for rare-event detection_ , _JHEP_ 34 (2020) 034.
* (8) T.H.V.T. Dias et al., _A unidimensional Monte-Carlo simulation of electron drift velocities and electroluminescence in argon, krypton and xenon_ , _J. Phys. D, Appl. Phys._ 19 (1986) 527.
* (9) F.P. Santos et al., _3-Dimensional Monte-Carlo calculation of the VUV electroluminescence and other electron-transport parameters in xenon_ , _J. Phys. D, Appl. Phys._ 27 (1994) 42.
* (10) C.A.B. Oliveira et al., _A simulation toolkit for electroluminescence assessment in rare event experiments_ , _Phys. Lett. B_ 703 (2011) 217.
* (11) C.A.B. Oliveira et al., _Energy resolution studies for NEXT_ , _J. Instrum._ 6 (2011) P05007.
* (12) C.D.R. Azevedo et al., (NEXT Collaboration), _Microscopic simulation of xenon-based optical TPCs in the presence of molecular additives_ , _Nucl. Instrum. Meth. A_ 877 (2018) 157.
* (13) K. Pushkin (LZ Collaboration), _Direct search for WIMP dark matter particles with the LUX-ZEPLIN (LZ) detector_ , _Nucl. Instrum. Meth. A_ 936 (2019) 162.
* (14) X. Cui et al., (PandaX-II collaboration), _Dark Matter Results From 54-Ton-Day Exposure of PandaX-II Experiment_ , _Phys. Rev. Lett._ 119 (2017) 181302.
* (15) E. Aprile et al., (XENON Collaboration), _The XENON1T dark matter experiment_ , _Eur. Phys. J. C_ 77 (2017) 881.
* (16) J. Aalbers et al., _DARWIN: towards the ultimate dark matter detector_ , _J. Cosm. Astrop. Phys._ 11 (2016) 017.
* (17) P. Agnes et al., (DarkSide Collaboration), _DarkSide-50 532-day dark matter search with low-radioactivity argon_ , _Phys. Rev. D_ 98 (2018) 102006.
* (18) B. Aimard et al., _A 4 tonne demonstrator for large-scale dual-phase liquid argon time projection chambers_ , _J. Instrum._ 13 (2018) P11003.
* (19) T. Lux, _Charge and light production in the charge readout system of a dual phase LAr TPC_ , _J. Instrum._ 14 (2019) P03006.
* (20) J. Martin-Albo et al., (NEXT Collaboration), _Sensitivity of NEXT-100 to neutrinoless double beta decay_ , _JHEP_ 5 (2016) 159.
* (21) K. Nakamura et al., _AXEL-a high pressure xenon gas TPC for neutrinoless double beta decay search_ , _Nucl. Instrum. Meth. A_ 845 (2017) 394.
* (22) A. Fieguth, _Search for double $\beta$-decays of 124Xe with XENON100 & XENON1T_, _J. Phys. Conference Series_ 888 (2017) 012251.
* (23) D. Gonzalez-Diaz et al., _Gaseous and dual-phase time projection chambers for imaging rare processes_ , _Nucl. Instrum. Meth. A_ 878 (2018) 200.
* (24) E. Aprile et al., (XENON Collaboration), _Observation of two-neutrino double electron capture in 124Xe with XENON1T_, _Nature_ 568 (2019) 532.
* (25) Yu. M. Gavrilyuk et al., _Indications of $2\nu 2K$-capture in 78Kr_, _Phys. Rev. C_ 87 (2013) 035501.
* (26) S. Bustabad et al., _Examination of the possible enhancement of neutrinoless double-electron capture in ${}^{78}Kr$_, _Phys. Rev. C_ 88 (2013) 035502.
* (27) S. S. Ratkevich et al., _Comparative study of the double-K-shell-vacancy production in single- and double-electron-capture decay_ , _Phys. Rev. C_ 96 (2017) 065502.
* (28) Yu. M. Gavrilyuk et al., _Results of In-Depth Analysis of Data Obtained in the Experimental Search for 2K(2 $\nu$)-Capture in 78Kr_, _Phys. Part. Nucl._ 49 (2018) 540.
* (29) Yu. M. Gavrilyuk et al., _New Experiment on Search for the Resonance Absorption of Solar Axion Emitted in the $M$1 Transition of 83Kr Nuclei_, _JETP Letters_ 101 (2015) 664.
* (30) Yu. M. Gavrilyuk et al., _Search for Resonant Absorption of Solar Axions Emitted in $M$1-Transitions in 83Kr Nuclei: Second Stage of the Experiment_, _Phys. Part. Nucl._ 49 (2018) 94.
* (31) Z. A. Akhmatov et al., _Results of Searching for Solar Hadronic Axions Emitted in the $M$1 Transition in 83Kr Nuclei_, _Physics of Particles and Nuclei_ 49 (2018) 599.
* (32) Yu. M. Gavrilyuk et al., _New Constraints on the Axion–Photon Coupling Constant for Solar Axions_ , _JETP Letters_ 107 (2018) 589.
* (33) J.D. Vergados et al., _Theoretical direct WIMP detection rates for transitions to the first excited state in 83Kr_, _Phys. Rev. D_ 92 (2015) 015015.
* (34) M. Hirsch et al., _Nuclear-structure calculation of $\beta^{+}\beta^{+}$, $\beta^{+}$ /EC and EC/EC decay matrix elements_, _Z. Physik A - Hadrons and Nuclei_ 347 (1994) 151.
* (35) M. Aunola, M. and J. Suhonen, _Systematic study of beta and double beta decay to excited final states_ , _Nucl. Phys. A_ 602 (1996) 133.
* (36) J. Suhonen, _Analysis of double- $\beta$ transitions in 78Kr_, _Phys. Rev. C_ 87 (2013) 034318.
* (37) J. Suhonen, _On the neutrinoless double $\beta^{+}$/EC decays_, _AIP Conference Proceedings_ 1572 (2013) 102; L. Maalampi and J. Suhonen, _Neutrinoless Double $\beta^{+}$/EC Decays_, _Advances in High Energy Physics_ 2013 (2013) 505874.
* (38) E.A.C. Perez et al., _Two-neutrino double electron capture on 124Xe based on an effective theory and the nuclear shell model_, _Phys. Lett. B_ 797 (2019) UNSP 134885.
* (39) M. Tanabashi et al., (Particle Data Group), _Review of Particle Physics_ , _Phys. Rev. D_ 98 (2018) 030001.
* (40) J.M.F. dos Santos et al., _Development of portable gas proportional scintillation counters for x-ray spectrometry_ , _X-ray Spectrom._ 30 (2001) 373.
* (41) C.M.B. Monteiro, et al., _Secondary scintillation yield from gaseous micropattern electron multipliers in direct dark matter detection_ , _Phys. Lett. B_ 677 (2009) 133.
* (42) F. Monrabal et al., (NEXT collaboration), _The NEXT White (NEW) detector_ , _J. Instrum._ 13 (2018) P12010.
* (43) F.M. Brunbauer et al., _Live event reconstruction in an optically read out GEM-based TPC_ , _Nucl. Instrum. Meth. A_ 886 (2018) 24.
* (44) SAES Advanced Technologies SpA, Nucleo Industriale, CP 9367 051-Avezzano(AQ), Italy.
* (45) Deep-UV enhanced series, Advanced Photonix, Inc., 1240 Avenida Acaso, Camarillo, CA 93012, USA.
* (46) J.A.M. Lopes et al., _A xenon gas proportional scintillation counter with a UV-sensitive large-area avalanche photodiode_ , _IEEE Trans. Nucl.Sci._ 48 (2001) 312.
* (47) C.M.B. Monteiro et al., _An argon gas proportional scintillation counter with UV avalanche photodiode scintillation readout_ , _IEEE Trans. Nucl. Sci._ 48 (2001) 1081.
* (48) C.M.B. Monteiro et al., _A Gas Proportional Scintillation Counter with krypton filling_ , _J. Instrum._ 11 (2016) C12079.
* (49) C.M.B. Monteiro, et al., _Secondary scintillation yield from GEM and THGEM gaseous electron multipliers for direct dark matter search_ , _Phys. Lett. B_ 714 (2012) 18.
* (50) C. Balan, et al., _MicrOMEGAs operation in high pressure xenon: Charge and scintillation readout_ , _J. Instrum._ 6 (2011) P02006.
* (51) L.M.P. Fernandes et al., _Characterization of large area avalanche photodiodes in X-ray and VUV-light detection_ , _J. Instrum._ 2 (2007) P08005.
* (52) F. Gao et al., _Monte Carlo method for simulating gamma-ray interaction with materials: A case study on Si_ , _Nucl. Instrum. Meth. A_ 579 (2007) 292.
* (53) L.K. Bronic, _W values and Fano factors for electrons in rare gases and rare gas mixtures_ , _Hoshasen : ionizing radiation_ 24 (1998) 101, https://studylib.net/doc/5876359/%E2%80%9Av-values-and-fano-factors-for-electrons-in-rare.
* (54) J.M.F. dos Santos et al., _The dependence of the energy resolution of gas proportional scintillation counters on the scintillation region to photomultiplier distance_ , _IEEE Trans. Nucl. Sci._ 39 (1992) 541.
* (55) M. Suzuki, S. Kubota, _Mechanism of proportional scintillation in argon, krypton and xenon_ , _Nucl. Instrum. Meth._ 164 (1979) 197.
* (56) C.A.B. Oliveira et al., _Simulation of gaseous Ar and Xe electroluminescence in the near infrared range_ , _Nucl. Instrum. Meth. A_ 722 (2013) 1.
* (57) R. Chandrasekharan et al., _Detection of noble gas scintillation light with large area avalanche photodiodes (LAAPDs)_ , _Nucl. Instrum. Meth. A_ 546 (2005) 426.
* (58) R. Neilson et al., _Characterization of large area APDs for the EXO-200 detector_ , _Nucl. Instrum. Meth. A_ 608 (2009) 68.
* (59) M.A. Feio, et al., _Thresholds for secondary light-emission by noble-gases_ , _Jpn. J. Appl. Phys._ 21 (1982) 1184, and references therein.
# Dynamic imaging using a deep generative SToRM (Gen-SToRM) model
Qing Zou, Abdul Haseeb Ahmed, Prashant Nagpal, Stanley Kruger, Mathews Jacob
Qing Zou is with the Applied Mathematics and Computational Sciences (AMCS)
program at the University of Iowa, Iowa City, USA (e-mail: zou-
qing@uiowa.edu). Abdul Haseeb Ahmed and Mathews Jacob are with the Department
of Electrical and Computer Engineering, University of Iowa, Iowa City, USA
(e-mail<EMAIL_ADDRESS>and mathews-jacob@uiowa.edu). Prashant Nagpal
and Stanley Kruger are with the Department of Radiology, University of Iowa,
Iowa City, USA (e-mail<EMAIL_ADDRESS>and stanley-
kruger@uiowa.edu). This work is supported by NIH under Grants R01EB019961 and
R01AG067078-01A1. This work was conducted on an MRI instrument funded by
1S10OD025025-01.
###### Abstract
We introduce a generative smoothness regularization on manifolds (SToRM) model
for the recovery of dynamic image data from highly undersampled measurements.
The model assumes that the images in the dataset are non-linear mappings of
low-dimensional latent vectors. We use the deep convolutional neural network
(CNN) to represent the non-linear transformation. The parameters of the
generator as well as the low-dimensional latent vectors are jointly estimated
only from the undersampled measurements. This approach is different from
traditional CNN approaches that require extensive fully sampled training data.
We penalize the norm of the gradients of the non-linear mapping to constrain
the manifold to be smooth, while temporal gradients of the latent vectors are
penalized to obtain a smoothly varying time-series. The proposed scheme brings
in the spatial regularization provided by the convolutional network. The main
benefit of the proposed scheme is the improvement in image quality and the
orders-of-magnitude reduction in memory demand compared to traditional
manifold models. To minimize the computational complexity of the algorithm, we
introduce an efficient progressive training-in-time approach and an
approximate cost function. These approaches speed up the image reconstructions
and offers better reconstruction performance.
###### Index Terms:
Generative model; CNN; manifold approach; unsupervised learning; deep image
prior
## I Introduction
The imaging of time-varying objects at high spatial and temporal resolution is
key to several modalities, including MRI and microscopy. A central challenge
is the need for high resolution in both space and time [1, 2]. Several
computational imaging strategies have been introduced in MRI to improve the
resolution, especially in the context of free-breathing and ungated cardiac
MRI. A popular approach pursued by several groups is self-gating, where
cardiac and respiratory information is obtained from central k-space regions
(navigators) using bandpass filtering or clustering [3, 4, 5, 6, 7]. The data
is then binned to the respective phases and recovered using total variation or
other priors. Recently, approaches using smooth manifold regularization have
been introduced. These approaches model the images in the time series as
points on a high-dimensional manifold [8, 9, 10, 11, 12]. Manifold
regularization algorithms, including the smoothness regularization on
manifolds (SToRM) framework [8, 9, 10], have shown good performance in several
dynamic imaging applications. Since the data is not explicitly binned into
specific phases as in the self-gating methods, manifold algorithms are less
vulnerable to clustering errors than navigator-based corrections. Despite the
benefits, a key challenge with the current manifold methods is the high memory
demand. Unlike self-gating methods that only recover specific phases, manifold
methods recover the entire time series. The limited memory on current GPUs
restricts the number of frames that can be recovered simultaneously, which
makes it challenging to extend the model to higher dimensionalities. The high
memory demand also makes it difficult to use spatial regularization priors on
the images using deep learned models.
Our main focus is to capitalize on the power of deep convolutional neural
networks (CNN) to introduce a memory efficient generative or synthesis
formulation of SToRM. CNN based approaches are now revolutionizing image
reconstruction, offering significantly improved image quality and fast image
recovery [13, 14, 15, 16, 17, 18, 19]. In the context of MRI, several novel
approaches have been introduced [20, 21], including transfer-learning [22],
domain adaptation [23], learning-based dynamic MRI [24, 25, 26], and
generative-adversarial models [27, 28, 29]. Unlike many CNN-based approaches,
the proposed scheme does not require pre-training using large amounts of
training data. This makes the approach desirable in free-breathing
applications, where the acquisition of fully sampled training data is
infeasible. We note that the classical SToRM approach can be viewed as an
analysis regularization scheme (see Fig. 1.(a)). Specifically, a non-linear
injective mapping is applied on the images such that the mapped points of the
alias-free images lie on a low-dimensional subspace [10, 30, 31]. When
recovering images from undersampled data, the nuclear norm prior is applied in
the transform domain to encourage their non-linear mappings to lie in a
subspace. Unfortunately, this analysis approach requires the storage of all
the image frames in the time series, which translates to high memory demand.
The proposed generative SToRM formulation offers a significant compression
of the data, which can overcome the above challenge. Both the relation between
the analysis and synthesis formulations and the relation of the synthesis
formulation to neural networks were established in earlier work [31].
We assume that the image volumes in the dataset are smooth non-linear
functions of a few latent variables, i.e.,
$\mathbf{x}_{t}=\mathcal{G}_{\theta}(\mathbf{z}_{t})$, where $\mathbf{z}_{t}$
are the latent vectors in a low-dimensional space. $\mathbf{x}_{t}$ is the
$t$-th generated image frame in the time series. This explicit formulation
implies that the image volumes lie on a smooth non-linear manifold in a high-
dimensional ambient space (see Fig. 1.(b)). The latent variables capture the
differences between the images (e.g., cardiac phase, respiratory phase,
contrast dynamics, subject motion). We model the $\mathcal{G}$ using a CNN,
which offers a significantly compressed representation. Specifically, the
number of parameters required by the model (CNN weights and latent vectors)
is several orders of magnitude smaller than that required for the direct
number of measurements needed to recover the images. In addition, the
compression also enables algorithms with much smaller memory footprint and
computational complexity. We propose to jointly optimize for the network
parameters $\theta$ and the latent vector $\mathbf{z}_{t}$ based on the given
measurements. The smoothness of the manifold generated by
$\mathcal{G}_{\theta}(\mathbf{z})$ depends on the gradient of
$\mathcal{G}_{\theta}$ with respect to its input. To enforce the learning of a
smooth image manifold, we regularize the norm of the Jacobian of the mapping
$\|J_{z}\mathcal{G}_{\theta}\|^{2}$. We experimentally observe that by
penalizing the gradient of the mapping, the network is encouraged to learn
meaningful mappings. Similarly, the images in the time series are expected to
vary smoothly in time. Hence, we also use a Tikhonov smoothness penalty on the
latent vectors $\mathbf{z}_{t}$ to further constrain the solutions. We use the
ADAM optimizer with stochastic gradients, where random batches of
$\mathbf{z}_{i}$ and $\mathbf{b}_{i}$ are chosen at each iteration to determine the
parameters. Unlike traditional CNN methods that are fast during
testing/inference, the direct application of this scheme to the dynamic MRI
setting is computationally expensive. We use approximations, including
progressive-in-time optimization and an approximated data term that avoids
non-uniform fast Fourier transforms, to significantly reduce the computational
complexity of the algorithm.
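The overall cost described above has three terms: data fidelity, a Jacobian-norm penalty on the generator, and a Tikhonov temporal penalty on the latent vectors. The numpy sketch below is an illustrative toy, not the paper's architecture: a tanh of a random linear map stands in for the CNN $\mathcal{G}_{\theta}$, binary masks stand in for the multichannel operators, and the Jacobian norm is approximated by forward finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)
M, LATENT, NPIX = 10, 2, 64

G_W = rng.standard_normal((NPIX, LATENT))   # toy "generator" weights

def generator(z):
    """Stand-in for the CNN G_theta: maps a latent vector to an image."""
    return np.tanh(G_W @ z)

def gen_storm_cost(Z, A_masks, B, lam_j=1e-2, lam_t=1e-1, eps=1e-3):
    """Data fidelity + Jacobian-norm surrogate + temporal smoothness.

    Z:       (LATENT, M) latent vectors
    A_masks: (M, NPIX) 0/1 sampling masks (toy measurement operators)
    B:       (M, NPIX) measured data
    """
    cost = 0.0
    for t in range(M):
        x = generator(Z[:, t])
        cost += np.sum((A_masks[t] * x - B[t]) ** 2)        # data term
        # finite-difference estimate of ||J_z G||^2 at z_t
        for k in range(LATENT):
            dz = np.zeros(LATENT)
            dz[k] = eps
            cost += lam_j * np.sum(((generator(Z[:, t] + dz) - x) / eps) ** 2)
    cost += lam_t * np.sum(np.diff(Z, axis=1) ** 2)          # temporal prior
    return cost

Z = rng.standard_normal((LATENT, M))
A = (rng.random((M, NPIX)) < 0.3).astype(float)
B = A * np.array([generator(z) for z in Z.T])   # noiseless toy data
print(gen_storm_cost(Z, A, B) > 0)              # True (penalties are active)
```

In the actual method both $\theta$ and the $\mathbf{z}_t$ are optimized jointly with ADAM on stochastic batches; the sketch only evaluates the cost to make the three terms concrete.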
The proposed approach is inspired by deep image prior (DIP), which was
introduced for static imaging problems [32], as well as its extension to
dynamic imaging [33]. The key difference of the proposed formulation is the
joint optimization of the latent variables $\mathbf{z}$ and $\mathcal{G}$. The
work of Jin et al. [33] was originally developed for CINE MRI, where the
latent variables were obtained by linearly interpolating noise variables at
the first and last frames. Their extension to real-time applications involved
setting noise latent vectors at multiples of a preselected period, followed by
linearly interpolating the noise variables. This approach is not ideally
suited for applications with free breathing, when the motion is not periodic.
Another key distinction is the use of regularization priors on the network
parameters and latent vectors, which encourages the mapping to be an isometry
between latent and image spaces. Unlike DIP methods, the performance of the
network does not significantly degrade with iterations. While we call our
algorithm “generative SToRM”, we note that our goal is not to generate random
images from stochastic inputs as in generative-adversarial networks (GAN). In
particular, we do not use adversarial loss functions where a discriminator is
jointly learned as in the literature [34, 35].
## II Background
### II-A Dynamic MRI from undersampled data: problem setup
Our main focus is to recover a series of images
$\mathbf{x}_{1},\ldots,\mathbf{x}_{M}$ from their undersampled multichannel MRI
measurements. The multidimensional dataset is often compactly represented by
its Casoratti matrix
$\mathbf{X}=\begin{bmatrix}\mathbf{x}_{1}&...&\mathbf{x}_{M}\end{bmatrix}.$
(1)
Each of the images is acquired by different multichannel measurement operators
$\mathbf{b}_{i}=\mathcal{A}_{i}(\mathbf{x}_{i})+\mathbf{n}_{i},$ (2)
where $\mathbf{n}_{i}$ is a zero-mean Gaussian noise matrix that corrupts the
measurements.
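As a concrete illustration, the measurement model in (2) amounts to coil-weighting each image, transforming to k-space, and retaining only the sampled locations. The following minimal NumPy sketch uses a Cartesian sampling mask for simplicity, whereas the acquisitions in this paper use non-Cartesian (spiral) trajectories; the function name and arguments are illustrative and not taken from the authors' code.

```python
import numpy as np

def forward_model(x, coil_maps, mask, rng=None, noise_std=0.0):
    """Toy multichannel measurement operator A_i: coil-weight the image,
    take a 2-D FFT, and keep only the sampled k-space locations.
    A Cartesian mask is used only to keep the sketch simple."""
    if rng is None:
        rng = np.random.default_rng(0)
    meas = []
    for s in coil_maps:                       # one sensitivity map per channel
        k = np.fft.fft2(s * x, norm="ortho")  # coil image -> k-space
        b = k[mask]                           # undersampling: keep sampled points
        if noise_std > 0:                     # additive Gaussian noise n_i
            b = b + noise_std * (rng.standard_normal(b.shape)
                                 + 1j * rng.standard_normal(b.shape))
        meas.append(b)
    return np.stack(meas)
```

Each frame $\mathbf{x}_{i}$ would be passed through its own operator (its own mask) to produce $\mathbf{b}_{i}$.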
### II-B Smooth manifold models for dynamic MRI
The smooth manifold methods model images $\mathbf{x}_{i}$ in the dynamic time
series as points on a smooth manifold $\mathcal{M}$. These methods are
motivated by continuous domain formulations that recover a function $f$ on a
manifold from its measurements as
$f=\arg\min_{f}\sum_{i}\|f(\mathbf{x}_{i})-\mathbf{b}_{i}\|^{2}+\lambda\int_{\mathcal{M}}\|\nabla_{\mathcal{M}}f\|^{2}d\mathbf{x}$
(3)
where the regularization term involves the smoothness of the function on the
manifold. This problem is adapted to the discrete setting to solve for images
lying on a smooth manifold from its measurements as
$\mathbf{X}=\arg\min_{\mathbf{X}}\sum_{i=1}^{M}\|\mathcal{A}(\mathbf{x}_{i})-\mathbf{b}_{i}\|^{2}+\lambda~{}{\mathrm{trace}}(\mathbf{X}\mathbf{L}\mathbf{X}^{H}),$
(4)
where $\mathbf{L}$ is the graph Laplacian matrix. $\mathbf{L}$ is the discrete
approximation of the Laplace-Beltrami operator on the manifold, which depends
on the structure or geometry of the manifold. The manifold matrix $\mathbf{L}$
is estimated from k-space navigators. Different approaches, ranging from
proximity-based methods [8] to kernel low-rank regularization [10] and sparse
optimization [12], have been introduced.
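To make the penalty in (4) concrete, the following NumPy sketch evaluates ${\rm trace}(\mathbf{X}\mathbf{L}\mathbf{X}^{H})$ for a Casoratti matrix and a given frame-similarity weight matrix; the helper name is hypothetical, and the estimation of the weights from k-space navigators is not shown.

```python
import numpy as np

def storm_penalty(X, W):
    """Manifold smoothness penalty trace(X L X^H) for a Casoratti matrix X
    (pixels x frames) and a symmetric frame-similarity weight matrix W.
    The graph Laplacian is L = D - W with D the diagonal degree matrix.
    The trace equals (1/2) * sum_ij W_ij ||x_i - x_j||^2, so frames deemed
    similar by W are encouraged to agree."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    return np.real(np.trace(X @ L @ X.conj().T))
```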
The results of earlier work [30, 10] show that the above manifold
regularization penalties can be viewed as an analysis prior. In particular,
these schemes rely on a fixed non-linear mapping $\varphi$ of the images. The
theory shows that if the images $\mathbf{x}_{1},..\mathbf{x}_{M}$ lie in a
smooth manifold/surface or union of manifolds/surfaces, the mapped points live
on a subspace or union of subspaces. The low-dimensional property of the
mapped points $\varphi(\mathbf{x}_{1}),..\varphi(\mathbf{x}_{M})$ is used to
recover the images from undersampled data or derive the manifold using a
kernel low-rank minimization scheme:
$\mathbf{X}^{*}=\arg\min_{\mathbf{X}}\sum_{i=1}^{M}\|\mathcal{A}(\mathbf{x}_{i})-\mathbf{b}_{i}\|^{2}+\lambda~{}\|\left[\varphi(\mathbf{x}_{1}),..,\varphi(\mathbf{x}_{M})\right]\|_{*}.$
(5)
This nuclear norm regularization scheme is minimized using an iterative
reweighted algorithm, whose intermediate steps match (4). The non-linear
mapping $\varphi$ may be viewed as an analysis operator that transforms the
original images to a low-dimensional latent subspace, very similar to analysis
sparsity-based approaches used in compressed sensing.
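A minimal sketch of the kernel low-rank penalty in (5): form the matrix of lifted points and sum its singular values (the nuclear norm). The mapping $\varphi$ is passed in as a function, and the helper name is illustrative.

```python
import numpy as np

def lifted_nuclear_norm(X, phi):
    """Nuclear norm of the lifted matrix [phi(x_1), ..., phi(x_M)], the
    regularizer in (5). X is the Casoratti matrix (pixels x frames) and
    phi maps one image (column) to its feature vector."""
    F = np.stack([phi(x) for x in X.T], axis=1)  # lifted points as columns
    return float(np.sum(np.linalg.svd(F, compute_uv=False)))
```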
### II-C Unsupervised learning using Deep Image Prior
The recent work of DIP uses the structure of the network as a prior [32],
enabling the recovery of images from ill-posed measurements without any
training data. Specifically, DIP relies on the property that CNN architectures
favor image data more than noise. The regularized reconstruction of an image
from undersampled and noisy measurements is posed in DIP as
$\{\boldsymbol{\theta}^{*}\}=\arg\min_{\boldsymbol{\theta}}\left\|\mathcal{A}(\mathbf{x})-\mathbf{b}\right\|^{2}~{}~{}\mbox{such that}~{}~{}\mathbf{x}=\mathcal{G}_{\boldsymbol{\theta}}[\mathbf{z}]$ (6)
where $\mathbf{x}=\mathcal{G}_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ is the
recovered image, generated by the CNN generator
$\mathcal{G}_{\boldsymbol{\theta}^{*}}$ whose parameters are denoted by
$\boldsymbol{\theta}$. Here, $\mathbf{z}$ is the random latent variable, which
is chosen as random noise and kept fixed.
The above optimization problem is often solved using stochastic gradient
descent (SGD). Since CNNs are efficient in learning natural images, the
solution often converges quickly to a good image. However, when iterated
further, the algorithm also learns to represent the noise in the measurements
if the generator has sufficient capacity, resulting in poor image quality. The
general practice is to rely on early termination to obtain good results. This
approach was recently extended to the dynamic setting by Jin et al. [33],
where a sequence of random vectors was used as the input.
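The structure of the DIP fitting loop in (6) can be illustrated with a deliberately simplified sketch, in which the CNN generator is replaced by a free image-sized parameter array and the forward operator is a masked Fourier transform. This strips away the implicit CNN prior that makes DIP effective in practice; it only shows the shape of the optimization, and all names are illustrative.

```python
import numpy as np

def dip_recover(b, mask, shape, steps=200, lr=0.5, seed=0):
    """DIP-flavoured toy: fit a complex image-sized parameter array theta
    (standing in for G_theta(z)) to undersampled Fourier measurements by
    gradient descent on ||A(theta) - b||^2. b holds measured k-space values
    at the True entries of mask (zeros elsewhere)."""
    rng = np.random.default_rng(seed)
    theta = 0.01 * (rng.standard_normal(shape)
                    + 1j * rng.standard_normal(shape))   # random init
    for _ in range(steps):
        k = np.fft.fft2(theta, norm="ortho")
        resid = np.where(mask, k - b, 0)                 # residual on samples
        grad = np.fft.ifft2(resid, norm="ortho")         # adjoint A^H
        theta = theta - lr * grad
    return theta
```

With noisy measurements, running such a loop too long fits the noise, which is why DIP relies on early stopping.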
## III Deep generative SToRM model
We now introduce a synthesis SToRM formulation for the recovery of images in a
time series from undersampled data (see Fig. 1.(b)). Rather than relying on a
non-linear mapping of images to a low-dimensional subspace [10] (see Fig.
1.(a)), we model the images in the time series as non-linear functions of
latent vectors living in a low-dimensional subspace.
### III-A Generative model
We model the images in the time series as
$\mathbf{x}_{i}=\mathcal{G}_{\theta}(\mathbf{z}_{i}),i=1,..,M,$ (7)
where $\mathcal{G}_{\theta}$ is a non-linear mapping, which is termed as the
generator. Inspired by the extensive work on generative image models [36, 37,
32], we represent $\mathcal{G}_{\theta}$ by a deep CNN, whose weights are
denoted by $\theta$. The parameters $\mathbf{z}_{i}$ are the latent vectors,
which live in a low-dimensional subspace. The non-linear mapping
$\mathcal{G}_{\theta}$ may be viewed as the inverse of the image-to-latent
space mapping $\varphi$, considered in the SToRM approach.
Figure 1: Illustration of (a) analysis SToRM and (b) generative SToRM.
Analysis SToRM considers a non-linear (e.g. exponential) lifting of the data.
If the original points lie on a smooth manifold, the lifted points lie on a
low-dimensional subspace. The analysis SToRM cost function in (5) is the sum
of the fit of the recovered images to the undersampled measurements and the
nuclear norm of the lifted points. A challenge with analysis SToRM is its high
memory demand and the difficulty in adding spatial regularization. The
proposed method models the images as the non-linear mapping
$\mathcal{G}_{\theta}$ of some latent vectors $\mathbf{z}_{i}$, which lie in a
very low-dimensional space. Note that the same generator is used to model all
the images in the dataset. The number of parameters of the generator and the
latent variables is around the size of a single image, which implies a highly
compressed representation. In addition, the structure of the CNN offers
spatial regularization as shown in DIP. The proposed algorithm in (13)
estimates the parameters of the generator and the latent variables from the
measured data. A distance regularization prior is added to the generator to
ensure that nearby points in the latent subspace are mapped to nearby points
on the manifold. Similarly, a temporal regularization prior is added to the
latent variables. The optimization is performed using ADAM with batches of few
images.
We propose to estimate the parameters of the network $\theta$ as well as the
latent vectors $\mathbf{z}_{i}$ by fitting the model to the undersampled
measurements. The main distinction of our framework with DIP, which is
designed for a single image, is that we use the same generator for all the
images in the dynamic dataset. The latent vector $\mathbf{z}_{i}$ for each
image is different and is also estimated from the measurements. This strategy
allows us to exploit non-local information in the dataset. For example, in
free-breathing cardiac MRI, the latent vectors of images with the same cardiac
and respiratory phase are expected to be similar. When the gradient of the
network is bounded, the output images at these time points are expected to be
the same. The proposed framework is hence expected to learn a common
representation from these time-points, which are often sampled using different
sampling trajectories. Unlike conventional manifold methods [8, 10, 12], the
use of the CNN generator also offers spatial regularization.
It is often impossible to acquire fully-sampled training data in many free-
breathing dynamic imaging applications, and a key benefit of this framework
over conventional neural network schemes is that no training data is required.
As discussed previously, the number of parameters of the model in (7) is
orders of magnitude smaller than the number of pixels in the dataset. The
dramatic compression offered by the representation, together with the mini-
batch training provides a highly memory-efficient alternative to current
manifold based and low-rank/tensor approaches. Although our focus is on
establishing the utility of the scheme in 2-D settings in this paper, the
approach can be readily translated to higher dimensional applications. Another
benefit is the implicit spatial regularization brought in by the convolutional
network as discussed for DIP. We now introduce novel regularization priors on
the network and the latent vectors to further constrain the recovery to reduce
the need for manual early stopping.
### III-B Distance/Network regularization
As in the case of analysis SToRM regularization [8, 10], our interest is in
generating a manifold model that preserves distances. Specifically, we would
like the nearby points in the latent space to map to similar images on the
manifold. With this interest, we now study the relation between the Euclidean
distances between their latent vectors and the shortest distance between the
points on the manifold (geodesic distance).
We consider two points $\mathbf{z}_{1}$ and $\mathbf{z}_{2}$ in the latent
space, which are fed to the generator to obtain $\mathcal{G}(\mathbf{z}_{1})$
and $\mathcal{G}(\mathbf{z}_{2})$, respectively. We have the following result,
which relates the Euclidean distance
$\|\mathbf{z}_{1}-\mathbf{z}_{2}\|$ to the geodesic distance ${\rm
dist}_{\mathcal{M}}\left(\mathcal{G}(\mathbf{z}_{1}),\mathcal{G}(\mathbf{z}_{2})\right)$,
which is the shortest distance on the manifold. The setting is illustrated in
Fig. 2, where the geodesic distance is indicated by the red curve.
###### Proposition 1.
Let $\mathbf{z}_{1},\mathbf{z}_{2}\in\mathbb{R}^{n}$ be two nearby points in
the latent space, with mappings denoted by
$\mathcal{G}(\mathbf{z}_{1}),\mathcal{G}(\mathbf{z}_{2})\in\mathcal{M}$. Here,
$\mathcal{M}=\{\mathcal{G}(\mathbf{z})|\mathbf{z}\in\mathbb{R}^{n}\}$. Then, the
geodesic distance on the manifold satisfies:
${\rm
dist}_{\mathcal{M}}\big{(}\mathcal{G}(\mathbf{z}_{1}),\mathcal{G}(\mathbf{z}_{2})\big{)}\leq\|\mathbf{z}_{1}-\mathbf{z}_{2}\|_{F}~{}\|J_{z}\big{(}\mathcal{G}\left(\mathbf{z}_{1}\right)\big{)}\|_{F}.$
(8)
###### Proof.
The straight-line between the latent vectors is denoted by $c(s),s\in[0,1]$
with $c(0)=\mathbf{z}_{1}$ and $c(1)=\mathbf{z}_{2}$. We also assume that the
line is described in its curvilinear abscissa, which implies
$\|c^{\prime}(s)\|=1;\forall s\in[0,1]$. We note that $\mathcal{G}$ may map
$c(s)$ to the black curve in Fig. 2, whose length may exceed the geodesic
distance. We now compute the length of the curve $\mathcal{G}[c(s)]$ as
$d=\int_{0}^{1}\|\nabla_{s}\mathcal{G}\left[c(s)\right]\|ds.$ (9)
Using the chain rule and denoting the Jacobian matrix of $\mathcal{G}$ by
$J_{z}$, we can simplify the above distance as
$\displaystyle d$ $\displaystyle=$
$\displaystyle\int_{0}^{1}\|J_{z}\left(\mathcal{G}\right)c^{\prime}(s)\|_{F}ds$
(10) $\displaystyle\leq$
$\displaystyle\int_{0}^{1}\|J_{z}\left(\mathcal{G}\right)\|_{F}~{}\underbrace{\|c^{\prime}(s)\|_{F}}_{1}ds$
$\displaystyle=$
$\displaystyle\|J_{z}\left(\mathcal{G}[\mathbf{z}_{1}]\right)\|_{F}\underbrace{\int_{0}^{1}ds}_{\|\mathbf{z}_{1}-\mathbf{z}_{2}\|}.$
We used the Cauchy-Schwarz inequality in the second step and in the last
step, we use the fact that
$J_{z}\mathcal{G}\left(c(s)\right)=J_{z}\mathcal{G}\left(\mathbf{z}_{1}\right)+\mathcal{O}(s)$
when the points $\mathbf{z}_{1}$ and $\mathbf{z}_{2}$ are close. Since the
geodesic distance is the shortest distance on the manifold, we have ${\rm
dist}_{\mathcal{M}}\big{(}\mathcal{G}(\mathbf{z}_{1}),\mathcal{G}(\mathbf{z}_{2})\big{)}\leq
d$ and hence we obtain (8). ∎
The result in (8) shows that the Frobenius norm of the Jacobian matrix
$\|J_{z}\mathcal{G}\|$ controls how far apart $\mathcal{G}$ maps two vectors
that are close in the latent space. We would like points that are close in the
latent space to map to nearby points on the manifold. We hence use the gradient
of the map:
$R_{\rm distance}=\|J_{z}\big{(}\mathcal{G}(\mathbf{z})\big{)}\|_{F}^{2}$ (11)
as a regularization penalty. This penalty also encourages the learning of a
mapping $\mathcal{G}$ such that the length of the curve $\mathcal{G}(c(s))$
approaches the geodesic distance, and it can equivalently be thought of as a
network regularization. Similar gradient
penalties are used in machine learning to improve generalization ability and
to improve the robustness to adversarial attacks [38]. The use of gradient
penalty is observed to be qualitatively equivalent to penalizing the norm of
the weights of the network.
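The quantity in (11) can be checked numerically by assembling the Jacobian of $\mathcal{G}$ column by column with central finite differences; in practice one would compute it with automatic differentiation, and the helper below is a hypothetical illustration.

```python
import numpy as np

def jacobian_frob_sq(G, z, eps=1e-5):
    """Numerical estimate of the distance regularization penalty
    R = ||J_z(G(z))||_F^2. Each latent coordinate is perturbed in turn to
    build one Jacobian column via central finite differences."""
    z = np.asarray(z, dtype=float)
    total = 0.0
    for j in range(z.size):
        e = np.zeros_like(z)
        e[j] = eps
        col = (G(z + e) - G(z - e)) / (2 * eps)  # j-th column of J_z(G)
        total += np.sum(col ** 2)
    return total
```

For a linear generator $\mathcal{G}(\mathbf{z})=A\mathbf{z}$, the penalty equals $\|A\|_F^2$, independent of $\mathbf{z}$.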
Figure 2: Illustration of the distance penalty. The length of the curve
connecting the images corresponding to $\mathbf{z}_{1}$ and $\mathbf{z}_{2}$
depends on the Frobenius norm of the Jacobian of the mapping $\mathcal{G}$ as
well as the Euclidean distance $\|\mathbf{z}_{1}-\mathbf{z}_{2}\|$. We are
interested in learning a mapping that preserves distances; we would like
nearby points in the latent space to map to similar images. We hence use the
norm of the Jacobian as the regularization prior, with the goal of preserving
distances.
### III-C Latent vector regularization penalty
The time frames in a dynamic time series have extensive redundancy between
adjacent frames, which is usually exploited using temporal gradient
regularization. However, directly penalizing the temporal gradient norm of the
images requires the computation of the entire image time series, which is
difficult when only a subset of the frames is optimized in each batch.
We consider the norm of the finite differences between images specified by
$\|\nabla_{p}\mathcal{G}[\mathbf{z}_{p}]\|^{2}$. Using Taylor series expansion,
we obtain
$\nabla_{p}\mathcal{G}[\mathbf{z}_{p}]=J_{\mathbf{z}}(\mathcal{G}[\mathbf{z}])\nabla_{p}\mathbf{z}+\mathcal{O}(p)$.
We thus have
$\|\nabla_{p}\mathcal{G}[\mathbf{z}_{p}]\|\approx\|J_{\mathbf{z}}(\mathcal{G}[\mathbf{z}])\nabla_{p}\mathbf{z}\|\leq\|J_{\mathbf{z}}(\mathcal{G}[\mathbf{z}])\|~{}\|\nabla_{p}\mathbf{z}\|.$
(12)
Since $J_{\mathbf{z}}(\mathcal{G}[\mathbf{z}])$ is small because of the
distance regularization, we propose to add a temporal regularizer on the
latent vectors. For example, when applied to free-breathing cardiac MRI, we
expect the latent vectors to capture the two main contributors of motion:
cardiac motion and respiratory motion. The temporal regularization encourages
the cardiac and respiratory phases to change slowly in time.
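The latent regularizer is simply the squared norm of the temporal finite differences of the latent vectors; a minimal sketch (helper name illustrative):

```python
import numpy as np

def latent_temporal_penalty(Z):
    """Latent regularizer ||nabla_t z_t||^2: the sum of squared finite
    differences between consecutive latent vectors. Z has shape (M, ell),
    one ell-dimensional latent vector per frame."""
    dz = np.diff(np.asarray(Z, dtype=float), axis=0)
    return float(np.sum(dz ** 2))
```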
### III-D Proposed optimization criterion
Based on the above analysis, we derive the parameters of the network $\theta$
and the low-dimensional latent vectors $\mathbf{z}_{i};i=1,..,M$ from the
measured data by minimizing:
$\displaystyle\mathcal{C}(\mathbf{z},\theta)$ $\displaystyle=$
$\displaystyle\underbrace{\sum_{i=1}^{M}\|\mathcal{A}_{i}\left(\mathcal{G}_{\theta}[\mathbf{z}_{i}]\right)-\mathbf{b}_{i}\|^{2}}_{\scriptsize\mbox{data
term}}+\lambda_{1}\underbrace{\|J_{\mathbf{z}}\mathcal{G}_{\theta}(\mathbf{z})\|^{2}}_{\scriptsize\mbox{distance
regularization}}$ (13)
$\displaystyle\qquad+\lambda_{2}\underbrace{\|\nabla_{t}\mathbf{z}_{t}\|^{2}}_{\scriptsize\mbox{latent
regularization}}$
with respect to $\mathbf{z}$ and $\theta$. We use the ADAM optimizer to
determine the optimal parameters; the network parameters and the latent
variables are randomly initialized.
A potential challenge with directly solving (13) is its high computational
complexity. Unlike supervised neural network approaches that offer fast
inference, the proposed approach optimizes the network parameters based on the
measured data. This strategy will amount to a long reconstruction time when
there are several image frames in the time series.
### III-E Strategies to reduce computational complexity
To minimize the computational complexity, we now introduce some approximation
strategies.
#### III-E1 Approximate data term for accelerated convergence
When the data is measured using non-Cartesian sampling schemes, $M$ non-
uniform fast Fourier transform (NUFFT) evaluations are needed for the
evaluation of the data term, where $M$ is the number of frames in the dataset.
Similarly, $M$ inverse non-uniform fast Fourier transform (INUFFT) evaluations
are needed for each back-propagation step. These NUFFT evaluations are
computationally expensive, resulting in slow algorithms. In addition, most
non-Cartesian imaging schemes over-sample the center of k-space. Since the
least-squares loss function in (5) weights errors in the center of k-space
higher than in outer k-space regions, it is associated with slow convergence.
To speed up the intermediate computations, we propose to use gridding with
density compensation, together with a projection step for the initial
iterations. Specifically, we will use the approximate data term
$D(\mathbf{z},\theta)=\sum_{i=1}^{M}\|\mathcal{P}_{i}\left(\mathcal{G}_{\theta}[\mathbf{z}_{i}]\right)-\mathbf{g}_{i}\|^{2}$
(14)
instead of
$\sum_{i}\|\mathcal{A}_{i}\left(\mathcal{G}[\mathbf{z}_{i}]\right)-\mathbf{b}_{i}\|^{2}$
in early iterations to speed up the computations. Here, $\mathbf{g}_{i}$ are
the gridding reconstructions
$\mathbf{g}_{i}=\left(\mathcal{A}_{i}^{H}\mathcal{A}_{i}\right)^{{\dagger}}\mathcal{A}_{i}^{H}~{}\mathbf{b}_{i}\approx\mathcal{A}_{i}^{H}~{}\mathcal{W}~{}\mathbf{b}_{i},$
(15)
where $\mathcal{W}$ is a diagonal matrix corresponding to multiplication by
the density compensation factors. The operators $\mathcal{P}_{i}$ in (14) are
projection operators:
$\mathcal{P}_{i}~{}\mathbf{x}=\left(\mathcal{A}_{i}^{H}\mathcal{A}_{i}\right)^{{\dagger}}\left(\mathcal{A}_{i}^{H}\mathcal{A}_{i}\right)~{}\mathbf{x}\approx\left(\mathcal{A}_{i}^{H}~{}\mathcal{W}~{}\mathcal{A}_{i}\right)\mathbf{x}$
(16)
We note that the term
$\left(\mathcal{A}_{i}^{H}~{}\mathcal{W}~{}\mathcal{A}_{i}\right)\mathbf{x}$
can be efficiently computed using Toeplitz embedding, which eliminates the
need for expensive NUFFT and INUFFT steps. In addition, the use of the density
compensation serves as a preconditioner, resulting in faster convergence. Once
the algorithm has approximately converged, we switch the loss term back to (5)
since it is optimal from a maximum likelihood perspective.
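The projected data term in (14) can be sketched for the Cartesian analog, where the density compensation $\mathcal{W}$ reduces to the identity and $\mathcal{P}_{i}$ becomes a k-space masking operator. This is only an illustration of the structure of (14); for spiral data, $\mathcal{P}_{i}$ would be applied via Toeplitz embedding with density-compensation weights, and the function name is hypothetical.

```python
import numpy as np

def approx_data_term(frames, masks, grids):
    """Approximate data term (14): sum_i ||P_i(x_i) - g_i||^2.
    frames: candidate images G_theta(z_i); masks: boolean sampling masks;
    grids: gridding reconstructions g_i. In this Cartesian toy case,
    P_i x = F^H M_i F x (mask in k-space, return to image space)."""
    total = 0.0
    for x, m, g in zip(frames, masks, grids):
        k = np.fft.fft2(x, norm="ortho")
        px = np.fft.ifft2(np.where(m, k, 0), norm="ortho")  # P_i x
        total += float(np.sum(np.abs(px - g) ** 2))
    return total
```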
#### III-E2 Progressive training-in-time
To further speed up the algorithm, we introduce a progressive training
strategy, which is similar to multi-resolution strategies used in image
processing. In particular, we start with a single frame obtained by pooling
the measured data from all the time frames. Since this average frame is well-
sampled, the algorithm promptly converges to the optimal solution. The
corresponding network serves as a good initialization for the next step.
Following convergence, we increase the number of frames. The optimal $\theta$
parameters from the previous step are used to initialize the generator, while
the latent vector is initialized by the interpolated version of the latent
vector at the previous step. This process is repeated until the desired number
of frames is reached.
Figure 3: Illustration of the progressive training-in-time approach. In the
first level of training, the k-space data of all the frames are binned into
one frame, and we solve for the average image at this level. Upon convergence
of the first step, the parameters and latent variables are transferred as the
initialization of the second step. In the second level of training, we divide
the k-space data into $M$ groups and reconstruct the $M$ average images.
Following convergence, we move to the final level of training, where the
parameters obtained in the second step and the linear interpolation of the
latent vectors from the second step are chosen as the initialization of the
final step of training.
This progressive training-in-time approach significantly reduces the
computational complexity of the proposed algorithm. In this work, we used a
three-step algorithm. However, the number of steps (levels) of training can be
chosen based on the dataset. This progressive training-in-time approach is
illustrated in Fig. 3.
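The hand-off of latent variables between levels of the progressive scheme amounts to linear interpolation of the coarse-level latents along the time axis; a minimal sketch (helper name illustrative):

```python
import numpy as np

def upsample_latents(Z, M_new):
    """Initialize the latent vectors of the next (finer) level of progressive
    training by linearly interpolating the optimal latents of the previous
    (coarser) level. Z has shape (M_old, ell); returns shape (M_new, ell)."""
    Z = np.asarray(Z, dtype=float)
    M_old, ell = Z.shape
    t_old = np.linspace(0.0, 1.0, M_old)   # coarse time grid
    t_new = np.linspace(0.0, 1.0, M_new)   # fine time grid
    return np.stack([np.interp(t_new, t_old, Z[:, j]) for j in range(ell)],
                    axis=1)
```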
## IV Implementation details and datasets
### IV-A Structure of the generator
The structure of the generator used in this work is given in Table I. The
output images have two channels, which correspond to the real and imaginary
parts of the MR images. The user-defined parameter $d$ controls the size of
the generator, i.e., the number of trainable parameters in the network. A
second user-defined parameter, $\ell(\mathbf{z})$, specifies the number of
elements in each latent vector. In this work, we choose $\ell(\mathbf{z})=2$
because there are two motion patterns (cardiac and respiratory) in the cardiac
images. We use leaky ReLU for all the non-linear activations, except at the
output layer, where a tanh activation is used.
Input size | Filter size | # filters | Padding | Stride | Output size
---|---|---|---|---|---
$1\times 1\times\ell({\mathbf{z}})$ | $1\times 1$ | 100 | 0 | 1 | $1\times 1\times 100$
$1\times 1\times 100$ | $3\times 3$ | $8d$ | 0 | 1 | $3\times 3\times 8d$
$3\times 3\times 8d$ | $3\times 3$ | $8d$ | 0 | 1 | $5\times 5\times 8d$
$5\times 5\times 8d$ | $4\times 4$ | $4d$ | 1 | 2 | $10\times 10\times 4d$
$10\times 10\times 4d$ | $4\times 4$ | $4d$ | 1 | 2 | $20\times 20\times 4d$
$20\times 20\times 4d$ | $3\times 3$ | $4d$ | 0 | 2 | $41\times 41\times 4d$
$41\times 41\times 4d$ | $5\times 5$ | $2d$ | 1 | 2 | $85\times 85\times 2d$
$85\times 85\times 2d$ | $4\times 4$ | $d$ | 1 | 2 | $170\times 170\times d$
$170\times 170\times d$ | $4\times 4$ | $d$ | 1 | 2 | $340\times 340\times d$
$340\times 340\times d$ | $3\times 3$ | $2$ | 1 | 2 | $340\times 340\times 2$
TABLE I: Architecture of the generator $\mathcal{G}_{\theta}$.
$\ell(\mathbf{z})$ denotes the number of elements in each latent vector.
### IV-B Datasets
This research study was conducted using data acquired from human subjects. The
Institutional Review Board at the local institution (The University of Iowa)
approved the acquisition of the data, and written consents were obtained from
all subjects. The experiments reported in this paper are based on datasets
collected in the free-breathing mode using the golden angle spiral
trajectory. We acquired eight datasets on a GE 3T scanner. One dataset was used
to identify the optimal hyperparameters of all the algorithms in the proposed
scheme. We then used the hyperparameters to generate the experimental results
for all the remaining datasets reported in this paper. The sequence parameters
for the datasets are: TR = 8.4 ms, FOV = 320 mm$\times$ 320 mm, flip angle =
18°, slice thickness = 8 mm. The datasets were acquired using a cardiac
multichannel array with 34 channels. We used an automatic algorithm to
pre-select the eight coils that provide the best signal-to-noise ratio in the
region of interest. Removing the coils with low sensitivities improved the
reconstructions [39]. We used a PCA-based coil combination using SVD such that
the approximation error was below 5%. We then estimated the coil
sensitivity maps based on these virtual channels using the method of Walsh et
al. [40] and assumed they were constant over time.
For each dataset in this research, we binned the data from six spiral
interleaves corresponding to 50 ms temporal resolution. If a Cartesian
acquisition scheme with TR = 3.5 ms were used, this would correspond to
$\approx$14 lines/frame; with a $340\times 340$ matrix, this corresponds
roughly to an acceleration factor of 24. Moreover, each dataset has more than
500 frames. During reconstruction, we omit the first 20 frames in each dataset
and use the next 500 frames for SToRM reconstructions; this is then used as
the simulated ground truth for comparisons. The experiments were run on a
machine with an Intel Xeon CPU at 2.40 GHz and a Tesla P100-PCIE 16GB GPU. The
source code for the proposed Gen-SToRM scheme can be downloaded from this
link: https://github.com/qing-zou/Gen-SToRM.
### IV-C Quality evaluation metric
In this work, the quantitative comparisons are made using the Signal-to-Error
Ratio (SER) metric (in addition to the standard Peak Signal-to-Noise Ratio
(PSNR) and the Structural Similarity Index Measure (SSIM)) defined as:
${\mathrm{SER}}=20\cdot\log_{10}\frac{\|\mathbf{x}_{orig}\|}{\|\mathbf{x}_{orig}-\mathbf{x}_{recon}\|}.$
Here, $\mathbf{x}_{orig}$ and $\mathbf{x}_{recon}$ represent the ground truth
and the reconstructed image, respectively. SER is reported in decibels (dB).
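For reference, the SER metric can be computed directly from its definition:

```python
import numpy as np

def ser_db(x_orig, x_recon):
    """Signal-to-Error Ratio in dB, as defined in the text:
    SER = 20 * log10(||x_orig|| / ||x_orig - x_recon||)."""
    num = np.linalg.norm(x_orig)
    den = np.linalg.norm(x_orig - x_recon)
    return 20.0 * np.log10(num / den)
```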
The SER metric requires a reference image, which is chosen as the SToRM
reconstruction with 500 frames. However, we note that this reference may be
imperfect and may suffer from blurring and related artifacts. Hence, we
consider the Blind/referenceless Image Spatial Quality Evaluator (BRISQUE)
[41] to evaluate the score of the image quality. The BRISQUE score is a
perceptual score based on the support vector regression model trained on an
image database with corresponding differential mean opinion score values. The
training image dataset contains images with different distortions. A smaller
score indicates better perceptual quality.
[Figure 4 panels: (a) performance comparison; (b) latent codes with both terms; (c) without distance regularization; (d) without latent regularization; (e) visual and quantitative comparisons.]
Figure 4: Illustration of the impact of the regularization terms in the
proposed scheme with $d=24$. We considered three cases in the experiment: (1)
using both regularizations, (2) using only latent regularization, and (3)
using only network regularization; these correspond to the blue, orange, and
yellow curves in (a). In (b), (c), and (d), we showed the learned latent
vectors for the three cases. The visual and quantitative comparisons of the
three cases are shown in (e).
### IV-D State-of-the-art methods for comparison
We compare the proposed scheme with the recent state-of-the-art methods for
free-breathing and ungated cardiac MRI. We note that while there are many deep
learning algorithms for static MRI, those methods are not readily applicable
to our setting.
* •
Analysis SToRM [10, 9], published in 2020: The manifold Laplacian matrix is
estimated from k-space navigators using kernel low-rank regularization,
followed by solving for the images using (4).
* •
Time-DIP [33], implemented based on the arXiv version available at the time
this article was submitted: This is an unsupervised learning scheme that fixes
the latent variables as noise and solves for the generator parameters. For
real-time applications, Time-DIP chooses a preset period; the noise vectors of
the frames corresponding to multiples of the period are chosen as independent
Gaussian variables [33], and the latent variables of the intermediate frames
are obtained by linear interpolation. We chose a period of 20 frames, which
roughly corresponds to the period of the heartbeat.
* •
Low-rank [2]: The image frames in the time series are recovered using the
nuclear norm minimization.
### IV-E Hyperparameter tuning
We used one of the acquired datasets to identify the hyperparameters of the
proposed scheme. Since we do not have access to the fully-sampled dataset, we
used the SToRM reconstructions from 500 images (acquisition time of $25$
seconds) as a reference. The smoothness parameter $\lambda$ of this method was
manually selected as $\lambda=0.01$ to obtain the best recovery, as in the
literature [9]. All of the comparisons relied on image recovery from 150
frames (acquisition time of 7.5 seconds). The hyperparameter tuning approach
yielded the parameters $d=40$, $\lambda_{1}=0.0005$, and $\lambda_{2}=2$ for
the proposed approach. We demonstrate the impact of tuning $d$ in Fig. 6,
while the impact of choosing $\lambda_{1}$ and $\lambda_{2}$ is shown in Fig.
4. The hyperparameter optimization of SToRM from 150 frames resulted in the
optimal smoothness parameter $\lambda=0.0075$. For Time-DIP, we follow the
design of the network shown by Jin et al. [33], where the generator consists
of multiple layers of convolution and upsampling operations. To ensure a fair
comparison, we used a similar architecture, where the base size of the network
was tuned to obtain the best results.
We use a three-step progressive training strategy. In the first step, the
learning rate for the network is $1\times 10^{-3}$ and 1000 epochs are used.
For the second step of training, the learning rate for the network is $5\times
10^{-4}$ and the learning rate for the latent variables is $5\times 10^{-3}$.
In this stage, 600 epochs are used. In the final step of training, the
learning rate for the network is $5\times 10^{-4}$, the learning rate for the
latent variables is $1\times 10^{-3}$, and 700 epochs are used.
## V Experiments and results
### V-A Impact of different regularization terms
We first study the impact of the two regularization terms in (13). The
parameter $d$ corresponding to the size of the network (see Table I) was
chosen as $d=24$ in this case. In Fig. 4 (a), we plot the reconstruction
performance with respect to the number of epochs for three scenarios: (1)
using both regularization terms; (2) using only latent regularization; and (3)
using only distance/network regularization. In the experiment, we use the
SToRM reconstruction from 500 frames ($\sim$25 seconds of acquisition),
referred to as “SToRM500”, as the reference for SER computations. We tested the
reconstruction performance for the three scenarios using 150 frames, which
corresponds to around 7.5 seconds of acquisition. From the plot, we observe
that without using the network regularization, the SER degrades with
increasing epochs, which is similar to the behavior of DIP. In this case, an early
stopping strategy is needed to obtain good recovery. The latent vectors
corresponding to this setting are shown in (c), which shows mixing between
cardiac and respiratory waveforms. When latent regularization is not used, we
observe that the SER plot is roughly flat, but the latent variables show quite
significant mixing, which translates to blurred reconstructions. By contrast,
when both network and latent regularizations are used, the algorithm converges
to a better solution. We also note that the latent variables are well
decoupled; the blue curve captures the respiratory motion, while the orange
one captures the cardiac motion. We also observe that the reconstructions
agree well with the SToRM reconstructions. The network now learns meaningful
mappings, which translate to improved reconstructions when compared to the
reconstructions obtained without using the regularizers.
Figure 5: Comparisons of the reconstruction performance with and without the
progressive training-in-time strategy using $d=40$. From the plot of SER vs.
running time, we can see that the progressive training-in-time approach yields
better results with much less running time compared to training without it.
Two reconstructed frames near the end of systole and diastole using SToRM500,
the proposed scheme with progressive training-in-time, and the proposed scheme
without progressive training-in-time are also shown for comparison. The
average BRISQUE scores for SToRM500, the reconstruction with progressive
training-in-time, and the reconstruction without it are $36.4$, $37.3$, and
$39.1$, respectively.
### V-B Benefit of progressive training-in-time approach
In Fig. 5, we demonstrate the significant reduction in run-time offered by the
progressive training strategy described in Section III-E2. Here, we consider
the recovery from 150 frames with and without the progressive strategy. Both
regularization priors were used in this strategy, and $d$ was chosen as 24. We
plot the reconstruction performance, measured by the SER with respect to the
running time. The SER plots show that the proposed scheme converges in
approximately 200 seconds, while the direct approach takes more than 2000 seconds.
We also note from the SER plots that the solution obtained using progressive
training is superior to the one without progressive training.
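A minimal sketch of such a coarse-to-fine schedule is shown below. The subsampling factors and the interpolation-based warm start are assumptions for illustration; the paper's exact schedule may differ:

```python
import numpy as np

def progressive_schedule(n_frames, n_levels):
    """Frame indices used at each level of a coarse-to-fine
    training-in-time schedule: level 0 sees every 2**(n_levels-1)-th
    frame, the last level sees all frames."""
    return [np.arange(0, n_frames, 2 ** (n_levels - 1 - k))
            for k in range(n_levels)]

def init_next_level(z_coarse, idx_coarse, idx_fine):
    """Warm-start the finer level by linearly interpolating the
    coarse-level latent estimates onto the denser frame grid."""
    return np.stack([np.interp(idx_fine, idx_coarse, z_coarse[:, j])
                     for j in range(z_coarse.shape[1])], axis=1)

levels = progressive_schedule(150, 3)   # every 4th, every 2nd, then all frames
z0 = np.zeros((len(levels[0]), 2))      # latents estimated at the coarsest level
z1 = init_next_level(z0, levels[0], levels[1])
print([len(ix) for ix in levels])  # [38, 75, 150]
```

Because each level is initialized from the previous one, most of the optimization happens on small frame subsets, which is where the reported run-time reduction comes from.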
### V-C Impact of size of the network
The architecture of the generator $\mathcal{G}_{\theta}$ is given in Table I.
Note that the size of the network is controlled by the user-defined parameter
$d$, which dictates the number of convolution filters and hence the number of
trainable parameters in the network. In this section, we investigate the
impact of the user-defined parameter $d$ on the reconstruction performance. We
tested the reconstruction performance using $d=8,16,24,32,40$, and $48$, and
the obtained results are shown in Fig. 6. From the figure, we see that when
$d=8$ or $d=16$, the generator network is too small to capture the dynamic
variations. When $d=8$, the generator fails to capture either the cardiac or
the respiratory motion. When $d=16$, part of the respiratory motion is
recovered, while the cardiac motion is still lost. The best SER scores with
respect to SToRM with 500 frames are obtained for $d=24$, while the lowest
Brisque scores are obtained for $d=40$. We also observe that the features
including papillary muscles and myocardium in the $d=40$ results appear
sharper than those of SToRM with 500 frames, even though the proposed
reconstructions were only performed from 150 frames. We use $d=40$ for the
subsequent comparisons in the paper.
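To see why $d$ controls the number of trainable parameters, consider a hypothetical generator whose hidden layers each use $d$ convolution filters (the actual layer widths are specified in Table I, not reproduced here). The parameter count then grows roughly quadratically with $d$:

```python
def conv_params(c_in, c_out, k=3):
    # weights + biases of a single k x k convolution layer
    return k * k * c_in * c_out + c_out

def generator_params(d, latent_dim=2, n_layers=5, out_channels=1):
    """Parameter count for a hypothetical generator whose hidden layers
    all use d convolution filters (an illustrative assumption; see
    Table I of the paper for the actual architecture)."""
    total = conv_params(latent_dim, d)        # latent -> hidden
    for _ in range(n_layers - 2):
        total += conv_params(d, d)            # hidden -> hidden
    return total + conv_params(d, out_channels)  # hidden -> image

for d in (8, 16, 24, 32, 40, 48):
    print(d, generator_params(d))
```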
Figure 6: Impact of network size on reconstruction performance. In the
experiments, we chose $d=8,16,24,32,40$ and $48$ to investigate the
reconstruction performance. We used 500 frames for SToRM reconstructions
(SToRM500) as the reference for SER comparisons. For the investigation of the
impact of network size on the reconstructions, we used 150 frames. The
diastolic and systolic states and the temporal profiles are shown in the
figure for each case. The Brisque scores and average SER are also reported. It
is worth noting that when $d=40$, the results are even less blurred than the
SToRM500 results, even though only one-third of the data are used.
### V-D Comparison with the state-of-the-art methods
In this section, we compare our proposed scheme with several state-of-the-art
methods for the reconstruction of dynamic images.
Methods | SToRM500 | SToRM150 | Proposed | Time-DIP
---|---|---|---|---
SER (dB) | NA | $17.3$ | $\mathbf{18.2}$ | $16.7$
PSNR (dB) | NA | $32.7$ | $\mathbf{33.5}$ | $32.0$
SSIM | NA | $0.86$ | $\mathbf{0.89}$ | $0.87$
Brisque | $\mathbf{35.2}$ | $40.2$ | $37.1$ | $42.9$
Time (min) | 47 | 13 | 17 | 57
TABLE II: Quantitative comparisons based on six datasets: We used six datasets
to obtain the average SER, PSNR, SSIM, Brisque score, and time used for
reconstruction.
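For reproducibility, the SER and PSNR figures of merit can be computed as below. These are the standard definitions, which we assume here since the exact conventions are not restated in this section:

```python
import numpy as np

def ser_db(ref, rec):
    """Signal-to-error ratio in dB (assumed definition):
    20*log10(||ref|| / ||ref - rec||)."""
    return 20 * np.log10(np.linalg.norm(ref) / np.linalg.norm(ref - rec))

def psnr_db(ref, rec):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(np.max(np.abs(ref)) ** 2 / mse)

ref = np.ones((8, 8))
rec = ref + 0.1          # uniform 0.1 error -> MSE = 0.01
print(round(ser_db(ref, rec), 2), round(psnr_db(ref, rec), 2))  # 20.0 20.0
```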
In Fig. 7, we compare the region of interest for SToRM500, SToRM with 150
frames (SToRM150), the proposed method with two different $d$ values, the
unsupervised Time-DIP approach, and the low-rank algorithm. From Fig. 7, we
observe that the proposed scheme can significantly reduce errors in comparison
to SToRM150. Additionally, the proposed scheme is able to capture the motion
patterns better than Time-DIP, while the low-rank method is unable to capture
the motion patterns. From the time profile in Fig. 7, we notice that the
proposed scheme is capable of recovering the abrupt change in blood-pool
contrast between diastole and systole. This is due to inflow effects
associated with gradient echo (GRE) acquisitions. In particular, blood from
regions outside the slice, which has not experienced the earlier
slice-selective excitation pulses, enters the heart; the difference in
magnetization between this inflowing blood with no magnetization history and
the blood that remained within the slice produces the abrupt change in
intensity. We note that some of the competing methods, such as Time-DIP and
low-rank, blur these details.
(a) Visual comparisons
(b) Time profiles
Figure 7: Comparisons with the state-of-the-art methods. The first column of
(a) corresponds to the reconstructions from 500 frames ($\sim$ 25s of
acquisition time), while the rest of the columns are recovered from 150 frames
($\sim$ 7.5s of acquisition time). The top row of (a) corresponds to the
diastole phase, while the third row corresponds to the systole phase. The
second row of (a) is an intermediate phase. Fig. (b) shows the time profiles of the
reconstructions. We observe that the proposed ($d=40$) reconstructions exhibit
less blurring and fewer artifacts when compared to SToRM150 and competing
methods.
We also perform the comparisons on a different dataset, comparing the proposed
scheme with SToRM500, SToRM150, Time-DIP, and the low-rank approach. The
results are shown in Fig. 8. From the figure, we see that the
proposed reconstructions appear less blurred than those of the conventional
schemes.
(a) Visual comparisons
(b) Time profiles
Figure 8: Comparisons with the state-of-the-art methods. The first column of
(a) corresponds to the reconstructions from 500 frames ($\sim$ 25s of
acquisition time), while the rest of the columns are recovered from 150 frames
($\sim$ 7.5s of acquisition time). The top row of (a) corresponds to the
diastole phase, while the third row corresponds to the systole phase. The
second row of (a) is an intermediate phase. Fig. (b) shows the time profiles of the
reconstructions. We chose $d=40$ for the proposed scheme. We observe that the
proposed reconstructions appear less blurred when compared to the conventional
schemes.
(a) Latent vectors
(b) Systole in E-E
(c) Systole in E-I
(d) Diastole in E-E
(e) Diastole in E-I
Figure 9: Illustration of the framework of the proposed scheme with $d=40$. We
plot the latent variables of 150 frames in a time series on the first dataset.
We show four different phases in the time series: systole in End-Expiration
(E-E), systole in End-Inspiration (E-I), diastole in End-Expiration (E-E), and
diastole in End-Inspiration (E-I). A thin green line surrounds the liver in
the image frame to indicate the respiratory phase. The latent vectors
corresponding to the four different phases are indicated in the plot of the
latent vectors.
(a) Latent vectors
(b) Systole in E-E
(c) Systole in E-I
(d) Diastole in E-E
(e) Diastole in E-I
Figure 10: Illustration of the framework of the proposed scheme with $d=40$.
We plot the latent variables of 150 frames in a time series. We show four
different phases in the time series: systole in End-Expiration (E-E), systole
in End-Inspiration (E-I), diastole in End-Expiration (E-E), and diastole in
End-Inspiration (E-I). The latent vectors corresponding to the four different
phases are indicated in the plot of the latent vectors.
We also compared the proposed scheme with SToRM500, SToRM150, and the
unsupervised Time-DIP approach quantitatively. We omit the low-rank method
here because the low-rank approach failed on several of the datasets. The quantitative
comparisons are shown in Table II. We used SToRM500 as the reference for SER,
PSNR, and SSIM calculations. The quantitative results are based on the average
performance from six datasets.
Finally, we illustrate the proposed approach on two datasets in Fig. 9 and
Fig. 10. The proposed approach decoupled the latent vectors corresponding
to the cardiac and respiratory phases well, as shown in the representative
examples in Fig. 9 (a) and Fig. 10 (a).
## VI Conclusion
In this work, we introduced an unsupervised generative SToRM framework for the
recovery of free-breathing cardiac images from spiral acquisitions. This work
assumes that the images are generated by a non-linear CNN-based generator
$\mathcal{G}_{\theta}$, which maps the low-dimensional latent variables to
high-resolution images. Unlike traditional supervised CNN methods, the
proposed approach does not require any training data. The parameters of the
generator and the latent variables are directly estimated from the
undersampled data. The key benefit for this generative model is its ability to
compress the data, which results in a memory-effective algorithm. To improve
the performance, we introduced a network/distance regularization and a latent
variable regularization. The combination of the priors ensures the learning of
representations that preserve distances and ensure the temporal smoothness of
the recovered images; the regularized approach provides improved
reconstructions while minimizing the need for early stopping. To reduce the
computational complexity, we introduced a fast approximation of the data loss
term as well as a progressive training-in-time strategy. These approximations
result in an algorithm with computational complexity comparable to our prior
SToRM algorithm. The main benefits of this scheme are the improved performance
and considerably reduced memory demand. While our main focus in this work was
to establish the benefits of the framework in 2D, we plan to extend it to 3D
applications in the future.
## Acknowledgement
The authors would like to thank Ms. Melanie Laverman from the University of
Iowa for making editorial corrections to refine this paper.
# Monte-Carlo Applications for Partially Polarised Inverse External-Compton
Scattering (MAPPIES) - I. Description of the code and First Results
Lenté Dreyer Centre of Space Research, North-West University, Potchefstroom
2531, South Africa Markus Böttcher Centre of Space Research, North-West
University, Potchefstroom 2531, South Africa
(Accepted November 10, 2020)
###### Abstract
The radiation mechanisms responsible for the multiwavelength emission from
relativistic jet sources are poorly understood. The modelling of the spectral
energy distributions (SEDs) and light curves alone is not adequate to
distinguish between existing models. Polarisation in the $X$-ray and
$\gamma$-ray regime of these sources may provide new and unique information
about the jet physics and radiation mechanisms. Several upcoming projects will
be able to deliver polarimetric measurements of the brightest $X$-ray sources,
including active galactic nuclei (AGN) jets and $\gamma$-ray bursts (GRBs).
This article describes the development of a new Monte-Carlo code – MAPPIES
(Monte-Carlo Applications for Partially Polarised Inverse External-Compton
Scattering) – for polarisation-dependent Compton scattering in relativistic
jet sources. Generic results for Compton polarisation in the Thomson and
Klein-Nishina regimes are presented.
BL Lacertae objects: general – galaxies: active – galaxies: jets – gamma rays:
galaxies – polarization – radiation mechanisms: non-thermal – relativistic
processes – scattering – $X$-rays: galaxies
Journal: ApJ
## 1 Introduction
The radiation from jetted astrophysical sources (e.g. active galactic nuclei
(AGNs) and $\gamma$-ray bursts (GRBs)) is characterised by their spectral
energy distribution (SED) which can be modelled in many different ways, whilst
being consistent with the spectral shape of the SED (e.g. Burrows et al.
(2006); Walcher et al. (2011); Böttcher et al. (2013)). Additional constraints
are therefore required in order to distinguish between models. Relativistic
jets are accompanied by the acceleration of particles up to very high
energies, as well as the production of secondary, non-thermal radiation
(Böttcher et al., 2012). Understanding the particle acceleration, radiation
mechanisms, and the magnetic field structure of these jets is among the
primary goals in the field of high-energy astrophysics. Polarisation is
fundamentally linked to the internal geometry of astrophysical sources, and
therefore carries important information about the astrophysical environment,
in particular about how the magnetic field is coupled to the dynamics and
acceleration of the energetic particles (see Trippe (2014) for a general
review). Polarisation of the emission from jet-like astrophysical sources adds
two essential parameters – the polarisation degree (PD) and the polarisation
angle (PA) – to those already derived from spectra and variability.
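Both quantities follow from the Stokes parameters $(I, Q, U)$ in the standard convention, which the sketch below assumes:

```python
import math

def pd_pa(I, Q, U):
    """Linear polarisation degree and angle from the Stokes parameters:
    PD = sqrt(Q^2 + U^2) / I, PA = 0.5 * atan2(U, Q) (radians)."""
    pd = math.hypot(Q, U) / I
    pa = 0.5 * math.atan2(U, Q)
    return pd, pa

pd, pa = pd_pa(I=1.0, Q=0.3, U=0.0)
print(pd, math.degrees(pa))  # 0.3 0.0
```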
Synchrotron polarisation of the radio/optical emission from astrophysical jets
has been a standard diagnostic to examine the magnetic field (e.g. Conway et
al. (1993); Zhang et al. (2014); Gabuzda (2018)), since the polarisation
measurements combined with the spectra and variability of the emission reveal
critical information about the magnetic field structure in the emission
region. However, since the radio/optical polarisation may come from regions
that do not emit high-energy radiation, an important challenge that remains
for high-energy astronomy is measuring the polarisation in the ultraviolet
(UV), $X$-ray, and $\gamma$-ray regimes in order to probe the most active jet
regions with powerful particle acceleration. High-energy polarisation
measurements may provide unambiguous constraints on the geometry and structure
of the astrophysical source, for example by constraining the orientation of
the accretion-disks with respect to our line of sight (e.g. Schnittman &
Krolik (2010, 2009); Laurent et al. (2011); Beheshtipour et al. (2017)).
Compared to the orientation of the relativistic jet, multiwavelength
polarisation, therefore, holds vital
Table 1: List of recent, upcoming, and future proposed missions to measure the polarisation of the high-energy emission from jet-like astrophysical sources.

Polarimeter | Energy [keV] | References | Science objectives
---|---|---|---
Lightweight Asymmetry and Magnetism Probe (LAMP) | $0.25$ | She et al. (2015); Li et al. (2019) | Blazar jets, and thermal emission from pulsars
Experiment Demonstration of a Soft $X$-ray Polarimeter (REDSoX) | $0.2-0.8$ | Marshall et al. (2017, 2019) | AGNs, binaries, and isolated pulsars
The $X$-ray Polarization Probe (XPP) | $0.2-60$ | Jahoda et al. (2019) | Black holes, neutron stars, magnetars, and AGNs
Enhanced $X$-ray Timing and Polarimetry Mission (eXTP) | $2.0-10$ | Zhang et al. (2016, 2019b) | Black holes, neutron stars, and AGNs
Imaging $X$-ray Polarimetry Explorer (IXPE) | $2.0-8.0$ | Weisskopf et al. (2016); Sgrò & IXPE Team (2019) | AGNs, black holes, and neutron stars
POLAR and POLAR-2 | $0.05-0.5$ | Zhang et al. (2018); Kole (2019) | GRBs
$X$-ray Polarimeter Experiment (POLIX) | $5.0-30$ | Paul et al. (2016) | AGN jets, black holes, and accretion-powered pulsars
X-Calibur and XL-Calibur | $20-40$ | Kislat (2019); Abarr & Krawczynski (2020) | AGN jets, black holes, and neutron stars
information on the extreme physical processes and morphology of the sources
and their jets (see e.g. Krawczynski et al. (2011); Zhang (2017); Krawczynski
et al. (2019); Liodakis et al. (2020); Böttcher (2019); Rani et al. (2019) for
reviews).
The general formalism for calculating high-energy polarisation has been well
established. Synchrotron radiation of relativistic charged particles in
ordered magnetic fields is expected to be both linearly and circularly
polarized (Westfold, 1959; Rybicki & Lightman, 1979). Compton scattering off
relativistic electrons will reduce the degree of polarisation to about half of
the seed photon field’s polarisation (Bonometto et al., 1970). Since there is
no existing technology to measure high-energy circular polarization, and the
radiation is treated as a collection of particles, rather than an
electromagnetic wave, only linear polarisation is considered. The Klein-
Nishina cross section is generally dependent on the polarisation. Polarised
photons scatter preferentially in a direction perpendicular to their
electrical field vector, and the electric field vectors of the scattered
photons tend to align with the seed photons’ electric field vectors (Matt et
al., 1996). Polarisation can therefore be induced when non-relativistic
electrons scatter an anisotropic photon field, even if the seed photons are
unpolarised. Compton scattering off relativistic electrons, however, is not
expected to induce polarisation since the seed photon field is approximately
axisymmetric around the electron momentum in the electron rest frame, making
any anisotropy of the photon field irrelevant. In a model where a thermal
population and a power-law tail of non-thermal electrons (in an emission region that moves
along the jet with a bulk Lorentz factor $\Gamma_{jet}\geq 10$) scatter an
external optical/UV radiation field, the hard $X$-ray/$\gamma$-ray radiation
will result from scattering off relativistic electrons and is thus expected
to be unpolarised. The UV/soft $X$-ray radiation, on the other hand, results
from scattering off non-relativistic thermal electrons and can therefore be
highly polarised (e.g. Schnittman & Krolik (2009)).
A formalism for evaluating Compton polarisation in the Thomson regime was
developed by Bonometto et al. (1970), and applied to Synchrotron Self Compton
(SSC) emission by Bonometto & Saggion (1973). Calculations of Compton
polarisation in the Thomson and Klein-Nishina regimes were provided by Sunyaev
& Titarchuk (1984), following the Monte-Carlo approach. Krawczynski (2012) and
Beheshtipour et al. (2017) also followed the Monte Carlo approach to calculate
the Compton polarisation in the Thomson and Klein-Nishina regimes, which
included the contribution of non-thermal electrons in the emission region, and
verified the analytical results of Bonometto et al. (1970) in the Thomson
regime. This article describes the development of a new Monte-Carlo code –
MAPPIES (Monte-Carlo Applications for Partially Polarised Inverse External-
Compton Scattering) – for polarisation dependent Compton scattering in jetted
astrophysical sources.
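To give a flavour of the Monte-Carlo approach underlying such codes, the toy sketch below (our own Thomson-regime illustration, not the MAPPIES implementation) rejection-samples scattering directions for a fully polarised incident beam and recovers the preference for scattering perpendicular to the electric field vector noted above:

```python
import numpy as np

def sample_thomson_polarised(n, rng):
    """Rejection-sample scattering directions for a fully polarised
    photon beam travelling along z with electric vector along x.
    The Thomson differential cross section for this geometry is
    dsigma/dOmega ~ 1 - sin^2(theta) * cos^2(phi)."""
    kept_ct, kept_phi, total = [], [], 0
    while total < n:
        cos_t = rng.uniform(-1.0, 1.0, n)
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        w = 1.0 - (1.0 - cos_t ** 2) * np.cos(phi) ** 2  # acceptance weight in [0, 1]
        keep = rng.uniform(0.0, 1.0, n) < w
        kept_ct.append(cos_t[keep])
        kept_phi.append(phi[keep])
        total += int(keep.sum())
    return np.concatenate(kept_ct)[:n], np.concatenate(kept_phi)[:n]

rng = np.random.default_rng(1)
cos_t, phi = sample_thomson_polarised(200_000, rng)

# Photons scatter preferentially perpendicular to the incident electric
# vector: the fraction of azimuths with cos^2(phi) < 1/2 exceeds the
# isotropic value of 1/2 (analytically (pi + 1) / (2 pi) ~ 0.659 here).
frac_perp = float(np.mean(np.cos(phi) ** 2 < 0.5))
print(round(frac_perp, 2))
```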
The potential of using high-energy polarisation as a diagnostic for different
radiation mechanisms is briefly discussed in section 2. In section 3, the
description of the MAPPIES code is presented, followed by the results for
Compton polarisation in the Thomson and Klein Nishina regimes in section 4,
and a summary in section 5.
## 2 Scientific Potential of High-Energy Polarisation
High-energy polarimetry may play a crucial role in understanding the extreme
physics of high-energy radiation, neutrino production, and particle
acceleration in jet-like astrophysical sources. Polarisation of
$X$-ray/$\gamma$-ray emission has remained largely unexplored, partly due to
the difficulty in the detection of high-energy polarisation. However,
advancements in detector technology have led to several projects that may be able to
deliver polarimetric measurements of high-energy emission from the brightest
$X$-ray sources, with estimates of a minimum detectable degree of polarisation
(MDP) down to $10\%$ for moderately bright sources (McConnell & Ryan, 2004).
Examples of recent, upcoming, and proposed missions to measure the
polarisation in the high-energy regime of the emission from jet-like
astrophysical sources are listed in Table 1. It is thus timely to consider the
model predictions for different models of various astrophysical sources. In
this section, a short overview of the model predictions for the radiation
mechanisms of high-energy emission from the AGN jets and GRBs is given.
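The quoted MDP can be made concrete with the standard counting-statistics expression for the 99%-confidence minimum detectable polarisation, $\mathrm{MDP}_{99}=\frac{4.29}{\mu R_{S}}\sqrt{(R_{S}+R_{B})/T}$, where $\mu$ is the modulation factor, $R_{S}$ and $R_{B}$ are the source and background count rates, and $T$ is the exposure time. This convention and the example numbers below are assumptions for illustration:

```python
import math

def mdp99(mu, rate_src, rate_bkg, t_obs):
    """99%-confidence minimum detectable polarisation (standard
    counting-statistics convention, assumed here):
    MDP99 = 4.29 / (mu * R_S) * sqrt((R_S + R_B) / T)."""
    return 4.29 / (mu * rate_src) * math.sqrt((rate_src + rate_bkg) / t_obs)

# Hypothetical numbers: modulation factor 0.3, 1 ct/s source rate,
# 0.1 ct/s background rate, one-day exposure.
print(round(mdp99(0.3, 1.0, 0.1, 86_400.0), 3))  # 0.051
```

As expected, the MDP improves (decreases) with longer exposures and brighter sources, which is why only the brightest $X$-ray sources are accessible at first.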
### 2.1 High-energy emission from AGNs
Jet dominated AGNs, which are among the most powerful high-energy emitters in
the Universe, harbor supermassive black holes (SMBHs) at their central
engines. In about $10\%$ of AGNs, mass accretion onto this SMBH is accompanied
by the production of relativistic jets, whose bulk energy is converted into
kinetic energy of electrons, multiwavelength radiation, and possibly particle
emission of ions and neutrinos. Blazars, in which one of the jets is closely
aligned to our line of sight (Urry & Padovani, 1995), are the most numerous
class of extragalactic $\gamma$-ray sources detected (e.g. Aharonian et al.
(2009); Ackermann et al. (2016)). The SEDs of blazars are dominated by non-
thermal radiation across the entire electromagnetic spectrum. The radio
through optical/UV emission is well explained by synchrotron emission from
relativistic electrons in the jet, which is consistent with moderate PDs (up
to PD $\sim(3-40)\%$) in the optical (Böttcher et al., 2012; Zhang et al.,
2014). For high-energy-synchrotron-peaked (HSP) blazars, the synchrotron
emission extends well into the $X$-ray regime, and may thus be confirmed by
$X$-ray polarisation (e.g. Krawczynski et al. (2011)).
The origin of the high-energy ($X$-ray/$\gamma$-ray) component in the SEDs of
blazars is still unclear with two viable models, both consistent with the
shape of the SED: The first leptonic model argues that the high-energy
component is due to Compton scattering off the same electrons that emit the
radio to UV/$X$-ray radiation. The seed photons can then either be synchrotron
photons or infrared (IR)/optical/UV photons external to the jet (possibly from
the accretion disk, the broad line region (BLR), or a dusty torus). In the
second hadronic model, non-thermal protons radiate via proton synchrotron
emission and interact with low-energy photons via photo-pair and photo-pion
interactions leading, in most cases, to synchrotron-supported pair cascades
developing in the emission region (for a general review on the features of
these models, see Böttcher (2010)). High-energy polarisation provides an
excellent diagnostic to distinguish between leptonic and hadronic emission
scenarios, since hadronic models intrinsically predict higher degrees of
$X$-ray and $\gamma$-ray polarisation than leptonic models (Zhang & Böttcher,
2013; Paliya et al., 2018).
The production of high-energy neutrinos provides evidence for hadronic
interactions (e.g. Atoyan & Dermer (2001); Dermer & Menon (2010); Tavecchio &
Ghisellini (2015)). If blazars accelerate enough high-energy protons, the
protons may interact with the local blazar radiation field and produce charged
pions which decay and emit neutrinos. The recent IceCube-170922A neutrino
event, which was reported to coincide with the blazar TXS 0506+056 in flaring
state (IceCube Collaboration et al., 2018), indicates that hadronic processes
may operate in a blazar jet. Many models have been put forward for an
explanation of the corresponding SED of TXS 0506+056 during the neutrino alert
(e.g. Ansoldi et al. (2018); Keivani et al. (2018); Murase et al. (2018);
Padovani et al. (2018); Cerruti et al. (2019); Reimer et al. (2019); Gao et
al. (2019)), which can be categorised into two groups: The first is a leptonic
scenario where inverse-Compton dominates the high-energy emission, with a
subdominant hadronic component which produces the neutrinos as well as a
considerable amount of $X$-rays through synchrotron emission from hadronically
induced cascades. The second is a hadronic scenario where the $X$-ray emission
consists of both proton synchrotron and cascading synchrotron, while the
$\gamma$-ray emission is dominated by proton synchrotron. $X$-ray polarisation
can probe the secondary pair synchrotron contribution complementary to the
neutrino detection, while $\gamma$-ray polarisation can be used to distinguish
between the inverse-Compton and proton synchrotron scenarios (Rani et al.,
2019). For instance, Zhang et al. (2019a) found that the proton synchrotron
(hadronic) scenarios generally predict higher PDs across the high-energy
component than the inverse-Compton (leptonic) dominated scenarios.
The SEDs of some blazars contain an excess in the UV and/or soft $X$-ray
regime (e.g. Masnou et al. (1992); Grandi et al. (1997); Haardt et al. (1998);
Pian et al. (1999); Raiteri et al. (2005); Palma et al. (2011); Ackermann et
al. (2012); Paliya et al. (2015); Pal et al. (2020)) called the big blue bump
(BBB). Various models for the emission responsible for the BBB have been
proposed, which include: (1) Thermal emission from the accretion disk that
feeds the SMBH (e.g. Pian et al. (1999); Blaes et al. (2001); Paliya et al.
(2015); Pal et al. (2020)), (2) a higher than Galactic dust-to-gas ratio
towards the source (e.g. Ravasio et al. (2003)), (3) a distinct synchrotron
component from a different region in a multi-zone construction (e.g. Paltani
et al. (1998); Raiteri et al. (2006); Ostorero et al. (2004); Roustazadeh &
Böttcher (2012)), (4) the detection of a Klein-Nishina effect on the
synchrotron spectrum (e.g. Ravasio et al. (2003); Moderski et al. (2005)), and
(5) bulk Compton emission (e.g. Sikora et al. (1994, 1997); Błażejowski et al.
(2000); Ackermann et al. (2012); Baring et al. (2017)). The polarisation of
the UV/$X$-ray emission from blazars may yield significant insights in order
to distinguish between different BBB emission scenarios. For instance, a BBB
due to (unpolarised) thermal emission from an accretion-disk predicts that the
polarisation will decrease with increasing frequency throughout the optical/UV
regime, while Roustazadeh & Böttcher (2012) predicted that a BBB due to
cascade synchrotron emission would result in PDs that show a weak dependence
on the frequency over the optical/UV regime. If the BBB arises from the bulk
Compton feature, the thermal Comptonisation process should lead to significant
polarisation of the Compton emission from the UV/$X$-ray excess in the SED
(Baring et al., 2017).
### 2.2 Gamma-ray burst prompt emission
GRBs are the strongest explosions in the Universe, separated into two phases:
The initial burst of $\gamma$-rays (i.e. the prompt emission) that can last
from a fraction of a second to hundreds of seconds, and a longer lasting (from
days to weeks) afterglow emission. There are at least two classes of GRBs
depending on the duration of the prompt emission phase, believed to be
associated with the formation of two oppositely directed ultra-relativistic
jets (McConnell et al., 2019): Short GRBs ($\leq 2$ seconds), which are
believed to be associated with the merger of compact star binaries, and long
GRBs ($>2$ seconds), which are believed to be associated with the death of a
massive star. The afterglow has been well studied across the entire
electromagnetic spectrum, contributing to our understanding of the later
stages of GRB jets (e.g. Sari (1997); Böttcher & Dermer (1998); Pian et al.
(1999); Piran & Granot (2001); Granot (2008); Racusin et al. (2011)).
However, given the erratic nature of GRBs, understanding the physics of the
early phase of the jet propagation is more challenging since it depends on the
short lived high-energy prompt emission. The observed spectra of GRB prompt
emission are often well described by a Band function (Band et al., 1993),
which consists of a broken power-law with a smooth break at a characteristic
peak energy (i.e. the peak of the spectrum when plotted in terms of the energy
output per decade). Many diverse models for the emission mechanism that can
explain the Band like non-thermal prompt emission spectra have been proposed
(see e.g. Baring & Braby (2004); Pe’er (2015)), which include: (1) Optically
thin synchrotron radiation with either random or ordered magnetic fields, (2)
SSC emission, (3) Compton drag models, and (4) photospheric models. The models
show that an integrated understanding of the geometry and physical processes
close to the central engine may only be accomplished through high-energy
polarimetry, since it depends on the emission processes involved that produce
the prompt emission (see e.g. McConnell et al. (2019); Gill et al. (2020) for
recent reviews).
Emission due to the SSC process can be moderately polarised, with maximum PDs
$\sim 24\%$ for a simplified model (Chang & Lin, 2014). However, SSC has been
disfavored as a plausible emission mechanism by GRB energetics (see e.g.
Baring & Braby (2004); Piran et al. (2009)). The predicted linear polarisation
for photospheric models is relatively low, although the polarisation can be as
high as PD $\sim 40\%$ depending on the line of sight (Lundman et al.,
2018). Models that argue for synchrotron radiation and/or inverse-Compton
scattering between softer photons and relativistic electrons (i.e. Compton
drag) predict high PDs (Lyutikov et al., 2003; Gill et al., 2020). Detection
of highly polarised prompt GRB emission would thus support both synchrotron
and Compton drag models. However, Toma et al. (2009) showed that a statistical
study of a sample of GRBs could then be used to differentiate between the
the models that invoke either ordered or random magnetic fields.
Table 2: The input parameters of the MAPPIES code for different seed photon and electron energy distributions.

Model description | Parameter description
---|---
Emission region | Lorentz factor of the jet, $\Gamma_{jet}$
| Redshift of the source, $z$
Seed photon distributions |
(1) Isotropic, single-temperature black body distribution | Number of seed photons considered, $N_{phot}$
| Temperature of the seed photons in eV, $kT_{rad}$
(2) Multi-temperature accretion-disk spectrum | Number of seed photons considered, $N_{phot}$
| Black hole mass in g, $M_{BH}$
| Inner disk radius of the accretion-disk in cm, $R^{in}_{D}$
| Outer disk radius of the accretion-disk in cm, $R^{out}_{D}$
| Distance between the central black hole and the emission region in cm, $h$
| Accretion-disk luminosity in $\mathrm{~{}erg\cdot s^{-1}}$, $L_{D}$
Electron energy distributions |
(1) Purely thermal (Maxwell) distribution | Thermal temperature of electrons in eV, $kT_{e}$
(2) Hybrid (Maxwell + power-law) distribution | Fraction of non-thermal electrons, $f_{nth}$
| Power-law index of the power-law distribution of non-thermal electrons, $p$
| Maximum Lorentz factor of non-thermal electrons, $\gamma_{max}$
(3) Input distribution | File of electron distribution, $n(\gamma)$
The fraction of the GRBs with significant polarisation is higher for models
that have ordered magnetic fields than those with random magnetic fields. The
PD measurements of a sample of GRBs which result in a distribution peaking at
high PDs would therefore favor synchrotron models with ordered magnetic
fields.
A compact detector for GRB polarisation, POLAR (Zhang et al., 2018), was
designed to produce linear polarisation measurements in the energy range of
$\sim 50-500$ keV and detected 55 GRBs during 2016 and 2017 (Xiong et al.,
2017; Kole, 2018). A time integrated analysis for a number of GRBs observed by
POLAR was done by Kole (2018), which showed low or unpolarised prompt emission
in the energy regime of $\sim 30-750$ keV. The results therefore favored most
emission mechanism models, except synchrotron radiation with ordered magnetic
field configurations. Time resolved analysis for a selected sample of the GRBs
revealed PDs with a changing PA. For instance, the sample of GRBs for which
intra-pulse time-resolved studies were possible indicated PDs $\sim 30\%$ with
an evolving PA. This indicates that low polarisation signals from the time-
integrated analysis could be a result of the summation of changing
polarisation signals for different epochs. The work of Kole et al. (2020) did
not include energy resolved studies which have the potential to test
predictions for different models. Various components of the prompt emission
can have different distinct PDs, for example, Lundman et al. (2018) predicted
significant polarisation in the energy regime of 10s of keV, but also
predicted that the PD can be lost at higher energies due to Comptonisation.
Energy-dependent polarisation studies can therefore be a powerful diagnostic
for different emission models. The code presented in this paper can be used
for energy- and angle-dependent studies of Compton polarisation from
relativistic jet sources.
## 3 The MAPPIES code
In this section, a newly developed Monte-Carlo code (MAPPIES) for
polarisation-dependent Compton scattering of radiation fields in relativistic
jets of e.g. GRBs and AGNs is presented. A flow diagram of the code is shown
in Figure 1: An external radiation field, originating in the laboratory frame
(which can be e.g. the rest frame of the AGN, or the rest frame of the GRB
progenitor), scatters off an arbitrary (thermal and non-thermal) electron
distribution, assumed to be isotropic in the co-moving frame of the emission
region that moves along the jet with a relativistic speed (i.e. a bulk Lorentz
factor $\Gamma_{jet}\gg 1$). The MAPPIES code is written in the object-
oriented programming language C++, and performs the radiation transfer
simulation by tracking every photon individually (i.e. the single photon
approach) through the computational domain. The Monte-Carlo single photon
approach is very flexible in terms of the Comptonising medium, as well as the
directional and energy distributions of the seed photons and electrons. The
input parameters (listed in Table 2) determine the characteristics of the
emission region, the seed photon distribution (discussed in section 3.1), and
the electron energy distribution (discussed in section 3.2). The polarisation
signatures are calculated using the Stokes formalism (Stokes, 1851), and the
polarisation-dependent Compton scattering of the seed photons is evaluated
using Monte-Carlo methods by Matt et al. (1996) (discussed in section 3.3). A
comparison to previously published results is given in Appendix A.
In what follows, quantities in the laboratory frame and the co-moving frame of
the emission region are denoted with subscripts lab and em, respectively.
Quantities in the electron rest frame and the observer’s frame are denoted
with subscripts e and obs, respectively, and the scattered photon quantities
are denoted with a superscript sc. While following the single photon approach,
an additional subscript i is used to label the quantities of the current,
individual photon. All random numbers are denoted with $\xi$ and are drawn
with the Mersenne Twister (Matsumoto & Nishimura, 1998) between $0$ and $1$,
unless specified otherwise.
Figure 1: A flow diagram for the Monte-Carlo simulation of the MAPPIES code.
The input parameters define the characteristics of the emission region (which
moves along the jet with a bulk Lorentz factor $\Gamma_{jet}$), the seed
photon distribution (drawn from an isotropic, single-temperature black body
distribution, or a multi-temperature accretion-disk spectrum), and the
electron energy distribution (drawn from a purely thermal distribution, a
hybrid distribution, or an input electron spectrum). The Monte-Carlo
simulation is shown for a single photon, and will continue for the number of
seed photons considered. The photon is transformed between the laboratory
frame (shown in the purple shaded area), the emission frame (shown in the blue
shaded area), and the electron rest frame (shown in the grey shaded area).
After evaluating the full Klein-Nishina cross section in the electron rest
frame, the code continues with only the scattered photons.
### 3.1 Seed photon fields
The seed photons are drawn in the laboratory frame from either an isotropic,
single-temperature black body distribution, or from a multi-temperature
accretion-disk spectrum. In the first case, the polar angle $\Theta_{i,lab}$
and azimuthal angle $\Phi_{i,lab}$ are drawn from an isotropic distribution, so
that
$\displaystyle\Theta_{i,lab}=\mathrm{cos}^{-1}(2\xi_{1}-1)\quad\mathrm{and}\quad\Phi_{i,lab}=2\pi\xi_{2}$ (1)
where $\xi_{1}$ and $\xi_{2}$ are random numbers between 0 and 1. The seed
photon energy $\epsilon_{i,lab}$ is drawn from a black body distribution that
corresponds to a single temperature $kT_{rad}$ (defined as an input
parameter), following the Monte-Carlo methods by Pozdnyakov et al. (1983). The
photon is then boosted to the co-moving frame of the emission region with the
bulk boost equations (Böttcher et al., 2012)
$\displaystyle\epsilon_{i,em}=\Gamma_{jet}\epsilon_{i,lab}\left(1-\beta_{jet}\cos\Theta_{i,lab}\right),\quad\cos\Theta_{i,em}=\frac{\cos\Theta_{i,lab}-\beta_{jet}}{1-\beta_{jet}\cos\Theta_{i,lab}}.$ (2)
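These sampling and boosting steps can be sketched as follows. This is a minimal, vectorised illustration in Python; the function names and NumPy interface are assumptions for this sketch, not the actual (C++) MAPPIES implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_isotropic_angles(n, rng):
    """Polar and azimuthal angles of isotropic seed photons (Eq. 1)."""
    theta = np.arccos(2.0 * rng.random(n) - 1.0)
    phi = 2.0 * np.pi * rng.random(n)
    return theta, phi

def boost_to_emission_frame(eps_lab, cos_theta_lab, gamma_jet):
    """Bulk boost of photon energy and polar angle into the co-moving
    frame of the emission region (Eq. 2)."""
    beta_jet = np.sqrt(1.0 - 1.0 / gamma_jet**2)
    eps_em = gamma_jet * eps_lab * (1.0 - beta_jet * cos_theta_lab)
    cos_theta_em = ((cos_theta_lab - beta_jet)
                    / (1.0 - beta_jet * cos_theta_lab))
    return eps_em, cos_theta_em

theta, phi = draw_isotropic_angles(10_000, rng)
eps_em, mu_em = boost_to_emission_frame(1.0, np.cos(theta), gamma_jet=10.0)
```

The $\cos^{-1}(2\xi-1)$ transform guarantees directions uniform on the sphere rather than uniform in $\Theta$.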
An illustration of how a single photon is drawn from an accretion-disk
spectrum is given in Figure 2. The calculation requires the following input
parameters: The mass of the black hole $M_{BH}$, the inner-disk radius
$R_{D}^{in}$, the outer-disk radius $R_{D}^{out}$, the accretion-disk
luminosity $L_{D}$, and the height of the emission region $h$ (i.e. the
distance between the central black hole and the emission region). The flux per
unit radius is given by
$\displaystyle dF=\left(2\pi
r\cos\Theta_{lab}\sigma_{SB}T_{rad}^{4}(r)\right)dr$ (3)
where
$\displaystyle kT_{rad}(r)=k\left[\frac{3GM_{BH}\dot{m}}{8\pi
r^{3}\sigma_{SB}}\left(1-\sqrt{\frac{R_{D}^{in}}{r}}\right)\right]^{\frac{1}{4}}$
(4)
is the radial temperature structure, and
$\displaystyle\cos\Theta_{lab}=\frac{h}{\sqrt{r^{2}+h^{2}}}$ (5)
is the cosine of the angle between the line of sight from the emission region
to $r$ and the normal to the disk. In Equation 4,
$\dot{m}=\dot{M}_{Edd}(L_{D}/L_{Edd})$ is the accretion rate of the disk, with
$\dot{M}_{Edd}=L_{Edd}/(0.1c^{2})$ the Eddington accretion rate corresponding
to the Eddington luminosity $L_{Edd}=1.26\times
10^{46}\,(M_{BH}/10^{8}M_{\odot})~{}\mathrm{erg\cdot s^{-1}}$,
$\sigma_{SB}\sim 5.7\times 10^{-5}\mathrm{~{}erg\cdot cm^{-2}\cdot s^{-1}\cdot
K^{-4}}$ is the Stefan-Boltzmann constant, $G\sim 6.7\times
10^{-8}\mathrm{~{}cm^{3}\cdot g^{-1}\cdot s^{-2}}$ is the gravitational
constant, $c\sim 3\times 10^{10}\mathrm{~{}cm\cdot s^{-1}}$ is the speed of
light, and $M_{\odot}\sim 2\times 10^{33}$ g is the solar mass.
Figure 2: An illustration of how a single photon (denoted with a subscript i)
is drawn from an accretion-disk spectrum. The angle between the line of sight
from the emission region $r_{i}$ and the normal to the disk is denoted with
$\Theta_{i,lab}$, and $h$ is the distance between the central black hole and
the emission region.
The cumulative probability to receive a photon from a radius $r<r_{i}$ is
obtained from Equation 3 as
$\displaystyle P(r<r_{i})=\frac{1}{N}\int^{r_{i}}_{R_{D}^{in}}\left[\frac{1}{r^{2}\sqrt{r^{2}+h^{2}}}-\frac{\sqrt{R_{D}^{in}}}{r^{\frac{5}{2}}\sqrt{r^{2}+h^{2}}}\right]dr=N^{-1}(I_{1}-\sqrt{R_{D}^{in}}I_{2})$ (6)
with the normalisation given by
$\displaystyle
N=\int^{R_{D}^{out}}_{R_{D}^{in}}\left[\frac{1}{r^{2}\sqrt{r^{2}+h^{2}}}-\frac{\sqrt{R_{D}^{in}}}{r^{\frac{5}{2}}\sqrt{r^{2}+h^{2}}}\right]dr.$
(7)
In Equation 6,
$\displaystyle I_{1}=\int^{r_{i}}_{R_{D}^{in}}\frac{dr}{r^{2}\sqrt{r^{2}+h^{2}}}=\left[-\frac{\sqrt{r^{2}+h^{2}}}{h^{2}r}\right]^{r_{i}}_{R_{D}^{in}}$ (8)
and
$\displaystyle
I_{2}=\int^{r_{i}}_{R_{D}^{in}}\frac{1}{r^{\frac{5}{2}}\sqrt{r^{2}+h^{2}}}dr$
(9)
which is solved with the assumption that $r\ll h$ in most relevant cases. The
typical outer-disk radius is $R_{D}^{out}\sim 10^{3}R_{G}\approx 1.5\times
10^{16}M_{8}$ cm, where $M_{8}=M_{BH}/(10^{8}M_{\odot})$ and $R_{G}$ is the
gravitational radius of the black hole. Therefore, $r<h$ anywhere in the disk
when $M_{BH}\lesssim 3\times 10^{8}M_{\odot}$ and/or $h\gtrsim 10^{17}$ cm. The
Taylor expansion,
$\displaystyle\frac{1}{\sqrt{r^{2}+h^{2}}}=\frac{1}{h\sqrt{1+(r/h)^{2}}}=\frac{1}{h}\sum^{\infty}_{n=0}\binom{-\frac{1}{2}}{n}\left(\frac{r}{h}\right)^{2n}$ (10)
can then be used, so that
$\displaystyle I_{2}=\sum_{n=0}^{\infty}\binom{-\frac{1}{2}}{n}h^{-(2n+1)}\int^{r_{i}}_{R_{D}^{in}}r^{2n-\frac{5}{2}}dr.$ (11)
For typical emission-region heights of $h\gtrsim 10^{17}$ cm, $r\ll h$
everywhere in the disk, which gives, to first order,
$\displaystyle I_{2}(r\ll h)=\left[-\frac{2}{3hr^{\frac{3}{2}}}-\frac{\sqrt{r}}{h^{3}}\right]^{r_{i}}_{R_{D}^{in}}.$ (12)
The radius $r_{i}$ is drawn by calculating $P(r<r_{i})$ for different values
of $r_{i}\in[R_{D}^{in},R_{D}^{out}]$, until a given random number
$\xi=P(r<r_{i})$. The temperature of the disk at $r_{i}$, from which the
photon energy is sampled, $kT_{i,rad}$, and the polar angle of the seed
photon, $\Theta_{i,lab}$, are subsequently calculated with Equations 4 and 5,
respectively. The energy of the photon $\epsilon_{i,lab}$ is drawn from a
black body distribution that corresponds to $kT_{i,rad}$ (following the Monte-
Carlo methods by Pozdnyakov et al. (1983)), and boosted to the co-moving frame
of the emission region with Equation 2.
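The radius draw can be sketched numerically in Python. Instead of the analytic integrals $I_1$ and $I_2$, this illustration tabulates the flux weight of Equations 3-5 on a logarithmic grid and inverts the cumulative distribution; the function name, the grid resolution, and passing the accretion rate $\dot{m}$ directly are assumptions of this sketch:

```python
import numpy as np

def draw_disk_radius(n, r_in, r_out, h, m_dot, m_bh, rng):
    """Sample emission radii r_i from the disk flux profile (Eqs. 3-5)
    by tabulating dF/dr on a log grid and inverting the CDF numerically."""
    G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
    sigma_sb = 5.67e-5  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
    r = np.logspace(np.log10(r_in), np.log10(r_out), 4096)
    # T^4(r) from Eq. 4 (Boltzmann's k cancels out of the weighting)
    t4 = (3.0 * G * m_bh * m_dot / (8.0 * np.pi * r**3 * sigma_sb)
          * (1.0 - np.sqrt(r_in / r)))
    cos_t = h / np.sqrt(r**2 + h**2)                 # Eq. 5
    dfdr = 2.0 * np.pi * r * cos_t * sigma_sb * t4   # Eq. 3
    # cumulative trapezoidal integral, normalised to a CDF
    cdf = np.concatenate(([0.0],
          np.cumsum(0.5 * (dfdr[1:] + dfdr[:-1]) * np.diff(r))))
    cdf /= cdf[-1]
    return np.interp(rng.random(n), cdf, r)

# illustrative numbers only: M_BH ~ 10^8 M_sun, m_dot ~ 10^25 g/s
rng = np.random.default_rng(1)
radii = draw_disk_radius(1000, r_in=1e14, r_out=1e16, h=1e17,
                         m_dot=1e25, m_bh=2e41, rng=rng)
```

The numerical CDF inversion avoids the $r\ll h$ Taylor expansion entirely, at the cost of a one-off tabulation.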
### 3.2 Electron energy distributions
The electrons are assumed to be isotropic in the co-moving frame of the
emission region. An electron is randomly assigned to every photon as it is
transported through the computational domain, drawn from (1) a purely thermal
(Maxwell) distribution, (2) a hybrid (Maxwell + power-law) distribution, or
(3) a user-defined input electron spectrum. In the first case, the thermal
temperature $kT_{e}$ is given as an input parameter, and a Lorentz factor
$\gamma_{i}$ of an electron is drawn from a Maxwellian distribution, following
the Monte-Carlo methods by Pozdnyakov et al. (1983). In the case of a hybrid
distribution (thermal electrons with a power-law tail of non-thermal
electrons), three additional parameters are required: The fraction of the
electrons that are assumed to be non-thermal
$f_{nth}$, the power-law index of the non-thermal electrons $p$ (which are
drawn from a power-law distribution $n_{pl}(\gamma)=N_{pl}\gamma^{-p}$, with
$N_{pl}$ the normalisation constant), and the Lorentz factor that corresponds
to the cut-off energy of the power-law tail $\gamma_{max}$. The Lorentz factor
which corresponds to where the power-law tail begins, $\gamma_{min}$, is
determined by iteration until the ratio
$\displaystyle\frac{n_{pl}}{n_{th}+n_{pl}}$ (13)
is equal to the input value of $f_{nth}$, where
$\displaystyle
n_{pl}=N_{pl}\int^{\gamma_{max}}_{\gamma_{min}}\tilde{\gamma}^{-p}d\tilde{\gamma}$
(14)
is the power-law tail and
$\displaystyle
n_{th}=N_{th}\int^{\gamma_{min}}_{1}\tilde{\gamma}^{2}\tilde{\beta}e^{-\frac{\tilde{\gamma}}{\Theta_{e}}}d\tilde{\gamma}$
(15)
is the number of thermal electrons, with
$\tilde{\beta}=\sqrt{1-\tilde{\gamma}^{-2}}$,
$\Theta_{e}=kT_{e}/(m_{e}c^{2})$, and $N_{th}=1$ an arbitrary normalisation
constant. At the point of intersection,
$\displaystyle N_{pl}\gamma_{min}^{-p}=N_{th}\gamma_{min}^{2}\beta_{min}e^{-\frac{\gamma_{min}}{\Theta_{e}}}.$ (16)
During the Monte-Carlo simulation, a random number $\xi_{1}\in[0,1]$ is
compared to $f_{nth}$ in order to draw $\gamma_{i}$ from either a power-law
distribution or a Maxwellian distribution. If $\xi_{1}\leq f_{nth}$,
$\gamma_{i}$ is drawn from a power-law distribution,
$\displaystyle\gamma_{i}=\left[\xi\left(\gamma_{max}^{1-p}-\gamma_{min}^{1-p}\right)+\gamma_{min}^{1-p}\right]^{\frac{1}{1-p}}$
(17)
where $\xi\in[0,1]$. If $\xi_{1}>f_{nth}$, $\gamma_{i}$ is drawn from a
Maxwellian distribution, in which case, a new random number $\xi_{1}\in[0,1]$
will be drawn if $\gamma_{i}>\gamma_{min}$, and the process will be repeated.
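The hybrid draw can be sketched in Python. The power-law branch uses the inverse CDF of Equation 17 directly; the thermal branch below uses numerical CDF inversion truncated at $\gamma_{min}$ (equivalent to the redraw rule) as a stand-in for the Pozdnyakov et al. (1983) sampler, so the function names and the truncation shortcut are assumptions of this sketch:

```python
import numpy as np

def draw_powerlaw_gamma(xi, g_min, g_max, p):
    """Inverse-CDF draw from n(gamma) proportional to gamma^-p (Eq. 17)."""
    return (xi * (g_max**(1.0 - p) - g_min**(1.0 - p))
            + g_min**(1.0 - p))**(1.0 / (1.0 - p))

def draw_thermal_gamma(theta_e, g_min, rng, n_grid=4096):
    """Draw gamma from n(gamma) proportional to gamma^2 beta exp(-gamma/Theta_e),
    truncated at g_min (replacing the redraw-if-above-gamma_min rule)."""
    g = np.linspace(1.0 + 1e-9, g_min, n_grid)
    pdf = g**2 * np.sqrt(1.0 - g**-2) * np.exp(-g / theta_e)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return np.interp(rng.random(), cdf, g)

def draw_hybrid_gamma(f_nth, theta_e, g_min, g_max, p, rng):
    """Power-law draw with probability f_nth, Maxwellian draw otherwise."""
    if rng.random() <= f_nth:
        return draw_powerlaw_gamma(rng.random(), g_min, g_max, p)
    return draw_thermal_gamma(theta_e, g_min, rng)

rng = np.random.default_rng(2)
gammas = np.array([draw_hybrid_gamma(0.02, 1.0, 2.0, 1.6e3, 2.0, rng)
                   for _ in range(1000)])
```

With $f_{nth}=0.02$, roughly 2% of the draws should land in the power-law tail above $\gamma_{min}$.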
In the case where the electron energy is drawn from a user-defined
distribution, an input electron spectrum $n(\gamma)$ alone is required. The
cumulative distribution function (CDF),
$\displaystyle
P(\gamma<\gamma_{i})=\frac{\int^{\gamma_{i}}_{1}n(\tilde{\gamma})d\tilde{\gamma}}{\int^{\gamma_{max}}_{1}n(\tilde{\gamma})d\tilde{\gamma}}$
(18)
is calculated from the input electron spectrum, and $\gamma_{i}$ is drawn such
that $\xi=P(\gamma<\gamma_{i})$.
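The inversion of Equation 18 for a tabulated spectrum can be sketched with a trapezoidal cumulative integral; the function name and grid choice are illustrative:

```python
import numpy as np

def draw_from_spectrum(gamma_grid, n_gamma, xi):
    """Draw Lorentz factors by inverting the CDF of a tabulated electron
    spectrum n(gamma) (Eq. 18), built with trapezoidal integration."""
    cdf = np.concatenate(([0.0],
          np.cumsum(0.5 * (n_gamma[1:] + n_gamma[:-1]) * np.diff(gamma_grid))))
    cdf /= cdf[-1]
    return np.interp(xi, cdf, gamma_grid)

# example: a tabulated gamma^-2 spectrum between gamma = 1 and 10^3
grid = np.logspace(0.0, 3.0, 2048)
draws = draw_from_spectrum(grid, grid**-2.0,
                           np.random.default_rng(3).random(5000))
```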
### 3.3 Polarisation-dependent Compton Scattering
The polarisation signatures of the photons are calculated using the Stokes
formalism (Stokes, 1851). The contributions of the photon to the second
($Q_{i}$) and third ($U_{i}$) Stokes parameters are calculated by following
the Monte-Carlo methods by Matt et al. (1996), and summed over all the photons
after the simulation is complete. Since the photons are individually
transported through the computational domain, every photon is $100\%$
polarised. However, the external radiation is assumed to be unpolarised. The
polarisation vector $\vec{P}_{i,em}$ (which points in the direction of the
electric field vector) of every seed photon is thus randomly drawn
perpendicular to the photon’s direction of propagation, which results in a
zero net polarisation (see e.g. Tamborra et al. (2018)).
The probability of the photon to undergo Compton scattering is determined by
the (polarisation averaged) Compton cross section,
$\displaystyle\frac{\sigma_{KN}}{\sigma_{T}}=\frac{3}{4}\left[\frac{1+x_{e}}{x_{e}^{3}}\left(\frac{2x_{e}(1+x_{e})}{1+2x_{e}}-\ln(1+2x_{e})\right)+\frac{1}{2x_{e}}\ln(1+2x_{e})-\frac{1+3x_{e}}{\left(1+2x_{e}\right)^{2}}\right]$ (19)
most conveniently evaluated in the electron rest frame, with $\sigma_{T}\sim
6.7\times 10^{-25}~{}\mathrm{cm^{2}}$ the Thomson cross section (Böttcher et
al., 2012), and $x_{e}=\epsilon_{e}/m_{e}c^{2}$ the dimensionless seed photon
energy. The seed photon is therefore transformed to the electron rest frame
with the Lorentz matrix (Bonometto et al., 1970; Tamborra et al., 2018), and a
random number $\xi\in[0,1]$ is drawn to determine whether the seed photon will
be scattered. If $\xi>(\sigma_{i,KN}/\sigma_{T})$ (where $\sigma_{i,KN}$ is
the full Klein-Nishina cross section for the current photon), the photon will
continue in the same direction without scattering, otherwise the Compton
scattering event will be simulated.
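The acceptance step can be sketched directly from Equation 19 (function names are illustrative; the ratio tends to 1 in the Thomson limit and falls off in the Klein-Nishina regime):

```python
import numpy as np

def kn_over_thomson(x_e):
    """Polarisation-averaged Klein-Nishina cross section in units of the
    Thomson cross section (Eq. 19); x_e is the photon energy in the
    electron rest frame in units of m_e c^2."""
    l = np.log(1.0 + 2.0 * x_e)
    t1 = (1.0 + x_e) / x_e**3 * (2.0 * x_e * (1.0 + x_e)
                                 / (1.0 + 2.0 * x_e) - l)
    t2 = l / (2.0 * x_e) - (1.0 + 3.0 * x_e) / (1.0 + 2.0 * x_e)**2
    return 0.75 * (t1 + t2)

def scatters(x_e, rng):
    """Accept a Compton scattering event with probability sigma_KN/sigma_T."""
    return rng.random() <= kn_over_thomson(x_e)
```

As a sanity check, $\sigma_{KN}/\sigma_T\approx 0.43$ at $x_e=1$ and approaches unity for $x_e\ll 1$.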
The geometry of the Compton effect for a single photon in the electron rest
frame is illustrated in Figure 3. The energy of the scattered photon is given
by
$\displaystyle\epsilon_{i,e}^{sc}=\frac{\epsilon_{i,e}}{1+x_{i,e}\left(1-\cos\Theta_{i,e}^{sc}\right)}.$ (20)
The probability of a photon to have a scattering angle of $\Theta_{i,e}^{sc}$
(the angle between the seed and scattered photon) is given by
$\displaystyle P(\Theta_{i,e}^{sc}<\Theta_{e}^{sc})=\frac{x_{i,e}\left[\frac{3}{2}+\mu_{e}^{sc}\left(1-\frac{1}{2}\mu_{e}^{sc}\right)\right]+\frac{1}{3}\left[1+(\mu_{e}^{sc})^{3}\right]-\frac{1}{x_{i,e}}\left\{\ln\left[1+x_{i,e}(1-\mu_{e}^{sc})\right]-\ln(1+2x_{i,e})\right\}}{\frac{2}{3}+2x_{i,e}+\frac{1}{x_{i,e}}\ln(1+2x_{i,e})}$ (21)
where $\mu_{e}^{sc}=\cos\Theta_{e}^{sc}$ (Matt et al., 1996). The scattering
angle $\Theta_{i,e}^{sc}$ is thus calculated by drawing a random number $\xi$
and finding the value of $\Theta_{e}^{sc}$ for which
$P(\Theta_{i,e}^{sc}<\Theta_{e}^{sc})=\xi$.
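Because the CDF is monotonic in $\mu_e^{sc}$ (and evaluates to 0 and 1 at the two endpoints), a simple bisection suffices for the root find. This Python sketch transcribes the CDF quoted above; the function names and tolerance are assumptions:

```python
import numpy as np

def scatter_mu_cdf(mu, x_e):
    """Cumulative probability for mu = cos(scattering angle), transcribed
    from the Matt et al. (1996) CDF quoted in the text (Eq. 21)."""
    num = (x_e * (1.5 + mu * (1.0 - 0.5 * mu)) + (1.0 + mu**3) / 3.0
           - (np.log(1.0 + x_e * (1.0 - mu))
              - np.log(1.0 + 2.0 * x_e)) / x_e)
    den = 2.0 / 3.0 + 2.0 * x_e + np.log(1.0 + 2.0 * x_e) / x_e
    return num / den

def draw_scatter_mu(x_e, xi, tol=1e-10):
    """Solve scatter_mu_cdf(mu, x_e) = xi by bisection on [-1, 1]."""
    lo, hi = -1.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if scatter_mu_cdf(mid, x_e) < xi:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```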
The azimuthal distribution of the photons is dependent on the seed photon
polarisation. When the seed photons are unpolarised, the angle between
$\vec{P}_{i,e}^{sc}$ and the plane of scattering, $\Phi_{i,e}^{sc}$, can be
assumed to be isotropically distributed in the electron rest frame (Matt et
al., 1996). However, while following a single photon approach, every
individual photon is fully polarised. The azimuthal angle $\Phi_{i,e}^{sc}$ is
thus drawn by calculating the probability of a fully polarised photon to have
an angle $\Phi_{i,e}^{sc}$ (Matt et al., 1996),
$\displaystyle P(\Phi_{i,e}^{sc}<\Phi_{e}^{sc})=\frac{1}{2\pi}\left[\Phi_{e}^{sc}-\frac{\sin^{2}\Theta_{e}^{sc}\sin\Phi_{e}^{sc}\cos\Phi_{e}^{sc}}{\frac{x_{i,e}}{x_{i,e}^{sc}}+\frac{x_{i,e}^{sc}}{x_{i,e}}-\sin^{2}\Theta_{e}^{sc}}\right]$ (22)
and finding the value of $\Phi^{sc}_{e}$ for which
$P(\Phi^{sc}_{i,e}<\Phi^{sc}_{e})=\xi$, where $\xi$ is a newly drawn random
number.
Figure 3: Illustration of the geometry of the Compton effect for a single
photon (denoted with a subscript i) in the electron rest frame. The seed
photon, moving in a direction $\vec{D}_{i,e}$, with an energy
$\epsilon_{i,e}$, and a polarisation vector $\vec{P}_{i,e}$ (which points in
the direction of the electric field vector) is shown in purple. The scattered
photon is shown in blue with a direction $\vec{D}^{sc}_{i,e}$, an energy
$\epsilon^{sc}_{i,e}$, and a polarisation vector $\vec{P}^{sc}_{i,e}$. The
polar scattering angle $\Theta^{sc}_{i,e}$ is the angle between the seed and
scattered photon, while the azimuth scattering angle $\Phi^{sc}_{i,e}$ is the
angle between $\vec{P}_{i,e}$ and the plane of scattering.
The PD due to scattering is then calculated for a fully polarised photon as
$\displaystyle\mathrm{PD}_{i,e}^{sc}=2\left[\frac{1-\sin^{2}\Theta^{sc}_{i,e}\cos^{2}\Phi^{sc}_{i,e}}{\frac{x_{i,e}}{x_{i,e}^{sc}}+\frac{x_{i,e}^{sc}}{x_{i,e}}-2\sin^{2}\Theta^{sc}_{i,e}\cos^{2}\Phi^{sc}_{i,e}}\right]$
(23)
in the electron rest frame (Matt et al., 1996), which determines whether the
photon will be polarised after scattering. A random number $\xi\in[0,1]$ is
drawn and compared to $\mathrm{PD}_{i,e}^{sc}$ in order to calculate the
polarisation vector $\vec{P}_{i,e}^{sc}$ of the scattered photon. If
$\xi<\mathrm{PD}_{i,e}^{sc}$, the photon will contribute to the partially
polarised Compton emission with a polarisation vector of
$\displaystyle\vec{P}^{sc}_{i,e}=\frac{(\vec{P}_{i,e}\times\vec{D}^{sc}_{i,e})\times\vec{D}^{sc}_{i,e}}{|(\vec{P}_{i,e}\times\vec{D}^{sc}_{i,e})\times\vec{D}^{sc}_{i,e}|}$ (24)
where $\vec{D}^{sc}_{i,e}$ is the scattered photon’s direction of propagation
(Angel, 1969). Otherwise, $\vec{P}_{i,e}^{sc}$ is randomly drawn perpendicular
to $\vec{D}_{i,e}^{sc}$, and will contribute to the unpolarised fraction of
the Compton emission (Matt et al., 1996; Tamborra et al., 2018).
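The per-photon polarisation update can be sketched as follows. The double cross product extracts the component of the incoming polarisation vector perpendicular to the new propagation direction (up to an overall sign, which is irrelevant for a polarisation direction); function names and the vector convention are assumptions of this sketch:

```python
import numpy as np

def scattered_pd(x_in, x_sc, theta_sc, phi_sc):
    """PD of a fully polarised photon after scattering (Eq. 23),
    evaluated in the electron rest frame."""
    s = np.sin(theta_sc)**2 * np.cos(phi_sc)**2
    return 2.0 * (1.0 - s) / (x_in / x_sc + x_sc / x_in - 2.0 * s)

def scattered_pol_vector(p_in, d_sc):
    """Polarisation vector of the scattered photon: the component of
    p_in perpendicular to the new direction d_sc, normalised (Eq. 24)."""
    v = np.cross(np.cross(p_in, d_sc), d_sc)
    return v / np.linalg.norm(v)
```

In the Thomson limit ($x^{sc}\to x$) Equation 23 gives PD $=1$: elastic dipole scattering keeps a fully polarised photon fully polarised.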
The scattered photon is transformed back into the co-moving frame of the
emission region with the Lorentz matrix, where the photon’s contribution to
the second ($Q^{sc}_{i}$) and third ($U^{sc}_{i}$) Stokes parameters is
calculated. The photon is boosted into the laboratory frame with the bulk
boost equations (Böttcher et al., 2012),
$\displaystyle\epsilon_{i,lab}^{sc}=\Gamma_{jet}\epsilon_{i,em}^{sc}\left(1+\beta_{jet}\cos\Theta_{i,em}^{sc}\right),\quad\cos\Theta_{i,lab}^{sc}=\frac{\cos\Theta_{i,em}^{sc}+\beta_{jet}}{1+\beta_{jet}\cos\Theta_{i,em}^{sc}}$ (25)
and shifted to the observer’s frame where
$\epsilon_{i,obs}^{sc}=\epsilon_{i,lab}^{sc}/(1+z)$, with $z$ the redshift of
the source.
The Compton polarisation signatures are calculated after the simulation is
complete by summing the contributions of the scattered photons to the Stokes
parameters, so that
$\displaystyle\mathrm{Q}=\sum_{i=0}^{N^{sc}_{phot}}\mathrm{Q^{sc}_{i}}\quad\mathrm{and}\quad\mathrm{U}=\sum_{i=0}^{N^{sc}_{phot}}\mathrm{U^{sc}_{i}}$ (26)
where $N^{sc}_{phot}$ is the number of the scattered photons in the specified
direction. The polarisation signatures of the Compton emission are determined
as
$\displaystyle\mathrm{PD}=\frac{\sqrt{\mathrm{Q}^{2}+\mathrm{U}^{2}}}{N^{sc}_{phot}}\quad\mathrm{and}\quad\mathrm{PA}=\frac{1}{2}\tan^{-1}\frac{\mathrm{U}}{\mathrm{Q}}$ (27)
and binned in viewing angles, $\Theta_{i,lab}^{sc}$, and energy,
$\epsilon_{i,obs}^{sc}$. This allows us to identify the viewing angle and
photon energy range at which the maximum PD occurs, thus offering the best
opportunities to measure Compton polarisation.
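The final reduction of Equations 26 and 27 is a straightforward sum; this sketch uses `arctan2` so the PA lands in the correct quadrant (a common implementation choice, not necessarily the one made in MAPPIES):

```python
import numpy as np

def compton_polarisation(q_i, u_i):
    """Sum the per-photon Stokes contributions (Eq. 26) and return the
    polarisation degree and angle (Eq. 27) for one viewing-angle/energy bin."""
    q = np.sum(q_i)
    u = np.sum(u_i)
    n = len(q_i)
    pd = np.sqrt(q**2 + u**2) / n
    pa = 0.5 * np.arctan2(u, q)
    return pd, pa
```

Contributions with opposing signs cancel in the sums, which is how a bin of individually fully polarised photons can yield a low net PD.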
## 4 Compton polarisation in the Thomson and Klein-Nishina regimes
The MAPPIES code presented in this paper can be used to simulate the
polarisation due to Compton scattering of different seed photon fields and
electrons with different energy distributions. In this section, generic
results for the Compton polarisation in the Thomson and Klein-Nishina regimes
are presented. Only results for the isotropic black body target photons are
shown in this paper, while results for an accretion-disk spectrum will be
presented in a companion paper (Dreyer and Böttcher 2020, in preparation) for
applications to specific AGN. The results are shown for the combination of
free parameters listed in Table 3. The seed photons are drawn in the
laboratory frame from an isotropic, single-temperature black body distribution
(shown in the top panel of Figure 4) with $kT_{rad}=0.5$ keV, $kT_{rad}=50$
keV, and $kT_{rad}=500$ keV. The electrons (shown in the bottom panel of
Figure 4) are assumed to be isotropic in the co-moving frame of the emission
region (that moves along the jet with a bulk Lorentz factor of
$\Gamma_{jet}=10$) with thermal temperatures of $kT_{e}=50$ keV, $kT_{e}=500$
keV, and $kT_{e}=5000$ keV. The electron energy is drawn from either a purely
thermal distribution (shown with solid lines) or a hybrid (Maxwell + power-
law) distribution (shown with dashed lines). In all the figures discussed,
results for soft $X$-rays ($kT_{rad}=0.5$ keV) are shown in purple, results
for hard $X$-rays ($kT_{rad}=50$ keV) are shown in blue, and results for
$\gamma$-rays ($kT_{rad}=500$ keV) are shown in grey.
Table 3: The input parameters for the generic results of Compton polarisation in the Thomson and Klein-Nishina regimes.

Input Parameter | Value
---|---
Lorentz factor of the jet, $\Gamma_{jet}$ | $10$
Number of seed photons considered | $10^{8}$
Temperature of the seed photons, $kT_{rad}$ | $0.5$ keV, $50$ keV, $500$ keV
Thermal temperature of electrons, $kT_{e}$ | $50$ keV, $500$ keV, $5000$ keV
Fraction of non-thermal electrons, $f_{nth}$ | $0.02$
Power-law index of the power-law distribution of non-thermal electrons, $p$ | $2.0$
Maximum Lorentz factor of the power-law distribution of non-thermal electrons, $\gamma_{max}$ | $1.6\times 10^{3}$
Figure 4: The seed photon spectra (top panel) and electron energy
distributions (bottom panel). The photons (top panel) are drawn in the
laboratory frame from an isotropic, single-temperature black body
distributions with $kT_{rad}=0.5$ keV (shown in purple), $kT_{rad}=50$ keV
(shown in blue), and $kT_{rad}=500$ keV (shown in grey). The electrons (bottom
panel) are isotropic in the co-moving frame of the emission region with
thermal temperatures of $kT_{e}=50$ keV, $kT_{e}=500$ keV, and $kT_{e}=5000$
keV that increase with the shade of grey in the bottom panel. The electron
energy is drawn from either a purely thermal distribution (shown with solid
lines) or a hybrid (Maxwell + power-law) distribution (shown with dashed
lines) for the combination of free parameters listed in Table 3.
### 4.1 The Compton spectra
The external Compton spectra resulting from the different combinations of
target radiation fields and electron distributions mentioned above are shown
in Figure 5. Compton scattering in the Thomson regime ($x_{e}\ll 1$) is almost
elastic in the electron rest frame, while the energy exchange between the seed
photon and electron becomes substantial, along with a reduction of the cross
section, in the Klein-Nishina regime ($x_{e}\gg 1$). Due to relativistic
boosting, the photons that are scattered in the Thomson regime have energies
$\epsilon_{lab}^{sc}\sim\gamma^{2}\Gamma_{jet}^{2}\epsilon_{lab}$ (where
$\gamma$ is the averaged Lorentz factor of the electrons) in the laboratory
frame. Compton scattering off a power-law distribution of non-thermal
electrons results in a power-law distribution of scattered photons (shown with
dashed lines). The photons that are scattered in the Klein-Nishina regime have
cut-off energies in the laboratory frame that correspond to the reduction of
the cross section in the electron rest frame. For electrons with thermal
energies of $kT_{e}=50$ keV, all the electrons are non-relativistic. Soft
$X$-rays and hard $X$-rays are thus scattered in the Thomson regime with
energies a factor of $\sim\gamma^{2}\Gamma_{jet}^{2}$ higher than the seed photon
energies. For mildly-relativistic electrons, the peak of the electron
distribution is at $\gamma\sim 2$. Hard $X$-rays and $\gamma$-rays are thus
scattered in the Klein-Nishina regime with very similar high-energy spectra
(shown in blue and grey in Figure 5).
Figure 5: The external Compton spectrum for soft $X$-rays ($kT_{rad}=0.5$ keV;
shown in purple), hard $X$-rays ($kT_{rad}=50$ keV; shown in blue), and
$\gamma$-rays ($kT_{rad}=500$ keV; shown in grey). The results are shown for
scattering off non-relativistic ($kT_{e}=50$ keV), mildly-relativistic
($kT_{e}=500$ keV), and relativistic ($kT_{e}=5000$ keV) electrons with
thermal temperatures that increase with the shade of color in each panel. The
electrons are drawn from either a purely thermal distribution (shown with
solid lines) or a hybrid (Maxwell + power-law) distribution (shown with dashed
lines) for the combination of free parameters listed in Table 3.
### 4.2 The polarisation degree in the Thomson and Klein-Nishina regimes
The Compton cross section is generally dependent on polarisation. The most
familiar form of the polarisation-dependent differential cross section is
given by
$\displaystyle\frac{d\sigma_{KN}}{d\Omega_{e}^{sc}}=\frac{1}{4}r_{e}^{2}\left(\frac{x_{e}^{sc}}{x_{e}}\right)^{2}\left[\frac{x_{e}}{x_{e}^{sc}}+\frac{x_{e}^{sc}}{x_{e}}-2+4\cos^{2}\theta\right]$ (28)
in the electron rest frame, where $\theta$ is the angle between the
polarisation vector of the seed photon $\vec{e}_{e}$ and the polarisation
vector of the scattered photon $\vec{e}_{e}^{sc}$ (Matt et al., 1996). From
Equation 28,
$\displaystyle\frac{d\sigma_{KN}}{d\Omega_{e}^{sc}}\propto\frac{x_{e}}{x_{e}^{sc}}+\frac{x_{e}^{sc}}{x_{e}}-2+4(\vec{e}_{e}\cdot\vec{e}_{e}^{sc})^{2}.$ (29)
Since the polarisation term in Equation 29 dominates for $x_{e}\ll 1$, photons
that are scattered in the Thomson regime are expected to be polarised. The
polarisation term becomes negligible for $x_{e}\gg 1$, and polarisation is
thus not expected to be induced for Compton scattering in the Klein-Nishina
regime.
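This suppression can be checked numerically from Equation 28, relating seed and scattered energies through the Compton kinematics $x_{e}^{sc}=x_{e}/[1+x_{e}(1-\cos\Theta_{e}^{sc})]$ in the electron rest frame. The `modulation` helper below is our construction (not part of MAPPIES) and gives the fraction of the cross section carried by the polarisation term:

```python
def kn_bracket(x_e, cos2_theta, cos_scat=0.0):
    """Bracket of the polarisation-dependent cross section (Eq. 28):
    x/x_sc + x_sc/x - 2 + 4 cos^2(theta), with the scattered energy from
    Compton kinematics, x_sc = x / (1 + x (1 - cos Theta_sc))."""
    x_sc = x_e / (1.0 + x_e * (1.0 - cos_scat))
    return x_e / x_sc + x_sc / x_e - 2.0 + 4.0 * cos2_theta

def modulation(x_e, cos_scat=0.0):
    """Fraction of the cross section carried by the 4 cos^2(theta) term:
    -> 1 in the Thomson regime (x_e << 1), -> 0 deep in the KN regime."""
    unpol = kn_bracket(x_e, 0.0, cos_scat)  # polarisation-independent part
    return 4.0 / (4.0 + unpol)

print(modulation(1e-3))   # close to 1: Thomson-regime photons are polarised
print(modulation(100.0))  # ~0.04: polarisation suppressed in the KN regime
```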
The polarisation signatures are shown as a function of the scattered photon
energy $\epsilon_{lab}^{sc}$ in Figure 6, for viewing angles of
$\Theta_{lab}^{sc}\sim\Gamma_{jet}^{-1}$ rad. The PD decreases with increasing
photon energies and further decreases for higher electron temperatures,
because the polarisation arises only for photons that are scattered in the
Thomson regime. The maximum PD for Compton emission in the Thomson regime
occurs where the thermal, non-relativistic electrons scatter the seed photons
(i.e. to an energy of $\Gamma_{jet}^{2}kT_{rad}$). For electron temperatures
of $kT_{e}=5\times 10^{3}$ keV, all the electrons are highly relativistic with
$\gamma\gtrsim 10$, and no Compton polarisation is induced, irrespective of
whether the photons are scattered in the Thomson or Klein-Nishina regime.
There are more photons produced at higher energies for scattering off a power-
law distribution of non-thermal electrons (shown with dashed lines) than in
the case of scattering off purely thermal electrons (shown with solid lines),
but those photons are unpolarised since the non-thermal electrons are
relativistic.
Figure 6: The polarisation signatures as a function of the scattered photon
energy for soft $X$-rays ($kT_{rad}=0.5$ keV; shown in purple), hard $X$-rays
($kT_{rad}=50$ keV; shown in blue), and $\gamma$-rays ($kT_{rad}=500$ keV;
shown in grey). The results are shown for scattering off non-relativistic
($kT_{e}=50$ keV), mildly-relativistic ($kT_{e}=500$ keV), and relativistic
($kT_{e}=5000$ keV) electrons with thermal temperatures that increase with
the shade of color in each panel. The electrons are drawn from either a purely
thermal distribution (shown with solid lines) or a hybrid (Maxwell + power-
law) distribution (shown with dashed lines) for the combination of free
parameters listed in Table 3.
### 4.3 The polarisation signatures in the Thomson regime
The polarisation signatures due to scattering off non-relativistic and mildly-
relativistic electrons are given as a function of the scattered photon viewing
angle $\Theta_{lab}^{sc}$ in Figure 7 (averaged over all photon energies).
Since the orientation of the polarisation does not change significantly for
different electron energy distributions, the polarisation signatures are
similar for scattering off thermal electrons (shown with solid lines) and a
power-law distribution of non-thermal electrons (shown with dashed lines). The
PA is shown as a function of $\epsilon_{lab}^{sc}$ and $\Theta_{lab}^{sc}$ in
the bottom panels of Figures 6 and 7, respectively. In both cases, the PA of
the Compton emission that is polarised assumes a constant value of PA $=\pi/2$
rad, which corresponds to polarisation perpendicular to the jet.
Figure 7: The polarisation signatures as a function of the scattered photon
viewing angle for soft $X$-rays ($kT_{rad}=0.5$ keV; shown in purple) and hard
$X$-rays ($kT_{rad}=50$ keV; shown in blue). The results are shown for
scattering off non-relativistic ($kT_{e}=50$ keV) and mildly-relativistic
($kT_{e}=500$ keV) electrons with thermal temperatures that increase with the
shade of purple and blue. The electrons are drawn from either a purely thermal
distribution (shown with solid lines) or a hybrid (Maxwell + power-law)
distribution (shown with dashed lines) for the combination of free parameters
listed in Table 3. The grey line indicates the viewing angle of
$\Theta_{lab}^{sc}=\Gamma_{jet}^{-1}$ rad, where $\Gamma_{jet}=10$.
The PD as a function of $\Theta_{lab}^{sc}$ (Figure 7) shows at which angles
the maximum polarisation occurs. For scattering to happen in the Thomson
regime, the electrons have to move in the same direction as the seed photons
(backwards in the jet). The scattered photons move perpendicular to their
incoming direction in the electron rest frame, which appears at an angle of
$\sim\gamma^{-1}$ rad with respect to the backward direction. For photons that
are scattered in the Thomson regime, the maximum PD occurs at a right angle of
$\Theta_{e}^{sc}\sim\pi/2$ rad in the electron rest frame, which is
essentially the same in the emission-region rest frame for scattering off non-
relativistic ($kT_{e}=50$ keV; when there are hardly any relativistic motions)
electrons. Due to relativistic boosting, photons at
$\Theta_{em}^{sc}\sim\pi/2$ rad in the emission-region rest frame will be
observed at $\Theta_{lab}^{sc}\sim\Gamma_{jet}^{-1}$ rad in the laboratory
frame. The maximum PDs for scattering off non-relativistic electrons thus
occur at angles of $\Theta_{lab}^{sc}\sim\Gamma_{jet}^{-1}$ rad, indicated with
a grey dashed line in Figure 7.
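The mapping of the emission-frame right angle to $\Theta_{lab}^{sc}\sim\Gamma_{jet}^{-1}$ follows from the standard aberration formula $\cos\Theta_{lab}=(\cos\Theta_{em}+\beta)/(1+\beta\cos\Theta_{em})$; a quick check (the function name is ours):

```python
import math

def aberrate(theta_em, gamma_jet):
    """Aberration of an emission-frame angle into the laboratory frame:
    cos Theta_lab = (cos Theta_em + beta) / (1 + beta cos Theta_em)."""
    beta = math.sqrt(1.0 - 1.0 / gamma_jet**2)
    cos_lab = (math.cos(theta_em) + beta) / (1.0 + beta * math.cos(theta_em))
    return math.acos(cos_lab)

gamma_jet = 10.0
theta_lab = aberrate(math.pi / 2, gamma_jet)
# A right angle in the emission frame appears at ~1/Gamma_jet = 0.1 rad:
print(theta_lab)
```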
The peak of the electron distribution for mildly-relativistic ($kT_{e}=500$
keV) electrons is around $\gamma\sim 2$. Soft $X$-rays with $kT_{rad}=0.5$ keV
are boosted to $\sim(0.5~{}\mathrm{keV})\Gamma_{jet}=5.0$ keV into the
emission frame, with the black body spectrum peaking at
$x_{em}=\Gamma_{jet}\frac{2.8kT_{rad}}{m_{e}c^{2}}\sim 0.03$. Similarly, hard
$X$-rays with $kT_{rad}=50$ keV are boosted to $\sim 500$ keV into the
emission frame, with the black body spectrum peaking at $x_{em}\sim 2.7$.
Therefore, photons that are scattered by mildly-relativistic electrons have
angles of $\Theta_{em}^{sc}=(\pi-\gamma^{-1})$ rad in the emission-region rest
frame. Relativistic aberration into the laboratory frame causes the maximum PD
to occur at angles that are larger than those in the case of scattering off
non-relativistic electrons.
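The boosted black-body peaks quoted above ($x_{em}\sim 0.03$ and $x_{em}\sim 2.7$) follow directly from $x_{em}=\Gamma_{jet}\,2.8kT_{rad}/(m_{e}c^{2})$; a minimal numerical check (helper name is ours):

```python
M_E_C2_KEV = 511.0  # m_e c^2 in keV

def x_em_peak(kT_rad_kev, gamma_jet):
    """Dimensionless emission-frame energy of the boosted black-body peak:
    x_em = Gamma_jet * 2.8 kT_rad / (m_e c^2)."""
    return gamma_jet * 2.8 * kT_rad_kev / M_E_C2_KEV

print(x_em_peak(0.5, 10.0))   # soft X-rays: ~0.027, deep in the Thomson regime
print(x_em_peak(50.0, 10.0))  # hard X-rays: ~2.74, mildly Klein-Nishina
```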
## 5 Summary and conclusion
The MAPPIES code presented in this paper is capable of predicting the Compton
polarisation in different jet-like astrophysical sources for different photon
energies and electron temperatures (see Appendix A for a comparison to
previously published results). The effects of Compton scattering depend on the
temperature of the seed photons, as well as the Lorentz factors and energy
distribution of the electrons. The photons scatter to higher energies (with a
factor of $\Gamma_{jet}^{2}\gamma^{2}$) in the Thomson regime, and have cut-
off energies that correspond to the reduction of the cross section in the
Klein-Nishina regime. The PD of the scattered photons depends on the effects
of Compton scattering due to the polarisation dependence of the Klein-Nishina
cross section, given by Equation 28. The PD decreases with increasing photon
energy and with higher electron temperatures. The energy regimes with non-
negligible PDs shift to higher energies and become narrower for higher seed
photon temperatures, narrowing further for higher electron temperatures.
Polarisation is therefore expected to be induced for photons that are
scattered in the Thomson regime, and no polarisation is expected to be induced
for photons that are scattered in the Klein-Nishina regime. For electron
temperatures of $kT_{e}=5000$ keV, essentially all the electrons are highly
relativistic and no Compton polarisation is induced irrespective of whether
the photons are scattered in the Thomson or Klein-Nishina regime.
The maximum PD for scattering in the Thomson regime occurs at viewing angles
of $\Theta_{lab}^{sc}\sim\Gamma_{jet}^{-1}$ rad (shown in Figure 7). The
maximum PD occurs at larger angles for scattering off mildly-relativistic
electrons, which suggests that Compton polarisation is sensitive to
relativistic aberration for mildly-relativistic electrons with Lorentz factors
of $\gamma\gtrsim 2$. The PA for the fraction of the scattered photons that
are polarised assumes a constant value of PA $\sim\pi/2$ rad, which
corresponds to polarisation perpendicular with respect to the jet, regardless
of the photon energy and electron temperature.
In view of the future proposed high-energy polarimetry missions listed in
Table 1, the MAPPIES code can be used to study the expected polarisation
characteristics for various Compton-scattering based emission models for
relativistic jet sources. The code is capable of showing how the polarisation
changes as a function of the viewing angle and energy of the Compton emission,
which can serve as a powerful diagnostic for the radiation mechanism
responsible for e.g. the high-energy emission from blazar jets and GRB prompt
emission. The first application of the code will be presented in a follow up
paper (Dreyer and Böttcher 2020, in preparation) to simulate the polarisation
signatures from a model where the BBB in blazar spectra arises from a bulk
Compton feature, as proposed by Baring et al. (2017). The thermal
Comptonisation process should lead to significant polarisation of the emission
from the UV/$X$-ray excess in the SED. This will reinforce future prospects of
using measurements of polarisation signatures to distinguish between different
radiation mechanism models for the sources of interest.
We thank the anonymous referee for giving us an expeditious, helpful, and
constructive report. The work of M.B. is supported through the South African
Research Chair Initiative of the National Research Foundation111Any opinion,
finding and conclusion or recommendation expressed in this material is that of
the authors and the NRF does not accept any liability in this regard. and the
Department of Science and Innovation of South Africa, under SARChI Chair grant
No. 64789.
## Appendix A Comparison to previously published results
The Monte-Carlo code developed by Krawczynski (2012) can be used to
numerically compute the polarisation due to Compton scattering in the Thomson
and Klein-Nishina regimes. The numerical results were compared to analytical
results of Bonometto et al. (1970) that were based on quantum mechanical
scattering calculations in the Thomson regime. The numerical formulation was
subsequently used to study the polarisation of Compton radiation emitted in
the Klein-Nishina regime. In this Appendix, the results for Compton
polarisation in the Thomson and Klein-Nishina regimes from the simulations of
the MAPPIES code are compared to the numerical results of Krawczynski (2012).
The simulations of Krawczynski (2012) were in good agreement with the
analytical calculations of Bonometto et al. (1970) for Compton polarisation in
the Thomson regime. An important implication of the calculations of Bonometto
et al. (1970) is that the PD vanishes for unpolarised photons scattered by
electrons with Lorentz factors $\gamma\gtrsim 10$. Krawczynski (2012) tested
this prediction by simulating the Compton scattering of an isotropic
distribution of $\sim 2.0$ keV mono-energetic, unpolarised seed photons. They
found a net PD $\sim 0.26\%$ due to scattering off mono-energetic electrons
with Lorentz factors of $\gamma=10^{3}$, assumed to be isotropic in the co-
moving frame of the emission region (which moves along the jet with a bulk
Lorentz factor of $\Gamma_{jet}=5$). The MAPPIES code is used to simulate
Comptonisation with the same initial conditions as those of Krawczynski
(2012). The Stokes vectors correspond to a net $\mathrm{PD}\sim 0.09\%$, and
exhibit mean values of 0 when divided into 100 subsets of the simulated
events, which is consistent with unpolarised radiation. The results are
therefore in good agreement with the results of Krawczynski (2012).
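The subset consistency test used here can be illustrated with a toy sample of unpolarised events; the synthetic Q, U draw below is our stand-in for the simulated photon list, not output of the MAPPIES code:

```python
import math
import random

def net_pd(q_vals, u_vals):
    """Net PD from event-averaged Stokes parameters (I = 1 per event):
    PD = sqrt(<Q>^2 + <U>^2)."""
    qm = sum(q_vals) / len(q_vals)
    um = sum(u_vals) / len(u_vals)
    return math.hypot(qm, um)

def subset_means(vals, n_subsets=100):
    """Per-subset means of a Stokes parameter; a distribution consistent
    with 0 indicates the small net PD is a statistical fluctuation."""
    n = len(vals) // n_subsets
    return [sum(vals[i * n:(i + 1) * n]) / n for i in range(n_subsets)]

random.seed(1)
# Unpolarised radiation: polarisation angles a are uniform over [0, pi),
# so the per-event Stokes parameters are Q = cos(2a), U = sin(2a).
angles = [random.uniform(0.0, math.pi) for _ in range(100_000)]
q = [math.cos(2 * a) for a in angles]
u = [math.sin(2 * a) for a in angles]
print(net_pd(q, u))  # small residual ~ 1/sqrt(N), not exactly 0
print(sum(subset_means(q)) / 100, sum(subset_means(u)) / 100)
```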
The polarisation for Compton emission in the Klein-Nishina regime was
evaluated by Krawczynski (2012) with simulations of Compton scattering off an
isotropic distribution of mono-energetic electrons with Lorentz factors
between 10 and 62500. They considered $\sim 1.3$ keV mono-energetic photons,
assumed to be fully polarised (with initial Stokes vectors of (I, Q, U) = (1,
1, 0)), and uni-directional in the laboratory frame. A similar setup is used
to test the MAPPIES code for Compton polarisation in the Klein-Nishina regime.
The seed photons are drawn in the laboratory frame, assumed to be fully
polarised, mono-energetic with $kT_{rad}=3.1$ keV, and unidirectional with
$(\Theta_{lab},\Phi_{lab})=(1.4,0)$ rad.
In all figures discussed, the results from the MAPPIES code are shown in the
left panels, while the numerical results of Krawczynski (2012) are shown in
the right panels. The intensity and PD are given as a function of the
scattered photon energy in Figures 8 and 9, respectively. The scattered photon
energy is shown in units of the maximum energy allowed kinematically
$y=\frac{x_{lab}^{sc}}{x_{lab}^{max}}$ where
$\displaystyle x_{lab}^{max}=\frac{4\gamma^{2}x_{lab}}{1+4\gamma x_{lab}}$ (A1)
with $x_{lab}=\epsilon_{lab}/(m_{e}c^{2})$ and
$x_{lab}^{sc}=\epsilon_{lab}^{sc}/(m_{e}c^{2})$ the dimensionless energy of
the seed and scattered photons, respectively. The intensity of the scattered
photons shifts to higher energies for larger Lorentz factors, and peaks towards
$x_{lab}^{max}$ deeper in the Klein-Nishina regime. The PD decreases for
larger Lorentz factors, and is strongly suppressed for $\gamma\gg 10$. In
Figure 10, the net polarisation is shown as a function of the Lorentz factors
of the electrons, where the function $\mathrm{PD}=0.5/(1+x_{e})$ from
Bonometto et al. (1970) is indicated with a red line. The net polarisation
decreases approximately with the inverse of the seed photon energy in the
electron rest frame, consistent with the analytical prediction of
$\mathrm{PD}=0.5/(1+x_{e})$ from Bonometto et al. (1970) in the Thomson regime.
The results from the MAPPIES code are therefore overall consistent with those
of Krawczynski (2012).
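The two quantities used in this comparison can be sketched as follows; the kinematic maximum is written in its standard head-on form $x_{lab}^{max}=4\gamma^{2}x_{lab}/(1+4\gamma x_{lab})$, and the helper names are ours:

```python
def x_max(x_lab, gamma):
    """Maximum kinematically allowed scattered-photon energy (Eq. A1,
    head-on scattering): x_max = 4 gamma^2 x / (1 + 4 gamma x)."""
    return 4.0 * gamma**2 * x_lab / (1.0 + 4.0 * gamma * x_lab)

def pd_bonometto(x_e):
    """Analytical Thomson-regime prediction PD = 0.5 / (1 + x_e)
    (Bonometto et al. 1970)."""
    return 0.5 / (1.0 + x_e)

# Seed photons of the comparison, x_lab = 0.0025, scattered by gamma = 10:
print(x_max(0.0025, 10.0))  # ~0.91
print(pd_bonometto(0.0))    # 0.5: maximum net PD in the Thomson limit
```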
Figure 8: The intensity of the Compton emission as a function of the scattered
photon energy in units of the maximum kinematically allowed energy
$y=x_{lab}^{sc}/x_{lab}^{max}$. The seed photons are uni-directional and mono-
energetic in the laboratory frame with
$x_{lab}=\epsilon_{lab}/(m_{e}c^{2})=0.0025$. The results are shown for
scattering off mono-energetic electrons, assumed to be isotropic in the co-
moving frame of the emission region, with Lorentz factors of
$\gamma=10,100,500,2500,12500,62500$. Results from Krawczynski (2012) are
shown in the right panel, and the results from the MAPPIES code are shown in
the left panel.
Figure 9: The Compton polarisation as a function of the
scattered photon energy in units of the maximum kinematically allowed energy
$y=x_{lab}^{sc}/x_{lab}^{max}$. The seed photons are uni-directional and mono-
energetic in the laboratory frame with
$x_{lab}=\epsilon_{lab}/(m_{e}c^{2})=0.0025$. The results are shown for
scattering off mono-energetic electrons, assumed to be isotropic in the co-
moving frame of the emission region, with Lorentz factors of
$\gamma=10,100,500,2500,12500,62500$. Results from Krawczynski (2012) are
shown in the right panel, and the results from the MAPPIES code are shown in
the left panel.
Figure 10: The net $\mathrm{PD}$ of the Compton emission as a
function of Lorentz factors of the electrons. The seed photons are uni-
directional and mono-energetic in the laboratory frame with
$x_{lab}=\epsilon_{lab}/(m_{e}c^{2})=0.0025$. The results are shown for
scattering off mono-energetic electrons, assumed to be isotropic in the co-
moving frame of the emission region, with Lorentz factors of
$\gamma=10,100,500,2500,12500,62500,3.1\times 10^{5},1.6\times 10^{6}$.
Results from Krawczynski (2012) are shown in the right panel, and the results
from the MAPPIES code are shown in the left panel. The function
$\mathrm{PD}=0.5/(1+x_{e})$ from Bonometto et al. (1970) is indicated with a
red line.
## References
* Abarr & Krawczynski (2020) Abarr, Q., & Krawczynski, H. 2020, ApJ, 889, 111, doi: 10.3847/1538-4357/ab5fdf
* Ackermann et al. (2012) Ackermann, M., Ajello, M., Ballet, J., et al. 2012, ApJ, 751, 159, doi: 10.1088/0004-637X/751/2/159
* Ackermann et al. (2016) Ackermann, M., Anantua, R., Asano, K., et al. 2016, ApJ, 824, L20, doi: 10.3847/2041-8205/824/2/L20
* Aharonian et al. (2009) Aharonian, F., Akhperjanian, A. G., Anton, G., et al. 2009, A&A, 499, 273, doi: 10.1051/0004-6361/200811564
* Angel (1969) Angel, J. R. P. 1969, ApJ, 158, 219, doi: 10.1086/150185
* Ansoldi et al. (2018) Ansoldi, S., Antonelli, L. A., Arcaro, C., et al. 2018, ApJ, 863, L10, doi: 10.3847/2041-8213/aad083
* Atoyan & Dermer (2001) Atoyan, A., & Dermer, C. D. 2001, Phys. Rev. Lett., 87, 221102, doi: 10.1103/PhysRevLett.87.221102
* Band et al. (1993) Band, D., Matteson, J., Ford, L., et al. 1993, ApJ, 413, 281, doi: 10.1086/172995
* Baring et al. (2017) Baring, M. G., Böttcher, M., & Summerlin, E. J. 2017, MNRAS, 464, 4875, doi: 10.1093/mnras/stw2344
* Baring & Braby (2004) Baring, M. G., & Braby, M. L. 2004, ApJ, 613, 460, doi: 10.1086/422867
* Beheshtipour et al. (2017) Beheshtipour, B., Krawczynski, H., & Malzac, J. 2017, ApJ, 850, 14, doi: 10.3847/1538-4357/aa906a
* Blaes et al. (2001) Blaes, O., Hubeny, I., Agol, E., & Krolik, J. H. 2001, ApJ, 563, 560, doi: 10.1086/324045
* Błażejowski et al. (2000) Błażejowski, M., Sikora, M., Moderski, R., & Madejski, G. M. 2000, ApJ, 545, 107, doi: 10.1086/317791
* Bonometto et al. (1970) Bonometto, S., Cazzola, P., & Saggion, A. 1970, A&A, 7, 292
* Bonometto & Saggion (1973) Bonometto, S., & Saggion, A. 1973, A&A, 23, 9
* Böttcher (2010) Böttcher, M. 2010, arXiv e-prints, arXiv:1006.5048. https://arxiv.org/abs/1006.5048
* Böttcher (2019) —. 2019, Galaxies, 7, 20, doi: 10.3390/galaxies7010020
* Böttcher & Dermer (1998) Böttcher, M., & Dermer, C. D. 1998, ApJ, 499, L131, doi: 10.1086/311366
* Böttcher et al. (2012) Böttcher, M., Harris, D. E., & Krawczynski, H. 2012, Relativistic Jets from Active Galactic Nuclei (Wiley Online Library)
* Böttcher et al. (2013) Böttcher, M., Reimer, A., Sweeney, K., & Prakash, A. 2013, ApJ, 768, 54, doi: 10.1088/0004-637X/768/1/54
* Burrows et al. (2006) Burrows, D. N., Grupe, D., Capalbi, M., et al. 2006, ApJ, 653, 468, doi: 10.1086/508740
* Cerruti et al. (2019) Cerruti, M., Zech, A., Boisson, C., et al. 2019, MNRAS, 483, L12, doi: 10.1093/mnrasl/sly210
* Chang & Lin (2014) Chang, Z., & Lin, H.-N. 2014, ApJ, 795, 36, doi: 10.1088/0004-637X/795/1/36
* Conway et al. (1993) Conway, R. G., Garrington, S. T., Perley, R. A., & Biretta, J. A. 1993, A&A, 267, 347
* Dermer & Menon (2010) Dermer, C. D., & Menon, G. 2010, in American Astronomical Society Meeting Abstracts, Vol. 215, American Astronomical Society Meeting Abstracts #215, 372.01
* Gabuzda (2018) Gabuzda, D. 2018, Galaxies, 7, 5, doi: 10.3390/galaxies7010005
* Gao et al. (2019) Gao, S., Fedynitch, A., Winter, W., & Pohl, M. 2019, Nature Astronomy, 3, 88, doi: 10.1038/s41550-018-0610-1
* Gill et al. (2020) Gill, R., Granot, J., & Kumar, P. 2020, MNRAS, 491, 3343, doi: 10.1093/mnras/stz2976
* Grandi et al. (1997) Grandi, P., Guainazzi, M., Mineo, T., et al. 1997, A&A, 325, L17. https://arxiv.org/abs/astro-ph/9707073
* Granot (2008) Granot, J. 2008, arXiv e-prints, arXiv:0811.1657. https://arxiv.org/abs/0811.1657
* Haardt et al. (1998) Haardt, F., Fossati, G., Grandi, P., et al. 1998, A&A, 340, 35. https://arxiv.org/abs/astro-ph/9806229
* IceCube Collaboration et al. (2018) IceCube Collaboration, Aartsen, M. G., Ackermann, M., et al. 2018, Science, 361, eaat1378, doi: 10.1126/science.aat1378
* Jahoda et al. (2019) Jahoda, K., Krawczynski, H., Kislat, F., et al. 2019, arXiv e-prints, arXiv:1907.10190. https://arxiv.org/abs/1907.10190
* Keivani et al. (2018) Keivani, A., Murase, K., Petropoulou, M., et al. 2018, ApJ, 864, 84, doi: 10.3847/1538-4357/aad59a
* Kislat (2019) Kislat, F. 2019, in AAS/High Energy Astrophysics Division, AAS/High Energy Astrophysics Division, 109.79
* Kole (2018) Kole, M. 2018, arXiv e-prints, arXiv:1804.04864. https://arxiv.org/abs/1804.04864
* Kole (2019) Kole, M. 2019, in International Cosmic Ray Conference, Vol. 36, 36th International Cosmic Ray Conference (ICRC2019), 572
* Kole et al. (2020) Kole, M., De Angelis, N., Berlato, F., et al. 2020, arXiv e-prints, arXiv:2009.04871. https://arxiv.org/abs/2009.04871
* Krawczynski (2012) Krawczynski, H. 2012, ApJ, 744, 30, doi: 10.1088/0004-637X/744/1/30
* Krawczynski et al. (2011) Krawczynski, H., Garson, A., Guo, Q., et al. 2011, Astroparticle Physics, 34, 550, doi: 10.1016/j.astropartphys.2010.12.001
* Krawczynski et al. (2019) Krawczynski, H., Matt, G., Ingram, A. R., et al. 2019, arXiv e-prints, arXiv:1904.09313. https://arxiv.org/abs/1904.09313
* Laurent et al. (2011) Laurent, P., Rodriguez, J., Wilms, J., et al. 2011, Science, 332, 438, doi: 10.1126/science.1200848
* Li et al. (2019) Li, W., Xing, Y., Yu, Y., et al. 2019, Journal of Astronomical Telescopes, Instruments, and Systems, 5, 019003, doi: 10.1117/1.JATIS.5.1.019003
* Liodakis et al. (2020) Liodakis, I., Peirson, L., & Romani, R. 2020, in American Astronomical Society Meeting Abstracts, American Astronomical Society Meeting Abstracts, 305.09
* Lundman et al. (2018) Lundman, C., Vurm, I., & Beloborodov, A. M. 2018, ApJ, 856, 145, doi: 10.3847/1538-4357/aab3e8
* Lyutikov et al. (2003) Lyutikov, M., Pariev, V. I., & Blandford, R. D. 2003, ApJ, 597, 998, doi: 10.1086/378497
* Marshall et al. (2019) Marshall, H. L., Garner, A., Heine, S. N., et al. 2019, in UV, X-Ray, and Gamma-Ray Space Instrumentation for Astronomy XXI, Vol. 11118, International Society for Optics and Photonics, 111180A
* Marshall et al. (2017) Marshall, H. L., Schulz, N. S., Trowbridge Heine, S. N., et al. 2017, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 10397, Proc. SPIE, 103970K, doi: 10.1117/12.2274107
* Masnou et al. (1992) Masnou, J. L., Wilkes, B. J., Elvis, M., McDowell, J. C., & Arnaud, K. A. 1992, A&A, 253, 35
* Matsumoto & Nishimura (1998) Matsumoto, M., & Nishimura, T. 1998, ACM Trans. Model. Comput. Simul., 8, 3–30, doi: 10.1145/272991.272995
* Matt et al. (1996) Matt, G., Feroci, M., Rapisarda, M., & Costa, E. 1996, Radiation Physics and Chemistry, 48, 403, doi: 10.1016/0969-806X(95)00472-A
* McConnell et al. (2019) McConnell, M., Ajello, M., Baring, M., et al. 2019, BAAS, 51, 100
* McConnell & Ryan (2004) McConnell, M. L., & Ryan, J. M. 2004, New A Rev., 48, 215, doi: 10.1016/j.newar.2003.11.029
* Moderski et al. (2005) Moderski, R., Sikora, M., Coppi, P. S., & Aharonian, F. 2005, MNRAS, 363, 954, doi: 10.1111/j.1365-2966.2005.09494.x
* Murase et al. (2018) Murase, K., Oikonomou, F., & Petropoulou, M. 2018, ApJ, 865, 124, doi: 10.3847/1538-4357/aada00
* Ostorero et al. (2004) Ostorero, L., Villata, M., & Raiteri, C. M. 2004, A&A, 419, 913, doi: 10.1051/0004-6361:20035813
* Padovani et al. (2018) Padovani, P., Giommi, P., Resconi, E., et al. 2018, MNRAS, 480, 192, doi: 10.1093/mnras/sty1852
* Pal et al. (2020) Pal, M., Kushwaha, P., Dewangan, G. C., & Pawar, P. K. 2020, ApJ, 890, 47, doi: 10.3847/1538-4357/ab65ee
* Paliya et al. (2015) Paliya, V. S., Sahayanathan, S., & Stalin, C. S. 2015, ApJ, 803, 15, doi: 10.1088/0004-637X/803/1/15
* Paliya et al. (2018) Paliya, V. S., Zhang, H., Böttcher, M., et al. 2018, ApJ, 863, 98, doi: 10.3847/1538-4357/aad1f0
* Palma et al. (2011) Palma, N. I., Böttcher, M., de la Calle, I., et al. 2011, ApJ, 735, 60, doi: 10.1088/0004-637X/735/1/60
* Paltani et al. (1998) Paltani, S., Courvoisier, T. J. L., & Walter, R. 1998, A&A, 340, 47. https://arxiv.org/abs/astro-ph/9809113
* Paul et al. (2016) Paul, B., Gopala Krishna, M. R., & Puthiya Veetil, R. 2016, in 41st COSPAR Scientific Assembly, Vol. 41, E1.15–8–16
* Pe’er (2015) Pe’er, A. 2015, Advances in Astronomy, 2015, 907321, doi: 10.1155/2015/907321
* Pian et al. (1999) Pian, E., Urry, C. M., Maraschi, L., et al. 1999, ApJ, 521, 112, doi: 10.1086/307548
* Piran & Granot (2001) Piran, T., & Granot, J. 2001, in Gamma-ray Bursts in the Afterglow Era, ed. E. Costa, F. Frontera, & J. Hjorth, 300, doi: 10.1007/10853853_80
* Piran et al. (2009) Piran, T., Sari, R., & Zou, Y.-C. 2009, MNRAS, 393, 1107, doi: 10.1111/j.1365-2966.2008.14198.x
* Pozdnyakov et al. (1983) Pozdnyakov, L. A., Sobol, I. M., & Syunyaev, R. A. 1983, Astrophys. Space Phys. Res., 2, 189
* Racusin et al. (2011) Racusin, J. L., Oates, S. R., Schady, P., et al. 2011, ApJ, 738, 138, doi: 10.1088/0004-637X/738/2/138
* Raiteri et al. (2006) Raiteri, C. M., Villata, M., Kadler, M., et al. 2006, A&A, 452, 845, doi: 10.1051/0004-6361:20054409
* Raiteri et al. (2005) Raiteri, C. M., Villata, M., Ibrahimov, M. A., et al. 2005, A&A, 438, 39, doi: 10.1051/0004-6361:20042567
* Rani et al. (2019) Rani, B., Zhang, H., Hunter, S. D., et al. 2019, BAAS, 51, 348. https://arxiv.org/abs/1903.04607
* Ravasio et al. (2003) Ravasio, M., Tagliaferri, G., Ghisellini, G., et al. 2003, A&A, 408, 479, doi: 10.1051/0004-6361:20031015
* Reimer et al. (2019) Reimer, A., Böttcher, M., & Buson, S. 2019, ApJ, 881, 46, doi: 10.3847/1538-4357/ab2bff
* Roustazadeh & Böttcher (2012) Roustazadeh, P., & Böttcher, M. 2012, ApJ, 750, 26, doi: 10.1088/0004-637X/750/1/26
* Rybicki & Lightman (1979) Rybicki, G. B., & Lightman, A. P. 1979, Radiative processes in astrophysics (John Wiley & Sons)
* Sari (1997) Sari, R. 1997, ApJ, 489, L37, doi: 10.1086/310957
* Schnittman & Krolik (2009) Schnittman, J. D., & Krolik, J. H. 2009, ApJ, 701, 1175, doi: 10.1088/0004-637X/701/2/1175
* Schnittman & Krolik (2010) —. 2010, ApJ, 712, 908, doi: 10.1088/0004-637X/712/2/908
* Sgrò & IXPE Team (2019) Sgrò, C., & IXPE Team. 2019, Nuclear Instruments and Methods in Physics Research A, 936, 212, doi: 10.1016/j.nima.2018.10.111
* She et al. (2015) She, R., Feng, H., Muleri, F., et al. 2015, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9601, Proc. SPIE, 96010I, doi: 10.1117/12.2186133
* Sikora et al. (1994) Sikora, M., Begelman, M. C., & Rees, M. J. 1994, ApJ, 421, 153, doi: 10.1086/173633
* Sikora et al. (1997) Sikora, M., Madejski, G., Moderski, R., & Poutanen, J. 1997, ApJ, 484, 108, doi: 10.1086/304305
* Stokes (1851) Stokes, G. G. 1851, Trans. Cambridge Philos., 9, 399
* Sunyaev & Titarchuk (1984) Sunyaev, R. A., & Titarchuk, L. G. 1984, in High Energy Astrophysics and Cosmology, 245
* Tamborra et al. (2018) Tamborra, F., Matt, G., Bianchi, S., & Dovčiak, M. 2018, A&A, 619, A105, doi: 10.1051/0004-6361/201732023
* Tavecchio & Ghisellini (2015) Tavecchio, F., & Ghisellini, G. 2015, MNRAS, 451, 1502, doi: 10.1093/mnras/stv1023
* Toma et al. (2009) Toma, K., Sakamoto, T., Zhang, B., et al. 2009, ApJ, 698, 1042, doi: 10.1088/0004-637X/698/2/1042
* Trippe (2014) Trippe, S. 2014, Journal of Korean Astronomical Society, 47, 15, doi: 10.5303/JKAS.2014.47.1.015
* Urry & Padovani (1995) Urry, C. M., & Padovani, P. 1995, PASP, 107, 803, doi: 10.1086/133630
* Walcher et al. (2011) Walcher, J., Groves, B., Budavári, T., & Dale, D. 2011, Ap&SS, 331, 1, doi: 10.1007/s10509-010-0458-z
* Weisskopf et al. (2016) Weisskopf, M. C., Ramsey, B., O’Dell, S. L., et al. 2016, Results in Physics, 6, 1179, doi: 10.1016/j.rinp.2016.10.021
* Westfold (1959) Westfold, K. C. 1959, ApJ, 130, 241, doi: 10.1086/146713
* Xiong et al. (2017) Xiong, S., Wang, Y., Li, Z., et al. 2017, in International Cosmic Ray Conference, Vol. 301, 35th International Cosmic Ray Conference (ICRC2017), 640
* Zhang (2017) Zhang, H. 2017, Galaxies, 5, 32, doi: 10.3390/galaxies5030032
* Zhang & Böttcher (2013) Zhang, H., & Böttcher, M. 2013, ApJ, 774, 18, doi: 10.1088/0004-637X/774/1/18
* Zhang et al. (2014) Zhang, H., Chen, X., & Böttcher, M. 2014, ApJ, 789, 66, doi: 10.1088/0004-637X/789/1/66
* Zhang et al. (2019a) Zhang, H., Fang, K., Li, H., et al. 2019a, ApJ, 876, 109, doi: 10.3847/1538-4357/ab158d
* Zhang et al. (2019b) Zhang, S., Santangelo, A., Feroci, M., et al. 2019b, Science China Physics, Mechanics, and Astronomy, 62, 29502, doi: 10.1007/s11433-018-9309-2
* Zhang et al. (2016) Zhang, S. N., Feroci, M., Santangelo, A., et al. 2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9905, Proc. SPIE, 99051Q, doi: 10.1117/12.2232034
* Zhang et al. (2018) Zhang, X. F., Hajdas, W., Xiao, H. L., et al. 2018, Nuclear Instruments and Methods in Physics Research A, 879, 47, doi: 10.1016/j.nima.2017.10.012
# Monte-Carlo Applications for Partially Polarized Inverse External-Compton
Scattering (MAPPIES) II - Application to the UV/Soft X-ray Excess in Blazar
Spectra
Lenté Dreyer Centre for Space Research, North-West University, Potchefstroom
2531, South Africa Markus Böttcher Centre for Space Research, North-West
University, Potchefstroom 2531, South Africa
(Accepted January 27, 2021)
###### Abstract
The spectral energy distributions (SEDs) of some blazars exhibit an
ultraviolet (UV) and/or soft X-ray excess, which can be modelled with
different radiation mechanisms. Polarization measurements of the UV/X-ray
emission from blazars may provide new and unique information about the
astrophysical environment of blazar jets, and could thus help to distinguish
between different emission scenarios. In this paper, a new Monte-Carlo code –
MAPPIES (Monte-Carlo Applications for Partially Polarized Inverse External-
Compton Scattering) – for polarization-dependent Compton scattering is used to
simulate the polarization signatures in a model where the UV/soft X-ray excess
arises from the bulk Compton process. Predictions of the expected polarization
signatures of Compton emission from the soft X-ray excess in the SED of AO
0235+164, and the UV excess in the SED of 3C 279 are made for upcoming and
proposed polarimetry missions.
Keywords: BL Lacertae objects: general – galaxies: active – galaxies: jets – gamma rays:
galaxies – polarization – radiation mechanisms: non-thermal – relativistic
processes – scattering – X-rays: galaxies
journal: ApJ
## 1 Introduction
Active galactic nuclei (AGNs) are some of the most luminous objects in the
universe. About 10% of AGNs are observed to host relativistic jets, which are
considered to be powerful emitters of radiation across the entire
electromagnetic spectrum. Blazars are an extreme class of AGNs – consisting of
BL Lac objects and flat-spectrum radio quasars (FSRQs) – for which the
observer’s line of sight is closely aligned to the jet axis (Urry & Padovani,
1995; Padovani et al., 2018). Various properties of the radiation from blazars
have been studied with multi-wavelength observations and spectral fitting.
While FSRQs have strong optical emission lines (which indicates the presence
of accretion-disk radiation and a dense broad line region (BLR)), BL Lac
objects typically do not have a luminous accretion-disk or broad lines (Giommi
et al., 2012; Dermer & Giebels, 2016). The spectral energy distributions
(SEDs) of blazars are dominated by non-thermal emission, and generally consist
of two distinct components: the low-frequency (radio through ultraviolet (UV)
or X-ray) component, and the high-frequency (X-ray and $\gamma$-ray)
component. The relativistic jets contain ultra-relativistic electrons that
produce soft photons from radio frequencies up to UV/X-rays through
synchrotron emission, and photons up to very high energies (VHE; $E\geq
100~{}\mathrm{GeV}$) via inverse-Compton (IC) processes (i.e. leptonic
models). Alternatively, high-energy emission can be produced by synchrotron
radiation of pair cascades, powered by hadronic processes and synchrotron
emission of ultra-high-energy protons and muons (i.e. hadronic models). Both
leptonic models and hadronic models are generally able to produce acceptable
fits to blazar SEDs (Böttcher et al., 2013).
Polarization is an important observable that can be used to constrain the
morphology and geometry of the emitting region, and to distinguish between
various emission mechanisms. Polarization measurements of the radio and
optical emission from blazar jets have been the key to understanding many
diverse aspects of blazar jets (see e.g. Böttcher (2019); Trippe (2019); Zhang
(2019) for recent reviews). The radio and optical emission of blazars has
moderate polarization degrees (PDs) of up to $(3-30)\%$ (Conway et al., 1993;
Zhang et al., 2014), which correspond to non-thermal electron synchrotron
emission (Westfold, 1959; Rybicki & Lightman, 1979), thus confirming the
dominant radiation mechanism for the radio and optical emission from blazars.
The polarization of the UV, X-ray and $\gamma$-ray emission has so far been
largely unexplored, although its scientific potential has long been
appreciated (see e.g. Krawczynski et al. (2011); Andersson et al. (2015);
Zhang (2017); Mignani et al. (2019); Rani et al. (2019) for reviews). For
instance, the synchrotron origin of the X-ray emission from high-synchrotron-
peaked (HSP) blazars may be confirmed with X-ray polarimetry (e.g. Krawczynski
(2012)). High-energy polarimetry may also be able to distinguish between
leptonic and hadronic emission scenarios for the origin of the high-frequency
component in blazar SEDs, since hadronic models typically predict higher
degrees of X-ray and $\gamma$-ray polarization than leptonic models (Zhang &
Böttcher, 2013; Paliya et al., 2018; Zhang et al., 2019a). In leptonic models,
the high-energy emission can be partially polarized, depending on the source
of the seed photon field for Compton scattering. While external-Compton
emission is expected to be unpolarized, Compton scattering of the polarized
low-frequency synchrotron emission (i.e. synchrotron-self-Compton; SSC) is
expected to be polarized with a polarization degree (PD) that is about half of
the polarization of the synchrotron emission (e.g. Chakraborty et al. (2015)).
In addition to the two characteristic broad non-thermal components described
above, blazar SEDs sometimes exhibit an infrared (IR) bump, an optical/UV bump
(called the Big Blue Bump (BBB)), and/or a soft X-ray excess (e.g. Masnou et al.
(1992); Grandi et al. (1997); Haardt et al. (1998); Pian et al. (1999);
Raiteri et al. (2005); Palma et al. (2011); Ackermann et al. (2012); Paliya et
al. (2015); Pal et al. (2020); see Antonucci (2002); Perlman et al. (2008) for
reviews). Various radiation mechanisms have been proposed for the origin of
the UV/soft X-ray excess in the SEDs, which include the following:
* •
Thermal emission from the accretion-disk (e.g. Pian et al. (1999); Blaes et
al. (2001); Paliya et al. (2015); Pal et al. (2020)).
* •
A higher-than-Galactic dust-to-gas ratio toward the source, resulting in an
over-estimation of the neutral-hydrogen column density and, therefore, an
over-correction for the corresponding photo-electric absorption at low X-ray
energies (in this case, the excess would not actually be physical; e.g.
Ravasio et al. (2003)).
* •
A distinct synchrotron component from a different region than the low-
frequency component in a multi-zone construction (e.g. Paltani et al. (1998);
Ostorero et al. (2004); Raiteri et al. (2005)).
* •
Synchrotron emission from VHE $\gamma$-ray induced pair cascades in blazar
environments (Roustazadeh & Böttcher, 2012).
* •
Reduced radiative cooling of the highest-energy electrons in a Compton-
dominated blazar, due to the Klein-Nishina suppression of the Compton cross
section (e.g. Ravasio et al. (2003); Moderski et al. (2005)).
* •
A signature of IC scattering of external radiation fields by a thermal, non-
relativistic population of electrons (i.e. the bulk Compton effect; e.g.
Sikora et al. (1994, 1997); Błażejowski et al. (2000); Ackermann et al.
(2012); Baring et al. (2017)).
Polarization measurements of the UV and soft X-ray emission may yield new and
unique information about these spectral features in the SEDs, thus
distinguishing between different emission scenarios. Given these promising
prospects, numerous polarimetry missions that aim to deliver polarization
measurements of the UV/X-ray emission from astrophysical sources (including
blazars) are at various stages of planning, design, and operation. It is,
therefore, important to consider predictions of the polarization signatures
for different emission scenarios that may be able to explain the origin of the
UV/soft X-ray excess in blazar spectra.
In this paper, a new Monte-Carlo code – MAPPIES (Monte-Carlo Applications for
Partially Polarized Inverse External-Compton Scattering; Dreyer & Böttcher
(2020)) – is used to simulate the polarization signatures in a model where the
UV/soft X-ray excess arises due to Compton scattering of external fields by
thermal electrons contained in the blazar jet, as proposed by Baring et al.
(2017) for the BL Lac object AO 0235+164. Two blazar case studies are
presented: The BL Lac object AO 0235+164, and the FSRQ 3C 279. An overview of
the soft X-ray excess in the SED of AO 0235+164 and the BBB in the SED of 3C
279 is given in Section 2 and Section 3, respectively. The model setup and
parameters considered for the simulations are described in Section 4, followed
by the results in Section 5, and the conclusions in Section 6.
## 2 The soft X-ray excess in the SED of AO 0235+164
The BL Lac object AO 0235+164 (redshift $z=0.94$) is one of the most prominent
examples of the emergence of a soft X-ray excess (e.g. Junkkarinen et al.
(2004); Raiteri et al. (2005); Ackermann et al. (2012)). The optical to X-ray
SED was reconstructed by Raiteri et al. (2006) with optical through UV data
from the X-ray Multi-Mirror Mission (XMM-Newton; Talavera (2009)) during 2000
- 2005. The source was mostly faint, with a hard X-ray spectrum during that
time, except in February 2002, when the source was found in a bright state
with a steep X-ray spectrum. The XMM-Newton data indicated the existence of a
UV to soft X-ray bump in the SED with an inferred peak frequency of
$\mathrm{log_{10}(\nu/Hz)}\sim 15.5-16.1$. While thermal emission from the
disk might be able to explain the bump, Raiteri et al. (2006) proposed that
synchrotron emission from a region that is closer to the black hole than where
the low-frequency component originates, could be another possible mechanism.
Additionally, Ostorero et al. (2004) showed that the soft X-ray excess can be
obtained in the ambit of the rotating helical jet model (e.g. Villata &
Raiteri (1999)) by admitting a synchrotron contribution to the X-ray
radiation. A soft X-ray excess within the frequency range
$\mathrm{log_{10}(\nu/Hz)}\sim 16.5-18.3$ of the SED was reported by
Ackermann et al. (2012) during a multi-wavelength campaign between 2008 and
2009, when AO 0235+164 was in a flaring state. The high-frequency component of
the SED was interpreted with the standard leptonic scenario (i.e. IC
scattering of the external IR radiation from the dusty torus). The X-ray data
from the Swift X-ray Telescope (Swift-XRT; Burrows et al. (2005)) and the
Rossi X-ray Timing Explorer (RXTE; Rothschild et al. (1998)) presented a soft
X-ray excess in the SED. Since the X-ray spectrum was too soft to be
attributed to the SSC component, IC scattering off cold electrons in the jet
was considered to be a possibility.
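For orientation (an arithmetic sketch, not part of any of the cited analyses), the quoted log-frequencies can be converted to photon energies via $E=h\nu$, confirming that the reported excess indeed falls in the soft X-ray band:

```python
# Convert log10(nu/Hz) to photon energy, E = h * nu (CGS constants).
H_PLANCK = 6.626e-27   # Planck constant [erg s]
ERG_PER_KEV = 1.602e-9

def lognu_to_kev(log10_nu):
    """Photon energy in keV for a frequency of 10**log10_nu Hz."""
    return H_PLANCK * 10.0**log10_nu / ERG_PER_KEV

# Range of the soft X-ray excess reported by Ackermann et al. (2012):
for lognu in (16.5, 18.3):
    print(f"log10(nu/Hz) = {lognu}: E ~ {lognu_to_kev(lognu):.2g} keV")
```

The range $\mathrm{log_{10}(\nu/Hz)}\sim 16.5-18.3$ thus corresponds to roughly $0.13-8$ keV, i.e. the soft X-ray band.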
Figure 1: The multi-wavelength spectrum (points) spanning the radio, optical,
X-ray, and $\gamma$-ray bands, together with model fits from Baring et al.
(2017), for the 2008 October high-state Fermi-LAT observation of AO 0235+164
(data from Ackermann et al. (2012)). The blue butterfly block represents
Swift-XRT data. The broad-band model consists of a synchrotron component
(dashed green curve) up to the optical band, a two order SSC contribution in
the optical, X-rays and $\gamma$-rays (dashed orange curve), and external
Compton (EC) emission, including a bulk Compton feature (dotted black curve)
between $\mathrm{log_{10}(\nu/Hz)}\sim 17.4$ and
$\mathrm{log_{10}(\nu/Hz)}\sim 24.4$. The orange curve is the total model
spectrum (which includes a very small correction for $\gamma\gamma$ absorption
by the extra-galactic background light). Taken from Baring et al. (2017).
Baring et al. (2017) employed the multi-wavelength observations of Ackermann
et al. (2012) and modelled the soft X-ray excess as a bulk Compton component
that results from an external radiation field scattering off a thermal
population of shock-heated electrons contained in the blazar jet (see Figure
1). Monte-Carlo simulations of the diffusive shock acceleration (DSA) process
by Summerlin & Baring (2012) were coupled with the radiation transfer modules
of Böttcher et al. (2013), and the fit of the soft X-ray feature through the
bulk Compton process aided in fixing the thermal-to-non-thermal particle ratio
in the jet, thus tightly constraining the particle diffusion parameters in the
DSA process. A qualitative prediction for this scenario is that the thermal
Comptonization process should lead to significant polarization in the soft
X-ray spectral component. In what follows, the model of Baring et al. (2017)
will be considered in order to test this prediction for the soft X-ray excess
in the SED of AO 0235+164, using the MAPPIES code.
## 3 The BBB in the SED of 3C 279
The FSRQ 3C 279 (redshift $z=0.536$) is one of the brightest $\gamma$-ray
sources in the sky, and the first blazar to be detected by the Energetic Gamma
Ray Experiment Telescope (EGRET; Kanbach et al. (1989); Hartman et al.
(1992)). Multi-wavelength variability and radio/optical polarimetry suggest
that the broadband spectrum of 3C 279 at radio to UV frequencies is produced
by synchrotron radiation (e.g. Maraschi et al. (1994); Hartman et al. (1996)).
The UV-optical continuum usually has a steep power-law spectrum, due to
radiation losses of the relativistic electrons in the jet, revealing its non-
thermal origin. The source was monitored by Pian et al. (1999) with the
International Ultraviolet Explorer (IUE; Nichols & Linsky (1996)) – combining
the UV data with observations from the ROentgen SATellite (ROSAT; Truemper
(1993)) and EGRET – during its low state, thus allowing the detection of an
excess in the UV regime at $\mathrm{log_{10}(\nu/Hz)}\sim 15.2$ (i.e., the BBB;
otherwise hidden below the dominant power-law continuum attributed to non-
thermal emission from the jet). The BBB was suggested to be due to thermal
emission from the accretion-disk (see also e.g. Blaes et al. (2001); Pal et
al. (2020)), and the UV luminosity they found for the presumed accretion-disk
responsible for the BBB feature was consistent with estimates provided by
Dermer & Schlickeiser (1993) and Sikora et al. (1994).
Figure 2: The SED of 3C 279 during low-activity states. Simultaneous data from
the Small and Moderate Aperture Research Telescope System (SMARTS), Swift XRT,
and Fermi-LAT are shown with red circles. The green and orange dashed curves
correspond to synchrotron and SSC emission, respectively. The orange solid
curve is the sum of all the radiative mechanisms (the thermal contributions
from the torus, accretion-disk, and X-ray corona, as well as the external-
Compton disk, BLR, and the dusty torus components are not shown in the
figure). From Paliya et al. (2015).
A detailed study of the brightest $\gamma$-ray flare observed from 3C 279 was
presented by Paliya et al. (2015), and the modeling of the low-activity state
showed a slight turnover at $\mathrm{log_{10}(\nu/Hz)}\sim 14.5-15.7$ (see
Figure 2), which was also attributed to the accretion-disk radiation. The
dilution of the synchrotron polarization, due to unpolarized accretion-disk
emission towards the UV, will result in a decrease of the PD throughout the
optical - UV regime. Additionally, the radiation emitted by electron-
scattering–dominated accretion-disks is expected to be considerably polarized,
perpendicular to the disk axis, with a strong angle dependence (up to a PD
$\sim 11.7\%$ for an edge-on disk; e.g. Chandrasekhar (1960)). However,
assuming that the jet of a blazar propagates along the symmetry axis of the
accretion disk, the high-energy emission region in the jet will have a perfect
face-on view of the disk, in which case there is no net polarization in the
accretion-disk emission due to the azimuthal symmetry (Smith et al., 2004).
Synchrotron emission from VHE $\gamma$-ray induced pair cascades in blazar
environments, was suggested by Roustazadeh & Böttcher (2012) as an alternative
contribution to the BBB. This cascade emission may peak in the UV/soft X-ray
range for sufficiently strong magnetic fields, and can resemble the BBB in the
UV regime. The external radiation field was modelled as isotropic black body
radiation with a temperature of $kT_{\mathrm{rad}}\sim 1.7$ eV. The results
illustrated that synchrotron emission from VHE $\gamma$-ray induced pair
cascades can reproduce the BBB in 3C 279, peaking at
$\mathrm{log_{10}(\nu/Hz)}\sim 14.9-15.3$. Within this scenario, the BBB would
result in polarized emission, with a weak dependence on the frequency over the
optical/UV range.
## 4 Model setup
The MAPPIES code is a newly developed Monte-Carlo code for polarization-
dependent Compton scattering of external fields in jet-dominated astrophysical
sources (Dreyer & Böttcher, 2020). An external radiation field (originating in
the AGN rest frame) scatters off an arbitrary (thermal and non-thermal)
electron distribution, assumed to be isotropic in the co-moving frame of the
emission region that moves along the jet with a bulk Lorentz factor
$\Gamma_{\mathrm{jet}}$. The polarization signatures are calculated using the
Stokes formalism (Stokes, 1851), and the polarization-dependent Compton
scattering of the seed photons is calculated following the Monte-Carlo methods
of Matt et al. (1996). The code is used to simulate the polarization signatures
in a model where the UV/soft X-ray excess in the SEDs of blazars is due to
bulk Compton emission. In this section, we describe the model setup for the
two case studies of AO 0235+164 and 3C 279. In particular, a description of
the seed photon fields and electron energy distributions is given.
Table 1: The model parameters for the BL Lac object AO 0235+164.
Parameter description | Value
---|---
Redshift of the source, $z$ | $0.94$
Bulk Lorentz factor, $\Gamma_{\mathrm{jet}}$ | $35.0$
Luminosity distance, $d_{L}$ | $1.89\times 10^{28}~{}\mathrm{cm}$
Temperature of the dusty torus, $kT_{\mathrm{rad}}$ | $0.1~{}\mathrm{eV}$
Radiation energy density of emission |
from the dusty torus, $U_{\mathrm{DT}}$ | $0.6~{}\mathrm{erg\cdot cm^{-3}}$
Effective size of the dusty torus, $R_{\mathrm{eff}}$ (see main text for the definition) | $6\times 10^{17}~{}\mathrm{cm}$
Note. — The model parameters correspond to those of Baring et al. (2017). The
seed photon field is assumed to be IR emission from the dusty torus, and the
electron energy distribution is drawn from the electron fit spectrum of Baring
et al. (2017) in order to simulate the IC scattering off thermal, shock-heated
electrons in the jet of the blazar.
Table 2: The model parameters for the FSRQ 3C 279.
Parameter description | Value
---|---
Redshift of the source, $z$ | $0.536$
Bulk Lorentz factor, $\Gamma_{\mathrm{jet}}$ | $10.0$
Luminosity distance, $d_{L}$ | $9.3\times 10^{27}~{}\mathrm{cm}$
Black hole mass, $M_{\mathrm{BH}}$ | $3\times 10^{8}M_{\odot}$
Inner radius of the accretion-disk, $R_{\mathrm{AD}}^{\mathrm{in}}$ | $6R_{\mathrm{G}}$
Outer radius of the accretion-disk, $R_{\mathrm{AD}}^{\mathrm{out}}$ | $10^{3}R_{\mathrm{G}}$
Accretion disk luminosity, $L_{\mathrm{AD}}$ | $1\times 10^{45}~{}\mathrm{erg\cdot s^{-1}}$
Height of the emission region above |
the central black hole, $h$ | $1.4\times 10^{17}~{}\mathrm{cm}$
Note. — The parameters correspond to those of Paliya et al. (2015) for 3C 279
in the low state. The seed photons are assumed to come directly from the disk,
and the electron energy distribution is drawn from the electron fit spectrum of
Baring et al. (2017) in order to simulate the IC scattering off thermal,
shock-heated electrons in the jet of the blazar.
In the table above, $M_{\odot}\sim 1.9\times 10^{33}\mathrm{g}$ is the solar
mass, and $R_{\mathrm{G}}=GM_{\mathrm{BH}}/c^{2}$ is the gravitational radius
of the black hole (with $G\sim 6.7\times 10^{-8}\mathrm{cm^{3}\cdot
g^{-1}\cdot s^{-2}}$ the gravitational constant and $c\sim 3\times
10^{10}\mathrm{cm\cdot s^{-1}}$ the speed of light).
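As a consistency check on Table 2 (an arithmetic sketch using the rounded constants quoted above), the gravitational radius and the resulting disk extent can be computed directly:

```python
# Reproduce the derived disk quantities of Table 2 from the rounded
# constants quoted in the text (results are approximate by construction).
G = 6.7e-8       # gravitational constant [cm^3 g^-1 s^-2]
C = 3e10         # speed of light [cm s^-1]
M_SUN = 1.9e33   # solar mass [g]

M_BH = 3e8 * M_SUN            # black hole mass of 3C 279 [g]
R_G = G * M_BH / C**2         # gravitational radius [cm]
R_AD = (1e3 - 6.0) * R_G      # outer minus inner disk radius [cm]

print(f"R_G  ~ {R_G:.2g} cm")
print(f"R_AD ~ {R_AD:.2g} cm")
```

With these rounded constants, $R_{\mathrm{G}}\sim 4.2\times 10^{13}~{}\mathrm{cm}$ and $R_{\mathrm{AD}}\sim 4.2\times 10^{16}~{}\mathrm{cm}$, consistent with the $\sim 4.5\times 10^{16}~{}\mathrm{cm}$ quoted in Section 4.1 once rounding is accounted for.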
### 4.1 The seed photon fields
The primary seed photon fields for IC models of AGNs can either be external
emission from the BLR and/or dusty torus (see e.g. Sikora et al. (1994);
Błażejowski et al. (2000); Ghisellini & Tavecchio (2008)), or direct
accretion-disk emission (see e.g. Dermer & Schlickeiser (1993); Böttcher et
al. (1997); Dermer & Schlickeiser (2002)). The MAPPIES code, therefore, draws
the seed photons from either an isotropic, single-temperature black body
distribution (corresponding to external emission from the BLR and/or dusty
torus), or from a multi-temperature accretion-disk spectrum. In the first
case, the radiation energy density $U_{\mathrm{DT}}^{\mathrm{AGN}}$ appears
uniform in the AGN rest frame, provided that the emission region is located
inside the characteristic scale of the BLR/dusty torus. The angular
distribution in the AGN rest frame varies over angular scales
$\Delta\theta\gg\Gamma_{\mathrm{jet}}^{-1}$ so that the angular distribution
in the co-moving frame is dominated by relativistic aberration rather than
intrinsic anisotropy (Böttcher et al., 2013). The radiation energy density
$U_{\mathrm{DT}}$ is thus boosted to a highly anisotropic field in the
emission frame with
$U_{\mathrm{DT}}\sim\Gamma_{\mathrm{jet}}^{2}U^{\mathrm{AGN}}_{\mathrm{DT}}$.
Figure 3: The total SEDs for AO 0235+164 (solid orange curve) and 3C 279
(solid blue curve). The seed photon spectra are determined by the model
parameters, and are assumed to be IR emission from the dusty torus for AO
0235+164 (dotted orange curve) and direct disk emission for 3C 279 (dotted
blue curve). Compton scattering off thermal electrons in the jet (as shown
in Figure 4) results in soft X-ray radiation for AO 0235+164 (dashed orange
curve) and UV radiation for 3C 279 (dashed blue curve).
Figure 3 shows the seed photon spectra of AO 0235+164 (orange dotted curve)
and 3C 279 (blue dotted curve). The seed photon field for AO 0235+164 is
assumed to be IR emission from the dusty torus ($kT_{\mathrm{rad}}\sim
0.1~{}\mathrm{eV}$; dotted orange curve), analogous to the model used by
Baring et al. (2017). The directly observable flux from the dusty torus is
calculated from its luminosity, $L_{\mathrm{DT}}=(4\pi
R_{\mathrm{eff}}^{2}c)\times U_{\mathrm{DT}}$, where $R_{\mathrm{eff}}$ is an
effective size. The dusty torus is expected to have a characteristic radius of
$R_{\mathrm{DT}}\sim$ a few pc. However, due to the torus-like geometry, its
effectively IR emitting surface (as seen from a distant observer) is
significantly smaller than $4\pi R_{\mathrm{DT}}^{2}$. We therefore introduce
an effective size of the dusty torus, $R_{\mathrm{eff}}\sim 6\times
10^{17}~{}\mathrm{cm}$, which is expected to be about an order of magnitude
smaller than $R_{\mathrm{DT}}$. The contribution of direct accretion-disk
emission to the radiation density in the emission-region rest frame is
obtained as $U_{\mathrm{AD}}\sim 2.5\times 10^{-5}~{}\mathrm{erg\cdot
cm^{-3}}$ from Equation 9 of Ghisellini & Madau (1996) for the combination of
free parameters considered by Baring et al. (2017), and is negligible compared
to that of isotropic IR emission from the dusty torus, where
$U_{\mathrm{DT}}\sim\Gamma_{\mathrm{jet}}^{2}U_{\mathrm{DT}}^{\mathrm{AGN}}=0.6~{}\mathrm{erg\cdot
cm^{-3}}$ (with $U_{\mathrm{DT}}^{\mathrm{AGN}}\sim 5\times
10^{-4}~{}\mathrm{erg\cdot cm^{-3}}$ and $\Gamma_{\mathrm{jet}}=35$). The
emission region is assumed to be much closer to a more luminous accretion-disk
for 3C 279 than in the case of AO 0235+164. The seed photons for 3C 279 are
thus drawn from a multi-temperature accretion-disk spectrum (dotted blue
curve), with a disk luminosity of $L_{\mathrm{AD}}=1.0\times
10^{45}~{}\mathrm{erg\cdot s^{-1}}$ and a disk radius
$R_{\mathrm{AD}}=R_{\mathrm{AD}}^{\mathrm{out}}-R_{\mathrm{AD}}^{\mathrm{in}}\sim
4.5\times 10^{16}~{}\mathrm{cm}$ (where $R_{\mathrm{AD}}^{\mathrm{out}}$ and
$R_{\mathrm{AD}}^{\mathrm{in}}$ are the outer and inner radius of the disk,
respectively).
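The Doppler boosting of the dusty-torus energy density quoted above is a one-line check (a sketch of the scaling relation, not of the full radiative transfer):

```python
# Doppler boosting of the external radiation energy density into the
# emission-region frame: U_DT ~ Gamma_jet**2 * U_DT_AGN (AO 0235+164).
GAMMA_JET = 35.0
U_DT_AGN = 5e-4   # dusty-torus energy density in the AGN frame [erg cm^-3]

U_DT = GAMMA_JET**2 * U_DT_AGN
print(f"U_DT ~ {U_DT:.2g} erg cm^-3")   # ~0.6 erg cm^-3, as in Table 1
```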
### 4.2 The electron energy distribution
Generally, the effects of Compton scattering depend on the electron energy
distributions. DSA at relativistic shocks is thought to be an important
acceleration mechanism in blazar jets, which may produce the non-thermal
particles that emit the broad-band continuum detected from the jets. Baring et
al. (2017) employed the results from Monte-Carlo simulations of DSA at
relativistic shocks by Summerlin & Baring (2012), in order to model the
particle acceleration at blazar shocks. The simulations captured the
connection between the thermal component and the power-law tail of the non-
thermal electrons in the blazar jet. The soft X-ray excess of AO 0235+164,
modelled as IC scattering off the thermal, non-relativistic shock heated
electrons, tightly constrained the energy dependence of the diffusion
coefficients for the electrons. For more details on the model parameters and
the electron energy distribution resulting in the SED fit, see Baring et al.
(2017).
Figure 4: The thermal and non-thermal electron distribution (with Lorentz
factors of $\gamma$; drawn from the electron fit spectrum of Baring et al.
(2017)) as a function of the dimensionless electron momentum $\gamma\beta$.
In the interpretation of the soft X-ray excess as a bulk Compton signature, a
measurable degree of polarization in the frequency range in which the bulk
Compton radiation dominates is expected, as it results from anisotropic
Compton scattering off thermal, non-relativistic electrons. The MAPPIES code
allows the quantification of this prediction and thereby, in comparison with
future X-ray polarimetry data, a probe of the presence of a pool of
thermal electrons in the emission region of AO 0235+164. The electron energy
distribution is thus drawn from the electron spectrum used in the SED fit by
Baring et al. (2017), shown in Figure 4. A thermal-to-non-thermal particle
ratio comparable to that of AO 0235+164 may result from shock acceleration of
electrons in 3C 279 as well. Therefore, as an illustrative
test case, we use the same electron distribution for 3C 279 as AO 0235+164 in
order to study the potential bulk Comptonization effects.
### 4.3 Polarization-dependent Compton scattering
A variety of physical phenomena may alter the polarization state of the
observed radiation, including the influence of the magnetic fields, general
relativity, and the emission mechanisms. Synchrotron radiation of relativistic
charged particles in ordered magnetic fields is expected to be both linearly
and circularly polarized (Westfold, 1959; Rybicki & Lightman, 1979), while
Compton scattering off relativistic electrons will reduce the degree of
polarization to about half of the seed photon field’s polarization (Bonometto
et al., 1970). For both blazar case studies, the seed photons are assumed to
be unpolarized. Due to the polarization dependence of the Klein-Nishina cross
section, polarized photons scatter preferentially in a direction perpendicular
to their electric field vector, and the electric field vectors of the
scattered photons tend to align with the seed photons’ electric field (Matt et
al., 1996). Polarization can therefore be induced when non-relativistic
electrons scatter an anisotropic photon field, even if the seed photons are
unpolarized.
The polarization signatures of the scattered photons are obtained by summing
the photons’ contribution to the Stokes parameters in a specified direction
after the simulation is complete. Circular polarization may be generated
either as an intrinsic component of synchrotron radiation (i.e. if the seed
photons are circularly polarized synchrotron emission) or via Faraday conversion
of linear polarization into circular polarization driven by some internal
Faraday rotation (see e.g. Wardle et al. (1998); Homan et al. (2009);
MacDonald & Marscher (2018); Boehm et al. (2019)). Since the seed photon
fields are assumed to be unpolarized external radiation, and high-energy
polarization will not be affected by Faraday rotation due to the $\lambda^{2}$
(where $\lambda$ is the wavelength) dependence of this effect, we only
consider the results of linear polarization. Results for circular
polarization are not needed for predictions of the expected polarization
signatures for upcoming polarimetry missions, since all existing or proposed
high-energy polarimeters measure only linear polarization. Indeed, due to the
photon-counting nature of X-ray and $\gamma$-ray observatories (and thus,
polarimeters), there is, to our knowledge, fundamentally no way to measure
circular polarization in X-rays or $\gamma$-rays.
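The Stokes bookkeeping referred to above follows the standard relations for linear polarization; a minimal sketch (not the MAPPIES implementation itself) is:

```python
import math

def linear_polarization(I, Q, U):
    """Linear polarization degree and angle from Stokes parameters:
    PD = sqrt(Q^2 + U^2) / I,  PA = 0.5 * atan2(U, Q)."""
    pd = math.hypot(Q, U) / I
    pa = 0.5 * math.atan2(U, Q)   # radians, from the Q > 0 reference axis
    return pd, pa

# Example: a 30% polarized signal with the electric vector perpendicular
# to the reference (jet) axis, i.e. Q < 0, U = 0:
pd, pa = linear_polarization(I=1.0, Q=-0.3, U=0.0)
print(pd, pa)   # PD = 0.3, PA = pi/2
```

Summing the scattered photons' Stokes parameters and applying these relations yields the PD and PA reported in Section 5.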
## 5 Results and discussion
In this section, the polarization signatures of IC scattering for the two
blazar case studies will be discussed. The results of Compton scattering off a
thermal population of shock-heated electrons contained in the jet of the
blazar (as shown in Figure 4), are given for AO 0235+164 and 3C 279 for the
combination of free parameters listed in Table 1 and Table 2, respectively.
### 5.1 The blazar SEDs
Figure 3 shows the SEDs of AO 0235+164 (orange) and 3C 279 (blue). The seed
photon fields are shown in dotted lines (determined by the model parameters;
see Section 4.1), the IC spectra are shown in dashed lines, and the total
(seed + IC) SEDs are shown in solid lines. For AO 0235+164, the seed photon
field is assumed to be IR emission from the dusty torus (dotted orange curve).
The photon frequency increases by a factor of
$\sim\gamma^{2}\Gamma_{\mathrm{jet}}^{2}$ (where $\Gamma_{\mathrm{jet}}=35$
and $\gamma\sim 1$) due to Compton scattering off thermal electrons, which
results in soft X-ray radiation (dashed orange curve).
Table 3: Predictions of the expected PDs for upcoming and proposed polarimetry
missions in a model where the UV/soft X-ray excess in the SEDs of AO 0235+164
and 3C 279 is due to a bulk Compton feature.
Blazar Case Study | Polarimeter | Frequency range [$\mathrm{log_{10}(\nu/Hz)}$] | PD [%]
---|---|---|---
 | POLLUX | $14.4-15.5$ | $12-30$
 | LAMP | $16.8$ | $43$
AO 0235+164 | REDSoX | $16.7-17.3$ | $30-46$
(IR emission from | XPP | $16.6-18.3$ | $\lesssim 46$
the dusty torus) | eXTP | $17.0-18.3$ | $\lesssim 40$
 | IXPE | $17.7-18.3$ | $\lesssim 20$
 | POLIX | $18.0-18.3$ | $\lesssim 10$
3C 279 | POLLUX | $14.4-15.5$ | $18-23$
(direct disk emission) | | |
Note. — The PDs in the table above are estimated from the results shown in the
top panel of Figure 5, where the results for AO 0235+164 are shown in orange,
and the results for 3C 279 are shown in blue. The frequency range listed in
the table refers to the range where polarization could be detectable with the
corresponding polarimetry missions.
The soft X-ray spectrum peaks at
$\mathrm{log_{10}(\nu/Hz)}\sim 17.3$, in agreement with the detection of the
soft X-ray excess in the SED of AO 0235+164 by Ackermann et al. (2012), and
the results from Baring et al. (2017) (as shown in Figure 1). For 3C 279,
essentially all the seed photons (direct disk emission; dotted blue curve)
enter the emission region from behind, since $(R_{\mathrm{AD}}/h)\sim
0.012\ll\Gamma_{\mathrm{jet}}^{-1}$, causing the photons to receive a negative
Doppler boost into the emission region frame. Compton scattering of the direct
disk emission results, therefore, in UV radiation (dashed blue curve) with
frequencies $\mathrm{log_{10}(\nu/Hz)}\sim 14.0-16.0$, peaking at
$\mathrm{log_{10}(\nu/Hz)}\sim 15$, which is consistent with the detection of
the UV excess by Pian et al. (1999) and Paliya et al. (2015), as well as the
reconstructed BBB of Roustazadeh & Böttcher (2012).
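As an order-of-magnitude check (a sketch of the scaling, not of the Monte-Carlo result), the frequency boost for AO 0235+164 can be evaluated directly:

```python
import math

# Bulk-Compton frequency boost: nu_scat ~ gamma**2 * Gamma_jet**2 * nu_seed.
GAMMA_JET = 35.0
GAMMA_E = 1.0   # thermal, non-relativistic electrons (gamma ~ 1)

boost = GAMMA_E**2 * GAMMA_JET**2
shift = math.log10(boost)   # shift in log10(nu/Hz)
print(f"boost ~ {boost:.0f}, Delta log10(nu/Hz) ~ {shift:.1f}")
```

A boost of $\sim 1225$ shifts the seed spectrum upward by $\sim 3.1$ decades in frequency, carrying the IR dusty-torus emission into the soft X-ray regime.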
### 5.2 Compton polarization
The Compton polarization signatures are shown as a function of the photon
frequency (in the observer’s frame) for viewing angles of
$\Theta_{\mathrm{AGN}}\sim\Gamma_{\mathrm{jet}}^{-1}~{}\mathrm{rad}$ in Figure
5. The Compton emission from AO 0235+164 (shown in orange) exhibits a
significant PD (top panel) within the frequency range
$\mathrm{log_{10}(\nu/Hz)}\sim 14.3-18.3$ (restricted to the frequency range
where the soft X-ray excess may dominate other – electron synchrotron –
radiation components), with the maximum PD $\sim 48\%$ at a frequency of
$\mathrm{log_{10}(\nu/Hz)}\sim 16.4$, and a PD of $\sim 30\%$ at the peak of
the soft X-ray component (at $\mathrm{log_{10}(\nu/Hz)}\sim 17.3$). The
frequency range in which significant polarization is predicted, covers a
number of upcoming and proposed missions for UV and X-ray polarimetry, which
include POLLUX on board the Large UV/Optical/IR Surveyor (LUVOIR; Ferrari et
al. (2019)), the Lightweight Asymmetry and Magnetism Probe (LAMP; Li et al.
(2019)), the Rocket Experiment Demonstration of a Soft X-ray Polarimeter
(REDSoX; Marshall et al. (2019)), the $X$-ray Polarization Probe (XPP;
Marshall et al. (2019)), the Enhanced $X$-ray Timing and Polarimetry Mission
(eXTP; Zhang et al. (2019b)), the Imaging $X$-ray Polarimetry Explorer (IXPE;
Deininger et al. (2020)), and the $X$-ray Polarimeter Experiment (POLIX; Paul
et al. (2016)). The predicted PDs are listed in Table 3, and are expected to
decrease with increasing frequency for all the polarimeters considered except
POLLUX, for which the PD is expected to increase with increasing frequency.
For 3C 279, the Compton polarization (shown in blue) is detectable in the
frequency range of $\mathrm{log_{10}(\nu/Hz)}\sim 14.3-16.0$ (restricted to
the frequency range where the BBB may make a significant contribution relative
– electron synchrotron and direct accretion-disk – radiation components). The
total observed spectrum has PDs up to $23\%$ (solid blue curve), which is
about half of the bulk Compton emission polarization (dashed blue curve) due
to the contribution of the unpolarized direct disk emission. The UV
polarimeter POLLUX could thus be able to detect the polarization of the BBB in
the SED of 3C 279 within the frequency range $\mathrm{log_{10}(\nu/Hz)}\sim
14.4-15.5$, with expected PDs of $(18-23)\%$ (see Table 3), with the maximum
PD $\sim 23\%$ at the peak of the BBB (at $\mathrm{log_{10}(\nu/Hz)}\sim 15$).
Figure 5: The PD (top panel) and PA (bottom panel) as a function of the photon
frequency for AO 0235+164 (orange) and 3C 279 (blue). The dotted lines
indicate negligible polarization, and the frequency range in which other
(electron-synchrotron) radiation components may dominate over the UV/soft
X-ray components. The dashed lines indicate the polarization of the Compton
emission, and the polarization of the total observed spectrum is shown with
solid lines.
Figure 6: The PD (top panel) and PA (bottom panel) as a function
of the scattered photon viewing angle (in the AGN-rest frame) for AO 0235+164
(orange) and 3C 279 (blue). The dotted lines indicate the viewing angle where
the maximum polarization occurs.
The Compton polarization signatures are shown as a function of the scattered
photon viewing angle $\Theta_{\mathrm{AGN}}$ in Figure 6. In the emission
region rest frame, most of the seed photons move in the negative jet
direction. The maximum polarization, therefore, occurs at the right angle,
$\Theta_{\mathrm{AGN}}\sim(\pi/2)~{}\mathrm{rad}$, in the electron rest frame,
which is essentially the same in the emission region rest frame. Boosting to
the AGN rest frame, the maximum PD occurs at
$\Theta_{\mathrm{AGN}}\sim\Gamma_{\mathrm{jet}}^{-1}~{}\mathrm{rad}$, with
$\Gamma_{\mathrm{jet}}=35$ for AO 0235+164 (indicated with an orange dotted
line) and $\Gamma_{\mathrm{jet}}=10$ for 3C 279 (indicated with a blue dotted
line). The polarization angle (PA; shown in the bottom panels of Figures 5 and
6) of the polarized fraction of the Compton emission assumes a constant value
of PA $=(\pi/2)~{}\mathrm{rad}$ for both blazar case studies, which refers to
polarization perpendicular to the jet axis.
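The mapping from the comoving-frame scattering angle to the AGN-frame viewing angle quoted above is the standard relativistic aberration formula, $\cos\Theta_{\mathrm{AGN}}=(\cos\Theta^{\prime}+\beta)/(1+\beta\cos\Theta^{\prime})$. A minimal numerical check (not part of the MAPPIES code; the function name is ours) that $\Theta^{\prime}\sim\pi/2$ maps to $\Theta_{\mathrm{AGN}}\sim\Gamma_{\mathrm{jet}}^{-1}$ for the two case studies:

```python
import math

def aberrated_angle(theta_rest, gamma):
    """Transform a photon angle (rad, measured from the jet axis in the
    emission-region rest frame) into the AGN rest frame via relativistic
    aberration: cos(Theta_AGN) = (cos(theta') + beta) / (1 + beta*cos(theta'))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    cos_tp = math.cos(theta_rest)
    return math.acos((cos_tp + beta) / (1.0 + beta * cos_tp))

# The maximum PD occurs at theta' ~ pi/2 in the comoving frame; in the AGN
# frame this lands at roughly 1/Gamma_jet for the two case studies:
for name, gamma in [("AO 0235+164", 35), ("3C 279", 10)]:
    theta_agn = aberrated_angle(math.pi / 2, gamma)
    print(f"{name}: Theta_AGN = {theta_agn:.4f} rad, 1/Gamma = {1/gamma:.4f} rad")
```

For large $\Gamma$, $\arccos\beta\approx 1/\Gamma$, which is why the dotted lines in Figure 6 sit at $\Gamma_{\mathrm{jet}}^{-1}$.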
## 6 Summary and conclusion
In this paper, the MAPPIES code is used to simulate IC scattering off a
thermal population of shock-heated electrons contained in the blazar jet.
Polarization signatures are predicted for AO 0235+164 and 3C 279 in a model
where the UV/soft X-ray excess in blazar spectra is due to bulk Comptonization
of external radiation fields, as proposed by Baring et al. (2017). Compton
scattering of external IR emission from the dusty torus results
in soft X-ray radiation for AO 0235+164, and Compton scattering of UV emission
from the accretion-disk results in UV radiation for 3C 279 (as shown in Figure
3). The Compton X-ray spectrum of AO 0235+164 agrees with the results of
Baring et al. (2017), and the UV spectrum of 3C 279 is consistent with the BBB
detected by e.g. Pian et al. (1999); Paliya et al. (2015) in the low state.
Therefore, while an isotropic IR radiation field (in the AGN rest frame) is
required to reproduce the soft X-ray excess as bulk Compton emission in the
SED of AO 0235+164, direct disk emission likely dominates the seed radiation
field for 3C 279, with the emission region closer to a more luminous disk
compared to that of AO 0235+164.
The thermal Comptonization process involved in the bulk Compton feature leads
to significant polarization within the UV/soft X-ray excess in the SEDs of
both blazar case studies (PD $\lesssim 48\%$ for AO 0235+164 and PD $\lesssim
23\%$ for 3C 279; as shown in Figure 5). The maximum PD occurs at viewing
angles of $\Theta_{\mathrm{AGN}}\sim\Gamma_{\mathrm{jet}}^{-1}~{}\mathrm{rad}$
(shown in Figure 6), and the PA for the polarized fraction of the Compton
emission assumes a constant value of PA$=(\pi/2)~{}\mathrm{rad}$, which
corresponds to polarization perpendicular to the jet axis. The Compton
polarization of the emission from the UV/soft X-ray excess in the SEDs is
predicted to be detectable within the frequency range of
$\mathrm{log_{10}(\nu/Hz)}\sim 14.3-18.3$ for AO 0235+164, and
$\mathrm{log_{10}(\nu/Hz)}\sim 14.3-16.0$ for 3C 279. Future missions
expected to deliver polarization measurements of UV and X-ray emission
from blazar jets may thus be able to detect the Compton polarization of the
UV/soft X-ray excess in the SEDs of AO 0235+164 and 3C 279, with the expected
PDs listed in Table 3. This reinforces the prospects of using polarization in
the UV and X-ray regimes, combined with spectral fitting and variability
studies, to probe models of the emission mechanisms responsible for the
different spectral features in blazar spectra.
## Acknowledgements
The work of Markus Böttcher is supported through the South African Research
Chair Initiative of the National Research Foundation (any opinion, finding,
and conclusion or recommendation expressed in this material is that of the
authors, and the NRF does not accept any liability in this regard) and the
Department of Science and Innovation of South Africa, under SARChI Chair grant
No. 64789.
## References
* Ackermann et al. (2012) Ackermann, M., Ajello, M., Ballet, J., et al. 2012, ApJ, 751, 159, doi: 10.1088/0004-637X/751/2/159
* Andersson et al. (2015) Andersson, B. G., Hines, D. C., Hoffman, J. L., et al. 2015, in Cosmic Origins Program Analysis Group (COPAG) (A White Paper submitted to the NASA 2015 COPAG). https://cor.gsfc.nasa.gov/copag/rfi/
* Antonucci (2002) Antonucci, R. 2002, in Astrophysical Spectropolarimetry, ed. J. Trujillo-Bueno, F. Moreno-Insertis, & F. Sánchez, 151–175. https://arxiv.org/abs/astro-ph/0103048
* Baring et al. (2017) Baring, M. G., Böttcher, M., & Summerlin, E. J. 2017, MNRAS, 464, 4875, doi: 10.1093/mnras/stw2344
* Blaes et al. (2001) Blaes, O., Hubeny, I., Agol, E., & Krolik, J. H. 2001, ApJ, 563, 560, doi: 10.1086/324045
* Błażejowski et al. (2000) Błażejowski, M., Sikora, M., Moderski, R., & Madejski, G. M. 2000, ApJ, 545, 107, doi: 10.1086/317791
* Boehm et al. (2019) Boehm, C., Olivares-Del Campo, A., Ramirez-Quezada, M., & Zhou, Y.-L. 2019, J. Cosmology Astropart. Phys, 2019, 041, doi: 10.1088/1475-7516/2019/12/041
* Bonometto et al. (1970) Bonometto, S., Cazzola, P., & Saggion, A. 1970, A&A, 7, 292
* Böttcher (2019) Böttcher, M. 2019, Galaxies, 7, 20, doi: 10.3390/galaxies7010020
* Böttcher et al. (1997) Böttcher, M., Mause, H., & Schlickeiser, R. 1997, A&A, 324, 395. https://arxiv.org/abs/astro-ph/9604003
* Böttcher et al. (2013) Böttcher, M., Reimer, A., Sweeney, K., & Prakash, A. 2013, ApJ, 768, 54, doi: 10.1088/0004-637X/768/1/54
* Burrows et al. (2005) Burrows, D. N., Hill, J. E., Nousek, J. A., et al. 2005, Space science reviews, 120, 165, doi: 10.1007/s11214-005-5097-2
* Chakraborty et al. (2015) Chakraborty, N., Pavlidou, V., & Fields, B. D. 2015, ApJ, 798, 16, doi: 10.1088/0004-637X/798/1/16
* Chandrasekhar (1960) Chandrasekhar, S. 1960, Radiative transfer (New York Dover)
* Conway et al. (1993) Conway, R. G., Garrington, S. T., Perley, R. A., & Biretta, J. A. 1993, A&A, 267, 347
* Deininger et al. (2020) Deininger, W. D., Kalinowski, W., Masciarelli, J., et al. 2020, in 2020 IEEE Aerospace Conference, IEEE, 1–18
* Dermer & Giebels (2016) Dermer, C. D., & Giebels, B. 2016, Comptes Rendus Physique, 17, 594, doi: 10.1016/j.crhy.2016.04.004
* Dermer & Schlickeiser (1993) Dermer, C. D., & Schlickeiser, R. 1993, ApJ, 416, 458, doi: 10.1086/173251
* Dermer & Schlickeiser (2002) —. 2002, ApJ, 575, 667, doi: 10.1086/341431
* Dreyer & Böttcher (2020) Dreyer, L., & Böttcher, M. 2020, ApJ, in press
* Ferrari et al. (2019) Ferrari, M., Bouret, J.-C., Neiner, C., et al. 2019, in American Astronomical Society Meeting Abstracts, Vol. 233, American Astronomical Society Meeting Abstracts #233, 148.09
* Ghisellini & Madau (1996) Ghisellini, G., & Madau, P. 1996, MNRAS, 280, 67, doi: 10.1093/mnras/280.1.67
* Ghisellini & Tavecchio (2008) Ghisellini, G., & Tavecchio, F. 2008, MNRAS, 387, 1669, doi: 10.1111/j.1365-2966.2008.13360.x
* Giommi et al. (2012) Giommi, P., Padovani, P., Polenta, G., et al. 2012, MNRAS, 420, 2899, doi: 10.1111/j.1365-2966.2011.20044.x
* Grandi et al. (1997) Grandi, P., Guainazzi, M., Mineo, T., et al. 1997, A&A, 325, L17. https://arxiv.org/abs/astro-ph/9707073
* Haardt et al. (1998) Haardt, F., Fossati, G., Grandi, P., et al. 1998, A&A, 340, 35. https://arxiv.org/abs/astro-ph/9806229
* Hartman et al. (1992) Hartman, R. C., Bertsch, D. L., Fichtel, C. E., et al. 1992, ApJ, 385, L1, doi: 10.1086/186263
* Hartman et al. (1996) Hartman, R. C., Webb, J. R., Marscher, A. P., et al. 1996, ApJ, 461, 698, doi: 10.1086/177095
* Homan et al. (2009) Homan, D. C., Lister, M. L., Aller, H. D., Aller, M. F., & Wardle, J. F. C. 2009, ApJ, 696, 328, doi: 10.1088/0004-637X/696/1/328
* Junkkarinen et al. (2004) Junkkarinen, V. T., Cohen, R. D., Beaver, E. A., et al. 2004, ApJ, 614, 658, doi: 10.1086/423777
* Kanbach et al. (1989) Kanbach, G., Bertsch, D. L., Fichtel, C. E., et al. 1989, Space Sci. Rev., 49, 69, doi: 10.1007/BF00173744
* Krawczynski (2012) Krawczynski, H. 2012, ApJ, 744, 30, doi: 10.1088/0004-637X/744/1/30
* Krawczynski et al. (2011) Krawczynski, H., Garson, A., Guo, Q., et al. 2011, Astroparticle Physics, 34, 550, doi: 10.1016/j.astropartphys.2010.12.001
* Li et al. (2019) Li, W., Xing, Y., Yu, Y., et al. 2019, Journal of Astronomical Telescopes, Instruments, and Systems, 5, 019003, doi: 10.1117/1.JATIS.5.1.019003
* MacDonald & Marscher (2018) MacDonald, N. R., & Marscher, A. P. 2018, ApJ, 862, 58, doi: 10.3847/1538-4357/aacc62
* Maraschi et al. (1994) Maraschi, L., Grandi, P., Urry, C. M., et al. 1994, ApJ, 435, L91, doi: 10.1086/187602
* Marshall et al. (2019) Marshall, H., Jahoda, K., Krawczynski, H., & Kislat, F. 2019, in AAS/High Energy Astrophysics Division, AAS/High Energy Astrophysics Division, 109.59
* Marshall et al. (2019) Marshall, H. L., Garner, A., Heine, S. N., et al. 2019, in UV, X-Ray, and Gamma-Ray Space Instrumentation for Astronomy XXI, Vol. 11118, International Society for Optics and Photonics, 111180A
* Masnou et al. (1992) Masnou, J. L., Wilkes, B. J., Elvis, M., McDowell, J. C., & Arnaud, K. A. 1992, A&A, 253, 35
* Matt et al. (1996) Matt, G., Feroci, M., Rapisarda, M., & Costa, E. 1996, Radiation Physics and Chemistry, 48, 403, doi: 10.1016/0969-806X(95)00472-A
* Mignani et al. (2019) Mignani, R., Shearer, A., Słowikowska, A., & Zane, S. 2019, Astronomical Polarisation from the Infrared to Gamma Rays, Vol. 460 (Springer Cham), doi: 10.1007/978-3-030-19715-5
* Moderski et al. (2005) Moderski, R., Sikora, M., Coppi, P. S., & Aharonian, F. 2005, MNRAS, 363, 954, doi: 10.1111/j.1365-2966.2005.09494.x
* Nichols & Linsky (1996) Nichols, J. S., & Linsky, J. L. 1996, AJ, 111, 517, doi: 10.1086/117803
* Ostorero et al. (2004) Ostorero, L., Villata, M., & Raiteri, C. M. 2004, A&A, 419, 913, doi: 10.1051/0004-6361:20035813
* Padovani et al. (2018) Padovani, P., Giommi, P., Resconi, E., et al. 2018, MNRAS, 480, 192, doi: 10.1093/mnras/sty1852
* Pal et al. (2020) Pal, M., Kushwaha, P., Dewangan, G. C., & Pawar, P. K. 2020, ApJ, 890, 47, doi: 10.3847/1538-4357/ab65ee
* Paliya et al. (2015) Paliya, V. S., Sahayanathan, S., & Stalin, C. S. 2015, ApJ, 803, 15, doi: 10.1088/0004-637X/803/1/15
* Paliya et al. (2018) Paliya, V. S., Zhang, H., Böttcher, M., et al. 2018, ApJ, 863, 98, doi: 10.3847/1538-4357/aad1f0
* Palma et al. (2011) Palma, N. I., Böttcher, M., de la Calle, I., et al. 2011, ApJ, 735, 60, doi: 10.1088/0004-637X/735/1/60
* Paltani et al. (1998) Paltani, S., Courvoisier, T. J. L., & Walter, R. 1998, A&A, 340, 47. https://arxiv.org/abs/astro-ph/9809113
* Paul et al. (2016) Paul, B., Gopala Krishna, M. R., & Puthiya Veetil, R. 2016, in 41st COSPAR Scientific Assembly, Vol. 41, E1.15–8–16
* Perlman et al. (2008) Perlman, E., Addison, B., Georganopoulos, M., Wingert, B., & Graff, P. 2008, in Blazar Variability across the Electromagnetic Spectrum (Proceedings of the Workshop on Blazar Variability across the Electromagnetic Spectrum. April 22-25, 2008 Palaiseau, France. Published online at http://pos.sissa.it, id.9), 9. https://arxiv.org/abs/0807.2119
* Pian et al. (1999) Pian, E., Urry, C. M., Maraschi, L., et al. 1999, ApJ, 521, 112, doi: 10.1086/307548
* Raiteri et al. (2006) Raiteri, C. M., Villata, M., Kadler, M., et al. 2006, A&A, 452, 845, doi: 10.1051/0004-6361:20054409
* Raiteri et al. (2005) Raiteri, C. M., Villata, M., Ibrahimov, M. A., et al. 2005, A&A, 438, 39, doi: 10.1051/0004-6361:20042567
* Rani et al. (2019) Rani, B., Zhang, H., Hunter, S. D., et al. 2019, BAAS, 51, 348. https://arxiv.org/abs/1903.04607
* Ravasio et al. (2003) Ravasio, M., Tagliaferri, G., Ghisellini, G., et al. 2003, A&A, 408, 479, doi: 10.1051/0004-6361:20031015
* Rothschild et al. (1998) Rothschild, R. E., Blanco, P. R., Gruber, D. E., et al. 1998, ApJ, 496, 538, doi: 10.1086/305377
* Roustazadeh & Böttcher (2012) Roustazadeh, P., & Böttcher, M. 2012, ApJ, 750, 26, doi: 10.1088/0004-637X/750/1/26
* Rybicki & Lightman (1979) Rybicki, G. B., & Lightman, A. P. 1979, Radiative processes in astrophysics (John Wiley & Sons)
* Sikora et al. (1994) Sikora, M., Begelman, M. C., & Rees, M. J. 1994, ApJ, 421, 153, doi: 10.1086/173633
* Sikora et al. (1997) Sikora, M., Madejski, G., Moderski, R., & Poutanen, J. 1997, ApJ, 484, 108, doi: 10.1086/304305
* Smith et al. (2004) Smith, J. E., Robinson, A., Alexander, D. M., et al. 2004, MNRAS, 350, 140, doi: 10.1111/j.1365-2966.2004.07610.x
* Stokes (1851) Stokes, G. G. 1851, Trans. Cambridge Philos., 9, 399
* Summerlin & Baring (2012) Summerlin, E. J., & Baring, M. G. 2012, ApJ, 745, 63, doi: 10.1088/0004-637X/745/1/63
* Talavera (2009) Talavera, A. 2009, Ap&SS, 320, 177, doi: 10.1007/s10509-008-9760-4
* Trippe (2019) Trippe, S. 2019, When Polarimetry Made History, Vol. 460 (Springer Cham), 1, doi: 10.1007/978-3-030-19715-5_1
* Truemper (1993) Truemper, J. 1993, Science, 260, 1769, doi: 10.1126/science.260.5115.1769
* Urry & Padovani (1995) Urry, C. M., & Padovani, P. 1995, PASP, 107, 803, doi: 10.1086/133630
* Villata & Raiteri (1999) Villata, M., & Raiteri, C. M. 1999, A&A, 347, 30
* Wardle et al. (1998) Wardle, J. F. C., Homan, D. C., Ojha, R., & Roberts, D. H. 1998, Nature, 395, 457, doi: 10.1038/26675
* Westfold (1959) Westfold, K. C. 1959, ApJ, 130, 241, doi: 10.1086/146713
* Zhang (2017) Zhang, H. 2017, Galaxies, 5, 32, doi: 10.3390/galaxies5030032
* Zhang (2019) —. 2019, Galaxies, 7, 85, doi: 10.3390/galaxies7040085
* Zhang & Böttcher (2013) Zhang, H., & Böttcher, M. 2013, ApJ, 774, 18, doi: 10.1088/0004-637X/774/1/18
* Zhang et al. (2014) Zhang, H., Chen, X., & Böttcher, M. 2014, ApJ, 789, 66, doi: 10.1088/0004-637X/789/1/66
* Zhang et al. (2019a) Zhang, H., Fang, K., Li, H., et al. 2019a, ApJ, 876, 109, doi: 10.3847/1538-4357/ab158d
* Zhang et al. (2019b) Zhang, S., Santangelo, A., Feroci, M., et al. 2019b, Science China Physics, Mechanics, and Astronomy, 62, 29502, doi: 10.1007/s11433-018-9309-2
# A Search for correlations between turbulence and star formation in LITTLE
THINGS dwarf irregular galaxies
Deidre A. Hunter (Lowell Observatory, 1400 West Mars Hill Road, Flagstaff,
Arizona 86001, USA), Bruce G. Elmegreen (IBM T. J. Watson Research Center,
1101 Kitchawan Road, Yorktown Heights, New York, USA), Haylee Archer (Lowell
Observatory, 1400 West Mars Hill Road, Flagstaff, Arizona 86001, USA),
Caroline E. Simpson (Department of Physics, Florida International University,
CP 204, 11200 SW 8th St, Miami, Florida 33199, USA), and Phil Cigan (George
Mason University, 4400 University Dr., Fairfax, VA 22030-4444, USA)
###### Abstract
Turbulence has the potential for creating gas density enhancements that
initiate cloud and star formation (SF), and it can be generated locally by SF.
To study the connection between turbulence and SF, we looked for relationships
between SF traced by FUV images, and gas turbulence traced by kinetic energy
density (KED) and velocity dispersion ($v_{disp}$) in the LITTLE THINGS sample
of nearby dIrr galaxies. We performed 2D cross-correlations between FUV and
KED images, measured cross-correlations in annuli to produce correlation
coefficients as a function of radius, and determined the cumulative
distribution function of the cross correlation value. We also plotted on a
pixel-by-pixel basis the local excess KED, $v_{disp}$, and H i mass surface
density, $\Sigma_{\rm HI}$, as determined from the respective values with the
radial profiles subtracted, versus the excess SF rate density $\Sigma_{\rm
SFR}$, for all regions with positive excess $\Sigma_{\rm SFR}$. We found that
$\Sigma_{\rm SFR}$ and KED are poorly correlated. The excess KED associated
with SF implies a $\sim 0.5$% efficiency for supernova energy to pump local H
i turbulence on the scale of resolution here, which is a factor of $\sim 2$
too small to account for all of the turbulence on a galactic scale. The excess $v_{disp}$
in SF regions is also small, only $\sim 0.37$ km s$^{-1}$. The local excess in
$\Sigma_{\rm HI}$ corresponding to an excess in $\Sigma_{\rm SFR}$ is
consistent with an H i consumption time of $\sim 1.6$ Gyr in the inner parts
of the galaxies. The similarity between this timescale and the consumption
time for CO implies that CO-dark molecular gas has comparable mass to H i in
the inner disks.
galaxies: irregular — galaxies: star formation — galaxies: ISM — galaxies:
kinematics and dynamics
## 1 Introduction
The gas in the inner parts of spiral galaxies is gravitationally unstable to
the formation of clouds that can go on to form stars (Toomre, 1964; Kennicutt,
1989). However, in dwarf irregular (dIrr) galaxies, the atomic gas densities
are much lower than in spirals and are apparently stable against this
instability (Hunter & Plummer, 1996; Meurer et al., 1996; van Zee et al.,
1997; Hunter et al., 2011). Furthermore, in inner spiral disks star formation
increases as the gas density increases (Bigiel et al., 2008), while in dwarfs
and outer spiral disks the atomic gas density cannot predict star formation
rates (SFRs, Bigiel et al., 2010). So, what drives star formation in dIrr
galaxies?
One process for creating clouds is compression of gas in a supersonically
turbulent medium (Elmegreen, 1993; Mac Low & Klessen, 2004). There is
extensive evidence for interstellar turbulence in galaxies, and turbulence in
typical dIrrs has been shown to be transonic (Burkhart et al., 2010; Maier et
al., 2017) while that in spirals is generally supersonic (Maier et al., 2016).
Furthermore, various distributions in stellar, cluster, and cloud properties
in dwarfs are consistent with sampling a fractal turbulent gas, including
composite cumulative H ii region luminosity functions (Youngblood & Hunter,
1999; Kingsburgh & McCall, 1998), stellar disk power spectra (Willett et al.,
2005), mass functions of clouds and star clusters (Elmegreen & Efremov, 1997;
Hunter et al., 2003; Mac Low & Klessen, 2004), H$\alpha$ probability
distribution functions (Hunter & Elmegreen, 2004), and the correlation between
region size and the star formation time scale (Efremov & Elmegreen, 1998). Dib
& Burkert (2005) found evidence for scales in the interstellar medium (ISM) of
Holmberg II less than 6 kpc in size that they interpret as due to a turbulence
driver acting on that scale. And, Zhang et al. (2012) showed from H i spatial
power spectra that either non-stellar power sources are playing a fundamental
role in driving the ISM turbulence or the nonlinear development of turbulent
structures has little to do with the driving sources. In addition, Hunter et
al. (2001, 2011) have found regions of high velocity dispersion in the H i
distribution of some dIrr galaxies that correlate with a deficit of H i in a
manner suggestive of long-range, turbulent pressure equilibrium (Piontek &
Ostriker, 2005).
Turbulence can create density enhancements that initiate cloud formation
(Krumholz & McKee, 2005), but turbulence also heats gas, which can make it
harder to form clouds (Struck & Smith, 1999). So, how important is turbulence
in driving star formation in dwarfs? It could be essential in outer disks
where gas self-gravity is weak (Elmegreen & Hunter, 2006). Also, a transition
from subsonic to supersonic turbulence in the ISM could be the cause of the
transition in the Schmidt-Kennicutt star formation rate-gas density
relationship from inefficient star formation at low gas surface densities to
star formation at higher densities (Kraljic et al., 2014).
Conversely, how important is star formation in driving turbulence? Simulations
suggest that stellar feedback and supernovae drive turbulence on the scale of
the galaxy thickness (Joung et al., 2009; Kim & Ostriker, 2015), and it may
drive turbulence in molecular clouds (Padoan et al., 2016), along with cloud
self-gravity (Mac Low et al., 2017; Ibáñez-Mejía et al., 2017). Feedback
destroys molecular clouds as well (Kim et al., 2018). Models also suggest
feedback controls the SFR by adjusting the disk thickness and midplane density
(Ostriker et al., 2010) or by compressing nearby clouds, causing them to
collapse (Deharveng et al., 2012; Palmeirim et al., 2017; Egorov et al.,
2017). On a galactic scale, feedback and self-gravity operate together to
drive turbulence (e.g., Goldbaum et al., 2016; Krumholz et al., 2018). These
models are uncertain, however. Other simulations show no need for star
formation to drive turbulence because they reproduce the velocity dispersion
from self-gravity alone; the only thing local feedback needs to do is destroy
the clouds where young stars form, preventing the SFR from getting too large
(Bournaud et al., 2010; Combes et al., 2012; Hopkins et al., 2011).
Observations are usually not decisive about the connection between the SFR and
turbulence. In a study of local dwarfs and low mass spirals, Stilp et al.
(2013) found a correlation between the core velocity dispersion in H i line
profiles and the H i surface density, suggestive of driving by gravitational
instabilities, but they also found a correlation with SFR at $\Sigma_{\rm
SFR}>10^{-4}\;M_{\odot}$ yr$^{-1}$ kpc$^{-2}$. Stilp et al. (2013) show that both the H
i velocity dispersion and $\Sigma_{\rm SFR}$ decrease with radius in a galaxy;
that makes correlations between these quantities ambiguous, since both could
depend on a third parameter that varies with radius rather than on each other.
Zhou et al. (2017) studied 8 local galaxies with resolved spectroscopy and
showed on a pixel level that the velocity dispersion of ionized gas does not
change over a factor of $\sim 40$ in SFR per unit area. Also for several
hundred local galaxies in the same survey, Varidel et al. (2020) found a very
small correlation between the galaxy-average vertical velocity dispersion of
ionized gas and the total SFR, with the dispersion increasing by only 6 km s-1
for SFRs between $10^{-3}$ and $10\;M_{\odot}$ yr-1. This contrasts with
observations of high redshift galaxies where these authors show strong
increases in dispersion with SFR density and total rate, respectively, for
rate densities larger than $\sim 0.1\;M_{\odot}$ yr-1 kpc-2 and rates larger
than $\sim 3\;M_{\odot}$ yr-1. This high-redshift correlation was earlier
studied by several groups, including Lehnert et al. (2013) who observed that
the velocity dispersion of ionized gas increases as the square root of the SFR
per unit area. Lehnert et al. (2013) concluded that star formation was the
main driver of turbulence and that it was also sufficient to maintain marginal
stability in a disk. On the other hand, Übler et al. (2019) interpreted the
increase in the ionized gas velocity dispersion with SFR density for high
redshift galaxies as the result of gravitational instabilities, following the
theory in Krumholz et al. (2018).
Bacchini et al. (2020) consider radial profiles of turbulent speeds and SFRs
in local spiral galaxies and account for all of the gas turbulence using
supernovae from young massive stars. They get more effective turbulence
driving than other studies because they include the radial increase in disk
thickness, which decreases the dissipation rate.
In this paper we look for evidence of a spatial correlation between star
formation and turbulence in the LITTLE THINGS sample of nearby dIrr galaxies.
A spatial correlation could be either a cause of star formation through the
production of a gas cloud or a result of star formation through mechanical
energy input to the local ISM through feedback from stars. We construct
turbulent Kinetic Energy Density (KED) maps from the kinetic energy associated
with the bulk motions of the gas: velocity from H i velocity dispersion
(moment 2) and mass from integrated column density (moment 0) maps, per unit
area in the galaxy. We cross-correlate the KED maps with far-ultraviolet (FUV)
images that trace star formation over the past 200 Myr. Because we are using
intensity-weighted velocity dispersions, the “turbulence” includes all
motions of the gas, both thermal and turbulent. This follows the two-
dimensional (2D) cross-correlation method used by Ioannis Bagetakos (private
communication) in analysis of the spiral galaxy NGC 2403.
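The moment 0 and moment 2 maps that enter the KED construction can be sketched as below: a minimal illustration of intensity-weighted moments computed from a position-position-velocity cube. The function name is ours, and the noise masking and beam handling that a real H i pipeline requires are omitted.

```python
import numpy as np

def moment_maps(cube, vel):
    """Compute moment 0 (integrated intensity), moment 1 (intensity-weighted
    velocity), and moment 2 (intensity-weighted velocity dispersion) from an
    H I data cube with axes (velocity, y, x).  `vel` is the channel velocity
    axis in km/s.  A minimal sketch; real pipelines also mask out noise."""
    dv = abs(vel[1] - vel[0])
    mom0 = cube.sum(axis=0) * dv                               # e.g. K km/s
    weight = cube.sum(axis=0)                                  # per-pixel normalization
    mom1 = (cube * vel[:, None, None]).sum(axis=0) / weight    # km/s
    var = (cube * (vel[:, None, None] - mom1) ** 2).sum(axis=0) / weight
    mom2 = np.sqrt(var)                                        # km/s
    return mom0, mom1, mom2
```

For a pixel whose line profile is a Gaussian of dispersion $\sigma$, moment 2 recovers $\sigma$, which is what makes it usable as $v_{disp}$ in the KED maps.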
We also isolate turbulence in the vicinity of a SF region and determine the
excess KED and velocity dispersion from that region alone. This method removes
any background turbulence that may be generated by other means, such as
gravitational instabilities and collapse.
## 2 Data
LITTLE THINGS (funded in part by the National Science Foundation through
grants AST-0707563, AST-0707426, AST-0707468, and AST-0707835 to US-based
LITTLE THINGS team members, and with generous technical and logistical support
from the National Radio Astronomy Observatory) is a multi-wavelength survey of
nearby dwarf galaxies (Hunter et al., 2012). The LITTLE THINGS sample
comprises 37 dIrr galaxies and 4 Blue Compact Dwarf (BCD) galaxies. The
galaxies are relatively nearby ($\leq$10.3 Mpc; 6″ is $\leq$300 pc), contain
gas so they have the potential for star formation, and are not companions to
larger galaxies. The sample also covers a large range in dwarf galactic
properties such as SFR and absolute magnitude.
We obtained H i observations of the LITTLE THINGS galaxies with the National
Science Foundation’s Karl G. Jansky Very Large Array (VLA; a facility of the
National Radio Astronomy Observatory, which is a facility of the National
Science Foundation operated under cooperative agreement by Associated
Universities, Inc.). The H
i-line data are characterized by high sensitivity ($\leq 1.1$ mJy beam$^{-1}$ per
channel), high spectral resolution (1.3 or 2.6 km s$^{-1}$), and high angular
resolution (typically 6″).
Ancillary data used here include far-ultraviolet (FUV) images obtained with
the NASA Galaxy Evolution Explorer satellite (GALEX, which was operated for
NASA by the California Institute of Technology under NASA contract NAS5-98034;
Martin et al., 2005). These images trace star formation over the past 200 Myr.
These data also yield integrated SFRs (Hunter et al., 2010) and
the radius at which we found the furthest out FUV knot $R_{\rm FUVknot}$ in
each galaxy (Hunter et al., 2016). The SFRs are normalized to the area within
one disk scale length $R_{\rm D}$, although star formation is usually found
beyond 1$R_{\rm D}$. $R_{\rm D}$ is measured from $V$-band surface brightness
profiles (Herrmann et al., 2013). Several of the LITTLE THINGS galaxies
without GALEX FUV images are not included in this study (DDO 155, DDO 165, IC
10, UGC 8508). Pixel values of FUV and $\Sigma_{\rm SFR}$ are not corrected
for extinction due to dust, which tends to be low in these low metallicity
galaxies.
The galaxy sample and characteristics that we use here are given in Table 1.
In some plots, we distinguish between those dIrrs that are classified as
Magellanic irregulars (dIm) and those that are classified as BCDs (Haro 29,
Haro 36, Mrk 178, VIIZw403).
Table 1: The Galaxy Sample. Notes: (a) Distance to the galaxy; references are given by Hunter et al. (2012). (b) Radius $R_{\rm H\alpha}$ of the furthest out detected H ii region in each galaxy, from Hunter & Elmegreen (2004); galaxies without H ii regions, or with H ii regions extending beyond the area imaged, do not have $R_{\rm H\alpha}$. (c) Radius $R_{\rm FUVknot}$ of the furthest out detected FUV knot in each galaxy, from Hunter et al. (2016); galaxies without GALEX images have no value for this radius. (d) Disk scale length $R_{\rm D}$ determined from the $V$-band image surface photometry of Herrmann et al. (2013); for galaxies with breaks in their surface brightness profiles, we have chosen the scale length that describes the primary underlying stellar disk. (e) Break radius $R_{\rm Br}$ where the $V$-band surface brightness profile changes slope, given by Herrmann et al. (2013); DDO 47 and DDO 210 do not have breaks in their surface brightness profiles. (f) SFR measured from the integrated FUV luminosity and normalized to the area within one $R_{\rm D}$, from Hunter et al. (2010); the normalization is independent of the radial extent of the FUV emission in a galaxy.
Galaxy | $D$ (a) (Mpc) | $M_V$ | $R_{\rm H\alpha}$ (b) (kpc) | $R_{\rm FUVknot}$ (c) (kpc) | $R_{\rm D}$ (d) (kpc) | $R_{\rm Br}$ (e) (kpc) | $\log{\rm SFR_{D}^{FUV}}$ (f) (M${}_{\sun}\;$yr$^{-1}$ kpc$^{-2}$) |
---|---|---|---|---|---|---|---
CVnIdwA | $3.6\pm 0.08$ | $-12.37\pm 0.09$ | 0.69 | 0.49$\pm$0.03 | 0.25$\pm$0.12 | 0.56$\pm$0.49 | $-1.77\pm 0.04$ |
DDO 43 | $7.8\pm 0.8$ | $-15.06\pm 0.22$ | 2.36 | 1.93$\pm$0.08 | 0.87$\pm$0.10 | 1.46$\pm$0.53 | $-2.20\pm 0.04$ |
DDO 46 | $6.1\pm 0.4$ | $-14.67\pm 0.16$ | 1.51 | 3.02$\pm$0.06 | 1.13$\pm$0.05 | 1.27$\pm$0.18 | $-2.45\pm 0.04$ |
DDO 47 | $5.2\pm 0.6$ | $-15.46\pm 0.24$ | 5.58 | 5.58$\pm$0.05 | 1.34$\pm$0.05 | $\cdots$ | $-2.38\pm 0.04$ |
DDO 50 | $3.4\pm 0.05$ | $-16.61\pm 0.03$ | $\cdots$ | 4.86$\pm$0.03 | 1.48$\pm$0.06 | 2.65$\pm$0.27 | $-1.81\pm 0.04$ |
DDO 52 | $10.3\pm 0.8$ | $-15.45\pm 0.17$ | 3.69 | 3.39$\pm$0.10 | 1.26$\pm$0.04 | 2.80$\pm$1.35 | $-2.53\pm 0.04$ |
DDO 53 | $3.6\pm 0.05$ | $-13.84\pm 0.03$ | 1.25 | 1.19$\pm$0.03 | 0.47$\pm$0.01 | 0.62$\pm$0.09 | $-1.96\pm 0.04$ |
DDO 63 | $3.9\pm 0.05$ | $-14.78\pm 0.03$ | 2.26 | 2.89$\pm$0.04 | 0.68$\pm$0.01 | 1.31$\pm$0.10 | $-2.05\pm 0.04$ |
DDO 69 | $0.8\pm 0.04$ | $-11.67\pm 0.11$ | 0.76 | 0.76$\pm$0.01 | 0.19$\pm$0.01 | 0.27$\pm$0.05 | $-2.22\pm 0.04$ |
DDO 70 | $1.3\pm 0.07$ | $-14.10\pm 0.12$ | 1.23 | 1.34$\pm$0.01 | 0.44$\pm$0.01 | 0.13$\pm$0.07 | $-2.17\pm 0.04$ |
DDO 75 | $1.3\pm 0.05$ | $-13.91\pm 0.08$ | 1.17 | 1.38$\pm$0.01 | 0.18$\pm$0.01 | 0.71$\pm$0.08 | $-0.99\pm 0.04$ |
DDO 87 | $7.7\pm 0.5$ | $-14.98\pm 0.15$ | 3.18 | 4.23$\pm$0.07 | 1.21$\pm$0.02 | 0.99$\pm$0.11 | $-2.61\pm 0.04$ |
DDO 101 | $6.4\pm 0.5$ | $-15.01\pm 0.16$ | 1.23 | 1.23$\pm$0.06 | 0.97$\pm$0.06 | 1.16$\pm$0.11 | $-2.84\pm 0.04$ |
DDO 126 | $4.9\pm 0.5$ | $-14.85\pm 0.24$ | 2.84 | 3.37$\pm$0.05 | 0.84$\pm$0.13 | 0.60$\pm$0.05 | $-2.18\pm 0.04$ |
DDO 133 | $3.5\pm 0.2$ | $-14.75\pm 0.16$ | 2.60 | 2.20$\pm$0.03 | 1.22$\pm$0.04 | 2.25$\pm$0.24 | $-2.60\pm 0.04$ |
DDO 154 | $3.7\pm 0.3$ | $-14.19\pm 0.16$ | 1.73 | 2.65$\pm$0.04 | 0.48$\pm$0.02 | 0.62$\pm$0.09 | $-1.77\pm 0.04$ |
DDO 167 | $4.2\pm 0.5$ | $-12.98\pm 0.25$ | 0.81 | 0.70$\pm$0.04 | 0.22$\pm$0.01 | 0.56$\pm$0.11 | $-1.59\pm 0.04$ |
DDO 168 | $4.3\pm 0.5$ | $-15.72\pm 0.25$ | 2.24 | 2.25$\pm$0.04 | 0.83$\pm$0.01 | 0.72$\pm$0.07 | $-2.06\pm 0.04$ |
DDO 187 | $2.2\pm 0.07$ | $-12.68\pm 0.07$ | 0.30 | 0.42$\pm$0.02 | 0.37$\pm$0.06 | 0.28$\pm$0.05 | $-2.60\pm 0.04$ |
DDO 210 | $0.9\pm 0.04$ | $-10.88\pm 0.10$ | $\cdots$ | 0.29$\pm$0.01 | 0.16$\pm$0.01 | $\cdots$ | $-2.66\pm 0.04$ |
DDO 216 | $1.1\pm 0.05$ | $-13.72\pm 0.10$ | 0.42 | 0.59$\pm$0.01 | 0.52$\pm$0.01 | 1.77$\pm$0.45 | $-3.17\pm 0.04$ |
F564-V3 | $8.7\pm 0.7$ | $-13.97\pm 0.18$ | $\cdots$ | 1.24$\pm$0.08 | 0.63$\pm$0.09 | 0.73$\pm$0.40 | $-2.94\pm 0.04$ |
IC 1613 | $0.7\pm 0.05$ | $-14.60\pm 0.16$ | $\cdots$ | 1.77$\pm$0.01 | 0.53$\pm$0.02 | 0.71$\pm$0.12 | $-1.97\pm 0.04$ |
LGS 3 | $0.7\pm 0.08$ | $-9.74\pm 0.25$ | $\cdots$ | 0.32$\pm$0.01 | 0.16$\pm$0.01 | 0.27$\pm$0.08 | $-3.75\pm 0.04$ |
M81dwA | $3.6\pm 0.2$ | $-11.73\pm 0.13$ | $\cdots$ | 0.71$\pm$0.03 | 0.27$\pm$0.00 | 0.38$\pm$0.03 | $-2.30\pm 0.04$ |
NGC 1569 | $3.4\pm 0.2$ | $-18.24\pm 0.13$ | $\cdots$ | 1.14$\pm$0.03 | 0.46$\pm$0.02 | 0.85$\pm$0.24 | $-0.32\pm 0.04$ |
NGC 2366 | $3.4\pm 0.3$ | $-16.79\pm 0.20$ | 5.58 | 6.79$\pm$0.03 | 1.91$\pm$0.25 | 2.57$\pm$0.80 | $-2.04\pm 0.04$ |
NGC 3738 | $4.9\pm 0.5$ | $-17.12\pm 0.24$ | 1.48 | 1.21$\pm$0.05 | 0.77$\pm$0.01 | 1.16$\pm$0.20 | $-1.52\pm 0.04$ |
NGC 4163 | $2.9\pm 0.04$ | $-14.45\pm 0.03$ | 0.88 | 0.47$\pm$0.03 | 0.32$\pm$0.00 | 0.71$\pm$0.48 | $-1.89\pm 0.04$ |
NGC 4214 | $3.0\pm 0.05$ | $-17.63\pm 0.04$ | $\cdots$ | 5.46$\pm$0.03 | 0.75$\pm$0.01 | 0.83$\pm$0.14 | $-1.11\pm 0.04$ |
Sag DIG | $1.1\pm 0.07$ | $-12.46\pm 0.14$ | 0.51 | 0.65$\pm$0.01 | 0.32$\pm$0.05 | 0.57$\pm$0.14 | $-2.40\pm 0.04$ |
WLM | $1.0\pm 0.07$ | $-14.39\pm 0.15$ | 1.24 | 2.06$\pm$0.01 | 1.18$\pm$0.24 | 0.83$\pm$0.16 | $-2.78\pm 0.04$ |
Haro 29 | $5.8\pm 0.3$ | $-14.62\pm 0.11$ | 0.96 | 0.86$\pm$0.06 | 0.33$\pm$0.00 | 1.15$\pm$0.26 | $-1.21\pm 0.04$ |
Haro 36 | $9.3\pm 0.6$ | $-15.91\pm 0.15$ | 1.06 | 1.79$\pm$0.09 | 1.01$\pm$0.00 | 1.16$\pm$0.13 | $-1.88\pm 0.04$ |
Mrk 178 | $3.9\pm 0.5$ | $-14.12\pm 0.26$ | 1.17 | 1.45$\pm$0.04 | 0.19$\pm$0.00 | 0.38$\pm$0.00 | $-1.17\pm 0.04$ |
VIIZw403 | $4.4\pm 0.07$ | $-14.27\pm 0.04$ | 1.27 | 0.33$\pm$0.04 | 0.53$\pm$0.02 | 1.02$\pm$0.29 | $-1.80\pm 0.04$ |
## 3 Cross-correlations
### 3.1 Two-dimensional
KED and FUV images were the inputs to the 2D cross-correlation. We
geometrically transformed the FUV image to match the orientation and field of
view (FOV) of the H i map using OHGEO in the Astronomical Image Processing
System (AIPS) and then smoothed it to the H i beam using SMOTH in AIPS. We
also blanked the pixels outside of the galaxy FUV emission, replacing the
blanked pixels with zeros, so that pure noise would not add to the correlation
coefficient $C_{coef}$. We constructed the KED maps as $0.5\times N_{\rm
HI}\times v_{disp}^{2}$, where $N_{\rm HI}$ is the H i column density in
hydrogen atoms per cm2 and $v_{disp}$ is the velocity dispersion in km s-1.
The conversion from counts in the KED maps to ergs pc-2 is given for each
galaxy in Table 2. Prior to executing the cross-correlations, we scaled both
the FUV and KED images so that the pixel values were the same order of
magnitude (roughly 100). These KED values determined from H i column density
have not been multiplied by 1.36 to include Helium and heavy elements. This
factor will be used later when the efficiency of KED generation is calculated.
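As an illustrative sketch of this construction (function and array names are our own, not from the analysis software), the KED map and the pre-correlation rescaling could be computed as:

```python
import numpy as np

def ked_map(n_hi, v_disp):
    """Kinetic energy density map: 0.5 * N_HI * v_disp**2.

    n_hi   : H I column density (moment 0) map, atoms cm-2
    v_disp : velocity dispersion (moment 2) map, km s-1
    The 1.36 factor for Helium and heavy elements is deliberately NOT
    applied here; the text applies it only later, for efficiencies.
    """
    return 0.5 * n_hi * v_disp ** 2

def rescale(img, target=100.0):
    """Scale an image so its peak pixel value is of order `target`,
    as done to both the FUV and KED images before cross-correlating."""
    return img * (target / np.nanmax(img))
```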
We decided not to remove the underlying exponential disks for the 2D cross-
correlations. Although the SFR drops off with radius, the FUV image consists
of knots of young stars and there can be large FUV knots in the outer disks.
For the H i moment 0 and 2 maps, the H i surface density and velocity
dispersion do, on average, also change with radius, but not in a regular and
homogeneous fashion. Thus, exponential structure could remain in the 2D
$C_{coef}$ maps.
Here, a $C_{coef}$ of 1 is perfectly correlated such that every bump and
wiggle in one map is exactly reproduced in the other. A value of $-1$ is
perfectly anti-correlated. The amplitude of the peak is a measure of the
coincidence of features in the two images. If the KED image correlates well
with the local FUV flux, then the peak will be high and the breadth of the
peak will reflect the rms sum of the average feature sizes in the two images.
We used the IDL routines correl_images and corrmat_analyze through a Python
wrapper. These routines repeatedly shift one image relative to the other to
yield a map of $C_{coef}$. The peak pixel value in the
$C_{coef}$ map is the adopted $C_{coef}$. For example, for NGC 2366, we did a
150$\times$150 array of offsets. That is, we calculated the $C_{coef}$ for x,y
offset of $-150$, $-150$ to x,y offset of $+150$, +$150$. This produces a
matrix of 301$\times$301 pixels. The peak, in this case, is at pixel 145, 147
and has a value of 0.3 compared to the center pixel 151, 151 value of 0.26.
Thus, the maximum correlation is achieved when the FUV image is shifted
relative to the KED image by the offset corresponding to x,y of 145, 147. We
checked the $C_{coef}$ of a piece of one of the galaxies “by hand” with a
Fortran program we wrote and we obtained the same peak $C_{coef}$. The peak
$C_{coef}$ and x,y shifts to that pixel are given in Table 2. The cross-
correlation matrices are shown in Figure 1.
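The offset search itself can be sketched in Python. This is a simplified stand-in for the IDL routines, using an overlap-based Pearson coefficient at each shift (function names are illustrative, not the code actually used):

```python
import numpy as np

def ccoef(a, b):
    """Pearson correlation coefficient between two equal-size images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def ccoef_map(img1, img2, max_shift):
    """Shift img2 relative to img1 over x,y offsets in [-max_shift, +max_shift]
    and record the correlation of the overlap at each offset.  Returns a
    (2*max_shift+1)^2 matrix; the peak pixel is the adopted C_coef."""
    n = 2 * max_shift + 1
    out = np.zeros((n, n))
    ny, nx = img1.shape
    for iy, dy in enumerate(range(-max_shift, max_shift + 1)):
        for ix, dx in enumerate(range(-max_shift, max_shift + 1)):
            y0, y1 = max(0, dy), min(ny, ny + dy)
            x0, x1 = max(0, dx), min(nx, nx + dx)
            sub1 = img1[y0:y1, x0:x1]
            sub2 = img2[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
            out[iy, ix] = ccoef(sub1, sub2)
    return out
```

For an image correlated against itself, the peak of this matrix sits at the central (zero-offset) pixel with a value of 1.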
The shift in x,y is also given in Table 2 relative to the disk scale-length
$R_{\rm D}$, for better comparison to the size of the galaxy. The shifts vary
between 0.02$R_{\rm D}$ (IC 1613) and 4.75$R_{\rm D}$ (Haro 36). 50% of the
galaxies (18) have shifts less than 0.5$R_{\rm D}$, 33% (12) have shifts
between 0.5$R_{\rm D}$ and 1$R_{\rm D}$, and 17% (6) have shifts greater than
1$R_{\rm D}$.
Figure 1: Cross-correlation matrices for each galaxy. The images are displayed
with the same color scale from $C_{coef}$ of zero to 0.8.
Ioannis Bagetakos (private communication) examined the cross-correlation
method on NGC 2403 as a function of image scale, focusing on scales of 0.23
to 3 kpc, and found correlations on various scales for different images such
as star formation tracers, dust, and H i. Thus, we divided our images up into
square sub-regions 16$\times$16, 32$\times$32, 64$\times$64, and
128$\times$128 pixels and computed the $C_{coef}$ in each box. The coefficient
images constructed from this just look like noise and show no particular
connection to the FUV image. So we do not consider them further.
We also applied alternative methods to one galaxy, NGC 2366 (max
$C_{coef}$ of 0.3), to examine the robustness of our approach. This galaxy was
chosen for initial and special tests because it has a giant H ii region and
the H i velocity dispersion is high around this region, making it an
interesting candidate for looking for a star formation-turbulence correlation.
One problem with cross-correlations, in particular, can be caused by moderate
signal-to-noise (S/N) pixels dampening the value of $C_{coef}$. One simple
diagnostic is a plot of the pixel values of the KED image against the pixel
values of the FUV image, given that the FUV image has been geometrically
transformed and smoothed to the same pixel grid and beam size as the KED
image. We normalized the pixel values in each image to range from 0 to 1, and
this plot is shown in Figure 2. If there were a notable correlation, we would
expect a cluster of points in the top right corner. If the images were anti-
correlated, we would expect clusters of points around the top left and bottom
right. We do not see either of these extremes. While there are some points in
the top left corner, they do not form a distinct cluster; rather, they appear
consistent with the tail of a simple distribution of points ranging from 0
to 1.
Figure 2: Values of pixels in KED image plotted against values of pixels in
FUV image for NGC 2366. The images have each been normalized so that pixel
values are between 0 and 1. A notable correlation would be expected to appear
as a cluster of points in the top right corner (or, in the other corners near
values of 1.0 for anti-correlation), however this evidence is not seen.
We also applied variations of weighted normalized cross-correlation and a
wavelet analysis to NGC 2366. The zero-mean normalized cross-correlation
coefficient (ZNCC) is basically the standard Pearson Correlation measure
$\rho$ for a 2D image. Applied to NGC 2366, ZNCC is 0.26. Like $\rho$ or $r$
coefficients, $+1$ is perfect correlation, $-1$ is perfect anti-correlation,
and so 0.26, which is what we found for the max $C_{coef}$, implies a not very
significant degree of correlation. One way to deal with pixels with low S/N is
to use a weighted normalized cross-correlation (WNCC). For this test, we
weighted the pixels by the ratio of their signal to the standard deviation of
values in the map, which is effective at down-weighting background noise
pixels. Using this method, we obtain a WNCC value of -0.023 – effectively
zero, implying no significant correlation between the two images.
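For reference, the ZNCC and WNCC statistics can be sketched as follows (array names are illustrative; the weights w would be the signal divided by the standard deviation of the map, as described above):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation: the standard Pearson
    coefficient computed over all pixels of two aligned images."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def wncc(a, b, w):
    """Weighted normalized cross-correlation: a weighted Pearson
    coefficient in which low-S/N pixels receive small weights w."""
    ma = (w * a).sum() / w.sum()
    mb = (w * b).sum() / w.sum()
    cov = (w * (a - ma) * (b - mb)).sum()
    va = (w * (a - ma) ** 2).sum()
    vb = (w * (b - mb) ** 2).sum()
    return cov / np.sqrt(va * vb)
```

With uniform weights, wncc reduces exactly to zncc.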
Figure 3: Cross-correlation coefficient $r_{w}$ for NGC 2366 KED and FUV
images convolved with progressively larger ‘Mexican hat’ kernels. We find no
significant correlation between the two images at any resolved scale.
For our final test on NGC 2366, we used a wavelet analysis to see if the
degree of correlation depends on scale/resolution. In this process, each image
is convolved with progressively larger 2D kernels or wavelets, in this case a
Ricker or ‘Mexican Hat’ wavelet, and the cross-correlation is calculated at
each of these scales or ‘lags’. The result for NGC 2366 is shown in Figure 3.
Strong correlations at a particular spatial scale would be evidenced by
wavelet cross correlation $r_{w}$ values of $\gtrsim$ 0.6 at that scale.
Eventually, as the images are convolved to large enough scales, they become
less resolved and therefore naturally correlate. Figure 3 indicates that there
is not much difference at any of the resolved scales. Using a slightly
modified wavelet from the literature (e.g., Ossenkopf et al., 2008), there may
be evidence of a somewhat more prominent correlation between the images at 30
pixel scales (45″), but $r_{w}$ is still not significant.
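A minimal sketch of this wavelet procedure, assuming a standard 2D Ricker kernel and numpy-only FFT convolution (edge handling and kernel normalization are simplified relative to any production analysis):

```python
import numpy as np

def conv_same(img, ker):
    """'Same'-size linear convolution via zero-padded FFTs (numpy only)."""
    ny, nx = img.shape
    ky, kx = ker.shape
    fy, fx = ny + ky - 1, nx + kx - 1
    full = np.fft.irfft2(np.fft.rfft2(img, (fy, fx)) *
                         np.fft.rfft2(ker, (fy, fx)), (fy, fx))
    y0, x0 = (ky - 1) // 2, (kx - 1) // 2
    return full[y0:y0 + ny, x0:x0 + nx]

def ricker2d(sigma, size=None):
    """2D Ricker ('Mexican hat') kernel of width sigma, in pixels."""
    if size is None:
        size = int(8 * sigma) | 1        # odd size covering the wavelet
    y, x = np.indices((size, size)) - size // 2
    r2 = (x ** 2 + y ** 2) / (2.0 * sigma ** 2)
    return (1.0 - r2) * np.exp(-r2)

def wavelet_corr(img1, img2, sigmas):
    """Convolve both images with progressively larger Ricker wavelets and
    return the correlation coefficient r_w at each scale ('lag')."""
    rs = []
    for s in sigmas:
        k = ricker2d(s)
        c1 = conv_same(img1, k).ravel()
        c2 = conv_same(img2, k).ravel()
        rs.append(np.corrcoef(c1, c2)[0, 1])
    return rs
```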
Thus, we conclude that no matter how we look at the FUV and KED images of NGC
2366, the two images are mildly correlated at best and this does not change
much with scale.
The width of the peak signal in a cross-correlation matrix is expected to
represent the scale of the correlation. However, in our matrices, the width is
not well defined. The issue is demonstrated in Figure 4 where we show a radial
plot and row and column cuts through the peak of the WLM $C_{coef}$ matrix.
The peak is, of course, obvious, but the radial plot is messy and the single
row and column cuts show a complex background. The main feature in the cross-
correlation maps is the exponential disk, because both the KED and the SFR
density peak in the center of the exponential disk. The width of the
$C_{coef}$ in Figure 4 is influenced more by the width of the disk than the
scale of the 2D correlation. What to take as the baseline for a fit to the
peak is also not clear. Therefore, we do not consider the widths of the peaks
further here.
Figure 4: Cuts through the peak in the $C_{coef}$ matrix of WLM. Top: Radial profile. Middle: Row plot. Bottom: Column plot.
Table 2: Correlation Coefficients and Offsets. Notes: $^{a}$ Offset of the pixel with the maximum $C_{coef}$ from the center of the array, in pixels; the pixel scale is 1.5″ except for DDO 216 and Sag DIG, where it is 3.5″. $^{b}$ Offsets in pixels. $^{c}$ Constant by which to convert counts in the KED maps to ergs pc-2.
Galaxy | Max $C_{coef}$ | X shift$^{a}$ | Y shift$^{a}$ | Shift/$R_{\rm D}$ | Offset (X$\times$Y)$^{b}$ | Calibration ($10^{41}$)$^{c}$
---|---|---|---|---|---|---
CVnIdwA | 0.77 | 3 | 2 | 0.38 | 75$\times$75 | 7.67
DDO 43 | 0.61 | 4 | 13 | 0.89 | 150$\times$150 | 18.33
DDO 46 | 0.57 | -5 | -9 | 0.40 | 150$\times$150 | 26.77
DDO 47 | 0.54 | 2 | -4 | 0.13 | 150$\times$150 | 9.32
DDO 50 | 0.41 | -2 | -8 | 0.14 | 300$\times$300 | 20.40
DDO 52 | 0.56 | 6 | -14 | 0.91 | 150$\times$150 | 24.86
DDO 53 | 0.58 | -4 | 5 | 0.36 | 150$\times$150 | 24.54
DDO 63 | 0.41 | 8 | -5 | 0.39 | 150$\times$150 | 18.81
DDO 69 | 0.50 | -15 | -9 | 0.54 | 150$\times$150 | 28.29
DDO 70 | 0.50 | -23 | 18 | 0.63 | 300$\times$300 | 4.83
DDO 75 | 0.52 | -8 | -2 | 0.43 | 300$\times$300 | 18.09
DDO 87 | 0.48 | -2 | 2 | 0.09 | 150$\times$150 | 18.73
DDO 101 | 0.59 | -15 | 5 | 0.76 | 150$\times$150 | 15.22
DDO 126 | 0.62 | 2 | 3 | 0.15 | 150$\times$150 | 22.95
DDO 133 | 0.65 | 1 | 2 | 0.05 | 150$\times$150 | 6.61
DDO 154 | 0.50 | 10 | -4 | 0.60 | 150$\times$150 | 17.69
DDO 167 | 0.72 | 9 | 11 | 1.97 | 150$\times$150 | 22.95
DDO 168 | 0.68 | -2 | 0 | 0.08 | 150$\times$150 | 19.36
DDO 187 | 0.74 | -8 | 0 | 0.35 | 150$\times$150 | 25.58
DDO 210 | 0.63 | 23 | 1 | 0.94 | 150$\times$150 | 8.77
DDO 216 | 0.63 | 20 | 15 | 0.90 | 75$\times$75 | 3.54
F564-V3 | 0.87 | 0 | -4 | 0.40 | 150$\times$150 | 8.69
IC 1613 | 0.33 | 1 | 2 | 0.02 | 300$\times$300 | 17.69
LGS 3 | 0.46 | 13 | -19 | 0.73 | 150$\times$150 | 8.05
M81dwA | 0.42 | -11 | -16 | 1.88 | 150$\times$150 | 18.01
NGC 1569 | 0.40 | 30 | 6 | 1.65 | 300$\times$300 | 29.08
NGC 2366 | 0.30 | -6 | -4 | 0.09 | 150$\times$150 | 21.51
NGC 3738 | 0.48 | -13 | 7 | 0.68 | 150$\times$150 | 25.50
NGC 4163 | 0.62 | 4 | 12 | 0.83 | 150$\times$150 | 15.46
NGC 4214 | 0.35 | -88 | 8 | 2.57 | 300$\times$300 | 18.17
SagDIG | 0.51 | -7 | 15 | 0.97 | 75$\times$75 | 1.84
WLM | 0.33 | 0 | -32 | 0.20 | 150$\times$150 | 22.95
Haro 29 | 0.63 | -21 | -1 | 2.69 | 150$\times$150 | 23.19
Haro 36 | 0.34 | -46 | -54 | 4.75 | 150$\times$150 | 21.91
Mrk 178 | 0.60 | 2 | -1 | 0.33 | 150$\times$150 | 25.98
VIIZw 403 | 0.68 | 1 | -3 | 0.19 | 150$\times$150 | 12.11
### 3.2 Radial profiles
We also calculated the $C_{coef}$ in annuli from the center of the galaxy
outward. The images were blanked outside of the target annulus; the annuli
were chosen to match those used by Hunter et al. (2012) to produce the H i
radial profiles. We normalized the pixel values in each annulus with respect to the
average in the annulus, so in effect large-scale variations, such as the
exponential fall-off with radius, are taken out. Then we measured the
$C_{coef}$ for the annulus. Figure 5 shows the $C_{coef}$ of the annuli as a
function of annulus distance from the center of the galaxy. The annuli used
the galaxy center, ellipticity, and major-axis position angle determined from
$V$-band images and given by Hunter et al. (2012).
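A sketch of the per-annulus measurement, under the assumption of a simple elliptical-annulus mask built from the disk parameters (function and argument names are illustrative):

```python
import numpy as np

def annulus_ccoef(img1, img2, cx, cy, r_in, r_out, ba=1.0, pa=0.0):
    """C_coef in one annulus: blank outside the annulus, normalize each
    image by its average inside the annulus, then correlate.

    cx, cy : galaxy center (pixels)
    ba     : minor-to-major axis ratio
    pa     : major-axis position angle (radians)
    These disk parameters play the role of the V-band values of
    Hunter et al. (2012)."""
    y, x = np.indices(img1.shape)
    dx, dy = x - cx, y - cy
    # rotate into the disk frame and stretch the minor axis by 1/ba
    xm = dx * np.cos(pa) + dy * np.sin(pa)
    ym = (-dx * np.sin(pa) + dy * np.cos(pa)) / ba
    r = np.hypot(xm, ym)
    mask = (r >= r_in) & (r < r_out)
    # normalizing by the annulus average removes large-scale radial trends
    a = img1[mask] / img1[mask].mean() - 1.0
    b = img2[mask] / img2[mask].mean() - 1.0
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
```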
We see a wide variety of profiles. The central points in NGC 4163 and in
VIIZw403 reach a $C_{coef}$ of nearly 0.95 and a few other galaxies have peaks
as high as 0.9. By contrast the peak in DDO 210 occurs in the outermost
annulus and only reaches a value of 0.14. In many galaxies the $C_{coef}$
drops in value with radius, but in many others it is relatively flat. In a few
galaxies the $C_{coef}$ drops precipitously from a relatively high value for
the inner-most annulus to near zero beyond that radius (DDO 167, F564-V3, Haro
36).
Figure 5: Correlation coefficient between FUV and KED images in annuli as a
function of distance from the center of the galaxy. The $C_{coef}$ profile is
plotted from 0 to 1 for all galaxies for ease of comparison, and the radius is
normalized by the disk scale length measured from the $V$-band image (Table
1). The pixel values in each annulus have been normalized by the average in
the annulus, so large-scale trends with radius have been removed.
## 4 Results
### 4.1 Cross-correlations
Generally, the 2D $C_{coef}$ indicate low levels of correlation between the
FUV and KED images. In Figure 6 we plot the peak $C_{coef}$ against the
integrated SFR for each galaxy to see if a higher level of correlation is
related to the overall SFR. There is no relationship between the two values.
In annuli, $C_{coef}$ can be as high as 0.9 in the center, indicating a
correlation, but the values tend to be low overall, and the radial profiles
exhibit a wide range of shapes. From the images, most of the FUV is visually
patchy and tends to be concentrated towards the central regions of the
galaxies, while the H i often extends well beyond the optical/UV galaxy.
From this bird's-eye view of a dIrr, one might therefore expect a higher
correlation in the central regions, where there is ample H i and FUV, and
little correlation farther out, where there are typically fewer FUV knots.
By comparison, in the spiral NGC 2403 Ioannis Bagetakos (private
communication) found that the FUV and H i surface mass density are
uncorrelated with a $C_{coef}$$<0.20$. They did, however, find correlations
between dust and star formation ($C_{coef}$$>0.55$) and between PAHs and H i
($C_{coef}$$\sim 0.55$). Bagetakos et al. chose NGC 2403 as their pilot galaxy
because it is in the THINGS sample (Walter et al., 2008) with H i data, as
well as images at 8 microns, 24 microns, H$\alpha$, and FUV, and is nearby
with an H i beam of 136 pc $\times$ 119 pc. As an Scd spiral it is
significantly larger and more massive than the dIrr galaxies in this study.
Figure 6: Integrated SFR normalized to one disk scale length versus the
maximum correlation coefficient for each galaxy. The correlation coefficients
are given in Table 2 and the SFRs are in Table 1.
### 4.2 Degree of lumpiness
Since star formation is usually lumpy, we ask whether the lack of correlation
between FUV and KED images is because KED is smooth compared to FUV or because
lumps in the two images do not correlate. Figure 7 shows the KED maps and FUV
images at full resolution. A contour of the FUV image is superposed on the KED
map to facilitate comparison. We see that FUV and KED maps are both generally
lumpy although the lumps are not necessarily located in the same place.
Figure 7: Kinetic energy density (KED) maps and full-resolution FUV images for
each galaxy. The major FUV knots are contoured and that white contour is shown
on the KED map to facilitate comparison. The conversion of counts in the KED
maps to physical units is given in Table 2. To convert FUV counts s-1 to flux
in units of erg s-1 cm-2 Å-1 multiply by $1.4\times 10^{-15}$. Figures for the
rest of the galaxies in this study are available in the on-line materials (72
images in 12 figures).
To examine the degree of lumpiness, we looked at the fraction of pixels with
raw values above a given percentage of the maximum pixel value in the image.
Specifically, we counted the fraction of total pixels that have counts within
10%, 20%, 30%, 40%, and 50% of the maximum count value in each of the FUV and
KED images. These data are shown in Figure 8 as percentage of total pixels as
a function of selected cut-off deviation from the maximum pixel value in the
image. For example in CVnIdwA, the percentage of pixels with values within 10%
of the maximum value is 0.69% in the FUV image and 1.03% in the KED image,
whereas the percentage of pixels with values within 50% of the maximum value
is 5.65% in the FUV image and 15.98% in the KED image.
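This counting procedure can be sketched as:

```python
import numpy as np

def frac_near_peak(img, cuts=(0.10, 0.20, 0.30, 0.40, 0.50)):
    """Percentage of pixels whose value lies within the given fraction of
    the maximum, i.e. value >= (1 - cut) * max, for each cut."""
    peak = np.nanmax(img)
    n = np.isfinite(img).sum()
    return [100.0 * (img >= (1.0 - c) * peak).sum() / n for c in cuts]
```

A spotty image yields a nearly flat sequence of small percentages, while a smoother image yields percentages that rise with the cut, which is the behavior compared in Figure 8.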
To understand what these plots mean we can compare the appearance of the
galaxies in Figure 7 with the plots in Figure 8. We see in the images that
galaxies like LGS 3, DDO 87, DDO 133, and Sag DIG have a few small FUV knots but
more or bigger KED knots. The KED knots fill more of the area and so a higher
fraction of the pixels are close to the peak intensity. These galaxies have
flat FUV profiles in Figure 8 because very few pixels are close to the peak
intensity, i.e., the FUV is spotty, but they have KED profiles that rise with
percentage of maximum pixel value because the KED is more uniform. DDO 43 is
unique in this sample because it is the only one with an approximately flat
KED profile and an FUV profile that rises with percentage of maximum pixel
value. The reason is clear from Figure 7, which shows that the FUV image of
DDO 43 is filled with bright spots, making most of the image close to the peak
pixel value, while the KED image has weaker peaks that are more spread out.
DDO 167, on the other hand, has FUV and KED knots that are comparable in size,
and FUV and KED profiles that rise together with percentage of maximum pixel
value, as do DDO 47, DDO 101, F564-V3, and NGC 4163. Most of the galaxies have
broader KED distributions than FUV emission, so their KED pixel percentages
rise faster than their FUV pixel percentages as the top percentage of the
maximum pixel value increases.
The general rising trend of the curves in Figure 8 is mostly the result of the
exponential radial profile of the disk with the peaks in KED and FUV standing
a nearly fixed fraction above the mean profile. Figure 9 shows models for
these curves assuming an exponential disk intensity profile $I(r)=e^{-r}$, so
the radius as a function of intensity is $r(I)=-\ln(I)$. The radius at which
the intensity has dropped 10% below the peak is then $r(10\%)=-\ln(1-0.1)$, and
the number of pixels brighter than that is the area of the circle at this
radius, or $\pi r(10\%)^{2}$. In
general, for an intensity that is the fraction $x$ down from the peak
intensity, the fraction of pixels in the total disk is
$f(x)=\pi\left(-\ln[1-x]\right)^{2}/\left(\pi r_{\rm max}^{2}\right)$ (1)
where $r_{\rm max}$ is the size of the disk measured in scale lengths. Figure
9 shows $f(x)$ versus $x$ in three cases. The top curve is for an exponential
profile with a scale length 1.5 times larger than the middle curve and an
overall galaxy size the same, $r_{\rm max}=2$ scale lengths. The lower curve
also has a scale length 1.5 times larger than the middle curve but the overall
galaxy size for the lower curve is 1.5 times larger ($r_{\rm max}=3$). Larger
scale lengths for a given galaxy size make the percentage curves rise faster
because more of the disk is close to the peak intensity at the center.
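Equation (1) can be evaluated directly. A sketch follows, with an optional scale-length argument (our own generalization of equation 1) so that the three model curves of Figure 9 can all be produced:

```python
import numpy as np

def f_pixels(x, r_max, scale=1.0):
    """Fraction of disk pixels brighter than (1 - x) * peak for an
    exponential disk I(r) = exp(-r / scale) truncated at r_max (equation 1).
    r_max is measured in scale lengths of the scale = 1 reference disk."""
    r = -scale * np.log(1.0 - x)     # radius where I has dropped by x
    return (r / r_max) ** 2          # ratio of circle areas
```

In this notation the middle curve of Figure 9 is f_pixels(x, 2.0), the top curve f_pixels(x, 2.0, 1.5), and the lower curve f_pixels(x, 3.0, 1.5).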
The similarity of the model curves in Figure 9 to the observations in Figure 8
implies that the qualitative effect being captured by the fractional
distribution is the result of the exponential disk. However, the percentage of
pixels observed is much smaller than the model percentage, i.e., several
percent or less for the observations compared to $\sim 10$% at the 50% top
percentage of maximum pixel value. This difference implies that the peaks in
the KED and FUV distributions stand above the exponential disk, so their areas
are a small fraction, $\sim 10$%, of the disk area, but the peak intensities
have about the same radial dependence as the average disk, which means they
are a fixed factor times the average disk brightness.
Figure 8: Percentage of pixels with values above a given percentage of the
maximum value for FUV and KED images.
Figure 9: Model for the curves in Figure 8 based on exponential profiles. The
top curve has a scale length 1.5 times larger than the middle curve and an
overall galaxy size the same, 2 scale lengths. The lower curve also has a
scale length 1.5 times larger than the middle curve but the overall galaxy
size for the lower curve is 1.5 times larger. Larger scale lengths for a given
galaxy size make the percentage curves rise faster because more of the disk is
close to the peak intensity at the center.
### 4.3 Pixel-pixel scatter plots
Another way of looking at the data is to compare individual pixels in pairs of
images. We have done that, examining KED, the velocity dispersion of the gas
$v_{disp}$, and H i surface density $\Sigma_{\rm HI}$ versus SFR surface
density as determined from the FUV images, $\Sigma_{\rm SFR}$. Recall that the
FUV images were geometrically transformed and smoothed to match the pixel size
and resolution of the H i images. For all galaxies but DDO 216 and Sag DIG,
the pixel size is $1.5^{\prime\prime}$ and for these two it is
$3.5^{\prime\prime}$. To compensate for radial trends, we determined the
azimuthally-averaged $\Sigma_{\rm SFR}$, KED, H i and $v_{disp}$ in annuli
from the center of the galaxy and subtracted that from the observations. We
used optically-determined disk parameters of center, minor-to-major axis ratio
$b/a$, and position angle of the major axis from Hunter et al. (2012). The
widths of the annuli, constant in a given galaxy, were chosen to be the same
as those used to measure the H i surface density profiles of Hunter et al.
(2012). The azimuthally-averaged radial profiles of $\Sigma_{\rm SFR}$, KED,
$v_{disp}$, and $\Sigma_{\rm HI}$ are shown for each galaxy in Figure 10. The
pixel-pixel plots of excess KED, $v_{disp}$ and $\Sigma_{\rm HI}$ versus
excess $\Sigma_{\rm SFR}$ are shown in Figures 11-13. All of these quantities
except $v_{disp}$ were corrected to a face-on orientation by multiplying the
fluxes by the cosine of the inclination. The KED units are erg pc-2,
$v_{disp}$ is in km s-1, $\Sigma_{\rm HI}$ is in M${}_{\sun}\;$pc-2 and
$\Sigma_{\rm SFR}$ is in units of $M_{\odot}$ yr-1 pc-2. KED values in Figures
10-13 have not been corrected for Helium and heavy elements. Note that only
the regions of relatively high $\Sigma_{\rm SFR}$ are plotted, i.e., with
positive excess above the annular average, and we plot the logarithm of this
excess. For the quantities on the ordinate, we consider both positive and
negative excess values over the average, so they are not plotted in the log.
Some regions of locally high $\Sigma_{\rm SFR}$ have locally low KED,
$v_{disp}$ or $\Sigma_{\rm HI}$.
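The subtraction of the azimuthally averaged radial profile can be sketched as follows (the elliptical radius r would be built from the optical disk parameters; names are illustrative):

```python
import numpy as np

def excess_map(img, r, edges):
    """Subtract the azimuthally averaged radial profile from a map.

    img   : quantity map (e.g. KED, v_disp, or Sigma_SFR), already on the
            H I pixel grid and corrected to face-on where appropriate
    r     : elliptical radius of each pixel, same shape as img
    edges : annulus boundary radii, matching the H I profiles
    Returns img minus the mean of its own annulus at each pixel."""
    out = np.full_like(img, np.nan, dtype=float)
    for r0, r1 in zip(edges[:-1], edges[1:]):
        m = (r >= r0) & (r < r1)
        if m.any():
            out[m] = img[m] - img[m].mean()
    return out
```

By construction, the excess averages to zero within every annulus, so only local deviations from the radial trend survive into Figures 11-13.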
In the radial averages shown in Figure 10, we see that KED, $\Sigma_{\rm
SFR}$, and $\Sigma_{\rm HI}$ generally decline with radius. Tamburro et al.
(2009) found this also for spiral galaxies. They also found that $v_{disp}$
declines with radius in their sample, but in our sample of dIrr the drop of
$v_{disp}$ with radius is very minor, if any. They also find a clear
correlation of KED with $\Sigma_{\rm SFR}$ in pixel-pixel plots, whereas our
Figure 11 shows no such clear correlation.
Figure 10: Azimuthally-averaged radial profiles of $\Sigma_{\rm SFR}$
determined from the FUV, KED (not corrected for He and heavy elements),
$v_{disp}$, and $\Sigma_{\rm HI}$. FUV emission is the limiting quantity in
that it does not go out as far as the other quantities. Optical disk
parameters (center, $b/a$, and major axis position angle) from Hunter et al.
(2012) were used, and holes in the gas or FUV emission were not used in the
averages.
Figure 11: Pixel-pixel plots of the excess KED above the average value at
each radius vs. the log of the excess $\Sigma_{\rm SFR}$. The density of
points is color-coded. Figures for the rest of the galaxies in this study are
available in the on-line materials (there are 6 figures like this for 36
galaxies). The KED has not been corrected for He and heavy elements.
Figure 12: Pixel-pixel plots of the excess $v_{disp}$ above the average value
at each radius vs. the log of the excess $\Sigma_{\rm SFR}$. The density of
points is color-coded. Figures for the rest of the galaxies in this study are
available in the on-line materials (the remaining 36 galaxies are shown in 6
figures).
Figure 13: Pixel-pixel plots of the excess $\Sigma_{\rm HI}$ above the
average value at each radius vs. log of the excess $\Sigma_{\rm SFR}$. The
density of points is color-coded. Figures for the rest of the galaxies in this
study are available in the on-line materials (the remaining 36 galaxies are
shown in 6 figures).
Figures 11-13 typically show concentrations of points at a low excess
$\Sigma_{\rm SFR}$ and a continuation of these points toward higher excess
$\Sigma_{\rm SFR}$. The low excess $\Sigma_{\rm SFR}$ values are in the outer
disks and the high excess values are in the inner disks. Some galaxies have two
concentrations of points in these figures.
To quantify the pixel distributions, we determined the excess $\Sigma_{\rm
SFR}$ and other quantities at the plotted concentrations. For each galaxy we
made a histogram of the log of the excess $\Sigma_{\rm SFR}$ (the abscissa
value) and found the peak at the low density concentration. The excess log
$\Sigma_{\rm SFR}$ in that peak was determined from the average value in the
three bins of the histogram centered there. The bin width was 0.2 in the log
of the excess $\Sigma_{\rm SFR}$. Then for these three bins around the
histogram peak for log excess $\Sigma_{\rm SFR}$, we determined the mean value
of the quantity plotted on the ordinate in the figures, i.e., the excess KED,
$v_{disp}$ and $\Sigma_{\rm HI}$. For the higher excess $\Sigma_{\rm SFR}$, we
took the mean value of the excess $\Sigma_{\rm SFR}$ and the other quantities
for all regions where the log of the excess $\Sigma_{\rm SFR}$ was larger than
the high-SF edge of the concentration of points, typically at $-9.8$ but
ranging from $-9.5$ to $-10.2$ depending on the plotted galaxy. When there was
only one prominent concentration of points in the figure, we determined the
values there.
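A sketch of this histogram-peak procedure (simplified: the separate high-excess branch and the per-galaxy edge choice are omitted, and names are illustrative):

```python
import numpy as np

def concentration_means(log_sfr_excess, ordinate, bin_width=0.2):
    """Locate the peak of the histogram of log excess Sigma_SFR and return
    the mean abscissa and ordinate values over the three bins centered
    on that peak, as described in the text."""
    lo, hi = log_sfr_excess.min(), log_sfr_excess.max()
    nbins = max(3, int(np.ceil((hi - lo) / bin_width)))
    counts, edges = np.histogram(log_sfr_excess, bins=nbins, range=(lo, hi))
    p = int(np.argmax(counts))
    i0, i1 = max(0, p - 1), min(nbins, p + 2)   # three bins around the peak
    sel = (log_sfr_excess >= edges[i0]) & (log_sfr_excess < edges[i1])
    return log_sfr_excess[sel].mean(), ordinate[sel].mean()
```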
Figure 14 shows the mean excess KED corrected for Helium and heavy elements,
$v_{disp}$, and $\Sigma_{\rm HI}$ versus the mean of the log of the (positive)
excess $\Sigma_{\rm SFR}$ for all galaxies, with dots corresponding to the low
$\Sigma_{\rm SFR}$ concentrations in the outer disks and crosses corresponding
to the high $\Sigma_{\rm SFR}$ in the inner disks. The curves in the KED plot
show fitted relationships between the KED generated by supernovae and the
$\Sigma_{\rm SFR}$ for the indicated efficiencies of converting SN energy into
turbulence, and for galaxy scale heights of 850 pc and 540 pc. These
theoretical KEDs come from equation 3.7 in Bacchini et al. (2020), which is
$KED_{\rm SN}=\eta\Sigma_{\rm SFR}f_{\rm cc}E_{\rm SN}(2H/v_{\rm turb})$ (2)
where $\eta$ is the efficiency of energy conversion from supernova to
turbulence, $f_{\rm cc}=1.3\times 10^{-2}\;M_{\odot}^{-1}$ is the number of
core-collapse supernovae per solar mass of stars, $E_{\rm SN}=10^{51}$ erg is
the supernova energy, $H$ is the disk thickness and $v_{\rm turb}$ is the
turbulent gas velocity dispersion (the ratio of these latter two quantities
gives the turbulent dissipation time). Bacchini et al. (2020) compare the
radial profiles of turbulent energies in 10 nearby galaxies with the SFRs and
derive an average efficiency of $1.5^{1.8}_{0.8}$% if all of the turbulence
comes from star formation. Because the required efficiency is relatively low,
they concluded that supernovae related to star formation can drive most of the
interstellar turbulence.
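As a unit-bookkeeping sketch, equation (2) can be evaluated numerically (constants rounded; this is our own implementation, not code from Bacchini et al. 2020):

```python
import numpy as np

PC_IN_CM = 3.086e18     # cm per parsec
SEC_PER_YR = 3.156e7    # seconds per year
KM = 1.0e5              # cm per km

def ked_sn(eta, sigma_sfr, h_pc, v_turb_kms, f_cc=1.3e-2, e_sn=1.0e51):
    """KED_SN = eta * Sigma_SFR * f_cc * E_SN * (2H / v_turb), in erg pc-2.

    eta        : efficiency of energy conversion from SN to turbulence
    sigma_sfr  : SFR surface density, Msun yr-1 pc-2
    h_pc       : disk thickness H, pc
    v_turb_kms : turbulent velocity dispersion, km s-1
    f_cc, e_sn : core-collapse SN rate per Msun and SN energy (erg)."""
    t_diss = 2.0 * h_pc * PC_IN_CM / (v_turb_kms * KM)  # dissipation time, s
    sfr_per_sec = sigma_sfr / SEC_PER_YR                # Msun s-1 pc-2
    return eta * sfr_per_sec * f_cc * e_sn * t_diss
```

With the outer-disk values quoted below ($\eta\sim 0.008$, $H=850$ pc, $v\sim 9.7$ km s-1) and $\Sigma_{\rm SFR}\sim 10^{-9}$ $M_{\odot}$ yr-1 pc-2, this gives a KED of order $10^{46}$ erg pc-2, comparable to the excess KED values plotted in Figure 14.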
Figure 14: Mean excess KED corrected for He and heavy elements, $v_{disp}$,
and $\Sigma_{\rm HI}$ versus the mean of the log of the (positive) excess
$\Sigma_{\rm SFR}$ for all galaxies. Dots correspond to the low $\Sigma_{\rm
SFR}$ concentrations in the outer disks and crosses correspond to high
$\Sigma_{\rm SFR}$ in the inner disks. Left: The curves show fitted
relationships between the KED generated by supernovae and the $\Sigma_{\rm
SFR}$ for the indicated efficiencies of converting SN energy into turbulence,
and for galaxy scale heights of 850 pc and 540 pc. Middle: Average $v_{disp}$
excess at each concentration of excess $\Sigma_{\rm SFR}$ in Figure 12. The
excess velocity dispersion averages 0.34 km s-1 in the outer disk and 0.17 km
s-1 in the inner disk. Right: Average $\Sigma_{\rm HI}$ excess at each
concentration of excess $\Sigma_{\rm SFR}$ in Figure 13. There is a clear
trend toward excess H i at local SF regions.
For the dwarf galaxies studied here, we evaluate equation (2) using scale
heights and velocity dispersions from the average values for 20 dIrrs in
Elmegreen & Hunter (2015), in Table 2 of that paper. For the concentrations of
pixel values corresponding to the outer regions of the galaxies, we take the
average scale height and $v_{disp}$ at 2 scale lengths, which are $H=850$ pc
and $v_{\rm disp}=9.7$ km s-1. For the inner regions, we take the values at 1
scale length, which are 540 pc and 10.7 km s-1. We also include Helium and
heavy elements in the KED by multiplying the H i mass surface density by 1.36.
Then with $f_{\rm cc}$ and $E_{\rm SN}$ given above, equation (2) is fitted
for the efficiency in the two cases, using for the moment $v_{\rm disp}$
instead of $v_{\rm turb}$. The results are drawn as curves in the left panel
of Figure 14. The average local efficiencies for conversion of SN energy to
KED are $\eta=0.0080\pm 0.045$ and $0.0050\pm 0.0075$ for the outer and inner
disk regions, with these assumptions.
The total dispersions used to evaluate $\eta$ include thermal and turbulent
motions, which were distinguished in several limiting cases by Bacchini et al.
(2020) to get the desired $v_{\rm turb}$. If we assume Mach$\sim 1$ turbulence
in the general H i ISM, then $v_{\rm turb}=v_{\rm disp}/2^{0.5}$ and the
derived values of $\eta$ decrease by the factor 0.7, preserving the ratio
$\eta/v$ used to match the KED. Alternatively, we could use thermal
dispersions of 4.9 km s-1 and 6.1 km s-1 modeled for NGC 4736 and NGC 2403
respectively by Bacchini et al. (2020) to estimate that $v_{\rm turb}/v_{\rm
disp}\sim 0.8$, given that $v_{\rm disp}\sim 10$ km s-1 here. Then our derived
$\eta$ should decrease by $\sim 0.8$. The Bacchini et al. (2020) galaxies were
more massive than our dIrr galaxies, but the thermal contributions to $v_{\rm
disp}$ are not likely to be much different. These corrections change the
average value of $\eta=0.0065$ for the two regions in Figure 14 to $\eta\sim
0.0048$, using a mean correction factor of 0.75.
This $\eta$ value is the average for the peak regions of star formation. It
measures how efficiently star formation energy gets into H i turbulent motions
locally in units of the supernova energy per unit mass of young stars. When
normalized this way, other types of energy related to star formation such as
expanding HII regions and stellar winds are included in $\eta$ too. What is
not included as a source of turbulent motion is energy unrelated to star
formation, such as gravitational energy from gas collapse on the scale of the
ISM Jeans length, or collapse energy from transient spiral arms driven by
combined gas and stellar masses, or shock energy from the relative motions of
gas and stellar spiral density waves. If $\eta\sim 0.0048$ measured locally
gives the actual efficiency for star formation to pump turbulence in the H i
gas, then the global turbulent energy pumped by all of the star formation in a
galaxy should equal our local $\eta$ multiplied by the global star formation
rate (along with the other factors in equation 2). Because Bacchini et al.
(2020) found that the global turbulent energy is $1.5^{1.8}_{0.8}$% of the
energy derived from the star formation rate, there would seem to be more
energy required than what star formation alone can provide. The excess energy
needed is a factor of $\sim 0.015/0.0048-1\sim 2$ times the star formation
energy.
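As a sanity check on the arithmetic above (a quick illustrative computation, with the two efficiency values taken directly from the text):

```python
# Illustrative check of the excess-energy factor quoted above; the two
# efficiency values are taken directly from the text.
eta_local = 0.0048    # local efficiency calibrated at star-forming peaks
eta_global = 0.015    # global turbulent-energy fraction (Bacchini et al. 2020)

ratio = eta_global / eta_local   # energy required vs. supplied, ~3.1
excess_factor = ratio - 1        # extra energy needed, in units of the
                                 # star formation energy: ~2
print(ratio, excess_factor)
```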
This factor has many uncertainties, both from the range in global values
derived by Bacchini et al. (2020) and from the galaxy-to-galaxy or inner-disk
to outer-disk variations derived here. For example, our $\eta\sim 0.0048$ is
closer to that of the dwarf galaxy DDO154 in Bacchini et al. (2020), which had
$\eta=0.009$ assuming a pure warm phase H i. Also, our average $\eta$ for the
inner disk regions in Figure 14 was higher than the average for the inner and
outer disks combined (which gave the value 0.0048) by a factor of 1.2. But
even within this range, the global energy from turbulence seems to be larger
than what can be pumped from star formation alone, if we use local star
formation rates as the basic means of calibrating $\eta$.
Figure 14 for the KED excess has two high points for the inner disk which were
not included in the efficiency fit. These are the galaxies Haro 29 with excess
KED$=16.0\times 10^{45}$ erg pc$^{-2}$, and NGC 1569 with excess KED$=141\times
10^{45}$ erg pc$^{-2}$. Correspondingly, Figure 11 shows a scatter of individual
pixel points to very high values of KED for these galaxies.
The middle panel of Figure 14 shows the average $v_{\rm disp}$ excess at each
concentration of excess $\Sigma_{\rm SFR}$ in Figure 12. The excess velocity
dispersion is rarely larger than 1 km s$^{-1}$ and averages 0.45 km s$^{-1}$ in the
outer disk, $-0.34$ km s$^{-1}$ in the inner disk, and 0.37 km s$^{-1}$ overall. Some
local star formation regions have lower H i velocity dispersions than the
average at that galactocentric radius, giving negative excesses in Figure 14.
These typically small excesses in the local H i velocity dispersion are
consistent with the small feedback efficiencies found above. There is
relatively little generation of turbulence at the positions of star-forming
regions.
The right-hand panel of Figure 14 shows the average $\Sigma_{\rm HI}$ excess
at each concentration of excess $\Sigma_{\rm SFR}$ in Figure 13. There is a
clear trend toward excess H i at local SF regions, although in a few cases the
H i is less than the azimuthal average. This general excess corresponds to a
ratio of $\Sigma_{\rm HI}$ to $\Sigma_{\rm SFR}$ that equals 6.5 Gyr in the
outer disk, 1.2 Gyr in the inner disk and 1.6 Gyr overall, where this latter
fit is shown by the curve in the figure. For this fit, the high point that is
plotted in Figure 14 is excluded; that is for NGC 1569, where the ratio is 31
Gyr. This average ratio of $\sim 1.6$ Gyr is comparable to the consumption
time for molecules, which is about 2 Gyr in Bigiel et al. (2008) and Leroy et
al. (2008). If only molecular clouds form stars, then this similarity implies
that the molecular fraction is about 50% in the inner disk, as suggested using
other properties of H i and star formation in recent papers (Hunter et al.,
2019, 2020; Madden et al., 2020).
## 5 Discussion
Comparisons between the kinetic energy density or velocity dispersion and the
local star formation rate using cross correlations of several types and pixel-
level excesses above the radial average quantities have shown virtually no
connections between large-scale turbulence and star formation. Many of the
galaxies have lumpy KED and FUV images but the lumps are not well correlated
or anti-correlated spatially. This is contrary to some theoretical
expectations, and to the simulations designed to illustrate those
expectations, which suggest that feedback from star formation pumps a
significant amount of interstellar turbulence and thereby controls the
interstellar scale height and average mid-plane density. While it is generally
accepted that this mid-plane density controls the collapse rate of the ISM and
therefore the average star formation rate, the origin of the turbulence and of
the other vertical forces that determine the scale height and density has been
difficult to observe directly. Most likely, the maintenance of a modest value
of the gravitational stability parameter $Q$ controls the overall interstellar
turbulent speed through pervasive and mild gravitational instabilities, which
also feed the star formation process through cloud formation. This was
demonstrated by Bournaud et al. (2010) and also underlies the Feedback in
Realistic Environments (FIRE) simulations by Hopkins et al. (2014); the
primary role of feedback is to destroy molecular clouds locally (Benincasa et
al., 2020). Our data suggest that this feedback does not extend far enough
from molecular clouds to be visible in the H i at our resolution (from 26 pc
at IC 1613 to 340 pc at DDO 52).
## 6 Summary
We have examined the relationship between star formation, as traced by FUV
images, and turbulence in the gas, as traced by kinetic energy density images
and velocity dispersion maps in the LITTLE THINGS sample of nearby dIrr
galaxies. We performed 2D cross-correlations between FUV and KED images,
finding maximum $C_{coef}$ values that indicate little correlation. A plot of
integrated SFR against the maximum $C_{coef}$ also shows no correlation. We
also performed cross-correlations in annuli centered on the optical center of
the galaxy to produce $C_{coef}$ as a function of radius. In some galaxies the
centers have $C_{coef}$ that are high enough to indicate a correlation, and in
some galaxies the $C_{coef}$ drops off with radius from the center.
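The 2D cross-correlation analysis summarized above can be sketched as follows. This is a generic normalized cross-correlation (Pearson coefficient maximized over circular shifts), not necessarily the exact $C_{coef}$ definition used in the paper:

```python
import numpy as np

def max_cross_corr(a, b):
    """Peak of the normalized 2D cross-correlation between two maps.

    Generic sketch: Pearson correlation maximized over circular shifts,
    computed via FFT; not necessarily the paper's exact C_coef.
    """
    a = (a - a.mean()) / (a.std() * a.size)
    b = (b - b.mean()) / b.std()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return corr.max()

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
print(max_cross_corr(img, img))                        # ~1 for identical maps
print(max_cross_corr(img, rng.normal(size=(64, 64))))  # near 0 for unrelated noise
```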
To look at the images a different way, we determined the fraction of pixels in
the FUV and KED images with values above a given percentage of the maximum
pixel value in the image. Plots of these quantities show different behaviors
for FUV and KED images in many of the galaxies. Finally, we considered on a
pixel-by-pixel basis the excess KED, $v_{\rm disp}$, and $\Sigma_{\rm HI}$ above
the average radial profiles of these quantities and plotted that versus the
excess $\Sigma_{\rm SFR}$. There was a weak tendency to have a higher excess
KED at a higher excess $\Sigma_{\rm SFR}$, corresponding to an efficiency of
kinetic energy input to the local ISM from supernovae related to star formation
of about 0.5%. This is too small by a factor of about 2 for star formation to
be the only source of global kinetic energy density. The excess $v_{\rm disp}$
connected with star formation peaks is also small, only $0.37$ km s$^{-1}$ on
average. The angular scale for these small excesses is typically
$1.5^{\prime\prime}$, which, for a distance of 3 Mpc, corresponds to $\sim 20$
pc.
We are grateful to Dr. C. Bacchini for comments on the manuscript. H.A. is
grateful to the Lowell Observatory Director’s Opportunity Network for funding
to work on this project. Lowell Observatory sits at the base of mountains
sacred to tribes throughout the region. We honor their past, present, and
future generations, who have lived here for millennia and will forever call
this place home. Facilities: VLA, GALEX
## References
* Bacchini et al. (2020) Bacchini, C., Fraternali, F., Iorio, G., Pezzulli, G. Marasco, A. & Nipoti, C. 2020, arXiv:2006.10764
* Benincasa et al. (2020) Benincasa, S. M., Loebman, S. R., Wetzel, A., et al. 2020, MNRAS, submitted, arXiv:1911.05251
* Bigiel et al. (2008) Bigiel, F., Leroy, A., Walter, F., et al. 2008, AJ, 136, 2846
* Bigiel et al. (2010) Bigiel, F., Leroy, A., Walter, F., et al. 2010, AJ, 140, 1194
* Bournaud et al. (2010) Bournaud, F., Elmegreen, B.G., Teyssier, R., Block, D.L., & Puerari, I. 2010, MNRAS, 409, 1088
* Burkhart et al. (2010) Burkhart, B., Stanimirović, S., Lazarian, A., & Kowal, G. 2010, ApJ, 708, 1204
* Combes et al. (2012) Combes, F., Boquien, M., Kramer, C. et al. 2012, A&A, 539, A67
* Deharveng et al. (2012) Deharveng, L., Zavagno, A., Anderson, L. D., Motte, F., Abergel, A., André, Ph. Bontemps, S., Leleu, G., Roussel, H., Russeil, D. 2012, A&A, 546, A74
* Dib & Burkert (2005) Dib, S., & Burkert, A. 2005, ApJ, 630, 238
* Efremov & Elmegreen (1998) Efremov, Y. N., & Elmegreen, B. G. 1998, MNRAS, 299, 588
* Egorov et al. (2017) Egorov, O.V., Lozinskaya, T.A., Moiseev, A.V., & Shchekinov, Y.A. 2017, MNRAS, 464, 1833
* Elmegreen (1993) Elmegreen, B.G. 1993, ApJ, 419, L29
* Elmegreen & Efremov (1997) Elmegreen, B. G., & Efremov, Y. N. 1997, ApJ, 480, 235
* Elmegreen & Hunter (2006) Elmegreen, B. G., & Hunter, D. A. 2006, ApJ, 636, 712
* Elmegreen & Hunter (2015) Elmegreen, B. G., & Hunter, D. A. 2015, ApJ, 805, 145
* Goldbaum et al. (2016) Goldbaum, N.J., Krumholz, M.R., Forbes, J.C. 2016, ApJ, 827, 28
* Herrmann et al. (2013) Herrmann, K. A., Hunter, D. A., & Elmegreen, B. G. 2013, AJ, 146, 104
* Hopkins et al. (2011) Hopkins, Philip F., Quataert, E., & Murray, N. 2011, MNRAS, 417, 950
* Hopkins et al. (2014) Hopkins, P. F., Keres̆, D., Oñorbe, J., et al. 2014, MNRAS, 445, 581
* Hunter & Elmegreen (2004) Hunter, D. A., & Elmegreen, B. G. 2004, AJ, 128, 2170
* Hunter et al. (2003) Hunter, D. A., Elmegreen, B. G., Dupuy, T. J., & Mortonson, M. 2003, AJ, 126, 1836
* Hunter et al. (2016) Hunter, D. A., Elmegreen, B. G., & Gehret, E. 2016, AJ, 151, 136
* Hunter et al. (2019) Hunter, D. A., Elmegreen, B. G., & Berger, C. L. 2019, AJ, 157, 241
* Hunter et al. (2020) Hunter, D.A., Elmegreen, B.G., Goldberger, E., et al. 2020, AJ, submitted
* Hunter et al. (2010) Hunter, D. A., Elmegreen, B. G. & Ludka, B. C. 2010, AJ, 139, 447
* Hunter et al. (2011) Hunter, D. A., Elmegreen, B. G., Oh, S.-H., et al. 2011, AJ, 142, 121
* Hunter et al. (2013) Hunter, D. A., Elmegreen, B. G., Rubin, V. C., & Ashburn, A. 2013, AJ, 146, 92
* Hunter et al. (2001) Hunter, D. A., Elmegreen, B. G., & van Woerden, H. 2001, ApJ, 556, 773
* Hunter et al. (2012) Hunter, D. A., Ficut-Vicas, D., Ashley, T., et al. 2012, AJ, 144, id 13
* Hunter & Plummer (1996) Hunter, D. A., & Plummer, J. D. 1996, ApJ, 462, 732
* Ibáñez-Mejía et al. (2017) Ibáñez-Mejía, J.C., Mac Low, M.-M., Klessen, R.S., Baczynski, C. 2017, ApJ, 850, 62
* Joung et al. (2009) Joung, M.R., Mac Low, M.-M., & Bryan, G.L. 2009, ApJ, 704, 137
* Kennicutt (1989) Kennicutt, R. C., Jr. 1989, ApJ, 344, 685
* Kim & Ostriker (2015) Kim, C.-G., & Ostriker, E. C. 2015, ApJ, 815, 67
* Kim et al. (2018) Kim, J.-G., Kim, W.-T., Ostriker, E.C. 2018, ApJ, 859, 68
* Kingsburgh & McCall (1998) Kingsburgh, R. L., & McCall, M. L. 1998, AJ, 116, 2246
* Kraljic et al. (2014) Kraljic, K., Renaud, F., Bournaud, F., et al. 2014, ApJ, 784, 112
* Krumholz & McKee (2005) Krumholz, M. R., & McKee, C. F. 2005, ApJ, 630, 250
* Krumholz et al. (2018) Krumholz, M.R., Burkhart, B., Forbes, J.C., & Crocker, R.M. 2018, MNRAS, 477, 2716
* Lehnert et al. (2013) Lehnert M. D., Le Tiran L., Nesvadba N. P. H., van Driel W., Boulanger F., Di Matteo P., 2013, A&A, 555, A72
* Leroy et al. (2008) Leroy, A. K., Walter, F., Brinks, E., Bigiel, F., de Blok, W. J. G., Madore, B., & Thornley, M. D., 2008, AJ, 136, 2782
* Mac Low et al. (2017) Mac Low, M.-M., Burkert, A., & Ibáñez-Mejía, J. C. 2017, ApJL, 847, L10
* Mac Low & Klessen (2004) Mac Low, M.-M., & Klessen, R. S. 2004, Rev Mod Phys, 76, 125
* Madden et al. (2020) Madden, S. C., Cormier, D., Hony, S., Lebouteiller, V., Abel, N., et al. 2020, A&A, 643, A141
* Maier et al. (2016) Maier, E., Chien, L.-H., & Hunter, D. A. 2016, AJ, 152, 134
* Maier et al. (2017) Maier, E., Elmegreen, B. G., Hunter, D. A., et al. 2017, AJ, 153, 163
* Martin et al. (2005) Martin, D. C., Fanson, J., Schiminovich, D., et al. 2005, ApJ, 619, L1
* Meurer et al. (1996) Meurer, G. R., Carignan, C., Beaulieu, S. F., & Freeman, K. C. 1996, AJ, 111, 1551
* Moiseev et al. (2015) Moiseev, A. V., Tikhonov, A. V., & Klypin, A. 2015, MNRAS, 449, 3568
* Ossenkopf et al. (2008) Ossenkopf, V., Krips, M., & Stutzki, J. 2008, A&A, 485, 917
* Ostriker et al. (2010) Ostriker, E. C., McKee, C. F., & Leroy, A. K. 2010, ApJ, 721, 975
* Padoan et al. (2016) Padoan, P., Pan, L., Haugbolle, T., & Nordlund, Å. 2016, ApJ, 822, 11
* Palmeirim et al. (2017) Palmeirim, P., Zavagno, A., Elia, D. et al. 2017, A&A, 605, A35
* Piontek & Ostriker (2005) Piontek, R. A., & Ostriker, E. C. 2005, ApJ, 629, 849
* Romeo & Mogotsi (2017) Romeo, A.B., & Mogotsi, K.M. 2017, MNRAS, 469, 286
* Stilp et al. (2013) Stilp, A. M., Dalcanton, J. J., Skillman, E., Warren, S. R., Ott, J., & Koribalski, B. 2013, ApJ, 733, 88
* Struck & Smith (1999) Struck, C., & Smith, D. C. 1999, ApJ, 527, 673
* Tamburro et al. (2009) Tamburro, D., Rix, H.-W., Leroy, A. K., Mac Low, M.-M., Walter, F., et al. 2009, AJ, 137, 4424
* Toomre (1964) Toomre, A. 1964, ApJ, 139, 1217
* Übler et al. (2019) Übler, H., Genzel, R., Wisnioski, E. et al. 2019, ApJ, 880, 48
* van Zee et al. (1997) van Zee, L., Haynes, M. P., Salzer, J. J., & Broeils, A. H. 1997, AJ, 113, 1618
* Varidel et al. (2020) Varidel, M.R., Croom, S.M., Lewis, G.F. et al. 2020, MNRAS, 495, 2265
* Walter et al. (2008) Walter, F., Brinks, E., de Blok, W. J. G., Bigiel, F., Kennicutt, R. C., Jr., et al. 2008, AJ, 136, 2563
* Willett et al. (2005) Willett, K. W., Elmegreen, B. G., & Hunter, D. A. 2005, AJ, 129, 2186
* Youngblood & Hunter (1999) Youngblood, A. J., & Hunter, D. A. 1999, ApJ, 519, 55
* Zhang et al. (2012) Zhang, H.-X., Hunter, D. A., & Elmegreen, B. G. 2012, ApJ, 754, 29
* Zhou et al. (2017) Zhou, L., Federrath, C., Yuan, T., et al. 2017, MNRAS, 470, 4573
# Example of a Highly Branching CD Space
Mattia Magnabosco
###### Abstract
In [3] Ketterer and Rajala presented an example of a metric measure space,
satisfying the measure contraction property $\mathsf{MCP}(0,3)$, that has
different topological dimensions in different regions of the space. In this
article I propose a refinement of that example, which satisfies the
$\mathsf{CD}(0,\infty)$ condition, proving the non-constancy of the topological
dimension for CD spaces. This example also shows that the weak curvature
dimension bound, in the sense of Lott-Sturm-Villani, is not sufficient to
deduce any reasonable non-branching condition. Moreover, it allows us to answer
some open questions proposed by Schultz in [7], about strict curvature
dimension bounds and their stability with respect to the measured Gromov
Hausdorff convergence.
In their remarkable works Lott and Villani [4] and Sturm [10, 11] introduced a
weak notion of curvature dimension bounds, which strongly relies on the theory
of Optimal Transport. Inspired by some results that hold in the Riemannian
case, they defined a consistent notion of curvature dimension bound for metric
measure spaces, which is known as the CD condition. The metric measure spaces
satisfying the CD condition are called CD spaces and enjoy some remarkable
analytic and geometric properties.
In this work I present an example of a highly branching CD space, which shows
that the weak curvature dimension bound is not sufficient to deduce any type of
non-branching condition. This example is a refinement of the one by Ketterer
and Rajala [3]: in particular, the topological and metric structure of the
space will be essentially the same, while the reference measure will be more
complicated. The metric measure space considered by Ketterer and Rajala in [3]
satisfies the so-called measure contraction property $\mathsf{MCP}(0,3)$ but,
as they pointed out in their work, it does not satisfy any CD condition. As I
will highlight in the last section, the modification I am going to present
allows one to extend some of the results in [3] to CD spaces. Moreover, it
gives an answer to some questions proposed by Schultz in [7], regarding the
very strict CD condition and the stability of the strict CD condition.
Before going on, I want to emphasize that the CD condition is much more
complicated to prove than the MCP condition. In fact, there is much more
freedom in choosing the marginals, and consequently it is much more difficult
to avoid mass overlap in a Wasserstein geodesic. For this reason, not only
will the proof be longer than Ketterer and Rajala's, but it will also be
more complicated. In particular, I will prove the CD condition for the desired
metric measure space by proving it for a sequence of metric measure spaces
that converges to it, and then using the stability result by Villani (Theorem
1.15). I will also take great inspiration from the work of Rajala in [5], which
shows an efficient way to prove the CD condition in branching metric measure
spaces, through the application of the Jacobi equation.
The main consequences of the example I will present in this article are the
following:
* •
no reasonable non-branching condition holds in general for (weak) CD
spaces,
* •
the existence of an optimal transport map between two absolutely continuous
marginals is not guaranteed in (weak) CD spaces, without assuming a
non-branching condition,
* •
the very strict CD condition studied by Schultz ([7],[8],[9]) is strictly
stronger than the weak one,
* •
the constancy of topological dimension does not hold in general for (weak) CD
spaces,
* •
the strict CD condition I will define in the last section is not stable with
respect to the measured Gromov Hausdorff convergence,
* •
the strict CD condition is strictly stronger than the weak one.
Let me now briefly explain the structure of this work. In the first section I
recall some preliminary results regarding both the basics of Optimal
Transport theory and CD spaces. In the second section I introduce
the metric measure spaces which will be the subject of the rest of the
article. In Section 3 I state and prove some algebraic lemmas, which
will help with the subsequent computations. In the fourth section I
present some results by Rajala [5], related to the Jacobi equation and to how
it can be used to prove entropy convexity. This theory requires the existence
of a suitable midpoint selection map, which will be introduced in Section 5,
where I will also prove some of its properties. The sixth section contains the
proof of the main theorem, which puts together all the results proven so far.
The last section aims to draw all the conclusions listed above.
## 1 Preliminary Notions
This first section aims to collect all the preliminary results this work needs
in order to be self-contained. In particular, I am going to introduce the
Wasserstein space and the entropy functional on it, and then define
the notions of curvature dimension bound and CD space. Moreover, I am going to
briefly discuss the relation between curvature dimension bounds and non-
branching conditions, which is one of the main motivations for this work.
Finally, I will define the measured Gromov Hausdorff convergence of metric
measure spaces, stating in the end the stability of curvature dimension bounds
with respect to this convergence.
### 1.1 The Wasserstein Space
Denote by $\mathscr{P}(X)$ the set of Borel probability measures on a Polish
metric space $(X,\mathsf{d})$. Given two measures $\mu,\nu\in\mathscr{P}(X)$
and a Borel cost function $c:X\times X\to[0,\infty]$, the Optimal Transport
problem asks to find minima and minimizers of the quantity
$\min\int_{X\times Y}c(x,y)\,\mathrm{d}\pi(x,y),$ (1)
where $\pi$ varies among all probability measures in $\mathscr{P}(X\times X)$
with first marginal equal to $\mu$ and second marginal equal to $\nu$. If the
cost function $c$ is lower semicontinuous, the minimum in (1) is attained. The
minimizers of this problem are called optimal transport plans and the set of
all of them will be denoted by $\operatorname{OptPlans}(\mu,\nu)$. An optimal
transport plan $\pi\in\operatorname{OptPlans}(\mu,\nu)$ is said to be induced
by a map if there exists a $\mu$-measurable map $T:X\to X$ such that
$\pi=(\operatorname{id},T)_{\\#}\mu$; such a map $T$ will be called an optimal
transport map.
A fundamental approach to the Optimal Transport problem is that of
$c$-duality, which allows one to prove some very interesting and useful
results. Below I report only the most basic statement, which is the only
result I will need in this work.
###### Definition 1.1.
A set $\Gamma\subset X\times X$ is said to be $c$-cyclically monotone if
$\sum_{i=1}^{N}c\left(x_{i},y_{\sigma(i)}\right)\geq\sum_{i=1}^{N}c\left(x_{i},y_{i}\right)$
for every $N\geq 1$, every permutation $\sigma$ of $\\{1,\dots,N\\}$ and every
$(x_{i},y_{i})\in\Gamma$ for $i=1,\dots,N$.
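Definition 1.1 can be tested directly on small finite sets. Below is a minimal sketch for the quadratic cost on the real line; it checks all permutations of the full index set, which (by extending a permutation of a subset with the identity) also covers all subsets of the given pairs:

```python
from itertools import permutations

def is_c_cyclically_monotone(pairs, c):
    """Brute-force check of Definition 1.1 on a finite set of pairs.

    Finite illustration only: it tests every permutation of the full
    index set, which also covers permutations of subsets of `pairs`.
    """
    xs, ys = zip(*pairs)
    n = len(pairs)
    base = sum(c(xs[i], ys[i]) for i in range(n))
    return all(
        sum(c(xs[i], ys[s[i]]) for i in range(n)) >= base
        for s in permutations(range(n))
    )

c = lambda x, y: (x - y) ** 2   # quadratic cost, as used in the sequel
# The monotone coupling on the real line is c-cyclically monotone...
print(is_c_cyclically_monotone([(0, 0.1), (1, 1.2), (2, 2.3)], c))  # True
# ...while a crossing coupling is not.
print(is_c_cyclically_monotone([(0, 2.3), (1, 1.2), (2, 0.1)], c))  # False
```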
###### Proposition 1.2.
Let $X$ be a Polish space and $c:X\times X\to[0,\infty]$ a lower
semicontinuous cost function. Then every optimal transport plan
$\pi\in\operatorname{OptPlans}(\mu,\nu)$ such that $\int
c\,\mathrm{d}\pi<\infty$ is concentrated in a $c$-cyclically monotone set.
From now on I am going to consider the Optimal Transport problem in the
special case in which the cost function is equal to the distance squared, that
is $c(x,y)=\mathsf{d}^{2}(x,y)$. In this context the minimization problem
induces the so-called Wasserstein distance on the space $\mathscr{P}_{2}(X)$
of probabilities with finite second order moment, that is
$\mathscr{P}_{2}(X):=\left\\{\mu\in\mathscr{P}(X)\,:\,\int\mathsf{d}^{2}(x,x_{0})\,\mathrm{d}\mu(x)<\infty\text{
for one (and thus all) }x_{0}\in X\right\\}.$
###### Definition 1.3 (Wasserstein distance).
Given two measures $\mu,\nu\in\mathscr{P}_{2}(X)$ define their Wasserstein
distance $W_{2}(\mu,\nu)$ as
$W_{2}^{2}(\mu,\nu):=\min\left\\{\int
\mathsf{d}^{2}(x,y)\,\mathrm{d}\pi(x,y)\,:\,\pi\in\Gamma(\mu,\nu)\right\\},$
where $\Gamma(\mu,\nu)$ denotes the set of admissible plans, that is, of
measures $\pi\in\mathscr{P}(X\times X)$ with marginals $\mu$ and $\nu$.
It is easy to realize that $W_{2}$ is actually a distance on
$\mathscr{P}_{2}(X)$, moreover $(\mathscr{P}_{2}(X),W_{2})$ is a Polish metric
space.
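For uniform discrete measures on equally many points, the Wasserstein distance of Definition 1.3 reduces to a linear assignment problem, which gives a quick way to compute it numerically. This is only a sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_uniform(xs, ys):
    """W_2 between two uniform discrete measures on equally many points
    of the real line.

    For uniform marginals an optimal plan can be taken to be induced by
    a permutation, so the linear assignment problem suffices.
    """
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    cost = (xs[:, None] - ys[None, :]) ** 2      # squared-distance cost
    r, c = linear_sum_assignment(cost)
    return np.sqrt(cost[r, c].mean())

# On the real line the optimal map is monotone; translating a measure
# by t gives W_2 = t.
print(w2_uniform([0.0, 1.0, 2.0], [0.5, 1.5, 2.5]))  # 0.5
```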
Let me now deal with the geodesic structure of $(\mathscr{P}_{2}(X),W_{2})$,
which, as the following statement shows, is heavily related to the one of the
base space $(X,\mathsf{d})$. First of all, notice that every measure
${\mbox{\boldmath$\pi$}}\in\mathscr{P}(C([0,1],X))$ induces a curve $[0,1]\ni
t\to\mu_{t}=(e_{t})_{\\#}{\mbox{\boldmath$\pi$}}\in\mathscr{P}(X)$; therefore,
in the following I will use measures in $\mathscr{P}(C([0,1],X))$ in
order to describe curves in the Wasserstein space.
###### Proposition 1.4.
If $(X,\mathsf{d})$ is a geodesic space then $(\mathscr{P}_{2}(X),W_{2})$ is
geodesic as well. In particular, given two measures
$\mu,\nu\in\mathscr{P}_{2}(X)$, the measure
${\mbox{\boldmath$\pi$}}\in\mathscr{P}(C([0,1],X))$ is a constant speed
Wasserstein geodesic connecting $\mu$ and $\nu$ if and only if it is
concentrated in $\operatorname{Geo}(X)$ (that is the space of constant speed
geodesics in $(X,\mathsf{d})$) and
$(e_{0},e_{1})_{\\#}{\mbox{\boldmath$\pi$}}\in\operatorname{OptPlans}(\mu,\nu)$.
In this case it is said that ${\mbox{\boldmath$\pi$}}$ is an optimal geodesic plan between $\mu$
and $\nu$ and this will be denoted as
${\mbox{\boldmath$\pi$}}\in\operatorname{OptGeo}(\mu,\nu)$.
### 1.2 Curvature Dimension Bounds
In this subsection I introduce the notions of curvature dimension bound and CD
space, pioneered by Lott and Villani [4] and Sturm [10, 11]. Their definition
relies on the notion of entropy functional. As will soon be clear, the most
appropriate framework in which to deal with the entropy functional is that of
metric measure spaces.
###### Definition 1.5.
A metric measure space is a triple $(X,\mathsf{d},\mathfrak{m})$, where
$(X,\mathsf{d})$ is a Polish metric space and $\mathfrak{m}$ is a non-negative
and non-null Borel measure on $X$, finite on bounded sets.
In this work I will only deal with compact metric measure spaces, and in
particular $\mathfrak{m}(X)<\infty$. Let me now properly define the entropy
functional.
###### Definition 1.6.
In a metric measure space $(X,\mathsf{d},\mathfrak{m})$, define the relative
entropy functional with respect to the reference measure $\mathfrak{m}$
$\operatorname{Ent}:\mathscr{P}_{2}(X)\to\mathbb{R}\cup\\{+\infty\\}$ as
$\operatorname{Ent}(\mu):=\begin{cases}\int\rho\log\rho\,\mathrm{d}\mathfrak{m}&\text{if
}\mu\ll\mathfrak{m}\text{ and }\mu=\rho\mathfrak{m}\\\
+\infty&\text{otherwise}\end{cases}.$
In the context of this work the entropy functional $\operatorname{Ent}$ is
lower semicontinuous with respect to Wasserstein convergence; this is not
always true in the non-compact case, when it might happen that
$\mathfrak{m}(X)=+\infty$.
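As a concrete illustration of Definition 1.6, the entropy can be evaluated numerically when $\mathfrak{m}$ is the Lebesgue measure on $[0,1]$; note how concentrating the mass raises the entropy. This is a discretized sketch, not part of the theory:

```python
import numpy as np

def entropy(rho, grid):
    """Numerical Ent(mu) = integral of rho*log(rho) dm for mu = rho*m,
    with m the Lebesgue measure on the grid interval (trapezoid rule)."""
    vals = np.where(rho > 0, rho * np.log(np.clip(rho, 1e-300, None)), 0.0)
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid)))

x = np.linspace(0.0, 1.0, 10_001)
print(entropy(np.ones_like(x), x))    # uniform density: entropy 0
rho = np.where(x < 0.5, 2.0, 0.0)     # mass concentrated on [0, 1/2]
print(entropy(rho, x))                # 2*(1/2)*log 2 = log 2, about 0.693
```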
I can now give the definitions of CD condition and CD space.
###### Definition 1.7.
A metric measure space $(X,\mathsf{d},\mathfrak{m})$ is said to satisfy the
(weak) $\mathsf{CD}(K,\infty)$ condition and to be a (weak)
$\mathsf{CD}(K,\infty)$ space, if for every pair of absolutely continuous measures
$\mu_{0},\mu_{1}\in\mathscr{P}_{2}(X)$ there exists a Wasserstein geodesic
with constant speed $(\mu_{t})_{t\in[0,1]}\subset\mathscr{P}_{2}(X)$
connecting them, along which the relative entropy functional is $K$-convex,
that is
$\operatorname{Ent}(\mu_{t})\leq(1-t)\operatorname{Ent}(\mu_{0})+t\operatorname{Ent}(\mu_{1})-t(1-t)\frac{K}{2}W_{2}^{2}(\mu_{0},\mu_{1}),\qquad\text{for
every }t\in[0,1].$ (2)
Moreover $(X,\mathsf{d},\mathfrak{m})$ is said to satisfy a strong
$\mathsf{CD}(K,\infty)$ condition and to be a strong $\mathsf{CD}(K,\infty)$
space if, for every pair of absolutely continuous measures
$\mu_{0},\mu_{1}\in\mathscr{P}_{2}(X)$, the relative entropy functional is
$K$-convex along every Wasserstein geodesic with constant speed connecting
them.
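A minimal numerical illustration of inequality (2): on $(\mathbb{R},|\cdot|,\mathcal{L}^{1})$, which satisfies $\mathsf{CD}(0,\infty)$, the Wasserstein geodesic between two unit Gaussians consists of translated unit Gaussians, so the entropy is constant along it and the $0$-convexity inequality holds with equality. The closed form for the Gaussian entropy is standard; the rest is a toy check:

```python
import numpy as np

def gaussian_entropy(sigma):
    # Ent(N(m, sigma^2)) w.r.t. Lebesgue measure = -(1/2) log(2 pi e sigma^2)
    return -0.5 * np.log(2 * np.pi * np.e * sigma ** 2)

# Along mu_t = N((1-t)m0 + t*m1, 1) the entropy is constant, so for K = 0
# inequality (2) holds with equality at every t.
E = gaussian_entropy(1.0)
for t in (0.25, 0.5, 0.75):
    assert E <= (1 - t) * E + t * E + 1e-12
print(E)   # about -1.419
```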
Let me also state a very useful proposition, which provides a simple strategy
to prove the (weak) $\mathsf{CD}(K,\infty)$ condition. Its proof can be found
in [10]; nevertheless, I present a brief sketch of it, in order to be self-contained.
###### Proposition 1.8.
The metric measure space $(X,\mathsf{d},\mathfrak{m})$ is a
$\mathsf{CD}(K,\infty)$ space if for every pair of absolutely continuous
measures $\mu_{0},\mu_{1}\in\mathscr{P}_{2}(X)$ there exists a midpoint
$\eta\in\mathscr{P}_{2}(X)$ of $\mu_{0}$ and $\mu_{1}$, absolutely continuous
with respect to $\mathfrak{m}$, satisfying
$\operatorname{Ent}(\eta)\leq\frac{1}{2}\operatorname{Ent}(\mu_{0})+\frac{1}{2}\operatorname{Ent}(\mu_{1})-\frac{K}{8}W_{2}^{2}(\mu_{0},\mu_{1}).$
(3)
###### Proof.
Given two absolutely continuous measures
$\mu_{0},\mu_{1}\in\mathscr{P}_{2}(X)$, define $\mu_{1/2}$ as a midpoint of
$\mu_{0}$ and $\mu_{1}$ satisfying (3). Similarly define $\mu_{1/4}$ as a
midpoint of $\mu_{0}$ and $\mu_{1/2}$, and $\mu_{3/4}$ as a midpoint of
$\mu_{1/2}$ and $\mu_{1}$, both satisfying (3). Proceeding in this way, it is possible to
define $\mu_{t}$ for all dyadic times
$t\in\big{\\{}\frac{k}{2^{h}}:h\in\mathbb{N}^{+},k=1,\dots,2^{h}-1\big{\\}}$.
An easy induction argument on $h$ shows that
$\operatorname{Ent}(\mu_{t})\leq(1-t)\operatorname{Ent}(\mu_{0})+t\operatorname{Ent}(\mu_{1})-t(1-t)\frac{K}{2}W_{2}^{2}(\mu_{0},\mu_{1}).$
for every dyadic time
$t\in\\{\frac{k}{2^{h}}:h\in\mathbb{N}^{+},k=1,\dots,2^{h}-1\\}$. Defining the
geodesic $(\mu_{t})_{t\in[0,1]}$ as the continuous extension, the lower
semicontinuity of the entropy ensures that, for every $t\in[0,1]$, the measure
$\mu_{t}$ satisfies inequality (2). ∎
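The dyadic bisection in the proof above can be sketched in code. In this toy version the "measures" are just points on the real line and the midpoint selection is the metric midpoint; the point is only the recursion pattern, not the entropy estimates:

```python
def dyadic_interpolation(m0, m1, midpoint, depth):
    """Build points at dyadic times t = k / 2**depth by repeated bisection,
    following the scheme in the proof of Proposition 1.8."""
    pts = {0.0: m0, 1.0: m1}
    for h in range(depth):
        step = 1.0 / 2 ** (h + 1)
        for k in range(2 ** (h + 1)):
            t = k * step
            if t not in pts:
                # midpoint of the two already-constructed neighbours
                pts[t] = midpoint(pts[t - step], pts[t + step])
    return pts

# On the real line the metric midpoint is the average, and the dyadic
# construction recovers the constant-speed geodesic t -> (1-t)m0 + t*m1.
pts = dyadic_interpolation(0.0, 8.0, lambda a, b: (a + b) / 2, depth=3)
print(pts[0.25], pts[0.625])   # 2.0 5.0
```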
In the last part of this subsection I want to present the relation between
curvature dimension bounds and non-branching conditions. The most important
result in this context was proven by Rajala and Sturm in [6]:
###### Theorem 1.9.
Every strong $\mathsf{CD}(K,\infty)$ metric measure space
$(X,\mathsf{d},\mathfrak{m})$ is essentially non-branching, that is, for every
pair of absolutely continuous measures $\mu_{0},\mu_{1}\in\mathscr{P}_{2}(X)$, every
optimal geodesic plan connecting them is concentrated on a non-branching set
of geodesics.
The work of Rajala and Sturm was then generalized by Schultz [7] to the
context of very strict CD spaces.
###### Definition 1.10.
A metric measure space $(X,\mathsf{d},\mathfrak{m})$ is called a very strict
$\mathsf{CD}(K,\infty)$ space if for every pair of absolutely continuous measures
$\mu_{0},\mu_{1}\in\mathscr{P}_{2}(X)$ there exists an optimal geodesic plan
$\eta\in\operatorname{OptGeo}(\mu_{0},\mu_{1})$, so that the entropy
functional $\operatorname{Ent}$ satisfies the K-convexity inequality along
$(\operatorname{restr}_{t_{0}}^{t_{1}})_{\\#}(f\eta)$ for every
$t_{0}<t_{1}\in[0,1]$, and for all bounded Borel functions
$f:\operatorname{Geo}(X)\to\mathbb{R}^{+}$ with $\int f\,\mathrm{d}\eta=1$.
As the reader can easily notice, these spaces are not in general essentially
non-branching, but they satisfy a weaker condition that I will call weak
essentially non-branching.
###### Definition 1.11 (Weak Essentially Non-Branching).
A metric measure space $(X,\mathsf{d},\mathfrak{m})$ is said to be weakly
essentially non-branching if for every pair of absolutely continuous measures
$\mu_{0},\mu_{1}\in\mathscr{P}_{2}(X)$, there exists an optimal geodesic plan
connecting them, that is concentrated on a non-branching set of geodesics.
###### Theorem 1.12 (Schultz [7]).
Every very strict $\mathsf{CD}(K,\infty)$ space is weakly essentially non-
branching.
Notice that the very strict CD condition is intermediate between the weak and
the strong one. It is easy to find examples of very strict CD spaces which
are not strong CD spaces, while it is not obvious whether the very strict
condition is strictly stronger than the weak one. In this work I am going to
present an example of a highly branching (weak) CD space which is not very strict CD.
### 1.3 Measured Gromov Hausdorff Convergence and Stability of CD Spaces
In this subsection I introduce (following [12]) a notion of convergence for
metric measure spaces, that is called measured Gromov Hausdorff convergence.
Roughly speaking, it is the combination of Hausdorff topology on the metric
side and weak topology on the measure side. In order to properly define the
measured Gromov Hausdorff convergence, I first have to introduce the
notion of $\varepsilon$-isometry.
###### Definition 1.13.
A measurable map
$f:(X,\mathsf{d},\mathfrak{m})\to(X^{\prime},\mathsf{d}^{\prime},\mathfrak{m}^{\prime})$
between two metric measure spaces is called an $\varepsilon$-isometry if
1. 1.
it almost preserves the distances, that is:
$\left|\mathsf{d}\left(f(x),f\left(x^{\prime}\right)\right)-\mathsf{d}\left(x,x^{\prime}\right)\right|\leq\varepsilon\quad\text{for
every }x,x^{\prime}\in X,$
2. 2.
it is almost surjective, that is:
$\forall y\in X^{\prime},\,\,\text{there exists }x\in X\text{ such that
}\,\mathsf{d}(f(x),y)\leq\varepsilon.$
###### Definition 1.14.
Let $(X_{k},\mathsf{d}_{k},\mathfrak{m}_{k})_{k\in\mathbb{N}}$ and
$(X,\mathsf{d},\mathfrak{m})$ be compact metric measure spaces. It is said
that the sequence $(X_{k},\mathsf{d}_{k},\mathfrak{m}_{k})_{k\in\mathbb{N}}$
converges to $(X,\mathsf{d},\mathfrak{m})$, in the measured Gromov Hausdorff
sense, if for every $k$ there exists a measurable $\varepsilon_{k}$-isometry
$f_{k}:X_{k}\to X$, where $\varepsilon_{k}\to 0$, such that
$(f_{k})_{\\#}\mathfrak{m}_{k}\rightharpoonup\mathfrak{m}\quad\text{as
}k\to\infty.$ (4)
The measured Gromov Hausdorff convergence can in some sense be metrized by the
$\mathbb{D}$ distance, introduced by Sturm in [11]. Moreover, in [2] Gigli,
Mondino and Savaré showed that some different notions of convergence for
(pointed) metric measure spaces are equivalent to the (pointed) measured
Gromov Hausdorff convergence.
I end this subsection by stating the stability of the (weak) CD condition with
respect to the measured Gromov Hausdorff convergence.
###### Theorem 1.15.
Let $(X_{k},\mathsf{d}_{k},\mathfrak{m}_{k})_{k\in\mathbb{N}}$ be a sequence
of compact metric measure spaces converging in the measured Gromov Hausdorff
sense to a compact metric measure space $(X,\mathsf{d},\mathfrak{m})$. Given
$K\in\mathbb{R}$, if each $(X_{k},\mathsf{d}_{k},\mathfrak{m}_{k})$ satisfies the weak curvature dimension condition $\mathsf{CD}(K,\infty)$, then $(X,\mathsf{d},\mathfrak{m})$ satisfies $\mathsf{CD}(K,\infty)$ as well.
## 2 Definition of the Metric Measure Spaces
Figure 1: The metric measure space $(X_{k,\varepsilon},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,\varepsilon})$ with $\varepsilon>0$.
Figure 2: The metric measure space $(X_{k,0},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,0})$.
In this section I am going to introduce the metric measure spaces that will be
studied in the rest of this work. The definitions that follow are rather involved, so I invite the reader to look at Figures 1 and 2 for a better understanding.
###### Definition 2.1.
Fix $\varepsilon,k\in\mathbb{R}$ such that $0\leq\varepsilon<k<\frac{1}{2}$
and let $\varphi:\mathbb{R}\to[0,1]$ be a continuous function such that
$\int\varphi=k$, $\varphi\leq 1$ on the whole $\mathbb{R}$ and $\varphi=0$
outside $[-k,k]$. Define the function $f_{k,\varepsilon}:[-1,1]\to\mathbb{R}^{+}$ by prescribing
$f_{k,\varepsilon}^{\prime\prime}(x)=\varphi(x),\qquad
f_{k,\varepsilon}^{\prime}(-1)=0,\qquad f_{k,\varepsilon}(-1)=\varepsilon.$
Consequently define the set
$X_{k,\varepsilon}=\\{(x,y)\in\mathbb{R}^{2}\,:\,x\in[-1,1]\text{ and }0\leq
y\leq f_{k,\varepsilon}(x)\\}.$
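To make the construction concrete, here is a Python sketch (an illustrative aside, assuming the particular choice $\varphi(x)=\max(0,1-|x|/k)$, a triangular bump which is continuous, bounded by $1$, supported in $[-k,k]$ and with integral $k$). It builds $f_{k,\varepsilon}$ by integrating $f^{\prime\prime}=\varphi$ twice and checks the properties used later in the text: $\sup f^{\prime}\leq k$ and $f_{k,\varepsilon}<3k$.

```python
import numpy as np

# Sketch of Definition 2.1 with an assumed concrete profile
# phi(x) = max(0, 1 - |x|/k): continuous, <= 1, supported in [-k, k],
# with integral equal to k.
k, eps = 0.1, 0.05              # parameters with 0 <= eps < k < 1/2

def phi(x):
    return max(0.0, 1.0 - abs(x) / k)

# Integrate f'' = phi twice, with f'(-1) = 0 and f(-1) = eps
# (composite trapezoid rule on a fine grid).
xs = np.linspace(-1.0, 1.0, 20001)
dx = xs[1] - xs[0]
phis = np.array([phi(x) for x in xs])
fp = np.concatenate([[0.0], np.cumsum((phis[1:] + phis[:-1]) / 2 * dx)])
f = eps + np.concatenate([[0.0], np.cumsum((fp[1:] + fp[:-1]) / 2 * dx)])

# Checks from the text: int phi = k, sup f' <= k, and f < 3k on [-1,1].
assert abs(((phis[1:] + phis[:-1]) / 2 * dx).sum() - k) < 1e-4
assert fp.max() <= k + 1e-6
assert f.max() < 3 * k
```

The bound $f_{k,\varepsilon}<3k$ follows since $f^{\prime}\leq\int\varphi=k$, hence $f_{k,\varepsilon}\leq\varepsilon+k(1+k)<3k$.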
In the following I will use this notation:
$L=X_{k,0}\cap\big{(}\\{f_{k,0}=0\\}\times\mathbb{R}\big{)}\quad\text{and}\quad
C=X_{k,0}\cap\big{(}\\{f_{k,0}\neq 0\\}\times\mathbb{R}\big{)},$
For the sake of simplicity, I will not make the $k$ dependence explicit, because it will be clear from the context.
###### Definition 2.2.
Given $\varepsilon,k\in\mathbb{R}$ with $0<\varepsilon<k<\frac{1}{2}$ and
$K\geq 1$, define the measure $\mathfrak{m}_{k,K,\varepsilon}$ on
$X_{k,\varepsilon}$ as
$\mathfrak{m}_{k,K,\varepsilon}=m_{k,K,\varepsilon}(x,y)\cdot\mathscr{L}^{2}|_{X_{k,\varepsilon}}:=\frac{1}{f_{k,\varepsilon}(x)}\exp\left(-K\left(\frac{y}{f_{k,\varepsilon}(x)}\right)^{2}\right)\mathscr{L}^{2}|_{X_{k,\varepsilon}}.$
Moreover, for every $0<k<\frac{1}{2}$ and $K\geq 1$, define the measure
$\mathfrak{m}_{k,K,0}$ on $X_{k,0}$ as
$\mathfrak{m}_{k,K,0}:={\left.\kern-1.2ptC_{K}\cdot\chi_{\\{f_{k,0}(x)=0\\}}\cdot\mathcal{H}^{1}\right|_{\\{y=0\\}}}+\chi_{\\{f_{k,0}(x)>0\\}}\frac{1}{f_{k,0}(x)}\exp\left(-K\left(\frac{y}{f_{k,0}(x)}\right)^{2}\right)\cdot\mathscr{L}^{2}|_{X_{k,0}},$
where
$C_{K}=\int_{0}^{1}e^{-Ky^{2}}\,\mathrm{d}y.$
Notice that a simple change of variable shows that
$(\mathtt{p}_{x})_{\\#}\mathfrak{m}_{k,K,0}=(\mathtt{p}_{x})_{\\#}\mathfrak{m}_{k,K,\varepsilon}=C_{K}\cdot\chi_{\\{-1\leq
x\leq 1\\}}\cdot\mathcal{H}^{1},$
for every suitable $k$, $K$ and $\varepsilon$. Moreover, since I have imposed
$\varepsilon<k$, it is easy to realize that $f_{k,\varepsilon}(x)<3k$ for
every $x\in[-1,1]$ (see Figure 1).
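The change of variable $s=y/f_{k,\varepsilon}(x)$ behind this identity can also be checked numerically. The following Python sketch (an illustrative aside; the specific fibre heights and the midpoint quadrature rule are my own assumptions) verifies that the fibre integral of the density $m_{k,K,\varepsilon}$ equals $C_{K}$, independently of the value of $f_{k,\varepsilon}(x)$.

```python
import math

# Numerical check of the fibre mass behind
# (p_x)_# m_{k,K,eps} = C_K * H^1 on [-1,1]: for any x with f(x) > 0,
# (1/f(x)) * int_0^{f(x)} exp(-K (y/f(x))^2) dy = int_0^1 exp(-K s^2) ds.

def fibre_mass(fx, K, n=20000):
    # midpoint rule for (1/fx) * int_0^fx exp(-K (y/fx)^2) dy
    h = fx / n
    return sum(math.exp(-K * (((i + 0.5) * h) / fx) ** 2)
               for i in range(n)) * h / fx

def C(K, n=20000):
    # midpoint rule for C_K = int_0^1 exp(-K s^2) ds
    h = 1.0 / n
    return sum(math.exp(-K * ((i + 0.5) * h) ** 2) for i in range(n)) * h

K = 2.0
for fx in (0.01, 0.1, 0.29):        # assumed sample fibre heights f(x)
    assert abs(fibre_mass(fx, K) - C(K)) < 1e-6
```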
In the following I am going to prove that for suitable $k$ and $K$ the metric
measure space
$(X_{k,\varepsilon},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,\varepsilon})$ is a
$\mathsf{CD}(0,\infty)$ space for every $\varepsilon<k$. In particular, in the
next four sections I will consider the metric measure space
$(X_{k,\varepsilon},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,\varepsilon})$, but I will omit the parameters $k$, $K$ and $\varepsilon$ from the subscripts, in order to ease the notation. Later, in the last section, I am
going to prove the measured Gromov Hausdorff convergence of the spaces
$(X_{k,\varepsilon},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,\varepsilon})$ to
the space $(X_{k,0},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,0})$, as
$\varepsilon$ goes to $0$. Combining this with the stability result (Theorem 1.15), it will follow that $(X_{k,0},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,0})$ is itself a $\mathsf{CD}(0,\infty)$ space.
## 3 Preliminary Algebraic Lemmas
In this section I simply state and prove four algebraic lemmas that will be
fundamental in the proof of the main theorem. In particular, they will only help in carrying out the computations, and they do not hide any particularly sophisticated idea. For this reason I do not spend many words on them and go straight to the proofs.
###### Lemma 3.1.
There exists a constant $C$ such that for every $A\in\mathbb{R}^{+}$ and every
$\delta$ with $|\delta|<\frac{1}{32}$ it holds
$\frac{1}{2}\log(A)\leq\log\bigg{(}1+\bigg{(}\frac{1}{2}+\delta\bigg{)}(A-1)\bigg{)}+C\delta^{2}$
(5)
###### Proof.
Notice that for some $A\in\mathbb{R}^{+}$ inequality (5) holds even without the term $+C\delta^{2}$. In particular, this is the case if and only if
$\sqrt{A}\leq 1+\bigg{(}\frac{1}{2}+\delta\bigg{)}(A-1).$ (6)
Through elementary computation it is easy to show that inequality (6) holds
for every suitable $\delta$ if
$A\notin\bigg{[}1-\frac{8|\delta|}{(2\delta+1)^{2}},1+\frac{8|\delta|}{(2\delta+1)^{2}}\bigg{]}\subset[1-16|\delta|,1+16|\delta|].$
Therefore, in order to conclude the proof, it is sufficient to prove
inequality (5) for $A\in[1-16|\delta|,1+16|\delta|]$. In this case
$\displaystyle\log\bigg{(}1+\bigg{(}\frac{1}{2}+\delta\bigg{)}(A-1)\bigg{)}$
$\displaystyle=\log\bigg{(}1+\frac{1}{2}(A-1)+\delta(A-1)\bigg{)}$
$\displaystyle=\log\bigg{(}1+\frac{1}{2}(A-1)\bigg{)}+\int_{0}^{\delta(A-1)}\frac{1}{1+\frac{1}{2}(A-1)+t}\,\mathrm{d}t$
$\displaystyle\geq\log\bigg{(}1+\frac{1}{2}(A-1)\bigg{)}-\int_{0}^{|\delta(A-1)|}\frac{1}{1-8|\delta|-16\delta^{2}}\,\mathrm{d}t$
$\displaystyle\geq\log\bigg{(}1+\frac{1}{2}(A-1)\bigg{)}-2|\delta(A-1)|$
$\displaystyle\geq\frac{1}{2}\log(A)-32\delta^{2},$
where the last inequality follows from the concavity of the logarithm. ∎
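The proof yields the explicit constant $C=32$. The following Python sketch (an illustrative aside; the sampling ranges are my own assumptions) checks inequality (5) with this constant on randomly sampled values of $A$ and $\delta$.

```python
import math, random

# Random sanity check of Lemma 3.1 with the explicit constant C = 32
# obtained in the proof.  Note that 1 + (1/2 + delta)(A - 1) > 0 for
# every A > 0 and |delta| < 1/32, so the logarithm is well defined.
random.seed(0)
C = 32.0
for _ in range(100000):
    A = random.uniform(1e-3, 10.0)
    d = random.uniform(-1 / 32, 1 / 32)
    lhs = 0.5 * math.log(A)
    rhs = math.log(1.0 + (0.5 + d) * (A - 1.0)) + C * d * d
    assert lhs <= rhs + 1e-12
```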
###### Lemma 3.2.
Given $(x_{0},y_{0}),(x_{1},y_{1})\in X$, let $\gamma:[0,1]\to\mathbb{R}$ be the function
$t\mapsto\bigg{(}(1-t)\frac{y_{0}}{f(x_{0})}+t\frac{y_{1}}{f(x_{1})}\bigg{)}f((1-t)x_{0}+tx_{1}).$
Then:
1. 1.
for every $t\in[0,1]$ it holds
$\left|\frac{\gamma^{\prime}(t)}{x_{1}-x_{0}}-\frac{y_{1}-y_{0}}{x_{1}-x_{0}}\right|\leq
3k,$
2. 2.
if $k<\frac{1}{4}$, for every $t\in[0,1]$ it holds
$\left|\gamma^{\prime\prime}(t)\right|\leq(x_{1}-x_{0})^{2}\frac{2k}{f((1-t)x_{0}+tx_{1})}\left(\left|\frac{y_{1}-y_{0}}{x_{1}-x_{0}}\right|+2\right).$
###### Proof.
The first derivative of $\gamma$ is
$\gamma^{\prime}(t)=\bigg{(}\frac{y_{1}}{f(x_{1})}-\frac{y_{0}}{f(x_{0})}\bigg{)}f((1-t)x_{0}+tx_{1})+\bigg{(}(1-t)\frac{y_{0}}{f(x_{0})}+t\frac{y_{1}}{f(x_{1})}\bigg{)}f^{\prime}((1-t)x_{0}+tx_{1})(x_{1}-x_{0}).$
Therefore, denoting $x_{t}=(1-t)x_{0}+tx_{1}$, it holds
$\displaystyle\frac{\gamma^{\prime}(t)}{x_{1}-x_{0}}$
$\displaystyle=\frac{y_{1}-y_{0}}{x_{1}-x_{0}}+\bigg{(}\frac{1}{f(x_{1})}-\frac{1}{f(x_{t})}\bigg{)}\frac{y_{1}}{x_{1}-x_{0}}f(x_{t})+\bigg{(}\frac{1}{f(x_{t})}-\frac{1}{f(x_{0})}\bigg{)}\frac{y_{0}}{x_{1}-x_{0}}f(x_{t})$
$\displaystyle\quad+\bigg{(}(1-t)\frac{y_{0}}{f(x_{0})}+t\frac{y_{1}}{f(x_{1})}\bigg{)}f^{\prime}(x_{t})$
$\displaystyle=\frac{y_{1}-y_{0}}{x_{1}-x_{0}}+\frac{y_{1}}{f(x_{1})}\frac{f(x_{t})-f(x_{1})}{x_{1}-x_{0}}+\frac{y_{0}}{f(x_{0})}\frac{f(x_{0})-f(x_{t})}{x_{1}-x_{0}}$
$\displaystyle\quad+\bigg{(}(1-t)\frac{y_{0}}{f(x_{0})}+t\frac{y_{1}}{f(x_{1})}\bigg{)}f^{\prime}(x_{t}).$
On the other hand, it is possible to perform the following estimate
$\displaystyle\left|\frac{y_{1}}{f(x_{1})}\frac{f(x_{t})-f(x_{1})}{x_{1}-x_{0}}\right|$
$\displaystyle\leq\left|\frac{y_{1}}{f(x_{1})}\right|\left|\frac{f(x_{t})-f(x_{1})}{x_{1}-x_{0}}\right|\leq\left|\frac{f(x_{t})-f(x_{1})}{x_{1}-x_{0}}\right|\leq\sup
f^{\prime}\leq k,$
and the same calculation can be done for the symmetric term, thus
$\left|\frac{y_{0}}{f(x_{0})}\frac{f(x_{0})-f(x_{t})}{x_{1}-x_{0}}\right|\leq
k.$
Moreover, a similar procedure shows that
$\left|\bigg{(}(1-t)\frac{y_{0}}{f(x_{0})}+t\frac{y_{1}}{f(x_{1})}\bigg{)}f^{\prime}(x_{t})\right|\leq
k.$
Putting together all these estimates, it is possible to conclude that
$\left|\frac{\gamma^{\prime}(t)}{x_{1}-x_{0}}-\frac{y_{1}-y_{0}}{x_{1}-x_{0}}\right|\leq
3k.$
The second derivative of $\gamma$ is
$\displaystyle\gamma^{\prime\prime}(t)=2\bigg{(}\frac{y_{1}}{f(x_{1})}-\frac{y_{0}}{f(x_{0})}\bigg{)}f^{\prime}(x_{t})(x_{1}-x_{0})+\bigg{(}(1-t)\frac{y_{0}}{f(x_{0})}+t\frac{y_{1}}{f(x_{1})}\bigg{)}f^{\prime\prime}(x_{t})(x_{1}-x_{0})^{2}$
Consider the first term of the right-hand side; through algebraic manipulations similar to the ones performed in the first part of the proof, I obtain
$\displaystyle 2\bigg{(}\frac{y_{1}}{f(x_{1})}-$
$\displaystyle\frac{y_{0}}{f(x_{0})}\bigg{)}f^{\prime}(x_{t})(x_{1}-x_{0})$
$\displaystyle=2\frac{f^{\prime}(x_{t})(x_{1}-x_{0})^{2}}{f(x_{t})}\bigg{[}\frac{y_{1}-y_{0}}{x_{1}-x_{0}}+\frac{y_{1}}{f(x_{1})}\frac{f(x_{t})-f(x_{1})}{x_{1}-x_{0}}+\frac{y_{0}}{f(x_{0})}\frac{f(x_{0})-f(x_{t})}{x_{1}-x_{0}}\bigg{]}.$
Using the same estimates as before, I get
$\displaystyle\left|2\bigg{(}\frac{y_{1}}{f(x_{1})}-\frac{y_{0}}{f(x_{0})}\bigg{)}f^{\prime}(x_{t})(x_{1}-x_{0})\right|$
$\displaystyle\leq
2(x_{1}-x_{0})^{2}\left|\frac{f^{\prime}(x_{t})}{f(x_{t})}\right|\bigg{(}\left|\frac{y_{1}-y_{0}}{x_{1}-x_{0}}\right|+2k\bigg{)}$
$\displaystyle\leq\frac{2k(x_{1}-x_{0})^{2}}{f(x_{t})}\bigg{(}\left|\frac{y_{1}-y_{0}}{x_{1}-x_{0}}\right|+2k\bigg{)}.$
On the other hand, the second term of the right-hand side can also be easily estimated:
$\left|\bigg{(}(1-t)\frac{y_{0}}{f(x_{0})}+t\frac{y_{1}}{f(x_{1})}\bigg{)}f^{\prime\prime}(x_{t})(x_{1}-x_{0})^{2}\right|\leq(x_{1}-x_{0})^{2}.$
Adding these last two inequalities and using that $f(x_{t})\leq 3k$, I can conclude
$\left|\gamma^{\prime\prime}(t)\right|\leq(x_{1}-x_{0})^{2}\bigg{[}1+\frac{2k}{f(x_{t})}\bigg{(}\left|\frac{y_{1}-y_{0}}{x_{1}-x_{0}}\right|+2k\bigg{)}\bigg{]}\leq(x_{1}-x_{0})^{2}\frac{2k}{f(x_{t})}\left(\left|\frac{y_{1}-y_{0}}{x_{1}-x_{0}}\right|+2\right),$
where the last inequality holds if $k<\frac{1}{4}$. ∎
###### Lemma 3.3.
Given a fixed constant $H$, let $y:I=[x_{0},x_{1}]\to\mathbb{R}^{+}$ be a
$C^{2}$ function such that $y^{\prime}(x)\geq\frac{1}{4}$ and
$y^{\prime\prime}(x)\leq H\frac{k}{f(x)}$ for every $x\in I$. Then, for $k$
small enough, it holds
$\log\left(m\left(\frac{x_{0}+x_{1}}{2},y\left(\frac{x_{0}+x_{1}}{2}\right)\right)\right)\geq\frac{1}{2}\log(m(x_{0},y(x_{0})))+\frac{1}{2}\log(m(x_{1},y(x_{1})))+\frac{K}{128f(x_{1})^{2}}(x_{1}-x_{0})^{2}.$
###### Proof.
Before going into the proof, I want to point out that the inequality I have to
prove is basically a $K$-convexity inequality. The strategy of the proof
consists in deducing the $K$-convexity from a second derivative estimate. So, start by taking the first derivative:
$\displaystyle\frac{\partial}{\partial
x}\left(\log(f(x))+K\left(\frac{y(x)}{f(x)}\right)^{2}\right)=\frac{f^{\prime}(x)}{f(x)}+2K\frac{y(x)}{f(x)}\left(\frac{y^{\prime}(x)}{f(x)}-\frac{y(x)f^{\prime}(x)}{f(x)^{2}}\right)$
Taking another derivative, I obtain
$\displaystyle\frac{\partial^{2}}{\partial
x^{2}}\left(\log(f(x))+K\left(\frac{y(x)}{f(x)}\right)^{2}\right)$
$\displaystyle=\frac{f^{\prime\prime}(x)}{f(x)}-\frac{f^{\prime}(x)^{2}}{f(x)^{2}}+2K\left(\frac{y^{\prime}(x)}{f(x)}-\frac{y(x)f^{\prime}(x)}{f(x)^{2}}\right)^{2}$
$\displaystyle\,+2K\frac{y(x)}{f(x)}\left(\frac{y^{\prime\prime}(x)}{f(x)}-2\frac{y^{\prime}(x)f^{\prime}(x)}{f(x)^{2}}-\frac{y(x)f^{\prime\prime}(x)}{f(x)^{2}}+2\frac{y(x)f^{\prime}(x)^{2}}{f(x)^{3}}\right).$
Therefore, neglecting some positive terms, the following estimate holds
$\displaystyle\frac{\partial^{2}}{\partial
x^{2}}\left(\log(f(x))+K\left(\frac{y(x)}{f(x)}\right)^{2}\right)$
$\displaystyle\geq
2K\frac{y^{\prime}(x)^{2}}{f(x)^{2}}-\frac{f^{\prime}(x)^{2}}{f(x)^{2}}-8K\left|\frac{y(x)y^{\prime}(x)f^{\prime}(x)}{f(x)^{3}}\right|-2K\left|\frac{y(x)y^{\prime\prime}(x)}{f(x)^{2}}\right|$
$\displaystyle-2K\left|\frac{y(x)^{2}f^{\prime\prime}(x)}{f(x)^{3}}\right|.$
Noticing that $\left|\frac{y(x)}{f(x)}\right|\leq 1$,
$\left|f^{\prime}(x)\right|\leq k$ and $\left|f(x)\right|\leq 3k$, I deduce
$\displaystyle\frac{\partial^{2}}{\partial x^{2}}$
$\displaystyle\left(\log(f(x))+K\left(\frac{y(x)}{f(x)}\right)^{2}\right)$
$\displaystyle\qquad\qquad\qquad\qquad\geq\frac{2K}{f(x)^{2}}\bigg{(}y^{\prime}(x)^{2}-\frac{k^{2}}{2K}-4k\left|y^{\prime}(x)\right|-\left|f(x)\right|\left|y^{\prime\prime}(x)\right|-\left|f(x)\right|\left|f^{\prime\prime}(x)\right|\bigg{)}$
$\displaystyle\qquad\qquad\qquad\qquad\geq\frac{2K}{f(x)^{2}}\bigg{(}y^{\prime}(x)^{2}-k^{2}-4k\left|y^{\prime}(x)\right|-kH-3k\bigg{)}$
$\displaystyle\qquad\qquad\qquad\qquad\geq\frac{K}{16f(x)^{2}},$
where the last inequality holds for every suitably small $k$. The thesis
follows by making the uniform convexity explicit and noticing that
$f(x_{1})\geq f(x)$ for every $x\in I$. ∎
Performing the same computations as in the previous proof, it is possible to prove the following corollary.
###### Corollary 3.4.
Given a fixed constant $H$, let $y:I=[x_{0},x_{1}]\to\mathbb{R}^{+}$ be a
$C^{2}$ function such that $y^{\prime}(x)\geq\frac{1}{4}$ and
$y^{\prime\prime}(x)\leq H\frac{k}{f(x)}$ for every $x\in I$. Then, for $k$
small enough, it holds
$K\left(\frac{y\left(\frac{x_{0}+x_{1}}{2}\right)}{f\left(\frac{x_{0}+x_{1}}{2}\right)}\right)^{2}\leq\frac{K}{2}\left(\frac{y(x_{0})}{f(x_{0})}\right)^{2}+\frac{K}{2}\left(\frac{y(x_{1})}{f(x_{1})}\right)^{2}-\frac{K}{128f(x_{1})^{2}}(x_{1}-x_{0})^{2}.$
###### Lemma 3.5.
Let $x\in[-1,1]$ and $\delta>0$ such that $x-\delta,x+\delta\in[-1,1]$, then
$\left|\frac{f(x)}{f(x-\delta)}+\frac{f(x)}{f(x+\delta)}-2\right|\leq\frac{[2k^{2}+f(x)]\delta^{2}}{f(x-\delta)f(x+\delta)}.$
###### Proof.
Denote $I_{1}=\int_{x}^{x-\delta}f^{\prime}(t)\,\mathrm{d}t$ and
$I_{2}=\int_{x}^{x+\delta}f^{\prime}(t)\,\mathrm{d}t$, then
$\displaystyle\left|\frac{f(x)}{f(x-\delta)}+\frac{f(x)}{f(x+\delta)}-2\right|$
$\displaystyle=\left|\frac{f(x)}{f(x)+I_{1}}+\frac{f(x)}{f(x)+I_{2}}-2\right|$
$\displaystyle=\left|\frac{-2I_{1}I_{2}-f(x)(I_{1}+I_{2})}{(f(x)+I_{1})(f(x)+I_{2})}\right|$
$\displaystyle\leq\left|\frac{2I_{1}I_{2}}{(f(x)+I_{1})(f(x)+I_{2})}\right|+\left|\frac{f(x)(I_{1}+I_{2})}{(f(x)+I_{1})(f(x)+I_{2})}\right|.$
But the following estimates hold
$\left|I_{1}+I_{2}\right|=\left|\int_{0}^{\delta}f^{\prime}(x+t)-f^{\prime}(x-\delta+t)\,\mathrm{d}t\right|\leq\int_{0}^{\delta}\left|f^{\prime}(x+t)-f^{\prime}(x-\delta+t)\right|\,\mathrm{d}t\leq\delta^{2}\sup\left|f^{\prime\prime}\right|\leq\delta^{2}$
and
$\left|I_{1}\right|,\left|I_{2}\right|\leq k\delta.$
Using these last two estimates, and noticing that $f(x)+I_{1}=f(x-\delta)$ and $f(x)+I_{2}=f(x+\delta)$, I conclude
$\left|\frac{f(x)}{f(x-\delta)}+\frac{f(x)}{f(x+\delta)}-2\right|\leq\frac{[2k^{2}+f(x)]\delta^{2}}{f(x-\delta)f(x+\delta)}.$
∎
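Lemma 3.5 can also be checked numerically. The Python sketch below is an illustrative aside: the profile $f$ is my own assumption, a smooth positive function with $\sup|f^{\prime}|\leq k$ and $\sup|f^{\prime\prime}|\leq 1$, which are the only properties of $f$ used in the proof.

```python
import math, random

# Random sanity check of Lemma 3.5 for an assumed positive profile f
# with sup|f'| <= k and sup|f''| <= 1 (the bounds used in the proof):
# here f(x) = eps + k (1 + sin(pi x / 2)) / 2.
random.seed(1)
k, eps = 0.1, 0.05
f = lambda x: eps + k * (1.0 + math.sin(math.pi * x / 2)) / 2

for _ in range(100000):
    x = random.uniform(-1.0, 1.0)
    delta = random.uniform(0.0, min(1.0 - x, x + 1.0))  # x +- delta in [-1,1]
    lhs = abs(f(x) / f(x - delta) + f(x) / f(x + delta) - 2.0)
    rhs = (2 * k * k + f(x)) * delta ** 2 / (f(x - delta) * f(x + delta))
    assert lhs <= rhs + 1e-12
```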
## 4 How to Prove Convexity of the Entropy
In this section I prove an important result (Proposition 4.2) that will be fundamental in the following, in order to prove the CD condition. This result relies on the possibility of computing the density of a pushforward measure through the Jacobi equation. For example, consider two measures
$\mu_{0},\mu_{1}\in\mathscr{P}_{2}(\mathbb{R}^{2})$ which are absolutely
continuous with respect to the Lebesgue measure $\mathscr{L}^{2}$, with
density $\rho_{0}$ and $\rho_{1}$ respectively. Suppose there exists a smooth
one-to-one map $T:\mathbb{R}^{2}\to\mathbb{R}^{2}$ such that
$T_{\\#}\mu_{0}=\mu_{1}$, then it is well known that
$\rho_{1}(T(x,y))J_{T}(x,y)=\rho_{0}(x,y),$ (7)
for $\mu_{0}$-almost every $(x,y)$. As shown in [1] the assumptions on the map
$T$ can be relaxed and it is sufficient to require $T$ to be approximately
differentiable and injective outside a $\mu_{0}$-null set. However, in this
work I am going to deal with transport maps which are not necessarily approximately differentiable, but enjoy a different rigidity property. Therefore,
the version of Jacobi equation I will need is the following, which is an easy
consequence of Proposition 2.1 in [5].
###### Proposition 4.1.
Let $\mu_{0},\mu_{1}\in\mathscr{P}_{2}(\mathbb{R}^{2})$ be absolutely
continuous with respect to the Lebesgue measure $\mathscr{L}^{2}$. Assume that
there exists a map $T=(T_{1},T_{2})$ which is injective outside a
$\mu_{0}$-null set, such that $T_{\\#}\mu_{0}=\mu_{1}$. Suppose also that
$T_{1}$ locally does not depend on the $y$ coordinate and is increasing in $x$, while $T_{2}(x,y)$ is increasing in $y$ for every fixed $x$. Then the Jacobi
equation (7) is satisfied with $J_{T}=\frac{\partial T_{1}}{\partial
x}\frac{\partial T_{2}}{\partial y}$.
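The mechanism of Proposition 4.1 can be illustrated on a toy example (my own assumption, not taken from the text): $\mu_{0}$ uniform on $[0,1]^{2}$ and $T(x,y)=(x^{2},y)$, so that $T_{1}$ is independent of $y$ and increasing in $x$, $T_{2}$ is increasing in $y$, and $J_{T}=\frac{\partial T_{1}}{\partial x}\frac{\partial T_{2}}{\partial y}=2x$. Pushing forward gives the density $\rho_{1}(u,v)=1/(2\sqrt{u})$, and the Jacobi equation (7) holds pointwise.

```python
import math

# Pointwise check of the Jacobi equation (7) for the assumed toy map
# T(x, y) = (x^2, y) with mu_0 uniform on [0,1]^2:
# rho_1(T(x,y)) * J_T(x,y) = rho_0(x,y).
rho0 = lambda x, y: 1.0                       # uniform density on [0,1]^2
rho1 = lambda u, v: 1.0 / (2.0 * math.sqrt(u))  # pushforward density
J_T = lambda x, y: 2.0 * x                    # (dT1/dx) * (dT2/dy)

for i in range(1, 100):
    x, y = i / 100.0, (i % 7) / 7.0
    u, v = x * x, y
    assert abs(rho1(u, v) * J_T(x, y) - rho0(x, y)) < 1e-12
```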
Now that I have shown that the Jacobi equation can be adapted to the context of this work, I am going to explain how it can be useful in proving convexity of
the entropy functional. The following proposition will be an important element
in the main proof of this article.
###### Proposition 4.2.
Let $\mu_{0},\mu_{1}\in\mathscr{P}_{2}(X)$ be absolutely continuous measures
and let $T:X\to X$ be an optimal transport map between $\mu_{0}$ and
$\mu_{1}$, in particular $T_{\\#}\mu_{0}=\mu_{1}$. Consider a midpoint
$\mu_{1/2}$ of $\mu_{0}$ and $\mu_{1}$, assume that
$\mu_{1/2}=[M\circ(\operatorname{id},T)]_{\\#}\mu_{0}$ where the map
$M:X\times X\to X$ is a midpoint selection. Suppose also that the maps $T$ and
$M\circ(\operatorname{id},T):X\to X$ are injective outside a set of measure
zero and they satisfy the Jacobi equation (7), with suitable Jacobian functions $J_{T}$ and $J_{M\circ(\operatorname{id},T)}$. If
$\log\left(m\big{(}M((x,y),T(x,y))\big{)}J_{M\circ(\operatorname{id},T)}(x,y)\right)\geq\frac{1}{2}\log\left(m(T(x,y))J_{T}(x,y)\right)+\frac{1}{2}\log(m(x,y))$
for $\mu_{0}$ almost every $(x,y)$, then
$\operatorname{Ent}(\mu_{1/2})\leq\frac{1}{2}\operatorname{Ent}(\mu_{0})+\frac{1}{2}\operatorname{Ent}(\mu_{1}).$
(8)
###### Proof.
Suppose $\mu_{0}=\rho_{0}\mathfrak{m}=\tilde{\rho}_{0}\mathscr{L}^{2}$,
$\mu_{1}=\rho_{1}\mathfrak{m}=\tilde{\rho}_{1}\mathscr{L}^{2}$ and
$\mu_{1/2}=\rho_{1/2}\mathfrak{m}=\tilde{\rho}_{1/2}\mathscr{L}^{2}$. It is easy to realize that, in order to prove (8), it is sufficient to prove that
$\log\big{(}\rho_{1/2}\big{(}M((x,y),T(x,y))\big{)}\big{)}\leq\frac{1}{2}\log\big{(}\rho_{1}(T(x,y))\big{)}+\frac{1}{2}\log(\rho_{0}(x,y)),$
(9)
for $\mu_{0}$-almost every $(x,y)$. On the other hand, the validity of Jacobi
equation ensures that
$\tilde{\rho}_{1}(T(x,y))J_{T}(x,y)=\tilde{\rho}_{0}(x,y),$
for $\mu_{0}$-almost every $(x,y)$, and thus that
$\rho_{1}(T(x,y))m(T(x,y))J_{T}(x,y)=\rho_{0}(x,y)m(x,y).$
for $\mu_{0}$-almost every $(x,y)$. Analogously, it is possible to infer that
$\rho_{1/2}\big{(}M((x,y),T(x,y))\big{)}m\big{(}M((x,y),T(x,y))\big{)}J_{M\circ(\operatorname{id},T)}(x,y)=\rho_{0}(x,y)m(x,y).$
for $\mu_{0}$-almost every $(x,y)$. Therefore (9) is equivalent to
$\log\left(\frac{\rho_{0}(x,y)m(x,y)}{m\big{(}M((x,y),T(x,y))\big{)}J_{M\circ(\operatorname{id},T)}(x,y)}\right)\leq\frac{1}{2}\log\left(\frac{\rho_{0}(x,y)m(x,y)}{m(T(x,y))J_{T}(x,y)}\right)+\frac{1}{2}\log(\rho_{0}(x,y)).$
Some easy algebraic computations show that this last inequality is equivalent to
$\log\left(m\big{(}M((x,y),T(x,y))\big{)}J_{M\circ(\operatorname{id},T)}(x,y)\right)\geq\frac{1}{2}\log\left(m(T(x,y))J_{T}(x,y)\right)+\frac{1}{2}\log(m(x,y)),$
concluding the proof. ∎
Notice that the result of this last proposition gains importance if seen in
relation with Proposition 1.8. In fact, Proposition 4.2 provides a strategy to prove entropy convexity in a suitable midpoint, which is sufficient to deduce the CD condition, according to Proposition 1.8.
I conclude the section with a simple corollary of Proposition 4.2, which will
be useful in the last section of this work. This result is a straightforward
consequence of the previous proof, and takes full advantage of the fact that the Jacobi equation allows one to prove entropy convexity pointwise as well as globally.
###### Corollary 4.3.
Under the same assumptions as in Proposition 4.2, it holds that
$\operatorname{Ent}\big{(}[M\circ(\operatorname{id},T)]_{\\#}(f\mu_{0})\big{)}\leq\frac{1}{2}\operatorname{Ent}(f\mu_{0})+\frac{1}{2}\operatorname{Ent}(T_{\\#}(f\mu_{0})),$
for every bounded measurable function $f:X\to\mathbb{R}^{+}$, with $\int
f\,\mathrm{d}\mu_{0}=1$.
## 5 Definition of the Midpoint
As previously pointed out, in order to prove the CD condition I am going to prove entropy convexity in a suitable midpoint of any pair of absolutely continuous measures. Notice that in a highly branching metric measure space the choice of a midpoint can be made with great freedom. This is because, in general, both the optimal transport map and the geodesic interpolation are not unique, and thus they must be selected in a clever way. In this section, for any pair of absolutely continuous measures, I define a suitable midpoint; in the following section I will show that it satisfies the convexity of the entropy. This midpoint selection is actually quite complicated, but it does the job; in particular, I believe there is no way to obtain a considerably simpler one.
Before going into the details, I introduce the following subsets of
$\mathbb{R}^{2}\times\mathbb{R}^{2}$ that will play an important role in the
definition of the midpoint.
###### Definition 5.1.
Define the sets
$V,D,H,H_{0},H_{\frac{1}{2}},H_{1}\subset\mathbb{R}^{2}\times\mathbb{R}^{2}$
as:
$V:=\left\\{\left(\left(x_{0},y_{0}\right),\left(x_{1},y_{1}\right)\right)\in\mathbb{R}^{2}\times\mathbb{R}^{2}:\left|x_{0}-x_{1}\right|<\left|y_{0}-y_{1}\right|\right\\},$
$D:=\left\\{\left(\left(x_{0},y_{0}\right),\left(x_{1},y_{1}\right)\right)\in\mathbb{R}^{2}\times\mathbb{R}^{2}:\left|x_{0}-x_{1}\right|=\left|y_{0}-y_{1}\right|\right\\},$
$H:=\left\\{\left(\left(x_{0},y_{0}\right),\left(x_{1},y_{1}\right)\right)\in\mathbb{R}^{2}\times\mathbb{R}^{2}:\left|x_{0}-x_{1}\right|>\left|y_{0}-y_{1}\right|\right\\}=H_{0}\cup
H_{1},$
where
$H_{0}:=\left\\{\left(\left(x_{0},y_{0}\right),\left(x_{1},y_{1}\right)\right)\in\mathbb{R}^{2}\times\mathbb{R}^{2}:\frac{1}{2}\left|x_{0}-x_{1}\right|\geq\left|y_{0}-y_{1}\right|\right\\},$
$H_{1}:=\left\\{\left(\left(x_{0},y_{0}\right),\left(x_{1},y_{1}\right)\right)\in\mathbb{R}^{2}\times\mathbb{R}^{2}:\left|x_{0}-x_{1}\right|>\left|y_{0}-y_{1}\right|>\frac{1}{2}|x_{0}-x_{1}|\right\\}.$
The first step in the selection of the midpoint consists in choosing a
suitable optimal transport map between two given absolutely continuous
measures. To this aim I use a result by Rajala [5], who was able to show the existence of an optimal transport map with several nice properties. The
idea behind his work is to use consecutive minimization, in order to select a
particular optimal transport plan. The result of Rajala can be summarized in
the following statement.
###### Proposition 5.2.
Given two measures $\mu_{0},\mu_{1}\in\mathscr{P}(\mathbb{R}^{2})$ which are
absolutely continuous with respect to the Lebesgue measure $\mathscr{L}^{2}$,
there exists a measurable optimal transport map $T=(T_{1},T_{2})$, injective
outside a $\mu_{0}$-null set, satisfying $T_{\\#}\mu_{0}=\mu_{1}$, with some
nice rigidity properties. In particular the optimal transport plan
$(\operatorname{id},T)_{\\#}\mu_{0}$ is concentrated in a set $\Gamma\subset
X\times X$, such that for all
$\left(x,y_{1}\right),\left(x,y_{2}\right),\left(x_{1},y\right),\left(x_{2},y\right)\in\left\\{(x,y)\in
X:((x,y),T(x,y))\in\Gamma\right\\}$ it holds that
$\text{if }y_{1}\neq y_{2}\text{ and
}T_{1}\left(x,y_{1}\right)=T_{1}\left(x,y_{2}\right),\text{ then
}\frac{T_{2}\left(x,y_{1}\right)-T_{2}\left(x,y_{2}\right)}{y_{1}-y_{2}}\geq
0$
and
$\text{if }x_{1}\neq x_{2}\text{ and
}T_{2}\left(x_{1},y\right)=T_{2}\left(x_{2},y\right),\text{ then
}\frac{T_{1}\left(x_{1},y\right)-T_{1}\left(x_{2},y\right)}{x_{1}-x_{2}}\geq
0.$
Moreover for $\mu_{0}$-almost every $(x,y)$ I have that
$\begin{array}[]{l}T_{1}\text{ is locally constant in }y,\text{ if
}((x,y),T(x,y))\in H\text{ and }\\\ T_{2}\text{ is locally constant in
}x,\text{ if }((x,y),T(x,y))\in V\end{array}.$
Combining these two properties with some monotonicity arguments, one can deduce that the function $T_{1}(x,y)$ is increasing in $x$ for every fixed $y$ and
the function $T_{2}(x,y)$ is increasing in $y$ for every fixed $x$, as a
consequence for $\mu_{0}$-almost every $(x,y)$ it holds
$\frac{\partial T_{1}}{\partial x}\geq 0\text{ and }\frac{\partial
T_{2}}{\partial y}\geq 0,\text{ if }((x,y),T(x,y))\in H\cup V.$
Finally, let me point out that, since $(\operatorname{id},T)_{\\#}\mu_{0}\in\operatorname{OptPlans}(\mu_{0},\mu_{1})$, the usual cyclical monotonicity holds; thus I can assume that for every
$\left(z_{1},w_{1}\right),\left(z_{2},w_{2}\right)\in\Gamma$ it holds
$\mathsf{d}_{\infty}^{2}(z_{1},w_{1})+\mathsf{d}_{\infty}^{2}(z_{2},w_{2})\leq\mathsf{d}_{\infty}^{2}(z_{1},w_{2})+\mathsf{d}_{\infty}^{2}(z_{2},w_{1}).$
Now fix two measures $\mu_{0},\mu_{1}\in\mathscr{P}(X)$ absolutely continuous
with respect to the reference measure $\mathfrak{m}$, and thus also with
respect to the Lebesgue measure $\mathscr{L}^{2}$. Call $T$ the optimal
transport map between $\mu_{0}$ and $\mu_{1}$, that satisfies the requirements
of Proposition 5.2. Moreover, denote by $\Gamma$ the full $(\operatorname{id},T)_{\\#}\mu_{0}$-measure set with all the monotonicity properties stated in Proposition 5.2. In order to identify a midpoint of
$\mu_{0}$ and $\mu_{1}$, I need to choose a proper midpoint interpolation,
that is a measurable map $M:X\times X\to X$ such that
$\mathsf{d}_{\infty}(M(z,w),z)=\mathsf{d}_{\infty}(M(z,w),w)=\frac{1}{2}\mathsf{d}_{\infty}(z,w)\quad\text{for every }(z,w)\in X\times X.$ (10)
The desired midpoint will then be $M_{\\#}\big{(}(\operatorname{id},T)_{\\#}\mu_{0}\big{)}=[M\circ(\operatorname{id},T)]_{\\#}\mu_{0}$.
Let me now define the midpoint interpolation map $M$ that I will use for the
proof of the main theorem. This definition is actually quite involved; in fact, the map $M$ is defined in different ways on the sets $V$, $D$, $H_{0}$ and $H_{1}$. In particular, the precise definition is the following:
* •
If $\big{(}(x_{0},y_{0}),(x_{1},y_{1})\big{)}\in V\cup D$
$M\big{(}(x_{0},y_{0}),(x_{1},y_{1})\big{)}:=\left(\frac{x_{0}+x_{1}}{2},\frac{y_{0}+y_{1}}{2}\right)$
* •
If $\big{(}(x_{0},y_{0}),(x_{1},y_{1})\big{)}\in H_{0}$,
$M\big{(}(x_{0},y_{0}),(x_{1},y_{1})\big{)}=\left(\frac{x_{0}+x_{1}}{2},\frac{1}{2}\bigg{(}\frac{y_{0}}{f(x_{0})}+\frac{y_{1}}{f(x_{1})}\bigg{)}f\left(\frac{x_{0}+x_{1}}{2}\right)\right).$
* •
If $\big{(}(x_{0},y_{0}),(x_{1},y_{1})\big{)}\in H_{1}$, with $x_{0}<x_{1}$
and $y_{0}<y_{1}$, introduce the quantity
$\tilde{y}(x_{0},x_{1},y_{0})=\frac{1}{2}\bigg{(}\frac{y_{0}}{f(x_{0})}+\frac{y_{0}+\frac{x_{1}-x_{0}}{2}}{f(x_{1})}\bigg{)}f\left(\frac{x_{1}+x_{0}}{2}\right)-y_{0},$
and consequently define
$\displaystyle M$ $\displaystyle\big{(}(x_{0},y_{0}),(x_{1},y_{1})\big{)}$
$\displaystyle\qquad\qquad=\left(\frac{x_{0}+x_{1}}{2},y_{0}+\tilde{y}(x_{0},x_{1},y_{0})+\left(\frac{x_{1}-x_{0}}{2}-\tilde{y}(x_{0},x_{1},y_{0})\right)\left(2\frac{y_{1}-y_{0}}{x_{1}-x_{0}}-1\right)\right).$
In the other cases $M$ can be defined analogously, but I prefer not to do it explicitly, in order to avoid unnecessary complications in this definition and in what follows. In particular, every proof from now on will be carried out using only the definition above, it being understood that it can be easily adapted to the other cases.
With such a complex definition, it is not completely obvious that the map $M$ satisfies condition (10), but this can be easily proven.
###### Proposition 5.3.
The map $M$ actually defines a midpoint interpolation.
###### Proof.
This is completely obvious if $\big{(}(x_{0},y_{0}),(x_{1},y_{1})\big{)}\in V\cup D$. When $\big{(}(x_{0},y_{0}),(x_{1},y_{1})\big{)}\in H_{0}$, the statement is an easy consequence of Lemma 3.2, provided that $k$ is sufficiently small, and of the fact that
$0\leq\frac{1}{2}\bigg{(}\frac{y_{0}}{f(x_{0})}+\frac{y_{1}}{f(x_{1})}\bigg{)}f\left(\frac{x_{0}+x_{1}}{2}\right)\leq
f\left(\frac{x_{0}+x_{1}}{2}\right).$
On the other hand, if $\big{(}(x_{0},y_{0}),(x_{1},y_{1})\big{)}\in H_{1}$ it is sufficient to notice that the second coordinate of the point
$(\bar{x},\bar{y}):=\left(\frac{x_{0}+x_{1}}{2},y_{0}+\tilde{y}(x_{0},x_{1},y_{0})+\left(\frac{x_{1}-x_{0}}{2}-\tilde{y}(x_{0},x_{1},y_{0})\right)\left(2\frac{y_{1}-y_{0}}{x_{1}-x_{0}}-1\right)\right)$
is a convex combination of the midpoint of a $45$ degree segment and the midpoint of a curve $\gamma:[x_{0},x_{1}]\to X$, with
$\frac{1}{2}-3k\leq\gamma^{\prime}(x)\leq\frac{1}{2}+3k\quad\text{ for every
}x\in[x_{0},x_{1}],$
according to Lemma 3.2. As a consequence, for a suitably small $k$, this
ensures both that $(\bar{x},\bar{y})\in X$ and that $(\bar{x},\bar{y})$ is a
$\mathsf{d}_{\infty}$-midpoint of $(x_{0},y_{0})$ and $(x_{1},y_{1})$. ∎
In order to efficiently apply Proposition 4.2, I need to compute the Jacobian
of the map $M\circ(\operatorname{id},T)$. Observe that the way $M$ is defined,
combined with the properties of $T$, ensures that
$M\circ(\operatorname{id},T)$ satisfies all the rigidity assumptions of
Proposition 4.1. Therefore, proving the following result will allow me to use Proposition 4.1 to compute $J_{M\circ(\operatorname{id},T)}$.
###### Proposition 5.4.
The map $M\circ(\operatorname{id},T)$ is injective outside a $\mu_{0}$-null
set.
###### Proof.
First of all notice that it is sufficient to prove the injectivity of $M$ on
$\Gamma$, because $\Gamma$ has full
$(\operatorname{id},T)_{\\#}\mu_{0}$-measure. Thus suppose by contradiction
that there exist
$((x_{1},y_{1}),(x_{1}^{\prime},y_{1}^{\prime}))\neq((x_{2},y_{2}),(x_{2}^{\prime},y_{2}^{\prime}))\in\Gamma$
such that
$M((x_{1},y_{1}),(x_{1}^{\prime},y_{1}^{\prime}))=M((x_{2},y_{2}),(x_{2}^{\prime},y_{2}^{\prime})).$
Following the proof of Lemma 3.7 in [5], I can limit myself to consider the
case when
$((x_{1},y_{1}),(x_{1}^{\prime},y_{1}^{\prime}))\neq((x_{2},y_{2}),(x_{2}^{\prime},y_{2}^{\prime}))\in\Gamma\cap
H.$
In this case, one can easily realize that the cyclical monotonicity properties
of $\Gamma$ imply that $x_{1}=x_{2}$ and $x_{1}^{\prime}=x_{2}^{\prime}$. So,
if
$((x_{1},y_{1}),(x_{1}^{\prime},y_{1}^{\prime})),((x_{2},y_{2}),(x_{2}^{\prime},y_{2}^{\prime}))\in\Gamma\cap
H_{0}$, the thesis simply follows from the definition of $M$ and from the monotonicity of $T_{2}$. If instead
$((x_{1},y_{1}),(x_{1}^{\prime},y_{1}^{\prime})),((x_{2},y_{2}),(x_{2}^{\prime},y_{2}^{\prime}))\in\Gamma\cap
H_{1}$ the statement is a consequence of the monotonicity property of $T_{2}$
associated with the fact that the quantity
$y+\tilde{y}(x,x^{\prime},y)+\left(\frac{x^{\prime}-x}{2}-\tilde{y}(x,x^{\prime},y)\right)\left(2\frac{y^{\prime}-y}{x^{\prime}-x}-1\right)$
is locally increasing in $y$ and $y^{\prime}$, when
$((x,y),(x^{\prime},y^{\prime}))\in H_{1}$ (with $x<x^{\prime}$ and
$y<y^{\prime}$). The first monotonicity is not straightforward, therefore I am
going to prove it. First of all notice that, according to Lemma 3.5 and since
$f\leq 3k$, it holds that
$2\left|\frac{\partial}{\partial
y}\tilde{y}(x,x^{\prime},y)\right|=\left|\frac{f\big{(}\frac{x+x^{\prime}}{2}\big{)}}{f(x)}+\frac{f\big{(}\frac{x+x^{\prime}}{2}\big{)}}{f(x^{\prime})}-2\right|\leq\frac{[2k^{2}+3k]\big{(}\frac{x^{\prime}-x}{2}\big{)}^{2}}{f(x)f(x^{\prime})}.$
Moreover, the geometry of the set $X$ allows one to deduce that
$\frac{x^{\prime}-x}{2}\leq y+\frac{x^{\prime}-x}{2}\leq f(x^{\prime})\leq
f(x)+k(x^{\prime}-x),$
and consequently $\big{(}1-2k\big{)}\frac{x^{\prime}-x}{2}\leq f(x)\leq
f(x^{\prime})$. On the other hand, observe that Lemma 3.2 guarantees that
$\left(\frac{x^{\prime}-x}{2}-\tilde{y}(x,x^{\prime},y)\right)\leq\left(\frac{1}{2}+3k\right)\frac{x^{\prime}-x}{2}$
These estimates allow one to conclude that
$\displaystyle\frac{\partial}{\partial y}\bigg{[}y+\tilde{y}(x,x^{\prime},y)$
$\displaystyle+\left(\frac{x^{\prime}-x}{2}-\tilde{y}(x,x^{\prime},y)\right)\left(2\frac{y^{\prime}-y}{x^{\prime}-x}-1\right)\bigg{]}$
$\displaystyle\geq\frac{1}{2}-3k-\left|\frac{\partial}{\partial
y}\tilde{y}(x,x^{\prime},y)\right|\left(2-2\frac{y^{\prime}-y}{x^{\prime}-x}\right)$
$\displaystyle\geq\frac{1}{2}-3k-\frac{[2k^{2}+3k]}{(1-2k)^{2}}>0,$
for $k$ sufficiently small. The case when
$((x_{1},y_{1}),(x_{1}^{\prime},y_{1}^{\prime}))\in\Gamma\cap H_{1}$ and
$((x_{2},y_{2}),(x_{2}^{\prime},y_{2}^{\prime}))\in\Gamma\cap H_{0}$
can be treated analogously. ∎
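As a quick sanity check on the final bound, the quantity $\frac{1}{2}-3k-\frac{2k^{2}+3k}{(1-2k)^{2}}$ can be evaluated numerically; the sketch below (my own, purely illustrative and not part of the proof) confirms that it is positive for small $k$ and that positivity fails once $k$ is no longer small.

```python
# Numerical sanity check (illustrative only, not part of the proof):
# the quantity 1/2 - 3k - (2k^2 + 3k) / (1 - 2k)^2 from the final
# estimate of Proposition 5.4 is positive for sufficiently small k.
def final_bound(k):
    return 0.5 - 3 * k - (2 * k**2 + 3 * k) / (1 - 2 * k) ** 2

assert final_bound(0.01) > 0   # positive for small k
assert final_bound(0.2) < 0    # positivity fails for larger k
```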
## 6 Proof of CD Condition
In the previous sections I have introduced all I need to prove that the metric
measure space $(X,\mathsf{d}_{\infty},\mathfrak{m})$ satisfies the
$\mathsf{CD}(0,\infty)$ condition. Let me now go into the details of the
proof.
###### Theorem 6.1.
For suitable $k$ and $K$, the metric measure space
$(X_{k,\varepsilon},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,\varepsilon})$ is a
$\mathsf{CD}(0,\infty)$ space, for every $0<\varepsilon<k$.
###### Proof.
Let $\mu_{0},\mu_{1}\in\mathscr{P}(X)$ be absolutely continuous with respect
to the reference measure $\mathfrak{m}$, then, according to Proposition 1.8,
it is sufficient to prove that
$\operatorname{Ent}\big{(}[M\circ(\operatorname{id},T)]_{\\#}\mu_{0}\big{)}\leq\frac{1}{2}\operatorname{Ent}(\mu_{0})+\frac{1}{2}\operatorname{Ent}(\mu_{1}).$
Given Proposition 4.2, it is enough to check the validity of
$\log\left(m\big{(}M((x,y),T(x,y))\big{)}J_{M\circ(\operatorname{id},T)}(x,y)\right)\geq\frac{1}{2}\log\left(m(T(x,y))J_{T}(x,y)\right)+\frac{1}{2}\log(m(x,y))$
(11)
for $\mu_{0}$-almost every $(x,y)$. For $\mu_{0}$-almost every $(x,y)\in V\cup
D$ this can be done following [5]. Thus I will treat the other cases,
applying Lemma 3.3.
Notice that, for $\mu_{0}$-almost every $(x,y)$ such that $((x,y),T(x,y))\in
H_{0}\cap\Gamma$, I have
$M\circ(\operatorname{id},T)(x,y)=\left(\frac{x+T_{1}}{2},\frac{1}{2}\bigg{(}\frac{y}{f(x)}+\frac{T_{2}}{f(T_{1})}\bigg{)}f\left(\frac{x+T_{1}}{2}\right)\right).$
Then, according to what I did in previous sections, it is possible to apply
Proposition 4.1 and deduce that
$J_{M\circ(\operatorname{id},T)}(x,y)=\frac{1}{2}\left(1+\frac{\partial
T_{1}}{\partial
x}\right)\frac{1}{2}\bigg{(}\frac{1}{f(x)}+\frac{\frac{\partial
T_{2}}{\partial y}}{f(T_{1})}\bigg{)}f\left(\frac{x+T_{1}}{2}\right),$
for $\mu_{0}$-almost every $(x,y)$ such that $((x,y),T(x,y))\in
H_{0}\cap\Gamma$. Furthermore it holds that
$m\big{(}M((x,y),T(x,y))\big{)}=f\left(\frac{x+T_{1}}{2}\right)^{-1}\exp\left(\frac{-K}{4}\bigg{(}\frac{y}{f(x)}+\frac{T_{2}}{f(T_{1})}\bigg{)}^{2}\right),$
thus, putting together these last two relations, I obtain
$\displaystyle\log\left(m\big{(}M((x,y),T(x,y))\big{)}J_{M\circ(\operatorname{id},T)}(x,y)\right)$
$\displaystyle=\log\Bigg{(}\frac{1}{2}\left(1+\frac{\partial T_{1}}{\partial
x}\right)\frac{1}{2}\bigg{(}\frac{1}{f(x)}+\frac{\frac{\partial
T_{2}}{\partial y}}{f(T_{1})}\bigg{)}$
$\displaystyle\qquad\exp\Bigg{(}\frac{-K}{4}\bigg{(}\frac{y}{f(x)}+\frac{T_{2}}{f(T_{1})}\bigg{)}^{2}\Bigg{)}\Bigg{)}$
$\displaystyle=\log\left(\frac{1}{2}\left(1+\frac{\partial T_{1}}{\partial
x}\right)\right)+\log\left(\frac{1}{2}\bigg{(}\frac{1}{f(x)}+\frac{\frac{\partial
T_{2}}{\partial y}}{f(T_{1})}\bigg{)}\right)$
$\displaystyle\quad-K\bigg{(}\frac{1}{2}\bigg{(}\frac{y}{f(x)}+\frac{T_{2}}{f(T_{1})}\bigg{)}\bigg{)}^{2}$
On the other hand it holds
$\log(m(x,y))=\log\left(\frac{1}{f(x)}\exp\left(-K\bigg{(}\frac{y}{f(x)}\bigg{)}^{2}\right)\right)=\log(1)+\log\left(\frac{1}{f(x)}\right)-K\bigg{(}\frac{y}{f(x)}\bigg{)}^{2}$
and, applying once again Proposition 4.1, this time to the map $T$, also
$\displaystyle\log\left(m(T(x,y))J_{T}(x,y)\right)$
$\displaystyle=\log\left(\frac{\partial T_{1}}{\partial x}\frac{\partial
T_{2}}{\partial
y}\frac{1}{f(T_{1})}\exp\left(-K\bigg{(}\frac{T_{2}}{f(T_{1})}\bigg{)}^{2}\right)\right)$
$\displaystyle=\log\left(\frac{\partial T_{1}}{\partial
x}\right)+\log\left(\frac{\frac{\partial T_{2}}{\partial
y}}{f(T_{1})}\right)-K\bigg{(}\frac{T_{2}}{f(T_{1})}\bigg{)}^{2},$
for $\mu_{0}$-almost every $(x,y)$ such that $((x,y),T(x,y))\in
H_{0}\cap\Gamma$. Putting together these last three equations, inequality (11)
follows from the concavity of the functions $\log$ and $-Kx^{2}$.
Passing now to the last case, for $\mu_{0}$-almost every $(x,y)$ such that
$((x,y),T(x,y))\in H_{1}\cap\Gamma$ (with $x<T_{1}(x,y)$ and $y<T_{2}(x,y)$) I
have
$\displaystyle(S_{1},S_{2})(x,y)$
$\displaystyle:=M\circ(\operatorname{id},T)(x,y)$
$\displaystyle\quad=\left(\frac{x+T_{1}}{2},y+\tilde{y}(x,T_{1},y)+\left(\frac{T_{1}-x}{2}-\tilde{y}(x,T_{1},y)\right)\left(2\frac{T_{2}-y}{T_{1}-x}-1\right)\right).$
Reasoning as before, Proposition 4.1 ensures that, for $\mu_{0}$-almost every
$(x,y)$ such that $((x,y),T(x,y))\in H_{1}\cap\Gamma$ (with $x<T_{1}(x,y)$ and
$y<T_{2}(x,y)$),
$J_{M\circ(\operatorname{id},T)}(x,y)=\frac{\partial S_{1}}{\partial
x}\frac{\partial S_{2}}{\partial y},$
and in particular it holds that
$\frac{\partial S_{1}}{\partial x}=\frac{1}{2}\left(1+\frac{\partial
T_{1}}{\partial x}\right),$
and
$\displaystyle\frac{\partial S_{2}}{\partial y}$
$\displaystyle=1+\frac{\partial}{\partial
y}\tilde{y}(x,T_{1},y)\left(2-2\frac{T_{2}-y}{T_{1}-x}\right)+2\left(\frac{T_{1}-x}{2}-\tilde{y}(x,T_{1},y)\right)\frac{\frac{\partial
T_{2}}{\partial y}-1}{T_{1}-x}$ $\displaystyle=1+\frac{\partial}{\partial
y}\tilde{y}(x,T_{1},y)\left(2-2\frac{T_{2}-y}{T_{1}-x}\right)+\left(\frac{\partial
T_{2}}{\partial
y}-1\right)\left(1-\frac{\tilde{y}(x,T_{1},y)}{\frac{T_{1}-x}{2}}\right)$
$\displaystyle=1+\frac{\partial}{\partial
y}\tilde{y}(x,T_{1},y)\left(2-2\frac{T_{2}-y}{T_{1}-x}\right)+\left(\frac{\partial
T_{2}}{\partial
y}-1\right)\left(\frac{1}{2}-\frac{\tilde{y}(x,T_{1},y)-\frac{T_{1}-x}{4}}{\frac{T_{1}-x}{2}}\right).$
I can now consider the explicit value of $\tilde{y}(x,T_{1},y)$ and notice
that
$\displaystyle\tilde{y}(x,T_{1},y)-\frac{T_{1}-x}{4}$
$\displaystyle=\frac{1}{2}\left[\bigg{(}\frac{y}{f(x)}+\frac{y+\frac{T_{1}-x}{2}}{f(T_{1})}\bigg{)}f\left(\frac{T_{1}+x}{2}\right)-2y-\frac{T_{1}-x}{2}\right]$
$\displaystyle=\frac{1}{2}\left[y\left(\frac{f\left(\frac{T_{1}+x}{2}\right)}{f(x)}+\frac{f\left(\frac{T_{1}+x}{2}\right)}{f(T_{1})}-2\right)+\frac{T_{1}-x}{2}\left(\frac{f\left(\frac{T_{1}+x}{2}\right)}{f(T_{1})}-1\right)\right].$
Moreover, I can easily obtain that
$\left|\frac{f\left(\frac{T_{1}+x}{2}\right)}{f(T_{1})}-1\right|=\left|\frac{f\left(\frac{T_{1}+x}{2}\right)-f(T_{1})}{f(T_{1})}\right|\leq\sup
f^{\prime}\cdot\frac{\frac{T_{1}-x}{2}}{f(T_{1})}=k\cdot\frac{\frac{T_{1}-x}{2}}{f(T_{1})},$
thus applying Lemma 3.5 and noticing that $\frac{y}{f(x)}\leq 1$, I can
conclude that
$\displaystyle\left|\tilde{y}(x,T_{1},y)-\frac{T_{1}-x}{4}\right|\leq\frac{\left[2k^{2}+k+f\left(\frac{T_{1}+x}{2}\right)\right]\left(\frac{T_{1}-x}{2}\right)^{2}}{f(T_{1})}\leq\frac{[2k^{2}+4k]\left(\frac{T_{1}-x}{2}\right)^{2}}{f(T_{1})}$
where in the last inequality I used that $f(x)\leq 3k$ for every $x\in[-1,1]$.
Therefore it is possible to infer that
$\left|\frac{\tilde{y}(x,T_{1},y)-\frac{T_{1}-x}{4}}{\frac{T_{1}-x}{2}}\right|\leq\frac{[2k^{2}+4k]\frac{T_{1}-x}{2}}{f(T_{1})}\leq\frac{1}{64}\frac{\frac{T_{1}-x}{2}}{f(T_{1})}\leq\frac{\frac{T_{1}-x}{2}}{f(T_{1})},$
(12)
for a sufficiently small $k$. Now suppose that
$\frac{\partial}{\partial
y}\tilde{y}(x,T_{1},y)=\frac{1}{2}\frac{f\big{(}\frac{x+T_{1}}{2}\big{)}}{f(x)}+\frac{1}{2}\frac{f\big{(}\frac{x+T_{1}}{2}\big{)}}{f(T_{1})}-1>0,$
then, after noticing that the geometry of the set $X$ ensures that
$\frac{T_{1}-x}{2}\leq f(T_{1})$, it is possible to apply Lemma 3.1 and obtain
that
$\displaystyle\log\left(\frac{\partial S_{2}}{\partial y}\right)$
$\displaystyle\geq\log\left(1+\left(\frac{\partial T_{2}}{\partial
y}-1\right)\left(\frac{1}{2}-\frac{\tilde{y}(x,T_{1},y)-\frac{T_{1}-x}{4}}{\frac{T_{1}-x}{2}}\right)\right)$
(13) $\displaystyle\geq\log(1)+\log\left(\frac{\partial T_{2}}{\partial
y}\right)-C\frac{\left(\frac{T_{1}-x}{2}\right)^{2}}{f(T_{1})^{2}}.$
On the other hand, it is easy to realize that the point
$M\circ(\operatorname{id},T)(x,y)$ lies on a curve, which is a convex
combination of a 45-degree line and of the curve
$t\mapsto\bigg{(}(1-t)\frac{y}{f(x)}+t\,\frac{y+\frac{T_{1}-x}{2}}{f(T_{1})}\bigg{)}f((1-t)x+tT_{1}).$
Therefore, up to taking a suitably small $k$, Lemma 3.2 allows us to apply Lemma
3.3 and obtain
$\log\left(m\left(M\circ(\operatorname{id},T)(x,y)\right)\right)\geq\log(m(x,y))+\log(m(T(x,y)))+\frac{K}{128f(T_{1})^{2}}(T_{1}-x)^{2}.$
(14)
Inequality (11) then follows as before, putting together (13) and (14) and
taking $K$ sufficiently large.
Suppose instead that
$\frac{\partial}{\partial
y}\tilde{y}(x,T_{1},y)=\frac{1}{2}\frac{f\big{(}\frac{x+T_{1}}{2}\big{)}}{f(x)}+\frac{1}{2}\frac{f\big{(}\frac{x+T_{1}}{2}\big{)}}{f(T_{1})}-1<0,$
then notice that
$\displaystyle\log\left(f\bigg{(}\frac{T_{1}+x}{2}\bigg{)}\right)-\frac{1}{2}\big{(}\log(f(x))+\log(f(T_{1}))\big{)}=\frac{1}{2}\log\left(\frac{f\left(\frac{T_{1}+x}{2}\right)}{f(x)}\cdot\frac{f\left(\frac{T_{1}+x}{2}\right)}{f(T_{1})}\right)$
(15)
$\displaystyle\leq\log\left(\frac{1}{2}\frac{f\left(\frac{T_{1}+x}{2}\right)}{f(x)}+\frac{1}{2}\frac{f\left(\frac{T_{1}+x}{2}\right)}{f(T_{1})}\right)=\log\left(1+\frac{\partial}{\partial
y}\tilde{y}(x,T_{1},y)\right).$
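For clarity, the inequality in (15) is simply the arithmetic-geometric mean inequality combined with the monotonicity of the logarithm: writing $a=\frac{f\left(\frac{T_{1}+x}{2}\right)}{f(x)}$ and $b=\frac{f\left(\frac{T_{1}+x}{2}\right)}{f(T_{1})}$,

```latex
% The step behind (15): AM--GM plus monotonicity of log.
\[
  \frac{1}{2}\log(ab) \;=\; \log\sqrt{ab}
  \;\leq\; \log\!\left(\frac{a+b}{2}\right),
\]
% since \sqrt{ab} \leq (a+b)/2 for all a, b > 0 and log is increasing.
```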
Moreover, according to the estimates done in the proof of Proposition 5.4 and
to (12), it is easy to realize that, for $k$ small enough
$\left(\frac{1}{2}-\frac{\tilde{y}(x,T_{1},y)-\frac{T_{1}-x}{4}}{\frac{T_{1}-x}{2}}\right)\left(1+\frac{\partial}{\partial
y}\tilde{y}(x,T_{1},y)\right)^{-1}=\frac{1}{2}+\tilde{\delta}$
for some $\tilde{\delta}$ such that
$\big{|}\tilde{\delta}\big{|}\leq\frac{1}{32}\frac{\frac{T_{1}-x}{2}}{f(T_{1})}.$
Consequently I can infer that
$\displaystyle\log\left(\frac{\partial S_{2}}{\partial y}\right)$
$\displaystyle\geq\log\left(1+\frac{\partial}{\partial
y}\tilde{y}(x,T_{1},y)+\left(\frac{\partial T_{2}}{\partial
y}-1\right)\left(\frac{1}{2}-\frac{\tilde{y}(x,T_{1},y)-\frac{T_{1}-x}{4}}{\frac{T_{1}-x}{2}}\right)\right)$
(16) $\displaystyle=\log\left(1+\frac{\partial}{\partial
y}\tilde{y}(x,T_{1},y)\right)+\log\left(1+\left(\frac{\partial T_{2}}{\partial
y}-1\right)\left(\frac{1}{2}+\tilde{\delta}\right)\right)$
$\displaystyle\geq\log\left(1+\frac{\partial}{\partial
y}\tilde{y}(x,T_{1},y)\right)+\log(1)+\log\left(\frac{\partial T_{2}}{\partial
y}\right)-C\frac{\left(\frac{T_{1}-x}{2}\right)^{2}}{f(T_{1})^{2}}$
where the last passage follows from Lemma 3.1. Finally it is possible to prove
(11), putting together (15) with (16), applying Corollary 3.4 and taking $K$
sufficiently large. ∎
As I did in Section 4, I exploit the local nature of the Jacobi equation to
improve the last result. The following result is an easy consequence of
Theorem 6.1 and Corollary 4.3, and it will be useful at the end of this work.
###### Corollary 6.2.
Given two absolutely continuous measures $\mu_{0},\mu_{1}\in\mathscr{P}(X)$,
assume that there exists a map $T$ such that $T_{\\#}\mu_{0}=\mu_{1}$,
satisfying all the properties of Proposition 5.2. Then, calling $M$ the
midpoint selection presented in Section 5, it holds that
$\displaystyle\operatorname{Ent}\big{(}[M\circ(\operatorname{id},T)]_{\\#}(f\mu_{0})\big{)}\leq\frac{1}{2}\operatorname{Ent}(f\mu_{0})+\frac{1}{2}\operatorname{Ent}(T_{\\#}(f\mu_{0})),$
for every bounded measurable function $f:X\to\mathbb{R}^{+}$ with $\int
f\,\mathrm{d}\mu_{0}=1$.
## 7 Conclusions
In this last section I draw the conclusions of the work done so far. In
particular, I am going to show why this example is relevant, and I will also
raise some open questions related to strict curvature-dimension bounds. First
of all, let me prove the most important result, which has already been
anticipated in the previous sections.
###### Theorem 7.1.
For suitable $k$ and $K$ the metric measure space
$(X_{k,0},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,0})$ is a
$\mathsf{CD}(0,\infty)$ space.
###### Proof.
I am going to prove that, for every sequence of positive real numbers
$(\varepsilon_{n})_{n\in\mathbb{N}}$ converging to zero, the sequence of
metric measure spaces
$(X_{k,\varepsilon_{n}},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,\varepsilon_{n}})$
converges, in the measured Gromov-Hausdorff sense, to
$(X_{k,0},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,0})$. According to Theorem
1.15 and Theorem 6.1, this is sufficient to conclude the proof, up to choosing
suitable $k$ and $K$.
Define the function $f_{n}:X_{k,\varepsilon_{n}}\to X_{k,0}$ as
$f_{n}(x,y)=\bigg{(}x,y\cdot\frac{f_{k,0}(x)}{f_{k,\varepsilon_{n}}(x)}\bigg{)},$
it is immediate to notice that its image is actually $X_{k,0}$. Moreover, it
is easy to prove that
$(f_{n})_{\\#}\mathfrak{m}_{k,K,\varepsilon_{n}}=\mathfrak{m}_{k,K,0}\quad\text{and}\quad\big{|}\mathsf{d}_{\infty}\big{(}f_{n}(x_{1},y_{1}),f_{n}(x_{2},y_{2})\big{)}-\mathsf{d}_{\infty}\big{(}(x_{1},y_{1}),(x_{2},y_{2})\big{)}\big{|}\leq
2\varepsilon_{n},$
and this shows the desired measured Gromov-Hausdorff convergence. ∎
First of all notice that the space
$(X_{k,0},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,0})$ has different topological
dimensions in different regions of the space. In particular, this shows the
non-constancy of topological dimension also for CD spaces, extending one of
the results by Ketterer and Rajala [3]. Furthermore, the space
$(X_{k,0},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,0})$ is not a very strict
$\mathsf{CD}(K,\infty)$ space for every $K\in\mathbb{R}$, in fact it is not
weakly essentially non-branching (see Theorem 1.12). In order to see this, it
is sufficient to consider an absolutely continuous measure $\mu_{0}$ supported
on $L$ and an absolutely continuous measure $\mu_{1}$ supported on $C$, and
subsequently notice that every $\eta\in\operatorname{OptGeo}(\mu_{0},\mu_{1})$
is supported on branching geodesics. It is then possible to conclude that the
weak CD condition is not sufficient to ensure any type of non-branching
condition. Observe also that no
$\eta\in\operatorname{OptGeo}(\mu_{0},\mu_{1})$ is induced by a map;
consequently, the (weak) CD condition is not sufficient to ensure the existence
of an optimal transport map between two absolutely continuous marginals.
Finally notice that the space
$(X_{k,0},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,0})$ is an example of (weak)
CD space which is not a very strict CD space, and this shows that these two
notions of curvature-dimension bounds are actually different.
For the last part of this section I need to introduce another type of
curvature bound, called the strict CD condition, which is stronger than the
weak CD condition but weaker than the very strict one.
###### Definition 7.2.
A metric measure space $(X,\mathsf{d},\mathfrak{m})$ is called a strict
$\mathsf{CD}(K,\infty)$ space if, for every pair of absolutely continuous
measures $\mu_{0},\mu_{1}\in\mathscr{P}_{2}(X)$, there exists an optimal
geodesic plan $\eta\in\operatorname{OptGeo}(\mu_{0},\mu_{1})$ such that the
entropy functional $\operatorname{Ent}$ satisfies the $K$-convexity inequality along
$f\eta$ for every bounded measurable function
$f:\operatorname{Geo}(X)\to\mathbb{R}^{+}$ with $\int f\,\mathrm{d}\eta=1$.
I am now going to prove that, for suitable constants, the spaces
$(X_{k,\varepsilon},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,\varepsilon})$ with
$0<\varepsilon<k$ are strict CD spaces, while their measured Gromov-Hausdorff
limit as $\varepsilon\to 0$
$(X_{k,0},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,0})$ is not. As a consequence
the following proposition holds.
###### Proposition 7.3.
The strict CD condition is not stable under measured Gromov-Hausdorff
convergence.
Before going into the details of the proofs, I want to make some
clarifications. The fact that the spaces
$(X_{k,\varepsilon},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,\varepsilon})$ are
strict CD is a consequence of Corollary 6.2 and of an iteration argument. On
the other hand the space $(X_{k,0},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,0})$
cannot be a strict CD space, because of the particular topological structure I
have already highlighted.
###### Proposition 7.4.
For suitable $k$ and $K$ and every $0<\varepsilon<k$ the metric measure space
$(X_{k,\varepsilon},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,\varepsilon})$ is a
strict $\mathsf{CD}(0,\infty)$ space.
###### Proof.
For every $n\in\mathbb{N}$ I am going to define a measurable map
$G_{n}:X\to\operatorname{Geo}(X)$ by induction. In particular I introduce
$G_{0}:X\to\operatorname{Geo}(X)$ as any measurable map such that
$(e_{0},e_{1})\circ G_{0}(x)=(\operatorname{id},T)$ $\mu_{0}$-almost
everywhere, consequently
$(G_{0})_{\\#}\mu_{0}\in\operatorname{OptGeo}(\mu_{0},\mu_{1})$. Given
$G_{n}:X\to\operatorname{Geo}(X)$, define $G_{n+1}:X\to\operatorname{Geo}(X)$
by imposing that:
1. 1.
$e_{r}\circ G_{n+1}=e_{r}\circ G_{n}$ for every
$r\in\big{\\{}\frac{k}{2^{n}},k=0,\dots,2^{n}\big{\\}}$
2. 2.
$e_{\frac{2k+1}{2^{n+1}}}\circ G_{n+1}=M\big{(}e_{\frac{k}{2^{n}}}\circ
G_{n},e_{\frac{k+1}{2^{n}}}\circ G_{n}\big{)}$ $\mu_{0}$-almost everywhere,
where $M$ is the midpoint map defined as in Section 5.
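The inductive construction above can be visualized with a short sketch (entirely my own; the Euclidean midpoint below is only a hypothetical stand-in for the midpoint selection $M$ of Section 5): each step keeps the old dyadic samples and inserts a midpoint between every pair of consecutive ones.

```python
# Illustrative sketch of the dyadic refinement G_0 -> G_1 -> G_2 -> ...
# The Euclidean midpoint stands in for the midpoint selection M.
def midpoint(p, q):
    return tuple((a + b) / 2 for a, b in zip(p, q))

def refine(samples):
    """One inductive step: keep the samples at times k/2^n and insert
    midpoints at times (2k+1)/2^(n+1)."""
    out = []
    for p, q in zip(samples, samples[1:]):
        out += [p, midpoint(p, q)]
    out.append(samples[-1])
    return out

curve = [(0.0, 0.0), (1.0, 2.0)]   # G_0: the curve evaluated at t = 0, 1
for _ in range(3):                  # three refinement steps -> G_3
    curve = refine(curve)
assert len(curve) == 2**3 + 1       # samples at times k/8, k = 0, ..., 8
assert curve[4] == (0.5, 1.0)       # the value at t = 1/2
```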
Notice that, if the optimal transport map that induces
$\big{(}e_{\frac{k}{2^{n}}}\circ G_{n},e_{\frac{k+1}{2^{n}}}\circ
G_{n}\big{)}_{\\#}\mu_{0}$ satisfies all the properties of Proposition 5.2,
then also the maps that induce $\big{(}e_{\frac{2k}{2^{n+1}}}\circ
G_{n+1},e_{\frac{2k+1}{2^{n+1}}}\circ G_{n+1}\big{)}_{\\#}\mu_{0}$ and
$\big{(}e_{\frac{2k+1}{2^{n+1}}}\circ G_{n+1},e_{\frac{2k+2}{2^{n+1}}}\circ
G_{n+1}\big{)}_{\\#}\mu_{0}$ satisfy all the properties of Proposition 5.2.
The reader can easily realize that this is a quite straightforward consequence
of the definition of the map $M$ and of its properties highlighted in Section 6.
This observation shows that the inductive procedure I have introduced can be
carried out in accordance with the previous section; moreover, it is possible
to apply Corollary 6.2 and obtain that
$\begin{split}\operatorname{Ent}\big{(}(e_{s})_{\\#}[(G_{n+1})_{\\#}(f\mu_{0})]\big{)}&\leq\frac{1}{2}\operatorname{Ent}\big{(}(e_{r})_{\\#}[(G_{n+1})_{\\#}(f\mu_{0})]\big{)}+\frac{1}{2}\operatorname{Ent}\big{(}(e_{t})_{\\#}[(G_{n+1})_{\\#}(f\mu_{0})]\big{)}\\\
&\quad-\frac{K}{8}W_{2}^{2}\big{(}(e_{r})_{\\#}[(G_{n+1})_{\\#}(f\mu_{0})],(e_{t})_{\\#}[(G_{n+1})_{\\#}(f\mu_{0})]\big{)},\end{split}$
(17)
where $r=\frac{2k}{2^{n+1}}$, $s=\frac{2k+1}{2^{n+1}}$,
$t=\frac{2k+2}{2^{n+1}}$ and $f$ is any bounded measurable function with $\int
f\,\mathrm{d}\mu_{0}=1$. Notice that, in order to infer (17), I also used that
the map $e_{r}\circ G_{n}$ is injective outside a $\mu_{0}$-null set, as a
consequence of Proposition 5.4.
An inductive argument allows us to conclude that, for every $n\in\mathbb{N}$,
it holds that
$\operatorname{Ent}\big{(}(e_{r})_{\\#}[(G_{n})_{\\#}(f\mu_{0})]\big{)}\leq(1-r)\operatorname{Ent}(f\mu_{0})+r\operatorname{Ent}(T_{\\#}(f\mu_{0}))-\frac{K}{2}r(1-r)W_{2}^{2}(f\mu_{0},T_{\\#}(f\mu_{0})),$
for every $r\in\big{\\{}\frac{k}{2^{n}},k=0,\dots,2^{n}\big{\\}}$, and every
bounded measurable function $f$ with $\int f\,\mathrm{d}\mu_{0}=1$. In fact,
this is completely obvious for $n=0$, and, assuming it true for some $n$, it
is easy to deduce it for $n+1$ using (17).
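The inductive step can also be checked on a toy model (my own construction; here $W_{2}^{2}(\mu_{0},\mu_{1})$ is normalized to $1$, so that the squared Wasserstein distance between the evaluations at times $r$ and $t$ becomes $(t-r)^{2}$): any function on $[0,1]$ satisfying the midpoint inequality of the form (17) at every dyadic scale satisfies the claimed convexity bound at all dyadic times.

```python
# Toy numeric check (illustrative only): E satisfies the midpoint
# inequality E(s) <= (E(r) + E(t))/2 - (K/8)(t - r)^2 with s = (r+t)/2,
# modeling (17) under the normalization W_2^2(mu_0, mu_1) = 1. The
# inductive argument then gives, at every dyadic time r,
#   E(r) <= (1 - r) E(0) + r E(1) - (K/2) r (1 - r).
K = 2.0
E = lambda t: 2 * t * t - t   # one model entropy profile along the geodesic

# The midpoint inequality holds at every dyadic scale ...
for n in range(1, 6):
    for j in range(2**n):
        r, t = j / 2**n, (j + 1) / 2**n
        s = (r + t) / 2
        assert E(s) <= (E(r) + E(t)) / 2 - (K / 8) * (t - r) ** 2 + 1e-12

# ... hence the convexity bound holds at every dyadic time.
for n in range(1, 6):
    for j in range(2**n + 1):
        r = j / 2**n
        assert E(r) <= (1 - r) * E(0) + r * E(1) - (K / 2) * r * (1 - r) + 1e-12
```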
It is now easy to notice that the first property in the definition of
$G_{n+1}$ given $G_{n}$ ensures the existence of a measurable map
$G:X\to\operatorname{Geo}(X)$ such that $G_{n}\to G$ uniformly. Furthermore
it is obvious that
$(e_{r})_{\\#}[(G_{n})_{\\#}(f\mu_{0})]=(e_{r})_{\\#}[G_{\\#}(f\mu_{0})]$
for every $r\in\big{\\{}\frac{k}{2^{n}},k=0,\dots,2^{n}\big{\\}}$, and every
bounded measurable function $f$ with $\int f\,\mathrm{d}\mu_{0}=1$.
Consequently it holds that
$\operatorname{Ent}\big{(}(e_{r})_{\\#}[G_{\\#}(f\mu_{0})]\big{)}\leq(1-r)\operatorname{Ent}((f\mu_{0}))+r\operatorname{Ent}(T_{\\#}(f\mu_{0}))-\frac{K}{2}r(1-r)W_{2}^{2}(f\mu_{0},T_{\\#}(f\mu_{0})),$
for every dyadic time $r$ and every suitable function $f$. Then the lower
semicontinuity of $\operatorname{Ent}$ allows us to infer that the $K$-convexity
inequality of the entropy is satisfied along $G_{\\#}(f\mu_{0})$ for every
suitable function $f$. Finally I can conclude by observing that every optimal
geodesic plan of the type $F\cdot G_{\\#}\mu_{0}$, for a measurable function
$F:\operatorname{Geo}(X)\to\mathbb{R}^{+}$ with $\int
F\,\mathrm{d}(G_{\\#}\mu_{0})=1$, can be written as $G_{\\#}(f\mu_{0})$ for a
suitable measurable $f$ with $\int f\,\mathrm{d}\mu_{0}=1$, since $G$ is
clearly injective. ∎
###### Proposition 7.5.
For every $k$ and $K$ the metric measure space
$(X_{k,0},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,0})$ is not a strict
$\mathsf{CD}(0,\infty)$ space.
$\mu_{0}$$\mu_{1}$$\mu_{s}$ Figure 3: A representation of the geodesic
$(\mu_{t})_{t\in[0,1]}$.
###### Proof.
In this proof I denote by $\mathfrak{m}$ the measure $\mathfrak{m}_{k,K,0}$,
in order to simplify the notation.
For every $t\in[0,1]$ define the measure
$\mu_{t}=\frac{1}{\mathfrak{m}\big{(}[-\frac{1}{2}+t,-\frac{1}{4}+t]\times\mathbb{R}\big{)}}\cdot{\left.\kern-1.2pt\mathfrak{m}\right|_{[-\frac{1}{2}+t,-\frac{1}{4}+t]\times\mathbb{R}}}=\frac{4}{C_{K}}\cdot{\left.\kern-1.2pt\mathfrak{m}\right|_{[-\frac{1}{2}+t,-\frac{1}{4}+t]\times\mathbb{R}}},$
see Figure 3 in order to visualize it. It is easy to realize that
$(\mu_{t})_{t\in[0,1]}$ is the unique geodesic connecting $\mu_{0}$ and
$\mu_{1}$, along which the entropy functional is convex. Moreover, let
$\eta\in\mathscr{P}\big{(}C([0,1],X)\big{)}$ be such that
$(e_{t})_{\\#}\eta=\mu_{t}$ for every $t\in[0,1]$; I am going to prove that
the entropy functional is not convex along $F\eta$, for a suitable bounded
measurable function $F:\operatorname{Geo}(X)\to\mathbb{R}^{+}$ with $\int
F\,\mathrm{d}\eta=1$. Before going on, let me point out that every Wasserstein
geodesic in $\mathscr{P}\big{(}C([0,1],X)\big{)}$ which connects a measure on
$L$ to a measure on $C$ (and thus $\eta$ in particular) consists only of
“horizontal” transports. Therefore the only relevant coordinate, in order to
evaluate the distance $\mathsf{d}_{\infty}$, is the $x$ coordinate. As a
consequence, every such optimal geodesic plan (and $\eta$ in particular) will
depend only on the $x$ coordinate. Some of the considerations I will make in
this proof actually follow from this observation.
Now, define the set
$A:=\\{(x,y)\in\mathbb{R}^{2}:f_{k,0}(x)>0\text{ and }0\leq 2y\leq
f_{k,0}(x)\\}\subset X_{k,0},$
and the quantity
$C^{\prime}_{K}=\int_{0}^{\frac{1}{2}}e^{-Ky^{2}}\,\mathrm{d}y.$
Then fix a time $\bar{t}$ such that $\mu_{\bar{t}}$ is concentrated in $C$,
consider the map $\tilde{F}:\operatorname{Geo}(X)\to\mathbb{R}^{+}$ defined as
$\tilde{F}:=\frac{C_{K}}{C^{\prime}_{K}}\cdot\chi_{A}\circ e_{\bar{t}}$ and
call $\tilde{\mu}_{1}=(e_{1})_{\\#}(\tilde{F}\eta)$. Notice that
$\int\tilde{F}\,\mathrm{d}\eta=1$, thus $\tilde{\mu}_{1}$ is a probability
measure and it is absolutely continuous with respect to $\mathfrak{m}$, with
density $\tilde{\rho}_{1}$ bounded above by $\frac{4}{C^{\prime}_{K}}$, as a
consequence
$\mathfrak{m}(\\{\tilde{\rho}_{1}>0\\})\geq\frac{C^{\prime}_{K}}{4}$. Now,
suppose that
$\mathfrak{m}(\\{\tilde{\rho}_{1}>0\\})=\frac{C^{\prime}_{K}}{4}$, then
$\tilde{\rho}_{1}\equiv\frac{4}{C^{\prime}_{K}}$ on $\\{\tilde{\rho}_{1}>0\\}$
and therefore
$\operatorname{Ent}(\tilde{\mu}_{1})=\log\left(\frac{4}{C^{\prime}_{K}}\right)=\operatorname{Ent}\big{(}(e_{\bar{t}})_{\\#}(\tilde{F}\eta)\big{)}.$
On the other hand
$\operatorname{Ent}\big{(}(e_{0})_{\\#}(\tilde{F}\eta)\big{)}=\operatorname{Ent}(\mu_{0})=\log\left(\frac{4}{C_{K}}\right)<\log\left(\frac{4}{C^{\prime}_{K}}\right),$
and consequently the entropy functional is not convex along $\tilde{F}\eta$.
Otherwise, suppose that
$\mathfrak{m}(\\{\tilde{\rho}_{1}>0\\})>\frac{C^{\prime}_{K}}{4}$, call
$S:=\\{\tilde{\rho}_{1}>0\\}$ and define the set
$S_{x}:=\\{(x^{\prime},y^{\prime})\in S:x^{\prime}=x\\}$, for every
$x\in[-1,1]$. Now consider
$\mathfrak{m}_{1}:=(\mathtt{p}_{1})_{\\#}\mathfrak{m}$ and denote by
$(\mathfrak{m}_{x})_{x\in[-1,1]}\subset\mathscr{P}(\mathbb{R})$ the
disintegration of $\mathfrak{m}$ with respect to the projection map
$\mathtt{p}_{1}$. Notice that, since $\eta$ depends only on the $x$
coordinate, then
$(\mathtt{p}_{1})_{\\#}\tilde{\mu}_{1}=\frac{4}{C_{K}}\cdot{\left.\kern-1.2pt\mathfrak{m}_{1}\right|_{\left[\frac{1}{2},\frac{3}{4}\right]}}$.
Moreover, since the density $\tilde{\rho}_{1}$ is bounded above by
$\frac{4}{C^{\prime}_{K}}$, it holds that
$\mathfrak{m}_{x}(S_{x})\geq\frac{C^{\prime}_{K}}{C_{K}}$, for
$\mathfrak{m}_{1}$-almost every $x\in\left[\frac{1}{2},\frac{3}{4}\right]$.
Furthermore, the assumption on $S$, that is
$\mathfrak{m}(S)>\frac{C^{\prime}_{K}}{4}$, ensures that
$\mathfrak{m}_{x}(S_{x})>\frac{C^{\prime}_{K}}{C_{K}}$ for a
${\left.\kern-1.2pt\mathfrak{m}_{1}\right|_{\left[\frac{1}{2},\frac{3}{4}\right]}}$-positive
set of $x$, therefore
$\int_{\frac{1}{2}}^{\frac{3}{4}}\log\left(\mathfrak{m}_{x}(S_{x})\right)\,\mathrm{d}\mathfrak{m}_{1}(x)>\frac{C_{K}}{4}\log\left(\frac{C^{\prime}_{K}}{C_{K}}\right).$
(18)
On the other hand, for every positive constant $c>0$ define the set
$S^{c}:=\\{\tilde{\rho}_{1}>c\\}$ and call
$S^{c}_{x}:=\\{(x^{\prime},y^{\prime})\in S^{c}:x^{\prime}=x\\}$ for every
$x\in[-1,1]$. Notice that, for every sufficiently small constant $c$, since
$\tilde{\rho}_{1}$ is bounded and
$(\mathtt{p}_{1})_{\\#}\tilde{\mu}_{1}=\frac{4}{C_{K}}\cdot{\left.\kern-1.2pt\mathfrak{m}_{1}\right|_{\left[\frac{1}{2},\frac{3}{4}\right]}}$,
the quantity $\mathfrak{m}_{x}(S_{x})$ is uniformly bounded from below for
$\mathfrak{m}_{1}$-almost every $x\in\left[\frac{1}{2},\frac{3}{4}\right]$.
Consequently, it is possible to apply the monotone convergence theorem and
deduce that there exists a constant $\bar{c}>0$ such that
$\int_{\frac{1}{2}}^{\frac{3}{4}}\log\left(\mathfrak{m}_{x}(S^{\bar{c}}_{x})\right)\,\mathrm{d}\mathfrak{m}_{1}(x)>\frac{C_{K}}{4}\log\left(\frac{C^{\prime}_{K}}{C_{K}}\right).$
Now, define the measurable map $F:\operatorname{Geo}(X)\to\mathbb{R}^{+}$ as
$F:=\frac{4}{C_{K}}\tilde{F}\cdot\left(\frac{1}{\mathfrak{m}_{x}(S^{\bar{c}}_{x})}\frac{\chi_{S^{\bar{c}}}(x,y)}{\tilde{\rho}_{1}(x,y)}\circ
e_{1}\right).$
I have already noticed that the quantity $\mathfrak{m}_{x}(S^{\bar{c}}_{x})$
is uniformly bounded from below for $\mathfrak{m}_{1}$-almost every
$x\in\left[\frac{1}{2},\frac{3}{4}\right]$; thus $F$ is well defined and
bounded. Moreover, it holds that
$\displaystyle\int F\,\mathrm{d}\eta$
$\displaystyle=\int\frac{4}{C_{K}}\cdot\left(\frac{1}{\mathfrak{m}_{x}(S^{\bar{c}}_{x})}\frac{\chi_{S^{\bar{c}}}(x,y)}{\tilde{\rho}_{1}(x,y)}\circ
e_{1}\right)\,\mathrm{d}\tilde{F}\eta=\frac{4}{C_{K}}\int\frac{1}{\mathfrak{m}_{x}(S^{\bar{c}}_{x})}\frac{\chi_{S^{\bar{c}}}(x,y)}{\tilde{\rho}_{1}(x,y)}\,\mathrm{d}\tilde{\mu}_{1}(x,y)$
$\displaystyle=\frac{4}{C_{K}}\int\frac{\chi_{S^{\bar{c}}}(x,y)}{\mathfrak{m}_{x}(S^{\bar{c}}_{x})}\,\mathrm{d}\mathfrak{m}(x,y)=\frac{4}{C_{K}}\int_{\frac{1}{2}}^{\frac{3}{4}}\int\frac{\chi_{S^{\bar{c}}}(x,y)}{\mathfrak{m}_{x}(S^{\bar{c}}_{x})}\,\mathrm{d}\mathfrak{m}_{x}(y)\,\mathrm{d}\mathfrak{m}_{1}(x)=\frac{4}{C_{K}}\int_{\frac{1}{2}}^{\frac{3}{4}}\,\mathrm{d}\mathfrak{m}_{1}(x)=1.$
In particular, observe that a computation similar to this last one shows that
$(e_{1})_{\\#}(F\eta)=\frac{4}{C_{K}}\frac{\chi_{S^{\bar{c}}}(x,y)}{\mathfrak{m}_{x}(S^{\bar{c}}_{x})}\cdot\mathfrak{m},$
thus it is possible to estimate its entropy:
$\displaystyle\operatorname{Ent}\big{(}(e_{1})_{\\#}(F\eta)\big{)}$
$\displaystyle=\int\frac{4}{C_{K}}\frac{\chi_{S^{\bar{c}}}(x,y)}{\mathfrak{m}_{x}(S^{\bar{c}}_{x})}\log\left(\frac{4}{C_{K}}\frac{\chi_{S^{\bar{c}}}(x,y)}{\mathfrak{m}_{x}(S^{\bar{c}}_{x})}\right)\,\mathrm{d}\mathfrak{m}$
$\displaystyle=\int_{\frac{1}{2}}^{\frac{3}{4}}\int\frac{4}{C_{K}}\frac{\chi_{S^{\bar{c}}}(x,y)}{\mathfrak{m}_{x}(S^{\bar{c}}_{x})}\log\left(\frac{4}{C_{K}}\frac{\chi_{S^{\bar{c}}}(x,y)}{\mathfrak{m}_{x}(S^{\bar{c}}_{x})}\right)\,\mathrm{d}\mathfrak{m}_{x}(y)\,\mathrm{d}\mathfrak{m}_{1}(x)$
$\displaystyle=\int_{\frac{1}{2}}^{\frac{3}{4}}\int_{S^{\bar{c}}_{x}}\frac{4}{C_{K}}\frac{1}{\mathfrak{m}_{x}(S^{\bar{c}}_{x})}\log\left(\frac{4}{C_{K}}\frac{1}{\mathfrak{m}_{x}(S^{\bar{c}}_{x})}\right)\,\mathrm{d}\mathfrak{m}_{x}(y)\,\mathrm{d}\mathfrak{m}_{1}(x)$
$\displaystyle=\frac{4}{C_{K}}\int_{\frac{1}{2}}^{\frac{3}{4}}\log\left(\frac{4}{C_{K}}\frac{1}{\mathfrak{m}_{x}(S^{\bar{c}}_{x})}\right)\,\mathrm{d}\mathfrak{m}_{1}(x)$
$\displaystyle=\log\left(\frac{4}{C_{K}}\right)+\frac{4}{C_{K}}\int_{\frac{1}{2}}^{\frac{3}{4}}-\log\left(\mathfrak{m}_{x}(S^{\bar{c}}_{x})\right)\,\mathrm{d}\mathfrak{m}_{1}(x)<\log\left(\frac{4}{C^{\prime}_{K}}\right),$
where the last inequality follows from (18). On the other hand
$(e_{\bar{t}})_{\\#}(F\eta)\ll(e_{\bar{t}})_{\\#}(\tilde{F}\eta)$ and
consequently Jensen’s inequality ensures that
$\operatorname{Ent}\big{(}(e_{\bar{t}})_{\\#}(F\eta)\big{)}\geq\log\left(\frac{4}{C^{\prime}_{K}}\right).$
Furthermore, it is easy to realize that
$(\mathtt{p}_{1})_{\\#}\big{[}(e_{1})_{\\#}(F\eta)\big{]}=\frac{4}{C_{K}}\cdot{\left.\kern-1.2pt\mathfrak{m}_{1}\right|_{\left[\frac{1}{2},\frac{3}{4}\right]}}$
and thus, as before, I have
$\operatorname{Ent}\big{(}(e_{0})_{\\#}(F\eta)\big{)}=\operatorname{Ent}(\mu_{0})=\log\left(\frac{4}{C_{K}}\right)<\log\left(\frac{4}{C^{\prime}_{K}}\right).$
Putting together these last three inequalities, it is easy to realize that the
entropy functional is not convex along $F\eta$. ∎
Notice that this last result shows in particular that the strict CD condition
and the weak one are actually two different notions. Moreover, the combination
of Propositions 7.4 and 7.5 obviously yields Proposition 7.3, according to the
proof of Theorem 7.1. On the other hand, observe that this work does not allow
us to disprove the stability of the very strict CD condition. In fact, the
proof of Proposition 7.4 heavily relies on an approximation argument, thus it
cannot be modified in order to prove the very strict CD condition for the
spaces
$(X_{k,\varepsilon},\mathsf{d}_{\infty},\mathfrak{m}_{k,K,\varepsilon})$.
However, in my opinion, this example leaves little hope for the very strict CD
condition to be stable.
Acknowledgments: This article contains part of the work I did for my master's
thesis, which was supervised by Luigi Ambrosio and Karl-Theodor Sturm.
## References
* [1] L. Ambrosio, N. Gigli, and G. Savare. Gradient Flows: In Metric Spaces and in the Space of Probability Measures. Lectures in Mathematics. ETH Zürich. Birkhäuser Basel, 2005.
* [2] N. Gigli, A. Mondino, and G. Savaré. Convergence of pointed non-compact metric measure spaces and stability of Ricci curvature bounds and heat flows. Proceedings of the London Mathematical Society, 111:1071–1129, 2015.
* [3] C. Ketterer and T. Rajala. Failure of topological rigidity results for the measure contraction property. Potential Analysis, 42, 2014.
* [4] J. Lott and C. Villani. Ricci curvature for metric-measure spaces via optimal transport. Annals of Mathematics, 169:903–991, 2009.
* [5] T. Rajala. Failure of the local-to-global property for $\mathsf{CD}(K,N)$ spaces. Ann. Sc. Norm. Super. Pisa Cl. Sci., 15:45–68, 2016.
* [6] T. Rajala and K.-T. Sturm. Non-branching geodesics and optimal maps in strong $\mathsf{CD}(K,\infty)$ spaces. Calculus of Variations and Partial Differential Equations, 50:831–846, 2014.
* [7] T. Schultz. Existence of optimal transport maps in very strict $\mathsf{CD}(K,\infty)$ spaces. Calculus of Variations and Partial Differential Equations, 57, 2018\.
* [7] T. Schultz. Existence of optimal transport maps in very strict $\mathsf{CD}(K,\infty)$ spaces. Calculus of Variations and Partial Differential Equations, 57, 2018.
* [9] T. Schultz. On one-dimensionality of metric measure spaces. Proc. Amer. Math. Soc., 149:383–396, 2020.
* [10] K.-T. Sturm. On the geometry of metric measure spaces. Acta Math., 196(1):65–131, 2006.
* [11] K.-T. Sturm. On the geometry of metric measure spaces. II. Acta Math., 196(1):133–177, 2006.
* [12] C. Villani. Optimal transport – Old and new. Grundlehren der mathematischen Wissenschaften. Springer, 2008.
# Model Adaptation for image reconstruction using
Generalized Stein’s Unbiased Risk Estimator
###### Abstract
Deep learning image reconstruction algorithms often suffer from model
mismatches when the acquisition scheme differs significantly from the forward
model used during training. We introduce a Generalized Stein’s Unbiased Risk
Estimate (GSURE) loss metric to adapt the network to the measured k-space data
and minimize the impact of model misfit. Unlike current methods that rely on
the mean-square error in k-space, the proposed metric accounts for noise in
the measurements. This makes the approach less vulnerable to overfitting, thus
offering improved reconstruction quality compared to schemes that rely on the
mean-square error. This approach may be useful for rapidly adapting pre-trained
models to new acquisition settings (e.g., multi-site) and to contrasts
different from the training data.
Index Terms— Model adaptation, MRI, SURE, Image Reconstruction
## 1 Introduction
The reconstruction of images from a few noisy measurements is a central
problem in several modalities, including MRI, computer vision, and microscopy.
Classical methods, including compressed sensing (CS), pose the recovery as an
optimization scheme. The cost function in CS is the sum of a data consistency
term involving a numerical forward model of the acquisition scheme and a
regularization term that exploits image priors [1, 2].
Recently, deep learning algorithms have emerged as powerful alternatives,
offering improved performance over CS-based methods, which often rely on
carefully handcrafted regularization priors. Most deep learning methods for
image reconstruction learn trainable convolutional neural network (CNN)
modules within the network using fully sampled training images [3, 4, 5]. In
addition to computational efficiency, these deep-learning-based methods
provide improved image quality compared to classical CS-based approaches.
Unlike CS priors that only depend on the image, the learned CNN modules often
depend on the specific forward model used in training. In many cases, the
actual acquisition model can differ significantly from those used to train the
network. In those cases, deep learning methods may offer sub-optimal image
quality. In the MR imaging context, several factors can contribute to the
above model mismatches, including differences in acceleration factors,
sampling patterns, the amount of measurement noise, specific parallel MRI
coils, inter-site variability, inter-scanner variability, as well as
differences in image content and contrast.
Practitioners often train the network with several forward models to reduce
its dependence on any specific one [3, 6, 7, 8, 4]. However, even these models
retain some sensitivity to model mismatch. It is often not practical to train
the network for each setting, because fully sampled training data are not
available for every case. To address this challenge,
several authors have proposed to fine-tune the pre-trained networks using the
error between the actual measurements and the ones made on the recovered
images [9]. A challenge with this scheme is the need for careful early
stopping. Specifically, CNNs often have sufficient capacity to learn
measurement noise. Therefore, without early stopping, the algorithm can
overfit the few measurements, resulting in degraded performance. Some authors
have proposed adding priors that restrict the network parameters from
deviating significantly from the original ones [6]. Loss functions that use
only part of the data have also been introduced [10].
We introduce a loss function based on Stein’s unbiased risk estimator (SURE)
[11] to adapt a pre-trained deep image reconstruction network to a new
acquisition scheme and image content. Unlike prior approaches [6, 9, 10] that
do not account for measurement noise, the proposed approach accounts for the
noise statistics and is less vulnerable to overfitting. The proposed model
adaptation scheme works with both model-based algorithms [3, 12] and
direct-inversion methods [13, 14].
Stein’s unbiased risk estimator (SURE) [11] is an unbiased estimator for mean-
square-error (MSE). LDAMP-SURE [15] utilizes this SURE estimate to train CNN
denoisers in an unsupervised fashion. LDAMP-SURE also proposes to train
denoisers within an unrolled network in a layer-by-layer manner for image
recovery from undersampled measurements [15]. The GSURE approach [16] extends
SURE to inverse problems and considers an unbiased estimate of the MSE in the
range space of the measurement operator. A challenge in using GSURE [16] to
train deep networks for inverse problems is the poor approximation of the
actual MSE by the projected MSE, especially when the range space is small
[15]. We recently developed an ENsembled SURE (ENSURE) [17] approach for
unsupervised learning to overcome this problem. We showed that an ensemble of
sampling patterns can well approximate the projected MSE as a weighted MSE.
In this work, we use the GSURE [16] approach to adapt a pre-trained network to
a new acquisition setting, only using the undersampled measurements. Unlike
[17], where a network is trained from scratch, we consider adapting a
pre-trained network using only the undersampled measurements of a single image.
Our results show that the GSURE-based model adaptation (GSURE-MA) offers
improved performance even in highly undersampled settings compared to existing
approaches [10, 9].
## 2 Proposed Method
The image acquisition model to acquire the noisy and undersampled measurements
$\boldsymbol{y}\in\mathbb{C}^{n}$ of an image
$\boldsymbol{x}\in\mathbb{C}^{m}$ using the forward operator $\mathcal{A}$ can
be represented as
$\boldsymbol{y}=\mathcal{A}\boldsymbol{x}+\boldsymbol{n}$ (1)
Here, we assume that the noise $\boldsymbol{n}$ is Gaussian distributed with
mean zero and standard deviation $\mathbf{\sigma}$ such that
$\boldsymbol{n}\sim\boldsymbol{N}(0,\mathbf{\sigma})$. Define the regridding
reconstruction as $\boldsymbol{u}=\mathcal{A}^{H}\boldsymbol{y}$, which lives
in a subspace $\mathcal{V}$ of $\mathbb{C}^{m}$. The recovery using
a deep neural network $f_{\Phi}$ with trainable parameters $\Phi$ can be
represented as
$\boldsymbol{\widehat{x}}=f_{\Phi}(\boldsymbol{u}).$ (2)
Here $f_{\Phi}$ can be a direct-inversion or a model-based deep neural
network. Supervised deep learning methods compare the recovered image
$\widehat{\boldsymbol{x}}$ with fully sampled ground truth image
$\boldsymbol{x}$ using
$\text{MSE}=\mathbb{E}_{\boldsymbol{x}\sim\mathcal{M}}~{}\|\boldsymbol{\widehat{x}}-\boldsymbol{x}\|_{2}^{2}$
(3)
to train the reconstruction network.
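As a concrete illustration, the acquisition model (1) and the regridding reconstruction $\boldsymbol{u}=\mathcal{A}^{H}\boldsymbol{y}$ can be sketched with a single-coil Fourier sampling operator. This is a minimal sketch under stated assumptions: the names `A`, `AH`, the 30% random mask, and the single-coil setting are illustrative, and the paper's multi-coil operator would also include coil sensitivities.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 32  # image side length (illustrative)

x = rng.standard_normal((m, m))     # toy "image" x
mask = rng.random((m, m)) < 0.3     # ~30% of k-space sampled (illustrative)

def A(img):
    """Forward operator: undersampled 2D Fourier transform."""
    return mask * np.fft.fft2(img, norm="ortho")

def AH(ksp):
    """Adjoint: zero-filled inverse Fourier transform (regridding)."""
    return np.fft.ifft2(mask * ksp, norm="ortho")

sigma = 0.05
n = sigma * (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
y = A(x) + mask * n   # Eq. (1): noisy measurements on sampled locations
u = AH(y)             # regridding reconstruction u = A^H y

# A^H A projects onto the sampled Fourier modes, so it is idempotent:
Px = AH(A(x))
```

Applying $\mathcal{A}^{H}\mathcal{A}$ twice gives the same result as applying it once, which is the projection property used later to define $\mathbf{P}$.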
(a) data-term (b) divergence-term
Fig. 1: The implementation details of the GSURE based loss function for model
adaptation. (a) shows the calculation of data-term. (b) shows the calculation
of the divergence term. Here we pass the regridding reconstruction and its
noisy version through the network and find the error between the two terms.
Then we take the inner product between this error term and the noise to get an
estimate of the network divergence.
The deep network $f_{\Phi}$ is often sensitive to the specific forward model
$\mathcal{A}$ in (1) and the class of images $\mathcal{M}$. As discussed
before, the above trained networks are vulnerable to model mismatches, when
the acquisition scheme or the type of images are different. We hence consider
the adaptation of the trained network $f_{\Phi}$ to the specific images based
on the available measurements, assuming that fully sampled ground truth for
MSE training, as in (3), is not available.
(a) Training Mask M0 (b) Testing Mask M1
Fig. 2: The 1D Cartesian sampling masks used for training and testing for the
first experiment.
A simple approach is to perform model adaptation using the available noisy
measurements only with the loss function:
$\text{DIP-
MA}=\mathbb{E}\|\mathcal{A}~{}f_{\Phi}(\boldsymbol{u})-\boldsymbol{y}\|_{2}^{2},$
(4)
where $\Phi$ is initialized with the parameters of the trained model. Due to
the similarity of this approach to [9], we term it model adaptation using
deep image prior (DIP-MA). Because the measurements
$\boldsymbol{y}$ are noisy, DIP-MA is vulnerable to overfitting. Early
termination and additional regularization priors that restrict the deviation
of $\Phi$ from the pretrained values have been used to mitigate this [6].
We propose to use the GSURE [16] loss function, which explicitly accounts for
the noise in the measurements, to minimize overfitting. We denote the
projection onto the subspace $\mathcal{V}$ as
$\mathbf{P}=(\mathcal{A}^{H}\mathcal{A})^{{\dagger}}\mathcal{A}^{H}\mathcal{A}$,
where ${\dagger}$ denotes the pseudo-inverse. The GSURE approach is an
unbiased estimate for the projected MSE, denoted by
$\|\mathbf{P}(\boldsymbol{\widehat{x}}-\boldsymbol{x})\|^{2}$:
$\mathcal{L}=~{}\underbrace{\mathbb{E}_{\boldsymbol{u}}\left[\|\mathbf{P}\boldsymbol{\widehat{x}}-\boldsymbol{x}_{\text{LS}}\|_{2}^{2}\right]}_{\mathrm{data~{}term}}~{}+~{}\underbrace{2\mathbb{E}_{\boldsymbol{u}}\left[\nabla_{\boldsymbol{u}}\cdot
f_{\Phi}(\boldsymbol{u})\right]}_{\mathrm{divergence}}.$ (5)
Here
$\boldsymbol{x}_{\text{LS}}=(\mathcal{A}^{H}\mathcal{A})^{{\dagger}}\boldsymbol{u}$
is a least-square estimate. The second term is a measure of the divergence of
the network and is computed using the Monte-Carlo approach [18]. This term
acts as a network regularization term, thus minimizing the risk of
overfitting. Fig. 1 shows the implementation details of the data term and the
divergence term.
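The Monte-Carlo divergence estimate [18] used for the second term of (5) perturbs the network input with a random probe vector and takes an inner product with the resulting output change. Below is a minimal numpy sketch using a toy linear "network" whose divergence is known in closed form; the names `f`, `eps`, and `b` are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 2000   # input dimension (illustrative)
c = 0.7    # slope of the toy "network"

def f(u):
    """Toy 'network': f(u) = c * u, whose exact divergence is c * m."""
    return c * u

u = rng.standard_normal(m)
eps = 1e-4
b = rng.standard_normal(m)   # random probe vector

# Monte-Carlo estimate [18]: div f(u) ~= b^T (f(u + eps*b) - f(u)) / eps
div_mc = b @ (f(u + eps * b) - f(u)) / eps
div_exact = c * m
```

For a real network, `f` would be the reconstruction network evaluated at the regridding input, and one probe vector per loss evaluation is typically sufficient.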
## 3 Experiments and Results
(a) Ground Truth
(b) Input $\mathcal{A}_{0}^{H}y$, 22.89 dB
(c) Before-MA, 32.69 dB
(d) DIP-MA, 32.89 dB
(e) SSDU-MA, 33.38 dB
(f) GSURE-MA, 35.31 dB
(g) Input $\mathcal{A}_{1}^{H}y$, 23.14 dB
(h) Before-MA, 30.46 dB
(i) DIP-MA, 31.10 dB
(j) SSDU-MA, 31.97 dB
(k) GSURE-MA, 33.91 dB
Fig. 3: Experimental results for the 1D Cartesian sampling mask on a testing
slice. The training and testing forward model $\mathcal{A}_{0}$ and
$\mathcal{A}_{1}$ when applied on a ground truth image (a) from the testing
data lead to the re-gridding reconstruction $\mathcal{A}_{0}^{H}y$ (b) and
$\mathcal{A}_{1}^{H}y$ (g), respectively. (b-f) shows testing results on the
forward model $\mathcal{A}_{0}$ that was used during training. (g-k) shows
results with a different forward model $\mathcal{A}_{1}$, not seen during
training. Red box shows a zoomed portion of the images.
(a) PSNR vs. epochs for the direct-inversion architecture, ResNet
(b) PSNR vs. epochs for the unrolled architecture, MoDL
Fig. 4: These plots show the variation in PSNR values with the model
adaptation epochs on a single test image using DIP-MA and proposed GSURE-MA
strategies. The ResNet and MoDL architectures were fine-tuned for 5000 and
2000 epochs, respectively.
We consider a publicly available [3] parallel MRI brain dataset obtained
using a 3T GE MR750w scanner at the University of Iowa. The matrix dimensions were
$256\times 256\times 208$ with a 1 mm isotropic resolution. Fully sampled
multi-channel brain images of nine volunteers were collected, out of which
data from five subjects were used for training. The data from two subjects
were used for testing and the remaining two for validation.
We evaluate the performance of the proposed model-adaptation technique in
both direct-inversion-based networks and unrolled model-based networks.
Specifically, we use ResNet18 as the direct-inversion network and the MoDL
architecture as the unrolled network. The ResNet18 has $3\times 3$ convolution
filters and 64 feature maps at each layer. The real and imaginary components
of complex data were used as channels in all the experiments. For the MoDL
architecture, we use three unrolling steps, each having a ResNet18 followed by
a data-consistency step. The network weights are shared over the three
unrolls.
We compare the proposed GSURE-MA approach with DIP-MA and self-supervised
learning via data undersampling (SSDU) [10]. For model adaptation using SSDU
(SSDU-MA), we utilized 60% of the measured k-space data for data consistency
and the remaining 40% for the loss function, as suggested in [10].
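The 60/40 split of the measured k-space locations can be sketched as follows. This is an illustrative sketch: the uniformly random split and the variable names are assumptions (SSDU typically draws the held-out locations with a variable-density distribution).

```python
import numpy as np

rng = np.random.default_rng(2)
m = 64
mask = rng.random((m, m)) < 0.25   # measured k-space locations (illustrative)

idx = np.flatnonzero(mask)         # indices of the sampled locations
rng.shuffle(idx)
n_dc = int(0.6 * idx.size)         # 60% for data consistency

mask_dc = np.zeros_like(mask)
mask_loss = np.zeros_like(mask)
mask_dc.flat[idx[:n_dc]] = True    # used inside the network's DC step
mask_loss.flat[idx[n_dc:]] = True  # held out for the loss function
```

The two sub-masks are disjoint and together cover exactly the measured locations, so the loss is evaluated only on data the network never used for data consistency.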
The first experiment demonstrates the benefits of model-adaptation for 1D
multichannel Cartesian sampling. Fig. 2 shows the training mask $M_{0}$ and
testing mask $M_{1}$, corresponding to training and testing forward models
$\mathcal{A}_{0}$ and $\mathcal{A}_{1}$ respectively. We first performed a
supervised training of the MoDL architecture assuming $\mathcal{A}_{0}$ on 360
training slices. After training, we tested the performance of the learned
model on 100 test slices from a different subject using forward models
$\mathcal{A}_{0}$ as well as $\mathcal{A}_{1}$.
Fig. 3 shows both qualitative and quantitative results on models
$\mathcal{A}_{0}$ and $\mathcal{A}_{1}$. Fig. 3(h) shows that the MoDL
architecture is relatively robust to the change in the forward model. The DIP-
MA scheme offers a relatively modest improvement, which is outperformed by
SSDU-MA. It is evident from PSNR values as well as from visual comparisons
that the proposed GSURE-MA leads to the best quality as compared to existing
approaches. Specifically, accounting for the noise during the model adaptation
phase results in improved performance. We note that the GSURE-MA scheme offers
improved performance even when $\mathcal{A}_{0}$ is used. We attribute this to
the differences in image content, compared to the ones used for training.
Table 1: PSNR (dB) values of the reconstructed test dataset at four different acceleration (Acc.) factors ranging from two-fold (2x) to eight-fold (8x). The pre-training was performed with the 6x acceleration setting.

| | Acc. | 2x | 4x | 6x | 8x |
|---|---|---|---|---|---|
| | Input, $\mathcal{A}^{T}b$ | 30.23 | 24.80 | 22.96 | 22.27 |
| Dir. Inv. ResNet | Before-MA | 23.37 | 29.37 | 32.10 | 30.34 |
| | DIP-MA | 33.17 | 34.06 | 33.21 | 32.28 |
| | GSURE-MA | 35.16 | 35.79 | 34.86 | 33.66 |
| Unrolled MoDL | Before-MA | 28.37 | 35.10 | 35.35 | 33.99 |
| | DIP-MA | 37.72 | 33.50 | 31.94 | 31.15 |
| | SSDU-MA | 34.46 | 33.31 | 30.92 | 29.69 |
| | GSURE-MA | 39.96 | 37.80 | 36.08 | 34.97 |
The graphs in Fig. 4 show a comparison of the DIP-based and GSURE-based model
adaptation techniques. The DIP-based approach is sensitive to the number of
epochs: performance starts dropping after a few iterations, so DIP-MA requires
manually finding the optimal number of iterations. We also observe
that GSURE-MA is more stable than DIP-MA and does not require early
termination. This behavior is primarily due to the network divergence term
that acts as a regularization term in the loss function. Further, we observe
from Fig. 4(b) that, in the case of the unrolled architecture, the maximum
PSNR value achieved with GSURE-MA is higher than with DIP-MA. Additionally, we
note that model adaptation in the unrolled architecture leads to higher PSNR
values than in the direct-inversion-based approach.
The next experiment demonstrates the model adaptation capabilities of the
proposed GSURE-MA method for different acceleration factors. In particular, we
train a model for the six-fold (6x) acceleration factor with different 2D
random variable density sampling masks. During testing, we evaluate this
trained model at 2x, 4x, 6x, and 8x acceleration factor for both direct-
inversion and model-based unrolled networks. Table 1 summarizes the
experimental results of this experiment. The SSDU-MA strategy is developed
only for the unrolled architecture; therefore, its results are reported for
that setting only. Table 1 shows that a model trained for 6x acceleration does
not work well at 2x acceleration. The rows corresponding to before model
adaptation (Before-MA) show the PSNR values of the images reconstructed by the
6x-trained model. The proposed GSURE-MA strategy improves the PSNR from 23.37
dB to 35.16 dB for the direct-inversion network and from 28.37 dB to 39.96 dB
for the unrolled network. Similarly, we see that model adaptation improves the
reconstruction results at all accelerations.
## 4 Conclusions
This work proposed a model adaptation strategy to fine-tune a previously
trained deep-learning model to a new acquisition operator. We use the GSURE
loss function to rapidly adapt a pre-trained model to new acquisition models
without the risk of overfitting. We show the preliminary utility of the
proposed GSURE-MA scheme for MR image reconstruction.
## References
* [1] Michael Lustig, David L Donoho, Juan M Santos, and John M Pauly, “Compressed sensing MRI,” IEEE signal processing magazine, vol. 25, no. 2, pp. 72, 2008.
* [2] Emmanuel Candes and Justin Romberg, “Sparsity and incoherence in compressive sampling,” Inverse problems, vol. 23, no. 3, pp. 969, 2007.
* [3] Hemant K Aggarwal, Merry P Mani, and Mathews Jacob, “MoDL: Model-based deep learning architecture for inverse problems,” IEEE Trans. Med. Imag., vol. 38, no. 2, pp. 394–405, 2019.
* [4] Jo Schlemper, Jose Caballero, Joseph V Hajnal, Anthony N Price, and Daniel Rueckert, “A deep cascade of convolutional neural networks for dynamic MR image reconstruction,” IEEE Trans. Med. Imag., vol. 37, no. 2, pp. 491–503, 2018.
* [5] Eunhee Kang, Junhong Min, and Jong Chul Ye, “A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction,” Medical Physics, vol. 44, no. 10, pp. e360–e375, 2017.
* [6] Kerstin Hammernik, Jo Schlemper, Chen Qin, Jinming Duan, Ronald M. Summers, and Daniel Rueckert, “Sigma-Net: Systematic evaluation of iterative deep neural networks for fast parallel MR image reconstruction,” arXiv preprint arXiv:1912.09278, 2019.
* [7] Tran Minh Quan, Thanh Nguyen-Duc, and Won-Ki Jeong, “Compressed sensing MRI reconstruction using a generative adversarial network with a cyclic loss,” IEEE Trans. Med. Imag., vol. 37, no. 6, pp. 1488–1497, 2018.
* [8] Guang Yang, Simiao Yu, Hao Dong, Greg Slabaugh, Pier Luigi Dragotti, Xujiong Ye, Fangde Liu, Simon Arridge, Jennifer Keegan, Yike Guo, et al., “DAGAN: Deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction,” IEEE Trans. Med. Imag., vol. 37, no. 6, pp. 1310–1321, 2017.
* [9] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky, “Deep image prior,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 9446–9454.
* [10] S. A. Hossein Hosseini, B. Yaman, S. Moeller, and M. Akçakaya, “High-fidelity accelerated mri reconstruction by scan-specific fine-tuning of physics-based neural networks,” in 2020 42nd Annual International Conference of the IEEE Engineering in Medicine Biology Society (EMBC), 2020, pp. 1481–1484.
* [11] Charles M Stein, “Estimation of the mean of a multivariate normal distribution,” The annals of Statistics, pp. 1135–1151, 1981.
* [12] Kerstin Hammernik, Teresa Klatzer, Erich Kobler, Michael P. Recht, Daniel K. Sodickson, Thomas Pock, and Florian Knoll, “Learning a Variational Network for Reconstruction of Accelerated MRI Data,” Magnetic Resonance in Medicine, vol. 79, no. 6, pp. 3055–3071, 2017.
* [13] Yoseob Han, Leonard Sunwoo, and Jong Chul Ye, “k-space deep learning for accelerated MRI,” IEEE Trans. Med. Imag., 2019.
* [14] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer, 2015, pp. 234–241.
* [15] Christopher A Metzler, Ali Mousavi, Reinhard Heckel, and Richard G Baraniuk, “Unsupervised learning with stein’s unbiased risk estimator,” arXiv preprint arXiv:1805.10531, 2018.
* [16] Yonina C Eldar, “Generalized sure for exponential families: Applications to regularization,” IEEE Transactions on Signal Processing, vol. 57, no. 2, pp. 471–481, 2008.
* [17] Hemant Kumar Aggarwal, Aniket Pramanik, and Mathews Jacob, “ENSURE: Ensemble stein’s unbiased risk estimator for unsupervised learning,” arXiv preprint arXiv:2010.10631, 2020, https://arxiv.org/abs/2010.10631.
* [18] Sathish Ramani, Thierry Blu, and Michael Unser, “Monte-carlo sure: A black-box optimization of regularization parameters for general denoising algorithms,” IEEE Transactions on image processing, vol. 17, no. 9, pp. 1540–1554, 2008.
# A Metric Stability Result for the Very Strict CD Condition
Mattia Magnabosco
###### Abstract
In [15] Schultz generalized the work of Rajala and Sturm [13], proving that a
weak non-branching condition holds in the more general setting of very strict
CD spaces. Anyway, similar to what happens for the strong CD condition, the
very strict CD condition seems not to be stable with respect to the measured
Gromov Hausdorff convergence (cf. [11]).
In this article I prove a stability result for the very strict CD condition,
assuming some metric requirements on the converging sequence and on the limit
space. The proof relies on the notions of consistent geodesic flow and
consistent plan selection, which allow one to treat separately the static and
the dynamic parts of a Wasserstein geodesic. As an application, I prove that the
metric measure space $\mathbb{R}^{N}$ equipped with a crystalline norm and
with the Lebesgue measure satisfies the very strict $\mathsf{CD}(0,\infty)$
condition.
In their pivotal works Lott, Villani [10] and Sturm [18, 19] introduced a weak
notion of curvature dimension bounds, which strongly relies on the theory of
Optimal Transport. They noticed that, in a Riemannian manifold, a uniform
bound on the Ricci tensor is equivalent to the uniform convexity of the
Boltzmann-Shannon entropy functional in the Wasserstein space. This allowed
them to define a consistent notion of curvature dimension bound for metric
measure spaces, that is known as CD condition. The metric measure spaces
satisfying the CD condition are called CD spaces and enjoy some remarkable
analytic and geometric properties.
The validity of the CD condition in a metric measure space
$(X,\mathsf{d},\mathfrak{m})$ is strongly related to the metric structure of
its Wasserstein space, which is in turn strictly dependent on the metric
structure of $(X,\mathsf{d},\mathfrak{m})$. For this reason, it is not
surprising that some properties of CD spaces hold under additional metric
assumptions. Among them, one of the most important is the non-branching
condition, which basically prevents two different geodesic to coincide in an
interval of times. Since the first works on CD spaces, it has been clear that
the non-branching assumption, associated with the CD condition, could confer
some nice properties to a metric measure space. For example, Sturm [18] was
able to prove the tensorization property and the local-to-global property,
while Gigli [8] managed to solve the Monge problem. The relation between non-
branching assumption and CD condition was made even more interesting by the
work of Rajala and Sturm [13]. They proved that the strong CD condition
implies a weak version of the non-branching one, that they called essentially
non-branching. The work of Rajala and Sturm was then generalized to the wider
context of very strict CD spaces by Schultz in [15] (see also [16] and [17],
where he investigates some properties of very strict CD spaces). In
particular, he also underlined that every very strict CD space satisfies a
weak non-branching condition, that I will call weak essentially non-branching.
One of the most important properties of CD spaces is their stability with
respect to the measured Gromov Hausdorff convergence. Unfortunately this
stability result cannot hold for the strong CD condition and, according to
[11], it also does not hold for the so-called strict CD condition, which is (a
priori) weaker than the very strict one, but stronger than the weak one. In
particular, as explained in [11], it is not possible in general to deduce any
type of non-branching condition for a measured Gromov Hausdorff limit space.
This motivates adding some analytic or metric assumptions on the converging
spaces, in order to achieve non-branching in the limit. In this direction the most
remarkable result is provided by the theory of RCD spaces, pioneered by
Ambrosio, Gigli and Savaré in [4] and [5]. In fact these spaces are stable with
respect to the measured Gromov Hausdorff convergence and essentially non-
branching. In this article I present a stability result for very strict CD
spaces, assuming metric requirements on the converging sequence and on the
limit space.
In particular, the first section is dedicated to introducing the necessary
preliminary notions, related both to Optimal Transport theory and to CD
conditions. In this sense, this section should be understood as a list of
prerequisites and not as a complete treatment of the basic theory. For a full
and precise discussion of Optimal Transport theory I refer the reader to
[1], [2], [20] and [21].
In the second section I prove a purely metric stability result, which assumes
some strong rigidity requirements, but can nevertheless be applied to a fair
variety of metric measure spaces. This result relies on the notions of
consistent geodesic flow and consistent plan selection, which, as will become
clear in the following, allow me to treat separately the dynamic and the
static parts of Wasserstein geodesics. The proof of this result uses an
approximation strategy, and it is completely different from the arguments used
for the RCD spaces theory.
The result of the second section can be applied to the metric measure space
$\mathbb{R}^{N}$ equipped with a crystalline norm and with the Lebesgue
measure; this is explained in the last section. In particular I will show how
a secondary variational problem can provide a consistent plan selection,
associated to the Euclidean consistent geodesic flow. This will allow me to
conclude that these metric measure spaces are very strict CD spaces, and
therefore weakly essentially non-branching.
## 1 Preliminary Notions
This first section states all the preliminary results I will need in the
following.
### 1.1 The Optimal Transport Problem
The original formulation of the Optimal Transport problem, due to Monge, dates
back to the XVIII century, and it is the following: given two topological
spaces $X,Y$, two probability measures $\mu\in\mathscr{P}(X)$,
$\nu\in\mathscr{P}(Y)$ and a non-negative Borel cost function $c:X\times
Y\to[0,\infty]$, look for the maps $T$ that minimize the following quantity
$\inf\left\\{\int_{X}c(x,T(x))\,\mathrm{d}\mu(x)\,:\,\text{$T:X\to Y$ Borel,
$T_{\\#}\mu=\nu$}\right\\}.$ (M)
The minimizers of the Monge problem are called optimal transport maps and in
general need not exist. Therefore, for the development of a general theory, it
is necessary to introduce a slight generalization, due to
Kantorovich. Defining the set of transport plans from $\mu$ to $\nu$:
$\Gamma(\mu,\nu):=\\{\pi\in\mathscr{P}(X\times
Y)\,:\,(\mathtt{p}_{X})_{\\#}\pi=\mu\,\,\text{and}\,\,(\mathtt{p}_{Y})_{\\#}\pi=\nu\\},$
the Kantorovich’s formulation of the optimal transport problem asks to find
minima and minimizers of
$\mathcal{C}(\mu,\nu):=\inf\left\\{\int_{X\times
Y}c(x,y)\,\mathrm{d}\pi(x,y)\,:\,\pi\in\Gamma(\mu,\nu)\right\\}.$ (K)
This new problem admits minimizers under weak assumptions, in fact the
following theorem holds.
###### Theorem 1.1 (Kantorovich).
Let $X$ and $Y$ be Polish spaces and $c:X\times Y\to[0,\infty]$ a lower
semicontinuous cost function, then the minimum in the Kantorovich’s
formulation (K) is attained.
The minimizers of the Kantorovich problem are called optimal transport plans
and the set of all of them will be denoted by
$\operatorname{OptPlans}(\mu,\nu)$. Notice that this set obviously depends on
the cost function $c$; however, I will usually not make this dependence
explicit, since most of the time it will be clear from the context. A transport plan
$\pi\in\Gamma(\mu,\nu)$ is said to be induced by a map if there exists a
$\mu$-measurable map $T:X\to Y$ such that
$\pi=(\operatorname{id},T)_{\\#}\mu$. Notice that these transport plans are
exactly the ones considered in the Monge’s minimization problem (M).
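For discrete uniform marginals on finitely many points, a classical consequence of Birkhoff's theorem is that the minimum in (K) is attained at a permutation, i.e. at a plan induced by a map as in (M). A brute-force sketch (the point sets and the quadratic cost below are illustrative):

```python
from itertools import permutations

# Uniform discrete marginals on X and Y, quadratic cost c(x, y) = |x - y|^2
X = [0.0, 1.0, 2.0]
Y = [0.1, 2.2, 0.9]
c = lambda x, y: (x - y) ** 2

# Enumerate all plans induced by permutations; for uniform marginals the
# Kantorovich minimum is attained among them, so this brute force solves (K).
best_cost, best_perm = min(
    (sum(c(X[i], Y[s[i]]) for i in range(len(X))) / len(X), s)
    for s in permutations(range(len(Y)))
)
```

Here the optimal plan sends each point of $X$ to its order-matched point of $Y$, as expected for the quadratic cost on the line.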
###### Remark 1.2.
Suppose that every minimizer of the Kantorovich problem between the measures
$\mu,\nu\in\mathscr{P}(X)$ is induced by a map, and thus is a minimizer for
the Monge problem. Then the Kantorovich problem between $\mu$ and $\nu$ admit
a unique minimizer, which is clearly induced by a map. In fact, given two
distinct transport plans
$\pi_{1}=(\operatorname{id},T_{1})_{\\#}\mu,\,\pi_{2}=(\operatorname{id},T_{2})_{\\#}\mu\in\operatorname{OptPlans}(\mu,\nu)$,
their combination $\pi=\frac{\pi_{1}+\pi_{2}}{2}$ is itself an optimal plan
and it is not induced by a map, contradicting the assumption.
A fundamental approach to the Optimal Transport problem is that of
$c$-duality, which allows one to prove some very interesting and useful results.
Before stating them let me introduce the notions of $c$-cyclical monotonicity,
$c$-conjugate function and $c$-concave function.
###### Definition 1.3.
A set $\Gamma\subset X\times Y$ is said to be $c$-cyclically monotone if
$\sum_{i=1}^{N}c\left(x_{i},y_{\sigma(i)}\right)\geq\sum_{i=1}^{N}c\left(x_{i},y_{i}\right)$
for every $N\geq 1$, every permutation $\sigma$ of $\\{1,\dots,N\\}$ and every
$(x_{i},y_{i})\in\Gamma$ for $i=1,\dots,N$.
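On a finite set $\Gamma$, Definition 1.3 can be checked by brute force. Since any permutation decomposes into cycles, and families with repeated points split into shorter cycles, testing all permutations of the full set suffices for this finite illustration; the function name below is an illustrative choice.

```python
from itertools import permutations

def is_c_cyclically_monotone(Gamma, c):
    """Brute-force check of Definition 1.3 on a finite set Gamma of pairs (x, y).

    Testing all permutations of the full set covers permutations of every
    sub-collection (extend by the identity on the remaining indices).
    """
    xs = [p[0] for p in Gamma]
    ys = [p[1] for p in Gamma]
    n = len(Gamma)
    base = sum(c(xs[i], ys[i]) for i in range(n))
    return all(
        sum(c(xs[i], ys[s[i]]) for i in range(n)) >= base - 1e-12
        for s in permutations(range(n))
    )
```

For the quadratic cost on the line, the monotone set $\{(0,0),(1,1),(2,2)\}$ passes the check, while the "crossed" set $\{(0,2),(2,0)\}$ fails, since swapping its targets strictly lowers the total cost.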
###### Definition 1.4.
Given a function $\phi:X\to\mathbb{R}\cup\\{-\infty\\}$, define its
$c$-conjugate function $\phi^{c}$ as
$\phi^{c}(y):=\inf_{x\in X}\\{c(x,y)-\phi(x)\\}.$
Analogously, given $\psi:Y\to\mathbb{R}\cup\\{-\infty\\}$, define its
$c$-conjugate function $\psi^{c}$ as
$\psi^{c}(x):=\inf_{y\in Y}\\{c(x,y)-\psi(y)\\}.$
Notice that, by definition, given a function
$\phi:X\to\mathbb{R}\cup\\{-\infty\\}$, $\phi(x)+\phi^{c}(y)\leq c(x,y)$ for
every $(x,y)\in X\times Y$.
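On finite sets the $c$-conjugate of Definition 1.4 is a finite minimum, and the inequality $\phi(x)+\phi^{c}(y)\leq c(x,y)$ can be verified directly. A small sketch (the point sets, cost, and $\phi$ below are illustrative):

```python
def c_conjugate(phi, X, Y, c):
    """phi^c(y) = min_x { c(x, y) - phi(x) } on a finite set X (Definition 1.4)."""
    return {y: min(c(x, y) - phi[x] for x in X) for y in Y}

# Toy data: quadratic cost on the line, an arbitrary phi
X = [0.0, 1.0, 2.0]
Y = [0.5, 1.5]
c = lambda x, y: (x - y) ** 2
phi = {0.0: 0.3, 1.0: -0.2, 2.0: 0.1}

phic = c_conjugate(phi, X, Y, c)
```

The inequality holds by construction: for each $y$, $\phi^{c}(y)$ is the largest value keeping $\phi(x)+\phi^{c}(y)\leq c(x,y)$ for all $x$.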
###### Definition 1.5.
A function $\phi:X\to\mathbb{R}\cup\\{-\infty\\}$ is said to be $c$-concave if
it is the infimum of a family of $c$-affine functions $c(\cdot,y)+\alpha$.
Analogously, $\psi:Y\to\mathbb{R}\cup\\{-\infty\\}$ is said to be $c$-concave
if it is the infimum of a family of $c$-affine functions $c(x,\cdot)+\beta$.
The first important result of $c$-duality theory is the following, which
summarizes the close relation between optimality and $c$-cyclical
monotonicity.
###### Proposition 1.6.
Let $X$ and $Y$ be Polish spaces and $c:X\times Y\to[0,\infty]$ a lower
semicontinuous cost function. Then every optimal transport plan
$\pi\in\operatorname{OptPlans}(\mu,\nu)$ such that $\int
c\,\mathrm{d}\pi<\infty$ is concentrated in a $c$-cyclically monotone set.
Moreover, if there exist two functions $a\in L^{1}(X,\mu)$ and $b\in
L^{1}(Y,\nu)$ such that $c(x,y)\leq a(x)+b(y)$ for every $(x,y)\in X\times Y$,
a plan $\pi\in\Gamma(\mu,\nu)$ is optimal only if it is concentrated on a
$c$-cyclically monotone set.
The $c$-duality theory also allows one to state the following duality result.
###### Proposition 1.7.
Let $X$ and $Y$ be Polish spaces and $c:X\times Y\to[0,\infty]$ a lower
semicontinuous cost function. If there exist two functions $a\in L^{1}(X,\mu)$
and $b\in L^{1}(Y,\nu)$ such that $c(x,y)\leq a(x)+b(y)$ for every $(x,y)\in
X\times Y$, then there exists a $c$-concave function
$\phi:X\to\mathbb{R}\cup\\{-\infty\\}$ satisfying
$\mathcal{C}(\mu,\nu)=\int\phi\,\mathrm{d}\mu+\int\phi^{c}\,\mathrm{d}\nu.$
Such a function $\phi$ is called Kantorovich potential.
###### Remark 1.8.
Notice that, if the cost $c$ is continuous, every $c$-concave function is
upper semicontinuous, being the infimum of continuous functions. Therefore,
according to Proposition 1.7, it is possible to find an upper semicontinuous
Kantorovich potential $\phi$, and its $c$-conjugate function $\phi^{c}$ will
itself be upper semicontinuous.
### 1.2 The Wasserstein Space and the Entropy Functional
In this section I am going to consider the Optimal Transport problem in the
special case in which $X=Y$, $(X,\mathsf{d})$ is a Polish metric space and the
cost function is equal to the distance squared, that is
$c(x,y)=\mathsf{d}^{2}(x,y)$. In this context Kantorovich's minimization
problem induces the so-called Wasserstein distance on the space
$\mathscr{P}_{2}(X)$ of probabilities with finite second order moment. Let me
now give the precise definitions.
###### Definition 1.9.
Define the set
$\mathscr{P}_{2}(X):=\left\\{\mu\in\mathscr{P}(X)\,:\,\int\mathsf{d}^{2}(x,x_{0})\,\mathrm{d}\mu(x)<\infty\text{
for one (and thus all) }x_{0}\in X\right\\}$
###### Definition 1.10 (Wasserstein distance).
Given two measures $\mu,\nu\in\mathscr{P}_{2}(X)$ define their Wasserstein
distance $W_{2}(\mu,\nu)$ as
$W_{2}^{2}(\mu,\nu):=\min\left\\{\int\mathsf{d}^{2}(x,y)\,\mathrm{d}\pi(x,y)\,:\,\pi\in\Gamma(\mu,\nu)\right\\}.$
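For empirical measures the minimum in the definition above is a finite linear program. Here is a minimal sketch (an illustration of mine, assuming `numpy` and `scipy` are available; the helper name `w2_discrete` is mine):

```python
import numpy as np
from scipy.optimize import linprog

def w2_discrete(xs, mu, ys, nu):
    """W_2 between the discrete measures sum_i mu_i delta_{xs_i} and
    sum_j nu_j delta_{ys_j} on the real line, via the Kantorovich LP."""
    m, n = len(xs), len(ys)
    # cost vector c(x_i, y_j) = d^2(x_i, y_j), flattened row by row
    cost = (np.subtract.outer(np.asarray(xs, float),
                              np.asarray(ys, float)) ** 2).ravel()
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):                 # row marginals: sum_j pi_ij = mu_i
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):                 # column marginals: sum_i pi_ij = nu_j
        A_eq[m + j, j::n] = 1.0
    res = linprog(cost, A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None), method="highs")
    return float(np.sqrt(res.fun))

# two unit point masses at distance 3: W_2 = 3
print(w2_discrete([0.0], [1.0], [3.0], [1.0]))   # -> 3.0
```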
###### Proposition 1.11.
$W_{2}$ is actually a distance on $\mathscr{P}_{2}(X)$ and
$(\mathscr{P}_{2}(X),W_{2})$ is a Polish metric space.
The convergence of measures in $\mathscr{P}_{2}(X)$ with respect to the
Wasserstein distance can be easily characterized, and this is very useful in
many situations.
###### Proposition 1.12.
Let $(\mu_{n})_{n\in\mathbb{N}}\subset\mathscr{P}_{2}(X)$ be a sequence of
measures and let $\mu\in\mathscr{P}_{2}(X)$, then
$\mu_{n}\xrightarrow{W_{2}}\mu$ if and only if $\mu_{n}\rightharpoonup\mu$ and
$\int\mathsf{d}^{2}(x,x_{0})\,\mathrm{d}\mu_{n}\to\int\mathsf{d}^{2}(x,x_{0})\,\mathrm{d}\mu\qquad\text{for
every }x_{0}\in X.$
In particular, if $(X,\mathsf{d})$ is a compact metric space, the Wasserstein
convergence is equivalent to weak convergence.
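The second-moment condition in Proposition 1.12 cannot be dropped: the sequence $\mu_{n}=(1-\frac{1}{n})\delta_{0}+\frac{1}{n}\delta_{n}$ converges weakly to $\delta_{0}$ but not in $W_{2}$. A quick numerical illustration (mine, not from the text):

```python
# mu_n = (1 - 1/n) delta_0 + (1/n) delta_n converges weakly to delta_0,
# since the mass at the escaping point x = n vanishes.  However
# W_2^2(mu_n, delta_0) = int d^2(x, 0) d mu_n = (1/n) n^2 = n -> infinity,
# so the moment condition of Proposition 1.12 fails and there is no
# Wasserstein convergence.
for n in (10, 100, 1000):
    escaping_mass = 1.0 / n                # mass placed at the point x = n
    second_moment = escaping_mass * n ** 2 # equals n, hence diverges
    assert abs(second_moment - n) < 1e-9
```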
Let me now deal with the geodesic structure of $(\mathscr{P}_{2}(X),W_{2})$,
which, as the following statement shows, is closely related to that of the
base space $(X,\mathsf{d})$. This fact makes the Wasserstein space very
important and allows one to prove many remarkable facts. First of all, notice
that every measure ${\mbox{\boldmath$\pi$}}\in\mathscr{P}(C([0,1],X))$ induces
a curve $[0,1]\ni
t\to\mu_{t}=(e_{t})_{\\#}{\mbox{\boldmath$\pi$}}\in\mathscr{P}(X)$; therefore
in the following I will consider measures in $\mathscr{P}(C([0,1],X))$ in
order to describe curves in the Wasserstein space.
###### Proposition 1.13.
If $(X,\mathsf{d})$ is a geodesic space then $(\mathscr{P}_{2}(X),W_{2})$ is
geodesic as well. More precisely, given two measures
$\mu,\nu\in\mathscr{P}_{2}(X)$, the measure
${\mbox{\boldmath$\pi$}}\in\mathscr{P}(C([0,1],X))$ is a constant speed
Wasserstein geodesic connecting $\mu$ and $\nu$ if and only if it is
concentrated in $\operatorname{Geo}(X)$ (that is the space of constant speed
geodesics in $(X,\mathsf{d})$) and
$(e_{0},e_{1})_{\\#}{\mbox{\boldmath$\pi$}}\in\operatorname{OptPlans}(\mu,\nu)$.
In this case ${\mbox{\boldmath$\pi$}}$ is said to be an optimal geodesic plan
between $\mu$ and $\nu$, and this will be denoted by
${\mbox{\boldmath$\pi$}}\in\operatorname{OptGeo}(\mu,\nu)$.
###### Remark 1.14.
I will say that an optimal geodesic plan
${\mbox{\boldmath$\pi$}}\in\operatorname{OptGeo}(\mu,\nu)$ is induced by a map
if there exists a $\mu$-measurable map $\Theta:X\to\operatorname{Geo}(X)$,
such that ${\mbox{\boldmath$\pi$}}=\Theta_{\\#}\mu$. Following the argument
explained in Remark 1.2, it is possible to conclude that, if every optimal
geodesic plan ${\mbox{\boldmath$\pi$}}\in\operatorname{OptGeo}(\mu,\nu)$
between two given measures $\mu,\nu\in\mathscr{P}_{2}(X)$ is induced by a map,
then there exists a unique
${\mbox{\boldmath$\pi$}}\in\operatorname{OptGeo}(\mu,\nu)$, and it is
obviously induced by a map.
Let me now introduce the entropy functional, which will play a central role in
defining the notion of weak curvature dimension bounds. As will soon be
clear, the most appropriate framework in which to deal with the entropy
functional is that of metric measure spaces.
###### Definition 1.15.
A metric measure space is a triple $(X,\mathsf{d},\mathfrak{m})$, where
$(X,\mathsf{d})$ is a Polish metric space and $\mathfrak{m}$ is a non-negative
and non-null Borel measure on $X$, finite on bounded sets. Moreover, a
quadruple $(X,\mathsf{d},\mathfrak{m},p)$ is called pointed metric measure
space if $(X,\mathsf{d},\mathfrak{m})$ is a metric measure space and $p$ is a
point in $X$.
###### Remark 1.16.
In this article I assume that every metric measure space I am going to
consider satisfies the following estimate
$\int e^{-c\cdot\mathsf{d}^{2}(x,x_{0})}\,\mathrm{d}\mathfrak{m}(x)<\infty,$
(1)
for some (and thus all) $x_{0}\in X$ and a suitable constant $c>0$. This is
essentially a technical assumption, but it is useful to ensure the lower
semicontinuity of the entropy functional (see Proposition 1.19).
Let me now properly define the entropy functional.
###### Definition 1.17.
In a metric measure space $(X,\mathsf{d},\mathfrak{m})$, given a measure
$\nu\in\mathcal{M}(X)$, define the relative entropy functional with respect to
$\nu$, $\operatorname{Ent}_{\nu}:\mathscr{P}_{2}(X)\to\overline{\mathbb{R}}$, as
$\operatorname{Ent}_{\nu}(\mu):=\begin{cases}\int\rho\log\rho\,\mathrm{d}\nu&\text{if
}\mu\ll\nu\text{ and }\mu=\rho\nu\\\ +\infty&\text{otherwise}\end{cases}.$
The entropy functional relative to the reference measure $\mathfrak{m}$ will
be simply denoted by $\operatorname{Ent}$.
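On a finite set the definition above reduces to a finite sum. The following sketch (my illustration, using the convention $0\log 0=0$) also exhibits the $+\infty$ branch when $\mu\not\ll\nu$:

```python
import numpy as np

def relative_entropy(mu, nu):
    """Ent_nu(mu) = int rho log rho d(nu) for discrete measures on the same
    finite set, with mu = rho * nu; returns +inf if mu is not absolutely
    continuous with respect to nu."""
    mu, nu = np.asarray(mu, float), np.asarray(nu, float)
    if np.any((nu == 0) & (mu > 0)):   # mu puts mass where nu does not
        return float("inf")
    mask = mu > 0                      # convention: 0 * log 0 = 0
    rho = mu[mask] / nu[mask]          # density of mu with respect to nu
    return float(np.sum(rho * np.log(rho) * nu[mask]))

# mu = nu gives entropy zero; a non-absolutely-continuous mu gives +inf
print(relative_entropy([0.5, 0.5], [0.5, 0.5]))  # -> 0.0
print(relative_entropy([1.0, 0.0], [0.0, 1.0]))  # -> inf
```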
###### Remark 1.18.
According to [3], condition (1) prevents the entropy functional
$\operatorname{Ent}$ from taking the value $-\infty$.
The most important property of the entropy functional is its lower
semicontinuity with respect to the different notions of convergence in spaces
of probabilities. Some results in this direction are collected in this
proposition.
###### Proposition 1.19.
If $\mathfrak{m}(X)<\infty$ the functional $\operatorname{Ent}$ is lower
semicontinuous with respect to the weak topology of $\mathscr{P}_{2}(X)$,
while if $\mathfrak{m}(X)=\infty$ (but (1) holds) $\operatorname{Ent}$ is
lower semicontinuous with respect to the Wasserstein convergence.
I conclude this subsection by introducing two more spaces of probability
measures, which will play an important role in the following.
###### Definition 1.20.
Introduce the space $\mathscr{P}_{ac}(X)\subseteq\mathscr{P}_{2}(X)$ of
probabilities absolutely continuous with respect to $\mathfrak{m}$, with
finite second order moments. Define also the space
$\mathscr{P}^{*}(X)\subseteq\mathscr{P}_{ac}(X)$ as
$\mathscr{P}^{*}(X):=\\{\mu\in\mathscr{P}_{2}(X)\,:\,\operatorname{Ent}(\mu)<\infty\\}.$
### 1.3 Curvature Dimension Bounds
In this section I introduce the notions of curvature dimension bound and CD
space, also presenting the results which are the starting point of this work.
Let me begin with the definitions of the weak and strong CD conditions.
###### Definition 1.21.
A metric measure space $(X,\mathsf{d},\mathfrak{m})$ is said to satisfy the
(weak) $\mathsf{CD}(K,\infty)$ condition and to be a (weak)
$\mathsf{CD}(K,\infty)$ space, if for every
$\mu_{0},\mu_{1}\in\mathscr{P}^{*}(X)$ there exists a Wasserstein geodesic
with constant speed $(\mu_{t})_{t\in[0,1]}\subset\mathscr{P}^{*}(X)$
connecting them, along which the relative entropy functional is $K$-convex,
that is
$\operatorname{Ent}(\mu_{t})\leq(1-t)\operatorname{Ent}(\mu_{0})+t\operatorname{Ent}(\mu_{1})-t(1-t)\frac{K}{2}W_{2}^{2}(\mu_{0},\mu_{1}),\qquad\text{for
every }t\in[0,1].$
Moreover $(X,\mathsf{d},\mathfrak{m})$ is said to satisfy a strong
$\mathsf{CD}(K,\infty)$ condition and to be a strong $\mathsf{CD}(K,\infty)$
space if, for every $\mu_{0},\mu_{1}\in\mathscr{P}^{*}(X)$, the relative
entropy functional is $K$-convex along every Wasserstein geodesic with
constant speed connecting them.
The following proposition, due to Villani [21], ensures the validity of the CD
condition in some familiar metric measure spaces; it will be fundamental in
the last section.
###### Proposition 1.22.
Let $\left\lVert\cdot\right\rVert$ be a norm on $\mathbb{R}^{n}$ and let
$\mathsf{d}$ be the associated distance, then the metric measure space
$(\mathbb{R}^{n},\mathsf{d},\mathscr{L}^{n})$ is a (weak)
$\mathsf{CD}(0,\infty)$ space.
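Proposition 1.22 can be tested numerically in a simple case: on $(\mathbb{R},|\cdot|,\mathscr{L}^{1})$ the Wasserstein geodesic between $N(a,\sigma_{0}^{2})$ and $N(b,\sigma_{1}^{2})$ is $\mu_{t}=N((1-t)a+tb,\sigma_{t}^{2})$ with $\sigma_{t}=(1-t)\sigma_{0}+t\sigma_{1}$, and $\operatorname{Ent}(\mu_{t})=-\frac{1}{2}\log(2\pi e\sigma_{t}^{2})$. The sketch below (my illustration, not part of the article) checks the $K=0$ convexity inequality along this geodesic:

```python
import numpy as np

def ent_gaussian(sigma):
    # Ent(N(m, sigma^2)) = int rho log rho dx = -1/2 log(2 pi e sigma^2)
    return -0.5 * np.log(2.0 * np.pi * np.e * sigma ** 2)

a, b = 0.0, 3.0
s0, s1 = 1.0, 2.0
e0, e1 = ent_gaussian(s0), ent_gaussian(s1)
for t in np.linspace(0.0, 1.0, 21):
    st = (1 - t) * s0 + t * s1   # std of the interpolating Gaussian mu_t
    # K = 0 convexity of the entropy along the Wasserstein geodesic,
    # i.e. Ent(mu_t) <= (1 - t) Ent(mu_0) + t Ent(mu_1)
    assert ent_gaussian(st) <= (1 - t) * e0 + t * e1 + 1e-12
```

The inequality here is just the concavity of the logarithm applied to the linear interpolation of the standard deviations.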
The next result states the stability of the CD condition with respect to the
(pointed) measured Gromov Hausdorff convergence. I am not interested in giving
a precise definition of this notion of convergence, because in this
article I will only deal with a different and stronger convergence for metric
measure spaces. For a precise definition I refer the reader to [21], where
the next theorem is also proven. Let me also point out that the measured
Gromov Hausdorff convergence can in some sense be metrized by the $\mathbb{D}$
distance, introduced by Sturm in [19]. Moreover, in [9] Gigli, Mondino and
Savaré showed that some different notions of convergence for pointed metric
measure spaces are equivalent to the pointed measured Gromov Hausdorff
convergence.
###### Theorem 1.23.
Let $(X_{k},\mathsf{d}_{k},\mathfrak{m}_{k},p_{k})_{k\in\mathbb{N}}$ be a
sequence of locally compact pointed metric measure spaces converging in the
pointed measured Gromov Hausdorff sense to a locally compact pointed metric
measure space $(X,\mathsf{d},\mathfrak{m},p)$. Given $K\in\mathbb{R}$, if each
$(X_{k},\mathsf{d}_{k},\mathfrak{m}_{k})$ satisfies the weak
curvature dimension condition $\mathsf{CD}(K,\infty)$, then
$(X,\mathsf{d},\mathfrak{m})$ satisfies $\mathsf{CD}(K,\infty)$ as well.
I am now going to present the Rajala–Sturm theorem, which is the starting
point of this work. In order to do this I first have to introduce the
notion of essentially non-branching metric measure space.
###### Definition 1.24.
A metric measure space $(X,\mathsf{d},\mathfrak{m})$ is said to be essentially
non-branching if, for every pair of absolutely continuous measures
$\mu_{0},\mu_{1}\in\mathscr{P}_{ac}(X)$, every optimal geodesic plan $\eta$
connecting them is concentrated on a non-branching set of geodesics.
###### Theorem 1.25.
Every strong $\mathsf{CD}(K,\infty)$ metric measure space is essentially non-
branching.
The work of Rajala and Sturm was then generalized by Schultz [15] and applied
to the context of very strict CD spaces.
###### Definition 1.26.
A metric measure space $(X,\mathsf{d},\mathfrak{m})$ is called a very strict
$\mathsf{CD}(K,\infty)$ space if for every pair of absolutely continuous
measures $\mu_{0},\mu_{1}\in\mathscr{P}_{ac}(X)$ there exists an optimal geodesic plan
$\eta\in\operatorname{OptGeo}(\mu_{0},\mu_{1})$, so that the entropy
functional $\operatorname{Ent}$ satisfies the K-convexity inequality along
$(\operatorname{restr}_{t_{0}}^{t_{1}})_{\\#}(f\eta)$ for every
$t_{0}<t_{1}\in[0,1]$, and for all bounded Borel functions
$f:\operatorname{Geo}(X)\to\mathbb{R}^{+}$ with $\int f\,\mathrm{d}\eta=1$.
This CD condition is intermediate between the weak and the strong one, and it
is easy to realize that it cannot imply the essentially non-branching property.
Nevertheless, as pointed out by Schultz, it is possible to prove a weaker
version of the non-branching condition.
###### Definition 1.27 (Weak Essentially Non-Branching).
A metric measure space $(X,\mathsf{d},\mathfrak{m})$ is said to be weakly
essentially non-branching if for every pair of absolutely continuous measures
$\mu_{0},\mu_{1}\in\mathscr{P}_{ac}(X)$ there exists an optimal geodesic plan
connecting them that is concentrated on a non-branching set of geodesics.
###### Theorem 1.28.
Every very strict $\mathsf{CD}(K,\infty)$ space is weakly essentially non-
branching.
Unfortunately, as the reader can easily notice, the strong CD condition is not
stable with respect to the measured Gromov Hausdorff convergence. Moreover,
the results in [11] suggest that it is not possible to prove a general
stability result for the very strict CD condition either. These observations
motivate this article, where I assume some metric requirements on the
converging sequence and on the limit space, in order to prove the very strict
CD condition for suitable measured Gromov Hausdorff limit spaces.
## 2 A Metric Stability Result
In this section I state and prove some results that allow one to prove the very
strict CD condition, and thus the weak essentially non-branching property, for
some special measured Gromov Hausdorff limit spaces. These results do not
assume any analytic requirement and are purely metric; therefore they can be
applied to a wide variety of metric measure spaces. The way of proving
non-branching at the limit in this case is very different from the one used by
Ambrosio, Gigli and Savaré in [5], and it is actually more straightforward.
First of all, let me introduce two notions which provide a nice strategy to
prove the very strict CD condition: the consistent geodesic flow and the
consistent plan selection. As will be clear in the proof of Theorem
2.4, these two concepts allow one to decouple the static part from the dynamic
one, taking full advantage of Proposition 1.13. This, associated with suitable
assumptions, makes it easier to deal with restrictions of optimal geodesic
plans and thus to prove the very strict CD condition.
###### Definition 2.1.
Let $(X,\mathsf{d})$ be a metric space. A measurable map $G:X\times X\to
C([0,1],X)$ is called consistent geodesic flow if the following properties
hold:
* 1)
for every $x,y\in X$, $G(x,y)$ is a constant speed geodesic connecting $x$ and
$y$, that is $G(x,y)\in\operatorname{Geo}(X)$ with $G(x,y)(0)=x$ and
$G(x,y)(1)=y$,
* 2)
${{\rm restr}_{s}^{t}}G(x,y)=G\big{(}G(x,y)(s),G(x,y)(t)\big{)}$ for every
$s<t\in(0,1)$ and every $x,y\in X$.
A consistent geodesic flow $G$ is said to be $L$-Lipschitz if
$\sup_{t\in[0,1]}\mathsf{d}\big{(}G(x_{1},y_{1})(t),G(x_{2},y_{2})(t)\big{)}\leq
L\cdot\big{(}\mathsf{d}^{2}(x_{1},x_{2})+\mathsf{d}^{2}(y_{1},y_{2})\big{)}^{\frac{1}{2}},$
i.e. if it is an $L$-Lipschitz map considered as
$G:(X\times
X,\mathsf{d}_{2})\to\big{(}\operatorname{Geo}(X),\left\lVert\cdot\right\rVert_{\text{sup}}\big{)},$
where $\mathsf{d}_{2}=\mathsf{d}\otimes\mathsf{d}$.
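In Euclidean space the linear interpolation $G(x,y)(t)=(1-t)x+ty$ is the prototypical consistent geodesic flow. The following sketch (my illustration, using the reparametrization ${{\rm restr}_{s}^{t}}\gamma(r)=\gamma(s+r(t-s))$) numerically checks property 2 of the definition above and the Lipschitz bound with $L=1$:

```python
import numpy as np

# In (R^d, Euclidean) the map G(x, y)(t) = (1 - t) x + t y is a
# consistent geodesic flow (an illustration, not a construction from
# the article).
def G(x, y, t):
    return (1.0 - t) * x + t * y

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
s, t = 0.2, 0.7
for r in np.linspace(0.0, 1.0, 5):
    # restr_s^t G(x, y) evaluated at r, i.e. G(x, y)(s + r (t - s))
    lhs = G(x, y, s + r * (t - s))
    # property 2: it must coincide with G(G(x,y)(s), G(x,y)(t)) at r
    rhs = G(G(x, y, s), G(x, y, t), r)
    assert np.allclose(lhs, rhs)

# 1-Lipschitz bound of Definition 2.1 for two pairs of endpoints
x2, y2 = rng.normal(size=3), rng.normal(size=3)
bound = np.sqrt(np.sum((x - x2) ** 2) + np.sum((y - y2) ** 2))
sup = max(np.linalg.norm(G(x, y, u) - G(x2, y2, u))
          for u in np.linspace(0.0, 1.0, 50))
assert sup <= bound + 1e-12
```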
###### Definition 2.2.
Let $(X,\mathsf{d},\mathfrak{m})$ be a metric measure space and assume there
exists a consistent geodesic flow $G$ for the metric space
$(X,\mathsf{d})$. A map
$\Pi:\mathscr{P}_{ac}(X)\times\mathscr{P}_{ac}(X)\to\mathscr{P}(X\times X)$ is
called a consistent plan selection associated to the flow $G$ if
* 1)
$\Pi(\mu,\nu)\in\operatorname{OptPlans}(\mu,\nu)$ for every
$\mu,\nu\in\mathscr{P}_{ac}(X)$
* 2)
For every $\mu,\nu\in\mathscr{P}_{ac}(X)$, every pair of times $s<t\in[0,1]$
and every bounded Borel function $f:X\times X\to\mathbb{R}^{+}$ with $\int
f\,\mathrm{d}\Pi(\mu,\nu)=1$, if
$(G_{s})_{\\#}\big{(}f\cdot\Pi(\mu,\nu)\big{)},(G_{t})_{\\#}\big{(}f\cdot\Pi(\mu,\nu)\big{)}\in\mathscr{P}_{ac}(X),$
where $G_{r}$ denotes the map $e_{r}\circ G$ for every $r\in[0,1]$, then it
holds
$(G_{s},G_{t})_{\\#}\big{(}f\cdot\Pi(\mu,\nu)\big{)}=\Pi\big{(}(G_{s})_{\\#}\big{(}f\cdot\Pi(\mu,\nu)\big{)},(G_{t})_{\\#}\big{(}f\cdot\Pi(\mu,\nu)\big{)}\big{)}.$
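On the real line a natural candidate for a consistent plan selection is the monotone (quantile) coupling, which is optimal for the quadratic cost. For empirical measures with uniform weights it amounts to pairing sorted samples; the helper names below are mine, and this is only an illustrative sketch:

```python
import numpy as np

def monotone_plan(xs, ys):
    """Monotone coupling of two empirical measures with uniform weights:
    pair the sorted samples (the quantile coupling on the real line)."""
    return np.sort(np.asarray(xs, float)), np.sort(np.asarray(ys, float))

def w2_of_plan(plan):
    # transport cost of the coupled pairs for the quadratic cost
    xs, ys = plan
    return float(np.sqrt(np.mean((xs - ys) ** 2)))

# translating an empirical measure by 2 gives W_2 = 2
print(w2_of_plan(monotone_plan([1.0, 0.0], [2.0, 3.0])))  # -> 2.0
```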
Before going on, I present the following lemma, which provides a useful
equivalent characterization of condition 2 in the last definition.
###### Lemma 2.3.
Condition 2 in Definition 2.2 is equivalent to the combination of the
following two requirements
* 2.1)
$f\cdot\Pi(\mu,\nu)=\Pi\big{(}(\mathtt{p}_{1})_{\\#}\big{(}f\cdot\Pi(\mu,\nu)\big{)},(\mathtt{p}_{2})_{\\#}\big{(}f\cdot\Pi(\mu,\nu)\big{)}\big{)}$
for every $\mu,\nu\in\mathscr{P}_{ac}(X)$ and every bounded Borel function
$f:X\times X\to\mathbb{R}^{+}$ with $\int f\,\mathrm{d}\Pi(\mu,\nu)=1$.
* 2.2)
For every $\mu,\nu\in\mathscr{P}_{ac}(X)$ and every $s<t\in[0,1]$, if
$(G_{s})_{\\#}\Pi(\mu,\nu),(G_{t})_{\\#}\Pi(\mu,\nu)\in\mathscr{P}_{ac}(X),$
then it holds
$(G_{s},G_{t})_{\\#}\Pi(\mu,\nu)=\Pi\big{(}(G_{s})_{\\#}\Pi(\mu,\nu),(G_{t})_{\\#}\Pi(\mu,\nu)\big{)}.$
###### Proof.
First of all notice that, putting $f\equiv 1$ in condition 2, one obtains
condition 2.2. Moreover, condition 2.1 can also be deduced from condition 2,
considering only the case where $s=0$ and $t=1$. Therefore condition 2 implies
the combination of 2.1 and 2.2.
On the other hand, assume that both 2.1 and 2.2 hold, then for every
$s<t\in[0,1]$, every $\mu,\nu\in\mathscr{P}_{ac}(X)$ and every bounded Borel
function $f:X\times X\to\mathbb{R}^{+}$ with $\int
f\,\mathrm{d}\Pi(\mu,\nu)=1$, if
$(G_{s})_{\\#}\big{(}f\cdot\Pi(\mu,\nu)\big{)},(G_{t})_{\\#}\big{(}f\cdot\Pi(\mu,\nu)\big{)}\in\mathscr{P}_{ac}(X),$
it holds that
$\displaystyle(G_{s},G_{t})_{\\#}(f\cdot\pi)$
$\displaystyle=(G_{s},G_{t})_{\\#}\Pi\big{(}(\mathtt{p}_{1})_{\\#}(f\cdot\pi),(\mathtt{p}_{2})_{\\#}(f\cdot\pi)\big{)}$
$\displaystyle=\Pi\bigg{(}(G_{s})_{\\#}\Pi\big{(}(\mathtt{p}_{1})_{\\#}(f\cdot\pi),(\mathtt{p}_{2})_{\\#}(f\cdot\pi)\big{)},(G_{t})_{\\#}\Pi\big{(}(\mathtt{p}_{1})_{\\#}(f\cdot\pi),(\mathtt{p}_{2})_{\\#}(f\cdot\pi)\big{)}\bigg{)}$
$\displaystyle=\Pi\bigg{(}(G_{s})_{\\#}(f\cdot\pi),(G_{t})_{\\#}(f\cdot\pi)\bigg{)},$
where I have denoted by $\pi$ the plan $\Pi(\mu,\nu)$, in order to ease the
notation. This last relation is exactly the requirement of condition 2. ∎
I have introduced everything I need to prove one of the crucial results of
this section. It shows how the existence of a consistent geodesic flow and a
consistent plan selection, satisfying suitable assumptions, ensures the
validity of the very strict CD condition.
###### Theorem 2.4.
Given a metric measure space $(X,\mathsf{d},\mathfrak{m})$, assume there exist
a consistent geodesic flow $G$ for $(X,\mathsf{d})$ and a consistent plan
selection $\Pi$ associated to $G$. Suppose also that for every pair of
measures $\mu,\nu\in\mathscr{P}_{ac}(X)$, the $K$-convexity inequality of the
entropy is satisfied along the Wasserstein geodesic $G_{\\#}\Pi(\mu,\nu)$ for
a suitable $K$, that is
$\operatorname{Ent}\big{(}(G_{t})_{\\#}\Pi(\mu,\nu)\big{)}\leq(1-t)\operatorname{Ent}(\mu)+t\operatorname{Ent}(\nu)-\frac{K}{2}t(1-t)W_{2}^{2}(\mu,\nu),$
for every $t\in(0,1)$. Then $(X,\mathsf{d},\mathfrak{m})$ is a very strict
$\mathsf{CD}(K,\infty)$ space.
###### Proof.
Fix two measures $\mu,\nu\in\mathscr{P}_{ac}(X)$ and call $\pi=\Pi(\mu,\nu)$.
Then I need to prove that the $K$-convexity inequality of the entropy holds
along the optimal geodesic plan $({{\rm restr}_{s}^{t}})_{\\#}\big{(}f\cdot
G_{\\#}\pi\big{)}$, for every $s<t\in[0,1]$ and every bounded Borel function
$f:C([0,1],X)\to\mathbb{R}^{+}$ with $\int f\,\mathrm{d}(G_{\\#}\pi)=1$. This is
obviously true when at least one of its marginals at time $0$ and $1$ is not
absolutely continuous, therefore I can assume that
$(e_{s})_{\\#}\big{(}f\cdot G_{\\#}\pi\big{)},(e_{t})_{\\#}\big{(}f\cdot
G_{\\#}\pi\big{)}\in\mathscr{P}_{ac}(X).$ (2)
In particular this allows me to apply condition 2 in Definition 2.2. Now
notice that, since $G$ is obviously injective, if one calls $\tilde{f}=f\circ
G$ it holds
$({{\rm restr}_{s}^{t}})_{\\#}\big{(}f\cdot G_{\\#}\pi\big{)}=({{\rm
restr}_{s}^{t}})_{\\#}\big{(}G_{\\#}(\tilde{f}\cdot\pi)\big{)}=({{\rm
restr}_{s}^{t}}\circ G)_{\\#}(\tilde{f}\cdot\pi).$
Observe now that the definition of consistent geodesic flow ensures that
${{\rm restr}_{s}^{t}}\circ G=G\circ(G_{s},G_{t})$, thus
$\displaystyle({{\rm restr}_{s}^{t}})_{\\#}\big{(}f\cdot
G_{\\#}\pi\big{)}=(G\circ(G_{s},G_{t}))_{\\#}(\tilde{f}\cdot\pi)$
$\displaystyle=G_{\\#}\big{(}(G_{s},G_{t})_{\\#}(\tilde{f}\cdot\pi)\big{)}$
$\displaystyle=G_{\\#}\Pi\big{(}(G_{s})_{\\#}(\tilde{f}\cdot\pi),(G_{t})_{\\#}(\tilde{f}\cdot\pi)\big{)}.$
On the other hand it is obvious that
$(G_{s})_{\\#}(\tilde{f}\cdot\pi)=(e_{s})_{\\#}\big{(}G_{\\#}(\tilde{f}\cdot\pi)\big{)}=(e_{s})_{\\#}\big{(}f\cdot
G_{\\#}\pi\big{)},$
and similarly $(G_{t})_{\\#}(\tilde{f}\cdot\pi)=(e_{t})_{\\#}\big{(}f\cdot
G_{\\#}\pi\big{)}$, so I can conclude that
$({{\rm restr}_{s}^{t}})_{\\#}\big{(}f\cdot
G_{\\#}\pi\big{)}=G_{\\#}\Pi\big{(}(e_{s})_{\\#}\big{(}f\cdot
G_{\\#}\pi\big{)},(e_{t})_{\\#}\big{(}f\cdot G_{\\#}\pi\big{)}\big{)}.$
At this point the fact that the entropy functional is $K$-convex along $({{\rm
restr}_{s}^{t}})_{\\#}\big{(}f\cdot G_{\\#}\pi\big{)}$ is an easy consequence
of the assumption of the theorem, associated with (2). ∎
In the remaining part of the section I show how Theorem 2.4 can be applied in
order to prove the very strict CD condition for some suitable measured Gromov
Hausdorff limit spaces. The first result I want to present provides sufficient
conditions ensuring the existence of a consistent geodesic flow for a limit
space. The reader should notice that I am considering a notion of convergence
that is much stronger than the measured Gromov Hausdorff convergence. This
choice allows me to avoid some tedious technical details, but it is easy to
see that this result can be somehow extended to measured Gromov Hausdorff
limit spaces. Anyway, as the next section confirms, in many easy applications
this stronger hypothesis is sufficient.
###### Proposition 2.5.
Let $(X,\mathsf{d},\mathfrak{m})$ be a compact metric measure space and let
$\\{\mathsf{d}_{n}\\}_{n\in\mathbb{N}}$ be a sequence of distances on $X$
(inducing the same topology) such that there exists a sequence
$\\{\varepsilon_{n}\\}_{n\in\mathbb{N}}$ converging to zero satisfying
$|\mathsf{d}_{n}(x,y)-\mathsf{d}(x,y)|<\varepsilon_{n}\quad\text{for every
}x,y\in X,$
in particular the sequence $(X,\mathsf{d}_{n},\mathfrak{m})$ measured Gromov
Hausdorff converges to $(X,\mathsf{d},\mathfrak{m})$ by means of the identity
map. Given $L>0$, assume that for every $n$ there exists an $L$-Lipschitz
consistent geodesic flow $G_{n}$ for the metric measure space
$(X,\mathsf{d}_{n},\mathfrak{m})$. Then there exists an $L$-Lipschitz
consistent geodesic flow $G$ for the metric measure space
$(X,\mathsf{d},\mathfrak{m})$ and, up to subsequences, $G_{n}$ converges
uniformly to $G$.
###### Proof.
The space $X\times X$ is compact and thus separable, therefore there exists a
countable dense set $D\subseteq X\times X$. Notice that for every $(x,y)\in
X\times X$ and every $t\in[0,1]$, the sequence
$\\{G_{n}(x,y)(t)\\}_{n\in\mathbb{N}}$ is contained in the compact set $X$.
Then the diagonal argument ensures that, up to taking a suitable subsequence,
there exists a function
$G:D\to\\{f:[0,1]\cap\mathbb{Q}\to X\\}$
such that for every $(x,y)\in D$ and every $t\in[0,1]\cap\mathbb{Q}$ it holds
$G_{n}(x,y)(t)\to G(x,y)(t).$
Now for every $(x,y)\in D$ the function $G(x,y)$ is a
$\mathsf{d}(x,y)$-Lipschitz function on $[0,1]\cap\mathbb{Q}$, in fact for
every $s,t\in[0,1]\cap\mathbb{Q}$ it holds
$\displaystyle\mathsf{d}\big{(}G(x,y)(s),G(x,y)(t)\big{)}$
$\displaystyle\leq\mathsf{d}\big{(}G_{n}(x,y)(s),G(x,y)(s)\big{)}+\mathsf{d}\big{(}G_{n}(x,y)(s),G_{n}(x,y)(t)\big{)}$
$\displaystyle\quad+\mathsf{d}\big{(}G_{n}(x,y)(t),G(x,y)(t)\big{)}$
$\displaystyle\leq\mathsf{d}\big{(}G_{n}(x,y)(s),G(x,y)(s)\big{)}+\mathsf{d}_{n}\big{(}G_{n}(x,y)(s),G_{n}(x,y)(t)\big{)}+\varepsilon_{n}$
$\displaystyle\quad+\mathsf{d}\big{(}G_{n}(x,y)(t),G(x,y)(t)\big{)}$
$\displaystyle=\mathsf{d}\big{(}G_{n}(x,y)(s),G(x,y)(s)\big{)}+\mathsf{d}_{n}(x,y)\cdot|t-s|+\varepsilon_{n}$
$\displaystyle\quad+\mathsf{d}\big{(}G_{n}(x,y)(t),G(x,y)(t)\big{)}$
$\displaystyle\to\mathsf{d}(x,y)\cdot|t-s|.$
This allows one to extend $G(x,y)$ to a $\mathsf{d}(x,y)$-Lipschitz function on
the whole interval $[0,1]$; moreover, since clearly $G(x,y)(0)=x$ and
$G(x,y)(1)=y$, I can infer that $G(x,y)\in\operatorname{Geo}(X)$. Then for every
$(x,y)\in D$ it is possible to extend the pointwise convergence of
$G_{n}(x,y)$ to $G(x,y)$ to the interval $[0,1]$. In fact, given
$\varepsilon>0$, for every $t\in[0,1]$ there exists $s\in[0,1]\cap\mathbb{Q}$
with $|t-s|<\frac{\varepsilon}{2\mathsf{d}(x,y)}$ that allows to perform the
following estimate
$\displaystyle\mathsf{d}\big{(}G_{n}(x,y)(t),G(x,y)(t)\big{)}$
$\displaystyle\leq\mathsf{d}\big{(}G_{n}(x,y)(t),G_{n}(x,y)(s)\big{)}+\mathsf{d}\big{(}G_{n}(x,y)(s),G(x,y)(s)\big{)}$
$\displaystyle\quad+\mathsf{d}\big{(}G(x,y)(s),G(x,y)(t)\big{)}$
$\displaystyle\leq(\mathsf{d}_{n}(x,y)+\mathsf{d}(x,y))\cdot|t-s|+\mathsf{d}\big{(}G_{n}(x,y)(s),G(x,y)(s)\big{)}$
$\displaystyle\to 2\mathsf{d}(x,y)\cdot|t-s|<\varepsilon,$
the claim follows from the arbitrariness of $\varepsilon$. I end up with the
map
$G:(D,\mathsf{d}_{2})\to\big{(}\operatorname{Geo}(X),\left\lVert\cdot\right\rVert_{\text{sup}}\big{)}\subset\big{(}C([0,1],X),\left\lVert\cdot\right\rVert_{\text{sup}}\big{)}$
that is the pointwise limit of the $L$-Lipschitz maps $G_{n}$, thus also $G$
is an $L$-Lipschitz map. Therefore it can be extended to an $L$-Lipschitz
function on the whole space $X\times X$, furthermore, since
$\operatorname{Geo}(X)$ is closed with respect to the sup norm, I obtain
$G:(X\times
X,\mathsf{d}_{2})\to\big{(}\operatorname{Geo}(X),\left\lVert\cdot\right\rVert_{\text{sup}}\big{)}.$
Then the equicontinuity of the maps $G_{n}$ ensures that the sequence
$\\{G_{n}\\}_{n\in\mathbb{N}}$ uniformly converges to $G$.
In order to conclude the proof I only need to show that $G$ is a consistent
geodesic flow, proving property 2 of Definition 2.1. To this aim fix $(x,y)\in
X\times X$, $s,t\in[0,1]$ and a small $\varepsilon>0$, subsequently take
$n\in\mathbb{N}$ such that $\left\lVert
G_{n}-G\right\rVert_{\text{sup}}<\varepsilon$. Then it holds that
$\displaystyle\big{\lVert}{{\rm restr}_{s}^{t}}G(x,y)$
$\displaystyle-G\big{(}G(x,y)(s),G(x,y)(t)\big{)}\big{\rVert}$
$\displaystyle\leq 2\left\lVert
G_{n}-G\right\rVert_{\text{sup}}+\left\lVert{{\rm
restr}_{s}^{t}}G_{n}(x,y)-G_{n}\big{(}G(x,y)(s),G(x,y)(t)\big{)}\right\rVert$
$\displaystyle=2\left\lVert G_{n}-G\right\rVert_{\text{sup}}+\left\lVert
G_{n}\big{(}G_{n}(x,y)(s),G_{n}(x,y)(t)\big{)}-G_{n}\big{(}G(x,y)(s),G(x,y)(t)\big{)}\right\rVert$
$\displaystyle\leq(2+2L)\cdot\left\lVert
G_{n}-G\right\rVert_{\text{sup}}<(2+2L)\varepsilon,$
and the thesis follows from the arbitrariness of $\varepsilon$. ∎
Once Proposition 2.5 has provided a consistent geodesic flow for the limit
space, the next result shows how, under suitable assumptions, it is possible
to prove the very strict CD condition for the metric measure space
$(X,\mathsf{d},\mathfrak{m})$.
###### Proposition 2.6.
Under the same assumptions as in the last proposition, suppose that there exists
a consistent plan selection $\Pi$ on $(X,\mathsf{d},\mathfrak{m})$, associated
to $G$, such that for every $\mu,\nu\in\mathscr{P}_{ac}(X)$ there exists a
sequence $\pi_{n}\in\operatorname{OptPlans}_{\mathsf{d}_{n}}(\mu,\nu)$
satisfying
1. 1.
$\pi_{n}\rightharpoonup\Pi(\mu,\nu)$ (up to the extraction of a subsequence),
2. 2.
the $K$-convexity of the entropy functional holds along the
$\mathsf{d}_{n}$-Wasserstein geodesic $(G_{n})_{\\#}\pi_{n}$, with respect to
the distance $\mathsf{d}_{n}$.
Then the metric measure space $(X,\mathsf{d},\mathfrak{m})$ is a very strict
$\mathsf{CD}(K,\infty)$ space.
###### Proof.
Fix a time $t\in[0,1]$ and notice that assumption 2 ensures that
$\operatorname{Ent}\big{(}\big{[}(G_{n})_{t}\big{]}_{\\#}\pi_{n}\big{)}\leq(1-t)\operatorname{Ent}(\mu)+t\operatorname{Ent}(\nu)-\frac{K}{2}t(1-t)(W_{2}^{\mathsf{d}_{n}})^{2}(\mu,\nu).$
(3)
Now, since in a compact space weak convergence and Wasserstein convergence
coincide, it holds that $W_{2}^{2}(\pi_{n},\Pi(\mu,\nu))\to 0$. Then taking an
optimal transport plan $\eta$ between $\pi_{n}$ and $\Pi(\mu,\nu)$ and having
in mind that $G$ is $L$-Lipschitz, it is possible to do the following estimate
$\displaystyle
W^{2}_{2}\big{(}(G_{t})_{\\#}\pi_{n},(G_{t})_{\\#}\Pi(\mu,\nu)\big{)}$
$\displaystyle\leq\int\mathsf{d}^{2}\big{(}G_{t}(x_{1},y_{1}),G_{t}(x_{2},y_{2})\big{)}\,\mathrm{d}\eta\big{(}(x_{1},y_{1}),(x_{2},y_{2})\big{)}$
$\displaystyle\leq\int
L^{2}\cdot\mathsf{d}_{2}^{2}\big{(}(x_{1},y_{1}),(x_{2},y_{2})\big{)}\,\mathrm{d}\eta\big{(}(x_{1},y_{1}),(x_{2},y_{2})\big{)}$
$\displaystyle=L^{2}\cdot W_{2}^{2}(\pi_{n},\Pi(\mu,\nu))\to 0.$
Consequently, I am able to infer that
$\displaystyle
W^{2}_{2}\big{(}\big{[}(G_{n})_{t}\big{]}_{\\#}\pi_{n},(G_{t})_{\\#}\Pi(\mu,\nu)\big{)}$
$\displaystyle\leq
2W^{2}_{2}\big{(}\big{[}(G_{n})_{t}\big{]}_{\\#}\pi_{n},(G_{t})_{\\#}\pi_{n}\big{)}+2W^{2}_{2}\big{(}(G_{t})_{\\#}\pi_{n},(G_{t})_{\\#}\Pi(\mu,\nu)\big{)}$
$\displaystyle\leq
2\int\mathsf{d}^{2}\big{(}(G_{n})_{t},G_{t}\big{)}\,\mathrm{d}\pi_{n}+2W^{2}_{2}\big{(}(G_{t})_{\\#}\pi_{n},(G_{t})_{\\#}\Pi(\mu,\nu)\big{)}$
$\displaystyle\leq 2\left\lVert
G_{n}-G\right\rVert_{\text{sup}}^{2}+2W^{2}_{2}\big{(}(G_{t})_{\\#}\pi_{n},(G_{t})_{\\#}\Pi(\mu,\nu)\big{)}\to
0,$
and thus that
$\big{[}(G_{n})_{t}\big{]}_{\\#}\pi_{n}\xrightarrow{W_{2}}(G_{t})_{\\#}\Pi(\mu,\nu)$.
Finally, since obviously $(W_{2}^{\mathsf{d}_{n}})^{2}(\mu,\nu)\to
W_{2}^{2}(\mu,\nu)$, it is possible to pass to the limit in (3) using the
lower semicontinuity of the entropy and obtain
$\operatorname{Ent}\big{(}(G_{t})_{\\#}\Pi(\mu,\nu)\big{)}\leq(1-t)\operatorname{Ent}(\mu)+t\operatorname{Ent}(\nu)-\frac{K}{2}t(1-t)W^{2}_{2}(\mu,\nu),$
which, combined with Theorem 2.4, allows one to conclude the proof, since $t$
is arbitrary. ∎
Following verbatim the proof of Proposition 2.6 it is easy to deduce the
following slight generalization.
###### Corollary 2.7.
Under the same assumptions as in Proposition 2.5, suppose that there exists a
consistent plan selection $\Pi$ on $(X,\mathsf{d},\mathfrak{m})$, associated
to $G$. Moreover assume that for every $\mu,\nu\in\mathscr{P}_{ac}(X)$ there
exist three sequences $\mu_{n},\nu_{n}\in\mathscr{P}_{ac}(X)$ and
$\pi_{n}\in\operatorname{OptPlans}_{\mathsf{d}_{n}}(\mu_{n},\nu_{n})$
satisfying
1. 1.
$\mu_{n}\rightharpoonup\mu$, $\nu_{n}\rightharpoonup\nu$ and
$\operatorname{Ent}(\mu_{n})\to\operatorname{Ent}(\mu)$,
$\operatorname{Ent}(\nu_{n})\to\operatorname{Ent}(\nu)$,
2. 2.
$\pi_{n}\rightharpoonup\Pi(\mu,\nu)$ (up to the extraction of a subsequence),
3. 3.
the $K$-convexity of the entropy functional holds along the
$\mathsf{d}_{n}$-Wasserstein geodesic $(G_{n})_{\\#}\pi_{n}$.
Then the metric measure space $(X,\mathsf{d},\mathfrak{m})$ is a very strict
$\mathsf{CD}(K,\infty)$ space.
As already anticipated, similar results can be proven for suitable
measured Gromov Hausdorff limit spaces, also in the non-compact case. These
generalizations require some technical assumptions, but their proofs basically
follow the ones I have just presented. Anyway, in order to be concise, I
prefer not to present the most general statements, except for the following
proposition, which will be fundamental in the next section. The reader can
easily notice that it can be proven following the proof of Proposition 2.6,
except for two technical details that I will fix below.
###### Proposition 2.8.
Let $(X,\mathsf{d},\mathfrak{m})$ be a locally compact metric measure space
and let $\\{\mathsf{d}_{n}\\}_{n\in\mathbb{N}}$ be a sequence of distances on
$X$ (inducing the same topology), locally uniformly convergent to $\mathsf{d}$
as $n\to\infty$, such that there exists a constant $H$ satisfying
$\mathsf{d}_{n}(x,y)\leq H\mathsf{d}(x,y)\qquad\text{for every }x,y\in X,$ (4)
for every $n$. Assume that there exists a map $G:X\times X\to C([0,1],X)$
which is a Lipschitz consistent geodesic flow for $\mathsf{d}$ and a
consistent geodesic flow for every distance $\mathsf{d}_{n}$. Moreover,
suppose that there is a consistent plan selection $\Pi$ on
$(X,\mathsf{d},\mathfrak{m})$, associated to $G$, such that for every
$\mu,\nu\in\mathscr{P}_{ac}(X)$ there exists a sequence
$\pi_{n}\in\operatorname{OptPlans}_{\mathsf{d}_{n}}(\mu,\nu)$, satisfying
1. 1.
$\pi_{n}\rightharpoonup\Pi(\mu,\nu)$ (up to the extraction of a subsequence),
2. 2.
the $K$-convexity of the entropy functional holds along the
$\mathsf{d}_{n}$-Wasserstein geodesic $G_{\\#}\pi_{n}$, with respect to the
distance $\mathsf{d}_{n}$.
Then the metric measure space $(X,\mathsf{d},\mathfrak{m})$ is a very strict
$\mathsf{CD}(K,\infty)$ space.
###### Remark 2.9.
Notice that condition (4) ensures that
$\mathscr{P}_{ac}(X,\mathsf{d})\subseteq\mathscr{P}_{ac}(X,\mathsf{d}_{n})$
for every $n$.
###### Proof.
In order to repeat the same strategy used for Proposition 2.6 I only need to
prove that $W_{2}^{2}(\pi_{n},\Pi(\mu,\nu))\to 0$ and that
$\lim_{n\to\infty}(W_{2}^{\mathsf{d}_{n}})^{2}(\mu,\nu)=W_{2}^{2}(\mu,\nu)$.
For the first condition, according to Proposition 1.12, it is sufficient to
prove that
$\int\mathsf{d}_{2}^{2}\big{(}(x,y),(x_{0},y_{0})\big{)}\,\mathrm{d}\pi_{n}(x,y)\to\int\mathsf{d}_{2}^{2}\big{(}(x,y),(x_{0},y_{0})\big{)}\,\mathrm{d}\Pi(\mu,\nu)(x,y),$
for every fixed $(x_{0},y_{0})\in X\times X$. This can easily be shown: in
fact, for every $n\in\mathbb{N}$ it holds that
$\displaystyle\int\mathsf{d}_{2}^{2}\big{(}(x,y),(x_{0},y_{0})\big{)}\,\mathrm{d}\pi_{n}(x,y)$
$\displaystyle=\int\big{[}\mathsf{d}^{2}(x,x_{0})+\mathsf{d}^{2}(y,y_{0})\big{]}\,\mathrm{d}\pi_{n}(x,y)$
$\displaystyle=\int\mathsf{d}^{2}(x,x_{0})\,\mathrm{d}\mu(x)+\int\mathsf{d}^{2}(y,y_{0})\,\mathrm{d}\nu(y)$
$\displaystyle=\int\mathsf{d}_{2}^{2}\big{(}(x,y),(x_{0},y_{0})\big{)}\,\mathrm{d}\Pi(\mu,\nu)(x,y).$
On the other hand, taking
$\pi=\Pi(\mu,\nu)\in\operatorname{OptPlans}_{\mathsf{d}}(\mu,\nu)$, condition (4)
allows me to apply the dominated convergence theorem and deduce
$\limsup_{n\to\infty}(W_{2}^{\mathsf{d}_{n}})^{2}(\mu,\nu)\leq\limsup_{n\to\infty}\int\mathsf{d}_{n}^{2}\,\mathrm{d}\pi=\int\mathsf{d}^{2}\,\mathrm{d}\pi=W_{2}^{2}(\mu,\nu).$
Moreover, for every compact set $K\subset X\times X$ there exists a continuous
function $\phi_{K}:X\times X\to[0,1]$ such that $\phi_{K}=0$ outside a compact
set $K^{\prime}$ and $\phi_{K}\equiv 1$ on $K$. Then
$\phi_{K}\mathsf{d}^{2}_{n}\to\phi_{K}\mathsf{d}^{2}$ uniformly, therefore
$\liminf_{n\to\infty}(W_{2}^{\mathsf{d}_{n}})^{2}(\mu,\nu)\geq\liminf_{n\to\infty}\int\phi_{K}\mathsf{d}_{n}^{2}\,\mathrm{d}\pi_{n}=\int\phi_{K}\mathsf{d}^{2}\,\mathrm{d}\pi\geq\int_{K}\mathsf{d}^{2}\,\mathrm{d}\pi.$
Since $K$ is arbitrary it is possible to conclude that
$\liminf_{n\to\infty}(W_{2}^{\mathsf{d}_{n}})^{2}(\mu,\nu)\geq\int\mathsf{d}^{2}\,\mathrm{d}\pi=W_{2}^{2}(\mu,\nu),$
and consequently that
$\lim_{n\to\infty}(W_{2}^{\mathsf{d}_{n}})^{2}(\mu,\nu)=W_{2}^{2}(\mu,\nu)$.
Having established that $W_{2}^{2}(\pi_{n},\Pi(\mu,\nu))\to 0$ and that
$\lim_{n\to\infty}(W_{2}^{\mathsf{d}_{n}})^{2}(\mu,\nu)=W_{2}^{2}(\mu,\nu)$,
the proof of Proposition 2.6 can be repeated step by step and yields the
conclusion. ∎
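The convergence $(W_{2}^{\mathsf{d}_{n}})^{2}(\mu,\nu)\to W_{2}^{2}(\mu,\nu)$ can be illustrated numerically on discrete measures. The following sketch is my own hypothetical example, not taken from the text: it uses $\ell^{p}$ distances on $\mathbb{R}^{2}$, which converge locally uniformly to the $\ell^{\infty}$ distance and satisfy condition (4) with $H=2$, and brute-forces the transport cost over all plans between two two-point uniform measures.

```python
import numpy as np

# Plans between two-point uniform marginals are parameterized by the
# mass m in [0, 1/2] placed on the pairings (a1,b1) and (a2,b2).
def w2_squared(sq_cost):
    c11, c12, c21, c22 = sq_cost
    return min(m * (c11 + c22) + (0.5 - m) * (c12 + c21)
               for m in np.linspace(0.0, 0.5, 501))

a = [np.array([0.0, 0.0]), np.array([2.0, 0.0])]   # support of mu
b = [np.array([1.0, 1.0]), np.array([3.0, 1.0])]   # support of nu

def sq_costs(dist):
    return [dist(x, y) ** 2 for x in a for y in b]

d_inf = lambda x, y: float(np.max(np.abs(x - y)))  # limit distance d
w_inf = w2_squared(sq_costs(d_inf))

vals = []
for p in [2, 4, 16, 128]:                          # d_p -> d locally uniformly
    d_p = lambda x, y, p=p: float(np.sum(np.abs(x - y) ** p) ** (1.0 / p))
    vals.append(w2_squared(sq_costs(d_p)))
print(w_inf, vals)   # the d_p-costs decrease towards the d_inf-cost
```

Here the $\ell^{p}$-Wasserstein costs equal $2^{2/p}$ and shrink to the $\ell^{\infty}$ value $1$, a toy instance of the limit proved above.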
###### Remark 2.10.
This section has shown how the existence of a consistent geodesic flow and of
a consistent plan selection associated to it can help in proving the very
strict CD condition. However, I have not stated any results (except for
Proposition 2.5) that would guarantee the existence of these two objects in a
metric measure space. To this aim, it would be very interesting to investigate
under which assumptions on a given consistent geodesic flow $G$ (or on the
metric measure space) there exists a consistent plan selection associated to
$G$. In the next section I will show how a (double) minimization procedure
allows me to identify a consistent plan selection in a particular metric
measure space. It is possible that these arguments also apply to a more
general context.
## 3 Application to Crystalline Norms in $\mathbb{R}^{N}$
The aim of this section is to prove the very strict $\mathsf{CD}(0,\infty)$
condition for $\mathbb{R}^{N}$ equipped with a crystalline norm and with the
Lebesgue measure, using the theory developed in the last section and in
particular Proposition 2.8. Let me point out that the Optimal Transport
problem in these particular metric spaces has already been studied by Ambrosio,
Kirchheim and Pratelli in [6]. They were able to solve the $L^{1}$-Monge
problem using a secondary variational minimization in order to suitably
decompose the space into transport rays. Although the problem I want to face and
the way I will approach it differ from the theory developed in [6], I will
likewise use a secondary variational problem to select a suitable transport plan
connecting two given measures, obtaining, as a byproduct, the existence of an
optimal transport map between them.
Before going on, I fix the notation I will use in this section. Given a finite
set of vectors $\tilde{\mathcal{V}}\subset\mathbb{R}^{N}$ such that
$\text{span}(\tilde{\mathcal{V}})=\mathbb{R}^{N}$, I introduce the associated
crystalline norm, defined as follows:
$\left\lVert x\right\rVert:=\max_{v\in\tilde{\mathcal{V}}}|\langle
x,v\rangle|$
and the corresponding distance
$\mathsf{d}(x,y):=\left\lVert
x-y\right\rVert=\max_{v\in\tilde{\mathcal{V}}}|\langle x-y,v\rangle|.$
For ease of exposition, from now on I am going to use the following equivalent
formulations for the norm and the distance:
$\left\lVert x\right\rVert:=\max_{v\in\mathcal{V}}\langle
x,v\rangle,\quad\mathsf{d}(x,y):=\left\lVert
x-y\right\rVert=\max_{v\in\mathcal{V}}\langle x-y,v\rangle,$
where $\mathcal{V}$ denotes the set
$\tilde{\mathcal{V}}\cup(-\tilde{\mathcal{V}})$.
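As a sanity check, the crystalline norm is straightforward to evaluate numerically. The following sketch is my own illustration (not part of the text): with the generating set $\tilde{\mathcal{V}}=\{e_{1},e_{2}\}$, it recovers the sup norm on $\mathbb{R}^{2}$.

```python
import numpy as np

def crystalline_norm(x, V_tilde):
    # ||x|| = max_{v in V} <x, v>   with   V = V_tilde ∪ (-V_tilde)
    V = np.vstack([V_tilde, -V_tilde])
    return float(np.max(V @ x))

# With V_tilde = {e1, e2} the crystalline norm is the sup norm on R^2:
V_tilde = np.array([[1.0, 0.0], [0.0, 1.0]])
print(crystalline_norm(np.array([3.0, -4.0]), V_tilde))   # 4.0
```

Taking the maximum over the symmetrized set $\mathcal{V}=\tilde{\mathcal{V}}\cup(-\tilde{\mathcal{V}})$ is what replaces the absolute value in the first formulation of the norm.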
As the reader can easily guess, in this framework the choice of a consistent
geodesic flow is not problematic: in fact, it is sufficient to consider the
Euclidean one, that is
$\begin{split}G:\mathbb{R}^{N}\times\mathbb{R}^{N}&\to
C([0,1],\mathbb{R}^{N})\\\ (x,y)&\mapsto(t\mapsto(1-t)x+ty)\end{split}.$
The rest of the chapter will then be dedicated to the choice of a suitable
plan selection, associated to $G$, satisfying the requirements of Proposition
2.8. It will be identified via a secondary variational minimization. This type
of procedure turns out to be useful in many situations (see for example
Chapters 2 and 3 in [14]), and in this specific case it is inspired by the work
of Rajala [12]. Let me now go into the details. Given two measures
$\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{N})$, consider the usual Kantorovich
problem with cost $c(x,y)=\mathsf{d}^{2}(x,y)$, that is
$\min_{\pi\in\Gamma(\mu,\nu)}\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\mathsf{d}^{2}(x,y)\,\mathrm{d}\pi(x,y),$
calling $\Pi_{1}(\mu,\nu)$ the set of its minimizers. Consequently consider
the secondary variational problem
$\min_{\pi\in\Pi_{1}(\mu,\nu)}\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\mathsf{d}^{2}_{eu}(x,y)\,\mathrm{d}\pi(x,y),$
(5)
where I denote by $\mathsf{d}_{eu}$ the Euclidean distance, and denote by
$\Pi_{2}(\mu,\nu)\subseteq\Pi_{1}(\mu,\nu)$ the set of minimizers, which can
easily be seen to be nonempty. In Theorem 3.2 I will show that, if $\mu$ is
absolutely continuous, $\Pi_{2}(\mu,\nu)$ consists of a single element; but in
order to do so I first have to exploit the cyclical monotonicity properties of
the plans in $\Pi_{2}(\mu,\nu)$.
###### Proposition 3.1.
Every $\pi\in\Pi_{2}(\mu,\nu)$ is concentrated in a set $\Gamma$, such that
for every $(x,y),(x^{\prime},y^{\prime})\in\Gamma$ it holds that
$\mathsf{d}^{2}(x,y)+\mathsf{d}^{2}(x^{\prime},y^{\prime})\leq\mathsf{d}^{2}(x,y^{\prime})+\mathsf{d}^{2}(x^{\prime},y),$
(6)
moreover, if
$\mathsf{d}^{2}(x,y)+\mathsf{d}^{2}(x^{\prime},y^{\prime})=\mathsf{d}^{2}(x,y^{\prime})+\mathsf{d}^{2}(x^{\prime},y)$,
then
$\mathsf{d}^{2}_{eu}(x,y)+\mathsf{d}^{2}_{eu}(x^{\prime},y^{\prime})\leq\mathsf{d}^{2}_{eu}(x,y^{\prime})+\mathsf{d}^{2}_{eu}(x^{\prime},y).$
(7)
###### Proof.
Fix $\pi\in\Pi_{2}(\mu,\nu)$ and notice that, since in particular
$\pi\in\Pi_{1}(\mu,\nu)$, Proposition 1.6 yields that $\pi$ is concentrated in
a set $\Gamma_{1}$ satisfying (6). Furthermore, according to Proposition 1.7
and Remark 1.8, fix an upper semicontinuous Kantorovich potential $\phi$ for
the cost $c(x,y)=\mathsf{d}^{2}(x,y)$, such that also $\phi^{c}$ is upper
semicontinuous. In particular for every $\eta\in\Pi_{1}(\mu,\nu)$, it holds
$\phi(x)+\phi^{c}(y)=c(x,y)=\mathsf{d}^{2}(x,y),\quad\text{for $\eta$-almost
every $(x,y)\in\mathbb{R}^{N}\times\mathbb{R}^{N}$}.$
As a consequence, notice that being a minimizer of the secondary variational
problem (5) is equivalent to realizing the minimum of
$\min_{\eta\in\Pi(\mu,\nu)}\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\tilde{c}(x,y)\,\mathrm{d}\eta(x,y),$
where the cost $\tilde{c}$ is defined as
$\tilde{c}(x,y)=\begin{cases}\mathsf{d}_{eu}^{2}(x,y)&\text{if
}\phi(x)+\phi^{c}(y)=\mathsf{d}^{2}(x,y)\\\
+\infty&\text{otherwise}\end{cases}.$
Observe that, since $\phi$ and $\phi^{c}$ are upper semicontinuous, the cost
$\tilde{c}$ is lower semicontinuous. Thus Proposition 1.6 ensures that $\pi$
is concentrated in a set $\Gamma_{2}$ which is $\tilde{c}$-cyclically
monotone. Moreover, up to modifying $\Gamma_{2}$ on a $\pi$-null set, it is
possible to assume that for every $(x,y)\in\Gamma_{2}$
$\phi(x)+\phi^{c}(y)=c(x,y)=\mathsf{d}^{2}(x,y).$
Now take $(x,y),(x^{\prime},y^{\prime})\in\Gamma_{2}$ with
$\mathsf{d}^{2}(x,y)+\mathsf{d}^{2}(x^{\prime},y^{\prime})=\mathsf{d}^{2}(x,y^{\prime})+\mathsf{d}^{2}(x^{\prime},y)$
and deduce that
$\displaystyle\phi(x)+\phi^{c}(y)+\phi(x^{\prime})+\phi^{c}(y^{\prime})=\mathsf{d}^{2}(x,y)+\mathsf{d}^{2}(x^{\prime},y^{\prime})=\mathsf{d}^{2}(x,y^{\prime})+\mathsf{d}^{2}(x^{\prime},y).$
On the other hand
$\phi(x)+\phi^{c}(y^{\prime})\leq\mathsf{d}^{2}(x,y^{\prime})$ and
$\phi(x^{\prime})+\phi^{c}(y)\leq\mathsf{d}^{2}(x^{\prime},y)$, therefore I
obtain
$\phi(x)+\phi^{c}(y^{\prime})=\mathsf{d}^{2}(x,y^{\prime})\quad\text{and}\quad\phi(x^{\prime})+\phi^{c}(y)=\mathsf{d}^{2}(x^{\prime},y).$
Finally, the $\tilde{c}$-cyclical monotonicity allows me to conclude that
$\displaystyle\mathsf{d}^{2}_{eu}(x,y)+\mathsf{d}^{2}_{eu}(x^{\prime},y^{\prime})$
$\displaystyle=\tilde{c}(x,y)+\tilde{c}(x^{\prime},y^{\prime})$
$\displaystyle\leq\tilde{c}(x,y^{\prime})+\tilde{c}(x^{\prime},y)=\mathsf{d}^{2}_{eu}(x,y^{\prime})+\mathsf{d}^{2}_{eu}(x^{\prime},y),$
which is exactly (7). Summing up, it is easy to check that the set
$\Gamma=\Gamma_{1}\cap\Gamma_{2}$ satisfies the requirements of Proposition
3.1. ∎
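Conditions (6) and (7) are easy to check numerically on candidate support pairs. The sketch below is an illustration of mine, using the sup norm on $\mathbb{R}^{2}$ and a hypothetical pair of support points: the "vertical" pairing satisfies (6) with equality, and then (7) strictly.

```python
import numpy as np

d_inf2 = lambda x, y: float(np.max(np.abs(x - y))) ** 2  # squared crystalline (sup) distance
d_eu2  = lambda x, y: float(np.sum((x - y) ** 2))        # squared Euclidean distance

# two hypothetical support pairs (x, y), (x2, y2) of a candidate plan
x,  y  = np.array([0.0, 0.0]), np.array([0.0, 1.0])
x2, y2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])

lhs6, rhs6 = d_inf2(x, y) + d_inf2(x2, y2), d_inf2(x, y2) + d_inf2(x2, y)
lhs7, rhs7 = d_eu2(x, y) + d_eu2(x2, y2), d_eu2(x, y2) + d_eu2(x2, y)
assert lhs6 <= rhs6                       # condition (6)
if np.isclose(lhs6, rhs6):                # equality case of (6): check (7)
    assert lhs7 <= rhs7
print(lhs6, rhs6, lhs7, rhs7)
```

In this configuration the crystalline cost cannot distinguish the two pairings ($2=2$ in (6)), while the Euclidean cost does ($2<4$ in (7)): exactly the situation the secondary minimization is designed to resolve.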
I can now proceed to the proof of one of the main results of this work.
###### Theorem 3.2.
Given two measures $\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{N})$ with $\mu$
absolutely continuous with respect to $\mathscr{L}^{N}$, there exists a unique
$\pi\in\Pi_{2}(\mu,\nu)$ and it is induced by a map.
###### Proof.
Reasoning as in Remark 1.2, it is sufficient to prove that every plan in
$\Pi_{2}(\mu,\nu)$ is induced by a map. So take $\pi\in\Pi_{2}(\mu,\nu)$;
applying Proposition 3.1, it is possible to find a full $\pi$-measure set
$\Gamma$ satisfying the monotonicity requirements (6) and (7). Assume by
contradiction that $\pi$ is not induced by a map and call
$(\pi_{x})_{x\in\mathbb{R}^{N}}\subset\mathscr{P}(\mathbb{R}^{N})$ the
disintegration with respect to the projection map $\mathtt{p}_{1}$; then
$\pi_{x}$ is not a delta measure for a set of $x$ of positive $\mu$-measure.
Moreover, given a
non-empty set $V\subseteq\mathcal{V}$, define the sets
$\tilde{A}_{z,V}:=\big{\\{}x\in\mathbb{R}^{N}\,:\,\mathsf{d}(z,x)=\langle
z-x,v\rangle\,\,\,\text{for every }v\in V\big{\\}},$
$A_{z,V}:=\big{\\{}x\in\tilde{A}_{z,V}\,:\,\mathsf{d}(z,x)>\langle
z-x,v\rangle\,\,\,\text{for every }v\in\mathcal{V}\setminus V\big{\\}},$
$A_{z,V}^{\varepsilon}:=\big{\\{}x\in\tilde{A}_{z,V}\,:\,\mathsf{d}(z,x)>\langle
z-x,v\rangle+\varepsilon\,\,\,\text{for every }v\in\mathcal{V}\setminus
V\big{\\}}.$
Notice that, for every fixed $z\in\mathbb{R}^{N}$, the sets $A_{z,V}$
constitute a partition of $\mathbb{R}^{N}$ as $V\subseteq\mathcal{V}$ varies.
Consequently, I divide the proof into three steps, whose combination will allow
me to conclude by contradiction.
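The cells $A_{z,V}$ admit a simple computational description: a point $x$ belongs to $A_{z,V}$ exactly when $V$ is the set of vectors attaining the maximum defining $\mathsf{d}(z,x)$. Here is a sketch of mine for the sup norm on $\mathbb{R}^{2}$ (the tolerance is an implementation detail, not part of the text):

```python
import numpy as np

V_all = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])  # V for the sup norm

def active_set(z, x, tol=1e-9):
    # indices of the vectors v with <z - x, v> = d(z, x); this set V is
    # the one for which x lies in the cell A_{z,V}
    vals = V_all @ (z - x)
    return sorted(int(i) for i in np.flatnonzero(vals > vals.max() - tol))

z = np.zeros(2)
print(active_set(z, np.array([-1.0, 0.5])))   # [0]    -> only e1 is active
print(active_set(z, np.array([-1.0, 1.0])))   # [0, 3] -> e1 and -e2 are both active
```

Classifying each $x$ by its active set produces exactly the partition of $\mathbb{R}^{N}$ into the cells $A_{z,V}$ described above.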
Step 1: Given two nonempty sets $V_{1},V_{2}\subseteq\mathcal{V}$ such that
$v_{1}\neq v_{2}$ for every $v_{1}\in V_{1}$ and $v_{2}\in V_{2}$ (that is
$V_{1}\cap V_{2}=\emptyset$), the set
$E:=\big{\\{}z\in\mathbb{R}^{N}\,:\,\pi_{z}(A_{z,V_{1}})>0\text{ and
}\pi_{z}(A_{z,V_{2}})>0\big{\\}}$
has zero $\mu$-measure.
First of all, notice that if $E$ is non-empty, then for every $z\in E$ there
exist $x\in A_{z,V_{1}}$ and $y\in A_{z,V_{2}}$ such that (up to a
normalization) $\mathsf{d}(z,x)=\mathsf{d}(z,y)=1$, and in particular
$\langle x,v_{1}\rangle=1>\langle y,v_{1}\rangle\quad\text{for every }v_{1}\in
V_{1}$
and
$\langle y,v_{2}\rangle=1>\langle x,v_{2}\rangle\quad\text{for every }v_{2}\in
V_{2}.$
Therefore, calling $\bar{v}=x-y$, it holds that
$\begin{split}&\langle\bar{v},v_{1}\rangle>0\quad\text{for every }v_{1}\in
V_{1},\\\ &\langle\bar{v},v_{2}\rangle<0\quad\text{for every }v_{2}\in
V_{2}.\end{split}$ (8)
Now, assume by contradiction that $E$ has positive $\mu$-measure, in
particular it is non-empty and there exists $\bar{v}$ satisfying (8).
Moreover, notice that, since $\Gamma$ is $\pi$-measurable and has full
measure, the slice $\Gamma_{z}:=\\{y\in\mathbb{R}^{N}:(z,y)\in\Gamma\\}$ is
$\pi_{z}$-measurable with $\pi_{z}(\Gamma_{z})=1$ for $\mu$-almost every
$z\in\mathbb{R}^{N}$. In particular, for $\varepsilon>0$ small enough the set
$E_{\varepsilon}:=\big{\\{}z\in\mathbb{R}^{N}\,:\,\pi_{z}(A_{z,V_{1}}^{\varepsilon}\cap\Gamma_{z})>0\text{
and }\pi_{z}(A_{z,V_{2}}^{\varepsilon}\cap\Gamma_{z})>0\big{\\}}$
has positive $\mu$-measure, and thus it also has positive
$\mathscr{L}^{N}$-measure. Take a Lebesgue density point $\bar{z}$ of
$E_{\varepsilon}$; then in a neighborhood of $\bar{z}$ there exists $z$ such
that $z,z+\epsilon\bar{v}\in E_{\varepsilon}$ for a suitable
$0<\epsilon<\frac{\varepsilon}{\left\lVert\bar{v}\right\rVert}$. Now, there
exist $x\in A_{z,V_{1}}^{\varepsilon}$ and $y\in
A_{z+\epsilon\bar{v},V_{2}}^{\varepsilon}$ such that
$(z,x),(z+\epsilon\bar{v},y)\in\Gamma$. Notice that for every $v_{1}\in
V_{1}$, it holds
$\langle x-(z+\epsilon\bar{v}),v_{1}\rangle=\langle
x-z,v_{1}\rangle-\epsilon\langle\bar{v},v_{1}\rangle<\langle
x-z,v_{1}\rangle=\mathsf{d}(z,x),$ (9)
while for every $w\in\mathcal{V}\setminus V_{1}$ it is possible to perform the
following estimate:
$\langle x-(z+\epsilon\bar{v}),w\rangle=\langle
x-z,w\rangle-\epsilon\langle\bar{v},w\rangle<\mathsf{d}(x,z)-\varepsilon+\epsilon\left\lVert\bar{v}\right\rVert<\mathsf{d}(z,x).$
(10)
The combination of (9) and (10) yields
$\mathsf{d}(x,z+\epsilon\bar{v})<\mathsf{d}(z,x).$ (11)
Similarly, it holds
$\langle y-z,v_{2}\rangle=\langle
y-(z+\epsilon\bar{v}),v_{2}\rangle+\epsilon\langle\bar{v},v_{2}\rangle<\langle
y-(z+\epsilon\bar{v}),v_{2}\rangle=\mathsf{d}(z+\epsilon\bar{v},y),$
for every $v_{2}\in V_{2}$, and
$\langle y-z,w\rangle=\langle
y-(z+\epsilon\bar{v}),w\rangle+\epsilon\langle\bar{v},w\rangle<\mathsf{d}(z+\epsilon\bar{v},y)-\varepsilon+\epsilon\left\lVert\bar{v}\right\rVert<\mathsf{d}(z+\epsilon\bar{v},y),$
for every $w\in\mathcal{V}\setminus V_{2}$, which together show that
$\mathsf{d}(z,y)<\mathsf{d}(z+\epsilon\bar{v},y).$ (12)
Now, the inequalities (11) and (12) allow me to infer that
$\mathsf{d}^{2}(z,x)+\mathsf{d}^{2}(z+\epsilon\bar{v},y)>\mathsf{d}^{2}(z,y)+\mathsf{d}^{2}(z+\epsilon\bar{v},x),$
contradicting the condition (6) of Proposition 3.1.
Step 2: Given two nonempty sets $V_{1},V_{2}\subseteq\mathcal{V}$ such that
$V_{1}\cap V_{2}\neq\emptyset$ and $V_{1}\neq V_{2}$, the set
$E:=\big{\\{}z\in\mathbb{R}^{N}\,:\,\pi_{z}(A_{z,V_{1}})>0\text{ and
}\pi_{z}(A_{z,V_{2}})>0\big{\\}}$
has zero $\mu$-measure.
Call $V=V_{1}\cap V_{2}$, $W_{1}=V_{1}\setminus V$ and $W_{2}=V_{2}\setminus
V$. Assume by contradiction that $E$ has positive $\mu$-measure, then for
$\varepsilon>0$ sufficiently small the set
$E_{\varepsilon}:=\big{\\{}z\in\mathbb{R}^{N}\,:\,\pi_{z}(A_{z,V_{1}}^{\varepsilon})>0\text{
and }\pi_{z}(A_{z,V_{2}}^{\varepsilon})>0\big{\\}}$
has positive $\mu$-measure too. As a consequence
$\gamma:=\int_{E_{\varepsilon}}{\left.\kern-1.2pt\pi_{z}\right|_{A_{z,V_{1}}^{\varepsilon}}}\times{\left.\kern-1.2pt\pi_{z}\right|_{A_{z,V_{2}}^{\varepsilon}}}\,\mathrm{d}\mu(z)$
is a strictly positive measure on $\mathbb{R}^{N}\times\mathbb{R}^{N}$ with
$\gamma\big{(}\big{\\{}(x,x):x\in\mathbb{R}^{N}\big{\\}}\big{)}=0$. Thus there
exists $(\bar{x},\bar{y})\in\operatorname{supp}(\gamma)$ with
$\bar{x}\neq\bar{y}$ and then
$\gamma\big{(}B_{\delta}(\bar{x})\times B_{\delta}(\bar{y})\big{)}>0,$
for every $\delta>0$. In particular, proceeding as in the first step, it is
possible to conclude that for every $\delta>0$ the set
$E_{\varepsilon}^{\delta}:=\big{\\{}z\in\mathbb{R}^{N}\,:\,\pi_{z}\big{(}A_{z,V_{1}}^{\varepsilon}\cap\Gamma_{z}\cap
B_{\delta}(\bar{x})\big{)}>0\text{ and
}\pi_{z}\big{(}A_{z,V_{2}}^{\varepsilon}\cap\Gamma_{z}\cap
B_{\delta}(\bar{y})\big{)}>0\big{\\}}$
has positive $\mu$-measure, and thus it also has positive
$\mathscr{L}^{N}$-measure. Now, I divide the proof into two cases, depending on
the vector $\bar{v}=\bar{x}-\bar{y}$:
* •
Case 1: $\langle\bar{v},v\rangle=0$ for every $v\in V$.
Since $(\bar{x},\bar{y})\in\operatorname{supp}(\gamma)$, for every $\eta>0$
there exist $x_{\eta},y_{\eta},z_{\eta}$ such that
$\left\lVert\bar{x}-x_{\eta}\right\rVert,\left\lVert\bar{y}-y_{\eta}\right\rVert<\eta$
and $x_{\eta}\in A_{z_{\eta},V_{1}}^{\varepsilon}$, $y_{\eta}\in
A_{z_{\eta},V_{2}}^{\varepsilon}$. Then, given $v\in V$, for every $v_{1}\in
W_{1}$ it holds that
$\displaystyle\langle x_{\eta}-z_{\eta},v_{1}\rangle=\langle
x_{\eta}-z_{\eta},v\rangle$ $\displaystyle=\langle
x_{\eta}-\bar{x},v\rangle+\langle\bar{v},v\rangle+\langle\bar{y}-y_{\eta},v\rangle+\langle
y_{\eta}-z_{\eta},v\rangle$ $\displaystyle>\langle
y_{\eta}-z_{\eta},v_{1}\rangle+\frac{\varepsilon}{2},$
for $\eta$ small enough. Thus, if $\eta$ is sufficiently small, it follows that
$\langle x_{\eta},v_{1}\rangle>\langle
y_{\eta},v_{1}\rangle+\frac{\varepsilon}{2}\quad\text{for every }v_{1}\in
W_{1},$
and similarly
$\langle x_{\eta},v_{2}\rangle<\langle
y_{\eta},v_{2}\rangle-\frac{\varepsilon}{2}\quad\text{for every }v_{2}\in
W_{2}.$
Taking the limit as $\eta\to 0$, clearly $x_{\eta}\to\bar{x}$ and
$y_{\eta}\to\bar{y}$, therefore I conclude that
$\langle\bar{x},v_{1}\rangle>\langle\bar{y},v_{1}\rangle\,\text{ and thus
}\,\langle\bar{v},v_{1}\rangle>0,\quad\text{for every }v_{1}\in W_{1},$ (13)
and
$\langle\bar{x},v_{2}\rangle<\langle\bar{y},v_{2}\rangle\,\text{ and thus
}\,\langle\bar{v},v_{2}\rangle<0,\quad\text{for every }v_{2}\in W_{2}.$ (14)
Now, fix $\delta>0$ sufficiently small such that
$\langle\bar{v},x\rangle>\langle\bar{v},y\rangle,\quad\text{for every }x\in
B_{\delta}(\bar{x})\text{ and }y\in B_{\delta}(\bar{y}).$ (15)
As already emphasized, the set $E_{\varepsilon}^{\delta}$ has positive
Lebesgue measure, then take one of its density points $\bar{z}$. In a
neighborhood of $\bar{z}$ there exists $z$, such that $z,z+\epsilon\bar{v}\in
E_{\varepsilon}^{\delta}$ for a suitable
$0<\epsilon<\frac{\varepsilon}{\left\lVert\bar{v}\right\rVert}$, subsequently
take $x\in A_{z,V_{1}}^{\varepsilon}\cap B_{\delta}(\bar{x})$ with
$(z,x)\in\Gamma$, and $y\in A_{z+\epsilon\bar{v},V_{2}}^{\varepsilon}\cap
B_{\delta}(\bar{y})$ with $(z+\epsilon\bar{v},y)\in\Gamma$. Notice that for
every $v\in V$ it holds
$\langle x-(z+\epsilon\bar{v}),v\rangle=\langle x-z,v\rangle=\mathsf{d}(z,x),$
moreover (13) ensures that for every $v_{1}\in W_{1}$
$\langle x-(z+\epsilon\bar{v}),v_{1}\rangle<\langle
x-z,v_{1}\rangle=\mathsf{d}(z,x),$
while for every $w\in\mathcal{V}\setminus V_{1}$ the following estimate can be
performed
$\langle x-(z+\epsilon\bar{v}),w\rangle=\langle
x-z,w\rangle-\epsilon\langle\bar{v},w\rangle<\mathsf{d}(x,z)-\varepsilon+\epsilon\left\lVert\bar{v}\right\rVert<\mathsf{d}(z,x).$
These last three relations show that
$\mathsf{d}(z+\epsilon\bar{v},x)=\mathsf{d}(z,x),$ (16)
and analogously using (14) it can be proven that
$\mathsf{d}(z,y)=\mathsf{d}(z+\epsilon\bar{v},y).$ (17)
On the other hand, the choice of $\delta$ I made (see (15)) guarantees that
$\displaystyle\mathsf{d}_{eu}^{2}(z+\epsilon\bar{v},x)+\mathsf{d}_{eu}^{2}(z,y)$
$\displaystyle=\langle z+\epsilon\bar{v}-x,z+\epsilon\bar{v}-x\rangle+\langle
z-y,z-y\rangle$ $\displaystyle=\langle z-x,z-x\rangle+2\langle
z-x,\epsilon\bar{v}\rangle+\langle\epsilon\bar{v},\epsilon\bar{v}\rangle+\langle
z-y,z-y\rangle$ $\displaystyle<\langle
z-x,z-x\rangle+\langle\epsilon\bar{v},\epsilon\bar{v}\rangle+2\langle
z-y,\epsilon\bar{v}\rangle+\langle z-y,z-y\rangle$ $\displaystyle=\langle
z-x,z-x\rangle+\langle z+\epsilon\bar{v}-y,z+\epsilon\bar{v}-y\rangle$
$\displaystyle=\mathsf{d}_{eu}^{2}(z,x)+\mathsf{d}_{eu}^{2}(z+\epsilon\bar{v},y),$
which, together with (16) and (17), contradicts the condition (7) of
Proposition 3.1.
* •
Case 2: there exists $\bar{w}\in V$ such that
$\langle\bar{v},\bar{w}\rangle\neq 0$.
Without loss of generality I can assume $\langle\bar{v},\bar{w}\rangle>0$; then
it is possible to fix a sufficiently small $\delta>0$ such that, for a
suitable $\eta>0$, it holds
$\langle\bar{w},x\rangle>\langle\bar{w},y\rangle+\eta,\quad\text{for every
}x\in B_{\delta}(\bar{x})\text{ and }y\in B_{\delta}(\bar{y}).$
Fix a vector $\tilde{v}\in A_{z,V_{1}}$. Repeating the argument used in Case 1,
it is possible to find a point $z\in\mathbb{R}^{N}$ such that
$z,z+\epsilon\tilde{v}\in E_{\varepsilon}^{\delta}$ for a suitable
$0<\epsilon<\min\big{\\{}\frac{\varepsilon}{2\left\lVert\tilde{v}\right\rVert},\frac{\eta}{2\left\lVert\tilde{v}\right\rVert}\big{\\}}$.
Then take $x\in A_{z,V_{1}}^{\varepsilon}\cap B_{\delta}(\bar{x})$ and $y\in
A_{z+\epsilon\tilde{v},V_{2}}^{\varepsilon}\cap B_{\delta}(\bar{y})$ with
$(z,x),(z+\epsilon\tilde{v},y)\in\Gamma$, and notice that for every $v_{1}\in
V_{1}$ it holds that
$\langle x-(z+\epsilon\tilde{v}),v_{1}\rangle=\langle
x-z,v_{1}\rangle-\epsilon\langle\tilde{v},v_{1}\rangle=\mathsf{d}(z,x)-\epsilon\left\lVert\tilde{v}\right\rVert$
while for every $w\in\mathcal{V}\setminus V_{1}$ I have
$\langle x-(z+\epsilon\tilde{v}),w\rangle=\langle
x-z,w\rangle-\epsilon\langle\tilde{v},w\rangle<\mathsf{d}(z,x)-\varepsilon+\epsilon\left\lVert\tilde{v}\right\rVert<\mathsf{d}(z,x)-\epsilon\left\lVert\tilde{v}\right\rVert,$
therefore it follows that
$\mathsf{d}(z+\epsilon\tilde{v},x)=\mathsf{d}(z,x)-\epsilon\left\lVert\tilde{v}\right\rVert.$
(18)
On the other hand, observe that
$\mathsf{d}(z+\epsilon\tilde{v},y)=\langle
y-(z+\epsilon\tilde{v}),\bar{w}\rangle=\langle
y-z,\bar{w}\rangle-\epsilon\langle\tilde{v},\bar{w}\rangle<\mathsf{d}(z,x)-\eta+\epsilon\left\lVert\tilde{v}\right\rVert<\mathsf{d}(z,x).$
(19)
It is then possible to conclude that
$\displaystyle\mathsf{d}^{2}(z+\epsilon\tilde{v},x)+\mathsf{d}^{2}(z,y)$
$\displaystyle\leq\big{(}\mathsf{d}(z,x)-\epsilon\left\lVert\tilde{v}\right\rVert\big{)}^{2}+\big{(}\mathsf{d}(z+\epsilon\tilde{v},y)+\epsilon\left\lVert\tilde{v}\right\rVert\big{)}^{2}$
$\displaystyle=\mathsf{d}^{2}(z,x)+\mathsf{d}^{2}(z+\epsilon\tilde{v},y)-2\epsilon\left\lVert\tilde{v}\right\rVert\big{(}\mathsf{d}(z,x)-\mathsf{d}(z+\epsilon\tilde{v},y)-\epsilon\left\lVert\tilde{v}\right\rVert\big{)}$
$\displaystyle<\mathsf{d}^{2}(z,x)+\mathsf{d}^{2}(z+\epsilon\tilde{v},y),$
where I used (18), the triangle inequality, and the fact that (19) together
with $\epsilon<\frac{\eta}{2\left\lVert\tilde{v}\right\rVert}$ gives
$\mathsf{d}(z,x)-\mathsf{d}(z+\epsilon\tilde{v},y)>\epsilon\left\lVert\tilde{v}\right\rVert$.
This last inequality contradicts condition (6) of Proposition 3.1.
Step 3: Given a nonempty set $V\subseteq\mathcal{V}$, the set
$E:=\big{\\{}z\in\mathbb{R}^{N}\,:\,{\left.\kern-1.2pt\pi_{z}\right|_{A_{z,V}}}\text{
is not a delta measure}\big{\\}}$
has zero $\mu$-measure.
The proof of this step is very similar to that of Step 2; nevertheless, I
present it anyway, omitting the details that can easily be filled in following
the proof of Step 2. Assume by contradiction that $E$ has positive
$\mu$-measure; then for $\varepsilon>0$ sufficiently small the set
$E_{\varepsilon}:=\big{\\{}z\in\mathbb{R}^{N}\,:\,{\left.\kern-1.2pt\pi_{z}\right|_{A_{z,V}^{\varepsilon}}}\text{
is not a delta measure}\big{\\}}$
has positive $\mu$-measure too. As a consequence
$\gamma:=\int_{E_{\varepsilon}}{\left.\kern-1.2pt\pi_{z}\right|_{A_{z,V}^{\varepsilon}}}\times{\left.\kern-1.2pt\pi_{z}\right|_{A_{z,V}^{\varepsilon}}}\,\mathrm{d}\mu(z)$
is a strictly positive measure on $\mathbb{R}^{N}\times\mathbb{R}^{N}$ that is
not concentrated on $\big{\\{}(x,x):x\in\mathbb{R}^{N}\big{\\}}$. Thus there
exists $(\bar{x},\bar{y})\in\operatorname{supp}(\gamma)$ with
$\bar{x}\neq\bar{y}$ and then
$\gamma\big{(}B_{\delta}(\bar{x})\times B_{\delta}(\bar{y})\big{)}>0,$
for every $\delta>0$. In particular, proceeding as in the first step, it is
possible to conclude that for every $\delta>0$ the set
$E_{\varepsilon}^{\delta}:=\big{\\{}z\in\mathbb{R}^{N}\,:\,\pi_{z}\big{(}A_{z,V}^{\varepsilon}\cap\Gamma_{z}\cap
B_{\delta}(\bar{x})\big{)}>0\text{ and
}\pi_{z}\big{(}A_{z,V}^{\varepsilon}\cap\Gamma_{z}\cap
B_{\delta}(\bar{y})\big{)}>0\big{\\}}$
has positive $\mu$-measure, and thus it also has positive
$\mathscr{L}^{N}$-measure. Now, as I did in Step 2, I divide the proof into two
cases, depending on the vector $\bar{v}=\bar{x}-\bar{y}$:
* •
Case 1: $\langle\bar{v},v\rangle=0$ for every $v\in V$.
First of all, fix $\delta>0$ sufficiently small such that
$\langle\bar{v},x\rangle>\langle\bar{v},y\rangle,\quad\text{for every }x\in
B_{\delta}(\bar{x})\text{ and }y\in B_{\delta}(\bar{y}).$
Proceeding as in Step 2, I can find $z\in\mathbb{R}^{N}$ such that
$z,z+\epsilon\bar{v}\in E_{\varepsilon}^{\delta}$ for a positive, suitably
small $\epsilon$. Subsequently take $x\in A_{z,V}^{\varepsilon}\cap
B_{\delta}(\bar{x})$ with $(z,x)\in\Gamma$, and $y\in
A_{z+\epsilon\bar{v},V}^{\varepsilon}\cap B_{\delta}(\bar{y})$ with
$(z+\epsilon\bar{v},y)\in\Gamma$. Following the proof of Step 2, it is easy to
realize that
$\mathsf{d}(z+\epsilon\bar{v},x)=\mathsf{d}(z,x),$ (20)
and
$\mathsf{d}(z,y)=\mathsf{d}(z+\epsilon\bar{v},y).$ (21)
On the other hand, the choice of $\delta$ I made guarantees that
$\mathsf{d}_{eu}^{2}(z+\epsilon\bar{v},x)+\mathsf{d}_{eu}^{2}(z,y)<\mathsf{d}_{eu}^{2}(z,x)+\mathsf{d}_{eu}^{2}(z+\epsilon\bar{v},y),$
which, together with (20) and (21), contradicts the condition (7) of
Proposition 3.1.
* •
Case 2: there exists $\bar{w}\in V$ such that
$\langle\bar{v},\bar{w}\rangle\neq 0$.
Without loss of generality I can assume $\langle\bar{v},\bar{w}\rangle>0$; then
it is possible to fix a sufficiently small $\delta>0$ such that, for a
suitable $\eta>0$, it holds that
$\langle\bar{w},x\rangle>\langle\bar{w},y\rangle+\eta,\quad\text{for every
}x\in B_{\delta}(\bar{x})\text{ and }y\in B_{\delta}(\bar{y}).$
Once a vector $\tilde{v}\in A_{z,V}$ has been fixed, it is possible to find a
point $z\in\mathbb{R}^{N}$ such that $z,z+\epsilon\tilde{v}\in
E_{\varepsilon}^{\delta}$ for a positive, suitably small $\epsilon$. Then take
$x\in A_{z,V}^{\varepsilon}\cap B_{\delta}(\bar{x})$ and $y\in
A_{z+\epsilon\tilde{v},V}^{\varepsilon}\cap B_{\delta}(\bar{y})$ with
$(z,x),(z+\epsilon\tilde{v},y)\in\Gamma$. Proceeding as I did in Step 2, it is
easy to notice that
$\mathsf{d}(z+\epsilon\tilde{v},x)=\mathsf{d}(z,x)-\epsilon\left\lVert\tilde{v}\right\rVert,$
(22)
and
$\mathsf{d}(z+\epsilon\tilde{v},y)<\mathsf{d}(z,x).$ (23)
Then, combining (22) and (23), I can conclude that
$\mathsf{d}^{2}(z+\epsilon\tilde{v},x)+\mathsf{d}^{2}(z,y)<\mathsf{d}^{2}(z,x)+\mathsf{d}^{2}(z+\epsilon\tilde{v},y),$
contradicting condition (6) of Proposition 3.1.
As anticipated, the combination of the three steps concludes the proof: Steps 1
and 2 show that, for $\mu$-almost every $z$, the measure $\pi_{z}$ is
concentrated on a single cell $A_{z,V}$, while Step 3 shows that its
restriction to that cell is a delta measure. Hence $\pi_{z}$ is a delta measure
for $\mu$-almost every $z$, contradicting the assumption. ∎
At this point it is clear that Theorem 3.2 provides a plan selection on
$\mathscr{P}_{ac}(\mathbb{R}^{N})\times\mathscr{P}_{ac}(\mathbb{R}^{N})$,
obtained by simply requiring $\Pi(\mu,\nu)$ to be the unique optimal transport
plan in $\Pi_{2}(\mu,\nu)$. The following proposition ensures that $\Pi$ is a
consistent plan selection.
###### Proposition 3.3.
The map $\Pi$ is a consistent plan selection, associated to $G$.
###### Proof.
Considering how $\Pi$ has been defined, in order to conclude the proof it is
sufficient to verify conditions 2.1 and 2.2 of Lemma 2.3. It is easy to realize
that condition 2.1 is satisfied, since $f\cdot\Pi(\mu,\nu)\ll\Pi(\mu,\nu)$ with
bounded density for every suitable $f$. Condition 2.2 is a little trickier and
I am going to prove it in full detail.
Assume by contradiction that, for some
$\mu,\nu\in\mathscr{P}_{ac}(\mathbb{R}^{N})$,
$\pi_{2}:=(G_{s},G_{t})_{\\#}\Pi(\mu,\nu)$ is not a minimizer for the
secondary variational problem (5), with absolutely continuous marginals
$\mu_{s}:=(G_{s})_{\\#}\Pi(\mu,\nu)$ and $\mu_{t}:=(G_{t})_{\\#}\Pi(\mu,\nu)$.
Since $\pi_{2}$ is clearly an optimal transport plan, this means that there
exists $\pi\in\operatorname{OptPlans}(\mu_{s},\mu_{t})$ such that
$\int\mathsf{d}_{eu}^{2}(x,y)\,\mathrm{d}\pi<\int\mathsf{d}_{eu}^{2}(x,y)\,\mathrm{d}\pi_{2}.$
Then Dudley’s gluing lemma ensures the existence of a probability measure
$\tilde{\pi}\in\mathscr{P}((\mathbb{R}^{N})^{4})$ such that
$(\mathtt{p}_{1},\mathtt{p}_{2})_{\\#}\tilde{\pi}=\pi_{1},\quad(\mathtt{p}_{2},\mathtt{p}_{3})_{\\#}\tilde{\pi}=\pi\quad\text{and}\quad(\mathtt{p}_{3},\mathtt{p}_{4})_{\\#}\tilde{\pi}=\pi_{3},$
where $\pi_{1}:=(G_{0},G_{s})_{\\#}\Pi(\mu,\nu)$ and
$\pi_{3}:=(G_{t},G_{1})_{\\#}\Pi(\mu,\nu)$. Defining
$\bar{\pi}:=(\mathtt{p}_{1},\mathtt{p}_{4})_{\\#}\tilde{\pi}$ it is possible
to perform the following estimate
$\displaystyle\int\mathsf{d}^{2}(x,y)\,\mathrm{d}\bar{\pi}(x,y)$
$\displaystyle=\int\mathsf{d}^{2}(x,y)\,\mathrm{d}\tilde{\pi}(x,z,w,y)$
$\displaystyle\leq\int\big{(}\mathsf{d}(x,z)+\mathsf{d}(z,w)+\mathsf{d}(w,y)\big{)}^{2}\,\mathrm{d}\tilde{\pi}(x,z,w,y)$
$\displaystyle=\int\mathsf{d}^{2}(x,z)\,\mathrm{d}\pi_{1}+\int\mathsf{d}^{2}(z,w)\,\mathrm{d}\pi+\int\mathsf{d}^{2}(w,y)\,\mathrm{d}\pi_{3}$
$\displaystyle\text{
}+2\int\mathsf{d}(x,z)\mathsf{d}(z,w)\,\mathrm{d}\tilde{\pi}(x,z,w,y)+2\int\mathsf{d}(x,z)\mathsf{d}(w,y)\,\mathrm{d}\tilde{\pi}(x,z,w,y)$
$\displaystyle\text{
}+2\int\mathsf{d}(z,w)\mathsf{d}(w,y)\,\mathrm{d}\tilde{\pi}(x,z,w,y).$
Moreover, these last three integrals can be further estimated, inferring that
$\displaystyle
2\int\mathsf{d}(x,z)\mathsf{d}(z,w)\,\mathrm{d}\tilde{\pi}(x,z,w,y)$
$\displaystyle=s(t-s)\int
2\bigg{(}\frac{1}{s}\mathsf{d}(x,z)\bigg{)}\bigg{(}\frac{1}{t-s}\mathsf{d}(z,w)\bigg{)}\,\mathrm{d}\tilde{\pi}(x,z,w,y)$
$\displaystyle\leq\frac{t-s}{s}\int\mathsf{d}^{2}(x,z)\,\mathrm{d}\pi_{1}+\frac{s}{t-s}\int\mathsf{d}^{2}(z,w)\,\mathrm{d}\pi$
and similarly
$2\int\mathsf{d}(x,z)\mathsf{d}(w,y)\,\mathrm{d}\tilde{\pi}(x,z,w,y)\leq\frac{1-t}{s}\int\mathsf{d}^{2}(x,z)\,\mathrm{d}\pi_{1}+\frac{s}{1-t}\int\mathsf{d}^{2}(w,y)\,\mathrm{d}\pi_{3},$
$2\int\mathsf{d}(z,w)\mathsf{d}(w,y)\,\mathrm{d}\tilde{\pi}(x,z,w,y)\leq\frac{1-t}{t-s}\int\mathsf{d}^{2}(z,w)\,\mathrm{d}\pi+\frac{t-s}{1-t}\int\mathsf{d}^{2}(w,y)\,\mathrm{d}\pi_{3}.$
Putting together these last three inequalities, it is possible to deduce that
$\displaystyle\int\mathsf{d}^{2}(x,y)\,\mathrm{d}\bar{\pi}(x,y)$
$\displaystyle\leq\frac{1}{s}\int\mathsf{d}^{2}(x,z)\,\mathrm{d}\pi_{1}+\frac{1}{t-s}\int\mathsf{d}^{2}(z,w)\,\mathrm{d}\pi+\frac{1}{1-t}\int\mathsf{d}^{2}(w,y)\,\mathrm{d}\pi_{3}$
$\displaystyle=\frac{1}{s}W_{2}^{2}(\mu,\mu_{s})+\frac{1}{t-s}W_{2}^{2}(\mu_{s},\mu_{t})+\frac{1}{1-t}W_{2}^{2}(\mu_{t},\nu)=W_{2}^{2}(\mu,\nu),$
where I used the fact that $G_{\\#}\Pi(\mu,\nu)$ is an optimal geodesic plan.
In particular this shows that $\bar{\pi}\in\operatorname{OptPlans}(\mu,\nu)$.
Furthermore, performing the same computation as before, one can infer that
$\displaystyle\int\mathsf{d}_{eu}^{2}(x,y)\,\mathrm{d}\bar{\pi}(x,y)$
$\displaystyle\leq\frac{1}{s}\int\mathsf{d}_{eu}^{2}(x,z)\,\mathrm{d}\pi_{1}+\frac{1}{t-s}\int\mathsf{d}_{eu}^{2}(z,w)\,\mathrm{d}\pi+\frac{1}{1-t}\int\mathsf{d}_{eu}^{2}(w,y)\,\mathrm{d}\pi_{3}$
$\displaystyle<\frac{1}{s}\int\mathsf{d}_{eu}^{2}(x,z)\,\mathrm{d}\pi_{1}+\frac{1}{t-s}\int\mathsf{d}_{eu}^{2}(z,w)\,\mathrm{d}\pi_{2}+\frac{1}{1-t}\int\mathsf{d}_{eu}^{2}(w,y)\,\mathrm{d}\pi_{3}$
$\displaystyle=\int\mathsf{d}_{eu}^{2}(x,y)\,\mathrm{d}\Pi(\mu,\nu),$
where this last equality holds because $G_{\\#}\Pi(\mu,\nu)$ is concentrated
on Euclidean geodesics. Notice that I have found
$\bar{\pi}\in\operatorname{OptPlans}(\mu,\nu)$ such that
$\int\mathsf{d}_{eu}^{2}(x,y)\,\mathrm{d}\bar{\pi}(x,y)<\int\mathsf{d}_{eu}^{2}(x,y)\,\mathrm{d}\Pi(\mu,\nu),$
which contradicts the definition of $\Pi$. ∎
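The cross-term estimates in the proof above all rest on the weighted Young inequality $2ab\le\lambda a^{2}+\lambda^{-1}b^{2}$, applied with the weights $\lambda=\frac{t-s}{s}$, $\frac{1-t}{s}$ and $\frac{1-t}{t-s}$. The sketch below is only a numerical consistency check of mine for this elementary inequality, with randomly generated values.

```python
import random

random.seed(0)
for _ in range(1000):
    a, b = 10 * random.random(), 10 * random.random()    # stand-ins for d(x,z), d(z,w) >= 0
    s = random.uniform(0.01, 0.98)
    t = random.uniform(s + 0.01, 0.99)                   # 0 < s < t < 1
    lam = (t - s) / s                                    # one of the weights used above
    # 2ab <= lam*a^2 + b^2/lam, i.e. (sqrt(lam)*a - b/sqrt(lam))^2 >= 0
    assert 2 * a * b <= lam * a * a + b * b / lam + 1e-9
print("all Young-type bounds verified")
```

Summing the three weighted bounds is what produces the coefficients $\frac{1}{s}$, $\frac{1}{t-s}$ and $\frac{1}{1-t}$ in the final estimate.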
In order to deduce the main result of this section, I only have to prove the
approximation property stated in Proposition 2.8, and to this aim I need to
preliminarily state and prove the following proposition. Let me also point out
that this result can be proven using general theorems (see for example Theorem
10.27 in [21] or Theorem 1.3.1 in [7]); however, I prefer to present a proof
that uses only cyclical monotonicity arguments, similar to the ones explained
previously.
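For a strictly convex norm such as the Euclidean one, the tie-breaking role of a secondary problem disappears: the primary minimization already has a unique solution. The following sketch of mine uses hypothetical discrete marginals, which fall outside the hypotheses of the proposition below (the first marginal is not absolutely continuous) but still display the mechanism; under the crystalline sup norm all of these plans would be tied.

```python
import numpy as np

d_eu = lambda x, y: float(np.sqrt(np.sum((x - y) ** 2)))

a = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
b = [np.array([0.0, 1.0]), np.array([1.0, 1.0])]

def plan_cost(m):
    # mass m on (a1,b1), (a2,b2); mass 1/2 - m on (a1,b2), (a2,b1)
    return (m * (d_eu(a[0], b[0])**2 + d_eu(a[1], b[1])**2)
            + (0.5 - m) * (d_eu(a[0], b[1])**2 + d_eu(a[1], b[0])**2))

ms = np.linspace(0.0, 0.5, 501)
costs = np.array([plan_cost(m) for m in ms])
optimal = ms[np.isclose(costs, costs.min())]
print(optimal)   # a single optimal plan, m = 0.5, induced by the map x -> x + (0, 1)
```

Strict convexity of the squared norm removes the ties that the crystalline cost leaves behind, which is the heart of the uniqueness statement that follows.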
###### Proposition 3.4.
Let $\mathsf{N}:\mathbb{R}^{N}\to\mathbb{R}^{+}$ be a smooth norm, such that
$\mathsf{N}^{2}:\mathbb{R}^{N}\to\mathbb{R}^{+}$ is $k$-convex for some $k>0$.
Calling $\mathsf{d}:\mathbb{R}^{N}\times\mathbb{R}^{N}\to\mathbb{R}^{+}$ the
associated distance and given $\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{N})$ with
$\mu\ll\mathscr{L}^{N}$, there exists a unique
$\pi\in\operatorname{OptPlans}(\mu,\nu)$ and it is induced by a map.
###### Proof.
According to Remark 1.2, it is sufficient to prove that every
$\pi\in\operatorname{OptPlans}(\mu,\nu)$ is induced by a map. To this aim,
fix $\pi\in\operatorname{OptPlans}(\mu,\nu)$ and call $\Gamma$ the $\pi$-full
measure, $\mathsf{d}^{2}$-cyclically monotone set, provided by Proposition
1.6. Assume by contradiction that $\pi$ is not induced by a map, and denote by
$\{\pi_{x}\}_{x\in X}$ the disintegration kernel with respect to the
projection map $\mathtt{p}_{1}$; then $\pi_{x}$ is not a delta measure for a
$\mu$-positive set of $x$. Therefore there exists a compact set
$A\subset\mathbb{R}^{N}$ with $\mu(A)>0$, such that $\pi_{x}$ is not a delta
measure for every $x\in A$. Consequently, consider
$\eta:=\int_{A}\pi_{x}\times\pi_{x}\,\mathrm{d}\mu,$
which is a positive measure on $\mathbb{R}^{N}\times\mathbb{R}^{N}$. Moreover
$\eta$ is not concentrated on $\{(x,x):x\in\mathbb{R}^{N}\}$, thus there
exists $(\bar{x},\bar{y})\in\operatorname{supp}(\eta)$ with
$\bar{x}\neq\bar{y}$ and in particular $\eta(B_{\delta}(\bar{x})\times
B_{\delta}(\bar{y}))>0$ for every $\delta>0$. Now call $v=\bar{y}-\bar{x}$ and
notice that, since $\mathsf{N}^{2}$ is smooth there exists $\bar{\delta}>0$
such that for every $z\in A$ it holds that
$\left|\frac{\partial}{\partial v}\mathsf{N}^{2}(x-z)-\frac{\partial}{\partial
v}\mathsf{N}^{2}(\bar{x}-z)\right|<\frac{k}{3}\mathsf{d}^{2}(\bar{y},\bar{x})$
for every $x\in B_{2\bar{\delta}}(\bar{x})$, and
$\left|\frac{\partial}{\partial v}\mathsf{N}^{2}(y-z)-\frac{\partial}{\partial
v}\mathsf{N}^{2}(\bar{y}-z)\right|<\frac{k}{3}\mathsf{d}^{2}(\bar{y},\bar{x})$
for every $y\in B_{2\bar{\delta}}(\bar{y})$. Moreover, since $\mathsf{N}^{2}$
is $k$-convex, for every $z\in A$ it holds that
$\frac{\partial}{\partial
v}\mathsf{N}^{2}(\bar{y}-z)\geq\frac{\partial}{\partial
v}\mathsf{N}^{2}(\bar{x}-z)+k\mathsf{d}^{2}(\bar{y},\bar{x}),$
and consequently
$\frac{\partial}{\partial v}\mathsf{N}^{2}(y-z)>\frac{\partial}{\partial
v}\mathsf{N}^{2}(x-z)$ (24)
for every $x\in B_{2\bar{\delta}}(\bar{x})$ and every $y\in
B_{2\bar{\delta}}(\bar{y})$. On the other hand, since
$\eta(B_{\bar{\delta}}(\bar{x})\times B_{\bar{\delta}}(\bar{y}))>0$, the set
$A^{\bar{\delta}}=\{z\in\mathbb{R}^{N}:\pi_{z}(B_{\bar{\delta}}(\bar{x}))>0\text{
and }\pi_{z}(B_{\bar{\delta}}(\bar{y}))>0\}$
has positive $\mu$-measure, and thus it has positive $\mathscr{L}^{N}$-measure.
Let $\bar{z}$ be a density point of $A^{\bar{\delta}}$; then in a
neighborhood of $\bar{z}$ there exists $z$ such that $z,z+\epsilon v\in
A^{\bar{\delta}}$ for some $0<\epsilon<\frac{\bar{\delta}}{\left\lVert
v\right\rVert}$. Consequently, it is possible to find $x\in
B_{\bar{\delta}}(\bar{x})$ and $y\in B_{\bar{\delta}}(\bar{y})$, such that
$(z+\epsilon v,x),(z,y)\in\Gamma.$
Then it holds that
$\displaystyle\mathsf{d}^{2}(z,x)+\mathsf{d}^{2}(z+\epsilon v,y)$
$\displaystyle=\mathsf{N}^{2}(x-z)+\mathsf{N}^{2}(y-(z+\epsilon v))$
$\displaystyle=\mathsf{N}^{2}(x-(z+\epsilon
v))+\int_{0}^{\epsilon}\frac{\partial}{\partial
v}\mathsf{N}^{2}(x-sv-z)\,\mathrm{d}s$
$\displaystyle\quad+\mathsf{N}^{2}(y-z)-\int_{0}^{\epsilon}\frac{\partial}{\partial
v}\mathsf{N}^{2}(y-sv-z)\,\mathrm{d}s$
$\displaystyle<\mathsf{d}^{2}(z+\epsilon v,x)+\mathsf{d}^{2}(z,y),$
where the last inequality follows from (24), contradicting the
$\mathsf{d}^{2}$-cyclical monotonicity of $\Gamma$ and concluding the proof. ∎
Having a consistent geodesic flow and an associated plan selection, it only
remains to apply Proposition 2.8 and deduce the main result. In order to do
so, I introduce a sequence $(\mathsf{d}_{n})_{n\in\mathbb{N}}$ of distances on
$\mathbb{R}^{N}$ by requiring the following three properties:
* •
for every $n$, $\mathsf{d}_{n}$ is induced by a smooth norm $\mathsf{N}_{n}$,
such that $\mathsf{N}_{n}^{2}$ is $k$-convex for some $k>0$ and satisfies
condition (4),
* •
$\mathsf{d}_{n}^{2}$ converges to $\mathsf{d}^{2}$ uniformly on compact sets,
* •
$n(\mathsf{d}^{2}_{n}-\mathsf{d}^{2})$ converges to $\mathsf{d}^{2}_{eu}$
uniformly on compact sets, and $n(\mathsf{d}^{2}_{n}-\mathsf{d}^{2})\leq
2\mathsf{d}^{2}_{eu}$ for every $n$.
It is easy to see that such a sequence exists. Now, fix a pair of absolutely
continuous measures $\mu,\nu\in\mathscr{P}_{ac}(\mathbb{R}^{N})$; Proposition
3.4 ensures that for every $n$ there exists a unique transport plan $\pi_{n}$
between $\mu$ and $\nu$, with respect to the cost
$c(x,y)=\mathsf{d}_{n}^{2}(x,y)$. Let me now prove that it is possible to
apply Proposition 2.8.
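To illustrate why such a sequence is easy to construct, here is one candidate (a sketch of mine, not the author's construction, valid when $\mathsf{N}$ is itself smooth; for a non-smooth norm a preliminary mollification is required, and condition (4) must be checked separately):

```latex
\mathsf{N}_n^2(x) \;:=\; \mathsf{N}^2(x) + \frac{1}{n}\,\lvert x\rvert^2,
\qquad
\mathsf{d}_n(x,y) \;:=\; \mathsf{N}_n(x-y).
```

Then $\mathsf{N}_n^2$ is $\frac{2}{n}$-convex, since $\lvert x\rvert^{2}$ is $2$-convex and $\mathsf{N}^{2}$ is convex; moreover $\mathsf{d}_n^2-\mathsf{d}^2=\frac{1}{n}\mathsf{d}_{eu}^2$, so $\mathsf{d}_n^2\to\mathsf{d}^2$ uniformly on compact sets and $n(\mathsf{d}_n^2-\mathsf{d}^2)=\mathsf{d}_{eu}^2\leq 2\mathsf{d}_{eu}^2$ identically, which gives the second and third conditions.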
###### Proposition 3.5.
The maps $G$ and $\Pi$ and the sequences $(\mathsf{d}_{n})$ and $(\pi_{n})$ I
introduced satisfy the assumptions of Proposition 2.8 with $K=0$.
###### Proof.
Condition 2 is easily satisfied: in fact, since $\mathsf{d}_{n}$ is induced by
a strictly convex norm, the only geodesics in $(\mathbb{R}^{N},\mathsf{d}_{n})$
are the Euclidean ones. Then, because $\pi_{n}$ is unique and Proposition 1.22
holds, it is clear that the entropy functional is convex along
$G_{\#}\pi_{n}$, with respect to the distance $\mathsf{d}_{n}$. Let me now
prove condition 1. Notice that $\pi_{n}\in\Gamma(\mu,\nu)$ for every $n$;
therefore the sequence $(\pi_{n})$ is tight, and Prokhorov's theorem ensures
the existence of $\pi\in\Gamma(\mu,\nu)$ such that, up to the extraction of a
subsequence, $\pi_{n}\rightharpoonup\pi$. I am now going to prove that
$\pi\in\Pi_{2}(\mu,\nu)$. Observe that $\pi_{n}$ is an optimal transport plan
for the distance $\mathsf{d}_{n}$ and thus
$\int\mathsf{d}_{n}^{2}\,\mathrm{d}\pi_{n}\leq\int\mathsf{d}_{n}^{2}\,\mathrm{d}\tilde{\pi}\qquad\forall\tilde{\pi}\in\Gamma(\mu,\nu),$
therefore for every compact set $C\subset\mathbb{R}^{N}$ it holds
$\int_{C}\mathsf{d}_{n}^{2}\,\mathrm{d}\pi_{n}\leq\int\mathsf{d}_{n}^{2}\,\mathrm{d}\tilde{\pi}\qquad\forall\tilde{\pi}\in\Gamma(\mu,\nu).$
It is then possible to pass to the limit as $n\to\infty$, using the uniform
convergence on the left-hand side and dominated convergence (ensured by (4))
on the right-hand side, obtaining
$\int_{C}\mathsf{d}^{2}\,\mathrm{d}\pi\leq\int\mathsf{d}^{2}\,\mathrm{d}\tilde{\pi}\qquad\forall\tilde{\pi}\in\Gamma(\mu,\nu).$
Since this last equation holds for every compact set $C\subset\mathbb{R}^{N}$,
it is possible to conclude that
$\int\mathsf{d}^{2}\,\mathrm{d}\pi\leq\int\mathsf{d}^{2}\,\mathrm{d}\tilde{\pi}\qquad\forall\tilde{\pi}\in\Gamma(\mu,\nu),$
in particular $\pi\in\Pi_{1}(\mu,\nu)$. Using once more the minimizing
property of $\pi_{n}$, it follows that
$\int\mathsf{d}^{2}\,\mathrm{d}\tilde{\pi}+\int(\mathsf{d}_{n}^{2}-\mathsf{d}^{2})\,\mathrm{d}\pi_{n}\leq\int\mathsf{d}_{n}^{2}\,\mathrm{d}\pi_{n}\leq\int\mathsf{d}_{n}^{2}\,\mathrm{d}\tilde{\pi}=\int\mathsf{d}^{2}\,\mathrm{d}\tilde{\pi}+\int(\mathsf{d}_{n}^{2}-\mathsf{d}^{2})\,\mathrm{d}\tilde{\pi}\qquad\forall\tilde{\pi}\in\Pi_{1}(\mu,\nu),$
consequently it holds that
$\int n(\mathsf{d}_{n}^{2}-\mathsf{d}^{2})\,\mathrm{d}\pi_{n}\leq\int
n(\mathsf{d}_{n}^{2}-\mathsf{d}^{2})\,\mathrm{d}\tilde{\pi}\qquad\forall\tilde{\pi}\in\Pi_{1}(\mu,\nu),$
and proceeding as before I can infer that
$\int\mathsf{d}^{2}_{eu}\,\mathrm{d}\pi\leq\int\mathsf{d}^{2}_{eu}\,\mathrm{d}\tilde{\pi}\qquad\forall\tilde{\pi}\in\Pi_{1}(\mu,\nu).$
In particular $\pi\in\Pi_{2}(\mu,\nu)$ and this concludes the proof,
considering the definition of the map $\Pi$. ∎
Finally, combining this last result with Proposition 2.8 allows me to deduce
the final result of this article.
###### Corollary 3.6.
The metric measure space $(\mathbb{R}^{N},\mathsf{d},\mathscr{L}^{N})$ is a
very strict $\mathsf{CD}(0,\infty)$ space and consequently it is weakly
essentially non-branching.
Acknowledgments: This article contains part of the work I did for my master's
thesis, which was supervised by Luigi Ambrosio and Karl-Theodor Sturm.
## References
* [1] L. Ambrosio. Lecture notes on optimal transport problem. Euro Summer School “Mathematical aspects of evolving interfaces”, 2000.
* [2] L. Ambrosio and N. Gigli. A user’s guide to optimal transport. In Modelling and optimisation of flows on networks, pages 1–155. Springer, 2013.
* [3] L. Ambrosio, N. Gigli, A. Mondino, and T. Rajala. Riemannian Ricci curvature lower bounds in metric measure spaces with $\sigma$-finite measure. Transactions of the American Mathematical Society, 367(7):4661–4701, 2015.
* [4] L. Ambrosio, N. Gigli, and G. Savaré. Calculus and heat flow in metric measure spaces and applications to spaces with Ricci bounds from below. Inventiones mathematicae, 195(2):289–391, 2013.
* [5] L. Ambrosio, N. Gigli, and G. Savaré. Metric measure spaces with Riemannian Ricci curvature bounded from below. Duke Mathematical Journal, 163(7):1405–1490, 2014.
* [6] L. Ambrosio, B. Kirchheim, and A. Pratelli. Existence of optimal transport maps for crystalline norms. Duke Mathematical Journal, 125(2):207–241, 2004.
* [7] A. Figalli. Optimal Transportation and Action-Minimizing Measures. Edizioni della Normale. SNS, 2010.
* [8] N. Gigli. Optimal maps in non branching spaces with Ricci curvature bounded from below. Geometric and Functional Analysis, 22:990–999, 2011.
* [9] N. Gigli, A. Mondino, and G. Savaré. Convergence of pointed non-compact metric measure spaces and stability of Ricci curvature bounds and heat flows. Proceedings of the London Mathematical Society, 111:1071–1129, 2015.
* [10] J. Lott and C. Villani. Ricci curvature for metric-measure spaces via optimal transport. Annals of Mathematics, 169:903–991, 2009.
* [11] M. Magnabosco. Example of an highly branching CD space. arXiv preprint, 2021.
* [12] T. Rajala. Failure of the local-to-global property for $\mathsf{CD}(K,N)$ spaces. Ann. Sc. Norm. Super. Pisa Cl. Sci., 15:45–68, 2016.
* [13] T. Rajala and K.-T. Sturm. Non-branching geodesics and optimal maps in strong $\mathsf{CD}(K,\infty)$ spaces. Calculus of Variations and Partial Differential Equations, 50:831–846, 2014.
* [14] F. Santambrogio. Optimal Transport for Applied Mathematicians: Calculus of Variations, PDEs, and Modeling. Progress in Nonlinear Differential Equations and Their Applications. Springer International Publishing, 2015.
* [15] T. Schultz. Existence of optimal transport maps in very strict $\mathsf{CD}(K,\infty)$ spaces. Calculus of Variations and Partial Differential Equations, 57, 2018.
* [16] T. Schultz. Equivalent definitions of very strict $\mathsf{CD}(K,N)$ spaces. arXiv preprint, 2019.
* [17] T. Schultz. On one-dimensionality of metric measure spaces. Proc. Amer. Math. Soc., 149:383–396, 2020.
* [18] K.-T. Sturm. On the geometry of metric measure spaces. Acta Math., 196(1):65–131, 2006.
* [19] K.-T. Sturm. On the geometry of metric measure spaces. II. Acta Math., 196(1):133–177, 2006.
* [20] C. Villani. Topics in Optimal Transportation. Graduate studies in mathematics. American Mathematical Society, 2003.
* [21] C. Villani. Optimal transport – Old and new. Grundlehren der mathematischen Wissenschaften. Springer, 2008.
# Dynamics of Laterally Propagating Flames in X-ray Bursts. II. Realistic
Burning & Rotation
A. Harpole Dept. of Physics and Astronomy, Stony Brook University, Stony
Brook, NY 11794-3800 N. M. Ford Lawrence Berkeley National Laboratory,
Berkeley, CA K. Eiden Dept. of Astronomy, University of California, Berkeley,
CA 94720-3411 Dept. of Physics and Astronomy, Stony Brook University, Stony
Brook, NY 11794-3800 M. Zingale Dept. of Physics and Astronomy, Stony Brook
University, Stony Brook, NY 11794-3800 D. E. Willcox Lawrence Berkeley
National Laboratory, Berkeley, CA Y. Cavecchi Universidad Nacional Autónoma
de México, Instituto de Astronomía, Ciudad Universitaria, CDMX 04510, Mexico
M. P. Katz NVIDIA Corporation A. Harpole<EMAIL_ADDRESS>
###### Abstract
We continue to investigate two-dimensional laterally propagating flames in
type I X-ray bursts using fully compressible hydrodynamics simulations. In the
current study we relax previous approximations where we artificially boosted
the flames. We now use more physically realistic reaction rates, thermal
conductivities, and rotation rates, exploring the effects of neutron star
rotation rate and thermal structure on the flame. We find that at lower
rotation rates the flame becomes harder to ignite, whereas at higher rotation
rates the nuclear burning is enhanced by increased confinement from the
Coriolis force and the flame propagates steadily. At higher crustal
temperatures, the flame moves more quickly and accelerates as it propagates
through the atmosphere. If the temperature is too high, instead of a flame
propagating across the surface the entire atmosphere burns uniformly. All of
the software used for these simulations is freely available.
X-ray bursts (1814), Nucleosynthesis (1131), Hydrodynamical simulations (767),
Hydrodynamics (1963), Neutron stars (1108), Open source software (1866),
Computational methods (1965)
††facilities: NERSC, OLCF††software: AMReX (Zhang et al., 2019), Castro
(Almgren et al., 2010, 2020), GCC (https://gcc.gnu.org/), Jupyter (Kluyver et
al., 2016), linux (https://www.kernel.org/), matplotlib (Hunter 2007,
http://matplotlib.org/), NumPy (Oliphant, 2007; van der Walt et al., 2011),
python (https://www.python.org/), valgrind (Nethercote & Seward, 2007), VODE
(Brown et al., 1989), yt (Turk et al., 2011)
## 1 Introduction
Considerable evidence suggests that ignition in an X-ray burst (XRB) starts in
a localized region and then spreads across the surface of the neutron star
(Bhattacharyya & Strohmayer, 2007; Chakraborty & Bhattacharyya, 2014). We
continue our study of flame spreading through fully compressible hydrodynamics
simulations of the flame. Building on our previous paper (Eiden et al., 2020),
we relax the approximations we made previously (artificially boosting the
speed of the flame in order to reduce the computational cost) and explore how
the flame properties depend on rotation rate and the thermal structure of the
neutron star. This new set of realistic simulations is possible because of the
work done to offload our simulation code, Castro (Almgren et al., 2020), to
GPUs, where it runs significantly faster.
We investigate the effect of rotation rate on the flame. With the exception of
IGR J17480-2446 (Altamirano et al. 2010, spinning at $11~{}\mathrm{Hz}$), most
observations of XRBs which come from sources with known rotation rates have
rotation rates of $200-600~{}\mathrm{Hz}$ (Bilous & Watts, 2019; Galloway et
al., 2020). There are a number of factors that could explain this lack of
observations below $200~{}\mathrm{Hz}$. It could be that there is some
physical process which inhibits the flame ignition and/or spread at lower
rotation rates. It could be that bursts at lower rotation rates are smaller in
amplitude and therefore more difficult to detect. It could be that it does not
have anything to do with the flame at all, but that neutron stars in accreting
low mass X-ray binaries rarely have rotation rates below $200~{}\mathrm{Hz}$.
Previous studies have found that rotation can have a significant effect on the
flame’s propagation. As the rotation rate increases, the Coriolis force whips
the spreading flame up into a hurricane-like structure (Spitkovsky et al.,
2002; Cavecchi et al., 2013). The stronger Coriolis force leads to greater
confinement of the hot accreted matter, leading to easier ignition of the
flame (Cavecchi et al., 2015).
The temperature structure of the accreted fuel layer can also affect the flame
propagation. Timmes (2000) showed that laminar helium flames have higher
speeds when moving into hotter upstream fuel. It has been suggested that
crustal heating may be stronger at lower accretion rates and weaker at higher
accretion rates, due to the effect of neutrino losses (Cumming et al., 2006;
Johnston et al., 2019). On the other hand, at very high accretion rates the
atmosphere is so heated that it simmers in place rather than forming a
propagating flame (Fujimoto et al., 1981; Bildsten, 1998; Keek et al., 2009).
A shallow heating mechanism of as yet unknown origin has been found necessary
to reproduce observed properties of XRBs in 1D simulations (Deibel et al.,
2015; Turlione et al., 2015; Keek & Heger, 2017). In our models, we keep the
crust at a constant temperature, so by increasing this temperature we can
effectively increase the crustal heating, shallow heating and/or mimic the
effects of accretion-induced heating.
In the following sections, we conduct a series of simulations at various
rotation rates and crustal temperatures to investigate their effects on the
flame. We find that at lower rotation rates, the flame itself becomes harder
to ignite. At higher rotation rates, nuclear burning is enhanced and the flame
propagates steadily. At higher crustal temperatures, burning is greatly
enhanced and the flame accelerates as it propagates. We discuss the
implications that this may have for burst physics and observations.
## 2 Numerical Approach
We use the Castro hydrodynamics code (Almgren et al., 2010, 2020) and the
simulation framework introduced in Eiden et al. (2020). The current
simulations are all performed in a two-dimensional axisymmetric geometry. For
these axisymmetric simulations, we add an additional geometric source term
from Bernard-Champmartin et al. (2012) that captures the effects of the
divergence of the flux operating on the azimuthal unit vector. This term is a
small correction, but was missing from our previous simulations. The
simulation framework initializes a fuel layer in hydrostatic equilibrium,
laterally blending a hot model on the left side of the domain (the coordinate
origin) and a cool model on the right. The initial temperature gradient
between the hot and cool fluids drives a laterally propagating flame through
the cool fuel. In our original set of calculations (Eiden et al., 2020), in
order to make the simulations computationally feasible we artificially boosted
the flame speed by adjusting the conductivity and reaction rate to produce a
flame moving 5–10$\times$ faster than the nominal laminar flame speed. We also
used high rotation rates ($\geq 1000~{}\mathrm{Hz}$) to reduce the lateral
lengthscale at which the Coriolis force balances the lateral flame spreading
in order to reduce the size of the simulation domain. The port of Castro to
GPUs (Katz et al., 2020) significantly improved its overall performance,
enabling us to run these new simulations without the previous approximations
while continuing to resolve the burning front. For these simulations, we no
longer boost the flame speed—the true conductivities (taken from Timmes 2000)
and reaction rates are used. We are also able to use slower, more physically
realistic rotation rates. We continue to use a 13-isotope $\alpha$-chain to
describe the helium burning.
The initial model is set up in the same fashion as described in Eiden et al.
(2020). In particular, we create a “hot” and “cool” hydrostatic model
representing the ash and fuel states and blend the two models laterally to
create a hot region near the origin of the coordinates and a smooth transition
to the cooler region at larger radii. The cool initial model is characterized
by three temperatures: $T_{\mathrm{star}}$ is the isothermal temperature of
the underlying neutron star, $T_{\mathrm{hi}}$ is the temperature at the base
of the fuel layer, and $T_{\mathrm{lo}}$ is the minimum temperature of the
atmosphere. The atmosphere structure is isentropic as it falls from
$T_{\mathrm{hi}}$ down to $T_{\mathrm{lo}}$. For the hot model, we replace
$T_{\mathrm{hi}}$ with $T_{\mathrm{hi}}+\delta T$. In the calculations
presented here, we explore the structure of the initial models by varying
these parameters. All models have the same peak temperature in the hot model,
$T_{\mathrm{hi}}+\delta T$.
Figure 1: Initial temperature structure (left panel) and density structure
(right panel) as a function of height in the “hot” state. Figure 2: Initial
temperature structure (left panel) and density structure (right panel) as a
function of height in the “cool” state.
For the current simulations, we explore a variety of initial rotation rate and
temperature conditions for the flame. The main parameters describing the
models and the names by which we shall refer to them in this paper are
provided in Table 1. Figure 1 shows the temperature and density structure for
our hot models and Figure 2 shows the temperature and density structure for
the cool models.
Table 1: Rotation rate and temperature properties of the simulations. In the left-hand column we list the names we shall use to refer to each simulation throughout this paper. run | Rotation Rate (Hz) | $\delta T$ (K) | $T_{\mathrm{hi}}$ (K) | $T_{\mathrm{star}}$ (K) | $T_{\mathrm{lo}}$ (K)
---|---|---|---|---|---
F1000 | $1000$ | $1.2\times 10^{9}$ | $2\times 10^{8}$ | $10^{8}$ | $8\times 10^{6}$
F500 | $500$ | $1.2\times 10^{9}$ | $2\times 10^{8}$ | $10^{8}$ | $8\times 10^{6}$
F500_2E8 | $500$ | $1.2\times 10^{9}$ | $2\times 10^{8}$ | $2\times 10^{8}$ | $8\times 10^{6}$
F500_3E8 | $500$ | $1.1\times 10^{9}$ | $3\times 10^{8}$ | $3\times 10^{8}$ | $8\times 10^{6}$
F500_4E8 | $500$ | $10^{9}$ | $4\times 10^{8}$ | $4\times 10^{8}$ | $8\times 10^{6}$
F250 | $250$ | $1.2\times 10^{9}$ | $2\times 10^{8}$ | $10^{8}$ | $8\times 10^{6}$
## 3 Simulations and Results
We present six simulations in total, summarized in Table 1. These simulations
encompass three different rotation rates: $250~{}\mathrm{Hz}$,
$500~{}\mathrm{Hz}$, and $1000~{}\mathrm{Hz}$, and for the $500~{}\mathrm{Hz}$
run, four different temperature profiles. In the following subsections, we
look at how the flame properties depend on the model parameters. All
simulations are run in a domain of $1.8432\times 10^{5}~{}\mathrm{cm}\times
3.072\times 10^{4}~{}\mathrm{cm}$ with a coarse grid of $1152\times 192$ zones
and two levels of refinement (the first level refining the resolution by a
factor of four, and the second by a further factor of two). This gives a fine-
grid resolution of $20~{}\mathrm{cm}$. In these simulations, refinement is
carried out in all zones within the atmosphere with density $\rho>2.5\times
10^{4}~{}\mathrm{g~{}cm^{-3}}$. We use an axisymmetric coordinate system, with
the horizontal $r$-direction pointing along the surface of the star and the
vertical $z$-direction pointing perpendicular to the surface.
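The quoted fine-grid resolution follows directly from the domain size, the coarse grid, and the combined refinement factor; a quick arithmetic check (variable names are ours):

```python
# Domain size and gridding quoted in the text; the two AMR levels
# refine by 4x and then 2x, for a combined jump of 8x.
domain_r, domain_z = 1.8432e5, 3.072e4   # cm
coarse_r, coarse_z = 1152, 192           # coarse zones
refine = 4 * 2                           # combined refinement factor

dr = domain_r / (coarse_r * refine)      # fine-grid spacing in r
dz = domain_z / (coarse_z * refine)      # fine-grid spacing in z
print(dr, dz)  # 20.0 20.0 (cm), matching the quoted resolution
```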
Figure 3: Estimates of temperature (left panel) and nuclear energy generation
rate (right panel) in the burning region as functions of time. The quantities
on the vertical axes are the mass-weighted averages defined in Equation 1.
For some of our analysis, we would like to have a means of estimating the
temperature ($T$) and nuclear energy generation rate ($\dot{e}_{\rm nuc}$) in
the burning region of each simulation. For this purpose, we define the mass-
weighted average $\langle Q\rangle_{w}$ of some quantity $Q$ to be
$\langle
Q\rangle_{w}\equiv\frac{\sum_{c_{i}}m(c_{i})Q(c_{i})}{\sum_{c_{i}}m(c_{i})};~{}c_{i}\in
C_{99}(Q).$ (1)
Here, $C_{99}(Q)$ is the set of grid cells with $Q$ values in the top
percentile, $Q(c_{i})$ is the value of $Q$ in cell $c_{i}$, and $m(c_{i})$ is
the total mass in cell $c_{i}$. Using $\langle Q\rangle_{w}$ instead of simply
taking the maximum of the quantity across the entire simulation domain allows
us to track changes over the domain as a whole rather than at a single
localized point. This will therefore be a better reflection of the overall
behavior of the flame rather than of a single localized fluctuation. Figure 3
shows $\langle T\rangle_{w}$ and $\langle\dot{e}_{\rm nuc}\rangle_{w}$ as
functions of time for the subset of our runs that achieve a propagating flame.
This figure is referenced throughout the subsequent sections.
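A minimal NumPy sketch of the mass-weighted average in Equation 1 (the function and argument names are ours; in the actual analysis the per-cell values would be extracted from the simulation output, e.g. via yt):

```python
import numpy as np

def mass_weighted_average(Q, m, percentile=99.0):
    """Mass-weighted average <Q>_w of Equation 1: average Q over the
    cells whose Q value lies in the top percentile, weighting by mass."""
    Q = np.asarray(Q, dtype=float)
    m = np.asarray(m, dtype=float)
    cutoff = np.percentile(Q, percentile)  # threshold defining C_99(Q)
    sel = Q >= cutoff                      # cells in the top percentile
    return np.sum(m[sel] * Q[sel]) / np.sum(m[sel])
```

Averaging over the top-percentile cells rather than taking the raw maximum keeps the diagnostic robust to single-cell fluctuations.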
### 3.1 Effect of Rotation Rate on Flame Structure
We run three models (F250, F500, and F1000) with the same initial model in
terms of temperature but differing rotation rates. We saw in Eiden et al.
(2020) that increasing the neutron star rotation rate reduces the horizontal
lengthscale of the flame. An estimate of this lengthscale is given by the
Rossby radius of deformation, $L_{R}$. The Rossby radius may be thought of as
the scale over which the balance between the Coriolis force and horizontal
pressure gradient becomes important, and is approximately given by
$L_{R}\approx\frac{\sqrt{gH_{0}}}{\Omega},$ (2)
where $g$ is the gravitational acceleration, $H_{0}$ is the atmospheric scale
height, and $\Omega$ is the neutron star rotation rate. In Figure 4 and Figure
5, we use $\dot{e}_{\mathrm{nuc}}$ measured at $50~{}\mathrm{ms}$ and
$100~{}\mathrm{ms}$ to discern the horizontal extent of the flame at different
rotation rates. Taking the edge at greatest radius of the bright teal/green
region where the most significant energy generation is occurring as the
leading edge of the flame in each plot, we see that the horizontal extent of
the $1000~{}\mathrm{Hz}$ flame ($\tt{F1000}$) appears to be reduced compared
to the lower rotation $500~{}\mathrm{Hz}$ run ($\tt{F500}$). From Equation 2,
we can see that increasing the rotation rate from $500~{}\mathrm{Hz}$ to
$1000~{}\mathrm{Hz}$ should decrease $L_{R}$ by a factor of two, and that the
greater confinement from the Coriolis force should reduce the horizontal
extent of the flame by a similar factor. However, the Rossby radius is only an
approximate measure of this horizontal lengthscale, and in our simulations we
see that this scaling does not work so well for all rotation rates. The
simulations seem to follow the theoretical scaling described in Equation 2
more closely at higher rotation rates ($1000~{}\mathrm{Hz}$ and higher), based
on the results of Eiden et al. (2020).
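As a rough numerical illustration of the scaling in Equation 2 (the values of $g$ and $H_{0}$ below are generic order-of-magnitude choices rather than values from our models, and we assume $\Omega=2\pi\nu$ for spin frequency $\nu$):

```python
import numpy as np

def rossby_radius(g, H0, nu_hz):
    """Approximate Rossby radius L_R = sqrt(g*H0)/Omega (Equation 2),
    taking Omega = 2*pi*nu for a spin frequency nu in Hz (an assumption)."""
    return np.sqrt(g * H0) / (2.0 * np.pi * nu_hz)

g = 2.0e14   # cm/s^2, illustrative neutron-star surface gravity
H0 = 1.0e3   # cm, illustrative atmospheric scale height
for nu in (250.0, 500.0, 1000.0):
    print(nu, rossby_radius(g, H0, nu))  # L_R halves as nu doubles
```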
Figure 4: Time series of the $500~{}\mathrm{Hz}$ run F500 showing the nuclear
energy generation rate, $\dot{e}_{\mathrm{nuc}}$. Figure 5: Time series of
the $1000~{}\mathrm{Hz}$ run F1000 showing the nuclear energy generation rate,
$\dot{e}_{\mathrm{nuc}}$. Figure 6: Time series of the $500~{}\mathrm{Hz}$
run F500 showing the mean molecular weight, $\bar{A}$. Figure 7: Time series
of the $1000~{}\mathrm{Hz}$ run F1000 showing the mean molecular weight,
$\bar{A}$.
The F500 and F1000 runs both qualitatively resemble the flame structure in
Eiden et al. (2020) — a laterally propagating flame that is lifted off of the
bottom of the fuel layer — but they differ in their burning structures.
Figures 6 and 7 show time series of the mean molecular weight, $\bar{A}$, for
the F500 and F1000 runs. Compared to those in Eiden et al. (2020), the ashes
behind the flame do not reach atomic weights as high. This is not surprising,
since those previous runs artificially boosted the reaction rates. Comparing
these two new runs, the burning is much more evolved for the higher rotation
rate, and the ash is actually able to move ahead of the flame front (visible
in the Figure 7 $100~{}\mathrm{ms}$ snapshot). We believe that this is because
the increased rotation better confines the initial perturbation and subsequent
expansion from the burning, increasing the temperature and density in the
flame front such that the reaction rate increases, which allows the reactions
to progress further. The $\dot{e}_{\mathrm{nuc}}$ plots in Figure 5 also
support this interpretation, with the region of the flame front nearest to the
crust in the F1000 run reaching higher $\dot{e}_{\mathrm{nuc}}$ values than
for the F500 run in Figure 4. In contrast to F500 and F1000, the lowest
rotation run — F250 — failed to ignite. The lack of ignition for F250 also
aligns with the reasoning given above, with the lower rotation in this case
potentially leading to insufficient confinement such that the temperature and
density required for ignition is not achieved. In this scenario, another
source of confinement (e.g. magnetic fields, see Cavecchi et al. (2016)) would
need to take over at lower rotation rates to allow a burst to occur, at least
for the initial model used here. Given that the size of our domain is $\sim
1~{}L_{R}$ for F250 (using Equation 2), it is also possible that we simply
cannot confine the flame sufficiently with our current domain width. We see in
Figure 11 (discussed further in Section 3.2) that the F500 flame took longer
to achieve steady propagation than the F1000 flame. It may therefore also be
that we did not run our simulation for long enough to see the F250 flame
achieve the conditions required for ignition and steady propagation.
Burning in the F500 and F1000 runs is concentrated in a dense region with
circular motion. In Figure 8, which compares the horizontal $r$-velocity $u$,
density $\rho$ and the nuclear energy generation rate $\dot{e}_{\rm nuc}$ for
the F250, F500, and F1000 runs, most of the burning for each of the
simulations occurs in a high density region $\rho>3\times 10^{5}~{}{\rm
g\,cm^{-3}}$. The fluid in this dense, high energy generation region undergoes
vortical motion, shown in the Figure 9 phase plots comparing $u$, the vertical
$z$-velocity $v$ and $\dot{e}_{\rm nuc}$. This most likely corresponds to the
leading edge of the flame where fresh fuel is being entrained. This feature is
not developed in the $250~{}\mathrm{Hz}$ flame in Figure 9 (left panel); it
could potentially develop at later times (past the point at which we
terminated our simulation), or the burning could just fizzle out and the flame
fail to ignite entirely.
The mean molecular weight $\bar{A}$ within each of our simulations seems to
grow along defined tracks confined to certain temperatures $T$, as shown in
the Figure 10 phase plots. We believe that the tracks in the plot correspond
to different burning trajectories in phase space resulting from different
thermodynamic conditions. Comparing Figure 10 to Eiden et al. (2020), these
tracks are much more neat and clearly defined. The “messiness” of the tracks
may be dependent on how mixed the flame interior is. Since these new
simulations are un-boosted, they may be inherently less mixed than those in
Eiden et al. (2020). F1000 aligns with this interpretation: its $\bar{A}$
tracks are somewhat disrupted compared to the slower rotation runs, possibly
due to the more vigorous mixing of the vortex at the flame front. Comparing
the different runs, we also see that as the rotation rate increases, so does
the peak temperature. This makes sense if higher rotation leads to a more
concentrated, intense vortex near the flame front. It also agrees with our
earlier interpretation of the enhanced burning seen in Figure 7 for F1000.
Figure 8: Phase plot of the horizontal $r$-velocity $u$, density $\rho$ and
the nuclear energy generation rate $\dot{e}_{\rm nuc}$ for the 250, 500 and
$1000~{}\mathrm{Hz}$ runs (F250, F500 and F1000) at $t=100~{}\mathrm{ms}$. The
slightly lower $\dot{e}_{\rm nuc}$ values along the $u=0$ axis are most likely
a numerical artifact related to the density gradient setup and finite
resolution of these simulations. Figure 9: Phase plot of the horizontal
$r$-velocity $u$, vertical $z$-velocity $v$ and the nuclear energy generation
rate $\dot{e}_{\rm nuc}$ for the 250, 500 and $1000~{}\mathrm{Hz}$ runs (F250,
F500 and F1000) at $t=100~{}\mathrm{ms}$. Figure 10: Phase plot of the mean
molecular weight $\bar{A}$, temperature $T$ and the nuclear energy generation
rate $\dot{e}_{\rm nuc}$ for the 250, 500 and $1000~{}\mathrm{Hz}$ runs (F250,
F500 and F1000) at $t=100~{}\mathrm{ms}$.
### 3.2 Effect of Rotation Rate on Flame Propagation
For the purpose of measuring the flame propagation speed and acceleration, we
track the position of each of our flames as a function of time. We define the
position in terms of a specific value of the energy generation rate,
$\dot{e}_{\rm nuc}$, as we did in Eiden et al. (2020). To recapitulate: we
first reduce the 2D $\dot{e}_{\rm nuc}$ data for each simulation run to a set
of 1D radial profiles by averaging over the vertical coordinate. After
averaging, we take our reference $\dot{e}_{\rm nuc}$ value to be some fraction
of the global $\dot{e}_{\rm nuc}$ maximum across all of these profiles. Since
the flames in our simulations propagate in the positive horizontal direction,
we then search the region of each profile at greater radius than the local
$\dot{e}_{\rm nuc}$ maximum for the point where the $\dot{e}_{\rm nuc}$ first
drops below this reference value. This point gives us the location of our
flame front.
In Eiden et al. (2020), we used $0.1\%$ of the global $\dot{e}_{\rm nuc}$
maximum for our reference value. For the high temperature unboosted flames,
however, we found that the $\dot{e}_{\rm nuc}$ profiles failed to reach that
small a value across the domain at most times, which prevented us from
obtaining reliable position measurements. We therefore use $1\%$ of the global
$\dot{e}_{\rm nuc}$ maximum in this paper rather than $0.1\%$. This is still
sufficiently small that our measurements are not overly sensitive to
turbulence and other local fluid motions (the issue with simply tracking the
local maximum), but allows us to avoid the pitfall encountered by the $0.1\%$
metric.
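The tracking procedure described above can be sketched as follows. This is a minimal illustration, not the code used for the paper; `profiles` is a hypothetical sequence of vertically averaged 1D $\dot{e}_{\rm nuc}$ profiles (one per output time) and `r` holds the radial cell centers:

```python
import numpy as np

def flame_front_positions(profiles, r, frac=0.01):
    # Reference value: a fraction of the global maximum over all profiles.
    ref = frac * max(p.max() for p in profiles)
    positions = []
    for p in profiles:
        i_max = int(np.argmax(p))      # local maximum of this profile
        i = i_max + 1
        # Search outward (larger radius) for the first drop below ref.
        while i < len(p) and p[i] >= ref:
            i += 1
        positions.append(r[min(i, len(p) - 1)])
    return np.array(positions)
```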
Table 2: Flame speed and acceleration values measured for each simulation. $v_{0}$ is the flame velocity in the $+r$ direction at $t=0~{}\mathrm{ms}$ and $a$ is the acceleration of the flame. The initial flame velocities and accelerations are derived from a quadratic least-squares fit to each of the datasets for times $t\gtrsim 20~{}\mathrm{ms}$. Using these fit parameters we calculate the velocity at $t=50~{}\mathrm{ms}$, $v_{50}$, at which point the flames have reached steady propagation.
run | $v_{0}$ (km s${}^{-1}$) | $a$ (km s${}^{-2}$) | $v_{50}$ (km s${}^{-1}$)
---|---|---|---
F1000 | $3.414\pm 0.008$ | $-1.03\pm 0.13$ | $3.36\pm 0.01$
F500 | $3.077\pm 0.013$ | $5.41\pm 0.19$ | $3.35\pm 0.02$
F500_2E8 | $3.760\pm 0.015$ | $1.74\pm 0.25$ | $3.85\pm 0.02$
F500_3E8 | $-5.293\pm 0.100$ | $357.41\pm 2.15$ | $12.58\pm 0.15$
Figure 11: Flame front position vs. time for the standard
($10^{8}~{}\mathrm{K}$) $500~{}\mathrm{Hz}$ and $1000~{}\mathrm{Hz}$ runs
(F500 and F1000). The dashed lines show quadratic least-squares fits to the
data for $t\gtrsim 20~{}\mathrm{ms}$.
Figure 11 gives the radial position of the flame front as a function of time
for the $\tt{F500}$ and $\tt{F1000}$ runs (blue and orange, respectively) to
show the dependence on rotation rate. In Eiden et al. (2020), we applied a
linear least-squares fit to the flame front position as a function of time to
estimate the propagation velocity. As some of the flames in this set of
simulation runs exhibit significant acceleration, for this study we instead
fit the data with a quadratic curve of the form
$r(t)=\frac{1}{2}at^{2}+v_{0}t+r_{0},$ (3)
where the parameter $a$ is the acceleration of the flame, $v_{0}$ is the
velocity at $t=0~{}\mathrm{ms}$, and $r_{0}$ is the flame front position at
$t=0~{}\mathrm{ms}$. We do not include the data points with $t\lesssim
20~{}\mathrm{ms}$ when performing the fit, since these correspond to the
initial transient period before the flame has begun to propagate steadily. The
values of $a$ and $v_{0}$ for the full suite of simulation runs are provided
in Table 2. Note that $v_{0}$ is only a parameter that may be used to
calculate the flame speed at an arbitrary time. It is not an estimate of the
true initial velocity of the flame, since the flame has not achieved ignition
yet at $t=0~{}\mathrm{ms}$. We use the fit parameters to calculate the flame
speeds at $t=50~{}\mathrm{ms}$ (when the flame has reached steady
propagation), given in the fourth column of Table 2.
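The quadratic fit of Equation (3) and the derived $v_{50}$ can be reproduced with a short script (a sketch assuming `t` in seconds and `r` in km, so velocities come out in km s${}^{-1}$):

```python
import numpy as np

def fit_flame_kinematics(t, r, t_min=0.020, t_eval=0.050):
    # Fit r(t) = 0.5*a*t**2 + v0*t + r0, excluding the initial
    # transient t < t_min, then evaluate the velocity at t_eval.
    mask = t >= t_min
    c2, c1, _ = np.polyfit(t[mask], r[mask], 2)
    a, v0 = 2.0 * c2, c1
    return v0, a, v0 + a * t_eval
```

As a consistency check, the F500 fit parameters from Table 2 give $v_{50}=3.077+5.41\times 0.050\approx 3.35$ km s${}^{-1}$, matching the tabulated value.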
As seen in Figure 11, there is no clear inverse scaling of the flame
speed with rotation rate in the new set of runs. We observed earlier, however,
that nuclear reactions progress more quickly at higher rotation rate. This
results in a higher $\dot{e}_{\rm nuc}$ — up to three to four times higher
near the burning front after the flame ignites (see Figure 3) — which may
counteract the reduction in flame speed from the increased Coriolis
confinement. Comparing accelerations, we also observe that $\tt{F500}$
accelerates faster than $\tt{F1000}$, which appears to experience a small
deceleration at early times. This disparity may be a direct result of the
difference in Coriolis force.
### 3.3 Effect of Temperature on Flame Structure
To explore the effect of different initial temperature configurations, we run
four simulations fixed at a rotation rate of $500~{}\mathrm{Hz}$ with
temperatures as shown in Table 1. For all the $500~{}\mathrm{Hz}$ simulations
(with the exception of the coolest run, $\tt{F500}$), we set
$T_{\mathrm{star}}$ = $T_{\mathrm{hi}}$, scaling $\delta T$ accordingly to
maintain a consistent value of $T_{\mathrm{hi}}+\delta T$. If we let
$T_{\mathrm{star}}<T_{\mathrm{hi}}$, the cooler neutron star surface might act
as a heat sink and siphon away energy that would otherwise go into heating the
burning layer. By choosing $T_{\mathrm{star}}$ = $T_{\mathrm{hi}}$, we can
effectively explore simulations with greater surface heating. There are
several physically distinct mechanisms that could produce an increased
temperature at the crust: crustal heating, some other form of shallow heating
or accretion-induced heating. In these simulations, we do not model the
mechanism producing the heating effect, just the effect itself, so we do not
distinguish between which of these mechanisms cause the heating.
Figure 12: Comparison of $\bar{A}$ for 3 different $500~{}\mathrm{Hz}$ models
with neutron star temperatures $T_{\rm star}$ of $10^{8}~{}\mathrm{K}$,
$2\times 10^{8}~{}\mathrm{K}$ and $3\times 10^{8}~{}\mathrm{K}$ ($\tt{F500}$,
$\tt{F500\\_2e8}$, and $\tt{F500\\_3e8}$, respectively), and resulting
envelope structures. Each flame is shown at $70~{}\mathrm{ms}$.
Figure 12 shows $\bar{A}$ for three $500~{}\mathrm{Hz}$ simulations with
different initial temperature structures ($T_{\mathrm{star}}\leq 3\times
10^{8}~{}\mathrm{K}$) at $t=70~{}\mathrm{ms}$. We do not plot
$\tt{F500\\_4E8}$ here because it fails to form a clear burning front.
$\tt{F500\\_3E8}$ (Figure 12, bottom panel) — the hottest run to form a clear
burning front — has a faster propagating flame (this will be discussed further
in Section 3.4). It also reaches slightly higher $\bar{A}$ values than the two
cooler runs. The Figure 18 $\bar{A}$-$T$ phase plots of $\tt{F500}$ (left) and
$\tt{F500\\_3E8}$ (middle) also reflect these $\bar{A}$ features, with
$\tt{F500\\_3E8}$ reaching slightly higher $\bar{A}$ values. $\tt{F500\\_3E8}$
also reaches higher $\dot{e}_{\rm nuc}$ values at the low end of the
temperature range ($<0.5\times 10^{9}~{}\mathrm{K}$). There appear to be more
causally connected regions across a range of $\bar{A}$ at low temperatures for
$\tt{F500\\_3E8}$ than for $\tt{F500}$, suggesting that the higher
$\dot{e}_{\rm nuc}$ for $\tt{F500\\_3E8}$ generates burning along certain
trajectories that are not present in the cooler $\tt{F500}$ run. Note that,
although $\tt{F500\\_3E8}$ is the hottest run with a modified initial
temperature configuration to form a distinct flame front, the highest rotation
$\tt{F1000}$ flame actually reaches higher temperatures (see Figure 3, left
panel) as well as higher $\bar{A}$ (see Figure 7).
In contrast to the models with $T_{\mathrm{star}}\leq 3\times
10^{8}~{}\mathrm{K}$, $\tt{F500\\_4E8}$ is so hot that the organized flame
structure is lost. This model burns so strongly that it is only run out to
$40~{}\mathrm{ms}$. After an initial period when the burning moves across the
domain, residual burning continues and eventually ignites the entire fuel
layer at late times, as shown in Figures 13 and 14 for three snapshots taken
in the last ten milliseconds of the simulation. In Figure 13, $\dot{e}_{\rm nuc}$
reaches values of
$10^{18}-10^{20}~{}\mathrm{erg}\,\mathrm{g}^{-1}\mathrm{s}^{-1}$ across the
domain and at heights up to $\sim 0.5\times 10^{4}~{}\mathrm{cm}$. There is
still significant burning occurring even higher, with $\dot{e}_{\rm nuc}$
reaching $\sim 10^{15}~{}\mathrm{erg}\,\mathrm{g}^{-1}\mathrm{s}^{-1}$ at
heights up to $\sim 0.8\times 10^{4}~{}\mathrm{cm}$. For comparison, the next
hottest run ($\tt{F500\\_3E8}$) only reaches maximum $\dot{e}_{\rm nuc}$
values on the order of
$10^{18}~{}\mathrm{erg}\,\mathrm{g}^{-1}\mathrm{s}^{-1}$ (see Figure 3, right
panel), even at the latest timesteps ($\sim 70~{}\mathrm{ms}$, when the flame
is most developed). Significant $\dot{e}_{\rm nuc}$ values for all runs other
than $\tt{F500\\_4E8}$ are confined to the flame front, rather than spanning
the entire domain.
Figure 14 shows $\bar{A}$ for $\tt{F500\\_4E8}$. Again, burning extends across
the domain and high into the atmosphere and fuel layer, lacking the
characteristic flame structure shown in $\bar{A}$ plots for the lower
temperature $500~{}\mathrm{Hz}$ runs (see Figure 12). A distinct mass of
material appears to have broken out of the atmosphere near the axis of
symmetry. The atmosphere edge is located at $\sim 1.2\times
10^{4}~{}\mathrm{cm}$. A similar effect is also visible (to a lesser extent)
in the $\tt{F500\\_3E8}$ plot in Figure 12, with a faint haze of material
rising above the flame near the axis of symmetry. However, this is likely a
numerical artifact rather than a true physical effect, arising from the
boundary conditions imposed at the axis of symmetry in these simulations.
$\tt{F500\\_4E8}$ clearly reaches much higher $\bar{A}$ values across the
domain (especially at the latest snapshot, $t=40~{}\mathrm{ms}$) compared to
all the other runs described in this paper (see Figures 6, 7 and 12).
Though $\tt{F500\\_4E8}$ does not form a distinct burning front, it does
achieve greatly enhanced burning, as shown in the Figure 18 $\bar{A}$-$T$
phase plot (right panel). Not only is $\tt{F500\\_4E8}$ significantly hotter
overall compared to all other runs, but there is a large causally connected
region across a wide range of $\bar{A}$. This indicates that the hotter
temperature has facilitated significant burning in burning trajectories that
were not favored at the lower temperatures. The burning trajectories are also
very disrupted for $\tt{F500\\_4E8}$ compared to the cooler runs, suggesting
that the hotter temperature leads to more vigorous mixing. Indeed, this
appears to be the case looking at Figure 14 and the right panel of Figure 17
(discussed further in section 3.4). The disrupted burning trajectories
resemble those found in Figure 10 for the highest rotation $\tt{F1000}$ run,
though they are even more dramatically disrupted for $\tt{F500\\_4E8}$.
$\tt{F500\\_4E8}$ is also noticeably hotter than $\tt{F1000}$, even though it
is run for significantly less time ($t=40~{}\mathrm{ms}$ vs
$t=100~{}\mathrm{ms}$). Although the $\tt{F500\\_4E8}$ run is clearly a
special case in that it develops steady burning across the domain rather than
a propagating flame, enhanced burning in this hottest run aligns with results
from the other simulations with differing initial temperature structures. As
$T_{\mathrm{star}}$ is increased, the overall behavior and propagation of the
flame is significantly altered, implying that burning is very temperature-
sensitive. We explore flame propagation further in Section 3.4.
Figure 13: Time series of the $500~{}\mathrm{Hz}$ run $\tt{F500\\_4E8}$, with
$T_{\mathrm{star}}=4\times 10^{8}~{}\mathrm{K}$, showing $\dot{e}_{\rm nuc}$.
This model burns so strongly that it is only run out to $40~{}\mathrm{ms}$;
the snapshots shown here are at $t=30~{}\mathrm{ms}$, $35~{}\mathrm{ms}$, and
$40~{}\mathrm{ms}$. Figure 14: Time series of the $500~{}\mathrm{Hz}$ run
$\tt{F500\\_4E8}$, with $T_{\mathrm{star}}=4\times 10^{8}~{}\mathrm{K}$,
showing $\bar{A}$.
### 3.4 Effect of Temperature on Flame Propagation
The method for measuring flame propagation described in Section 3.2 is applied
in Figure 15 to the three $500~{}\mathrm{Hz}$ runs with $T_{\mathrm{star}}\leq
3\times 10^{8}~{}\mathrm{K}$. Due to the lack of a clear burning front in
$\tt{F500\\_4E8}$, we do not analyze its propagation velocity and
acceleration. As the initial $T_{\mathrm{star}}$ is increased beyond $\sim
2\times 10^{8}~{}\mathrm{K}$, the flame becomes greatly accelerated. The
initial flame velocities and accelerations derived from a quadratic least-
squares fit to each of the datasets, as well as the flame velocities at
$t=50~{}\mathrm{ms}$ calculated using the fit parameters, are provided in
Table 2. Comparing the flame propagation at different initial temperatures,
the most robust feature is the acceleration of the $\tt{F500\\_3E8}$ run at
$t\sim 40~{}\mathrm{ms}$. The reason for the acceleration of the flame is not
entirely clear. Whereas for the cooler runs, a state of steady flame
propagation is achieved, for the $\tt{F500\\_3E8}$ run the flame speed
continues to increase, suggesting that some instability driving the flame’s
propagation persists to later times. It could be that energy released from
burning begins to dominate the flame’s propagation as it evolves, increasing
the flame speed over time. Another possibility is that the increased
temperatures lead to enhanced turbulent mixing effects that pull in more fresh
fuel for the flame to burn. Yet another possibility is that the higher initial
$T_{\mathrm{star}}$ leads to a greater average temperature in the fuel layer
over time, making it easier for the flame to burn the fuel and propagate.
Figure 15: Flame front position vs. time for the three $500~{}\mathrm{Hz}$
runs with $T_{\rm star}$ of $10^{8}~{}\mathrm{K}$, $2\times
10^{8}~{}\mathrm{K}$ and $3\times 10^{8}~{}\mathrm{K}$ ($\tt{F500}$,
$\tt{F500\\_2e8}$, and $\tt{F500\\_3e8}$, respectively). The dashed lines show
quadratic least-squares fits to the data for $t\gtrsim 20~{}\mathrm{ms}$. Note
that due to its rapid acceleration, $\tt{F500\\_3E8}$ (green) is only run out
to $\sim 70~{}\mathrm{ms}$ to avoid surpassing the domain boundary.
In both the $u$-$\rho$ phase plot in Figure 16 and the $u$-$v$ phase plot in
Figure 17, the horizontal $u$-velocity for the $\tt{F500\\_4e8}$ (right panel)
burning region is slightly larger than that of the cooler runs. Additionally,
the cooler runs’ $u$-velocities are two to three orders of magnitude greater
than the flame speeds listed in Table 2. These phase-plot velocities therefore
most likely trace vortical motion within the turbulent burning region rather
than the propagation of the flame itself. The vertical $z$-velocity $v$
in Figure 17 further suggests that $\tt{F500\\_4e8}$ undergoes increased
vortical motion, as we see that the velocity magnitude in the burning region
is significantly larger than in the lower temperature runs. Also of interest
in these plots is that the coolest run shows high velocity material comprising
both high and low $\dot{e}_{\rm nuc}$ with horizontal velocity asymmetry, both
of which suggest the burning vortex in the coolest case is less well defined
than at higher temperatures. The coolest run thus does not seem to have
developed the characteristic vortex structure at the burning front (i.e. where
$\dot{e}_{\rm nuc}$ is greatest) that can be clearly seen for the hotter runs
at this time ($40~{}\mathrm{ms}$). As can be seen in Figure 9 (which was
plotted at $100~{}\mathrm{ms}$), this does develop more at later times.
Similar to what we saw when comparing the runs with different rotation rates,
it would therefore appear that it takes longer for the flame to develop when
$T_{\rm star}$ is cooler.
Figure 16: Phase plot of the horizontal $r$-velocity $u$, density $\rho$ and
the nuclear energy generation rate $\dot{e}_{\rm nuc}$ for the
$500~{}\mathrm{Hz}$ runs with $T_{\rm star}$ of $10^{8}~{}\mathrm{K}$,
$3\times 10^{8}~{}\mathrm{K}$ and $4\times 10^{8}~{}\mathrm{K}$ ($\tt{F500}$,
$\tt{F500\\_3e8}$, and $\tt{F500\\_4e8}$) at $t=40~{}\mathrm{ms}$ (the latest
time that $\tt{F500\\_4E8}$ is run out to). The run with $T_{\rm star}=2\times
10^{8}~{}\mathrm{K}$ ($\tt{F500\\_2E8}$) closely resembles the cooler
$\tt{F500}$ run, and is not shown here. Figure 17: Phase plot of the
horizontal $r$-velocity $u$, vertical $z$-velocity $v$ and the nuclear energy
generation rate $\dot{e}_{\rm nuc}$ for the $500~{}\mathrm{Hz}$ runs with
$T_{\rm star}$ of $10^{8}~{}\mathrm{K}$, $3\times 10^{8}~{}\mathrm{K}$ and
$4\times 10^{8}~{}\mathrm{K}$ ($\tt{F500}$, $\tt{F500\\_3e8}$, and
$\tt{F500\\_4e8}$) at $t=40~{}\mathrm{ms}$. The run with $T_{\rm star}=2\times
10^{8}~{}\mathrm{K}$ ($\tt{F500\\_2E8}$) closely resembles the cooler
$\tt{F500}$ run, and is not shown here. Figure 18: Phase plot of the mean
molecular weight $\bar{A}$, temperature $T$ and the nuclear energy generation
rate $\dot{e}_{\rm nuc}$ for the $500~{}\mathrm{Hz}$ runs with $T_{\rm star}$
of $10^{8}~{}\mathrm{K}$, $3\times 10^{8}~{}\mathrm{K}$ and $4\times
10^{8}~{}\mathrm{K}$ ($\tt{F500}$, $\tt{F500\\_3e8}$, and $\tt{F500\\_4e8}$)
at $t=40~{}\mathrm{ms}$. The run with $T_{\rm star}=2\times
10^{8}~{}\mathrm{K}$ ($\tt{F500\\_2E8}$) closely resembles the cooler
$\tt{F500}$ run, and is not shown here.
## 4 Discussion and Conclusions
We ran a number of simulations of laterally propagating flames in XRBs in
order to explore the effects of rotation and thermal structure. We found that
increasing the rotation rate increased the energy generation rate within the
flame and enhanced nuclear burning. Apart from the lowest rotation run (which
failed to ignite), flame propagation was not noticeably impacted by rotation
rate; by the time the different flame fronts reached steady propagation, they
shared comparable velocities. These results are likely due to the rotation-
dependent strength of the Coriolis force and its confinement of the flame
balancing the enhanced nuclear burning.
We explored several models with different crustal temperatures to determine
what effects mechanisms such as higher accretion rates, crustal heating and
shallow heating may have on flame propagation. We found that increasing the
temperature of the crust significantly enhanced the flame propagation. We
believe this is because a cooler crust allows heat to be transferred away from
the flame itself more efficiently, reducing the flame’s temperature, slowing
burning, and consequently reducing its propagation speed. At higher crustal
temperatures, heat could not be transported away from the flame front
efficiently, which increased the flame temperature and drove unstable,
accelerating flame propagation. We saw that if
the crust temperature was too high, then instead of a flame the entire
atmosphere would burn steadily. This is reminiscent of what is seen for
neutron stars with accretion rates exceeding the Eddington limit.
In future work, we would like to improve and expand our simulations in order
to better understand the processes at play and to include more physics. This
includes adding tracer particles to the simulations so we can monitor the
fluid motion and perform more detailed nucleosynthesis; extending our
simulations to 3D, which would hopefully alleviate some of the boundary
effects we have observed in these simulations but will require significant
computational resources; and exploring the resolution of our simulations more
so that we can ensure that we have resolved all of the necessary physical
processes. We would also like to model H/He flames, as these are the sites of
rp-process nucleosynthesis (Schatz et al., 2001). Initially we will use the
same reaction sequence explored in our previous convection calculations
(Malone et al., 2014). Finally, we recently added an MHD solver to Castro
(Barrios Sazo, 2020); this will allow us in the future to explore the effects
of magnetic fields on flame propagation in XRBs.
Castro is open-source and freely available at http://github.com/AMReX-
Astro/Castro. The problem setup used here is available in the git repo as
flame_wave. The work at Stony Brook was supported by DOE/Office of Nuclear
Physics grant DE-FG02-87ER40317. This material is based upon work supported by
the U.S. Department of Energy, Office of Science, Office of Advanced
Scientific Computing Research and Office of Nuclear Physics, Scientific
Discovery through Advanced Computing (SciDAC) program under Award Number DE-
SC0017955. This research was supported by the Exascale Computing Project
(17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office
of Science and the National Nuclear Security Administration. This material is
also based upon work supported by the U.S. Department of Energy, Office of
Science, Office of Advanced Scientific Computing Research, Department of
Energy Computational Science Graduate Fellowship under Award Number DE-
SC0021110. This work was supported in part by the U.S. Department of Energy,
Office of Science, Office of Workforce Development for Teachers and Scientists
(WDTS) under the Science Undergraduate Laboratory Internship (SULI) program.
MZ acknowledges support from the Simons Foundation. This research used
resources of the National Energy Research Scientific Computing Center, a DOE
Office of Science User Facility supported by the Office of Science of the U.
S. Department of Energy under Contract No. DE-AC02-05CH11231. This research
used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge
National Laboratory, which is supported by the Office of Science of the U.S.
Department of Energy under Contract No. DE-AC05-00OR22725, awarded through the
DOE INCITE program. We thank NVIDIA Corporation for the donation of a Titan X
and Titan V GPU through their academic grant program. This research has made
use of NASA’s Astrophysics Data System Bibliographic Services.
## References
* Almgren et al. (2020) Almgren, A., Sazo, M. B., Bell, J., et al. 2020, Journal of Open Source Software, 5, 2513, doi: 10.21105/joss.02513
* Almgren et al. (2010) Almgren, A. S., Beckner, V. E., Bell, J. B., et al. 2010, ApJ, 715, 1221, doi: 10.1088/0004-637X/715/2/1221
* Altamirano et al. (2010) Altamirano, D., Watts, A., Kalamkar, M., et al. 2010, ATel, 2932, 1
* Barrios Sazo (2020) Barrios Sazo, M. G. 2020, PhD thesis, State University of New York at Stony Brook
* Bernard-Champmartin et al. (2012) Bernard-Champmartin, A., Braeunig, J.-P., & Ghidaglia, J.-M. 2012, Computers and Fluids, 7, doi: 10.1016/j.compfluid.2012.09.014
* Bhattacharyya & Strohmayer (2007) Bhattacharyya, S., & Strohmayer, T. E. 2007, ApJ, 666, L85, doi: 10.1086/521790
* Bildsten (1998) Bildsten, L. 1998, in ASIC, Vol. 515, 419
* Bilous & Watts (2019) Bilous, A. V., & Watts, A. L. 2019, ApJS, 245, 19, doi: 10.3847/1538-4365/ab2fe1
* Brown et al. (1989) Brown, P. N., Byrne, G. D., & Hindmarsh, A. C. 1989, SIAM J. Sci. Stat. Comput., 10, 1038
* Cavecchi et al. (2016) Cavecchi, Y., Levin, Y., Watts, A. L., & Braithwaite, J. 2016, MNRAS, 459, 1259, doi: 10.1093/mnras/stw728
* Cavecchi et al. (2013) Cavecchi, Y., Watts, A. L., Braithwaite, J., & Levin, Y. 2013, MNRAS, 434, 3526, doi: 10.1093/mnras/stt1273
* Cavecchi et al. (2015) Cavecchi, Y., Watts, A. L., Levin, Y., & Braithwaite, J. 2015, MNRAS, 448, 445, doi: 10.1093/mnras/stu2764
* Chakraborty & Bhattacharyya (2014) Chakraborty, M., & Bhattacharyya, S. 2014, ApJ, 792, 4, doi: 10.1088/0004-637X/792/1/4
* Cumming et al. (2006) Cumming, A., Macbeth, J., Zand, J. J. M. i. T., & Page, D. 2006, The Astrophysical Journal, 646, 429, doi: 10.1086/504698
* Deibel et al. (2015) Deibel, A., Cumming, A., Brown, E. F., & Page, D. 2015, ApJ, 809, L31, doi: 10.1088/2041-8205/809/2/L31
* Eiden et al. (2020) Eiden, K., Zingale, M., Harpole, A., et al. 2020, ApJ, 894, 6, doi: 10.3847/1538-4357/ab80bc
* Fujimoto et al. (1981) Fujimoto, M. Y., Hanawa, T., & Miyaji, S. 1981, Astrophysical Journal, 247, 267, doi: 10.1086/159034
* Galloway et al. (2020) Galloway, D. K., in ’t Zand, J. J. M., Chenevez, J., et al. 2020, arXiv e-prints, arXiv:2003.00685. https://arxiv.org/abs/2003.00685
* Hunter (2007) Hunter, J. D. 2007, Computing in Science and Engg., 9, 90, doi: 10.1109/MCSE.2007.55
* Johnston et al. (2019) Johnston, Z., Heger, A., & Galloway, D. K. 2019, arXiv e-prints, arXiv:1909.07977. https://arxiv.org/abs/1909.07977
* Katz et al. (2020) Katz, M. P., Almgren, A., Sazo, M. B., et al. 2020, in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC ’20 (IEEE Press)
* Keek & Heger (2017) Keek, L., & Heger, A. 2017, The Astrophysical Journal, 842, 113, doi: 10.3847/1538-4357/aa7748
* Keek et al. (2009) Keek, L., Langer, N., et al. 2009, Astronomy & Astrophysics, 502, 871
* Kluyver et al. (2016) Kluyver, T., Ragan-Kelley, B., Pérez, F., et al. 2016, in Positioning and Power in Academic Publishing: Players, Agents and Agendas, 87–90, doi: 10.3233/978-1-61499-649-1-87
* Malone et al. (2014) Malone, C. M., Zingale, M., Nonaka, A., Almgren, A. S., & Bell, J. B. 2014, ApJ, 788, 115, doi: 10.1088/0004-637X/788/2/115
* Nethercote & Seward (2007) Nethercote, N., & Seward, J. 2007, in Proceedings of the 28th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’07 (New York, NY, USA: ACM), 89–100, doi: 10.1145/1250734.1250746
* Oliphant (2007) Oliphant, T. E. 2007, Computing in Science and Engg., 9, 10, doi: 10.1109/MCSE.2007.58
* Schatz et al. (2001) Schatz, H., Aprahamian, A., Barnard, V., et al. 2001, Physical Review Letters, 86, 3471, doi: 10.1103/PhysRevLett.86.3471
* Spitkovsky et al. (2002) Spitkovsky, A., Levin, Y., & Ushomirsky, G. 2002, ApJ, 566, 1018, doi: 10.1086/338040
* Timmes (2000) Timmes, F. X. 2000, ApJ, 528, 913, doi: 10.1086/308203
* Timmes & Woosley (1992) Timmes, F. X., & Woosley, S. E. 1992, ApJ, 396, 649, doi: 10.1086/171746
* Turk et al. (2011) Turk, M. J., Smith, B. D., Oishi, J. S., et al. 2011, ApJS, 192, 9, doi: 10.1088/0067-0049/192/1/9
* Turlione et al. (2015) Turlione, A., Aguilera, D. N., & Pons, J. A. 2015, Astronomy and Astrophysics, 577, doi: 10.1051/0004-6361/201322690
* van der Walt et al. (2011) van der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science & Engineering, 13, 22, doi: 10.1109/MCSE.2011.37
* Zhang et al. (2019) Zhang, W., Almgren, A., Beckner, V., et al. 2019, Journal of Open Source Software, 4, 1370, doi: 10.21105/joss.01370
# Lower Bounds on Information Requirements for Causal Network Inference
††thanks: This material is based upon work supported by the National Science
Foundation under Grant No. CCF 19-00636.
Xiaohan Kang and Bruce Hajek University of Illinois at Urbana–Champaign
Electrical and Computer Engineering and Coordinated Science Laboratory
Urbana, Illinois
Email: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
###### Abstract
Recovery of the causal structure of dynamic networks from noisy measurements
has long been a problem of intense interest across many areas of science and
engineering. Many algorithms have been proposed, but there is no work that
compares the performance of the algorithms to converse bounds in a non-
asymptotic setting. As a step to address this problem, this paper gives lower
bounds on the error probability for causal network support recovery in a
linear Gaussian setting. The bounds are based on the use of the Bhattacharyya
coefficient for binary hypothesis testing problems with mixture probability
distributions. Comparison of the bounds and the performance achieved by two
representative recovery algorithms are given for sparse random networks based
on the Erdős–Rényi model.
## I Introduction
Causal networks refer to the directed graphs representing the causal
relationships among a number of entities, and the inference of sparse large-
scale causal networks is of great importance in many scientific, engineering,
and medical fields. For example, the study of gene regulatory networks in
biology concerns the causal interactions between genes and is vital for
finding pathways of biological functions. Because of the scale of these
networks, inference often cannot be carried out for specific ordered pairs of
the vertices without significant prior knowledge about the networks. Instead,
it is desirable to infer the sparse structure from observations on all the
vertices. Time-series observations are especially useful due to the nature of
causality. The problem of causal network inference is then typically
formulated as a sparse support recovery problem from time-series vertex data.
Numerous algorithms have been applied to the problem of causal network
inference, and their performance has been evaluated using both generative
models with ground truths and real data with putative truths (see, e.g., [1]
for gene regulatory network reconstruction), but there is little work that
studies the theoretical converse bounds of the minimum information
requirements. The work [2] lays a theoretical foundation for causal network
inference by studying a general dynamic Markovian model, and proposes the oCSE
algorithm which is shown to find the causal structure of the network when the
exact causation entropy information is available. However, such information is
often unavailable due to the limited amount of data and noise in the
observations.
Motivated by [2], as a first step to understand the fundamental information
requirements, we study the linear discrete stochastic network in [2] as a
special case of the general Markovian model. Unlike [2], we consider
observation noise on the time-series measurements.
To get lower bounds on the error probability, we apply lower bounds for binary
hypothesis testing (BHT) based on the Bhattacharyya coefficient (BC), which
measures the similarity between two probability distributions. In addition, we
use the fact that when there is side information that has the same
distribution under either hypothesis, the conditional error probabilities
given the side information can be lower bounded using the BC for the
conditional distributions and then averaged to yield a lower bound for the
original BHT problem.
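For discrete distributions the BC is $\rho=\sum_{x}\sqrt{p(x)q(x)}$, and one standard BC-based converse for equal-prior binary hypothesis testing is $P_{e}\geq\tfrac{1}{2}\bigl(1-\sqrt{1-\rho^{2}}\bigr)$. The sketch below illustrates this bound in the discrete case (a generic illustration; the bounds in this paper are for Gaussian mixture distributions):

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    # rho = sum_x sqrt(p(x) q(x)) for discrete distributions p, q.
    return float(np.sum(np.sqrt(p * q)))

def error_lower_bound(p, q):
    # Equal-prior Bayes error satisfies P_e >= 0.5*(1 - sqrt(1 - rho^2)),
    # a standard converse based on the Bhattacharyya coefficient.
    rho = bhattacharyya_coefficient(p, q)
    return 0.5 * (1.0 - np.sqrt(max(0.0, 1.0 - rho ** 2)))
```

For $p=(0.9,0.1)$ and $q=(0.1,0.9)$ this gives $\rho=0.6$ and a lower bound of $0.1$, which in this symmetric example coincides with the exact Bayes error $\tfrac{1}{2}\sum_{x}\min(p(x),q(x))$.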
The contributions of this paper are three-fold. First, two lower bounds on
error probabilities, a direct one for general hypothesis testing and one based
on side information for mixture hypothesis testing, are given. Second, the
lower bound based on side information is applied to the dynamic Erdős–Rényi
(ER) networks from [2] (see Proposition 4). Third, the lower bound based on
side information is numerically compared with the performance of lasso and
oCSE [2].
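As a concrete illustration of the lasso baseline, one can regress each coordinate of the next observation on the previous observation vector with an $\ell_{1}$ penalty and threshold the coefficients. The sketch below solves each column problem with ISTA; it is a generic illustration, not the implementation used for the numerical comparison, and `lam` and `tol` are illustrative choices:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, iters=500):
    # ISTA for 0.5*||y - X b||^2 + lam*||b||_1.
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        b = soft_threshold(b - X.T @ (X @ b - y) / L, lam / L)
    return b

def lasso_support(Y, lam, tol=1e-3):
    # Y has shape (T+1, n); estimate chi_hat[i, j] = 1 if vertex i
    # appears to drive vertex j one time step later.
    past, nxt = Y[:-1], Y[1:]
    n = Y.shape[1]
    chi_hat = np.zeros((n, n), dtype=int)
    for j in range(n):
        b = lasso_ista(past, nxt[:, j], lam)
        chi_hat[:, j] = (np.abs(b) > tol).astype(int)
    return chi_hat
```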
Problems similar to the causal network inference in this paper have been
studied in various settings, but nothing on converse bounds is known for a
non-asymptotic regime. In a linear system identification (i.e., a vector
autoregression model) setting, the upper bound of this problem was recently
studied in [3], with sparsity constraint in [4, 5], with observation noise in
[6], and in a closed-loop setting in [7], and in both discrete-time and
continuous-time settings in [8]. Notably, the mutual incoherence property (see
[9, 10, 11, 12, 13]) is often used in upper bound analysis. Lower bounds for
exact recovery in asymptotic settings have been studied in [14, 15, 16]. The
causal inference problem is also closely related to compressed sensing, but
unlike compressed sensing it has an unknown design matrix (the time series of
system states).
The organization of this paper is as follows. Section II introduces the model
of the causal network inference problem. Section III gives a direct lower
bound on the error probability based on the BC for hypothesis testing and
applies it to the network setting. Section IV presents the lower bound based
on side information, and Section V applies this bound to the dynamic ER
networks. Section VI shows the numerical comparison of the lower bound and two
representative algorithms.
## II Model
### II-A Network dynamics
Let $n$ be the number of network vertices and $A^{0}\in\mathbb{R}^{n\times n}$
be the random weighted adjacency matrix of the network with a prior
distribution $\pi^{0}$. Let $X(t)$ be an $n$-dimensional random row vector
representing the system state at time $t\in\\{0,1,2,\dots,T\\}$. Assume
$X(0)\sim\mathcal{N}(\mu,Q)$ and
$X(t)=X(t-1)A^{0}+W(t),\quad t=1,2,\dots,T,$
where $W(t)\sim\mathcal{N}(0,\sigma^{2}I)$ are independent driving noises with
variance $\sigma^{2}$. The noisy observations are
$Y(t)=X(t)+Z(t),\quad t=0,1,\dots,T,$
where $Z(t)\sim\mathcal{N}(0,\nu^{2}I)$ are observation noises with variance
$\nu^{2}$. The observations $Y=(Y(0),Y(1),\dots,Y(T))\in\mathbb{R}^{n(T+1)}$
are jointly Gaussian given $A^{0}$. The goal is to recover the support matrix
$\chi^{0}$ from the observations $Y$, where $\chi^{0}$ is defined by
$\chi^{0}_{ij}=0$ if $A^{0}_{ij}=0$ and $\chi^{0}_{ij}=1$ if $A^{0}_{ij}\neq
0$. This setting is the same as the linear discrete stochastic network
dynamics in [2] and the discrete-time model in [8]. However, the theoretical
results in [2] and [8] do not consider observation noise.
To be definite, for the examples considered in this paper, two more
restrictions are imposed on the prior distribution $\pi^{0}$ and the initial
distribution $\mathcal{N}(\mu,Q)$. Let $r(A)$ denote the spectral radius of
$A$, defined by
$r(A)\triangleq\max_{x\in\mathbb{R}^{n}\colon\|x\|_{2}=1}|xAx^{*}|$, where
$x^{*}$ denotes the transpose of $x$. We assume $r(A^{0})<1$ for stability.
Without such a stability condition data at later time points could have much
higher signal-to-noise ratios, making early time points relatively useless. We
also assume the process $\left(X(t)\colon t\in\\{0,1,\dots,T\\}\right)$ is
stationary; i.e., $\mu=0$ and $Q$ satisfies $Q=(A^{0})^{*}QA^{0}+\sigma^{2}I$.
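As a concrete illustration (not part of the paper's released code), the dynamics above can be simulated directly; the sketch below draws $X(0)$ from the stationary distribution by iterating the Lyapunov recursion for $Q$, and the function name and parameter defaults are ours:

```python
import numpy as np

def simulate(A, T, sigma2=1.0, nu2=1.0, seed=0):
    """Simulate X(t) = X(t-1) A + W(t) with observations Y(t) = X(t) + Z(t).

    X(0) is drawn from the stationary distribution N(0, Q), where Q solves
    Q = A^* Q A + sigma^2 I (valid because r(A) < 1), obtained here by
    iterating the recursion to convergence.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q = sigma2 * np.eye(n)
    for _ in range(1000):  # converges geometrically when r(A) < 1
        Q = A.T @ Q @ A + sigma2 * np.eye(n)
    X = rng.multivariate_normal(np.zeros(n), Q)
    Y = [X + np.sqrt(nu2) * rng.standard_normal(n)]
    for _ in range(T):
        X = X @ A + np.sqrt(sigma2) * rng.standard_normal(n)
        Y.append(X + np.sqrt(nu2) * rng.standard_normal(n))
    return np.array(Y)  # shape (T+1, n); rows are Y(0), ..., Y(T)
```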
### II-B Performance metrics
In this section we define the performance metrics of the network inference
problem, and relate them to error probabilities for testing hypotheses about
the existence of individual edges.
We first define the network-level error probabilities. Let
$\widehat{\chi}\colon\mathbb{R}^{n(T+1)}\to\\{0,1\\}^{n\times n}$ be the
support matrix estimator based on the observation $Y$. Let $\mathbb{I}(\cdot)$
be the indicator function. On the network level, following [2], we define the
false negative ratio $\epsilon^{-}$ and the false positive ratio
$\epsilon^{+}$ for a given network prior $\pi^{0}$ and an estimator
$\widehat{\chi}$ by
$\epsilon^{-}\triangleq\frac{\operatorname{\mathbb{E}}\sum_{i,j}\mathbb{I}(\\{A^{0}_{ij}\neq
0,\widehat{\chi}_{ij}(Y)=0\\})}{\operatorname{\mathbb{E}}\sum_{i,j}\mathbb{I}(\\{A^{0}_{ij}\neq
0\\})},$ (1)
$\epsilon^{+}\triangleq\frac{\operatorname{\mathbb{E}}\sum_{i,j}\mathbb{I}(\\{A^{0}_{ij}=0,\widehat{\chi}_{ij}(Y)=1\\})}{\operatorname{\mathbb{E}}\sum_{i,j}\mathbb{I}(\\{A^{0}_{ij}=0\\})},$
(2)
provided the denominators are positive. Here the summations are over all
ordered pairs, including the self-pairs.
Now we define the edge-level error probabilities. For an ordered pair $(i,j)$
given the prior $\pi^{0}$ on $A^{0}$ and an estimator $\widehat{\chi}$, the
recovery of $\chi^{0}_{ij}$ is a BHT problem with the probability of miss and
the probability of false alarm given by
$P^{-}_{ij}\triangleq\operatorname{\mathbb{P}}(\widehat{\chi}_{ij}(Y)=0\mid\chi^{0}_{ij}=1)$
(3)
and
$P^{+}_{ij}\triangleq\operatorname{\mathbb{P}}(\widehat{\chi}_{ij}(Y)=1\mid\chi^{0}_{ij}=0).$
(4)
###### Proposition 1
The network-level error probabilities are convex combinations of the edge-
level error probabilities:
$\epsilon^{-}=\sum_{i,j}P^{-}_{ij}w_{ij}^{-},\quad\epsilon^{+}=\sum_{i,j}P^{+}_{ij}w_{ij}^{+},$
where
$w_{ij}^{-}\triangleq\frac{\operatorname{\mathbb{P}}\\{A^{0}_{ij}\neq
0\\}}{\sum_{k,l}\operatorname{\mathbb{P}}\\{A^{0}_{kl}\neq 0\\}},\quad
w_{ij}^{+}\triangleq\frac{\operatorname{\mathbb{P}}\\{A^{0}_{ij}=0\\}}{\sum_{k,l}\operatorname{\mathbb{P}}\\{A^{0}_{kl}=0\\}}.$
The proof of Proposition 1 follows immediately by exchanging the summation and
expectation in the numerators and the denominators in (1) and (2). Proposition
1 implies that, in order to study the network-level error probabilities, it
suffices to study the edge-level error probabilities.
###### Remark 1
The quantities $\epsilon^{-}$, $\epsilon^{+}$, $w_{ij}^{-}$, and $w_{ij}^{+}$
can be interpreted as limits, assuming the number of instances of the support
recovery problem converges to infinity. First, $\epsilon^{-}$ is the limiting
ratio of the number of false negatives (edges in the ground truth that are
missed in the prediction) to the total number of edges in the ground truth.
Similarly, $\epsilon^{+}$ is the limiting ratio of the number of false
positives (predicted edges that are not in the ground truth) to the total
number of ordered pairs with no edges. Likewise, the weight $w_{ij}^{-}$ is
the limiting fraction of edges that appear on the ordered pair $(i,j)$ out of
all edges, and $w_{ij}^{+}$ is the limiting fraction of non-edges on $(i,j)$
that appear out of all non-edges.
###### Remark 2
One could alternatively define $\epsilon^{-}$ and $\epsilon^{+}$ in (1) and (2)
by taking the expectation of the ratios rather than the ratio of the
expectations. The definitions presented here avoid being dominated by the
variation of the random denominators, which might even be zero. In [2] the two
quantities were originally defined for a pair of true and predicted networks.
###### Remark 3
The weights $w_{ij}^{-}$’s and $w_{ij}^{+}$’s are determined by the prior
$\pi^{0}$. If the network prior $\pi^{0}$ is symmetric in the sense that a) it is
invariant under vertex permutation; and b)
$\operatorname{\mathbb{P}}\\{A^{0}_{11}=0\\}=\operatorname{\mathbb{P}}\\{A^{0}_{12}=0\\}\in(0,1)$,
then $w_{ij}^{-}=w_{ij}^{+}=\frac{1}{n^{2}}$.
###### Remark 4
Note (1) and (2) weigh the self-edges and the other edges equally, whereas
they could be weighted differently, or self-edges could be excluded.
## III Direct lower bounds on error probability
In this section we apply error bounds for BHT based on the BC directly to
obtain sample complexity lower bounds on the network inference problem.
### III-A Binary hypothesis testing
For background, this section presents useful bounds from the theory of
detection that are used to provide lower bounds on the network-level error
probabilities $\epsilon^{-}$ and $\epsilon^{+}$.
Consider a BHT problem with prior probabilities $\pi$ for $H_{0}$ and $1-\pi$
for $H_{1}$. Suppose that the observation $Y\in\mathcal{Y}$ has probability
density function (pdf) $f$ under hypothesis $H_{0}$ and $g$ under hypothesis
$H_{1}$. For any decision rule $\delta\colon\mathcal{Y}\to\\{0,1\\}$, let the
probability of miss and the probability of false alarm be
$P^{-}_{\delta}\triangleq\int g(y)(1-\delta(y))\mathop{}\\!\mathrm{d}y\mbox{
and }P^{+}_{\delta}\triangleq\int f(y)\delta(y)\mathop{}\\!\mathrm{d}y.$
The average error probability is $\pi P^{-}_{\delta}+(1-\pi)P^{+}_{\delta}$. A
decision rule is Bayes optimal (i.e. minimizes the average error probability)
if and only if it minimizes $\pi f(y)\delta(y)+(1-\pi)g(y)(1-\delta(y))$ for
each $y$. The corresponding minimum average error probability is given by
$p_{e}^{\mathrm{opt}}=\int(\pi
f(y))\wedge((1-\pi)g(y))\mathop{}\\!\mathrm{d}y,$ (5)
where $a\wedge b\triangleq\min\\{a,b\\}$. Let the BC for a pair of continuous
probability distributions with pdfs $f$ and $g$ be defined by:
$\rho(f,g)\triangleq\int\sqrt{f(y)g(y)}\mathop{}\\!\mathrm{d}y.$
We shall often omit the arguments $f$ and $g$ when they are clear from the
context. Note the BC is related to the Hellinger distance $H(f,g)$ by
$H(f,g)=\sqrt{1-\rho}$. For two jointly Gaussian distributions with means
$\mu_{1}$ and $\mu_{2}$ and covariance matrix $\Sigma_{1}$ and $\Sigma_{2}$ it
is known that $\rho=\rho_{G}((\mu_{1},\Sigma_{1}),(\mu_{2},\Sigma_{2}))$ is
given by
$\displaystyle\exp\left(-\frac{1}{8}(\Delta\mu)^{*}\left(\frac{\Sigma_{1}+\Sigma_{2}}{2}\right)^{-1}(\Delta\mu)\right)\cdot\sqrt{\frac{\sqrt{\det\Sigma_{1}\det\Sigma_{2}}}{\det\frac{\Sigma_{1}+\Sigma_{2}}{2}}},$
where $\Delta\mu=\mu_{1}-\mu_{2}$.
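For reference, the closed-form expression for $\rho_{G}$ can be evaluated numerically as follows (a sketch; the helper name is ours):

```python
import numpy as np

def rho_gauss(mu1, Sigma1, mu2, Sigma2):
    """Bhattacharyya coefficient for two Gaussians, following the displayed
    formula: an exponential term in the mean difference times a determinant
    ratio term in the covariances."""
    dmu = np.asarray(mu1, float) - np.asarray(mu2, float)
    Sbar = (np.asarray(Sigma1, float) + np.asarray(Sigma2, float)) / 2
    quad = dmu @ np.linalg.solve(Sbar, dmu)
    det_term = np.sqrt(
        np.sqrt(np.linalg.det(Sigma1) * np.linalg.det(Sigma2)) / np.linalg.det(Sbar)
    )
    return np.exp(-quad / 8) * det_term
```

Identical distributions give $\rho_{G}=1$, and $\rho_{G}$ decreases toward zero as the two distributions separate.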
The following lemma goes back at least as far as a report of Kraft in the
1950’s (see [17]).
###### Lemma 1
(a) For any two distributions $f$ and $g$, the minimum average error
probability for the BHT problem with priors $(\pi,1-\pi)$ satisfies
$\pi(1-\pi)\rho^{2}\leq\frac{1}{2}(1-\sqrt{1-4\pi(1-\pi)\rho^{2}})\leq
p_{e}^{\mathrm{opt}}\leq\sqrt{\pi(1-\pi)}\rho.$
(b) The BC for tensor products is the product of the BCs:
$\rho(\otimes_{j=1}^{n}f_{j},\otimes_{j=1}^{n}g_{j})=\prod_{j=1}^{n}\rho(f_{j},g_{j}),$
where the $f_{j}$ and $g_{j}$ are pdfs.
A proof of Lemma 1 is shown in Appendix B.
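The chain of inequalities in Lemma 1(a) is easy to check numerically on a small discrete example (the distributions below are arbitrary choices of ours):

```python
import numpy as np

f = np.array([0.5, 0.3, 0.2])
g = np.array([0.2, 0.3, 0.5])
pi = 0.4

rho = np.sum(np.sqrt(f * g))                       # Bhattacharyya coefficient
pe_opt = np.sum(np.minimum(pi * f, (1 - pi) * g))  # minimum average error probability (5)
weak_lb = pi * (1 - pi) * rho**2
lb = 0.5 * (1 - np.sqrt(1 - 4 * pi * (1 - pi) * rho**2))
ub = np.sqrt(pi * (1 - pi)) * rho
# Lemma 1(a) asserts: weak_lb <= lb <= pe_opt <= ub
```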
### III-B Direct Bhattacharyya bound for network inference
Now we return to the model in Section II. Let $f_{Y\mid A^{0}_{ij}=0}$ and
$f_{Y\mid A^{0}_{ij}\neq 0}$ be the conditional pdfs of $Y$ given there is not
an edge from $i$ to $j$ or there is such an edge, respectively. We have the
following bound on the average network-level error probability.
###### Proposition 2
For any estimator $\widehat{\chi}$ and any $\pi\in[0,1]$,
$\displaystyle\pi\epsilon^{-}+(1-\pi)\epsilon^{+}\geq\frac{1}{2}\sum_{i,j}\left(1-\sqrt{1-4\pi(1-\pi)\rho_{ij}^{2}}\right)(w_{ij}^{-}\wedge
w_{ij}^{+}),$
where $\rho_{ij}\triangleq\rho(f_{Y\mid A^{0}_{ij}\neq 0},f_{Y\mid
A^{0}_{ij}=0})$, and $w_{ij}^{-}$ and $w_{ij}^{+}$ are defined in Proposition
1.
###### Proof:
Proposition 1 yields
$\displaystyle\pi\epsilon^{-}+(1-\pi)\epsilon^{+}$
$\displaystyle=\sum_{i,j}\pi P^{-}_{ij}w_{ij}^{-}+(1-\pi)P^{+}_{ij}w_{ij}^{+}$
$\displaystyle\geq\sum_{i,j}(\pi
P^{-}_{ij}+(1-\pi)P^{+}_{ij})(w_{ij}^{-}\wedge w_{ij}^{+}).$
Here $\pi P^{-}_{ij}+(1-\pi)P^{+}_{ij}$ is the average error probability for
testing $\chi^{0}_{ij}=0$ vs. $\chi^{0}_{ij}=1$. Applying Lemma 1 to bound
each of those terms completes the proof. ∎
We illustrate Proposition 2 with a simple example.
###### Example 1
Consider the following prior for network size $n=2$. With probability $\beta$
there is a single edge of coefficient $a\neq 0$ from vertex $1$ to vertex $2$,
and with probability $1-\beta$ there are no edges in the network. In other
words, $\pi^{0}$ is given by
$\pi^{0}\left(\left\\{A_{0}\right\\}\right)=1-\beta$ and
$\pi^{0}\left(\left\\{A_{1}\right\\}\right)=\beta$, where
$A_{0}=\begin{pmatrix}0&0\\\ 0&0\end{pmatrix}$ and
$A_{1}=\begin{pmatrix}0&a\\\ 0&0\end{pmatrix}$.
Let $N=\nu^{2}/\sigma^{2}$. By the formula for $\rho_{G}$, the BC of the
observations given the two possible networks is
$\rho(f_{Y\mid A^{0}=A_{0}},f_{Y\mid A^{0}=A_{1}})=\rho_{0}\gamma^{T},$ (6)
where
$\rho_{0}\triangleq\sqrt{\frac{\sqrt{(N+1)(N+1+a^{2})}}{N+1+\frac{a^{2}}{2}}}$
and
$\gamma\triangleq\left(\frac{(N+1)\sqrt{N^{2}+(2+a^{2})N+1}}{N^{2}+(2+\frac{a^{2}}{2})N+1+\frac{a^{2}}{4}}\right)^{1/2}.$
Indeed, note that $Y_{1}(T)$, $Y_{2}(0)$, and the pairs
$\left((Y_{1}(t-1),Y_{2}(t))\colon 1\leq t\leq T\right)$ are all mutually
independent. Then by Lemma 1(b), the BC of $Y$ under the two hypotheses
tensorizes to the product of BCs of the independent components. The BC of
$Y_{2}(0)$ is the first multiplicative term in (6), and the BCs of the i.i.d.
pairs form the other term. The random variable $Y_{1}(T)$ is identically
distributed under either hypothesis so its BC is 1.
For the given $\pi^{0}$ we can get $w^{-}=\begin{pmatrix}0&1\\\
0&0\end{pmatrix}$ and
$w^{+}=\displaystyle\frac{1}{4-\beta}\begin{pmatrix}1&1-\beta\\\
1&1\end{pmatrix}$. Then by Proposition 2, for any $\pi\in[0,1]$,
$\pi\epsilon^{-}+(1-\pi)\epsilon^{+}\geq\frac{1}{2}\left(1-\sqrt{1-4\pi(1-\pi)\rho_{0}^{2}\gamma^{2T}}\right)\frac{1-\beta}{4-\beta}.$
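The bound in Example 1 is straightforward to evaluate; the sketch below (function name ours) computes it from $a$, $N=\nu^{2}/\sigma^{2}$, $T$, and $\beta$:

```python
import numpy as np

def example1_bound(a, N, T, beta, pi=0.5):
    """Lower bound on pi*eps^- + (1-pi)*eps^+ for the two-vertex prior of Example 1."""
    rho0 = np.sqrt(np.sqrt((N + 1) * (N + 1 + a**2)) / (N + 1 + a**2 / 2))
    gamma = np.sqrt(
        (N + 1) * np.sqrt(N**2 + (2 + a**2) * N + 1)
        / (N**2 + (2 + a**2 / 2) * N + 1 + a**2 / 4)
    )
    rho = rho0 * gamma**T  # BC from (6)
    return 0.5 * (1 - np.sqrt(1 - 4 * pi * (1 - pi) * rho**2)) * (1 - beta) / (4 - beta)
```

Since $\gamma<1$, the bound decays with the horizon $T$, reflecting that longer observation windows make the two hypotheses easier to distinguish.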
## IV Lower bounds on error probability with side information
The direct Bhattacharyya bound in Proposition 2 can however be hard to compute
or estimate because of the integration involving high-dimensional pdfs. In
this section we give a different bound based on the side information that acts
like a switch random variable. With proper choice of the side information,
this new bound can be easily estimated numerically.
Consider the same setting as in Section III-A with the additional condition
$f=\sum_{s=1}^{d}\alpha_{s}f_{s}$ and $g=\sum_{s=1}^{d}\alpha_{s}g_{s}$, where
$f_{s}$ and $g_{s}$ are pdfs for any $s\in[d]\triangleq\\{1,2,\dots,d\\}$ and
$\left(\alpha_{s}\colon s\in[d]\right)$ satisfies $\sum_{s=1}^{d}\alpha_{s}=1$
and $\alpha_{s}\geq 0$ for any $s$. Recall by Lemma 1 we have
$p_{e}^{\mathrm{opt}}\geq\mathsf{LB}_{\text{direct}}$, where
$\mathsf{LB}_{\text{direct}}$ is the lower bound based on the mixture
distributions alone without decomposing them into mixture components and is
given by
$\mathsf{LB}_{\text{direct}}\triangleq\frac{1}{2}\left(1-\sqrt{1-4\pi(1-\pi)\rho^{2}(f,g)}\right).$
(7)
The following proposition gives a lower bound using the mixture
representation.
###### Proposition 3
For any $\pi\in[0,1]$, we have
$p_{e}^{\mathrm{opt}}\geq\mathsf{LB}_{\text{side-info}}$, where
$\mathsf{LB}_{\text{side-
info}}\triangleq\frac{1}{2}\left(1-\sum_{s}\alpha_{s}\sqrt{1-4\pi(1-\pi)\rho^{2}(f_{s},g_{s})}\right).$
(8)
###### Proof:
Let $S$ be a random variable such that, under either hypothesis,
$\operatorname{\mathbb{P}}\\{S=s\\}=\alpha_{s}$ for $s\in[d]$. Suppose $X$ is
jointly distributed with $S$ such that the conditional pdf of $X$ given $S=s$
is $f_{s}$ under $H_{0}$ and is $g_{s}$ under $H_{1}$. Then $S$ can be thought
of as a switch variable or side information. For each $s\in[d]$, Lemma 1
applied to the BHT problem with densities $f_{s}$ and $g_{s}$ yields that the
conditional average error probability, given $S=s$, is lower bounded by
$\mathsf{LB}_{\text{direct}}(f_{s},g_{s})$. Averaging these lower bounds using
$\left(\alpha_{s}\colon s\in[d]\right)$ yields (8). The lower bound applies to
decision rules $\delta$ that depend on both $S$ and $Y$, so it applies to
decision rules depending on $Y$ alone. ∎
Numerical comparisons of $\mathsf{LB}_{\text{direct}}$,
$\mathsf{LB}_{\text{side-info}}$, and $p_{e}^{\mathrm{opt}}$ under the uniform
prior $\pi=1/2$ are shown in Fig. 1.
Figure 1: Numerical comparison of lower bounds with $d=2$, $X$ taking 10
values, and $\pi=1/2$.
For each trial, $d=2$ and $\alpha_{1}$ is drawn from $[0,1]$ uniformly at
random, and $\alpha_{2}=1-\alpha_{1}$ (equivalently, $(\alpha_{1},\alpha_{2})$
follow the 2-dimensional symmetric Dirichlet distribution). The distributions
$f_{1}$, $f_{2}$, $g_{1}$, $g_{2}$ follow the $10$-dimensional symmetric
Dirichlet distribution independently. All these distributions are generated
independently for each trial. The trials are sorted in increasing order of
$\mathsf{LB}_{\text{direct}}$. The figure shows that $\mathsf{LB}_{\text{direct}}$
is usually better (i.e., larger) than $\mathsf{LB}_{\text{side-info}}$ for
random mixtures. The following example shows, however, that this is not always
the case. Equivalently, it shows that $\mathsf{LB}_{\text{direct}}$ is not a
concave function of $(f,g)$.
###### Example 2
Consider discrete distributions with probability mass functions
$f=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$ and
$g=(\frac{1}{2},\frac{1}{3},\frac{1}{6})$. Note the two distributions can be
decomposed into mixtures as
$f=\frac{2}{3}\left(\frac{1}{2},0,\frac{1}{2}\right)+\frac{1}{3}(0,1,0),$
$g=\frac{2}{3}\left(\frac{3}{4},0,\frac{1}{4}\right)+\frac{1}{3}(0,1,0).$
Then for $\pi=1/2$, we have $\mathsf{LB}_{\text{direct}}\approx 0.3940$ and
$\mathsf{LB}_{\text{side-info}}\approx 0.4137$. So
$\mathsf{LB}_{\text{direct}}<\mathsf{LB}_{\text{side-info}}$. (Both values are
consistent with Lemma 1, since the minimum average error probability here is
$p_{e}^{\mathrm{opt}}=\frac{1}{2}\sum_{i}f_{i}\wedge g_{i}=5/12\approx 0.4167$.)
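Evaluating (7) and (8) directly for this example (a quick check of ours):

```python
import numpy as np

def lb_direct(f, g, pi=0.5):
    """LB_direct from (7)."""
    rho = np.sum(np.sqrt(f * g))
    return 0.5 * (1 - np.sqrt(1 - 4 * pi * (1 - pi) * rho**2))

f = np.array([1/3, 1/3, 1/3])
g = np.array([1/2, 1/3, 1/6])
# Mixture decomposition from Example 2.
alphas = [2/3, 1/3]
fs = [np.array([1/2, 0, 1/2]), np.array([0, 1, 0])]
gs = [np.array([3/4, 0, 1/4]), np.array([0, 1, 0])]

direct = lb_direct(f, g)  # approx 0.3940
side_info = 0.5 * (1 - sum(  # LB_side-info from (8) at pi = 1/2
    a * np.sqrt(1 - np.sum(np.sqrt(fi * gi))**2)
    for a, fi, gi in zip(alphas, fs, gs)
))  # approx 0.4137
```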
## V Application to dynamic ER networks
This section presents the random network prior used in [2]. As we will see,
the direct lower bounds on the average error probability for such a prior
require high-dimensional integration and are hard to compute even numerically,
while the calculation of the lower bounds based on side information only
involves the means and the determinants of the covariance matrices of the
Gaussian mixture components.
###### Definition 1
A _dynamic ER prior distribution_ , denoted by $\pi^{0}_{\mathsf{ER}}$, is the
distribution of a signed ER graph scaled to a desired spectral radius if
possible. Here the signed ER graph is a random graph whose edge weights take
values in $\\{-1,0,1\\}$, with the locations of the edges determined by an ER
graph and the signs equally likely. Formally, for given $n\in\mathbb{N}$,
$p\in[0,1]$, and $r_{0}\in(0,1)$, let $R^{0}\in\\{-1,1\\}^{n\times n}$ be
independent Rademacher random variables indicating the potential signs of the
edges, and let $\chi^{0}\in\\{0,1\\}^{n\times n}$ be independent Bernoulli
random variables with mean $p$ indicating the support. Then
$\pi^{0}_{\mathsf{ER}}$ is defined to be the distribution of
$s(R^{0}\circ\chi^{0})$, where $\circ$ denotes the Hadamard (entrywise)
product and $s\colon\\{-1,0,1\\}^{n\times n}\to\mathbb{R}^{n\times n}$ defined
by (recall $r(\cdot)$ is the spectral radius)
$s(A)\triangleq\begin{cases}r_{0}\frac{A}{r(A)}&\text{if }r(A)\neq 0,\\\
A&\text{if }r(A)=0.\end{cases}$
Note $R^{0}\circ\chi^{0}$ can have a spectral radius of zero, in which case
scaling cannot achieve the desired spectral radius $r_{0}$. Also note
$\pi^{0}_{\mathsf{ER}}$ is symmetric in the sense of Remark 3.
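Sampling from $\pi^{0}_{\mathsf{ER}}$ is direct; the sketch below (our own, with the spectral radius computed from eigenvalues) mirrors Definition 1:

```python
import numpy as np

def sample_dynamic_er(n, p, r0, seed=0):
    """Draw A^0 ~ pi^0_ER: a signed ER matrix scaled to spectral radius r0
    whenever its spectral radius is nonzero."""
    rng = np.random.default_rng(seed)
    R = rng.choice([-1.0, 1.0], size=(n, n))      # Rademacher signs R^0
    chi = (rng.random((n, n)) < p).astype(float)  # Bernoulli(p) support chi^0
    A = R * chi                                   # Hadamard product R^0 o chi^0
    r = np.max(np.abs(np.linalg.eigvals(A)))      # spectral radius r(A)
    return r0 * A / r if r > 0 else A
```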
To apply the direct lower bound in Proposition 2 one needs to calculate the
$\rho_{ij}$’s, which are based on high-dimensional Gaussian components mixed
with mixture weights from the dynamic ER networks. As a result, even numerical
estimation is highly non-trivial.
Alternatively, to use Proposition 3 we need to find suitable side information
for the BHT $\chi^{0}_{ij}=0$ vs. $\chi^{0}_{ij}=1$. Borrowing terminology
from game theory, let $\chi^{0}_{-ij}$ be all the entries except the $(i,j)$th
in $\chi^{0}$, defined by
$\chi^{0}_{-ij}=(\chi^{0}_{i^{\prime}j^{\prime}}:i^{\prime},j^{\prime}\in[n],~{}(i^{\prime},j^{\prime})\neq(i,j)).$
Because $R^{0}$ and $\chi^{0}_{-ij}$ are independent of $\chi^{0}_{ij}$, a
natural choice of the side information is $(R^{0},\chi^{0}_{-ij})$. With a
slight abuse of notation we write
$\pi^{0}_{\mathsf{ER}}(R,\chi_{-ij})\triangleq\operatorname{\mathbb{P}}\\{R^{0}=R,\chi^{0}_{-ij}=\chi_{-ij}\\}$.
Propositions 1 and 3 then yield the following bound on the average network-
level error probability for the dynamic ER networks.
###### Proposition 4
Under $\pi^{0}_{\mathsf{ER}}$ for any estimator $\widehat{\chi}$ and any
$\pi$,
$\displaystyle\quad\pi\epsilon^{-}+(1-\pi)\epsilon^{+}$
$\displaystyle\geq\frac{1}{2}\left(1-\frac{1}{n^{2}}\sum_{i,j}\sum_{R,\chi_{-ij}}\pi^{0}_{\mathsf{ER}}(R,\chi_{-ij})\sqrt{1-4\pi(1-\pi)\rho_{i,j,R,\chi_{-ij}}^{2}}\right),$
where $\rho_{i,j,R,\chi_{-ij}}$ is the BC for the following two mean zero
Gaussian distributions: The conditional distribution of $Y$ given
$R^{0}=R,\chi^{0}_{-ij}=\chi_{-ij},\chi^{0}_{ij}=0$ vs. the conditional
distribution of $Y$ given $R^{0}=R,\chi^{0}_{-ij}=\chi_{-ij},\chi^{0}_{ij}=1$.
The bound in Proposition 4 involves an expectation over the probability
distribution of the dynamic ER network, and can be readily estimated by Monte
Carlo simulation.
## VI Numerical results
### VI-A Algorithms
Let
$\Phi(0)=\begin{pmatrix}Y(0)\\\ Y(1)\\\ \vdots\\\
Y(T-1)\end{pmatrix},\quad\Phi(1)=\begin{pmatrix}Y(1)\\\ Y(2)\\\ \vdots\\\
Y(T)\end{pmatrix}.$
#### VI-A1 lasso
The lasso algorithm solves the optimization problem
$\operatorname*{minimize}_{A_{j}}\frac{1}{2T}\|\Phi_{j}(1)-\Phi(0)A_{j}\|_{2}^{2}+\lambda\|A_{j}\|_{1},$
where $A_{j}$ and $\Phi_{j}(1)$ are the $j$th columns of $A$ and $\Phi(1)$,
respectively, and $\lambda\geq 0$ is the regularization parameter. If
$\Phi(0)^{*}\Phi(0)$ is invertible, the minimizer
$\hat{A}_{j}^{\mathsf{lasso}}$ is unique. Write
$\hat{A}_{j}^{\mathsf{lasso}}=\left(\hat{A}_{ij}^{\mathsf{lasso}}\colon
i\in[n]\right)$. Then the estimated support matrix is
$\widehat{\chi}^{\mathsf{lasso}}$ defined by
$\widehat{\chi}_{ij}^{\mathsf{lasso}}=\mathbb{I}(\\{\hat{A}_{ij}^{\mathsf{lasso}}\neq
0\\})$. We implement lasso using scikit-learn [18].
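A minimal version of this estimator can be written with scikit-learn's `Lasso`, whose objective $\frac{1}{2m}\|y-Xw\|_{2}^{2}+\alpha\|w\|_{1}$ (with $m$ the number of samples) matches the formulation above with $\alpha=\lambda$; the function name below is ours:

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_support(Y, lam):
    """Estimate the support matrix from observations Y of shape (T+1, n),
    solving one lasso problem per target column j."""
    Phi0, Phi1 = Y[:-1], Y[1:]  # Phi(0) and Phi(1)
    n = Y.shape[1]
    chi_hat = np.zeros((n, n), dtype=int)
    for j in range(n):
        model = Lasso(alpha=lam, fit_intercept=False)
        model.fit(Phi0, Phi1[:, j])
        chi_hat[:, j] = (model.coef_ != 0).astype(int)
    return chi_hat
```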
#### VI-A2 oCSE
oCSE was proposed in [2]. For each target vertex $j$, its parent set is
discovered greedily one at a time by finding the vertex whose column in
$\Phi(0)$ together with the other chosen columns fits $\Phi_{j}(1)$ the best
in the least squares sense. This discovery stage terminates when the
improvement in the residual fails a permutation test with some threshold
$\theta.$
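The discovery stage can be sketched as greedy forward selection; the version below is a simplification of ours that replaces the permutation test with a relative-improvement threshold `tol` (the paper's oCSE instead uses a permutation test with threshold $\theta$):

```python
import numpy as np

def greedy_parents(Phi0, y, tol=1e-3):
    """Greedily pick columns of Phi0 that best fit y in least squares,
    stopping when the relative reduction in residual sum of squares
    falls below tol. A simplified stand-in for oCSE's discovery stage."""
    n = Phi0.shape[1]
    chosen = []
    best_rss = float(np.sum(y**2))
    while len(chosen) < n:
        cand_rss = {}
        for i in set(range(n)) - set(chosen):
            cols = Phi0[:, chosen + [i]]
            coef, rss, *_ = np.linalg.lstsq(cols, y, rcond=None)
            # lstsq returns an empty residual array in rank-deficient cases
            cand_rss[i] = float(rss[0]) if rss.size else float(np.sum((y - cols @ coef) ** 2))
        i_best = min(cand_rss, key=cand_rss.get)
        if best_rss - cand_rss[i_best] <= tol * best_rss:
            break  # improvement too small: stop the discovery stage
        chosen.append(i_best)
        best_rss = cand_rss[i_best]
    return chosen
```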
### VI-B Dynamic ER graphs
Comparisons of the receiver operating characteristic (ROC) curves of lasso and
oCSE by varying the parameters $\lambda$ and $\theta$ and the upper bounds on
the ROC curve resulted from Proposition 4 on dynamic ER graphs are shown in
Figs. 2 and 3.
Figure 2: ROC curves of oCSE and lasso and the upper bound based on
$\mathsf{LB}_{\text{side-info}}$ for dynamic ER graphs with $n=10$, $p=0.2$,
$T=20$, and $\nu^{2}=0$, averaged over 100 simulations. Figure 3: ROC curves
of oCSE and lasso and the upper bound based on $\mathsf{LB}_{\text{side-
info}}$ for dynamic ER graphs with $n=10$, $p=0.2$, $T=20$, and $\nu^{2}=1$,
averaged over 100 simulations.
The simulation code can be found at [19].
## VII Discussion
There is a significant gap between the information lower bound (upper bound on
the ROC curve) and the algorithm performance, especially for small spectral
radius (high signal-to-noise ratio) settings. Further research to close the
gap is warranted.
## References
* [1] D. Marbach, J. C. Costello, R. Küffner, N. M. Vega, R. J. Prill, D. M. Camacho, K. R. Allison, M. Kellis, J. J. Collins, and G. Stolovitzky, “Wisdom of crowds for robust gene network inference,” _Nat Methods_ , vol. 9, no. 8, pp. 796–804, Jul. 2012.
* [2] J. Sun, D. Taylor, and E. M. Bollt, “Causal network inference by optimal causation entropy,” _SIAM Journal on Applied Dynamical Systems_ , vol. 14, no. 1, pp. 73–106, Jan. 2015.
* [3] M. Simchowitz, H. Mania, S. Tu, M. I. Jordan, and B. Recht, “Learning without mixing: Towards a sharp analysis of linear system identification,” _CoRR_ , vol. abs/1802.08334, 2018. [Online]. Available: http://arxiv.org/abs/1802.08334
* [4] S. Fattahi and S. Sojoudi, “Sample complexity of sparse system identification problem,” _CoRR_ , vol. abs/1803.07753, 2018. [Online]. Available: http://arxiv.org/abs/1803.07753
* [5] S. Fattahi, N. Matni, and S. Sojoudi, “Learning sparse dynamical systems from a single sample trajectory,” _CoRR_ , vol. abs/1904.09396, 2019. [Online]. Available: http://arxiv.org/abs/1904.09396
* [6] S. Oymak and N. Ozay, “Non-asymptotic identification of LTI systems from a single trajectory,” in _2019 American Control Conference (ACC)_. IEEE, Jul. 2019.
* [7] S. Lale, K. Azizzadenesheli, B. Hassibi, and A. Anandkumar, “Logarithmic regret bound in partially observable linear dynamical systems,” _CoRR_ , vol. abs/2003.11227, 2020. [Online]. Available: https://arxiv.org/abs/2003.11227
* [8] J. Bento, M. Ibrahimi, and A. Montanari, “Learning networks of stochastic differential equations,” in _Advances in Neural Information Processing Systems (NIPS)_ , 2010, pp. 172–180. [Online]. Available: http://papers.nips.cc/paper/4055-learning-networks-of-stochastic-differential-equations.pdf
* [9] J. Fuchs, “Recovery of exact sparse representations in the presence of bounded noise,” _IEEE Transactions on Information Theory_ , vol. 51, no. 10, pp. 3601–3608, Oct. 2005.
* [10] J. Tropp, “Just relax: convex programming methods for identifying sparse signals in noise,” _IEEE Transactions on Information Theory_ , vol. 52, no. 3, pp. 1030–1051, Mar. 2006.
* [11] P. Zhao and B. Yu, “On model selection consistency of Lasso,” _J Mach Learn Res_ , vol. 7, pp. 2541–2563, Nov. 2006.
* [12] N. Meinshausen and P. Bühlmann, “High-dimensional graphs and variable selection with the lasso,” _The Annals of Statistics_ , vol. 34, no. 3, pp. 1436–1462, Jun. 2006.
* [13] M. J. Wainwright, “Sharp thresholds for high-dimensional and noisy sparsity recovery using $\ell_{1}$-constrained quadratic programming (Lasso),” _IEEE Transactions on Information Theory_ , vol. 55, no. 5, pp. 2183–2202, May 2009.
* [14] Y. Jedra and A. Proutiere, “Sample complexity lower bounds for linear system identification,” in _2019 IEEE 58th Conference on Decision and Control (CDC)_. IEEE, Dec. 2019.
* [15] J. Bento, M. Ibrahimi, and A. Montanari, “Information theoretic limits on learning stochastic differential equations,” in _2011 IEEE International Symposium on Information Theory Proceedings_. IEEE, Jul. 2011.
* [16] J. B. A. Periera and M. Ibrahimi, “Support recovery for the drift coefficient of high-dimensional diffusions,” _IEEE Transactions on Information Theory_ , vol. 60, no. 7, pp. 4026–4049, Jul. 2014.
* [17] T. Kailath, “The divergence and Bhattacharyya distance measures in signal selection,” _IEEE Transactions on Communications_ , vol. 15, no. 1, pp. 52–60, Feb. 1967.
* [18] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” _Journal of Machine Learning Research_ , vol. 12, pp. 2825–2830, 2011.
* [19] X. Kang, “Causal network inference simulations,” Feb. 2021. [Online]. Available: https://github.com/Veggente/net-inf-eval
* [20] H. V. Poor, _An Introduction to Signal Detection and Estimation_. Springer-Verlag New York, 1994.
* [21] J. H. Shapiro, “Bounds on the area under the ROC curve,” _Journal of the Optical Society of America A_ , vol. 16, no. 1, p. 53, Jan. 1999.
## Appendix A Mutual incoherence property in dynamic ER networks
For a covariance matrix $Q$, let $\mathcal{A}_{j}$ be the nonzero index set of
column $j$ of $Q$. The mutual incoherence property in [4] requires that
$\mathsf{MIP}\leq 1-\gamma_{0}$ for some positive $\gamma_{0}$, where
$\mathsf{MIP}\triangleq\max_{1\leq j\leq
n}\max_{i\notin\mathcal{A}_{j}}\|Q_{i,\mathcal{A}_{j}}Q_{\mathcal{A}_{j},\mathcal{A}_{j}}^{-1}\|_{1}.$
For the dynamic ER networks as defined in Definition 1 starting from an all-
zero state, the average MIP at time $T$ is shown in Fig. 4. It can be seen
that the average MIP can be greater than $1$. As a result, the mutual
incoherence property does not hold in general.
Figure 4: Average mutual incoherence parameters for dynamic ER networks with
200 vertices and 0.05 probability of connection. Each point is averaged over
10 simulations.
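The MIP defined above can be computed directly from $Q$; the helper below (name ours) skips columns whose nonzero set is empty or covers every index, since the inner maximum is then over an empty set:

```python
import numpy as np

def mip(Q):
    """Mutual incoherence parameter
    max_j max_{i not in A_j} ||Q_{i,A_j} Q_{A_j,A_j}^{-1}||_1,
    where A_j is the nonzero index set of column j of Q."""
    n = Q.shape[0]
    worst = 0.0
    for j in range(n):
        A_j = np.flatnonzero(Q[:, j])
        if A_j.size == 0 or A_j.size == n:
            continue  # inner maximum is over an empty set
        QAA_inv = np.linalg.inv(Q[np.ix_(A_j, A_j)])
        for i in np.setdiff1d(np.arange(n), A_j):
            worst = max(worst, float(np.sum(np.abs(Q[i, A_j] @ QAA_inv))))
    return worst
```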
## Appendix B Bhattacharyya bounds on average error probability for BHT
This section provides a proof of Lemma 1 and discusses the (non)-concavity
properties of the related quantities in BHT.
###### Proof:
Let $\bar{\pi}\triangleq 1-\pi$. Integrating the identity $\pi
f(y)+\bar{\pi}g(y)=2((\pi f(y))\wedge(\bar{\pi}g(y)))+|\pi
f(y)-\bar{\pi}g(y)|$ over $y$ yields
$1=2p_{e}^{\mathrm{opt}}+\int|\pi f(y)-\bar{\pi}g(y)|\mathop{}\\!\mathrm{d}y.$
(9)
The Cauchy–Schwarz inequality yields
$\displaystyle\quad\int|\pi f(y)-\bar{\pi}g(y)|\mathop{}\\!\mathrm{d}y$
$\displaystyle=\int\left|\sqrt{\pi
f(y)}+\sqrt{\bar{\pi}g(y)}\right|\cdot\left|\sqrt{\pi
f(y)}-\sqrt{\bar{\pi}g(y)}\right|\mathop{}\\!\mathrm{d}y$
$\displaystyle\leq\sqrt{\int\left(\sqrt{\pi
f(y)}+\sqrt{\bar{\pi}g(y)}\right)^{2}\mathop{}\\!\mathrm{d}y}$
$\displaystyle\quad\cdot\sqrt{\int\left(\sqrt{\pi
f(y)}-\sqrt{\bar{\pi}g(y)}\right)^{2}\mathop{}\\!\mathrm{d}y}$
$\displaystyle=\sqrt{(1+2\sqrt{\pi\bar{\pi}}\rho)(1-2\sqrt{\pi\bar{\pi}}\rho)}$
$\displaystyle=\sqrt{1-4\pi\bar{\pi}\rho^{2}}.$ (10)
Combining (9) and (10) yields
$p_{e}^{\mathrm{opt}}\geq\frac{1}{2}\left[1-\sqrt{1-4\pi\bar{\pi}\rho^{2}}\right].$
For the other direction, note that $(\pi
f(y))\wedge(\bar{\pi}g(y))\leq\sqrt{\pi\bar{\pi}f(y)g(y)}$. Integrating over
$y$ yields $p_{e}^{\mathrm{opt}}\leq\sqrt{\pi\bar{\pi}}\rho$.
Using the fact $\sqrt{1-u}\leq 1-\frac{u}{2}$ for $0\leq u\leq 1$ (square both
sides to check), one gets
$\frac{1}{2}(1-\sqrt{1-4\pi(1-\pi)\rho^{2}})\geq\pi(1-\pi)\rho^{2}$ (see also
eq. (III.C.23) in Section III.C.2 of [20]).
Proof of the tensorization of BC is left to the reader. ∎
In the rest of this section we study the joint concavity of the average error
probability and its upper and lower bounds in Lemma 1 in terms of the
distributions $f$ and $g$. The following two lemmas show that the minimum
average error probability and the upper bound are jointly concave in the two
distributions. The two lower bounds are not concave in general per Example 2,
but Proposition 5 shows $\rho^{2}$ is jointly concave in a binary case.
Besides, Proposition 6 shows $\rho^{2}$ is concave in one distribution with
the other fixed.
###### Lemma 2
For fixed $\pi$, the minimum average error probability $p_{e}^{\mathrm{opt}}$
in (5) is jointly concave in $f$ and $g$.
###### Proof:
For any $f_{1}$, $f_{2}$, $g_{1}$, $g_{2}$, and $\alpha$, let
$f^{\prime}=\alpha f_{1}+(1-\alpha)f_{2}$ and let $g^{\prime}=\alpha
g_{1}+(1-\alpha)g_{2}$. Then $f^{\prime}$ and $g^{\prime}$ are the mixture
distributions of $f_{1}$, $f_{2}$ and $g_{1}$, $g_{2}$ with mixture weights
$\alpha$ and $1-\alpha$. It then suffices to show for any $f_{1}$, $f_{2}$,
$g_{1}$, $g_{2}$, and $\alpha$, we have
$p_{e}^{\mathrm{opt}}(f^{\prime},g^{\prime})\geq\alpha
p_{e}^{\mathrm{opt}}(f_{1},g_{1})+(1-\alpha)p_{e}^{\mathrm{opt}}(f_{2},g_{2}).$
(11)
Indeed, the left-hand side of (11) is the optimal average error probability
based on an observation of the two mixture distributions, while the right-hand
side of (11) is the optimal average error probability based on the observation
as well as the side information of a binary random variable indicating the
mixture component. Since side information cannot hurt, we obtain (11). ∎
###### Lemma 3
The BC $\rho$ is jointly concave in $f$ and $g$.
###### Proof:
It is easy to see that the mapping $\phi\colon(x,y)\mapsto\sqrt{xy}$ is
jointly concave by checking the Hessian. Then $\rho$ is also jointly concave
because a summation or integration of concave functions is still concave. ∎
Example 2 shows $\mathsf{LB}_{\text{direct}}$ defined in (7) is not concave.
The same example also demonstrates that $\rho^{2}$ is not concave. The
following proposition shows that $\rho^{2}$ is however indeed concave for
discrete distributions taking only two values.
###### Proposition 5
For binary-valued distributions $f$ and $g$, the BC
$\rho^{2}(f,g)=\sqrt{f_{1}g_{1}}+\sqrt{f_{2}g_{2}}$ is jointly concave in
$(f,g)$.
The proof can be done by checking the Hessian is negative semi-definite and is
left to the reader.
###### Proposition 6
$\rho^{2}(f,\cdot)$ is concave for fixed $f$.
###### Proof:
Applying the reverse Minkowski inequality immediately yields the result. ∎
## Appendix C Bounds on the area under the ROC curve
This paper presents lower bounds on error probabilities which were shown as
upper bounds on the ROC curve. This section briefly describes the
corresponding upper bounds on the area under the ROC curve. For the Bayes
optimal decision rules $\delta(\pi)$ defined in Section III-A, the area under
the ROC curve, denoted by AUC, is traced out by varying $\pi$ for the pair
$(1-P^{-}_{\delta(\pi)},P^{+}_{\delta(\pi)})$. Lemma 1 implies the following
upper bound on AUC.
###### Corollary 1
$\mathsf{AUC}\leq 1-\rho^{4}/6$.
###### Proof:
For any $\pi\in[0,1]$ and any $\alpha\in[0,1]$, let $\bar{\pi}=1-\pi$ and
$\bar{\alpha}=1-\alpha$. By Lemma 1,
$\alpha
P^{-}_{\delta(\pi)}+\bar{\alpha}P^{+}_{\delta(\pi)}\geq\alpha\bar{\alpha}\rho^{2}.$
Using the fact that for any nonnegative $a$, $b$, and $c$,
$(\forall\alpha\in[0,1],\alpha
a+\bar{\alpha}b\geq\alpha\bar{\alpha}c)\iff\sqrt{a}+\sqrt{b}\geq\sqrt{c},$
we can eliminate the parameter $\alpha$ to get
$P^{-}_{\delta(\pi)}\geq\left(\rho-\sqrt{P^{+}_{\delta(\pi)}}\right)^{2}$.
Then by excluding the impossible region we have $\mathsf{AUC}\leq
1-\int_{0}^{\rho^{2}}(\rho-\sqrt{x})^{2}\mathop{}\\!\mathrm{d}x=1-\rho^{4}/6$.
∎
Similarly, the tighter lower bound on $p_{e}^{\mathrm{opt}}$ in Lemma 1(a)
implies a tighter upper bound on AUC. The tighter AUC bound does not have a
nice analytical form, but can nevertheless be numerically computed. The two
bounds and the bound from Shapiro [21] ($\mathsf{AUC}\leq
1-(1-\sqrt{1-\rho^{2}})^{2}/2$) are shown in Fig. 5.
Figure 5: Comparison of upper bounds on AUC.
It can be seen that our numerical bound outperforms Shapiro’s bound for all
values of $\rho$, and our simple analytical bound in Corollary 1 also slightly
outperforms Shapiro’s bound when $\rho<0.65$.
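The crossover between the two analytical bounds can be checked numerically. The following sketch (not part of the paper's code) evaluates both bounds and the excluded-region integral from the proof of Corollary 1:

```python
import numpy as np

def auc_bound_corollary1(rho):
    """Corollary 1: AUC <= 1 - rho^4 / 6."""
    return 1.0 - rho**4 / 6.0

def auc_bound_shapiro(rho):
    """Shapiro's bound: AUC <= 1 - (1 - sqrt(1 - rho^2))^2 / 2."""
    return 1.0 - (1.0 - np.sqrt(1.0 - rho**2))**2 / 2.0

def excluded_area(rho, n=100_000):
    """Midpoint-rule approximation of the excluded region's area,
    int_0^{rho^2} (rho - sqrt(x))^2 dx, which equals rho^4 / 6."""
    r2 = rho**2
    x = (np.arange(n) + 0.5) / n * r2
    return ((rho - np.sqrt(x))**2).mean() * r2
```

For instance, at $\rho=0.5$ Corollary 1 gives the smaller (tighter) upper bound, while at $\rho=0.8$ Shapiro's bound is tighter, consistent with a crossover near $\rho\approx 0.65$.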
# Manuscript Title: with Forced Linebreak
††thanks: A footnote to the article title
Nicholas Sohre and Stephen J. Guy<EMAIL_ADDRESS>(also at Physics Department, XYZ University)
University of Minnesota, Department of Computer Science & Engineering
###### Abstract
An article usually includes an abstract, a concise summary of the work covered
at length in the main body of the article.
Usage: Secondary publications and information retrieval purposes.
Structure: You may use the description environment to structure your abstract;
use the optional argument of the `\item` command to give the category of each
item.
††preprint: APS/123-QED
## I First-level heading:
The line break was forced via \\\
This sample document demonstrates proper use of REVTeX 4.2 (and LaTeX 2ε) in
manuscripts prepared for submission to APS journals. Further information can
be found in the REVTeX 4.2 documentation included in the distribution or
available at http://journals.aps.org/revtex/.
When commands are referred to in this example file, they are always shown with
their required arguments, using normal TeX format. In this format, `#1`, `#2`,
etc. stand for required author-supplied arguments to commands. For example, in
`\section{#1}` the `#1` stands for the title text of the author’s section
heading, and in `\title{#1}` the `#1` stands for the title text of the paper.
Line breaks in section headings at all levels can be introduced using \\\\. A
blank input line tells TeX that the paragraph has ended. Note that top-level
section headings are automatically uppercased. If a specific letter or word
should appear in lowercase instead, you must escape it using `\lowercase{#1}`
as in the word “via” above.
### I.1 Second-level heading: Formatting
This file may be formatted in either the preprint or reprint style; the
reprint format mimics final journal output. Either format may be used for
submission purposes; however, letter-sized paper should be used when
submitting to APS journals.
#### I.1.1 Wide text (A level-3 head)
The widetext environment will make the text the width of the full page, as on
page 10. (Note the use the `\pageref{#1}` command to refer to the page
number.)
##### Note (Fourth-level head is run in)
The width-changing commands only take effect in two-column formatting. There
is no effect if text is in a single column.
### I.2 Citations and References
A citation in text uses the command `\cite{#1}` or `\onlinecite{#1}` and
refers to an entry in the bibliography. An entry in the bibliography is a
reference to another document.
#### I.2.1 Citations
Because REVTeX uses the `natbib` package of Patrick Daly, the entire
repertoire of commands in that package are available for your document; see
the `natbib` documentation for further details. Please note that REVTeX
requires version 8.31a or later of `natbib`.
##### Syntax
The argument of `\cite` may be a single _key_ , or may consist of a comma-
separated list of keys. The citation _key_ may contain letters, numbers, the
dash (-) character, or the period (.) character. New with natbib 8.3 is an
extension to the syntax that allows for a star (*) form and two optional
arguments on the citation key itself. The syntax of the `\cite` command is
thus (informally stated)
> where _optarg+key_ signifies
> where _pre_ and _post_ is whatever text you wish to place at the beginning
> and end, respectively, of the bibliographic reference (see Ref. [witten2001]
> and the two under Ref. [feyn54]). (Keep in mind that no automatic space or
> punctuation is applied.) It is highly recommended that you put the entire
> _pre_ or _post_ portion within its own set of braces, for example: `\cite`
> `{` [ `{`_text_`}`]_key_`}`. The extra set of braces will keep LaTeX out of
> trouble if your _text_ contains the comma (,) character.
The star (*) modifier to the _key_ signifies that the reference is to be
merged with the previous reference into a single bibliographic entry, a common
idiom in APS and AIP articles (see below, Ref. [epr]). When references are
merged in this way, they are separated by a semicolon instead of the period
(full stop) that would otherwise appear.
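A minimal source-level illustration of the star form (the keys and pre/post text here are the ones used in the example citations later in this sample):

```latex
% The * on the second key merges it with the previous reference into a
% single bibliographic entry, separated by a semicolon:
\cite{[See the explanation of time travel in ]feyn54,
      *[The classical relativistic treatment of ][ is a relative classic]epr}
```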
##### Eliding repeated information
When a reference is merged, some of its fields may be elided: for example,
when the author matches that of the previous reference, it is omitted. If both
author and journal match, both are omitted. If the journal matches, but the
author does not, the journal is replaced by _ibid._ , as exemplified by Ref.
[epr]. These rules embody common editorial practice in APS and AIP journals
and will only be in effect if the markup features of the APS and AIP BibTeX
styles is employed.
##### The options of the cite command itself
Please note that optional arguments to the _key_ change the reference in the
bibliography, not the citation in the body of the document. For the latter,
use the optional arguments of the `\cite` command itself: `\cite` *[_pre-
cite_][_post-cite_]`{`_key-list_`}`.
#### I.2.2 Example citations
By default, citations are numerical[Beutler1994]. Author-year citations are
used when the journal is RMP. To give a textual citation, use
`\onlinecite{#1}`: Refs. witten2001, Bire82 (and references therein). By
default, the natbib package automatically sorts your citations into numerical
order and “compresses” runs of three or more consecutive numerical citations.
REVTeX provides the ability to automatically change the punctuation when
switching between journal styles that provide citations in square brackets and
those that use a superscript style instead. This is done through the
citeautoscript option. For instance, the journal style prb automatically
invokes this option because Physical Review B uses superscript-style
citations. The effect is to move the punctuation, which normally comes after a
citation in square brackets, to its proper position before the superscript. To
illustrate, we cite several together [[See the explanation of time travel in]
feyn54, *[The classical relativistic treatment of][ is a relative classic]
epr, witten2001, Berman1983, Davies1998, Bire82], and once again in different order (Refs.
[epr, feyn54, Bire82, Berman1983, witten2001, Davies1998]). Note that the
citations were both compressed and sorted. Furthermore, running this sample
file under the prb option will move the punctuation to the correct place.
When the `prb` class option is used, the `\cite{#1}` command displays the
reference’s number as a superscript rather than in square brackets. Note that
the location of the `\cite{#1}` command should be adjusted for the reference
style: the superscript references in `prb` style must appear after
punctuation; otherwise the reference must appear before any punctuation. This
sample was written for the regular (non-prb) citation style. The command
`\onlinecite{#1}` in the prb style also displays the reference on the
baseline.
#### I.2.3 References
A reference in the bibliography is specified by a `\bibitem{#1}` command with
the same argument as the `\cite{#1}` command. `\bibitem{#1}` commands may be
crafted by hand or, preferably, generated by BibTeX. REVTeX 4.2 includes
BibTeX style files `apsrev4-2.bst`, `apsrmp4-2.bst` appropriate for Physical
Review and Reviews of Modern Physics, respectively.
#### I.2.4 Example references
This sample file employs the `\bibliography` command, which formats the
apssamp.bbl file and specifies which bibliographic databases are to be used by
BibTeX (one of these should be by arXiv convention apssamp.bib). Running
BibTeX (via bibtex apssamp) after the first pass of LaTeX produces the file
apssamp.bbl which contains the automatically formatted `\bibitem` commands
(including extra markup information via `\bibinfo` and `\bibfield` commands).
If not using BibTeX, you will have to create the `thebibliography`
environment and its `\bibitem` commands by hand.
Numerous examples of the use of the APS bibliographic entry types appear in
the bibliography of this sample document. You can refer to the apssamp.bib
file, and compare its information to the formatted bibliography itself.
### I.3 Footnotes
Footnotes, produced using the `\footnote{#1}` command, are usually integrated
into the bibliography alongside the other entries. Numerical citation styles
do this¹; author-year citation styles place the footnote at the bottom of the
text column. Note: due to the method used to place footnotes in the
bibliography, _you must re-run BibTeX every time you change any of your
document’s footnotes_.

¹ Automatically placing footnotes into the bibliography requires using BibTeX
to compile the bibliography.
## II Math and Equations
Inline math may be typeset using the `$` delimiters. Bold math symbols may be
achieved using the `bm` package and the `\bm{#1}` command it supplies. For
instance, a bold $\alpha$ can be typeset as `$\bm{\alpha}$` giving
$\bm{\alpha}$. Fraktur and Blackboard (or open face or double struck)
characters should be typeset using the `\mathfrak{#1}` and `\mathbb{#1}`
commands respectively. Both are supplied by the amssymb package. For example,
`$\mathbb{R}$` gives $\mathbb{R}$ and `$\mathfrak{G}$` gives $\mathfrak{G}$
In LaTeX there are many different ways to display equations, and a few
preferred ways are noted below. Displayed math will center by default. Use the
class option `fleqn` to flush equations left.
Below we have numbered single-line equations; this is the most common type of
equation in Physical Review:
$\displaystyle\chi_{+}(p)\lesssim\left[2|{\bf p}|(|{\bf p}|+p_{z})\right]^{-1/2}\left(\begin{array}[]{c}|{\bf p}|+p_{z}\\ p_{x}+ip_{y}\end{array}\right)\;,$ (3)
$\displaystyle\left\{\openone 234567890abc123\alpha\beta\gamma\delta 1234556\alpha\beta\frac{1\sum^{a}_{b}}{A^{2}}\right\}.$ (4)
Note the open one in Eq. (4).
Not all numbered equations will fit within a narrow column this way. The
equation number will move down automatically if it cannot fit on the same line
with a one-line equation:
$\left\{ab12345678abc123456abcdef\alpha\beta\gamma\delta 1234556\alpha\beta\frac{1\sum^{a}_{b}}{A^{2}}\right\}.$ (5)
When the `\label{#1}` command is used [cf. input for Eq. (4)], the equation
can be referred to in text without knowing the equation number that TeX will
assign to it. Just use `\ref{#1}`, where `#1` is the same name that used in
the `\label{#1}` command.
Unnumbered single-line equations can be typeset using the `\[`, `\]` format:
$g^{+}g^{+}\rightarrow
g^{+}g^{+}g^{+}g^{+}\dots~{},~{}~{}q^{+}q^{+}\rightarrow
q^{+}g^{+}g^{+}\dots~{}.$
### II.1 Multiline equations
Multiline equations are obtained by using the `eqnarray` environment. Use the
`\nonumber` command at the end of each line to avoid assigning a number:
$\displaystyle{\cal M}=ig_{Z}^{2}(4E_{1}E_{2})^{1/2}(l_{i}^{2})^{-1}\delta_{\sigma_{1},-\sigma_{2}}(g_{\sigma_{2}}^{e})^{2}\chi_{-\sigma_{2}}(p_{2})\times[\epsilon_{j}l_{i}\epsilon_{i}]_{\sigma_{1}}\chi_{\sigma_{1}}(p_{1}),$ (6)
$\displaystyle\sum|M^{\text{viol}}_{g}|^{2}=g^{2n-4}_{S}(Q^{2})~{}N^{n-2}(N^{2}-1)\times\left(\sum_{i<j}\right)\sum_{\text{perm}}\frac{1}{S_{12}}\frac{1}{S_{12}}\sum_{\tau}c^{f}_{\tau}~{}.$ (7)
Note: Do not use `\label{#1}` on a line of a multiline equation if `\nonumber`
is also used on that line. Incorrect cross-referencing will result. Notice the
use `\text{#1}` for using a Roman font within a math environment.
To set a multiline equation without _any_ equation numbers, use the
`\begin{eqnarray*}`, `\end{eqnarray*}` format:
$\displaystyle\sum|M^{\text{viol}}_{g}|^{2}=g^{2n-4}_{S}(Q^{2})~{}N^{n-2}(N^{2}-1)\times\left(\sum_{i<j}\right)\left(\sum_{\text{perm}}\frac{1}{S_{12}S_{23}S_{n1}}\right)\frac{1}{S_{12}}~{}.$
To obtain numbers not normally produced by the automatic numbering, use the
`\tag{#1}` command, where `#1` is the desired equation number. For example, to
get an equation number of (8),
$g^{+}g^{+}\rightarrow
g^{+}g^{+}g^{+}g^{+}\dots~{},~{}~{}q^{+}q^{+}\rightarrow
q^{+}g^{+}g^{+}\dots~{}.$ (8)
##### A few notes on tags
`\tag{#1}` requires the amsmath package. Place the `\tag{#1}` command before
the `\label{#1}`, if any. The numbering produced by `\tag{#1}` does not affect
the automatic numbering in REVTeX; therefore, the number must be known ahead
of time, and it must be manually adjusted if other equations are added.
`\tag{#1}` works with both single-line and multiline equations. `\tag{#1}`
should only be used in exceptional cases—do not use it to number many
equations in your paper. Please note that this feature of the amsmath package
is _not_ compatible with the hyperref (6.77u) package.
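For instance, the manually numbered equation above could be produced by source along these lines (requires the amsmath package; `\tag` placed before any `\label`):

```latex
\begin{equation}
  g^{+}g^{+}\rightarrow g^{+}g^{+}g^{+}g^{+}\dots\,,\qquad
  q^{+}q^{+}\rightarrow q^{+}g^{+}g^{+}\dots\,.
  \tag{8}\label{eq:tagged}
\end{equation}
```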
Enclosing display math within `\begin{subequations}` and `\end{subequations}`
will produce a set of equations that are labeled with letters, as shown in
Eqs. (9b) and (9a) below. You may include any number of single-line and
multiline equations, although it is probably not a good idea to follow one
display math directly after another.
$\displaystyle{\cal M}=ig_{Z}^{2}(4E_{1}E_{2})^{1/2}(l_{i}^{2})^{-1}(g_{\sigma_{2}}^{e})^{2}\chi_{-\sigma_{2}}(p_{2})\times[\epsilon_{i}]_{\sigma_{1}}\chi_{\sigma_{1}}(p_{1}).$ (9a)
$\left\{abc123456abcdef\alpha\beta\gamma\delta 1234556\alpha\beta\frac{1\sum^{a}_{b}}{A^{2}}\right\},$ (9b)
Giving a `\label{#1}` command directly after the `\begin{subequations}`,
allows you to reference all the equations in the subequations environment. For
example, the equations in the preceding subequations environment were Eqs.
(9).
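A sketch of the corresponding source (the labels here are illustrative):

```latex
\begin{subequations}
\label{eq:whole}      % refers to the full set, e.g. Eqs. (9)
\begin{equation}
{\cal M}=ig_{Z}^{2}(4E_{1}E_{2})^{1/2}(l_{i}^{2})^{-1}
         (g_{\sigma_{2}}^{e})^{2}\chi_{-\sigma_{2}}(p_{2})
         \times[\epsilon_{i}]_{\sigma_{1}}\chi_{\sigma_{1}}(p_{1}).
\label{subeq:1}
\end{equation}
\begin{equation}
\left\{abc123456abcdef\alpha\beta\gamma\delta
1234556\alpha\beta\frac{1\sum^{a}_{b}}{A^{2}}\right\},
\label{subeq:2}
\end{equation}
\end{subequations}
```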
#### II.1.1 Wide equations
The equation that follows is set in a wide format, i.e., it spans the full
page. The wide format is reserved for long equations that cannot easily be set
in a single column:
${\cal
R}^{(\text{d})}=g_{\sigma_{2}}^{e}\left(\frac{[\Gamma^{Z}(3,21)]_{\sigma_{1}}}{Q_{12}^{2}-M_{W}^{2}}+\frac{[\Gamma^{Z}(13,2)]_{\sigma_{1}}}{Q_{13}^{2}-M_{W}^{2}}\right)+x_{W}Q_{e}\left(\frac{[\Gamma^{\gamma}(3,21)]_{\sigma_{1}}}{Q_{12}^{2}-M_{W}^{2}}+\frac{[\Gamma^{\gamma}(13,2)]_{\sigma_{1}}}{Q_{13}^{2}-M_{W}^{2}}\right)\;.$
(10)
This is typed to show how the output appears in wide format. (Incidentally,
since there is no blank line between the equation environment above and the
start of this paragraph, this paragraph is not indented.)
## III Cross-referencing
REVTeX will automatically number such things as sections, footnotes,
equations, figure captions, and table captions. In order to reference them in
text, use the `\label{#1}` and `\ref{#1}` commands. To reference a particular
page, use the `\pageref{#1}` command.
The `\label{#1}` should appear within the section heading, within the footnote
text, within the equation, or within the table or figure caption. The
`\ref{#1}` command is used in text at the point where the reference is to be
displayed. Some examples: Section I on page I, Table 1,
Table 1: A table that fits into a single column of a two-column layout. Note that REVTeX 4 adjusts the intercolumn spacing so that the table fills the entire width of the column. Table captions are numbered automatically. This table illustrates left-, center-, decimal- and right-aligned columns, along with the use of the ruledtabular environment which sets the Scotch (double) rules above and below the alignment, per APS style.

Left^a | Centered^b | Decimal | Right
---|---|---|---
1 | 2 | $3.001$ | 4
10 | 20 | $30$ | 40
100 | 200 | $300.0$ | 400

^a Note a. ^b Note b.
and Fig. 1.
Figure 1: A figure caption. The figure captions are automatically numbered.
## IV Floats: Figures, Tables, Videos, etc.
Figures and tables are usually allowed to “float”, which means that their
placement is determined by LaTeX, while the document is being typeset.
Use the figure environment for a figure, the table environment for a table. In
each case, use the `\caption` command within to give the text of the figure or
table caption along with the `\label` command to provide a key for referring
to this figure or table. The typical content of a figure is an image of some
kind; that of a table is an alignment.
Figure 2: Use the figure* environment to get a wide figure that spans the page in two-column formatting.

Table 2: This is a wide table that spans the full page width in a two-column layout. It is formatted using the table* environment. It also demonstrates the use of \multicolumn in rows with entries that span more than one column (the $D_{4h}^{1}$ and $D_{4h}^{5}$ headers each span two columns).

 | $D_{4h}^{1}$ | $D_{4h}^{5}$
---|---|---
Ion | 1st alternative | 2nd alternative | 1st alternative | 2nd alternative
K | $(2e)+(2f)$ | $(4i)$ | $(2c)+(2d)$ | $(4f)$
Mn | $(2g)$^a | $(a)+(b)+(c)+(d)$ | $(4e)$ | $(2a)+(2b)$
Cl | $(a)+(b)+(c)+(d)$ | $(2g)$^b | $(4e)^{\text{a}}$ |
He | $(8r)^{\text{a}}$ | $(4j)^{\text{a}}$ | $(4g)^{\text{a}}$ |
Ag | | $(4k)^{\text{a}}$ | | $(4h)^{\text{a}}$

^a The $z$ parameter of these positions is $z\sim\frac{1}{4}$. ^b Here’s the first, from Ref. feyn54.
Insert an image using either the graphics or graphix packages, which define
the `\includegraphics{#1}` command. (The two packages differ in respect of the
optional arguments used to specify the orientation, scaling, and translation
of the image.) To create an alignment, use the tabular environment.
The best place to locate the figure or table environment is immediately
following its first reference in text; this sample document illustrates this
practice for Fig. 1, which shows a figure that is small enough to fit in a
single column.
In exceptional cases, you will need to move the float earlier in the document,
as was done with Table 2: LaTeX’s float placement algorithms need to know
about a full-page-width float earlier.
Fig. 2 has content that is too wide for a single column, so the figure*
environment has been used.
Table 3: Numbers in columns Three–Five are aligned with the “d” column specifier (requires the dcolumn package). Non-numeric entries (those entries without a “.”) in a “d” column are aligned on the decimal point. Use the “D” specifier for more complex layouts.

One | Two | Three | Four | Five
---|---|---|---|---
one | two | three | four | five
He | 2 | $2.77234$ | $45672.$ | $0.69$
C^a | C^b | $12537.64$ | $37.66345$ | $86.37$

^a Some tables require footnotes. ^b Some tables need more than one footnote.
The content of a table is typically a tabular environment, giving rows of type
in aligned columns. Column entries are separated by `&`’s, and each row ends
with \\\\. The required argument for the tabular environment specifies how
data are aligned in the columns. For instance, entries may be centered,
left-justified, right-justified, or aligned on a decimal point. Extra
column-spacing may be specified as well, although REVTeX 4 sets this spacing
so that the columns fill the width of the table. Horizontal rules are typeset
using the `\hline` command. The doubled (or Scotch) rules that appear at the
top and bottom of a table can be achieved by enclosing the tabular environment
within a ruledtabular environment. Entries that span multiple columns can be
typeset using the `\multicolumn{#1}{#2}{#3}` command (for example, see the
first row of Table 2).
Tables 1, 2, 3, and 4
Table 4: A table with numerous columns that still fits into a single column. Here, several entries share the same footnote. Inspect the LaTeX input for this table to see exactly how it is done.

 | $r_{c}$ (Å) | $r_{0}$ (Å) | $\kappa r_{0}$ | | $r_{c}$ (Å) | $r_{0}$ (Å) | $\kappa r_{0}$
---|---|---|---|---|---|---|---
Cu | 0.800 | 14.10 | 2.550 | Sn^a | 0.680 | 1.870 | 3.700
Ag | 0.990 | 15.90 | 2.710 | Pb^b | 0.450 | 1.930 | 3.760
Au | 1.150 | 15.90 | 2.710 | Ca^c | 0.750 | 2.170 | 3.560
Mg | 0.490 | 17.60 | 3.200 | Sr^d | 0.900 | 2.370 | 3.720
Zn | 0.300 | 15.20 | 2.970 | Li^b | 0.380 | 1.730 | 2.830
Cd | 0.530 | 17.10 | 3.160 | Na^e | 0.760 | 2.110 | 3.120
Hg | 0.550 | 17.80 | 3.220 | K^e | 1.120 | 2.620 | 3.480
Al | 0.230 | 15.80 | 3.240 | Rb^c | 1.330 | 2.800 | 3.590
Ga | 0.310 | 16.70 | 3.330 | Cs^d | 1.420 | 3.030 | 3.740
In | 0.460 | 18.40 | 3.500 | Ba^e | 0.960 | 2.460 | 3.780
Tl | 0.480 | 18.90 | 3.550 | | | |

^a Here’s the first, from Ref. feyn54. ^b Here’s the second. ^c Here’s the third. ^d Here’s the fourth. ^e And etc.
show various effects. A table that fits in a single column employs the table
environment. Table 2 is a wide table, set with the table* environment. Long
tables may need to break across pages. The most straightforward way to
accomplish this is to specify the `[H]` float placement on the table or table*
environment. However, the LaTeX 2ε package longtable allows headers and
footers to be specified for each page of the table. A simple example of the
use of longtable can be found in the file summary.tex that is included with
the REVTeX 4 distribution.
There are two methods for setting footnotes within a table (these footnotes
will be displayed directly below the table rather than at the bottom of the
page or in the bibliography). The easiest and preferred method is just to use
the `\footnote{#1}` command. This will automatically enumerate the footnotes
with lowercase roman letters. However, it is sometimes necessary to have
multiple entries in the table share the same footnote. In this case, there is
no choice but to manually create the footnotes using `\footnotemark[#1]` and
`\footnotetext[#1]{#2}`. #1 is a numeric value. Each time the same value for
#1 is used, the same mark is produced in the table. The
`\footnotetext[#1]{#2}` commands are placed after the tabular environment.
Examine the LaTeX source and output for Tables 1 and 4 for examples.
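A minimal sketch of the shared-footnote mechanism (the values are taken from Table 4; this illustrates the structure, not the exact source of this sample):

```latex
\begin{table}
\caption{Entries sharing table footnotes.}
\begin{ruledtabular}
\begin{tabular}{lclc}
Pb\footnotemark[2] & 0.450 & Li\footnotemark[2] & 0.380 \\
Ca\footnotemark[3] & 0.750 & Rb\footnotemark[3] & 1.330
\end{tabular}
\end{ruledtabular}
% The \footnotetext commands go after the tabular environment;
% reusing the same numeric argument reuses the same mark:
\footnotetext[2]{Here's the second.}
\footnotetext[3]{Here's the third.}
\end{table}
```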
Video IV illustrates several features new with REVTeX 4.2, starting with the
video environment, which is in the same category as figure and table.

Video: http://link.aps.org/multimedia/PRSTPER/v4/i1/e010101 (Students explain
their initial idea about Newton’s third law to a teaching assistant. Clip (a):
same force. Clip (b): move backwards.)

The `\setfloatlink` command causes the title of the video to be a hyperlink to
the indicated URL; it may be used with any environment that takes the
`\caption` command. The `\href` command has the same significance as it does
in the context of the hyperref package: the second argument is a piece of text
to be typeset in your document; the first is its hyperlink, a URL.
Physical Review style requires that the initial citation of figures or tables
be in numerical order in text, so don’t cite Fig. 2 until Fig. 1 has been
cited.
###### Acknowledgements.
We wish to acknowledge the support of the author community in using REVTeX,
offering suggestions and encouragement, testing new versions, ….
## Appendix A Appendixes
To start the appendixes, use the `\appendix` command. This signals that all
following section commands refer to appendixes instead of regular sections.
Therefore, the `\appendix` command should be used only once—to setup the
section commands to act as appendixes. Thereafter normal section commands are
used. The heading for a section can be left empty. For example,
\appendix
\section{}
will produce an appendix heading that says “APPENDIX A” and
\appendix
\section{Background}
will produce an appendix heading that says “APPENDIX A: BACKGROUND” (note that
the colon is set automatically).
If there is only one appendix, then the letter “A” should not appear. This is
suppressed by using the star version of the appendix command (`\appendix*` in
the place of `\appendix`).
## Appendix B A little more on appendixes
Observe that this appendix was started by using
\section{A little more on appendixes}
Note the equation number in an appendix:
$E=mc^{2}.$ (11)
### B.1 A subsection in an appendix
You can use a subsection or subsubsection in an appendix. Note the numbering:
we are now in Appendix B.1.
Note the equation numbers in this appendix, produced with the subequations
environment:
$\displaystyle E=mc,$ (12a)
$\displaystyle E=mc^{2},$ (12b)
$\displaystyle E\gtrsim mc^{3}.$ (12c)
They turn out to be Eqs. (12a), (12b), and (12c).
# Evidence for a high-energy tail in the gamma-ray spectra of globular
clusters
Deheng Song,1 Oscar Macias,1,2,3 Shunsaku Horiuchi,1 Roland M. Crocker4 and
David M. Nataf5
1Center for Neutrino Physics, Department of Physics, Virginia Tech,
Blacksburg, VA 24061, USA
2Kavli Institute for the Physics and Mathematics of the Universe
(WPI),University of Tokyo, Kashiwa, Chiba 277-8583, Japan
3GRAPPA Institute, Institute of Physics, University of Amsterdam, 1098 XH
Amsterdam, The Netherlands
4Research School of Astronomy and Astrophysics, Australian National
University, Canberra, ACT 2611, Australia
5Center for Astrophysical Sciences and Department of Physics and Astronomy,
The Johns Hopkins University, Baltimore, MD 21218, USA
E-mail: dhsong@vt.edu; o.a.maciasramirez<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Millisecond pulsars are very likely the main source of gamma-ray emission from
globular clusters. However, the relative contributions of two separate
emission processes–curvature radiation from millisecond pulsar magnetospheres
vs. inverse Compton emission from relativistic pairs launched into the
globular cluster environment by millisecond pulsars–have long been unclear. To
address this, we search for evidence of inverse Compton emission in 8-year
Fermi-LAT data from the directions of 157 Milky Way globular clusters. We find
a mildly statistically significant (3.8$\sigma$) correlation between the
measured globular cluster gamma-ray luminosities and their photon field energy
densities. However, this may also be explained by a hidden correlation between
the photon field densities and the stellar encounter rates of globular
clusters. Analysed in toto, we demonstrate that the gamma-ray emission of
globular clusters can be resolved spectrally into two components: i) an
exponentially cut-off power law and ii) a pure power law. The latter
component, which we uncover at a significance of 8.2$\sigma$, has a power-law
index of 2.79 $\pm$ 0.25. It is most naturally interpreted as inverse Compton
emission by cosmic-ray electrons and positrons injected by millisecond
pulsars. We find the luminosity of this power-law component is comparable to,
or slightly smaller than, the luminosity of the curved component, suggesting
the fraction of millisecond pulsar spin-down luminosity into relativistic
leptons is similar to the fraction of the spin-down luminosity into prompt
magnetospheric radiation.
###### keywords:
gamma-rays:general – globular clusters: general – pulsars:general
††pubyear: 2021
††pagerange: Evidence for a high-energy tail in the gamma-ray spectra of globular clusters–D
## 1 Introduction
Over two dozen globular clusters (GCs) have been detected in $\gamma$ rays in
Fermi Large Area Telescope (LAT) data (Abdo et al., 2009a, 2010; Kong et al.,
2010; Tam et al., 2011; Zhou et al., 2015; Zhang et al., 2016). The
millisecond pulsar (MSP) populations of those GCs are believed to be the main
source of this $\gamma$-ray emission. In particular, MSPs have been firmly
established as $\gamma$-ray sources (Verbunt et al., 1996; Abdo et al., 2009b;
Espinoza et al., 2013; Abdo et al., 2013) and a large fraction of them have
been discovered in GCs (Camilo & Rasio, 2005). Recently, Fermi-LAT detected
$\gamma$-ray pulsations in two GCs (Freire et al., 2011; Johnson et al.,
2013), providing further support for this scenario.
The high-energy emission from MSPs emerges from the primary electrons
accelerated by them and from subsequent radiation by the secondary,
relativistic electron-positron pairs ($e^{\pm}$) created in their magnetospheres. In
particular, Harding et al. (2005) studied the curvature radiation (CR) of
primary electrons within MSP magnetospheres with a focus on GeV-scale
emission. Bednarek & Sitarek (2007) then considered a scenario in which
$e^{\pm}$, injected by MSPs, gradually diffuse through a GC, up-scattering
ambient photons, and thus producing inverse Compton (IC) $\gamma$-ray emission
in the GeV$-$TeV energy range. Venter et al. (2009) calculated the CR and IC
spectra for an ensemble of MSPs in the GCs 47 Tucanae and Terzan 5. Cheng et
al. (2010) found that the spectra of 47 Tucanae and seven other GCs can be
explained by IC alone, invoking background photons from the cosmic microwave
background (CMB) or Galactic infrared/stellar radiation. For a review of the
observations and models about the $\gamma$-ray emission from globular
clusters, see Tam et al. (2016). In general, the GeV emission mechanism of
MSPs remains in contention with CR, IC, and synchrotron radiation all proposed
(Harding, 2021).
Here, motivated by the increasing number of GCs detected in $\gamma$ rays, we
perform a collective statistical study of their properties in order to gain
insight into the nature of their high-energy emission. Our particular aim is
to investigate the importance of the contribution of IC emission to the
overall $\gamma$-ray emission of GCs.
Relations between the detected GC gamma-ray luminosities and properties of GC
can be used to probe the origins of $\gamma$-ray emission and their underlying
sources. For example, correlations with the photon field energy density of GCs
could unveil the potential contribution from IC, and correlations with the
stellar encounter rate and metallicity could provide insight into the
dynamical formation of MSPs in GCs. Previous work here includes a study by
Abdo et al. (2010) that reported a correlation between the $\gamma$-ray
luminosity $L_{\gamma}$ and the stellar encounter rate of eight GCs. Hui et
al. (2011) studied a group of 15 $\gamma$-ray emitting GCs with 2 years of
Fermi data and found a positive correlation between $L_{\gamma}$ and,
respectively, encounter rate, metallicity, and Galactic photon energy density.
Hooper & Linden (2016) studied 25 GCs using 85 months of Fermi data, and found
that the $\gamma$-ray luminosity function of MSPs in GCs is consistent with
that applying to MSPs detected in the field. Lloyd et al. (2018) studied
$\gamma$-ray emission from high-latitude GCs and its connection to their X-ray
emission. de Menezes et al. (2019) reanalysed 9 years of Fermi data and found
23 $\gamma$-ray emitting GCs; they found that the metallicity only mildly
contributes to $L_{\gamma}$ while a very high encounter rate seemed to reduce
the $L_{\gamma}$ from GCs.
In parallel, modeling of GCs’ observed broadband spectral energy distributions
provides a handle on their CR and IC emissions. Recently, Kopp et al. (2013)
and Ndiyavala et al. (2019) modelled the multiwavelength emission from MSPs
considering a potential CR origin for GeV and IC emissions for TeV $\gamma$
rays, as well as synchrotron radiation for the radio and X-ray wavebands.
These models are successful in explaining the multiwavelength spectra of
Terzan 5. However, Terzan 5 is the only GC (perhaps) detected above TeV
energies (H. E. S. S. Collaboration et al., 2011). Detailed spectral modelling
similar to that presented by Kopp et al. (2013) and Ndiyavala et al. (2019) is
difficult for other GCs at present due to a lack of TeV $\gamma$-ray data.
Although Fermi-LAT is sensitive to $\gamma$-rays of up to $\approx 1$ TeV, the
photon count statistics at the highest energies are very low.
In the recently published Fermi-LAT fourth source catalog (4FGL; Abdollahi et
al., 2020), 30 GCs have been detected in GeV $\gamma$ rays. (The Fermi
collaboration has recently released an incremental update of the fourth source
catalog, 4FGL-DR2 for Data Release 2 (Ballet et al., 2020). The new catalog
uses 10 years of data, a 25% increase with respect to the 4FGL; however, only
1 new GC, NGC 362, has been detected. Given this marginal change, we retain
use of the 4FGL catalog constructed with the 8-year data set.) With such a number,
we can begin to carefully study the nature of the $\gamma$-ray emission from
GCs through a population study. In this paper, we repeat the bin-by-bin
analysis of the 4FGL data for the 157 known Milky Way GCs in the Harris (1996)
catalog (2010 edition: https://www.physics.mcmaster.ca/~harris/mwgc.dat). We
search for correlations between the $\gamma$-ray luminosity of the GCs and
other parameters of the GCs to probe which are good proxies for the
$\gamma$-ray luminosity and study potential IC contributions; to this end, we
consider the photon field energy densities, the stellar encounter rate, and
the metallicity of the GCs. Unlike previous studies of correlations of GC
$\gamma$-ray emission, we also consider the upper limits placed by null
detections, which we implement via a Kendall $\tau$ coefficient test
statistic. Furthermore, we also look for evidence of IC from the spectra of
GCs. For the first time, we implement a universal two-component model to study
the spectra of $\gamma$-ray-detected GCs. The two-component model comprises a
CR component, which is spectrally curved, plus an IC component modeled as a
power law in the energy range of interest.
The remainder of the paper is as follows: In Section 2 we discuss the choice
of GC samples and the data analysis procedure. Section 3 presents the
methodology and results of our correlation analysis. Section 4 describes the
spectral analysis method and reports the $e^{\pm}$ injection efficiency in the
GCs. We discuss the implications of our results in Section 5 and conclude in
Section 6.
## 2 $\gamma$-ray emission from globular clusters
In this section, we describe our choice of GC sample and the GCs’
$\gamma$-ray-related parameters. The Fermi data analysis process is reported
as well. For GCs with a $\gamma$-ray counterpart in the 4FGL, we update their
spectral parameters through a maximum likelihood procedure. For those not
detected in the 4FGL, we estimate their 95% C.L. $\gamma$-ray upper limits.
### 2.1 Globular cluster sample
We consider the Harris (1996) catalog (2010 edition), which contains
identifications and basic parameters for 157 GCs in the Milky Way. Here, we
reanalyse publicly available Fermi-LAT data from the direction of all GCs in
the Harris (1996) catalog. Figure 1 shows the spatial distribution of the GCs.
The top panel shows the projected direction of the GCs on the celestial plane
while the bottom two panels display their 3D coordinates. The GCs which are
detected in the 4FGL are marked by red stars while null detections are
indicated by green circles. Most $\gamma$-ray GCs are near the Sun (yellow
circle) or located in the Galactic bulge (assumed to be a sphere of 3 kpc
radius, grey circular area).
Figure 1: Spatial distribution of the 157 Milky Way GCs in the Harris (1996)
catalog. The top panel shows the all-sky spatial distribution in Galactic
coordinates. The middle and bottom panels display the three-dimensional (3D)
Cartesian coordinates of the GCs, in which the Sun (yellow circle) lies on
the negative x-axis. The $\gamma$-ray-detected GCs in the 4FGL catalog are
shown as red stars, and the GCs not detected in $\gamma$ rays are shown as
green circles. Most $\gamma$-ray-detected GCs are located near the Sun or in
the Galactic bulge (grey shaded area in the middle and bottom panels).
The origin of the $\gamma$-ray emission from GCs can be studied by comparing
its dependency on GC properties. IC emission is sensitive to the ambient
photon field on which the $e^{\pm}$ scatter. Hui et al. (2011) reported a
positive correlation between the $\gamma$-ray luminosity $L_{\gamma}$ and the
photon field energy density at the cluster location, indicating an IC
contribution. In the present work, we improve upon the Galactic radiation
field model used by Hui et al. (2011) by extracting the energy density of the
interstellar radiation at the locations of the GCs from the three-dimensional
interstellar radiation model in GALPROP v56 (http://galprop.stanford.edu/;
Porter et al., 2017; Jóhannesson et al., 2018). This is a fully 3D model that
combines the CMB, infrared, and optical photons of the Milky Way, denoted as
$u_{\text{MW}}$. In addition, photons from stars in the GCs are expected to
make a dominant contribution to the total, ambient radiation field. We
estimate this component by
$u_{\text{GC}}=\dfrac{L_{*}}{4\pi c\,r_{h}^{2}},$ (1)
where $L_{*}$ and $r_{h}$ are the stellar luminosity and the half-light radius
of the GC. The total photon field energy density is
$u_{\text{Total}}=u_{\text{MW}}+u_{\text{GC}}$.
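As a sanity check on equation (1), the following sketch evaluates $u_{\text{GC}}$ in convenient units (solar luminosities and parsecs). The cluster values in the example are hypothetical, chosen only to show that the formula yields the tens-to-hundreds of eV cm$^{-3}$ listed for $u_{\text{Total}}$ in Table 1.

```python
import math

# Physical constants (CGS)
L_SUN = 3.828e33        # erg s^-1
C_LIGHT = 2.998e10      # cm s^-1
PC_CM = 3.086e18        # cm per parsec
ERG_TO_EV = 6.242e11    # eV per erg

def u_gc(L_star_Lsun, r_h_pc):
    """Stellar photon energy density u_GC = L* / (4 pi c r_h^2), in eV cm^-3."""
    L = L_star_Lsun * L_SUN
    r = r_h_pc * PC_CM
    return L / (4.0 * math.pi * C_LIGHT * r**2) * ERG_TO_EV

# Hypothetical cluster: L* = 5e5 L_sun, r_h = 3 pc
print(f"u_GC = {u_gc(5e5, 3.0):.1f} eV cm^-3")  # a few tens of eV cm^-3
```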
A potential correlation between the $\gamma$-ray luminosity $L_{\gamma}$ and
the stellar encounter rate has been studied as a way to probe the dynamic
formation of MSPs in GCs (Abdo et al., 2010; Hui et al., 2011; de Menezes et
al., 2019). In the present work, we adopt the stellar encounter rate estimated
by Bahramian et al. (2013), which is defined as
$\Gamma_{c}=\frac{4\pi}{\sigma_{c}}\int\rho(r)^{2}r^{2}dr,$ (2)
where $\sigma_{c}$ is the velocity dispersion at the core radius, $\rho(r)$ is
the stellar density profile of the cluster, and the integration is performed
out to the half-light radius.
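Equation (2) can be evaluated numerically; the sketch below uses the trapezoidal rule with an illustrative King-like density profile (not the measured profiles used by Bahramian et al. 2013). Only relative values matter, since the catalog normalizes the rate of NGC 104 to 1000.

```python
import math

def encounter_rate(rho0, r_c, sigma_c, r_max, n=10000):
    """Gamma_c = (4 pi / sigma_c) * integral of rho(r)^2 r^2 dr (equation 2),
    evaluated by the trapezoidal rule out to r_max (e.g., the half-light
    radius). Illustrative King-like profile rho(r) = rho0/(1+(r/r_c)^2)^1.5."""
    def rho(r):
        return rho0 / (1.0 + (r / r_c) ** 2) ** 1.5
    h = r_max / n
    total = 0.0
    for i in range(n + 1):
        r = i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal end-point weights
        total += w * rho(r) ** 2 * r ** 2
    return 4.0 * math.pi / sigma_c * total * h
```

Because $\Gamma_{c}\propto\rho_{0}^{2}$, doubling the central density quadruples the rate, which is the scaling the normalized catalog values encode.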
Additionally, it has been argued (Hui et al., 2011; de Menezes et al., 2019)
that high metallicity [Fe/H] in the GCs could enhance the dynamical formation
of MSPs. The outer convective zone of metal-rich stars enables magnetic
braking, which assists orbital shrinking during binary formation. In this
analysis, we use the GC metallicities reported in the Harris (1996) catalog,
which summarizes spectroscopic or photometric estimates in the literature.
In summary, we consider the empirical dependence of the inferred $\gamma$-ray
luminosity of GCs on four parameters, namely, $u_{\text{MW}}$,
$u_{\text{Total}}$, $\Gamma_{c}$, and [Fe/H]. We summarize the values of these
parameters for the 30 $\gamma$-ray-detected GCs in Table 1. Also included are
the stellar masses M∗ of the GCs adopted from Baumgardt (2017), Sollima &
Baumgardt (2017), and Baumgardt & Hilker (2018). They are estimated from
N-body modelling of the velocity dispersion and surface density profiles. In
Section 3, we study the correlations between the $\gamma$-ray emission of GCs
and these parameters.
Table 1: Parameters and data analysis results for 30 $\gamma$-ray-detected
GCs.
Name | $\Gamma_{c}$a | [Fe/H]b | M∗c | $u_{\text{MW}}$d | $u_{\text{Total}}$e | $R_{\odot}$f | Fluxg | $L_{\gamma}$g | TSh
---|---|---|---|---|---|---|---|---|---
| | | ($10^{5}M_{\odot}$) | (eV cm$^{-3}$) | (eV cm$^{-3}$) | (kpc) | ($10^{-8}$ ph cm$^{-2}$ s$^{-1}$) | ($10^{34}$ erg s$^{-1}$) |
2MS-GC01 | … | … | … | 1.79 | 7.14 | 3.60 | $1.96\pm 0.26$ | $3.88\pm 0.81$ | 153.82
GLIMPSE01 | … | … | … | 1.55 | 30.23 | 4.20 | $2.55\pm 0.28$ | $8.79\pm 0.94$ | 535.61
GLIMPSE02 | … | -0.33 | … | 2.61 | >2.61 | 5.50 | $2.73\pm 0.25$ | $11.55\pm 1.57$ | 318.41
M 62 | 2470.00 | -1.18 | 6.76 | 2.14 | 293.14 | 6.80 | $0.98\pm 0.09$ | $9.16\pm 0.89$ | 1012.19
M 80 | 937.00 | -1.75 | 2.82 | 0.92 | 276.86 | 10.00 | $0.17\pm 0.07$ | $4.26\pm 1.39$ | 94.83
NGC 104 | 1000.00 | -0.72 | 8.13 | 0.55 | 31.12 | 4.50 | $1.34\pm 0.07$ | $5.61\pm 0.34$ | 4853.63
NGC 1904 | 126.00 | -1.60 | 1.66 | 0.29 | 173.13 | 12.90 | $0.11\pm 0.04$ | $2.32\pm 0.98$ | 23.84
NGC 2808 | 1210.00 | -1.14 | 8.13 | 0.38 | 467.35 | 9.60 | $0.19\pm 0.06$ | $3.43\pm 1.03$ | 90.30
NGC 5904 | 120.00 | -1.29 | 3.63 | 0.55 | 56.46 | 7.50 | $0.11\pm 0.04$ | $1.10\pm 0.47$ | 39.07
NGC 6139 | 407.00 | -1.65 | 3.47 | 1.15 | 161.34 | 10.10 | $0.29\pm 0.09$ | $5.82\pm 2.19$ | 59.29
NGC 6218 | 18.10 | -1.37 | 0.83 | 0.90 | 14.94 | 4.80 | $0.07\pm 0.05$ | $0.38\pm 0.20$ | 33.92
NGC 6304 | 150.00 | -0.45 | 1.62 | 2.33 | 23.95 | 5.90 | $0.10\pm 0.03$ | $1.09\pm 0.42$ | 21.71
NGC 6316 | 131.00 | -0.45 | 3.63 | 1.88 | 270.82 | 10.40 | $0.48\pm 0.11$ | $10.91\pm 2.13$ | 207.99
NGC 6341 | 265.00 | -2.31 | 3.09 | 0.42 | 97.31 | 8.30 | $0.05\pm 0.04$ | $0.62\pm 0.37$ | 15.84
NGC 6388 | 1770.00 | -0.55 | 10.47 | 1.30 | 1127.11 | 9.90 | $0.98\pm 0.09$ | $18.41\pm 1.63$ | 970.86
NGC 6397 | 146.00 | -2.02 | 0.89 | 0.92 | 3.75 | 2.30 | $0.11\pm 0.06$ | $0.09\pm 0.05$ | 17.21
NGC 6402 | 106.00 | -1.28 | 7.41 | 0.87 | 136.26 | 9.30 | $0.19\pm 0.08$ | $3.13\pm 1.18$ | 51.16
NGC 6440 | 1750.00 | -0.36 | 5.01 | 2.50 | 721.93 | 8.50 | $0.76\pm 0.13$ | $10.34\pm 1.97$ | 259.55
NGC 6441 | 3150.00 | -0.46 | 11.75 | 1.36 | 1148.78 | 11.60 | $0.74\pm 0.10$ | $18.08\pm 2.62$ | 363.50
NGC 6528 | 233.00 | -0.11 | 0.59 | 4.00 | 158.13 | 7.90 | $0.11\pm 0.03$ | $2.06\pm 0.75$ | 31.27
NGC 6541 | 567.00 | -1.81 | 2.51 | 1.48 | 120.84 | 7.50 | $0.21\pm 0.06$ | $2.10\pm 0.63$ | 77.12
NGC 6652 | 805.00 | -0.81 | 0.47 | 1.22 | 106.17 | 10.00 | $0.22\pm 0.06$ | $4.42\pm 1.03$ | 120.53
NGC 6717 | 46.10 | -1.26 | 0.36 | 1.56 | 22.38 | 7.10 | $0.21\pm 0.07$ | $2.04\pm 0.63$ | 70.85
NGC 6752 | 374.00 | -1.54 | 2.29 | 0.83 | 18.59 | 4.00 | $0.21\pm 0.05$ | $0.58\pm 0.12$ | 157.19
NGC 6838 | 2.05 | -0.78 | 0.54 | 0.85 | 4.14 | 4.00 | $0.20\pm 0.08$ | $0.50\pm 0.18$ | 40.13
NGC 7078 | 6460.00 | -2.37 | 4.90 | 0.39 | 248.98 | 10.40 | $0.17\pm 0.05$ | $2.55\pm 0.78$ | 46.55
Omega Cen | 144.00 | -1.53 | 33.11 | 0.71 | 27.35 | 5.20 | $0.59\pm 0.07$ | $3.46\pm 0.42$ | 747.94
Terzan 1 | 0.63 | -1.03 | 2.95 | 4.06 | 4.27 | 6.70 | $0.10\pm 0.02$ | $2.93\pm 0.73$ | 62.48
Terzan 2 | 19.60 | -0.69 | 0.33 | 4.23 | 9.33 | 7.50 | $0.15\pm 0.05$ | $3.00\pm 1.00$ | 42.65
Terzan 5 | 1400.00 | -0.23 | 6.17 | 4.80 | 98.73 | 6.90 | $3.93\pm 0.20$ | $38.65\pm 2.51$ | 3740.32
* a
Stellar encounter rate computed using equation (2). The numerical values are
normalized by the encounter rate of NGC 104, which is set to 1000.
* b
Metallicity.
* c
Stellar mass.
* d
Galactic photon field energy density.
* e
Total photon field energy density, defined as the sum of the Galactic photon
field and the photons from stars in the GC.
* f
Distance from the Sun.
* g
$\gamma$-ray flux and luminosity between 300 MeV and 500 GeV.
* h
Test statistic.
### 2.2 Data analysis
We use 8 years of Fermi-LAT data, from 2008 August 4 to 2016 August 2. This
constitutes the same data as the 4FGL. The newest Pass 8 data release is
applied. As recommended by the Fermi-LAT data analysis
documentation (http://fermi.gsfc.nasa.gov/ssc/data/analysis/), the event class
for the analysis is "P8 Source" class (evclass=128) and the event type is
"FRONT+BACK" (evtype=3). We use a 90∘ zenith angle cut to remove Earth limb
events and filter the data by (DATA_QUAL>0)&&(LAT_CONFIG==1). The
corresponding instrument response function is P8R3_SOURCE_V2. For our
analysis, the Fermipy software version 0.18.0 is used, together with the Fermi
Science Tools version 1.2.21.
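The selections above can be collected into a fermipy-style configuration. This is a sketch only: the key names follow the fermipy configuration schema, while the file paths and the target name are placeholders, not values from the analysis.

```python
# Fermipy-style configuration restating the analysis choices in the text.
# Paths and the target name are placeholders.
config = {
    'data': {'evfile': 'ft1.lst',      # placeholder event-file list
             'scfile': 'ft2.fits'},    # placeholder spacecraft file
    'binning': {'roiwidth': 10.0,      # 10 deg x 10 deg Region-of-Interest
                'binsz': 0.1},         # 0.1 deg pixels; energy axis: 9 log
                                       # bins from 300 MeV to 500 GeV
    'selection': {'emin': 300,         # MeV
                  'emax': 500000,      # MeV
                  'zmax': 90,          # zenith cut removing Earth-limb events
                  'evclass': 128,      # "P8 Source"
                  'evtype': 3,         # "FRONT+BACK"
                  'filter': '(DATA_QUAL>0)&&(LAT_CONFIG==1)',
                  'target': '4FGL J...'},  # placeholder source name
    'gtlike': {'irfs': 'P8R3_SOURCE_V2'},
    'model': {'galdiff': 'gll_iem_v07.fits',
              'isodiff': 'iso_P8R3_SOURCE_V2_v1.txt'},
}
```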
For the 30 GCs detected in the 4FGL, we reanalyse the corresponding 4FGL
$\gamma$-ray sources. We use a 10∘ by 10∘ Region-of-Interest around each
source with a 0.1∘
by 0.1∘ bin size. Photons from 300 MeV to 500 GeV are analysed using 9
logarithmic bins. Given that we use a different Region-of-Interest size and
photon class compared to those adopted in the construction of the 4FGL,
additional point sources might emerge in our Region-of-Interest. However,
since we use the same observation time as in the 4FGL, the impact of those
potential new sources is expected to be minimal. Therefore, we only include
known 4FGL sources in our analysis. As recommended by the Fermi team, we
rerun a maximum likelihood analysis that starts from the best-fit parameter
values found in the 4FGL and updates them accordingly. The most recent Galactic
interstellar emission model gll_iem_v07 and the isotropic component
iso_P8R3_SOURCE_V2_v1 are employed as fore/backgrounds with free-floating
normalization. We have followed the Fermipy recommended
procedure (http://fermipy.readthedocs.io/en/latest/quickstart.html) and fixed
the spectral parameters of the sources with TS < 10 and 10 < Npred < 100 to
their 4FGL values. However, the spectral parameters of the 4FGL sources lying
within 5∘ of the Region-of-Interest center are allowed to float freely. The
MINUIT algorithm is used to determine the best-fit parameters of the sources
for each energy bin independently.
For the 127 additional GCs in the Harris (1996) catalog without 4FGL
detections, we estimate the 95% C.L. $\gamma$-ray upper limits from their
locations. More specifically, we place a point source at the coordinates of
those GCs. The point source is assumed to have a power-law spectrum $dN/dE\sim
E^{-\Gamma}$ with fixed index $\Gamma=2$. We apply the same pipeline used on
the set of detected GCs and obtain the 95% C.L. flux upper limits on the
putative point sources placed at the GC locations.
Table 1 summarizes the Fermi data analysis results. We report the photon flux
and luminosity $L_{\gamma}$ for 30 $\gamma$-emitting GCs. For each GC, the
total photon flux is summed over the bin-by-bin fluxes from the Fermi
analysis. The statistical error of the total flux is added quadratically from
the bin-by-bin flux errors. The energy flux is estimated similarly, then
$L_{\gamma}=4\pi R_{\odot}^{2}\times(\mathrm{energy\ flux})$. We ignore the
uncertainties on $R_{\odot}$ for the GCs since they are either unavailable in
the Harris (1996) catalog or only at the percent level (Baumgardt, 2017), and
so they make a negligible contribution to the overall error of
$L_{\gamma}$. For the parameters and flux upper limits of the 127 additional GCs,
see Appendix A.
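The flux summation and the conversion to luminosity described above can be sketched as follows; the numerical example in the test uses round, Terzan-5-like values purely for illustration.

```python
import math

KPC_CM = 3.086e21  # cm per kiloparsec

def gamma_luminosity(energy_flux, R_sun_kpc):
    """L_gamma = 4 pi R_sun^2 * (energy flux);
    flux in erg cm^-2 s^-1, distance in kpc, result in erg s^-1."""
    R = R_sun_kpc * KPC_CM
    return 4.0 * math.pi * R**2 * energy_flux

def total_flux_and_error(bin_fluxes, bin_errors):
    """Sum the bin-by-bin fluxes; combine statistical errors in quadrature."""
    total = sum(bin_fluxes)
    err = math.sqrt(sum(e**2 for e in bin_errors))
    return total, err
```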
## 3 Correlation analysis
In this section, we investigate the correlation between $L_{\gamma}$’s and
other GC observables. However, GCs not yet detected in $\gamma$ rays and
sample selection effects must be taken into account to properly determine the
significance of any apparent correlations. We use the Kendall $\tau$
coefficient as the test statistic for estimating the significance of the
correlations, and the expectation-maximization (EM) algorithm for the linear
regression of the correlations. Both methods allow us to properly incorporate
the luminosity upper limits (implied by GCs not detected in the 4FGL) into
our statistical analysis.
### 3.1 Linear regression with the expectation–maximization algorithm
To study the correlations between the $L_{\gamma}$’s and the other GC
observables, we assume a linear relation in logarithmic space of the form:
$\log(L_{\gamma})=a\log(X)+b,$ (3)
where $L_{\gamma}$ is the gamma-ray luminosity of the GC, $X$ is the
independent observable considered, and $a$ and $b$ are parameters to be
determined.
We use an EM algorithm (Feigelson & Nelson, 1985; Isobe et al., 1986; Lavalley
et al., 1992) to find the maximum likelihood estimates of the parameters $a$
and $b$. In contrast to the standard maximum likelihood method, the EM
algorithm is designed to be used with censored data, i.e., data consisting of
both measurements and limits. Upper limits must be properly incorporated in
the correlation analyses so as to obtain statistically robust results.
Briefly, the implementation of the EM algorithm is done as follows: first, the
expected values of the censored data are estimated based on the regression
parameters and the variance of the uncensored data. Second, a least-squares
fit is performed and the variance is updated. Lastly, the procedure is
repeated until convergence is achieved on $a$, $b$, and the variance. Using
the EM algorithm, we are able to utilize the complete data set (including
upper limits) in estimating relations between the $L_{\gamma}$ and the other
observables.
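A schematic version of this EM regression might look as follows. This is a simplified sketch (single covariate, upper limits handled as truncated-normal expectations), not the ASURV implementation of Lavalley et al. (1992).

```python
import math

def _phi(z):  # standard normal pdf
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def em_regression(x, y, censored, n_iter=200):
    """EM linear regression y = a*x + b with upper limits: censored[i]=True
    means the true y[i] lies below the reported limit y[i].
    E-step: replace each limit by the mean of a normal truncated above at the
    limit; M-step: ordinary least squares; iterate until convergence."""
    y_work = list(y)
    a = b = 0.0
    s = 1.0
    for _ in range(n_iter):
        # E-step: expected values for the censored points
        for i, cens in enumerate(censored):
            if cens:
                mu = a * x[i] + b
                alpha = (y[i] - mu) / s
                y_work[i] = mu - s * _phi(alpha) / max(_Phi(alpha), 1e-12)
        # M-step: least squares on (x, y_work), then update the variance
        n = len(x)
        xm = sum(x) / n
        ym = sum(y_work) / n
        sxx = sum((xi - xm) ** 2 for xi in x)
        sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y_work))
        a = sxy / sxx
        b = ym - a * xm
        s = max(math.sqrt(sum((yi - a * xi - b) ** 2
                              for xi, yi in zip(x, y_work)) / n), 1e-6)
    return a, b, s
```

With no censored points this reduces to ordinary least squares; an upper limit far above the fitted line is pulled toward the prediction and contributes little, as intended.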
### 3.2 Kendall $\tau$ coefficient and significance
While the EM algorithm allows us to estimate the linear relations between the
$L_{\gamma}$’s and other GC observables, we are also interested in determining
the statistical significance of those relations. To that end, we apply the
generalized Kendall $\tau$ rank correlation test and perform Monte Carlo (MC)
simulations to determine the significance of each correlation studied with the
EM algorithm.
The Kendall $\tau$ rank correlation coefficient (also referred to as the
Kendall $\tau$ coefficient) is a non-parametric statistical test that has been
used to study multi-wavelength correlations of star-forming galaxies
(Ackermann et al., 2012; Ajello et al., 2020), and misaligned active galactic
nuclei (Di Mauro et al., 2014). It has been generalized to include upper
limits in the statistical procedure (Ackermann et al., 2012). Therefore, we
can calculate the Kendall $\tau$ coefficient using all available information
concerning GCs (measurements and upper limits).
To estimate the significance of the correlations, we adopt a similar procedure
as advanced previously in the literature (Ackermann et al., 2012). Namely, the
null hypothesis assumes no correlation between $L_{\gamma}$ and $X$. A set of
null hypothesis samples is generated by repeating the following steps: (1)
randomly exchange $L_{\gamma}$ of two GCs while preserving their locations;
(2) if the energy fluxes of the GCs after exchanging the $L_{\gamma}$ are
above the detection threshold of Fermi-LAT, the exchange is kept (this step
guarantees the detectability of the null hypothesis samples; it is crucial to
apply realistic estimates of the detection threshold so that the null
hypothesis samples are valid. Ackermann et al. (2012) and Ajello et al. (2020)
used the minimum fluxes in their data. Since we are using the same amount of
data as the 4FGL, we take advantage of the spatial map of the 8-year LAT
detection threshold published with the 4FGL. We expect this to be a more
rigorous way of generating the samples since the map includes the spatial
dependence of the LAT threshold); and (3) we perform a large number of
exchanges, until obtaining a nearly uniform $L_{\gamma}$ sample (including
corrections from applying the detection threshold) over $X$, as required by
the null hypothesis. In Appendix B, we discuss the number of exchanges needed
to generate the null hypothesis sample.
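The swap procedure above can be sketched as follows. This is a simplification: a single scalar threshold stands in for the spatially varying 4FGL threshold map, and the inputs are generic lists, not the actual catalog values.

```python
import math
import random

KPC_CM = 3.086e21  # cm per kiloparsec

def null_sample(L, dist_kpc, threshold_flux, n_swaps=10000, seed=0):
    """Generate one null-hypothesis sample by randomly exchanging the
    gamma-ray luminosities of pairs of clusters while keeping their
    locations. A swap is kept only if both implied energy fluxes,
    F = L / (4 pi d^2), remain above the detection threshold."""
    rng = random.Random(seed)
    L = list(L)
    n = len(L)
    def flux(Li, di):
        return Li / (4.0 * math.pi * (di * KPC_CM) ** 2)
    for _ in range(n_swaps):
        i, j = rng.randrange(n), rng.randrange(n)
        if flux(L[j], dist_kpc[i]) >= threshold_flux and \
           flux(L[i], dist_kpc[j]) >= threshold_flux:
            L[i], L[j] = L[j], L[i]
    return L
```

By construction the multiset of luminosities is preserved; only their assignment to locations is randomized, which is exactly what the null hypothesis requires.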
For each correlation, we generate $10^{4}$ null hypothesis samples and calculate
their Kendall $\tau$ coefficients. For a large number of samples, the
coefficients can be fitted to a normal distribution (Efron & Petrosian, 1999),
$\{\hat{\tau}_{i}\}\sim N(\nu_{0},\sigma_{0}),$ (4)
where $\{\hat{\tau}_{i}\}$ represents the distribution of the $\tau$
coefficients from the null hypothesis sample, and $N(\nu_{0},\sigma_{0})$ is a
normal distribution with mean $\nu_{0}$ and standard deviation $\sigma_{0}$.
For each correlation, we can compare the observed value of $\tau$ with the
corresponding normal distribution from the MC results and compute the
significance,
$\sigma=\frac{\tau-\nu_{0}}{\sigma_{0}}.$ (5)
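Equations (4) and (5) amount to a normal fit of the null-sample coefficients followed by a z-score; a minimal sketch:

```python
import math

def correlation_significance(tau_data, tau_null):
    """Fit the null-hypothesis Kendall tau coefficients with a normal
    distribution N(nu0, sigma0) (equation 4) and return the significance
    (tau_data - nu0) / sigma0 (equation 5)."""
    n = len(tau_null)
    nu0 = sum(tau_null) / n
    sigma0 = math.sqrt(sum((t - nu0) ** 2 for t in tau_null) / n)
    return (tau_data - nu0) / sigma0

# With the numbers quoted in the text for the L_gamma-u_Total data set
# (nu0 = 0.071, sigma0 = 0.0057, tau_data = 0.093) this gives ~3.8 sigma.
```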
Figure 2 shows an example of the $L_{\gamma}$–$u_{\mathrm{Total}}$ data set.
The blue histogram shows the probability density of Kendall $\tau$
coefficients of the null hypothesis samples. The dash-dotted line is the best
fit normal distribution of the probability density, which has $\nu_{0}=0.071$
and $\sigma_{0}=0.0057$. The Kendall $\tau$ coefficient of real data is 0.093,
shown by the red vertical line. The real data is about 3.8$\sigma$ away from
the center of the null hypothesis distribution.
Figure 2: Probability density distribution of the Kendall $\tau$ coefficients
for the $L_{\gamma}$–$u_{\mathrm{Total}}$ data set. The blue histogram
corresponds to the density of the Kendall $\tau$ coefficients of the null
hypothesis samples. The dash-dotted line shows the best-fit
normally-distributed probability density function for the null hypothesis. The
red vertical line indicates the Kendall $\tau$ coefficient for the actual
data.
Table 2: Summary of correlations between $L_{\gamma}$ and four astrophysical
parameters of the GCs. The best-fit parameters $a$, $b$, and the corresponding
variance of $L_{\gamma}$ are found using the EM algorithm. The significance of
the correlations is found by MC simulations with Kendall $\tau$ coefficients.
Correlation | $a$ | $b$ | $\sqrt{\text{Variance}}$ | Significance
---|---|---|---|---
vs $\Gamma_{c}$ | 0.39 $\pm$ 0.10 | 32.99 $\pm$ 0.26 | 0.59 $\pm$ 0.08 | 6.4$\sigma$
vs $u_{\mathrm{Total}}$ | 0.59 $\pm$ 0.09 | 32.97 $\pm$ 0.19 | 0.47 $\pm$ 0.06 | 3.8$\sigma$
vs [Fe/H] | 0.35 $\pm$ 0.14 | 34.18 $\pm$ 0.19 | 0.64 $\pm$ 0.08 | 1.8$\sigma$
vs $u_{\mathrm{MW}}$ | 0.29 $\pm$ 0.26 | 33.75 $\pm$ 0.12 | 0.68 $\pm$ 0.09 | 1.5$\sigma$
### 3.3 Correlation results
The top (bottom) panel of Figure 3 shows the correlations between $L_{\gamma}$
and $u_{\mathrm{MW}}$ ($u_{\mathrm{Total}}$). GCs with measured $\gamma$-ray
luminosity are shown in red, while GCs with upper limits are shown in blue. We
find a very small slope for the $L_{\gamma}$-$u_{\mathrm{MW}}$ correlation,
with $a=0.29\pm 0.26$, which is almost consistent with 0 considering the large
statistical error. The significance of the $L_{\gamma}$-$u_{\mathrm{MW}}$
correlation is found to be 1.5$\sigma$. When the total photon field is
considered, we find a $L_{\gamma}$–$u_{\mathrm{Total}}$ correlation with
$a=0.59\pm 0.09$. In this case, the significance increases to 3.8$\sigma$. The
$L_{\gamma}$–$u_{\mathrm{Total}}$ correlation is mostly driven by
$u_{\mathrm{GC}}$, the photon field from the starlight in the GCs (see
equation (1)). As shown by Table 1, $u_{\mathrm{Total}}$ is much greater than
$u_{\mathrm{MW}}$ due to the dominant contribution from $u_{\mathrm{GC}}$.
Figure 3: Correlations between $L_{\gamma}$ and the photon field energy
densities. The top panel shows the $L_{\gamma}$–$u_{\mathrm{MW}}$
correlation and the bottom panel shows the $L_{\gamma}$–$u_{\mathrm{Total}}$
correlation. GCs with measured $\gamma$ rays are shown in red, while GCs with
upper limits are shown in blue. The best-fit correlations (black solid lines)
are calculated using the EM algorithm discussed in Section 3.1, with 1$\sigma$
uncertainties included as the gray shaded bands. We find a shallow correlation
between $L_{\gamma}$ and $u_{\mathrm{MW}}$ with $a=0.29\pm 0.26$. The
correlation between $L_{\gamma}$ and $u_{\mathrm{Total}}$ is more significant,
with $a=0.59\pm 0.09$. Numerical values of correlations are summarized in
Table 2, along with their significance.
We also investigate the correlation of the $L_{\gamma}$’s with the stellar
encounter rate ($\Gamma_{c}$) and GC metallicities ([Fe/H]). These observables
are argued to be related to the formation of MSPs and may provide a proxy for
the total number of MSPs in GCs. Figure 4 shows the $L_{\gamma}$–$\Gamma_{c}$
correlation (top panel) and the $L_{\gamma}$–[Fe/H] correlation (bottom
panel) obtained with the EM algorithm. We find a positive correlation between
the $L_{\gamma}$ and $\Gamma_{c}$, with $a=0.39\pm 0.10$, for which the
Kendall $\tau$ test yields a 6.4$\sigma$ statistical significance. Similarly,
we find a correlation between $L_{\gamma}$ and [Fe/H] with the best-fit value
$a=0.35\pm 0.14$. However, the statistical significance of the correlation is
only 1.8$\sigma$.
Figure 4: Same as Figure 3, but correlated with the encounter rate (top
panel) and metallicity (bottom panel).
We summarize the best-fit correlation results and their respective statistical
significance in Table 2.
### 3.4 Hidden correlation and interpretations
Positive and statistically significant correlations are obtained in both the
$L_{\gamma}$–$u_{\mathrm{Total}}$ and the $L_{\gamma}$–$\Gamma_{c}$ space. The
positive $L_{\gamma}$–$u_{\mathrm{Total}}$ correlation could indicate a
significant contribution from IC emission. If the $e^{\pm}$ injected by MSPs
lose energy through multiple comparable processes, e.g., IC and synchrotron
radiation, the $L_{\gamma}$ is proportional to the IC energy loss rate, which
is linearly proportional to $u_{\mathrm{Total}}$. In the extreme limit
where all the $e^{\pm}$ injected by MSPs lose their energy through IC, the
$L_{\gamma}$ is constrained by the energy injection rate of $e^{\pm}$ by MSPs
and the $u_{\mathrm{Total}}$ would have less impact. Since we find a
preference for a non-linear correlation ($a=0.59\pm 0.09$), the $\gamma$ rays
are unlikely to originate entirely from IC radiation.
However, the positive correlation between $L_{\gamma}$ and
$u_{\mathrm{Total}}$ could alternatively be driven by the
$L_{\gamma}$–$\Gamma_{c}$ correlation. Here, we investigate a potential hidden
correlation between $u_{\mathrm{Total}}$ and $\Gamma_{c}$ in order to better
understand the nature of our detections. Figure 5 shows the
$u_{\mathrm{Total}}$ and $\Gamma_{c}$ values for our sample of GCs. It is
apparent that the $u_{\mathrm{Total}}$ tends to be higher for GCs with higher
encounter rates. Since these data are uncensored, we simply estimate the
correlation using the Spearman coefficient: we find 0.72, confirming a strong
correlation. This result is not surprising because a higher photon density
implies higher stellar density which implies higher encounter rates (see also
equation (2)).
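Since the ($u_{\mathrm{Total}}$, $\Gamma_{c}$) pairs are uncensored, the Spearman coefficient is simply the Pearson correlation of the ranks; a minimal implementation, assuming no tied values:

```python
def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no ties (adequate for continuous catalog values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Any monotone relation, however non-linear, yields $\pm 1$, which is why the rank coefficient is the natural choice for this power-law-like dependence.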
Figure 5: Hidden correlation between $u_{\mathrm{Total}}$ and $\Gamma_{c}$.
GCs with higher encounter rates tend to have higher total photon field energy
densities. The red line shows the relation between $u_{\mathrm{Total}}$ and
$\Gamma_{c}$ based on a least-squares fit in logarithmic space.
An important implication of this result is that the
$L_{\gamma}$–$u_{\mathrm{Total}}$ and the $L_{\gamma}$–$\Gamma_{c}$
correlations are not necessarily independent. Using a simple least squares
method in the logarithmic space, we find the relation between $\Gamma_{c}$ and
$u_{\mathrm{Total}}$ to be
$\Gamma_{c}\propto u_{\mathrm{Total}}^{1.44\pm 0.13}.$ (6)
As reported in Table 2, the correlation between $L_{\gamma}$ and $\Gamma_{c}$
has a power index $a=0.39\pm 0.10$. Based on the hidden relation between
$\Gamma_{c}$ and $u_{\mathrm{Total}}$, the projected correlation between
$L_{\gamma}$ and $u_{\mathrm{Total}}$ would have an index $a=0.56\pm 0.15$.
Within the uncertainty, this projected result is consistent with the directly
measured correlation between $L_{\gamma}$ and $u_{\mathrm{Total}}$ found in
real data, $a=0.59\pm 0.09$. Therefore, the positive correlation between
$L_{\gamma}$ and $u_{\mathrm{Total}}$ could be evidence for IC, or
alternatively, an indirect effect of the $L_{\gamma}$–$\Gamma_{c}$ correlation
which is connected to the dynamic formation of MSPs. The correlation found
between $L_{\gamma}$ and $u_{\mathrm{Total}}$ cannot be considered concrete
evidence for IC due to the ambiguity introduced by the hidden correlation.
However, as we discuss next, evidence for IC emission in GCs may still be
revealed from the detailed spectral properties of these objects.
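The projected index follows from simple error propagation through the power-law composition, assuming independent uncertainties:

```python
import math

# L_gamma ~ Gamma_c^a with a = 0.39 +/- 0.10, and
# Gamma_c ~ u_Total^k with k = 1.44 +/- 0.13 (equation 6),
# so the projected L_gamma-u_Total index is a*k.
a, da = 0.39, 0.10
k, dk = 1.44, 0.13
a_proj = a * k
da_proj = math.sqrt((k * da) ** 2 + (a * dk) ** 2)
print(f"a_proj = {a_proj:.2f} +/- {da_proj:.2f}")  # 0.56 +/- 0.15
```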
## 4 Spectral analysis
Motivated by the correlations detected in the previous section, we perform a
spectral analysis of the 30 GCs detected in the 4FGL catalog, with the aim of
finding further evidence for IC emission. First, we model the spectra of the
GCs individually and compare their spectral parameters with those describing
the field MSPs. Second, we fit the GCs spectra with universal spectral models
which phenomenologically describe possible IC emission. Lastly, we use the
detected IC component to constrain the $e^{\pm}$ injection efficiency in the
GCs.
### 4.1 Individual spectral fits
We consider two possible mechanisms of $\gamma$-ray emission, not mutually
exclusive: CR, and IC up-scattering of starlight.
Detailed theoretical models predict that the maximum energy of the $e^{\pm}$
accelerated by the MSPs is limited by the CR in the pulsar magnetosphere. The
predicted CR spectrum exhibits an energy cut-off which is related to the
$e^{\pm}$ Lorentz factor (Harding et al., 2005). For this reason, we model the
GC $\gamma$-ray spectrum–as predicted by CR models–using a power law with an
exponential cut-off (PLE) of the form:
$\left[\frac{dN}{dE}\right]_{\mathrm{CR}}=N_{0}\left(\frac{E}{E_{0}}\right)^{-\Gamma}\exp{\left(-\frac{E}{E_{\text{cut}}}\right)},$
(7)
where $N_{0}$ is the normalization factor, $\Gamma$ is the spectral index,
$E_{0}$ is the scaling energy, and $E_{\mathrm{cut}}$ is the energy cutoff.
The $e^{\pm}$ may also leave the MSPs through open magnetic field lines and
diffuse into the GC medium. Escaping pairs may up-scatter ambient photons and
produce IC emission. The spectrum of the IC is determined by the $e^{\pm}$
spectrum and the ambient photon field. Theoretical studies (Harding &
Muslimov, 2011) show that the MSPs can inject $e^{\pm}$ with Lorentz factors
$\gamma_{e^{\pm}}>10^{6}$ efficiently. Given ambient photons of energy
$E_{0}\sim 1$ eV, the up-scattered IC photons can reach above
$\gamma_{e^{\pm}}^{2}E_{0}=1$ TeV. Thus, in the Fermi GeV energy range, we
assume a power law (PL) injection distribution for $e^{\pm}$. In the Thomson
regime, the IC spectrum resulting from the interaction of power-law-like
$e^{\pm}$ with ambient photons following a black-body radiation distribution
(Blumenthal & Gould, 1970) is still a power law in $\gamma$-ray energy. We
consider this spectral form as a phenomenological description of the IC model.
Specifically (for the maximum $\gamma$-ray energy, hundreds of GeV, and the
starlight photon field we consider, the IC is in transition from the Thomson
regime to the Klein-Nishina regime, with the Thomson regime still an adequate
approximation),
$\left[\frac{dN}{dE}\right]_{\mathrm{IC}}=N_{0}\left(\frac{E}{E_{0}}\right)^{-\Gamma}.$
(8)
We first estimate the GCs’ spectral parameters using a maximum likelihood
method. For this, we use the CR model and the IC model separately. We perform
a $\chi^{2}$ test using the bin-by-bin $\gamma$-ray fluxes (9 energy bins from
300 MeV to 500 GeV) of each GC and the CR and IC emission models. Therefore,
we define
$\chi^{2}=\sum_{i}\frac{(F_{\text{data}}^{i}-F_{\text{model}}^{i})^{2}}{(\Delta
F_{\text{data}}^{i})^{2}+(f_{\text{ref}}^{i}F_{\text{data}}^{i})^{2}},$ (9)
where $F_{\mathrm{data}}^{i}$ and $\Delta F_{\mathrm{data}}^{i}$ are the
measured fluxes and flux uncertainties obtained at each independent energy
bin, $F_{\mathrm{model}}^{i}$ are the predicted fluxes (either the CR or IC
models). We allow all model parameters to be free (i.e., normalization, power-
law index, and cut-off energy for CR, and normalization and power-law index
for IC). The $f_{\text{ref}}^{i}$ values encapsulate the systematic
uncertainties on the effective area of the LAT. We follow the values reported
in the 4FGL catalog (Abdollahi et al., 2020), and set $f_{\text{ref}}$ to 0.05
for the first three energy bins, 0.06 for the fourth bin, and 0.1 for the last
five bins.
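The spectral models of equations (7) and (8) and the $\chi^{2}$ of equation (9) can be sketched as follows; the per-bin $f_{\text{ref}}$ values restate those given above, while the scaling energy $E_{0}$ is an arbitrary choice for illustration.

```python
import math

# Effective-area systematics per energy bin (9 bins, 300 MeV - 500 GeV)
F_REF = [0.05, 0.05, 0.05, 0.06, 0.1, 0.1, 0.1, 0.1, 0.1]

def ple(E, N0, gamma, E_cut, E0=1000.0):
    """CR model: power law with exponential cutoff (equation 7), E in MeV."""
    return N0 * (E / E0) ** (-gamma) * math.exp(-E / E_cut)

def pl(E, N0, gamma, E0=1000.0):
    """IC model: simple power law (equation 8)."""
    return N0 * (E / E0) ** (-gamma)

def chi2(F_data, dF_data, F_model, f_ref=F_REF):
    """Equation (9): statistical and effective-area systematic errors
    combined in quadrature in the denominator."""
    return sum((fd - fm) ** 2 / (dfd ** 2 + (fr * fd) ** 2)
               for fd, dfd, fm, fr in zip(F_data, dF_data, F_model, f_ref))

# TS_curve = chi2_IC(best fit) - chi2_CR(best fit); TS_curve >= 4 (2 sigma)
# selects the curved (PLE) spectrum.
```

In the limit $E_{\text{cut}}\to\infty$ the PLE reduces to the PL, so TS$_{\mathrm{curve}}$ is non-negative by construction when both models are fit freely.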
The significance of the spectral curvature is estimated by computing the
difference of the best-fit $\chi^{2}$ between the IC and the CR models,
TS${}_{\mathrm{curve}}=\chi^{2}_{\text{IC}}-\chi^{2}_{\text{CR}}.$ We apply a
2$\sigma$ threshold to determine the type of spectrum: for GCs with
TS${}_{\mathrm{curve}}\geq 4$, their PLE spectra are reported. Otherwise, the
power-law spectra are reported. Note that this is a lower threshold than the
4FGL, which requires TS${}_{\mathrm{curve}}\geq 9$ before detection of
curvature is claimed. We adopt this low threshold because our analysis removes
potentially contaminated photons < 300 MeV. Bins encompassing this low energy
range usually generate upper limits in the 4FGL analysis and contribute to the
detection of curvature. We find, a posteriori, the 2$\sigma$ threshold
adequate in our analysis since our fits generate finite $E_{\mathrm{cut}}$’s
within uncertainties for all GCs with TS${}_{\mathrm{curve}}\geq 4$. For those
GCs with TS${}_{\mathrm{curve}}<4$, the fits only generate lower limits for
$E_{\mathrm{cut}}$. Table 3 summarizes the best-fit parameters of the spectra
for 30 $\gamma$-ray-detected GCs, sorted by their TS${}_{\mathrm{curve}}$. The majority prefer
curved spectra, with only 5 preferring simple power-law spectra. Figure 6
shows the spectra of two GCs as examples. The top panel shows the spectrum of
NGC 6397, which is best fit by a simple power law, while the lower panel shows
the spectrum for NGC 6541, which prefers an exponential cut-off at $\sim$ 350
MeV with TS${}_{\mathrm{curve}}=4$.
Table 3: Spectral parameters for 30 $\gamma$-ray-detected GCs from the
individual fits, ordered from the least curved to the most curved. For GCs
with TS${}_{\mathrm{curve}}<$ 4 (2$\sigma$), the best-fit simple power laws
(PL) are reported. For the rest GCs, the power laws with an exponential cutoff
(PLE) are reported.
Name | $\Gamma$ | $\log\left(\frac{E_{\mathrm{cut}}}{\mathrm{MeV}}\right)$ | $\chi^{2}$/d.o.f. | Type${}^{a}$ | TS${}_{\mathrm{curve}}$
---|---|---|---|---|---
2MS-GC01 | 2.68$\pm$0.08 | … | 1.03 | PL | 0
NGC 1904 | 2.89$\pm$0.28 | … | 0.63 | PL | 0
NGC 6397 | 2.56$\pm$0.20 | … | 0.36 | PL | 2
NGC 7078 | 2.74$\pm$0.16 | … | 0.70 | PL | 2
NGC 5904 | 2.53$\pm$0.15 | … | 0.54 | PL | 2
NGC 6341 | 0.94$\pm$1.12 | 3.24$\pm$0.38 | 0.74 | PLE | 4
NGC 6541 | 1.64$\pm$0.57 | 3.41$\pm$0.37 | 0.25 | PLE | 4
NGC 6528 | 1.85$\pm$0.54 | 3.68$\pm$0.53 | 1.24 | PLE | 4
GLIMPSE02 | 2.58$\pm$0.16 | 3.94$\pm$0.35 | 2.54 | PLE | 5
NGC 6717 | 1.85$\pm$0.34 | 3.71$\pm$0.30 | 0.09 | PLE | 6
NGC 6218 | 0.00$\pm$1.61 | 3.42$\pm$0.10 | 0.43 | PLE | 7
NGC 6402 | 1.86$\pm$0.38 | 3.73$\pm$0.32 | 0.10 | PLE | 7
NGC 2808 | 1.83$\pm$0.33 | 3.75$\pm$0.31 | 0.11 | PLE | 7
NGC 6139 | 1.94$\pm$0.32 | 3.77$\pm$0.26 | 0.43 | PLE | 7
NGC 6838 | 1.38$\pm$0.65 | 3.27$\pm$0.23 | 0.29 | PLE | 9
NGC 6304 | 0.86$\pm$0.81 | 3.10$\pm$0.28 | 0.74 | PLE | 12
M 80 | 1.60$\pm$0.32 | 3.71$\pm$0.24 | 0.31 | PLE | 14
Terzan 2 | 0.60$\pm$0.59 | 3.40$\pm$0.18 | 1.00 | PLE | 16
NGC 6440 | 1.88$\pm$0.22 | 3.63$\pm$0.19 | 1.07 | PLE | 17
NGC 6441 | 1.83$\pm$0.23 | 3.59$\pm$0.21 | 0.84 | PLE | 17
NGC 6652 | 1.29$\pm$0.42 | 3.29$\pm$0.20 | 0.71 | PLE | 22
NGC 6316 | 1.60$\pm$0.23 | 3.54$\pm$0.14 | 1.16 | PLE | 24
NGC 6752 | 0.83$\pm$0.58 | 2.99$\pm$0.20 | 0.18 | PLE | 28
Terzan 1 | 0.00$\pm$0.36 | 3.28$\pm$0.06 | 0.77 | PLE | 37
GLIMPSE01 | 1.67$\pm$0.14 | 3.57$\pm$0.10 | 0.53 | PLE | 61
M 62 | 1.48$\pm$0.14 | 3.47$\pm$0.08 | 0.74 | PLE | 90
Omega Cen | 1.05$\pm$0.27 | 3.25$\pm$0.12 | 1.11 | PLE | 103
NGC 6388 | 1.33$\pm$0.15 | 3.35$\pm$0.07 | 0.46 | PLE | 137
Terzan 5 | 1.58$\pm$0.09 | 3.54$\pm$0.06 | 2.14 | PLE | 159
NGC 104 | 1.28$\pm$0.11 | 3.37$\pm$0.05 | 0.54 | PLE | 207
* a
Spectrum type: PL for power law; PLE for power law with an exponential cut-
off.
Figure 6: Best-fit spectra (blue solid line) and the 1$\sigma$ uncertainties
(blue band) for two GCs. The bin-by-bin fluxes from the Fermi data analysis
are included as black points. The top panel shows the spectrum of NGC 6397 as
a simple power law because the PLE model is only slightly favored
(TS${}_{\mathrm{curve}}=2$). The bottom panel shows the spectrum for NGC 6541,
which prefers an exponential cutoff $\sim$ GeV with TS${}_{\mathrm{curve}}=4$.
The Fermi-LAT has detected more than 200 pulsars. Most of these have curved
spectra with best-fit energy cutoffs of the order of a few GeV, so their
$\gamma$-ray emission is likely dominated by a CR process. Nevertheless,
Hooper & Linden (2018, 2021) find that many MSPs could be surrounded by TeV
halos of IC emission, which may also extend into the GeV energy range. Figure
7 compares the distribution of the spectral parameters of 108 field MSPs in
the 4FGL (red dots) with the $\gamma$-ray GCs (blue dots), assuming a PLE
spectrum. The 1$\sigma$ uncertainties of the best-fit parameters are also
shown. We find that, within uncertainties, the spectral distributions of the
GCs and the field MSPs are very similar. However, given that the starlight in
GCs typically contributes a much larger photon field energy density than for
field MSPs, IC emission may still provide a sizeable contribution to the
overall GC $\gamma$-ray emission. The results from the individual spectral
fits cannot rule out the presence of IC for the following reasons: (1) there
are 5 GCs whose spectra show no obvious energy cutoffs. This is hard to
explain using the CR emission model alone. (2) Many GCs have energy bins above
10 GeV detected even though their spectra have cutoffs of order a GeV (see
Appendix C). These high-energy measurements may be indicative of an IC
component.
Figure 7: Spectral parameters of the $\gamma$-ray emission. The PLE spectrum
is assumed, with $\Gamma$ and $E_{\mathrm{cut}}$ as free parameters. Both the
GCs (blue dots) and the MSPs (red dots) detected in the 4FGL are included. The
error bars represent the 1$\sigma$ parameter uncertainties. Within
uncertainties, the distribution of the GCs’ spectra is very similar to that of
the MSPs.
### 4.2 Fit assuming universal spectral components
The curvature of the GCs’ spectra at around a few GeV, as well as their
similarity to the field MSP spectra, supports the hypothesis that the GeV
$\gamma$-ray emission from most GCs is mainly due to local CR emission from
MSPs within the GCs. However, IC may still contribute sub-dominantly,
especially at the high-energy end. To probe this possibility, we perform a
reduced $\chi^{2}$ analysis in which we fit, bin-by-bin, the GCs’ spectra
using a linear combination of the spectral components introduced in equations
7 and 8. Specifically,
$\displaystyle\frac{dN}{dE}$
$\displaystyle=\left[\frac{dN}{dE}\right]_{\mathrm{CR}}+\left[\frac{dN}{dE}\right]_{\mathrm{IC}}$
(10)
$\displaystyle=N_{1}\left(\frac{E}{E_{0}}\right)^{-\Gamma_{1}}\exp\left(-\frac{E}{E_{\mathrm{cut}}}\right)+N_{2}\left(\frac{E}{E_{0}}\right)^{-\Gamma_{2}}.$
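Equation (10) translates directly into code. A minimal sketch (the pivot energy `E0` is an assumed value; its choice only rescales $N_{1}$ and $N_{2}$):

```python
import numpy as np

E0 = 1000.0  # assumed pivot energy in MeV; only rescales N1 and N2

def dnde_two_component(E, N1, Gamma1, Ecut, N2, Gamma2):
    """Equation (10): CR term (cut-off power law) plus IC term (power law)."""
    E = np.asarray(E, dtype=float)
    cr = N1 * (E / E0) ** (-Gamma1) * np.exp(-E / Ecut)
    ic = N2 * (E / E0) ** (-Gamma2)
    return cr + ic
```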
Fitting such a two-component model to each GC’s bin-by-bin data is difficult
since each GC spectrum contains only 9 energy bins, and many high-energy bins
provide only upper limits. On the other hand, a typical GC can host $\sim$ 20
MSPs (Ye et al., 2019). So, as a simplifying approximation, we hypothesise
that the $\gamma$-ray and $e^{\pm}$ injection spectra from the collections of
MSPs in each GC are similar to one another. Then, we can fit a common or
universal spectrum to all the $\gamma$-ray detected GCs, i.e., one set of
spectral shape parameters in the two component model in equation (10) for all
the GCs. More specifically, we tie the $\Gamma_{1}$, $\Gamma_{2}$ and
$E_{\mathrm{cut}}$ across all GCs considered (hereafter referred to as the
“universal model”). The normalization factors $N_{1}$ and $N_{2}$ are allowed
to float for each GC as these should depend on the number of MSPs and the
photon field energy density in the GCs.
We perform the universal fit by minimizing the total $\chi^{2}$ of 30
$\gamma$-ray-detected GCs,
$\chi^{2}_{\mathrm{total}}(\Gamma_{1},\;\Gamma_{2},\;E_{\mathrm{cut}})=\sum_{i}\chi_{i}^{2}(\Gamma_{1},\;\Gamma_{2},\;E_{\mathrm{cut}},\;N_{1}^{i},\;N_{2}^{i}).$
(11)
In practice, we assign the same $\Gamma_{1}$, $\Gamma_{2}$, and
$E_{\mathrm{cut}}$ to all $\gamma$-ray GCs and perform a minimum-$\chi^{2}$
fit for each object, leaving the $N_{1}^{i}$ and $N_{2}^{i}$ parameters free.
By scanning the parameter space of $\Gamma_{1}$, $\Gamma_{2}$, and
$E_{\mathrm{cut}}$, we find the values that minimize the total $\chi^{2}$ for
the two-component model. These are,
$\displaystyle\Gamma_{1}=0.88\pm 0.44,$ $\displaystyle\Gamma_{2}=2.79\pm
0.25,$
$\displaystyle\log\left(\frac{E_{\mathrm{cut}}}{\mathrm{MeV}}\right)=3.28\pm
0.16,$
for which we find a $\chi^{2}_{\mathrm{total}}/\mathrm{d.o.f}=204/206=0.99$
(we have $30\times 9$ data points, and the number of free parameters is $60+3$
as there are 2 normalization factors for each GC, and 3 global parameters. So,
we have $\mathrm{d.o.f}=30\times 9-60-3-1=206$). In Figure 8, we show the
associated 3$\sigma$ contours and correlated uncertainties for the parameters
$\Gamma_{1}$, $\Gamma_{2}$, and $E_{\mathrm{cut}}$ as found in this procedure.
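The nested structure of the universal fit — shared shape parameters, with the per-cluster normalizations profiled out by an inner minimization — can be sketched as follows. This is a toy version with assumed inputs (the `E0` pivot and the optimizer choice are not from the paper; the actual analysis scans the shape-parameter space):

```python
import numpy as np
from scipy.optimize import minimize

E0 = 1000.0  # assumed pivot energy (MeV)

def model(E, logN1, g1, log_ecut, logN2, g2):
    # Two-component spectrum of equation (10), with log-scale normalizations
    cr = 10.0**logN1 * (E / E0) ** (-g1) * np.exp(-E / 10.0**log_ecut)
    ic = 10.0**logN2 * (E / E0) ** (-g2)
    return cr + ic

def chi2_one_gc(shape, E, flux, err):
    """Profile out the two normalizations for a single cluster."""
    g1, log_ecut, g2 = shape
    def inner(norms):
        m = model(E, norms[0], g1, log_ecut, norms[1], g2)
        return np.sum(((flux - m) / err) ** 2)
    return minimize(inner, x0=[0.0, 0.0], method="Nelder-Mead").fun

def chi2_total(shape, datasets):
    """Equation (11): sum of per-cluster chi^2 at shared shape parameters."""
    return sum(chi2_one_gc(shape, *d) for d in datasets)
```

In this sketch any standard optimizer applied to `chi2_total` over $(\Gamma_{1},\,\log E_{\mathrm{cut}},\,\Gamma_{2})$ would play the role of the outer parameter scan.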
Figure 8: The projected parameter space of the universal model fit, as
illustrated in equation (10). The blue shaded contours show the 1$\sigma$,
2$\sigma$, and 3$\sigma$ confidence levels for the two-component model. The
crosses indicate the best fit values for $\Gamma_{1}$, $\Gamma_{2}$, and
$E_{\mathrm{cut}}$. On the top-left panel, the red shaded region shows the
best-fit value and 3$\sigma$ confidence levels for the null hypothesis, which
includes only the CR model.
In order to compute the statistical significance of the PL component, it is
necessary to define the null hypothesis. This corresponds to the universal
model containing only the CR component (see equation 7). Again, we tie
$\Gamma_{1}$ and $E_{\mathrm{cut}}$ across all GCs and allow the normalization
factors to individually vary. We find that the best-fit parameters for the CR-
only model are:
$\displaystyle\Gamma_{1}=1.72\pm 0.21,$
$\displaystyle\log\left(\frac{E_{\mathrm{cut}}}{\mathrm{MeV}}\right)=3.53\pm
0.19.$
In this case, we find a
$\chi^{2}_{\mathrm{total}}/\mathrm{d.o.f}=349/237=1.47$ (the null hypothesis
has 30 + 2 free parameters so the $\mathrm{d.o.f}=30\times 9-30-2-1=237$).
This implies that the two-component model is preferred at the 8.2$\sigma$
level ($\Delta\chi^{2}=349-204$ for 31 d.o.f [1 power-law index plus 30
normalization factors]). It is useful to compare the best-fit spectral results
of the CR component for the universal models with the best-fit spectral
parameters of the MSPs in the 4FGL catalog. As seen in Figure 9, although the
CR-only model (null hypothesis) has larger $\Gamma$ and higher
$E_{\mathrm{cut}}$ than the CR component from the two-component model, our
results for both models are compatible with the field MSPs, up to statistical
uncertainties.
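The quoted 8.2$\sigma$ preference follows from converting the $\chi^{2}$ improvement over the null hypothesis into an equivalent Gaussian significance. A short check using `scipy`, with the numbers from the text:

```python
from scipy.stats import chi2, norm

delta_chi2 = 349 - 204  # chi^2 improvement of the two-component model
extra_dof = 31          # 1 extra power-law index + 30 extra normalizations

p_value = chi2.sf(delta_chi2, extra_dof)  # chance probability of the improvement
sigma = norm.isf(p_value)                 # equivalent one-sided Gaussian sigma
print(f"preferred at {sigma:.1f} sigma")  # ~8.2 sigma
```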
The universal fitting procedure used in this section is similar to a stacking
analysis. This method is usually applied to explore the characteristics of an
astrophysical population, especially one that is undetected. Numerous studies
have shown that this technique can increase the detection sensitivity to such
population characteristics. So, even though there is good statistical evidence
for the PL component in the universal fit, this might not be apparent from
individual fitting of the two-component model.
We show examples of the spectra obtained in the universal fit of the two-
component model for NGC 6397 and NGC 6541 in Figure 10. As can be seen, the
solutions for the CR and PL components look physically plausible. The spectra
also include 1$\sigma$ bow-tie errors, which immediately reveal the level of
statistical support for the CR and PL components, respectively. For
comparison, the results shown in Figure 6 presented a single-component (e.g.,
Abdollahi et al. (2020)) spectral curvature analysis applied to NGC 6397 and
NGC 6541, individually.
Figure 9: Same as Figure 7, but with best-fit parameters of GC replaced by
those obtained from the universal models. The 3$\sigma$ contours are shown for
the CR-only model (red) and the CR component from the two-component model
(blue), as in Figure 8. The MSP parameters are included in the background
(yellow dots).
We show some additional noteworthy results of the universal fit in Figure 11.
Here, we see that in the case of GC 2MS-GC01, the PL model is sufficient to
explain the bin-by-bin spectrum over the full energy range, but we also
display the estimated 95% C.L. upper limit for the normalization of the CR
model. By contrast, in the case of GC M 80, we find that the data is best
described by the CR model alone, and we show the 95% C.L. upper limit for the
normalization of the PL component. These examples might indicate special
conditions of the environment of the GC.
For 19 GCs (out of the 30 GCs included in the universal fit), we find good
statistical support for both the CR and PL models. For the remaining 11 GCs we
find that only one component is sufficient to explain the spectrum: 7 GCs
require only the CR model and the other 4 GCs require only the PL model. The
two-component spectral results for all 30 GCs are shown in Appendix C.
To explain the best-fit index of the PL component ($\Gamma_{2}=2.79\pm 0.25$)
as IC emission, the implied emitting $e^{\pm}$ spectrum would have an index of
$4.58\pm 0.50$. The minimum $e^{\pm}$ energy required to maintain a power-law
IC spectrum down to the lower bound of our analysis (300 MeV) is $\lesssim$ 10
GeV, given that the upscattered photon field has energies $\sim$ 1 eV
(Blumenthal & Gould, 1970). Interestingly, Harding & Muslimov (2011) have simulated $e^{\pm}$
pair cascades from pulsar polar caps. For typical MSP parameters, they show
that the injected $e^{\pm}$ flux decreases by $\sim 5-10$ orders of magnitude
when the $e^{\pm}$ energy increases from $\sim$10 GeV to $\sim$ 1 TeV. The
soft $e^{\pm}$ spectrum we found is in line with their results.
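The implied injection index follows from the standard Thomson-regime IC relation between the photon index and the $e^{\pm}$ index, $\Gamma=(p+1)/2$:

```python
gamma_ic, d_gamma = 2.79, 0.25      # best-fit IC photon index and uncertainty
p = 2 * gamma_ic - 1                # e+- injection index, from Gamma = (p + 1) / 2
dp = 2 * d_gamma                    # the uncertainty scales linearly
print(f"p = {p:.2f} +/- {dp:.2f}")  # 4.58 +/- 0.50, as quoted above
```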
Figure 10: The best-fit two-component spectra for NGC 6397 (top panel) and NGC
6541 (bottom panel). The spectra are fit to a universal shape with same
$\Gamma_{1}$, $\Gamma_{2}$, and $E_{\mathrm{cut}}$ for all GCs. Only the
normalizations of the two components are allowed to vary between GCs. The
best-fit parameters for the CR component (red line with shaded band) is
$\Gamma_{1}=0.88\pm 0.44$ and $\log(E_{\mathrm{cut}}/\mathrm{MeV})=3.28\pm
0.16$. The best fit parameter for the PL component (blue line with shaded
band) is $\Gamma_{2}=2.79\pm 0.25$. The black dashed line indicates the total
of the two components.
Figure 11: Spectra of 2MS-GC01 (top panel) and M 80 (bottom panel) from the
two-component fit. The 2MS-GC01 prefers only the PL model (blue solid line).
The 95% upper limit on the normalization of the CR model is shown by the red
shaded region. In contrast, only the CR model is detected for M 80 (red solid
curve). The 95% upper limit on the normalization of the PL model is shown by
the blue shaded region.
### 4.3 Leptonic injection efficiency within the globular clusters
The relative normalizations of the CR and IC components probe two important
properties of the MSPs: the $\gamma$-ray production and $e^{\pm}$ injection
efficiencies, respectively. Indeed, the spin-down energy of MSPs can be
injected into $\gamma$ rays and $e^{\pm}$. While prompt $\gamma$ rays are
mainly produced by CR in the magnetosphere, the $e^{\pm}$ can propagate into
the interstellar environment. We can write down the following empirical
relations,
$\displaystyle L_{\mathrm{CR}}$ $\displaystyle=f_{\gamma}L_{\mathrm{sd}},$
(12) $\displaystyle L_{e^{\pm}}$ $\displaystyle=f_{e^{\pm}}L_{\mathrm{sd}},$
(13)
where $L_{\mathrm{sd}}$ is the spin-down luminosity and the $f$’s are
efficiency parameters.
Assuming that the $\gamma$-ray emission is the superposition of CR and IC
processes, we have that
$L_{\gamma}=L_{\mathrm{CR}}+L_{\mathrm{IC}}.$ (14)
However, $e^{\pm}$ can also lose energy via synchrotron radiation. We can
compare the relative strength of the synchrotron radiation vs the IC emission
through
$\frac{\dot{E}_{\mathrm{SR}}}{\dot{E}_{\mathrm{IC}}}=\frac{u_{\mathrm{B}}}{u_{\mathrm{rad}}},$
(15)
where $\dot{E}_{\mathrm{SR}}$ and $\dot{E}_{\mathrm{IC}}$ are the synchrotron
and IC energy loss rates, respectively, $u_{\mathrm{B}}$ is the magnetic
energy density, and $u_{\mathrm{rad}}$ is the radiation field energy density.
We note that this relation assumes that the $e^{\pm}$ lose all their energy
within the GCs. We provide justifications for this assumption in Appendix D.
For a typical GC the magnetic field is estimated to be
$B\lesssim\mathrm{10\;\mu G}$ (Bednarek & Sitarek, 2007), so we expect to have
a $u_{\mathrm{B}}={\mathrm{(10\;\mu G)}^{2}}/{(2\mu_{0})}=2.5$ eV cm-3, which
is much smaller than the total radiation field of most GCs shown in Table 1.
Thus in the usual instance when IC is the leading energy loss process, we have
that
$L_{\mathrm{IC}}\simeq L_{e^{\pm}}.$ (16)
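The quoted value $u_{\mathrm{B}}\approx 2.5$ eV cm-3 can be reproduced with a short unit-conversion check:

```python
import math

mu0 = 4 * math.pi * 1e-7            # vacuum permeability, T m A^-1
B = 10e-6 * 1e-4                    # 10 microgauss in tesla (1 G = 1e-4 T)
u_B_si = B**2 / (2 * mu0)           # magnetic energy density, J m^-3
eV = 1.602176634e-19                # joules per eV
u_B = u_B_si / eV * 1e-6            # convert J m^-3 -> eV cm^-3
print(f"u_B = {u_B:.1f} eV cm^-3")  # ~2.5, below u_Total of most GCs in Table 1
```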
Since no GC is detected as an extended source by the Fermi-LAT, the energy
carried away to the interstellar medium by $e^{\pm}$ propagation is expected
to be small. Thus, we can use the following approximate scaling relation,
$\dfrac{f_{e^{\pm}}}{f_{\gamma}}\simeq\dfrac{L_{\mathrm{IC}}}{L_{\mathrm{CR}}}.$
(17)
Using this approximation (caveats to equation 17 are discussed in Appendix
D), we estimate the ratios $f_{e^{\pm}}/f_{\gamma}$ for all
$\gamma$-ray emitting GCs in Table 4. These are found to be in the range
$\approx 0.17-1.04$. Note that for some GCs we present only upper or lower
limits, as only one component is detected. Fermi-LAT measurements of pulsars
estimated the $f_{\gamma}$ efficiency and found that, on average,
$f_{\gamma}\sim\mathrm{10\%}$. Furthermore, the
$e^{\pm}$ efficiency $f_{e^{\pm}}$ was also estimated to be around 10% from
TeV observations of nearby pulsars (Hooper et al., 2017; Hooper & Linden,
2018, 2021) and the Galactic center (Bednarek & Sobczak, 2013), although MAGIC
Collaboration et al. (2019) claims $f_{e^{\pm}}$ is at the percentage level
for one GC they observed (NGC 7078), and Sudoh et al. (2021) suggest
$f_{e^{\pm}}\sim 90\%$ on the basis of the radio continuum emission detected
from galaxies with low specific star formation rates.
For the CR and IC luminosities, we integrate the best-fit two-component
spectra from 300 MeV to 500 GeV, the same energy range used in the Fermi data
analysis. For the IC emission, the minimum $e^{\pm}$ injection energy probed
by this energy range is $\lesssim$ 10 GeV assuming the ambient photon field is
starlight. We note that Harding & Muslimov (2011) investigated the $e^{\pm}$
pair cascades from MSPs and proposed several theoretical models. Their Figure
10 shows that their predicted pair spectra peak at $\sim$ GeV and extend to
$\gtrsim$ TeV. This roughly corresponds to the Fermi energy range we assume.
If the $e^{\pm}$ injection spectra extend to lower energy, they will lead to
higher $L_{\mathrm{IC}}$. Therefore, the choice of $\gamma$-ray energy range
contributes a systematic uncertainty to the estimated
$f_{e^{\pm}}/f_{\gamma}$. For example, we verify that the
$f_{e^{\pm}}/f_{\gamma}$ would be $\sim$ 5 times larger if the minimum
$\gamma$-ray energy is assumed to be 30 MeV.
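The tabulated ratios follow from equation (17) with standard quadrature error propagation; for instance, the NGC 6440 luminosities reproduce its entry in Table 4:

```python
import math

def efficiency_ratio(L_ic, dL_ic, L_cr, dL_cr):
    """f_e/f_gamma ~ L_IC / L_CR (equation 17), with quadrature errors."""
    r = L_ic / L_cr
    dr = r * math.hypot(dL_ic / L_ic, dL_cr / L_cr)
    return r, dr

# NGC 6440 luminosities from Table 4, in units of 1e34 erg/s
r, dr = efficiency_ratio(4.83, 1.20, 5.12, 0.97)
print(f"f_e/f_gamma = {r:.2f} +/- {dr:.2f}")  # 0.94 +/- 0.29
```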
Table 4: $\gamma$-ray luminosity for the IC and CR components and the ratios between $f_{e^{\pm}}$ and $f_{\gamma}$. For GCs with only one component detected, the 95% C.L. upper limit is reported for the other component.
Name | $L_{\mathrm{IC}}$ | $L_{\mathrm{CR}}$ | $f_{e^{\pm}}/f_{\gamma}$
---|---|---|---
 | ($10^{34}$ erg s-1) | ($10^{34}$ erg s-1) |
GLIMPSE02 | 10.90 $\pm$ 1.06 | < 1.70 | > 6.40
2MS-GC01 | 3.20 $\pm$ 0.44 | < 1.08 | > 2.95
NGC 7078 | 2.38 $\pm$ 0.62 | < 1.14 | > 2.08
NGC 1904 | 1.75 $\pm$ 0.75 | < 1.62 | > 1.08
NGC 5904 | 0.54 $\pm$ 0.31 | 0.52 $\pm$ 0.24 | 1.04 $\pm$ 0.77
NGC 6397 | 0.05 $\pm$ 0.04 | 0.05 $\pm$ 0.03 | 0.97 $\pm$ 1.00
NGC 6440 | 4.83 $\pm$ 1.20 | 5.12 $\pm$ 0.97 | 0.94 $\pm$ 0.29
NGC 6541 | 0.99 $\pm$ 0.42 | 1.09 $\pm$ 0.35 | 0.92 $\pm$ 0.48
NGC 6139 | 2.33 $\pm$ 1.14 | 2.64 $\pm$ 0.90 | 0.88 $\pm$ 0.52
NGC 6441 | 8.36 $\pm$ 1.80 | 9.57 $\pm$ 1.57 | 0.87 $\pm$ 0.24
NGC 6752 | 0.25 $\pm$ 0.09 | 0.33 $\pm$ 0.08 | 0.76 $\pm$ 0.33
NGC 6717 | 0.80 $\pm$ 0.42 | 1.19 $\pm$ 0.35 | 0.67 $\pm$ 0.40
NGC 6402 | 1.18 $\pm$ 0.79 | 1.83 $\pm$ 0.62 | 0.65 $\pm$ 0.48
NGC 2808 | 1.18 $\pm$ 0.63 | 2.09 $\pm$ 0.56 | 0.56 $\pm$ 0.34
NGC 6838 | 0.16 $\pm$ 0.14 | 0.28 $\pm$ 0.11 | 0.55 $\pm$ 0.54
GLIMPSE01 | 2.77 $\pm$ 0.65 | 5.89 $\pm$ 0.62 | 0.47 $\pm$ 0.12
NGC 6316 | 2.99 $\pm$ 1.44 | 7.71 $\pm$ 1.26 | 0.39 $\pm$ 0.20
Terzan 5 | 10.02 $\pm$ 1.68 | 27.46 $\pm$ 1.83 | 0.37 $\pm$ 0.07
NGC 6652 | 1.16 $\pm$ 0.77 | 3.35 $\pm$ 0.68 | 0.35 $\pm$ 0.24
M 62 | 2.20 $\pm$ 0.60 | 6.92 $\pm$ 0.63 | 0.32 $\pm$ 0.09
NGC 6388 | 4.37 $\pm$ 1.18 | 14.22 $\pm$ 1.27 | 0.31 $\pm$ 0.09
NGC 104 | 0.96 $\pm$ 0.22 | 4.67 $\pm$ 0.30 | 0.21 $\pm$ 0.05
Omega Cen | 0.50 $\pm$ 0.24 | 2.90 $\pm$ 0.27 | 0.17 $\pm$ 0.08
NGC 6528 | < 2.41 | 1.38 $\pm$ 0.58 | < 1.74
NGC 6218 | < 0.39 | 0.23 $\pm$ 0.12 | < 1.67
NGC 6341 | < 0.71 | 0.49 $\pm$ 0.23 | < 1.45
NGC 6304 | < 0.79 | 0.89 $\pm$ 0.30 | < 0.88
M 80 | < 2.31 | 3.45 $\pm$ 0.74 | < 0.67
Terzan 2 | < 0.49 | 2.66 $\pm$ 0.52 | < 0.18
Terzan 1 | < 0.16 | 2.42 $\pm$ 0.49 | < 0.07
## 5 Discussion
### 5.1 Implications of the correlation analysis
We have found strong positive correlations between $L_{\gamma}$, the stellar
encounter rate $\Gamma_{c}$, and the total photon field energy density
$u_{\mathrm{Total}}$ of GCs. The latter correlation may indicate a significant
contribution of IC upscattering of ambient starlight to the total $\gamma$-ray
emission of GCs. However, we showed in Figure 5 that the $u_{\mathrm{Total}}$
also increases with $\Gamma_{c}$. So, the detection of the
$L_{\gamma}$–$u_{\mathrm{Total}}$ correlation alone does not unambiguously
demonstrate the presence of IC emission in GCs (we also analyzed other
potential hidden correlations, but found no obvious correlations with other
parameters such as the interstellar radiation field and the distance from the
Sun; see Appendix D). On the other hand, corroborating evidence
for IC emission was found from the universal two-component fit, wherein we
were able to estimate, separately, the luminosities of the CR and IC
components of most GCs. The ratios of the luminosities between the CR and IC
components were found to be comparable. This implies that MSPs in GCs can
potentially inject $e^{\pm}$ as efficiently as they inject prompt
magnetospheric $\gamma$ rays.
Overall, our correlation results in the $L_{\gamma}$-$\Gamma_{c}$ plane are
consistent with those in Hui et al. (2011) and de Menezes et al. (2019),
though it is important to note that our method is more statistically robust
since we include GCs with luminosity upper limits, which were previously neglected. In
particular, our high significance ($6.4\sigma$) detection of a
$L_{\gamma}$–$\Gamma_{c}$ correlation naively supports a dynamical formation
scenario for MSPs in GCs. However, as pointed out earlier, this correlation
may not be independent due to the hidden correlation of $u_{\mathrm{Total}}$
and $\Gamma_{c}$. On the other hand, we have not found an obvious correlation
between $f_{e^{\pm}}/f_{\gamma}$ and $u_{\mathrm{Total}}$ (see Appendix D).
The lack of this latter correlation may indicate that IC is, in fact, the
leading energy loss process for $e^{\pm}$ in GCs: in the limit of IC
dominance, the IC luminosity of GCs already saturates the power going into
freshly-injected $e^{\pm}$ pairs, so “dialling-up” the light field energy
density has no effect on the IC luminosity. Thus, in this situation of IC
dominance, we expect, at most, only a weak correlation between $L_{\gamma}$
and $u_{\mathrm{Total}}$ and we would anticipate that the
$L_{\gamma}$-$\Gamma_{c}$ correlation is the fundamental one (while the
$L_{\gamma}$-$u_{\mathrm{Total}}$ correlation is caused by the fact that GCs
with higher stellar encounter rate naturally have higher stellar density which
leads to higher photon field density). With the uncertainties of the data and
the number of variables involved, it is challenging to statistically confirm
this scenario. Overall, however, our results are consistent with there being
both a significant role for dynamical formation of MSPs in GCs and for the
presence of a significant contribution of IC to the overall $\gamma$-ray
emission of GCs.
Previous studies (Hui et al., 2011) found a positive correlation between
$L_{\gamma}$ and $u_{\mathrm{MW}}$, and $L_{\gamma}$ and [Fe/H]. However, our
study does not confirm these results. The former discrepancy is possibly due
to the different interstellar radiation field models assumed in these works,
or it could be due to the more limited sample data used in Hui et al. (2011).
Specifically, while we have used the most up-to-date interstellar radiation
field for the Milky Way, which is the 3D radiation field model in GALPROP v56
(Porter et al., 2017), Hui et al. (2011) used the 2D radiation field model in
GALPROP v54. Also, as explained above, our correlation study includes 30
$\gamma$-ray-detected GCs, as well as the luminosity upper limits from the
127 non-detected ones, thus covering the entire Harris (1996) GC catalog. As
for the latter discrepancy, similar results for the $L_{\gamma}$–[Fe/H]
correlation were obtained by de Menezes et al. (2019), who also found low
statistical evidence for this correlation.
### 5.2 Implications for the Fermi GeV excess
The emission from a putative population of about $40,000$ (Ploeg et al., 2020)
unresolved MSPs in the Galactic Center region is currently the preferred
explanation for the Fermi GeV excess (Macias et al., 2018; Bartels et al.,
2018; Macias et al., 2019; Abazajian et al., 2020). Since GCs also contain
large numbers of unresolved MSPs, it is useful to compare the light-to-mass
ratios for these two systems so as to obtain additional clues for the physical
processes causing the observed high-energy $\gamma$-ray emissions in their
directions. In Figure 12, we show the relation between $L_{\gamma}$ and the
stellar mass for several different systems. The blue dots show the sample of
the $\gamma$-ray detected GCs in this work. The nuclear bulge (orange dot) has
a stellar mass around $1.4\times 10^{9}$ M⊙ and a $\gamma$-ray luminosity of
$(3.9\pm 0.5)\times 10^{36}$ erg s-1 and the boxy bulge (green dot) has
$1.5\times 10^{10}$ M⊙ and $(2.2\pm 0.4)\times 10^{37}$ erg s-1 (Macias et
al., 2019). The combination of the nuclear bulge and the boxy bulge is
responsible for the Galactic center GeV excess. Also included are the Galactic
disk (red dot) luminosity predicted by Bartels et al. (2018) and the M31
galaxy (purple star) (Ackermann et al., 2017). The dot-dashed line shows the
$\gamma$-ray luminosity-to-stellar-mass relation implied for the nuclear bulge
and the boxy bulge, which is $2\times 10^{27}$ erg s-1 M${}_{\odot}^{-1}$.
As can be seen in Figure 12, the luminosities of the detected sample of GCs
exceed the luminosities expected based on the bulge correlations. In total,
the GC sample has a stellar mass of $\sim 1.4\times 10^{7}$ M⊙ and a
$\gamma$-ray luminosity of $\sim 1.5\times 10^{36}$ erg s-1. This means that
GCs systematically emit $\sim$ 50 times more $\gamma$ rays per stellar mass
than other objects such as the nuclear bulge and the Galactic bulge. The GCs
have long been known for producing MSPs efficiently (Camilo & Rasio, 2005). On
average GCs make up $\sim 0.05\%$ of the total number of stars in the Milky
Way (Ye et al., 2019), but more than one-third of the known MSPs are found in
these systems (Manchester et al., 2005). Our observations support this
scenario. This is also consistent with the larger stellar densities and larger
stellar encounter rates in GCs than in the Galactic bulge. We also note in
passing that a large fraction of $\gamma$-ray-detected GCs are located in the
Galactic bulge region (see Figure 1), which means that it is possible that at
least some of the unresolved MSPs contributing to the Fermi GeV excess are
hosted by GCs in the Galactic bulge region.
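The factor of $\sim$ 50 quoted above is a simple ratio of luminosity-to-mass ratios, using the totals from the text:

```python
L_gc, M_gc = 1.5e36, 1.4e7  # total gamma-ray luminosity (erg/s) and stellar mass (M_sun) of the GC sample
bulge_ratio = 2e27          # erg/s per M_sun implied by the nuclear + boxy bulge
gc_ratio = L_gc / M_gc      # ~1.1e29 erg/s per M_sun
# consistent with the ~50x enhancement quoted in the text
print(f"GCs emit ~{gc_ratio / bulge_ratio:.0f}x more gamma rays per stellar mass")
```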
Figure 12: Relations between $L_{\gamma}$ and the stellar mass for several
systems. The results for the nuclear bulge (orange dot) and boxy bulge (green
dot) are adopted from Macias et al. (2019). The Galactic disk (red dot)
luminosity is predicted by Bartels et al. (2018), and the M31 (purple star)
are adopted from Ackermann et al. (2017). The blue dots show the data for 30
$\gamma$-ray detected GCs. The dash-dotted line is the relation implied by the
nuclear bulge and boxy bulge.
### 5.3 TeV observations of globular clusters
Using the universal two-component fit, we identified a power law component
with a slope of $2.79\pm 0.25$ from the spectra of $\gamma$-ray-detected GCs.
The power law component can be plausibly explained by IC emission from GCs.
The fact that this power law component is rather soft may explain why most GCs
are not detected in the TeV energy range. In order to explore this more
closely, we extrapolate the high energy tail of the GCs spectra to TeV
energies in Figure 13. In this figure, the black line shows the extrapolated
fluxes for Terzan 5, and the gray band shows the range of extrapolated fluxes
for the other 23 GCs with a detected IC component. Above 100 GeV, Fermi-LAT
finds only upper limits (blue arrows) for Terzan 5. The red dots are the
H.E.S.S. measurements from the direction of Terzan 5. It is interesting to
note that the extrapolated spectrum for Terzan 5 is about one order of
magnitude lower than the H.E.S.S. measurements from the same object. This
discrepancy might be explained by the fact that the $\gamma$-ray source
reported by H.E.S.S is misaligned with the center of Terzan 5 so that this
association could be a chance coincidence. However, such a coincidence with
known objects has been estimated to be improbable ($\sim 10^{-4}$) (H. E. S.
S. Collaboration et al., 2011). If the H.E.S.S. source is indeed associated
with Terzan 5, it could be that the $e^{\pm}$ injection spectrum from MSPs
has a spectral break at approximately 1 TeV. Note that a substantial fraction
of stars in Terzan 5 have been identified as young and centrally concentrated
(Ferraro et al., 2016; Gotta et al., 2020), which could lead to a larger
number of younger pulsars. The H.E.S.S. measurements could be explained if
these young pulsars have higher-energy $e^{\pm}$ cutoffs. Therefore, Terzan 5
may not be representative of other GCs, which are dominated by old stellar
populations. However, Nataf et al. (2019) find that the abundance variations
in Terzan 5 are consistent with those of a regular globular
cluster. Alternatively, the TeV $\gamma$ rays could originate from sources
other than MSPs (e.g., hadronic emission from supernova remnants). Further
investigation of those scenarios, though very interesting, is beyond the scope
of this work.
We also include in Figure 13 the sensitivities to point-like sources for the
next-generation $\gamma$-ray observatories. The green line shows the
sensitivity for the Cherenkov Telescope Array (CTA) South assuming 100 hours
of observation time. The purple line shows the 1-year sensitivity of the Large
High Altitude Air Shower Observatory (LHAASO). The extrapolated IC fluxes are
close to the 100-hour CTA sensitivity. It is clear that it will be difficult
for the next-generation TeV $\gamma$-ray telescopes to detect each individual
GC considered in our study. This might require a much more ambitious
observation strategy that increases the sensitivity by a factor of a
few at the TeV energy range. Efforts to measure the diffuse IC emission from
few at the TeV energy range. Efforts to measure the diffuse IC emission from
the putative MSP population responsible for the Fermi GeV excess have been
made and are very encouraging; see Song et al. (2019) and Macias et al.
(2021). Alternatively, Bednarek et al. (2016) studied TeV $\gamma$-ray
emission from MSPs taking into account the advection of $e^{\pm}$ with the
wind from the GC. They showed that CTA can constrain models incorporating such
effects.
Figure 13: Extrapolated spectra for Terzan 5 (black line) and 23 GCs with the
IC component detected (grey region). The H.E.S.S. measurement of Terzan 5 is
shown by red dots with error bars. The Fermi-LAT finds only upper limits (blue
arrows) for Terzan 5 in this energy range. Also included are the sensitivities
for 100-hour CTA South (green line) and 1-year LHAASO (purple line).
## 6 Summary and Conclusions
We have reanalyzed Fermi-LAT data in the energy range between 300 MeV and 500
GeV from the direction of 157 GCs in the Harris (1996) catalog. Using the same
data cuts adopted in the construction of the 4FGL catalog, we confirmed the
detection of 30 GCs in $\gamma$ rays, and updated the $\gamma$-ray spectral
parameters for the sample of detected objects. We also estimated the 95% C.L.
luminosity upper limits for the sample of 127 undetected GCs in the 4FGL
catalog. The main objective of our reanalysis was to find evidence for IC
emission from $e^{\pm}$ injected by MSPs in GCs. This was done using two
different methodologies. First, we searched for correlations of the
$\gamma$-ray luminosities with other GCs properties. Second, we performed a
spectral analysis of the GCs with a universal fit method that enhances the
sensitivity to the high energy tail of the spectra. Specifically:
* 1.
Using an expectation-maximization algorithm that properly incorporates null
detections ($L_{\gamma}$ upper limits) in the pipeline, we found a correlation
between $L_{\gamma}$ and the GCs’ total photon field energy density
$u_{\mathrm{Total}}$ of the form,
$\log\left(\frac{L_{\gamma}}{\mathrm{erg}\;\mathrm{s}^{-1}}\right)=(0.59\pm
0.09)\log\left(\frac{u_{\mathrm{Total}}}{\mathrm{eV}\;\mathrm{cm}^{-3}}\right)+(32.97\pm
0.19).$ (18)
Using the Kendall $\tau$ coefficient as the test statistic we determined this
correlation to have a $3.8\sigma$ significance. The total photon field is
dominated by the stellar light of the GCs ($u_{\rm GC}$), and we find a much
weaker correlation (below the 2$\sigma$ level) when only the photon field at
the location of the GC ($u_{\rm MW}$) is used. In addition, we obtained a
strong correlation (at 6.4$\sigma$ significance) between $L_{\gamma}$ and the
stellar encounter rate $\Gamma_{c}$, which is given by
$\log\left(\frac{L_{\gamma}}{\mathrm{erg}\;\mathrm{s}^{-1}}\right)=(0.39\pm
0.10)\log\left(\Gamma_{c}\right)+(32.99\pm 0.26).$ (19)
Finally, we found only weak evidence (below the 2$\sigma$ level) for a
correlation between $L_{\gamma}$ and the stellar metallicity [Fe/H].
* 2.
We revealed a hidden correlation between $u_{\mathrm{Total}}$ and
$\Gamma_{c}$, which implies that the $L_{\gamma}$–$u_{\rm Total}$ and
$L_{\gamma}$–$\Gamma_{c}$ correlations are not entirely independent. However,
as described below, we find spectral evidence for IC emission. The correlation
results are consistent with both a significant role for the dynamical
formation of MSPs in GCs and a significant contribution of IC emission to the
observed $\gamma$-ray luminosity.
* 3.
We applied a universal spectral fit to the sample of 30 GCs in the 4FGL
catalog and searched for evidence of an IC component on top of a curvature
radiation model, which accounts for the prompt emission of the MSPs in the
GCs. We found that the extra power-law IC component is preferred at 8.2$\sigma$
significance over the curvature-radiation-only model. The best-fit power law
index of the IC component was found to be $2.79\pm 0.25$. This implies a
power-law $e^{\pm}$ spectrum with an index of $4.58\pm 0.50$ and a minimum
energy as low as $\sim$ 10 GeV.
* 4.
We estimated the $e^{\pm}$ injection efficiency $f_{e^{\pm}}$ for MSPs
residing in GCs. We determined the IC $\gamma$-ray luminosities over 300 MeV
to 500 GeV, which roughly corresponds to $e^{\pm}$ energies from 10 GeV to 1
TeV. We found the fraction of MSP spin-down energy injected into $e^{\pm}$ is
comparable to or slightly smaller than that injected into $\gamma$ rays,
$f_{e^{\pm}}\lesssim f_{\gamma}$ and is at $\lesssim 10\%$ level. This
parameter has been estimated in different environments, such as nearby pulsars
(Hooper et al., 2017; Hooper & Linden, 2018, 2021), the Galactic center
(Bednarek & Sobczak, 2013), individual GCs (MAGIC Collaboration et al., 2019),
and galaxies with low specific star formation rate (Sudoh et al., 2021). Our
results provide new insights into the $f_{e^{\pm}}$ parameter based on the
universal properties of $\gamma$-ray-detected GCs in the Milky Way.
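The best-fit scaling relations (18) and (19), and the quoted link between the IC photon index and the $e^{\pm}$ spectral index, can be evaluated numerically. The Python sketch below is illustrative only: the function names are ours, and the index relation $p=2\Gamma-1$ assumes Thomson-regime IC scattering, which is consistent with, but not explicitly stated in, the text above.

```python
import math

def L_gamma_from_u_total(u_total, slope=0.59, intercept=32.97):
    """Eq. (18): log10(L_gamma / erg s^-1) = slope * log10(u_Total / eV cm^-3) + intercept."""
    return 10.0 ** (slope * math.log10(u_total) + intercept)

def L_gamma_from_encounter_rate(gamma_c, slope=0.39, intercept=32.99):
    """Eq. (19): log10(L_gamma / erg s^-1) = slope * log10(Gamma_c) + intercept."""
    return 10.0 ** (slope * math.log10(gamma_c) + intercept)

def electron_index(photon_index):
    """Thomson-regime IC: photon index = (p + 1) / 2, so p = 2 * Gamma - 1."""
    return 2.0 * photon_index - 1.0

# Illustrative inputs (not taken from the paper's tables):
L_u = L_gamma_from_u_total(100.0)        # predicted L_gamma for u_Total = 100 eV cm^-3
p = electron_index(2.79)                 # recovers the quoted e+- index of 4.58
```

Note that an IC photon index of $2.79$ indeed maps to an $e^{\pm}$ index of $4.58$ under this relation, matching the values quoted in item 3.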
In summary, our analysis reveals strong evidence for soft IC emission in
Fermi-LAT GCs. This is indicative of $e^{\pm}$ injected by MSPs hosted by such
systems. Although the Fermi-LAT sensitivity for energies larger than 10 GeV is
not sufficiently high to claim a detection in each individual GC, we employed
a universal fit method with the bin-by-bin spectra of the sample of detected
objects and were able to increase the sensitivity to the IC component. Our
results also explain why it is difficult to detect GCs with TeV $\gamma$-ray
telescopes: we have obtained very soft spectra for the high-energy tail of
the GC population. It is possible that with a more aggressive observation
campaign such objects could be detected by forthcoming TeV telescopes [see
Ndiyavala et al. (2018) for a recent sensitivity analysis] such as CTA
(Cherenkov Telescope Array Consortium et al., 2019) and LHAASO (Bai et al.,
2019). Globular clusters remain some of the most important systems within
which to search for and study millisecond pulsars. We have shown the potential
of extracting critical knowledge from $\gamma$-ray data of globular clusters
with advanced statistical tools and intensive modelling.
## Acknowledgements
We thank Shin’ichiro Ando, Holger Baumgardt and Zhe Li for discussions. The
work of D.S. is supported by the U.S. Department of Energy under the award
number DE-SC0020262. O.M. acknowledges support by JSPS KAKENHI Grant Numbers
JP17H04836, JP18H04340, JP18H04578, and JP20K14463. The work of S.H. is
supported by the U.S. Department of Energy under the award number DE-SC0020262
and NSF Grant numbers AST-1908960 and PHY-1914409. This work was supported by
World Premier International Research Center Initiative (WPI Initiative), MEXT,
Japan. R.M.C. acknowledges support from the Australian Government through the
Australian Research Council for grant DP190101258 shared with Prof. Mark
Krumholz at the ANU. D.M.N. acknowledges support from NASA under award Number
80NSSC19K0589.
## Data Availability
The Fermi-LAT data analysed in this article were accessed from
https://heasarc.gsfc.nasa.gov/FTP/fermi/data/lat/weekly/. The derived data
generated in this research are available in the article.
## References
* Abazajian et al. (2020) Abazajian K. N., Horiuchi S., Kaplinghat M., Keeley R. E., Macias O., 2020, Phys. Rev. D, 102, 043012
* Abdo et al. (2009a) Abdo A. A., et al., 2009a, Science, 325, 845
* Abdo et al. (2009b) Abdo A. A., et al., 2009b, ApJ, 699, 1171
* Abdo et al. (2010) Abdo A. A., et al., 2010, A&A, 524, A75
* Abdo et al. (2013) Abdo A. A., et al., 2013, ApJS, 208, 17
* Abdollahi et al. (2020) Abdollahi S., et al., 2020, ApJS, 247, 33
* Ackermann et al. (2012) Ackermann M., et al., 2012, ApJ, 755, 164
* Ackermann et al. (2017) Ackermann M., et al., 2017, ApJ, 836, 208
* Ajello et al. (2020) Ajello M., Di Mauro M., Paliya V. S., Garrappa S., 2020, ApJ, 894, 88
* Bahramian et al. (2013) Bahramian A., Heinke C. O., Sivakoff G. R., Gladstone J. C., 2013, ApJ, 766, 136
* Bai et al. (2019) Bai X., et al., 2019, arXiv e-prints, p. arXiv:1905.02773
* Ballet et al. (2020) Ballet J., Burnett T. H., Digel S. W., Lott B., 2020, arXiv e-prints, p. arXiv:2005.11208
* Bartels et al. (2018) Bartels R., Storm E., Weniger C., Calore F., 2018, Nature Astronomy, 2, 819
* Baumgardt (2017) Baumgardt H., 2017, MNRAS, 464, 2174
* Baumgardt & Hilker (2018) Baumgardt H., Hilker M., 2018, MNRAS, 478, 1520
* Bednarek & Sitarek (2007) Bednarek W., Sitarek J., 2007, MNRAS, 377, 920
* Bednarek & Sobczak (2013) Bednarek W., Sobczak T., 2013, MNRAS, 435, L14
* Bednarek et al. (2016) Bednarek W., Sitarek J., Sobczak T., 2016, MNRAS, 458, 1083
* Blumenthal & Gould (1970) Blumenthal G. R., Gould R. J., 1970, Reviews of Modern Physics, 42, 237
* Camilo & Rasio (2005) Camilo F., Rasio F. A., 2005, in Rasio F. A., Stairs I. H., eds, Astronomical Society of the Pacific Conference Series Vol. 328, Binary Radio Pulsars. p. 147 (arXiv:astro-ph/0501226)
* Cheng et al. (2010) Cheng K. S., Chernyshov D. O., Dogiel V. A., Hui C. Y., Kong A. K. H., 2010, ApJ, 723, 1219
* Cherenkov Telescope Array Consortium et al. (2019) Cherenkov Telescope Array Consortium et al., 2019, Science with the Cherenkov Telescope Array, doi:10.1142/10986.
* de Menezes et al. (2019) de Menezes R., Cafardo F., Nemmen R., 2019, MNRAS, 486, 851
* Di Mauro et al. (2014) Di Mauro M., Calore F., Donato F., Ajello M., Latronico L., 2014, ApJ, 780, 161
* Efron & Petrosian (1999) Efron B., Petrosian V., 1999, Journal of the American Statistical Association, 94, 824
* Espinoza et al. (2013) Espinoza C. M., et al., 2013, MNRAS, 430, 571
* Feigelson & Nelson (1985) Feigelson E. D., Nelson P. I., 1985, ApJ, 293, 192
* Ferraro et al. (2016) Ferraro F. R., Massari D., Dalessandro E., Lanzoni B., Origlia L., Rich R. M., Mucciarelli A., 2016, ApJ, 828, 75
* Freire et al. (2011) Freire P. C. C., et al., 2011, Science, 334, 1107
* Gotta et al. (2020) Gotta V., Mauro F., Moni Bidin C., Geisler D., Ferraro F., 2020, Boletin de la Asociacion Argentina de Astronomia La Plata Argentina, 61C, 90
* H. E. S. S. Collaboration et al. (2011) H. E. S. S. Collaboration et al., 2011, A&A, 531, L18
* Harding (2021) Harding A. K., 2021, arXiv e-prints, p. arXiv:2101.05751
* Harding & Muslimov (2011) Harding A. K., Muslimov A. G., 2011, ApJ, 743, 181
* Harding et al. (2005) Harding A. K., Usov V. V., Muslimov A. G., 2005, ApJ, 622, 531
* Harris (1996) Harris W. E., 1996, AJ, 112, 1487
* Hooper & Linden (2016) Hooper D., Linden T., 2016, J. Cosmology Astropart. Phys., 2016, 018
* Hooper & Linden (2018) Hooper D., Linden T., 2018, Phys. Rev. D, 98, 043005
* Hooper & Linden (2021) Hooper D., Linden T., 2021, arXiv e-prints, p. arXiv:2104.00014
* Hooper et al. (2017) Hooper D., Cholis I., Linden T., Fang K., 2017, Phys. Rev. D, 96, 103013
* Hui et al. (2011) Hui C. Y., Cheng K. S., Wang Y., Tam P. H. T., Kong A. K. H., Chernyshov D. O., Dogiel V. A., 2011, ApJ, 726, 100
* Isobe et al. (1986) Isobe T., Feigelson E. D., Nelson P. I., 1986, ApJ, 306, 490
* Jóhannesson et al. (2018) Jóhannesson G., Porter T. A., Moskalenko I. V., 2018, ApJ, 856, 45
* Johnson et al. (2013) Johnson T. J., et al., 2013, ApJ, 778, 106
* Kong et al. (2010) Kong A. K. H., Hui C. Y., Cheng K. S., 2010, ApJ, 712, L36
* Kopp et al. (2013) Kopp A., Venter C., Büsching I., de Jager O. C., 2013, ApJ, 779, 126
* Lavalley et al. (1992) Lavalley M. P., Isobe T., Feigelson E. D., 1992, in Bulletin of the American Astronomical Society. pp 839–840
* Lloyd et al. (2018) Lloyd S. J., Chadwick P. M., Brown A. M., 2018, MNRAS, 480, 4782
* MAGIC Collaboration et al. (2019) MAGIC Collaboration et al., 2019, MNRAS, 484, 2876
* Macias et al. (2018) Macias O., Gordon C., Crocker R. M., Coleman B., Paterson D., Horiuchi S., Pohl M., 2018, Nature Astronomy, 2, 387
* Macias et al. (2019) Macias O., Horiuchi S., Kaplinghat M., Gordon C., Crocker R. M., Nataf D. M., 2019, J. Cosmology Astropart. Phys., 2019, 042
* Macias et al. (2021) Macias O., van Leijen H., Song D., Ando S., Horiuchi S., Crocker R. M., 2021, MNRAS, 506, 1741
* Manchester et al. (2005) Manchester R. N., Hobbs G. B., Teoh A., Hobbs M., 2005, AJ, 129, 1993
* Nataf et al. (2019) Nataf D. M., et al., 2019, AJ, 158, 14
* Ndiyavala et al. (2018) Ndiyavala H., Krüger P. P., Venter C., 2018, MNRAS, 473, 897
* Ndiyavala et al. (2019) Ndiyavala H., Venter C., Johnson T. J., Harding A. K., Smith D. A., Eger P., Kopp A., van der Walt D. J., 2019, ApJ, 880, 53
* Ploeg et al. (2020) Ploeg H., Gordon C., Crocker R., Macias O., 2020, J. Cosmology Astropart. Phys., 2020, 035
* Porter et al. (2017) Porter T. A., Jóhannesson G., Moskalenko I. V., 2017, ApJ, 846, 67
* Sollima & Baumgardt (2017) Sollima A., Baumgardt H., 2017, MNRAS, 471, 3668
* Song et al. (2019) Song D., Macias O., Horiuchi S., 2019, Phys. Rev. D, 99, 123020
* Sudoh et al. (2021) Sudoh T., Linden T., Beacom J. F., 2021, Phys. Rev. D, 103, 083017
* Tam et al. (2011) Tam P. H. T., Kong A. K. H., Hui C. Y., Cheng K. S., Li C., Lu T. N., 2011, ApJ, 729, 90
* Tam et al. (2016) Tam P.-H. T., Hui C. Y., Kong A. K. H., 2016, Journal of Astronomy and Space Sciences, 33, 1
* Venter et al. (2009) Venter C., De Jager O. C., Clapson A. C., 2009, ApJ, 696, L52
* Verbunt et al. (1996) Verbunt F., Kuiper L., Belloni T., Johnston H. M., de Bruyn A. G., Hermsen W., van der Klis M., 1996, A&A, 311, L9
* Ye et al. (2019) Ye C. S., Kremer K., Chatterjee S., Rodriguez C. L., Rasio F. A., 2019, ApJ, 877, 122
* Zhang et al. (2016) Zhang P. F., Xin Y. L., Fu L., Zhou J. N., Yan J. Z., Liu Q. Z., Zhang L., 2016, MNRAS, 459, 99
* Zhou et al. (2015) Zhou J. N., Zhang P. F., Huang X. Y., Li X., Liang Y. F., Fu L., Yan J. Z., Liu Q. Z., 2015, MNRAS, 448, 3215
## Appendix A Globular clusters not detected in the 4FGL catalog
Table 5 reports the parameters and $\gamma$-ray analysis results for 127
additional GCs in the Harris (1996) catalog with no counterpart detected in
the 4FGL. For fluxes and $L_{\gamma}$, we report their 95% C.L. upper limits
by placing a putative point source at the sky location of the GC and running a
maximum-likelihood procedure in which we assume a power-law spectrum with a
spectral slope of $\Gamma=-2$.
Importantly, we included the $L_{\gamma}$ upper limits in the correlation
analysis shown in Section 3. The EM algorithm uses the $L_{\gamma}$ upper
limits to perform maximum likelihood estimates and find the best-fit
parameters for the correlations with the other GC observables. The
significance of the correlations is estimated with the generalized Kendall
$\tau$ coefficient as the test statistic, which also includes the luminosity
upper limits.
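For orientation, the conversion from a photon-flux upper limit to a luminosity upper limit in Table 5 depends on the assumed $\Gamma=2$ power-law spectrum and the cluster distance. The Python sketch below is a rough back-of-the-envelope version, not the paper's likelihood-based pipeline, and will not exactly reproduce the tabulated values; all function names are ours.

```python
import math

KPC_CM = 3.0857e21        # 1 kpc in cm
MEV_ERG = 1.60218e-6      # 1 MeV in erg

def mean_photon_energy_pl2(e1_mev=300.0, e2_mev=5.0e5):
    """Mean photon energy (MeV) of a dN/dE ~ E^-2 spectrum between e1 and e2:
    <E> = integral(E * E^-2) / integral(E^-2) = ln(e2/e1) / (1/e1 - 1/e2)."""
    return math.log(e2_mev / e1_mev) / (1.0 / e1_mev - 1.0 / e2_mev)

def luminosity_upper_limit(photon_flux_ul, distance_kpc):
    """L_UL [erg/s] ~ 4 pi d^2 * F_ph * <E>, for a Gamma = 2 power law
    over 300 MeV to 500 GeV. photon_flux_ul in ph cm^-2 s^-1."""
    d_cm = distance_kpc * KPC_CM
    return 4.0 * math.pi * d_cm**2 * photon_flux_ul * mean_photon_energy_pl2() * MEV_ERG
```

Because the luminosity scales as the distance squared, nearby clusters such as NGC 6121 obtain far deeper luminosity limits than distant ones such as AM 1, as the table shows.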
Table 5: Parameters and data analysis results of 127 GCs not detected in the 4FGL. Notations same as Table 1.
Name | $\Gamma_{c}$ | [Fe/H] | $M_{*}$ | $u_{\text{MW}}$ | $u_{\text{Total}}$ | $R_{\odot}$ | Flux 95% UL | $L_{\gamma}$ 95% UL
---|---|---|---|---|---|---|---|---
| | | ($10^{5}M_{\odot}$) | (eV cm-3) | (eV cm-3) | (kpc) | (10-8 ph cm-2 s-1) | (1034 erg s-1)
2MS-GC02 | … | -1.08 | 0.26 | 2.56 | 17.79 | 4.90 | < 0.57 | < 2.33
AM 1 | 0.01 | -1.70 | 0.32 | 0.25 | 24.57 | 123.30 | < 0.04 | < 457.85
AM 4 | 0.00 | -1.30 | 0.01 | 0.27 | 1.77 | 32.20 | < 0.09 | < 38.62
Arp 2 | 0.01 | -1.75 | 0.37 | 0.28 | 2.47 | 28.60 | < 0.06 | < 27.28
BH 176 | 0.15 | 0.00 | 0.48 | 0.34 | 3.06 | 18.90 | < 0.16 | < 23.19
BH 261 | … | -1.30 | 0.31 | 3.29 | 11.51 | 6.50 | < 0.07 | < 1.54
Djorg 1 | 9.04 | -1.51 | 0.78 | 0.97 | 13.81 | 13.70 | < 0.16 | < 13.22
Djorg 2 | 46.40 | -0.65 | 0.56 | 4.01 | 34.01 | 6.30 | < 0.09 | < 1.72
E 3 | 0.08 | -0.83 | 0.03 | 0.46 | 0.99 | 8.10 | < 0.16 | < 3.92
Eridanus | 0.03 | -1.43 | 0.11 | 0.25 | 28.17 | 90.10 | < 0.07 | < 271.32
ESO280-SC06 | … | -1.80 | 0.26 | 0.33 | 4.54 | 21.40 | < 0.06 | < 17.45
ESO452-SC11 | 1.72 | -1.50 | 0.06 | 1.44 | 9.94 | 8.30 | < 0.10 | < 4.18
FSR 1735 | … | … | 0.58 | 1.47 | 173.86 | 9.80 | < 0.23 | < 5.79
HP 1 | 2.75 | -1.00 | 1.62 | 4.89 | 6.99 | 8.20 | < 0.17 | < 4.27
IC 1257 | … | -1.70 | 0.65 | 0.30 | 8.01 | 25.00 | < 0.21 | < 44.16
IC 1276 | 7.78 | -0.75 | 0.72 | 1.58 | 5.88 | 5.40 | < 0.17 | < 1.47
IC 4499 | 0.94 | -1.53 | 1.29 | 0.31 | 15.50 | 18.80 | < 0.09 | < 15.22
Ko 1 | … | … | … | 0.25 | 2.94 | 48.30 | < 0.03 | < 68.99
Ko 2 | … | … | … | 0.25 | 1.89 | 34.70 | < 0.02 | < 32.76
Liller 1 | 391.00 | -0.33 | 6.61 | 4.38 | >4.38 | 8.20 | < 0.56 | < 9.34
Lynga 7 | … | -1.01 | 1.02 | 1.24 | 17.13 | 8.00 | < 0.06 | < 2.39
NGC 1261 | 17.90 | -1.27 | 1.74 | 0.30 | 149.73 | 16.30 | < 0.03 | < 7.73
NGC 1851 | 1910.00 | -1.18 | 2.82 | 0.30 | 433.15 | 12.10 | < 0.14 | < 6.77
NGC 2298 | 5.37 | -1.92 | 0.54 | 0.30 | 18.54 | 10.80 | < 0.04 | < 8.44
NGC 2419 | 3.37 | -2.15 | 14.45 | 0.25 | 388.12 | 82.60 | < 0.08 | < 202.00
NGC 288 | 1.23 | -1.32 | 1.20 | 0.35 | 5.64 | 8.90 | < 0.02 | < 2.96
NGC 3201 | 8.45 | -1.59 | 1.45 | 0.52 | 5.73 | 4.90 | < 0.12 | < 0.90
NGC 362 | 569.00 | -1.26 | 3.39 | 0.42 | 184.01 | 8.60 | < 0.06 | < 3.07
NGC 4147 | 14.90 | -1.80 | 0.29 | 0.28 | 67.12 | 19.30 | < 0.03 | < 11.28
NGC 4372 | 2.28 | -2.17 | 2.19 | 0.65 | 5.13 | 5.80 | < 0.14 | < 1.62
NGC 4590 | 4.58 | -2.23 | 1.29 | 0.40 | 20.79 | 10.30 | < 0.03 | < 3.16
NGC 4833 | 25.00 | -1.85 | 2.04 | 0.67 | 17.40 | 6.60 | < 0.10 | < 1.45
NGC 5024 | 28.50 | -2.10 | 4.27 | 0.30 | 93.40 | 17.90 | < 0.06 | < 14.33
NGC 5053 | 0.15 | -2.27 | 0.62 | 0.31 | 4.20 | 17.40 | < 0.09 | < 10.59
NGC 5272 | 167.00 | -1.50 | 3.63 | 0.35 | 35.36 | 10.20 | < 0.04 | < 3.41
NGC 5286 | 569.00 | -1.69 | 3.80 | 0.47 | 308.67 | 11.70 | < 0.13 | < 6.82
NGC 5466 | 0.18 | -1.98 | 0.52 | 0.32 | 6.45 | 16.00 | < 0.02 | < 7.68
NGC 5634 | 24.50 | -1.88 | 2.00 | 0.29 | 84.71 | 25.20 | < 0.04 | < 18.05
NGC 5694 | 205.00 | -1.98 | 3.89 | 0.27 | 444.24 | 35.00 | < 0.10 | < 45.81
NGC 5824 | 1220.00 | -1.91 | 8.51 | 0.27 | 897.80 | 32.10 | < 0.03 | < 28.45
NGC 5897 | 1.16 | -1.90 | 1.55 | 0.49 | 10.12 | 12.50 | < 0.05 | < 5.04
NGC 5927 | 251.00 | -0.49 | 3.47 | 1.08 | 58.71 | 7.70 | < 0.05 | < 2.00
NGC 5946 | 122.00 | -1.29 | 1.15 | 0.85 | 50.13 | 10.60 | < 0.06 | < 3.34
NGC 5986 | 56.10 | -1.59 | 3.31 | 0.79 | 130.52 | 10.40 | < 0.03 | < 3.01
NGC 6101 | 1.21 | -1.98 | 1.29 | 0.38 | 28.76 | 15.40 | < 0.04 | < 7.32
NGC 6121 | 34.50 | -1.16 | 0.89 | 0.91 | 3.01 | 2.20 | < 0.07 | < 0.21
NGC 6144 | 3.61 | -1.76 | 0.48 | 1.14 | 11.98 | 8.90 | < 0.06 | < 2.38
NGC 6171 | 9.65 | -1.02 | 0.78 | 1.08 | 13.42 | 6.40 | < 0.06 | < 1.17
NGC 6205 | 89.90 | -1.53 | 4.47 | 0.46 | 48.74 | 7.10 | < 0.04 | < 1.57
NGC 6229 | 49.90 | -1.47 | 2.95 | 0.27 | 677.70 | 30.50 | < 0.01 | < 51.41
NGC 6235 | 7.11 | -1.28 | 1.12 | 0.92 | 18.12 | 11.50 | < 0.30 | < 6.36
NGC 6254 | 42.80 | -1.56 | 1.86 | 0.98 | 14.51 | 4.40 | < 0.05 | < 0.60
NGC 6256 | 242.00 | -1.02 | 1.07 | 1.66 | 53.00 | 10.30 | < 0.11 | < 3.63
NGC 6273 | 246.00 | -1.74 | 6.46 | 1.93 | 136.93 | 8.80 | < 0.06 | < 2.28
NGC 6284 | 797.00 | -1.26 | 2.40 | 0.56 | 184.37 | 15.30 | < 0.03 | < 6.50
NGC 6287 | 52.30 | -2.10 | 1.32 | 1.49 | 85.63 | 9.40 | < 0.21 | < 4.99
NGC 6293 | 1220.00 | -1.99 | 1.38 | 1.99 | 87.63 | 9.50 | < 0.12 | < 4.45
NGC 6325 | 189.00 | -1.25 | 0.72 | 2.42 | 82.73 | 7.80 | < 0.13 | < 1.89
NGC 6333 | 153.00 | -1.77 | 3.16 | 1.82 | 87.91 | 7.90 | < 0.04 | < 1.79
NGC 6342 | 83.70 | -0.55 | 0.60 | 1.82 | 38.20 | 8.50 | < 0.16 | < 3.06
NGC 6352 | 12.50 | -0.64 | 0.55 | 1.40 | 6.23 | 5.60 | < 0.12 | < 0.96
NGC 6355 | 130.00 | -1.37 | 1.17 | 2.74 | 117.16 | 9.20 | < 0.13 | < 3.13
NGC 6356 | 110.00 | -0.40 | 3.80 | 0.56 | 203.09 | 15.10 | < 0.06 | < 6.52
NGC 6362 | 3.57 | -0.99 | 1.07 | 0.75 | 8.27 | 7.60 | < 0.03 | < 1.69
Table 6: continued
Name | $\Gamma_{c}$ | [Fe/H] | $M_{*}$ | $u_{\text{MW}}$ | $u_{\text{Total}}$ | $R_{\odot}$ | Flux 95% UL | $L_{\gamma}$ 95% UL
---|---|---|---|---|---|---|---|---
| | | ($10^{5}M_{\odot}$) | (eV cm-3) | (eV cm-3) | (kpc) | (10-8 ph cm-2 s-1) | (1034 erg s-1)
NGC 6366 | 4.06 | -0.59 | 0.59 | 1.04 | 2.26 | 3.50 | < 0.11 | < 0.68
NGC 6380 | 96.20 | -0.75 | 3.02 | 1.64 | 97.36 | 10.90 | < 0.12 | < 6.66
NGC 6401 | 60.20 | -1.02 | 2.75 | 1.80 | 22.57 | 10.60 | < 0.11 | < 3.65
NGC 6426 | 2.68 | -2.15 | 0.63 | 0.32 | 29.15 | 20.60 | < 0.04 | < 11.25
NGC 6453 | 183.00 | -1.50 | 2.34 | 1.57 | 210.77 | 11.60 | < 0.12 | < 6.25
NGC 6496 | 2.02 | -0.46 | 0.83 | 0.94 | 39.16 | 11.30 | < 0.10 | < 3.58
NGC 6517 | 661.00 | -1.23 | 3.02 | 1.00 | 419.34 | 10.60 | < 0.22 | < 7.06
NGC 6522 | 467.00 | -1.34 | 2.14 | 4.15 | 64.33 | 7.70 | < 0.08 | < 2.51
NGC 6535 | 1.42 | -1.79 | 0.13 | 1.11 | 6.87 | 6.80 | < 0.27 | < 2.11
NGC 6539 | 271.00 | -0.63 | 2.40 | 1.34 | 38.89 | 7.80 | < 0.24 | < 2.39
NGC 6540 | 263.00 | -1.35 | 0.43 | 2.62 | >2.62 | 5.30 | < 0.13 | < 1.42
NGC 6544 | 462.00 | -1.40 | 1.15 | 1.43 | 22.80 | 3.00 | < 0.13 | < 0.37
NGC 6553 | 103.00 | -0.18 | 3.02 | 3.40 | 66.75 | 6.00 | < 0.06 | < 1.38
NGC 6558 | 109.00 | -1.32 | 0.39 | 3.23 | 7.50 | 7.40 | < 0.07 | < 1.53
NGC 6569 | 72.80 | -0.76 | 2.40 | 1.49 | 169.48 | 10.90 | < 0.08 | < 4.65
NGC 6584 | 22.00 | -1.50 | 1.17 | 0.56 | 117.73 | 13.50 | < 0.13 | < 7.88
NGC 6624 | 1080.00 | -0.44 | 0.62 | 2.37 | 79.61 | 7.90 | < 0.07 | < 2.21
NGC 6626 | 688.00 | -1.32 | 2.82 | 2.41 | 27.22 | 5.50 | < 0.08 | < 1.17
NGC 6637 | 92.40 | -0.64 | 1.48 | 1.78 | 86.29 | 8.80 | < 0.09 | < 5.98
NGC 6638 | 103.00 | -0.95 | 1.74 | 1.60 | 143.62 | 9.40 | < 0.07 | < 3.50
NGC 6642 | 112.00 | -1.26 | 0.25 | 1.97 | 47.34 | 8.10 | < 0.04 | < 2.00
NGC 6656 | 92.40 | -1.70 | 4.07 | 1.26 | 12.92 | 3.20 | < 0.05 | < 0.35
NGC 6681 | 964.00 | -1.62 | 1.12 | 1.33 | 74.61 | 9.00 | < 0.12 | < 3.64
NGC 6712 | 33.40 | -1.02 | 1.20 | 1.44 | 31.07 | 6.90 | < 0.19 | < 2.60
NGC 6715 | 2030.00 | -1.49 | 15.85 | 0.29 | 765.61 | 26.50 | < 0.09 | < 27.82
NGC 6723 | 13.90 | -1.10 | 1.74 | 1.18 | 31.53 | 8.70 | < 0.11 | < 3.51
NGC 6749 | 38.50 | -1.60 | 0.79 | 1.17 | 21.91 | 7.90 | < 0.09 | < 1.83
NGC 6760 | 77.00 | -0.40 | 2.57 | 1.11 | 45.55 | 7.40 | < 0.10 | < 2.01
NGC 6779 | 25.90 | -1.98 | 1.66 | 0.48 | 40.35 | 9.40 | < 0.15 | < 2.52
NGC 6809 | 3.65 | -1.94 | 1.86 | 1.01 | 7.99 | 5.40 | < 0.10 | < 1.09
NGC 6864 | 370.00 | -1.29 | 3.98 | 0.32 | 663.99 | 20.90 | < 0.09 | < 14.80
NGC 6934 | 31.80 | -1.47 | 1.41 | 0.34 | 105.49 | 15.60 | < 0.05 | < 8.08
NGC 6981 | 4.12 | -1.42 | 0.69 | 0.34 | 40.01 | 17.00 | < 0.06 | < 8.68
NGC 7006 | 6.46 | -1.52 | 1.48 | 0.25 | 316.89 | 41.20 | < 0.07 | < 47.47
NGC 7089 | 441.00 | -1.65 | 5.01 | 0.39 | 191.32 | 11.50 | < 0.02 | < 4.30
NGC 7099 | 366.00 | -2.27 | 1.26 | 0.50 | 47.69 | 8.10 | < 0.14 | < 2.71
NGC 7492 | 0.86 | -1.78 | 0.29 | 0.25 | 8.61 | 26.30 | < 0.09 | < 34.99
Pal 1 | 0.05 | -0.65 | 0.02 | 0.30 | 2.82 | 11.10 | < 0.04 | < 2.79
Pal 10 | 75.80 | -0.10 | 0.55 | 0.90 | 11.97 | 5.90 | < 0.08 | < 1.05
Pal 11 | 12.40 | -0.40 | 0.11 | 0.48 | 14.90 | 13.40 | < 0.05 | < 7.89
Pal 12 | 0.54 | -0.85 | 0.06 | 0.32 | 1.41 | 19.00 | < 0.04 | < 11.78
Pal 13 | 0.00 | -1.88 | 0.03 | 0.27 | 13.18 | 26.00 | < 0.07 | < 20.02
Pal 14 | 0.00 | -1.62 | 0.15 | 0.25 | 3.18 | 76.50 | < 0.06 | < 263.46
Pal 15 | 0.02 | -2.07 | 0.42 | 0.25 | 7.18 | 45.10 | < 0.06 | < 57.78
Pal 2 | 457.00 | -1.42 | 2.29 | 0.25 | 323.49 | 27.20 | < 0.07 | < 21.58
Pal 3 | 0.03 | -1.63 | 0.23 | 0.25 | 23.67 | 92.50 | < 0.10 | < 319.08
Pal 4 | 0.03 | -1.41 | 0.28 | 0.25 | 51.34 | 108.70 | < 0.04 | < 352.19
Pal 5 | 0.01 | -1.41 | 0.18 | 0.30 | 1.12 | 23.20 | < 0.07 | < 15.55
Pal 6 | 40.20 | -0.91 | 1.00 | 4.07 | 23.00 | 5.80 | < 0.23 | < 2.53
Pal 8 | 3.03 | -0.37 | 0.58 | 0.79 | 25.72 | 12.80 | < 0.15 | < 5.03
Pyxis | … | -1.20 | 0.23 | 0.25 | >0.25 | 39.40 | < 0.17 | < 67.61
Rup 106 | 0.36 | -1.68 | 0.35 | 0.29 | 16.78 | 21.20 | < 0.08 | < 18.51
Terzan 10 | 4430.00 | -1.00 | 2.45 | 3.42 | 10.99 | 5.80 | < 0.03 | < 0.94
Terzan 12 | 35.80 | -0.50 | 0.01 | 2.34 | 6.57 | 4.80 | < 0.12 | < 0.85
Terzan 3 | 0.89 | -0.74 | 0.58 | 1.36 | 4.20 | 8.20 | < 0.11 | < 3.29
Terzan 4 | … | -1.41 | 0.76 | 4.12 | 5.07 | 7.20 | < 0.23 | < 1.94
Terzan 6 | 1300.00 | -0.56 | 0.89 | 4.23 | 298.38 | 6.80 | < 0.13 | < 2.35
Terzan 7 | 0.62 | -0.32 | 0.19 | 0.31 | 9.23 | 22.80 | < 0.04 | < 15.89
Terzan 8 | 0.08 | -2.16 | 0.54 | 0.29 | 6.49 | 26.30 | < 0.13 | < 24.96
Terzan 9 | 2.79 | -1.05 | 0.04 | 4.88 | 7.51 | 7.10 | < 0.10 | < 2.67
Ton 2 | 6.45 | -0.70 | 0.26 | 2.51 | 11.62 | 8.20 | < 0.13 | < 3.40
UKS 1 | 100.00 | -0.64 | 0.78 | 4.47 | >4.47 | 7.80 | < 0.45 | < 6.28
Whiting 1 | … | -0.70 | 0.02 | 0.25 | 10.69 | 30.10 | < 0.05 | < 28.00
## Appendix B Kendall $\tau$ test
During the MC simulations that determine the significance of correlations
between $L_{\gamma}$ and other observables, the null hypothesis samples are
generated by repetitively exchanging $L_{\gamma}$ of two GCs while keeping
their location fixed. A large number of exchanges is needed to fully randomize
the data and fulfill the requirement of the null hypothesis. A previous study
(Ackermann et al., 2012) that used this technique with star forming galaxies
found that 1000 exchanges were enough in their case. In this section, we test
different numbers of exchanges and their impact on the significance of the
correlations. Figure 14 shows the calculated significance of each correlation
when different numbers of exchanges are used to generate null hypothesis
samples. We find that the significance typically increases with the number of
exchanges up to some large exchange number. This is expected, since a small
number of exchanges generates samples too close to the real data, which is
unsuitable for the null hypothesis. The dependence of the significance on the
number of exchanges disappears beyond a sufficiently large number of
exchanges, when the samples are fully randomized and properly represent the
null hypothesis. We find that about $10^{4}$ exchanges are needed to generate
the null hypothesis samples and achieve converged significance for all four
correlations estimated.
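The exchange-based construction of null samples can be sketched as follows. This Python version is illustrative: it uses the plain Kendall $\tau$, which applies to detections only, whereas the paper's generalized $\tau$ additionally incorporates the $L_{\gamma}$ upper limits; all function names are ours.

```python
import random

def kendall_tau(x, y):
    """Plain Kendall tau coefficient (detections only; the paper uses a
    generalized version that also handles upper limits)."""
    n, s = len(x), 0
    for i in range(n):
        for j in range(i + 1, n):
            sx = (x[i] > x[j]) - (x[i] < x[j])
            sy = (y[i] > y[j]) - (y[i] < y[j])
            s += sx * sy
    return 2.0 * s / (n * (n - 1))

def null_sample(l_gamma, n_exchanges, rng):
    """One null-hypothesis sample: repeatedly exchange the L_gamma values of
    two randomly chosen GCs while the other observables stay fixed."""
    l = list(l_gamma)
    n = len(l)
    for _ in range(n_exchanges):
        i, j = rng.randrange(n), rng.randrange(n)
        l[i], l[j] = l[j], l[i]
    return l

def correlation_p_value(x, y, n_null=500, n_exchanges=10_000, seed=1):
    """Fraction of null samples whose |tau| is at least as large as observed."""
    rng = random.Random(seed)
    tau_obs = abs(kendall_tau(x, y))
    hits = sum(abs(kendall_tau(null_sample(x, n_exchanges, rng), y)) >= tau_obs
               for _ in range(n_null))
    return hits / n_null
```

With too few exchanges the null samples stay close to the observed ordering and the inferred significance is biased, which is the convergence behaviour tested in this appendix.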
Figure 14: Tests of the number of $L_{\gamma}$ exchanges during the MC
simulations. The significance of correlations increases with the number of
exchanges until the number is large enough to generate fully randomized
samples. For the four observables investigated, the typical number of
exchanges needed to achieve converged significance is $\sim 10^{4}$.
## Appendix C Spectral energy distributions
Figure 15 shows the best-fit spectra for 30 $\gamma$-ray-detected GCs from the
individual fits. We report the $\gamma$-ray fluxes of the GCs from 300 MeV to
500 GeV in 9 logarithmic energy bins (black dots). The spectra are sorted by
their spectral curvature (TS${}_{\rm curve}$). For five of them (2MS-GC01, NGC 1904, NGC
6397, NGC 7078, and NGC 5904), we report the best-fit spectra from a power law
since they have TS${}_{\rm curve}<4$. For the rest, we present the best-fit
spectra using a PLE function. In general, we find the cut-off energy
$E_{\mathrm{cut}}$ to be around $\sim 1$ GeV.
We also performed a universal fit with the sample of detected GCs. Figure 17
shows the best-fit spectra from the universal two-component fits described in
Section 4.2. This comprises a curved CR component and a power law IC
component. The shapes of the two components are tied across all GCs, based on
the parameters that minimize the $\chi^{2}_{\mathrm{total}}$ in the universal
fit. The normalization factors of the two-component model are allowed to
float. We report 19 GCs with both components detected (nonzero normalization
factors). Four GCs are fitted with the IC component only; the rest are fitted
with the CR component only. For GCs with only one component detected, we
include the 95% upper limit on the normalization of the non-detected
component.
Figure 15: Best-fit spectra and 1$\sigma$ uncertainty (blue line and shaded
area) for 30 $\gamma$-ray-detected GCs from the individual fits. The bin-by-bin
fluxes from the Fermi data analysis are indicated by the black dots with error
bars. The spectra are sorted by their curvature. For GCs with TS${}_{\rm
curve}<4$, we report their best-fit PL spectrum. For the rest, the best-fit
PLE spectra are shown.
Figure 16: continued
Figure 17: Best-fit spectra and 1$\sigma$ uncertainty for 30 $\gamma$-ray-
detected GCs from the universal two-component fits (see section 4.2). The
best-fit parameters for the CR component (red line with shaded band) are
$\Gamma_{1}=0.88\pm 0.44$ and $\log(E_{\mathrm{cut}}/\mathrm{MeV})=3.28\pm
0.16$. The best-fit parameter for the PL component (blue line with shaded
band) is $\Gamma_{2}=2.79\pm 0.25$. The black dashed line shows the combined
CR and PL components. When only one component is detected, the 95% C.L. upper
limit for the CR (PL) component is shown by the red (blue) shaded area.
Figure 18: continued
## Appendix D Systematic uncertainty on electron/positron luminosity
In Section 4.3, we discussed the implication for the $e^{\pm}$ injection
efficiency from modelling the universal GC spectra. We argued that the
$e^{\pm}$ luminosity from the MSPs in the GCs can be approximated by the
luminosity of the IC component in our two-component model.
Figure 19 plots the $L_{\mathrm{IC}}/L_{\mathrm{CR}}$ against the total
radiation field $u_{\mathrm{Total}}$ of the GCs (left panel) and their
distance $R_{\odot}$ from the Sun (right panel). It is shown that the ratios
of $L_{\mathrm{IC}}/L_{\mathrm{CR}}$ are $\sim O(1)$. In both cases, the
$L_{\mathrm{IC}}/L_{\mathrm{CR}}$ are scattered and show no obvious
correlations with $u_{\mathrm{Total}}$ or $R_{\odot}$. This supports the idea
that IC is the leading energy loss process for $e^{\pm}$ injected by MSPs
diffusing in the GCs. The propagating $e^{\pm}$ eventually lose all their
energy when interacting with the background photon field (IC) or magnetic
field (synchrotron radiation). If the synchrotron energy loss were comparable
with the IC loss, $L_{\mathrm{IC}}$ would be suppressed for small
$u_{\mathrm{Total}}$; no such suppression is observed.
For a similar reason, we can rule out large uncertainties due to the point
spread function effects of the Fermi-LAT. In the present work, we are only
considering the point-source luminosity of the GCs. For GCs closer to the Sun,
the GC spatial extensions correspond to larger angular separations. The
$e^{\pm}$ injected by MSPs could leak and carry away energy from the GCs.
Should that happen, we may find a positive correlation between
$L_{\mathrm{IC}}/L_{\mathrm{CR}}$ and $R_{\odot}$, since the apertures would
only capture part of the IC emission from the $e^{\pm}$. Within uncertainties, this is not
observed.
Given the above, it is a good approximation to assume that
$\dfrac{f_{e^{\pm}}}{f_{\gamma}}\simeq\dfrac{L_{\mathrm{IC}}}{L_{\mathrm{CR}}},$
(20)
as we have done in this study.
Figure 19: Distribution of the ratio $L_{\mathrm{IC}}/L_{\mathrm{CR}}$
compared with the total photon field energy density $u_{\mathrm{Total}}$ (left
panel) and the distance from the Sun $R_{\odot}$ (right panel). The ratios are
shown to be $\sim$ O(1) among GCs, with uncertainties presented. In both
cases, the ratios show no obvious correlations with the other parameters.
# Choquet-Sugeno-like operator based on relation
and conditional aggregation operators
Michał Boczek <EMAIL_ADDRESS>
Ondrej Hutník <EMAIL_ADDRESS>
Marek Kaluszka <EMAIL_ADDRESS>
Institute of Mathematics, Lodz University of Technology, 90-924 Lodz, Poland
Institute of Mathematics, Pavol Jozef Šafárik University in Košice, 040-01 Košice, Slovakia
###### Abstract
We introduce a Choquet-Sugeno-like operator generalizing many operators for
bounded functions and monotone measures from the literature, e.g., Sugeno-like
operator, Lovász and Owen measure extensions,
$\displaystyle\mathrm{F}$-decomposition integral with respect to a partition
decomposition system, and others. The new operator is based on the concepts of
dependence relation and conditional aggregation operators, but it does not
depend on $\displaystyle t$-level sets. We also provide conditions for which
the Choquet-Sugeno-like operator coincides with some Choquet-like integrals
defined on finite spaces and appeared recently in the literature, e.g. reverse
Choquet integral, $\displaystyle d$-Choquet integral,
$\displaystyle\mathrm{F}$-based discrete Choquet-like integral, some version
of $\displaystyle C_{\mathrm{F}_{1}\mathrm{F}_{2}}$-integral,
$\displaystyle\mathrm{C}\mathrm{C}$-integrals (or Choquet-like Copula-based
integral) and discrete inclusion-exclusion integral. Some basic properties of
the Choquet-Sugeno-like operator are studied.
###### keywords:
Choquet integral; Sugeno integral; Conditional aggregation operator; Monotone
measure; Decomposition integral; Möbius transform
## 1 Introduction
The origin of investigation of nonadditive integrals goes back to the works of
Vitali, Choquet, Shilkret and Sugeno. Their works have been extensively
studied and several generalizations have been proposed in recent years,
grouping the mentioned integrals into two main classes: those depending on
$\displaystyle t$-level sets and those independent of them. The first group includes
universal integral [23], upper and lower $\displaystyle n$-Sugeno integral
[3], whereas the second group includes copula-based integrals [23, 26],
$\displaystyle\mathrm{F}$-decomposition integrals [20] and inclusion-exclusion
integral [17]. Obviously there are some functionals belonging to both groups
such as seminormed fuzzy integrals [36] and the prominent nonadditive
integrals of Choquet [11] and Sugeno [37]. Especially, generalizations of the
discrete Choquet integral have recently attracted the greatest interest. The
discrete Choquet integral can be equivalently expressed as follows
$\displaystyle\displaystyle(C)\int\limits_{X}f\,{\mathrm{d}}\mu=\sum_{i=1}^{n}f_{(i)}\cdot(\mu(B_{(i)})-\mu(B_{(i+1)})),$ (1)
$\displaystyle\displaystyle=\sum_{i=1}^{n}(f_{(i)}-f_{(i-1)})\cdot\mu(B_{(i)}),$ (2)
$\displaystyle\displaystyle=\sum_{i=1}^{n}\bigl{(}f_{(i)}\cdot\mu(B_{(i)})-f_{(i-1)}\cdot\mu(B_{(i)})\bigr{)},$ (3)
$\displaystyle\displaystyle=\sum_{D\in 2^{X}\setminus\\{\emptyset\\}}\textsf{Mob}_{\mu}(D)\cdot\min_{i\in D}f(i),$ (4)
where $\displaystyle X=\\{1,2,\dots,n\\},$ $\displaystyle\mu$ is a monotone
measure on $\displaystyle 2^{X},$ the integrand $\displaystyle f$ is a vector with
nonnegative entries, $\displaystyle(\cdot)\colon X\to X$ is a permutation such
that $\displaystyle 0=f_{(0)}\leqslant f_{(1)}\leqslant\ldots\leqslant
f_{(n)},$ and $\displaystyle B_{(i)}=\\{(i),\ldots,(n)\\}$ for $\displaystyle
i\in\\{1,2,\dots,n\\}$ with $\displaystyle B_{(n+1)}=\emptyset.$ Moreover,
$\displaystyle\textsf{Mob}_{\mu}$ is the Möbius transform of
$\displaystyle\mu.$ Replacing the product operation in the standard form (1),
the equivalent form (2), the expanded form (3), or the Möbius transform form (4) by
other fusion functions with appropriate properties, one obtains
aggregation-like functions providing various generalizations of the
discrete Choquet integral. For the present state of the art of such
generalizations we refer to [12].
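The equivalence of the standard form (1) and the Möbius transform form (4) can also be verified numerically. The following Python sketch (not part of the original development; all function names are illustrative) implements both forms for a small symmetric monotone measure on $\displaystyle X=\\{1,2,3\\}$:

```python
from itertools import combinations

def nonempty_subsets(X):
    """All nonempty subsets of X as frozensets."""
    return [frozenset(s) for r in range(1, len(X) + 1)
            for s in combinations(X, r)]

def choquet_standard(f, mu, X):
    """Discrete Choquet integral via the standard form (1)."""
    order = sorted(X, key=lambda i: f[i])            # permutation (.)
    B = [frozenset(order[k:]) for k in range(len(X))]
    B.append(frozenset())                            # B_(n+1) = empty set
    return sum(f[order[k]] * (mu[B[k]] - mu[B[k + 1]])
               for k in range(len(X)))

def mobius(mu, X):
    """Moebius transform: Mob(D) = sum over C subset of D of
    (-1)**(|D| - |C|) * mu(C); mu(emptyset) = 0 contributes nothing."""
    return {D: sum((-1) ** (len(D) - len(C)) * mu[C]
                   for C in nonempty_subsets(X) if C <= D)
            for D in nonempty_subsets(X)}

def choquet_mobius(f, mu, X):
    """Discrete Choquet integral via the Moebius form (4)."""
    mob = mobius(mu, X)
    return sum(mob[D] * min(f[i] for i in D) for D in nonempty_subsets(X))

X = [1, 2, 3]
mu = {D: (len(D) / 3) ** 2 for D in nonempty_subsets(X)}  # symmetric measure
mu[frozenset()] = 0.0
f = {1: 0.4, 2: 0.2, 3: 0.3}
c1 = choquet_standard(f, mu, X)
c4 = choquet_mobius(f, mu, X)
```

Both forms agree up to floating-point rounding; for this choice of $\displaystyle\mu$ and $\displaystyle f$ the common value is $\displaystyle 2.3/9.$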
As far as we know, there is no unified setting for a common generalization of
all the expressions (1)–(4) of the discrete Choquet integral. In the recent paper
[8] the authors write in the Conclusion: “(…) the possible extension of our
generalization idea to Choquet integrals expressed in terms of Möbius
transform does not seem so easily achievable, as it is not clear how the
information provided by a restricted dissimilarity function can be included in
such representation. Nevertheless, it is worth to mention that, in any case,
the equivalence between the different possible representations of the standard
Choquet integral will be most probably lost in our more general setting.”
Therefore, in this paper we aim to cover several generalizations of expressions
(1)–(4) of the discrete Choquet integral in one general formula. For this
purpose we introduce a Choquet-Sugeno-like operator (see Definition 3.2),
independent of $\displaystyle t$-level sets but depending on a conditional
aggregation operator and a relation between the sets in a collection. These are
two novel ingredients in comparison with the known approaches in the
literature. Thanks to these new elements we can recover many well-known and
well-studied operators, as well as several new operators not yet examined in the
literature. A detailed discussion is included in Examples 3.4–3.9 and in Section
4, where we also give relationships between the existing generalizations of the
discrete Choquet integral and our Choquet-Sugeno-like operator. As a by-
product, we join the works that simultaneously generalize the Sugeno integral
and the Choquet integral on finite sets, similarly to [20, 26, 31,
39].
The paper is organized as follows. In the next section we provide the basic
notations and definitions we work with. In Section 3 we define the
Choquet-Sugeno-like operator and show that it covers several operators known in
the literature. In Section 4 we provide conditions under which the
Choquet-Sugeno-like operator coincides with the operators extending the
formulas (1)–(4). In Section 5 we examine some basic properties of the
Choquet-Sugeno-like operator such as monotonicity, homogeneity, subadditivity,
convexity and idempotency.
## 2 Basic notations
Let $\displaystyle(X,\Sigma)$ be a measurable space, where
$\displaystyle\Sigma$ is a $\displaystyle\sigma$-algebra of subsets of
$\displaystyle X.$ In what follows,
$\displaystyle\Sigma_{0}=\Sigma\setminus\\{\emptyset\\}.$ A monotone or
nonadditive measure on $\displaystyle\Sigma$ is a finite nondecreasing set
function $\displaystyle\mu\colon\Sigma\to[0,\infty),$ i.e.,
$\displaystyle\mu(C)\leqslant\mu(D)$ whenever $\displaystyle C\subset D$ with
$\displaystyle\mu(\emptyset)=0,$ where “$\displaystyle\subset$” and
“$\displaystyle\subseteq$” denote the proper inclusion and improper inclusion,
respectively. We denote the class of all monotone measures on
$\displaystyle(X,\Sigma)$ by $\displaystyle\mathbf{M}.$ If
$\displaystyle\mu(X)=1,$ then $\displaystyle\mu$ is called a capacity and
$\displaystyle\mathbf{M}^{1}$ denotes the set of all capacities. For
$\displaystyle\mu,\nu\in\mathbf{M},$ we write $\displaystyle\mu\leqslant\nu$
whenever $\displaystyle\mu(D)\leqslant\nu(D)$ for any $\displaystyle
D\in\Sigma.$ For $\displaystyle X=\\{1,2,\ldots,n\\},$ we say that
$\displaystyle\mu\in\mathbf{M}$ is symmetric if the condition
$\displaystyle|C|=|D|$ implies $\displaystyle\mu(C)=\mu(D),$ where
$\displaystyle|E|$ is the cardinality of a set $\displaystyle E.$
Additionally, a capacity $\displaystyle\mu$ is symmetric whenever
$\displaystyle\mu(\cdot)=\nu(\cdot)/\nu(X)$ with
$\displaystyle\nu\in\mathbf{M}$ being symmetric. Denote by
$\displaystyle\widehat{\mathbf{M}}$ the family of all set functions
$\displaystyle\widehat{\mu}\colon\Sigma\to(-\infty,\infty)$ with
$\displaystyle\widehat{\mu}(\emptyset)=0.$
By $\displaystyle\mathbf{F}$ denote the set of all
$\displaystyle\Sigma$-measurable (measurable, for short) nonnegative bounded
functions on $\displaystyle X$ and
$\displaystyle\mathbf{F}^{1}=\\{f\in\mathbf{F}\colon\textstyle{\sup_{x\in
X}f(x)}\leqslant 1\\}.$ We write $\displaystyle f\leqslant g$ if
$\displaystyle f(x)\leqslant g(x)$ for all $\displaystyle x\in X.$ Increasing
[resp. nondecreasing] function $\displaystyle f\in\mathbf{F}$ means that
$\displaystyle f(x)<f(y)$ [resp. $\displaystyle f(x)\leqslant f(y)$] whenever
$\displaystyle x<y.$ We say that a function $\displaystyle f\in\mathbf{F}$ is
subadditive if $\displaystyle f(x+y)\leqslant f(x)+f(y)$ for any
$\displaystyle x,y$ such that $\displaystyle x+y\in X.$ For $\displaystyle
t\geqslant 0,$ define the $\displaystyle t$-level set of a function
$\displaystyle f\in\mathbf{F}$ as $\displaystyle\\{f\geqslant t\\}=\\{x\in
X\colon f(x)\geqslant t\\}.$
Let $\displaystyle D_{1},D_{2}\subseteq(-\infty,\infty).$ We say that an
operation $\displaystyle\circ\colon D_{1}\times D_{2}\to[0,\infty)$ is
nondecreasing if $\displaystyle a\circ b\leqslant c\circ d$ for any
$\displaystyle(a,b),(c,d)\in D_{1}\times D_{2}$ such that $\displaystyle a\leqslant
c$ and $\displaystyle b\leqslant d.$ We say that $\displaystyle x\mapsto
x\circ b$ is subadditive for any $\displaystyle b$ if $\displaystyle(x+y)\circ
b\leqslant x\circ b+y\circ b$ for all admissible $\displaystyle x,y.$ In a similar way we
define the subadditivity of $\displaystyle x\mapsto a\circ x.$
The set $\displaystyle\\{1,\ldots,k\\}$ is denoted by $\displaystyle[k].$ Let
$\displaystyle\mathbbm{1}_{D}$ denote the indicator function of a set
$\displaystyle D,$ that is, $\displaystyle\mathbbm{1}_{D}(x)=1$ if
$\displaystyle x\in D$ and $\displaystyle\mathbbm{1}_{D}(x)=0$ otherwise, and
$\displaystyle\mathbbm{1}(S)$ denote the indicator function of a logical
sentence $\displaystyle S,$ that is, $\displaystyle\mathbbm{1}(S)=1$ if
$\displaystyle S$ is true and $\displaystyle\mathbbm{1}(S)=0$ otherwise. For
any $\displaystyle a,b\in[0,\infty),$ let $\displaystyle a\wedge
b=\min\\{a,b\\}$ and $\displaystyle a\vee b=\max\\{a,b\\}$ as well as
$\displaystyle(x)_{+}=x\vee 0$ for any $\displaystyle x\in(-\infty,\infty).$
Moreover, $\displaystyle{\mathbb{N}}=\\{1,2,\ldots\\}$ denotes the set of
natural numbers and $\displaystyle 0_{D}(x)=0$ for all $\displaystyle x\in D,$
where $\displaystyle D\in\Sigma.$ We adopt the usual conventions:
$\displaystyle\textstyle{\sum_{k\in\emptyset}f(k)=0}$ and
$\displaystyle\textstyle{\sum_{i=j}^{k}a_{i}=0}$ for $\displaystyle k<j.$
A crucial concept used in this paper is an extension of aggregation functions
introduced recently in [2].
###### Definition 2.1.
A map $\displaystyle\mathsf{A}(\cdot|D)\colon\mathbf{F}\to[0,\infty)$ is said
to be a conditional aggregation operator with respect to $\displaystyle
D\in\Sigma_{0}$ if it satisfies the following conditions:
1. $\displaystyle(C1)$
$\displaystyle\mathsf{A}(f|D)\leqslant\mathsf{A}(g|D)$ for any $\displaystyle
f,g\in\mathbf{F}$ such that $\displaystyle f(x)\leqslant g(x)$ for all
$\displaystyle x\in D$;
2. $\displaystyle(C2)$
$\displaystyle\mathsf{A}(\mathbbm{1}_{D^{c}}|D)=0.$
The nonempty set $\displaystyle D$ will be called a conditional set.
From Definition 2.1 it follows that
$\displaystyle\mathsf{A}(f|D)=\mathsf{A}(f\mathbbm{1}_{D}|D)$ for any
$\displaystyle f\in\mathbf{F},$ so the value $\displaystyle\mathsf{A}(f|D)$
can be interpreted as “an aggregated value of $\displaystyle f$ on
$\displaystyle D$”. In other words, the conditional aggregation operator
depends only on the values of the considered function on the conditional
set. The conditional aggregation operator extends the concept of the aggregation
operator $\displaystyle\mathsf{Agg}(\cdot)$ presented by Calvo et al. [9] to
all measurable functions. For some examples and methods of construction of
conditional aggregation operators we refer to [2]. (Observe that in [2] the
conditional aggregation operator can take infinite values as well; to avoid
unnecessary complications, we consider here only finite-valued conditional
aggregation operators.) By
$\displaystyle{\mathscr{A}}=\\{\mathsf{A}(\cdot|D)\colon D\in\Sigma\\}$ we
denote a family of conditional aggregation operators (FCA in short). In order
for the FCA to be well defined for all the sets from $\displaystyle\Sigma,$
from now on we consider the conditional aggregation operators with the
additional assumption $\displaystyle\mathsf{A}(\cdot|\emptyset)=0.$ Several
important FCAs $\displaystyle{\mathscr{A}}$ will be highlighted using the
superscript such as
$\displaystyle{\mathscr{A}}^{\inf}=\\{\mathsf{A}^{\inf}(\cdot|D)\colon
D\in\Sigma\\}$ and
$\displaystyle{\mathscr{A}}^{\sup}=\\{\mathsf{A}^{\sup}(\cdot|D)\colon
D\in\Sigma\\},$ where
$\displaystyle\mathsf{A}^{\inf}(f|D)=\textstyle{\inf_{x\in D}f(x)}$ and
$\displaystyle\mathsf{A}^{\sup}(f|D)=\textstyle{\sup_{x\in D}f(x)}$ for any
$\displaystyle D\in\Sigma_{0}.$ To avoid ambiguity in notation,
we additionally assume that
$\displaystyle\mathsf{A}^{\inf}(f|\emptyset)=0=\mathsf{A}^{\sup}(f|\emptyset)$
when necessary.
## 3 Choquet-Sugeno-like operator and its several special cases
In this section we introduce an operator based on two families of
conditional aggregation operators and on a relation between the conditional sets;
the latter is the main ingredient providing new possibilities. Conditional sets
will be chosen from a collection being any subset
$\displaystyle{\mathcal{D}}\subseteq\Sigma_{0}.$ A nonempty family
$\displaystyle{\mathcal{H}}$ of collections will be called a decomposition
system, i.e., $\displaystyle{\mathcal{H}}\subseteq
2^{\Sigma_{0}}\setminus\\{\emptyset\\}$ (cf. [20]). Several decomposition
systems often used in the paper are summarized in the following example.
###### Example 3.1.
1. (a)
$\displaystyle{\mathcal{H}}_{\text{one}}=\\{\Sigma_{0}\\}$ is a singleton
consisting of the maximal collection;
2. (b)
$\displaystyle{\mathcal{H}}_{\text{part}}=\\{{\mathcal{P}}\colon{\mathcal{P}}\text{
is a~{}finite partition of }X\\}.$ We say that $\displaystyle{\mathcal{P}}$ is
a finite partition of $\displaystyle X$ if
$\displaystyle{\mathcal{P}}=\\{D_{1},\ldots,D_{n}\\}$ such that
$\displaystyle\textstyle{\bigcup_{i=1}^{n}D_{i}=X}$ and $\displaystyle
D_{i}\cap D_{j}=\emptyset$ for any $\displaystyle i\neq j$ and $\displaystyle
D_{i}\in\Sigma_{0}$ for any $\displaystyle i$;
3. (c)
$\displaystyle{\mathcal{H}}_{\text{chain}}=\\{{\mathcal{D}}_{l}\colon
l\in{\mathbb{N}}\\},$ where $\displaystyle{\mathcal{D}}_{l}$ is a chain of
length $\displaystyle l$ defined as
$\displaystyle{\mathcal{D}}_{l}=\\{D_{l},\ldots,D_{1}\\},$ where
$\displaystyle D_{l}\subset\ldots\subset D_{1}$ for any $\displaystyle
D_{i}\in\Sigma_{0}$ and all $\displaystyle i\in[l].$ For instance, for
$\displaystyle X=[2]$ we have
$\displaystyle{\mathcal{H}}_{\text{chain}}=\big{\\{}\\{\\{1\\}\\},\\{\\{2\\}\\},\\{\\{1,2\\}\\},\\{\\{1\\},\\{1,2\\}\\},\\{\\{2\\},\\{1,2\\}\\}\big{\\}}.$
We say that $\displaystyle{\mathcal{R}}$ is a relation on
$\displaystyle{\mathcal{D}}\cup\\{\emptyset\\}$ if
$\displaystyle{\mathcal{R}}\subseteq({\mathcal{D}}\cup\\{\emptyset\\})\times({\mathcal{D}}\cup\\{\emptyset\\}),$
where $\displaystyle{\mathcal{D}}$ is a collection. For two sets
$\displaystyle C,D\in{\mathcal{D}}\cup\\{\emptyset\\}$ being in relation
$\displaystyle{\mathcal{R}}$ we write $\displaystyle(C,D)\in{\mathcal{R}}.$
Although the relation $\displaystyle{\mathcal{R}}$ depends on a collection
$\displaystyle{\mathcal{D}},$ we will not indicate this dependence explicitly
in the notation.
Now we can define the Choquet-Sugeno-like operator. Note that we do not use
the word “integral”, as there is no unambiguous definition of an integral in
the literature, see [2, 10, 23]. Since this paper is devoted to other
aspects of this topic, we leave this problem for further discussion.
###### Definition 3.2.
Let $\displaystyle{\mathcal{H}}$ be a decomposition system,
$\displaystyle{\mathcal{D}}$ a collection from $\displaystyle{\mathcal{H}},$
and $\displaystyle{\mathcal{R}}$ a relation on
$\displaystyle{\mathcal{D}}\cup\\{\emptyset\\}.$ Then for
$\displaystyle\mathrm{L}\colon[0,\infty)^{3}\times(-\infty,\infty)\to(-\infty,\infty)$
the Choquet-Sugeno-like operator of $\displaystyle f\in\mathbf{F},$
$\displaystyle\mu\in\mathbf{M}$ and
$\displaystyle\widehat{\mu}\in\widehat{\mathbf{M}}$ is defined by
$\displaystyle\displaystyle\mathrm{CS}^{\mathrm{L}}_{{\mathcal{H}},{\mathscr{A}},\widehat{{\mathscr{A}}}}(f,\mu,\widehat{\mu})=\sup_{{\mathcal{D}}\in{\mathcal{H}}}\sum_{(C,D)\in{\mathcal{R}}}\mathrm{L}\big{(}\mathsf{A}(f|C),\widehat{\mathsf{A}}(f|D),\mu(C),\widehat{\mu}(D)\big{)},$
(5)
where $\displaystyle{\mathscr{A}}$ and $\displaystyle\widehat{{\mathscr{A}}}$
are FCAs.
###### Remark 3.3.
Note that it is sufficient to define $\displaystyle\mu$ and
$\displaystyle\widehat{\mu}$ on
$\displaystyle\textstyle{\bigcup_{{\mathcal{D}}\in{\mathcal{H}}}{\mathcal{D}}}$
instead of $\displaystyle\Sigma.$
Let us underline once again that the relation $\displaystyle{\mathcal{R}}$ in
(5) depends on a collection $\displaystyle{\mathcal{D}}$ although we do not
write it. Now we show that the Choquet-Sugeno-like operator generalizes many
concepts from the literature.
###### Example 3.4 (upper Sugeno-like operator).
Let $\displaystyle\mathrm{F}\colon[0,\infty)^{2}\to[0,\infty),$
$\displaystyle{\mathcal{R}}=\\{(D,D)\colon D\in{\mathcal{D}}\\}$ for
$\displaystyle{\mathcal{D}}\in{\mathcal{H}}=\\{\\{D\\}\colon
D\in\Sigma_{0}\\}$ and $\displaystyle\mathrm{L}(x,y,z,w)=\mathrm{F}(x,z).$
Then (5) takes the form
$\displaystyle\displaystyle\mathrm{CS}^{\mathrm{F}}_{{\mathcal{H}},{\mathscr{A}}}(f,\mu)=\sup_{D\in\Sigma_{0}}\mathrm{F}(\mathsf{A}(f|D),\mu(D)).$
(6)
Putting $\displaystyle\mathsf{A}(\cdot|D)=\mathsf{A}^{\inf}(\cdot|D)$ for any
$\displaystyle D\in\Sigma_{0}$ we obtain the upper Sugeno-like operator
$\displaystyle\displaystyle\mathrm{Su}^{\mathrm{F}}(f,\mu)=\sup_{D\in\Sigma_{0}}\mathrm{F}(\inf_{x\in
D}f(x),\mu(D)).$ (7)
In particular, for $\displaystyle\mathrm{F}\colon[0,1]^{2}\to[0,1]$ being a
fuzzy conjunction (a binary function
$\displaystyle\circ\colon[0,1]^{2}\to[0,1]$ is called a fuzzy conjunction if
it is nondecreasing and fulfils $\displaystyle 0\circ 0=0\circ 1=1\circ 0=0$
and $\displaystyle 1\circ 1=1$) the operator (7) is the $\displaystyle
q$-integral [13], whereas for $\displaystyle\mathrm{F}\colon[0,1]^{2}\to[0,1]$
being a semicopula (a binary function
$\displaystyle\circ\colon[0,1]^{2}\to[0,1]$ is called a semicopula if it is
nondecreasing and fulfils $\displaystyle 1\circ a=a\circ 1=a$ for any
$\displaystyle a\in[0,1]$) the seminormed fuzzy integral [4, 5, 6, 7, 36] of
$\displaystyle(f,\mu)\in\mathbf{F}^{1}\times\mathbf{M}^{1}$ is recovered. The
upper Sugeno-like operator with $\displaystyle\mathrm{F}=\wedge$ is the famous
Sugeno integral [37], whereas for $\displaystyle\mathrm{F}=\cdot$ we get the
Shilkret integral [35]. If $\displaystyle\mathrm{F}$ is nondecreasing, arguing
as in the proof of [4, Theorem 2.2], the upper Sugeno-like operator (7) can be
rewritten in terms of $\displaystyle t$-level sets in the following way
$\displaystyle\displaystyle\mathrm{Su}^{\mathrm{F}}(f,\mu)=\sup_{t\geqslant
0}\mathrm{F}(t,\mu(\\{f\geqslant t\\}))$ (8)
known as the (upper) generalized Sugeno integral [10, 22]. For $\displaystyle
X=[n]$ the operator (8) takes the form
$\displaystyle\displaystyle\mathrm{Su}^{\mathrm{F}}(f,\mu)=\max_{i\in[n]}\mathrm{F}(f_{(i)},\mu(B_{(i)})),$
(9)
where $\displaystyle(\cdot)\colon[n]\to[n]$ is a permutation such that
$\displaystyle f_{(1)}\leqslant\ldots\leqslant f_{(n)}$ and $\displaystyle
B_{(i)}=\\{(i),\ldots,(n)\\}$ for $\displaystyle i\in[n],$ as
$\displaystyle\mathrm{F}$ is nondecreasing. The operator presented in (9) was
studied by Horanská and Šipošová [18] for $\displaystyle\Sigma=2^{[n]},$
$\displaystyle\mathrm{F}\colon[0,1]^{2}\to[0,1]$ and
$\displaystyle(f,\mu)\in\mathbf{F}^{1}\times\mathbf{M}^{1}.$
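The equality of the set form (7) and the sorted form (9) for nondecreasing $\displaystyle\mathrm{F}$ can be checked on a small example. The following Python sketch (ours, purely illustrative) evaluates both forms with $\displaystyle\mathrm{F}=\wedge$ (the Sugeno integral) and $\displaystyle\mathrm{F}=\cdot$ (the Shilkret integral):

```python
from itertools import combinations
from operator import mul

def sugeno_form7(f, mu, X, F):
    """Upper Sugeno-like operator (7): sup over all nonempty D."""
    best = 0.0
    for r in range(1, len(X) + 1):
        for D in combinations(X, r):
            D = frozenset(D)
            best = max(best, F(min(f[i] for i in D), mu[D]))
    return best

def sugeno_form9(f, mu, X, F):
    """Same operator via the sorted form (9), valid for nondecreasing F."""
    order = sorted(X, key=lambda i: f[i])
    return max(F(f[order[k]], mu[frozenset(order[k:])])
               for k in range(len(X)))

X = [1, 2, 3]
mu = {frozenset(s): (r / 3) ** 2            # symmetric capacity mu(D) = (|D|/3)^2
      for r in range(1, 4) for s in combinations(X, r)}
f = {1: 0.4, 2: 0.2, 3: 0.3}
sugeno = sugeno_form9(f, mu, X, min)        # F = minimum: Sugeno integral
shilkret = sugeno_form9(f, mu, X, mul)      # F = product: Shilkret integral
```

For this example the Sugeno integral is $\displaystyle 0.3$ (attained on $\displaystyle D=\\{1,3\\}$) and the Shilkret integral is $\displaystyle 0.2$ (attained on $\displaystyle D=X$); in both cases forms (7) and (9) agree.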
###### Example 3.5 (generalized Lebesgue integral for sum).
Let $\displaystyle\mathrm{L}(x,y,z,w)=\mathrm{F}(x,z).$ Then (5) with
$\displaystyle{\mathscr{A}}={\mathscr{A}}^{\inf}$ and
$\displaystyle{\mathcal{R}}=\\{(D,D)\colon D\in{\mathcal{D}}\\}$ for
$\displaystyle{\mathcal{D}}\in{\mathcal{H}}$ can be rewritten as follows
$\displaystyle\displaystyle\mathrm{CS}^{\mathrm{F}}_{{\mathcal{H}},{\mathscr{A}}^{\inf}}(f,\mu)=\sup_{{\mathcal{D}}\in{\mathcal{H}}}\sum_{D\in{\mathcal{D}}}\mathrm{F}\big{(}\mathsf{A}^{\inf}(f|D),\mu(D)\big{)}.$
(10)
Putting $\displaystyle\mathrm{F}(x,z)=x\otimes z$ and
$\displaystyle{\mathcal{H}}={\mathcal{H}}_{\text{part}}$ in (10) we get
$\displaystyle\displaystyle\mathrm{CS}^{\otimes}_{{\mathcal{H}}_{\text{part}},{\mathscr{A}}^{\inf}}(f,\mu)=\sup_{{\mathcal{D}}\in{\mathcal{H}}_{\text{part}}}\sum_{D\in{\mathcal{D}}}\mathsf{A}^{\inf}(f|D)\otimes\mu(D)$
(11)
which is the generalized Lebesgue integral with $\displaystyle\oplus=+$ defined in [42,
Definition 3.1], where $\displaystyle\otimes$ is a pseudo-multiplication (see
[42, Definition 2.2]). Let $\displaystyle\otimes=\cdot$ and $\displaystyle
X=(a,b],$ where $\displaystyle a<b$ and $\displaystyle
a,b\in(-\infty,\infty).$ Consider
$\displaystyle{\mathcal{H}}_{\text{part}}^{\ast}=\\{\\{I_{1},\ldots,I_{k}\\}\colon
k\in{\mathbb{N}}\\}\subset{\mathcal{H}}_{\text{part}},$
where $\displaystyle I_{i}=(x_{i},x_{i+1}]$ with the measure
$\displaystyle\mu_{G}(I_{i})=G(x_{i+1})-G(x_{i})$ for a nondecreasing function
$\displaystyle G$ on $\displaystyle[a,b].$ Then (11) is the lower Darboux-
Stieltjes integral of the form
$\displaystyle\displaystyle\mathrm{CS}^{\cdot}_{{\mathcal{H}}_{\text{part}}^{\ast},{\mathscr{A}}^{\inf}}(f,\mu_{G})=\sup_{{\mathcal{D}}\in{\mathcal{H}}_{\text{part}}^{\ast}}\sum_{I_{i}\in{\mathcal{D}}}\mu_{G}(I_{i})\cdot\inf_{x\in
I_{i}}f(x).$
Setting
$\displaystyle{\mathcal{H}}={\mathcal{H}}_{\text{part}}^{\ast\ast}=\\{\\{X\\}\\}$
and $\displaystyle\mathrm{F}(x,z)=x\cdot z$ in (10) we get the min-max
integral [38, Theorem 4.13]
$\displaystyle\mathrm{CS}^{\cdot}_{{\mathcal{H}}_{\text{part}}^{\ast\ast},{\mathscr{A}}^{\inf}}(f,\mu)=\mu(X)\cdot\inf_{x\in
X}f(x).$
###### Example 3.6 ($\displaystyle\mathrm{F}$-decomposition integral with
respect to $\displaystyle{\mathcal{H}}_{\text{part}}$).
Let $\displaystyle X=[n]$ and $\displaystyle\Sigma=2^{[n]}.$ Observe that the
operator (10) with $\displaystyle{\mathcal{H}}={\mathcal{H}}_{\text{part}}$
and a nondecreasing function $\displaystyle\mathrm{F}$ takes the form
$\displaystyle\displaystyle\mathrm{CS}^{\mathrm{F}}_{{\mathcal{H}}_{\text{part}},{\mathscr{A}}^{\inf}}(f,\mu)=\mathcal{I}^{\mathrm{F}}_{{\mathcal{H}}_{\text{part}},\mu}(f),$
(12)
where
$\displaystyle\displaystyle\mathcal{I}^{\mathrm{F}}_{{\mathcal{H}},\mu}(f)=\sup\Big{\\{}\sum_{D\in{\mathcal{D}}}\mathrm{F}(a_{D},\mu(D))\colon\sum_{D\in{\mathcal{D}}}a_{D}\mathbbm{1}_{D}\leqslant
f,\,\,{\mathcal{D}}\in{\mathcal{H}}\Big{\\}}$ (13)
is an $\displaystyle\mathrm{F}$-decomposition integral. Setting
$\displaystyle\mathrm{F}=\cdot$ in (12) we obtain the Pan-integral based on
the standard arithmetic operations of addition and multiplication [41]. The
operator (13) is studied in [20] for
$\displaystyle\mathrm{F}\colon[0,1]^{2}\to[0,\infty)$ and
$\displaystyle(f,\mu)\in\mathbf{F}^{1}\times\mathbf{M}^{1}.$
The assumption $\displaystyle{\mathcal{H}}={\mathcal{H}}_{\text{part}}$ in
(12) is essential to prove the equivalence between formulas (10) and (13) for
any nondecreasing binary function $\displaystyle\mathrm{F}$ and any
$\displaystyle(f,\mu)\in\mathbf{F}\times\mathbf{M}.$ Indeed, let
$\displaystyle X=[3]$ and
$\displaystyle{\mathcal{H}}=\\{\\{D_{1},D_{2},D_{3}\\}\\}\neq{\mathcal{H}}_{\text{part}},$
where $\displaystyle D_{1}=\\{1\\},$ $\displaystyle D_{2}=\\{1,3\\}$ and
$\displaystyle D_{3}=X.$ Assume that $\displaystyle\mu(C)=1$ for any
$\displaystyle\emptyset\neq C\subseteq X,$ $\displaystyle f(1)=0.4,$
$\displaystyle f(2)=0.2$ and $\displaystyle f(3)=0.3.$ Then (10) with
$\displaystyle\mathrm{F}(x,y)=xy$ takes the form
$\displaystyle\mathrm{CS}^{\cdot}_{{\mathcal{H}},{\mathscr{A}}^{\inf}}(f,\mu)=\textstyle{\sum_{i=1}^{3}\mathsf{A}^{\inf}(f|D_{i})}=0.9.$
On the other hand, (13) has the form
$\displaystyle\displaystyle\mathcal{I}^{\cdot}_{{\mathcal{H}},\mu}(f)$
$\displaystyle\displaystyle=\sup\Big{\\{}\sum_{i=1}^{3}a_{D_{i}}\colon\sum_{i=1}^{3}a_{D_{i}}\mathbbm{1}_{D_{i}}\leqslant
f\Big{\\}}\leqslant f(1),$
since the condition
$\displaystyle\textstyle{\sum_{i=1}^{3}a_{D_{i}}\mathbbm{1}_{D_{i}}\leqslant
f}$ implies that $\displaystyle a_{D_{1}}+a_{D_{2}}+a_{D_{3}}\leqslant f(1),$
and so
$\displaystyle\mathrm{CS}^{\cdot}_{{\mathcal{H}},{\mathscr{A}}^{\inf}}(f,\mu)>\mathcal{I}^{\cdot}_{{\mathcal{H}},\mu}(f).$
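The gap in this counterexample can be confirmed numerically. In the Python sketch below (ours, illustrative) the supremum in (13) is approximated by a coarse grid search over the coefficients $\displaystyle a_{D_{i}}$; the exact optimum is $\displaystyle f(1)=0.4,$ attained e.g. at $\displaystyle a_{D_{1}}=0.4,$ $\displaystyle a_{D_{2}}=a_{D_{3}}=0$:

```python
X = [1, 2, 3]
f = {1: 0.4, 2: 0.2, 3: 0.3}
D = [frozenset({1}), frozenset({1, 3}), frozenset({1, 2, 3})]
# mu(C) = 1 for every nonempty C, so F(x, y) = x * y reduces to x

# Choquet-Sugeno-like operator (10): sum over the collection of inf_D f * mu(D)
cs = sum(min(f[i] for i in Dk) * 1.0 for Dk in D)       # 0.4 + 0.3 + 0.2

# F-decomposition integral (13): maximize a1 + a2 + a3 subject to
# a1*1_{D1} + a2*1_{D2} + a3*1_{D3} <= f pointwise (coarse grid search)
grid = [0.01 * t for t in range(41)]                    # a_k in {0, ..., 0.4}
best = 0.0
for a1 in grid:
    for a2 in grid:
        for a3 in grid:
            if all(a1 * (x in D[0]) + a2 * (x in D[1]) + a3 * (x in D[2])
                   <= f[x] + 1e-9 for x in X):
                best = max(best, a1 + a2 + a3)
```

The search recovers $\displaystyle\mathcal{I}^{\cdot}_{{\mathcal{H}},\mu}(f)=0.4<0.9=\mathrm{CS}^{\cdot}_{{\mathcal{H}},{\mathscr{A}}^{\inf}}(f,\mu),$ in agreement with the argument above.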
###### Example 3.7 (generalization of the Lovász extension).
Let $\displaystyle X=[n]$ and $\displaystyle\Sigma=2^{[n]}.$ Assume that
$\displaystyle{\mathcal{R}}=\\{(D,D)\colon D\in{\mathcal{D}}\\}$ for
$\displaystyle{\mathcal{D}}\in{\mathcal{H}}_{\text{one}},$
$\displaystyle\mathrm{L}(x,y,z,w)=y\circ w$ and
$\displaystyle\widehat{\mu}(D)=\textsf{Mob}_{\mu}(D)$ with
$\displaystyle\mu\in\mathbf{M},$ where
$\displaystyle\textsf{Mob}_{\mu}(D)=\textstyle{\sum_{C\subseteq
D}(-1)^{|D\setminus C|}\mu(C)}$ is the Möbius transform ([15, Section 2.10]).
Then (5) is a generalization of the Lovász extension,
$\displaystyle\displaystyle\mathrm{CS}^{\circ}_{{\mathcal{H}}_{\text{one}},{\mathscr{A}}}(f,\textsf{Mob}_{\mu})=\sum_{\emptyset\neq
D\subseteq X}\mathsf{A}(f|D)\circ\textsf{Mob}_{\mu}(D).$ (14)
In particular, for $\displaystyle\circ=\cdot$ and
$\displaystyle\mathsf{A}(\cdot|D)=\mathsf{A}^{\inf}(\cdot|D)$ we get the
discrete Choquet integral expressed in terms of the Möbius transform, known as
the Lovász measure extension [25]. On the other hand, for the product
conditional aggregation operator
$\displaystyle\mathsf{A}^{\textrm{prod}}(f|D)=\textstyle{\prod_{i\in D}f(i)},$
the formula (14) is the generalized Owen extension of $\displaystyle\mu.$ The
original Owen measure extension corresponds to
$\displaystyle\mathsf{A}^{\textrm{prod}}(\cdot|\cdot)$ and
$\displaystyle\circ=\cdot,$ see [33]. Assume that
$\displaystyle\circ\colon[0,1]\times(-\infty,\infty)\to(-\infty,\infty)$ is
bounded on $\displaystyle[0,1]^{2},$
$\displaystyle(f,\mu)\in\mathbf{F}^{1}\times\mathbf{M}^{1}$ and
$\displaystyle\mathsf{A}(h|E)=\mathsf{Agg}(h\mathbbm{1}_{E})$ is an
aggregation operator (see [9]). Then the operator (14) with
* •
$\displaystyle\circ=\cdot$ is studied by Kolesárová et al. [24],
* •
$\displaystyle\mathsf{Agg}(h\mathbbm{1}_{E})=\textstyle{\inf_{x\in E}h(x)}$ is
examined by Fernandez et al. [14],
* •
$\displaystyle\circ$ with the values in $\displaystyle[0,1]$ is discussed by
Horanská [19].
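The difference between the Lovász and Owen extensions in formula (14) can be illustrated directly: both recover $\displaystyle\mu$ on indicator vectors, but they generally disagree at interior points of $\displaystyle[0,1]^{n}.$ The following Python sketch (ours; names are illustrative) demonstrates this for $\displaystyle n=2$:

```python
from itertools import combinations
from math import prod

def nonempty_subsets(X):
    return [frozenset(s) for r in range(1, len(X) + 1)
            for s in combinations(X, r)]

def mobius(mu, X):
    """Moebius transform of mu; mu(emptyset) = 0 contributes nothing."""
    return {D: sum((-1) ** (len(D) - len(C)) * mu[C]
                   for C in nonempty_subsets(X) if C <= D)
            for D in nonempty_subsets(X)}

def lovasz(f, mob, X):
    """(14) with A = A^inf and o = product: the Lovasz extension."""
    return sum(mob[D] * min(f[i] for i in D) for D in nonempty_subsets(X))

def owen(f, mob, X):
    """(14) with A = A^prod and o = product: the Owen extension."""
    return sum(mob[D] * prod(f[i] for i in D) for D in nonempty_subsets(X))

X = [1, 2]
mu = {frozenset({1}): 0.5, frozenset({2}): 0.4, frozenset({1, 2}): 1.0}
mob = mobius(mu, X)                    # Mob values: 0.5, 0.4, 0.1

ind = {1: 1.0, 2: 0.0}                 # indicator vector of E = {1}
inner = {1: 0.5, 2: 0.5}               # an interior point of [0, 1]^2
```

On the indicator vector both extensions return $\displaystyle\mu(\\{1\\})=0.5,$ while at the interior point the Lovász extension gives $\displaystyle 0.5$ and the Owen extension gives $\displaystyle 0.475.$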
###### Example 3.8.
Let $\displaystyle X=[n]$ and $\displaystyle\Sigma=2^{[n]}.$ Put
$\displaystyle{\mathcal{R}}=\\{(D,D^{c})\colon D\in{\mathcal{D}}\\}$ for
$\displaystyle{\mathcal{D}}\in{\mathcal{H}}_{\text{one}},$
$\displaystyle\mathrm{L}(x,y,z,w)=(x-y)_{+}\circ z,$
$\displaystyle{\mathscr{A}}={\mathscr{A}}^{\inf}$ and
$\displaystyle\widehat{{\mathscr{A}}}={\mathscr{A}}^{\sup}.$ Then (5) takes
the form
$\displaystyle\displaystyle\mathrm{CS}^{\mathrm{L}}_{{\mathcal{H}}_{\text{one}},{\mathscr{A}}^{\inf},{\mathscr{A}}^{\sup}}(f,\mu)=\sum_{\emptyset\neq
D\subseteq X}(\min_{x\in D}f(x)-\max_{x\in D^{c}}f(x))_{+}\circ\mu(D).$ (15)
For $\displaystyle\circ=\cdot\,,$ we get an alternative representation of the
Choquet integral presented in [21], see also [1, p. 149].
###### Example 3.9 (generalized $\displaystyle p$-variation).
Let
$\displaystyle{\mathcal{R}}\subseteq{\mathcal{D}}\times{\mathcal{D}}$ and
$\displaystyle\mathrm{L}(x,y,z,w)=|x-y|^{p},$ where $\displaystyle p\geqslant
1.$ Then the operator (5) with
$\displaystyle{\mathscr{A}}=\widehat{{\mathscr{A}}},$ called the generalized
$\displaystyle p$-variation, takes the form
$\displaystyle\displaystyle{\mathrm{V}}^{p}_{{\mathscr{A}}}(f)=\sup_{{\mathcal{D}}\in{\mathcal{H}}}\sum_{(C,D)\in{\mathcal{R}}}\big{|}\mathsf{A}(f|C)-\mathsf{A}(f|D)\big{|}^{p}.$
The well-known notion of $\displaystyle p$-variation is obtained by taking
$\displaystyle X=[a,b],$ $\displaystyle\Sigma=2^{X},$
$\displaystyle\mathsf{A}(f|\\{x\\})=f(x)$ and
$\displaystyle{\mathcal{R}}=\\{(\\{x_{i-1}\\},\\{x_{i}\\})\colon\\{x_{i}\\}\in{\mathcal{D}},\,i\in[n]\\},$
where
$\displaystyle{\mathcal{D}}\in{\mathcal{H}}=\\{\\{\\{x_{0}\\},\\{x_{1}\\},\ldots,\\{x_{n}\\}\\}\colon
a=x_{0}<x_{1}<\ldots<x_{n-1}<x_{n}=b,\,n\in{\mathbb{N}}\\}.$
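As a concrete instance, for a nondecreasing $\displaystyle f$ and $\displaystyle p=1$ the inner sum of the generalized $\displaystyle p$-variation telescopes to $\displaystyle f(b)-f(a)$ for every partition, so the supremum equals the total variation. A small Python sketch (ours, illustrative) checks this on $\displaystyle f(x)=x^{2}$ over $\displaystyle[0,1]$:

```python
def partition_variation(f, xs, p=1.0):
    """Inner sum of the generalized p-variation for one partition
    a = x_0 < ... < x_n = b, using the evaluation A(f|{x}) = f(x)."""
    return sum(abs(f(xs[i]) - f(xs[i - 1])) ** p for i in range(1, len(xs)))

f = lambda x: x * x                         # nondecreasing on [0, 1]
coarse = [0.0, 0.5, 1.0]
fine = [k / 100 for k in range(101)]
v_coarse = partition_variation(f, coarse)   # p = 1: telescopes to f(1) - f(0)
v_fine = partition_variation(f, fine)
```

For $\displaystyle p=1$ both partitions give $\displaystyle 1.0;$ for $\displaystyle p>1$ refining the partition decreases the sum in this monotone example, so the supremum is attained on the coarsest chain.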
###### Remark 3.10.
General properties of operators defined in (6), (10), (14) and (15) have not
been studied in the literature so far.
## 4 Connections with other operators with respect to a monotone measure:
the finite space case
In [8, 18, 20, 23, 26, 27, 28, 29, 30] the authors study properties of
operators (1)–(3) mainly by replacing the product by another binary function.
So, it is natural to look for connections between these operators and the Choquet-Sugeno-
like operator; we describe them in this section. Additionally, we provide
a relationship between the Choquet-Sugeno-like operator and the discrete
inclusion-exclusion integral [17].
In this section we assume that $\displaystyle X=[n]$ and
$\displaystyle\Sigma=2^{[n]}$ with $\displaystyle n\geqslant 2.$ Moreover, to
shorten the notation, we introduce two relations
$\displaystyle{\mathcal{R}}^{+}$ and $\displaystyle{\mathcal{R}}^{-}$ on
$\displaystyle{\mathcal{D}}_{l}\cup\\{\emptyset\\}$ (see Example 3.1 (c)) as
follows:
* •
$\displaystyle{\mathcal{R}}^{+}=\big{\\{}(D_{1},D_{2}),(D_{2},D_{3}),\ldots,(D_{l},D_{l+1})\big{\\}}$;
* •
$\displaystyle{\mathcal{R}}^{-}=\big{\\{}(D_{1},D_{0}),(D_{2},D_{1}),\ldots,(D_{l},D_{l-1})\big{\\}},$
where $\displaystyle D_{l}\subset D_{l-1}\subset\ldots\subset D_{1}$ and
$\displaystyle D_{0}=D_{l+1}=\emptyset.$
### Connection with $\displaystyle{\mathfrak{C}}_{\circ}^{\mu,\widehat{\mu}}$
Let $\displaystyle\circ\colon[0,\infty)^{2}\to[0,\infty).$ The operator
$\displaystyle{\mathfrak{C}}_{\circ}^{\mu,\widehat{\mu}}$ is defined as
follows
$\displaystyle\displaystyle{\mathfrak{C}}_{\circ}^{\mu,\widehat{\mu}}(f)=\sum_{i=1}^{n}f_{(i)}\circ(\mu(B_{(i)})-\widehat{\mu}(B_{(i+1)}))$
(16)
for any
$\displaystyle(f,\mu,\widehat{\mu})\in\mathbf{F}\times\mathbf{M}\times\mathbf{M}$
such that $\displaystyle\mu\geqslant\widehat{\mu},$ where
$\displaystyle(\cdot)\colon[n]\to[n]$ is a permutation such that
$\displaystyle f_{(1)}\leqslant\ldots\leqslant f_{(n)},$ $\displaystyle
B_{(i)}=\\{(i),\ldots,(n)\\}$ for $\displaystyle i\in[n]$ and $\displaystyle
B_{(n+1)}=\emptyset.$ The operator (16) depends on the permutation, which need
not be unique in general. Therefore, in order for the operator
$\displaystyle{\mathfrak{C}}_{\circ}^{\mu,\widehat{\mu}}$ to be well defined,
its evaluation must not depend on the considered permutation. Observe that if
$\displaystyle f$ is either decreasing or increasing, then the permutation is
unique. This means that in the class of all decreasing or increasing
functions, the operator
$\displaystyle{\mathfrak{C}}_{\circ}^{\mu,\widehat{\mu}}$ is well defined for
any binary operation $\displaystyle\circ$ and
$\displaystyle\mu,\widehat{\mu}\in\mathbf{M}.$ If we assume that the map
$\displaystyle x\mapsto a\circ x$ is Lebesgue measurable for any
$\displaystyle a,$ then using the same argument as Horanská and Šipošová [18,
Proposition 12] one can show that the operator (16) is well defined for any
$\displaystyle(f,\mu,\widehat{\mu})\in\mathbf{F}\times\mathbf{M}\times\mathbf{M}$
if and only if $\displaystyle a\circ b=g(a)b$ for some function $\displaystyle
g\colon[0,\infty)\to[0,\infty).$ Also the special case of (16) with
$\displaystyle\circ\colon[0,1]^{2}\to[0,1],$ $\displaystyle
f\in\mathbf{F}^{1}$ and $\displaystyle\mu=\widehat{\mu}\in\mathbf{M}^{1}$ is
examined in [18].
###### Proposition 4.1.
The operator (16) is well defined for any $\displaystyle f\in\mathbf{F}$ and
any binary operation $\displaystyle\circ\colon[0,\infty)^{2}\to[0,\infty)$
whenever $\displaystyle\mu,\widehat{\mu}\in\mathbf{M}$ are symmetric such that
$\displaystyle\mu\geqslant\widehat{\mu}.$
In terms of conditional aggregation operators the operator (16) can be
rewritten as
$\displaystyle\displaystyle{\mathfrak{C}}_{\circ}^{\mu,\widehat{\mu}}(f)=\sum_{i=1}^{n}\mathsf{A}^{\inf}(f|B_{(i)})\circ(\mu(B_{(i)})-\widehat{\mu}(B_{(i+1)})).$
(17)
This form resembles the Choquet-Sugeno-like operator (5) with
$\displaystyle{\mathcal{H}}={\mathcal{H}}_{\text{chain}},$
$\displaystyle{\mathcal{R}}={\mathcal{R}}^{+},$
$\displaystyle{\mathscr{A}}={\mathscr{A}}^{\inf},$
$\displaystyle\mu,\widehat{\mu}\in\mathbf{M}$ such that
$\displaystyle\mu\geqslant\widehat{\mu}$ and
$\displaystyle\mathrm{L}(x,y,z,w)=x\circ(z-w)_{+},$ i.e.,
$\displaystyle\displaystyle\mathrm{CS}^{\mathrm{L}}_{{\mathcal{H}}_{\text{chain}},{\mathscr{A}}^{\inf}}(f,\mu,\widehat{\mu})=\sup_{{\mathcal{D}}\in{\mathcal{H}}_{\text{chain}}}\sum_{i=1}^{l}\mathsf{A}^{\inf}(f|D_{i})\circ(\mu(D_{i})-\widehat{\mu}(D_{i+1}))$
(18)
under the convention that $\displaystyle D_{l+1}=\emptyset.$ However, the
following example demonstrates that the operators (17) and (18) are not the
same.
###### Example 4.2.
Put $\displaystyle X=[2],$ $\displaystyle f(1)=0.5,$ $\displaystyle f(2)=1,$
$\displaystyle\mu(X)=1,$ $\displaystyle\mu(\\{1\\})=0.5$ and
$\displaystyle\mu(\\{2\\})=0.4.$ By (16) with $\displaystyle\circ=\wedge$ and
$\displaystyle\widehat{\mu}=\mu$ we get
$\displaystyle\displaystyle{\mathfrak{C}}^{\mu,\mu}_{\wedge}(f)$
$\displaystyle\displaystyle=\sum_{i=1}^{2}f_{(i)}\wedge(\mu(B_{(i)})-\mu(B_{(i+1)}))$
$\displaystyle\displaystyle=0.5\wedge(\mu(X)-\mu(\\{2\\}))+1\wedge\mu(\\{2\\})=0.9.$
Considering $\displaystyle\mathrm{L}(x,y,z,w)=x\wedge(z-w)_{+}$ and the chain
$\displaystyle{\mathcal{D}}_{2}=\\{D_{2},D_{1}\\}$ with $\displaystyle
D_{2}=\\{1\\}$ and $\displaystyle D_{1}=X$ we obtain
$\displaystyle\displaystyle\mathrm{CS}^{\mathrm{L}}_{{\mathcal{H}}_{\text{chain}},{\mathscr{A}}^{\inf}}(f,\mu,\mu)$
$\displaystyle\displaystyle\geqslant\mathsf{A}^{\inf}(f|X)\wedge(\mu(X)-\mu(\\{1\\}))+\mathsf{A}^{\inf}(f|\\{1\\})\wedge\mu(\\{1\\})=1>{\mathfrak{C}}^{\mu,\mu}_{\wedge}(f).$
In consequence, operators (17) and (18) are different.
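The two values in Example 4.2 are easy to reproduce numerically. The following Python sketch (ours, illustrative) evaluates (16) with $\displaystyle\circ=\wedge$ and one chain term of the supremum in (18); since $\displaystyle\mu$ is monotone along the chain, $\displaystyle(z-w)_{+}$ reduces to $\displaystyle z-w$ here:

```python
f = {1: 0.5, 2: 1.0}
mu = {frozenset({1}): 0.5, frozenset({2}): 0.4,
      frozenset({1, 2}): 1.0, frozenset(): 0.0}

# Operator (16) with o = min and mu_hat = mu
order = sorted(f, key=f.get)                       # f_(1) = f(1), f_(2) = f(2)
B = [frozenset(order[k:]) for k in range(2)] + [frozenset()]
c16 = sum(min(f[order[k]], mu[B[k]] - mu[B[k + 1]]) for k in range(2))

# One term of the supremum in (18) for the chain D_2 = {1} subset D_1 = X
D1, D2 = frozenset({1, 2}), frozenset({1})
cs18 = (min(min(f[i] for i in D1), mu[D1] - mu[D2])
        + min(min(f[i] for i in D2), mu[D2] - mu[frozenset()]))
```

This reproduces $\displaystyle{\mathfrak{C}}^{\mu,\mu}_{\wedge}(f)=0.9$ and the chain lower bound $\displaystyle 1$ for the Choquet-Sugeno-like operator, confirming that the two operators differ.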
The reason for the non-equivalence of operators (17) and (18) is the lack of
symmetry of the monotone measure. The following result provides a necessary and
sufficient condition under which both operators are equal.
###### Theorem 4.3.
Assume that $\displaystyle\circ\colon[0,\infty)^{2}\to[0,\infty)$ is
nondecreasing. Then the operator defined in (18) coincides with
$\displaystyle{\mathfrak{C}}^{\mu,\widehat{\mu}}_{\circ}(f)$ for any
$\displaystyle f\in\mathbf{F}$ and any symmetric
$\displaystyle\mu,\widehat{\mu}\in\mathbf{M}$ such that
$\displaystyle\mu\geqslant\widehat{\mu}$ if and only if the function
$\displaystyle x\mapsto a\circ x$ is subadditive for any $\displaystyle a.$
Proof. The value of
$\displaystyle{\mathfrak{C}}^{\mu,\widehat{\mu}}_{\circ}(f)$ for any
$\displaystyle f\in\mathbf{F}$ does not depend on permutations, since
$\displaystyle\mu,\widehat{\mu}$ are symmetric (see Proposition 4.1).
“$\displaystyle\Rightarrow$” Let $\displaystyle
f=b\mathbbm{1}_{\\{2,\ldots,n\\}},$ where $\displaystyle b\geqslant 0.$
Clearly,
$\displaystyle{\mathfrak{C}}^{\mu,\mu}_{\circ}(f)=0\circ(\mu(B_{(1)})-\mu(B_{(2)}))+\sum_{i=2}^{n}b\circ(\mu(B_{(i)})-\mu(B_{(i+1)}))$
for any symmetric $\displaystyle\mu\in\mathbf{M},$ where $\displaystyle
B_{(n+1)}=\emptyset.$ The operator (18) coincides with
$\displaystyle{\mathfrak{C}}^{\mu,\mu}_{\circ},$ so for the chain
$\displaystyle{\mathcal{D}}_{2}=\\{D_{2},D_{1}\\}$ with $\displaystyle
D_{2}=B_{(2)}$ and $\displaystyle D_{1}=B_{(1)}$ we get
$\displaystyle
0\circ(\mu(B_{(1)})-\mu(B_{(2)}))+b\circ\mu(B_{(2)})\leqslant{\mathfrak{C}}^{\mu,\mu}_{\circ}(f)$
for any symmetric $\displaystyle\mu\in\mathbf{M}.$ In consequence, we have
$\displaystyle\displaystyle
b\circ\sum_{i=2}^{n}(\mu(B_{(i)})-\mu(B_{(i+1)}))\leqslant\sum_{i=2}^{n}b\circ(\mu(B_{(i)})-\mu(B_{(i+1)}))$
for any symmetric $\displaystyle\mu\in\mathbf{M}.$ This implies that
$\displaystyle x\circ(y+z)\leqslant x\circ y+x\circ z$ for any $\displaystyle
x,y,z.$
“$\displaystyle\Leftarrow$” Since $\displaystyle\\{B_{(n)},\ldots,B_{(1)}\\}$
is a chain, by (17) we have
$\displaystyle\displaystyle\sup_{{\mathcal{D}}\in{\mathcal{H}}_{\text{chain}}}\sum_{i=1}^{l}\mathsf{A}^{\inf}(f|D_{i})\circ(\mu(D_{i})-\widehat{\mu}(D_{i+1}))\geqslant{\mathfrak{C}}^{\mu,\widehat{\mu}}_{\circ}(f).$
To prove the statement we will show that
$\displaystyle\displaystyle L$
$\displaystyle\displaystyle:=\sum_{i=1}^{l}\mathsf{A}^{\inf}(f|D_{i})\circ(\mu(D_{i})-\widehat{\mu}(D_{i+1}))\leqslant{\mathfrak{C}}^{\mu,\widehat{\mu}}_{\circ}(f)$
for any $\displaystyle{\mathcal{D}}_{l}\in{\mathcal{H}}_{\text{chain}}$ such
that $\displaystyle l\in[n].$ For the sake of clarity of presentation, we will
consider separately the case of a chain consisting of one nonempty set.
* •
Let
$\displaystyle{\mathcal{D}}_{1}=\\{D_{1}\\}\in{\mathcal{H}}_{\text{chain}}.$
Then $\displaystyle L=\mathsf{A}^{\inf}(f|D_{1})\circ\mu(D_{1}),$ as
$\displaystyle D_{2}=\emptyset,$ due to the convention. Since $\displaystyle X$ is a finite set,
$\displaystyle\\{k\colon\mathsf{A}^{\inf}(f|D_{1})=f_{(k)}\\}\neq\emptyset.$
Let $\displaystyle k_{1}=\min\\{k\colon\mathsf{A}^{\inf}(f|D_{1})=f_{(k)}\\}.$
Thus $\displaystyle D_{1}\subseteq B_{(k_{1})}$ and by monotonicity of
$\displaystyle\circ$ and subadditivity of $\displaystyle x\mapsto a\circ x$ we
get
$\displaystyle\displaystyle L$ $\displaystyle\displaystyle\leqslant
f_{(k_{1})}\circ\mu(B_{(k_{1})})=f_{(k_{1})}\circ\big{(}\mu(B_{(k_{1})})-\widehat{\mu}(B_{(k_{1}+1)})+\widehat{\mu}(B_{(k_{1}+1)})\big{)}$
$\displaystyle\displaystyle\leqslant
f_{(k_{1})}\circ(\mu(B_{(k_{1})})-\widehat{\mu}(B_{(k_{1}+1)}))+f_{(k_{1})}\circ\widehat{\mu}(B_{(k_{1}+1)}).$
Since $\displaystyle(f_{(i)})_{i}$ is a nondecreasing sequence and
$\displaystyle\widehat{\mu}\leqslant\mu,$ we get
$\displaystyle\displaystyle L$ $\displaystyle\displaystyle\leqslant
f_{(k_{1})}\circ(\mu(B_{(k_{1})})-\widehat{\mu}(B_{(k_{1}+1)}))+f_{(k_{1}+1)}\circ\mu(B_{(k_{1}+1)})$
$\displaystyle\displaystyle\leqslant\ldots\leqslant\sum_{i=k_{1}}^{n}f_{(i)}\circ(\mu(B_{(i)})-\widehat{\mu}(B_{(i+1)}))\leqslant{\mathfrak{C}}^{\mu,\widehat{\mu}}_{\circ}(f).$
* •
Let $\displaystyle{\mathcal{D}}_{l}\in{\mathcal{H}}_{\text{chain}}$ for fixed
$\displaystyle 2\leqslant l\leqslant n.$ Then $\displaystyle L=S_{1},$ where
$\displaystyle\displaystyle
S_{k}=\sum_{i=k}^{l}\mathsf{A}^{\inf}(f|D_{i})\circ(\mu(D_{i})-\widehat{\mu}(D_{i+1}))$
for $\displaystyle k\in[l]$ with the convention $\displaystyle
D_{l+1}=\emptyset.$ Let $\displaystyle
k_{1}=\min\\{k\colon\mathsf{A}^{\inf}(f|D_{1})=f_{(k)}\\}.$ Then
$\displaystyle D_{1}\subseteq B_{(k_{1})}$ and
$\displaystyle\displaystyle L$ $\displaystyle\displaystyle\leqslant
f_{(k_{1})}\circ(\mu(B_{(k_{1})})-\widehat{\mu}(D_{2}))+S_{2}.$ (19)
Note that $\displaystyle D_{2}\subset B_{(k_{1})}$ and
$\displaystyle|D_{2}|<|D_{1}|.$ Thus $\displaystyle|D_{2}|<|B_{(k_{1})}|$ and
$\displaystyle|D_{2}|\leqslant|B_{(k_{1}+1)}|,$ as
$\displaystyle|B_{(k_{1}+1)}|+1=|B_{(k_{1})}|.$ Then there exists
$\displaystyle k_{2}>k_{1}$ such that $\displaystyle|D_{2}|=|B_{(k_{2})}|$ and
$\displaystyle\mathsf{A}^{\inf}(f|D_{2})\leqslant\mathsf{A}^{\inf}(f|B_{(k_{2})})=f_{(k_{2})}.$
Indeed, if $\displaystyle D_{2}=B_{(k_{2})},$ then
$\displaystyle\mathsf{A}^{\inf}(f|D_{2})=\mathsf{A}^{\inf}(f|B_{(k_{2})}).$ If
$\displaystyle D_{2}\neq B_{(k_{2})},$ then there exists $\displaystyle j$
such that $\displaystyle j\in D_{2}$ and $\displaystyle j\notin B_{(k_{2})}$
as $\displaystyle|D_{2}|=|B_{(k_{2})}|.$ Observe that $\displaystyle
B_{(k_{1})}=\\{(k_{1}),\ldots,(k_{2}-1)\\}\cup B_{(k_{2})}$ and $\displaystyle
f(i)\leqslant\mathsf{A}^{\inf}(f|B_{(k_{2})})$ for any $\displaystyle
i\in\\{(k_{1}),\ldots,(k_{2}-1)\\},$ so $\displaystyle
f(j)\leqslant\mathsf{A}^{\inf}(f|B_{(k_{2})}),$ as $\displaystyle j\in
B_{(k_{1})}\setminus B_{(k_{2})}.$ In consequence
$\displaystyle\mathsf{A}^{\inf}(f|D_{2})\leqslant\mathsf{A}^{\inf}(f|B_{(k_{2})}).$
By (19) and the symmetry of $\displaystyle\mu$ and
$\displaystyle\widehat{\mu},$ we have
$\displaystyle\displaystyle L$ $\displaystyle\displaystyle\leqslant
f_{(k_{1})}\circ(\mu(B_{(k_{1})})-\widehat{\mu}(B_{(k_{2})}))+f_{(k_{2})}\circ(\mu(B_{(k_{2})})-\widehat{\mu}(D_{3}))+S_{3}.$
Applying the monotonicity of $\displaystyle\circ$ and the subadditivity of
$\displaystyle x\mapsto a\circ x$ $\displaystyle(k_{2}-k_{1}-1)$ times, we get
$\displaystyle\displaystyle L$
$\displaystyle\displaystyle\leqslant\sum_{i=k_{1}}^{k_{2}-1}f_{(i)}\circ(\mu(B_{(i)})-\widehat{\mu}(B_{(i+1)}))+f_{(k_{2})}\circ(\mu(B_{(k_{2})})-\widehat{\mu}(D_{3}))+S_{3}.$
Repeating the same arguments for $\displaystyle D_{3},\ldots,D_{l}$ leads to
the desired inequality
$\displaystyle\displaystyle L$
$\displaystyle\displaystyle\leqslant\sum_{i=k_{1}}^{k_{l}-1}f_{(i)}\circ(\mu(B_{(i)})-\widehat{\mu}(B_{(i+1)}))+f_{(k_{l})}\circ\mu(B_{(k_{l})})\leqslant{\mathfrak{C}}^{\mu,\widehat{\mu}}_{\circ}(f).$
This completes the proof. ∎
###### Remark 4.4.
The assumption of symmetry of $\displaystyle\mu\in\mathbf{M}$ cannot be
omitted (see Example 4.2, where $\displaystyle x\mapsto a\wedge x$ is
subadditive for any $\displaystyle a\geqslant 0$).
In a similar way to the proof of Theorem 4.3 one can show that if
$\displaystyle\circ\colon[0,1]^{2}\to[0,1]$ is nondecreasing, then the
operator defined in (18) coincides with
$\displaystyle{\mathfrak{C}}^{\mu,\widehat{\mu}}_{\circ}(f)$ for any
$\displaystyle f\in\mathbf{F}^{1}$ and any symmetric capacities
$\displaystyle\mu,\widehat{\mu}$ such that
$\displaystyle\mu\geqslant\widehat{\mu}$ if and only if the function
$\displaystyle x\mapsto a\circ x$ is subadditive for any $\displaystyle
a\in[0,1].$ Symmetric capacities $\displaystyle\mu$ appear naturally when
dealing with order statistics [16, 34] of discrete probability distributions.
This is due to the fact that the operator (16) with $\displaystyle\circ=\cdot$
and symmetric capacity $\displaystyle\mu=\widehat{\mu}$ is the Choquet
integral, a generalization of the OWA operator (R.R. Yager [40] defined the
ordered weighted averaging (OWA) operator as an expected value of order
statistics for random variables on a finite set of possible outcomes).
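The OWA reduction can be made concrete with a short numerical sketch (the function names and the cardinality-based encoding of the symmetric capacity are ours): for the product operation and a capacity depending only on set cardinality, the operator (16) is a weighted sum of the order statistics of f.

```python
# Sketch of the OWA reduction; names and the data encoding are ours.
import itertools

def C_prod(f, mu):
    """Operator (16) with o = product and mu_hat = mu (the Choquet integral)."""
    order = sorted(f, key=f.get)               # f_(1) <= ... <= f_(n)
    return sum(f[order[i]]
               * (mu[frozenset(order[i:])] - mu[frozenset(order[i + 1:])])
               for i in range(len(order)))

f = {1: 0.9, 2: 0.2, 3: 0.5}
m = [0.0, 0.5, 0.8, 1.0]                       # mu depends only on cardinality
mu = {frozenset(S): m[r] for r in range(4) for S in itertools.combinations(f, r)}

# OWA weights attached to the order statistics f_(1) <= f_(2) <= f_(3)
weights = [m[3] - m[2], m[2] - m[1], m[1] - m[0]]
owa = sum(w * v for w, v in zip(weights, sorted(f.values())))
print(abs(C_prod(f, mu) - owa) < 1e-12)        # True
```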
Now we show that the operator $\displaystyle{\mathfrak{C}}^{\mu,\mu}_{\circ}$
can be rewritten in terms of $\displaystyle t$-level sets.
###### Proposition 4.5.
If $\displaystyle a\circ b=g(a)b$ for some nonnegative function $\displaystyle
g$ with $\displaystyle g(0)=0,$ then
$\displaystyle\displaystyle{\mathfrak{C}}_{\circ}^{\mu,\mu}(f)=\sum_{i=1}^{n}f_{(i)}\circ\big{(}\mu(\\{f\geqslant
f_{(i)}\\})-\mu(\\{f\geqslant f_{(i+1)}\\})\big{)}$ (20)
holds for any $\displaystyle(f,\mu)\in\mathbf{F}\times\mathbf{M},$ where
$\displaystyle(\cdot)$ is a permutation such that $\displaystyle
f_{(1)}\leqslant\ldots\leqslant f_{(n)}$ under the convention
$\displaystyle\\{f\geqslant f_{(n+1)}\\}=\emptyset.$
Proof. If $\displaystyle(f_{(i)})_{i=1}^{n}$ is an increasing sequence, the
statement is obvious. Assume that $\displaystyle
f_{(1)}<\ldots<f_{(j)}=\ldots=f_{(k)}<f_{(k+1)}<\ldots<f_{(n)}$ for some
$\displaystyle j<k.$ Then by (16) and the Abel transformation (i.e.,
$\displaystyle\textstyle{\sum_{i=1}^{n}a_{i}(b_{i}-b_{i+1})=\sum_{i=1}^{n}b_{i}(a_{i}-a_{i-1})}$
under the convention $\displaystyle b_{n+1}=a_{0}=0$) we get
$\displaystyle\displaystyle{\mathfrak{C}}_{\circ}^{\mu,\mu}(f)$
$\displaystyle\displaystyle=\sum_{i=1}^{n}(g(f_{(i)})-g(f_{(i-1)}))\mu(B_{(i)}),$
where $\displaystyle B_{(n+1)}=\emptyset,$ $\displaystyle g(0)=0$ and
$\displaystyle f_{(0)}=0.$ Moreover, $\displaystyle B_{(i)}=\\{f\geqslant
f_{(i)}\\}$ for any $\displaystyle i\notin\\{j+1,\ldots,k\\}.$ As
$\displaystyle g(f_{(j)})=\ldots=g(f_{(k)}),$ we get
$\displaystyle\displaystyle{\mathfrak{C}}_{\circ}^{\mu,\mu}(f)$
$\displaystyle\displaystyle=\sum_{i=1}^{j}(g(f_{(i)})-g(f_{(i-1)}))\mu(\\{f\geqslant
f_{(i)}\\})+\sum_{i=k+1}^{n}(g(f_{(i)})-g(f_{(i-1)}))\mu(\\{f\geqslant
f_{(i)}\\})$
$\displaystyle\displaystyle=\sum_{i=1}^{n}(g(f_{(i)})-g(f_{(i-1)}))\mu(\\{f\geqslant
f_{(i)}\\})=\sum_{i=1}^{n}g(f_{(i)})\big{(}\mu(\\{f\geqslant
f_{(i)}\\})-\mu(\\{f\geqslant f_{(i+1)}\\})\big{)},$
where in the last equality the Abel transformation has been used again. The
other cases can be treated similarly, so we omit them. ∎
It is clear that (20) is true for any $\displaystyle\mu\in\mathbf{M}$ and any
binary operation $\displaystyle\circ$ whenever $\displaystyle f\in\mathbf{F}$
is decreasing or increasing. However, it is not possible to obtain (20) for
any $\displaystyle f\in\mathbf{F}$ and any binary operation
$\displaystyle\circ$ even if $\displaystyle\mu\in\mathbf{M}$ is symmetric, as
illustrated in the following example.
###### Example 4.6.
Let $\displaystyle X=[2],$ $\displaystyle f(1)=f(2)=0.5$ and
$\displaystyle\mu\in\mathbf{M}$ be symmetric such that $\displaystyle\mu(X)=2$
and $\displaystyle\mu(\\{2\\})=1=\mu(\\{1\\}).$ The operator
$\displaystyle{\mathfrak{C}}_{\wedge}^{\mu,\mu}$ does not depend on
permutation, so for $\displaystyle B_{(1)}=X$ and $\displaystyle|B_{(2)}|=1$
we get
$\displaystyle\displaystyle{\mathfrak{C}}_{\wedge}^{\mu,\mu}(f)=0.5\wedge(\mu(X)-\mu(B_{(2)}))+0.5\wedge\mu(B_{(2)})=1.$
The right-hand side of (20) with $\displaystyle\circ=\wedge$ takes the form
$\displaystyle\displaystyle\sum_{i=1}^{2}f_{(i)}\wedge\big{(}\mu(\\{f\geqslant
f_{(i)}\\})-\mu(\\{f\geqslant f_{(i+1)}\\})\big{)}=0.5\wedge 0+0.5\wedge
2=0.5<{\mathfrak{C}}_{\wedge}^{\mu,\mu}(f).$
To sum up, the equality (20) does not hold.
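A direct computation confirming this example (the helper functions and the dict encodings below are ours, not the paper's notation) shows the gap between the two sides:

```python
# Example 4.6 checked numerically; helper names and encodings are ours.

def lhs_via_16(f, mu):
    """Operator (16) with the operation 'min' and mu_hat = mu."""
    order = sorted(f, key=f.get)
    return sum(min(f[order[i]],
                   mu[frozenset(order[i:])] - mu[frozenset(order[i + 1:])])
               for i in range(len(order)))

def rhs_via_20(f, mu):
    """Right-hand side of (20) with the operation 'min'."""
    order = sorted(f, key=f.get)
    vals = [f[x] for x in order] + [float('inf')]   # f_(n+1): empty level set
    total = 0.0
    for i in range(len(order)):
        upper = frozenset(x for x in f if f[x] >= vals[i])        # {f >= f_(i)}
        upper_next = frozenset(x for x in f if f[x] >= vals[i + 1])
        total += min(vals[i], mu[upper] - mu[upper_next])
    return total

f = {1: 0.5, 2: 0.5}
mu = {frozenset(): 0.0, frozenset({1}): 1.0,
      frozenset({2}): 1.0, frozenset({1, 2}): 2.0}

print(lhs_via_16(f, mu))    # 1.0
print(rhs_via_20(f, mu))    # 0.5, so (20) fails for this tied f
```

The failure comes from the tie f(1) = f(2): the level set {f >= f_(2)} is all of X rather than B_(2), which changes the second increment.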
### Connection with
$\displaystyle{\mathfrak{R}}{\mathfrak{C}}^{\mu,\widehat{\mu}}_{\circ}$
Let $\displaystyle\circ\colon[0,\infty)^{2}\to[0,\infty).$ Define the operator
$\displaystyle{\mathfrak{R}}{\mathfrak{C}}^{\mu,\widehat{\mu}}_{\circ}$ as
follows
$\displaystyle\displaystyle{\mathfrak{R}}{\mathfrak{C}}^{\mu,\widehat{\mu}}_{\circ}(f)=\sum_{i=1}^{n}f_{(i)}\circ(\mu(C_{(i)})-\widehat{\mu}(C_{(i-1)}))$
(21)
for any
$\displaystyle(f,\mu,\widehat{\mu})\in\mathbf{F}\times\mathbf{M}\times\mathbf{M}$
such that $\displaystyle\mu\geqslant\widehat{\mu},$ where
$\displaystyle(\cdot)\colon[n]\to[n]$ is a permutation such that
$\displaystyle f_{(1)}\leqslant\ldots\leqslant f_{(n)}$ and $\displaystyle
C_{(i)}=\\{(1),\ldots,(i)\\}$ for $\displaystyle i\in[n]$ with $\displaystyle
C_{(0)}=\emptyset.$ The operator (21) is well defined if $\displaystyle f$ is
monotone (decreasing or increasing),
$\displaystyle\mu,\widehat{\mu}\in\mathbf{M}$ are symmetric or $\displaystyle
a\circ b=g(a)b$ for some $\displaystyle g.$ Putting
$\displaystyle\mu=\widehat{\mu}\in\mathbf{M}^{1}$ and
$\displaystyle\circ=\cdot$ in (21), we obtain the reverse Choquet integral
introduced in [29].
Observe that if $\displaystyle\mu([n])=\widehat{\mu}([n]),$
$\displaystyle\nu(D)=\widehat{\mu}([n])-\widehat{\mu}(D^{c})$ and
$\displaystyle\widehat{\nu}(D)=\mu([n])-\mu(D^{c})$ for any $\displaystyle
D\in 2^{[n]},$ then
$\displaystyle\displaystyle{\mathfrak{R}}{\mathfrak{C}}^{\mu,\widehat{\mu}}_{\circ}(f)$
$\displaystyle\displaystyle={\mathfrak{C}}^{\nu,\widehat{\nu}}_{\circ}(f).$
(22)
Hence, Theorem 4.3 can also be used to analyze the relationship between the
special case of Choquet-Sugeno-like operator and
$\displaystyle{\mathfrak{R}}{\mathfrak{C}}_{\circ}^{\mu,\widehat{\mu}}.$
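The duality (22) can be checked numerically. The sketch below (all names, the random instance, and the dict encodings are ours) evaluates (16) and (21) with the product operation for a symmetric monotone measure and its dual, and confirms the equality.

```python
# Numerical check of the duality (22); names and encodings are ours.
import itertools, random

def C_op(f, mu, mu_hat, op):
    """Operator (16): sum of f_(i) o (mu(B_(i)) - mu_hat(B_(i+1)))."""
    order = sorted(f, key=f.get)
    return sum(op(f[order[i]],
                  mu[frozenset(order[i:])] - mu_hat[frozenset(order[i + 1:])])
               for i in range(len(order)))

def RC_op(f, mu, mu_hat, op):
    """Operator (21): sum of f_(i) o (mu(C_(i)) - mu_hat(C_(i-1)))."""
    order = sorted(f, key=f.get)
    return sum(op(f[order[i]],
                  mu[frozenset(order[:i + 1])] - mu_hat[frozenset(order[:i])])
               for i in range(len(order)))

random.seed(0)
X = [1, 2, 3]
f = {x: random.random() for x in X}
levels = [0.0] + sorted(random.random() for _ in range(3))   # monotone levels
mu = {frozenset(S): levels[r]                  # symmetric monotone measure
      for r in range(4) for S in itertools.combinations(X, r)}
# dual measure: nu(D) = mu([n]) - mu(D^c); here mu = mu_hat, so nu = nu_hat
nu = {D: mu[frozenset(X)] - mu[frozenset(X) - D] for D in mu}

op = lambda a, b: a * b
print(abs(RC_op(f, mu, mu, op) - C_op(f, nu, nu, op)) < 1e-12)   # True
```

The key identity used in (22) is visible in the code: the complement of B_(i) is exactly C_(i-1), so the dual measure turns decreasing tails into increasing initial segments.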
### Connection with $\displaystyle{\mathfrak{C}}^{\circ,\delta}_{\mu}$
###### Definition 4.7.
A function $\displaystyle\delta\colon[0,\infty)^{2}\to[0,\infty)$ is said to
be a dissimilarity function if for all $\displaystyle x,y,z$ the following
conditions hold:
1. (a)
$\displaystyle\delta(x,y)=\delta(y,x)$;
2. (b)
$\displaystyle\delta(x,y)=0$ if and only if $\displaystyle x=y$;
3. (c)
if $\displaystyle x\leqslant y\leqslant z,$ then
$\displaystyle\delta(x,y)\leqslant\delta(x,z)$ and
$\displaystyle\delta(y,z)\leqslant\delta(x,z).$
Let $\displaystyle\circ\colon[0,\infty)^{2}\to[0,\infty)$ and
$\displaystyle\delta$ be a dissimilarity function. Now we define the following
operator
$\displaystyle\displaystyle{\mathfrak{C}}_{\mu}^{\circ,\delta}(f)=\sum_{i=1}^{n}\delta(f_{(i)},f_{(i-1)})\circ\mu(B_{(i)})$
(23)
for any $\displaystyle(f,\mu)\in\mathbf{F}\times\mathbf{M},$ where
$\displaystyle(\cdot)\colon[n]\to[n]$ is a permutation such that
$\displaystyle f_{(1)}\leqslant\ldots\leqslant f_{(n)},$ $\displaystyle
B_{(i)}=\\{(i),\ldots,(n)\\}$ for $\displaystyle i\in[n]$ under the convention
$\displaystyle f_{(0)}=0.$ In order for the operator (23) not to depend on the
permutation, it is necessary to assume that either $\displaystyle 0\circ b=0$
for all $\displaystyle b$ (cf. [30]), or $\displaystyle f\in\mathbf{F}$ is
monotone (decreasing or increasing), or $\displaystyle\mu\in\mathbf{M}$ is
symmetric.
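A minimal sketch of (23) follows (the function names, the choice of dissimilarity delta(x, y) = |x - y|, and the uniform capacity are ours): with the product operation, (23) telescopes to the classical discrete Choquet integral, which for the uniform symmetric capacity is simply the arithmetic mean of f.

```python
# Sketch of operator (23) with delta(x, y) = |x - y|; names are ours.
import itertools

def C_delta(f, mu, op, delta=lambda x, y: abs(x - y)):
    """Operator (23): sum of delta(f_(i), f_(i-1)) o mu(B_(i)), f_(0) = 0."""
    order = sorted(f, key=f.get)
    total, prev = 0.0, 0.0
    for i, x in enumerate(order):
        total += op(delta(f[x], prev), mu[frozenset(order[i:])])
        prev = f[x]
    return total

f = {1: 0.2, 2: 0.5, 3: 0.9}
mu = {frozenset(S): len(S) / 3                 # uniform symmetric capacity
      for r in range(4) for S in itertools.combinations(f, r)}

# With o = product, (23) is the classical Choquet integral, here the mean of f.
print(abs(C_delta(f, mu, lambda a, b: a * b) - sum(f.values()) / 3) < 1e-9)  # True
```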
The operator (23) with $\displaystyle\circ=\cdot,$
$\displaystyle(f,\mu)\in\mathbf{F}^{1}\times\mathbf{M}^{1}$ and
$\displaystyle\delta\colon[0,1]^{2}\to[0,1]$ being a restricted dissimilarity
function (a function $\displaystyle\delta\colon[0,1]^{2}\to[0,1]$ is said to
be a restricted dissimilarity function if it satisfies, for all $\displaystyle
x,y,z\in[0,1],$ the conditions (a), (b) and (c) from Definition 4.7 and
$\displaystyle\delta(x,y)=1$ if and only if $\displaystyle\\{x,y\\}=\\{0,1\\}$;
see [8, Definition 2.1]) is the $\displaystyle d$-Choquet integral defined
in [8]. Study of (23) for $\displaystyle\circ\colon[0,1]^{2}\to[0,1],$
$\displaystyle(f,\mu)\in\mathbf{F}^{1}\times\mathbf{M}^{1}$ and
$\displaystyle\delta(x,y)=|x-y|,$ i.e.,
$\displaystyle\displaystyle{\mathfrak{C}}_{\mu}^{\circ}(f)=\sum_{i=1}^{n}(f_{(i)}-f_{(i-1)})\circ\mu(B_{(i)}),$
was conducted by Mesiar et al. [30], who called it the
$\displaystyle\circ$-based discrete Choquet-like integral. Later, Lucca et al.
[27] redefined it as
$\displaystyle\overline{{\mathfrak{C}}}_{\mu}^{\circ}(f)=\min\\{1,{\mathfrak{C}}_{\mu}^{\circ}(f)\\}$
for any $\displaystyle(f,\mu)\in\mathbf{F}^{1}\times\mathbf{M}^{1}$ and used
it in a fuzzy rule-based classification system.
As before, we give a relationship between (23) and the Choquet-Sugeno-like
operator (5) with
$\displaystyle{\mathcal{H}}={\mathcal{H}}_{\text{chain}},$
$\displaystyle{\mathcal{R}}={\mathcal{R}}^{-},$
$\displaystyle\mathrm{L}(x,y,z,w)=\delta(x,y)\circ z$ and
$\displaystyle{\mathscr{A}}={\mathscr{A}}^{\inf}=\widehat{{\mathscr{A}}},$
i.e.,
$\displaystyle\displaystyle\mathrm{CS}^{\mathrm{L}}_{{\mathcal{H}}_{\text{chain}},{\mathscr{A}}^{\inf},{\mathscr{A}}^{\inf}}(f,\mu)=\sup_{{\mathcal{D}}\in{\mathcal{H}}_{\text{chain}}}\sum_{i=1}^{l}\delta\big{(}\mathsf{A}^{\inf}(f|D_{i}),\mathsf{A}^{\inf}(f|D_{i-1})\big{)}\circ\mu(D_{i})$
(24)
for any $\displaystyle(f,\mu)\in\mathbf{F}\times\mathbf{M}$ under the
convention $\displaystyle D_{0}=\emptyset,$ where
$\displaystyle\circ\colon[0,\infty)^{2}\to[0,\infty)$ and
$\displaystyle\delta$ is a dissimilarity function. We split the study of the
connection between $\displaystyle{\mathfrak{C}}^{\circ,\delta}_{\mu}$ and (24)
into two cases: $\displaystyle|X|=2$ and $\displaystyle|X|\geqslant 3.$
###### Theorem 4.8.
Let $\displaystyle X=[2].$ Assume that $\displaystyle\delta$ is a
dissimilarity function and
$\displaystyle\circ\colon[0,\infty)^{2}\to[0,\infty)$ is nondecreasing.
* (i)
The operator defined in (24) coincides with
$\displaystyle{\mathfrak{C}}_{\mu}^{\circ,\delta}(f)$ for any $\displaystyle
f\in\mathbf{F}$ and any symmetric $\displaystyle\mu\in\mathbf{M}$ if and only
if
$\displaystyle\displaystyle\delta(x_{2},0)\circ y\leqslant\delta(x_{1},0)\circ
y+\delta(x_{2},x_{1})\circ y$ (25)
for any $\displaystyle 0\leqslant x_{1}\leqslant x_{2}<\infty$ and any
$\displaystyle y\in[0,\infty).$
* (ii)
If $\displaystyle 0\circ a=0$ for all $\displaystyle a,$ then the operator
defined in (24) coincides with
$\displaystyle{\mathfrak{C}}_{\mu}^{\circ,\delta}(f)$ for any
$\displaystyle(f,\mu)\in\mathbf{F}\times\mathbf{M}$ if and only if (25) is
true for any $\displaystyle 0\leqslant x_{1}\leqslant x_{2}<\infty$ and any
$\displaystyle y\in[0,\infty).$
Proof. (i) Since $\displaystyle X=[2],$ the family
$\displaystyle{\mathcal{H}}_{\text{chain}}$ is given in Example 3.1 (c). Let
$\displaystyle x_{(1)}=\min\\{x_{1},x_{2}\\}$ and $\displaystyle
x_{(2)}=\max\\{x_{1},x_{2}\\}.$ Then the operators take the form
$\displaystyle\displaystyle\mathrm{CS}^{\mathrm{L}}_{{\mathcal{H}}_{\text{chain}},{\mathscr{A}}^{\inf},{\mathscr{A}}^{\inf}}(f,\mu)$
$\displaystyle\displaystyle=\max\big{\\{}\delta(x_{(2)},0)\circ\mu(\\{(2)\\}),\,\delta(x_{(1)},0)\circ\mu(X)+\delta(x_{(2)},x_{(1)})\circ\mu(\\{(2)\\})\big{\\}},$
(26) $\displaystyle\displaystyle{\mathfrak{C}}_{\mu}^{\circ,\delta}(f)$
$\displaystyle\displaystyle=\delta(x_{(1)},0)\circ\mu(X)+\delta(x_{(2)},x_{(1)})\circ\mu(\\{(2)\\})$
for any symmetric $\displaystyle\mu\in\mathbf{M}.$ Thus, the equality
$\displaystyle\mathrm{CS}^{\mathrm{L}}_{{\mathcal{H}}_{\text{chain}},{\mathscr{A}}^{\inf},{\mathscr{A}}^{\inf}}(f,\mu)={\mathfrak{C}}_{\mu}^{\circ,\delta}(f)$
holds for any $\displaystyle f\in\mathbf{F}$ and any symmetric
$\displaystyle\mu\in\mathbf{M}$ if and only if
$\displaystyle\displaystyle\delta(x_{(2)},0)\circ\mu(\\{(2)\\})\leqslant\delta(x_{(1)},0)\circ\mu(X)+\delta(x_{(2)},x_{(1)})\circ\mu(\\{(2)\\})$
for any $\displaystyle 0\leqslant x_{1}\leqslant x_{2}<\infty$ and any
symmetric $\displaystyle\mu.$ As $\displaystyle\mu(\\{2\\})\leqslant\mu(X)$
and $\displaystyle\circ$ is nondecreasing, the latter condition is equivalent
to $\displaystyle\delta(x_{(2)},0)\circ y\leqslant\delta(x_{(1)},0)\circ
y+\delta(x_{(2)},x_{(1)})\circ y$ for any $\displaystyle 0\leqslant
x_{1}\leqslant x_{2}<\infty$ and any $\displaystyle 0\leqslant y<\infty,$
which finishes the proof of part (i).
The proof of part (ii) is similar to the proof of (i), since (26) holds for
any $\displaystyle\mu\in\mathbf{M}$ in view of $\displaystyle 0\circ a=0$ for
any $\displaystyle a.$ ∎
###### Theorem 4.9.
Let $\displaystyle|X|\geqslant 3.$ Assume that $\displaystyle\delta$ is a
dissimilarity function and
$\displaystyle\circ\colon[0,\infty)^{2}\to[0,\infty)$ is nondecreasing such
that $\displaystyle 0\circ 0=0.$ Then the operator defined in (24) coincides
with $\displaystyle{\mathfrak{C}}_{\mu}^{\circ,\delta}(f)$ for any
$\displaystyle f\in\mathbf{F}$ and any symmetric
$\displaystyle\mu\in\mathbf{M}$ if and only if
$\displaystyle\delta(x_{3},x_{1})\circ y\leqslant\delta(x_{2},x_{1})\circ
y+\delta(x_{3},x_{2})\circ y$ for any $\displaystyle 0\leqslant x_{1}\leqslant
x_{2}\leqslant x_{3}<\infty$ and any $\displaystyle y\in[0,\infty).$
Proof. “$\displaystyle\Rightarrow$” Assume that $\displaystyle f(1)=x_{1},$
$\displaystyle f(2)=x_{2}$ and $\displaystyle f(j)=x_{3}$ for $\displaystyle
j\in\\{3,\ldots,n\\}$ such that $\displaystyle 0\leqslant x_{1}\leqslant
x_{2}\leqslant x_{3},$ where $\displaystyle n=|X|.$ Clearly,
$\displaystyle{\mathfrak{C}}_{\mu}^{\circ,\delta}(f)=\delta(x_{1},0)\circ\mu(B_{(1)})+\delta(x_{2},x_{1})\circ\mu(B_{(2)})+\delta(x_{3},x_{2})\circ\mu(B_{(3)})+\sum_{i=4}^{n}0\circ\mu(B_{(i)}).$
The operator (24) coincides with
$\displaystyle{\mathfrak{C}}_{\mu}^{\circ,\delta},$ so for
$\displaystyle{\mathcal{D}}_{2}=\\{B_{(3)},B_{(1)}\\}\in{\mathcal{H}}_{\text{chain}},$
we get
$\displaystyle\delta(x_{1},0)\circ\mu(B_{(1)})+\delta(x_{3},x_{1})\circ\mu(B_{(3)})\leqslant{\mathfrak{C}}_{\mu}^{\circ,\delta}(f)$
for any symmetric $\displaystyle\mu\in\mathbf{M}.$ In consequence, for any
$\displaystyle y\in[0,\infty)$ and the symmetric monotone measure given by
$\displaystyle\mu(E)=\begin{cases}y,&\textrm{if}\,|E|\in\\{n,n-1,n-2\\},\\\
0,&\textrm{otherwise},\end{cases}$
we obtain $\displaystyle\delta(x_{3},x_{1})\circ
y\leqslant\delta(x_{2},x_{1})\circ y+\delta(x_{3},x_{2})\circ y,$ as
$\displaystyle 0\circ 0=0,$ which finishes the proof of the subadditivity
condition.
“$\displaystyle\Leftarrow$” Since
$\displaystyle\mathsf{A}^{\inf}(f|B_{(i)})=f_{(i)}$ for any $\displaystyle
i\in[n],$ we get
$\displaystyle\sup_{{\mathcal{D}}\in{\mathcal{H}}_{\text{chain}}}\sum_{i=1}^{l}\delta\big{(}\mathsf{A}^{\inf}(f|D_{i}),\mathsf{A}^{\inf}(f|D_{i-1})\big{)}\circ\mu(D_{i})\geqslant{\mathfrak{C}}_{\mu}^{\circ,\delta}(f),$
so it is enough to show that
$\displaystyle\displaystyle
L:=\sum_{i=1}^{l}\delta\big{(}\mathsf{A}^{\inf}(f|D_{i}),\mathsf{A}^{\inf}(f|D_{i-1})\big{)}\circ\mu(D_{i})\leqslant{\mathfrak{C}}_{\mu}^{\circ,\delta}(f)$
for any $\displaystyle{\mathcal{D}}_{l}\in{\mathcal{H}}_{\text{chain}}$ such
that $\displaystyle l\in[n].$ Then $\displaystyle L=S_{1},$ where
$\displaystyle\displaystyle
S_{k}=\sum_{i=k}^{l}\delta\big{(}\mathsf{A}^{\inf}(f|D_{i}),\mathsf{A}^{\inf}(f|D_{i-1})\big{)}\circ\mu(D_{i})$
for $\displaystyle k\in[l].$ Let $\displaystyle
k_{1}=\min\\{k\in[n]\colon\mathsf{A}^{\inf}(f|D_{1})=f_{(k)}\\}.$ Thus
$\displaystyle D_{1}\subseteq B_{(k_{1})}$ and
$\displaystyle\displaystyle
L\leqslant\delta(f_{(k_{1})},0)\circ\mu(B_{(k_{1})})+\delta(\mathsf{A}^{\inf}(f|D_{2}),f_{(k_{1})})\circ\mu(D_{2})+S_{3}.$
(27)
Then there exists $\displaystyle k_{2}\in\\{k_{1}+1,\ldots,n\\}$ such that
$\displaystyle|D_{2}|=|B_{(k_{2})}|$ and
$\displaystyle\mathsf{A}^{\inf}(f|D_{2})\leqslant\mathsf{A}^{\inf}(f|B_{(k_{2})})=f_{(k_{2})}$
(see the proof of Theorem 4.3). In view of $\displaystyle
f_{(k_{1})}=\mathsf{A}^{\inf}(f|D_{1})\leqslant\mathsf{A}^{\inf}(f|D_{2})$ (as
$\displaystyle D_{2}\subset D_{1}$) and Definition 4.7 (a) and (c), from (27)
and the symmetry of $\displaystyle\mu$ we get
$\displaystyle\displaystyle L$
$\displaystyle\displaystyle\leqslant\delta(f_{(k_{1})},0)\circ\mu(B_{(k_{1})})+\delta(f_{(k_{2})},f_{(k_{1})})\circ\mu(B_{(k_{2})})+\delta(\mathsf{A}^{\inf}(f|D_{3}),f_{(k_{2})})\circ\mu(D_{3})+S_{4}.$
(28)
Now we only focus on $\displaystyle\delta(f_{(k_{1})},0)\circ\mu(B_{(k_{1})})$
from (28). By the assumption on $\displaystyle\delta$ and by monotonicity of
$\displaystyle\mu$ we obtain
$\displaystyle\displaystyle\delta(f_{(k_{1})},0)\circ\mu(B_{(k_{1})})$
$\displaystyle\displaystyle\leqslant\delta(f_{(1)},f_{(0)})\circ\mu(B_{(k_{1})})+\delta(f_{(k_{1})},f_{(1)})\circ\mu(B_{(k_{1})})$
$\displaystyle\displaystyle\leqslant\delta(f_{(1)},f_{(0)})\circ\mu(B_{(1)})+\delta(f_{(k_{1})},f_{(1)})\circ\mu(B_{(k_{1})})$
$\displaystyle\displaystyle\leqslant\ldots\leqslant\sum_{i=1}^{k_{1}}\delta(f_{(i)},f_{(i-1)})\circ\mu(B_{(i)}).$
(29)
Now consider the term
$\displaystyle\delta(f_{(k_{2})},f_{(k_{1})})\circ\mu(B_{(k_{2})})$ from (28).
Using a similar argument to the one above, we get
$\displaystyle\displaystyle\delta(f_{(k_{2})},f_{(k_{1})})\circ\mu(B_{(k_{2})})$
$\displaystyle\displaystyle\leqslant\delta(f_{(k_{1}+1)},f_{(k_{1})})\circ\mu(B_{(k_{2})})+\delta(f_{(k_{2})},f_{(k_{1}+1)})\circ\mu(B_{(k_{2})})$
$\displaystyle\displaystyle\leqslant\delta(f_{(k_{1}+1)},f_{(k_{1})})\circ\mu(B_{(k_{1}+1)})+\delta(f_{(k_{2})},f_{(k_{1}+1)})\circ\mu(B_{(k_{2})})$
$\displaystyle\displaystyle\leqslant\ldots\leqslant\sum_{i=k_{1}+1}^{k_{2}}\delta(f_{(i)},f_{(i-1)})\circ\mu(B_{(i)}).$
(30)
So putting (29) and (30) in (28) we have
$\displaystyle\displaystyle L$
$\displaystyle\displaystyle\leqslant\sum_{i=1}^{k_{2}}\delta(f_{(i)},f_{(i-1)})\circ\mu(B_{(i)})+\delta(\mathsf{A}^{\inf}(f|D_{3}),f_{(k_{2})})\circ\mu(D_{3})+S_{4}.$
Repeating the same arguments for $\displaystyle D_{3},\ldots,D_{l}$ leads to
the inequality
$\displaystyle\displaystyle
L\leqslant\sum_{i=1}^{k_{l}}\delta(f_{(i)},f_{(i-1)})\circ\mu(B_{(i)})\leqslant{\mathfrak{C}}_{\mu}^{\circ,\delta}(f),$
which finishes the proof. ∎
###### Remark 4.10.
Note that the assumptions $\displaystyle|X|\geqslant 3$ and $\displaystyle
0\circ 0=0$ are used only in the implication “$\displaystyle\Rightarrow$” in
the proof of Theorem 4.9.
From Theorem 4.9 the following two results, connecting our approach with
existing ones in the literature for a set $\displaystyle X$ with
$\displaystyle|X|\geqslant 3,$ follow:
1. (a)
if $\displaystyle\delta\colon[0,1]^{2}\to[0,1]$ is a restricted dissimilarity
function, then the Choquet-Sugeno-like operator defined in (24) with
$\displaystyle\circ=\cdot$ coincides with the $\displaystyle d$-Choquet
integral for any $\displaystyle f\in\mathbf{F}^{1}$ and any symmetric capacity
$\displaystyle\mu$ if and only if
$\displaystyle\delta(x_{3},x_{1})\leqslant\delta(x_{2},x_{1})+\delta(x_{3},x_{2})$
for any $\displaystyle 0\leqslant x_{1}\leqslant x_{2}\leqslant x_{3}\leqslant
1.$
2. (b)
if $\displaystyle\delta(x,y)=|x-y|,$ an operation
$\displaystyle\circ\colon[0,1]^{2}\to[0,1]$ is nondecreasing and
$\displaystyle 0\circ 0=0,$ then the operator (24) coincides with
$\displaystyle{\mathfrak{C}}_{\mu}^{\circ}(f)$ for any $\displaystyle
f\in\mathbf{F}^{1}$ and any symmetric capacity $\displaystyle\mu$ if and only
if $\displaystyle x\mapsto x\circ b$ is subadditive for any $\displaystyle
b\in[0,1].$
A representation of $\displaystyle{\mathfrak{C}}_{\mu}^{\circ,\delta}$ using
$\displaystyle t$-level sets reads as follows. The result is an easy
consequence of the fact that $\displaystyle\delta(x,x)=0$ for any
$\displaystyle x.$
###### Proposition 4.11.
Let $\displaystyle\delta$ be a dissimilarity function and
$\displaystyle\circ\colon[0,\infty)^{2}\to[0,\infty).$ If $\displaystyle
0\circ a=0$ for all $\displaystyle a,$ then
$\displaystyle\displaystyle{\mathfrak{C}}^{\circ,\delta}_{\mu}(f)=\sum_{i=1}^{n}\delta(f_{(i)},f_{(i-1)})\circ\mu(\\{f\geqslant
f_{(i)}\\})$ (31)
for any $\displaystyle(f,\mu)\in\mathbf{F}\times\mathbf{M},$ where
$\displaystyle(\cdot)\colon[n]\to[n]$ is a permutation such that
$\displaystyle f_{(1)}\leqslant\ldots\leqslant f_{(n)}$ with $\displaystyle
f_{(0)}=0.$
It is clear that (31) is true for any $\displaystyle\mu\in\mathbf{M}$ and any
binary operation $\displaystyle\circ$ whenever $\displaystyle f\in\mathbf{F}$
is decreasing or increasing. Again, it is not possible to obtain the equality
(31) for any $\displaystyle f\in\mathbf{F}$ and any binary operation
$\displaystyle\circ$ even for symmetric $\displaystyle\mu\in\mathbf{M}.$
###### Example 4.12.
Let $\displaystyle X,$ $\displaystyle f$ and $\displaystyle\mu$ be the same as
in Example 4.6. Then
$\displaystyle\displaystyle\sum_{i=1}^{2}\delta(f_{(i)},f_{(i-1)})\circ\mu(\\{f\geqslant
f_{(i)}\\})$
$\displaystyle\displaystyle=\delta(f_{(1)},0)\circ\mu(X)+0\circ\mu(X),$
$\displaystyle\displaystyle\sum_{i=1}^{2}\delta(f_{(i)},f_{(i-1)})\circ\mu(B_{(i)})$
$\displaystyle\displaystyle=\delta(f_{(1)},0)\circ\mu(X)+0\circ\mu(\\{2\\}).$
Clearly, for $\displaystyle 0\circ\mu(X)\neq 0\circ\mu(\\{2\\})$ the equality
in (31) does not hold.
### Connection with
$\displaystyle{\mathfrak{C}}_{\mu}^{(\mathrm{F}_{1},\mathrm{F}_{2})}$
Let
$\displaystyle\mathrm{F}_{1},\mathrm{F}_{2}\colon[0,\infty)^{2}\to[0,\infty)$
and $\displaystyle\mathrm{F}_{1}\geqslant\mathrm{F}_{2}.$ The operator
$\displaystyle{\mathfrak{C}}_{\mu}^{(\mathrm{F}_{1},\mathrm{F}_{2})}$ is
defined as
$\displaystyle\displaystyle{\mathfrak{C}}_{\mu}^{(\mathrm{F}_{1},\mathrm{F}_{2})}(f)$
$\displaystyle\displaystyle=\sum_{i=1}^{n}\big{(}\mathrm{F}_{1}(f_{(i)},\mu(B_{(i)}))-\mathrm{F}_{2}(f_{(i-1)},\mu(B_{(i)}))\big{)},$
(32)
where $\displaystyle(\cdot)\colon[n]\to[n]$ is a permutation such that
$\displaystyle f_{(1)}\leqslant\ldots\leqslant f_{(n)}$ and $\displaystyle
B_{(i)}=\\{(i),\ldots,(n)\\}$ for $\displaystyle i\in[n]$ with $\displaystyle
f_{(0)}=0.$ The formula (32) can be rewritten as follows
$\displaystyle\displaystyle{\mathfrak{C}}_{\mu}^{(\mathrm{F}_{1},\mathrm{F}_{2})}(f)=\sum_{i=1}^{n-1}\big{(}\mathrm{F}_{1}(f_{(i)},\mu(B_{(i)}))-\mathrm{F}_{2}(f_{(i)},\mu(B_{(i+1)}))\big{)}+\mathrm{F}_{1}(f_{(n)},\mu(B_{(n)}))-\mathrm{F}_{2}(0,\mu(B_{(1)})).$
(33)
The operator (32) is well defined (i.e., independent of the permutation) when
either $\displaystyle\mathrm{F}_{1}=\mathrm{F}_{2}$ or $\displaystyle
f\in\mathbf{F}$ is monotone (decreasing or increasing) or
$\displaystyle\mu\in\mathbf{M}$ is symmetric.
Putting
$\displaystyle\mathrm{F}_{1}|_{[0,1]^{2}}=\mathrm{F}_{2}|_{[0,1]^{2}}=\mathrm{C}$
in (32), where $\displaystyle\mathrm{C}$ is a copula, we get the operator
first defined in [23] and then redefined in [26] under the name
$\displaystyle\mathrm{C}\mathrm{C}$-integral (or Choquet-like Copula-based
integral) of $\displaystyle(f,\mu)\in\mathbf{F}^{1}\times\mathbf{M}^{1}$ in
the form
$\displaystyle\displaystyle{\mathfrak{C}}_{\mu}^{(\mathrm{C},\mathrm{C})}(f)=\sum_{i=1}^{n}\big{(}\mathrm{C}(f_{(i)},\mu(B_{(i)}))-\mathrm{C}(f_{(i-1)},\mu(B_{(i)}))\big{)},$
(34)
where $\displaystyle f_{(i)}$ and $\displaystyle B_{(i)}$ are the same as in
(32). The operator (34) is a $\displaystyle[0,1]$-valued universal integral
([23]). Setting
$\displaystyle\mathrm{F}_{1},\mathrm{F}_{2}\colon[0,1]^{2}\to[0,1]$ in (32)
such that $\displaystyle\mathrm{F}_{1}\geqslant\mathrm{F}_{2}$ and
$\displaystyle\mathrm{F}_{1}(a,\mu(X))=a$ for all $\displaystyle a,$ we get
$\displaystyle\displaystyle{\mathfrak{C}}_{\mu}^{(\mathrm{F}_{1},\mathrm{F}_{2})}(f)=f_{(1)}+\sum_{i=2}^{n}\big{(}\mathrm{F}_{1}(f_{(i)},\mu(B_{(i)}))-\mathrm{F}_{2}(f_{(i-1)},\mu(B_{(i)}))\big{)}$
for any $\displaystyle(f,\mu)\in\mathbf{F}^{1}\times\mathbf{M}^{1},$ being a
version of $\displaystyle C_{\mathrm{F}_{1}\mathrm{F}_{2}}$-integral defined
in [28, Definition 7].
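A small sketch of (32) follows (the function names and the test instance are ours): taking F1 = F2 to be the product copula C(a, b) = ab in (34) makes the sum telescope, so the CC-integral collapses to the classical Choquet integral, which gives a quick sanity check.

```python
# Sketch of operator (32); with F1 = F2 = product copula it telescopes to the
# classical Choquet integral. Names and the test instance are ours.
import itertools

def C_F1F2(f, mu, F1, F2):
    """Operator (32): sum of F1(f_(i), mu(B_(i))) - F2(f_(i-1), mu(B_(i)))."""
    order = sorted(f, key=f.get)
    total, prev = 0.0, 0.0
    for i, x in enumerate(order):
        m = mu[frozenset(order[i:])]           # mu(B_(i))
        total += F1(f[x], m) - F2(prev, m)
        prev = f[x]
    return total

f = {1: 0.2, 2: 0.5, 3: 0.9}
mu = {frozenset(S): len(S) / 3                 # uniform symmetric capacity
      for r in range(4) for S in itertools.combinations(f, r)}

prod = lambda a, b: a * b                      # the product copula
choquet = sum(f.values()) / 3                  # Choquet w.r.t. uniform capacity
print(abs(C_F1F2(f, mu, prod, prod) - choquet) < 1e-12)   # True
```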
Let $\displaystyle{\mathcal{H}}={\mathcal{H}}_{\text{chain}}$ and
$\displaystyle{\mathcal{R}}={\mathcal{R}}^{-}.$ Then the Choquet-Sugeno-like
operator (5) with
$\displaystyle\mathrm{L}(x,y,z,w)=\mathrm{F}_{1}(x,z)-\mathrm{F}_{2}(y,z)$ and
$\displaystyle{\mathscr{A}}={\mathscr{A}}^{\inf}=\widehat{{\mathscr{A}}}$ can
be written as follows
$\displaystyle\displaystyle\mathrm{CS}^{(\mathrm{F}_{1},\mathrm{F}_{2})}_{{\mathcal{H}}_{\text{chain}},{\mathscr{A}}^{\inf},{\mathscr{A}}^{\inf}}(f,\mu)=\sup_{{\mathcal{D}}\in{\mathcal{H}}_{\text{chain}}}\sum_{i=1}^{l}\big{(}\mathrm{F}_{1}(\mathsf{A}^{\inf}(f|D_{i}),\mu(D_{i}))-\mathrm{F}_{2}(\mathsf{A}^{\inf}(f|D_{i-1}),\mu(D_{i}))\big{)}$
(35)
under the convention $\displaystyle D_{0}=\emptyset.$
A pair $\displaystyle(\mathrm{F}_{1},\mathrm{F}_{2})$ of functions
$\displaystyle\mathrm{F}_{1},\mathrm{F}_{2}\colon E_{1}\times
E_{2}\to[0,\infty),$ where $\displaystyle E_{1},E_{2}\subseteq[0,\infty),$ is
pairwise $\displaystyle 2$-increasing if
$\displaystyle\displaystyle\mathrm{F}_{1}(x_{1},y_{2})-\mathrm{F}_{2}(x_{1},y_{1})\leqslant\mathrm{F}_{1}(x_{2},y_{2})-\mathrm{F}_{2}(x_{2},y_{1})$
for any $\displaystyle[x_{1},x_{2}]\times[y_{1},y_{2}]\subseteq E_{1}\times
E_{2}.$ It is clear that the pair $\displaystyle(\mathrm{C},\mathrm{C})$ is
pairwise $\displaystyle 2$-increasing whenever $\displaystyle\mathrm{C}$ is a
copula [32].
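As a quick randomized check (ours, not from the paper), the product copula indeed yields a pairwise 2-increasing pair (C, C):

```python
# Randomized check (ours) that (C, C) with the product copula C(a, b) = a*b
# satisfies the pairwise 2-increasing inequality.
import random

random.seed(1)
C = lambda x, y: x * y
ok = True
for _ in range(1000):
    x1, x2 = sorted(random.random() for _ in range(2))
    y1, y2 = sorted(random.random() for _ in range(2))
    # F1(x1, y2) - F2(x1, y1) <= F1(x2, y2) - F2(x2, y1) with F1 = F2 = C
    ok = ok and (C(x1, y2) - C(x1, y1) <= C(x2, y2) - C(x2, y1))
print(ok)   # True
```

For the product copula the inequality reduces to x1(y2 - y1) <= x2(y2 - y1), which holds since x1 <= x2 and y1 <= y2.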
###### Theorem 4.13.
Assume that
$\displaystyle\mathrm{F}_{1},\mathrm{F}_{2}\colon[0,\infty)^{2}\to[0,\infty)$
are pairwise $\displaystyle 2$-increasing, $\displaystyle\mathrm{F}_{1}$ is
nondecreasing, $\displaystyle\mathrm{F}_{1}\geqslant\mathrm{F}_{2}$ and
$\displaystyle\mathrm{F}_{2}(0,b)=0$ for any $\displaystyle b.$ Then the
operator (35) coincides with
$\displaystyle{\mathfrak{C}}_{\mu}^{(\mathrm{F}_{1},\mathrm{F}_{2})}(f)$ for
any $\displaystyle f\in\mathbf{F}$ and any symmetric
$\displaystyle\mu\in\mathbf{M}.$
Proof. By (32) and (35) we get
$\displaystyle\mathrm{CS}^{(\mathrm{F}_{1},\mathrm{F}_{2})}_{{\mathcal{H}}_{\text{chain}},{\mathscr{A}}^{\inf},{\mathscr{A}}^{\inf}}(f,\mu)\geqslant{\mathfrak{C}}_{\mu}^{(\mathrm{F}_{1},\mathrm{F}_{2})}(f),$
so we will prove the reverse inequality, that is,
$\displaystyle\displaystyle
L:=\sum_{i=1}^{l}\big{(}\mathrm{F}_{1}(\mathsf{A}^{\inf}(f|D_{i}),\mu(D_{i}))-\mathrm{F}_{2}(\mathsf{A}^{\inf}(f|D_{i-1}),\mu(D_{i}))\big{)}\leqslant{\mathfrak{C}}_{\mu}^{(\mathrm{F}_{1},\mathrm{F}_{2})}(f)$
for any $\displaystyle{\mathcal{D}}_{l}\in{\mathcal{H}}_{\text{chain}}$ with
$\displaystyle l\in[n].$ Since
$\displaystyle\sum_{i=1}^{l}\bigl{(}\mathrm{F}_{1}(a_{i},b_{i})-\mathrm{F}_{2}(a_{i-1},b_{i})\bigr{)}=\sum_{i=1}^{l-1}\bigl{(}\mathrm{F}_{1}(a_{i},b_{i})-\mathrm{F}_{2}(a_{i},b_{i+1})\bigr{)}+\mathrm{F}_{1}(a_{l},b_{l})-\mathrm{F}_{2}(a_{0},b_{1})$
for any nonnegative sequence $\displaystyle(a_{i})_{i=0}^{l}$ and
$\displaystyle(b_{i})_{i=1}^{l},$ we get $\displaystyle L=S_{1},$ where
$\displaystyle\displaystyle
S_{k}=\sum_{i=k}^{l-1}\big{(}\mathrm{F}_{1}(\mathsf{A}^{\inf}(f|D_{i}),\mu(D_{i}))-\mathrm{F}_{2}(\mathsf{A}^{\inf}(f|D_{i}),\mu(D_{i+1}))\big{)}+\mathrm{F}_{1}(\mathsf{A}^{\inf}(f|D_{l}),\mu(D_{l}))$
for $\displaystyle k\in[l],$ as $\displaystyle\mathrm{F}_{2}(0,\mu(D_{1}))=0.$
Let $\displaystyle k_{1}=\min\\{k\colon\mathsf{A}^{\inf}(f|D_{1})=f_{(k)}\\}.$
Clearly, $\displaystyle D_{1}\subseteq B_{(k_{1})}.$ Hence by monotonicity of
$\displaystyle\mathrm{F}_{1}$ we have
$\displaystyle\displaystyle L$
$\displaystyle\displaystyle\leqslant\mathrm{F}_{1}(f_{(k_{1})},\mu(B_{(k_{1})}))-\mathrm{F}_{2}(f_{(k_{1})},\mu(D_{2}))+S_{2}.$
(36)
Then there exists $\displaystyle k_{2}\in\\{k_{1}+1,\ldots,n\\}$ such that
$\displaystyle|D_{2}|=|B_{(k_{2})}|$ and
$\displaystyle\mathsf{A}^{\inf}(f|D_{2})\leqslant\mathsf{A}^{\inf}(f|B_{(k_{2})})$
(see the proof of Theorem 4.3). Thus
$\displaystyle\displaystyle L$
$\displaystyle\displaystyle\leqslant\mathrm{F}_{1}(f_{(k_{1})},\mu(B_{(k_{1})}))-\mathrm{F}_{2}(f_{(k_{1})},\mu(B_{(k_{2})}))+\mathrm{F}_{1}(\mathsf{A}^{\inf}(f|D_{2}),\mu(D_{2}))-\mathrm{F}_{2}(\mathsf{A}^{\inf}(f|D_{2}),\mu(D_{3}))+S_{3}.$
Since $\displaystyle\mathrm{F}_{1}$ and $\displaystyle\mathrm{F}_{2}$ are
pairwise $\displaystyle 2$-increasing,
$\displaystyle\mu(D_{3})\leqslant\mu(D_{2})$ and
$\displaystyle\mathsf{A}^{\inf}(f|B_{(k_{2})})=f_{(k_{2})},$ we get
$\displaystyle\displaystyle L$
$\displaystyle\displaystyle\leqslant\mathrm{F}_{1}(f_{(k_{1})},\mu(B_{(k_{1})}))-\mathrm{F}_{2}(f_{(k_{1})},\mu(B_{(k_{2})}))+\mathrm{F}_{1}(f_{(k_{2})},\mu(D_{2}))-\mathrm{F}_{2}(f_{(k_{2})},\mu(D_{3}))+S_{3}$
$\displaystyle\displaystyle=\mathrm{F}_{1}(f_{(k_{1})},\mu(B_{(k_{1})}))-\mathrm{F}_{2}(f_{(k_{1})},\mu(B_{(k_{2})}))+\mathrm{F}_{1}(f_{(k_{2})},\mu(B_{(k_{2})}))-\mathrm{F}_{2}(f_{(k_{2})},\mu(D_{3}))+S_{3}.$
(37)
Let $\displaystyle
M=\mathrm{F}_{1}(f_{(k_{1})},\mu(B_{(k_{1})}))-\mathrm{F}_{2}(f_{(k_{1})},\mu(B_{(k_{2})})).$
Then by $\displaystyle\mathrm{F}_{1}\geqslant\mathrm{F}_{2}$ and pairwise
$\displaystyle 2$-increasingness we get
$\displaystyle\displaystyle M$
$\displaystyle\displaystyle\leqslant\mathrm{F}_{1}(f_{(k_{1})},\mu(B_{(k_{1})}))-\mathrm{F}_{2}(f_{(k_{1})},\mu(B_{(k_{1}+1)}))+\mathrm{F}_{1}(f_{(k_{1})},\mu(B_{(k_{1}+1)}))-\mathrm{F}_{2}(f_{(k_{1})},\mu(B_{(k_{2})}))$
$\displaystyle\displaystyle\leqslant\mathrm{F}_{1}(f_{(k_{1})},\mu(B_{(k_{1})}))-\mathrm{F}_{2}(f_{(k_{1})},\mu(B_{(k_{1}+1)}))+\mathrm{F}_{1}(f_{(k_{1}+1)},\mu(B_{(k_{1}+1)}))-\mathrm{F}_{2}(f_{(k_{1}+1)},\mu(B_{(k_{2})}))$
$\displaystyle\displaystyle\leqslant\ldots\leqslant\sum_{i=k_{1}}^{k_{2}-1}\big{(}\mathrm{F}_{1}(f_{(i)},\mu(B_{(i)}))-\mathrm{F}_{2}(f_{(i)},\mu(B_{(i+1)}))\big{)}.$
(38)
Combining (37) and (38) we get
$\displaystyle\displaystyle L$
$\displaystyle\displaystyle\leqslant\sum_{i=k_{1}}^{k_{2}-1}\big{(}\mathrm{F}_{1}(f_{(i)},\mu(B_{(i)}))-\mathrm{F}_{2}(f_{(i)},\mu(B_{(i+1)}))\big{)}+\mathrm{F}_{1}(f_{(k_{2})},\mu(B_{(k_{2})}))-\mathrm{F}_{2}(f_{(k_{2})},\mu(D_{3}))+S_{3}.$
By repeating the same arguments, we obtain
$\displaystyle\displaystyle L$
$\displaystyle\displaystyle\leqslant\sum_{i=k_{1}}^{k_{l}-1}\big{(}\mathrm{F}_{1}(f_{(i)},\mu(B_{(i)}))-\mathrm{F}_{2}(f_{(i)},\mu(B_{(i+1)}))\big{)}+\mathrm{F}_{1}(f_{(k_{l})},\mu(B_{(k_{l})}))\leqslant{\mathfrak{C}}_{\mu}^{(\mathrm{F}_{1},\mathrm{F}_{2})}(f),$
where in the last inequality the formula (33) has been used. This completes
the proof. ∎
Let $\displaystyle\mathrm{C}$ be a copula. Arguing as in the proof of
Theorem 4.13, one can prove that the operator defined in (35) with
$\displaystyle\mathrm{F}_{1}|_{[0,1]^{2}}=\mathrm{F}_{2}|_{[0,1]^{2}}=\mathrm{C}$
coincides with
$\displaystyle{\mathfrak{C}}_{\mu}^{(\mathrm{C},\mathrm{C})}(f)$
($\displaystyle\mathrm{C}\mathrm{C}$-integral) for any $\displaystyle
f\in\mathbf{F}^{1}$ and any symmetric capacity $\displaystyle\mu.$
The operator
$\displaystyle{\mathfrak{C}}_{\mu}^{(\mathrm{F}_{1},\mathrm{F}_{2})}$ can also
be represented via $\displaystyle t$-level sets. Indeed, the equality
$\displaystyle\displaystyle{\mathfrak{C}}_{\mu}^{(\mathrm{F}_{1},\mathrm{F}_{2})}(f)=\sum_{i=1}^{n}\big{(}\mathrm{F}_{1}(f_{(i)},\mu(\\{f\geqslant
f_{(i)}\\}))-\mathrm{F}_{2}(f_{(i-1)},\mu(\\{f\geqslant f_{(i)}\\}))\big{)},$
(39)
where $\displaystyle(\cdot)\colon[n]\to[n]$ is a permutation such that
$\displaystyle 0=f_{(0)}\leqslant f_{(1)}\leqslant\ldots\leqslant f_{(n)},$
holds
* •
for any $\displaystyle(f,\mu)\in\mathbf{F}\times\mathbf{M}$ whenever
$\displaystyle\mathrm{F}_{1}=\mathrm{F}_{2}$, or
* •
for any $\displaystyle\mu\in\mathbf{M}$ and any
$\displaystyle\mathrm{F}_{1},\mathrm{F}_{2}$ such that
$\displaystyle\mathrm{F}_{1}\geqslant\mathrm{F}_{2}$ whenever $\displaystyle
f\in\mathbf{F}$ is decreasing or increasing.
Observe that the equality (39) need not hold for any $\displaystyle
f\in\mathbf{F}$ and any $\displaystyle\mathrm{F}_{1},\mathrm{F}_{2}$ such that
$\displaystyle\mathrm{F}_{1}\geqslant\mathrm{F}_{2}$ even for symmetric
$\displaystyle\mu\in\mathbf{M}.$
###### Example 4.14.
Let $\displaystyle X,$ $\displaystyle f$ and $\displaystyle\mu$ be the same as
in Example 4.6. Then
$\displaystyle\displaystyle\sum_{i=1}^{2}\big{(}\mathrm{F}_{1}(f_{(i)},\mu(\\{f\geqslant
f_{(i)}\\}))-\mathrm{F}_{2}(f_{(i-1)},\mu(\\{f\geqslant f_{(i)}\\}))\big{)}$
$\displaystyle\displaystyle=\mathrm{F}_{1}(0.5,2)-\mathrm{F}_{2}(0,2)+\mathrm{F}_{1}(0.5,2)-\mathrm{F}_{2}(0.5,2),$
$\displaystyle\displaystyle\sum_{i=1}^{2}\big{(}\mathrm{F}_{1}(f_{(i)},\mu(B_{(i)}))-\mathrm{F}_{2}(f_{(i-1)},\mu(B_{(i)}))\big{)}$
$\displaystyle\displaystyle=\mathrm{F}_{1}(0.5,2)-\mathrm{F}_{2}(0,2)+\mathrm{F}_{1}(0.5,1)-\mathrm{F}_{2}(0.5,1).$
The equality in (39) does not hold, e.g., if
$\displaystyle\mathrm{F}_{1}=\mathrm{F}$ and
$\displaystyle\mathrm{F}_{2}=c\mathrm{F}$ for any $\displaystyle c\in(0,1)$
and any $\displaystyle\mathrm{F}\colon[0,\infty)^{2}\to[0,\infty)$ such that
$\displaystyle\mathrm{F}(0.5,2)\neq\mathrm{F}(0.5,1)$.
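To make Example 4.14 concrete, one can plug in an assumed choice $\displaystyle\mathrm{F}(x,y)=xy$ with $\displaystyle c=0.5$ (any $\displaystyle\mathrm{F}$ with $\displaystyle\mathrm{F}(0.5,2)\neq\mathrm{F}(0.5,1)$ works) and evaluate the two displayed sums; they differ, so (39) fails here.

```python
# Assumed concrete instance of Example 4.14: F1 = F, F2 = c*F with
# F(x, y) = x * y and c = 0.5; the text only requires F(0.5, 2) != F(0.5, 1).
c = 0.5

def F1(x, y):
    return x * y

def F2(x, y):
    return c * F1(x, y)

# the two sums displayed in Example 4.14
level_sum = F1(0.5, 2) - F2(0, 2) + F1(0.5, 2) - F2(0.5, 2)  # t-level form, (39)
chain_sum = F1(0.5, 2) - F2(0, 2) + F1(0.5, 1) - F2(0.5, 1)  # B_(i) form

print(level_sum, chain_sum)  # 1.5 vs 1.25
```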
### Connection with $\displaystyle\mathfrak{IE}_{\mu}^{\circ,\mathsf{I}}$
Let
$\displaystyle\circ\colon[0,\infty)\times(-\infty,\infty)\to(-\infty,\infty).$
Define the following operator
$\displaystyle\displaystyle\mathfrak{IE}_{\mu}^{\circ,\mathsf{I}}(f)=\sum_{\emptyset\neq
D\subseteq X}\mathsf{I}(f,D)\circ\textsf{Mob}_{\mu}(D)$ (40)
for any $\displaystyle(f,\mu)\in\mathbf{F}\times\mathbf{M},$ where
$\displaystyle\mathsf{I}\colon\mathbf{F}\times\Sigma_{0}\to[0,\infty)$ is an
extended interaction operator satisfying the conditions:
* $\displaystyle(I1)$
$\displaystyle\mathsf{I}(f,\\{i\\})=f(i)$ for any $\displaystyle
f\in\mathbf{F}$ and any $\displaystyle i\in[n]$;
* $\displaystyle(I2)$
$\displaystyle\mathsf{I}(f,D)\leqslant\mathsf{I}(g,D)$ for any $\displaystyle
D\in\Sigma_{0}$ and any $\displaystyle f,g\in\mathbf{F}$ such that
$\displaystyle f(x)\leqslant g(x)$ for all $\displaystyle x\in D$;
* $\displaystyle(I3)$
$\displaystyle\mathsf{I}(f,D)\geqslant\mathsf{I}(f,E)$ for any $\displaystyle
f\in\mathbf{F}$ and any $\displaystyle D,E\in\Sigma_{0}$ with $\displaystyle
D\subseteq E.$
Taking $\displaystyle\circ=\cdot$ and letting $\displaystyle\mathsf{I}$ be an
interaction operator ([17, Definition 5]) in (40), we obtain the discrete
inclusion-exclusion integral ([17, Theorem 3]). To establish a connection between
$\displaystyle\mathfrak{IE}^{\circ,\mathsf{I}}_{\mu}$ and the Choquet-Sugeno-like
operator of the form (14), we first explain the relationship between
extended interaction operators and conditional aggregation operators.
###### Example 4.15.
* (i)
The mapping $\displaystyle\mathsf{A}(f|D)=(\textstyle{\sum_{i\in
D}f(i)^{p}})^{1/p}$ with $\displaystyle p\in[1,\infty)$ is a conditional
aggregation operator which is not an extended interaction operator, since it
violates $\displaystyle(I3).$
* (ii)
The mapping $\displaystyle\mathsf{A}(f|D)=1\wedge\textstyle{\sum_{i\in
D}f(i)}$ based on the Łukasiewicz t-conorm is a conditional aggregation
operator violating the properties $\displaystyle(I1)$ and
$\displaystyle(I3).$
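Both violations can be checked directly. The sketch below uses an assumed exponent $\displaystyle p=2$ and a small illustrative $\displaystyle f$ on a two-point set; the function names are mine, not the paper's.

```python
# Checks for Example 4.15 with assumed p = 2 and an illustrative f on {1, 2}.
p = 2

def A_pnorm(f, D):
    # (i) the p-norm style conditional aggregation operator
    return sum(f[i] ** p for i in D) ** (1 / p)

def A_luk(f, D):
    # (ii) the Lukasiewicz t-conorm based conditional aggregation operator
    return min(1, sum(f[i] for i in D))

f = {1: 1.5, 2: 0.8}

# (i) violates (I3): (I3) would force I(f, D) >= I(f, E) for D a subset of E,
# but the p-norm grows as the conditioning set grows.
assert A_pnorm(f, {1}) < A_pnorm(f, {1, 2})

# (ii) violates (I1): (I1) would force I(f, {i}) = f(i), but the cap at 1 breaks it.
assert A_luk(f, {1}) == 1 and f[1] != 1
```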
These examples, together with the next proposition, show that the concept of conditional
aggregation is more general than that of extended interaction operators. For
that purpose we introduce the following notion: a conditional aggregation
operator $\displaystyle\mathsf{A}(\cdot|\cdot)$ is called conjunctive if
$\displaystyle\mathsf{A}(f|D)\leqslant\mathsf{A}^{\inf}(f|D)$ for any
$\displaystyle D\in\Sigma_{0}$ and any $\displaystyle f\in\mathbf{F}.$
###### Proposition 4.16.
Let $\displaystyle D\in\Sigma_{0}.$ Then
$\displaystyle\mathsf{I}(f,D)=\mathsf{A}(f|D)$ for any $\displaystyle
f\in\mathbf{F},$ where $\displaystyle\mathsf{A}$ is a conjunctive conditional
aggregation operator.
Proof. In order to prove that an extended interaction operator
$\displaystyle\mathsf{I}$ is a conditional aggregation operator, only
condition $\displaystyle(C2)$ has to be verified. Let $\displaystyle
D\in\Sigma_{0}.$ Clearly, $\displaystyle D$ can be written as a finite union
of singletons, let us say $\displaystyle D=\textstyle{\bigcup_{j\in
D}\\{j\\}}.$ Then for any $\displaystyle f\in\mathbf{F}$ from
$\displaystyle(I3)$ and $\displaystyle(I1)$ we have
$\displaystyle\mathsf{I}(\mathbbm{1}_{D^{c}},D)\leqslant\mathsf{I}(\mathbbm{1}_{D^{c}},\\{j\\})=\mathbbm{1}_{D^{c}}(j)=0$
for each $\displaystyle j\in D.$ This yields that $\displaystyle\mathsf{I}$
satisfies $\displaystyle(C2).$ To prove conjunctivity, for $\displaystyle
D\in\Sigma_{0}$ choose $\displaystyle j\in D$ such that $\displaystyle
f(j)=\textstyle{\min_{i\in D}f(i)}.$ Then
$\displaystyle\mathsf{I}(f,D)\leqslant\mathsf{I}(f,\\{j\\})=f(j)=\mathsf{A}^{\inf}(f|D).$
This completes the proof. ∎
Using Proposition 4.16, the connection between
$\displaystyle\mathfrak{IE}^{\circ,\mathsf{I}}_{\mu}$ and (14) is immediate.
###### Theorem 4.17.
Let
$\displaystyle\circ\colon[0,\infty)\times(-\infty,\infty)\to(-\infty,\infty)$
and $\displaystyle{\mathscr{A}}$ be a FCA consisting of all conjunctive
conditional aggregation operators. Then the operator
$\displaystyle\mathfrak{IE}^{\circ,\mathsf{I}}_{\mu}(f)$ with
$\displaystyle\mathsf{I}\in{\mathscr{A}}$ coincides with the Choquet-Sugeno-
like operator (14) for any
$\displaystyle(f,\mu)\in\mathbf{F}\times\mathbf{M}.$
## 5 Selected properties of the Choquet-Sugeno-like operator
In this section several properties of the Choquet-Sugeno-like operator will be
examined with respect to the function $\displaystyle\mathrm{L}.$ Let us
summarize the functions $\displaystyle\mathrm{L}=\mathrm{L}(x,y,z,w)$ that
have been used in the paper:
$\displaystyle 1.\quad\mathrm{L}_{1}(x,y,z,w)=|x-y|^{p}$ (Example 3.9);
$\displaystyle 2.\quad\mathrm{L}_{2}(x,y,z,w)=x\circ z$ (Examples 3.4–3.6 with $\displaystyle\mathrm{F}=\circ$);
$\displaystyle 3.\quad\mathrm{L}_{3}(x,y,z,w)=y\circ w$ (formula (14));
$\displaystyle 4.\quad\mathrm{L}_{4}(x,y,z,w)=x\circ(z-w)_{+}$ (formulas \eqref{n:c1b} and \eqref{n:c5a});
$\displaystyle 5.\quad\mathrm{L}_{5}(x,y,z,w)=(x-y)_{+}\circ z$ (formula (15));
$\displaystyle 6.\quad\mathrm{L}_{6}(x,y,z,w)=\delta(x,y)\circ z$ (formula \eqref{n:d1b});
$\displaystyle 7.\quad\mathrm{L}_{7}(x,y,z,w)=\mathrm{F}_{1}(x,z)-\mathrm{F}_{2}(y,z)$ (formula \eqref{n:cf3}),
where $\displaystyle\delta,\circ,\mathrm{F}_{1},\mathrm{F}_{2}$ have been
defined at the indicated places of the article. To simplify the notation, we
will write just $\displaystyle\mathrm{CS}^{\mathrm{L}}(f)$ instead of
$\displaystyle\mathrm{CS}^{\mathrm{L}}_{{\mathcal{H}},{\mathscr{A}},\widehat{{\mathscr{A}}}}(f,\mu,\widehat{\mu}).$
Proofs of the results presented in this section are immediate when
representing the Choquet-Sugeno-like operator (5) in the following way
$\displaystyle\displaystyle\mathrm{CS}^{\mathrm{L}}(f)$
$\displaystyle\displaystyle=\sup_{{\mathcal{D}}\in{\mathcal{H}}}\Big{\\{}\sum_{(C,D)\in{\mathcal{R}},\,C,D\neq\emptyset}\mathrm{L}\big{(}\mathsf{A}(f|C),\widehat{\mathsf{A}}(f|D),\mu(C),\widehat{\mu}(D)\big{)}+\sum_{(C,\emptyset)\in{\mathcal{R}},\,C\neq\emptyset}\mathrm{L}\big{(}\mathsf{A}(f|C),0,\mu(C),0\big{)}$
$\displaystyle\displaystyle\qquad\qquad+\sum_{(\emptyset,D)\in{\mathcal{R}},\,D\neq\emptyset}\mathrm{L}\big{(}0,\widehat{\mathsf{A}}(f|D),0,\widehat{\mu}(D)\big{)}+\sum_{(\emptyset,\emptyset)\in{\mathcal{R}}}\mathrm{L}(0,0,0,0)\Big{\\}},$
(41)
as
$\displaystyle\mathsf{A}(\cdot|\emptyset)=\widehat{\mathsf{A}}(\cdot|\emptyset)=0$
and $\displaystyle\widehat{\mu}(\emptyset)=0.$
###### Proposition 5.1.
If $\displaystyle\mathrm{L}(0,0,z,w)=0$ for all $\displaystyle z,w,$ then
$\displaystyle\mathrm{CS}^{\mathrm{L}}(0_{X})=0.$
Proof. The statement follows from (41) and the fact that
$\displaystyle\mathsf{A}(0_{X}|D)=0=\widehat{\mathsf{A}}(0_{X}|D)$ for any
$\displaystyle D\in\Sigma_{0},$ where
$\displaystyle\mathsf{A}\in{\mathscr{A}}$ and
$\displaystyle\widehat{\mathsf{A}}\in\widehat{{\mathscr{A}}}$ (see [2,
Proposition 3.3 (b)]). ∎
###### Example 5.2.
The assumption in Proposition 5.1 is satisfied for the following functions:
* •
$\displaystyle\mathrm{L}_{1}$;
* •
$\displaystyle\mathrm{L}_{2}-\mathrm{L}_{6}$ whenever $\displaystyle 0\circ
b=0$ for any $\displaystyle b$;
* •
$\displaystyle\mathrm{L}_{7}$ whenever
$\displaystyle\mathrm{F}_{1}(0,b)=\mathrm{F}_{2}(0,b).$
###### Proposition 5.3.
(Monotonicity) Assume that $\displaystyle\mathrm{L}(x,y,z,w)$ is nondecreasing
in $\displaystyle x$ and $\displaystyle y$ for any fixed $\displaystyle z,w.$
Then
$\displaystyle\mathrm{CS}^{\mathrm{L}}(f)\leqslant\mathrm{CS}^{\mathrm{L}}(g)$
whenever $\displaystyle f\leqslant g.$
Proof. Use the equality (41) and the property $\displaystyle(C1)$ (see
Definition 2.1). ∎
###### Example 5.4.
The assumption of Proposition 5.3 is satisfied for functions
$\displaystyle\mathrm{L}_{2}-\mathrm{L}_{4}$ whenever $\displaystyle x\mapsto
x\circ b$ is nondecreasing for any $\displaystyle b.$
We say that a conditional aggregation operator
$\displaystyle\mathsf{A}(\cdot|D)$ is homogeneous if
$\displaystyle\mathsf{A}(\alpha f|D)=\alpha\mathsf{A}(f|D)$ for any
$\displaystyle\alpha\in[0,\infty)$ and $\displaystyle f\in\mathbf{F}.$ A
family $\displaystyle{\mathscr{A}}$ of conditional aggregation operators is
said to be homogeneous if $\displaystyle\mathsf{A}(\cdot|D)\in{\mathscr{A}}$
is homogeneous for any $\displaystyle D\in\Sigma_{0}.$
###### Proposition 5.5.
(Homogeneity) Let $\displaystyle{\mathscr{A}}$ and
$\displaystyle\widehat{{\mathscr{A}}}$ be homogeneous FCAs. If
$\displaystyle\mathrm{L}(\alpha x,\alpha y,z,w)=\alpha\mathrm{L}(x,y,z,w)$ for
all $\displaystyle\alpha\in[0,\infty)$ and all $\displaystyle x,y,z,w,$ then
$\displaystyle f\mapsto\mathrm{CS}^{\mathrm{L}}(f)$ is a homogeneous operator.
###### Example 5.6.
The assumption of Proposition 5.5 is satisfied for:
* •
$\displaystyle\mathrm{L}_{1}$ for $\displaystyle p=1$;
* •
$\displaystyle\mathrm{L}_{2}-\mathrm{L}_{5}$ if $\displaystyle(\alpha a)\circ
b=\alpha(a\circ b)$ for any $\displaystyle\alpha,a,b$;
* •
$\displaystyle\mathrm{L}_{6}$ if $\displaystyle\delta(\alpha x,\alpha y)\circ
z=\alpha(\delta(x,y)\circ z)$ for any $\displaystyle\alpha,x,y,z$;
* •
$\displaystyle\mathrm{L}_{7}$ if $\displaystyle x\mapsto\mathrm{F}_{1}(x,b)$
and $\displaystyle x\mapsto\mathrm{F}_{2}(x,b)$ are homogeneous for any
$\displaystyle b.$
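As a concrete illustration of Proposition 5.5 on a familiar special case (the discrete Choquet integral, recovered from $\displaystyle\mathrm{L}_{2}$ with $\displaystyle\circ=\cdot$), positive homogeneity can be verified numerically. The capacity and data below are assumptions for the sake of the check.

```python
# Illustrative homogeneity check on a special case: the discrete Choquet
# integral sum_i (f_(i) - f_(i-1)) * mu(B_(i)); X, f and mu are assumed.
def choquet(f, mu):
    order = sorted(f, key=f.get)          # permutation with f_(1) <= ... <= f_(n)
    fs = [0.0] + [f[i] for i in order]    # prepend f_(0) = 0
    total = 0.0
    for i in range(1, len(order) + 1):
        B = frozenset(order[i - 1:])      # B_(i) = {(i), ..., (n)}
        total += (fs[i] - fs[i - 1]) * mu(B)
    return total

mu = lambda A: (len(A) / 3) ** 0.5        # an assumed monotone capacity, |X| = 3
f = {1: 0.4, 2: 1.1, 3: 0.7}

alpha = 2.5
scaled = {i: alpha * v for i, v in f.items()}
assert abs(choquet(scaled, mu) - alpha * choquet(f, mu)) < 1e-12
```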
We say that a conditional aggregation operator
$\displaystyle\mathsf{A}(\cdot|D)$ is subadditive if
$\displaystyle\mathsf{A}(f+g|D)\leqslant\mathsf{A}(f|D)+\mathsf{A}(g|D)$ for
any $\displaystyle f,g\in\mathbf{F}.$ A FCA $\displaystyle{\mathscr{A}}$ is
said to be subadditive if $\displaystyle\mathsf{A}(\cdot|D)\in{\mathscr{A}}$
is subadditive for any $\displaystyle D\in\Sigma_{0}.$
###### Proposition 5.7.
(Subadditivity) Suppose that $\displaystyle{\mathscr{A}}$ and
$\displaystyle\widehat{{\mathscr{A}}}$ are subadditive FCAs. If
$\displaystyle\mathrm{L}(x+a,y+b,z,w)\leqslant\mathrm{L}(x,y,z,w)+\mathrm{L}(a,b,z,w)$
for all $\displaystyle x,y,z,w,a,b,$ then the mapping $\displaystyle
f\mapsto\mathrm{CS}^{\mathrm{L}}(f)$ is subadditive.
###### Example 5.8.
The following functions $\displaystyle\mathrm{L}$ are subadditive in the
first and second coordinates:
* •
$\displaystyle\mathrm{L}_{1}$ with $\displaystyle p=1$;
* •
$\displaystyle\mathrm{L}_{2}-\mathrm{L}_{4}$ if $\displaystyle
x\mapsto x\circ b$ is subadditive for any $\displaystyle b$;
* •
$\displaystyle\mathrm{L}_{5}$ if $\displaystyle x\mapsto x\circ z$ is
nondecreasing and subadditive for any $\displaystyle z$;
* •
$\displaystyle\mathrm{L}_{6}$ if $\displaystyle\delta(x+a,y+b)\circ
z\leqslant\delta(x,y)\circ z+\delta(a,b)\circ z$ for any $\displaystyle
x,y,z,a,b.$
###### Proposition 5.9.
(Convexity) Let $\displaystyle f\mapsto\mathsf{A}(f|D)$ and $\displaystyle
f\mapsto\widehat{\mathsf{A}}(f|D)$ be convex for any $\displaystyle
D\in\Sigma_{0},$ where $\displaystyle\mathsf{A}\in{\mathscr{A}}$ and
$\displaystyle\widehat{\mathsf{A}}\in\widehat{{\mathscr{A}}}.$ If
$\displaystyle\mathrm{L}(\lambda x+(1-\lambda)a,\lambda
y+(1-\lambda)b,z,w)\leqslant\lambda\mathrm{L}(x,y,z,w)+(1-\lambda)\mathrm{L}(a,b,z,w)$
for all $\displaystyle\lambda\in(0,1)$ and all $\displaystyle x,y,z,w,a,b,$
then $\displaystyle f\mapsto\mathrm{CS}^{\mathrm{L}}(f)$ is convex.
###### Example 5.10.
The assumption in Proposition 5.9 is satisfied for
* •
$\displaystyle\mathrm{L}_{1}$ if $\displaystyle p\geqslant 1$;
* •
$\displaystyle\mathrm{L}_{2}-\mathrm{L}_{4}$ whenever $\displaystyle x\mapsto
x\circ b$ is convex for any $\displaystyle b$;
* •
$\displaystyle\mathrm{L}_{5}$ whenever $\displaystyle x\mapsto x\circ z$ is
nondecreasing and convex for any $\displaystyle z.$
We say that a FCA $\displaystyle{\mathscr{A}}$ is idempotent if
$\displaystyle\mathsf{A}(b\mathbbm{1}_{X}|D)=b$ for any $\displaystyle
b\in[0,\infty)$ and any $\displaystyle\mathsf{A}(\cdot|D)\in{\mathscr{A}}$
with $\displaystyle D\in\Sigma_{0}$ (see [2, Proposition 3.10]). Obviously,
$\displaystyle{\mathscr{A}}^{\inf}$ and $\displaystyle{\mathscr{A}}^{\sup}$
are idempotent FCAs.
###### Proposition 5.11.
(Idempotency) Let $\displaystyle{\mathscr{A}}$ and
$\displaystyle\widehat{{\mathscr{A}}}$ be idempotent FCAs. Then
$\displaystyle\mathrm{CS}^{\mathrm{L}}(b\mathbbm{1}_{X})=b$ for all
$\displaystyle b\in[0,\infty)$ if and only if
$\displaystyle\displaystyle\sup_{{\mathcal{D}}\in{\mathcal{H}}}\Big{\\{}$
$\displaystyle\displaystyle\sum_{(C,D)\in{\mathcal{R}},\,D\neq\emptyset}\mathrm{L}(b,b,\mu(C),\widehat{\mu}(D))+\sum_{(C,\emptyset)\in{\mathcal{R}},\,C\neq\emptyset}\mathrm{L}(b,0,\mu(C),0)$
$\displaystyle\displaystyle+\sum_{(\emptyset,D)\in{\mathcal{R}},\,D\neq\emptyset}\mathrm{L}(0,b,0,\widehat{\mu}(D))+\sum_{(\emptyset,\emptyset)\in{\mathcal{R}}}\mathrm{L}(0,0,0,0)\Big{\\}}=b$
(42)
for any $\displaystyle b\in[0,\infty).$
For special choices of $\displaystyle\mathrm{L}$ the condition (42) can be
simplified, as the following examples demonstrate.
* •
Consider $\displaystyle\mathrm{L}_{1}.$ If $\displaystyle{\mathscr{A}}$ and
$\displaystyle\widehat{{\mathscr{A}}}$ are idempotent FCAs, then the
$\displaystyle p$-variation of any constant function is equal to 0 (see
Example 3.9), so the $\displaystyle p$-variation is not idempotent, as
expected.
* •
Consider $\displaystyle\mathrm{L}_{2}$ such that $\displaystyle x\circ
0=0=0\circ z$ for any $\displaystyle x,z.$ Then (42) takes the form
$\displaystyle\displaystyle\sup_{{\mathcal{D}}\in{\mathcal{H}}}\sum_{(C,D)\in{\mathcal{R}},\,C\neq\emptyset}b\circ\mu(C)=b.$
(43)
If $\displaystyle{\mathcal{R}}=\\{(D,D)\colon D\in{\mathcal{D}}\\}$ for
$\displaystyle{\mathcal{D}}\in{\mathcal{H}}=\\{\\{D\\}\colon
D\in\Sigma_{0}\\}$ (see Example 3.4), then (43) can be rewritten as follows
$\displaystyle\textstyle{\sup_{C\in\Sigma_{0}}\\{b\circ\mu(C)\\}}=b$ for any
$\displaystyle b.$ For $\displaystyle\mathrm{L}_{3}$ we can apply a similar
approach.
* •
Consider $\displaystyle\mathrm{L}_{4},$
$\displaystyle{\mathcal{R}}={\mathcal{R}}^{+}$ and
$\displaystyle{\mathcal{H}}={\mathcal{H}}_{\text{chain}}$ (cf. section devoted
to $\displaystyle{\mathfrak{C}}^{\mu,\widehat{\mu}}_{\circ}$). Then the
condition (42) has the following form
$\displaystyle\displaystyle\sup_{{\mathcal{D}}\in{\mathcal{H}}_{\text{chain}}}\sum_{i=1}^{l}b\circ(\mu(D_{i})-\widehat{\mu}(D_{i+1}))=b$
(44)
for any $\displaystyle b.$ The condition (44) is satisfied if
$\displaystyle\circ=\cdot$ and
$\displaystyle\mu=\widehat{\mu}\in\mathbf{M}^{1}$ (cf. [18, Proposition 14]).
* •
Consider $\displaystyle\mathrm{L}_{5},$ $\displaystyle 0\circ z=0$ for any
$\displaystyle z$ and $\displaystyle{\mathcal{R}}=\\{(D,D^{c})\colon
D\in{\mathcal{D}}\\}$ for
$\displaystyle{\mathcal{D}}\in{\mathcal{H}}_{\text{one}}$ (cf. Example 3.8).
Then (42) reduces to the equality $\displaystyle b\circ\mu(X)=b$ for any
$\displaystyle b\in[0,\infty).$
* •
Consider $\displaystyle\mathrm{L}_{6},$ $\displaystyle 0\circ z=0$ for any
$\displaystyle z,$ $\displaystyle{\mathcal{H}}={\mathcal{H}}_{\text{chain}}$
and $\displaystyle{\mathcal{R}}={\mathcal{R}}^{-}$ (cf. section devoted to
$\displaystyle{\mathfrak{C}}^{\circ,\delta}_{\mu}$). Then the condition (42)
takes the form
$\displaystyle\displaystyle\sup_{D\in\Sigma_{0}}\\{\delta(b,0)\circ\mu(D)\\}=b$
(45)
for any $\displaystyle b.$ The condition (45) holds whenever the map
$\displaystyle x\mapsto a\circ x$ is nondecreasing for any $\displaystyle a$
and $\displaystyle\delta(a,0)\circ\mu(X)=a$ for any $\displaystyle a$ (cf. [8,
Theorem 3.31]).
* •
Consider $\displaystyle\mathrm{L}_{7},$
$\displaystyle{\mathcal{H}}={\mathcal{H}}_{\text{chain}}$ and
$\displaystyle{\mathcal{R}}={\mathcal{R}}^{-}$ (cf. section devoted to
$\displaystyle{\mathfrak{C}}^{(\mathrm{F}_{1},\mathrm{F}_{2})}_{\mu}$). Then
the formula (42) takes the form
$\displaystyle\displaystyle\sup_{{\mathcal{D}}\in{\mathcal{H}}_{\text{chain}}}\sum_{i=1}^{l}\bigl{(}\mathrm{F}_{1}(b,\mu(D_{i}))-\mathrm{F}_{2}(b,\mu(D_{i}))\bigr{)}=b$
for any $\displaystyle b\in[0,\infty).$
## 6 Conclusion
In this paper we have indicated a way of looking at different operators with
respect to a nonadditive measure from a new (in some sense unified)
perspective. We have defined an operator generalizing the Sugeno-like operator
(Example 3.4), generalized Lebesgue integral (Example 3.5),
$\displaystyle\mathrm{F}$-decomposition integral with respect to a finite
partition decomposition system (Example 3.6), Lovász extension (Example 3.7)
and generalized $\displaystyle p$-variation (Example 3.9). Moreover, we have
given relationships with several functionals generalizing the discrete Choquet
integral expressions
$\displaystyle\displaystyle(C)\int\limits_{X}f\,{\mathrm{d}}\mu$
$\displaystyle\displaystyle=\sum_{i=1}^{n}f_{(i)}(\mu(B_{(i)})-\mu(B_{(i+1)}))\qquad$
$\displaystyle\displaystyle\text{(cf.
$\displaystyle{\mathfrak{C}}_{\circ}^{\mu,\widehat{\mu}}(f)$)},$
$\displaystyle\displaystyle=\sum_{i=1}^{n}(f_{(i)}-f_{(i-1)})\mu(B_{(i)})\quad$
$\displaystyle\displaystyle\text{(cf.
$\displaystyle{\mathfrak{C}}_{\mu}^{\circ,\delta}(f)$)},$
$\displaystyle\displaystyle=\sum_{i=1}^{n}\bigl{(}f_{(i)}\mu(B_{(i)})-f_{(i-1)}\mu(B_{(i)})\bigr{)}\qquad\qquad$
$\displaystyle\displaystyle\text{(cf.
$\displaystyle{\mathfrak{C}}_{\mu}^{(\mathrm{F}_{1},\mathrm{F}_{2})}(f)$)}.$
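As a sanity check on the three expressions above, the sketch below evaluates them on an assumed three-point example with a symmetric capacity; all three agree, as they must.

```python
# Illustrative check that the three displayed discrete Choquet integral
# expressions coincide; X, f and the capacity mu are assumed examples.
X = [1, 2, 3]
f = {1: 0.2, 2: 0.9, 3: 0.5}
n = len(X)

def mu(A):
    # a symmetric capacity: depends only on |A|
    return (len(A) / n) ** 2

order = sorted(X, key=lambda i: f[i])             # f_(1) <= ... <= f_(n)
fs = [0.0] + [f[i] for i in order]                # f_(0) = 0
B = [set(order[i:]) for i in range(n)] + [set()]  # B[i-1] is B_(i); B[n] = B_(n+1) = empty

expr1 = sum(fs[i] * (mu(B[i - 1]) - mu(B[i])) for i in range(1, n + 1))
expr2 = sum((fs[i] - fs[i - 1]) * mu(B[i - 1]) for i in range(1, n + 1))
expr3 = sum(fs[i] * mu(B[i - 1]) - fs[i - 1] * mu(B[i - 1]) for i in range(1, n + 1))

assert abs(expr1 - expr2) < 1e-12 and abs(expr2 - expr3) < 1e-12
print(expr1)
```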
All of this has been possible thanks to the conditional aggregation operator
defined in [2] and the dependence relation between conditional sets proposed
in the present paper. The relation can generate different aggregation styles
in decision making and can be used in as diverse areas as graph theory, neural
networks and fuzzy theory. Thus, describing new relations between conditional
sets may produce new operators interesting both from a theoretical point of
view as well as for applications.
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or
personal relationships that could have appeared to influence the work reported
in this paper.
## References
* [1] G. Beliakov, H. Bustince, T. Calvo, A Practical Guide to Averaging Functions, Studies in Fuzziness and Soft Computing, Vol. 329, Springer, 2016.
* [2] M. Boczek, L. Halčinová, O. Hutník, M. Kaluszka, Novel survival functions based on conditional aggregation operators, Information Sciences (2021) https://doi.org/10.1016/j.ins.2020.12.049
* [3] M. Boczek, A. Hovana, O. Hutník, M. Kaluszka, New monotone measure-based integrals inspired by scientific impact problem, European Journal of Operational Research 290 (2021) 346–357.
* [4] J. Borzová-Molnárová, L. Halčinová, O. Hutník, The smallest semicopula-based universal integrals I: Properties and characterizations, Fuzzy Sets and Systems 271 (2015) 1–17.
* [5] J. Borzová-Molnárová, L. Halčinová, O. Hutník, The smallest semicopula-based universal integrals II: Convergence theorems, Fuzzy Sets and Systems 271 (2015) 18–30.
* [6] J. Borzová-Molnárová, L. Halčinová, O. Hutník, The smallest semicopula-based universal integrals III: Topology determined by the integral, Fuzzy Sets and Systems 304 (2016) 20–34.
* [7] J. Borzová, L. Halčinová, O. Hutník, The smallest semicopula-based universal integrals: Remarks and improvements, Fuzzy Sets and Systems 393 (2020) 29–52.
* [8] H. Bustince, R. Mesiar, J. Fernandez, M. Galar, D. Paternain, A. Altalhi, G.P. Dimuro, B. Bedregal, Z. Takáč, d-Choquet integrals: Choquet integrals based on dissimilarities, Fuzzy Sets and Systems https://doi.org/10.1016/j.fss.2020.03.019
* [9] T. Calvo, A. Kolesárová, M. Komorníková, R. Mesiar, Aggregation operators: properties, classes and construction methods. In: Calvo, T., Mayor, G., Mesiar, R. (eds.): Aggregation operators. New trends and applications. Physica-Verlag, Heidelberg (2002), pp. 3–104.
* [10] M.E.G.V. Cattaneo, Statistical Decisions Based Directly on the Likelihood Function, Diss. ETH No.17122, 2007.
* [11] G. Choquet, Theory of capacities, Annales de l’institut Fourier 5 (1954) 131–295.
* [12] G.P. Dimuro, J. Fernández, B. Bedregal, R. Mesiar, J.A. Sanz, G. Lucca, H. Bustince, The state-of-art of the generalizations of the Choquet integral: From aggregation and pre-aggregation to ordered directionally monotone functions, Information Fusion 57 (2020) 27–43.
* [13] D. Dubois, H. Prade, A. Rico, B. Teheux, Generalized Qualitative Sugeno Integrals, Information Sciences 415-416 (2017) 429–445.
* [14] J. Fernandez, H. Bustince, L. Horanská, R. Mesiar, A. Stupňanová, A generalization of the Choquet Integral Defined in Terms of the Möbius Transform, IEEE Transactions on Fuzzy Systems 28 (2020) 2313–2319.
* [15] M. Grabisch, Set Functions, Games and Capacities in Decision Making, Springer, 2016.
* [16] W. Hoeffding, On the distribution of the expected values of the order statistics, The Annals of Mathematical Statistics (1953) 93–100.
* [17] A. Honda, Y. Okazaki, Theory of inclusion-exclusion integral, Information Sciences 376 (2017) 136–147.
* [18] L. Horanská, A. Šipošová, A generalization of the discrete Choquet and Sugeno integrals based on a fusion function, Information Sciences 451 (2018) 83–99.
* [19] L. Horanská, On compatibility of two approaches to generalization of the Lovász extension formula, Information Processing and Management of Uncertainty in Knowledge-Based Systems 1238 (2020) 426–434.
* [20] L. Horanská, H. Bustince, J. Fernandez, R. Mesiar, Generalized decomposition integral, Information Sciences 538 (2020) 415–427.
* [21] L.S. Jin, M. Kalina, R. Mesiar, S. Borkotokey, Discrete Choquet integrals for Riemann integrable inputs with some applications, IEEE Transactions on Fuzzy Systems 26 (2018) 3164–3169.
* [22] M. Kaluszka, A. Okolewski, M. Boczek, On Chebyshev type inequalities for generalized Sugeno integrals, Fuzzy Sets and Systems 244 (2014) 51–62.
* [23] E.P. Klement, R. Mesiar, E. Pap, A universal integral as common frame for Choquet and Sugeno integral, IEEE Transactions on Fuzzy Systems 18 (2010) 178–187.
* [24] A. Kolesárová, A. Stupňanová, J. Beganová, Aggregation-based extensions of fuzzy measures, Fuzzy Sets and Systems 194 (2012) 1–14.
* [25] L. Lovász, Submodular functions and convexity, in: Mathematical Programming: The State of the Art. Springer, Berlin (1983) 235–257.
* [26] G. Lucca, J.A. Sanz, G.P. Dimuro, B. Bedregal, M.J. Asiain, M. Elkano, H. Bustince, CC-integrals: Choquet-like copula-based aggregation functions and its application in fuzzy rule-based classification systems, Knowledge-Based Systems 119 (2017) 32–43.
* [27] G. Lucca, J.A. Sanz, G.P. Dimuro, B. Bedregal, H. Bustince, R. Mesiar, $\displaystyle C_{F}$-integrals: A new family of pre-aggregation functions with application to fuzzy rule-based classification systems, Information Sciences 435 (2018) 94–110.
# Bayesian Multivariate Sparse Functional Principal Components Analysis with Application to Longitudinal Microbiome Multi-Omics Data
Lingjing Jiang, Chris …, Jane J. …, Austin D. …, Rob …, Wesley K. Thompson
Herbert Wertheim School of Public Health and Human Longevity Science, University of California San Diego; Julia Computing; Department of Pediatrics, University of California San Diego; Center for Microbiome Innovation, University of California San Diego; Department of Computer Science and Engineering, University of California San Diego; Department of Bioengineering, University of California San Diego
###### Abstract
Microbiome researchers often need to model the temporal dynamics of multiple
complex, nonlinear outcome trajectories simultaneously. This motivates our
development of multivariate Sparse Functional Principal Components Analysis
(mSFPCA), extending existing SFPCA methods to simultaneously characterize
multiple temporal trajectories and their inter-relationships. As with existing
SFPCA methods, the mSFPCA algorithm characterizes each trajectory as a smooth
mean plus a weighted combination of the smooth major modes of variation about
the mean, where the weights are given by the component scores for each
subject. Unlike existing SFPCA methods, the mSFPCA algorithm allows estimation
of multiple trajectories simultaneously, such that the component scores, which
are constrained to be independent within a particular outcome for
identifiability, may be arbitrarily correlated with component scores for other
outcomes. A Cholesky decomposition is used to estimate the component score
covariance matrix efficiently and guarantee positive semi-definiteness given
these constraints. Mutual information is used to assess the strength of
marginal and conditional temporal associations across outcome trajectories.
Importantly, we implement mSFPCA as a Bayesian algorithm using R and stan,
enabling easy use of packages such as PSIS-LOO for model selection and
graphical posterior predictive checks to assess the validity of mSFPCA models.
Although we focus on application of mSFPCA to microbiome data in this paper,
the mSFPCA model is of general utility and can be used in a wide range of
real-world applications.
###### keywords:
Bayesian, Functional Data Analysis, Longitudinal, Microbiome, Multi-omics
## 1 Introduction
Numerous disorders, including heritable immune-mediated diseases such as
inflammatory bowel disease (IBD) and asthma, neurological conditions including
autism, and genetically driven diseases such as cancer, have been linked to
dysregulation of human microbiota (Holleran et al., 2018; Lloyd-Price et al.,
2019; Frati et al., 2019; Sharon et al., 2019; Ballen et al., 2016). However,
the complex influences of microbiota on human health are not yet functionally
understood. To understand the links between the human microbiome and disease,
it is necessary to determine which microbe genes are being expressed and the
timing of their expression (Sberro et al., 2019). Thus, in addition to
obtaining microbiome data using 16S ribosomal RNA gene sequencing or whole
genome shotgun sequencing (Kuczynski et al., 2010; Ranjan et al., 2016; Gill
et al., 2006), an increasing number of studies are also collecting
transcriptomics data to understand microbial gene expression, proteomics data
to study expressed proteins, and metabolomics data to define the functional
status of host-microbial relationships (iHMP Consortium, 2014; Lloyd-Price et
al., 2019; Bouslimani et al., 2019). This complex combination of data types,
called microbiome multi-omics, is essential for understanding the links
between microbial communities and disease and may enable translation of
microbiome research into effective treatments.
An increasing number of microbiome multi-omics studies are longitudinal, aimed
at simultaneously characterizing microbiome and host temporal changes to
provide a more comprehensive picture of dynamics during healthy and diseased
states (iHMP Consortium, 2014; Lloyd-Price et al., 2019; Vatanen et al., 2018;
Stewart et al., 2018). Despite these breakthroughs in microbiome study designs
and data collection, few statistical methods are available to analyze these
complex longitudinal omics data. Recently, several new methods based on
network analysis were developed for multi-omics integration of microbiome data
in cross-sectional studies (Jiang et al., 2019; Morton et al., 2019), however,
analytical methods for longitudinal microbiome multi-omics data are still in
their infancy. The challenges include irregular timing and frequency across
subjects, unmatched time points between different data types, non-linear
temporal patterns, missing data, and high individual variability (Bodein et
al., 2019).
The statistical framework of functional data analysis (FDA) was introduced by
Ramsay and Silverman (Ramsay and Silverman, 1997), wherein the basic unit of
information is the entire function, such as a curve or an image. Functional
principal component analysis (FPCA) has been a widely used tool in FDA. The
fundamental aim of FPCA is to reduce dimensionality by capturing the principal
modes of smooth variation. FPCA summarizes subject-specific deviations from
the mean curve via the coordinates (principal component scores) of the basis
spanned by the principal components (Di et al., 2009). Existing FPCA methods
include smoothed FPCA approaches based on roughness penalties (Rice and
Silverman, 1991), extensions to sparsely sampled functional data (James,
Hastie and Sugar, 2000; Yao, Müller and Wang, 2005; Peng and Paul, 2009; Di,
Crainiceanu and Jank, 2014; Kidziński and Hastie, 2018), and the asymptotic
properties of FPCA (Hall and Hosseini-Nasab, 2006; Li et al., 2010). Most FPCA
methods development has focused on univariate functional data. Chiou, Chen and
Yang (2014) proposed a multivariate FPCA method to simultaneously model
multiple temporal outcomes and infer the component dependencies through
pairwise cross-covariance functions. However, this method is limited to
functional data as classically conceived, where the curves are observed
longitudinally over densely and consistently sampled time points.
To meet the need for modeling irregularly and sparsely sampled, non-linear
multivariate microbiome multi-omics trajectories, we developed multivariate
sparse functional principal components analysis (mSFPCA). The major novelty of
our approach is that it focuses on a set of functions which may covary in
complex ways. Smoothing is accomplished by retaining a low-dimensional set of
PC functions, as in James, Hastie and Sugar (2000). The PC scores across
outcomes are modeled jointly via a constrained covariance matrix, efficiently
estimated by Cholesky decomposition. Our proposed method allows for
simultaneously characterizing multiple temporal measurements, such as
microbiome, metabolome, inflammatory markers, and self-report measures, and to
infer the temporal associations among these measures both marginally and
conditionally, based on estimation of marginal and partial mutual information.
The model is implemented using a Bayesian formulation, instantiated with
Hamiltonian Markov Chain Monte Carlo (MCMC) methods in stan to sample from the
posterior distribution of the model parameters. Our Bayesian implementation
enables the usage of PSIS-LOO for model selection and graphical posterior
predictive checks to assess the validity of mSFPCA models. While we focus on
application of mSFPCA to microbiome data in this paper, the mSFPCA model is of
general utility and can be used in many applications.
The remainder of the paper is organized as follows. Section 2 reviews sparse
functional principal component analysis (SFPCA), and introduces multivariate
SFPCA, our statistical framework for longitudinal microbiome multi-omics data.
Section 3 describes extensive simulation studies to evaluate the performance
of mSFPCA in realistic settings. Section 4 describes the application of our
methodology to a challenging longitudinal microbiome multi-omics data on type
2 diabetes. Section 5 presents our conclusion. To ensure reproducibility of
our results, accompanying software, simulations and analysis results are
posted at https://github.com/knightlab-analyses/mfpca-analyses.
## 2 Methodology
### 2.1 Sparse functional principal components analysis
The classical assumption of functional data analysis is that each trajectory
is sampled over a dense grid of time points common to all individuals (Ramsay
and Silverman, 2007). However, in practice, trajectories are often measured at
an irregular and sparse set of time points that can differ widely across
individuals. To address this issue, James, Hastie and Sugar (2000) proposed
sparse functional principal components analysis (SFPCA) using a reduced rank
mixed-effects framework. Let $Y_{i}(t)$ be the measurement at time $t$ for the
$i$th individual, $\mu(t)$ the overall mean function, $f_{j}$ the $j$th
principal component function, and ${f}=(f_{1},f_{2},\ldots,f_{k})^{T}$, where $k$
is the number of principal components. Then the James, Hastie and Sugar (2000)
SFPCA model is given by
$Y_{i}(t)=\mu(t)+\sum_{j=1}^{k}f_{j}(t)\alpha_{ij}+\epsilon_{i}(t),\quad
i=1,...,N$ (1)
subject to the orthogonality constraint $\int f_{j}f_{l}=\delta_{jl}$, the
Kronecker $\delta$. The vector
$\alpha_{i}=(\alpha_{i1},\ldots,\alpha_{ik})^{T}$ is the component weights for
the $i$th individual and $\epsilon_{i}(t)$ is a normally-distributed residual,
independent across subjects and across times within subject. The functions
$\mu$ and ${f}$ are approximated using cubic splines to allow a smooth but
flexible fit. Let $b(t)$ be a cubic spline basis with dimension $q>k$. The
spline basis is orthonormalized so that $\int b_{j}b_{l}=\delta_{jl}$. Let
$\Theta$ and $\theta_{\mu}$ be, respectively, a $q\times k$ matrix and a
$q$-dimensional vector of real-valued coefficients. For each individual $i$,
denote their measurement times by $t=(t_{i1},t_{i2},\ldots,t_{in_{i}})^{T}$, and
let $Y_{i}=(Y_{i}(t_{i1}),\ldots,Y_{i}(t_{in_{i}}))^{T}$ be the corresponding
real-valued observations. Then $B_{i}=(b(t_{i1}),\ldots,b(t_{in_{i}}))^{T}$ is the
$n_{i}\times q$ spline basis matrix for the $i$th individual. The reduced rank
model can then be written as
$\displaystyle
Y_{i}=B_{i}\theta_{\mu}+B_{i}\Theta\alpha_{i}+\epsilon_{i},\quad i=1,...,N,$
(2) $\displaystyle\Theta^{T}\Theta=I,\quad\alpha_{i}\sim
N(0,D),\quad\epsilon_{i}\sim N(0,\sigma_{\epsilon}^{2}I_{n_{i}}),$
where the covariance matrix $D$ is restricted to be diagonal and $I_{n_{i}}$
is the $n_{i}\times n_{i}$ identity matrix. Various fitting approaches, such
as the EM algorithm, kernel smoothing and Newton-Raphson algorithm, have been
proposed to estimate parameters of the SFPCA model (James, Hastie and Sugar,
2000; Yao, Müller and Wang, 2005; Peng and Paul, 2009). These approaches then
use model selection techniques, such as cross-validation, Akaike information
criterion (AIC) and leave-one-curve-out cross-validation, to select the
dimension of the basis and the number of principal components. However, due to
their reliance on assumptions such as normally-distributed component scores
and residuals, these models need to be carefully examined when applied to real
data (Kidziński and Hastie, 2018).
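The orthonormality requirement $\int b_{j}b_{l}=\delta_{jl}$ on the spline basis can be imposed numerically by Gram-Schmidt (QR) orthonormalization of any raw basis evaluated on a fine grid. A minimal numpy sketch, with plain polynomials standing in for the cubic splines purely for illustration (the function name and dimensions are ours, not from the paper):

```python
import numpy as np

def orthonormal_basis(grid, q):
    """Build a raw basis on `grid` and orthonormalize it (Gram-Schmidt
    via QR) so that, with quadrature weight dt, B^T B * dt ~= I."""
    # Raw basis: plain polynomials for illustration; the paper uses
    # cubic splines, but the orthonormalization step is identical.
    raw = np.vander(grid, q, increasing=True)   # (n, q)
    dt = grid[1] - grid[0]                      # uniform grid spacing
    Q, _ = np.linalg.qr(raw * np.sqrt(dt))      # orthonormal columns
    return Q / np.sqrt(dt)                      # so B^T B * dt = I

grid = np.linspace(0.0, 1.0, 200)
B = orthonormal_basis(grid, q=6)
dt = grid[1] - grid[0]
gram = B.T @ B * dt                             # discrete version of the
print(np.allclose(gram, np.eye(6), atol=1e-8))  # inner products; True
```

The Gram matrix approximates $\int b_{j}b_{l}$ by a Riemann sum, so the identity holds up to quadrature on the chosen grid.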
### 2.2 Bayesian SFPCA
Jiang et al. (2020) proposed an SFPCA model in a Bayesian framework to allow
for flexible prior specification and implementation of model selection and
assessment methods. This Bayesian implementation used the Hamiltonian MCMC
sampling algorithm in Stan (Carpenter et al., 2017). The real-valued
observations $Y_{i}(t)$ are first standardized to have mean zero and standard
deviation one. The prior distributions for parameters in Eq. (2) were chosen
as follows:
$\displaystyle\theta_{\mu}$ $\displaystyle\sim N_{q}(0,I_{q})$
$\displaystyle\alpha_{i}$ $\displaystyle\sim N_{k}(0,I_{k})$
$\displaystyle\Theta_{j}$ $\displaystyle\sim N_{q}(0,I_{q}),j=1,\ldots,k$
$\displaystyle\epsilon_{i}$ $\displaystyle\sim
N_{v_{i}}(0,\sigma_{\epsilon}^{2}I_{v_{i}})$ $\displaystyle\sigma_{\epsilon}$
$\displaystyle\sim Cauchy(0,1),$
where $\Theta_{j}$ is the $j$th column of the loading matrix $\Theta$, and
$v_{i}$ is the total number of visits for the $i$th subject. This Bayesian
implementation enables use of leave-one-out cross-validation with
Pareto-smoothed importance sampling (PSIS-LOO) (Vehtari, Gelman and Gabry, 2017) to
perform model selection on the number of principal components $k$ and the
number of basis functions $q$. Moreover, model fit can be assessed via
diagnostic plots from PSIS-LOO as well as the graphical posterior predictive
checks obtained from simulating posterior predictive data (Gelman, Meng and
Stern, 1996; Gabry et al., 2019). This Bayesian implementation thus offers a
flexible and comprehensive solution to real-data SFPCA applications.
### 2.3 Multivariate SFPCA
Here, we extend our previous Bayesian SFPCA to simultaneously model multiple
trajectories, and to quantify their marginal and conditional temporal
associations. Let $\boldsymbol{Y_{i}}(t)$ denote the $P$-dimensional observed
response at time $t$ for subject $i$, which can be modeled by the multivariate
Functional PCA (mFPCA) model as
$\boldsymbol{Y_{i}}(t)=\boldsymbol{\mu_{i}}(t)+\boldsymbol{f}(t)^{T}\boldsymbol{\alpha_{i}}+\boldsymbol{\epsilon_{i}}(t),i=1,...,N,$
(3)
where $\boldsymbol{\mu_{i}}=(\mu_{i,1}(t),...,\mu_{i,P}(t))^{T}$ is the
overall mean response of $P$ trajectories for subject $i$,
$\boldsymbol{f}(t)^{T}=diag(f_{1}(t)^{T},...,f_{P}(t)^{T})$, with $f_{p}(t)$
the FPC functions corresponding to the $p$th trajectory at time $t$,
$\boldsymbol{\alpha_{i}}=(\alpha_{i,1},...,\alpha_{i,P})$ is the vector of FPC
scores for subject $i$, and $\boldsymbol{\epsilon_{i}}$ is the corresponding
residuals.
In order to fit this model when the data are sampled at only a finite number
of time points, we chose to fit $\boldsymbol{\mu_{i}}$ and $\boldsymbol{f}(t)$
using a basis of spline functions $\boldsymbol{B}$. Let $K_{p}$ be the number
of FPCs, $Q_{p}$ be the corresponding number of basis functions, $V_{ip}$ be
the total number of assessments for $i$th subject in the $p$th temporal
measurement, $B_{ip}$ be the transpose of the cubic spline basis, and
$\Theta_{p}$ be the corresponding FPC loadings. Then the total number of
principal components across $p$ measurements are $K=\sum_{p=1}^{P}K_{p}$, the
total number of basis functions are $Q=\sum_{p=1}^{P}Q_{p}$, and the total
number of assessments for subject $i$ is $V_{i}=\sum_{p=1}^{P}V_{ip}$. The
model for $\boldsymbol{Y_{i}}(t)$ can be written into the multivariate Sparse
Functional PCA (mSFPCA) model as
$\boldsymbol{Y_{i}}=\boldsymbol{B_{i}\theta_{\mu}}+\boldsymbol{B_{i}\Theta\alpha_{i}}+\boldsymbol{\epsilon_{i}},i=1,...,N,$
(4)
where $\boldsymbol{Y_{i}}$ is a $P$-dimensional observed response, residuals
$\boldsymbol{\epsilon}_{i}\sim
N_{V_{i}}(0,\sigma_{\boldsymbol{\epsilon}}^{2}I_{V_{i}})$,
$\boldsymbol{B_{i}}$ is a $V_{i}\times Q$ matrix with
$\boldsymbol{B_{i}}=\begin{bmatrix}B_{i1}&0^{T}&\cdots&0^{T}\\\
0&B_{i2}&\cdots&0^{T}\\\ \vdots&\vdots&\ddots&\vdots\\\
0&0&\cdots&B_{iP}\end{bmatrix},$
where $B_{ip}$ is the $V_{ip}\times Q_{p}$ matrix of spline bases evaluated at
assessment times ${\bf t}_{ip}=(t_{ip1},\ldots,t_{ipV_{ip}})$. The $Q\times K$
matrix $\boldsymbol{\Theta}$ of FPC loadings
$\boldsymbol{\Theta}=\begin{bmatrix}\Theta_{1}&0^{T}&\cdots&0^{T}\\\
0&\Theta_{2}&\cdots&0^{T}\\\ \vdots&\vdots&\ddots&\vdots\\\
0&0&\cdots&\Theta_{P}\end{bmatrix},$
is subject to the orthonormality constraint
$\boldsymbol{\Theta}^{T}\boldsymbol{\Theta}=I$. The $K$-dimensional vector of
FPC scores $\boldsymbol{\alpha_{i}}\sim N(0,\Sigma_{\alpha})$, with
$\Sigma_{\alpha}$ is restricted to the form
$\begin{bmatrix}D_{1}&C_{21}^{T}&\cdots&C_{P1}^{T}\\\
C_{21}&D_{2}&\cdots&C_{P2}^{T}\\\ \vdots&\vdots&\ddots&\vdots\\\
C_{P1}&C_{P2}&\cdots&D_{P}\end{bmatrix},$
where $D_{p}$ is the within-trajectory diagonal covariance matrix (necessary
for identifiability of within-trajectory FPCs) for the $p$th trajectory, and
$C_{lm}$ is the covariance matrix for the $l$th and $m$th trajectories.
$\Sigma_{\alpha}$ can be written as
$\Sigma_{\alpha}=S_{\alpha}R_{\alpha}S_{\alpha}$, where $S_{\alpha}$ is the
diagonal matrix of standard deviations for the FPC scores, and $R_{\alpha}$ is
the correlation matrix restricted to the form
$\begin{bmatrix}I_{1}&R_{21}^{T}&\cdots&R_{P1}^{T}\\\
R_{21}&I_{2}&\cdots&R_{P2}^{T}\\\ \vdots&\vdots&\ddots&\vdots\\\
R_{P1}&R_{P2}&\cdots&I_{P}\end{bmatrix},$
where $I_{p}$ is the $K_{p}\times K_{p}$ identity matrix corresponding to the
$p$th trajectory.
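The block-diagonal matrices $\boldsymbol{B_{i}}$ and $\boldsymbol{\Theta}$ above can be assembled mechanically. A hedged numpy sketch with toy dimensions (the helper `block_diag` and all sizes are illustrative, not from the paper):

```python
import numpy as np

def block_diag(blocks):
    """Assemble a block-diagonal matrix from a list of 2-D arrays."""
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    out = np.zeros((rows, cols))
    r = c = 0
    for b in blocks:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]
        c += b.shape[1]
    return out

# Toy dimensions: P = 2 trajectories, V_i1 = 4 and V_i2 = 3 visits,
# Q_1 = 5 and Q_2 = 4 basis functions, K_1 = 2 and K_2 = 1 components.
rng = np.random.default_rng(0)
B_i1, B_i2 = rng.normal(size=(4, 5)), rng.normal(size=(3, 4))
Theta1, Theta2 = rng.normal(size=(5, 2)), rng.normal(size=(4, 1))

B_i = block_diag([B_i1, B_i2])        # (V_i x Q) = (7 x 9)
Theta = block_diag([Theta1, Theta2])  # (Q x K)  = (9 x 3)
print(B_i.shape, Theta.shape)         # (7, 9) (9, 3)
```

The zero off-diagonal blocks encode that each trajectory's observations load only on its own spline basis and FPCs; the scores are coupled only through $\Sigma_{\alpha}$.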
We implemented the mSFPCA model using the Hamiltonian Markov Chain Monte Carlo
(MCMC) sampling algorithm in Stan to estimate parameters. The prior
distributions for $\boldsymbol{\theta_{\mu}}$, $\boldsymbol{\Theta}$ and
$\boldsymbol{\epsilon_{i}}$ are set as follows
$\displaystyle\boldsymbol{\theta_{\mu}}$ $\displaystyle\sim N_{Q}(0,I_{Q})$
$\displaystyle\boldsymbol{\Theta_{kp}}$ $\displaystyle\sim
N_{Q_{p}}(0,I_{Q_{p}}),$ $\displaystyle\boldsymbol{\epsilon_{i}}$
$\displaystyle\sim N_{V_{i}}(0,\sigma_{\boldsymbol{\epsilon}}^{2}I_{V_{i}})$
$\displaystyle\sigma_{\boldsymbol{\epsilon}}$ $\displaystyle\sim Cauchy(0,1),$
where $\boldsymbol{\Theta_{kp}}$ is the $k$th column of the FPC loadings in
the $p$th block. Leveraging this Bayesian implementation in Stan, we utilized
PSIS-LOO for model selection, and diagnostics plots from PSIS-LOO and
graphical posterior predictive checks for model diagnostics.
#### 2.3.1 Orthonormality constraint
A difficulty in implementing the Bayesian mSFPCA model is that the principal
component loadings $\boldsymbol{\Theta}$ are not uniquely specified. For a
given $K\times K$ rotation matrix $P$, if
$\boldsymbol{\Theta}^{*}=\boldsymbol{\Theta}P$ and $\boldsymbol{\Theta}$ obeys
the constraints in Eq.(4), then
$\boldsymbol{\Theta}^{*T}\boldsymbol{\Theta}^{*}=P^{T}\boldsymbol{\Theta}^{T}\boldsymbol{\Theta}P=I$,
and hence $\boldsymbol{\Theta}$ is unidentifiable without additional
restrictions. Instead of directly enforcing orthonormality when sampling from
the conditional posteriors in the Bayesian model fitting, we sampled the
parameters with no constraint on $\boldsymbol{\Theta}$ and then performed a
post hoc rotation for each iteration of the MCMC algorithm to meet the
orthonormality constraint. Since the symmetric matrix
$\boldsymbol{\Theta}\Sigma_{\alpha}\boldsymbol{\Theta}^{T}$ is identifiable
and non-negative definite, we applied an eigenvalue decomposition
$\boldsymbol{\Theta}\Sigma_{\alpha}\boldsymbol{\Theta}^{T}=VSV^{T}$, where $V$
is the $Q\times Q$ matrix of orthonormal eigenvectors, and $S$ is the diagonal
matrix of eigenvalues, with the $Q$ positive eigenvalues ordered from largest
to smallest. Let $\Theta^{*}=V_{k}$ denote the $Q\times K$ matrix consisting
of the first $K$ eigenvectors of $V$, which satisfies
$\boldsymbol{\Theta}^{*T}\boldsymbol{\Theta}^{*}=I$. Finally, we rotated
$\Sigma_{\alpha}$ and FPC scores $\boldsymbol{\alpha}_{i}$, to obtain
$\Sigma_{\alpha}^{*}=\boldsymbol{\Theta}^{*T}\boldsymbol{\Theta}\Sigma_{\alpha}\boldsymbol{\Theta}^{T}\boldsymbol{\Theta}^{*}$,
and
$\boldsymbol{\alpha}_{i}^{*}=\boldsymbol{\Theta}^{*T}\boldsymbol{\Theta}\boldsymbol{\alpha}_{i}$,
so that
$\boldsymbol{\Theta}^{*}\Sigma_{\alpha}^{*}\boldsymbol{\Theta}^{*T}=\boldsymbol{\Theta}\Sigma_{\alpha}\boldsymbol{\Theta}^{T}$,
and
$\boldsymbol{\Theta}^{*}\boldsymbol{\alpha}_{i}^{*}=\boldsymbol{\Theta}\boldsymbol{\alpha}_{i}$.
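This post hoc rotation is easy to verify numerically. A sketch under toy dimensions (random $\boldsymbol{\Theta}$ and a diagonal $\Sigma_{\alpha}$; not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
Q, K = 6, 3
Theta = rng.normal(size=(Q, K))                   # unconstrained draw
Sigma_a = np.diag(rng.uniform(1.0, 3.0, size=K))  # toy PC-score covariance
alpha = rng.normal(size=K)                        # one subject's scores

M = Theta @ Sigma_a @ Theta.T                     # identifiable, PSD, rank K
vals, vecs = np.linalg.eigh(M)                    # ascending eigenvalues
order = np.argsort(vals)[::-1]                    # largest first
Theta_star = vecs[:, order[:K]]                   # first K eigenvectors

# Rotate the score covariance and the scores consistently.
Sigma_star = Theta_star.T @ Theta @ Sigma_a @ Theta.T @ Theta_star
alpha_star = Theta_star.T @ Theta @ alpha

# Orthonormality holds, and the identifiable quantities are unchanged.
assert np.allclose(Theta_star.T @ Theta_star, np.eye(K))
assert np.allclose(Theta_star @ Sigma_star @ Theta_star.T, M)
assert np.allclose(Theta_star @ alpha_star, Theta @ alpha)
```

The last two identities hold because the top-$K$ eigenvectors of $M$ span the column space of $\boldsymbol{\Theta}$, so projecting onto that space leaves $M$ and $\boldsymbol{\Theta}\boldsymbol{\alpha}_{i}$ intact.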
#### 2.3.2 Modeling covariance
Because the covariance matrix of FPC scores $\Sigma_{\alpha}$ must be positive
semi-definite and is restricted to be diagonal within each trajectory,
modeling it effectively is a challenge. Barnard, McCulloch and Meng
(2000) proposed a separation strategy for modeling $\Sigma=SRS$ by assuming
independent priors for the standard deviations $S$ and the correlation matrix
$R$. To account for the dependent structure of correlations among different
subsets of variables, Liechty, Liechty and Müller (2004) proposed the common
correlation model for $R$, which assumes a common normal prior for all
correlations with the additional restriction that the correlation matrix is
positive definite. However, $r_{ij}$, the $(i,j)$th element of the correlation
matrix $R$, is embedded in the full conditional posterior density in an awkward
manner, leading to the use of a Metropolis-Hastings algorithm that updates
one coefficient $r_{ij}$ at a time (Liechty, Liechty and Müller, 2004). This
consecutive updating procedure for correlation estimation is inefficient, and
could lead to heavy computational cost when the correlation matrix is large or
when the correlation has to be estimated separately from other parameters in
the mSFPCA model when implemented in Stan (Carpenter et al., 2017). For example,
on a simulated dataset with 3 temporal measurements from 100 subjects over 10
time points, an mSFPCA model using Liechty’s covariance estimation method
would take 40 hours to estimate all parameters when implemented in Stan. Our
proposed method reduces the computational time by a factor of more than 130
(to only 18 minutes) because it avoids the additional Metropolis-Hastings step.
To pursue an efficient numerical solution to the covariance estimation, we
took advantage of the Cholesky decomposition (Nash, 1990) and imposed the
diagonal constraint on the within-trajectory covariance matrices. Since the
covariance matrix of FPC scores $\Sigma_{\alpha}$ has full rank with
probability one, it has a unique Cholesky decomposition in the form of
$\Sigma_{\alpha}=LL^{T},$
where $L$ is a real lower triangular matrix with positive diagonal entries
(Gentle, 2012) . Given a lower triangular matrix $L$ divided into $P$ blocks,
we have
$L=\begin{bmatrix}L_{1,1}&0&\cdots&0\\\ L_{2,1}&L_{2,2}&\cdots&0\\\
\vdots&\vdots&\ddots&\vdots\\\ L_{P,1}&L_{P,2}&\cdots&L_{P,P}\end{bmatrix},$
$\text{then }LL^{T}=\begin{bmatrix}L_{1,1}L_{1,1}^{T}&*&\cdots&*\\\
L_{2,1}L_{1,1}^{T}&L_{2,1}L_{2,1}^{T}+L_{2,2}L_{2,2}^{T}&\cdots&*\\\
\vdots&\vdots&\ddots&\vdots\\\
L_{P,1}L_{1,1}^{T}&L_{P,1}L_{2,1}^{T}+L_{P,2}L_{2,2}^{T}&\cdots&L_{P,1}L_{P,1}^{T}+...+L_{P,P}L_{P,P}^{T}\end{bmatrix},$
where $*$ denotes the transpose of the corresponding sub-diagonal block. To
ensure that $LL^{T}$ is positive definite with diagonal within-block
covariance matrices, the lower triangular Cholesky factor $L$ needs to meet
the following two conditions:
1. 1.
Within-block covariance matrices $\sum_{m=1}^{M}L_{M,m}L_{M,m}^{T},M=1,...,P$,
are diagonal.
2. 2.
The diagonal entries of $L_{M,M},M=1,...,P$ are positive.
We will focus on defining the diagonal blocks $L_{M,M}$ to achieve these, and
leave the off-diagonal blocks $L_{M,m},m=1,...,M-1$ to be arbitrary,
unconstrained (i.e. the unconstrained parameter elements from the Hamiltonian
MCMC sampling).
Let $D_{M},M=1,...,P$ be the $M$th within-block covariance matrix, then
$\displaystyle D_{M}$ $\displaystyle=\sum_{m=1}^{M}L_{M,m}L_{M,m}^{T}$ (5)
$\displaystyle=L_{M,M}L_{M,M}^{T}+\sum_{m=1}^{M-1}L_{M,m}L_{M,m}^{T},$
$\displaystyle L_{M,M}L_{M,M}^{T}$
$\displaystyle=D_{M}-\sum_{m=1}^{M-1}L_{M,m}L_{M,m}^{T}=A.$
Since all the off-diagonal elements of $D_{M}$ are known to be zero and the
off-diagonal blocks $L_{M,m},m=1,...,M-1$ are defined earlier with
unconstrained estimates, we have thus defined all the off-diagonals of this
matrix A, leaving only the diagonals. Because $L_{M,M}$ needs to have positive
diagonal entries, $L_{M,M}L_{M,M}^{T}$ must be positive definite, thus
$L_{M,M}$ is the Cholesky factor of A. To derive $L_{M,M}$, a typical approach
is to proceed with the Cholesky–Banachiewicz and Cholesky–Crout algorithm on
$A$, where entries for the lower triangular factor $L$ are
$\displaystyle L_{j,j}$
$\displaystyle=\sqrt{A_{j,j}-\sum_{k=1}^{j-1}L_{j,k}^{2}}$ (6)
$\displaystyle L_{i,j}$
$\displaystyle=\frac{1}{L_{j,j}}\left(A_{i,j}-\sum_{k=1}^{j-1}L_{i,k}L_{j,k}\right)\text{ for }i>j$
However, for the diagonal entries $L_{j,j}$, instead of using Eq. (6), we
substitute the exponential term $\exp(0.5\,O+2)$ to ensure positivity, where
$O$ is the corresponding unconstrained parameter estimate. Here, $0.5$ was
chosen to mimic the square root in the original formula, and $2$ was added to
bound the initial values of the diagonal entries away from zero, given that
default initial values are drawn uniformly from the interval $(-2,2)$ in Stan.
Finally, we update the off-diagonal entries $L_{i,j}$ using Eq. (6).
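The construction above can be sketched directly. The following numpy implementation of the constrained Cholesky factor is our own illustrative version (block sizes, seed, and the helper name are assumptions); it verifies that the resulting $\Sigma_{\alpha}=LL^{T}$ is positive definite with diagonal within-block covariances:

```python
import numpy as np

def constrained_cholesky(unconstrained, sizes):
    """Build a lower-triangular L so that Sigma = L L^T is positive
    definite with *diagonal* within-block covariance matrices.
    `unconstrained` is any K x K array of raw values (mimicking Stan's
    unconstrained draws); `sizes` gives block dimensions K_1, ..., K_P."""
    K = sum(sizes)
    L = np.zeros((K, K))
    starts = np.cumsum([0] + list(sizes))
    for M in range(len(sizes)):
        s, e = starts[M], starts[M + 1]
        # Off-diagonal blocks: copied over unconstrained.
        L[s:e, :s] = unconstrained[s:e, :s]
        # Contribution of earlier blocks to this within-block covariance.
        C = L[s:e, :s] @ L[s:e, :s].T
        for j in range(e - s):
            # Diagonal entries: exp(0.5 * O + 2) keeps them positive.
            L[s + j, s + j] = np.exp(0.5 * unconstrained[s + j, s + j] + 2)
            for i in range(j + 1, e - s):
                # Target A_{i,j} = -C[i, j] so that D_M comes out diagonal.
                acc = L[s + i, s:s + j] @ L[s + j, s:s + j]
                L[s + i, s + j] = (-C[i, j] - acc) / L[s + j, s + j]
    return L

rng = np.random.default_rng(2)
sizes = (2, 2, 1)                      # toy K_p, matching K = 5 scores
L = constrained_cholesky(rng.normal(size=(5, 5)), sizes)
Sigma = L @ L.T
# All within-block covariances are diagonal; Sigma is positive definite.
print(all(np.allclose(Sigma[s:e, s:e], np.diag(np.diag(Sigma[s:e, s:e])))
          for s, e in ((0, 2), (2, 4), (4, 5))))       # True
print(bool(np.all(np.linalg.eigvalsh(Sigma) > 0)))     # True
```

Because $L$ is lower triangular with strictly positive diagonal, $LL^{T}$ is positive definite by construction, which is exactly what the Metropolis-Hastings alternative had to enforce at much greater cost.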
In short, in our Bayesian implementation, we set the off-diagonal entries in
within-trajectory covariance matrices to be zero, estimate the rest of
parameters without constraint using the uninformative (improper) prior
$uniform(-\infty,+\infty)$, substitute the diagonal entries with our
exponential term, and finally update the off-diagonal entries. In this way, we
are able to estimate the covariance matrix efficiently and guarantee it to be
positive semi-definite with our desired constrained form. Once we obtain the
covariance matrix, we can decompose it into the correlation matrix
$R_{\alpha}$ and standard deviations in order to estimate temporal
associations.
#### 2.3.3 Estimating inter-block association
Apart from simultaneously modeling multivariate longitudinal measurements, we
want to estimate the association among measurements of interest via the
correlations among the FPC scores, where the correlation matrix $R_{\alpha}$
obtained earlier will play a crucial role. We propose a measure of inter-
trajectory association by calculating the mutual information of FPC scores
from different measurements.
We define the inter-trajectory association between measurements $p_{1}$ and
$p_{2}$ as the mutual information of FPC scores $\boldsymbol{\alpha}_{ip_{1}}$
and $\boldsymbol{\alpha}_{ip_{2}}$, $1\leq p_{1},p_{2}\leq P$ , with
$\displaystyle
MI(\boldsymbol{\alpha}_{ip_{1}},\boldsymbol{\alpha}_{ip_{2}})=H(\boldsymbol{\alpha}_{ip_{1}})+H(\boldsymbol{\alpha}_{ip_{2}})-H(\boldsymbol{\alpha}_{ip_{1}},\boldsymbol{\alpha}_{ip_{2}}),$
where $H(X)$ is the entropy of $X$ and $H(X)=-E[log(f_{X}(X))]$ with
$f_{X}(X)$ being the probability density function of $X$ (Cover, 1999).
If a $K$-dimensional random variable $X$ follows a multivariate normal
distribution with covariance matrix $\Sigma$, then according to Ahmed and
Gokhale (1989)
$\displaystyle H(X)=\frac{K}{2}+\frac{K}{2}log(2\pi)+\frac{1}{2}log|\Sigma|.$
Since the $K$-dimensional FPC scores $\boldsymbol{\alpha}_{i}\sim
N(0,\Sigma_{\alpha})$, and any subvector of $\boldsymbol{\alpha}_{i}$ is of
the same structure with the correlation matrix being a submatrix of
$R_{\alpha}$, then according to Arellano-Valle, Contreras-Reyes and Genton
(2013), the mutual information of $\boldsymbol{\alpha}_{ip_{1}}$ and
$\boldsymbol{\alpha}_{ip_{2}}$ could be simplified as
$\displaystyle
MI(\boldsymbol{\alpha}_{ip_{1}},\boldsymbol{\alpha}_{ip_{2}})=-\frac{1}{2}log|R_{\alpha\\{p_{1},p_{2}\\}}|,$
(7)
where
$R_{\alpha\\{p_{1},p_{2}\\}}=\begin{bmatrix}I_{p_{1}}&R_{p_{1}p_{2}}\\\
R_{p_{1}p_{2}}^{T}&I_{p_{2}}\end{bmatrix}.$
Moreover, we can estimate the conditional inter-trajectory association between
any two measurements of interest given the other measurements in the model by
calculating the partial mutual information of FPC scores. The conditional
inter-trajectory association between measurements $p_{1}$ and $p_{2}$ is
defined as the partial mutual information of $\boldsymbol{\alpha}_{ip_{1}}$
and $\boldsymbol{\alpha}_{ip_{2}}$, $1\leq p_{1},p_{2}\leq P$, with
$\displaystyle
MI(\boldsymbol{\alpha}_{ip_{1}},\boldsymbol{\alpha}_{ip_{2}}|\boldsymbol{\alpha}_{i\\{1,...,P\setminus
p_{1},p_{2}\\}})$
$\displaystyle=H(\boldsymbol{\alpha}_{ip_{1}},\boldsymbol{\alpha}_{i\\{1,...,P\setminus
p_{1},p_{2}\\}})+H(\boldsymbol{\alpha}_{ip_{2}},\boldsymbol{\alpha}_{i\\{1,...,P\setminus
p_{1},p_{2}\\}})$ (8)
$\displaystyle\qquad-H(\boldsymbol{\alpha}_{i\\{1,...,P\setminus
p_{1},p_{2}\\}})-H(\boldsymbol{\alpha}_{ip_{1}},\boldsymbol{\alpha}_{ip_{2}},\boldsymbol{\alpha}_{i\\{1,...,P\setminus
p_{1},p_{2}\\}})$ $\displaystyle=H(\boldsymbol{\alpha}_{i\\{1,...,P\setminus
p_{2}\\}})+H(\boldsymbol{\alpha}_{i\\{1,...,P\setminus
p_{1}\\}})-H(\boldsymbol{\alpha}_{i\\{1,...,P\setminus
p_{1},p_{2}\\}})-H(\boldsymbol{\alpha_{i}})$
$\displaystyle=\frac{1}{2}log|R_{\alpha\\{1,...,P\setminus
p_{2}\\}}|+\frac{1}{2}log|R_{\alpha\\{1,...,P\setminus p_{1}\\}}|$
$\displaystyle\qquad-\frac{1}{2}log|R_{\alpha\\{1,...,P\setminus
p_{1},p_{2}\\}}|-\frac{1}{2}log|R_{\alpha}|,$
where $R_{\alpha\\{1,...,P\setminus p_{2}\\}}$, $R_{\alpha\\{1,...,P\setminus
p_{1}\\}}$, and $R_{\alpha\\{1,...,P\setminus p_{1},p_{2}\\}}$ are defined in
the similar way as $R_{\alpha\\{p_{1},p_{2}\\}}$ in Eq.(7).
Inter-trajectory association obtained in this way ranges from 0 to infinity.
By analogy with the way Pearson's contingency coefficient was obtained, we can
apply a simple transformation proposed by Joe (1989) to obtain a normalized
version of the mutual information as
$MI^{*}(\boldsymbol{\alpha}_{ip_{1}},\boldsymbol{\alpha}_{ip_{2}})\coloneqq\sqrt{1-exp[-2MI(\boldsymbol{\alpha}_{ip_{1}},\boldsymbol{\alpha}_{ip_{2}})]}.$
(9)
In this way, the inter-trajectory and conditional associations take values in
$[0,1]$. The interpretation is that the closer
$MI^{*}(\boldsymbol{\alpha}_{ip_{1}},\boldsymbol{\alpha}_{ip_{2}})$ or
$MI^{*}(\boldsymbol{\alpha}_{ip_{1}},\boldsymbol{\alpha}_{ip_{2}}|\boldsymbol{\alpha}_{i\\{1,...,P\setminus
p_{1},p_{2}\\}})$ is to 1, the stronger the temporal association between
measurements.
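Eqs. (7)–(9) translate directly into code on the estimated correlation matrix. A hedged numpy sketch (the toy correlation matrix and function names are ours); note that for a single cross-block correlation $r$, the normalized mutual information reduces to $|r|$:

```python
import numpy as np

def logdet(R, idx):
    """log-determinant of the principal submatrix R[idx, idx]."""
    if len(idx) == 0:
        return 0.0
    return np.log(np.linalg.det(R[np.ix_(idx, idx)]))

def mutual_info(R, idx1, idx2):
    """Marginal inter-trajectory association, Eq. (7): -0.5 log|R_{p1,p2}|."""
    return -0.5 * logdet(R, np.concatenate([idx1, idx2]))

def partial_mi(R, idx1, idx2, rest):
    """Conditional inter-trajectory association, Eq. (8)."""
    all_idx = np.arange(R.shape[0])
    return 0.5 * (logdet(R, np.concatenate([idx1, rest]))
                  + logdet(R, np.concatenate([idx2, rest]))
                  - logdet(R, rest) - logdet(R, all_idx))

def normalized_mi(mi):
    """Joe (1989) transformation to [0, 1], Eq. (9)."""
    return np.sqrt(1.0 - np.exp(-2.0 * mi))

# Toy correlation matrix: block 1 = scores {0, 1}, block 2 = score {2}.
R = np.array([[1.0, 0.0, 0.75],
              [0.0, 1.0, 0.0],
              [0.75, 0.0, 1.0]])
idx1, idx2 = np.array([0, 1]), np.array([2])
mi = mutual_info(R, idx1, idx2)
print(round(normalized_mi(mi), 2))   # 0.75: a single correlation of 0.75

# With P = 2 blocks there is nothing to condition on, so the partial MI
# collapses to the marginal MI.
rest = np.array([], dtype=int)
assert np.isclose(partial_mi(R, idx1, idx2, rest), mi)
```

For large blocks, `np.linalg.slogdet` would be the numerically safer choice than taking `log` of `det`.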
## 3 Simulation studies
To evaluate the performance of mSFPCA in modeling multiple temporal
measurements, especially in its covariance estimation and temporal association
inference, we simulated sparse longitudinal trajectories with three temporal
measurements under four different covariance structures. To better mimic
reality, the data were simulated from an mSFPCA model using parameters
initially estimated from a real longitudinal microbiome multi-omics dataset
(Kostic et al., 2015) in the following way:
1. 1.
Applying mSFPCA to model three temporal measurements in the real multi-omics
dataset.
2. 2.
Selecting the optimal number of PCs and dimension of the basis using PSIS-LOO:
the chosen model has 2, 2, and 1 PCs and 6, 5, and 5 basis functions for the
three measurements, respectively.
3. 3.
Extracting the estimated values for population mean curve
$(\boldsymbol{\theta}_{\mu})$, FPC loadings $(\boldsymbol{\Theta})$, and
residual variance $\sigma_{\boldsymbol{\epsilon}}$.
Then under four distinct covariance structures on FPC scores
($\Sigma_{\alpha}$), we simulate the trajectories for 100 subjects with an
average of 20% missing data over observations at 10 time points. Observations
were randomly deleted to create increasingly sparse functional datasets. In
the 1st covariance structure, all PCs are independent; in the 2nd, a single
strong correlation of 0.75 exists across PCs; in the 3rd, one strong and one
medium correlation exist, at values of 0.75 and 0.5 respectively; in the 4th,
one strong, one medium and one weak correlation exist, at strengths of 0.75,
0.5 and 0.25. In short, the dependence structure among PCs increases from the
first covariance structure to the last. Based on these pre-specified
covariance structures and initially estimated parameters, we simulate the
sparse longitudinal trajectories as follows:
1.
Choosing the total number of subjects to be 100 and placing 10 candidate time
points in $[0,1]$.
2.
Simulating the observed number of time points for each individual with
$n_{i}\sim Poisson(8)$, where 8 represents the average number of time points
across all subjects, and then randomly placing the observed time points in the
possible time locations (chosen in the previous step).
3.
Generating the cubic spline basis matrix $\boldsymbol{B}_{i}$ for each subject
(orthonormality obtained through Gram-Schmidt orthonormalization).
4.
Simulating for each subject FPC scores $\boldsymbol{\alpha_{i}}\sim
N(0,\Sigma_{\alpha})$ and noise $\boldsymbol{\epsilon}_{i}\sim
N(0,\sigma_{\boldsymbol{\epsilon}}^{2}I)$.
5.
Obtaining the temporal trajectory for each individual with
$\boldsymbol{Y_{i}}=\boldsymbol{B_{i}\theta_{\mu}}+\boldsymbol{B_{i}\Theta\alpha_{i}}+\boldsymbol{\epsilon_{i}}$.
6.
Repeating steps 1–5 1000 times for each covariance structure, generating 4000
simulated datasets in total.
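The simulation steps above can be sketched as follows, with stand-in values for $\boldsymbol{\theta}_{\mu}$, $\boldsymbol{\Theta}$, and $\sigma_{\boldsymbol{\epsilon}}$ (the paper's were estimated from real data) and an orthonormalized polynomial basis standing in for the cubic splines; all sizes here are illustrative, not the paper's exact fit:

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects, n_grid = 100, 10        # subjects; candidate time points in [0, 1]
n_basis, n_pc = 6, 2                # illustrative sizes, not the paper's exact fit
grid = np.linspace(0.0, 1.0, n_grid)

# Hypothetical stand-ins for the estimated theta_mu, Theta and sigma_eps.
theta_mu = rng.normal(size=n_basis)
Theta = rng.normal(size=(n_basis, n_pc))
sigma_eps = 0.1
Sigma_alpha = np.eye(n_pc)          # covariance structure I: independent FPC scores

# Basis evaluated on the full grid and orthonormalized via QR (equivalent to
# Gram-Schmidt); a polynomial basis stands in for the paper's cubic splines.
B_full, _ = np.linalg.qr(np.vander(grid, n_basis, increasing=True))

trajectories = []
for _ in range(n_subjects):
    n_i = int(np.clip(rng.poisson(8), 1, n_grid))              # step 2: observed points
    idx = np.sort(rng.choice(n_grid, size=n_i, replace=False))
    B_i = B_full[idx]                                          # step 3: subject basis
    alpha_i = rng.multivariate_normal(np.zeros(n_pc), Sigma_alpha)  # step 4: FPC scores
    eps_i = rng.normal(0.0, sigma_eps, size=n_i)               # step 4: noise
    y_i = B_i @ theta_mu + B_i @ Theta @ alpha_i + eps_i       # step 5
    trajectories.append((grid[idx], y_i))
```

With a Poisson(8) count of observed points out of 10 candidates, roughly 20% of observations are missing on average, matching the sparsity level described above.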
To evaluate the mSFPCA model performance on simulated data, we examine three
main results: (1) how well mSFPCA captures the temporal patterns embodied in
the overall mean curve and FPC curves for each temporal measurement; (2) the
accuracy of covariance estimation; and (3) the inference on temporal
associations based on mutual information estimation.
Figure 1 shows that the estimated overall mean curves and PC curves accurately
recovered the ground truth for all three outcome variables under covariance
structure I. The major temporal patterns were captured equally well under the
other three covariance structures (Supplementary Figures 1-3). Figure 2
summarizes the performance of covariance estimation across all four scenarios
in terms of the coverage probabilities of 95% credible intervals on the
estimated covariance parameters. The coverage probabilities are lowest under
the 1st covariance structure (independent, Figure 2A), improve as more
dependence is introduced (Figure 2B-D), and are highest under the 4th
covariance structure, which has the most correlations across PCs (Figure 2D).
Despite these subtle differences in the coverage probabilities for individual
covariance parameters, the average coverage probability across all estimated
parameters, represented by the dashed line, is around 95% within each
covariance structure. This indicates that our mSFPCA model estimates the
covariance matrix properly and that its performance is affected by the
structure of the covariance matrix itself: the sparser the covariance, the
more challenging the estimation. Even in the sparsest scenario (Figure 2A),
however, the mSFPCA model still achieves about 95% average coverage
probability.
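Coverage probability here is simply the fraction of simulation replicates whose 95% credible interval contains the true parameter. A toy sketch of that bookkeeping, using hypothetical Gaussian posteriors rather than actual mSFPCA fits:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy version of the coverage computation: in the paper, each of the 1000
# replicates yields a posterior 95% credible interval for a covariance
# parameter; here we fake those intervals from a hypothetical Gaussian
# posterior centred on a noisy estimate of the known truth.
true_param = 0.75
n_sims = 1000
post_sd = 0.05
medians = true_param + rng.normal(0.0, post_sd, size=n_sims)
lower = medians - 1.96 * post_sd
upper = medians + 1.96 * post_sd

# Fraction of replicates whose interval covers the truth; near 0.95 by design.
coverage = np.mean((lower <= true_param) & (true_param <= upper))
```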
Regarding the inference on temporal associations, Table 1 shows the mutual
information estimates for each simulation scenario; these estimate the
temporal association between each pair of temporal measurements. $MI_{ij}$
denotes the normalized mutual information between the $i$th and $j$th temporal
measurements. When the true MI is zero, the coverage probability is denoted as
$0^{*}$, as the estimated mutual information is always non-negative and thus
by construction the posterior distribution will not encompass zero. In the
rest of the table, except for the slightly lower coverage probabilities of
0.92 for $MI_{12}$ in the 3rd scenario and 0.93 for $MI_{23}$ in the 4th
scenario, all the coverage probabilities are close to 95%. Table 2 shows the
conditional mutual information estimates for each simulation scenario; these
estimate the temporal association between each pair of temporal measurements
given the remaining measurement in the model. $CMI_{ij}$ denotes the
normalized conditional mutual information between the $i$th and $j$th temporal
measurements.
All coverage probabilities for non-zero CMIs are close to 95%.
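The true normalized MI values above line up with the correlations built into the covariance structures. For a bivariate Gaussian, $MI=-\frac{1}{2}\ln(1-\rho^{2})$, and a normalization of the form $\sqrt{1-e^{-2I}}$ (in the spirit of Joe, 1989) recovers $|\rho|$ exactly, which is presumably why correlations of 0.25, 0.5, and 0.75 appear as true MI values of 0.25, 0.5, and 0.75. A minimal sketch:

```python
import math

def gaussian_mi(rho):
    """Shannon mutual information (in nats) of a bivariate Gaussian with correlation rho."""
    return -0.5 * math.log(1.0 - rho ** 2)

def normalized_mi(rho):
    """Normalization sqrt(1 - exp(-2 I)), mapping MI back to [0, 1];
    for a bivariate Gaussian this recovers |rho| exactly."""
    return math.sqrt(1.0 - math.exp(-2.0 * gaussian_mi(rho)))

# The weak/medium/strong correlations used in the simulations map to the
# true normalized MI values reported in Table 1.
values = {rho: normalized_mi(rho) for rho in (0.25, 0.5, 0.75)}
```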
In short, our simulation results have demonstrated the good performance of
mSFPCA in modeling sparse longitudinal data with multiple temporal
measurements and providing valid inference on temporal associations.
Figure 1: Estimated mean and FPC curves from mSFPCA on simulated data with covariance structure I. Estimated (red) vs. true (blue) overall mean curves on simulated trajectories (black) for outcome variable 1 (A), outcome variable 2 (B), and outcome variable 3 (C). Estimated (red) vs. true (black) PC curves on simulated data for outcome variable 1 (D), outcome variable 2 (E), and outcome variable 3 (F).
Figure 2: Coverage probability of the 95% credible interval on estimated covariance parameters in covariance matrix I (A), II (B), III (C), and IV (D). Values within each dot represent the coverage probability for each estimated covariance parameter; only values for unique covariance elements are displayed. Black dashed lines indicate the average coverage probability across all estimated parameters within each covariance matrix, which is around 95% in all four simulation scenarios.
Table 1: Mutual information estimates for each simulation scenario | | | | 95% credible interval
---|---|---|---|---
Simulation scenario | Parameter | Truth | Median | Cov.prob. | 2.5% | 97.5%
Covariance I | $MI_{12}$ | 0 | 0.26 | 0* | 0.11 | 0.42
| $MI_{13}$ | 0 | 0.17 | 0* | 0.04 | 0.34
| $MI_{23}$ | 0 | 0.18 | 0* | 0.04 | 0.35
Covariance II | $MI_{12}$ | 0 | 0.26 | 0* | 0.11 | 0.42
| $MI_{13}$ | 0 | 0.17 | 0* | 0.04 | 0.34
| $MI_{23}$ | 0.75 | 0.75 | 0.96 | 0.65 | 0.83
Covariance III | $MI_{12}$ | 0.5 | 0.54 | 0.92 | 0.39 | 0.66
| $MI_{13}$ | 0 | 0.17 | 0* | 0.04 | 0.34
| $MI_{23}$ | 0.75 | 0.75 | 0.96 | 0.65 | 0.83
Covariance IV | $MI_{12}$ | 0.5 | 0.54 | 0.94 | 0.38 | 0.66
| $MI_{13}$ | 0.25 | 0.29 | 0.94 | 0.12 | 0.46
| $MI_{23}$ | 0.75 | 0.75 | 0.93 | 0.66 | 0.83
Table 2: Conditional mutual information estimates for each simulation scenario | | | | 95% credible interval
---|---|---|---|---
Simulation scenario | Parameter | Truth | Median | Cov.prob. | 2.5% | 97.5%
Covariance I | $CMI_{12}$ | 0 | 0.26 | 0* | 0.11 | 0.42
| $CMI_{13}$ | 0 | 0.17 | 0* | 0.04 | 0.34
| $CMI_{23}$ | 0 | 0.18 | 0* | 0.04 | 0.35
Covariance II | $CMI_{12}$ | 0 | 0.26 | 0* | 0.12 | 0.42
| $CMI_{13}$ | 0 | 0.17 | 0* | 0.04 | 0.34
| $CMI_{23}$ | 0.75 | 0.75 | 0.95 | 0.65 | 0.83
Covariance III | $CMI_{12}$ | 0.76 | 0.77 | 0.94 | 0.68 | 0.84
| $CMI_{13}$ | 0.66 | 0.66 | 0.95 | 0.53 | 0.76
| $CMI_{23}$ | 0.87 | 0.87 | 0.96 | 0.81 | 0.91
Covariance IV | $CMI_{12}$ | 0.81 | 0.82 | 0.95 | 0.74 | 0.88
| $CMI_{13}$ | 0.76 | 0.76 | 0.95 | 0.66 | 0.83
| $CMI_{23}$ | 0.89 | 0.90 | 0.95 | 0.85 | 0.93
## 4 Real data application
For the real data application, we want to model multiple temporal measurements
simultaneously in a large and challenging dataset, with a special interest in
utilizing conditional mutual information to infer temporal association. This
dataset comes from the type 2 diabetes (T2D) longitudinal studies in the
Integrative Human Microbiome Project (iHMP Consortium, 2014). In this study,
approximately 100 individuals at high risk for T2D were followed for over 3
years in order to better understand the biological changes that occur during
the onset and progression of T2D. Multiple sample types were
collected from the study participants every 2-3 months during their healthy
periods, with more frequent sampling during periods of respiratory illness and
other environmental stressors. These data include multi-omics assays such as
stool microbiome data using 16S rRNA sequencing, host protein expression
profiles in fecal samples using LC-MS/MS, and cytokine profiles that quantify
the levels of 50 diverse inflammatory proteins and insulin peptides in host
serum, as well as standard clinical tests results like hemoglobin A1c (HbA1c),
insulin and glucose. Moreover, behavior changes of patients, such as emotional
and psychological stress, were documented using the Perceived Stress Scale
instrument. Our outcomes of interest are the longitudinal patterns of Shannon
diversity in bacteria, proteins and cytokines, and of the clinical test
results on HbA1c. Shannon diversity is defined as
$\mathrm{Shannon}=-\sum_{i=1}^{S}p_{i}\ln(p_{i})$, where $S$ is the total
number of species and $p_{i}$ is the relative abundance of species $i$ in the
entire population. Shannon diversities in bacteria, proteins and cytokines are
chosen over specific features in our application because they are more
predictive of individuals' diabetic status than any specific bacterium,
protein or cytokine. Hence, we are particularly interested in using mutual
information to investigate which omics data type (based on Shannon diversity)
has the strongest association with HbA1c, and whether additional omics data
improve the temporal association as measured by conditional mutual
information.
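The Shannon index defined above can be computed directly from feature counts; a minimal sketch (the `shannon_diversity` helper is ours, not from the paper):

```python
import math

def shannon_diversity(counts):
    """Shannon index -sum p_i ln(p_i) computed from raw feature counts (zeros skipped)."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# A perfectly even community of S species attains the maximum ln(S),
# while a community dominated by one species has diversity 0.
even = shannon_diversity([10, 10, 10, 10])      # ln(4)
dominated = shannon_diversity([40, 0, 0, 0])    # 0
```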
The estimated mean curves in Figure 3 show different temporal trends in each
outcome, where Shannon bacterial diversity decreases slowly over time, protein
diversity increases steadily over time, cytokine diversity increases over the
first 300 days, decreases between day 300 and 900, and then increases, and
HbA1c decreases during the first 2 years, and increases slightly afterwards.
As indicated by the observed trajectories for each individual (black curves),
there is considerable subject-level variation in each outcome. This additional
temporal information is captured by the FPC curves in Figure 4. Figure 4A
shows the first two PCs in Shannon bacterial diversity, of which PC 1 explains
79% and captures variation around day 750, while PC 2 explains 21% of the
variation and emphasizes variation around day 300 and 1100. Figure 4B shows
the first four PCs in Shannon protein diversity, of which the first two PCs
explain over 90% of the variance. The first PC captures variation around day
750, and the second PC emphasizes variation around day 450 and 1200. Figure 4C
shows the first four PCs in Shannon cytokine diversity, of which PC 1 explains
83% and exhibits an almost flat curve over time, while PC 2 explains 13% of
the variation and emphasizes variation around day 300 and 800. Figure 4D shows
the first four PCs in HbA1c: PC1 exhibits a slight increasing curve over time,
accounting for 70% of the variation, and PC 2 captures variation around days
300 and 800, explaining an additional 21% of the variation. In short, although
the principal patterns vary across measurements, the time points at which
changes occur are fairly consistent, suggesting coherent responses to changes
in patients' mental or physical condition.
Among the temporal associations between the omics measurements and the
standard clinical test result HbA1c, Table 3 suggests that Shannon protein
diversity has the highest association with HbA1c, with an estimated mutual
information of 0.91 and a 95% credible interval of (0.786, 0.971). Cytokine
diversity is the second highest, with MI at 0.849 (0.668, 0.954), and
bacterial diversity has the lowest association at 0.71 (0.441, 0.854).
However, when information about the other omics measurements is provided, all
the pairwise temporal associations increase to over 0.95, as indicated by the
conditional MI results. Regarding temporal associations among the omics
measurements, Shannon protein and bacterial diversities have the highest
temporal association, with mutual information at 0.982 (0.875, 0.999); Shannon
protein and cytokine diversities also have a high association at 0.966 (0.886,
0.998); the association between Shannon bacterial and cytokine diversities is
moderate at 0.798 (0.454, 0.977). As before, when information about the other
measurements is available, all conditional mutual information values increase
to about 0.99. In short, the host protein expression profile data have the
highest temporal association with patients' diabetes status (i.e., HbA1c), but
this information can still be further improved with additional omics data.
Model diagnostics are needed to assess the validity of our mSFPCA
application. The optimal model selected by PSIS-LOO has 2 PCs for Shannon
bacterial diversity and 4 PCs for the other measurements, and the number of
internal knots is chosen to be one for all outcomes. The PSIS-LOO diagnostics
in Figure 5A show that the selected mSFPCA model fits the majority of the data
well, except for 4 outliers with Pareto shape $k$ values above the warning
threshold of 0.7. The graphical posterior predictive checks in Figure 5B
suggest a good model fit, as data simulated from the posterior predictive
distribution cover the distribution of observed outcomes well. Figure 5C-F
highlight the observed trajectories of the 4 outliers detected by the PSIS-LOO
diagnostic plot. The red subject has the highest curve in Shannon cytokine
diversity and low values of HbA1c; a closer look at their metadata shows that
this subject went through stages of health, infection, and a return to health.
The green subject shows a highly oscillatory pattern in Shannon protein
diversity, having oscillated between stages of health, inflammation, and
infection. The blue subject also exhibits a highly oscillatory pattern in
Shannon protein diversity, having gone through complicated interweaving stages
of health, inflammation, infection, post-travel recovery, and allergy. The
purple subject, who has the highest Pareto shape $k$ value in Figure 5A,
experienced a drastic change in HbA1c while going through stages of infection,
stress, and a return to health. In short, our mSFPCA model generally fits this
dataset well, and our diagnostic tools were able to highlight biologically
meaningful outliers for further examination.
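The outlier detection described above amounts to flagging observations whose estimated Pareto shape $\hat{k}$ from PSIS-LOO exceeds the warning threshold 0.7. A minimal sketch with made-up $\hat{k}$ values:

```python
import numpy as np

# Hypothetical per-observation Pareto shape estimates, of the kind PSIS-LOO
# returns for each data point (these values are made up for illustration).
pareto_k = np.array([0.10, 0.35, 0.72, 0.05, 0.80, 0.64, 1.10, 0.20, 0.90])

WARNING = 0.7
outliers = np.flatnonzero(pareto_k > WARNING)   # observations the model fits poorly
```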
Figure 3: Estimated mean curves from the mSFPCA application to the type 2 diabetes multi-omics dataset. (A) Estimated population mean curve (blue) for Shannon bacterial diversity on observed individual trajectories (black). (B) Estimated population mean curve (blue) for Shannon protein diversity on observed individual trajectories (black). (C) Estimated population mean curve (blue) for Shannon cytokine diversity on observed individual trajectories (black). (D) Estimated population mean curve (blue) for HbA1c on observed individual trajectories (black).
Figure 4: Estimated FPC curves from the mSFPCA application to the type 2 diabetes multi-omics dataset. (A) Trends of variability in Shannon bacterial diversity captured by 2 principal component curves. (B) Trends of variability in Shannon protein diversity captured by 4 principal component curves. (C) Trends of variability in Shannon cytokine diversity captured by 4 principal component curves. (D) Trends of variability in HbA1c captured by 4 principal component curves.
Table 3: Mutual information estimates for the type 2 diabetes multi-omics dataset application temporal associations with HbA1c | $MI(95\%CI)$ | $CMI(95\%CI)$
---|---|---
HbA1c—protein | 0.910 (0.786, 0.971) | 0.994 (0.968, 0.999)
HbA1c—cytokine | 0.849 (0.668, 0.954) | 0.986 (0.951, 0.999)
HbA1c—bacteria | 0.710 (0.441, 0.854) | 0.957 (0.820, 0.999)
temporal associations among omics | $MI(95\%CI)$ | $CMI(95\%CI)$
protein—bacteria | 0.982 (0.875, 0.999) | 0.999 (0.996, 0.999)
protein—cytokine | 0.966 (0.886, 0.998) | 0.999 (0.997, 0.999)
bacteria—cytokine | 0.798 (0.454, 0.977) | 0.995 (0.958, 0.999)
Figure 5: Graphical model diagnostics and examination of outliers for mSFPCA
application on type 2 diabetes multi-omics dataset. (A) Scatterplot of
estimated Pareto shape parameter $\hat{k}$ in PSIS-LOO diagnostic plot: all
$\hat{k}$’s except 4 are lower than the warning threshold 0.7. (B) Graphical
posterior predictive plot: kernel density estimate of the observed dataset $y$
(dark curve), with kernel density estimates for 100 simulated datasets $y_{rep}$ drawn
from the posterior predictive distribution (thin, lighter lines). (C) Observed
Shannon bacterial diversity for 4 outliers detected by PSIS-LOO diagnostic
plot in (A). (D) Observed Shannon protein diversity for 4 outliers. (E)
Observed Shannon cytokine diversity for 4 outliers. (F) Observed HbA1c for 4
outliers.
## 5 Discussion
Here, we have proposed multivariate sparse functional PCA (mSFPCA), an
extension of sparse functional principal components analysis to the
simultaneous modeling of multiple trajectories. The methodological novelty
lies in the computationally efficient Bayesian covariance matrix estimation,
where we utilized a Cholesky decomposition to guarantee positive
semi-definiteness under the constrained form of a diagonal within-trajectory
covariance and an arbitrary between-trajectory covariance structure. Moreover,
we utilized mutual information to assess marginal and conditional temporal
associations, providing a principled way to quantify dependence among temporal
measurements. Finally, our Bayesian implementation in Stan enables the use of
PSIS-LOO for efficient model selection, as well as visual model diagnostics,
such as examining the estimated shape parameters from PSIS-LOO and using
graphical posterior predictive checks, to evaluate the validity of mSFPCA
models and highlight potential outliers.
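The Cholesky idea mentioned above can be sketched simply: parameterizing $\Sigma = LL^{T}$ with a lower-triangular $L$ whose diagonal is kept positive guarantees a valid covariance matrix without explicit positive semi-definiteness constraints. A minimal numpy illustration (not the paper's Stan code):

```python
import numpy as np

rng = np.random.default_rng(2)

# Any lower-triangular L with a positive diagonal gives a valid covariance
# Sigma = L @ L.T: sampling the unconstrained entries of L (with the diagonal
# mapped through exp) sidesteps explicit positive semi-definiteness constraints.
d = 3
L = np.tril(rng.normal(size=(d, d)))
np.fill_diagonal(L, np.exp(np.diag(L)))   # exp keeps the diagonal strictly positive
Sigma = L @ L.T

eigvals = np.linalg.eigvalsh(Sigma)       # all non-negative by construction
```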
In both our real-data based simulations and application to longitudinal
microbiome multi-omics datasets, we have demonstrated that mSFPCA is able to
accurately uncover the underlying principal modes of variation over time,
including both the average population pattern and subject-level variation, and
estimate the temporal associations properly. These enabled us to detect
biologically meaningful signals in a large and challenging longitudinal cohort
with irregular sampling, missing data, and four temporal measurements.
Moreover, the model diagnostic plots from the real data application show that
mSFPCA provides reliable model fits to real microbiome multi-omics datasets.
All these results highlight the value of our method in modeling
longitudinal data with multiple temporal measurements. Though we applied
mSFPCA to microbiome data in this paper, the method is in fact a general
framework that can be applied to a wide range of multivariate longitudinal
data.
One limitation of our method is that we assume the principal component scores
and residuals to be normally distributed as in the original SFPCA model. This
normality assumption may prevent our method from being applied to highly
skewed trajectories. However, improper application of the method to such data
can be detected by the model diagnostic tools we provide, e.g.,
the graphical posterior predictive checks. Users could also modify the
mSFPCA model by incorporating alternative prior distributions, for example, a
t-distribution with a low degree of freedom to capture heavy tails in the
distribution of principal component scores, which can be easily implemented in
Stan. Finally, since the mSFPCA model is implemented in Stan, a programming
language with a very active user base, the method can be updated with more
efficient MCMC sampling algorithms and can incorporate new model selection and
diagnostic techniques as they become available. Hence, we believe that the
mSFPCA method will be a useful and up-to-date tool for researchers in many
fields to analyze longitudinal data with multiple measurements and to detect
complex temporal associations.
## 6 Acknowledgement
RK was supported by NIH under grant 1DP1AT010885, NIDDK under grant
1P30DK120515, and CCFA under grant 675191. WT was supported by NIH/NIMH under
grants MH120025 and MH122688.
Supplement to "Multivariate Sparse Functional Principal Components Analysis
for Longitudinal Microbiome Multi-Omics Data". Three supplementary figures for
the simulation results are included in the supplementary materials.
## References
* Ahmed and Gokhale (1989) Ahmed, N. A. and Gokhale, D. V. (1989). Entropy expressions and their estimators for multivariate distributions. IEEE Transactions on Information Theory 35 688–692.
* Arellano-Valle, Contreras-Reyes and Genton (2013) Arellano-Valle, R. B., Contreras-Reyes, J. E. and Genton, M. G. (2013). Shannon entropy and mutual information for multivariate skew-elliptical distributions. Scandinavian Journal of Statistics 40 42–62.
* Ballen et al. (2016) Ballen, K., Ahn, K. W., Chen, M., Abdel-Azim, H., Ahmed, I., Aljurf, M., Antin, J., Bhatt, A. S., Boeckh, M., Chen, G. et al. (2016). Infection rates among acute leukemia patients receiving alternative donor hematopoietic cell transplantation. Biology of Blood and Marrow Transplantation 22 1636–1645.
* Barnard, McCulloch and Meng (2000) Barnard, J., McCulloch, R. and Meng, X.-L. (2000). Modeling covariance matrices in terms of standard deviations and correlations, with application to shrinkage. Statistica Sinica 1281–1311.
* Bodein et al. (2019) Bodein, A., Chapleur, O., Droit, A. and Lê Cao, K.-A. (2019). A generic multivariate framework for the integration of microbiome longitudinal studies with other data types. Frontiers in Genetics 10.
* Bouslimani et al. (2019) Bouslimani, A., da Silva, R., Kosciolek, T., Janssen, S., Callewaert, C., Amir, A., Dorrestein, K., Melnik, A. V., Zaramela, L. S., Kim, J.-N. et al. (2019). The impact of skin care products on skin chemistry and microbiome dynamics. BMC Biology 17 1–20.
* Carpenter et al. (2017) Carpenter, B., Gelman, A., Hoffman, M. D., Lee, D., Goodrich, B., Betancourt, M., Brubaker, M., Guo, J., Li, P. and Riddell, A. (2017). Stan: A probabilistic programming language. Journal of Statistical Software 76.
* Chiou, Chen and Yang (2014) Chiou, J.-M., Chen, Y.-T. and Yang, Y.-F. (2014). Multivariate functional principal component analysis: A normalization approach. Statistica Sinica 1571–1596.
* Cover (1999) Cover, T. M. (1999). Elements of Information Theory. John Wiley & Sons.
* Di, Crainiceanu and Jank (2014) Di, C., Crainiceanu, C. M. and Jank, W. S. (2014). Multilevel sparse functional principal component analysis. Stat 3 126–143.
* Di et al. (2009) Di, C.-Z., Crainiceanu, C. M., Caffo, B. S. and Punjabi, N. M. (2009). Multilevel functional principal component analysis. The Annals of Applied Statistics 3 458.
* Frati et al. (2019) Frati, F., Salvatori, C., Incorvaia, C., Bellucci, A., Di Cara, G., Marcucci, F. and Esposito, S. (2019). The role of the microbiome in asthma: The gut–lung axis. International Journal of Molecular Sciences 20 123.
* Gabry et al. (2019) Gabry, J., Simpson, D., Vehtari, A., Betancourt, M. and Gelman, A. (2019). Visualization in Bayesian workflow. Journal of the Royal Statistical Society: Series A (Statistics in Society) 182 389–402.
* Gelman, Meng and Stern (1996) Gelman, A., Meng, X.-L. and Stern, H. (1996). Posterior predictive assessment of model fitness via realized discrepancies. Statistica Sinica 733–760.
* Gentle (2012) Gentle, J. E. (2012). Numerical Linear Algebra for Applications in Statistics. Springer Science & Business Media.
* Gill et al. (2006) Gill, S. R., Pop, M., DeBoy, R. T., Eckburg, P. B., Turnbaugh, P. J., Samuel, B. S., Gordon, J. I., Relman, D. A., Fraser-Liggett, C. M. and Nelson, K. E. (2006). Metagenomic analysis of the human distal gut microbiome. Science 312 1355–1359.
* Hall and Hosseini-Nasab (2006) Hall, P. and Hosseini-Nasab, M. (2006). On properties of functional principal components analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68 109–126.
* Holleran et al. (2018) Holleran, G., Scaldaferri, F., Ianiro, G., Lopetuso, L., McNamara, D. N., Mele, M. C., Gasbarrini, A. and Cammarota, G. (2018). Fecal microbiota transplantation for the treatment of patients with ulcerative colitis and other gastrointestinal conditions beyond Clostridium difficile infection: an update. Drugs of Today 54 123–136.
* iHMP Consortium (2014) iHMP Consortium (2014). The Integrative Human Microbiome Project: dynamic analysis of microbiome-host omics profiles during periods of human health and disease. Cell Host & Microbe 16 276.
* James, Hastie and Sugar (2000) James, G. M., Hastie, T. J. and Sugar, C. A. (2000). Principal component models for sparse functional data. Biometrika 87 587–602.
* Jiang et al. (2019) Jiang, D., Armour, C. R., Hu, C., Mei, M., Tian, C., Sharpton, T. J. and Jiang, Y. (2019). Microbiome multi-omics network analysis: statistical considerations, limitations, and opportunities. Frontiers in Genetics 10.
* Jiang et al. (2020) Jiang, L., Zhong, Y., Elrod, C., Natarajan, L., Knight, R. and Thompson, W. K. (2020). BayesTime: Bayesian functional principal components for sparse longitudinal data. arXiv preprint arXiv:2012.00579.
* Joe (1989) Joe, H. (1989). Relative entropy measures of multivariate dependence. Journal of the American Statistical Association 84 157–164.
* Kidziński and Hastie (2018) Kidziński, Ł. and Hastie, T. (2018). Longitudinal data analysis using matrix completion. arXiv preprint arXiv:1809.08771.
* Kostic et al. (2015) Kostic, A. D., Gevers, D., Siljander, H., Vatanen, T., Hyötyläinen, T., Hämäläinen, A.-M., Peet, A., Tillmann, V., Pöhö, P., Mattila, I. et al. (2015). The dynamics of the human infant gut microbiome in development and in progression toward type 1 diabetes. Cell Host & Microbe 17 260–273.
* Kuczynski et al. (2010) Kuczynski, J., Costello, E. K., Nemergut, D. R., Zaneveld, J., Lauber, C. L., Knights, D., Koren, O., Fierer, N., Kelley, S. T., Ley, R. E. et al. (2010). Direct sequencing of the human microbiome readily reveals community differences. Genome Biology 11 210.
* Li et al. (2010) Li, Y., Hsing, T. et al. (2010). Uniform convergence rates for nonparametric regression and principal component analysis in functional/longitudinal data. The Annals of Statistics 38 3321–3351.
* Liechty, Liechty and Müller (2004) Liechty, J. C., Liechty, M. W. and Müller, P. (2004). Bayesian correlation estimation. Biometrika 91 1–14.
* Lloyd-Price et al. (2019) Lloyd-Price, J., Arze, C., Ananthakrishnan, A. N., Schirmer, M., Avila-Pacheco, J., Poon, T. W., Andrews, E., Ajami, N. J., Bonham, K. S., Brislawn, C. J. et al. (2019). Multi-omics of the gut microbial ecosystem in inflammatory bowel diseases. Nature 569 655–662.
* Morton et al. (2019) Morton, J. T., Aksenov, A. A., Nothias, L. F., Foulds, J. R., Quinn, R. A., Badri, M. H., Swenson, T. L., Van Goethem, M. W., Northen, T. R., Vazquez-Baeza, Y. et al. (2019). Learning representations of microbe–metabolite interactions. Nature Methods 16 1306–1314.
* Nash (1990) Nash, J. C. (1990). The Cholesky decomposition. In Compact Numerical Methods for Computers: Linear Algebra and Function Minimisation, 2nd ed.
* Peng and Paul (2009) Peng, J. and Paul, D. (2009). A geometric approach to maximum likelihood estimation of the functional principal components from sparse longitudinal data. Journal of Computational and Graphical Statistics 18 995–1015.
* Ramsay and Silverman (1997) Ramsay, J. and Silverman, B. W. (1997). Functional Data Analysis. Springer Series in Statistics.
* Ramsay and Silverman (2007) Ramsay, J. O. and Silverman, B. W. (2007). Applied Functional Data Analysis: Methods and Case Studies. Springer.
* Ranjan et al. (2016) Ranjan, R., Rani, A., Metwally, A., McGee, H. S. and Perkins, D. L. (2016). Analysis of the microbiome: Advantages of whole genome shotgun versus 16S amplicon sequencing. Biochemical and Biophysical Research Communications 469 967–977.
* Rice and Silverman (1991) Rice, J. A. and Silverman, B. W. (1991). Estimating the mean and covariance structure nonparametrically when the data are curves. Journal of the Royal Statistical Society: Series B (Methodological) 53 233–243.
* Sberro et al. (2019) Sberro, H., Fremin, B. J., Zlitni, S., Edfors, F., Greenfield, N., Snyder, M. P., Pavlopoulos, G. A., Kyrpides, N. C. and Bhatt, A. S. (2019). Large-scale analyses of human microbiomes reveal thousands of small, novel genes. Cell 178 1245–1259.
* Sharon et al. (2019) Sharon, G., Cruz, N. J., Kang, D.-W., Gandal, M. J., Wang, B., Kim, Y.-M., Zink, E. M., Casey, C. P., Taylor, B. C., Lane, C. J. et al. (2019). Human gut microbiota from autism spectrum disorder promote behavioral symptoms in mice. Cell 177 1600–1618.
* Stewart et al. (2018) Stewart, C. J., Ajami, N. J., O'Brien, J. L., Hutchinson, D. S., Smith, D. P., Wong, M. C., Ross, M. C., Lloyd, R. E., Doddapaneni, H., Metcalf, G. A. et al. (2018). Temporal development of the gut microbiome in early childhood from the TEDDY study. Nature 562 583–588.
* Vatanen et al. (2018) Vatanen, T., Franzosa, E. A., Schwager, R., Tripathi, S., Arthur, T. D., Vehik, K., Lernmark, Å., Hagopian, W. A., Rewers, M. J., She, J.-X. et al. (2018). The human gut microbiome in early-onset type 1 diabetes from the TEDDY study. Nature 562 589–594.
* Vehtari, Gelman and Gabry (2017) Vehtari, A., Gelman, A. and Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing 27 1413–1432.
* Yao, Müller and Wang (2005) Yao, F., Müller, H.-G. and Wang, J.-L. (2005). Functional data analysis for sparse longitudinal data. Journal of the American Statistical Association 100 577–590.
Bochner-Riesz Means Convergence of Prolate Spheroidal Series and Their
Extensions
Mourad Boulsane (a) (corresponding author; Email: <EMAIL_ADDRESS>) and Ahmed Souabni (a)
(a) Carthage University, Faculty of Sciences of Bizerte, Department of
Mathematics, Jarzouna, 7021, Tunisia.
###### Abstract
In this paper, we study the $L^{p}$-Bochner-Riesz mean summability problem
related to the spectrum of some particular Sturm-Liouville operators in the
weighted $L^{p}([a,b],\omega).$ Our purpose is to establish suitable
conditions under which the Bochner-Riesz expansion of a function $f\in
L^{p}([a,b],\omega)$, $1<p<\infty$, in two generalisations of Slepian's basis
converges to $f$ in $L^{p}([a,b],\omega)$.
Keywords: Bochner-Riesz mean convergence, eigenfunctions and eigenvalues,
prolate spheroidal wave functions.
2010 Mathematics Subject Classification. 42C10, 41A60.
## 1 Introduction
The $L^{p}$-Bochner-Riesz mean convergence of orthogonal series has attracted
special attention for several decades. This kind of convergence is
briefly described as follows. Let $1\leq p<\infty$, $a,b\in\mathbb{R}$, and
$\\{\varphi_{n}\\}$ an orthonormal set of eigenfunctions of a positive self-
adjoint differential operator $\mathcal{L}$ associated with eigenvalues
$\chi_{n}$ on a weighted Hilbert space $L^{2}(I,\omega)$, where $\omega$ is a
positive bounded weight function. We define the expansion coefficients of
$f\in L^{p}([a,b],\omega)$ by
$a_{n}(f)=\int_{a}^{b}f(x)\varphi_{n}(x)\omega(x)dx.$ The orthonormal set
$\\{\varphi_{n}\\}$ is said to have the Bochner-Riesz mean convergence of
order $p$ over the Banach space $L^{p}(I,\omega)$ if for some suitable
$\delta>0$ and for all $f\in L^{p}(I,\omega),$ we have
$\lim_{R\to\infty}\int_{a}^{b}|f(x)-\Psi_{R}^{\delta}f(x)|^{p}\omega(x)dx=0,\mbox{
where
}\displaystyle\Psi_{R}^{\delta}f=\sum_{n=0}^{\infty}\Big{(}1-\frac{\chi_{n}}{R}\Big{)}^{\delta}_{+}a_{n}(f)\varphi_{n}.$
(1)
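As a numerical illustration of definition (1) (not part of the paper's formal development), the following sketch computes the Bochner-Riesz means $\Psi_{R}^{\delta}f$ for the normalized Legendre basis, whose elements are eigenfunctions of the Sturm-Liouville operator $\mathcal{L}y=-((1-x^{2})y^{\prime})^{\prime}$ on $(-1,1)$ with weight $\omega=1$ and eigenvalues $\chi_{n}=n(n+1)$, and checks that the $L^{2}$ error decreases as $R$ grows. The test function and all numerical parameters are illustrative choices.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Normalized Legendre polynomials phi_n(x) = sqrt((2n+1)/2) P_n(x) are
# eigenfunctions of L y = -((1 - x^2) y')' on (-1, 1) with weight w(x) = 1
# and eigenvalues chi_n = n (n + 1).
def bochner_riesz(f, R, delta, n_terms=60, n_quad=400):
    x, w = leggauss(n_quad)                      # Gauss-Legendre nodes/weights
    fx = f(x)
    out = np.zeros_like(x)
    for n in range(n_terms):
        chi = n * (n + 1)
        mult = max(1.0 - chi / R, 0.0) ** delta  # (1 - chi_n / R)_+^delta
        if mult == 0.0:
            break                                # chi_n is increasing in n
        coeffs = np.zeros(n + 1)
        coeffs[n] = np.sqrt((2 * n + 1) / 2)
        phi = legval(x, coeffs)                  # phi_n at the nodes
        a_n = np.sum(w * fx * phi)               # a_n(f) = <f, phi_n>
        out += mult * a_n * phi
    return x, w, out

f = lambda x: np.exp(x) * np.cos(2 * x)          # an arbitrary smooth test function
errors = []
for R in (50, 200, 800):
    x, w, approx = bochner_riesz(f, R, delta=1.0)
    errors.append(np.sqrt(np.sum(w * (f(x) - approx) ** 2)))  # L^2 error
print(errors)
```

For a smooth $f$ and $\delta=1$ the error decays roughly like $1/R$, driven by the deviation of the multipliers $(1-\chi_{n}/R)_{+}^{\delta}$ from $1$ on the low modes.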
To the best of our knowledge, M. Riesz was the first, in 1911, to investigate
this problem in some special cases. The problem we study concerns a modified
summability method of the Riesz mean, introduced by Salomon Bochner and given
by (1). In [7], S. Bochner studied this problem for the trigonometric
exponential case in higher dimensions. Furthermore, in [13], the authors
proved Bochner-Riesz mean convergence for the orthonormal system of
eigenfunctions of a second-order elliptic differential operator on a compact
$N$-dimensional manifold $M$ for $1\leq p\leq\frac{2(N+1)}{N+3}$ and
$\delta>N\left|\frac{1}{p}-\frac{1}{2}\right|-\frac{1}{2}$. Mauceri and Müller
also studied this problem in [20] and [22] in the framework of the
Heisenberg group. The problem has been analysed for Fourier-Bessel expansion
series in [10] and [11]. Moreover, in [8], the authors solved this
question for the sublaplacian on the sphere $S^{2n-1}$ in the complex
$n$-dimensional space $\mathbb{C}^{n}$, where it was shown that convergence
holds for $\delta>(2n-1)\Big{|}\frac{1}{2}-\frac{1}{p}\Big{|}$. The weak
type of convergence has also been investigated: we say that an
orthonormal family $\\{\varphi_{n}\\}$ of $L^{p}(I,\omega)$ has weak
Bochner-Riesz mean convergence if $\Psi_{R}^{\delta}f$ converges to $f$ almost
everywhere for every $f\in L^{p}(I,\omega)$. This problem has been solved in
some special cases of orthonormal systems, such as the Jacobi and Laguerre
polynomials in [21], and for the eigenfunctions of the Hermite operator in
higher dimensions in [9].
In this work, we extend the $L^{p}$-Bochner-Riesz mean convergence to the
circular and the generalized (or weighted) prolate spheroidal wave functions,
denoted by CPSWFs and GPSWFs, respectively. These two families are
defined, respectively, as the eigenfunctions of the operators
$\mathcal{H}_{c}^{\alpha}f(x)=\int_{0}^{1}\sqrt{cxy}J_{\alpha}(cxy)f(y)dy,\quad\mathcal{F}_{c}^{(\alpha)}f(x)=\int_{-1}^{1}e^{icxy}f(y)(1-y^{2})^{\alpha}\,dy,$
where $\alpha>-1/2$ and $c>0$ are real numbers. These two sets of orthonormal
functions are characterized as solutions of certain Sturm-Liouville problems.
The second family, the weighted (sometimes called generalized) prolate
spheroidal wave functions (GPSWFs), was introduced by Wang-Zhang [30]. Note
that the classical PSWFs correspond to the special case of the GPSWFs with
$\alpha=0.$
Our aim in this paper is to prove the $L^{p}$-convergence of the Bochner-Riesz
mean expansion in the GPSWFs and CPSWFs bases.
This work is organised as follows. In section 2, we give some mathematical
preliminaries on Sturm-Liouville theory and some properties of the CPSWFs and
GPSWFs. Note that these functions can be considered as generalizations of the
spherical Bessel functions $j_{n}^{(\alpha)}$ and Gegenbauer’s polynomials
$\widetilde{P}_{n}^{(\alpha)}$, respectively. In section 3, we state our two
main theorems, and sections 4 and 5 are devoted, respectively, to the proofs
of the sufficient and necessary conditions of the main results.
## 2 Mathematical preliminaries
In this section, we give some mathematical preliminaries that will be used
frequently in the proofs of the different results of this work.
### 2.1 Some facts about Sturm-Liouville theory
The Sturm-Liouville differential operator is defined as follows, see for
example [1],
$\mathcal{L}y(x)=\frac{d}{dx}[p(x)y^{\prime}(x)]+q(x)y(x),\quad x\in I=(a,b).$
(2)
with $r=\frac{1}{p},\,q\in L^{1}(I,\mathbb{R}).$ The Sturm-Liouville eigenvalue
problem is given by the following differential equation:
$\mathcal{L}u(x)=-\chi\omega(x)u(x),\quad\omega\in L^{1}(I,\mathbb{R}).$ (3)
That is
$\frac{d}{dx}\Big{[}p(x)\frac{du}{dx}\Big{]}+q(x)u(x)+\chi\omega(x)u(x)=0,\quad
x\in I.$ (4)
Note that a Sturm-Liouville operator satisfies the following properties,
1. 1.
$u\mathcal{L}v-v\mathcal{L}u=\Big{[}p(uv^{\prime}-vu^{\prime})\Big{]}^{\prime}$
( Lagrange’s identity )
2. 2.
The eigenvalues of $\mathcal{L}$ are real and form an infinite countable set
$\chi_{0}<\chi_{1}<\cdots<\chi_{n}<\cdots$ with
$\lim_{n\rightarrow+\infty}\chi_{n}=+\infty.$
3. 3.
For each eigenvalue $\chi_{n}$ there exists an eigenfunction $\phi_{n}$ having
$n$ zeros in $(a,b).$
4. 4.
Eigenfunctions corresponding to different eigenvalues are orthogonal with
respect to the following inner product
${\left\langle{f,g}\right\rangle}_{\omega}=\int_{a}^{b}f(x)g(x)\omega(x)dx,\quad
f,g\in L^{2}(I,\omega).$
In the sequel, we assume that $\omega(x)\geq 0$, for $x\in(a,b).$
### 2.2 Some facts about GPSWFs and CPSWFs
We first recall that, for $c>0$, the prolate spheroidal wave functions (PSWFs),
denoted $\psi_{n,c}$, were introduced by D. Slepian as solutions of the
following energy maximization problem:
$\mbox{ Find }f=\arg\max_{f\in
B_{c}}\frac{\int_{-1}^{1}|f(t)|^{2}dt}{\int_{\mathbb{R}}|f(t)|^{2}dt},$
where $B_{c}$ is the classical Paley-Wiener space, defined by
$B_{c}=\left\\{f\in L^{2}(\mathbb{R}),\,\,\mbox{Support
}\widehat{f}\subseteq[-c,c]\right\\}.$ (5)
Here, $\widehat{f}$ is the Fourier transform of $f\in L^{2}(\mathbb{R}).$ It
has been shown that the PSWFs are also eigenfunctions of the integral operator
with sinc kernel. A breakthrough in the theory of Slepian functions is due to
Slepian, Pollak and Landau, who proved that the PSWFs are also eigenfunctions
of a Sturm-Liouville operator by establishing a commutativity property. For more
details about Slepian’s functions we refer the reader to [25, 26, 27]. In this
work we are interested in two generalizations of the PSWFs.
The first family is the circular prolate spheroidal wave functions (CPSWFs),
or radial part of the 2d-Slepian functions, introduced by D. Slepian [27] as
solutions of the following problem:
$\mbox{ Find }f=\arg\max_{f\in
HB^{\alpha}_{c}}\frac{\int_{0}^{1}|f(t)|^{2}dt}{\int_{0}^{\infty}|f(t)|^{2}dt},$
where $HB^{\alpha}_{c}$ is the Hankel Paley-Wiener space, defined by
$HB^{\alpha}_{c}=\left\\{f\in L^{2}(\mathbb{R}),\,\,\mbox{Support
}\mathcal{H}^{\alpha}f\subseteq[-c,c]\right\\}.$ (6)
Here the Hankel transform $\mathcal{H}^{\alpha}$ is defined, for $f\in
L^{1}(0,\infty)$, by
$\mathcal{H}^{\alpha}f(x)=\int_{0}^{\infty}\sqrt{xy}J_{\alpha}(xy)f(y)dy.$
Here $J_{\alpha}(\cdot)$ is the Bessel function of the first kind of order
$\alpha>-1/2$. Like the Fourier transform, $\mathcal{H}^{\alpha}$ can be
extended to a unitary operator on $L^{2}(0,\infty)$. The CPSWFs are also the
band-limited eigenfunctions of the finite Hankel transform
$\mathcal{H}_{c}^{\alpha}$ defined on $L^{2}(0,1)$ with kernel
$H_{c}^{\alpha}(x,y)=\sqrt{cxy}J_{\alpha}(cxy)$ (see for example [27]). That is
$\mathcal{H}_{c}^{\alpha}(\varphi^{\alpha}_{n,c})=\mu_{n,\alpha}(c)\varphi^{\alpha}_{n,c}.$
(7)
In his pioneering work [27], D. Slepian showed that the compact integral
operator $\mathcal{H}_{c}^{\alpha}$ commutes with the following Sturm-
Liouville differential operator $\mathcal{L}^{\alpha}_{c}$, defined on
$C^{2}([0,1])$ by
$\mathcal{L}_{c}^{\alpha}(\phi)=-\dfrac{d}{dx}\left[(1-x^{2})\dfrac{d}{dx}\phi\right]+\left(c^{2}x^{2}-\dfrac{\dfrac{1}{4}-\alpha^{2}}{x^{2}}\right)\phi.$
(8)
Hence, $\varphi^{\alpha}_{n,c}$ is the $n-$th bounded eigenfunction of the
positive self-adjoint operator $\mathcal{L}_{c}^{\alpha}$ associated with the
eigenvalue $\chi_{n,\alpha}(c),$ that is
$-\dfrac{d}{dx}\left[(1-x^{2})\dfrac{d}{dx}\varphi^{\alpha}_{n,c}(x)\right]+\left(c^{2}x^{2}-\dfrac{\dfrac{1}{4}-\alpha^{2}}{x^{2}}\right)\varphi^{\alpha}_{n,c}(x)=\chi_{n,\alpha}(c)\varphi^{\alpha}_{n,c}(x),\quad
x\in[0,1].$ (9)
The orthonormal family $(\varphi_{n,c}^{\alpha})_{n}$ forms an orthonormal
basis of $L^{2}(0,1)$, and the associated eigenvalues $\chi_{n,\alpha}(c)$
satisfy the following inequalities (see [27]):
$(2n+\alpha+1/2)(2n+\alpha+3/2)\leq\chi_{n,\alpha}(c)\leq(2n+\alpha+1/2)(2n+\alpha+3/2)+c^{2}$
(10)
The second family we consider in this work is the weighted (sometimes called
generalized) prolate spheroidal wave functions, introduced by Wang-Zhang [30]
as solutions of a Sturm-Liouville problem, or equivalently as eigenfunctions
of an integral operator. The GPSWFs are also solutions of the following
problem, as given in [18]:
$\mbox{Find }f={\displaystyle arg\max_{f\in
B^{\alpha}_{c}}\frac{\|f\|^{2}_{L^{2}_{\omega_{\alpha}}(I)}}{\|\widehat{f}\|^{2}_{L^{2}(\omega_{-\alpha}(\frac{\cdot}{c}))}}},$
where $\omega_{\alpha}(x)=(1-x^{2})^{\alpha}$ and $B^{(\alpha)}_{c}$ is the
restricted Paley-Wiener space, defined by
$B_{c}^{(\alpha)}=\\{f\in L^{2}(\mathbb{R}),\,\,\mbox{Support
}\widehat{f}\subseteq[-c,c],\,\,\widehat{f}\in
L^{2}\big{(}(-c,c),\omega_{-\alpha}(\frac{\cdot}{c})\big{)}\\}.$
More precisely, the GPSWFs are the eigenfunctions of the weighted finite
Fourier transform operator $\mathcal{F}_{c}^{(\alpha)}$ defined by
$\mathcal{F}_{c}^{(\alpha)}f(x)=\int_{-1}^{1}e^{icxy}f(y)\,\omega_{\alpha}(y)\,\mathrm{d}y.$
(11)
It is well known, (see [18, 30]) that they are also eigenfunctions of the
compact and positive operator
$\mathcal{Q}_{c}^{(\alpha)}=\frac{c}{2\pi}\mathcal{F}_{c}^{({\alpha})^{*}}\circ\mathcal{F}_{c}^{(\alpha)}$
which is defined on $L^{2}(I,\omega_{\alpha})$ by
$\mathcal{Q}_{c}^{(\alpha)}g(x)=\int_{-1}^{1}\frac{c}{2\pi}\mathcal{K}_{\alpha}(c(x-y))g(y)\omega_{\alpha}(y)\,dy$
(12)
Here,
$\mathcal{K}_{\alpha}(x)=\sqrt{\pi}2^{\alpha+1/2}\Gamma(\alpha+1)\frac{J_{\alpha+1/2}(x)}{x^{\alpha+1/2}}.$
It has been shown in [18, 30] that the last two integral operators commute
with the following Sturm-Liouville operator $\mathcal{L}_{c}^{(\alpha)}$
defined on $C^{2}[-1,1]$ by
$\mathcal{L}_{c}^{(\alpha)}(f)(x)=-\frac{1}{\omega_{\alpha}(x)}\frac{d}{dx}\left[\omega_{\alpha}(x)(1-x^{2})f^{\prime}(x)\right]+c^{2}x^{2}f(x).$
(13)
Also, note that the $(n+1)-$th eigenvalue $\chi_{n,\alpha}(c)$ of
$\mathcal{L}_{c}^{(\alpha)}$ satisfies the following classical inequalities,
$n(n+2\alpha+1)\leq\chi_{n,\alpha}(c)\leq n(n+2\alpha+1)+c^{2},\quad\forall
n\geq 0.$ (14)
## 3 Statement of results
In this section, we state the main results of this paper, which we will
prove in the following sections. As mentioned before, the main issue studied
in this paper is to obtain necessary and sufficient conditions for the
convergence of the Bochner-Riesz expansion of a function $f$ in the GPSWFs and
CPSWFs bases. Let us start with the case of the GPSWFs in the following
theorem.
###### Theorem 1.
Let $0\leq\alpha<3/2$, let $\delta$ and $c$ be two positive numbers and let
$(\psi_{n,c}^{(\alpha)})_{n\geq 0}$ be the family of weighted prolate
spheroidal wave functions. For a smooth function $f$ on $I=(-1,1)$, we define
$\Psi_{R}^{\delta}f=\sum_{n=0}^{\infty}\left(1-\frac{\chi_{n,\alpha}(c)}{R}\right)^{\delta}_{+}{\left\langle{f,\psi_{n,c}^{(\alpha)}}\right\rangle}_{L^{2}(I,\omega_{\alpha})}\psi_{n,c}^{(\alpha)}.$
Then, for every $1\leq p<\infty$, $\Psi^{\delta}_{R}$ can be extended to a
bounded operator $L^{p}(I,\omega_{\alpha})\to L^{p}(I,\omega_{\alpha})$.
Further, $\Psi^{\delta}_{R}f$ is uniformly bounded if, and only if,
$\delta>\max\\{\frac{\gamma_{\alpha}(p^{\prime})}{2},0\\}$ and
$p\not=p_{0}=2-\frac{1}{\alpha+3/2}$, where
$\gamma_{\alpha}(p)=\begin{cases}0&\mbox{ if }1<p<p^{\prime}_{0}\\\
\epsilon&\mbox{ if }p=p^{\prime}_{0}\\\
2(\alpha+1)\left[\frac{1}{2}-\frac{1}{p}\right]-\frac{1}{2}&\mbox{ if
}p>p^{\prime}_{0}\\\ \alpha+1&\mbox{ if }p=1\end{cases}.$
and $\epsilon$ is an arbitrary real number. Note that $p^{\prime}$ denotes
here the dual exponent of $p$.
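The exponents appearing in Theorem 1 can be checked with exact rational arithmetic. The following sketch (an illustration, not part of the paper) verifies that the $p>p^{\prime}_{0}$ branch of $\gamma_{\alpha}$ vanishes exactly at $p=p^{\prime}_{0}$, so the piecewise definition is continuous there up to the arbitrary $\epsilon$, and that for $\alpha=0$ one has $p_{0}=4/3$ and $p^{\prime}_{0}=4$:

```python
from fractions import Fraction

# p0 = 2 - 1/(alpha + 3/2) from Theorem 1, its dual exponent p0' = p0/(p0 - 1),
# and the branch gamma_alpha(p) = 2(alpha+1)(1/2 - 1/p) - 1/2 valid for p > p0'.
def p0(alpha):
    return 2 - 1 / (alpha + Fraction(3, 2))

def dual(p):
    return p / (p - 1)

# Check several alpha in [0, 3/2), the range covered by Theorem 1.
for alpha in (Fraction(0), Fraction(1, 2), Fraction(1), Fraction(5, 4)):
    p0d = dual(p0(alpha))
    # The p > p0' branch of gamma_alpha, evaluated at p = p0', is exactly 0:
    gamma_at_p0d = 2 * (alpha + 1) * (Fraction(1, 2) - 1 / p0d) - Fraction(1, 2)
    assert gamma_at_p0d == 0

print(p0(Fraction(0)), dual(p0(Fraction(0))))  # 4/3 4
```

In particular, for the classical PSWFs ($\alpha=0$) the special exponent $p^{\prime}_{0}=4$ matches the breakpoint $p=4$ appearing in Theorem 2.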
###### Remark 1.
The sufficient condition in both the GPSWFs and CPSWFs cases remains valid
for all $\alpha>-1/2$.
###### Remark 2 (Two special cases).
Recall that $\psi^{(\alpha)}_{n,0}=\widetilde{P}^{(\alpha,\alpha)}_{n}$, so
we recover the same result for the normalized Gegenbauer polynomials.
Note that both conditions $(A)$ and $(B)$, defined in the proof of the last
theorem, remain valid for $\widetilde{P}_{n}^{(\alpha,\beta)}$ with
exactly the same proof; since the transference theorem, which is
the key step of the necessary condition, has been proved in [16], the last
result is also valid for Jacobi polynomials for all $\alpha,\beta>-1/2$. For
$\alpha=0$ and $c>0$, $\psi^{0}_{n,c}=\psi_{n,c}$ is the classical
prolate spheroidal wave function, which satisfies the Bochner-Riesz mean
convergence if and only if
$\delta>\max\\{0,\frac{\gamma_{0}(p^{\prime})}{2}\\}.$
Let us now focus on the circular case.
###### Theorem 2.
Let $\alpha\geq 1/2$, $c>0$ and let $(\varphi_{n,c}^{(\alpha)})_{n\geq 0}$ be
the family of Hankel prolate spheroidal wave functions. For a smooth function
$f$ on $I=(0,1)$, we define
$\Psi_{R}^{\delta}f=\sum_{n=0}^{\infty}\left(1-\frac{\chi_{n,\alpha}(c)}{R}\right)^{\delta}_{+}{\left\langle{f,\varphi_{n,c}^{(\alpha)}}\right\rangle}_{L^{2}(0,1)}\varphi_{n,c}^{(\alpha)}.$
Then, for every $1\leq p<\infty$, $\Psi^{\delta}_{R}$ can be extended to a
bounded operator $L^{p}(0,1)\to L^{p}(0,1)$. Further, $\Psi^{\delta}_{R}f$ is
uniformly bounded if, and only if,
$\delta>\max\\{\frac{\gamma(p^{\prime})}{2},0\\}$, where
$\gamma(p)=\begin{cases}\frac{1}{p}-\frac{1}{2}&\mbox{ if }1<p<4\\\
\epsilon-\frac{1}{4}&\mbox{ if }p=4\\\
\frac{1}{3}\left[\frac{1}{p}-1\right]&\mbox{ if }p>4\\\ 1&\mbox{ if
}p=1\end{cases}.$
## 4 Proof of sufficient condition
Let $(I,\omega)$ be a measure space such that $\omega$ is a bounded weight
function. We denote by $p^{\prime}=\frac{p}{p-1}$ the dual index of $p$.
Throughout this section, $\mathcal{L}$ denotes a Sturm-Liouville operator and
$\varphi_{n}$ (respectively $\lambda_{n}$) the sequence of associated
eigenfunctions (respectively eigenvalues). The Riesz means of index $\delta>0$
associated with $\mathcal{L}$ of a function
$f\in\mathcal{C}^{\infty}(I,\mathbb{R})$ are then defined as
$\Psi^{\delta}_{R}f=\sum_{n=0}^{\infty}\Big{(}1-\frac{\lambda_{n}}{R}\Big{)}^{\delta}_{+}a_{n}(f)\varphi_{n}\quad\mbox{with}\quad
a_{n}(f)=\int_{I}f(y)\varphi_{n}(y)d\mu(y).$ (15)
$\Psi^{\delta}_{R}f$ can also be written as
$\Psi^{\delta}_{R}f(x)=\int_{I}K_{R}^{\delta}(x,y)f(y)d\mu(y)\quad\mbox{where}\quad
K_{R}^{\delta}(x,y)=\sum_{n=0}^{\infty}\Big{(}1-\frac{\lambda_{n}}{R}\Big{)}^{\delta}_{+}\varphi_{n}(x)\varphi_{n}(y)$
Our aim in this section is to prove simultaneously the sufficient conditions
of our two main theorems. More precisely, we will define several conditions on
$\varphi_{n}$ that ensure the convergence of $\Psi_{R}^{\delta}f$ to $f$
in the $L^{p}$ norm as $R\to\infty$, and verify that both families satisfy
these conditions. Assume that $\varphi_{n}$ satisfies the following conditions
:
* $(A)$
For every $1\leq p\leq\infty$ and every $n$, $\varphi_{n}\in L^{p}(I,\omega)$.
Further, we assume that there are constants $C>0$ and $\gamma(p)\geq 0$ such
that ${\left\|{\varphi_{n}}\right\|}_{L^{p}(\mu)}\leq Cn^{\gamma(p)}$.
* $(B)$
The sequence $(\lambda_{n})$ of the eigenvalues of the operator $\mathcal{L}$
satisfies the following properties
1. 1.
$\displaystyle\sum_{\lambda_{n}\in(m,M)}1\leq C(M-m)$ for all $0\leq m<M$.
2. 2.
There exists $\varepsilon>0$ such that
$\lambda_{n}\geq Cn^{\varepsilon}.$
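Eigenvalue bounds of the type (10) and (14) are exactly what make condition $(B)$ checkable. As a quick numerical sanity check (not from the paper), take the hypothetical model $\chi_{n}=n(n+2\alpha+1)+c^{2}/2$, which lies in the band allowed by (14), and verify both parts of $(B)$ with $\varepsilon=2$:

```python
import numpy as np

# Hypothetical eigenvalue model consistent with inequality (14):
#   n(n + 2 alpha + 1) <= chi_n <= n(n + 2 alpha + 1) + c^2.
alpha, c = 0.5, 10.0
n = np.arange(0, 5000)
chi = n * (n + 2 * alpha + 1) + c ** 2 / 2.0   # midpoint of the allowed band

# (B2): chi_n >= C n^epsilon with epsilon = 2 and C = 1.
assert np.all(chi >= n ** 2)

# (B1): #{ chi_n in (m, M) } <= C (M - m).  The gaps chi_{n+1} - chi_n
# = 2n + 2 alpha + 2 are bounded below by 2 alpha + 2 = 3 here, so C = 1
# works; we spot-check many random windows.
rng = np.random.default_rng(0)
for _ in range(200):
    m, M = np.sort(rng.uniform(0, float(chi[-1]), size=2))
    count = np.count_nonzero((chi > m) & (chi < M))
    assert count <= (M - m) + 1
print("condition (B) holds for this model")
```

The same argument works for any sequence trapped as in (10) or (14): quadratic growth gives $\varepsilon=2$, and gaps of size $\sim 2n$ give the interval-counting bound $(B1)$.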
First, we give a meaning to $\Psi^{\delta}_{R}f$ for every
$f\in L^{p}(\mu)$. Indeed,
${\left\|{K^{\delta}_{R}}\right\|}_{L^{p}(\mu)\otimes
L^{p^{\prime}}(\mu)}\leq\sum_{\lambda_{n}<R}{\left\|{\varphi_{n}}\right\|}_{p}{\left\|{\varphi_{n}}\right\|}_{p^{\prime}}\leq\sum_{\lambda_{n}<R}n^{\gamma(p)+\gamma(p^{\prime})}\leq
CR^{\frac{\left(\gamma(p)+\gamma(p^{\prime})\right)}{\varepsilon}+1},$
so that the integral operator $\Psi_{R}^{\delta}$ can be extended to a
continuous operator $L^{p}(\mu)\to L^{p}(\mu)$ with
${\left\|{\Psi_{R}^{\delta}}\right\|}_{L^{p}\to
L^{p}}\leq{\left\|{K^{\delta}_{R}}\right\|}_{L^{p}(\mu)\otimes
L^{p^{\prime}}(\mu)}.$
The following theorem is one of the main results of this paper.
###### Theorem 3.
With the above notation and under conditions $(A)$ and $(B)$ with
$\delta>\delta(p)=\max\\{\frac{\gamma(p^{\prime})}{\varepsilon},0\\}$, there
exists a constant $C>0$ satisfying the following inequality
${\left\|{\Psi_{R}^{\delta}}\right\|}_{(L^{p}(I,w),L^{p}(I,w))}\leq C.$ (16)
The following lemma will be used in the proof of the previous theorem.
###### Lemma 1.
Let $1\leq p\leq 2$ then for every $f\in L^{p}(I,\omega),$ we have
${\left\|{\sum_{\lambda_{n}\in(m,M)}a_{n}(f)\varphi_{n}}\right\|}_{L^{2}(I,\omega)}\leq
C(p)M^{\frac{\gamma(p^{\prime})}{\varepsilon}}(M-m)^{\frac{1}{2}}{\left\|{f}\right\|}_{L^{p}(I,\omega)}.$
(17)
###### Proof.
By orthogonality and Hölder’s inequality, we have
$\displaystyle{\left\|{\sum_{\lambda_{n}\in(m,M)}a_{n}(f)\varphi_{n}}\right\|}^{2}_{L^{2}(I,\omega)}$
$\displaystyle=$
$\displaystyle\sum_{\lambda_{n}\in(m,M)}a_{n}^{2}(f)\leq\sum_{\lambda_{n}\in(m,M)}{\left\|{\varphi_{n}}\right\|}^{2}_{L^{p^{\prime}}(I,\omega)}{\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}$
From condition $(A)$, we have
${\left\|{\varphi_{n}}\right\|}_{L^{p^{\prime}}(I,\omega)}\leq
n^{\gamma(p^{\prime})}$. We also obtain by using condition ($B1$)
$\displaystyle{\left\|{\sum_{\lambda_{n}\in(m,M)}a_{n}(f)\varphi_{n}}\right\|}^{2}_{L^{2}(I,\omega)}$
$\displaystyle\leq$
$\displaystyle\sum_{\lambda_{n}\in(m,M)}n^{2\gamma(p^{\prime})}{\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}\leq
C\sum_{\lambda_{n}\in(m,M)}\lambda_{n}^{\frac{2\gamma(p^{\prime})}{\varepsilon}}{\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}$
$\displaystyle\leq$ $\displaystyle
CM^{\frac{2\gamma(p^{\prime})}{\varepsilon}}\left(\sum_{\lambda_{n}\in(m,M)}1\right){\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}$
$\displaystyle\leq$ $\displaystyle
CM^{(\frac{2\gamma(p^{\prime})}{\varepsilon})}(M-m){\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}.$
Then one gets
${\left\|{\sum_{\lambda_{n}\in(m,M)}a_{n}(f)\varphi_{n}}\right\|}_{L^{2}(I,\omega)}\leq
CM^{(\frac{\gamma(p^{\prime})}{\varepsilon})}(M-m)^{\frac{1}{2}}{\left\|{f}\right\|}_{L^{p}(I,\omega)}.$
∎
###### Proof of Theorem 3.
We mention here that some parts of the proof of this theorem are
inspired by [8]. Without loss of generality, we may consider $1\leq p<2$ and
conclude by duality. To prove (16), we decompose the
multiplier $\Psi_{R}^{\delta}$. In order to do so, let
$\phi\in\mathcal{C}^{\infty}_{0}(\mathbb{R})$ with support on $(1/2,2)$ such
that $\displaystyle\sum_{k\in\mathbb{Z}}\phi(2^{k}t)=1$ and
$\displaystyle\phi_{0}(t)=1-\sum_{k=1}^{+\infty}\phi(2^{k}t)$ for all $t>0$.
We define
$\phi_{R,k}^{\delta}(t)=\left(1-\frac{t}{R}\right)_{+}^{\delta}\phi\left(2^{k}(1-\frac{t}{R})\right).$
We recall that, from [8], this last function has the following properties :
1. 1.
$\mbox{supp}\left(\phi_{R,k}^{\delta}\right)\subseteq(R(1-2^{-k+1}),R(1-2^{-k-1}))$,
2. 2.
$\sup_{t\in\mathbb{R}}|\phi_{R,k}^{\delta}(t)|\leq C2^{-k\delta}$,
3. 3.
$\forall N\geq 0,$ there exists $C_{N}>0$ such that
$|\partial_{t}^{N}\phi_{R,k}^{\delta}(t)|\leq
C_{N}\Big{(}\frac{2^{k}}{R}\Big{)}^{N}.$
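A concrete $\phi$ with these properties exists by the standard Littlewood-Paley construction. The following sketch (illustrative, not from the paper) builds such a dyadic partition of unity from a smooth bump supported in $(1/2,2)$ and numerically verifies properties 1 and 2 above for $\phi_{R,k}^{\delta}$; the bump, $R$, $\delta$ and the grids are arbitrary choices:

```python
import numpy as np

# A smooth bump h supported in (1/2, 2); normalizing over its dyadic
# translates yields phi with sum_k phi(2^k t) = 1 for t > 0.
def h(t):
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    inside = (t > 0.5) & (t < 2.0)
    s = t[inside]
    out[inside] = np.exp(-1.0 / ((s - 0.5) * (2.0 - s)))
    return out

def phi(t):
    t = np.asarray(t, dtype=float)
    total = sum(h(2.0 ** j * t) for j in range(-60, 61))  # common denominator
    return np.where(total > 0, h(t) / np.where(total > 0, total, 1.0), 0.0)

# Partition-of-unity check: sum_k phi(2^k t) = 1 on (0, 1).
t = np.linspace(1e-6, 0.9, 1000)
partition = sum(phi(2.0 ** k * t) for k in range(-5, 40))
assert np.allclose(partition, 1.0)

# phi_{R,k}^delta(t) = (1 - t/R)_+^delta phi(2^k (1 - t/R)):
R, delta = 100.0, 1.5
for k in (1, 3, 6):
    u = np.linspace(0.0, R, 20001)
    vals = np.maximum(1 - u / R, 0) ** delta * phi(2.0 ** k * (1 - u / R))
    # Property 2: sup |phi_{R,k}^delta| <= C 2^{-k delta}  (here C = 2^delta).
    assert vals.max() <= 2.0 ** (-k * delta) * 2.0 ** delta
    # Property 1: support inside (R(1 - 2^{-k+1}), R(1 - 2^{-k-1})).
    support = u[vals > 0]
    assert support.min() >= R * (1 - 2.0 ** (-k + 1)) - 1e-9
    assert support.max() <= R * (1 - 2.0 ** (-k - 1)) + 1e-9
print("dyadic multiplier properties verified")
```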
Furthermore, we denote by
$\Psi_{R,k}^{\delta}.f=\sum_{n=0}^{\infty}\phi_{R,k}^{\delta}(\lambda_{n})a_{n}(f)\varphi_{n}\qquad
k=1,2,\cdots$ (18)
Then, we have
$\displaystyle\Psi_{R}^{\delta}f$ $\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\left(1-\frac{\lambda_{n}}{R}\right)_{+}^{\delta}a_{n}(f)\varphi_{n}$
$\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\phi_{0}(1-\frac{\lambda_{n}}{R})\left(1-\frac{\lambda_{n}}{R}\right)_{+}^{\delta}a_{n}(f)\varphi_{n}+\sum_{n=0}^{\infty}\left(\sum_{k=1}^{+\infty}\phi(2^{k}(1-\frac{\lambda_{n}}{R}))\right)\left(1-\frac{\lambda_{n}}{R}\right)_{+}^{\delta}a_{n}(f)\varphi_{n}$
$\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\phi_{0}(1-\frac{\lambda_{n}}{R})\left(1-\frac{\lambda_{n}}{R}\right)_{+}^{\delta}a_{n}(f)\varphi_{n}+\sum_{k=1}^{\left[\frac{\log(R)}{\log(2)}\right]}\sum_{n=0}^{\infty}\phi_{R,k}^{\delta}(\lambda_{n})a_{n}(f)\varphi_{n}+\sum_{k=\left[\frac{\log(R)}{\log(2)}\right]+1}^{\infty}\sum_{n=0}^{\infty}\phi_{R,k}^{\delta}(\lambda_{n})a_{n}(f)\varphi_{n}$
$\displaystyle=$
$\displaystyle\psi_{R,0}^{\delta}f+\sum_{k=1}^{\left[\frac{\log(R)}{\log(2)}\right]}\Psi_{R,k}^{\delta}f+\mathcal{R}_{R}^{\delta}f.$
It is clear that the main term is the second one. With the same approach used
in [8], we will prove the following proposition :
###### Proposition 1.
Let $1\leq p<2$ and $\delta>\delta(p)=\frac{\gamma(p^{\prime})}{\varepsilon}$.
There exists $\beta>0$ such that for every $f\in L^{p}(I,w)$, we have
${\left\|{\Psi_{R,k}^{\delta}f}\right\|}_{L^{p}(I,w)}\leq
C2^{-k\beta}{\left\|{f}\right\|}_{L^{p}(I,w)},$ (19)
where $C$ is a constant independent of $R$ and $f$.
###### Proof.
Let $x_{0}=\frac{a+b}{2}\in(a,b)$ and $r=\frac{b-a}{4}>0$ such that
$(x_{0}-r,x_{0}+r)\subseteq(a,b).$ Note that, for every $1\leq
k\leq\left[\frac{\log(R)}{\log(2)}\right]=k_{R}$, we have
$r_{k}^{\alpha}=\left(\frac{2^{k}}{R}\right)^{\mu(p)}r<r$ where
$\mu(p)=\frac{(\frac{\gamma(p^{\prime})}{\varepsilon}+\frac{1}{2})}{(\frac{1}{p}-\frac{1}{2})}$
. So we notice that
$I=(a,b)=(x_{0}-r_{k}^{\alpha},x_{0}+r_{k}^{\alpha})\cup\\{y\in(a,b),|y-x_{0}|>r_{k}^{\alpha}\\}=I_{k,1}^{\alpha}\cup
I_{k,2}^{\alpha}$.
We start by providing an $L^{p}$ bound of
${\left\|{\Psi_{R,k}^{\delta}f}\right\|}_{L^{p}(I^{\alpha}_{k,1},\omega)}$. To
do so, we reduce the $L^{p}$ inequality (19) to a certain
$(L^{p},L^{2})$ inequality by means of the previous lemma.
Using the Parseval formula and the fact that
$\mbox{supp}\left(\phi_{R,k}^{\delta}\right)\subseteq(R_{k,1},R_{k,2})$, where
$R_{k,1}=R(1-2^{-k+1})$ and $R_{k,2}=R(1-2^{-k-1}),$ we have
$\displaystyle{\left\|{\Psi_{R,k}^{\delta}f}\right\|}^{2}_{L^{2}(I,w)}$
$\displaystyle=$
$\displaystyle{\left\|{\sum_{n=0}^{\infty}\phi_{R,k}^{\delta}(\lambda_{n})a_{n}(f)\varphi_{n}}\right\|}^{2}_{L^{2}(I,w)}$
$\displaystyle=$ $\displaystyle{\left\|{\sum_{R_{k,1}\leq\lambda_{n}\leq
R_{k,2}}\phi_{R,k}^{\delta}(\lambda_{n})a_{n}(f)\varphi_{n}}\right\|}^{2}_{L^{2}(I,w)},$
Using the previous lemma with $m=R_{k,1}$, $M=R_{k,2}$ and the fact that
$\displaystyle\sup_{t\in\mathbb{R}}|\phi_{R,k}^{\delta}(t)|\leq
C2^{-k\delta}$, one gets
$\displaystyle{\left\|{\Psi_{R,k}^{\delta}f}\right\|}^{2}_{L^{2}(I,w)}$
$\displaystyle\leq$ $\displaystyle
C2^{-2k\delta}{\left\|{\sum_{R_{k,1}\leq\lambda_{n}\leq
R_{k,2}}a_{n}(f)\varphi_{n}}\right\|}^{2}_{L^{2}(I,w)}$ $\displaystyle\leq$
$\displaystyle
C2^{-2k\delta}R^{(2\frac{\gamma(p^{\prime})}{\varepsilon})}\left(\frac{3R}{2^{k+1}}\right){\left\|{f}\right\|}^{2}_{L^{p}(I,w)}.$
Hence, we have
${\left\|{\Psi_{R,k}^{\delta}f}\right\|}_{L^{2}(I,w)}\leq
C2^{-k(\delta+\frac{1}{2})}R^{(\frac{\gamma(p^{\prime})}{\varepsilon})+\frac{1}{2})}{\left\|{f}\right\|}_{L^{p}(I,w)}.$
(20)
By combining Hölder inequality and (20), we obtain
$\displaystyle{\left\|{\Psi_{R,k}^{\delta}f}\right\|}_{L^{p}(I_{k,1}^{\alpha},w)}$
$\displaystyle\leq$
$\displaystyle(\mu(I_{k,1}))^{\frac{1}{p}-\frac{1}{2}}{\left\|{\Psi_{R,k}^{\delta}f}\right\|}_{L^{2}(I_{k,1}^{\alpha},w)}$
(21) $\displaystyle\leq$
$\displaystyle(2r_{k}^{\alpha})^{\frac{1}{p}-\frac{1}{2}}{\left\|{\Psi_{R,k}^{\delta}f}\right\|}_{L^{2}(I,w)}$
$\displaystyle\leq$ $\displaystyle
C2^{-k(\delta-\frac{\gamma(p^{\prime})}{\varepsilon})}{\left\|{f}\right\|}_{L^{p}(I,w)}.$
Let $\displaystyle
s_{R,k}^{\delta}(u,v)=\sum_{n=0}^{\infty}\phi_{R,k}^{\delta}(\lambda_{n})\varphi_{n}(u)\varphi_{n}(v)$
be the kernel of $\Psi_{R,k}^{\delta}$. It remains to estimate
$||\Psi_{R,k}^{\delta}f||_{L^{p}(I_{k,2}^{\alpha},w)}$; using the Schur test
together with the symmetry of $s_{R,k}^{\delta},$ it suffices
to prove the following inequality
$\sup_{u\in
I_{k,2}^{\alpha}}{\left\|{s_{R,k}^{\delta}(u,.)}\right\|}_{L^{1}(I_{k,2}^{\alpha})}\leq
C2^{-k\varepsilon}$
for some $\varepsilon>0$ and $C>0$ depending only on $p.$
We consider
$g_{R,k}^{\delta}(\lambda)=\left(1-\frac{\lambda^{2}}{R}\right)_{+}^{\delta}e^{\lambda^{2}/R}\phi(2^{k}(1-\frac{\lambda^{2}}{R})),$
which satisfies the following properties (see [8]):
1. 1.
For every non-negative integer $i$ there exists a constant $C_{i}$ such that
for all $s>0$
$\int_{|t|\geq s}|\hat{g}_{R,k}^{\delta}(t)|dt\leq
C_{i}s^{-i}R^{-i/2}2^{(i-\delta)k}$ (22)
2. 2.
${\left\|{g_{R,k}^{\delta}(\sqrt{\mathcal{L}})}\right\|}_{(L^{2},L^{2})}\leq
C2^{-k\delta}.$ (23)
For our purpose, we consider a positive self-adjoint operator
$\mathcal{L}$ on $L^{2}(\mathbb{R})$ such that the semigroup
$e^{-t\mathcal{L}}$, generated by $-\mathcal{L}$, has a kernel $p_{t}(x,y)$
obeying the Gaussian upper bound
$|p_{t}(u,v)|\leq\frac{C}{\sqrt{t}}\exp{\left(-\frac{|u-v|^{2}}{Ct}\right)}$
(24)
for some constant $C>0$ (see [14]).
For all $u\in\mathbb{R}$ and $t>0$, one gets the following estimate
${\left\|{p_{t}(u,.)}\right\|}_{L^{2}(\mathbb{R})}\leq C.$ (25)
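The free one-dimensional heat kernel is the model example of such a semigroup. This sketch (illustrative only, with arbitrary numerical parameters) checks the Gaussian upper bound (24) with the concrete constant $C=4$, and the semigroup property $e^{-s\mathcal{L}}e^{-t\mathcal{L}}=e^{-(s+t)\mathcal{L}}$ at the kernel level:

```python
import numpy as np

# Free heat kernel p_t(u, v) = exp(-(u - v)^2 / (4t)) / sqrt(4 pi t),
# i.e. the kernel of e^{-tL} for L = -d^2/dx^2 on the real line.
def p(t, u, v):
    return np.exp(-(u - v) ** 2 / (4 * t)) / np.sqrt(4 * np.pi * t)

# Gaussian upper bound (24) with C = 4: the prefactor 1/sqrt(4 pi) < 1 and
# the exponent matches exactly, so the bound holds for all t, u, v.
C = 4.0
rng = np.random.default_rng(1)
for _ in range(1000):
    t = rng.uniform(0.01, 5.0)
    u, v = rng.uniform(-10, 10, size=2)
    assert p(t, u, v) <= (1.0 / np.sqrt(t)) * np.exp(-(u - v) ** 2 / (C * t))

# Semigroup property at the kernel level: (p_s * p_t)(u, v) = p_{s+t}(u, v).
x = np.linspace(-60.0, 60.0, 24001)
dx = x[1] - x[0]
s, t = 0.3, 0.7
conv = np.sum(p(s, 0.0, x) * p(t, x, 1.0)) * dx
assert abs(conv - p(s + t, 0.0, 1.0)) < 1e-6
print("Gaussian bound and semigroup property verified")
```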
On the other hand, there exists $i_{0}\in\mathbb{N}$ such that
$2^{i_{0}-1}<R^{\mu(p)}<2^{i_{0}}$, and we can see that
$I_{k,2}^{\alpha}\subseteq\displaystyle\cup_{\mu(p)k-i_{0}\leq j\leq 0}D_{j}$,
where $D_{j}=\\{y,2^{j}r\leq|y-x_{0}|<2^{j+1}r\\}.$ Since $\mathcal{L}$ is a
positive self-adjoint operator, it is clear that
$\phi_{R,k}^{\delta}(\mathcal{L})=g_{R,k}^{\delta}(\sqrt{\mathcal{L}})\exp{\left(-\mathcal{L}/R\right)}.$
(26)
Hence one gets
$\displaystyle s_{R,k}^{\delta}(u,v)$ $\displaystyle=$ $\displaystyle
g_{R,k}^{\delta}(\sqrt{\mathcal{L}})\left(p_{1/R}(u,.)\right)(v)$
$\displaystyle=$ $\displaystyle
g_{R,k}^{\delta}(\sqrt{\mathcal{L}})\left(p_{1/R}(u,.)\chi_{\\{w,|x_{0}-w|<2^{j-1}r\\}}\right)(v)+g_{R,k}^{\delta}(\sqrt{\mathcal{L}})\left(p_{1/R}(u,.)\chi_{\\{w,|x_{0}-w|\geq
2^{j-1}r\\}}\right)(v)$ $\displaystyle=$ $\displaystyle
s_{R,k}^{\delta,1}(u,v)+s_{R,k}^{\delta,2}(u,v).$
Using the fact that $g_{R,k}^{\delta}$ is an even function, together with the
inversion formula, we have
$g_{R,k}^{\delta}(\sqrt{\lambda})=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\hat{g}_{R,k}^{\delta}(t)\cos{(t\sqrt{\lambda})}dt.$
Hence, we obtain
$\displaystyle s_{R,k}^{\delta,1}(u,v)$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\hat{g}_{R,k}^{\delta}(t)\cos{(t\sqrt{\mathcal{L}})}\left(p_{1/R}(u,.)\chi_{\\{w,|x_{0}-w|<2^{j-1}r\\}}\right)(v)dt.$
Moreover, the operator $\cos{(t\sqrt{\mathcal{L}})}$ is bounded on $L^{2}$,
with a kernel $\mathcal{K}_{t}$ whose support satisfies (see [14, 28])
$\mbox{Supp}\left(\mathcal{K}_{t}\right)=\\{(u,v)\in\mathbb{R}^{2},|u-v|\leq
c_{0}|t|\\}$
From (23), (25) and the previous analysis, one gets
$\displaystyle{\left\|{s_{R,k}^{\delta,1}(u,.)}\right\|}_{L^{1}(D_{j})}$
$\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2\pi}}{\left\|{\int_{\mathbb{R}}\hat{g}_{R,k}^{\delta}(t)\cos{(t\sqrt{\mathcal{L}})}\left(p_{1/R}(u,.)\chi_{\\{w,|x_{0}-w|<2^{j-1}r\\}}\right)dt}\right\|}_{L^{1}(D_{j})}$
$\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2\pi}}{\left\|{\int_{|t|>\frac{2^{j-1}r}{c_{0}}}\hat{g}_{R,k}^{\delta}(t)\cos{(t\sqrt{\mathcal{L}})}\left(p_{1/R}(u,.)\chi_{\\{w,|x_{0}-w|<2^{j-1}r\\}}\right)dt}\right\|}_{L^{1}(D_{j})}$
$\displaystyle\leq$
$\displaystyle\frac{\mu^{1/2}(D_{j})}{\sqrt{2\pi}}\int_{|t|>\frac{2^{j-1}r}{c_{0}}}|\hat{g}_{R,k}^{\delta}(t)|{\left\|{p_{1/R}(u,.)}\right\|}_{L^{2}(D_{j})}dt$
$\displaystyle\leq$
$\displaystyle\frac{C}{\sqrt{2\pi}}2^{\frac{j+1}{2}}\int_{|t|>\frac{2^{j-1}r}{c_{0}}}|\hat{g}_{R,k}^{\delta}(t)|dt$
Let $i>\frac{\mu+\frac{1}{2}}{2(\mu+1-\frac{1}{p})}$ where
$\mu=\frac{\gamma(p^{\prime})}{\varepsilon}>0.$ Then by (22), there exists a
constant $C_{i}>0$ such that
$\displaystyle{\left\|{s_{R,k}^{\delta,1}(u,.)}\right\|}_{L^{1}(D_{j})}$
$\displaystyle\leq$
$\displaystyle\frac{C_{i}}{\sqrt{2\pi}}2^{j/2}(\frac{2^{j}}{c_{0}})^{-i}R^{-i/2}2^{(i-\delta)k}$
$\displaystyle\leq$
$\displaystyle\frac{C_{i}}{\sqrt{2\pi}}c_{0}^{i}2^{(i-\delta)k}2^{j(1/2-i)}.$
Then, we obtain
$\displaystyle{\left\|{s_{R,k}^{\delta,1}(u,.)}\right\|}_{L^{1}(I_{k,2})}$
$\displaystyle\leq$ $\displaystyle\sum_{\mu(p)k-i_{0}\leq j\leq
0}{\left\|{s_{R,k}^{\delta,1}(u,.)}\right\|}_{L^{1}(D_{j})}$
$\displaystyle\leq$
$\displaystyle\frac{C_{i}}{\sqrt{2\pi}}c_{0}^{i}2^{(i-\delta)k}\sum_{\mu(p)k-i_{0}\leq
j\leq 0}2^{j(1/2-i)}$ $\displaystyle\leq$
$\displaystyle\frac{C_{i}}{\sqrt{2\pi}}c_{0}^{i}2^{(i-\delta)k}2^{(i-1/2)(i_{0}-\mu(p)k+1)}$
$\displaystyle\leq$ $\displaystyle C^{\prime}_{i}2^{-k\varepsilon_{1}}.$
From our assumption on $i$,
$\varepsilon_{1}=\delta-i+(i-1/2)\frac{\mu+\frac{1}{2}}{\frac{1}{p}-\frac{1}{2}}>0.$
Having estimated the kernel $s_{R,k}^{\delta,1}$ on
$L^{1}(I_{k,2}^{\alpha})$, it remains to estimate the kernel
$s_{R,k}^{\delta,2}$ on $L^{1}(I_{k,2}^{\alpha})$.
From (23), (24) and using the fact that $R\leq R^{\mu(p)}$, one gets the
following inequality
$\displaystyle{\left\|{s_{R,k}^{\delta,2}(u,.)}\right\|}_{L^{1}(D_{j})}$
$\displaystyle=$
$\displaystyle\int_{D_{j}}|g_{R,k}^{\delta}(\sqrt{\mathcal{L}})\left(p_{1/R}(u,.)\chi_{\\{w,|w-x_{0}|>2^{j-1}r\\}}\right)(v)|dv$
$\displaystyle\leq$
$\displaystyle{\left\|{g_{R,k}^{\delta}(\sqrt{\mathcal{L}})}\right\|}_{(L^{2},L^{2})}{\left\|{p_{1/R}(u,.)\chi_{\\{w,|w-x_{0}|>2^{j-1}r\\}}}\right\|}_{L^{2}(D_{j})}$
$\displaystyle\leq$ $\displaystyle
C2^{-k\delta}\sqrt{R}e^{(-CR2^{2j-2})}\left(\mu(D_{j})\right)^{1/2}$
$\displaystyle\leq$ $\displaystyle
C2^{-k\delta}2^{\frac{i_{0}+j}{2}}e^{-C2^{2(i_{0}+j)}}.$
Hence, we conclude that
$\displaystyle{\left\|{s_{R,k}^{\delta,2}(u,.)}\right\|}_{L^{1}(I_{k,2})}$
$\displaystyle\leq$ $\displaystyle\sum_{\mu(p)k-i_{0}\leq j\leq
0}{\left\|{s_{R,k}^{\delta,2}(u,.)}\right\|}_{L^{1}(D_{j})}$
$\displaystyle\leq$ $\displaystyle
C2^{-k\delta}\sum_{i=i_{o}+j\geq\mu(p)k}2^{\frac{i}{2}}e^{-C2^{2i}}$
$\displaystyle\leq$ $\displaystyle C^{\prime}2^{-k\delta}.$
∎
###### Proposition 2.
Let $1\leq p\leq 2$ and
$\delta>\delta(p)=\frac{\gamma(p^{\prime})}{\varepsilon}$, then for all $f\in
L^{p}(I,w)$, we have
${\left\|{\psi_{R,0}^{\delta}f}\right\|}_{L^{p}(I,w)}\leq
C{\left\|{f}\right\|}_{L^{p}(I,w)}.$ (27)
where $C$ is a constant independent of $f$ and $R.$
###### Proof.
It suffices to use the same techniques as those used in the previous proof to
get an estimate of ${\left\|{\psi_{R,0}^{\delta}f}\right\|}_{L^{p}(I_{1},w)}$
and ${\left\|{\psi_{R,0}^{\delta}f}\right\|}_{L^{p}(I_{2},w)}$ for all $f\in
L^{p}(I,w),$ where $I=(a,b)=I_{1}\cup I_{2}$ with
$I_{1}=(x_{0}-r^{\alpha}_{0},x_{0}+r^{\alpha}_{0})$ and
$I_{2}=\\{y,|y-x_{0}|>r^{\alpha}_{0}\\}$ where
$r^{\alpha}_{0}=\frac{r}{R^{\mu(p)}}.$ ∎
To conclude the theorem’s proof it suffices to find a uniform bound of
$\mathcal{R}_{R}^{\delta}$.
###### Proposition 3.
Let $1\leq p\leq 2$ and
$\delta>\delta(p)=\frac{\gamma(p^{\prime})}{\varepsilon}$, then for all $f\in
L^{p}(I,w)$, we have
${\left\|{\mathcal{R}_{R}^{\delta}f}\right\|}_{L^{p}(I,w)}\leq
C{\left\|{f}\right\|}_{L^{p}(I,w)}.$ (28)
where $C$ depends only on $p$.
###### Proof.
From Holder’s inequality and the previous lemma, we have
$\displaystyle{\left\|{\mathcal{R}_{R}^{\delta}f}\right\|}^{2}_{L^{p}(I,w)}$
$\displaystyle\leq$ $\displaystyle
2^{2(\frac{1}{p}-\frac{1}{2})}{\left\|{\mathcal{R}_{R}^{\delta}f}\right\|}^{2}_{L^{2}(I,w)}$
$\displaystyle\leq$ $\displaystyle
2^{2(\frac{1}{p}-\frac{1}{2})}\sum_{k=K_{R}+1}^{\infty}\sum_{n=0}^{\infty}{\left\|{\phi_{R,k}^{\delta}(\lambda_{n})a_{n}(f)\varphi_{n}}\right\|}_{L^{2}(I,w)}$
$\displaystyle\leq$ $\displaystyle
C2^{2(\frac{1}{p}-\frac{1}{2})}\sum_{k=K_{R}+1}^{\infty}2^{-2k\delta}\sum_{R_{k,1}\leq\lambda_{n}\leq
R_{k,2}}{\left\|{a_{n}(f)\varphi_{n}}\right\|}^{2}_{L^{2}(I,w)}$
$\displaystyle\leq$ $\displaystyle
C2^{2(\frac{1}{p}-\frac{1}{2})}\sum_{k=K_{R}+1}^{\infty}2^{-2k(\delta+\frac{1}{2})}R^{2(\frac{1}{2}+\frac{\gamma(p^{\prime})}{\varepsilon})}{\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}$
$\displaystyle\leq$ $\displaystyle
C2^{2(\frac{1}{p}-\frac{1}{2})}2^{-2(\delta+\frac{1}{2})\big{(}\left[\frac{\log(R)}{\log(2)}\right]+1\big{)}}R^{2(\frac{1}{2}+\frac{\gamma(p^{\prime})}{\varepsilon})}{\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}$
$\displaystyle\leq$ $\displaystyle
C2^{2(\frac{1}{p}-\frac{1}{2})}R^{-2(\delta-(\frac{\gamma(p^{\prime})}{\varepsilon}))}{\left\|{f}\right\|}^{2}_{L^{p}(I,\omega)}$
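For completeness, the passage to the last two lines rests on a geometric-series bound; a brief sketch, with $K_{R}=\left[\frac{\log(R)}{\log(2)}\right]$ as in the penultimate line:
$\sum_{k=K_{R}+1}^{\infty}2^{-2k(\delta+\frac{1}{2})}=\frac{2^{-2(K_{R}+1)(\delta+\frac{1}{2})}}{1-2^{-2(\delta+\frac{1}{2})}}\leq
C2^{-2(\delta+\frac{1}{2})\big{(}\left[\frac{\log(R)}{\log(2)}\right]+1\big{)}}\leq
CR^{-2(\delta+\frac{1}{2})},$
and multiplying by the factor
$R^{2(\frac{1}{2}+\frac{\gamma(p^{\prime})}{\varepsilon})}$ yields the exponent
$-2(\delta-\frac{\gamma(p^{\prime})}{\varepsilon})$ of the last line.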
Finally we obtain
$\displaystyle{\left\|{\mathcal{R}_{R}^{\delta}f}\right\|}_{L^{p}(I,w_{\alpha,\beta})}$
$\displaystyle\leq$ $\displaystyle
C2^{(\frac{1}{p}-\frac{1}{2})}R^{-(\delta-(\frac{\gamma(p^{\prime})}{\varepsilon}))}{\left\|{f}\right\|}_{L^{p}(I,\omega_{\alpha,\beta})}$
$\displaystyle\leq$ $\displaystyle
C(p){\left\|{f}\right\|}_{L^{p}(I,\omega_{\alpha,\beta})}.$
∎
∎
###### Corollary 1.
Under the notation and conditions of the previous Theorem, we have for all
$f\in L^{p}(I,w)$
$\Psi_{R}^{\delta}f\to f~{}~{}\mbox{as}~{}~{}R\to\infty.$ (29)
###### Proof.
Step 1: We prove that, for every $f\in\mathcal{C}^{\infty}(I,\mathbb{R})$,
$\Psi^{\delta}_{R}f\to f$ in $L^{p}(I,\omega)$. Note that
$\displaystyle\Big{|}\Big{(}1-\frac{\lambda_{n}}{R}\Big{)}^{\delta}_{+}{\left\langle{f,\varphi_{n}}\right\rangle}_{L^{2}(I,\omega)}\Big{|}$
$\displaystyle\leq$
$\displaystyle\Big{|}{\left\langle{f,\varphi_{n}}\right\rangle}_{L^{2}(I,\omega)}\Big{|}=\frac{1}{\lambda_{n}}\Big{|}{\left\langle{f,\mathcal{L}\varphi_{n}}\right\rangle}_{L^{2}(I,\omega)}\Big{|}$
(30) $\displaystyle=$
$\displaystyle\frac{1}{\lambda_{n}}\Big{|}{\left\langle{\mathcal{L}f,\varphi_{n}}\right\rangle}_{L^{2}(I,\omega)}\Big{|}=\cdots=\frac{1}{\lambda_{n}^{k}}\Big{|}{\left\langle{\mathcal{L}^{k}.f,\varphi_{n}}\right\rangle}_{L^{2}(I,\omega)}\Big{|}$
$\displaystyle\leq$ $\displaystyle
n^{-k\varepsilon}{\left\|{\mathcal{L}^{k}.f}\right\|}_{L^{2}(I,\omega)}.$
Since ${\left\|{\varphi_{n}}\right\|}_{L^{p}(I,\omega)}\leq n^{\gamma(p)}$, it
suffices to take $k$ big enough to have $\gamma(p)-k\varepsilon<-1$ and obtain
the convergence of the series in $L^{p}(I,\omega)$.
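For clarity, the summability behind this claim can be sketched as follows: combining (30) with the norm bound on $\varphi_{n}$ just stated, the general term of the series is controlled by
$\Big{|}\Big{(}1-\frac{\lambda_{n}}{R}\Big{)}^{\delta}_{+}a_{n}(f)\Big{|}{\left\|{\varphi_{n}}\right\|}\leq
n^{\gamma(p)-k\varepsilon}{\left\|{\mathcal{L}^{k}.f}\right\|}_{L^{2}(I,\omega)},\qquad
n\geq 1,$
which is summable as soon as $\gamma(p)-k\varepsilon<-1$.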
Since
$\displaystyle{\left\|{\Psi^{\delta}_{R}.f-f}\right\|}_{2}^{2}=\sum_{n=0}^{\infty}\Big{(}(1-\frac{\lambda_{n}}{R})^{\delta}_{+}-1\Big{)}^{2}|a_{n}(f)|^{2}\to
0$ as $R\to\infty$, the result remains true for $1\leq p<\infty$.
Step 2: Let $\varepsilon>0$. By density of
$\mathcal{C}^{\infty}_{0}(I,\mathbb{R})$ in $L^{p}(I,\omega)$, there exists
$g\in\mathcal{C}^{\infty}_{0}(I,\mathbb{R})$ such that
${\left\|{f-g}\right\|}_{L^{p}(I,\omega)}<\varepsilon$, so that
${\left\|{\Psi^{\delta}_{R}.f-\Psi^{\delta}_{R}.g}\right\|}_{L^{p}(I,\omega)}\leq
C\varepsilon$ uniformly in $R$, and by Step 1 there exists $R>0$ such that
${\left\|{\Psi^{\delta}_{R}.g-g}\right\|}_{L^{p}(I,\omega)}<\varepsilon$.
By writing,
${\left\|{\Psi^{\delta}_{R}.f-f}\right\|}_{L^{p}(I,\omega)}\leq{\left\|{\Psi^{\delta}_{R}.f-\Psi^{\delta}_{R}.g}\right\|}_{L^{p}(I,\omega)}+{\left\|{\Psi^{\delta}_{R}.g-g}\right\|}_{L^{p}(I,\omega)}+{\left\|{f-g}\right\|}_{L^{p}(I,\omega)},$
one gets the desired result. ∎
To conclude the proof of the sufficient conditions of both Theorems 1 and 2,
it suffices to verify that the two considered bases satisfy conditions (A) and
(B). We will prove this result only for the case of the GPSWFs; the other case
is almost identical.
We first recall that from (13), the GPSWFs are the eigenfunctions of the
Sturm-Liouville operator $\mathcal{L}_{c}^{(\alpha)}.$ Also, note that the
$(n+1)-$th eigenvalue $\chi_{n,\alpha}(c)$ of $\mathcal{L}_{c}^{(\alpha)}$
satisfies the following classical inequalities,
$n^{2}\leq n(n+2\alpha+1)\leq\chi_{n,\alpha}(c)\leq
n(n+2\alpha+1)+c^{2},\quad\forall n\geq 0.$
Moreover, for every $0\leq m<M$ such that $M-m>1$, we have
$\displaystyle\sum_{\chi_{n,\alpha}(c)\in(m,M)}1$ $\displaystyle\leq$
$\displaystyle\sum_{n(n+2\alpha+1)\in(\max(0,m-c^{2}),M)}1$
$\displaystyle\leq$
$\displaystyle\sum_{(n+\alpha+1/2)^{2}-(\alpha+1/2)^{2}\in(\max(0,m-c^{2}),M)}1$
$\displaystyle\leq$
$\displaystyle\sum_{n\in\left((\max(0,m-c^{2})+(\alpha+1/2)^{2})^{\frac{1}{2}}-(1/2+\alpha),(M+(\alpha+1/2)^{2})^{\frac{1}{2}}-(1/2+\alpha)\right)}1$
$\displaystyle\leq$ $\displaystyle C(M-m).$
It follows that condition (B) is satisfied.
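The final inequality in the chain above follows from an elementary estimate; a brief sketch, writing $a=\alpha+1/2$ and $m^{\prime}=\max(0,m-c^{2})$:
$\sqrt{M+a^{2}}-\sqrt{m^{\prime}+a^{2}}=\frac{M-m^{\prime}}{\sqrt{M+a^{2}}+\sqrt{m^{\prime}+a^{2}}}\leq\frac{M-m^{\prime}}{2a}\leq\frac{M-m+c^{2}}{2a}\leq
C(M-m),$
where the last step uses $M-m>1$ to absorb $c^{2}\leq c^{2}(M-m)$; the number of integers in an interval of this length is at most the length plus one, which is again bounded by $C(M-m)$.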
From [6], Lemma $2.6$, one can conclude that condition (A) is satisfied for
weighted prolate spheroidal wave functions for $1<p<\infty$. Moreover, it has
been shown in [18] that ${\left\|{\psi^{(\alpha)}_{n,c}}\right\|}_{\infty}\leq
C\Big{(}\chi_{n,\alpha}(c)\Big{)}^{\frac{\alpha+1}{2}}.$ Then, by using (14),
we obtain ${\left\|{\psi^{(\alpha)}_{n,c}}\right\|}_{1}\leq
C\Big{(}\chi_{n,\alpha}(c)\Big{)}^{\frac{\alpha+1}{2}}\leq Cn^{\alpha+1}.$
###### Remark 3.
The uniform norm of the CPSWFs has been given in [5].
## 5 Proof of necessary condition
The transferring theorem from the uniform boundedness of $\Psi_{R}^{\delta}$
to the uniform boundedness of the Hankel multiplier transform operator
$\mathcal{M}_{\alpha}$ defined by
$\mathcal{M}_{\alpha}(f)=\mathcal{H}_{\alpha}\left(\phi(.)\mathcal{H}_{\alpha}(f)\right)$
can be used to derive the necessary condition. Note here that $\phi$ is a bounded
function on $\mathbb{R}$, continuous except on a set of Lebesgue measure zero
and $\mathcal{H}_{\alpha}$ is the modified Hankel operator defined by
$\mathcal{H}_{\alpha}(f)(x)=\int_{0}^{\infty}\frac{J_{\alpha}(xy)}{(xy)^{\alpha}}f(y)y^{2\alpha+1}dy.$
From [12] and the transferring theorem, the uniform boundedness of
$\Psi_{R}^{\delta}$ holds true if and only if
$\delta>\max\\{2(\alpha+1)|\frac{1}{p}-\frac{1}{2}|-\frac{1}{2},0\\}.$ It is
easy to check that
$\max\\{2(\alpha+1)|\frac{1}{p}-\frac{1}{2}|-\frac{1}{2},0\\}\geq\max\\{\frac{\gamma_{\alpha}(p^{\prime})}{2},0\\}$
for every $p\not=2-\frac{1}{\alpha+3/2}$, so one gets our necessary
condition. To be more precise, let us study each transferring theorem
separately.
### 5.1 GPSWFs’s case
Let’s recall that the family of weighted prolate spheroidal wave functions
$\\{\psi_{n,c}^{(\alpha)}(\cos\theta)\\}_{n}$ form an orthonormal system on
$(0,\pi)$ with respect to the measure $(\sin\theta)^{2\alpha+1}d\theta$.
For a function $f(\theta)$ integrable on $(0,\pi)$ with respect to the measure
defined above, we have formally
$f(\theta)=\sum_{n=0}^{\infty}a_{n}(f)\psi_{n,c}^{(\alpha)}(\cos\theta),\qquad
a_{n}(f)=\int_{0}^{\pi}f(\theta)\psi_{n,c}^{(\alpha)}(\cos\theta)(\sin\theta)^{2\alpha+1}d\theta.$
For $p\geq 1$ and a function $f$ on $(0,\pi)$ we define the norm
${\left\|{f}\right\|}_{p}=\Bigg{(}\int_{0}^{\pi}|f(\theta)|^{p}(\sin\theta)^{2\alpha+1}d\theta\Bigg{)}^{1/p}.$
Before stating an adequate transferring theorem, let’s define a GPSWFs-
multiplier.
###### Definition 1.
Let $\lambda>0$ be a sufficiently large real, the bounded sequence
$\\{\phi(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})\\}_{n}$ is called a
Weighted prolate multiplier if there exists a constant $C>0$ such that for
every $f\in L^{p}(I,\omega_{\alpha})$, we have
${\left\|{\sum_{n=0}^{\infty}\phi(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})a_{n}(f)\psi_{n,c}^{(\alpha)}}\right\|}_{L^{p}(I,\omega_{\alpha})}\leq
C{\left\|{f}\right\|}_{L^{p}(I,\omega_{\alpha})}.$
The smallest constant $C$ verifying this last inequality is written
${\left\|{\phi(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}_{p}$. In
the same context, the function $\phi$ is called an $L^{p}$-Hankel transform
multiplier if
$\mathcal{M}_{\alpha}(f)=\mathcal{H}_{\alpha}(\phi(.)\mathcal{H}_{\alpha}(f))$
is uniformly bounded on
$L^{p}\left((0,\infty),\theta^{2\alpha+1}d\theta\right)$.
###### Theorem 4 (Transferring theorem).
Let $1<p<\infty$, $0\leq\alpha<3/2$ and $\phi$ be a bounded function on
$(0,\infty)$ continuous except on a set of Lebesgue measure zero such that
$\\{\phi(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})\\}_{n}$ is a Weighted
prolate multiplier for all large $\lambda>0$ and
$\displaystyle\liminf_{\lambda\to\infty}{\left\|{\phi(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}_{p}$
is finite. Then $\phi$ is an $L^{p}$-Hankel transform multiplier and we have
${\left\|{\mathcal{M}_{\alpha}}\right\|}_{p}\leq\liminf_{\lambda\to\infty}{\left\|{\phi\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)}\right\|}_{p}.$
###### Proof.
Let $g$ be an infinitely differentiable function with compact support in
$[0,M]$ and put $g_{\lambda}(\theta)=g(\lambda\theta)$. Here $\lambda$ is a
positive real so that $\mathrm{supp}(g_{\lambda})\subset[0,\pi]$.
Recall that we have by assumption
${\left\|{\sum_{n=0}^{\infty}\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)a_{n}(g)\psi_{n,c}^{(\alpha)}(\cos(.))}\right\|}_{p}\leq{\left\|{\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)}\right\|}_{p}{\left\|{g}\right\|}_{p}.$
(31)
Via a simple change of variable, one can write
$\lim_{\lambda\to\infty}\lambda^{2\alpha+2}{\left\|{g_{\lambda}}\right\|}_{p}^{p}=\lim_{\lambda\to\infty}\int_{0}^{M}|g(\tau)|^{p}\Big{(}\lambda\sin(\tau/\lambda)\Big{)}^{2\alpha+1}d\tau=\int_{0}^{\infty}|g(\tau)|^{p}\tau^{2\alpha+1}d\tau.$
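In more detail, a sketch of this change of variable: substituting $\tau=\lambda\theta$ in the norm defined above and using that $g$ is supported in $[0,M]$,
$\lambda^{2\alpha+2}{\left\|{g_{\lambda}}\right\|}_{p}^{p}=\lambda^{2\alpha+1}\int_{0}^{\lambda\pi}|g(\tau)|^{p}\big{(}\sin(\tau/\lambda)\big{)}^{2\alpha+1}d\tau=\int_{0}^{M}|g(\tau)|^{p}\big{(}\lambda\sin(\tau/\lambda)\big{)}^{2\alpha+1}d\tau,$
and since $0\leq\lambda\sin(\tau/\lambda)\leq\tau$ on $[0,M]$ with $\lambda\sin(\tau/\lambda)\to\tau$ pointwise, the dominated convergence theorem gives the stated limit.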
By using (31) together with Fatou’s lemma, one gets
$\displaystyle\displaystyle\int_{0}^{\infty}$
$\displaystyle\liminf_{\lambda\to\infty}\Big{|}\chi_{(0,\pi\lambda)}(\tau)\sum_{n=0}^{\infty}\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)a_{n}(g_{\lambda})\psi_{n,c}^{(\alpha)}(\cos\tau/\lambda)\Big{|}^{p}\tau^{2\alpha+1}d\tau$
$\displaystyle=$
$\displaystyle\int_{0}^{\infty}\liminf_{\lambda\to\infty}\Big{|}\chi_{(0,\pi\lambda)}(\tau)\sum_{n=0}^{\infty}\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)a_{n}(g_{\lambda})\psi_{n,c}^{(\alpha)}(\cos\tau/\lambda)\Big{|}^{p}\lambda^{2\alpha+1}\sin(\tau/\lambda)^{2\alpha+1}d\tau$
$\displaystyle\leq$
$\displaystyle\liminf_{\lambda\to\infty}\lambda^{2\alpha+1}\int_{0}^{\infty}\Big{|}\chi_{(0,\pi\lambda)}(\tau)\sum_{n=0}^{\infty}\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)a_{n}(g_{\lambda})\psi_{n,c}^{(\alpha)}(\cos\tau/\lambda)\Big{|}^{p}\sin(\tau/\lambda)^{2\alpha+1}d\tau$
$\displaystyle\leq$
$\displaystyle\liminf_{\lambda\to\infty}\lambda^{2\alpha+2}{\left\|{\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)}\right\|}_{p}{\left\|{g_{\lambda}}\right\|}^{p}_{p}=\liminf_{\lambda\to\infty}{\left\|{\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)}\right\|}_{p}\Bigg{[}\int_{0}^{\infty}|g(\tau)|^{p}\tau^{2\alpha+1}d\tau\Bigg{]}.$
Then there exists an increasing sequence
$\lambda_{1}<\lambda_{2}<\cdots\to\infty$ such that
$G(\tau,\lambda)=\displaystyle\chi_{(0,\pi\lambda)}(\tau)\sum_{n=0}^{\infty}\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)a_{n}(g_{\lambda})\psi_{n,c}^{(\alpha)}(\cos\frac{\tau}{\lambda})$
converges weakly to a function $G(\tau)$. Furthermore, $G$ satisfies
$\Bigg{[}\int_{0}^{\infty}|G(\tau)|^{p}\tau^{2\alpha+1}d\tau\Bigg{]}^{1/p}\leq\liminf_{\lambda\to\infty}{\left\|{\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)}\right\|}_{p}\Bigg{[}\int_{0}^{\infty}|g(\tau)|^{p}\tau^{2\alpha+1}d\tau\Bigg{]}^{1/p}.$
Let us now prove that $G=\mathcal{H}_{\alpha}(\phi.\mathcal{H}_{\alpha}(g))$.
Let
$G(\tau,\lambda)=\chi_{(0,\pi\lambda)}(\tau)\Big{[}\sum_{n=0}^{N[\lambda]}+\sum_{N[\lambda]+1}^{\infty}\Big{]}\phi(\chi^{1/2}_{n,\alpha}(c)/\lambda)a_{n}(g_{\lambda})\psi_{n,c}^{(\alpha)}(\cos\tau/\lambda)=G^{N}(\tau,\lambda)+H^{N}(\tau,\lambda)$
We start by giving the following lemma, which will be proved later.
###### Lemma 2.
We have
$\int_{0}^{\infty}\Big{|}H^{N}(\tau,\lambda)\Big{|}^{2}\tau^{2\alpha+1}d\tau=O(\frac{1}{N})\mbox{
uniformly in }\lambda.$
Therefore, by the diagonal argument, there exists a subsequence, also denoted
$\\{\lambda_{j}\\}$ for the sake of clarity, such that $H^{N}(\tau,\lambda_{j})$
converges weakly to a function $H^{N}(\tau)$ with
$\displaystyle\int_{0}^{\pi}\Big{|}H^{N}(\tau)\Big{|}^{p}\tau^{2\alpha+1}d\tau=O(\frac{1}{N^{2}}).$
Then, there exists a subsequence $H^{N_{j}}$, denoted for the same reason
$H^{N}$, that converges to zero a.e.
Since $G^{N}(\tau,\lambda)=G(\tau,\lambda)-H^{N}(\tau,\lambda)$,
$G^{N}(\tau,\lambda)$ converges weakly to a limit $G^{N}(\tau)$ and
$G(\tau)=G^{N}(\tau)+H^{N}(\tau)$. Thus $G^{N}(\tau)$ converges to $G(\tau)$
almost everywhere. On the other hand, we will prove the following lemma
###### Lemma 3.
We have
$\lim_{\lambda\to\infty}G^{N}(\tau,\lambda)=\int_{0}^{N}\phi(v)\mathcal{H}_{\alpha}.g(v)\frac{J_{\alpha}(v\tau)}{(v\tau)^{\alpha}}v^{2\alpha+1}dv,$
which implies that
$G(\tau)=\int_{0}^{\infty}\phi(v)\mathcal{H}_{\alpha}.g(v)\frac{J_{\alpha}(v\tau)}{(v\tau)^{\alpha}}v^{2\alpha+1}dv,$
which achieves our proof. ∎
###### Proof of Lemma 2.
We have
$\displaystyle\int_{0}^{M}|H^{N}(\tau,\lambda)|^{2}\Big{(}\lambda\sin\frac{\tau}{\lambda}\Big{)}^{2\alpha+1}d\tau$
$\displaystyle=$
$\displaystyle\lambda^{2\alpha+2}\int_{0}^{\pi}|H^{N}(\lambda\tau,\lambda)|^{2}(\sin\tau)^{2\alpha+1}d\tau$
(32) $\displaystyle=$
$\displaystyle\lambda^{2\alpha+2}\sum_{N[\lambda]+1}^{\infty}|\phi(\frac{n}{\lambda})|^{2}|a_{n}(g_{\lambda})|^{2}.$
Recall that in [19], the authors gave the following uniform approximation of
the GPSWFs in terms of Jacobi polynomials for $0\leq\alpha<3/2$,
$\psi_{n,c}^{(\alpha)}(\cos\theta)=A_{n}\widetilde{P}_{n}^{(\alpha)}(\cos\theta)+R_{n,c}^{(\alpha)}(\cos\theta)\qquad{\left\|{R_{n,c}^{(\alpha)}}\right\|}_{\infty}\leq
C_{\alpha,c}\frac{1}{2n+2\alpha+1}.$ (33)
We also know that (see for example [29])
$n(\sin\theta)^{2\alpha+1}\widetilde{P}_{n}^{(\alpha,\alpha)}(\cos\theta)=2\frac{h^{\alpha+1}_{n-1}}{h^{(\alpha)}_{n}}\frac{d}{d\theta}\Big{[}(\sin\theta)^{2\alpha+2}\widetilde{P}_{n-1}^{(\alpha+1,\alpha+1)}(\cos\theta)\Big{]}$
(34)
By combining (33) and (34), one gets
$(\sin\theta)^{2\alpha+1}\psi_{n,c}^{(\alpha)}(\cos\theta)=\frac{2}{n}\frac{h^{\alpha+1}_{n-1}}{h^{(\alpha)}_{n}}\frac{d}{d\theta}\Big{[}(\sin\theta)^{2\alpha+2}\widetilde{P}_{n-1}^{(\alpha+1,\alpha+1)}(\cos\theta)\Big{]}+R_{n,c}^{(\alpha)}(\cos\theta).$
Then, integrating by parts one gets
$\displaystyle a_{n}(g_{\lambda})$ $\displaystyle=$
$\displaystyle\frac{C}{n}\int_{0}^{\pi}\frac{g^{\prime}(\lambda\theta)}{\sin\theta}\widetilde{P}_{n-1}^{\alpha+1}(\cos\theta)(\sin\theta)^{2\alpha+3}d\theta+\int_{0}^{\pi}R_{n,c}^{(\alpha)}(\cos\theta)g(\lambda\theta)d\theta$
$\displaystyle=$ $\displaystyle a_{n,1}(g_{\lambda})+a_{n,2}(g_{\lambda})$
Let’s come back to (32). We have by Bessel’s inequality
$\displaystyle\lambda^{2\alpha+2}\sum_{N[\lambda]+1}^{\infty}|\phi(\frac{n}{\lambda})|^{2}|a_{n,1}(g_{\lambda})|^{2}$
$\displaystyle\leq$ $\displaystyle
C\lambda^{2\alpha+2}\Big{[}\frac{\lambda}{N(\lambda-1)}\Big{]}^{2}\sum_{N[\lambda]+1}^{\infty}|\frac{n}{\lambda}a_{n,1}(g_{\lambda})|^{2}$
(35) $\displaystyle\leq$
$\displaystyle\frac{C}{N^{2}}\lambda^{2\alpha+2}\int_{0}^{\pi}\Big{|}\frac{g^{\prime}(\lambda\theta)}{\sin\theta}\Big{|}^{2}(\sin\theta)^{2\alpha+3}d\theta$
$\displaystyle=$
$\displaystyle\frac{C}{N^{2}}\int_{0}^{M}|g^{\prime}(\theta)|^{2}\Big{(}\lambda\sin\frac{\theta}{\lambda}\Big{)}^{2\alpha+1}d\theta$
$\displaystyle=$ $\displaystyle O(\frac{1}{N^{2}})\mbox{ uniformly in
}\lambda.$
On the other hand, using Cauchy-Schwarz’s inequality
$\displaystyle\lambda^{2\alpha+2}\sum_{N[\lambda]+1}^{\infty}|\phi(\frac{n}{\lambda})|^{2}|a_{n,2}(g_{\lambda})|^{2}$
$\displaystyle\leq$ $\displaystyle
C\lambda^{2\alpha+2}\sum_{N[\lambda]+1}^{\infty}{\left\|{R_{n,c}^{\alpha}}\right\|}^{2}_{2}{\left\|{g(\lambda.)}\right\|}_{2}^{2}$
$\displaystyle\leq$ $\displaystyle
C\sum_{N[\lambda]+1}^{\infty}\frac{1}{n^{2}}\int_{0}^{M}|g(\theta)|^{2}\big{(}\lambda\sin\frac{\theta}{\lambda}\big{)}^{2\alpha+1}d\theta$
$\displaystyle=$ $\displaystyle O(\frac{1}{N}).$
Then, one concludes that
$\int_{0}^{M}|H^{N}(\tau,\lambda)|^{2}\tau^{2\alpha+1}d\tau=O(\frac{1}{N})\mbox{
uniformly in }\lambda.$
∎
###### Proof of Lemma 3.
We now use the following uniform approximation of the GPSWFs in terms of the
Bessel function (we refer the reader once again to [19])
$\psi_{n,c}^{(\alpha)}(\cos\frac{\tau}{\lambda})=A_{\alpha}(q)\frac{\chi_{n,c}^{1/4}S(\cos\frac{\tau}{\lambda})^{1/2}J_{\alpha}(\chi_{n,c}^{1/2}S(\cos\frac{\tau}{\lambda}))}{(\sin\frac{\tau}{\lambda})^{\alpha+1/2}(1-q\cos^{2}\frac{\tau}{\lambda})^{1/4}}+E_{n,c}(\cos\frac{\tau}{\lambda}),$
(36)
where
$\Big{|}E_{n,c}(\cos\theta)\Big{|}\leq\frac{C.A_{\alpha}(q)}{(1-q)}\frac{(\sin\theta)^{1/2}}{(1-q\cos^{2}\theta)^{1/4}}\qquad\forall\theta\in[0,\pi]\quad\mbox{and}\quad
S(x)=\int_{x}^{1}\sqrt{\frac{1-qt^{2}}{1-t^{2}}}dt.$
Note that it has also been shown in [4] that
$\frac{\sin\theta\sqrt{1-q\cos^{2}\theta}}{S(\cos\theta)}=1+\Big{(}\frac{q}{1-q}+\frac{3}{4}\Big{)}(1-\cos\theta)+o(1-\cos\theta).$
Thus, for $n\leq N[\lambda]$, taking into account that
$\sqrt{x}J_{\alpha}(x)$ is bounded, we can write
$\displaystyle\frac{\psi_{n,c}^{(\alpha)}(\cos\frac{\tau}{\lambda})}{\lambda^{\alpha}}$
$\displaystyle=$ $\displaystyle
n^{1/2}\frac{J_{\alpha}(\frac{n\tau}{\lambda})}{\Big{(}\lambda.\sin\frac{\tau}{\lambda}\Big{)}^{\alpha}}-n^{1/2}\frac{J_{\alpha}(\frac{n\tau}{\lambda})}{\Big{(}\lambda.\sin\frac{\tau}{\lambda}\Big{)}^{\alpha}}\big{(}\frac{q}{1-q}+3/4\big{)}\frac{\tau^{2}}{4\lambda^{\alpha+2}}+O(\frac{1}{n.\lambda^{\alpha+2}})$
(37) $\displaystyle=$ $\displaystyle
n^{1/2}J_{\alpha}(\frac{n\tau}{\lambda})\Big{(}\frac{1}{\tau}\Big{)}^{\alpha}+o(\frac{1}{n}).$
On the other hand,
$\displaystyle\lambda^{\alpha}a_{n}(g_{\lambda})$ $\displaystyle=$
$\displaystyle\lambda^{\alpha-1}\int_{0}^{M}g(\tau)\psi_{n,c}^{(\alpha)}(\cos\frac{\tau}{\lambda})(\sin\frac{\tau}{\lambda})^{2\alpha+1}d\tau$
$\displaystyle=$
$\displaystyle\frac{1}{\lambda^{2}}\Bigg{[}A_{\alpha}(q)n^{1/2}\int_{0}^{\infty}g(\tau)J_{\alpha}(\frac{n\tau}{\lambda})\Big{(}\lambda\sin\frac{\tau}{\lambda}\Big{)}^{\alpha+1}d\tau\Bigg{]}+o(\frac{1}{\lambda^{2}})$
$\displaystyle=$
$\displaystyle\frac{n^{1/2}}{\lambda^{2}}\int_{0}^{\infty}g(\tau)J_{\alpha}(\frac{n\tau}{\lambda})\tau^{\alpha+1}d\tau+o(\frac{1}{\lambda^{2}}).$
Then, by combining the last two estimates, one gets
$G^{N}(\tau,\lambda)=\sum_{n=0}^{N[\lambda]+1}\phi(\frac{n}{\lambda})\mathcal{H}_{\alpha}.g(\frac{n}{\lambda})J_{\alpha}(\frac{n\tau}{\lambda})\frac{1}{\tau^{\alpha}}\frac{n}{\lambda^{2}}+\frac{n}{\lambda^{2}}o(1).$
Therefore, by letting $\lambda\to\infty$, we conclude the proof of Lemma 3.
∎
### 5.2 CPSWF’s case
As for the example studied in the previous section, we start by establishing
an adequate transferring theorem for the circular case. To do this, we
introduce suitable terminology.
###### Definition 2.
Let $\lambda>0$ be a sufficiently large real, a bounded sequence
$\\{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})\\}_{n}$ is called a
Circular prolate multiplier if there exists a constant $C>0$ such that for
every $f\in L^{p}(0,1)$, we have
${\left\|{\sum_{n=0}^{\infty}m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})a_{n}(f)\varphi^{(\alpha)}_{n,c}}\right\|}_{L^{p}(0,1)}\leq
C{\left\|{f}\right\|}_{L^{p}(0,1)}.$
The smallest constant $C$ verifying the last inequality is written
${\left\|{m\Big{(}\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\Big{)}}\right\|}_{p}$.
Here
$\mathcal{M}:=\mathcal{M}_{0}=\mathcal{H}_{0}\Big{(}m(.)\mathcal{H}_{0}(f)\Big{)}$
is the multiplier related to the Hankel transform operator.
###### Theorem 5 (Circular transferring theorem).
Let $1<p<\infty$, $\alpha\geq 1/2$ and $m$ be a bounded function on
$(0,\infty)$ continuous except on a set of Lebesgue measure zero such that
$\\{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})\\}_{n}$ is a Circular prolate
multiplier for all large $\lambda>0$ and
$\displaystyle\liminf_{\lambda\to\infty}{\left\|{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}_{p}$
is finite. Then $m$ is an $L^{p}$-Hankel transform multiplier and we have
${\left\|{\mathcal{M}}\right\|}_{p}\leq\liminf_{\lambda\to\infty}{\left\|{m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)}\right\|}_{p}.$
###### Proof.
Let $\lambda>0$ and $g\in C^{\infty}_{c}(0,\infty)$ supported in $(0,M)$ such
that $\lambda>\frac{2}{\pi}M$. Let $g_{\lambda}(\tau)=g(\lambda\tau)$ for
every $\tau\in(0,1)$ and $G_{\lambda}=g_{\lambda}\circ\arccos.$
By assumption, we have
${\left\|{\sum_{n=0}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}(G_{\lambda})\varphi_{n}}\right\|}_{L^{p}\left(0,1\right)}\leq{\left\|{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}_{p}{\left\|{G_{\lambda}}\right\|}_{L^{p}\left(0,1\right)}.$
Then, we get
${\left\|{\chi_{(0,\lambda\frac{\pi}{2})}\sum_{n=0}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}\left(G_{\lambda}\right)\varphi_{n}(\cos(\frac{.}{\lambda}))}\right\|}^{p}_{L^{p}((0,\infty),\sin(\frac{.}{\lambda}))}\leq{\left\|{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}^{p}_{p}{\left\|{g}\right\|}^{p}_{L^{p}\left((0,\infty),\sin(\frac{.}{\lambda})\right)}$
We denote by
$F_{\lambda}(\theta)=\chi_{(0,\lambda\frac{\pi}{2})}(\theta)\sum_{n=0}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}\left(G_{\lambda}\right)\varphi_{n}\left(\cos(\frac{\theta}{\lambda})\right),$
hence we have
${\left\|{F_{\lambda}}\right\|}^{p}_{L^{p}((0,\infty),\sin(\frac{.}{\lambda}))}\leq{\left\|{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}^{p}_{p}{\left\|{g}\right\|}^{p}_{L^{p}((0,\infty),\sin(\frac{.}{\lambda}))}.$
(38)
By using (38), Fatou’s Lemma and the fact that
$\displaystyle\lim_{\lambda\to\infty}\lambda\sin(\frac{\theta}{\lambda})=\theta$,
we obtain
${\left\|{\displaystyle\liminf_{\lambda\to\infty}F_{\lambda}}\right\|}^{p}_{L^{p}((0,\infty),\theta
d\theta)}\leq\displaystyle\liminf_{\lambda\to\infty}{\left\|{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}^{p}_{p}{\left\|{g}\right\|}^{p}_{L^{p}((0,\infty),\theta
d\theta)}$ (39)
Let
$L=\displaystyle\liminf_{\lambda\to\infty}{\left\|{m(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})}\right\|}_{p}$,
then there exists a sequence $(\lambda_{j})_{j\in\mathbb{N}}$ such that
$\displaystyle\lim_{j\to\infty}\lambda_{j}=+\infty$ verifying
${\left\|{F_{\lambda_{j}}}\right\|}_{L^{p}((0,\infty),\theta
d\theta)}\leq(L+1/j){\left\|{g}\right\|}_{L^{p}((0,\infty),\theta d\theta)}.$
(40)
On the other hand, as $m$ is bounded and from Parseval’s formula, we have
${\left\|{F_{\lambda_{j}}}\right\|}_{L^{2}((0,\infty),\theta
d\theta)}\leq(L+1/j){\left\|{g}\right\|}_{L^{2}((0,\infty),\theta d\theta)}.$
(41)
From (40) and (41), there exists a subsequence of
$(\lambda_{j})_{j\in\mathbb{N}}$, denoted also $(\lambda_{j})_{j\in\mathbb{N}}$,
such that the sequence $\\{F_{\lambda_{j}}\\}$ converges weakly to a function
$F$ in $L^{p}\cap L^{2}((0,\infty),\theta d\theta)$ satisfying the
following inequality
${\left\|{F}\right\|}_{L^{p}((0,\infty),\theta d\theta)}\leq
L{\left\|{g}\right\|}_{L^{p}((0,\infty),\theta d\theta)}.$ (42)
Our purpose now is to show that
$F=\mathcal{H}_{0}\left(m(.)\mathcal{H}_{0}(g)\right)$ almost everywhere on
$(0,\infty).$
Let $N\geq 1$ and $\theta\in(0,\infty)$. We write
$\displaystyle F_{\lambda}(\theta)$ $\displaystyle=$
$\displaystyle\chi_{(0,\lambda\frac{\pi}{2})}(\theta)\sum_{n=0}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}(G_{\lambda})\varphi_{n}\left(\cos(\frac{\theta}{\lambda})\right)$
$\displaystyle=$
$\displaystyle\chi_{(0,\lambda)}(\theta)\left[\sum_{n=0}^{N[\lambda]}+\sum_{n=N[\lambda]+1}^{\infty}\right]m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}(G_{\lambda})\varphi_{n}\left(\cos(\frac{\theta}{\lambda})\right)$
$\displaystyle=$ $\displaystyle
F^{N}_{\lambda}(\theta)+K^{N}_{\lambda}(\theta).$
Using (9), the function
$\Phi_{n}(\theta)=\varphi^{\alpha}_{n,c}\left(\cos(\theta)\right)$ satisfies the
following differential equation
$\displaystyle\mathcal{L}(\Phi_{n})(\theta)$ $\displaystyle=$
$\displaystyle-\Phi_{n}^{\prime\prime}(\theta)-\frac{\cos(\theta)}{\sin(\theta)}\Phi_{n}^{\prime}(\theta)+\left(c^{2}\cos^{2}(\theta)-\frac{1/4-\alpha^{2}}{\cos^{2}(\theta)}\right)\Phi_{n}(\theta)$
$\displaystyle=$ $\displaystyle\chi_{n,\alpha}(c)\Phi_{n}(\theta).$
Using the symmetry of $\mathcal{L}$ on $C^{\infty}_{c}(0,\infty)$, we obtain
$\displaystyle a_{n}(G_{\lambda})$ $\displaystyle=$
$\displaystyle{\left\langle{G_{\lambda},\varphi^{\alpha}_{n,c}}\right\rangle}_{L^{2}(0,1)}$
$\displaystyle=$
$\displaystyle\frac{1}{\chi_{n,\alpha}(c)}\int_{0}^{\frac{\pi}{2}}g_{\lambda}(\theta)\chi_{n,\alpha}(c)\varphi^{\alpha}_{n,c}\left(\cos(\theta)\right)\sin(\theta)d\theta$
$\displaystyle=$
$\displaystyle\frac{1}{\chi_{n,\alpha}(c)}\int_{0}^{\frac{\pi}{2}}g_{\lambda}(\theta)\mathcal{L}(\Phi_{n})(\theta)\sin(\theta)d\theta$
$\displaystyle=$
$\displaystyle\frac{\lambda^{2}}{\chi_{n,\alpha}(c)}\int_{0}^{\frac{\pi}{2}}\frac{1}{\lambda^{2}}\mathcal{L}\left(g_{\lambda}\right)(\theta)\Phi_{n}(\theta)\sin(\theta)d\theta=\frac{\lambda^{2}}{\chi_{n,\alpha}(c)}a_{n}\left(\frac{1}{\lambda^{2}}\mathcal{L}\left(g_{\lambda}\right)\right).$
Using the previous equality, Parseval’s formula, the fact that $m$ is
bounded, the well-known inequality
$\frac{2}{\pi}\theta\leq\sin(\theta)\leq\theta$ for
$0\leq\theta\leq\frac{\pi}{2}$, and (10), we obtain
$\displaystyle{\left\|{K_{\lambda}^{N}}\right\|}_{L^{2}((0,\infty),\theta
d\theta)}$ $\displaystyle=$
$\displaystyle\left[\int_{0}^{\infty}\chi_{(0,\lambda\frac{\pi}{2})}(\theta)\left|\sum_{n=N[\lambda]+1}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}(G_{\lambda})\varphi_{n}(\cos(\frac{\theta}{\lambda}))\right|^{2}\theta
d\theta\right]^{1/2}$ $\displaystyle=$
$\displaystyle\left[\int_{0}^{\lambda\frac{\pi}{2}}\left|\sum_{n=N[\lambda]+1}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}(G_{\lambda})\varphi_{n}(\cos\left(\frac{\theta}{\lambda})\right)\right|^{2}\theta
d\theta\right]^{1/2}$ $\displaystyle\leq$
$\displaystyle\sqrt{\frac{\pi}{2}}\left[\lambda\int_{0}^{\lambda\frac{\pi}{2}}\left|\sum_{n=N[\lambda]+1}^{\infty}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a_{n}(G_{\lambda})\varphi_{n}\left(\cos(\frac{\theta}{\lambda})\right)\right|^{2}\sin\left(\frac{\theta}{\lambda}\right)d\theta\right]^{1/2}$
$\displaystyle=$
$\displaystyle\sqrt{\frac{\pi}{2}}\left[\lambda^{2}\sum_{n=N[\lambda]+1}^{\infty}m^{2}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\right)a^{2}_{n}(G_{\lambda})\right]^{1/2}$
$\displaystyle\leq$ $\displaystyle
C\,\sqrt{\frac{\pi}{2}}\left[\frac{\lambda^{2}}{N^{4}}\sum_{n=N[\lambda]+1}^{\infty}a^{2}_{n}\left(\frac{1}{\lambda^{2}}\mathcal{L}(g_{\lambda})\right)\right]^{1/2}$
$\displaystyle\leq$
$\displaystyle\frac{C}{N^{2}}\,\sqrt{\frac{\pi}{2}}\left[{\left\|{g^{\prime\prime}+\frac{g^{\prime}}{\theta}}\right\|}_{L^{2}\left((0,\infty),\theta
d\theta\right)}+C{\left\|{g}\right\|}_{L^{2}\left((0,\infty),\theta
d\theta\right)}\right]$
Then we obtain ${\left\|{K^{N}_{\lambda}}\right\|}_{L^{2}((0,\infty),\theta
d\theta)}=O(\frac{1}{N^{2}})$ uniformly in $\lambda.$
Thus, by the diagonal argument, there exists a subsequence of
$\\{\lambda_{j}\\}$, noted again $\\{\lambda_{j}\\}$, such that for every $N\geq
1$, $\\{K^{N}_{\lambda_{j}}\\}_{j\in\mathbb{N}}$ converges weakly to a function
$K^{N}$ in $L^{2}((0,\infty),\theta d\theta)$ satisfying
${\left\|{K^{N}}\right\|}_{L^{2}((0,\infty),\theta
d\theta)}=O(\frac{1}{N^{2}})$. One concludes that there exists a sequence
$\\{N_{k}\\}$ such that $\\{K^{N_{k}}\\}_{k\in\mathbb{N}}$ converges to zero
almost everywhere on $(0,\infty)$. Let $F^{N_{k}}=F-K^{N_{k}}$; clearly,
$\\{F_{{\lambda_{j}}}^{N_{k}}\\}_{j\in\mathbb{N}}$ converges weakly to
$F^{N_{k}}$ in $L^{2}(0,\infty)$ for every $k\in\mathbb{N}$. Moreover,
$\\{F^{N_{k}}\\}$ converges to $F$ almost everywhere on $(0,\infty).$
We now prove the following equality:
$\lim_{j\to\infty}F_{{\lambda_{j}}}^{N_{k}}(x)=\int_{0}^{N_{k}}m(y)J_{0}(xy)\mathcal{H}_{0}(g)(y)ydy$
(43)
for every $x\in(0,\infty)$. By the weak convergence of
$\\{F_{{\lambda_{j}}}^{N_{k}}\\}_{j\in\mathbb{N}}$ to $F^{N_{k}}$,
${\left\langle{F_{{\lambda_{j}}}^{N_{k}},\chi_{(r,s)}}\right\rangle}$ converges
to ${\left\langle{F^{N_{k}},\chi_{(r,s)}}\right\rangle}$ for every
$0<r<s<\infty$. On the other hand, (43) together with the Lebesgue dominated
convergence theorem gives that
${\left\langle{F_{{\lambda_{j}}}^{N_{k}},\chi_{(r,s)}}\right\rangle}$
converges to
${\left\langle{\mathcal{H}_{\alpha}\left(\chi_{(0,N_{k}\pi)}m(.)\mathcal{H}_{\alpha}(g)\right),\chi_{(r,s)}}\right\rangle}$,
so one concludes that
$F^{N_{k}}=\mathcal{H}_{\alpha}\left(\chi_{(0,N_{k}\pi)}m(.)\mathcal{H}_{\alpha}(g)\right)$
almost everywhere on $(0,\infty).$ Finally, letting $k\to\infty$, we get our
purpose.
For the proof of (43), we need the uniform approximation of the family of
CPSWFs on $(0,1)$ which is given by the following estimates
$\varphi_{n,c}^{\alpha}(\cos(\frac{\theta}{\lambda}))=(-1)^{n}B_{n}\left(\cos(\frac{\theta}{\lambda})\right)^{\alpha+1/2}P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda})\right)+\gamma_{n}^{\alpha+1/2}O(\frac{c^{2}}{n})$
(44)
for every $\theta\in(\lambda t_{n},\lambda\frac{\pi}{2})$, where
$t_{n}=\arccos(\gamma_{n})$ and
$\gamma_{n}\sim\frac{\sqrt{\alpha^{2}-1/4}}{\chi_{n,\alpha}^{1/2}(c)}$.
$\varphi_{n,c}^{\alpha}(\cos(\frac{\theta}{\lambda}))=A_{n}\,\chi^{1/4}_{n,\alpha}(c)\frac{\sqrt{S(\cos(\frac{\theta}{\lambda}))}J_{0}\left(\chi^{1/2}_{n,\alpha}(c)S(\cos(\frac{\theta}{\lambda}))\right)}{(\sin(\frac{\theta}{\lambda}))^{\frac{1}{2}}r_{n}\left(\cos(\frac{\theta}{\lambda})\right)^{1/4}}+R_{n}(\cos(\frac{\theta}{\lambda}))$
(45)
for every $\theta\in(0,\lambda t_{n})$, where $A_{n}\sim 1,$
$r_{n}(t)=1-qt^{2}+\frac{1/4-\alpha^{2}}{\chi_{n,\alpha}^{1/2}(c)t^{2}}$ and
$\displaystyle\sup_{\theta\in(0,t_{n})}\left|R_{n}(\cos(\theta))\right|\leq\frac{C}{\chi^{1/2}_{n,\alpha}(c)}$;
for more details see [17].
By a straightforward computation, we have
$\frac{\sqrt{S(\cos(\frac{\theta}{\lambda}))}}{(\sin(\frac{\theta}{\lambda}))^{1/2}r_{n}\left(\cos(\frac{\theta}{\lambda})\right)^{1/4}}=1-\beta(q)(1-\cos(\frac{\theta}{\lambda}))+o(1-\cos(\frac{\theta}{\lambda}))$
then, we can easily check that
$\varphi_{n,c}^{\alpha}(\cos(\frac{\theta}{\lambda}))=\chi_{n,\alpha}^{1/4}(c)J_{0}\left(\frac{\chi_{n,\alpha}^{1/2}(c)}{\lambda}\theta\right)+R_{n}(\cos(\frac{\theta}{\lambda})).$
Let $N>0$ and $\lambda>\max\\{\frac{2M}{\pi},N^{3}\\}$. By (45) we have, for
every $n\leq N[\lambda]$
$\displaystyle a_{n}(G_{\lambda})$ $\displaystyle=$
$\displaystyle{\left\langle{G_{\lambda},\varphi_{n,c}^{\alpha}}\right\rangle}_{L^{2}(0,1)}$
$\displaystyle=$
$\displaystyle\frac{1}{\lambda}\int_{0}^{\lambda\frac{\pi}{2}}\left(g_{\lambda}\circ\arccos\right)(\cos(\frac{\theta}{\lambda}))\varphi_{n,c}^{\alpha}(\cos(\frac{\theta}{\lambda}))\sin(\frac{\theta}{\lambda})d\theta$
$\displaystyle=$
$\displaystyle\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda}\int_{0}^{\lambda\frac{\pi}{2}}J_{0}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\theta\right)g(\theta)\sin(\frac{\theta}{\lambda})d\theta-\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda}\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}J_{0}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\theta\right)g(\theta)\sin(\frac{\theta}{\lambda})d\theta$
$\displaystyle+$ $\displaystyle\frac{(-1)^{n}B_{n}}{\lambda}\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}g(\theta)P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda})\right)\left(\cos(\frac{\theta}{\lambda})\right)^{\alpha+1/2}\sin(\frac{\theta}{\lambda})d\theta+\frac{1}{n^{a}\chi^{1/4}_{n,\alpha}(c)}O(\frac{1}{\lambda^{b}})$
$\displaystyle=$
$\displaystyle\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda^{2}}\mathcal{H}_{0}(g)(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})+\frac{1}{n^{a}\chi^{1/4}_{n,\alpha}(c)}O(\frac{1}{\lambda^{b}}).$
where $a>1$ and $b>0.$ Indeed, using the fact that
$\sup_{x>0}|\sqrt{x}J_{\alpha}(x)|\leq C_{\alpha}$, see [23], and
$\lambda\sin(\frac{\theta}{\lambda})\leq\theta$ we have
$\displaystyle\left|\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda}\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}J_{0}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\theta\right)g(\theta)\sin(\frac{\theta}{\lambda})d\theta\right|$
$\displaystyle\leq$ $\displaystyle\frac{1}{\lambda^{3/2}}\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}\left|\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda^{1/2}}J_{0}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\theta\right)\right||g(\theta)|\theta
d\theta$ $\displaystyle\leq$
$\displaystyle\frac{1}{\lambda^{3/2}}\left[\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}\left|\frac{\chi_{n,\alpha}^{1/4}(c)\theta^{1/2}}{\lambda^{1/2}}J_{0}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda}\theta\right)\right|^{2}\frac{d\theta}{\theta}\right]^{1/2}{\left\|{\theta
g}\right\|}_{L^{2}(0,\infty)}$ $\displaystyle\leq$
$\displaystyle\frac{C_{0}}{\lambda^{3/2}}(\ln(\frac{\pi}{2t_{n}}))^{1/2}{\left\|{\theta
g}\right\|}_{L^{2}(0,\infty)}$ $\displaystyle\leq$
$\displaystyle\frac{C_{0}}{\lambda^{3/2}\chi_{n,\alpha}^{1/4}(c)}{\left\|{\theta
g}\right\|}_{L^{2}(0,\infty)}.$
Moreover, using the fact that
$\left|P_{n}^{(0,\alpha)}(\cos(\frac{2\theta}{\lambda}))\right|\leq
P_{n}^{(0,\alpha)}(1)=O(n^{\alpha})$, that the cosine function is decreasing on $(0,\frac{\pi}{2})$, and that
$|B_{n}|=O(n^{1/2})$, we obtain
$\left|\displaystyle\frac{(-1)^{n}B_{n}}{\lambda}\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}g(\theta)P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda})\right)\left(\cos(\frac{\theta}{\lambda})\right)^{\alpha+1/2}\sin(\frac{\theta}{\lambda})d\theta\right|$
$\displaystyle\leq$ $\displaystyle\frac{|B_{n}|}{\lambda^{2}}\int_{\lambda
t_{n}}^{\lambda\frac{\pi}{2}}\left|g(\theta)\right|\left|P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda})\right)\right|\left(\cos(\frac{\theta}{\lambda})\right)^{\alpha+1/2}\theta
d\theta$ $\displaystyle\leq$
$\displaystyle\frac{C}{\lambda^{2}\chi^{1/4}_{n,\alpha}(c)}{\left\|{\theta^{3/2}g}\right\|}_{L^{2}(0,\infty)}.$
Finally, there exist constants $a>1$ and $b>0$ such that
$a_{n}(G_{\lambda})=\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda^{2}}\mathcal{H}_{0}(g)(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda})+\frac{1}{n^{a}\chi^{1/4}_{n,\alpha}(c)}O(\frac{1}{\lambda^{b}}).$
(46)
Hence, we obtain
$F_{{\lambda_{j}}}^{N_{k}}(\theta)=\chi_{(0,\frac{\pi}{2}\lambda_{j})}(\theta)\displaystyle\sum_{n=0}^{N_{k}[\lambda_{j}]}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}\right)a_{n}(G_{\lambda_{j}})\varphi^{\alpha}_{n,c}(\cos(\frac{\theta}{\lambda_{j}}))$
$\displaystyle=$
$\displaystyle\sum_{n=0}^{N_{k}[\lambda_{j}]}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}\right)a_{n}(G_{\lambda_{j}})\left(\chi_{(0,\lambda_{j}t_{n})}(\theta)\varphi_{n}(\cos(\frac{\theta}{\lambda_{j}}))+\chi_{(\lambda_{j}t_{n},\frac{\pi}{2}\lambda_{j})}(\theta)\varphi_{n}(\cos(\frac{\theta}{\lambda_{j}}))\right)$
$\displaystyle=$
$\displaystyle\sum_{n=0}^{N_{k}[\lambda_{j}]}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}\right)\left(\frac{\chi_{n,\alpha}^{1/4}(c)}{\lambda^{2}_{j}}\mathcal{H}_{0}(g)(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}})+\frac{1}{n^{a}\chi^{1/4}_{n,\alpha}(c)}O(\frac{1}{\lambda_{j}^{b}})\right)\chi_{(0,\lambda_{j}\frac{\pi}{2})}(\theta)\chi_{n,\alpha}^{1/4}(c)J_{0}\left(\frac{\chi_{n,\alpha}^{1/2}(c)}{\lambda_{j}}\theta\right)$
$\displaystyle+$
$\displaystyle\sum_{n=0}^{N_{k}[\lambda_{j}]}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}\right)a_{n}(G_{\lambda_{j}})\chi_{(\lambda_{j}t_{n},\frac{\pi}{2}\lambda_{j})}(\theta)\left((-1)^{n}B_{n}\left(\cos(\frac{\theta}{\lambda_{j}})\right)^{\alpha+1/2}P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda_{j}})\right)-\chi_{n,\alpha}^{1/4}(c)J_{0}\left(\frac{\chi_{n,\alpha}^{1/2}(c)}{\lambda_{j}}\theta\right)\right)$
$\displaystyle+$ $\displaystyle O(\frac{1}{\lambda^{\varepsilon}_{j}})$
$\displaystyle=$
$\displaystyle\chi_{(0,\frac{\pi}{2}\lambda_{j})}(\theta)\frac{1}{\lambda_{j}}\sum_{n=0}^{N_{k}[\lambda_{j}]}m\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}\right)\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}J_{0}\left(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}}\theta\right)\mathcal{H}_{0}(g)(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}})+O(\frac{1}{\lambda^{\varepsilon}_{j}}).$
where $\varepsilon>0.$ Indeed, from [29], we have
$\displaystyle(-1)^{n}B_{n}\left(\cos(\frac{\theta}{\lambda})\right)^{\alpha+1/2}P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda})\right)$
$\displaystyle=$
$\displaystyle(2n+\alpha+1)^{1/2}\left(\cos(\frac{\theta}{\lambda})\right)^{1/2}\left(\frac{\theta/\lambda}{\sin(\theta/\lambda)}\right)^{1/2}J_{0}\left(2(n+\frac{\alpha+1}{2})\frac{\theta}{\lambda}\right)$
$\displaystyle+$
$\displaystyle\frac{1}{\lambda^{1/2}}O(\frac{(2\theta)^{1/2}}{n})$
$\displaystyle=$
$\displaystyle(2n+\alpha+1)^{1/2}J_{0}\left((2n+\alpha+1)\frac{\theta}{\lambda}\right)+O(\frac{1}{n}),$
and by using (10), one gets $\chi_{n,\alpha}(c)\sim(2n+\alpha+1)^{2}$, and
concludes that
$(-1)^{n}B_{n}\left(\cos(\frac{\theta}{\lambda})\right)^{\alpha+1/2}P_{n}^{(0,\alpha)}\left(\cos(\frac{2\theta}{\lambda})\right)-\chi_{n,\alpha}^{1/4}(c)J_{0}\left(\frac{\chi_{n,\alpha}^{1/2}(c)}{\lambda}\theta\right)=O(\frac{1}{n}).$
Further, from [5], we have
${\left\|{\varphi^{\alpha}_{n,c}}\right\|}_{L^{\infty}(0,1)}=O(\chi^{1/2}_{n,\alpha}(c))$,
then we obtain
$a_{n}(G_{\lambda_{j}})=O\Big(\frac{\chi^{1/2}_{n,\alpha}(c)}{\lambda_{j}^{2}}\Big).$
Finally, as $j\to\infty$, we get
$F^{N_{k}}=\mathcal{H}_{0}\left(\chi_{(0,N_{k})}m(.)\mathcal{H}_{0}(g)\right).$
∎
## References
* [1] W.O. Amrein, A. M. Hinz and D. B. Pearson, Sturm-Liouville Theory: Past and Present, Birkhäuser, Basel-Boston-Berlin, (2005).
* [2] A. I. Aptekarev, V. S. Buyarov & I. S. Degeza, Asymptotic behavior of the $L^{p}$-norms and the entropy for general orthogonal polynomials. Russian Acad. Sci. Sb. Math. 82 (1995), 373–395.
* [3] J. J. Betancor and K. Stempak, Relating multipliers and transplantation for Fourier–Bessel expansions and Hankel transform. Tohoku Math. J. 53 (2001), 109–129.
* [4] A. Bonami and A. Karoui, Uniform Approximation and Explicit Estimates for the Prolate Spheroidal Wave Functions. Constr. Approx. 43 (2016), 15–45.
* [5] M. Boulsane and A. Karoui, The Finite Hankel Transform Operator: Some Explicit and Local Estimates of the Eigenfunctions and Eigenvalues Decay Rates, J. Four. Anal. Appl, 24 , (2018), 1554-1578.
* [6] M. Boulsane, P. Jaming and A. Souabni, Mean convergence of prolate spheroidal series and their extensions, J. Functional Analysis, 277, (2019)
* [7] S. Bochner , Summation of multiple Fourier series by spherical means. Trans. Amer. Math. Soc 40 (1936), 175–207.
* [8] V. Casarino and M. M. Peloso, $L^{p}$-summability of Riesz means for the sublaplacian on complex spheres, J. London Math. Soc. (2) 83 (2011) 137–152.
* [9] P. Chen, X. T. Duong, D. He, S. Lee and L. Yan, Almost everywhere convergence of Bochner-Riesz means for the Hermite operator. arXiv:2006.05689v3.
* [10] Ó. Ciaurri and L. Roncal, The Bochner–Riesz means for Fourier–Bessel expansions. Journal of Functional Analysis 228 (2005), 83–113.
* [11] Ó. Ciaurri and L. Roncal, The Bochner–Riesz means for Fourier–Bessel expansions: Norm inequalities for the maximal operator and almost everywhere convergence. Journal of Approximation Theory 167 (2013), 121–146.
* [12] Ó. Ciaurri and J. L. Varona, An Uniform Boundedness for Bochner-Riesz Operators Related to the Hankel Transform. J. Inequal. Appl. 7 (2002), 759–777.
* [13] L. Colzani and G. Travaglini, Estimates for Riesz Kernels of Eigenfunction Expansions of Elliptic Differential Operators on Compact Manifolds. Journal of Functional Analysis 96 (1991), 1–30.
* [14] E.B. Davies, Heat Kernels and Spectral Theory, Cambridge University Press, Cambridge, (1989).
* [15] J. Horvath, L’oeuvre mathématique de Marcel Riesz I. Cahiers du séminaire d’histoire des mathématiques, tome 3 (1982), 83-121.
* [16] S. Igari, On The Multipliers of Hankel Transform. Tohoku Math. Journal. 24(1972), 201-206.
* [17] A. Karoui and I. Mehrzi, Asymptotic behaviors and numerical computations of the eigenfunctions and eigenvalues associated with the classical and circular prolate spheroidal wave functions. Journal of Applied Mathematics and Computation 218 (2012), 10871–10888.
* [18] A. Karoui and A. Souabni, Generalized Prolate Spheroidal Wave Functions: Spectral Analysis and Approximation of Almost Band-limited Functions. J. Four. Anal. Appl. 22 (2016), 383–412.
* [19] A. Karoui and A. Souabni, Weighted finite fourier transform operator:Uniform approximations of the eigenfunctions, eigenvalues decay and behaviour. J.Sci.Comp. 71 (2) (2017), 547–570.
* [20] G. Mauceri , Riesz means for the eigenfunction expansions for a class of hypo-elliptic differential operators. Ann. Inst. Fourier 31 (1981), 115–140.
* [21] C. Meaney, Divergent Cesàro and Riesz means of Jacobi and Laguerre expansions. Proceedings of the American Mathematical Society 131 (2003), 3123–3128.
* [22] D. Müller, On Riesz means of eigenfunction expansions for the Kohn Laplacian. J. Reine Angew. Math 401 (1989), 113–121.
* [23] A. YA. Olenko. Upper bound on $\sqrt{x}J_{\mu}(x)$ and its applications. Integral Transforms and Special Functions. 17 (2006), 455–467.
* [24] M. Riesz, Sur les fonctions conjuguées. Math. Z. 27 (1927), 218–244.
* [25] D. Slepian and H. O. Pollak, Prolate spheroidal wave functions, Fourier analysis and uncertainty I, Bell System Tech. J. 40 (1961), 43–64.
* [26] D. Slepian, Prolate spheroidal wave functions, Fourier analysis and uncertainty–IV: Extensions to many dimensions; generalized prolate spheroidal functions, Bell System Tech. J. 43 (1964), 3009–3057.
* [27] D. Slepian, Some comments on Fourier analysis, uncertainty and modeling. SIAM Rev. 25 (1983) 379-393.
* [28] L. Song, J. Xiao and X. Yan, Preduals of quadratic Campanato spaces associated to operators with heat kernel bounds. J. Potential Analysis 41 (2014), 849–867.
* [29] G. Szegö, Orthogonal polynomials, Fourth edition, American Mathematical Society, Colloquium Publications, Vol. XXIII. American Mathematical Society, Providence, R.I., 1975.
* [30] L. L. Wang and J. Zhang, A new generalization of the PSWFs with applications to spectral approximations on quasi-uniform grids. Appl. Comput. Harmon. Anal. 29 (2010), 303–329.
# RecSSD: Near Data Processing for
Solid State Drive Based Recommendation Inference
Mark Wilkening, Harvard University, 33 Oxford St., Cambridge, Massachusetts, USA, <EMAIL_ADDRESS>
Udit Gupta, Harvard University / Facebook, 33 Oxford St., Cambridge, Massachusetts, USA, <EMAIL_ADDRESS>
Samuel Hsia, Harvard University, 33 Oxford St., Cambridge, Massachusetts, USA, <EMAIL_ADDRESS>
Caroline Trippel, Facebook, 1 Hacker Way, Menlo Park, California, USA, <EMAIL_ADDRESS>
Carole-Jean Wu, Facebook, 1 Hacker Way, Menlo Park, California, USA, <EMAIL_ADDRESS>
David Brooks, Harvard University, 33 Oxford St., Cambridge, Massachusetts, USA, <EMAIL_ADDRESS>
Gu-Yeon Wei, Harvard University, 33 Oxford St., Cambridge, Massachusetts, USA, <EMAIL_ADDRESS>
(2021)
###### Abstract.
Neural personalized recommendation models are used across a wide variety of
datacenter applications including search, social media, and entertainment.
State-of-the-art models comprise large embedding tables that have billions of
parameters requiring large memory capacities. Unfortunately, large and fast
DRAM-based memories levy high infrastructure costs. Conventional SSD-based
storage solutions offer an order of magnitude larger capacity, but have worse
read latency and bandwidth, degrading inference performance. RecSSD is a near
data processing based SSD memory system customized for neural recommendation
inference that reduces end-to-end model inference latency by 2$\times$
compared to using COTS SSDs across eight industry-representative models.
Keywords: near data processing, neural networks, solid state drives
Copyright: ACM. Price: 15.00. DOI: 10.1145/3445814.3446763. Journal year: 2021. Submission ID: asplos21main-p1455-p. ISBN: 978-1-4503-8317-2/21/04.
Conference: Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '21), April 19–23, 2021, Virtual, USA.
CCS concepts: Hardware, External storage; Computer systems organization, Neural networks.
## 1\. Introduction
Recommendation algorithms are used across a variety of Internet services such
as social media, entertainment, e-commerce, and search (Gupta et al., 2019;
Zhao et al., 2019a; Zhou et al., 2018, 2019; Underwood, 2019). In order to
efficiently provide accurate, personalized, and scalable recommendations to
users, state-of-the-art algorithms use deep learning based solutions. These
algorithms consume a significant portion of infrastructure capacity and cycles
in industry datacenters. For instance, compared to other AI-driven
applications, recommendation accounts for 10$\times$ the infrastructure
capacity in Facebook’s datacenter (Gupta et al., 2019; Naumov et al., 2019;
Lui et al., 2020). Similar capacity requirements can be found at Google,
Alibaba, and Amazon (Zhao et al., 2019a; Zhou et al., 2018, 2019).
One of the key distinguishing features of neural recommendation models is
processing categorical input features using large embedding tables. While
large embedding tables enable higher personalization, they consume up to
hundreds of GBs of storage (Gupta et al., 2019; Park et al., 2018). In fact,
in many cases, the size of recommendation models is set by the amount of
memory available on servers (Gupta et al., 2019). A promising alternative is
to store embedding tables in SSDs. While SSDs offer orders of magnitude higher
storage capacities than main memory systems, they exhibit slower read and
write performance. To hide the longer SSD read and write latencies, previous
SSD based systems overlap computations from other layers in the recommendation
models and cache frequently accessed embedding vectors in DRAM-based main
memory (Eisenman et al., 2018b; Zhao et al., 2020; Zhao et al., 2019b).
We propose RecSSD, a near data processing (NDP) solution customized for
recommendation inference that improves the performance of the underlying SSD
storage for embedding table operations. In order to fully utilize the internal
SSD bandwidth and reduce round-trip data communication overheads between the
host CPU and SSD memory, RecSSD offloads the entire embedding table operation,
including gather and aggregation computations, to the SSDs. Compared to
baseline SSD, we demonstrate that RecSSD provides a 4$\times$ improvement in
embedding operation latency and 2$\times$ improvement in end-to-end model
latency on a real OpenSSD system. In addition to offloading embedding
operations, RecSSD exploits the locality patterns of recommendation inference
queries. RecSSD demonstrates that a combination of host-side and SSD-side
caching complement NDP and reduce end-to-end model inference latency. To
demonstrate the feasibility and practicality of the proposed design in server-class datacenter systems, we implement RecSSD on a real, open-source Cosmos+OpenSSD system within the Micron UNVMe driver library.
The key contributions of this paper are:
* •
We design RecSSD, the first NDP-based SSD system for recommendation inference.
Improving the performance of conventional SSD systems, the proposed design
targets the main performance bottleneck to datacenter scale recommendation
execution using SSDs. Furthermore, the latency improvement further enables
recommendation models with higher storage capacities at reduced infrastructure
cost.
* •
We implement RecSSD in a real system on top of the Cosmos+OpenSSD hardware.
The implementation demonstrates the viability of Flash-based SSDs for
industry-scale recommendation. In order to provide a feasible solution for
datacenter scale deployment, we implement RecSSD within the FTL firmware; the
interface is compatible with existing NVMe protocols, requiring no hardware
changes.
* •
We evaluate the proposed design across eight industry representative models
across various use cases (e.g., social media, e-commerce, entertainment). Of
the eight, our real system evaluation shows that five models — whose runtime
is dominated by compute-intensive FC layers — achieve comparable performance
using SSD compared to DRAM. The remaining three models are dominated by
memory-bound, embedding table operations. On top of the highly optimized
hybrid DRAM-SSD systems, we demonstrate that RecSSD improves performance by up
to 4$\times$ for individual embedding operations, translating into up to
2$\times$ end-to-end recommendation inference latency reduction.
## 2\. Background
### 2.1. Recommendation Systems
Often found in commercial applications, recommendation systems recommend items
to users by predicting said items’ values in the context of the users’
preferences. In fact, meticulously tuned personalized recommendation systems
form the backbone of many internet services – including social media,
e-commerce, and online entertainment (Zhou et al., 2018, 2019; Naumov et al.,
2019; He et al., 2017; Zhao et al., 2019a) – that require real-time responses.
Modern recommendation systems implement deep learning-based solutions that
enable more sophisticated user-modeling. Recent work shows that deep-learning
based recommendation systems not only drive product success (Underwood, 2019;
Chui et al., 2018; Xie et al., 2018) but also dominate the datacenter capacity
for AI training and inference (Gupta et al., 2019; Hazelwood et al., 2018;
Naumov et al., 2020). Thus, there exists a need to make datacenter-scale
recommendation solutions more efficient and scalable.
Figure 1. Recommendation models process both categorical and continuous input
features.
Overview of model architecture As shown in Figure 1, deep learning-based
recommendation models comprise both fully-connected (FC) layers and embedding
tables of various sizes. FC layers stress compute capabilities by introducing
regular MAC operations while embedding table references stress memory
bandwidth by introducing irregular memory lookups. Based on the specific
operator composition and dimensions, recommendation models span a diverse
range of architectures. For instance, the operators that combine outputs from
Bottom FC and embedding table operations depend on the application use case.
Furthermore, recommendation models implement a wide range of sizes of FC
layers and embedding tables.
Processing categorical inputs Unique to recommendations, models process
categorical input features using embedding table operations. Embedding tables
are organized such that each row is a unique embedding vector typically
comprising 16, 32, or 64 learned features (i.e., number of columns for the
table). For each inference, a set of embedding vectors, specified by a list of
IDs (e.g., multi-hot encoded categorical inputs) is gathered and aggregated
together. Common operations for aggregating embedding vectors together include
sum, averaging, concatenation, and matrix multiplication (Naumov et al.,
2019; Zhao et al., 2019a; Zhou et al., 2018, 2019); Figure 1 shows an example
using summation. Inference requests are often batched together to amortize
control overheads and better utilize computational resources. Additionally,
models often comprise many embedding tables. Currently, production-scale
datacenters store embedding tables in DRAM, and CPUs perform embedding table
operations using optimizations such as vectorized instructions and software
prefetching (Caf, [n.d.]b).
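Concretely, the gather-and-aggregate semantics described above (with sum aggregation, as in Caffe2's SparseLengthsSum) can be sketched as follows; the table sizes and IDs are illustrative, scaled-down stand-ins for production dimensions:

```python
import numpy as np

def embedding_sum(table, ids, lengths):
    """Gather rows of `table` by ID, then sum-reduce per request.
    `lengths[i]` gives the number of consecutive IDs in `ids` that
    belong to request i (its multi-hot encoded categorical inputs)."""
    out = np.empty((len(lengths), table.shape[1]), dtype=table.dtype)
    offset = 0
    for i, n in enumerate(lengths):
        out[i] = table[ids[offset:offset + n]].sum(axis=0)  # gather + aggregate
        offset += n
    return out

# Illustrative, scaled-down sizes: 100K-row table of 32-dim vectors,
# 80 lookups per request, batch of two requests.
rng = np.random.default_rng(0)
table = rng.random((100_000, 32), dtype=np.float32)
ids = rng.integers(0, 100_000, size=2 * 80)
out = embedding_sum(table, ids, np.array([80, 80]))
print(out.shape)  # (2, 32)
```

Batching simply concatenates the ID lists of several requests, which is why batched inference amortizes per-command overheads.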
The embedding table operations pose unique challenges:
1. (1)
Capacity: Industry-scale embedding tables have up to hundreds of millions of
rows leading to embedding tables that often require up to $\sim$10GBs of
storage (Gupta et al., 2019). In fact, publications from industry illustrate
that the aggregate capacity of all embedding tables in a neural recommendation model can require TBs of storage (Zhao et al., 2020; Zhao et al., 2019b).
2. (2)
Irregular Accesses: Categorical input features are sparse, multi-hot encoded
vectors. High sparsity leads to a small fraction of embedding vectors being
accessed per request. Furthermore, access patterns between subsequent requests from different users can be quite different, causing embedding table operations to incur irregular accesses.
3. (3)
Low Compute Intensity: The overall compute intensity of the embedding tables
is orders of magnitude lower than that of other deep learning workloads, precluding
efficient execution using recently proposed SIMD, systolic array, and dataflow
hardware accelerators (Gupta et al., 2019).
These three features – large capacity requirements, irregular memory accesses,
and low compute intensity – make Flash technology an interesting target for
embedding tables.
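A back-of-the-envelope calculation makes the compute-intensity point concrete; the 32-dimension float32 parameters are illustrative, not drawn from a specific production table:

```python
def flops_per_byte(dim, dtype_bytes=4):
    """Sum-aggregation performs one add per vector element read, so the
    intensity is `dim` adds against `dim * dtype_bytes` bytes of traffic."""
    return dim / (dim * dtype_bytes)

# A 32-dim float32 embedding sum does ~0.25 FLOP per byte moved, far
# below what is needed to keep MAC-heavy accelerators busy.
print(flops_per_byte(32))  # 0.25
```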
### 2.2. Flash memory systems
Figure 2. In order to support a high performance and simple logical block
interface to the host while handling the peculiarities of NAND Flash memories,
SSDs are designed with a Microprocessor operating alongside dedicated memory
controllers.
Architecture of Flash NAND flash memory is the most widely used SSD building
block on the market. Compared to traditional disk-based storage systems, NAND
flash memories offer higher performance in terms of latency and bandwidth for
reads and writes (Andersen and Swanson, 2010). Figure 2 illustrates the
overall architecture of NAND Flash storage systems. To perform a read
operation, the host communicates over PCIe using an NVMe protocol to a host
controller on the SSD. The host requests logical blocks, which are served by a
flash translation layer (FTL) running on a microprocessor on the SSD. The FTL
schedules and controls an array of Flash controllers, which are organized per
channel and provide specific commands to all the Flash dies (chips) on a
channel and DMA capabilities across the multiple channels. In order to
transfer data between the Flash controller’s DMA engines and the host NVMe DMA
engine, the controller uses an on-board DRAM buffer.
Flash Translation Layer (FTL) In order to maintain compatibility with existing
drivers and file systems, Flash SSD systems implement the FTL. The FTL exposes
a logical block device interface to the host system while managing the
underlying NAND Flash memory system. This includes performing key functions
such as (1) maintaining indirect mapping between logical and physical pages,
(2) maintaining a log-like write mechanism to sequentially add data in erase
blocks and invalidate stale data (Rosenblum and Ousterhout, 1992), (3) garbage
collection, and (4) wear leveling. As shown in Figure 2, to perform this
diverse set of tasks, the FTL runs on a general purpose microprocessor.
Performance characteristics of SSD storage Compared to DRAM-based main memory
systems, Flash-based storage systems have orders of magnitude higher storage
densities (Andersen and Swanson, 2010) enabling higher capacities at lower
infrastructure cost, around 4-8x cheaper than DRAM per bit (Eisenman et al.,
2018a). Despite these advantages, Flash poses many performance challenges. One
single flash memory package provides a limited bandwidth of 32-40MB/sec
(Agrawal et al., 2008). In addition, writes to flash memory are often much
slower, incurring $O$(ms) latencies. To help address these limitations, SSDs
are built to expose significant internal bandwidth by organizing flash memory
packages as an array of connected channels (e.g., 2-10) handled by a single
memory controller. Since logical blocks can be striped over multiple flash
memory packages, data accesses can be conducted in parallel to provide higher
aggregated bandwidth and hide high latency operations through concurrent work.
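A first-order sketch of this channel-level parallelism, assuming each channel streams at roughly the per-package rate quoted above (real channels may be bus-limited or faster):

```python
def internal_read_bw_mb_s(channels, per_channel_mb_s=35):
    """Ideal-scaling model: striping logical blocks across channels lets
    every channel stream concurrently, so internal bandwidth grows
    linearly with channel count.  35 MB/s is the midpoint of the
    32-40 MB/s per-package figure cited above."""
    return channels * per_channel_mb_s

# Across the 2-10 channel range, the ideal model spans 70-350 MB/s.
for ch in (2, 10):
    print(ch, internal_read_bw_mb_s(ch))
```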
## 3\. SSD Storage for Neural Recommendation
To better understand the role of SSD storage for neural recommendation
inference, we begin with initial characterization. First, we present the
memory access pattern characterization for recommendation models running in a
cloud-scale production environmentand describe the locality optimization
opportunities for performing embedding execution on SSDs. Then, we take a step
further to study the impact of storing embedding tables and performing
associated computation in SSDs as opposed to DRAM (Ke et al., 2019). The
characterization studies focus on embedding table operations, followed by the
evaluation on the end-to-end model performance.
### 3.1. Embedding access patterns in production models
One important property of SSD systems is that SSDs operate as block devices
where data is transferred in coarse chunks. This is an important factor when
considering efficient bandwidth use of SSDs. The hardware is designed for
sequential disk access, where data is streamed in arbitrarily large chunks.
However, larger access granularity penalizes performance for workloads that
require random, sparse accesses – such as embedding table operations in
neural recommendation models. Therefore, it is important to understand the unique
memory access patterns of embedding tables. Furthermore, caching techniques
become even more important to exploit temporal reuse and maximize spatial
locality from block accesses.
Figure 3 depicts the reuse distribution of embedding tables at page granularities of 256B, 1KB, and 4KB. The x-axis represents pages accessed over
the execution of real-time recommendation inference serving (sorted by the
corresponding hit counts in the ascending order) whereas the y-axis shows the
cumulative hit counts, by analyzing embedding table accesses logged for
recommendation models running in a cloud-scale production environment. Access
patterns to embedding tables follow the power-law distribution. Depending on
the page sizes, the slope of the tail changes. The majority of reuse remains
concentrated in a few hot memory regions — a few hundred pages capture 30% of
reuses while caching a few thousand pages can extend reuse over 50%.
The concentration of hot pages varies across individual embedding tables.
Figure 4 characterizes the memory locality patterns across different,
individual embedding tables. Using a 16-way, LRU, 4KB page cache of varying
cache capacities, the hit rate varies wildly from under 10% to over 90% across
the different embedding tables of recommendation models running in a cloud-
scale production environment. As the capacity of the page cache increases,
more embedding vectors can be captured in the page cache, leading to higher
reuses. With a 16MB page cache per embedding table, more than 50% of reuses
can be achieved across all the embedding tables analyzed in this study. The
specific page cache capacity per embedding table can be further optimized for
better efficiency.
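The page-cache experiment can be approximated with a small simulation; this sketch uses a fully associative LRU cache in place of the 16-way set-associative design used in the study, and a synthetic trace that only mimics the skew of production traffic:

```python
import random
from collections import OrderedDict

def lru_hit_rate(accesses, cache_bytes, page_bytes=4096, row_bytes=128):
    """Hit rate of an LRU page cache over a trace of embedding-row IDs.
    Consecutive rows share a page (128 B rows model 32-dim float32
    vectors); full associativity stands in for the 16-way cache."""
    rows_per_page = page_bytes // row_bytes
    capacity = cache_bytes // page_bytes
    cache, hits = OrderedDict(), 0
    for row_id in accesses:
        page = row_id // rows_per_page
        if page in cache:
            hits += 1
            cache.move_to_end(page)          # refresh recency
        else:
            cache[page] = True
            if len(cache) > capacity:
                cache.popitem(last=False)    # evict least-recently used
    return hits / len(accesses)

# Synthetic skewed trace: 90% of accesses hit a small hot region,
# 10% fall in a long tail over a 1M-row table.
random.seed(0)
trace = [random.randrange(1024) for _ in range(9000)] + \
        [random.randrange(1_000_000) for _ in range(1000)]
random.shuffle(trace)
print(f"hit rate: {lru_hit_rate(trace, cache_bytes=16 * 2**20):.2f}")
```

Sweeping `cache_bytes` over such traces reproduces the qualitative trend of Figure 4: hit rate rises steeply once the cache covers the hot pages.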
Figure 3. Access patterns to neural recommendation embedding tables follow the
power-law distribution. Figure 4. Locality patterns vary significantly across
different embedding tables. Using a 16-way LRU 4KB page cache of varying total
capacities, the hit rate varies wildly from under 10% to over 90% across
different embedding tables.
Locality in embedding table accesses influences the design and performance of
SSD systems in many ways. First, on-board SSD caching is difficult due to the
limited DRAM capacity and the potentially large reuse distances. Despite this,
the distribution of reuse and the relatively small collection of hot pages
suggest reasonable effectiveness of static partitioning strategies, where hot
pages can be stored in host-side DRAM. But, most importantly, the varying page
reuse patterns (Figure 4) suggest that, although caching can in some cases be used to deal effectively with block access, strategies for more efficiently handling sparse accesses are also needed. Previous work (Eisenman et al., 2018b)
has thoroughly investigated advanced caching techniques, while we propose
orthogonal solutions which specifically target increasing the efficiency of
sparse accesses. We evaluate our proposed techniques by using somewhat simpler
caching strategies (standard LRU software caching and static partitioning) and
sweeping the design space across a variety of input locality distributions.
Figure 5. Using a table configuration typical of industry scale models (Gupta
et al., 2020, 2019) and a range of batch sizes, the Sparse Length Sum (SLS)
embedding table operation slows down significantly using SSD storage over
DRAM.
### 3.2. Performance of individual embedding operations
Given their unique memory access patterns, storing embedding tables in SSD
versus DRAM has a large impact on performance given the characteristics of the
underlying memory systems. Figure 5 illustrates the performance of a single
embedding table operation using DRAM versus SSD across a range of batch-sizes.
The embedding table has one million rows, with an embedding vector dimension
of 32, and 80 lookups per table, typical for industry-scale models such as
Facebook’s embedding-dominated recommendation networks (Gupta et al., 2020,
2019). For an optimized DRAM-based embedding table operation, we analyze the
performance of the SparseLengthsSum operation in Caffe2 (Caf, [n.d.]a). As
shown in Figure 5, compared to the DRAM baseline, accessing embedding tables
stored in the SSD incurs three orders of magnitude longer latencies. This is a
result of software overhead in accessing embedding tables over PCIe as well as
the orders-of-magnitude lower read bandwidth in the underlying SSD system —
10K IOPS, or 10 MB/s, of random read bandwidth on SSD versus 1 GB/s on DRAM.
Thus, while SSD storage offers appealing capacity advantage for growing
industry neural recommendation models, there is significant room to improve
the performance of embedding table operations.
### 3.3. Performance of end-to-end recommendation models
While embedding tables enable recommendation systems to more accurately model
user interests, as shown in Figure 1, embedding is only a component when
considering end-to-end recommendation inference. Thus, to understand the end-
to-end performance impact by offloading embedding tables to the SSD memory, we
characterize the performance impact on recommendation inference over a
representative variety of network model architectures.
Our evaluations use eight open-source recommendation models (Gupta et al.,
2020) representing industry-scale inference use cases from Facebook, Google,
and Alibaba (Zhou et al., 2018, 2019; Zhao et al., 2019a; Naumov et al., 2019;
He et al., 2017; Gupta et al., 2019). For the purposes of this study, models
are clustered into two categories based on the respective performance
characteristics: embedding-dominated and MLP-dominated. MLP-dominated models,
such as Wide and Deep (WD), Multi-Task Wide and Deep (MTWND), Deep Interest
(DIN), Deep Interest Evolution (DIEN), and Neural Collaborative Filtering
(NCF), spend the vast majority of their execution time on matrix operations.
On the other hand, embedding-dominated models, such as DLRM-RMC1, DLRM-RMC2,
and DLRM-RMC3, spend the majority of their time processing embedding table
operations. We refer the reader to (Gupta et al., 2020) for detailed operator
breakdowns and benchmark model characterizations.
Figure 6 shows the execution time of the eight recommendation models at a
batch-size of 64 when embedding tables are stored in DRAM and in SSD,
respectively. The execution time for MLP-dominated models remains largely
unaffected between the two memory systems. Compared to DRAM, storing tables in
SSD for WND, MTWND, DIEN, and NCF increases the model latency by 1.01$\times$,
1.01$\times$, 1.09$\times$, and 1.01$\times$. On the other hand, storing
embedding tables in SSD instead of DRAM significantly impacts the execution
time for embedding-dominated models. For instance, the execution time of
embedding-dominated models such as DLRM-RMC1, DLRM-RMC2, and DLRM-RMC3 degrades
by several orders of magnitude.
Figure 6. Performance degradation from using Flash based embedding table
operations is model dependent. Storing tables in SSD for WND, MTWND, DIEN, and
NCF increases model latency by 1.01$\times$, 1.01$\times$, 1.09$\times$, and
1.01$\times$, versus DRAM.
### 3.4. Opportunities for acceleration and optimization
Given the performance characterization of individual embedding operations and
end-to-end models when embeddings are stored in SSDs, we identify many
opportunities for inference acceleration and optimization. First, the
overwhelming majority of execution time in MLP-dominated models is devoted to
matrix operations; thus SSD systems offer an exciting solution to store
embedding tables in high-density storage substrates, lowering infrastructure
costs for datacenter scale recommendation inference.
While SSDs are an appealing target for MLP-dominated models, there is
significant room for performance improvement, particularly when embedding
table operations are offloaded to SSDs for embedding-dominated recommendation
models. To bridge the performance gap, this paper proposes to use near data
processing (NDP) by leveraging the existing compute capability of commodity
SSDs. Previous work has shown that NDP-based SSD systems can improve
performance across a variety of different application spaces such as databases
and graph analytics (Ouyang et al., 2014; Schroeder et al., 2016). NDP
solutions work particularly well when processing gather-reduce operations over
large quantities of input data using lightweight computations. Embedding table
operations follow this compute paradigm as well. NDP can help reduce round-
trip counts and latency overheads in PCIe communication as well as improve the
SSD bandwidth utilization by co-locating compute with the Flash-based storage
systems (more detail in Section 4).
In summary, the focus of this work is to demonstrate the viability of SSD-
based storage for the MLP-dominated recommendation models and to customize
NDP-based SSD systems for neural recommendation in order to unlock the
advantages of SSD storage capacity for the embedding-dominated models.
## 4\. RecSSD Design
We present RecSSD, a near-data processing (NDP) solution for efficient
embedding table operations on SSD memory. Compared to traditional SSD storage
systems, RecSSD increases effective bandwidth to Flash memories by utilizing
internal SSD bandwidth rather than external PCIe, greatly reduces the unused
data transmitted over PCIe by packing useful data together into returned
logical blocks, and reduces command and control overheads in the host driver
stack by issuing fewer I/O commands for the same amount of data. To
maintain compatibility with the existing NVMe protocol and drivers, RecSSD is
implemented within the FTL of the SSD, requiring no modifications to the
hardware substrate and paving the way for datacenter scale deployment. This
section describes the overall RecSSD design and implementation. First we
outline how embedding operations are mapped to the FTL in SSD systems; next,
we detail how RecSSD exploits temporal locality in embedding table operations
to improve performance; and finally, we describe the feasibility of
implementing RecSSD in real systems.
### 4.1. Mapping Embedding Operations to the FTL
RecSSD is designed to accelerate embedding table operations for recommendation
inference. In most high-level machine learning frameworks, these embedding
operations are implemented as specific custom operators. These operators can
be implemented using a variety of backing hardware/software technologies,
typically DRAM based data structures for conventional embedding operations.
RecSSD implements embedding operations using SSD storage by moving computation
into the SSD FTL, and on the host using custom NDP based drivers within the
higher-level framework operator implementation.
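Concretely, the operator RecSSD targets has simple gather-reduce semantics. The following is a minimal NumPy reference sketch (ours, for illustration; not the Caffe2 implementation):

```python
import numpy as np

def sparse_lengths_sum(table, indices, lengths):
    """Reference semantics of SparseLengthsSum: gather embedding rows by
    index, then sum each consecutive group of lengths[i] rows into one
    result row."""
    out = np.zeros((len(lengths), table.shape[1]), dtype=table.dtype)
    pos = 0
    for i, n in enumerate(lengths):
        out[i] = table[indices[pos:pos + n]].sum(axis=0)
        pos += n
    return out

# Toy 4-row table with 2-dimensional embeddings.
table = np.array([[1., 0.], [0., 1.], [2., 2.], [3., 3.]])
# Two lookups: rows {0, 2} and rows {1, 2, 3}.
result = sparse_lengths_sum(table, indices=[0, 2, 1, 2, 3], lengths=[2, 3])
# result == [[3., 2.], [5., 6.]]
```

RecSSD moves exactly this gather-and-accumulate loop into the FTL, so only the reduced result rows cross the PCIe bus.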
Given their large storage requirements, embedding table operations (e.g.,
SparseLengthsSum in Caffe2) span multiple pages within SSD systems. A standard
FTL provides highly optimized software that supports individual page
scheduling and maintenance; RecSSD operates on top of request queues and data
buffers designed for individual Flash page requests and operations. In order
to support multi-page SparseLengthsSum (SLS) operations, we add a scheduling
layer – with accompanying buffer space and request queues – on top of the
existing page scheduling layer. The proposed SLS scheduling layer feeds
individual page requests from a set of in-flight SLS requests into the
existing page-level scheduler to guarantee high throughput across SLS
requests. The existing page-level scheduling proceeds as normal to ensure page
operations maximize the available internal memory parallelism.
Figure 7 details the proposed RecSSD design, which augments SSD systems with
NDP to improve internal Flash memory bandwidth and overall performance of
embedding table operations.
Figure 7. The lifetime of an SSD-based SLS operator. The addition of an SLS
request buffer and a specialized embedding cache support the multi-page
operation.
Data structures In particular, to support NDP SLS, RecSSD adds two major
system components: a pending-SLS-request buffer and a specialized embedding
cache. These components are colored red in Figure 7.
Each SLS operation allocates an entry in the pending SLS request buffer. Each
entry contains five major elements: (Input Config) buffer space to store SLS
configuration data passed from the host, (Status) various data structures
storing reformatted input configuration and counters to track completion
status, (Pending Flash Page Requests) a queue of pending Flash page read
requests to be submitted to the low-level page request queues, (Pending Host
Page Requests) a queue of pending result logical block requests to be serviced
to the host upon completion, and (Result Scratchpad) buffer space for those
result pages.
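As a rough illustration of the entry layout (field names are ours, not the firmware's), the five elements might be modeled as:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class SLSRequestEntry:
    # (1) Input Config: raw SLS configuration DMA'd from the host.
    input_config: bytes = b""
    # (2) Status: reformatted inputs and completion counters.
    pages_pending: int = 0
    pages_done: int = 0
    # (3) Pending Flash page read requests.
    flash_page_queue: deque = field(default_factory=deque)
    # (4) Pending result logical-block requests for the host.
    host_page_queue: deque = field(default_factory=deque)
    # (5) Result Scratchpad: buffer space for result pages.
    result_scratchpad: bytearray = field(default_factory=bytearray)

    @property
    def complete(self) -> bool:
        return self.pages_pending > 0 and self.pages_done == self.pages_pending

entry = SLSRequestEntry(pages_pending=3)
entry.flash_page_queue.extend([0x10, 0x22, 0x4F])   # pages to read
entry.pages_done = 3                                # all reads finished
# entry.complete is now True
```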
Initiating embedding request When the FTL receives an SLS request in the form
of a write-like NVMe command, the FTL allocates an SLS request entry. The FTL
then triggers the DMA of the configuration data from the host using the NVMe
host controller (step 1a). Upon receipt of that configuration data, the FTL
will need to process the data, initializing completion status counters and
populating custom data structures containing the reformatted input data
(populating element 2 - Status). This processing step computes which Flash
pages must be accessed and separates input embeddings by Flash page, such that
the per-page processing computation can easily access its own input
embeddings. During this scan of the input data, a fast path may also check for
availability of input embeddings in an embedding cache (discussed later in
this section), avoiding Flash page read requests (step 2a), and otherwise
placing those Flash page requests in the pending queue (step 2b). Upon completion of
the configuration processing the request entry is marked as configured, and
pending Flash page requests may be pulled and fed into the low-level page
request queues (step 3a). If the page exists within the page cache already,
the page may be directly processed (step 3b). When the FTL receives an SLS
read-like NVMe command (asynchronous with steps 2-5), it searches for the
associated SLS request entry and populates the pending host page request queue
(step 1b).
Issuing individual Flash requests At the top level of the FTL scheduler polling
loop, the scheduler will maintain a pointer to an entry in the SLS request
buffer. Before processing low-level request queues, the scheduler will fill
the queues from the current SLS entry’s pending Flash page request queue. The
scheduler will then perform an iteration of processing the low-level page
request queues, and increment the SLS request buffer pointer regardless of
completion, such that requests are handled fairly in a round-robin fashion.
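A minimal sketch of this fairness policy (our simplification; the real scheduler also interleaves page-level maintenance work):

```python
from collections import deque

def fill_low_level_queue(sls_entries, low_level_queue, rr_ptr, budget=2):
    """One round-robin iteration: pull up to `budget` pending Flash page
    requests from the current SLS entry into the low-level page queue,
    then advance the pointer regardless of whether that entry is finished,
    so all in-flight SLS requests are serviced fairly."""
    entry = sls_entries[rr_ptr]
    for _ in range(min(budget, len(entry))):
        low_level_queue.append(entry.popleft())
    return (rr_ptr + 1) % len(sls_entries)

entries = [deque([1, 2, 3]), deque([7])]   # two SLS requests' pending pages
llq = deque()
ptr = 0
ptr = fill_low_level_queue(entries, llq, ptr)   # pulls pages 1, 2 from entry 0
ptr = fill_low_level_queue(entries, llq, ptr)   # pulls page 7 from entry 1
# llq now holds [1, 2, 7]; ptr has wrapped back to 0
```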
Returning individual Flash requests Upon completion of a Flash page read
request which is associated with an SLS request (step 4), the extraction and
reduction computation will be triggered for that page. The embeddings required
for the request which reside in that page will be read from the page buffer
entry and accumulated into the appropriate result embedding in the result
buffer space for that SLS request (step 5). The reformatted input
configuration allows the page processing function to quickly index which
embeddings need to be processed and appropriately update the completion
counters.
Returning embedding requests Again at the top level of the FTL scheduler
polling loop, the scheduler will check for completed host page requests within
an SLS request. If completed pages are ready, and the NVMe host controller is
available, the scheduler will trigger the controller to DMA the result pages
back to the host (step 6). Upon completion of all result pages in an SLS
request, the SLS request entry is deallocated. The NVMe host controller
will automatically track completed pages and complete NVMe host commands.
### 4.2. Exploiting temporal locality in embedding operations
Multi-threading and Pipelining Aside from the base NDP implementation, there
are a number of conventional optimizations that can be applied on top of the
NDP Flash operation. Multi-threading and software pipelining can be used to
overlap NDP SLS I/O operations with the rest of the neural network
computation. For this we use a threadpool of SLS workers to fetch embeddings
and feed post-SLS embeddings to neural network workers. We match our SLS
worker count to the number of independent available I/O queues in our SSD
driver stack. We then match our neural network workers to the available CPU
resources.
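The overlap can be sketched with a standard thread pool; `fetch_embeddings` and `run_neural_net` below are hypothetical stand-ins for the NDP SLS I/O and the dense MLP compute:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_embeddings(batch):          # SLS worker: stands in for NDP SLS I/O
    return [x * 2 for x in batch]

def run_neural_net(embeddings):       # NN worker: stands in for MLP compute
    return sum(embeddings)

NUM_IO_QUEUES = 4   # match SLS workers to independent driver I/O queues

def pipelined_inference(batches):
    """Overlap SLS I/O with neural-net compute: while batch i is in the
    MLP, batch i+1's embeddings are already being fetched."""
    results = []
    with ThreadPoolExecutor(max_workers=NUM_IO_QUEUES) as sls_pool:
        futures = [sls_pool.submit(fetch_embeddings, b) for b in batches]
        for fut in futures:                       # consume results in order
            results.append(run_neural_net(fut.result()))
    return results

out = pipelined_inference([[1, 2], [3, 4]])
# out == [6, 14]
```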
DRAM caching is another technique which has been previously studied (Eisenman
et al., 2018b) in the context of recommendation inference serving. With our
NDP implementation, there is the option for both host DRAM caching and SSD
internal DRAM caching.
Host-side DRAM Caching Because our NDP SLS operator returns accumulated result
embeddings to the host, we cannot use our workload’s existing NDP SLS requests
to populate a host DRAM embedding cache. In order to still make use of
available host DRAM, we implement a static partitioning technique utilizing
input data profiling which can partition embedding tables such that frequently
accessed embeddings are stored in host DRAM, while infrequently used
embeddings are stored on the SSD. This solution is motivated by the
characterization in Section 3.1, showcasing the power law distribution of page
access. Because there exist relatively few highly accessed embeddings, static
partitioning becomes a viable solution. With this feature, our system requests
the SSD embeddings using our NDP function, and post processes the returned
partial sums to include embeddings contained in the DRAM cache on the host.
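A sketch of the profiling-based split (our simplification; the real profiler operates on production access traces):

```python
from collections import Counter

def partition_table(access_trace, table_ids, dram_fraction=0.25):
    """Profile-guided static partitioning: the most frequently accessed
    `dram_fraction` of embedding IDs go to the host-DRAM partition; the
    long tail stays on the SSD."""
    counts = Counter(access_trace)
    ranked = sorted(table_ids, key=lambda i: counts[i], reverse=True)
    cut = int(len(table_ids) * dram_fraction)
    hot = set(ranked[:cut])
    return hot, set(table_ids) - hot

trace = [0, 0, 0, 1, 1, 2, 3, 3, 3, 3]          # power-law-like accesses
hot, cold = partition_table(trace, table_ids=range(8), dram_fraction=0.25)
# hot == {0, 3}: the two most popular of the 8 IDs land in host DRAM
```

Because the access distribution follows a power law, a small static hot set captures most lookups without any runtime cache maintenance.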
SSD-side DRAM Caching For host DRAM caching, it is entirely feasible to use a
large fully associative LRU software cache. However, for SSD internal DRAM
caching, we must more carefully consider the implementation overheads of our
software caching techniques. The FTL runs on a relatively simple CPU, with
limited DRAM space. The code and libraries available are designed for
embedded systems: compact, with low computational overhead and more
consistent performance. The SSD FTL is
designed without dynamic memory allocation and garbage collection. When
implementing any level of associativity, the cost of maintaining LRU or pseudo
LRU information on every access must be balanced against cache hit-rate gains.
For the current evaluations we implement a direct-mapped SSD-side DRAM cache.
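A direct-mapped cache needs only one tag compare per lookup, which is why it suits the FTL's constrained CPU and static allocation; a minimal sketch:

```python
class DirectMappedCache:
    """Direct-mapped embedding cache (a sketch of the SSD-side DRAM cache):
    each ID maps to exactly one slot (id % num_slots), so lookups and fills
    are O(1) with no LRU bookkeeping."""
    def __init__(self, num_slots):
        self.tags = [None] * num_slots    # which embedding ID occupies each slot
        self.data = [None] * num_slots
        self.hits = self.misses = 0

    def lookup(self, emb_id, fetch):
        slot = emb_id % len(self.tags)
        if self.tags[slot] == emb_id:
            self.hits += 1
            return self.data[slot]
        self.misses += 1                  # cold miss or conflict: evict in place
        self.tags[slot] = emb_id
        self.data[slot] = fetch(emb_id)
        return self.data[slot]

cache = DirectMappedCache(num_slots=4)
fetch = lambda i: f"vec{i}"
cache.lookup(1, fetch)   # miss (cold)
cache.lookup(1, fetch)   # hit
cache.lookup(5, fetch)   # miss: 5 % 4 == 1, conflicts with ID 1
# cache.hits == 1, cache.misses == 2
```

The trade-off is visible in the conflict miss above: associativity would avoid it, but at the cost of LRU metadata updates on every access.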
### 4.3. Feasibility and Implementation: NDP SLS Interface
Our custom interface maintains complete compatibility with the existing NVMe
protocol, utilizing a single unused command bit to indicate embedding
commands. Other than this bit, our interface simply uses the existing command
structure of traditional read/write commands. Embedding processing parameters
are passed to the SSD system with a special write-like command, which
initiates embedding processing. A subsequent read-like command gathers the
resulting pages. The parameters passed include embedding vector dimensions
such as attribute size and vector length, the total number of input embeddings
to be gathered, the total number of resulting embeddings to be returned, and a
list of (input ID, result ID) pairs specifying the input embeddings and their
accumulation destinations. Adding a restriction that this list be sorted by
input ID enables more efficient processing on the SSD system, which contains a
much less powerful CPU than the host system. The configuration-write command
and result-read command are associated with each other internally in the SSD
by embedding a request ID into the starting logical block address (SLBA) of
the requests. The SLBA is set as the starting address of the targeted
embedding table added with the unique request ID. By assuming a minimum table
size and alignment constraints, the two inputs can be separated within the SSD
system using the modulus operator.
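The SLBA packing can be sketched as follows, assuming a hypothetical minimum table size of 2^20 blocks:

```python
MIN_TABLE_SIZE = 1 << 20   # assumed minimum table size/alignment, in blocks

def encode_slba(table_base, request_id):
    """Pack a request ID into the table's starting LBA; valid while
    request_id < MIN_TABLE_SIZE and table bases are aligned."""
    assert table_base % MIN_TABLE_SIZE == 0 and request_id < MIN_TABLE_SIZE
    return table_base + request_id

def decode_slba(slba):
    """Recover (table_base, request_id) using the modulus operator,
    as described above."""
    return slba - slba % MIN_TABLE_SIZE, slba % MIN_TABLE_SIZE

slba = encode_slba(table_base=3 * MIN_TABLE_SIZE, request_id=42)
base, rid = decode_slba(slba)
# base == 3 * MIN_TABLE_SIZE and rid == 42
```

The alignment assumption is what lets the SSD separate the two fields without any extra metadata in the command.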
We also note that in addition to maintaining compatibility with existing NVMe
protocol, by implementing support for embedding table operations purely
through software within the SSD FTL, we ensure RecSSD is fully compatible with
existing commodity SSD hardware. This method of implementation relies on the
lightweight nature of the required computation, such that the SSD
microprocessor does not become overly delayed in its scheduling functions by
performing the extra reduction computation.
## 5\. Methodology and Implementation
This section describes the methodology and experimental setup used to evaluate
the proposed RecSSD design. Here we summarize the OpenSSD platform, Micron
UNVMe, recommendation models, and input traces used. Additional details can be
found in the Appendix.
OpenSSD In order to evaluate the efficacy of offloading the SLS operator onto
the SSD FTL, we implement a fully functional NDP SLS operator in the open
source Cosmos+ OpenSSD system (Ope, [n.d.]). The development platform, Cosmos+
OpenSSD, has a 2TB capacity, fully functional NVMe Flash SSD, and a
customizable Flash controller and FTL firmware. In order to provide a feasible
solution for datacenter-scale deployment, we implement RecSSD within the FTL
firmware; the interface is compatible with existing NVMe protocol, requiring
no hardware changes.
Micron UNVMe In addition to the NVMe compatible OpenSSD system, the RecSSD
interface is implemented within the Micron UNVMe driver library (unv, [n.d.]).
We modify the UNVMe driver stack to include two additional commands, built on
top of the existing command structures for NVMe read/write commands and distinguished
by setting an additional unused command bit, as described in Section 4. The
command interface enables flexible input data and command configurations,
while maintaining compatibility with the existing NVMe host controller. The
UNVMe driver makes use of a low latency userspace library, which polls for the
completion of NVMe read commands, and uses the maximum number of
threads/command queues.
Neural recommendation models To evaluate RecSSD, we use a diverse set of eight
industry-representative recommendation models provided in DeepRecInfra (Gupta
et al., 2020), implemented in Python using Caffe2 (Caf, [n.d.]a). In order to
evaluate the performance of end-to-end recommendation models on real systems,
we integrate the SparseLengthsSum operations (embedding table operations in
Caffe2) with the custom NDP solution. To offload embedding operations to
RecSSD, we design a low-overhead Python-level interface using CTypes, which
allows us to load the modified UNVMe as a shared library and call NVMe and NDP
SLS I/O commands. In the future these operations could be ported into a custom
Caffe2 operator function, and compiled along with the other Caffe2 C++
binaries.
Input traces and Performance Metrics In addition to the recommendation models
themselves, we instrument the networks with synthetically generated input
traces. We instrument the synthetic trace generators from Facebook's
open-sourced DLRM (Naumov et al., 2019) with the locality analysis
from industry-scale recommendation systems shown in Figure 4. The synthetic
trace generator is instrumented with likelihood distributions for input
embeddings across stack distances of previously requested embedding vectors.
We generate exponential distributions based on a parameter value, $K$.
Sweeping $K$ generates input traces with varying degrees of locality; for
instance, setting $K$ equal to 0, 1, and 2 generates traces with 13%, 54%, and
72% unique accesses respectively (Gupta et al., 2019; Eisenman et al., 2018b).
Given the high cache miss rates and our locality analysis, we assume a single
embedding vector per SSD page of 16KB. For the evaluation results, we assume
embedding tables have 1 million vectors and host-side DRAM caches store up to
2K entries per embedding table.
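The locality knob can be illustrated with a toy generator (ours; the real generator fits measured stack-distance distributions): larger $K$ flattens the distribution and raises the fraction of unique accesses.

```python
import random

def synthetic_trace(length, num_ids, k, seed=0):
    """Locality-controlled trace sketch. With probability tied to k we
    re-reference a recent ID drawn from an exponential stack-distance
    distribution; otherwise we draw a fresh uniform ID. Larger k means
    less reuse, mimicking the K sweep described above."""
    rng = random.Random(seed)
    history, trace = [], []
    for _ in range(length):
        if history and rng.random() < 1.0 / (k + 1.5):
            # stack distance ~ exponential: recent IDs strongly preferred
            d = min(int(rng.expovariate(1.0)), len(history) - 1)
            emb = history[-1 - d]
        else:
            emb = rng.randrange(num_ids)
        history.append(emb)
        trace.append(emb)
    return trace

t0 = synthetic_trace(1000, 10_000, k=0)
t2 = synthetic_trace(1000, 10_000, k=2)
# lower k gives more reuse, hence fewer unique IDs in the trace
```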
Because our prototype limits us to single-model single-SSD systems, we do not
focus our results on latency-bounded throughput, but rather direct request
latencies, a critical metric for determining the performance viability of SSD
based recommendation. We average latency results across many batches, ensuring
steady-state behavior.
Physical Compute Infrastructure All experiments are run on a Quad-core Intel
Skylake desktop machine. Our machine uses G.SKILL TridentZ Series 64GB (4 x
16GB) 288-Pin DDR4 SDRAM DDR4 3200 (PC4 25600) Desktop Memory Model
F4-3200C14Q-64GTZ DRAM. DRAM has nanosecond-scale latencies, and 10s of GB/s
in throughput. Our prototype SSD system supports 10K IOPs per channel with 8
channels and a page size of 16KB, leading to maximum throughput with
sequential read of just under 1.4GB/s. Newer SSD systems will have higher
throughput. Single page access latencies are in the 10s to 100s of
microseconds range.
## 6\. Evaluating RecSSD
Here we present empirical results evaluating the performance of RecSSD.
Overall, the results demonstrate that RecSSD provides up to 2$\times$ speedup
over baseline SSD for recommendation inference. This section first analyzes
the fundamental tradeoffs of RecSSD using micro-benchmarks based on embedding
table operations. Following the micro-benchmark analysis, the section compares
the performance of end-to-end recommendation models between baseline SSD
systems and RecSSD. Expanding upon this analysis, we present performance
tradeoffs between baseline SSD systems and RecSSD using both host-side and
SSD-side DRAM caching in order to exploit temporal locality in embedding table
accesses. Finally, the section conducts a sensitivity study on the impact of
individual recommendation network architectural parameters on RecSSD
performance. The sensitivity analysis provides insight into RecSSD’s
performance on future recommendation models.
Figure 8. The standalone performance of the SLS embedding operator.
Performance is shown for both sequential and strided access patterns, using
both conventional SSD interfaces and NDP interfaces, on a variety of batch
sizes.
### 6.1. Accelerating embedding table operations
Figure 8 presents the performance of embedding table operations (i.e.,
SparseLengthsSum in Caffe2 (Caf, [n.d.]b)). For RecSSD, the execution time is
categorized into four components (i.e., Config Write, Config Process,
Translation, and Flash Read) over a range of batch sizes. Config Write and Config Process
represent the time taken to transfer configuration data to the SSD and the
time to process the configuration, respectively; after the transfers, internal
data structures are populated and Flash page requests begin issuing.
Translation represents the time spent on processing returned Flash pages,
extracting the necessary input embedding vectors, and accumulating the vectors
into the corresponding result buffer space. Flash Read indicates the time in
which the FTL is managing and waiting on Flash memory operations.
It is difficult to compare the computational throughput of Translation
independently with the IO bandwidth of flash, as the computation is
synchronously tightly-coupled with the IO scheduling mechanisms within the
FTL. With hardware modification this computation could be decoupled and made
parallel. However, we can indirectly identify the bottleneck by observing
whether the dominant time in the FTL is spent on translation computation or
on Flash read operations.
Following the characterization from Section 3.1, we study two distinct memory
access patterns: SEQ and STR. The sequential (SEQ) memory access pattern
represents use cases where embedding table IDs are contiguous. This is
unlikely to happen in datacenter-scale recommendation inference applications,
as shown in Figure 4, but represents use cases with extremely high page
locality. The random (STR) memory access pattern is generated with strided
embedding table lookup IDs and is representative of access patterns where each
vector accessed is located on a unique Flash page. Given the large diversity
in recommendation use cases, as evidenced by the variety of recommendation
model architectures (Gupta et al., 2020), the two memory access patterns allow
us to study the performance characteristics of RecSSD across a wide variety of
use cases. Furthermore, while current recommendation use cases exhibit sparse
access patterns, future optimizations in profiling and restructuring embedding
tables may increase the locality.
Performance with low locality embedding accesses Under the Random memory
lookup access pattern, RecSSD achieves up to a 4$\times$ performance
improvement over baseline SSD. This performance improvement comes from the
increased memory level parallelism. RecSSD increases memory level parallelism
by concurrently executing Flash requests for each embedding operation,
increasing utilization of the internal SSD bandwidth. As shown in Figure 8,
roughly half the time in RecSSD's FTL is spent on Translation. Given the
limited hardware capability of the 1GHz, dual-core ARM A9 processor of the
Cosmos+ OpenSSD system (Ope, [n.d.]), we expect that with faster SSD
microprocessors or custom logic, the Translation time could be significantly
reduced.
Performance with high locality embedding accesses Compared to the baseline SSD
system using conventional NVMe requests, sequential access patterns with high
spatial locality result in poor NDP performance. Compared to random or
low-locality access patterns, sequential or high-locality embedding accesses
touch fewer Flash pages but still require commensurate compute resources to
aggregate embedding vectors. In the baseline system, the SSD page cache holds
pages while embedding vector requests are sequentially streamed through the
system and accumulated by the host CPU. While RecSSD also accesses fewer
Flash pages, the embedding vectors are aggregated using the dual-core ARM A9
processor on the Cosmos+ OpenSSD system; this accounts for over half the
execution time (Translation) as shown in Figure 8. With sequential accesses,
the benefit of aggregating on the faster server-class host Intel CPU
outweighs the overhead of issuing multiple NVMe commands. We anticipate that
more sophisticated processors on the NDP system would eliminate the slight
performance degradation of RecSSD.
Figure 9. NDP alone provides up to 7$\times$ performance improvement for some
full models given a simple naive configuration. Figure 10. Relative full
model performance improvement including caching techniques. The percentages
above each bar represent the hit rate of either the SSD cache (a-c) or the host
partition (d-f) for RecSSD. The baseline LRU cache hitrates always follow the
inverse of the locality distribution, with 84%, 44%, and 28% hits in the cache
corresponding to $K$ equal to 0, 1, and 2, respectively.
### 6.2. Accelerating end-to-end recommendation models
In addition to individual embedding table operations, here we evaluate RecSSD
across a selection of industry-representative recommendation models. To start,
we showcase the raw potential of NDP, by presenting the simplest naive
experimental configuration. Figure 9 presents the relative speedup of RecSSD
over a conventional SSD baseline, without operator pipelining and caching
techniques, and using randomly generated input indices. We observe that many
models exist where NDP provides no observable benefits, and for models where
performance is limited by embedding operations and SSD latencies, NDP can
provide substantial assistance with up to 7$\times$ speedup. The maximum
speedup across models shown here is higher than that of the individual
embedding operations (Figure 8) due to differences in underlying model
parameters such as feature size and indices per lookup as discussed in Section
6.4.
### 6.3. Exploiting Locality in end to end models
In addition to the end-to-end model results, we evaluate the performance of
RecSSD with operator pipelining and caching. These optimization techniques, as
presented in Section 4, are applied on top of RecSSD and conventional SSD
systems.
Figure 10(a-c) presents relative speedup results for RecSSD with just SSD-side
caching and the conventional SSD baseline with host-side caching. RecSSD
utilizes a large, but direct mapped, cache within the SSD DRAM while the
baseline utilizes a fully associative LRU cache within host DRAM. Batch sizes
are swept between 1 and 32, along with the three input trace locality
conditions $K=0,1,2$. Hit rates for RecSSD’s SSD DRAM cache are labeled above
each speedup bar. The baseline LRU cache hit rates follow the inverse of the
locality distribution, with 84%, 44%, and 28% hits in the cache corresponding
to $K$ equal to 0, 1, and 2 respectively. Note, the LRU cache hit rates span
the diverse set of embedding access patterns from the initial characterization
of production-scale recommendation models shown in Figure 4.
With high locality (i.e., low $K$), conventional SSD systems achieve higher
performance than RecSSD. On the other hand, with low locality RecSSD
outperforms the conventional baseline. This is because the direct mapped
caching hit rate cannot match that of the more complex fully associative LRU
cache on the host system, exemplified in the high batch size runs for RM1.
Furthermore, RM2 has lower SSD cache hit rates than RM1/3, a result of the
larger number of embedding lookups required per request and of temporal
locality existing across requests rather than across lookups. Even so, without host DRAM
caching, RecSSD outperforms the baseline by up to 1.5X for lower locality
traces, where many SSD pages must be accessed and the benefits of increased
internal bandwidth shine.
Figure 10(d-f) presents relative speedup results for RecSSD using static table
partitioning as well as SSD caching. With static table partitioning, we make
use of available host DRAM by statically placing the most frequently used
embeddings within the host DRAM cache as detailed in Section 4. The hit rates
labeled above each bar represent the hit rates of RecSSD in the statically
partitioned host DRAM cache, not the SSD cache.
Like the conventional SSD baseline, static partitioning helps leverage the
available host DRAM. For high temporal locality, however, it cannot match the
hit rate of the fully associative LRU cache. With higher batch sizes as well
as more indices per request (seen in RM2), the hit rate asymptotically
approaches 25%, the size of the static partition relative to the used ID
space. Overall, Figure 10 shows that with static partitioning,
RecSSD achieves a 2$\times$ performance improvement over the conventional SSD
baseline. This occurs when the baseline host LRU cache has a relatively low
hit rate such that many SSD pages must be accessed, while RecSSD is able to
achieve comparable host DRAM hit rates with static partitioning.
In general, the above results show that the advantages of RecSSD shine when
pages must be pulled from the SSD, and when the host level caching strategies
available for RecSSD (static partitioning) are of comparable effectiveness to
the baseline LRU software cache. Although RecSSD shows diminishing returns
with improved caching and locality, we note that because RecSSD is fully
compatible with the existing NVMe interface, it can be employed in tandem with
conventional strategies and switched based on the embedding table locality
properties.
### 6.4. Sensitivity study: Impact of model architecture
In this section we more closely examine the impact of model parameters
differentiating the performance of our benchmark models. Table 1 details the
parameter space of RM1/2/3. We specifically note that absolute table size does
not impact our results. Growing table sizes do provide motivation to move from
capacity-constrained DRAM to Flash SSDs; however, embedding lookup performance
depends on access patterns, not absolute table size.
Table 1. Differentiating benchmark parameters.

Benchmark | Feature Size | Indices | Table Count
---|---|---|---
RM1 | 32 | 80 | 8
RM2 | 64 | 120 | 32
RM3 | 32 | 20 | 10
In Figure 11a we see how feature size and quantization, which affect the size
of embedding vectors relative to the page size, show decreasing relative
performance as this ratio grows. This is because the baseline is able to make
more efficient use of block accesses as the lowest unit of memory access
approaches the size of a memory block, while RecSSD must perform more
computation on the SSD microprocessor per page accessed from Flash. In Figure
11b we see that although increasing table count diminishes performance, this
loss is quickly outweighed by the performance gained from the increased number
of indices per lookup. The performance loss from increasing table count is due to
the implementation of our NDP interface. Because a single NDP call handles a
single table, the amortization of command overheads is on a per table basis.
On the other hand, increasing the number of indices per lookup increases the
amortization of this control overhead as well as the value of accumulating
these embeddings on the SSD, where only one vector is sent to the host for all
the indices accumulated in a single lookup.
(a) Feature Size and Quantization
(b) Indices and Table Count
Figure 11. Examining the impact of model parameters on full model executions.
## 7\. Related Work
SSD systems Recent advances in flash memory technology have made it an
increasingly compelling storage system for at-scale deployment. Compared to
disk based solutions, SSDs offer 2.6$\times$ and 3.2$\times$ bandwidth per
Watt and bandwidth per dollar respectively (Andersen and Swanson, 2010).
Furthermore, given the high density and energy efficiency, SSDs are being used
as datacenter DRAM-replacements as well (Andersen and Swanson, 2010). In fact,
prior work from Google and Baidu highlight how modern SSD systems are being
used for web-scale applications in datacenters (Ouyang et al., 2014; Schroeder
et al., 2016). Furthermore, given recent advances in Intel’s Optane
technology, balancing DRAM-like latency and byte-addressability with SSD-like
density, we anticipate the type of applications that leverage SSD based
storage systems to widen (Izraelevitz et al., 2019). In fact, training
platforms for terabyte scale personalized recommendation models rely heavily
on SSD storage capabilities for efficient and scalable execution (Zhao et al.,
2020).
In order to enable highly efficient SSD execution, modern storage solutions
rely on programmable memory systems (Do et al., 2019). Leveraging this compute
capability, there has been much work on both software and hardware solutions
for Near Data Processing in SSDs for other datacenter applications (Do et al.,
2013; Seshadri et al., 2014; Wang et al., 2016; Ouyang et al., 2014; Gu et
al., 2016; Jin et al., 2017; Wang et al., 2014; Tiwari et al., 2013). Previous
works which target more general SSD NDP solutions have relied on hardware
modifications, complex programming frameworks, and heavily modified driver
subsystems to support the performance requirements of more complex and general
computational tasks. Our system trades-off this generality for simplicity and
application specific performance and cost efficiency.
Accelerating recommendation inference Given the ubiquity of AI and machine
learning workloads, there have been many proposals for accelerating deep
learning workloads. In particular, recent work illustrates that recommendation
workloads dominate the AI capacity in datacenters (Gupta et al., 2019). As a
result, recent work proposes accelerating neural recommendation. For example,
the authors in (Kwon et al., 2019; Hyun et al., 2019) propose a customized
memory management unit for AI accelerators (i.e., GPUs) in order to accelerate
address translation operations across multi-node hardware platforms. Given the
low-compute intensity of embedding table operations, recent work also explores
the role of near memory processing for Facebook’s recommendation models (Ke et
al., 2019). Similarly, researchers have proposed the application of flash
memory systems to store large embedding tables found in Facebook’s
recommendation models (Eisenman et al., 2018b), exploring advanced caching
techniques to alleviate challenges with large flash page sizes. These
techniques can be used in tandem with RecSSD. In this paper, we explore the
role of combining near-data processing and NAND flash memory systems for
at-scale recommendation in order to reduce overall infrastructure cost. Furthermore, we
provide a real system evaluation across a wide collection of recommendation
workloads (Gupta et al., 2020).
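The embedding-table operations discussed above are dominated by the SparseLengthsSum (SLS) primitive (Caf, [n.d.]b): for each output row, a variable number of embedding rows is gathered by index and pooled. A pure-Python sketch of the semantics (a reference model only, not the Caffe2 or RecSSD implementation; the toy table is illustrative) makes the low compute intensity evident, since almost all work is random reads:

```python
def sparse_lengths_sum(table, indices, lengths):
    """Reference semantics of SparseLengthsSum: for each output row,
    gather `lengths[i]` embedding rows by index and sum them elementwise.
    Nearly all the work is random reads of `table`, which is why the
    operator is memory-bound and a good near-data-processing candidate."""
    out, pos = [], 0
    for n in lengths:
        rows = [table[i] for i in indices[pos:pos + n]]  # random gathers
        pos += n
        out.append([sum(col) for col in zip(*rows)])     # cheap reduction
    return out

# toy 4-row table with 2-dimensional embeddings
table = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 1.0]]
out = sparse_lengths_sum(table, indices=[0, 2, 3], lengths=[2, 1])
# out == [[3.0, 2.0], [3.0, 1.0]]
```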
## 8\. Conclusion
In this paper we propose RecSSD, a near data processing solution customized
for neural recommendation inference. By offloading computations for key
embedding table operations, RecSSD reduces round-trip time for data
communication and improves internal SSD bandwidth utilization. Furthermore,
with intelligent host-side and SSD-side caching, RecSSD enables high
performance embedding table operations. We demonstrate the feasibility of
RecSSD by implementing it in a real-system using server-class CPUs and an
OpenSSD-compatible system with Micron's UNVMe driver library. RecSSD reduces
end-to-end neural recommendation inference latency by 4$\times$ compared to
off-the-shelf SSD systems and achieves performance comparable to DRAM-based memories.
As a result, RecSSD enables highly efficient and scalable datacenter neural
recommendation inference.
###### Acknowledgements.
We would like to thank the anonymous reviewers for their thoughtful comments
and suggestions. We would also like to thank Glenn Holloway for his valuable
technical support. This work was sponsored in part by National Science
Foundation Graduate Research Fellowships (NSFGRFP), and the ADA (Applications
Driving Architectures) Center.
## Appendix A Artifact Appendix
### A.1. Abstract
RecSSD is composed of a number of open-source artifacts. First, we implement
a fully functional NDP SLS operator in the open source Cosmos+ OpenSSD system
(Ope, [n.d.]), provided in the _RecSSD-OpenSSDFirmware_ repository (rec,
[n.d.]a). To maintain compatibility with the NVMe protocols, the RecSSD
interface is implemented within Micron's UNVMe driver library (unv, [n.d.]),
provided in the _RecSSD-UNVMeDriver_ repository (rec, [n.d.]c). To evaluate
RecSSD, we use a diverse set of eight industry-representative recommendation
models provided in DeepRecInfra (Gupta et al., 2020), implemented in Python
using Caffe2 (Caf, [n.d.]a) and provided in the _RecSSD-RecInfra_
repository (rec, [n.d.]b). In addition to the models themselves, we instrument
the open-source synthetic trace generators from Facebook's open-sourced DLRM
(Naumov et al., 2019) with our locality analysis from production-scale
recommendation systems, also included in the RecSSD-RecInfra repository.
### A.2. Artifact check-list (meta-information)
* •
Compilation: GCC, Python3, PyTorch, Caffe2, Xilinx SDK 2014.4
* •
Model: DeepRecInfra
* •
Run-time environment: Ubuntu 14.04
* •
Hardware: Cosmos+ OpenSSD, two Linux Desktop machines, remote PDU
* •
How much time is needed to prepare workflow (approximately)?: 4-8 hours, once
hardware is acquired
* •
How much time is needed to complete experiments (approximately)?: 10+ hours
* •
Publicly available?: Software will be open-sourced and publicly available.
Required hardware platform is potentially still purchasable through original
developers.
* •
Code licenses (if publicly available)?: GNU GPL
### A.3. Description
#### A.3.1. How to access
RecSSD is provided through a number of publicly available GitHub
repositories (rec, [n.d.]a, [n.d.]b, [n.d.]c), as well as a publicly available
archive on Zenodo, DOI: 10.5281/zenodo.4321943.
#### A.3.2. Hardware dependencies
Cosmos+ OpenSSD system (Ope, [n.d.]), two Linux Desktop class machines, and a
remote PDU for a fully remote workflow.
#### A.3.3. Software dependencies
Xilinx SDK 2014.4 for programming the OpenSSD. The Cosmos+ OpenSSD FTL
firmware and controller Bitstream. Python3, PyTorch, and Caffe2 for running
recommendation models.
#### A.3.4. Models
Uses recommendation model benchmarks from DeepRecInfra (Gupta et al., 2020),
and trace generation from Facebook’s open-sourced DLRM (Naumov et al., 2019).
### A.4. Installation
To set up the SSDDev machine, start by downloading the Cosmos+ OpenSSD
software available on their GitHub (ope, [n.d.]). You will need to install
Xilinx SDK 2014.4 and follow the instructions in their tutorial (ope, [n.d.])
to set up a project for the OpenSSD board. For RecSSD, we use the prebuilt
bitstream and associated firmware. After setting up the project, replace the
./GreedyFTL/src/ directory with the code from the RecSSD-OpenSSDFirmware
GitHub repository. The OpenSSD tutorial contains detailed instructions on
running the firmware and the physical setup of the hardware.
To set up the SSDHost machine, download and make the RecSSD-UNVMeDriver
repository. This repository provides a user-level driver library to connect
the RecSSD-RecInfra recommendation models to the OpenSSD device. Once the
SSDHost has been booted with the OpenSSD running, use lspci to detect the PCIe
device identifier of the board, and use unvme-setup bind PCIEID to attach the
driver to the specific device. Make note of ./test/unvme/libflashrec.so, which
must later be copied into RecSSD-RecInfra so that the Python3 runtime can
load and run the necessary driver functions to make use of our implemented NDP
techniques.
Next, download the RecSSD-RecInfra repository. Copy the libflashrec.so file
into ./models/libflashrec.so. Make sure to download and install Python3 and
PyTorch(pyt, [n.d.]).
### A.5. Experiment workflow
Detailed walk-throughs of the technical steps required are documented within
the provided individual repositories and in the OpenSSD tutorial (ope,
[n.d.]). At a high level, the expected workflow is as follows.
1. (1)
With the SSDHost machine powered off, use the Xilinx SDK on the SSDDev machine
to launch the FTL firmware on the OpenSSD.
2. (2)
Power on and boot the SSDHost machine. Connect the UNVMe driver library to the
device through unvme-setup bind.
3. (3)
Run /models/input/create_dist.sh within RecSSD-RecInfra to generate the
desired synthetic locality patterns for input traces.
4. (4)
Run the python based experimental sweeps scripts within RecSSD-RecInfra
/models/ to run various recommendation models using either baseline SSD
interfaces or our NDP interfaces.
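Assuming the FTL firmware has already been launched from the Xilinx SDK on the SSDDev machine (step 1), steps 2-4 on the SSDHost side can be scripted. A hedged Python sketch of that host-side sequence; the PCIe identifier and the name of the sweep script are placeholders, not verified paths in the repositories:

```python
import subprocess

def run(cmd, cwd=None):
    """Echo and run a shell command, aborting on failure."""
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True, cwd=cwd)

def host_side_workflow(pcie_id="0000:01:00.0"):
    # Step 2: attach the UNVMe user-space driver to the OpenSSD device
    # (determine the actual identifier beforehand with `lspci`)
    run(f"unvme-setup bind {pcie_id}")
    # Step 3: generate synthetic input traces with the desired locality
    run("./models/input/create_dist.sh", cwd="RecSSD-RecInfra")
    # Step 4: sweep the recommendation models over baseline SSD and NDP
    # interfaces (the sweep-script name here is hypothetical)
    run("python3 run_sweeps.py", cwd="RecSSD-RecInfra/models")
```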
### A.6. Evaluation and expected results
Most of our results are reported as inference latency, output from scripts run
on the SSDHost machine. We compare relative latency results across a large
number of batches in order to guarantee regular steady-state behavior. Figure
10 presents expected results for the important RM1, RM2, and RM3 models, while
Figure 11 presents results for an RM3-like model while tuning specific model
parameters.
Figure 8 reports breakdowns in time spent within the FTL for NDP requests
using microbenchmarks within the RecSSD-UNVMeDriver repository. To reproduce
these results, run ./test/unvme/unvme_embed_test. Unlike model latency
results, these measurements are performed within the FTL and directly reported
through output to the SDK, therefore they must be recorded from the SDK
running on the SSDDev machine.
Figures 3 and 4 use proprietary industry data and are not reproducible using
our open-sourced infrastructure.
## References
* (1)
* Caf ([n.d.]a) [n.d.]a. Caffe2. https://caffe2.ai/.
* Caf ([n.d.]b) [n.d.]b. Caffe2 Operator Catalog. https://caffe2.ai/docs/operators-catalogue.html#sparselengthssum.
* ope ([n.d.]) [n.d.]. Cosmos+ OpenSSD GitHub. https://github.com/Cosmos-OpenSSD/Cosmos-plus-OpenSSD.
* Ope ([n.d.]) [n.d.]. Cosmos+ OpenSSD Platform. http://www.openssd.io.
* ope ([n.d.]) [n.d.]. Cosmos+ OpenSSD Tutorial. https://github.com/Cosmos-OpenSSD/Cosmos-plus-OpenSSD/blob/master/doc/Cosmos%2B%20OpenSSD%202017%20Tutorial.pdf.
* pyt ([n.d.]) [n.d.]. PyTorch. https://pytorch.org/.
* rec ([n.d.]a) [n.d.]a. RecSSD-OpenSSDFirmware GitHub. https://github.com/wilkening-mark/RecSSD-OpenSSDFirmware.
* rec ([n.d.]b) [n.d.]b. RecSSD-RecInfra GitHub. https://github.com/wilkening-mark/RecSSD-RecInfra.
* rec ([n.d.]c) [n.d.]c. RecSSD-UNVMeDriver GitHub. https://github.com/wilkening-mark/RecSSD-UNVMeDriver.
* unv ([n.d.]) [n.d.]. UNVMe - A User Space NVMe Driver Project. https://github.com/zenglg/unvme.
* Agrawal et al. (2008) Nitin Agrawal, Vijayan Prabhakaran, Ted Wobber, John D. Davis, Mark Manasse, and Rina Panigrahy. 2008\. Design Tradeoffs for SSD Performance. In _USENIX 2008 Annual Technical Conference_ (Boston, Massachusetts) _(ATC’08)_. USENIX Association, USA, 57–70.
* Andersen and Swanson (2010) David G Andersen and Steven Swanson. 2010. Rethinking flash in the data center. _IEEE micro_ 4 (2010), 52–54.
* Chui et al. (2018) Michael Chui, J Manyika, M Miremadi, N Henke, R Chung, P Nel, and S Malhotra. 2018\. Notes from the AI Frontier: Insights from Hundreds of Use Cases.
* Do et al. (2013) Jaeyoung Do, Yang-Suk Kee, Jignesh M. Patel, Chanik Park, Kwanghyun Park, and David J. DeWitt. 2013\. Query Processing on Smart SSDs: Opportunities and Challenges. In _Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data_ (New York, New York, USA) _(SIGMOD ’13)_. Association for Computing Machinery, New York, NY, USA, 1221–1230. https://doi.org/10.1145/2463676.2465295
* Do et al. (2019) Jaeyoung Do, Sudipta Sengupta, and Steven Swanson. 2019\. Programmable solid-state storage in future cloud datacenters. _Commun. ACM_ 62, 6 (2019), 54–62.
* Eisenman et al. (2018a) Assaf Eisenman, Darryl Gardner, Islam AbdelRahman, Jens Axboe, Siying Dong, Kim Hazelwood, Chris Petersen, Asaf Cidon, and Sachin Katti. 2018a. Reducing DRAM Footprint with NVM in Facebook. In _Proceedings of the Thirteenth EuroSys Conference_ (Porto, Portugal) _(EuroSys ’18)_. Association for Computing Machinery, New York, NY, USA, Article 42, 13 pages. https://doi.org/10.1145/3190508.3190524
* Eisenman et al. (2018b) Assaf Eisenman, Maxim Naumov, Darryl Gardner, Misha Smelyanskiy, Sergey Pupyrev, Kim Hazelwood, Asaf Cidon, and Sachin Katti. 2018b. Bandana: Using non-volatile memory for storing deep learning models. _arXiv preprint arXiv:1811.05922_ (2018).
* Gu et al. (2016) Boncheol Gu, Andre S. Yoon, Duck-Ho Bae, Insoon Jo, Jinyoung Lee, Jonghyun Yoon, Jeong-Uk Kang, Moonsang Kwon, Chanho Yoon, Sangyeun Cho, Jaeheon Jeong, and Duckhyun Chang. 2016\. Biscuit: A Framework for Near-data Processing of Big Data Workloads. In _Proceedings of the 43rd International Symposium on Computer Architecture_ (Seoul, Republic of Korea) _(ISCA ’16)_. IEEE Press, Piscataway, NJ, USA, 153–165. https://doi.org/10.1109/ISCA.2016.23
* Gupta et al. (2020) Udit Gupta, Samuel Hsia, Vikram Saraph, Xiu Qiao Wang, Brandon Reagen, Gu-Yeon Wei, Hsien-Hsin S. Lee, David M. Brooks, and Carole-Jean Wu. 2020. DeepRecSys: A System for Optimizing End-To-End At-scale Neural Recommendation Inference. _ArXiv_ abs/2001.02772 (2020).
* Gupta et al. (2019) Udit Gupta, Xiaodong Wang, Maxim Naumov, Carole-Jean Wu, Brandon Reagen, David Brooks, Bradford Cottel, Kim Hazelwood, Bill Jia, Hsien-Hsin S Lee, et al. 2019\. The architectural implications of Facebook’s DNN-based personalized recommendation. _arXiv preprint arXiv:1906.03109_ (2019).
* Hazelwood et al. (2018) K. Hazelwood, S. Bird, D. Brooks, S. Chintala, U. Diril, D. Dzhulgakov, M. Fawzy, B. Jia, Y. Jia, A. Kalro, J. Law, K. Lee, J. Lu, P. Noordhuis, M. Smelyanskiy, L. Xiong, and X. Wang. 2018\. Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective. In _2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)_. 620–629. https://doi.org/10.1109/HPCA.2018.00059
* He et al. (2017) Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017\. Neural Collaborative Filtering. In _Proceedings of the 26th International Conference on World Wide Web_ (Perth, Australia) _(WWW ’17)_. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 173–182. https://doi.org/10.1145/3038912.3052569
* Hyun et al. (2019) Bongjoon Hyun, Youngeun Kwon, Yujeong Choi, John Kim, and Minsoo Rhu. 2019. NeuMMU: Architectural Support for Efficient Address Translations in Neural Processing Units. _Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems_ (2019).
* Izraelevitz et al. (2019) Joseph Izraelevitz, Jian Yang, Lu Zhang, Juno Kim, Xiao Liu, Amirsaman Memaripour, Yun Joon Soh, Zixuan Wang, Yi Xu, Subramanya R Dulloor, et al. 2019\. Basic performance measurements of the intel optane DC persistent memory module. _arXiv preprint arXiv:1903.05714_ (2019).
* Jin et al. (2017) Y. Jin, H. W. Tseng, Y. Papakonstantinou, and S. Swanson. 2017\. KAML: A Flexible, High-Performance Key-Value SSD. In _2017 IEEE International Symposium on High Performance Computer Architecture (HPCA)_. 373–384. https://doi.org/10.1109/HPCA.2017.15
* Ke et al. (2019) Liu Ke, Udit Gupta, Carole-Jean Wu, Benjamin Y. Cho, Mark Hempstead, Brandon Reagen, Xuan Zhang, David M. Brooks, Vikas Chandra, Utku Diril, Amin Firoozshahian, Kim Hazelwood, Bill Jia, Hsien-Hsin S. Lee, Mengxing Li, Bert Maher, Dheevatsa Mudigere, Maxim Naumov, Martin Schatz, Mikhail Smelyanskiy, and Xiu Qiao Wang. 2019. RecNMP: Accelerating Personalized Recommendation with Near-Memory Processing. _ArXiv_ abs/1912.12953 (2019).
* Kwon et al. (2019) Youngeun Kwon, Yunjae Lee, and Minsoo Rhu. 2019. TensorDIMM: A Practical Near-Memory Processing Architecture for Embeddings and Tensor Operations in Deep Learning. In _Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture_. 740–753.
* Lui et al. (2020) Michael Lui, Yavuz Yetim, Özgür Özkan, Zhuoran Zhao, Shin-Yeh Tsai, Carole-Jean Wu, and Mark Hempstead. 2020. Understanding Capacity-Driven Scale-Out Neural Recommendation Inference. arXiv:2011.02084 [cs.DC]
* Naumov et al. (2020) Maxim Naumov, John Kim, Dheevatsa Mudigere, Srinivas Sridharan, Xiting Wang, Whitney Zhao, Serhat Yilmaz, Changkyu Kim, Hector Yuen, Mustafa Ozdal, Krishnakumar Nair, Isabel Gao, Bor-Yiing Su, Jiyan Yang, and Mikhail Smelyanskiy. 2020. Deep Learning Training in Facebook Data Centers: Design of Scale-up and Scale-out Systems. _ArXiv_ abs/2003.09518 (2020).
* Naumov et al. (2019) Maxim Naumov, Dheevatsa Mudigere, Hao-Jun Michael Shi, Jianyu Huang, Narayanan Sundaraman, Jongsoo Park, Xiaodong Wang, Udit Gupta, Carole-Jean Wu, Alisson G. Azzolini, Dmytro Dzhulgakov, Andrey Mallevich, Ilia Cherniavskii, Yinghai Lu, Raghuraman Krishnamoorthi, Ansha Yu, Volodymyr Kondratenko, Stephanie Pereira, Xianjie Chen, Wenlin Chen, Vijay Rao, Bill Jia, Liang Xiong, and Misha Smelyanskiy. 2019\. Deep Learning Recommendation Model for Personalization and Recommendation Systems. _CoRR_ abs/1906.00091 (2019). http://arxiv.org/abs/1906.00091
* Ouyang et al. (2014) Jian Ouyang, Shiding Lin, Song Jiang, Zhenyu Hou, Yong Wang, and Yuanzheng Wang. 2014\. SDF: software-defined flash for web-scale internet storage systems. In _Proceedings of the 19th international conference on Architectural support for programming languages and operating systems_. 471–484.
* Park et al. (2018) Jongsoo Park, Maxim Naumov, Protonu Basu, Summer Deng, Aravind Kalaiah, Daya Khudia, James Law, Parth Malani, Andrey Malevich, Satish Nadathur, et al. 2018\. Deep learning inference in facebook data centers: Characterization, performance optimizations and hardware implications. _arXiv preprint arXiv:1811.09886_ (2018).
* Rosenblum and Ousterhout (1992) Mendel Rosenblum and John K. Ousterhout. 1992. The Design and Implementation of a Log-Structured File System. _ACM Trans. Comput. Syst._ 10, 1 (Feb. 1992), 26–52. https://doi.org/10.1145/146941.146943
* Schroeder et al. (2016) Bianca Schroeder, Raghav Lagisetty, and Arif Merchant. 2016\. Flash reliability in production: The expected and the unexpected. In _14th USENIX Conference on File and Storage Technologies (FAST 16)_. 67–80.
* Seshadri et al. (2014) Sudharsan Seshadri, Mark Gahagan, Sundaram Bhaskaran, Trevor Bunker, Arup De, Yanqin Jin, Yang Liu, and Steven Swanson. 2014\. Willow: A User-programmable SSD. In _Proceedings of the 11th USENIX Conference on Operating Systems Design and Implementation_ (Broomfield, CO) _(OSDI’14)_. USENIX Association, Berkeley, CA, USA, 67–80. http://dl.acm.org/citation.cfm?id=2685048.2685055
* Tiwari et al. (2013) Devesh Tiwari, Simona Boboila, Sudharshan S. Vazhkudai, Youngjae Kim, Xiaosong Ma, Peter J. Desnoyers, and Yan Solihin. 2013\. Active Flash: Towards Energy-efficient, In-situ Data Analytics on Extreme-scale Machines. In _Proceedings of the 11th USENIX Conference on File and Storage Technologies_ (San Jose, CA) _(FAST’13)_. USENIX Association, Berkeley, CA, USA, 119–132. http://dl.acm.org/citation.cfm?id=2591272.2591286
* Underwood (2019) Corinna Underwood. 2019\. Use Cases of Recommendation Systems in Business – Current Applications and Methods. https://emerj.com/ai-sector-overviews/use-cases-recommendation-systems/
* Wang et al. (2016) Jianguo Wang, Dongchul Park, Yannis Papakonstantinou, and Steven Swanson. 2016. Ssd in-storage computing for search engines. _IEEE Trans. Comput._ (2016).
* Wang et al. (2014) Peng Wang, Guangyu Sun, Song Jiang, Jian Ouyang, Shiding Lin, Chen Zhang, and Jason Cong. 2014. An Efficient Design and Implementation of LSM-tree Based Key-value Store on Open-channel SSD. In _Proceedings of the Ninth European Conference on Computer Systems_ (Amsterdam, The Netherlands) _(EuroSys ’14)_. ACM, New York, NY, USA, Article 16, 14 pages. https://doi.org/10.1145/2592798.2592804
* Xie et al. (2018) Xing Xie, Jianxun Lian, Zheng Liu, Xiting Wang, Fangzhao Wu, Hongwei Wang, and Zhongxia Chen. 2018. Personalized Recommendation Systems: Five Hot Research Topics You Must Know. https://www.microsoft.com/en-us/research/lab/microsoft-research-asia/articles/personalized-recommendation-systems/
* Zhao et al. (2020) Weijie Zhao, Deping Xie, Ronglai Jia, Yulei Qian, Ruiquan Ding, Ming ming Sun, and Ping Li. 2020. Distributed Hierarchical GPU Parameter Server for Massive Scale Deep Learning Ads Systems. _ArXiv_ abs/2003.05622 (2020).
* Zhao et al. (2019b) Weijie Zhao, Jingyuan Zhang, Deping Xie, Yulei Qian, Ronglai Jia, and Ping Li. 2019b. AIBox: CTR Prediction Model Training on a Single Node. In _Proceedings of the 28th ACM International Conference on Information and Knowledge Management_ (Beijing, China) _(CIKM ’19)_. Association for Computing Machinery, New York, NY, USA, 319–328. https://doi.org/10.1145/3357384.3358045
* Zhao et al. (2019a) Zhe Zhao, Lichan Hong, Li Wei, Jilin Chen, Aniruddh Nath, Shawn Andrews, Aditee Kumthekar, Maheswaran Sathiamoorthy, Xinyang Yi, and Ed Chi. 2019a. Recommending What Video to Watch Next: A Multitask Ranking System. In _Proceedings of the 13th ACM Conference on Recommender Systems_ (Copenhagen, Denmark) _(RecSys ’19)_. ACM, New York, NY, USA, 43–51. https://doi.org/10.1145/3298689.3346997
* Zhou et al. (2019) Guorui Zhou, Na Mou, Ying Fan, Qi Pi, Weijie Bian, Chang Zhou, Xiaoqiang Zhu, and Kun Gai. 2019. Deep interest evolution network for click-through rate prediction. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 33. 5941–5948.
* Zhou et al. (2018) Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. 2018\. Deep interest network for click-through rate prediction. In _Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_. ACM, 1059–1068.
# Scalable creation of silicon-vacancy color centers in diamond by ion
implantation through a 1-$\mu$m pinhole
L. Hunold<EMAIL_ADDRESS>Laboratory of Nano-Optics and
C$\mu$, University of Siegen, Siegen, Germany S. Lagomarsino Laboratory of
Nano-Optics and C$\mu$, University of Siegen, Siegen, Germany Istituto
Nazionale di Fisica Nucleare, Sezione di Firenze, Sesto Fiorentino, Italy
A.M. Flatae Laboratory of Nano-Optics and C$\mu$, University of Siegen,
Siegen, Germany H. Kambalathmana Laboratory of Nano-Optics and C$\mu$,
University of Siegen, Siegen, Germany F. Sledz Laboratory of Nano-Optics and
C$\mu$, University of Siegen, Siegen, Germany S. Sciortino Department of
Physics and Astronomy, University of Florence, Sesto Fiorentino, Italy
Istituto Nazionale di Fisica Nucleare, Sezione di Firenze, Sesto Fiorentino,
Italy N. Gelli Istituto Nazionale di Fisica Nucleare, Sezione di Firenze,
Sesto Fiorentino, Italy L. Giuntini Department of Physics and Astronomy,
University of Florence, Sesto Fiorentino, Italy Istituto Nazionale di Fisica
Nucleare, Sezione di Firenze, Sesto Fiorentino, Italy M. Agio
<EMAIL_ADDRESS>https://nano-optics.physik.uni-siegen.de. Laboratory
of Nano-Optics and C$\mu$, University of Siegen, Siegen, Germany National
Institute of Optics (INO), National Research Council (CNR), Florence, Italy
###### Abstract
The controlled creation of quantum emitters in diamond represents a major
research effort in the fabrication of single-photon devices. Here, we present
the scalable production of silicon-vacancy (SiV) color centers in single-
crystal diamond by ion implantation. The lateral position of the SiV is
spatially controlled by a 1-$\mu$m pinhole placed in front of the sample,
which can be moved with nanometer precision using a piezo stage. The initial
implantation position is controlled by monitoring the ion beam position with a
camera. In this way, silicon ions are implanted at the desired spots in an area
comparable to the diffraction limit. We discuss the role of ions scattered by
the pinhole and the activation yield of the SiV color centers for the creation
of single quantum emitters.
Color centers in diamond are considered a very promising platform for quantum
photonics devices due to their optical properties and the possibility to
create diamond nanostructures Pezzagna _et al._ (2011); Schröder _et al._
(2016); Sipahigil _et al._ (2016); Aharonovich and Neu (2018); Sipahigil and
Lukin (2019). Among them, the negatively-charged silicon-vacancy (SiV) center
has already achieved a number of important goals, such as bright emission
concentrated in the zero-phonon line (ZPL) Neu, Agio, and Becher (2012), two-
photon interference Sipahigil _et al._ (2014), spin preparation and readout
Rogers _et al._ (2014), coherent dipole-dipole interaction Evans _et al._
(2018), electroluminescence Lohrmann _et al._ (2011) and single-photon
emission in n-type diamond Flatae _et al._ (2020), temperature sensitivity
Nguyen _et al._ (2018) and photostability even at high temperatures
Lagomarsino _et al._ (2015).
Color centers can be obtained either during diamond growth Neu (2012) or by
ion implantation followed by thermal annealing Lagomarsino _et al._ (2018a).
Only in the second case can their position be laterally controlled, for
instance by focused ion-beam techniques Tamura _et al._ (2014); Schröder _et
al._ (2017); Zhou _et al._ (2018) or by aperture-type AFM tips Raatz _et
al._ (2019), which is crucial for device fabrication. Although these
techniques have demonstrated very high lateral resolution (down to about 10
nm), so far they have only been investigated at low ion energies (a few keV),
which limits the implantation depth to only a few nanometers. It is therefore
desirable to study the creation of color centers, in particular the SiV, in a
broader range of ion energies to gain flexibility in terms of implantation
depths.
Here, we implement a commercial 1-$\mu$m pinhole and a high-precision
translation stage into a beamline of a tandem accelerator to implant Si ions
in spots comparable to the diffraction limit. This is done in a controlled
manner, and in a wide range of energies (0.4-3 MeV) as well as fluences
($10^{8}$-$10^{14}$ cm-2). In this way, we present a flexible and easily
implementable approach for a localized implantation process. We further investigate the role
of ions scattered by the pinhole to determine the conditions for the creation
of single SiV centers. Finally, we demonstrate the scalable generation of
single emitters by measuring their antibunching characteristics.
Figure 1: (a) Implantation setup with pinhole, diamond sample on sapphire
plate and objective sending the PL created by the beam to a camera. Sample and
plate can be moved in $x$, $y$ (perp. to the beam) with a piezo stage. Inset
shows an SEM image of the 1 $\mu$m pinhole. (b) Beam image on the camera used
to set the initial implantation position. (c) Plan of implantation session A
(compare Tab. 1 for session labeling). (d) Corresponding PL map acquired by
confocal scan.
The implantation facility is described in detail in Ref. Lagomarsino _et al._,
2018b. In short, it consists of a 3 MV tandem accelerator employing a
cesium sputter ion source able to form ion beams from solid state samples. The
maximum energy depends on the ion state of charge, and amounts to 15 MeV in
the case of silicon. Implantation takes place in a vacuum chamber downstream
to the accelerator. The facility was previously used for versatile SiV color
center creation Lagomarsino _et al._ (2021) and is now upgraded with a system
for localized implantation as described in the following. Annealing of the
centers at $1200~{}^{\circ}$C is carried out in an alumina oven surrounded by
graphitic heaters, all placed in a stainless steel ultra-high vacuum chamber.
Figure 1a shows a scheme of the implantation setup. The beam enters the
configuration from the left and passes through the pinhole to reach a sapphire (Al2O3)
plate. Here it produces a photoluminescence (PL) signal, which is monitored by
a CMOS camera (ZWO ASI178MM) using a microscope objective. The PL spot is
enlarged to about 5 pixels (see Fig. 1b) in order to clearly discriminate it
from hot pixels on the camera. The initial implantation position is then set
by moving the sample into the beam until the desired location is reached. The
absolute accuracy of this overall process is evaluated to be approximately $\pm 5$
$\mu$m and is limited by the resolution of the camera. The relative
positioning of the following implantation spots is then very accurate and
limited by the precision of the translation stage (Q-545.240, Physik
Instrumente GmbH, 26 mm travel range, 1 nm resolution).
The commercial pinhole is from Thorlabs (P1H) and it has a nominal diameter of
1 $\mu$m, with a tolerance of +0.25/-0.1 $\mu$m and a circularity of $\geq$
80%. The scanning electron microscopy (SEM) picture (Fig. 1a inset) indicates
a smooth circular geometry and a diameter slightly larger than 1 $\mu$m, but
within the specified tolerance. The foil thickness around it was measured
optically to $\mathrm{d}_{\mathrm{foil}}$ = (27.5 $\pm$ 1.1) $\mu$m, which is
important to evaluate a possible effect of a pinhole tilt on the ion
throughput. We emphasize here, that for our 1 $\mu$m hole, this issue is not
as critical as in the case of nano-apertures Raatz _et al._ (2019).
Nonetheless, the relatively thick frame of the pinhole demands a precise
perpendicular mounting relative to the incoming ion beam. For the given system,
the horizontal and vertical tilts were measured to be < $0.3^{\circ}$, suitable for
this implantation process.
Table 1: Implantation sessions and corresponding parameters.

Session | Energy (MeV) | Fluence (cm-2) | Separation ($\mu$m) | Rows x Columns
---|---|---|---|---
A | 2.9 | $10^{8}$-$10^{14}$ | 5 | 10 x 5
B | 0.4 | $10^{8}$-$10^{14}$ | 5 | 10 x 5
C | 1 | $10^{9}$-$10^{11}$ | 10 | 3 x 8
D | 0.4 | $10^{10}$ | 10 | 1 x 5
Figure 1c illustrates a typical plan for an implantation matrix on an
electronic grade diamond sample (ElementSix, 2$\times$2$\times$0.5 mm3, face
orientation 110, roughness < 5 nm). The beam fluence is varied in order to
generate spots with $10^{6}$ down to about 1 implanted ion with an energy of 2.9 MeV,
corresponding to an implantation depth of around 1.1 $\mu$m. Matrices with
other parameters were created as well, the four most important ones for this
work are summarized in Table 1.
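The expected number of implanted ions per spot follows directly from the fluence and the pinhole area, which is worth making explicit. A small sketch of this arithmetic (the nominal 1-$\mu$m diameter is used; the actual hole is slightly larger):

```python
import math

def ions_per_spot(fluence_cm2, pinhole_diameter_um=1.0):
    """Expected number of implanted ions per spot:
    fluence (ions/cm^2) times the pinhole's open area (cm^2)."""
    radius_cm = pinhole_diameter_um * 1e-4 / 2.0
    area_cm2 = math.pi * radius_cm ** 2     # ~7.9e-9 cm^2 for a 1-um hole
    return fluence_cm2 * area_cm2

# a fluence of 1e14 cm^-2 gives roughly 10^6 ions per spot,
# while 1e8 cm^-2 gives on the order of a single ion
```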
After annealing, the PL signal of the formed emitters is acquired using a
confocal scanning setup described in detail in Ref. Kambalathmana _et al._ ,
2021. Figure 1d shows an exemplary result, reproducing the implanted matrix
except for the spots with $10^{3}$ Si ions or fewer. These are not visible because
their PL signal is overwhelmed by background noise associated with the diffuse
implantation of ions scattered at the pinhole. This issue is discussed further
below and was studied in detail to eventually enable the identification of
spots down to single emitters.
Figure 2: (a) Simulated distance dependencies of scattered ions for the case
of 2.9 MeV, 40∘ pinhole wall angle and three different pinhole-sample
distances corresponding to the used value of (1$\pm$0.2) mm. (b) SiV
background measured on the sample for one of the strongest spots in session A.
(c) Ratio of total scattered to total direct ions for ion energies of 0.4 MeV
and 3 MeV as a function of the pinhole wall angle. (d) Confocal scan of the
matrix associated with session C, showing scattered emitters around the
desired spots. Fluences of the spots in the different columns are given in the
figure (units cm-2).
Ions are scattered towards the sample by the pinhole when its edges have thin
parts that allow ions to be transmitted and deflected from their initial path
by collisions within the material. We verify that this effect is relevant for
our implantations by comparing the SiV signal around the created spots with
simulations of the process carried out with the SRIM tool Ziegler, Ziegler,
and Biersack (2010). For this, we model the used pinhole with a conical shape
and determine the amount of ions passing through its thin parts for a given
energy. Then we simulate the resulting trajectories after the pinhole and the
corresponding distribution of ions on the sample for a given distance. In
principle, larger distances are favorable, since they result in lower
densities of scattered ions, but in the experiment no more than 1 mm was used
to avoid increased spot sizes due to the beam divergence (measured to about
0.3 mrad).
We use this value of 1 mm pinhole-sample distance and an energy of 2.9 MeV for
the described simulation. This results in the distance dependence of the
scattered ion density relative to the ion density in the implanted spot as
shown in Fig. 2a. The experimental counterpart was obtained by measuring the
SiV signal while moving away from the implanted spot (Fig. 2b). A similar
exponential decrease of the background, resulting from scattered ions and the
SiV emitters they create, was found, supporting the model assumptions. It
was also found that the measured background drops faster than expected, which
might be attributed to uncertainties in the pinhole-sample distance or the
simulations at the given energy and for the used pinhole material (steel).
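The fall-off in Figs. 2a and 2b can be summarized by a single decay length via a simple model, density$(r) \approx A\,e^{-r/\lambda}$. A minimal log-linear least-squares fit in pure Python; the sample data below are hypothetical, not the measured background values:

```python
import math

def fit_exponential(r, counts):
    """Fit counts ~ A * exp(-r / lam) by least squares on log(counts)."""
    y = [math.log(c) for c in counts]
    n = len(r)
    mx, my = sum(r) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(r, y))
             / sum((xi - mx) ** 2 for xi in r))
    lam = -1.0 / slope                 # decay length
    A = math.exp(my - slope * mx)      # amplitude at r = 0
    return A, lam

# hypothetical background counts vs. distance from the spot (um),
# constructed to follow roughly 1000 * exp(-r / 20)
r = [0, 20, 40, 60, 80]
counts = [1000.0, 368.0, 135.0, 50.0, 18.0]
A, lam = fit_exponential(r, counts)
# lam comes out near 20 um for these data
```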
The absolute strength of the shown background is determined by the number of
scattered ions per directly implanted ion and depends strongly on the ion
energy and pinhole wall angle (Fig. 2c). The wall shape cannot be adjusted
for the given pinhole, nor can it be determined precisely. However, Figure 2d
indicates a small angle corresponding to strong scattering, visible as several
bright spots around the implanted matrix points of session C. Based on these
results, the energy was set to 0.4 MeV for the creation of single emitters
(session D), and the stronger spots needed for initially locating the matrix
were placed further away (400 $\mu$m distance).
Figure 3: (a) Count rates (black curve) and corresponding emitter numbers
(red curve) for highly implanted spots. Ten points were investigated for each
fluence from $10^{12}$ cm$^{-2}$ over $10^{13}$ cm$^{-2}$ to $10^{14}$
cm$^{-2}$, as indicated by the values given in the figure. Of those, five were
always implanted with an ion energy of 0.4 MeV (white columns) and five with
2.9 MeV (gray columns). (b) Resulting activation yield of centers depending on
energy and dose.
The remaining task is to identify the appropriate beam fluence for the
creation of a single emitter per spot (on average). To this end, the spots
with a high number of emitters were studied first, as they allow us to
quantify the activation yield of the color-center conversion process. Using
confocal intensity scans of the spots implanted at different fluences, their
SiV emission count rate was determined (Fig. 3a, black curve). With the
two-dimensional intensity map it is possible to take into account that the
laser only excites part of the spot when focused on it (by scaling up the
signal with the ratio between spot and laser focus size).
Next, the number of emitters corresponding to these signals is calculated by
using the average count rate of a single emitter as reference. This was
estimated by averaging the signal of more than 300 bright spots found around
the matrix of session C (compare Fig. 2d). By measuring the emission lifetime
and spectrum of several random sample spots, it was verified that these are
SiV color centers created by scattered ions. The relatively large separation
(a few micrometers) allows the assumption that most of them are single centers.
In addition, spots with a size clearly larger than the diffraction limit were
excluded. Therefore, the average signal of the remaining spots is seen as a
good estimate for the single emitter count rate, which was evaluated to
(2700$\pm$300) cps under the given experimental conditions. Here, the used
excitation laser power (3.3 mW at the sample surface), the excitation
wavelength (656 nm), the NA of the focusing objective (0.95) and the overall
collection and detection efficiency of the system (estimated to 0.12 %) are
especially important.
By maintaining these parameters for the measurements on the strongly implanted
spots, the number of emitters per spot (Fig. 3a, red curve) can be calculated.
From this, the SiV activation yield is inferred for the given fluences
(Fig. 3b). Here the uncertainties are mainly due to instabilities in the beam
current (error bars on the fluences) and to an imprecise determination of the
pinhole throughput (error bars on the activation yield). The values are in
agreement with those reported in the literature Schröder _et al._ (2017);
Lagomarsino _et al._ (2018a). Beyond that, they provide information on the
SiV activation yield for the given energies, which had not been studied in
detail before.
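The conversion from measured count rates to emitter numbers and activation yield described above can be sketched in a few lines. Only the 2700 cps single-emitter reference and the 1-$\mu$m spot diameter come from the text; the measured count rate and spot-to-focus ratio in the usage note are hypothetical placeholders:

```python
import math

SINGLE_EMITTER_CPS = 2700.0  # single-emitter reference rate from the text, cps

def emitters_per_spot(measured_cps, spot_to_focus_ratio=1.0):
    """Scale the confocal signal by the spot/laser-focus size ratio,
    then divide by the single-emitter reference rate."""
    return measured_cps * spot_to_focus_ratio / SINGLE_EMITTER_CPS

def activation_yield(n_emitters, fluence_cm2, spot_diameter_um=1.0):
    """Fraction of implanted ions converted into SiV centers,
    taking the spot area from the pinhole diameter."""
    spot_area_cm2 = math.pi * (spot_diameter_um * 1e-4 / 2.0) ** 2
    n_ions = fluence_cm2 * spot_area_cm2
    return n_emitters / n_ions
```

For example, a (hypothetical) spot emitting 2.7$\times 10^{5}$ cps at a fluence of $10^{13}$ cm$^{-2}$ would correspond to 100 emitters and a yield of roughly 0.1%.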
Next, the determined activation yield allows us to predict the number of
created emitters also for spots with very low ion densities. If the activation
yield remains constant for fluences below $10^{12}$ cm$^{-2}$, a value of
0.6$\times 10^{10}$ cm$^{-2}$ would provide roughly one activated emitter per
spot on average. The assumption of a constant activation yield at lower ion
doses is supported by previous studies Schröder _et al._ (2017). To verify
this, implantation session C was used. It consists of eight columns of emitter
spots, starting with a fluence of 1.28$\times 10^{11}$ cm$^{-2}$ and reducing
it by a factor of two for each of the following columns. The transition
between clearly visible spots and single emitters is thereby immediately
evident (see Fig. 2d). From the confocal scan, the column with 0.8$\times
10^{10}$ cm$^{-2}$ appears to be the first that reaches single emitters in the
spots, supporting the result obtained by extrapolation from the strongly
implanted spots.
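Under the constant-yield assumption, the mean emitter number per spot scales linearly with fluence, so the eight halved columns of session C can be checked directly. The 0.6$\times 10^{10}$ cm$^{-2}$-per-emitter figure is the extrapolated value from the text:

```python
# Fluence extrapolated above to give ~1 activated emitter per spot on average:
FLUENCE_PER_EMITTER = 0.6e10  # cm^-2

# Session C: eight columns, starting at 1.28e11 cm^-2 and halving each time.
fluences = [1.28e11 / 2**k for k in range(8)]

for f in fluences:
    mean_emitters = f / FLUENCE_PER_EMITTER  # linear in fluence (constant yield)
    print(f"{f:.2e} cm^-2 -> {mean_emitters:.2f} emitters per spot")
```

The fifth column, 0.8$\times 10^{10}$ cm$^{-2}$, comes out at about 1.3 emitters per spot, consistent with it being the first column to show single emitters.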
Figure 4: (a) Confocal scan of the region around the matrix associated with
session D. The expected positions of the matrix spots are indicated by the
white circles. One of the identified emitters is marked with a white arrow.
(b) Emission spectrum of the marked spot. The Lorentzian fit shows a clear ZPL
with the parameters given in the figure. (c) Corresponding second order
correlation measurement verifying the creation of a single SiV emitter by the
presence of antibunching (parameters given in the figure).
Although some uncertainties are involved in the estimates above, the result
allows the creation of spots with single emitters, because the number of
activated centers per spot is assumed to follow a Poissonian distribution, as
also found, e.g., in Tamura _et al._ (2014). Based on this, a mean number of
0.5-2 emitters per spot will statistically result in 25-35% of spots
containing a single emitter. Therefore, single-emitter creation is also
possible under the conditions discussed above.
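The quoted fractions follow from the Poissonian assumption, $P(k)=\lambda^{k}e^{-\lambda}/k!$. Evaluating $P(1)$ for mean numbers between 0.5 and 2 gives values close to the quoted range:

```python
import math

def poisson_p(k, lam):
    """Probability of exactly k activated centers for a mean number lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

for lam in (0.5, 1.0, 2.0):
    print(f"mean {lam}: P(1 emitter) = {poisson_p(1, lam):.2f}")
# mean 0.5: P(1 emitter) = 0.30
# mean 1.0: P(1 emitter) = 0.37
# mean 2.0: P(1 emitter) = 0.27
```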
As a proof of principle, five spots were created with a fluence of 1.6$\times
10^{10}$ cm$^{-2}$ in the context of session D. The confocal scan reveals
almost no background associated with scattered ions (Fig. 4a), in contrast to
the previous approaches. This indicates that the adjustments discussed above
(lower ion energy, stronger spots at a larger distance) could resolve the
problem. A few bright spots were identified, out of which two were located
roughly at the expected positions (white circles). We further investigated
them by measuring their spectral and temporal emission characteristics. The
Lorentzian fit shows a clear ZPL (5-6 times the sample background) around 738
nm, confirming the creation of SiV color centers (Fig. 4b). Furthermore, the
second-order correlation measurements reveal antibunching behavior, verifying
single-photon emission (Fig. 4c). Based on this, we conclude that the
presented concept is generally suitable for the localized creation of single
color centers.
In conclusion, we have discussed the creation of SiV color centers by ion
implantation through a 1-$\mu$m pinhole, at desired locations and in spots
with a size comparable to the diffraction limit. We identified conditions for
obtaining single emitters, including discussions of ion scattering by the
pinhole and of the activation yield. Hence, we demonstrated the ability to
create SiV centers in a scalable manner over a wide range of depths
(energies). Our approach can be further improved by reducing the pinhole
diameter (higher lateral precision) and by using vertical pinhole walls (less
ion scattering). Moreover, the technique is general and can be applied to
other ion species such as nitrogen or germanium, and to other host matrices
such as silicon carbide.
The authors acknowledge financial support from the University of Siegen and
the German Research Foundation (DFG) (INST 221/118-1 FUGG, 410405168).
Experimental support was provided by N. Soltani (optical setup) and M. Hepp
(SEM images) in association with the Micro- and Nanoanalytics Facility of the
University of Siegen. The authors also acknowledge INFN-CHNet, the network of
laboratories of the INFN for cultural heritage, for support and precious
contributions in terms of instrumentation and personnel. The authors wish to
thank F. Taccetti for experimental assistance and suggestions.
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## References
* Pezzagna _et al._ (2011) S. Pezzagna, D. Rogalla, D. Wildanger, J. Meijer, and A. Zaitsev, “Creation and nature of optical centres in diamond for single-photon emission – overview and critical remarks,” New J. Phys. 13, 035024 (2011).
* Schröder _et al._ (2016) T. Schröder, S. L. Mouradian, J. Zheng, M. E. Trusheim, M. Walsh, E. H. Chen, L. Li, I. Bayn, and D. Englund, “Quantum nanophotonics in diamond,” J. Opt. Soc. Am. B 33, B65–B83 (2016).
* Sipahigil _et al._ (2016) A. Sipahigil, R. E. Evans, D. D. Sukachev, M. J. Burek, J. Borregaard, M. K. Bhaskar, C. T. Nguyen, J. L. Pacheco, H. A. Atikian, C. Meuwly, R. M. Camacho, F. Jelezko, E. Bielejec, H. Park, M. Lončar, and M. D. Lukin, “An integrated diamond nanophotonics platform for quantum-optical networks,” Science 354, 847 (2016).
* Aharonovich and Neu (2018) I. Aharonovich and E. Neu, “Diamond Nanophotonics,” Adv. Opt. Mater. 2, 911–928 (2018).
* Sipahigil and Lukin (2019) A. Sipahigil and M. D. Lukin, “Quantum optics with diamond color centers coupled to nanophotonic devices,” in _Current Trends in Atomic Physics_ (Oxford University Press, 2019) pp. 1–28.
* Neu, Agio, and Becher (2012) E. Neu, M. Agio, and C. Becher, “Photophysics of single silicon vacancy centers in diamond: implications for single photon emission,” Opt. Express 20, 19956–19971 (2012).
* Sipahigil _et al._ (2014) A. Sipahigil, K. Jahnke, L. Rogers, T. Teraji, J. Isoya, A. Zibrov, F. Jelezko, and M. Lukin, “Indistinguishable Photons from Separated Silicon-Vacancy Centers in Diamond,” Phys. Rev. Lett. 113, 113602 (2014).
* Rogers _et al._ (2014) L. J. Rogers, K. D. Jahnke, M. H. Metsch, A. Sipahigil, J. M. Binder, T. Teraji, H. Sumiya, J. Isoya, M. D. Lukin, P. Hemmer, and F. Jelezko, “All-Optical Initialization, Readout, and Coherent Preparation of Single Silicon-Vacancy Spins in Diamond,” Phys. Rev. Lett. 113, 263602 (2014).
* Evans _et al._ (2018) R. E. Evans, M. K. Bhaskar, D. D. Sukachev, C. T. Nguyen, A. Sipahigil, M. J. Burek, B. Machielse, G. H. Zhang, A. S. Zibrov, E. Bielejec, H. Park, M. Lončar, and M. D. Lukin, “Photon-mediated interactions between quantum emitters in a diamond nanocavity,” Science 362, 662 (2018).
* Lohrmann _et al._ (2011) A. Lohrmann, S. Pezzagna, I. Dobrinets, P. Spinicelli, V. Jacques, J.-F. Roch, J. Meijer, and A. M. Zaitsev, “Diamond based light-emitting diode for visible single-photon emission at room temperature,” Appl. Phys. Lett. 99, – (2011).
* Flatae _et al._ (2020) A. M. Flatae, S. Lagomarsino, F. Sledz, N. Soltani, S. S. Nicley, K. Haenen, R. Rechenberg, M. F. Becker, S. Sciortino, N. Gelli, L. Giuntini, F. Taccetti, and M. Agio, “Silicon-vacancy color centers in phosphorus-doped diamond,” Diam. Relat. Mater. 105, 107797 (2020).
* Nguyen _et al._ (2018) C. T. Nguyen, R. E. Evans, A. Sipahigil, M. K. Bhaskar, D. D. Sukachev, V. N. Agafonov, V. A. Davydov, L. F. Kulikova, F. Jelezko, and M. D. Lukin, “All-optical nanoscale thermometry with silicon-vacancy centers in diamond,” Appl. Phys. Lett. 112, 203102 (2018).
* Lagomarsino _et al._ (2015) S. Lagomarsino, F. Gorelli, M. Santoro, N. Fabbri, A. Hajeb, S. Sciortino, L. Palla, C. Czelusniak, M. Massi, F. Taccetti, L. Giuntini, N. Gelli, D. Y. Fedyanin, F. S. Cataliotti, C. Toninelli, and M. Agio, “Robust luminescence of the silicon-vacancy center in diamond at high temperatures,” AIP Advances 5, 127117 (2015).
* Neu (2012) E. K. Neu, “Silicon vacancy color centers in chemical vapor deposition diamond : new insights into promising solid state single photon sources,” (2012), 10.22028/D291-22802.
* Lagomarsino _et al._ (2018a) S. Lagomarsino, A. M. Flatae, S. Sciortino, F. Gorelli, M. Santoro, F. Tantussi, F. De Angelis, N. Gelli, F. Taccetti, L. Giuntini, and M. Agio, “Optical properties of silicon-vacancy color centers in diamond created by ion implantation and post-annealing,” Diam. Relat. Mater. 84, 196–203 (2018a).
* Tamura _et al._ (2014) S. Tamura, G. Koike, A. Komatsubara, T. Teraji, S. Onoda, L. P. McGuinness, L. Rogers, B. Naydenov, E. Wu, L. Yan, F. Jelezko, T. Ohshima, J. Isoya, T. Shinada, and T. Tanii, “Array of bright silicon-vacancy centers in diamond fabricated by low-energy focused ion beam implantation,” Appl. Phys. Express 7, 115201 (2014).
* Schröder _et al._ (2017) T. Schröder, M. E. Trusheim, M. Walsh, L. Li, J. Zheng, M. Schukraft, A. Sipahigil, R. E. Evans, D. D. Sukachev, C. T. Nguyen, J. L. Pacheco, R. M. Camacho, E. S. Bielejec, M. D. Lukin, and D. Englund, “Scalable focused ion beam creation of nearly lifetime-limited single quantum emitters in diamond nanostructures,” Nat. Commun. 8, 15376 (2017).
* Zhou _et al._ (2018) Y. Zhou, Z. Mu, G. Adamo, S. Bauerdick, A. Rudzinski, I. Aharonovich, and W.-b. Gao, “Direct writing of single germanium vacancy center arrays in diamond,” New J. Phys. 20, 125004 (2018).
* Raatz _et al._ (2019) N. Raatz, C. Scheuner, S. Pezzagna, and J. Meijer, “Investigation of Ion Channeling and Scattering for Single-Ion Implantation with High Spatial Resolution,” Phys. Status Solidi A 216, 1900528 (2019).
* Lagomarsino _et al._ (2018b) S. Lagomarsino, S. Sciortino, N. Gelli, A. M. Flatae, F. Gorelli, M. Santoro, M. Chiari, C. Czelusniac, M. Massi, F. Taccetti, M. Agio, and L. Giuntini, “The center for production of single-photon emitters at the electrostatic-deflector line of the Tandem accelerator of LABEC (Florence),” Nucl. Instrum. Methods Phys. Res. B 422, 31–40 (2018b).
* Lagomarsino _et al._ (2021) S. Lagomarsino, A. M. Flatae, H. Kambalathmana, F. Sledz, L. Hunold, N. Soltani, P. Reuschel, S. Sciortino, N. Gelli, M. Massi, C. Czelusniak, L. Giuntini, and M. Agio, “Creation of Silicon-Vacancy Color Centers in Diamond by Ion Implantation,” Frontiers in Physics 8, 601362 (2021).
* Kambalathmana _et al._ (2021) H. Kambalathmana, A. Flatae, L. Hunold, F. Sledz, J. Müller, M. Hepp, P. Schmuki, M. Killian, S. Lagomarsino, N. Gelli, S. Sciortino, L. Giuntini, E. Wörner, C. Wild, B. Butz, and M. Agio, “Optical properties of silicon-implanted polycrystalline diamond membranes,” Carbon 174, 295–304 (2021).
* Ziegler, Ziegler, and Biersack (2010) J. F. Ziegler, M. Ziegler, and J. Biersack, “SRIM – The stopping and range of ions in matter (2010),” Nucl. Instrum. Methods Phys. Res. B 268, 1818–1823 (2010).
# Prospects of gravitational-waves detections from common-envelope evolution
with LISA
M. Renzo Department of Physics, Columbia University, New York, NY 10027, USA
Center for Computational Astrophysics, Flatiron Institute, 162 5th Ave, New
York, NY 10010, USA T. Callister Center for Computational Astrophysics,
Flatiron Institute, 162 5th Ave, New York, NY 10010, USA K. Chatziioannou
Department of Physics, California Institute of Technology, Pasadena,
California 91125, USA LIGO Laboratory, California Institute of Technology,
Pasadena, CA 91125, USA Center for Computational Astrophysics, Flatiron
Institute, 162 5th Ave, New York, NY 10010, USA L. A. C. van Son Center for
Astrophysics — Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138,
USA C. M. F. Mingarelli Department of Physics, University of Connecticut, 196
Auditorium Road, U-3046, Storrs, CT 06269-3046, USA Center for Computational
Astrophysics, Flatiron Institute, 162 5th Ave, New York, NY 10010, USA M.
Cantiello Center for Computational Astrophysics, Flatiron Institute, 162 5th
Ave, New York, NY 10010, USA Department of Astrophysical Sciences, Princeton
University, Princeton, NJ 08544, USA K. E. S. Ford Dept. of Science, Borough
of Manhattan Community College, City University of New York, New York, NY
10007 USA Dept. of Astrophysics, American Museum of Natural History, New
York, NY 10024 USA Physics Program, CUNY Graduate Center, City University of
New York, New York, NY 10016 USA Center for Computational Astrophysics,
Flatiron Institute, 162 5th Ave, New York, NY 10010, USA B. McKernan Dept. of
Science, Borough of Manhattan Community College, City University of New York,
New York, NY 10007 USA Dept. of Astrophysics, American Museum of Natural
History, New York, NY 10024 USA Physics Program, CUNY Graduate Center, City
University of New York, New York, NY 10016 USA Center for Computational
Astrophysics, Flatiron Institute, 162 5th Ave, New York, NY 10010, USA G.
Ashton Royal Holloway University of London, Egham, Surrey, TW20 0EX, U.K.
School of Physics and Astronomy, Monash University, VIC 3800, Australia
###### Abstract
Understanding common envelope (CE) evolution is an outstanding problem in
binary evolution. Although the CE phase is not driven by gravitational-wave
(GW) emission, the in-spiraling binary emits GWs that passively trace the CE
dynamics. Detecting this GW signal would provide direct insight into the gas-
driven physics. Even a non-detection might offer invaluable constraints. We
investigate the prospects of detection of a Galactic CE by LISA. While the
dynamical phase of the CE is likely sufficiently loud for detection, it is
short and thus rare. We focus instead on the self-regulated phase that
proceeds on a thermal timescale. Based on population synthesis calculations
and the (unknown) signal duration in the LISA band, we expect $\sim 0.1-100$
sources in the Galaxy during the mission duration. We map the GW observable
parameter space of frequency $f_{\mathrm{GW}}$ and its derivative
$\dot{f}_{\mathrm{GW}}$ remaining agnostic on the specifics of the inspiral,
and find that signals with $\mathrm{SNR}>10$ are possible if the CE stalls at
separations such that $f_{\mathrm{GW}}\gtrsim 2\times 10^{-3}\,\mathrm{Hz}$.
We investigate the possibility of mis-identifying the signal with other known
sources. If the second derivative $\ddot{f}_{\mathrm{GW}}$ can also be
measured, the signal can be distinguished from other sources using a GW
braking index. Alternatively, coupling LISA with electromagnetic observations
of peculiar red giant stars and/or infrared and optical transients might allow
a Galactic CE to be disentangled from other Galactic and extragalactic GW
sources.
Common-envelope evolution — Common-envelope binary stars — Gravitational-wave
astronomy
Journal: ApJ. Software: COSMIC (Breivik et al., 2020), ipython/jupyter
(Pérez & Granger, 2007), matplotlib (Hunter, 2007), NumPy (van der Walt et
al., 2011).
## 1 Introduction
Common envelope (CE) evolution is one of the most challenging processes to
study in the evolution of binary stars. CE evolution was originally proposed
by Paczyński (1976) to account for the short orbital separations observed in
cataclysmic variables. Despite its importance for many categories of stellar
objects, the details of this process remain elusive.
A CE is typically assumed to occur when the evolution of a binary system leads
to dynamically unstable mass transfer, causing the accretor to be engulfed in
the donor’s envelope. This results in a CE binary, in which the core of the
donor star and the accretor star coexist within a shared envelope. Drag forces
then drive the inspiral of the binary, injecting orbital energy (and angular
momentum, Nelemans et al., 2000) into the CE. Depending on the complex
coupling between the orbit and the envelope structure, the CE can ultimately
be ejected, or, alternatively, the two stars can merge. We expand on the
current understanding of CE evolution in Sec. 1.1.
Evolution through a CE phase is a key step in the formation of almost any
compact binary system from an isolated binary, including cataclysmic variables
(e.g., Paczyński, 1976), subdwarf B-stars (e.g., Han et al., 2003; Igoshev et
al., 2020) and stripped stars (Götberg et al., 2020), double white dwarfs (WD,
e.g., Nandez et al., 2015), X-ray binaries (e.g., Podsiadlowski et al., 2002;
Chen et al., 2020a), binary neutron-stars (e.g., Tauris et al., 2017; Kruckow
et al., 2018; Vigna-Gómez et al., 2020), black-hole neutron-star binaries
(e.g., Kruckow et al., 2018; Chattopadhyay et al., 2020; Broekgaarden et al.,
2021), black-hole WD binaries (e.g., Sberna et al., 2020), and binary black-
holes (e.g., Belczynski et al., 2016; Kruckow et al., 2018). CE evolution also
produces the tightest binaries which can in turn make the fastest runaway
stars from isolated systems if disrupted by the explosion of one star (e.g.,
Justham et al., 2009; Renzo et al., 2019; Evans et al., 2020; Neunteufel,
2020). Finally, CE preceeds the majority of stellar mergers111We neglect here
the possibility of (close to) head-on collisions between stars in dense
stellar environments which can also lead to mergers (e.g. Glebbeek et al.,
2009; Di Carlo et al., 2020; Renzo et al., 2020)., which are common across the
entire mass range of stars (e.g., de Mink et al., 2013; Zapartas et al., 2017;
Temmink et al., 2020) and may possibly lead to the formation of Thorne-Zytkow
objects (e.g., Thorne & Zytkow, 1977).
The final phases of a CE – the envelope ejection or the merger event – can
produce visible electromagnetic (EM) transients called luminous red novae
(LRN). The detection of the pre-transient period decay of V1309Sco (Tylenda et
al., 2011) unambiguously connected this type of transient to CE. LRNe exhibit
a variety of morphologies (e.g., Pastorello et al., 2020), possibly related to
the variety of CE progenitors and outcomes possible.
Numerical simulations of CE evolution remain challenging (e.g., Ricker & Taam
2008; Ohlmann et al. 2016; MacLeod et al. 2018; Chamandy et al. 2020; Cruz-
Osorio & Rezzolla 2020; Sand et al. 2020, see also Ivanova et al. 2013b, 2020
for reviews). The main difficulties relate to the large range of temporal and
spatial scales that need to be resolved in CE evolution. Moreover, there are
several possible energy sources other than the orbit that need to be
considered during CE evolution, such as recombination energy (e.g., Ivanova et
al., 2013a; Ivanova, 2018; Sand et al., 2020), accretion energy (e.g., Voss &
Tauris, 2003; MacLeod & Ramirez-Ruiz, 2014), and energy released by nuclear
burning (e.g., Ivanova et al., 2002). Both where the energy is released, and
how it is transported through the CE also have a crucial role in determining
the outcome (e.g., Sand et al., 2020; Wilson & Nordhaus, 2020). Therefore, CE
evolution remains one of the major uncertainties in the modeling of binary
evolution and stellar populations.
The inspiral within a CE is not driven by gravitational-wave (GW) emission.
Nevertheless, the binary inside of a CE, consisting of the core of the donor
and the companion star, has a time-varying mass quadrupole moment and hence
radiates GWs. If detectable, GWs from the binary inside of a CE will passively
trace the gas-drag driven inspiral, and could provide invaluable insight into
the CE process. Conversely, because of the low density of the envelope of
giant stars (which is possibly further lowered by the energy injection during
the CE), any GW signal from the envelope itself is unlikely to be detectable.
The GWs radiating from a Galactic CE binary would be in the mHz band, and thus
could be detectable by space-based GW-detectors, such as the Laser
Interferometer Space Antenna (LISA, Baker et al. 2019), and TianQin (Huang et
al., 2020). This can be expected because of the existence of double-WD
verification binaries for LISA (e.g., Stroeer & Vecchio, 2006), which likely
went through at least one CE phase (e.g., Korol et al., 2017). LISA is
currently scheduled for launch in 2034 and will commence operations in the mHz
GW frequency range shortly thereafter. Its projected mission lifetime is
$5-10$ years, during which time it is expected to detect a wide variety of GW
signals in the mHz band, from supermassive black hole binary mergers to
Galactic compact binaries with a fraction of a solar mass.
Here, we examine the prospects for observing the inspiral of two Galactic
stellar cores within a CE with LISA. Such signals are not accessible to
ground-based detectors (e.g., LIGO – Aasi et al. 2015, Virgo – Acernese et al.
2015, and KAGRA – Somiya 2012) as they are restricted to frequencies above a
few to tens of Hz. At these frequencies, CE binaries are expected to either
have merged or have ejected the envelope. In fact, Ginat et al. (2020) showed
that LISA could detect the portion of the CE inspiral proceeding on a
dynamical timescale ($\sim$ days).
However, not all CE events are expected to rapidly complete their evolution
over a dynamical timescale (Ivanova et al., 2013b). A significant fraction of
the envelope may be ejected on thermal timescales much longer than the
dynamical timescale (e.g., Meyer & Meyer-Hofmeister, 1979; Glanz & Perets,
2018; Michaely & Perets, 2019). Currently, the duration and occurrence rate of
this so-called ‘self-regulated inspiral’ is very uncertain (e.g., Meyer &
Meyer-Hofmeister, 1979; Fragos et al., 2019; Sand et al., 2020; Igoshev et
al., 2020), but if it is of the order of years or longer, this phase is likely
to constitute the most promising CE target for LISA.
Constraints placed on CE evolution by low-frequency GWs will come from direct
detection or non-detection. These constraints will be complementary to those
obtained by ground-based detectors which observe the aftermath of successful
CE events in massive binaries where the envelope has been ejected and the
binary merged under the influence of GWs. The properties and rate of such
detections can be used to place constraints on the CE properties, for example
the CE efficiency (e.g., Wong et al., 2020; Zevin et al., 2020). Space-based
detectors, on the other hand, offer the possibility of studying binaries while
they undergo CE, possibly including systems that might fail to eject the
envelope. Provided the signal can be distinguished from other potential mHz GW
sources, space-based detectors will give us direct insight into the gas-driven
dynamics.
In particular, the GW frequency of the signal ($f_{\mathrm{GW}}$), can
constrain the orbital separation at which the self-regulated inspiral stalls,
and the frequency change ($\dot{f}_{\mathrm{GW}}$) and event duration can
constrain the rapidity of the inspiral and thus the internal gas density of
the CE (and potentially its radial distribution). Even a non-detection would
offer invaluable constraints on the occurrence rate, the orbital separation
during the self-regulated inspiral, and the duration of this phase, which are
not accessible by any other means.
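For a circular binary, the GW frequency is twice the orbital frequency, so Kepler's third law directly maps $f_{\mathrm{GW}}$ to the stalling separation. A minimal sketch (the 2 $M_\odot$ total mass used in the example below is an illustrative assumption, not a value from the text):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

def f_gw_circular(m_total_msun, a_m):
    """GW frequency of a circular binary: twice the orbital frequency,
    f_GW = (1/pi) * sqrt(G M / a^3)."""
    return math.sqrt(G * m_total_msun * M_SUN / a_m**3) / math.pi

def stall_separation_m(m_total_msun, f_gw_hz):
    """Separation at which a stalled binary emits at f_gw_hz (inverse relation)."""
    return (G * m_total_msun * M_SUN / (math.pi * f_gw_hz) ** 2) ** (1.0 / 3.0)
```

For a 2 $M_\odot$ binary, $f_{\mathrm{GW}}=2\times 10^{-3}$ Hz corresponds to a stalling separation of about $1.9\times 10^{8}$ m ($\approx 0.27\,R_\odot$), i.e. deep inside the envelope of the original giant.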
### 1.1 Phases of common envelope evolution
Figure 1: Schematic representation of all possible phases during a common
envelope event. We focus on the most promising source of GWs in the LISA band,
i.e., the thermal-timescale self-regulated phase at step d. However, dynamical
phases such as the plunge-in at step c, or the final ejection or merger, might
also be detectable (e.g., Ginat et al., 2020).
Because of the aforementioned complexity of CE evolution and the diversity of
binary systems that can evolve through this phase (both in terms of masses and
evolutionary stages at the onset of the CE), a complete and exhaustive picture
of this phenomenon does not yet exist. Nevertheless, several phases which
might subsequently occur during the CE have been identified (e.g.,
Podsiadlowski, 2001; Ivanova et al., 2013b; Fragos et al., 2019; Ivanova et
al., 2020), and are outlined in Figure 1. Below we emphasize which are
potentially interesting targets for GW observations. It is possible that not
every binary evolving through a CE will experience all of these phases
(Ivanova et al., 2013b; Law-Smith et al., 2020).
#### a. Onset of the common envelope
Mass transfer through a Roche lobe overflow (RLOF) can become dynamically
unstable and result in the initiation of a CE for several reasons:
* (i)
an increasing rate of overflow – either because the Roche lobe shrinks or
because it expands more slowly than the donor’s radius;
* (ii)
because the mass transfer timescale is much shorter than the accretor thermal
timescale, the accretor is pushed out of thermal equilibrium. This leads to
its radial expansion until the accretor too fills its Roche lobe (e.g., Nariai
& Sugimoto, 1976);
* (iii)
because of the Darwin instability that redistributes angular momentum between
the stellar spins and the orbit (Darwin, 1879), causing the orbit to shrink;
* (iv)
because of mass loss from the L2 Lagrangian point, causing large orbital
energy loss. This accelerates the evolution to proceed on a dynamical
timescale (e.g., Pejcha et al., 2017).
These processes can cause the accretor to enter the envelope of the donor,
starting the CE on a timescale that is either thermal or dynamical. Possible
pre-CE mass ejection can also occur, depending on what drives the CE
initiation (e.g., Pejcha et al., 2017; MacLeod et al., 2017; MacLeod & Loeb,
2020). When RLOF becomes dynamically unstable, the separation, $a$, between
the core of the donor and the accretor is still wide and of the order of the
Roche radius of the donor, $a\gtrsim R_{\mathrm{RL,1}}$. Except for binaries
already involving two compact objects (e.g., AM CVn systems, Liu et al.,
2021), this is likely to correspond to a GW signal of frequency
$f_{\mathrm{GW}}$ too low for the LISA band.
#### b. Loss of co-rotation
Soon after the initiation of the CE, the envelope will rapidly stop co-
rotating with the binary (even under the optimistic assumption that the
envelope was initially co-rotating with the donor’s core). The timescale for
the loss of co-rotation depends on how the CE was initiated at step (_a_), but
it varies from the tidal timescale up to a thermal timescale (e.g.,
Podsiadlowski, 2001; Fragos et al., 2019). The loss of co-rotation still
happens at separations wider than could be detectable in the LISA band, but it
allows for the onset of strong gas-drag which drives the subsequent CE
evolution.
#### c. Dynamical plunge-in
After the loss of co-rotation, with the envelope density still only slightly
perturbed (Fragos et al., 2019), a phase of dynamical plunge-in happens. This
rapid inspiral can take (a fraction of) an orbital period and possibly
decrease the orbital separation $a$ to zero (i.e., a prompt merger).
Alternatively it can lead to a prompt ejection of the CE (e.g., Law-Smith et
al., 2020). However, this is not always the case. The orbital energy liberated
in this very fast plunge-in could expand the CE, leading to a decrease in gas-
drag and thus stalling the inspiral, setting up the conditions for phase (_d_)
(see below). At which separation $a_{\mathrm{post-plunge}}$ the plunge-in
stalls is likely not to be universal, and is not yet fully understood.
Podsiadlowski (2001) suggested that $a_{\mathrm{post-plunge}}$ corresponds to
the location where the orbital energy equals the envelope binding energy;
however, De Marco et al. (2011) caution about the subtleties of how the
binding energy is defined. Depending on the pre- and post-plunge-in
separations, this
dynamical phase could be detectable in GW for Galactic binaries (e.g., Ginat
et al., 2020). However, because of its short duration ($\sim$ hours – days),
it is not the most promising source (see Sec. 4). Should one happen within the
Galaxy during the LISA mission, it would produce a frequency change
$\dot{f}_{\mathrm{GW}}$ much larger than expected from GW emission, and
possibly be associated to a detectable EM transient. The plunge-in phase can
also produce an eccentric binary within the CE (Ivanova et al., 2013b; Sand et
al., 2020), with potential implications for the GW signal they might produce.
#### d. Self-regulated inspiral
Assuming the plunge-in did not produce a prompt merger, what follows is a
relatively slow phase of self-regulated inspiral, that can last several
thermal time-scales (10-$10^{5}$ years, e.g., Meyer & Meyer-Hofmeister 1979;
Clayton et al. 2017; Fragos et al. 2019; Chamandy et al. 2020; Igoshev et al.
2020). During the self-regulated inspiral, the quasi-Keplerian motion of the
binary slowed by gas-drag progressively injects orbital energy and angular
momentum in the CE. Other energy sources can intervene and radiative transport
becomes relevant during this CE phase (e.g., hydrogen recombination can occur
if it happens in a sufficiently optically thick layer, Fragos et al. 2019).
Depending on the separation $a_{\mathrm{post-plunge}}$ (and the masses of the
binary), this phase might be the most promising for detecting a GW signal,
since it could allow for a slowly varying $f_{\mathrm{GW}}$ (we still expect
$\dot{f}_{\mathrm{GW}}$ to exceed the value predicted by general relativity
for a GW-driven inspiral in vacuum) that can be integrated over several years
to build up signal-to-noise ratio (SNR). However, we emphasize that this
self-regulated inspiral might not happen for all CE events (see, for example,
the short timescale for subdwarf B-type progenitors inferred by Igoshev et al.
2020, or the numerical results for neutron star progenitors by Law-Smith et
al. 2020).
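The SNR buildup for a long-lived, nearly monochromatic source can be illustrated with the standard estimate $\mathrm{SNR}\approx h_{0}\sqrt{T/S_{n}(f)}$, ignoring sky- and polarization-averaging factors of order unity. The strain amplitude and noise level in the usage note are illustrative placeholders, not LISA specifications:

```python
import math

def snr_monochromatic(h0, t_obs_s, sn_f):
    """Rough SNR of a quasi-monochromatic GW source: amplitude h0,
    observation time t_obs_s (s), and one-sided noise PSD sn_f (Hz^-1)
    at the signal frequency. Order-unity averaging factors are ignored."""
    return h0 * math.sqrt(t_obs_s / sn_f)
```

The key scaling is $\mathrm{SNR}\propto\sqrt{T}$: quadrupling the observation time doubles the SNR (e.g., `snr_monochromatic(1e-21, 4 * 3.15e7, 1e-40)` is twice `snr_monochromatic(1e-21, 3.15e7, 1e-40)`), which is why a years-long self-regulated phase is a more promising target than a days-long plunge-in of comparable amplitude.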
#### e. CE ejection or merger
Finally, the CE will end with one last dynamical phase during which either the
envelope is ejected (possibly because of the intervention of energy sources
other than the orbit) or one of the stars of the binary inside the CE fills
the equipotential surface distinguishing it from the other (improperly, its
Roche lobe) leading to the final merger (e.g, Podsiadlowski, 2001; Ivanova et
al., 2002). In the cases where the CE is successfully ejected, the surviving
binary is left with the final $a_{\mathrm{post-CE}}$ orbital separation
smaller than the initial radius of the donor. What governs the binary post-CE
orbital evolution depends on the specific details of the system (magnetic
braking, tidal evolution, wind mass loss, or possibly GW emission). The short
duration of phase (_c_) makes it an unpromising target for GW searches.
However, it would result in a large $\dot{f}_{\mathrm{GW}}$ for a brief
moment, and likely be accompanied by a detectable EM transient.
In the rest of this study, we will focus on potential signals from a long
lived, self-regulated inspiral (step _d_) unless otherwise stated. In Sec. 2
we show that CE binaries of various mass are expected to cross or end within
the LISA GW frequency band. We then consider the issue of whether a typical
binary crossing the LISA band can be detected in Sec. 3, and use population
synthesis simulations to estimate the number of sources expected at any given
time in the Galaxy in Sec. 4. Sec. 5 deals with the possible “stealth bias”,
that is the possibility of misinterpreting the detection of GWs from a CE as
another kind of GW source, and we consider how EM observations can help
resolve this bias in Sec. 6. We further discuss our results in Sec. 7 before
concluding in Sec. 8.
## 2 Common-envelope evolution across the LISA band
For a CE event to cross the LISA frequency band and produce a detectable GW
signal, the mass of the binary inside the CE needs to be sufficiently large.
This favors systems where both stars are evolved and have dense and well
defined cores (or are only what is left of their original core, like WDs). On
top of this, the separation of the binary inside the CE needs to correspond to
GW frequencies roughly $10^{-4}\,\mathrm{Hz}\lesssim f_{\mathrm{GW}}\lesssim
0.1\,\mathrm{Hz}$ (gray band in Fig. 2), bound at the low end by the
population of unresolved Galactic WD binaries and at the high end by the instrument
response function (e.g., Robson et al., 2019). The existence of verification
WD binaries in the LISA band, which are likely to have formed through a CE
event, suggests that CE binaries crossing the LISA band may exist.
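The mapping between orbital separation and GW frequency used throughout this section follows from Kepler's third law together with $f_{\mathrm{GW}}=2f_{\mathrm{orb}}$. A minimal Python sketch (constants in SI; the fiducial numbers are illustrative, not taken from the paper's models):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.957e8    # solar radius, m

def f_gw_from_separation(a_rsun, m_total_msun):
    """GW frequency (twice the Keplerian orbital frequency) of a circular
    binary with total mass m_total_msun at separation a_rsun."""
    a = a_rsun * R_SUN
    m = m_total_msun * M_SUN
    f_orb = math.sqrt(G * m / a**3) / (2.0 * math.pi)
    return 2.0 * f_orb

# A 0.5 + 0.3 Msun core-WD pair at ~0.3 Rsun radiates near 1 mHz,
# well inside the LISA band.
f = f_gw_from_separation(0.3, 0.8)
```

This is consistent with Sec. 3, where $a\lesssim 0.3\,R_{\odot}$ corresponds to $f_{\mathrm{GW}}\gtrsim 10^{-3}\,\mathrm{Hz}$ for the fiducial masses.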
Figure 2: Estimates of the post-CE separations using the $\alpha\lambda$
energy formalism. Given a pre-CE separation (different colors in plot) and the
GWs emitting mass (x-axis), the post-CE separation can be determined. We show
a range of GW-emitting masses, consisting of the core mass of the donor star
plus the mass of the companion WD (0.3$\,M_{\odot}$ in this plot). We assume
$\alpha=1$ and $\lambda=0.3$ for illustrative purposes. The gray band shows
where the corresponding GW signal would approximately lie in the LISA frequency range
($10^{-4}\,\mathrm{Hz}\lesssim f_{\mathrm{GW}}\lesssim 0.1\,\mathrm{Hz}$).
Systems with post-CE separations within or below this band therefore
potentially experience a phase during which their GW emission could be
detectable by LISA. The hatched area corresponds to post-CE separations
smaller than the core radius, that is, presumably merging systems.
To illustrate that some binaries will indeed cross the LISA band during a CE
we use a population-synthesis approach (see also Sec. 4). In rapid population
synthesis calculations, CE is typically treated as an instantaneous event.
This is usually a sufficient approximation since the duration of a CE is at
most of the order of the thermal timescale (depending on the mass, luminosity
and radius of the CE, this ranges from several hundred to several thousand years),
that is much shorter than the entire evolution of a binary system. To relate
the pre-CE initial conditions to the estimated post-CE configuration,
conservation of energy is usually used ($\alpha\lambda$-algorithms, see e.g.,
Webbink, 1984; De Marco et al., 2011), although algorithms considering
conservation of angular momentum also exist (e.g., Nelemans et al., 2000).
Figure 2 shows the post-CE separation ($a_{\mathrm{post-CE}}$) as a function
of the mass that would be contributing to the GW emission ($M_{\mathrm{GW\
emitting}}$). In a rapid population synthesis calculation, each binary system
would drop instantaneously from its pre-CE separation to the post-CE
separation (as indicated by the vertical arrows). For illustrative purposes,
we assume fully efficient use of the orbital energy to eject the envelope and
no extra energy source (i.e., $\alpha=1$), and $\lambda=0.3$ as a typical
value for the binding energy parameter of giant donors (e.g., De Marco et al.,
2011). Increasing either value shifts the curves to larger post-CE
separations.
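The $\alpha\lambda$ energy balance behind these curves can be sketched in a few lines. This follows the generic Webbink (1984) form rather than any specific code's implementation, and taking the donor radius equal to the pre-CE separation (a Roche-lobe-filling donor) is a simplifying assumption:

```python
def post_ce_separation(a_pre, m_donor, m_core, m_comp, r_donor,
                       alpha=1.0, lam=0.3):
    """Post-CE separation from the alpha-lambda energy balance,
    G m_donor m_env / (lam r_donor) =
        alpha * (G m_core m_comp / (2 a_post) - G m_donor m_comp / (2 a_pre)).
    All masses in one unit, all lengths in another; G cancels, and the
    result comes out in the same length unit as the inputs."""
    m_env = m_donor - m_core
    denom = 2.0 * m_donor * m_env / (lam * r_donor) \
        + alpha * m_donor * m_comp / a_pre
    return alpha * m_core * m_comp / denom

# Illustrative numbers (not the paper's models): a 1.0 Msun giant with a
# 0.5 Msun core, a 0.3 Msun WD companion, and r_donor ~ a_pre = 200 Rsun.
a_post = post_ce_separation(200.0, 1.0, 0.5, 0.3, 200.0)  # ~8 Rsun
```

As noted in the text, increasing either $\alpha$ or $\lambda$ shifts the result to larger post-CE separations, which the formula reproduces.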
In Fig. 2 we assume $M_{\mathrm{GW\ emitting}}$ to be the (helium) core mass
of the donor star at the end of the main sequence plus $0.3\,M_{\odot}$
assumed to be the mass of a WD companion. We chose this value to reflect the
highest rate of initiation of Galactic CE events from the simulations
described in Sec. 4, which is expected for systems with
$M_{\mathrm{core}}\simeq 0.5\,M_{\odot}$ and $M_{\mathrm{WD}}\simeq
0.3\,M_{\odot}$. Although the combination of core masses with this particular
WD mass might not be realistic throughout the range shown, we have also
performed the same calculation assuming a more massive companion of
$1.4\,M_{\odot}$ (mimicking a Chandrasekhar mass WD or a neutron star). The
resulting curves are shown in Fig. 6 in Appendix C; each curve moves up to
larger post-CE separations, but most curves still intersect the LISA band
(gray area). For large core radii the final dynamical phase (_e_ in Fig. 1
above) might start at frequencies in the LISA band (hatched area in Fig. 2).
The systems ending at separation smaller than the He core radii could result
in mergers rather than successful common envelope ejections. Whether this
leads to a detectable GW signal or not requires further investigation.
We use the solar metallicity models from Pols et al. (1998) to obtain the
(helium) core mass of the donor from their total masses. Our core masses are a
lower bound since we neglect the core-growth due to the ashes of (hydrogen)
shell burning. The gray shaded area indicates core separations corresponding
to a GW signal of frequency $10^{-4}\,\mathrm{Hz}\lesssim
f_{\mathrm{GW}}\lesssim 0.1\,\mathrm{Hz}$, roughly within the LISA band.
Binaries crossing this gray area or ending within it (e.g., verification WD
binaries) are potential candidates for GW detection of the CE evolution with
LISA, depending on the duration of the CE event. The duration cannot be
obtained from a population synthesis approach; it either needs to be modeled
with challenging multi-dimensional hydrodynamic simulations or, as we will
argue below, it might be directly constrained with observation of the GW
signal.
We return to a full population synthesis calculation of the rate of events in
Sec. 4, after discussing the observability of GW from a CE in Sec. 3.
## 3 Detectability of common envelope in LISA
Figure 3: LISA signal-to-noise ratio of a $M_{\mathrm{core}}=0.5\,M_{\odot}$
and $M_{\mathrm{WD}}=0.3\,M_{\odot}$ binary as a function of its initial
observed GW frequency $f_{\mathrm{GW}}$ and frequency derivative
$\dot{f}_{\mathrm{GW}}$ (blue shading and black solid lines). For the duration
of the observations, we assume the shorter of 5 years and the time to merger
at constant $\dot{f}_{\mathrm{GW}}$, and adopt 3 kpc as the source
distance. The top axis shows the orbital separation corresponding to
$f_{\mathrm{GW}}$ for the masses considered. Although the exact
$f_{\mathrm{GW}}$ and $\dot{f}_{\mathrm{GW}}$ values for a binary undergoing
CE are not known, the dashed and solid orange lines illustrate lower and upper
bounds on the expected values of $\dot{f}_{\mathrm{GW}}$ due to CE evolution. The former
assumes pure GW emission (see Appendix A), and the latter assumes pure gas
drag with no feedback (see Appendix B). Finally, dotted and dot-dashed black
lines mark different regimes in which we may or may not expect to successfully
distinguish a CE event from other sources of gravitational radiation (see Sec.
5). Frequency derivatives above the dotted line are likely detectable after
five years of observation, and systems above the dot-dashed line are further
expected to have measurable second derivatives $\ddot{f}_{\rm GW}$, enabling
measurement of braking indices. Systems lying below these thresholds,
meanwhile, are most likely to suffer from stealth bias or mis-identification.
The red crosses indicate known verification WD binaries.
We now explore in what regimes we expect LISA to successfully detect GW
signals from binaries undergoing CE. Since the details of CE are not yet fully
understood, we do not know the precise frequency evolution of a GW source
undergoing CE. Instead, in Fig. 3 we agnostically quantify the expected
signal-to-noise ratios (SNRs) with which LISA will observe a $M_{\mathrm{GW\
emitting}}=0.5+0.3\,M_{\odot}$ binary with an initial GW frequency $f_{\rm
GW}$ and frequency derivatives $\dot{f}_{\rm GW}$ at a distance of 3 kpc. We
chose a value for the typical mass of the donor star core ($0.5\,M_{\odot}$)
and the WD companion ($0.3\,M_{\odot}$) from our population synthesis (see
Sec. 4). Expected SNRs have been averaged over possible sky locations and
binary inclinations. Throughout, we assume the standard quadrupole radiation
such that $f_{\rm GW}=2f_{\mathrm{orb}}$, where $f_{\mathrm{orb}}$ is the
(quasi-) Keplerian orbital frequency of the CE binary. We further assume the
observation to persist for five years (the SNR is proportional to the square
root of the observing time), or until the given binary’s frequency reaches
$f_{\rm GW}=10^{-1.5}\,{\rm Hz}$. Beyond this frequency, the He core would
fill its Roche lobe within the CE, initiating the dynamical phase that leads
to a prompt merger (bottom right-hand side of Fig. 1) and termination of the
GW signal. Additional details of the SNR calculation are given in Appendix A.
Although the true rate $\dot{f}_{\mathrm{GW}}$ at which a binary’s GW
frequency evolves under CE is not known, we can nevertheless place sensible
bounds on frequency evolution of a CE binary. In Fig. 3, the orange dashed
line shows the frequency derivative $\dot{f}_{\mathrm{GW}}(f_{\mathrm{GW}})$
due solely to the emission of GWs. For a given total mass, this is a strict
lower bound on the evolution rate of a binary undergoing CE. Meanwhile, the
orange solid line shows the value of $\dot{f}_{\mathrm{GW}}(f_{\mathrm{GW}})$
expected in the case of pure gas drag, neglecting the reaction of the envelope
and assuming $\rho=10^{-6}\,\mathrm{g\ cm^{-3}}$, $T=9\times
10^{5}\,\mathrm{K}$ to represent the density and temperature deep inside the
shared CE (see Appendix B). This provides a strict upper-limit, since in
nature we expect the envelope to expand due to the energy injected into the
envelope by the inspiral, lowering the gas drag on the binary inside the CE.
Generally, we expect CE binaries to lie between these two bounds. Under these
constraints, we see that systems occupying a large portion of the remaining
$f_{\mathrm{GW}}-\dot{f}_{\mathrm{GW}}$ phase space are detectable with LISA.
In particular, systems whose gravitational wave frequency is
$f_{\mathrm{GW}}\gtrsim 10^{-3}\,{\rm Hz}$ (corresponding to orbital
separations less than $\sim{}0.3\,R_{\odot}$ for our fiducial masses) yield
${\rm SNR}\gtrsim 10$.
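These SNR figures rest on the standard sky-averaged quadrupole strain amplitude of a quasi-monochromatic binary, with the SNR then growing as the square root of the observing time. A minimal sketch of the amplitude alone (the full noise-curve integration of Appendix A is not reproduced here):

```python
import math

G, C = 6.674e-11, 2.998e8       # SI
M_SUN, KPC = 1.989e30, 3.086e19

def chirp_mass(m1, m2):
    """Chirp mass, Eq. (2) of the text."""
    return (m1 * m2)**0.6 / (m1 + m2)**0.2

def strain_amplitude(mc_msun, f_gw, d_kpc):
    """Sky-averaged quadrupole strain amplitude of a circular binary,
    h0 = 4 (G Mc)^(5/3) (pi f_gw)^(2/3) / (c^4 D)."""
    mc = mc_msun * M_SUN
    d = d_kpc * KPC
    return 4.0 * (G * mc)**(5/3) * (math.pi * f_gw)**(2/3) / (C**4 * d)

mc = chirp_mass(0.5, 0.3)             # ~0.34 Msun fiducial system
h0 = strain_amplitude(mc, 1e-3, 3.0)  # ~6e-23 at 1 mHz and 3 kpc
```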
Although galactic binaries undergoing self-regulated CE evolution may be
detectable by LISA, they are not necessarily identifiable as such. Without
additional information or careful consideration, we run the risk of
misidentifying such a system as a purely GW-driven compact binary, incorrectly
attributing an anomalously high $\dot{f}_{\mathrm{GW}}$ to large component
masses rather than accelerated evolution due to CE. This issue of
identification will be discussed further in Sec. 5 below.
## 4 Estimate of the number of Galactic common-envelope binaries
To determine the number of CE binaries in the Galaxy that LISA (and future GW
observatories) might be able to detect, we use the population synthesis code
COSMIC version 3.2 (Breivik et al., 2020; https://cosmic-popsynth.github.io/;
our input files and results are available at https://zenodo.org/record/4490011).
As in all rapid binary population synthesis codes, COSMIC treats CE as
an instantaneous event, therefore, we obtain from COSMIC the rate at which CE
events are initiated ($R_{\mathrm{CE,init}}$) by scaling the number of binaries
initiating a CE with the total mass of the population and multiplying by a
star formation rate (SFR) of $3.5\,M_{\odot}\ \mathrm{yr^{-1}}$ (commonly used
for Milky-way equivalent galaxies in population synthesis calculations of GW
sources, e.g., Dominik et al., 2012). The value of $R_{\mathrm{CE,init}}$ can
be rescaled for an arbitrary SFR, which in any case does not constitute the
dominant uncertainty in the determination of the number of Galactic sources.
Population synthesis calculations assume a CE event occurs if one of the
following conditions is met: (i) both stars simultaneously fill their Roche
lobe (e.g., for initially equal mass ratio systems), (ii) one of the stellar
radii exceeds the periastron point – for eccentric binaries, or (iii) when the
mass transfer is deemed to be unstable. Case (iii) is most common. Mass
transfer stability is decided in COSMIC using critical mass ratio values
$q_{c}$ (Breivik et al. 2020, see also Hurley et al. 2002, Pavlovskii et al.
2017, Vigna-Gómez et al. 2020). Whenever the mass ratio between the donor star
and the accretor is larger than a given threshold
$M_{\mathrm{donor}}/M_{\mathrm{accretor}}\geq q_{c}$, mass transfer is defined
to be unstable and a CE is initiated. Different $q_{c}$ values can be used for
each evolutionary phase of the donor star, to reflect the changes in thermal
timescales as stars evolve.
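Case (iii) reduces to a one-line threshold test; a sketch as described in the text (the $q_c$ value here is an illustrative placeholder, not one of COSMIC's donor-type dependent defaults):

```python
def ce_initiated(m_donor, m_accretor, q_c):
    """Mass transfer is deemed unstable -- initiating a CE -- when the
    donor-to-accretor mass ratio meets or exceeds the critical value q_c."""
    return m_donor / m_accretor >= q_c

# Illustrative: a 1.0 Msun giant donor onto a 0.3 Msun WD, with q_c = 2,
# gives q ~ 3.3 >= 2, so a CE would be initiated.
unstable = ce_initiated(1.0, 0.3, 2.0)  # True
```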
In our fiducial run – labeled as “Claeys et al. 14 $q_{c}$” in Fig. 4 – we
assume the donor-type dependent $q_{c}$ values from Tab. 2 of Claeys et al.
(2014) (note, however, that Claeys et al. 2014 define the critical mass ratio
as $M_{\mathrm{accretor}}/M_{\mathrm{donor}}$), a metallicity $Z=0.02$, and the
correlated initial distributions in primary mass, mass ratio, period, and
eccentricity from Moe & Di Stefano (2017). We explore several variations of
our fiducial run, keeping the same setup except one of the following:
* •
constant $q_{c}$ for all evolutionary phases of the donor star. We consider
values from $q_{c}=0.1$ (more likely to enter CE) to $q_{c}=2$ (less likely to
enter CE);
* •
lower metallicity $Z=0.002$;
* •
independent and uncorrelated initial distributions – with a Kroupa et al. 1993
initial mass function for the primary stars, a flat mass-ratio distribution, a
log-normal period distribution (e.g., Duchêne & Kraus, 2013), and a thermal
eccentricity distribution.
Figure 4: Rate at which CE events are initiated $R_{\mathrm{CE,init}}$ as a
function of the minimum pre-CE separation (left) and minimum total mass
emitting GWs (right). The solid lines include all CE events, while the dashed
lines only include CE where both stars are beyond the main sequence, or in
other words, where two dense stellar cores are involved. Different colors
correspond to different assumptions on the stability of mass transfer
($q_{c}$, see also text) and initial distribution of binaries (yellow).
COSMIC is a Monte Carlo population synthesis code (Breivik et al., 2020), and
for each population described above, we stop drawing systems when the sampling
uncertainty in the initial distributions is less than $10^{-5}$, that is,
drawing more systems has an effect smaller than this threshold. More
specifically, we check the initial primary and secondary mass distributions in
the range [0,80] $M_{\odot}$, and initial orbital period below 1000 days. The
number of systems simulated to reach this threshold ultimately determines the
total mass of the simulated population, which we use to determine
$R_{\mathrm{CE,init}}$.
These parameter variations do not exhaust the entire range of uncertainties.
For example, the fraction of mass that is accreted during stable mass transfer
can influence the orbital evolution of the binary, and thus modify the chances
of a subsequent reverse CE from the initially less massive star to the
initially more massive (e.g., Zapartas et al., 2017). However, we expect the
impact of other parameters to be smaller than the impact of varying $q_{c}$
and the initial distributions for binaries.
We find a rate of CE initiation regardless of the masses involved of
$R_{\mathrm{CE,init}}=0.18^{+0.02}_{-0.09}\,\mathrm{yr^{-1}}$ for all CE
events, and $0.06^{+0.03}_{-0.02}\,\mathrm{yr^{-1}}$ when considering only CE
between two evolved stars (see below). The uncertainties on
$R_{\mathrm{CE,init}}$ are significantly smaller than the uncertainty on the
duration of the CE (cf. Sec. 1.1).
Figure 4 shows the rate of CE initiation that we obtain as a function of the
minimum pre-CE separation (left) and the minimum mass emitting GWs (right),
i.e. the rate of events with pre-CE separation (or GW emitting mass) indicated
by the abscissa or larger. For example, for our fiducial simulation, about
$1.6\times 10^{-2}$ CE events are initiated per year with a pre-CE separation
$a_{\mathrm{pre-CE}}\geq 10^{3}\,\mathrm{R_{\odot}}$. The solid lines in Fig.
4 show all CE events initiated, while the dashed lines focus on CE events
where both stars are evolved beyond the main sequence, meaning both have a
well defined, dense core. The latter (dashed lines) represent systems that are
more likely to be detectable in GWs.
When considering all CE events, higher $q_{c}$ values correspond to lower
$R_{\mathrm{CE,init}}$. Conversely, CE with both stars post-MS, i.e. with a
(semi-) degenerate core, might not be the first mass transfer episode in a
given binary, and their occurrence and rate depend also on what is assumed for
previous mass transfer phases. The CE initiation rate is rather flat for pre-
CE separations as large as $a_{\mathrm{pre-CE}}\lesssim 700\,R_{\odot}$, while
it drops very quickly with increasing $M_{\mathrm{GW\ emitting}}$ because of
the initial mass function.
To determine the mass emitting GWs in each binary, we assume the following: if
a star (donor or accretor) is on the main sequence (MS), that is, it lacks a
well defined core, we use its total mass in the mass emitting GWs. If a star
has evolved beyond the MS, only the core mass adds to the GW emitting mass. If
a star has become a compact object (WD, neutron star or black hole) again the
total mass of the object adds to the GW emitting mass. There are two edge
cases where our assumptions might overestimate the amount of mass involved in
GW-emission as the total mass of the binary: CE between two MS stars, and
between two compact stars (e.g., a double-degenerate SNIa progenitor, Dan et
al. 2011). However, this has very little effect on our conclusions. With our
initial period distribution, CE between two MS stars are relatively rare, and
they are unlikely to be the most interesting target for LISA, since the lack
of dense cores limits the mass involved in the generation of GWs and thus the
signal amplitude. Setting instead the total mass emitting GWs to zero for
these systems has a negligible effect on the right panel of Fig. 4. CE
between two compact stars do not occur with our fiducial setup.
Assuming the stellar population of the Galaxy is in equilibrium, an estimate
of the current number of Galactic CE can then be obtained with
$N_{\mathrm{CE}}=R_{\mathrm{CE,init}}\times\Delta t_{\mathrm{CE}}\ \ ,$ (1)
where $\Delta t_{\mathrm{CE}}$ is the typical duration of a CE event, which is
not known. Moreover, what matters for GW detection is not the actual duration
$\Delta t_{\mathrm{CE}}$, but the fraction of it spent at a separation
corresponding to a GW frequency in the detector bandpass.
Depending on the evolutionary stage of the CE binary and the model assumed,
estimates of the CE duration $\Delta t_{\mathrm{CE}}$ can range from dynamical
timescale ($\sim$ days, e.g., Ginat et al. 2020; Law-Smith et al. 2020) to
thermal timescale ($\sim 10^{5}\,\mathrm{years}$), although the radial
expansion and luminosity increase of the envelope during the CE are likely to
reduce the relevant thermal timescale to $10^{3}-10^{4}$ years (e.g., Meyer &
Meyer-Hofmeister, 1979; Clayton et al., 2017; Igoshev et al., 2020). Recent
hydrodynamical simulations suggest a $\sim$$10$ years duration (e.g., Fragos
et al., 2019; Chamandy et al., 2020).
Based on these previous estimates, assuming a short duration of the CE of
$\sim 10$ years, the rate of CE at the low mass end
($R_{\mathrm{CE,init}}\approx 0.1\,\mathrm{yr^{-1}}$) is sufficiently large
that at least one source might be detectable during a 10-year LISA mission.
Considering only CE with evolved stars (dashed lines), $R_{\mathrm{CE,init}}$
drops to $\sim 0.5\times 10^{-2}\,\mathrm{yr^{-1}}$, making the expected
number of sources only marginally non-zero.
If the self-regulated phase of the CE instead lasts much longer than the
planned LISA mission, there might be upwards of hundreds of detectable sources
in the Galaxy ($R_{\mathrm{CE,init}}\approx
0.1\,\mathrm{yr^{-1}}\times 10^{4}\,\mathrm{yr}\simeq 1000$). Thus, under the
most pessimistic assumptions, there might be
only a slim chance of detecting a CE event through LISA, while under more
optimistic, but not extreme, assumptions there could be a significant number
of target sources available. We return to the possible implications of a non-
detection in Sec. 7.2.
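The arithmetic behind these counts is just Eq. (1) evaluated over the range of duration estimates; as a quick numerical check:

```python
def n_ce(rate_per_yr, duration_yr):
    """Eq. (1): steady-state number of Galactic CE binaries,
    N_CE = R_CE,init * Delta_t_CE."""
    return rate_per_yr * duration_yr

R_ALL = 0.18  # yr^-1, all CE events (Sec. 4)
# Durations spanning the hydrodynamical (~10 yr) and thermal (~1e4 yr) estimates:
estimates = {dt: n_ce(R_ALL, dt) for dt in (10.0, 1e3, 1e4)}
# -> roughly 2, 180, and 1800 systems at any given time
```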
## 5 Stealth bias
There is an ever growing catalog of potential systems that LISA might be
sensitive to, both galactic and extragalactic. Examples include exoplanets
(e.g., Tamanini & Danielski, 2019), WD binaries in various stages of their
evolution (e.g., Kupfer et al., 2020; Liu et al., 2021), and binary black
holes (BBH) in vacuum or embedded in gas (e.g., Chen et al., 2020b). Our study
suggests that CE events are also potentially detectable by LISA, raising the
question: how can we distinguish GWs from a CE from these other possibly
confounding sources? In this section we discuss GW-based ways that could be
used to conclude that a detected signal is a CE event and the expected bias if
we fail to do so. We discuss further EM-based ways of resolving this stealth
bias in Sec. 6.
Depending on the exact frequency evolution of the CE event, there may exist an
observational degeneracy between the CE inspiral and purely GW-driven binaries
of higher mass hosting black holes and/or neutron stars. In standard GW
analysis that assumes a binary in vacuum is described by general relativity
(GR), a measurement of a binary’s $f_{\rm GW}$ and $\dot{f}_{\rm GW}$ can be
used to estimate its mass and distance. If a particular binary is actually
undergoing CE, the assumption of vacuum breaks down and the resulting
estimates will be biased compared to their true values (see also Chen et al.,
2020b, for an extended discussion).
This bias is quantified in Fig. 5, which shows the chirp mass
$M_{c}=\frac{(M_{1}M_{2})^{3/5}}{(M_{1}+M_{2})^{1/5}}\ \ ,$ (2)
(color in top) and distance ($D$, color in bottom) that we would incorrectly
assign to a $0.5+0.3\,M_{\odot}$ binary undergoing CE at $3$ kpc, if we have
measured only its gravitational-wave frequency and frequency derivative (and
the signal amplitude). We note that for our example binary, the correct value
of the chirp mass would be $M_{c}=0.34\,M_{\odot}$ (pink line in the top
panel). Depending on the exact values of ($f_{\mathrm{GW}}$,
$\dot{f}_{\mathrm{GW}}$), we find the possibility of large biases in the
inferred parameters, some of them being consistent with BBHs at cosmological
distances.
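The bias just described amounts to inverting the vacuum-GR chirp relation $\dot{f}_{\mathrm{GW}}=\frac{96}{5}\pi^{8/3}(G M_c/c^3)^{5/3}f_{\mathrm{GW}}^{11/3}$ for the chirp mass. A minimal sketch of that inversion (illustrative; not the analysis pipeline used for Fig. 5):

```python
import math

G, C = 6.674e-11, 2.998e8  # SI
M_SUN = 1.989e30

def gr_fdot(mc_msun, f_gw):
    """Vacuum-GR frequency derivative for a binary of chirp mass mc_msun."""
    mc = mc_msun * M_SUN
    return (96/5) * math.pi**(8/3) * (G * mc / C**3)**(5/3) * f_gw**(11/3)

def inferred_chirp_mass(f_gw, fdot_gw):
    """Chirp mass (in Msun) a vacuum-GR analysis would assign to a source
    with measured (f_GW, fdot_GW), by inverting the relation above."""
    x = (5/96) * fdot_gw * math.pi**(-8/3) * f_gw**(-11/3)
    return (C**3 / G) * x**0.6 / M_SUN

# Feeding back the GR fdot of the fiducial Mc = 0.34 Msun binary recovers it:
f = 1e-3
mc_rec = inferred_chirp_mass(f, gr_fdot(0.34, f))           # ~0.34
# A CE-driven fdot 100x above the GR value inflates the inferred mass:
mc_biased = inferred_chirp_mass(f, 100 * gr_fdot(0.34, f))  # ~5.4 Msun
```

This illustrates how an anomalously high $\dot{f}_{\mathrm{GW}}$ from gas drag would be misread as a much more massive compact binary.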
We identify three possible means of breaking this degeneracy between BBHs and
CE events:
* (i)
Increased observation time could result in a measurement of
$\ddot{f}_{\mathrm{GW}}$ (and higher derivatives) in addition to
$f_{\mathrm{GW}}$ and $\dot{f}_{\mathrm{GW}}$, enabling a measurement of the
binary’s _braking index_. This scenario is further discussed in Sec. 5.1.
* (ii)
The (incorrectly) inferred chirp mass may be high enough that the system is
consistent with a BBH. In this case, one would expect to continue to see the
signal evolve, and eventually observe the merger in the LIGO band. Failure to
observe the final merger, or a signal that is different from a BBH merger
signal in vacuum GR would indicate that the binary may have been
misidentified.
* (iii)
The simultaneous measurement of an apparent high chirp mass and a low distance
might indicate the presence of a BBH close to Earth, potentially even in the
Galaxy for certain combinations of $f_{\mathrm{GW}}$ and
$\dot{f}_{\mathrm{GW}}$. Such a system might be inconsistent with independent
information about the BBH inspiral and merger rates obtained from ground-based
detectors. This possibility is further discussed in Sec. 5.2.
If all these tests fail, the possibility of a _stealth_ bias on the system
parameters remains: it may be impossible to conclusively identify the true
nature of the given binary.
Figure 5: Inferred chirp mass (top) and distance (bottom) for each measured
$f_{\mathrm{GW}}$ and $\dot{f}_{\mathrm{GW}}$ for a system with
$M_{\mathrm{core}}=0.5\,M_{\odot},M_{\mathrm{WD}}=0.3\,M_{\odot}$ at $3$ kpc.
Similar to Figure 3, we use a dot-dashed line to show the border above which
$\ddot{f}_{\mathrm{GW}}$ can be measured and the nature of the binary can be
determined through the braking index measurement (see Sec. 5.1). Below that
line there is a possible stealth bias, which these two plots quantify. Above
the dotted line, frequency derivatives are likely detectable after five years
of observation. Systems lying below the dotted line would be observed as
stationary sources. The magenta line corresponds to the correct value of the
chirp mass and distance, namely the GR relation between $f_{\mathrm{GW}}$ and
$\dot{f}_{\mathrm{GW}}$ for these masses, along which the bias vanishes.
### 5.1 Measuring the braking index
Stellar binaries evolving purely via GW radiation have GW frequencies that
evolve in time as $\dot{f}_{\mathrm{GW}}\propto f_{\mathrm{GW}}^{11/3}$. In
general, given some source evolving as $\dot{f}\propto f^{n}$, a measurement
of the braking index $n$ given by
$n=\frac{f\ddot{f}}{\dot{f}^{2}}\ \ ,$ (3)
is possible if the second time derivative $\ddot{f}$ is measurable. We
emphasize the analogy with the pulsar braking index, where $f$ is the pulsar
spin frequency rather than the frequency of GW radiated by the CE binary.
If LISA can successfully measure $f_{\mathrm{GW}}$, $\dot{f}_{\mathrm{GW}}$,
and $\ddot{f}_{\mathrm{GW}}$ for a given GW source, inference on the source’s
braking index offers a means of identifying the physical processes driving its
evolution. In particular, we can seek to verify if $n$ is consistent with
$n_{\mathrm{GW}}=\frac{11}{3}$, as expected for pure gravitational radiation
in vacuum, or inconsistent with $n_{\mathrm{GW}}$, possibly indicating a
binary in the midst of CE or subject to other environmental influences (e.g.,
Fedrow et al., 2017).
While LISA is expected to measure $\dot{f}_{\mathrm{GW}}$ for a considerable
number of binaries, $\ddot{f}_{\mathrm{GW}}$ is likely measurable only in a
much smaller subset of cases. Although determining when
$\dot{f}_{\mathrm{GW}}$ and $\ddot{f}_{\mathrm{GW}}$ are measurable requires
expensive simulations (Seto, 2002; Littenberg et al., 2020), we can make an
order-of-magnitude estimate by considering how LISA’s frequency resolution
scales with time. As LISA accumulates more data, its frequency resolution
scales with the observing time $T$ as $\Delta f_{\mathrm{GW}}\approx 1/T$.
Accordingly, we assume that $\dot{f}_{\mathrm{GW}}$ and
$\ddot{f}_{\mathrm{GW}}$ are measurable when they cause a binary’s frequency
to change enough such that it moves from one frequency bin to another over the
course of LISA’s lifetime. Then, lower limits on the minimum detectable
$\dot{f}_{\mathrm{GW}}$ and $\ddot{f}_{\mathrm{GW}}$ are
$\displaystyle\dot{f}_{\rm GW,min}T$ $\displaystyle=\Delta f_{\rm
GW}\quad\Rightarrow\quad\dot{f}_{\rm GW,min}\approx\frac{1}{T^{2}}\ \ ,$ (4)
$\displaystyle\frac{1}{2}\ddot{f}_{\rm GW,min}T^{2}$ $\displaystyle=\Delta
f_{\rm GW}\quad\Rightarrow\quad\ddot{f}_{\rm GW,min}\approx\frac{2}{T^{3}}\ \
.$ (5)
We refer the readers to Takahashi & Seto (2002) and Robson et al. (2018) for
more accurate estimates accounting for the SNR dependence.
On the $\dot{f}_{\mathrm{GW}}-f_{\mathrm{GW}}$ plane, the first condition is a
straight line depicted by the black dotted horizontal line in Fig. 3 and 5. We
can also identify the region of this plane in which $\ddot{f}_{\rm GW}$ is
measurable for a system evolving in vacuum GR. Using the definition of braking
index (Eq. 3) to express $\dot{f}_{\mathrm{GW}}$ as a function of
$f_{\mathrm{GW}}$ and $\ddot{f}_{\mathrm{GW}}$, a binary evolving purely via
GW emission with the minimum detectable $\ddot{f}_{\rm GW}$ (Eq. 5) has a
frequency derivative
$\dot{f}^{2}_{\rm GW}=\frac{3}{11}\ddot{f}_{\rm GW,min}f_{\rm GW}\ \ .$ (6)
This relation is plotted in Figs. 3 and 5 with black dot-dashed line. Consider
a system lying above this line. Such a system _should_ have a measurable
$\ddot{f}_{\rm GW}$ (and hence a measurable braking index) if it evolves via
GW emission. Therefore, if this system’s $\ddot{f}_{\rm GW}$ or $n$ is not
measured, or is measured to be different than the GR prediction, then the non-
vacuum or non-GR nature of the binary can be inferred. Next, we consider
systems lying below the line defined by Eq. (6). For such systems, GW
evolution alone cannot lead to a detectable $\ddot{f}_{\rm GW}$. If
$\ddot{f}_{\rm GW}$ is nevertheless measured, one would infer that the binary
does not evolve solely under vacuum GR effects. If $\ddot{f}_{\rm GW}$ is not
measured, then it is not possible to determine the nature of the binary.
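The thresholds of Eqs. (4)–(6) are simple enough to evaluate directly. A short order-of-magnitude sketch (ignoring the SNR dependence discussed above):

```python
import math

YEAR = 3.156e7  # seconds

def fdot_min(T):
    """Eq. (4): minimum detectable frequency derivative, ~1/T^2."""
    return 1.0 / T**2

def fddot_min(T):
    """Eq. (5): minimum detectable second derivative, ~2/T^3."""
    return 2.0 / T**3

def fdot_braking_boundary(f_gw, T):
    """Eq. (6): the fdot above which a vacuum-GR source (n = 11/3) would
    also show a measurable fddot: fdot^2 = (3/11) fddot_min f_gw."""
    return math.sqrt((3.0 / 11.0) * fddot_min(T) * f_gw)

T = 5 * YEAR
# For a 5 yr observation: fdot_min ~ 4e-17 Hz/s, and at f_GW = 1 mHz the
# braking index becomes accessible above fdot ~ 1.2e-14 Hz/s.
fd_min = fdot_min(T)
fd_boundary = fdot_braking_boundary(1e-3, T)
```

These two values reproduce the horizontal dotted line and the dot-dashed boundary of Figs. 3 and 5 at the 1 mHz reference frequency.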
### 5.2 Distinction from other sources based on rates
One potential way to infer the nature of sources that cannot be unambiguously
identified using only their GW signature is through their rate of occurrence.
For example, if the rate of CE detections in LISA exceeds the uncertainty in
the rate of BBH mergers, one can statistically identify an over-abundance of
apparent BBH detections in LISA when comparing to the BBH merger rate from
ground-based detectors.
As of LIGO/Virgo O3a, the rate of BBH mergers
($29.9^{+14.9}_{-8.6}\,\mathrm{Gpc^{-3}\,yr^{-1}}$, Abbott et al., 2020a, b) is relatively
poorly constrained due to the small sample size of BBH events and the
dependence on the assumed BH mass distribution. However, by the time LISA
becomes operational (around the year 2035, Amaro-Seoane et al., 2017), the BBH
catalog size from ground based detectors will have increased to approximately
$10^{4.5}$–$10^{5}$ BBH detections per year (e.g. Baibhav et al., 2019). This
means that the constraints on the BBH merger rate will have improved
drastically.
The LISA mission will result in an independent measurement of the rate of BBH
mergers, which will be based on the GW signals from wide (in-spiraling) BBH
systems. If the BBH merger rate as inferred from LISA is significantly higher
than the, by then, well constrained BBH merger rate from ground based
detectors, one could argue that this overabundance could originate from, for
example, CE-events.
Moreover, the BBH merger rate from ground based detectors will be sufficiently
constrained to allow for a split based on event parameters such as BH mass and
spin. This will enable us to constrain, for example, the rate of massive BHs
($M_{\rm{BH}}\sim 40\,M_{\odot}$) in the local Universe. CE-events might
appear as more massive BBH events (see Fig. 5).
Whether or not the discrepancy between the BBH merger rates inferred from
LISA and from the ground based detectors will be significant will depend both
on the size of the discrepancy and the uncertainty in the rate
measurement from LISA. If we assume that such an overabundance in the rate
comes from CE events, then the size of the discrepancy is directly related to
the rate of CE events. Based on our discussion in Sec. 4, we expect the number
of CE events detectable by LISA to lie between 0.1 and 1000. Although this
estimate spans several orders of magnitude, we will be able to constrain its
upper limit (corresponding to long-lasting CE events with
$\Delta t_{\mathrm{CE}}\sim 10^{4}\,$yr), given the uncertainty in the
stellar-mass BBH merger rate in LISA.
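The significance of such an overabundance can be illustrated with a toy Poisson comparison (not from the paper; the counts below are hypothetical):

```python
from math import sqrt

def excess_significance(n_lisa, n_expected_bbh):
    """Gaussian approximation to the significance of an excess of
    LISA detections over the BBH prediction, treating the observed
    count as Poisson with variance ~ n_expected_bbh."""
    return (n_lisa - n_expected_bbh) / sqrt(n_expected_bbh)

# hypothetical counts: 100 wide BBHs predicted from ground-based
# rates, 130 in-band sources actually detected by LISA
print(f"{excess_significance(130, 100):.1f} sigma")  # 3.0 sigma
```

In this simplified picture, an excess of a few tens of events over a well-constrained prediction of order $10^2$ would already be statistically meaningful; the real comparison would also fold in the LISA selection function.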
## 6 Sky localization and electromagnetic counterparts
EM transients could provide interesting triggers for GW searches, and in this
case the probability of misinterpreting the signal might be lower. In fact,
the self-regulated inspiral phase is expected to follow the dynamical plunge-
in phase (step (_c_) in Fig. 1), which might cause optical (e.g., LRNe Soker &
Tylenda, 2003; Kulkarni et al., 2007; Ivanova et al., 2013a; Pastorello et
al., 2019) and infrared (IR) transients (e.g., SPRITES Kasliwal et al., 2017).
An example of what a possible EM precursor could look like is V1309Sco
(Tylenda et al., 2011), and given that the initial parameters of the binary
stars are known fairly well, it would be interesting to monitor this object
for possible GWs once LISA is operational. Concerning the characterization and
discovery of EM precursors, the spectral evolution and late-time observations
of V1309Sco suggest that LRNe could produce large amounts of dust and be
particularly bright in the IR (Pejcha et al., 2017; Metzger &
Pejcha, 2017; Iaconi et al., 2020; Nicholls et al., 2013; Jencson et al.,
2019; Blagorodnova et al., 2020). Therefore, it is conceivable that some of
the transients associated with CE events might only be observable in the IR.
The SPitzer InfraRed Intensive Transients Survey (SPIRITS) detected unusual
infrared transients with no optical counterparts (Kasliwal et al., 2017).
These transients, dubbed eSPecially Red Intermediate-luminosity Transient
Events (SPRITEs), are in the infrared luminosity gap between novae and
supernovae, and could be associated with stellar mergers and CE events.
Another way to identify interesting targets, in cases where the CE event was
initiated too long ago for its onset to have been observed, is through
peculiar circumstellar environments. An example could be TYC 2597-735-1, which was
interpreted as a merger product based on its “blue-ring nebula” but still
exhibits variability (Hoadley et al., 2020, although its variability is not
necessarily related to a still ongoing merger).
Conversely, in the case of a GW detection, the interpretation of the signal
might be helped by EM followup. This might be necessary to distinguish a CE
signal from other potential mHz GW sources (e.g., AM CVns, sdO+WD binaries).
As mentioned before, the most promising GW signal for LISA does not come from
the dynamical phase of a CE associated to the brightest EM transients (e.g.,
LRNe). Instead, the long-lasting self-regulated inspiral produces a possibly
slowly-evolving GW source. The LISA sky localization depends strongly on the
signal and its SNR; for a monochromatic source, typical uncertainty
ellipses are of order tens of square degrees (Cutler, 1998).
During the self-regulated phase, the CE binary might look like a red giant
with the following possible features. The surrounding circumstellar material
might be peculiar: a CE is initiated by an unstable phase of RLOF (step _a_ in
Fig. 1), which can be non-conservative and spill over into the surroundings of
the system (e.g., Pejcha et al., 2017; MacLeod & Loeb, 2020). One-dimensional
CE simulations suggest that periodic outbursts of mass ejection during the
self-regulated phase are possible (e.g., Clayton et al., 2017). Thus, a CE
binary might appear as a red giant with an excess of circumstellar material
and peculiar time-variability.
## 7 Discussion
### 7.1 Comparison to other common envelope rate estimates
Assuming a steady-state stellar population in the Galaxy, the number of CE
binaries viable as GW sources depends on the initiation rate
$R_{\mathrm{CE,init}}$ and the time duration when the system might be
detectable. While presently the latter needs to be estimated from numerical
simulations (although, see also Igoshev et al. 2020), the former can be
constrained using observations of EM transients associated with CE events.
The most relevant EM transient to constrain the rate of CE are LRNe, which
have been conclusively linked to a CE event via observations of the period
decay preceding the outburst event in V1309Sco (Tylenda et al., 2011).
Kochanek et al. (2014) estimate the Galactic rate of these events with $V$
($I$) band magnitude brighter than $-3$ ($-4$) to be $\sim
0.5\,(0.3)\,\mathrm{yr^{-1}}$, in reasonable agreement with our
$R_{\mathrm{CE,init}}=0.18^{+0.02}_{-0.09}\,\mathrm{yr^{-1}}$ (cf. Sec. 4).
Their estimate was based on the then-available sample of 4 events in 25 years
and a correction for observational biases. Kochanek et al. (2014) also noted the
good agreement with population synthesis results obtained with the StarTrack
code. More recently, Howitt et al. (2020) used the COMPAS population synthesis
code to suggest a Galactic rate of LRNe of 0.2$\,\mathrm{yr^{-1}}$, which is
in agreement with our estimates obtained with COSMIC.
### 7.2 Consequences of non-detection
Even a non-detection of GWs from a CE event throughout the LISA mission might
provide unique and direct constraints on CE evolution. In this case,
regardless of the rate of CE initiation, the first possibility is that the
self-regulated thermal-timescale phase happens at too wide separations,
corresponding to $f_{\mathrm{GW}}\lesssim 10^{-4}$ Hz below the LISA band
pass. For our representative CE binary with $M_{\mathrm{core}}=0.5\,M_{\odot}$
and $M_{2}=0.3\,M_{\odot}$, this would mean $a_{\mathrm{post-plunge}}\gtrsim
1.5\,R_{\odot}$, although from Fig. 3 we expect to build up significant SNR
only for $a_{\mathrm{post-plunge}}\lesssim 0.5\,R_{\odot}$.
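The quoted correspondence between post-plunge separation and GW frequency follows from Kepler's third law with $f_{\rm GW}=2f_{\rm orb}$; a quick numerical check (a sketch using standard constants, not code from the paper):

```python
from math import pi, sqrt

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m

def f_gw(m_total_msun, a_rsun):
    """GW frequency of a circular binary, f_GW = 2 f_orb (Kepler)."""
    m = m_total_msun * M_SUN
    a = a_rsun * R_SUN
    return sqrt(G * m / a**3) / pi

# core + companion: 0.5 + 0.3 Msun at a_post-plunge = 1.5 Rsun
print(f"{f_gw(0.8, 1.5):.2e} Hz")  # ~1e-4 Hz, the lower edge of the band
```

Shrinking the separation to $0.5\,R_{\odot}$ raises $f_{\rm GW}$ by a factor of $1.5^{3/2}/0.5^{3/2}\approx 5$, consistent with the SNR build-up quoted above.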
Alternatively, direct constraints on the duration in the LISA band ($\Delta
t_{\mathrm{CE}}$) can be derived with some informed assumptions on the rate of
initiation ($R_{\mathrm{CE,init}}$). Considering all CE events, the latter can
be estimated with the observed rate of LRNe at $\approx 0.1\,\mathrm{yr^{-1}}$ (consistent
with our population synthesis calculations, see also Sec. 7.1). Restricting to
CEs between two evolved stars with dense cores, our simulations suggest a
decrease by a factor of $\sim$3 (cf. Sec. 4). Therefore, despite the
many uncertainties in binary mass-transfer stability and CE evolution, both EM
transient observations and population-synthesis estimates suggest a value of
about one CE initiation every decade in the Galaxy. Assuming most of these
events will cross the LISA band at some point (cf. Fig. 2) and a mission
duration of $T\approx 10$ yr, a non-detection would be at odds with models for
which $\Delta t_{\mathrm{CE}}\gg 1$ yr, which we predict might result in a
significant number of detectable sources. The longer the time $T$ during which
a GW signal might be observed, the less dependent on uncertainties in
$R_{\mathrm{CE,init}}$ and the more stringent this constraint will be.
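This constraint can be sketched with a toy Poisson model (an illustration only; it assumes every CE event whose in-band phase overlaps the mission is detected, which is far more idealized than the treatment in the paper):

```python
from math import exp

def prob_no_detection(rate_per_yr, dt_ce_yr, t_mission_yr=10.0):
    """Poisson probability of zero detections, given a CE initiation
    rate and an in-band duration dt_CE per event."""
    n_expected = rate_per_yr * (dt_ce_yr + t_mission_yr)
    return exp(-n_expected)

for dt in (0.1, 1.0, 100.0, 1e4):
    p = prob_no_detection(0.1, dt)
    print(f"dt_CE = {dt:>8} yr  ->  P(0 det.) = {p:.2e}")
# long-lived models (dt_CE >> 10 yr) predict many in-band sources,
# so a null result strongly disfavors them
```

With $R_{\mathrm{CE,init}}\approx 0.1\,\mathrm{yr^{-1}}$, a null result is unsurprising for $\Delta t_{\mathrm{CE}}\lesssim 1$ yr but becomes extremely improbable for $\Delta t_{\mathrm{CE}}\gtrsim 100$ yr, mirroring the argument in the text.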
### 7.3 Further caveats
There are several effects which we have not explored in detail in this study
which could impact the LISA detectability of a CE event. The dynamical plunge-
in could make the orbit inside the CE eccentric, modifying the GW emission in
two ways: making it more “bursty”, with stronger emission at each pericenter
passage, and more “directional”. If the rate of CE events is on the
optimistic side, LISA can still expect to detect eccentric CE events, and will
unveil a great deal about the physics of the dynamical plunge-in.
We have also neglected triple stellar systems. Triples can enhance the rate of
mass transfer in binaries, either because the third star fills its Roche lobe,
or because of the Kozai-Lidov oscillations inducing high eccentricity in the
inner binary (e.g., Toonen et al., 2020). This can in principle enhance the
rate of CE and produce events which are qualitatively different in EM and GWs,
see for example Glanz & Perets (2021). Hence, our most pessimistic rate
calculations may be regarded as a lower limit.
## 8 Summary and conclusions
Common-envelope evolution remains one of the largest uncertainties in stellar
physics. Here, we have investigated the possibility of detecting
gravitational-wave emission from a binary made of the core of the donor star
and the companion star inspiraling within a shared common envelope.
Although common-envelope evolution is never _driven_ by the loss of energy to
gravitational radiation, the binary inside the shared envelope does have a
time-dependent mass quadrupole moment and can thus emit gravitational
radiation. If detectable, such gravitational radiation offers invaluable
insight on the process by passively tracing the motion of the binary.
While the final dynamical phase of a common envelope might emit detectable
signals (Ginat et al., 2020), its short duration makes it an extremely rare
target. Conversely, we focus on a longer duration phase that might occur in
some common-envelope events: the self-regulating thermal-timescale phase.
Detections of this phase would constrain the stalling radius (from the
gravitational-wave frequency) and the duration of the stalled phase (from the
number of sources detected and/or the signal duration). Even non-detections
could put upper-limits on the duration or lower limits on the radius at which
common envelopes might stall (see Sec. 7.2).
The gravitational-wave frequency range of interest is likely in the mHz range.
This is suggested by the existence of verification white-dwarf binaries for
the LISA mission, which are thought to be the outcome of successful
common-envelope ejections.
Based on rapid population synthesis calculations, we estimate that about one
common-envelope event is initiated per decade in the Galaxy (cf. Fig. 4), in
agreement with previous observational and theoretical determinations (Sec. 4
and Sec. 7.1). The largest uncertainty in converting this to a number of
Galactic sources is the duration of the common envelope, which is predicted to
lie between a dynamical ($\sim$ days) and a thermal (decades to $10^{5}$
years) timescale, and is likely to depend on the masses and evolutionary
stages of the binary considered. Nevertheless, assuming that some common-
envelope events go through a self-regulated thermal-timescale phase, it is
realistic to expect at least one source within the LISA mission, and possibly
hundreds or more.
While we remained agnostic on the details of the physics governing the common-
envelope dynamics, we can bracket the range of possibilities by requiring the
common envelope to proceed faster than if driven by pure gravitational-wave
emission (corresponding to the unrealistic assumption of neglecting the gas
drag), and slower than if the gas drag was constant (corresponding to the
unrealistic assumption of neglecting the envelope reaction to the inspiraling
binary). We found that a system representative of the most common kind of
Galactic common envelope – a $0.5\,M_{\odot}$ core with a $0.3\,M_{\odot}$
companion embedded in a shared envelope – might be (marginally) detectable
during the stalled phase within these limits (cf. Fig. 3). More specifically,
if the separation at which the inspiral stalls is of order of $0.1\,R_{\odot}$
and lasts $\gtrsim 5$ years, an SNR $>$10 is possible. The shared envelope is
likely too low density to contribute directly to the gravitational-wave
signal.
We have further investigated the risk of misinterpreting a future detection of
a Galactic common-envelope binary if analyzed as a compact binary in vacuum
evolving according to general relativity (cf. Sec. 5). Although the
possibility of misinterpretation exists (cf. Fig. 5), there are several ways in
which it could be avoided. To break the degeneracies and recognize a signal as
the product of a Galactic common envelope, one could rely on electromagnetic
followup, or use electromagnetic observations to attempt directional searches
in the gravitational-wave data (Sec. 6); measure deviations from the “braking
index” predicted by general relativity (for fast evolving signals); or compare
the rate of detections with the double compact-object-merger rate from ground-
based detectors.
Space-based gravitational-wave detectors might improve our understanding of
the inner dynamics of common-envelope evolution by using gravitational waves to
probe in a direct way a phenomenon that is _not_ gravitational wave driven.
Portions of this study were performed during the LISA Sprint at the Center for
Computational Astrophysics of the Flatiron Institute, supported by the Simons
Foundation. MR thanks S. Justham and Y. F. Jiang for helpful discussions early
on during this project, and K. Breivik for guidance in using COSMIC and
helpful feedback. We thank T. Littenberg for useful discussion on LISA
parameter estimation.
## Appendix A Calculation of LISA Signal-to-Noise ratios
We calculate expected LISA signal-to-noise ratios following the discussion in
Robson et al. (2019); our code is available at
https://github.com/tcallister/LISA-and-CE-Evolution/. The gravitational-wave
signal from a quasicircular binary can be generically described via
$\displaystyle h_{+}(t)$
$\displaystyle=A(t)\frac{1+\cos^{2}\iota}{2}\cos\Psi(t)\ \ ,$ (A1)
$\displaystyle h_{\times}(t)$ $\displaystyle=A(t)\cos\iota\sin\Psi(t).$
Here, $A(t)$ is the gravitational wave’s amplitude, $\iota$ is the inclination
of the binary’s orbital plane with respect to our line of sight, and $\Psi(t)$
is the gravitational wave’s phase (the corresponding frequency of
gravitational-wave emission is $f_{\rm GW}=\frac{1}{2\pi}\dot{\Psi}$). In the
frequency domain (denoted by a tilde), the resulting signal that is measured
by LISA is the linear combination
$\tilde{h}(f_{\rm GW})=F_{+}(f_{\rm GW},\hat{n})\,\tilde{h}_{+}(f_{\rm
GW})+F_{\times}(f_{\rm GW},\hat{n})\,\tilde{h}_{\times}(f_{\rm GW})\ \ ,$ (A2)
assuming a signal incident on LISA from direction $\hat{n}$, and where
$F_{+}(f_{\rm GW},\hat{n})$ and $F_{\times}(f_{\rm GW},\hat{n})$ are the LISA
antenna response functions. Note that, unlike ground-based detectors, LISA
observes signals whose wavelengths are comparable to the size of the
instrument itself, and therefore its antenna patterns are strongly frequency-
dependent. Also, here we explicitly use $f_{\rm GW}$ to refer to
gravitational-wave frequencies in order to minimize confusion with orbital
frequencies $f_{\mathrm{orb}}=f_{\rm GW}/2$.
The resulting signal-to-noise ratio (SNR) with which this signal is observed
in LISA is given by
${\rm SNR}^{2}=4\int_{0}^{\infty}\frac{|\tilde{h}(f_{\rm GW})|^{2}}{S_{n}(f_{\rm
GW})}df_{\rm GW}\ \ ,$ (A3)
where $S_{n}(f_{\rm GW})$ is the LISA strain sensitivity curve; we adopt the
analytic expression in Eq. 1 of Robson et al. (2019). As we do not know in
advance the sky location or orientation of possible CE sources, Eq. (A3) can
be averaged over all sky positions and binary inclination angles, yielding an
expected signal-to-noise ratio
$\langle{\rm
SNR}^{2}\rangle=\frac{16}{5}\int_{0}^{\infty}\frac{|\tilde{A}(f_{\rm
GW})|^{2}}{S_{n}(f_{\rm GW})}df_{\rm GW}\ \ .$ (A4)
We use Eq. (A4) to obtain the results shown with the colors in Fig. 3.
To compute the Fourier domain amplitudes $\tilde{A}(f_{\rm GW})$, we first
note that the time-domain amplitude of a quasi-circular binary with chirp mass
$\mathcal{M}_{c}$, at distance $D$, and gravitational-wave frequency $f_{\rm
GW}$ is (Maggiore, 2008)
$A(t)=\frac{4}{D}\left(\frac{G\mathcal{M}_{c}}{c^{2}}\right)^{5/3}\left(\frac{\pi
f_{\rm GW}}{c}\right)^{2/3}\ \ .$ (A5)
The corresponding Fourier-domain amplitude is (Buonanno et al., 2009)
$\tilde{A}(f_{\rm GW})=\frac{A\left(t(f_{\rm GW})\right)}{\sqrt{\dot{f}_{\rm
GW}\left(t(f_{\rm GW})\right)}}\ \ ,$ (A6)
where $\dot{f}_{\rm GW}$ is the time derivative of the gravitational-wave
frequency and we now regard time as a function of $f_{\rm GW}$.
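A minimal numerical sketch of Eqs. (A4)–(A6) for a slowly chirping source follows. Note that `toy_Sn` is a flat placeholder for the Robson et al. (2019) sensitivity curve, and the source parameters are hypothetical, so the printed SNR is illustrative only (the released code linked above does the real calculation):

```python
import numpy as np

G, C_LIGHT = 6.674e-11, 2.998e8   # SI units
MSUN, PC = 1.989e30, 3.086e16     # kg, m

def toy_Sn(f):
    """Flat placeholder sensitivity; NOT the Robson et al. (2019) curve."""
    return 1e-40 * np.ones_like(f)

def snr_sky_averaged(mchirp_msun, d_pc, f_lo, f_hi, fdot, n=1000):
    """Sky/inclination-averaged SNR from Eqs. (A4)-(A6), assuming a
    constant frequency derivative fdot across [f_lo, f_hi]."""
    f = np.linspace(f_lo, f_hi, n)
    mc = mchirp_msun * MSUN
    # Eq. (A5): time-domain amplitude at each swept frequency
    A = (4 / (d_pc * PC)) * (G * mc / C_LIGHT**2)**(5 / 3) \
        * (np.pi * f / C_LIGHT)**(2 / 3)
    # Eq. (A6): stationary-phase Fourier amplitude
    A_tilde = A / np.sqrt(fdot)
    # Eq. (A4): sky-averaged SNR^2 (simple Riemann sum over the band)
    df = f[1] - f[0]
    snr2 = (16 / 5) * np.sum(A_tilde**2 / toy_Sn(f)) * df
    return np.sqrt(snr2)

# hypothetical source: Mc = 0.33 Msun at 1 kpc, sweeping 3-4 mHz in ~5 yr
fdot = 1e-3 / (5 * 3.15e7)   # Hz/s
print(snr_sky_averaged(0.33, 1000.0, 3e-3, 4e-3, fdot))
```

Since $\tilde{A}\propto 1/D$, the SNR falls linearly with distance, which is why Galactic ($\sim$ kpc) sources dominate the detectable population.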
## Appendix B Calculation of gas-drag neglecting the envelope reaction
Any massive object moving linearly through gas with a constant density
interacts with its own gravitational wake and loses momentum via dynamical
friction. An object of mass $M$ and radius $R$ travelling with velocity $v$
through gas of uniform density $\rho$ and sound speed $c_{s}$ generates an
enhanced density tail. This wake is confined to the Mach cone at supersonic
speeds ($\mathcal{M}=v/c_{s}\gg 1$) and at sub-sonic speeds the over-dense
wake lies within a sphere of radius $c_{s}t$ a distance $vt$ behind the mass.
If the wake is low amplitude, linear perturbation theory yields a drag force
of the form $F_{\rm DF}=-C4\pi(GM)^{2}\rho/v^{2}$ (Ostriker, 1999) where $C$
is a term that approaches $C\rightarrow\mathcal{M}^{3}/3$ as
$\mathcal{M}\ll 1$ and $C\rightarrow\ln(vt/R)$ as $\mathcal{M}\gg 1$. For a
massive object on a circular orbit, the path of the wake bends and is
therefore asymmetric, leading to both azimuthal and radial components of drag.
The azimuthal component dominates and is generally well approximated by
$F_{\rm DF}$ (Kim & Kim, 2007). Where the perturber is more massive, the wake
is larger amplitude and the non-linear regime modifies $F_{\rm DF}$ by an
additional factor $(\eta/2)^{-0.45}$ where $\eta$ is a non-linear factor given
by $\eta=A/(\mathcal{M}^{2}-1)$ and $A=GM/c_{s}^{2}R_{s}$ with $R_{s}$ a
characteristic softening scale (Kim & Kim, 2009). Further modifications to the
Kim & Kim (2007) picture of drag can occur when the embedded mass is a binary
and the binary orbit or its orbit around its own center of mass become
supersonic (Sánchez-Salcedo & Chametla, 2014).
To calculate the solid orange line in Fig. 3, we assumed
$\rho=10^{-6}\,\mathrm{g\ cm^{-3}}$, roughly corresponding to the average
envelope density for a red supergiant, and a temperature $T=9\times
10^{5}\,\mathrm{K}$, corresponding to the temperature somewhere inside the
envelope. To obtain a (conservative) upper-limit on the gas drag force, we
consider these parameters constant throughout the inspiral; that is, we neglect
the reaction of the envelope to the injection of energy and angular momentum
by the binary spiraling-in. With these parameters, we calculated $F_{\rm DF}$
and assumed it to be the only source of energy loss and inspiral.
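The drag law above can be sketched as follows (a simplified implementation of the linear Ostriker (1999) result; the non-linear correction and the binary effects discussed above are omitted, and the Coulomb-logarithm argument $vt/R$ is supplied by hand with a hypothetical value):

```python
from math import log, pi

G = 6.674e-11  # m^3 kg^-1 s^-2

def drag_force(m_kg, v, rho, c_s, vt_over_R):
    """Magnitude of the linear dynamical-friction drag
    F_DF = C * 4*pi*(G*M)^2 * rho / v^2 (Ostriker 1999), using the
    limiting forms of C quoted in the text for each Mach regime."""
    mach = v / c_s
    if mach < 1:
        C = mach**3 / 3        # subsonic limit, M << 1
    else:
        C = log(vt_over_R)     # supersonic limit, C -> ln(vt/R)
    return C * 4 * pi * (G * m_kg)**2 * rho / v**2

# hypothetical values: 0.3 Msun companion, v = 100 km/s, c_s = 50 km/s,
# rho = 1e-6 g/cm^3 = 1e-3 kg/m^3, ln(vt/R) argument set to 100
print(f"{drag_force(0.3 * 1.989e30, 1e5, 1e-3, 5e4, 100.0):.2e} N")
```

The quadratic dependence on $GM$ and linear dependence on $\rho$ make the drag, and hence the inspiral rate, very sensitive to the companion mass and the local envelope density.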
## Appendix C Post-common-envelope separation with more massive companion
Figure 6: Same as Fig. 2 but assuming a compact companion of $1.4\,M_{\odot}$
representing a Chandrasekhar mass WD or a neutron star.
## References
* Aasi et al. (2015) Aasi, J., Abbott, B. P., Abbott, R., et al. 2015, Classical and Quantum Gravity, 32, 074001, doi: 10.1088/0264-9381/32/7/074001
* Abbott et al. (2020a) Abbott, R., Abbott, T. D., Abraham, S., et al. 2020a, arXiv e-prints, arXiv:2010.14527. https://arxiv.org/abs/2010.14527
* Abbott et al. (2020b) —. 2020b, arXiv e-prints, arXiv:2010.14533. https://arxiv.org/abs/2010.14533
* Acernese et al. (2015) Acernese, F., Agathos, M., Agatsuma, K., et al. 2015, Classical and Quantum Gravity, 32, 024001, doi: 10.1088/0264-9381/32/2/024001
* Amaro-Seoane et al. (2017) Amaro-Seoane, P., Audley, H., Babak, S., et al. 2017, arXiv e-prints, arXiv:1702.00786. https://arxiv.org/abs/1702.00786
* Baibhav et al. (2019) Baibhav, V., Berti, E., Gerosa, D., et al. 2019, Phys. Rev. D, 100, 064060, doi: 10.1103/PhysRevD.100.064060
* Baker et al. (2019) Baker, J., Bellovary, J., Bender, P. L., et al. 2019, arXiv e-prints, arXiv:1907.06482. https://arxiv.org/abs/1907.06482
* Belczynski et al. (2016) Belczynski, K., Holz, D. E., Bulik, T., & O’Shaughnessy, R. 2016, Nature, 534, 512, doi: 10.1038/nature18322
* Blagorodnova et al. (2020) Blagorodnova, N., Karambelkar, V., Adams, S. M., et al. 2020, MNRAS, 496, 5503, doi: 10.1093/mnras/staa1872
* Breivik et al. (2020) Breivik, K., Coughlin, S., Zevin, M., et al. 2020, ApJ, 898, 71, doi: 10.3847/1538-4357/ab9d85
* Broekgaarden et al. (2021) Broekgaarden, F. S., Berger, E., Neijssel, C. J., et al. 2021, arXiv e-prints, arXiv:2103.02608. https://arxiv.org/abs/2103.02608
* Buonanno et al. (2009) Buonanno, A., Iyer, B. R., Ochsner, E., Pan, Y., & Sathyaprakash, B. S. 2009, Physical Review D, 80, doi: 10.1103/physrevd.80.084043
* Chamandy et al. (2020) Chamandy, L., Blackman, E. G., Frank, A., Carroll-Nellenback, J., & Tu, Y. 2020, MNRAS, 495, 4028, doi: 10.1093/mnras/staa1273
* Chattopadhyay et al. (2020) Chattopadhyay, D., Stevenson, S., Hurley, J. R., Bailes, M., & Broekgaarden, F. 2020, arXiv e-prints, arXiv:2011.13503. https://arxiv.org/abs/2011.13503
* Chen et al. (2020a) Chen, W.-C., Liu, D.-D., & Wang, B. 2020a, arXiv e-prints, arXiv:2008.05143. https://arxiv.org/abs/2008.05143
* Chen et al. (2020b) Chen, X., Xuan, Z.-Y., & Peng, P. 2020b, ApJ, 896, 171, doi: 10.3847/1538-4357/ab919f
* Claeys et al. (2014) Claeys, J. S. W., Pols, O. R., Izzard, R. G., Vink, J., & Verbunt, F. W. M. 2014, A&A, 563, A83, doi: 10.1051/0004-6361/201322714
* Clayton et al. (2017) Clayton, M., Podsiadlowski, P., Ivanova, N., & Justham, S. 2017, MNRAS, 470, 1788, doi: 10.1093/mnras/stx1290
* Cruz-Osorio & Rezzolla (2020) Cruz-Osorio, A., & Rezzolla, L. 2020, ApJ, 894, 147, doi: 10.3847/1538-4357/ab89aa
* Cutler (1998) Cutler, C. 1998, Phys. Rev. D, 57, 7089, doi: 10.1103/PhysRevD.57.7089
* Dan et al. (2011) Dan, M., Rosswog, S., Guillochon, J., & Ramirez-Ruiz, E. 2011, ApJ, 737, 89, doi: 10.1088/0004-637X/737/2/89
* Darwin (1879) Darwin, G. H. 1879, The Observatory, 3, 79
* De Marco et al. (2011) De Marco, O., Passy, J.-C., Moe, M., et al. 2011, MNRAS, 411, 2277, doi: 10.1111/j.1365-2966.2010.17891.x
* de Mink et al. (2013) de Mink, S. E., Langer, N., Izzard, R. G., Sana, H., & de Koter, A. 2013, ApJ, 764, 166
* Di Carlo et al. (2020) Di Carlo, U. N., Mapelli, M., Bouffanais, Y., et al. 2020, MNRAS, 497, 1043, doi: 10.1093/mnras/staa1997
* Dominik et al. (2012) Dominik, M., Belczynski, K., Fryer, C., et al. 2012, ApJ, 759, 52, doi: 10.1088/0004-637X/759/1/52
* Duchêne & Kraus (2013) Duchêne, G., & Kraus, A. 2013, ARA&A, 51, 269, doi: 10.1146/annurev-astro-081710-102602
* Evans et al. (2020) Evans, F. A., Renzo, M., & Rossi, E. M. 2020, MNRAS, 497, 5344, doi: 10.1093/mnras/staa2334
* Fedrow et al. (2017) Fedrow, J. M., Ott, C. D., Sperhake, U., et al. 2017, Phys. Rev. Lett., 119, 171103, doi: 10.1103/PhysRevLett.119.171103
* Fragos et al. (2019) Fragos, T., Andrews, J. J., Ramirez-Ruiz, E., et al. 2019, ApJ, 883, L45, doi: 10.3847/2041-8213/ab40d1
* Ginat et al. (2020) Ginat, Y. B., Glanz, H., Perets, H. B., Grishin, E., & Desjacques, V. 2020, MNRAS, 493, 4861, doi: 10.1093/mnras/staa465
* Glanz & Perets (2018) Glanz, H., & Perets, H. B. 2018, MNRAS, 478, L12, doi: 10.1093/mnrasl/sly065
* Glanz & Perets (2021) —. 2021, MNRAS, 500, 1921, doi: 10.1093/mnras/staa3242
* Glebbeek et al. (2009) Glebbeek, E., Gaburov, E., de Mink, S. E., Pols, O. R., & Portegies Zwart, S. F. 2009, A&A, 497, 255, doi: 10.1051/0004-6361/200810425
* Götberg et al. (2020) Götberg, Y., Korol, V., Lamberts, A., et al. 2020, ApJ, 904, 56, doi: 10.3847/1538-4357/abbda5
* Han et al. (2003) Han, Z., Podsiadlowski, P., Maxted, P. F. L., & Marsh, T. R. 2003, MNRAS, 341, 669, doi: 10.1046/j.1365-8711.2003.06451.x
* Hoadley et al. (2020) Hoadley, K., Martin, D. C., Metzger, B. D., et al. 2020, Nature, 587, 387, doi: 10.1038/s41586-020-2893-5
* Howitt et al. (2020) Howitt, G., Stevenson, S., Vigna-Gómez, A. r., et al. 2020, MNRAS, 492, 3229, doi: 10.1093/mnras/stz3542
* Huang et al. (2020) Huang, S.-J., Hu, Y.-M., Korol, V., et al. 2020, Phys. Rev. D, 102, 063021, doi: 10.1103/PhysRevD.102.063021
* Hunter (2007) Hunter, J. D. 2007, Computing In Science & Engineering, 9, 90
* Hurley et al. (2002) Hurley, J. R., Tout, C. A., & Pols, O. R. 2002, MNRAS, 329, 897, doi: 10.1046/j.1365-8711.2002.05038.x
* Iaconi et al. (2020) Iaconi, R., Maeda, K., Nozawa, T., De Marco, O., & Reichardt, T. 2020, MNRAS, 497, 3166, doi: 10.1093/mnras/staa2169
* Igoshev et al. (2020) Igoshev, A. P., Perets, H. B., & Michaely, E. 2020, MNRAS, 494, 1448, doi: 10.1093/mnras/staa833
* Ivanova (2018) Ivanova, N. 2018, ApJ, 858, L24, doi: 10.3847/2041-8213/aac101
* Ivanova et al. (2013a) Ivanova, N., Justham, S., Avendano Nandez, J. L., & Lombardi, J. C. 2013a, Science, 339, 433, doi: 10.1126/science.1225540
* Ivanova et al. (2020) Ivanova, N., Justham, S., & Ricker, P. 2020, Common Envelope Evolution, 2514-3433 (IOP Publishing), doi: 10.1088/2514-3433/abb6f0
* Ivanova et al. (2002) Ivanova, N., Podsiadlowski, P., & Spruit, H. 2002, MNRAS, 334, 819, doi: 10.1046/j.1365-8711.2002.05543.x
* Ivanova et al. (2013b) Ivanova, N., Justham, S., Chen, X., et al. 2013b, A&A Rev., 21, 59, doi: 10.1007/s00159-013-0059-2
* Jencson et al. (2019) Jencson, J. E., Kasliwal, M. M., Adams, S. M., et al. 2019, ApJ, 886, 40, doi: 10.3847/1538-4357/ab4a01
* Justham et al. (2009) Justham, S., Wolf, C., Podsiadlowski, P., & Han, Z. 2009, A&A, 493, 1081, doi: 10.1051/0004-6361:200810106
* Kasliwal et al. (2017) Kasliwal, M. M., Bally, J., Masci, F., et al. 2017, ApJ, 839, 88, doi: 10.3847/1538-4357/aa6978
* Kim & Kim (2007) Kim, H., & Kim, W.-T. 2007, ApJ, 665, 432, doi: 10.1086/519302
* Kim & Kim (2009) —. 2009, ApJ, 703, 1278, doi: 10.1088/0004-637X/703/2/1278
* Kochanek et al. (2014) Kochanek, C. S., Adams, S. M., & Belczynski, K. 2014, MNRAS, 443, 1319, doi: 10.1093/mnras/stu1226
* Korol et al. (2017) Korol, V., Rossi, E. M., Groot, P. J., et al. 2017, MNRAS, 470, 1894, doi: 10.1093/mnras/stx1285
* Kroupa et al. (1993) Kroupa, P., Tout, C. A., & Gilmore, G. 1993, MNRAS, 262, 545, doi: 10.1093/mnras/262.3.545
* Kruckow et al. (2018) Kruckow, M. U., Tauris, T. M., Langer, N., Kramer, M., & Izzard, R. G. 2018, MNRAS, 481, 1908, doi: 10.1093/mnras/sty2190
* Kulkarni et al. (2007) Kulkarni, S. R., Ofek, E. O., Rau, A., et al. 2007, Nature, 447, 458, doi: 10.1038/nature05822
* Kupfer et al. (2020) Kupfer, T., Bauer, E. B., Marsh, T. R., et al. 2020, ApJ, 891, 45, doi: 10.3847/1538-4357/ab72ff
* Law-Smith et al. (2020) Law-Smith, J. A. P., Everson, R. W., Ramirez-Ruiz, E., et al. 2020, arXiv e-prints, arXiv:2011.06630. https://arxiv.org/abs/2011.06630
* Littenberg et al. (2020) Littenberg, T., Cornish, N., Lackeos, K., & Robson, T. 2020, Phys. Rev. D, 101, 123021, doi: 10.1103/PhysRevD.101.123021
* Liu et al. (2021) Liu, W.-M., Jiang, L., & Chen, W.-C. 2021, arXiv e-prints, arXiv:2101.09850. https://arxiv.org/abs/2101.09850
* MacLeod & Loeb (2020) MacLeod, M., & Loeb, A. 2020, ApJ, 895, 29, doi: 10.3847/1538-4357/ab89b6
* MacLeod et al. (2017) MacLeod, M., Macias, P., Ramirez-Ruiz, E., et al. 2017, ApJ, 835, 282, doi: 10.3847/1538-4357/835/2/282
* MacLeod et al. (2018) MacLeod, M., Ostriker, E. C., & Stone, J. M. 2018, ApJ, 868, 136, doi: 10.3847/1538-4357/aae9eb
* MacLeod & Ramirez-Ruiz (2014) MacLeod, M., & Ramirez-Ruiz, E. 2014, The Astrophysical Journal, 798, L19, doi: 10.1088/2041-8205/798/1/l19
* Maggiore (2008) Maggiore, M. 2008, Gravitational Waves: Volume 1: Theory and Experiments (Oxford University Press). https://books.google.com/books?id=AqVpQgAACAAJ
* Metzger & Pejcha (2017) Metzger, B. D., & Pejcha, O. 2017, MNRAS, 471, 3200, doi: 10.1093/mnras/stx1768
* Meyer & Meyer-Hofmeister (1979) Meyer, F., & Meyer-Hofmeister, E. 1979, A&A, 78, 167
* Michaely & Perets (2019) Michaely, E., & Perets, H. B. 2019, MNRAS, 484, 4711, doi: 10.1093/mnras/stz352
* Moe & Di Stefano (2017) Moe, M., & Di Stefano, R. 2017, ApJS, 230, 15, doi: 10.3847/1538-4365/aa6fb6
* Nandez et al. (2015) Nandez, J. L. A., Ivanova, N., & Lombardi, J. C. J. 2015, MNRAS, 450, L39, doi: 10.1093/mnrasl/slv043
* Nariai & Sugimoto (1976) Nariai, K., & Sugimoto, D. 1976, PASJ, 28, 593
* Nelemans et al. (2000) Nelemans, G., Verbunt, F., Yungelson, L. R., & Portegies Zwart, S. F. 2000, A&A, 360, 1011
* Neunteufel (2020) Neunteufel, P. 2020, A&A, 641, A52, doi: 10.1051/0004-6361/202037792
* Nicholls et al. (2013) Nicholls, C. P., Melis, C., Soszynski, I., et al. 2013, MNRAS, 431, L33, doi: 10.1093/mnrasl/slt003
* Ohlmann et al. (2016) Ohlmann, S. T., Röpke, F. K., Pakmor, R., Springel, V., & Müller, E. 2016, MNRAS, 462, L121, doi: 10.1093/mnrasl/slw144
* Ostriker (1999) Ostriker, E. C. 1999, ApJ, 513, 252, doi: 10.1086/306858
* Paczyński (1976) Paczyński, B. 1976, in IAU Symp. 73: Structure and Evolution of Close Binary Systems, ed. P. Eggleton, S. Mitton, & J. Whelan, 75
* Pastorello et al. (2019) Pastorello, A., Mason, E., Taubenberger, S., et al. 2019, A&A, 630, A75, doi: 10.1051/0004-6361/201935999
* Pastorello et al. (2020) Pastorello, A., Valerin, G., Fraser, M., et al. 2020, arXiv e-prints, arXiv:2011.10590. https://arxiv.org/abs/2011.10590
* Pavlovskii et al. (2017) Pavlovskii, K., Ivanova, N., Belczynski, K., & Van, K. X. 2017, MNRAS, 465, 2092, doi: 10.1093/mnras/stw2786
* Pejcha et al. (2017) Pejcha, O., Metzger, B. D., Tyles, J. G., & Tomida, K. 2017, ApJ, 850, 59, doi: 10.3847/1538-4357/aa95b9
* Pérez & Granger (2007) Pérez, F., & Granger, B. E. 2007, Computing in Science & Engineering, 9, 21
* Podsiadlowski (2001) Podsiadlowski, P. 2001, Astronomical Society of the Pacific Conference Series, Vol. 229, Common-Envelope Evolution and Stellar Mergers, 239
* Podsiadlowski et al. (2002) Podsiadlowski, P., Rappaport, S., & Pfahl, E. D. 2002, ApJ, 565, 1107, doi: 10.1086/324686
* Pols et al. (1998) Pols, O. R., Schröder, K.-P., Hurley, J. R., Tout, C., & Eggleton, P. 1998, MNRAS, 298, 525, doi: 10.1046/j.1365-8711.1998.01658.x
* Renzo et al. (2020) Renzo, M., Cantiello, M., Metzger, B. D., & Jiang, Y. F. 2020, ApJ, 904, L13, doi: 10.3847/2041-8213/abc6a6
# CAMBI: Contrast-aware Multiscale Banding Index
Pulkit Tandon1§, Mariana Afonso2, Joel Sole2 and Lukáš Krasula2 1Department
of Electrical Engineering, Stanford University, CA, USA, 94305.
<EMAIL_ADDRESS>2Netflix Inc., Los Gatos, CA, USA, 95032. {mafonso,
jsole<EMAIL_ADDRESS>
###### Abstract
Banding artifacts are artificially-introduced contours arising from the
quantization of a smooth region in a video. Despite the advent of recent
higher quality video systems with more efficient codecs, these artifacts
remain conspicuous, especially on larger displays. In this work, a
comprehensive subjective study is performed to understand the dependence of
the banding visibility on encoding parameters and dithering. We subsequently
develop a simple and intuitive no-reference banding index called CAMBI
(Contrast-aware Multiscale Banding Index) which uses insights from Contrast
Sensitivity Function in the Human Visual System to predict banding visibility.
CAMBI correlates well with subjective perception of banding while using only a
few visually-motivated hyperparameters.
§§footnotetext: Work done during an internship at Netflix.
## I Introduction
Banding artifacts are staircase-like contours introduced during the
quantization of spatially smoothly-varying signals, and exacerbated by video
encoding. These artifacts are visible in large, smooth regions with small
gradients, and are present in content containing sky, ocean, dark scenes,
sunrise, animations, etc. Banding detection is essentially a problem
of detecting artificially introduced contrast in a video. Even with high
resolution and bit-depth content being viewed on high-definition screens,
banding artifacts are prominent and tackling them becomes even more important
for viewing experience.
Figure 1 shows an example frame from the Night on Earth series on Netflix, encoded
using a modern video codec, AV1 [1], and the libaom encoder. Bands are clearly
visible in the sky due to the intensity ramp present between the sun and its
periphery. Traditional video quality metrics such as PSNR, SSIM [2] or VMAF
[3] are not designed to identify banding and are hence not able to account for
this type of artifact [4], [5], as we will also show in Section III-C. These
artifacts are most salient in a medium bitrate regime where the video is
neither highly compressed and thus exacerbated by other artifacts, nor
provided with large number of bits to faithfully represent the intensity ramp.
Having a banding detection mechanism that is robust to multiple encoding
parameters can help identify the onset of banding in the videos and serve as a
first step towards its mitigation.
Related Work. Although banding detection has been studied in the literature,
no single metric or index is widely employed. Previous works on banding
detection have focused on either false segment or false edge detection. For
false segment detection, past methods have utilized segmentation approaches,
such as pixel [4], [6], [7] or block-based segmentation [8], [9]. For false
edge detection, methods have utilized various local statistics such as
gradients, contrast and entropy [10], [11], [12]. Both of these approaches,
however, suffer from the hard problem of distinguishing between true and false
segments/edges. Typically, this issue is addressed by employing multiple hand-
designed criteria derived from observing a limited dataset. Moreover, most of
these methods do not consider the possibility of dithering in the encoded
video, which can be introduced by common tools such as ffmpeg [13] during bit-
depth reduction and can significantly affect the banding visibility. One
recent no-reference banding detection method has outperformed previous work by
using heuristics motivated by various properties of the Human Visual System,
along with a number of pre-processing steps to improve banding edge detection
[5]. This algorithm also contains a large number of hyperparameters trained
and tested over a limited dataset [4].
Figure 1: Banding Motivation. Example from Night on Earth series on Netflix
(4k, 10-bit). Red box shows a zoomed-in luma segment with prominent bands.
In this work, we study the dependence of banding artifacts on various
properties of encoded videos, viz. quantization parameter, encoding resolution,
and incidence of dithering. We present a simple, intuitive, no-reference,
distortion-specific index called Contrast-aware Multiscale Banding Index
(CAMBI), motivated by the algorithm presented in Ref. [6]. CAMBI directly
tackles the problem of contrast detection by utilizing properties of Contrast
Sensitivity Function (CSF) [14], instead of framing banding detection as a
false segment/edge detection. In addition, CAMBI contains only a few
hyperparameters, most of which are visually-motivated. Results from the
experiments conducted show that CAMBI has a strong linear correlation with
subjective scores.
## II Banding Detection Algorithm
Figure 2: Block diagram of the proposed algorithm.

Figure 3: Effect of Contrast Sensitivity Function (CSF) on banding visibility. (a) CSF and its dependence on spatial frequency [14]. (b) and (c) Toy example showing banding visibility with smoothly varying intensity quantized at increasing contrast step (purple arrow) and spatial frequency (orange arrow).
We describe here the developed banding detection algorithm: CAMBI. A block
diagram describing all the steps involved in CAMBI is shown in Figure 2. CAMBI
operates as a no-reference banding detector. It takes a video as an input and
produces a banding visibility score. CAMBI extracts multiple pixel-level maps
at multiple scales, for temporally sub-sampled frames of the encoded video,
and subsequently combines these maps into a single index motivated by human
CSF [14]. Steps involved in CAMBI are described next.
### II-A Pre-processing
Each input frame is taken through several pre-processing steps. Firstly, the
luma component is extracted. Although chromatic banding has been shown to
exist, like most past works we assume that the majority of the banding can be
captured in the luma channel [15], [16].
Next, dithering present in the frame is accounted for. Dithering is
intentionally applied noise used to randomize quantization error, and has been
shown to significantly affect banding visibility [6], [10]. Presence of
dithering makes banding detection harder, as otherwise clean contours might
have noisy jumps in quantized steps, leading to unclean edges or segments
detection. Thus to account for both dithered and non-dithered regions in a
frame, we use a $2\times 2$ averaging low-pass filter (LPF) to smoothen the
intensity values, in an attempt to replicate the low-pass filtering done by
the human visual system.
Low-pass filtering is done after converting the frame to a bit-depth of 10
(encodes studied in this work are 8-bit, but obtained from a 10-bit source as
described in Section III-B). This ensures that the obtained pixel values are
in steps of one in 10-bit scale after application of LPF. Finally, we assume
that the display is 4k (see Section III-B), and hence irrespective of the
encode resolution the frame is upscaled to 4k. Further steps in the algorithm
are agnostic to the encode properties studied in this work, viz. resolution,
quantization parameter, and incidence of dithering. Though we assume 10-bit
sources and 4k displays in this work, CAMBI can be extended to encodes from
sources at arbitrary bit-depths and display resolutions by modifying the bit-
depth conversion and spatial upscaling steps appropriately.
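As a concrete illustration, the pre-processing chain above can be sketched in a few lines of NumPy. This is a simplified sketch, not the reference implementation: the luma channel is assumed already extracted, the paper does not specify the stride of the $2\times 2$ filter (non-overlapping block averages are assumed here), and the 4k upscaling step is omitted.

```python
import numpy as np

def preprocess_luma(luma_8bit: np.ndarray) -> np.ndarray:
    """Sketch of CAMBI-style pre-processing for one frame.

    luma_8bit: 2-D array of 8-bit luma values (H x W).
    Returns the 10-bit, low-pass-filtered luma.
    """
    # 1. Convert 8-bit values to a 10-bit scale (multiply by 4),
    #    so a quantization step of 1 at 8 bits becomes a step of 4.
    luma_10bit = luma_8bit.astype(np.int32) * 4

    # 2. 2x2 averaging low-pass filter to account for dithering.
    #    Averages of four multiples of 4 land on steps of one in the
    #    10-bit scale, as noted in the text. Implemented here as
    #    non-overlapping 2x2 block means (stride is an assumption).
    h, w = luma_10bit.shape
    h2, w2 = h - h % 2, w - w % 2
    blocks = luma_10bit[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2)
    return blocks.mean(axis=(1, 3))
```

A uniform frame passes through unchanged (up to the 10-bit rescaling), which is a quick sanity check that the filter does not introduce contrast of its own.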
### II-B Multiscale Banding Confidence
Figure 4: Exemplary CAMBI maps. Frames are from example shown in Figure 1. A
warmer color represents higher banding confidence $c(k,s=65\times 65)$.
As mentioned in Section I, we consider banding detection as a contrast-
detection problem, and hence banding visibility is majorly governed by the
CSF. CSF itself largely depends on the perceived contrast across the step and
spatial-frequency of the steps, as illustrated in Figure 3. CAMBI explicitly
tries to account for the contrast across pixels by looking at the differences
in pixel intensity and does this at multiple scales to account for spatial
frequency.
CAMBI generalizes the approach used in [6], which computes a pixel-wise
banding confidence $c(s)$ at a scale $s$ as
$c(s)=p(0,s)\times\max\left[\frac{p(-1,s)}{p(0,s)+p(-1,s)},\frac{p(1,s)}{p(0,s)+p(1,s)}\right]$
(1)
where $p(k,s)$ is given by
$p(k,s)=\frac{\sum\limits_{\{(x^{\prime},y^{\prime})\in N_{s}(x,y)\,\mid\,\left\lVert\nabla(x^{\prime},y^{\prime})\right\rVert<\tau_{g}\}}\delta(I(x^{\prime},y^{\prime}),I(x,y)+k)}{\sum\limits_{\{(x^{\prime},y^{\prime})\in N_{s}(x,y)\,\mid\,\left\lVert\nabla(x^{\prime},y^{\prime})\right\rVert<\tau_{g}\}}1}$
(2)
In Eq. 2, $(x,y)$ refers to a particular pixel, and $I(x,y)$, $N_{s}(x,y)$ and
$\left\lVert\nabla(x,y)\right\rVert$ correspond to the intensity, neighborhood
of a scale $s$ and gradient magnitude at this particular pixel, respectively.
$\delta(.,.)$ is an indicator function. Thus, $p(k,s)$ corresponds to the
fraction of pixels (in a neighborhood around pixel $(x,y)$) with an intensity
difference of $k$, amongst the set of pixels with gradient magnitude smaller
than $\tau_{g}$. The hyperparameter $\tau_{g}$ ensures the avoidance of textures
during banding detection [6]. Therefore, Eq. 1 calculates a banding confidence
$c(s)$ which explicitly tries to find if there is an intensity step of $\pm 1$
in a pixel’s non-texture neighborhood at scale $s$. $p(0,s)$ ensures that at
the scale $s$, the pixel around which banding is being detected belongs to a
visually-significant contour.
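A direct, unoptimized transcription of Eqs. 1–2 for a single pixel might look as follows. The neighborhood patch, per-pixel gradient magnitudes, and center intensity are assumed precomputed; this is a sketch of the definitions, not the efficient formulation used in practice.

```python
import numpy as np

def p_of_k(patch, grad, center_val, k, tau_g=2.0):
    """Eq. 2: fraction of low-gradient pixels in `patch` whose
    intensity equals center_val + k."""
    mask = grad < tau_g          # keep only non-texture pixels
    n = mask.sum()
    if n == 0:
        return 0.0
    return float(np.sum((patch == center_val + k) & mask)) / n

def banding_confidence(patch, grad, center_val, k=1, tau_g=2.0):
    """Eq. 1 (and Eq. 3 for general k): per-pixel banding confidence."""
    p0 = p_of_k(patch, grad, center_val, 0, tau_g)
    pm = p_of_k(patch, grad, center_val, -k, tau_g)
    pp = p_of_k(patch, grad, center_val, +k, tau_g)
    terms = [pk / (p0 + pk) if p0 + pk > 0 else 0.0 for pk in (pm, pp)]
    return p0 * max(terms)
```

For a flat neighborhood with a single step of $+1$ on one side, $p(-1,s)$ vanishes and the confidence reduces to $p(0,s)\,p(1,s)/(p(0,s)+p(1,s))$, matching the intent of Eq. 1.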
In CAMBI, the above approach is modified to explicitly account for multiple
contrast steps and different spatial-frequencies, thus accounting for CSF-
based banding visibility. This is done by calculating pixel-wise banding
confidence $c(k,s)$ per frame at various different contrasts ($k$) and scales
($s$), each referred to as a CAMBI map for the frame. A total of twenty CAMBI
maps are obtained per-frame capturing banding across $4$ contrast-steps and
$5$ spatial-frequencies.
For calculating CAMBI maps, Eq. 1 is modified as follows:
$c(k,s)=p(0,s)\max\left[\frac{p(-k,s)}{p(0,s)+p(-k,s)},\frac{p(k,s)}{p(0,s)+p(k,s)}\right]$
(3)
where $k\in\{1,2,3,4\}$. Intensity differences of up to $\pm 4$ are
considered because of the conversion from 8-bit to 10-bit. If the pixel
belongs to a dithered region, it would have neighbouring pixels with an
intensity difference of $<4$ because of the applied anti-dithering filter. On
the other hand, if a banding edge exists without any dithering in the frame,
this would lead to an intensity difference of $\pm 4$ at a bit-depth of 10, as
a false contour appearing due to quantization has pixels differing by $1$ on
either side of the contour at a bit-depth of 8. This leads to four CAMBI maps
per frame, at each scale. Figure 4a shows the CAMBI maps obtained at different
contrasts for the example shown in Figure 1, for both dithered and
non-dithered frames (see Section III-A). Warmer colors represent higher
$c(k,s)$ values, and the highlighted boxes clearly show that in the undithered
frame banding largely occurs at a contrast step of $k=4$, whereas for a frame
containing dithering the banding confidence appears largely at lower contrast
steps.
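The bit-depth argument above can be checked numerically. In this toy example (values chosen arbitrarily), a clean 8-bit contour produces 10-bit steps of exactly 4, while dithered boundary pixels passed through an averaging filter (a 1-D stand-in for the $2\times 2$ LPF) produce smaller steps.

```python
import numpy as np

# A clean 8-bit band edge: neighbouring quantization levels differ by 1.
row_clean = np.array([100, 100, 101, 101], dtype=np.int32)
steps_clean = np.diff(row_clean * 4)   # 10-bit steps are exactly 4

# A dithered edge: boundary pixels alternate between the two levels.
row_dith = np.array([100, 100, 101, 100, 101, 101], dtype=np.int32) * 4
avg = (row_dith[:-1] + row_dith[1:]) / 2   # averaging low-pass filter
steps_dith = np.diff(avg)                  # steps shrink below 4
```

Here `steps_clean` contains a step of 4 at the contour, while `steps_dith` contains only steps of 2, which is why dithered frames light up the lower-contrast CAMBI maps.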
To account for the banding visibility dependence on spatial frequency of
bands, we modify the multiscale approach used by Ref. [6] to reduce the
computational complexity. First, we fix the window-size ($s$) and then find
$c(k,s)$ for frames after a mode-based downsampling is applied in powers of
two from the initial resolution of 4k. In total 5 scales are considered: 4k,
1080p, 540p, 270p and 135p. This leads to five CAMBI maps per frame at each
contrast. Furthermore, a window-size ($s$) of $65\times 65$ (centered at
pixel) is chosen in this study which corresponds to $\sim 1^{\circ}$ visual
angle at 4k resolution based on subjective test design as described in Section
III-B. Thus, our multiscale approach calculates banding visibility at spatial-
frequencies corresponding to visual degrees ($v^{\circ}$) of
$\sim${$1^{\circ}$, $2^{\circ}$, $4^{\circ}$, $8^{\circ}$, $16^{\circ}$}.
Figure 4b shows CAMBI maps obtained at these five different scales at a
contrast step of $4$, for a frame without dithering as shown in Figure 4a top
panel. Figure 4b clearly shows that CAMBI is able to identify bands at various
spatial-frequencies (e.g. high-frequency bands near sun at 4k and low-
frequency bands away from the sun at 135p).
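A minimal sketch of one step of the mode-based downsampling is given below; the paper does not specify how ties between equally frequent values are broken, so ties here resolve to the smallest value.

```python
import numpy as np

def mode_downsample_2x(frame: np.ndarray) -> np.ndarray:
    """Downsample a 2-D integer frame by 2 in each dimension,
    taking the mode (most frequent value) of each 2x2 block.
    Tie-breaking keeps the smallest value (an assumption)."""
    h, w = frame.shape
    blocks = frame[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(h // 2, w // 2, 4)
    out = np.empty((h // 2, w // 2), dtype=frame.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            vals, counts = np.unique(blocks[i, j], return_counts=True)
            out[i, j] = vals[np.argmax(counts)]  # first max -> smallest value
    return out
```

Applying this repeatedly yields the 4k, 1080p, 540p, 270p, and 135p levels while preserving the discrete intensity levels that the contrast test relies on (unlike averaging, the mode never creates new intensity values).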
### II-C Spatio-Temporal Pooling
Finally, CAMBI maps obtained per frame are spatio-temporally pooled to obtain
the final banding index. Spatial pooling of CAMBI maps is done based on the
observation that above described CAMBI maps belong to the initial linear phase
of the CSF (Figure 3, red box). Since perceived quality of video is dominated
by regions with poorest perceived quality, only the worst $\kappa_{p}(p=30\%)$
of the pixels are considered during spatial pooling [5]. Though this improves
the correlation results (Section III-C), using $\kappa_{p}(p=100\%)$ also leads to
competitive correlation numbers (not shown).
$\mbox{CAMBI}_{f}=\frac{\sum\limits_{(x,y)\in\kappa_{p}}\;\sum\limits_{k=1}^{4}\;\sum\limits_{v^{\circ}\in\{1,2,4,8,16\}}c(k,s)\times k\times\log_{2}\left(\frac{16}{v^{\circ}}\right)}{\sum\limits_{(x,y)\in\kappa_{p}}1}$
(4)
where $1/v^{\circ}$ represents spatial-frequency at which banding is detected
(described in Section II-B).
Finally, CAMBI is applied to a frame every $\tau_{s}=0.5$s and averaged,
resulting in final CAMBI scores for the video. The value of $\tau_{s}$ was
chosen based on temporal frequency dependence of CSF [17] as well as for
implementation efficiency. According to our experiments, CAMBI was observed to
be temporally stable within a single shot of a video but simple temporal
pooling may fail if applied to a video with multiple shots. More sophisticated
methods are planned for future work.
$\mbox{CAMBI}=\left.\sum\limits_{f\in\tau_{s}}\mbox{CAMBI}_{f}\middle/\sum\limits_{f\in\tau_{s}}1\right.$
(5)
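The spatial pooling of Eq. 4 can be sketched as follows, assuming the twenty per-frame CAMBI maps have been brought to a common resolution (a simplification of this sketch) and are keyed by contrast step $k$ and visual-degree scale $v^{\circ}$.

```python
import numpy as np

def pool_cambi_maps(maps: dict, p: float = 0.30) -> float:
    """Spatial pooling sketch for one frame (Eq. 4).

    maps: {(k, v): 2-D array} of banding-confidence maps, with
          k in {1,...,4} the contrast step and v in {1,2,4,8,16}
          the visual-degree scale.
    """
    # Per-pixel weighted sum over contrast steps and scales.
    score = np.zeros_like(next(iter(maps.values())), dtype=float)
    for (k, v), c in maps.items():
        score += c * k * np.log2(16.0 / v)
    # Keep only the worst p fraction of pixels, then average.
    flat = np.sort(score.ravel())[::-1]
    n = max(1, int(round(p * flat.size)))
    return float(flat[:n].mean())
```

Note that the $\log_{2}(16/v^{\circ})$ weighting of Eq. 4 gives zero weight to the coarsest ($16^{\circ}$) scale, so the pooled score is dominated by the finer scales.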
TABLE I: Hyperparameters used in CAMBI.

| Hyperparameter | Value |
|---|---|
| low-pass filter (LPF) | $2\times 2$ avg filter |
| window-size ($s$ in $N_{s}$) | $65\times 65$ |
| gradient threshold ($\tau_{g}$) | $2$ |
| spatial pooling ($\kappa_{p}$) | $30\%$ |
| temporal sub-sampling ($\tau_{s}$) | $0.5$s |
Hyperparameters used in CAMBI are summarized in Table I and validation results
are shown in Section III-C.
## III Performance Evaluation
### III-A Banding Dataset
A banding dataset was created for this study based on the existing Netflix
catalogue. Nine 4k-10bit source clips with durations between 1 and 5 seconds
were utilized. Of these, eight clips had various levels of banding and one had
no banding. Nine different encodes were created for each of these sources by
using the following steps: 1. downsampling source to appropriate resolution
(1080p, quad-HD or 4k) and bit-depth (8-bit) using ffmpeg, 2. encoding the
downsampled content at three different QPs (12, 20, 32) using libaom.
Ordered dithering is introduced in the frames during downsampling by ffmpeg
and is selectively pruned during encoding (depending on QP and resolution).
Thus, we also added a tenth encode per source where dithering is not
introduced to explicitly validate whether CAMBI can track banding visibility
across dithering. This encode was done at maximum quality (4k resolution, 12
QP) to juxtapose the banding visibility in absence of dithering against other
encoding parameters.
### III-B Subjective Study
The subjective evaluation was performed on the above described dataset by
asking viewers familiar with banding artifacts to rate the worst-case
annoyance caused by the banding across all video frames on a modified DCR
scale from 0 (unwatchable) to 100 (imperceptible) [18]. For each viewer, six
encodes of an additional (not from banding dataset) source content with
expected score ranging from $0$ to $100$ (in steps of $20$) were firstly
shown, along-with the expected scores, in a training session. Following this,
a single-stimulus test with randomized exhaustive clip-order from the banding
dataset was performed remotely (future reader, note that we are in the middle
of a pandemic). Each viewer watched the test content on a 15 inch Apple
Macbook Pro. All videos were played in a loop until terminated by the viewer.
In addition, viewers were asked to maintain a distance of $\sim
1.5\times$screen height from the screen throughout the experiment. All the
encodes presented were 8-bit, upsampled to 4k and cropped to $2880\times 1800$
pixels for uniformity across subject’s display resolutions. Although no
attempt was made to control ambient lighting, we asked the viewers to adjust
the display brightness to around 80% of the maximum. A detailed study
including ambient lighting dependence is planned for future work.
A total of $86$ encodes were evaluated in this study (with four 4k sequences
removed because of non-real time decoding of highest quality AV1 encode by
browser). All chosen sequences had qualities of VMAF $>80$ and PSNR $>40$ dBs,
highlighting the problem of banding prevalence even in highly-rated videos
using traditional metrics. To the best of our knowledge, this subjective study
is the first to account for dependence of banding on resolution and presence
of dithering. In total $23$ subjects participated in this study. Figure 5
shows that the banding scores obtained had a thorough coverage of the DCR
scale as well as a $95\%$ Student’s t-confidence interval of $<$$10$.
Figure 5: Subjective test properties (86 encodes, 23 viewers). (a) Designed test had a thorough coverage of the scale, and (b) mean opinion scores obtained had a $95\%$ Student’s t-confidence interval of $<10$.

Figure 6: Subjective study results. (left panel) CAMBI is linearly correlated with mean opinion scores (MOS) obtained through subjective study, (middle, right panels) whereas VMAF and PSNR are uncorrelated with the MOS.
### III-C Results
#### III-C1 CAMBI is linearly correlated with subjective scores
The Mean Opinion Scores (MOS) obtained from the subjective study were compared
with the output from CAMBI and two objective quality metrics, VMAF and PSNR.
Results are shown in Figure 6. We can see that CAMBI provides high negative
correlation with MOS while VMAF and PSNR have very little correlation. A
number of correlation coefficients, namely Spearman Rank Order Correlation
(SROCC), Kendall-tau Rank Order Correlation (KROCC), Pearson’s Linear
Correlation (PLCC) and $(|$KROCC$|+1)/2$ over statistically-significant pairs
(C0) [19] are reported in Table II. From a total of $3655$ comparisons
possible amongst MOS of $86$ videos, $2895$ pairs had a difference in MOS
which was statistically significant and $95\%$ of these orderings were
correctly identified by CAMBI. The individual scores reported are the mean
$\pm$ standard deviation (maximum) correlation coefficients obtained when an
individual viewer’s subjective scores are compared against the MOS, and they
suggest that CAMBI performs equivalently to an individual viewer who is
sensitive to banding. These
results suggest that CAMBI is able to accurately estimate banding visibility
across a number of variables with high linear-dependence (without any
additional fitting).
TABLE II: Performance comparison of metrics against subjective scores.

| | CAMBI $\downarrow$ | VMAF $\uparrow$ | PSNR $\uparrow$ | Individual |
|---|---|---|---|---|
| SROCC | -0.923 | 0.088 | -0.202 | 0.844 $\pm$ 0.108 (0.953) |
| KROCC | -0.765 | 0.099 | -0.124 | 0.678 $\pm$ 0.108 (0.821) |
| PLCC | -0.929 | 0.000 | -0.271 | 0.853 $\pm$ 0.097 (0.957) |
| C0 | 0.950 | 0.545 | 0.409 | — |
#### III-C2 CAMBI is unbiased over range of video qualities
CAMBI was also validated on an independent dataset without visible banding
artifacts. This dataset contains 84 HEVC encodes from seven 4k 10-bit sources
with a range of VMAF scores [20]. Figure 7 shows CAMBI against VMAF for both
the datasets. Though CAMBI is designed for worst-case banding visibility and
verified using subjective scores based on worst-case annoyance caused by
banding, this false-positive analysis seems to indicate that CAMBI does not
over-predict banding scores. Figure 7 also provides an interpretation for the
range of CAMBI scores, where CAMBI $<5$ would suggest no visible banding
artifacts are present.
Figure 7: Checking for False Positives. CAMBI when applied to another dataset
with no banding [20] doesn’t over-predict banding scores. Inset shows a
piecewise linear fit between MOS and CAMBI.
## IV Conclusion
In this work, we present a simple and intuitive, no-reference, distortion-
specific banding index called CAMBI. CAMBI is able to estimate banding
visibility across multiple encoding parameters by employing visually-motivated
computational motifs. We conducted a comprehensive subjective study to
validate CAMBI and showed that it has a high correlation and a near-linear
relationship with mean opinion scores. In addition, the small number of
hyperparameters and false-positive analysis suggest a good generalizability of
this index. In the future, we plan to validate and improve CAMBI on a larger
subjective dataset using videos with varied bit-depths and encoded using
different video codecs. CAMBI can also be used in conjunction with or
integrated as an additional feature in future versions of VMAF, and to aid the
development of debanding algorithms.
## Acknowledgment
The authors would like to thank Zhi Li, Christos Bampis and codec team at
Netflix for feedback on this work and all the observers who participated in
the subjective test.
## References
* [1] Y. Chen, D. Murherjee, J. Han, A. Grange, Y. Xu, Z. Liu, S. Parker, C. Chen, H. Su, U. Joshi _et al._ , “An overview of core coding tools in the av1 video codec,” in _2018 Picture Coding Symposium_. IEEE, 2018.
* [2] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” _IEEE transactions on image processing_ , vol. 13, no. 4, pp. 600–612, 2004.
* [3] Z. Li, A. Aaron, I. Katsavounidis, A. Moorthy, and M. Manohara, “Toward a practical perceptual video quality metric,” _The Netflix Tech Blog_ , vol. 6, p. 2, 2016.
* [4] Y. Wang, S.-U. Kum, C. Chen, and A. Kokaram, “A perceptual visibility metric for banding artifacts,” in _2016 IEEE International Conference on Image Processing (ICIP)_. IEEE, 2016, pp. 2067–2071.
* [5] Z. Tu, J. Lin, Y. Wang, B. Adsumilli, and A. C. Bovik, “Bband index: a no-reference banding artifact predictor,” in _ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2020, pp. 2712–2716.
* [6] S. Bhagavathy, J. Llach, and J. Zhai, “Multiscale probabilistic dithering for suppressing contour artifacts in digital images,” _IEEE Transactions on Image Processing_ , vol. 18, no. 9, pp. 1936–1945, 2009.
* [7] G. Baugh, A. Kokaram, and F. Pitié, “Advanced video debanding,” in _Proceedings of the 11th European Conference on Visual Media Production_ , 2014, pp. 1–10.
* [8] X. Jin, S. Goto, and K. N. Ngan, “Composite model-based dc dithering for suppressing contour artifacts in decompressed video,” _IEEE Transactions on Image Processing_ , vol. 20, no. 8, pp. 2110–2121, 2011.
* [9] Y. Wang, C. Abhayaratne, R. Weerakkody, and M. Mrak, “Multi-scale dithering for contouring artefacts removal in compressed uhd video sequences,” in _2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP)_. IEEE, 2014, pp. 1014–1018.
* [10] S. J. Daly and X. Feng, “Decontouring: Prevention and removal of false contour artifacts,” in _Human Vision and Electronic Imaging IX_ , vol. 5292\. International Society for Optics and Photonics, 2004.
* [11] J. W. Lee, B. R. Lim, R.-H. Park, J.-S. Kim, and W. Ahn, “Two-stage false contour detection using directional contrast and its application to adaptive false contour reduction,” _IEEE Transactions on Consumer Electronics_ , vol. 52, no. 1, pp. 179–188, 2006.
* [12] Q. Huang, H. Y. Kim, W.-J. Tsai, S. Y. Jeong, J. S. Choi, and C.-C. J. Kuo, “Understanding and removal of false contour in hevc compressed images,” _IEEE Transactions on Circuits and Systems for Video Technology_ , vol. 28, no. 2, pp. 378–391, 2016.
* [13] S. Tomar, “Converting video formats with ffmpeg,” _Linux Journal_ , vol. 2006, no. 146, p. 10, 2006.
* [14] K. Seshadrinathan, T. N. Pappas, R. J. Safranek, J. Chen, Z. Wang, H. R. Sheikh, and A. C. Bovik, “Chapter 21 - image quality assessment,” in _The Essential Guide to Image Processing_ , A. Bovik, Ed. Boston: Academic Press, 2009, pp. 553 – 595. [Online]. Available: http://www.sciencedirect.com/science/article/pii/B9780123744579000214
* [15] I. Yoo, X. Luo, Y. Wang, F. Yang, and P. Milanfar, “Gifnets: Differentiable gif encoding framework,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020.
* [16] G. Denes, G. Ash, H. Fang, and R. K. Mantiuk, “A visual model for predicting chromatic banding artifacts,” _Electronic Imaging_ , vol. 2019, no. 12, pp. 212–1, 2019.
* [17] G. Monaci, G. Menegaz, S. Susstrunk, and K. Knoblauch, “Color contrast detection in spatial chromatic noise,” blah, Tech. Rep., 2002.
* [18] Recommendation, “Itu-tp. 913,” _ITU_ , 2016.
* [19] L. Krasula, K. Fliegel, P. Le Callet, and M. Klíma, “On the accuracy of objective image and video quality models: New methodology for performance evaluation,” in _2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)_. IEEE, 2016, pp. 1–6.
* [20] Netflix, _Test Conditions_ , accessed 2020. [Online]. Available: http://download.opencontent.netflix.com/?prefix=Netflix_test_conditions/
# Long-Range Seasonal Forecasting of 2m-Temperature with Machine Learning
Etienne E. Vos
IBM Research
South Africa
etienne.vos@ibm.com
Ashley Gritzman
IBM Research
South Africa
Sibusisiwe Makhanya
IBM Research
South Africa
Thabang Mashinini
IBM Research
South Africa
Campbell Watson
IBM Research
USA
###### Abstract
A significant challenge in seasonal climate prediction is whether a prediction
can beat climatology. We hereby present results from two data-driven models –
a convolutional (CNN) and a recurrent (RNN) neural network – that predict 2 m
temperature out to 52 weeks for six geographically-diverse locations. The
motivation for testing the two classes of ML models is to allow the CNN to
leverage information related to teleconnections and the RNN to leverage long-
term historical temporal signals. The ML models boast improved accuracy of
long-range temperature forecasts up to a lead time of 30 weeks for PCC and up
to 52 weeks for RMSESS, though only for select locations. Further iteration is
required to ensure the ML models have value beyond regions where the
climatology has a noticeably reduced correlation skill, namely the tropics.
## 1 Introduction
Climate change as a result of global warming is a pressing international
problem. Concerns are mounting over the significant changes in the variability
and extremes of weather, with an increasing possibility of catastrophes from
the activation of tipping points in the earth’s climate system [1, 2]. There
is therefore an increased interest in accurate long-range seasonal forecasts
of key climate variables such as surface temperature and precipitation given
their relevance to developing strategies that mitigate anticipated seasonal
impacts on various sectors, including disaster risk reduction and prevention
[3].
Numerical climate models (NCMs) have a long history of being used to produce
accurate weather and climate predictions, albeit at the cost of running large
and expensive physics-based simulations (e.g. [4, 5]). The focus of this work
is to investigate how convolutional (CNN) and recurrent (RNN) neural networks
can be applied to seasonal forecasting of 2m temperature in lieu of NCMs, and
if they are capable of improving upon a generally accepted benchmark that is
the $30$-year climatology.
Previous works [6, 7, 8, 9] have shown that these data-driven approaches can
perform adequately with respect to physics-based simulations and, in certain
cases, surpass them to some extent. For example, [9] developed a CNN model
with consistently superior all-season correlation skill ($>0.5$) when compared
to a state of the art dynamical model (SINTEX-F) [5] for predicting the
Nino3.4 index for lead times of up to 17 months.
Figure 1: Left: A global map of the PCC calculated between the ERA5 reanalysis
data and the 30-year standard climatology for 2m-temperature. Right: Similar
to the left panel, but for MAE. Darker red regions indicate lower PCC or
higher MAE values.
## 2 Methods
For this work, the ERA5 reanalysis data [10] is used for training (1979 –
2007), validation (2008 – 2011) and testing (2012 – 2020) of ML models. Data
is pre-processed by regridding global fields of variables from a native
spatial resolution of $0.25^{\circ}\times 0.25^{\circ}$ to $3^{\circ}\times
3^{\circ}$, as well as aggregating over time from hourly intervals to weekly.
The predictor variables used here are $150$mb geopotential ($150$gp), $500$mb
geopotential ($500$gp), and $2$m temperature (t2m) fields, the latter of which
is also the predictand.
Training and inference for both the CNN- and LSTM-type models were set up in a
similar manner: given a series of inputs
$\textbf{s}_{in}=\{\textbf{x}_{-h_{in}+1}^{k},...,\textbf{x}_{0}^{k}\}$ that
spans an input-horizon of $h_{in}$ time steps, with
$k\in\{\text{t}2\text{m},150\text{gp},500\text{gp}\}$ and $\textbf{x}$ the global field
of variable $k$ at a given time step, the task of the models is to produce
predictions $y_{out}$ that estimate the ground-truth values
${x}_{h_{f}}^{\text{t}2\text{m}}$, which is the 2m temperature for a given
target location at a lead time of $h_{f}$ (forecast-horizon) steps ahead of
the latest input time step. This is done by minimizing the mean squared error
(MSE) loss between $y_{out}$ and ${x}_{h_{f}}^{\text{t}2\text{m}}$ via
gradient descent. The final results are multiple sets of time series
predictions of the test data from 2013 up to 2020, each of which is
essentially a rolling time-series forecast with a constant $h_{f}$ lead time,
where $h_{f}\in[1:52]$. The year 2012 is reserved as a buffer-year for the
input horizon.
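The rolling-forecast setup described above can be sketched for a single variable at a single location; `make_pairs` is a hypothetical helper name, and the real pipeline uses multiple predictor variables (and, for the CNN, global fields).

```python
import numpy as np

def make_pairs(series: np.ndarray, h_in: int, h_f: int):
    """Build (input-window, target) pairs for a fixed lead time h_f.

    series: 1-D array of weekly values for one variable/location.
    Each sample uses h_in consecutive steps as input and the value
    h_f steps after the last input step as the target."""
    X, y = [], []
    for t in range(h_in, len(series) - h_f + 1):
        X.append(series[t - h_in:t])   # input steps t-h_in .. t-1
        y.append(series[t - 1 + h_f])  # target at lead time h_f
    return np.array(X), np.array(y)
```

Training one model per lead time then amounts to calling this with each $h_f\in[1:52]$, which reproduces the rolling time-series forecast with a constant lead time described above.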
Predictions are made for single target locations, so a separate model had to
be trained for each location. The following locations were chosen at low
and mid/high latitudes across the globe to effectively illustrate the
capabilities and limitations of the CNN and LSTM models:
* Low latitudes: Honolulu, USA ($21.3^{\circ}$N, $157.9^{\circ}$W); Panama City ($9.0^{\circ}$N, $79.5^{\circ}$W); Singapore ($1.4^{\circ}$N, $103.8^{\circ}$E); Middle of the Pacific Ocean ($4.4^{\circ}$N, $167.7^{\circ}$W)
* Mid/High latitudes: Moscow, Russia ($55.8^{\circ}$N, $37.6^{\circ}$E); London, UK ($51.5^{\circ}$N, $0.1^{\circ}$W); Christchurch, NZ ($43.5^{\circ}$N, $172.6^{\circ}$E); Perth, Australia ($32.0^{\circ}$S, $115.9^{\circ}$E)
In addition to training a separate model for each location, a separate CNN
model was required to make predictions for each lead time. This setup was
mirrored for the LSTM by using a many-to-one model. The main difference
between the CNN and LSTM approaches is that inputs to the CNN are full global
fields of all of the predictor variables over an input horizon of
$h_{in}=6\,$weeks, whereas inputs to the LSTM are multi-variate time series of
the predictor variables extracted at the position of the target location over
an input horizon of $h_{in}=52\,$weeks.
The metrics used to evaluate the final results on the test data are the
Pearson Correlation Coefficient (PCC) and the Root Mean Square Error Skill
Score (RMSESS), given by the following equations:
$\text{PCC}=\frac{\sum_{i=1}^{n}(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_{i}-\bar{x})^{2}\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}}}\qquad\text{and}\qquad\text{RMSESS}=1-\frac{\text{RMSE}_{\text{model}}}{\text{RMSE}_{\text{clim}}},\qquad$
(1)
where $x$ and $y$ represent the ground-truth and predicted samples,
respectively, with $\bar{x}$ and $\bar{y}$ the corresponding sample means over
the test data. The RMSESS compares the model’s RMSE to that of the $30$-year
climatology. It is generally difficult to improve upon the climatology in
terms of correlation and absolute error.
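A minimal NumPy implementation of the two evaluation metrics in Equation 1:

```python
import numpy as np

def pcc(x, y):
    """Pearson correlation coefficient between ground truth x and predictions y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def rmsess(truth, pred, clim):
    """RMSE skill score of the model relative to the climatology baseline."""
    rmse = lambda a, b: float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))
    return 1.0 - rmse(truth, pred) / rmse(truth, clim)
```

A positive RMSESS means the model beats the climatology; `rmsess(truth, pred, clim) == 1` only for a perfect forecast.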
The CNN architecture consists of 4 convolution blocks
(Conv2D$\rightarrow$ReLU$\rightarrow$MaxPool$\rightarrow$Dropout), followed by
a 50-unit fully-connected layer and a single-unit output layer. Fields that
comprise $\textbf{s}_{in}$ are stacked as input channels for the CNN. The LSTM
architecture consists of an RNN layer with 64 LSTM units, followed by a fully-
connected layer with 32 units, and a single-unit output layer. The LSTM does
not produce any intermediate predictions, and only produces an output
prediction after reading in the full input horizon of 52 weeks.
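A minimal PyTorch sketch of the two models. Kernel sizes, channel widths, the dropout rate, and the $60\times 120$ global $3^{\circ}$ grid are assumptions, since the text only fixes the block structure and unit counts; 18 input channels correspond to 3 predictor variables over 6 input weeks.

```python
import torch
import torch.nn as nn

class CNNForecaster(nn.Module):
    """4 x (Conv2D -> ReLU -> MaxPool -> Dropout) -> FC(50) -> FC(1).
    Input: (batch, channels, lat, lon), with the h_in weekly fields of all
    predictor variables stacked as channels."""
    def __init__(self, in_channels=18, grid=(60, 120)):
        super().__init__()
        layers, c = [], in_channels
        for out_c in (32, 64, 128, 128):  # channel widths are assumptions
            layers += [nn.Conv2d(c, out_c, 3, padding=1), nn.ReLU(),
                       nn.MaxPool2d(2), nn.Dropout(0.2)]
            c = out_c
        self.conv = nn.Sequential(*layers)
        flat = c * (grid[0] // 16) * (grid[1] // 16)  # after 4 pools of 2
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(flat, 50),
                                  nn.ReLU(), nn.Linear(50, 1))
    def forward(self, x):
        return self.head(self.conv(x))

class LSTMForecaster(nn.Module):
    """64-unit LSTM -> FC(32) -> FC(1); reads the full 52-week multivariate
    series at the target location and emits one prediction (many-to-one)."""
    def __init__(self, n_vars=3):
        super().__init__()
        self.lstm = nn.LSTM(n_vars, 64, batch_first=True)
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, x):              # x: (batch, 52, n_vars)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # only the final hidden state is used
```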
Anomaly fields with respect to the standard $30$-year climatology were used
for all variables during training and inference. The climatology was
subsequently added to the outputs to obtain the actual values for final
evaluation with the PCC and RMSESS metrics.
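The anomaly transformation can be sketched as follows; leap weeks and partial years are ignored for simplicity.

```python
import numpy as np

def to_anomalies(weekly, clim):
    """weekly: (n_weeks, ...) series; clim: (52, ...) weekly climatology.

    Returns (anomalies, aligned_climatology); adding the second output back
    to model predictions recovers absolute values for evaluation.
    """
    n = weekly.shape[0]
    reps = (n + 51) // 52  # ceil division, in case n is not a multiple of 52
    aligned = np.concatenate([clim] * reps, axis=0)[:n]
    return weekly - aligned, aligned
```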
Figure 2: Left & Center Panels: Time series plots comparing ERA5 weekly data
to the climatology and predictions from the CNN and LSTM for Panama City (top)
and Perth (bottom). Left and center panels correspond to predictions at lead
times of $10\,$weeks and $40\,$weeks, respectively. Right Panels: The PCC at
different lead times for Panama City (top) and Perth (bottom).
## 3 Results and Discussion
The motivation for investigating the two classes of ML models is to allow the
CNN to leverage information related to teleconnections in the predictor
variables to improve its forecasting skill, while the LSTM should be able to
leverage long-term historical temporal information to achieve the same. In
this work, ML results for selected target locations are compared against a
baseline prediction, which is the $30$-year standard climatology calculated
from weekly-aggregated ERA5 data between 1981 and 2010 (similar to the
approach by [11]).
For low latitude locations (near the equator), the climatology has a
noticeably reduced correlation skill, as shown in Figure 1. Using Panama City
as an example, we show in Figure 2 (top panels) that the CNN and LSTM are able
to improve on the climatology’s PCC skill up to lead times of around
$30\,$weeks and $18\,$weeks, respectively. For a lead time of $10\,$weeks,
both models predict the peaks and troughs with reasonable accuracy, capturing
to some extent the warmer than usual summers and winters during 2015, 2016 and
2019. As expected, correlation skill reduces for larger lead times as
indicated by the red and green PCC curves that fall below the climatology
line. This can also be seen in the time series plot for the $40\,$week lead
time, where CNN and LSTM predictions do not deviate much from the climatology,
except for a few instances of warmer summers and winters.
In the bottom panels of Figure 2, Perth is used as an example of a mid/high-
latitude location for which the climatology alone already has a PCC skill of
$\sim 0.93$. The time series plots show that Perth exhibits a regular annual
cycle that is well represented by the climatology so that, for the most part,
deviations from the climatology for Perth likely only represent high-frequency
noise. This likely explains why the CNN and LSTM models fail to learn any
useful patterns outside of the annual cycle.
Figure 3 gives the RMSESS results for the CNN (left panels) and the LSTM
(right panels). These results convey a similar message as those in Figure 2,
but in terms of RMSE. A RMSESS value $>0$ indicates that the ML model has a
lower RMSE than the climatology and, conversely, a value $<0$ means the
climatology has a lower RMSE than the model. For low-latitude locations (top
panels) the CNN predictions are able to improve on the climatology for almost
all lead times considered. The same is true for the LSTM, except for the Mid-
Pacific location which falls below the climatology for lead times
$>20\,$weeks. Evidently, neither model fares any better than the climatology
for the mid/high latitude locations (bottom panels), even at lead times of
$<10\,$weeks. The LSTM does, however, marginally improve on the RMSE for
London.
Figure 3: Plots of the RMSESS for lead times of 1 - 52 weeks, and for
different locations. Left Panels: RMSESS results from the CNN for locations
where predictions have improved skill relative to the climatology (top), and
for locations where predictions have similar or reduced skill relative to the
climatology (bottom). Right Panels: Similar to the left panels, but for the LSTM.
## 4 Conclusions and Future Work
The standard $30$-year climatology, often used as a baseline for seasonal
forecasts, does not perform equally well across the globe as highlighted in
Figure 1. However, the $30$-year climatology accurately represents the most
important modes of variability for $2$m temperature outside of the tropics,
where it achieves relatively high PCC ($>0.8$).
Despite the $30$-year climatology being generally difficult to outperform,
this study shows that ML methods do achieve comparable, and for some
locations, improved, accuracy of long-range temperature forecasts up to a lead
time of 30 weeks for PCC and up to 52 weeks for RMSESS. Being able to improve
upon such a baseline in the context of seasonal forecasting is an invaluable
advantage when considering preparedness against extreme climate events that
have characterized climate change impacts over the past two decades.
Future considerations and improvements on this work include using a more
accurate climatology, training on larger datasets such as CMIP 5/6,
implementing a U-Net approach [12] to generate predictions across the entire
globe, and combining the CNN and LSTM models into a unified approach that
exploits the spatio-temporal dynamics of the underlying processes.
## Acknowledgments and Disclosure of Funding
The authors would like to thank Brian White for his mentorship and advice
during the preparation of this paper.
## References
* [1] T. M. Lenton, J. Rockström, O. Gaffney, S. Rahmstorf, K. Richardson, W. Steffen, and H. J. Schellnhuber, “Climate tipping points — too risky to bet against,” Nature, vol. 575, pp. 592–595, 2019.
* [2] IPCC, “Summary for policymakers,” in Special Report: Global Warming of $1.5^{\circ}\textrm{C}$, p. 32, 2018.
* [3] W. J. Merryfield, J. Baehr, L. Batté, E. J. Becker, A. H. Butler, C. A. S. Coelho, G. Danabasoglu, P. A. Dirmeyer, F. J. Doblas-Reyes, D. I. V. Domeisen, et al., “Current and emerging developments in subseasonal to decadal prediction,” Bulletin of the American Meteorological Society, vol. 101, no. 6, pp. E869–E896, 2020.
* [4] S. J. Johnson, T. N. Stockdale, L. Ferranti, M. A. Balmaseda, F. Molteni, L. Magnusson, S. Tietsche, D. Decremer, A. Weisheimer, G. Balsamo, S. P. E. Keeley, K. Mogensen, H. Zuo, and B. M. Monge-Sanz, “SEAS5: the new ECMWF seasonal forecast system,” Geoscientific Model Development, vol. 12, no. 3, pp. 1087–1117, 2019.
* [5] T. Doi, S. K. Behera, and T. Yamagata, “Improved seasonal prediction using the SINTEX-F2 coupled model,” Journal of Advances in Modeling Earth Systems, vol. 8, no. 4, pp. 1847–1867, 2016.
* [6] L. Xu, N. Chen, X. Zhang, and Z. Chen, “A data-driven multi-model ensemble for deterministic and probabilistic precipitation forecasting at seasonal scale,” Climate Dynamics, vol. 54, no. 7-8, pp. 3355–3374, 2020.
* [7] J. Cohen, D. Coumou, J. Hwang, L. Mackey, P. Orenstein, S. Totz, and E. Tziperman, “S2S reboot: An argument for greater inclusion of machine learning in subseasonal to seasonal forecasts,” WIREs Climate Change, vol. 10, no. 2, p. e00567, 2019.
* [8] M. Kämäräinen, P. Uotila, A. Y. Karpechko, O. Hyvärinen, I. Lehtonen, and J. Räisänen, “Statistical learning methods as a basis for skillful seasonal temperature forecasts in Europe,” Journal of Climate, vol. 32, no. 17, pp. 5363–5379, 2019.
* [9] Y.-G. Ham, J.-H. Kim, and J.-J. Luo, “Deep learning for multi-year ENSO forecasts,” Nature, vol. 573, no. 7775, pp. 568–572, 2019.
* [10] H. Hersbach, B. Bell, P. Berrisford, S. Hirahara, A. Horányi, J. Muñoz-Sabater, J. Nicolas, C. Peubey, R. Radu, D. Schepers, et al., “The ERA5 global reanalysis,” Quarterly Journal of the Royal Meteorological Society, vol. 146, no. 730, pp. 1999–2049, 2020.
* [11] M. Janoušek, “ERA-interim daily climatology,” ECMWF, 2011.
* [12] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” arXiv preprint arXiv:1505.04597, 2015.
# Challenges in Automated Debiasing for Toxic Language Detection
Xuhui Zhou♡ Maarten Sap♣ Swabha Swayamdipta♢ Noah A. Smith♣♢ Yejin Choi♣♢
♡Department of Linguistics, University of Washington
♣Paul G. Allen School of Computer Science & Engineering, University of
Washington
♢Allen Institute for Artificial Intelligence
xuhuizh<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Warning: this paper contains content that may be offensive or upsetting.
Biased associations have been a challenge in the development of classifiers
for detecting toxic language, hindering both fairness and accuracy. As
potential solutions, we investigate recently introduced debiasing methods for
text classification datasets and models, as applied to toxic language
detection. Our focus is on lexical (e.g., swear words, slurs, identity
mentions) and dialectal markers (specifically African American English). Our
comprehensive experiments establish that existing methods are limited in their
ability to prevent biased behavior in current toxicity detectors. We then
propose an automatic, dialect-aware data correction method, as a proof-of-
concept study. Despite the use of synthetic labels, this method reduces
dialectal associations with toxicity. Overall, our findings show that
debiasing a model trained on biased toxic language data is not as effective as
simply relabeling the data to remove existing biases.
## 1 Introduction
Current hate speech or toxic language detection111We use _hate speech_ and
_toxic language_ interchangeably in this work, though their definitions do not
perfectly align. systems exhibit problematic and discriminatory behavior that
causes them to have disparate negative impact on minority populations Yasin
(2018); Guynn (2020); Kim et al. (2020); Dias Oliva et al. (2020). Tweets
simply containing a minority identity mention are commonly flagged as toxic by
current systems, in contrast to those containing majority identity mentions,
as illustrated in Figure 1.
Figure 1: Lexical items and dialect markers cause problematic behavior for
toxic language detection systems such as the widely used PerspectiveAPI. In
the top two example pairs, statements with minority identity mentions and
swear words used inoffensively are flagged as toxic, but majority identity
mentions or offensive statements without overt swearing are missed. The bottom
pair shows dialect-based racial bias for two inoffensive greetings, where
markers of African American English (aae) trigger the toxicity detector.
At the core of the issue are dataset biases, i.e., spurious correlations
between surface patterns and annotated toxicity labels (§2), which stem from
the data creation process Sap et al. (2019). Previous work has outlined two
such biases for hate speech datasets (both shown in Figure 1): lexical bias
which associates toxicity with the presence of certain words (e.g.,
profanities, identity mentions; Dixon et al., 2018; Dinan et al., 2019) and
dialectal bias, where toxicity is correlated with surface markers of African
American English (aae; Davidson et al., 2019; Sap et al., 2019). When trained
on biased datasets, models acquire and exacerbate these biases (e.g., flagging
text by Black authors as more toxic than by white authors; Sap et al., 2019;
Zhang et al., 2018).
Concurrently, there has been elevated interest in developing debiasing methods
for standard natural language understanding (NLU) tasks, i.e., methods that
aim to decrease over-reliance on spurious correlations in NLU models Clark et
al. (2019); He et al. (2019); Karimi Mahabadi et al. (2020); Bras et al.
(2020). This raises a natural question: are current debiasing approaches
effective for mitigating biases specific to toxic language detection?
In this work, we address the above question by investigating two classes of
debiasing approaches to mitigate lexical and dialectal biases—one that employs
additional training objectives for bias removal, and another that filters
training instances likely exhibiting spurious biases (§3). Through
comprehensive experiments, we show that both approaches face major challenges
in mitigating biases from a model trained on a biased dataset (in our case,
the dataset from Founta et al., 2018) for toxic language detection. While data
filtering results in reduced bias associations in the data, models trained on
filtered datasets still pick up on lexical (§4) and dialectal biases (§5). We
find that dialectal biases are particularly challenging to address, as has
also been shown by Xia et al. (2020). “Debiased” models still
disproportionately flag text in certain dialects as toxic. Notably, mitigating
dialectal bias through current debiasing methods does not mitigate a model’s
propensity to label tweets by Black authors as more toxic than by white
authors.
We additionally explore an alternative proof-of-concept study—relabeling
supposedly toxic training instances whose automatic translations into a
majority dialect are deemed non-toxic by the classifier. To this end, we
create a synthetic dataset via few-shot dialect translation system built with
GPT-3 Brown et al. (2020). While only an illustrative solution, it
nevertheless takes into account the dialectal context of the tweet, resulting
in a model less prone to dialectal and racial biases (§6). Overall, our
findings indicate that debiasing a model already trained on biased toxic
language data can be challenging, compared to relabeling the data to remove
existing biases. Our code and data are publicly available on
Github.222https://github.com/XuhuiZhou/Toxic_Debias
## 2 Biases in Toxic Language Detection
We test the use of debiasing333Our definition of “bias” is specific to the
social biases in toxic language detection datasets, grounded as lexical and
dialectal biases; see Blodgett et al. (2020) for a detailed investigation of
the term “bias”. methods for the task of toxic language detection, which aims
to flag rude, offensive, hateful, or toxic language on the internet, with the
goal of moderating online communities Roberts (2019); Vidgen et al. (2019).
This task differs in several ways from the natural language understanding
(NLU) tasks that debiasing methods have been successful on, such as textual
entailment (e.g., SNLI, MNLI; Bowman et al., 2015; Williams et al., 2018) or
reading comprehension (e.g., SQuAD; Rajpurkar et al., 2016). First, compared
to these NLU tasks where there is one correct label, the toxicity of language
is inherently more nuanced, subjective, and contextual, which causes toxic
language datasets to have lower agreement in general Ross et al. (2017).
Second, the dataset biases in NLU are predominantly artifacts introduced
during data creation (e.g., negations, exaggerations; Schwartz et al., 2017;
Gururangan et al., 2018), whereas those in toxic language detection are
grounded in the social dynamics of the world Spears (1998); Technau (2018).
For example, viewing aae as a more toxic or less proper variety of English is
a form of linguistic discrimination that upholds racial hierarchies in the
United States (Rosa and Flores, 2017).
In this work, we consider two broad categories of toxic language dataset
biases—lexical (§2.1) and dialectal (§2.2). Our experiments focus on a single,
widely used dataset (§2.3) from Founta et al. (2018).
### 2.1 Lexical Biases (ToxTrig)
Current toxic language detection systems often rely on the presence or absence
of certain words (e.g., swear words, identity mentions) to make their
predictions Dixon et al. (2018); Dinan et al. (2019). While most previous
analyses of this bias relied on a simple list of “bad” words Davidson et al.
(2019); Dinan et al. (2019),444https://tinyurl.com/list-of-bad-words we take a
more nuanced view of how lexical items can convey toxicity, inspired by work
in pragmatics and sociolinguistics of rudeness (Dynel, 2015; Kasper, 1990,
inter alia). Specifically, we manually split our full list of words into three
distinct categories depending on the extent to which they carry profane or
hateful meanings or are simply associated with hateful contexts.555We note,
however, that this categorization is in itself subjective. We refer to the
full set of words as ToxTrig, for Toxicity Triggers, which is included in our
released
repository.666https://github.com/XuhuiZhou/Toxic_Debias/blob/master/data/word_based_bias_list.csv
#### Non-offensive minority identity mentions (nOI)
refers to descriptive mentions of minoritized demographic or social identities
(e.g., gay, female, Muslim). While these mentions are not usually inherently
offensive by themselves, they are often found in offensive statements that are
hateful towards minorities (Dixon et al., 2018). We detect these identity
mentions in text using a list of 26 regular expressions.
#### Possibly offensive minority identity mentions (OI)
are mentions of minoritized identities that could denote profanity or hate
depending on pragmatic and contextual interpretations. This includes slurs and
objectifying outdated terms to refer to minority groups, which are usually
understood as attacks. Additionally, this includes reclaimed slurs (queer,
n*gga), which connote less offensive intent when spoken by in-group members
compared to out-group members Croom (2013).
#### Possibly offensive non-identity mentions (OnI)
contains swear words and other profanities, which are usually offensive but
not associated to any social groups (e.g., f*ck, sh*t). Note that the
pragmatic interpretation of these words is not necessarily always toxic or
offensive Dynel (2012), as they are often used to convey closeness between the
speaker and listener or emphasize the emotionality of a statement (e.g.,
second example in Figure 1).
### 2.2 Dialectal Biases (aae)
Current toxic language detection systems also associate higher toxicity with
dialectal markers of African American English (aae; Sap et al., 2019; Davidson
et al., 2019). Since aae is a variety of English that is common among African
Americans and often signals a cultural identity in the US Green (2002), this
dialect-based racial bias causes speech by Black authors to be suppressed more
often than non-Black authors Sap et al. (2019), thereby exacerbating racial
inequality Rosa (2019).
In our experiments, we estimate the dialect of a tweet using a topic model
from Blodgett et al. (2016). This model was trained on 60M tweets, where the
dialect of the tweet was inferred from the model coordinates, which yielded a
probability of a tweet being in one of four dialects (African-American
English, white-aligned English, Hispanic, and other). In this study, we only
focus on African-American English (aae) and white-aligned English (wae)
tweets; both definitions are based on US English, as per Blodgett et al.
(2016).777We avoid using disputed terms such as _general American English_ ,
_standard American English_ , or _mainstream US English_ , which are
frequently used for wae, since we believe that no dialect should be privileged
with the designation “general”, “standard”, or “mainstream” Rosa (2019). Our
experiments either use the probability of a tweet being in these dialects, or
assign tweets their estimated-most-probable dialect.
### 2.3 Dataset for Toxic Language Detection
We focus our analyses on a widely used hate speech dataset of English tweets
Founta et al. (2018). The tweets were collected using a multi-round
bootstrapping procedure, and were labeled out of context888Only the tweet
text—no profile information or conversational context—was shown to annotators.
for toxic language. We focus on the 86k tweets that are annotated as hateful,
abusive, or neither and discard those labelled as spam. We aggregate the
abusive and hateful labels into a single toxic category, yielding 32k toxic
and 54k non-toxic tweets.999We also explored using another widely used hate
speech dataset Davidson et al. (2017), which collected tweets using a seed
list of swear words and slurs. However, in line with findings by Xia et al.
(2020), debiasing led to degenerate behavior due to the data collection
process, as discussed in Appendix B.
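The label aggregation described above, as a small sketch; the label strings are assumptions about the dataset's format:

```python
def aggregate_labels(rows):
    """rows: iterable of (tweet_text, label) pairs with labels assumed to be
    in {'hateful', 'abusive', 'normal', 'spam'}.

    Drops spam and maps hateful/abusive to a single toxic class (1),
    everything else to non-toxic (0).
    """
    out = []
    for text, label in rows:
        if label == "spam":
            continue
        out.append((text, 1 if label in ("hateful", "abusive") else 0))
    return out
```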
## 3 Debiasing Methods
We consider two types of debiasing methods from current literature. The first
type addresses known, pre-defined biases—such as lexical and dialectal biases
for hate speech detection, via a model-based approach involving additional
training objectives (§3.1). In contrast, the second type is agnostic to prior
knowledge about biases, and instead filters out examples that appear “too
easy” and might hence contain spurious correlations (§3.2).
### 3.1 Debiased Training for Pre-Defined Toxicity Biases
We use the Learned-Mixin method of Clark et al. (2019), which achieved high
out-of-distribution (OOD) performance on several NLU tasks, for debiased
training. This method trains an ensemble containing a bias-only model which
only uses pre-defined features corresponding to known biases, and a full model
which uses all features. Intuitively, the ensemble encourages the full model
to rely more on features unrelated to the biases. Once trained, the bias-only
model is discarded, and only the “bias-free” full model is used for inference,
following Clark et al. (2019).
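The ensemble objective can be sketched as a minimal PyTorch version of the Learned-Mixin loss, assuming main-model logits, frozen bias-only log-probabilities, and a pooled hidden state `h` are already available (the helper names here are illustrative):

```python
import torch
import torch.nn.functional as F

def learned_mixin_loss(main_logits, bias_log_probs, h, g_layer, labels):
    """Learned-Mixin ensemble loss (after Clark et al., 2019).

    A learned, non-negative scalar g(x) controls how much the bias-only
    model's prediction explains each example; the full model is then pushed
    to account for whatever the bias-only model cannot.
    """
    g = F.softplus(g_layer(h))                                # (batch, 1), g >= 0
    ensemble_logits = F.log_softmax(main_logits, -1) + g * bias_log_probs
    return F.cross_entropy(ensemble_logits, labels)
```

At inference time the bias-only term is dropped and only `main_logits` are used.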
#### Bias-only model
Given its effectiveness on bag-of-words (BoW) features, we use an SVM
classifier as the lexical-bias-only model. For example, the ToxTrig-only model
counts the frequency of ToxTrig words in each tweet. Our dialectal-bias-only
model uses the probability of dialects (aae, wae, Hispanic, and other)
obtained from a dialect detector Blodgett et al. (2016) as features in a SVM
classifier.
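The lexical-bias-only features amount to per-category ToxTrig counts; a sketch with tiny illustrative stand-ins for the released word lists:

```python
import re

# Illustrative stand-ins only -- the actual ToxTrig lists are in the
# released repository.
TOXTRIG = {
    "nOI": [r"\bgay\b", r"\bmuslim\b"],
    "OI":  [r"\bqueer\b"],
    "OnI": [r"\bdamn\b", r"\bhell\b"],
}

def toxtrig_features(tweet):
    """Count ToxTrig matches per category: [nOI, OI, OnI]."""
    text = tweet.lower()
    return [sum(len(re.findall(pat, text)) for pat in pats)
            for pats in TOXTRIG.values()]
```

These count vectors (or, for the dialectal-bias-only model, the four dialect probabilities) are then fed to a standard SVM classifier.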
#### Full model
We fine-tune a RoBERTa-large classifier Liu et al. (2019), a state-of-the-art
classifier for the toxicity detection task. See Appendix A.1 for more modeling
details.
Note that we only consider the Learned-Mixin-OnI and Learned-Mixin-ToxTrig
models for lexical debiasing, due to poor accuracies of the bias-only models
for nOI and OI.101010The nOI and OI bias-only models reach 63% and 67%
accuracy, respectively, which is empirically hard for the ensemble to use.
This is likely due to low coverage in the train set of those categories (4.43%
nOI and 4.25% OI).
### 3.2 Data Filtering for Spurious Biases
In addition to debiasing methods that handle known biases, we also explore
automated approaches which filter out instances exhibiting unspecified,
spurious biases. Specifically, we describe below two data selection methods
that have shown strong OOD performance.
#### AFLite (Bras et al., 2020)
is an algorithm based on the key intuition that examples predicted correctly
by the simplest methods likely exhibit spurious biases. An ensemble of simple
linear classifiers is trained and tested on different partitions of the data;
test instances which are “predictable”, or classified correctly by most
classifiers in the ensemble are discarded. The algorithm is iterative, and is
repeated until a target data size is achieved. Models trained on this filtered
dataset achieve higher performance on OOD and adversarially constructed test
sets, compared to the original model, on several text and image classification
datasets. This indicates a reduction in spurious biases in the filtered data.
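A compact sketch of the AFLite idea, using a nearest-centroid stand-in for the ensemble of simple linear classifiers; the predictability threshold and ensemble size are illustrative assumptions:

```python
import numpy as np

class _Centroid:
    """Stand-in for the simple linear classifiers in the ensemble."""
    def __init__(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)
        self.c1 = X[y == 1].mean(axis=0)
    def predict(self, X):
        d0 = ((X - self.c0) ** 2).sum(axis=1)
        d1 = ((X - self.c1) ** 2).sum(axis=1)
        return (d1 < d0).astype(int)

def aflite_filter(X, y, target_size, n_models=8, train_frac=0.8,
                  threshold=0.75, seed=0):
    """Iteratively discard examples that most simple classifiers get right."""
    rng = np.random.default_rng(seed)
    keep = np.arange(len(y))
    while len(keep) > target_size:
        correct, seen = np.zeros(len(keep)), np.zeros(len(keep))
        for _ in range(n_models):
            perm = rng.permutation(len(keep))
            n_tr = int(train_frac * len(keep))
            tr, te = perm[:n_tr], perm[n_tr:]
            clf = _Centroid(X[keep[tr]], y[keep[tr]])
            correct[te] += clf.predict(X[keep[te]]) == y[keep[te]]
            seen[te] += 1
        score = np.divide(correct, seen, out=np.zeros_like(correct),
                          where=seen > 0)
        predictable = score > threshold
        if not predictable.any():
            break
        keep = keep[~predictable]   # drop the "too easy" examples
    return keep
```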
#### DataMaps Swayamdipta et al. (2020)
show the presence of distinct regions in a dataset—namely, easy, hard and
ambiguous—defined with respect to a given model. These regions are discovered
based on the training dynamics of a model, determined by the model’s
confidence in the true class, for each example, as well as the variability of
this confidence, throughout training epochs. Swayamdipta et al. (2020) show
that training exclusively on the hard and ambiguous regions of the data
results in high OOD performance, indicating lower prevalence of spurious
biases. The easy region is the largest in size for RoBERTa; however,
experiments showed that training exclusively on these examples hurt OOD
generalization on different NLU tasks. Following this work, we create
DataMaps-Easy, DataMaps-Ambiguous, and DataMaps-Hard subsets for our dataset.
Following Swayamdipta et al. (2020), we set the target filtered subset size to
33% of the original training set for both filtering methods, but our filtering
additionally preserved the original label proportions. We then fine-tune a
RoBERTa-large classifier on these filtered subsets; see Appendix A.2 for more
details.
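Computing the two training-dynamics coordinates, and selecting regions from them, can be sketched as follows; selecting each region as a simple top-fraction by one coordinate is an illustrative simplification:

```python
import numpy as np

def data_map_coords(epoch_probs):
    """epoch_probs: (n_epochs, n_examples) model probability assigned to the
    TRUE class at the end of each epoch. Returns per-example confidence
    (mean over epochs) and variability (std over epochs)."""
    return epoch_probs.mean(axis=0), epoch_probs.std(axis=0)

def region_indices(confidence, variability, frac=1 / 3):
    k = max(1, int(round(frac * len(confidence))))
    easy = np.argsort(-confidence)[:k]        # consistently high confidence
    hard = np.argsort(confidence)[:k]         # consistently low confidence
    ambiguous = np.argsort(-variability)[:k]  # confidence fluctuates
    return easy, ambiguous, hard
```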
## 4 Experiments: Lexical Biases
We investigate the effect of debiasing approaches (§3) on removing lexical
biases in hate speech detection. First, we discuss the evaluation framework
for measuring bias reduction (§4.1). We present quantitative (§4.2) and
qualitative (§4.3) results on lexical bias removal for all debiasing
approaches, and OOD evaluation for debiased training methods (§4.4). See
Appendix A.3 for hyperparameters and other experimental settings.
### 4.1 Evaluation Framework
We report the performance of all models as overall accuracy and $F_{1}$ with
respect to the toxic class. Given that current hate speech systems tend to
rely heavily on the presence of nOI, OI, and OnI mentions (§2.1) for labeling
text as toxic, we use false positive rate (FPR) over each of these categories
to measure the degree of bias in the model, following Hardt et al. (2016) and
Xia et al. (2020). Specifically, we report the FPR of a model on tweets
containing nOI (FPR${}_{\textsc{nOI}}$), OI (FPR${}_{\textsc{OI}}$), and OnI
(FPR${}_{\textsc{OnI}}$), as well the $F_{1}$ corresponding to each of these
classes. Intuitively, the lower the FPR∗, the less the model infers lexical
associations for toxicity, and hence is less biased.
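The per-category FPR can be computed by restricting the standard false positive rate to tweets containing the relevant mentions; a minimal sketch (toxic = 1, non-toxic = 0):

```python
import numpy as np

def false_positive_rate(y_true, y_pred, mask=None):
    """FPR = FP / (FP + TN), optionally restricted to examples where mask is
    True (e.g. tweets containing OnI words)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    if mask is not None:
        m = np.asarray(mask)
        y_true, y_pred = y_true[m], y_pred[m]
    negatives = y_true == 0
    if negatives.sum() == 0:
        return float("nan")  # FPR undefined with no true negatives
    return float((y_pred[negatives] == 1).mean())
```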
| | $R_{\textsc{nOI}}\downarrow$ | $R_{\textsc{OI}}\downarrow$ | $R_{\textsc{OnI}}\downarrow$ |
|---|---|---|---|
| Original | 0.0445 | 0.2641 | 0.6718 |
| Random (33% train) | 0.0345 | 0.2603 | 0.6683 |
| AFLite (33% train) | 0.0434 | 0.2458 | 0.6016 |
| DataMaps-Ambig. (33% train) | 0.0126 | 0.1968 | **0.5839** |
| DataMaps-Hard (33% train) | **0.0081** | **0.1853** | 0.5849 |
| DataMaps-Easy (33% train) | 0.0772 | 0.3661 | 0.7720 |

Table 1: Lexical associations between toxicity and ToxTrig mentions in the original dataset Founta et al. (2018) and various filtered counterparts. Random, AFLite, and DataMaps all contain only 33% of the original data after filtering. A lower Pearson $R$ correlation value indicates fewer superficial patterns in the dataset, i.e., less bias. Takeaway: The hard and ambiguous subsets given by DataMaps contain the lowest amount of lexical associations, indicated in boldface.

| | Test Acc.$\uparrow$ | Test $F_{1}\uparrow$ | nOI $F_{1}\uparrow$ | FPR${}_{\textsc{nOI}}\downarrow$ | OI $F_{1}\uparrow$ | FPR${}_{\textsc{OI}}\downarrow$ | OnI $F_{1}\uparrow$ | FPR${}_{\textsc{OnI}}\downarrow$ |
|---|---|---|---|---|---|---|---|---|
| Vanilla | **94.21** (0.0) | **92.33** (0.0) | **89.76** (0.3) | **10.24** (1.3) | 98.84 (0.1) | 85.71 (0.0) | **97.34** (0.1) | 64.72 (0.8) |
| LMixin-OnI | 89.65 (1.5) | 85.59 (2.5) | 87.04 (1.1) | 13.99 (1.5) | **98.87** (0.0) | 85.71 (0.0) | 87.87 (4.5) | **43.74** (3.1) |
| LMixin-ToxTrig | 90.44 (0.7) | 86.94 (1.1) | 85.47 (0.3) | 11.15 (1.7) | 97.64 (0.3) | **71.43** (0.0) | 90.41 (1.8) | 44.55 (1.5) |
| Random (33% train) | 94.07 (0.1) | 92.18 (0.1) | 89.48 (0.4) | 9.33 (0.7) | **98.93** (0.0) | **83.33** (3.4) | 97.40 (0.1) | 67.15 (0.6) |
| AFLite (33% train) | 93.86 (0.1) | 91.94 (0.1) | **90.21** (0.4) | 11.26 (1.1) | 98.90 (0.0) | 85.71 (0.0) | 97.32 (0.1) | 67.97 (3.4) |
| DataMaps-Ambig. (33% train) | 94.33 (0.1) | 92.45 (0.1) | 89.16 (0.7) | 7.39 (1.0) | 98.87 (0.0) | 85.71 (0.0) | **97.54** (0.0) | 64.39 (1.4) |
| DataMaps-Hard (33% train) | **94.50** (0.0) | **92.61** (0.1) | 89.54 (0.4) | 6.26 (0.9) | 98.84 (0.0) | 85.71 (0.0) | 97.43 (0.0) | 61.95 (1.1) |
| DataMaps-Easy (33% train) | 94.00 (0.1) | 91.94 (0.2) | 86.81 (0.6) | **5.92** (0.7) | 98.87 (0.0) | **83.33** (3.4) | 97.17 (0.1) | **60.33** (3.8) |

Table 2: Evaluation of lexical bias removal for all debiasing methods on the Founta et al. (2018) test set. Results show the mean and s.d. (in parentheses) of accuracy and $F_{1}$ across 3 runs, as well as $F_{1}$ and false positive rate exclusively on test examples containing specific ToxTrig mentions—nOI (602), OI (553), and OnI (3236)—out of 12893 test examples in total. The lower the FPR${}_{\textbf{*}}$, the less the model infers lexical associations for toxicity. The first block shows debiased training approaches, along with the vanilla classifier, which are trained on the full dataset. The second block shows data filtering approaches, all trained on only 33% of the training data. Best performance in each block is boldfaced. Takeaway: While data filtering approaches achieve overall higher performance, debiased training approaches perform better on lexical bias reduction, in aggregate.

| Tweet | Gold | DM-Hard | DM-Easy |
|---|---|---|---|
| @user THIS? LMAOOOOO…do yourself a solid and stay out of Black people’s mentions and mind your caucasia… | non-toxic | toxic | non-toxic |
| RT @user I wish I wasn’t so annoying like I even piss myself off | toxic | non-toxic | toxic |
| @user If you want to attack people, attack fundamentalists of all faiths. Attack those who condemn 1.5bn people out of hand. | non-toxic | toxic | non-toxic |

Table 3: Examples of test set tweets with their gold-standard annotations and
predictions from models trained on DataMaps-Hard (DM-Hard) and DataMaps-Easy
(DM-Easy) subsets. We anonymize the usernames to protect user privacy.
#### Evaluation for Filtered Datasets
We additionally consider metrics based on spurious lexical associations for
data filtering approaches. This measures prevalence of spurious surface
patterns in the filtered datasets, which might propagate to models trained on
the data. Specifically, we report the Pearson’s correlation between the gold
standard toxicity label and whether or not it contains nOI, OI, or OnI
mentions. These correlations are denoted as $R_{\textsc{OnI}}$,
$R_{\textsc{nOI}}$, and $R_{\textsc{OI}}$, respectively; lower values indicate
reduction in lexical biases.
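Because both variables are binary (toxic vs. not, mention present vs. not), these correlations reduce to point-biserial correlations and can be computed directly:

```python
import numpy as np

def lexical_association(labels, has_mention):
    """Pearson correlation between binary toxicity labels and a binary
    indicator for whether each tweet contains a given ToxTrig category."""
    return float(np.corrcoef(np.asarray(labels, float),
                             np.asarray(has_mention, float))[0, 1])
```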
#### Baselines
We consider comparison against two natural baselines: a vanilla RoBERTa-large
classifier trained on the original dataset (Original). We also consider a
baseline trained on a random selection of the training data (Random), for
comparison with data filtering methods for debiasing. Each subset is trained
on 33% of the training data.
### 4.2 Results for Lexical Bias Reduction
First, we measure the reduction in lexical biases in filtered datasets, as
given by AFLite and DataMaps. As shown in Table 1, subsets given by AFLite and
the ambiguous and hard regions produced by DataMaps reduce the overall
associations between ToxTrig words and toxicity, compared to the original and
random baselines; DataMaps-Hard has the largest reduction. On the other hand,
as expected, DataMaps-Easy shows an _increased_ association between ToxTrig
mentions and toxicity, showing that these examples display overt lexical
biases.
Table 2 shows results for lexical bias reduction using both debiased training
approaches, as well as models trained on datasets filtered using AFLite and
all three regions from DataMaps. Both debiased training approaches, LMixin-OnI
and LMixin-ToxTrig, reduce FPR${}_{\textsc{OnI}}$ as well as
FPR${}_{\textsc{OI}}$ by a large amount. However, both approaches also hurt
in-distribution test performance, indicating that OnI and other ToxTrig
features are essential for good performance. (When we combine the bias-only
model and the full model, we obtain competitive performance; see Appendix
A.4.) In contrast, the models trained on the hard and ambiguous subsets from
DataMaps both preserve in-distribution performance, even though they are
trained on only a third of the original data. They also reduce the rate of
falsely predicting nOI mentions as toxic (FPR${}_{\textsc{nOI}}$), while not
showing much improvement for OnI and maintaining FPR${}_{\textsc{OI}}$ of the
original baseline.
Surprisingly, the model trained on the easy subset from DataMaps shows good
bias reduction on the nOI and OnI categories, while matching the random
selection baseline for OI. This is despite DataMaps-Easy showing an increased
association between ToxTrig mentions and toxicity (Table 1). Notably, the
$F_{1}$ for all categories suffers under this model, indicating that it is
less competent than the baseline. These results suggest that reduced
associations in the data might not necessarily lead to debiased models trained
on the same data. Overall, no single approach outperforms all others across
different categories for lexical debiasing.
### 4.3 Qualitative Analysis
A qualitative study of the Founta et al. (2018) test set shows the presence of
many annotation errors. We show three representative annotation errors in
Table 3. The first example contains an atypical example of toxicity, towards
white folks, which the annotators might have been unaware of. It also contains
a link which annotators had access to, but models did not. The second contains
the word p*ss, which the annotators may have relied on for their assessment. The third
encourages violence/abuse towards an identity which isn’t typically the target
of violence. Interestingly, the DataMaps-Easy predictions agree with all the
gold standard annotations; perhaps such annotation errors and ambiguity are
responsible for the performance discussed in §4.2. These annotation
ambiguities might also impair our measurement for models’ performance and
debiasing effects, and expose a limitation of these datasets.
### 4.4 Adversarial Evaluation: OnI-Adv
Figure 2: Challenge set evaluation for lexical biases, comparing all debiasing
methods with baselines, using the OnI-Adv test set. Takeaway:
$F_{1}(\uparrow)$ measures show that all models perform poorly at identifying
toxic text not containing overtly lexical cues of toxicity. In general,
debiased training approaches outperform the original model on this challenge
set, while data filtering is not as effective.
To further study the reliance of debiased models on the OnI words, we use the
test set from Dinan et al. (2019), which contains 3,000 manually created
sentences, 300 of which are toxic. The toxic sentences barely contain any OnI mentions,
e.g., Cheese is made by french people who smell. We call this test set OnI-Adv
(for adversarial) since it challenges models with a reversal in the
association between toxicity and offensive non-identity words (e.g., “f*ck”,
“sh*t”).
We report $F_{1}$ for all models in Figure 2, which shows how well a model
identifies toxicity in offensive tweets that do not contain overtly lexical
cues of toxicity. The debiased training approaches improve over the baselines;
data filtering methods do not. One reason for this might be that data
filtering methods were trained on much less data than both LMixin models.
Regardless, none of the models we test are good at predicting subtle, non-
overt toxicity.
| Debiasing Method | $R_{\textsc{aae}}\downarrow$ | $F_{1}\uparrow$ | FPR${}_{\textsc{aae}}\downarrow$
---|---|---|---
Vanilla | 0.4079 | 92.33 (0.0) | 16.84 (0.3)
LMixin-Dialect | – | 92.26 (0.1) | 16.07 (0.4)
Random (33% train) | 0.4027 | 92.18 (0.1) | 16.67 (0.6)
AFLite (33% train) | 0.3577 | 91.94 (0.1) | 16.84 (0.8)
DataMaps-Ambig. (33% train) | 0.2965 | 92.45 (0.1) | 15.99 (0.4)
DataMaps-Hard (33% train) | 0.2878 | 92.61 (0.1) | 13.71 (0.2)
DataMaps-Easy (33% train) | 0.5347 | 91.94 (0.2) | 19.46 (2.8)
aae-relabeled | 0.3453 | 91.64 (0.3) | 12.69 (0.0)
Table 4: Dialectal bias evaluation for all debiasing methods (§5), as well as
the relabeling approach (§6), on the Founta et al. (2018) test set. We report
$F_{1}$ and the false positive rate with respect to tweets in aae
(FPR${}_{\textsc{aae}}$), reflecting dialectal bias (lower is less biased),
showing the mean and s.d. (in parentheses) across 3 runs. (Top block) Debiased
training approaches, along with the vanilla classifier, are all trained on the
full dataset. (Middle block) Random, AFLite and DataMaps are all trained on
only 33% of the training data. Best performance for each training set size is
in boldface. Takeaway: Both debiasing approaches improve performance over
baselines, with DataMaps-Hard proving the most effective at debiasing. (Bottom
block) aae-relabeling yields a model which, despite the noisy relabeling
process, achieves even larger improvements in dialectal debiasing.
## 5 Experiments: Dialectal and Racial Biases
We test the efficacy of the bias reduction methods from §3 on dialectal bias
(§2.2).
### 5.1 Dialectal Biases
For our dialectal bias experiments, we first infer the dialect of a tweet as
described in §2.2. Then, analogous to the lexical bias evaluation, we quantify
the dialectal debiasing using the Pearson’s correlation between estimated
probabilities of aae and toxicity ($R_{\textsc{aae}}$), and the false positive
rates of models on aae tweets (FPR${}_{\textsc{aae}}$). See Appendix A.3 for
hyperparameters and other experimental settings.
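For concreteness, FPR${}_{\textsc{aae}}$ is simply the false positive rate restricted to the aae subgroup. A minimal sketch, assuming the dialect estimates have already been thresholded into a boolean membership mask (the function and argument names here are hypothetical):

```python
def subgroup_fpr(gold, pred, in_group):
    """False positive rate within a subgroup: the fraction of non-toxic
    subgroup examples that the model nonetheless flags as toxic."""
    fp = sum(1 for g, p, m in zip(gold, pred, in_group) if m and not g and p)
    negatives = sum(1 for g, m in zip(gold, in_group) if m and not g)
    return fp / negatives if negatives else 0.0
```

A lower value means the model falsely flags fewer non-toxic aae tweets.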
Results in Table 4 show that almost all data filtering and debiasing methods
reduce dialectal biases, with DataMaps-Easy as the exception (consistent with
Table 1). Notably, DataMaps-Hard performs the best at dialectal debiasing,
both in terms of toxicity-aae correlation ($R_{\textsc{aae}}$) and in terms of
false flagging of toxicity (FPR${}_{\textsc{aae}}$). Interestingly, most
models’ decrease in false flagging is small, suggesting room for improvement.
| Debiasing Method | W-Tox. | AA-Tox. | $\Delta\downarrow$ | AA/W$\downarrow$
---|---|---|---|---
Original | 7.24 | 12.61 | 5.37 | 1.74
LMixin-Dialect | 7.50 | 12.55 | 5.06 | 1.67
Random (33% train) | 8.28 | 13.24 | 4.96 | 1.60
AFLite (33% train) | 7.32 | 11.64 | 4.33 | 1.59
DataMaps-Ambig. (33% train) | 6.75 | 12.17 | 5.42 | 1.80
DataMaps-Hard (33% train) | 6.36 | 11.67 | 5.31 | 1.84
DataMaps-Easy (33% train) | 8.46 | 16.30 | 7.83 | 1.94
aae-relabeled | 6.93 | 10.60 | 3.67 | 1.53
Table 5: Racial disparity in toxicity prediction, reported on the dataset of
Preoţiuc-Pietro and Ungar (2018). W-Tox. indicates the % of white users’
tweets flagged as toxic, AA-Tox. the % of African American users’ tweets
flagged as toxic, $\Delta$ the difference between AA-Tox. and W-Tox., and AA/W
the ratio between AA-Tox. and W-Tox. Takeaway: Methods generally fail to
debias on this OOD test set; only the relabeling approach shows some benefit.
### 5.2 Racial Biases
To quantify the real-world impact of dialect-based racial bias, we measure the
rates of toxicity predicted by models on a corpus of tweets for which the race
of authors is available, but not annotations of toxicity. Specifically, we
consider the dataset released by Preoţiuc-Pietro and Ungar (2018), which
consists of 5.4M tweets, collected from 4,132 survey participants (3,184
White, 374 African American) with self-reported race/ethnicity and Twitter
user handles. (For efficiency, we randomly select 12k tweets from the
dataset as the OOD test set.)
We quantify our models’ racial bias by measuring the difference in rates of
flagging tweets by African American authors and those by white authors,
following Sap et al. (2019). (Note that we assume that authors from all
races have the same likelihood of writing toxic language.)
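The disparity metrics in Table 5 can be sketched directly from per-group flag rates; a minimal version, assuming hypothetical "aa"/"white" group labels aligned with the predictions:

```python
def flag_rate_disparity(pred_toxic, author_group):
    """Per-group toxicity flag rates (in %), plus the gap (AA-Tox. - W-Tox.)
    and the ratio (AA-Tox. / W-Tox.) reported in Table 5."""
    rates = {}
    for grp in set(author_group):
        flags = [p for p, g in zip(pred_toxic, author_group) if g == grp]
        rates[grp] = 100.0 * sum(flags) / len(flags)
    return rates, rates["aa"] - rates["white"], rates["aa"] / rates["white"]
```

An unbiased model (under the equal-likelihood assumption above) would give a gap near zero and a ratio near one.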
Listed in Table 5, our results show that automatic debiasing methods do not
consistently decrease the racial discrepancy in flagging toxicity. Notably,
the toxicity rates on tweets by African American authors—and the differences
compared to white authors—are similar across all debiasing methods and
baselines, except for DataMaps-Easy, which shows the most racial bias in
toxicity flagging. Surprisingly, DataMaps-Hard, which mitigated dialectal bias
the best out of all debiasing methods, also shows high discrepancy between
author races. Confirming previous results, this suggests that debiasing these
systems requires more than automatic debiasing methods.
## 6 Towards Data Relabeling
aae | GPT-3 wae Translation | Gold | New
---|---|---|---
RT @user I can’t stand a bad texter bruh like don’t be mad if I forget about yo ass | RT @user I can’t stand a bad texter bro like don’t be mad if I forget about you | A |
RT @user Retweet if you fuck with this!!!! | RT @user Retweet if you like this! | A |
RT @user That nigga needs anger management | RT @user That guy needs anger management | A |
RT @user oh fucking hell take a day off man | RT @user oh fuck take a day off man | A | A
Table 6: Examples of aae tweets with their GPT-3 based wae translation, and
original gold standard and new annotations based on aae-relabeled. For the
first three tweets, the (biased) gold labels are changed by models predicting
the new labels on their wae translations. A indicates a toxic label, and an
empty cell indicates a non-toxic label. We anonymize the usernames to protect user privacy.
Based on our quantitative and qualitative analyses, we believe there still is
room for improvement in debiasing hate speech detection. Therefore, we turn
our attention to the role of label noise in datasets. Partly inspired by our
qualitative analyses of debiased models’ predictions, we design a proof-of-
concept study where we automatically correct the labels of tweets using an
automatic dialectal translation of each tweet, inspired by previous work
showing that highlighting aae tweets’ dialect led them to be labeled as less
toxic Sap et al. (2019). We conclude this study by discussing the limitations
and ethical implications of the synthetic data, and cautioning against its
real-world application.
Focusing on dialectal bias, our key assumption is that an aae tweet and its
corresponding wae version should have the same toxicity label, therefore toxic
aae tweets whose wae versions are non-toxic are candidates for label
correction. (Note that this assumption does not hold for lexical items,
because substituting lexical items, e.g., swapping a minority mention for a
majority mention, would drastically change the denotational meaning of the
sentence.)
However, gold-standard translations of aae to wae would require qualified
translators, and automatic aae-to-wae translation systems do not exist, to the
best of our knowledge. Therefore, as a proof of concept, we set up an
aae-to-wae “translation” system using the few-shot capabilities of the GPT-3
language model Brown et al. (2020). Under this mechanism, we prompt GPT-3 with
four translation pairs (taken from Spears, 1998) and an aae tweet from our
training data, and generate its wae “translation”. The list of prompts, as
well as further details, are provided in Appendix C. Note that we do _not_
recommend this approach to build large scale parallel data for dialects, as
discussed under ethical implications and limitations.
Next, as per our heuristic, we only relabel toxic aae tweets whose wae
translation is predicted as non-toxic by either our vanilla classifier trained
on the original Founta et al. (2018) dataset, or an identical classifier
trained on the wae translated tweets. The resulting dataset (aae-relabeled) is
the same size as the original dataset, but with 954 (12%) out of 8260 toxic
aae tweets relabeled as non-toxic (examples in Table 6). To assess the
validity of the relabeling, the first three authors manually annotated
toxicity of 50 randomly selected relabeled tweets. On average, authors agreed
with 84% of the relabeling decisions.
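The relabeling heuristic above can be sketched as follows; the `translate` callable stands in for the GPT-3 system, and the field names and boolean classifier interfaces are assumptions for illustration:

```python
def relabel_aae(dataset, translate, clf_orig, clf_wae):
    """Flip the label of toxic aae tweets whose wae translation is judged
    non-toxic by either classifier; all other examples are left unchanged."""
    relabeled = []
    for ex in dataset:
        ex = dict(ex)  # copy, so the input dataset is not mutated
        if ex["is_aae"] and ex["toxic"]:
            wae_text = translate(ex["text"])
            if not clf_orig(wae_text) or not clf_wae(wae_text):
                ex["toxic"] = False
        relabeled.append(ex)
    return relabeled
```

Note that only the toxic-to-non-toxic direction is considered, matching the stated heuristic.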
Then, we evaluate the dialectal bias of aae-relabeled and quantify the dialect
and racial prediction biases from a RoBERTa-large classifier trained on aae-
relabeled, following §5. As shown in the last row of Table 4, this relabeling
scheme decreases dialectal bias more than any other debiasing method,
specifically as measured by the FPR on aae tweets, with one point drop in
$F_{1}$ score. The $F_{1}$ score on the “gold” test data (Table 4) is not
fully reliable, as the test data contain label biases and better performance
could come from exploiting these biases. As shown in Table 5, the model trained on
aae-relabeled has the lowest racial disparity in toxicity flagging rates
compared to all other methods.
These results highlight that debiasing methods are much less effective at
mitigating dialectal dataset biases compared to data relabeling. For future
investigations, we recommend obtaining human-written aae-wae pairs (e.g., as
done by Groenwold et al., 2020). Additionally, to ensure less biased toxicity
labeling, we recommend recruiting aae speakers or experts to avoid over-
identification of aae markers as toxic Spears (1998); Croom (2013).
Alternatively, we recommend exploring more holistic representations of social
biases or toxicity (e.g., Social Bias Frames; Sap et al., 2020).
### Ethical Implications & Limitations
The above synthetic setting is meant to illustrate the role of labeling
quality on biases in annotations. We strongly caution against using this
approach in real-world applications, such as building parallel datasets for
dialects. First, due to how its training data was selected, GPT-3 has likely
not been exposed to many African American English varieties during training Jo
and Gebru (2020). Second, pretrained language models are known to generate
toxic language at non-trivial rates Gehman et al. (2020), which could cause
differential toxicity in the translations.
## 7 Related Work
#### Debiasing Toxicity Detection
As the popularity of hate speech and toxic language detection systems has
grown, several biases have been found in datasets and models, spurring various
debiasing efforts to mitigate these individual biases (e.g., gender bias,
racial bias; Park et al., 2018; Sap et al., 2019; Davidson et al., 2019). Some
work tackles identity-based biases, e.g., using data re-balancing Dixon et al.
(2018), or adversarial feature learning Vaidya et al. (2019). Less work has
tackled racial or dialectal bias. Notably, Xia et al. (2020) use adversarial
training to prevent the model from associating toxicity with aae, showing only
small improvements in fairness. Based on those results, we do not explore
adversarial methods, opting instead for ensemble-based methods of predefined
bias reduction. In contemporary work, Mozafari et al. (2020) use a re-
weighting mechanism, which shows some effects in debiasing racial bias. We
leave it for future work to evaluate this method in our setting. In contrast
to all previous work, our experiments also measure the effectiveness of bias-
agnostic methods.
#### Other General Debiasing Methods
Several approaches for debiasing NLU tasks have been proposed lately. Some
approaches rely on adversarial training to remove protected attributes (e.g.
gender or race), from a model’s internal representations (Zhang et al., 2018;
Wang et al., 2019; Xia et al., 2020). Other approaches include confidence
regularization Utama et al. (2020), as well as other product of expert
approaches He et al. (2019); Karimi Mahabadi et al. (2020) similar to the
debiased training approach from Clark et al. (2019), which is the only
debiased training we employ due to its relatively strong performance.
## 8 Conclusion
We investigate whether toxic language detection systems can be debiased using
recently introduced methods for debiasing text classification in NLU tasks.
Focusing on two types of biases, lexical and dialectal, our experiments show
that these methods face significant challenges in reducing the biased behavior
in toxicity detectors. This indicates that biases in toxic language detection
might be different in nature compared to spurious associations studied in
typical NLU settings. We studied a synthetic scheme for relabeling examples
with potential dialectal biases; our results indicate that correcting noisy
labels results in better bias reduction. Our findings suggest that instead of
solely relying on development of automatic debiasing for existing, imperfect
datasets, future work focus primarily on the quality of the underlying data
for hate speech detection, such as accounting for speaker identity and
dialect. Indeed, such efforts could act as an important step towards making
systems less discriminatory, and hence safe and usable.
## Acknowledgments
We thank the anonymous reviewers and Laura Vianna for helpful comments on this
work. This research was supported in part by NSF grants 1813153 and 1714566.
## References
* Blodgett et al. (2020) Su Lin Blodgett, Solon Barocas, Hal Daumé, III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of “bias” in NLP. In _Proc. of ACL_.
* Blodgett et al. (2016) Su Lin Blodgett, Lisa Green, and Brendan O’Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In _Proc. of EMNLP_.
* Bowman et al. (2015) Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015\. A large annotated corpus for learning natural language inference. In _Proc. of EMNLP_.
* Bras et al. (2020) Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew Peters, Ashish Sabharwal, and Yejin Choi. 2020. Adversarial filters of dataset biases. In _Proc. of ICML_.
* Brown et al. (2020) Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In _Proc. of NeurIPS_.
* Clark et al. (2019) Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Don’t take the easy way out: Ensemble based methods for avoiding known dataset biases. In _Proc. of EMNLP_.
* Croom (2013) Adam M Croom. 2013. How to do things with slurs: Studies in the way of derogatory words. In _Language & communication_.
* Davidson et al. (2019) Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In _Abusive Language Workshop (at ACL)_.
* Davidson et al. (2017) Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In _Proceedings of the International AAAI Conference on Web and Social Media_.
* Dias Oliva et al. (2020) Thiago Dias Oliva, Dennys Marcelo Antonialli, and Alessandra Gomes. 2020. Fighting hate speech, silencing drag queens? artificial intelligence in content moderation and risks to lgbtq voices online. In _Sexuality & Culture_.
* Dinan et al. (2019) Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. In _Proc. of EMNLP_.
* Dixon et al. (2018) Lucas Dixon, John Li, Jeffrey Scott Sorensen, Nithum Thain, and L. Vasserman. 2018\. Measuring and mitigating unintended bias in text classification. In _Proc. of AES_.
* Dynel (2012) Marta Dynel. 2012. Swearing methodologically : the (im)politeness of expletives in anonymous commentaries on youtube. In _Journal of English Studies_.
* Dynel (2015) Marta Dynel. 2015. The landscape of impoliteness research. In _Journal of Politeness Research_.
* Founta et al. (2018) Antigoni-Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In _Proc. of WSM_.
* Gehman et al. (2020) Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020\. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. In _Findings of EMNLP_.
* Green (2002) Lisa Green. 2002. _African American English: A Linguistic Introduction_. Cambridge University Press.
* Groenwold et al. (2020) Sophie Groenwold, Lily Ou, Aesha Parekh, Samhita Honnavalli, Sharon Levy, Diba Mirza, and William Yang Wang. 2020. Investigating African-American vernacular english in Transformer-Based text generation. In _Proc. of EMNLP_.
* Gururangan et al. (2018) Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In _Proc. of NAACL_.
* Guynn (2020) Jessica Guynn. 2020. What civil rights groups want from facebook boycott: Stop hate speech and harassment of black users.
* Hardt et al. (2016) Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of opportunity in supervised learning. In _Proc. of NeurIPS_.
* He et al. (2019) He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by fitting the residual. In _EMNLP Workshop on Deep Learning Approaches for Low-Resource NLP_.
* Jo and Gebru (2020) Eun Seo Jo and Timnit Gebru. 2020. Lessons from archives: strategies for collecting sociocultural data in machine learning. In _Proc. of FAT_.
* Karimi Mahabadi et al. (2020) Rabeeh Karimi Mahabadi, Yonatan Belinkov, and James Henderson. 2020. End-to-end bias mitigation by modelling biases in corpora. In _Proc. of ACL_.
* Kasper (1990) Gabriele Kasper. 1990. Linguistic politeness: current research issues. In _Journal of Pragmatics_. Elsevier.
* Kim et al. (2020) Jae Yeon Kim, Carlos Ortiz, Sarah Nam, Sarah Santiago, and Vivek Datta. 2020. Intersectional bias in hate speech and abusive language datasets.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. In _arXiv preprint arXiv:1907.11692_.
* Mozafari et al. (2020) Marzieh Mozafari, Reza Farahbakhsh, and Noël Crespi. 2020. Hate speech detection and racial bias mitigation in social media based on bert model. In _PLOS ONE_. Public Library of Science.
* Park et al. (2018) Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. In _Proc. of EMNLP_.
* Preoţiuc-Pietro and Ungar (2018) Daniel Preoţiuc-Pietro and Lyle Ungar. 2018. User-level race and ethnicity predictors from twitter text. In _Proc. of COLING_.
* Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In _Proc. of EMNLP_ , pages 2383–2392.
* Roberts (2019) Sarah T Roberts. 2019. _Behind the screen: Content moderation in the shadows of social media_. Yale University Press.
* Rosa (2019) Jonathan Rosa. 2019. _Looking like a language, sounding like a race_. Oxford University Press.
* Rosa and Flores (2017) Jonathan Rosa and Nelson Flores. 2017. Unsettling race and language: Toward a raciolinguistic perspective. In _Language In Society_. Cambridge University Press.
* Ross et al. (2017) Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, and Michael Wojatzki. 2017. Measuring the reliability of hate speech annotations: the case of the european refugee crisis. In _NLP 4 CMC Workshop_.
* Sap et al. (2019) Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In _Proc. of ACL_.
* Sap et al. (2020) Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In _Proc. of ACL_.
* Schwartz et al. (2017) Roy Schwartz, Maarten Sap, Ioannis Konstas, Li Zilles, Yejin Choi, and Noah A Smith. 2017. The effect of different writing tasks on linguistic style: A case study of the roc story cloze task. In _Proc. of CoNLL_.
* Spears (1998) Arthur K Spears. 1998. African-American language use: Ideology and so-called obscenity. In _African-American English: Structure, History and Use_. Routledge New York.
* Swayamdipta et al. (2020) Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In _Proc. of EMNLP_.
* Technau (2018) Björn Technau. 2018. Going beyond hate speech: The pragmatics of ethnic slur terms. _Lodz Papers in Pragmatics_ , 14(1):25–43.
* Utama et al. (2020) Prasetya Ajie Utama, Nafise Sadat Moosavi, and Iryna Gurevych. 2020. Mind the trade-off: Debiasing NLU models without degrading the in-distribution performance. In _Proc. of ACL_.
* Vaidya et al. (2019) Ameya Vaidya, Feng Mai, and Yue Ning. 2019. Empirical analysis of multi-task learning for reducing model bias in toxic comment detection. In _Proc. of ICWSM_.
* Vidgen et al. (2019) Bertie Vidgen, Helen Margetts, and Alex Harris. 2019. How much online abuse is there? In _Alan Turing Institute_.
* Wang et al. (2019) Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, and V. Ordonez. 2019. Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations. In _Proc. of ICCV_.
* Williams et al. (2018) Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In _Proc. of NAACL_.
* Wolf et al. (2019) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing.
* Xia et al. (2020) Mengzhou Xia, Anjalie Field, and Yulia Tsvetkov. 2020. Demoting racial bias in hate speech detection. In _Proc. of Social NLP_.
* Yasin (2018) Danyaal Yasin. 2018. Black and banned: Who is free speech for?
* Zhang et al. (2018) Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In _Proc. of AES_. Association for Computing Machinery.
## Appendix
## Appendix A Further Details for Models
### A.1 Model Debiasing
The learned-mixin ensemble allows the model to explicitly determine how much
to trust the bias given the input:
$\hat{p}_{i}=\text{softmax}\{\log(p_{i})+g(\mathbf{x}_{i})\log b_{i}\}$
where $\mathbf{x}_{i}$ is the $i$th input text, $p_{i}$ and $b_{i}$ are the
toxicity predictions produced by RoBERTa and the bias-only model,
respectively, and $g$ is a parametric function, defined as
$\text{softplus}(\mathbf{w}\cdot\mathbf{h}_{i})$, where $\mathbf{w}$ is a
learned vector, $\mathbf{h}_{i}$ is the last hidden layer of the model for
example $\mathbf{x}_{i}$, and $\text{softplus}(x)=\log(1+\exp{x})$.
To prevent the learned-mixin ensemble from ignoring $b_{i}$, Clark et al.
(2019) add an entropy penalty ($H$) to the loss:
$R=\alpha H(\text{softmax}\{g(\mathbf{x}_{i})\log b_{i}\})$
where $H(z)=-\sum_{j}z_{j}\log z_{j}$ is the entropy and $\alpha$ is a
hyperparameter.
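The combination above can be sketched numerically for a single example; this pure-Python version is illustrative only (the actual implementation operates on batched tensors):

```python
import math

def softplus(x):
    return math.log1p(math.exp(x))

def learned_mixin(p, b, h, w):
    """Learned-mixin combination: softmax(log p + g(x) * log b),
    with g(x) = softplus(w . h) computed from the last hidden state h."""
    g = softplus(sum(wi * hi for wi, hi in zip(w, h)))
    logits = [math.log(pc) + g * math.log(bc) for pc, bc in zip(p, b)]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

When the bias-only model is uniform ($b$ constant across classes), adding $g\log b$ shifts all logits equally, so the ensemble output equals $p$; a confident bias model pulls the ensemble toward its prediction.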
### A.2 Data Filtering
For the data filtering methods, we first filter the data to 50% of the
original size, as in Swayamdipta et al. (2020). Then we further downsample
each dataset to 33% of the original data so that every training set has the
same toxic ratio as the original training set. This step avoids confounding
our results with differing toxic ratios among the training sets.
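The ratio-preserving downsampling step can be sketched as stratified sampling over the two label groups (field names are illustrative assumptions):

```python
import random

def downsample_keep_ratio(examples, frac, seed=0):
    """Randomly keep `frac` of the examples while preserving the
    toxic/non-toxic ratio of the original set."""
    rng = random.Random(seed)
    toxic = [ex for ex in examples if ex["toxic"]]
    clean = [ex for ex in examples if not ex["toxic"]]
    return (rng.sample(toxic, round(len(toxic) * frac))
            + rng.sample(clean, round(len(clean) * frac)))
```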
### A.3 Training Settings
For all the experiments, we fine-tune RoBERTa-large Liu et al. (2019) over the
corresponding corpus with one GTX2080 Ti. We use the default hyperparameters
as provided in the HuggingFace Transformers library (Wolf et al., 2019), with
two major changes: we use a learning rate of $10^{-5}$ and 8 batch size in all
experiments.
### A.4 Prediction Combining with Bias-only Model
To rule out the possibility that LMixin-ToxTrig/OnI is simply not well
trained, which could explain the decrease in models’ in-distribution
performance, we use the joint prediction from the main and bias-only models
on the in-distribution test set; they obtain 94.15% and 94.17% accuracy,
respectively, which is competitive with the performance shown in Table 2.
## Appendix B Alternative Dataset of Toxic Language
Davidson et al. (2017) collected data from Twitter, starting with 1,000 terms
from HateBase (an online database of hate speech terms) as seeds; this
collection process therefore relies on lexical biases. We find that performing
data filtering over this dataset leads to degenerate behaviour. Specifically,
as shown in Table 7, the easy region demonstrates the least spurious
correlation due to its heavily skewed class distribution, which further
prevents us from downsampling to control the toxic ratio. We also train
LMixin-ToxTrig and LMixin-Dialect over the dataset. Table 8 shows that the FPR
of the debiased model increases instead, except for the OI category, and
Table 9’s results behave in line with Table 4.
| Toxic Ratio | $R_{\textsc{nOI}}\downarrow$ | $R_{\textsc{OI}}\downarrow$ | $R_{\textsc{OnI}}\downarrow$ | $R_{\textsc{aae}}\downarrow$
---|---|---|---|---|---
Original$\dagger$ | 0.8308 | 0.0287 | 0.4320 | 0.2610 | 0.4061
Random | 0.8312 | 0.0288 | 0.4312 | 0.2621 | 0.4011
AFLite | 0.7669 | 0.0342 | 0.4708 | 0.2835 | 0.4236
DataMaps-Ambig. | 0.6736 | 0.0493 | 0.4683 | 0.3230 | 0.4445
DataMaps-Hard | 0.6645 | 0.0521 | 0.4533 | 0.3190 | 0.4426
DataMaps-Easy | 0.9972 | 0.0135 | 0.0771 | 0.0396 | 0.0928
Table 7: Lexical and dialectal associations between toxicity in the original dataset Davidson et al. (2017) and various filtered counterparts. Random, AFLite, and DataMaps all contain only 50% of the original data after filtering. (We could not perform downsampling on these datasets due to their heavily skewed label distribution.) A lower Pearson $R$ correlation value indicates fewer superficial patterns in the dataset, and thus less bias. The easy subset gives the best results here due to its severely imbalanced label distribution.
| Test | nOI | OI | OnI
---|---|---|---|---
| Acc.$\uparrow$ | $F_{1}\uparrow$ | $F_{1}\uparrow$ | FPR${}_{\textsc{nOI}}\downarrow$ | $F_{1}\uparrow$ | FPR${}_{\textsc{OI}}\downarrow$ | $F_{1}\uparrow$ | FPR${}_{\textsc{OnI}}\downarrow$
Original | 96.37 | 97.81 | 96.42 | 25.00 | 99.86 | 57.14 | 99.57 | 63.64
LMixin-ToxTrig | 96.15 | 97.69 | 96.19 | 28.57 | 99.78 | 42.86 | 99.28 | 72.73
Table 8: Lexical bias removal evaluation for debiasing methods. Original refers to the model trained over the full training set. The test set is further categorized into tweets containing relevant ToxTrig words. $F_{1}$ indicates models’ performance while the false positive rate (FPR${}_{\textbf{*}}$) reflects models’ bias; the lower the FPR${}_{\textbf{*}}$, the less biased the model tends to be.
Debiasing Method | | Test
---|---|---
| $R_{\textsc{aae}}$ | Acc. $\uparrow$ | $F_{1}\uparrow$ | FPR${}_{\textsc{aae}}\downarrow$
Original | 0.4079 | 96.37 | 97.81 | 24.76
LMixin-Dialect | - | 96.48 | 97.88 | 22.86
Table 9: Dialectal bias evaluation for all debiasing methods, on both the in-
distribution test set and the out-of-distribution dialect and race priming
test sets. In addition to accuracy and $F_{1}$, we report the false positive
rate with respect to tweets in aae (FPR${}_{\textsc{aae}}$), reflecting
dialectal bias (lower is less biased). Each method is based on a RoBERTa-
large classifier.
## Appendix C Few-shot aae-to-wae Translation
Note that we do _not_ recommend the following approach to build large scale
parallel data for dialects, as discussed under ethical implications and
limitations (§6).
We use GPT-3 Brown et al. (2020) to create a few-shot aae-to-wae translation
system, using the following set of example translation pairs drawn from Spears
(1998):
> aae: Get your triflin’ ass out of here.
> wae: Get your trifling self out of here.
> aae: I saw his ass yesterday.
> wae: I saw him yesterday.
> aae: His ass is gonna get fried.
> wae: He is gonna get fried.
> aae: Wassup, nigga?
> wae: What’s up bro?
> aae: $\langle$tweet$\rangle$
> wae:
Note that Spears (1998) refers to wae as White language varieties, and deals
with English prevalent in the United States.
We prepend the formatted example pairs to each aae tweet in our training data,
and generate the translation from GPT-3 using top-0.95 nucleus sampling with a
temperature of 0.5. Prompts, formatting, and generation parameters were chosen
based on manual inspection of the output.
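The prompt construction described above can be sketched as follows. This is an illustrative reconstruction, not our exact code; only the prompt string is built here, using a subset of the Spears (1998) exemplar pairs, and the sampling settings (top-$p$ 0.95, temperature 0.5) would be passed to the GPT-3 API separately.

```python
# Subset of the exemplar translation pairs quoted above
PAIRS = [
    ("Get your triflin' ass out of here.", "Get your trifling self out of here."),
    ("I saw his ass yesterday.", "I saw him yesterday."),
    ("His ass is gonna get fried.", "He is gonna get fried."),
]

def build_prompt(tweet):
    """Prepend the formatted example pairs to a tweet; GPT-3 then
    completes the final 'wae:' line with the translation."""
    lines = []
    for aae, wae in PAIRS:
        lines.append(f"aae: {aae}")
        lines.append(f"wae: {wae}")
    lines.append(f"aae: {tweet}")
    lines.append("wae:")
    return "\n".join(lines)

print(build_prompt("<tweet text here>"))
```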
|
# Stimuli-Sensitive Hawkes Processes for Personalized Student Procrastination
Modeling
Mengfan Yao
Department of Computer Science, University at Albany - SUNY
<EMAIL_ADDRESS>
Siqian Zhao
Department of Computer Science, University at Albany - SUNY
<EMAIL_ADDRESS>
Shaghayegh Sahebi
Department of Computer Science, University at Albany - SUNY
<EMAIL_ADDRESS>
Reza Feyzi Behnagh
Department of Educational Theory & Practice, University at Albany - SUNY
<EMAIL_ADDRESS>
(2021)
###### Abstract.
Student procrastination and cramming for deadlines are major challenges in
online learning environments, with negative educational and well-being side
effects. Modeling student activities in continuous time and predicting their
next study time are important problems that can help in creating personalized
timely interventions to mitigate these challenges. However, previous attempts
on dynamic modeling of student procrastination suffer from major issues: they
are unable to predict the next activity times, cannot deal with missing
activity history, are not personalized, and disregard important course
properties, such as assignment deadlines, that are essential in explaining the
cramming behavior. To resolve these problems, we introduce a new personalized
stimuli-sensitive Hawkes process model (SSHP), by jointly modeling all
student-assignment pairs and utilizing their similarities, to predict
students’ next activity times even when there are no historical observations.
Unlike regular point processes that assume a constant external triggering
effect from the environment, we model three dynamic types of external stimuli,
according to assignment availabilities, assignment deadlines, and each
student’s time management habits. Our experiments on two synthetic datasets
and two real-world datasets show superior performance in future activity
prediction, compared with state-of-the-art models. Moreover, we show that our
model achieves a flexible and accurate parameterization of activity
intensities in students.
††copyright: acmcopyright ††journalyear: 2021 ††conference: Web Conference 2021
## 1\. Introduction
Academic procrastination can be defined as postponing the planned studies,
despite being aware of its negative consequences (Moon and Illingworth, 2005).
This behavior is common in students, particularly in online education
settings, in which students have to self-regulate their learning and studying
(Lee and Choi, 2011). Although there is no formal quantitative definition for
procrastination, traces of this behavior can be observed by looking at
students’ study behavior, such as cramming their studies as deadlines approach
(Perrin et al., 2011). However, despite the negative side-effects of
procrastination on students, such as on their academic performance and
psychological well-being (Steel, 2007), dynamic data-driven approaches that
can model these indicator behaviors in students are scarce.
Past research has mainly described student procrastination by summarizing
student activities into static features (Perrin et al., 2011; Cerezo et al.,
2017), which cannot fully represent the dynamics of students’ behavior through
time. More recently, sequential models of student behaviors have been used in
the study of procrastination behaviors (Park et al., 2018; Yao et al., 2020).
However, these models fail to capture an important aspect of the cramming
behavior: its relation to triggers such as course deadlines and availability
of assignments, as given in the class schedule. Additionally, these models are
not personalized and do not take factors related to individual students, such
as their studying habits, into account. Finally, they cannot deal with missing
activity data and fail to estimate students’ next study times or predict their
behavior in relation to various tasks and assignments. An ideal student
activity model should be able to capture students’ response to the major
events in the course, be personalized to learn student-specific behaviors, and
be able to predict the students’ future activity intensities as a way for
early detection of procrastination, even if student sequence data is not
completely observed.
Meanwhile, Hawkes processes (Hawkes, 1971), as a family of point processes,
have shown great potential in dealing with complicated sequential data in many
real-world applications, including in the education domain (Yao et al., 2020).
However, the state-of-the-art Hawkes process models used in the education
domain suffer from the above limitations, for two main reasons. First,
external stimuli and their triggering effects are conventionally parameterized
as a constant, which results in ignoring factors such as the class schedule and
personalized student habits. For example, an assignment deadline, as one form
of external stimulus, may only start to show its triggering effect when it is
approaching. Students’ personal habits (e.g., log-in time and frequency), which
reflect their time management skills, can also evolve over time.
Secondly, the majority of Hawkes process models treat different sequences
independently. As a result, only future activities of the sequences with
historical observations can be predicted, whereas the future of unobserved
sequences cannot be inferred.
To address the above-mentioned limitations, we propose the stimuli-sensitive
Hawkes process (SSHP), which models the external course stimuli in addition to
the internal activity stimuli, is personalized, and can predict the next
activity times towards each assignment for students. In SSHP, we represent
activities in each student-assignment pair as a Hawkes process. To tackle the
first aforementioned limitation, our model is designed to capture three types
of external stimuli as parameterized functions of time: the effect of
assignment availability, assignment deadline, and each student’s personal
study time and frequency habits. To deal with the second limitation, SSHP
jointly models all student-assignment pairs, imposing a low-rank structure
between student and assignment parameters in the model. As a result, it can
learn a personalized parameterization, even for unobserved sequences, based on
the similarities shared between the students as well as the assignments. Our
extensive experiments on two synthetic datasets and two real-world datasets
show a significant performance improvement in future activity predictions,
compared with the state-of-the-art models, whether a sequence’s data is
partially or completely missing. We perform ablation studies on SSHP and show
that all aspects of our model, including the external and internal parts,
contribute to its superior performance. Finally, we show
the meaningful procrastination patterns that are captured by SSHP parameters,
using clustering analysis and studying their associations with student
performance in the course.
## 2\. Related Work
Procrastination Modeling in Education Domain. As there is no quantitative
definition for procrastination behavior, in most of the recent educational
data mining literature, procrastination-related behavior has been summarized
by curating time-related features from student interactions in the course.
These studies aim to evaluate the relationships between these time-related
features and student performance, and do not model temporal aspects of
procrastination (Baker et al., 2016; Cerezo et al., 2017; Kazerouni et al.,
2017; Agnihotri et al., 2020). For example, Asarta et al. examined students’
log data from an online course using measures such as anti-cramming, pacing,
completeness, etc. (Asarta and Schmidt, 2013). However, such methods are
static and cannot describe students’ varying behaviors over time. As another
example, Park et al. classify students into procrastinators and non-
procrastinators by formulating a measure via a mixture model of per-day
student activity counts during each week of the course (Park et al., 2018).
However, this approach cannot model non-homogeneously spaced deadlines in a
course.
As none of these models consider the timing of students’ activities, they are
not able to predict when future activities will happen. Sequential data
modeling via point processes could potentially deal with this limitation;
however, it had not been applied to procrastination modeling until recently.
To the best of our knowledge, the most related attempt comparable to ours was
made in (Yao et al., 2020), where Yao et al. modeled each student’s activity
sequence as a Hawkes process and related procrastination to the mutual
excitation among activity types. This work does not predict students’ unseen
activities; rather, it proposed a procrastination measure based on the learned
parameters, which was shown to correlate better with students’ grades than
conventional delay measures.
Hawkes process and modeling scenarios. Hawkes processes, as a popular family
of point processes, model two types of activities: activities that are
triggered by external stimuli, and activities that are self-excited by the
historical activities. The intensity of these two types of activities is
usually parameterized by a base rate function and an excitation function,
respectively. To describe the complicated dynamics of real-world activity
sequences, different state-of-the-art parameterizations have been proposed.
For example, Rizoiu et al. modeled the watching history of a YouTube video as
a Hawkes process, and proposed to use the number of shares of a video on
YouTube scaled by a constant to represent the base rate (Rizoiu et al., 2017).
In another example, Bao et al. proposed to use a sinusoidal function to
capture the periodical rise-and-fall patterns of user activities on social
media (Bao, 2016). More recently, neural Hawkes models have been proposed to
allow higher model capacity for learning arbitrary and nonlinear distributions
of the history (Du et al., 2016; Mei and Eisner, 2017; Xiao et al., 2017). For
example, Du et al. proposed to use RNN to model the arrival times of a given
sequence and characterized the intensity as a function of the embedded hidden
cell representations (Du et al., 2016). Even though such neural-based Hawkes
models allow for less bias and more flexibility than the traditional
parametric models, they do not provide meaningful interpretations of the
activity arrival patterns, which could be important to some scenarios such as
procrastination analysis in educational settings.
In terms of the applications of sequential data modeling via Hawkes models,
the majority of the state-of-the-art Hawkes models treat each individual
sequence as an independent input, in other words, no relationship among the
sequences is assumed. As a result, sequences without any observed history are
usually excluded from the study. To tackle this problem, a few state-of-the-
art Hawkes process approaches model all sequences jointly by assuming
underlying similarity among the sequences. For example, Du et al. modeled each
user-product pair, i.e. the collection of interactions of a user to a product,
as a Hawkes process. By assuming the similarity between users and products,
their model learns a low-rank representation of all Hawkes processes,
including those that have no historical purchase data (Du et al.,
2015). Other similar approaches have also been proposed to measure sequence
similarity by using auxiliary features (He et al., 2015; Li et al., 2018;
Shang and Sun, 2018). However, auxiliary information in education domains is
usually excluded from the data due to privacy concerns.
## 3\. Stimuli-Sensitive Hawkes Process (SSHP)
### 3.1. Problem Formulation
Consider the case where there are $U$ students and $N$ assignments in a
course. We assume that the time when student $u_{i}$ interacts with assignment
$a_{j}$ depends on two things: (1) the effects of external stimuli (e.g., the
deadline of $a_{j}$ is approaching, therefore student $u_{i}$ starts to review
the lectures and practices on the quizzes), and (2) the self-exciting nature
of the events, in other words, past events can trigger the future ones (e.g.,
student $u_{i}$ decides to work on assignment $a_{j}$ because they just
watched the lecture video that is related to $a_{j}$). To capture these
triggering effects, which can be important in explaining students’ behaviors
in the course, we propose to model the collection of activity timestamps of
student $u_{i}$’s interactions with assignment $a_{j}$, or student-assignment
pair $(u_{i},a_{j})$, as a point process (Sec. 3.2), characterized by a
function that captures the effects of both external stimuli and the effects of
self-excitement (Sec. 3.3). All the important notations used in the following
section are summarized in Tbl. 1.
Table 1. A summary of important notations
Function | $\lambda$: intensity | Vector | $\mathbf{v}=(v_{i})_{N}$
---|---|---|---
 | $f$: density | | $\mathbf{b}=(b_{i})_{N}$
 | $\mu$: base rate | | $\mathbf{c}=(c_{i})_{N}$
 | $s(t)$: self-excitement | | $\mathbf{p}=(p_{i})_{N}$
 | $\mathcal{L}$: loss | Matrix | $A=(\alpha_{ij})_{U\times N}$
 | $\mathcal{M}$: proximal operator | | $M=(m_{ij})_{U\times N}$
 | $P$: projection function | | $\Gamma=(\gamma_{ij})_{U\times N}$
Scalar | $\alpha$: self-exciting coef. | Superscript | $d$: deadline
 | $\beta$: decay coef. | | $h$: student habit
 | $m$: deadline effects | | $o$: assignment opening
 | $\gamma$: coef. in $\mu(t)$ | | $S$: search point
 | $v$: shape parameter | Set | $X$: event sequence
 | $p$: peak in $\mu^{h}$ | | $\mathcal{O}$: observed sequences
 | $b$: base of $\mu^{o}$ | | $\Theta$: matrix parameter set
 | $c$: offset in $\mu^{h}$ | | $\phi$: vector parameter set
### 3.2. Modeling Student-Assignment Activity Timestamps
Formally, given a student-assignment pair $(u_{i},a_{j})$, we describe it as
the timestamps of all student $u_{i}$’s interactions with assignment $a_{j}$:
$X_{ij}=\\{x_{ij}^{\tau}|\tau=1,...,K_{ij}\\}$. (For simplicity, without
causing any confusion, we omit the individual subscripts $i$ and $j$ in the
rest of this section.) Given time $t$, let $\mathcal{H}_{t}$ denote the
historical observations in $X$ up to, but not including, time $t$, i.e.
$\mathcal{H}_{t}=\\{x^{\tau}|\tau=1,...,n\\}$, where $x^{n}$ is the time of
the last event that took place before time $t$. If the conditional p.d.f.
(probability density function) of the next event’s time is defined as
$f^{*}:=f(t|\mathcal{H}_{t})$, the joint p.d.f. for a realization follows:
(1) $\displaystyle
f(x^{1},...,x^{K})=\prod_{\tau=1}^{K}f(x^{\tau}|\mathcal{H}_{\tau-1})=\prod_{\tau=1}^{K}f^{*}(x^{\tau}).$
The above conditional p.d.f. is one way to characterize a particular Hawkes
process; however, it can be inconvenient for model design and interpretation
(Daley and Vere-Jones, 2007). Alternatively, in this work, we adopt a more
commonly-used function for the characterization of a Hawkes process, i.e., the
conditional intensity function, which can be shown to be a function of
$f^{*}(t)$ and its corresponding cumulative distribution function $F^{*}(t)$:
(2)
$\displaystyle\lambda(t)=\frac{f^{*}(t)}{1-F^{*}(t)}=\frac{f^{*}(t)}{1-\int_{x^{n}}^{t}f^{*}(s|\mathcal{H}_{x^{n}})ds}$
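As a quick sanity check of Eq. 2 (not part of the model itself): for a memoryless process whose next-event density is exponential with constant rate $\lambda$, the hazard $f^{*}(t)/(1-F^{*}(t))$ recovers that constant rate at every $t$, matching the intuition that the conditional intensity generalizes a constant event rate. The numeric values below are arbitrary.

```python
import math

# Toy values: constant rate lam, last event at time x_n
lam, x_n = 0.7, 2.0
for t in [2.1, 3.0, 5.5]:
    f_star = lam * math.exp(-lam * (t - x_n))   # conditional p.d.f.
    F_star = 1.0 - math.exp(-lam * (t - x_n))   # conditional c.d.f.
    hazard = f_star / (1.0 - F_star)            # Eq. (2)
    assert abs(hazard - lam) < 1e-9
print("hazard equals the constant rate at all tested times")
```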
### 3.3. Parameterization of External Stimuli and Self-excitement
As mentioned above, we assume that there are two types of activities in the
sequence of a given student-assignment pair, i.e., activities that are excited
by the external stimuli, and those that are self-excited by the previous
activities. The intensities of both types of activities are respectively
parameterized by a base rate function and an excitation function, defined as
follows:
Modeling external stimuli. We parameterize the following three types of
external stimuli that can trigger students’ interactions with the assignment.
Firstly, the effect of student habit: we assume that each student interacts
with the course based on their own periodic studying schedule. For example,
some students habitually log in to the course at noon every day, but some
prefer to study after midnight. Secondly, the decaying effect of the
assignment availability (opening): we assume that students’ activities can be
triggered once the assignment is posted. However, this effect decays over
time. For example, once an assignment is posted, students may log in and check
the assignment requirements or deadlines, or revisit it later for the detailed
descriptions. However, over time, this effect will die out and will be
dominated by other stimuli. Finally, the deadline of an assignment: we assume
that student activities can be triggered by the deadline, and this effect gets
stronger as the deadline approaches and wears off eventually.
Formally, we define the base rate intensity for students at each time $t$ as a
combination of the above stimuli, as in Eq. 3.
(3) $\displaystyle\mu(t)$
$\displaystyle=\gamma^{d}\mu^{d}(t)+\gamma^{o}\mu^{o}(t)+\gamma^{h}\mu^{h}(t),$
(4) $\displaystyle\mu^{h}(t)$ $\displaystyle=\sin(\frac{2\pi}{s}(t+p))+c,$ (5)
$\displaystyle\mu^{o}(t)$ $\displaystyle=b^{t/s},$ (6)
$\displaystyle\mu^{d}(t)$ $\displaystyle=\begin{cases}\frac{1}{\sqrt{2\pi
v}(d-m-t/s)}e^{-\frac{(\ln{(d-m-t/s)})^{2}}{v}}&\text{if $t/s<d-m$,}\\\
0&\text{if $t/s\geq d-m$}.\end{cases}$
Specifically, Eq. 4 models the activity intensity triggered by student habits
as a sinusoidal function. In other words, $\mu^{h}(t)$ captures a periodicity
of length $s$ that peaks at $p$. $c$ can be interpreted as the minimum number
of activities triggered by student habits, which acts as a base for
$\mu^{h}(t)$. Eq. 5 models the opening effect of the assignment as an
exponential function parameterized by $b$, with a decay speed of $1/b$ over
time scaled by $s$. This formulation results in exponentially fewer
activities, as a result of assignment posting, as time passes. Eq. 6 models
the effect of the deadline via a reversed log-normal function. $d$ here is the
known time of the assignment deadline, and $d-m$ represents the time when the
deadline’s triggering effect on student activities is over. As a result, $m$
represents the difference between the end of the deadline’s effect and the
deadline. If the effect of the deadline ends after the actual time of the
deadline (e.g., with late submissions), $m$ is negative; otherwise, $m\geq 0$.
The non-negative $v$ controls how intense the activities become as the
deadline approaches and how fast this effect decays after the peak. This
formulation represents that student activity intensities will peak around
their last assignment-related activity, which is close to the deadline, either
before or after it. $\gamma^{h}$, $\gamma^{o}$ and $\gamma^{d}$ respectively
are the weight coefficients that describe the importance of $\mu^{h}(t)$,
$\mu^{o}(t)$ and $\mu^{d}(t)$.
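The three stimulus terms of Eqs. 3–6 can be sketched directly as code. This is an illustrative reimplementation rather than our released code; all parameter values in the usage line are arbitrary toy choices, and the deadline term is zero wherever the logarithm's argument $d-m-t/s$ is non-positive.

```python
import math

def mu_habit(t, s, p, c):
    # Eq. (4): periodic study-habit term with period s, phase p, and floor c
    return math.sin(2 * math.pi / s * (t + p)) + c

def mu_opening(t, s, b):
    # Eq. (5): assignment-opening term; decays over time since 0 < b < 1
    return b ** (t / s)

def mu_deadline(t, s, d, m, v):
    # Eq. (6): reversed log-normal deadline term; the effect ends at
    # scaled time d - m, where the log's argument reaches zero
    u = d - m - t / s
    if u <= 0:
        return 0.0
    return math.exp(-math.log(u) ** 2 / v) / (math.sqrt(2 * math.pi * v) * u)

def base_rate(t, s, p, c, b, d, m, v, g_h, g_o, g_d):
    # Eq. (3): weighted combination of the three external stimuli
    return (g_d * mu_deadline(t, s, d, m, v)
            + g_o * mu_opening(t, s, b)
            + g_h * mu_habit(t, s, p, c))

# Toy usage with arbitrary parameters
print(base_rate(t=5.0, s=1.0, p=0.0, c=1.0, b=0.5, d=10.0, m=1.0, v=0.5,
                g_h=1.0, g_o=1.0, g_d=1.0))
```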
Modeling internal stimuli. To model the effect of past activities, we adopt
the following conventional self-excitation function used in point processes:
(7) $\displaystyle s(t)=\sum_{x^{\tau}<t}\alpha\beta e^{-\beta(t-x^{\tau})},$
The above excitation function characterizes the effect of each historical
event $x^{\tau}$ on the current time $t$ as a decaying function of the time
difference between $t$ and $x^{\tau}$, with a decay speed of $1/\beta$.
Therefore, the more recent a historical event is, the more effect it has in
terms of self-excitation. $\alpha$ can be shown to be the branching ratio
under this definition, i.e., the expected number of activities that are
triggered by a given activity; thus it is called the self-exciting coefficient.
Intensity function. Finally, our intensity function for one student-assignment
pair can be defined as follows:
(8) $\displaystyle\lambda(t)$ $\displaystyle=\mu(t)+s(t)$
$\displaystyle=\gamma^{h}(\sin(\frac{2\pi}{s}(t+p))+c)+\gamma^{o}b^{t/s}$
$\displaystyle+\gamma^{d}(\frac{1}{\sqrt{2\pi
v}(d-m-t/s)}e^{-\frac{(\ln{(d-m-t/s)})^{2}}{v}})+\sum_{x^{\tau}<t}\alpha\beta
e^{-\beta(t-x^{\tau})}.$
As we can see, the intensity is the combination of base rate function $\mu(t)$
that models external stimuli, and the excitation function $s(t)$ that models
the self-excitement. The proposed intensity function falls in the category of
a popular family of point process, i.e. Hawkes processes, which conventionally
model the effect of all external stimuli as a constant. As our proposed model
parameterizes the effects of different external stimuli in educational setting
as functions of time, we call our model Stimuli-Sensitive Hawkes process model
(SSHP).
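The additive split in Eq. 8 can be sketched as follows, with the base rate left as an arbitrary callable so that the excitation term of Eq. 7 stands out. This is an illustrative sketch; the constant base rate and all numbers in the usage line are toy choices.

```python
import math

def excitation(t, events, alpha, beta):
    # Eq. (7): each past event x contributes alpha * beta * exp(-beta * (t - x))
    return sum(alpha * beta * math.exp(-beta * (t - x)) for x in events if x < t)

def intensity(t, events, mu, alpha, beta):
    # Eq. (8): total intensity = external base rate + self-excitation
    return mu(t) + excitation(t, events, alpha, beta)

# Toy usage: constant external stimulus and three past events
lam = intensity(4.0, [1.0, 2.0, 3.5], mu=lambda t: 0.2, alpha=0.5, beta=1.0)
print(round(lam, 4))
```

In SSHP, the `mu` callable would be the stimulus-dependent base rate of Eq. 3 rather than a constant.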
Matrix representation for all student-assignment pairs. Equation 8 above
represents the intensity function for activities of one student on one
assignment. To model all student activities on all assignments, one can model
them as separate sequences and learn the parameters for each sequence
independently. However, such a model suffers from two limitations.
Firstly, no parameters can be learned for student-assignment sequences that
are completely unobserved, and thus, student activities in such sequences
cannot be predicted. For example, consider a student, who has not started
working on a future assignment by the end of the observation window, or a
student, who skips an assignment for now and plans to come back to it later.
Excluding these sequences from the study largely limits the capacity of the
model in our application. Secondly, the parameters of the model that are not
assignment-related, such as student habit parameters, would be learned
independently for each sequence and, as a result, lose their intended shared
meaning. A common approach to deal with these limitations is to extend the
data collection
window, which could be costly and inefficient. Another solution could be using
the learned parameters from the observed sequences and applying them to the
sequences that do not have observations. However, such an approach cannot
provide personalized inferences, thus is not ideal.
To deal with these problems, while learning personalized parameters for
students, we assume similarity between the learned parameters for all student-
assignment pairs. Particularly, we represent the relationship between students
and assignments as a student-assignment matrix, where each row is a student
and each column represents an assignment from the course. We represent the
student-assignment related parameters of the model in such a matrix format,
model the student-related parameters of the model in a vector format (so that
they are shared between all assignments for a student), and share some generic
parameters of the model between all students. As a result, for example, the
intensity function of student-assignment pair $(u_{i},a_{j})$ is defined by
the parameters corresponding to the cell in row $i$ and column $j$ of the
parameter matrices. More specifically, the parameters are set to follow the
following three structures:
(1) scalars: following the convention of Hawkes processes, we set the global
decay coefficient $\beta$ to be shared among all sequences. We also set $s$ to
be a global scalar, so that time $t$ is scaled to the same unit across all
student-assignment pairs.
(2) vector sets $\phi$: We let $\mathbf{c}=(c_{1},...,c_{U})$,
$\mathbf{p}=(p_{1},...,p_{U})$, $\mathbf{b}=(b_{1},...,b_{U})$ and
$\mathbf{v}=(v_{1},...,v_{U})$ be vectors, assuming that a student’s habit is
unchanged across assignments (i.e., $c$ and $p$), and similarly for their
sensitivity to the effect of assignment openings (i.e., $b$). Furthermore, how
fast their activities become intense once the deadline starts affecting them
(i.e., $v$) is also set to be shared among assignments.
(3) low-rank matrices $\Theta$: For each of the rest of the parameters, we
consider a matrix format and assume similarity among student-assignment pairs,
i.e. a low rank structure on the matrix format.
### 3.4. Objective Function
Maximum likelihood estimation on one sequence. Given a student-assignment pair
$(u_{i},a_{j})$’s historical activities
$X_{ij}=\\{x_{ij}^{\tau}|\tau=1,...,K_{ij}\\}$ over the time period $[0,T]$,
and a parameter set
$\theta=(\alpha,\beta,s,p,c,b,v,m,\gamma^{h},\gamma^{o},\gamma^{d})$, the
likelihood $L$ is the joint probability of observing all historical events
till time $T$, which has the following form (Daley and Vere-Jones, 2007):
(9) $\displaystyle
L(X;\theta)=\prod_{\tau=1}^{K}f^{*}(x^{\tau})=\Big{(}\prod_{\tau=1}^{K}\lambda(x^{\tau})\Big{)}\cdot
e^{-\int_{0}^{T}\lambda(u)du},$
where $f^{*}(t)$ and $\lambda(t)$ are, respectively, the conditional p.d.f.
used in Eq. 1 and the intensity function in Eq. 8. Directly taking the log of
the above
equation to obtain the log-likelihood entails $\mathcal{O}(K^{2})$ complexity
due to the double summations - i.e. the summation in Eq. 8 combined with the
summation term introduced by the log of the product from Eq. 9. To achieve a
more feasible complexity of $\mathcal{O}(K)$, we use the recursive function
$R(\cdot)$ defined as follows:
(10) $\displaystyle
R(\tau)=\begin{cases}\big{(}1+R(\tau-1)\big{)}e^{-\beta(x^{\tau}-x^{\tau-1})}&\text{if
$\tau>1$,}\\\ 0&\text{if $\tau=1$}.\end{cases}$
As a result, the final explicit form of the log-likelihood $l(X;\theta)$ can
be shown as below:
(11) $\displaystyle l(X;\theta)$ $\displaystyle=\log
L(X;\theta)=\sum_{\tau=1}^{K}\log\lambda(x^{\tau})-\int_{0}^{x^{K}}\lambda(u)du$
$\displaystyle=\sum_{\tau}\log\big{(}\gamma^{d}\mu^{d}(x^{\tau})+\gamma^{o}\mu^{o}(x^{\tau})+\gamma^{h}\mu^{h}(x^{\tau})$
$\displaystyle+\alpha\beta
R(\tau)\big{)}-\gamma^{d}U^{d*}(x^{K})-\gamma^{o}U^{o*}(x^{K})-\gamma^{h}U^{h*}(x^{K})$
$\displaystyle+\alpha\sum_{\tau=1}^{K}(e^{-\beta(x^{K}-x^{\tau})}-1).$
$U^{d*}(\cdot)$, $U^{o*}(\cdot)$, and $U^{h*}(\cdot)$ are, respectively, the
cumulative intensities of $\mu^{d}$, $\mu^{o}$ and $\mu^{h}$ arising from the
integral in Eq. 9, which can be obtained as below:
(12) $\displaystyle U^{h*}(x^{\tau})$
$\displaystyle=\int_{0}^{x^{\tau}}\mu^{h}(u)du$
$\displaystyle=\frac{s}{2\pi}\big{(}\cos(\frac{2\pi
p}{s})-\cos(\frac{2\pi(x^{\tau}+p)}{s})\big{)}+cx^{\tau},$ (13) $\displaystyle
U^{o*}(x^{\tau})$
$\displaystyle=\int_{0}^{x^{\tau}}\mu^{o}(u)du=s(\frac{b^{x^{\tau}/s}-1}{\ln{b}}),$
(14) $\displaystyle U^{d*}(x^{\tau})$
$\displaystyle=\int_{0}^{x^{\tau}}\mu^{d}(u)du=-\dfrac{s\left(\operatorname{erf}\left(\frac{\ln\left(-\frac{x^{\tau}-(d-m)s}{s}\right)}{\sqrt{v}}\right)-\operatorname{erf}\left(\frac{\ln\left(d-m\right)}{\sqrt{v}}\right)\right)}{2^{\frac{3}{2}}}.$
where $\text{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}dt$ is the
Gauss error function.
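The computational saving from the recursion in Eq. 10 can be checked with a short sketch (toy event times; not our released code): the $\mathcal{O}(K)$ recursion reproduces the $\mathcal{O}(K^{2})$ double sum $\sum_{j<\tau}e^{-\beta(x^{\tau}-x^{j})}$ exactly.

```python
import math

def recursive_R(times, beta):
    # Eq. (10): O(K) pass; R[tau] carries the decayed history forward
    R = [0.0] * len(times)
    for tau in range(1, len(times)):
        R[tau] = (1.0 + R[tau - 1]) * math.exp(-beta * (times[tau] - times[tau - 1]))
    return R

def brute_force_R(times, beta):
    # Direct O(K^2) evaluation of sum_{j < tau} exp(-beta * (x_tau - x_j))
    return [sum(math.exp(-beta * (times[tau] - times[j])) for j in range(tau))
            for tau in range(len(times))]

times, beta = [0.5, 1.1, 1.2, 3.0, 3.1], 2.0  # toy event times
for a, b in zip(recursive_R(times, beta), brute_force_R(times, beta)):
    assert abs(a - b) < 1e-12
print("recursion matches the double sum")
```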
Matrix representation of all sequences. Thus far, one could model a single
student-assignment pair via SSHP based on its historical observations by
maximizing the log-likelihood function defined in Eq. 11. However, as
mentioned in the previous section, we represent some of the parameters
($\Theta$) in a matrix format for all student-assignment pairs and assume
similarity among them, i.e. a low rank structure on the matrix. Specifically,
we denote the set of the vector parameters as
$\phi=\\{\mathbf{c},\mathbf{p},\mathbf{b},\mathbf{v}\\}$, and the set of the
matrix parameters as $\Theta=(A,M,\Gamma^{h},\Gamma^{o},\Gamma^{d})$ and
impose a low-rank structure on all $\Theta$ in our objective function. By
using the trace norm as a surrogate for low-rank structure, we constrain the
trace-norm of $A=(\alpha_{ij})_{U\times N}$, $M=(m_{ij})_{U\times N}$,
$\Gamma^{h}=(\gamma^{h}_{ij})_{U\times N}$,
$\Gamma^{o}=(\gamma^{o}_{ij})_{U\times N}$, and
$\Gamma^{d}=(\gamma^{d}_{ij})_{U\times N}$ to be small.
Loss for all sequences. Finally, we can formulate the objective function as
follows, based on the collection of observed sequences
$\mathcal{O}=\\{X_{ij},~{}\text{s.t.}~{}|X_{ij}|>0\\}$:
(15) $\displaystyle\min_{\Theta,\phi}\mathcal{L}=$
$\displaystyle-\frac{1}{|\mathcal{O}|}\sum_{X_{ij}\in\mathcal{O}}l(X_{ij};\Theta_{ij},\phi_{i})$
s.t. $\displaystyle\mathbf{A}\geq 0,\mathbf{\Gamma}_{d}\geq
0,\mathbf{\Gamma}_{o}\geq 0,\mathbf{\Gamma}_{h}\geq 0,\mathbf{c}\geq
1,\mathbf{v}>0,1>\mathbf{b}>0$ $\displaystyle tr(\theta_{u})\leq
k_{u},~{}\text{for}~{}\theta_{u}\in\Theta_{ij}.$
The main objective is the negative log-likelihood of observing all sequences
with events, while the non-negative constraint on $A$ is introduced to fit the
definition of Hawkes processes, in which the sequences are self-exciting. All
coefficients of the three types of external stimuli are also set to be non-
negative. $\mathbf{c}$ is constrained to be greater than or equal to $1$ to
ensure that the effect of student habit is non-negative given the sinusoidal
function, and each element of $\mathbf{v}$ is the shape parameter of the
reversed log-normal function and thus needs to be positive. Each cell of
$\mathbf{b}$ is constrained between $0$ and $1$ to meet the assumption that
the effect of assignment opening decays rather than increasing or remaining
unchanged. We also constrain the trace norm of each parameter $\theta_{u}$ in
the matrix format to be small, which is equivalent to constraining the rank of
$\theta_{u}$ to be less than or equal to $k_{u}$.
### 3.5. Parameter Inference
We adopt the Accelerated Gradient Method (AGM) (Nesterov, 2013) framework for
the inference of parameters. We choose it for its faster convergence rate,
especially since we have both the non-smooth trace norm and non-negativity in
the constraints. The key subroutines of AGM in our model can be summarized as
follows. For a matrix format parameter $\theta_{u}\in\Theta$, the objective is
to compute the proximal operator:
(16) $\displaystyle\theta_{u}^{*}$
$\displaystyle=\text{argmin}_{\theta_{u}}\mathcal{M}_{\gamma,\theta_{u}^{S}}(\theta_{u})$
$\displaystyle=\text{argmin}_{\theta_{u}}\frac{\gamma}{2}\|\theta_{u}-P_{\theta_{u}}(\theta^{S}_{u}-\frac{1}{\gamma}\nabla_{\theta_{u}}\mathcal{L})\|_{F}^{2}.$
$\gamma$ is the step size, $\theta^{S}_{u}$ is used to denote the current
search point of $\theta_{u}$, and $\nabla_{\theta_{u}}\mathcal{L}$ is the
gradient of loss $\mathcal{L}$ w.r.t $\theta_{u}$. $P_{\theta_{u}}(\cdot)$ is
a projection function to make sure the parameter value at each step is
properly constrained. More specifically, $P_{\theta_{u}}(\cdot)$ for all
$\theta_{u}\in\Theta$ is set to be $\big{(}TraceProj(\cdot)\big{)}_{+}$ where
the inner $TraceProj(\cdot)$ is a trace projection (Cai et al., 2010) and the
outer $(\cdot)_{+}$ projects negative values to $0$. Similarly, the key
subroutine for the inference of $\phi_{u}\in\phi$ is shown as follows:
(17) $\displaystyle\phi_{u}^{*}$
$\displaystyle=\text{argmin}_{\phi_{u}}\mathcal{M}_{\phi_{u}^{S},\gamma}(\phi_{u})$
$\displaystyle=\text{argmin}_{\phi_{u}}\frac{\gamma}{2}\|\phi_{u}-P_{\phi_{u}}(\phi_{u}^{S}-\frac{1}{\gamma}\nabla_{\phi_{u}}\mathcal{L})\|_{F}^{2}.$
$P_{\phi_{u}}(\cdot)$ is also a projection function that ensures the
constraint on $\phi_{u}$ is met. When a value falls outside the constrained
interval, it is projected to the closest value within the interval.
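As an illustration, the two projections can be sketched as follows. This is a minimal NumPy sketch: the singular value soft-thresholding follows Cai et al. (2010), but the function names and the threshold argument `rho` are illustrative, not the authors' implementation.

```python
import numpy as np

def trace_proj(theta, rho):
    """Singular value soft-thresholding (Cai et al., 2010):
    shrinks every singular value by rho, encouraging low rank."""
    U, s, Vt = np.linalg.svd(theta, full_matrices=False)
    s_shrunk = np.maximum(s - rho, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

def project_theta(theta, rho):
    """P_theta: trace projection followed by clipping to the
    non-negative orthant, i.e. (TraceProj(.))_+ ."""
    return np.maximum(trace_proj(theta, rho), 0.0)

def project_interval(phi, lo=0.0, hi=1.0):
    """P_phi for box-constrained parameters such as b in [0, 1]:
    out-of-range values map to the closest feasible value."""
    return np.clip(phi, lo, hi)
```

For example, `project_interval` sends $-0.5$ to $0$ and $1.7$ to $1$, matching the clipping behavior described above.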
We also present Algorithm 1 to effectively solve the objective according to
the subroutines mentioned above.
Input: $\eta>1$, step size $\gamma_{0}$, MaxIter
1 initialization:
$\theta_{u,1}=\theta_{u,0}~{}\text{for}~{}\theta_{u}\in\Theta$;
$\phi_{u,1}=\phi_{u,0}$ for $\phi_{u}\in\phi$; $\alpha_{0}=0;\alpha_{1}=1$;
2 for _$i=1$ to MaxIter_ do
3 $a_{i}=\frac{\alpha_{i-1}-1}{\alpha_{i}}$;
4 update $\theta_{u,i}^{S}=\theta_{u,i}+a_{i}(\theta_{u,i}-\theta_{u,i-1})$
for all $\theta_{u}\in\Theta$;
5 update $\phi_{u,i}^{S}=\phi_{u,i}+a_{i}(\phi_{u,i}-\phi_{u,i-1})$ for all
$\phi_{u}\in\phi$;
6
7 while _True_ do
8 compute
$\theta_{u}^{*}=P_{\theta_{u}}(\theta_{u,i}^{S}-\nabla\mathcal{L}/\gamma_{i})$
for all $\theta_{u}\in\Theta$;
9 compute
$\phi_{u}^{*}=P_{\phi_{u}}(\phi_{u,i}^{S}-\nabla\mathcal{L}/\gamma_{i})$ for
all $\phi_{u}\in\phi$;
10 if
_$\mathcal{L}(X;\Theta^{*},\phi^{*})\leq\mathcal{L}(X;\Theta^{S},\phi^{S})+\sum_{\theta\in\{\Theta,\phi\}}\langle\theta^{*}-\theta^{S},\nabla\mathcal{L}(\theta^{S})\rangle+\gamma_{i}/2\|\theta^{*}-\theta^{S}\|_{F}^{2}$_
then
11 break;
12
13 else
14 $\gamma_{i}=\gamma_{i-1}\times\eta$;
15
16 end if
17
18 $\theta_{u,i+1}=\theta_{u}^{*}$ for all $\theta_{u}\in\Theta$;
19 $\phi_{u,i+1}=\phi_{u}^{*}$ for all $\phi_{u}\in\phi$;
20 if _stopping criterion satisfied_ then
21 break;
22 else
23 $\alpha_{i}=\frac{1+\sqrt{1+4\alpha_{i-1}^{2}}}{2}$
24 end if
25
26 end while
27
28 end for
Output: $\theta_{u}=\theta_{u,i+1}~{}\text{for
all}~{}\theta_{u}\in\Theta,\phi_{u}=\phi_{u,i+1}~{}\text{for
all}~{}\phi_{u}\in\phi$
Algorithm 1 Algorithm for parameter inference of SSHP
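For concreteness, the accelerated proximal loop of Algorithm 1 can be sketched for a single parameter as follows. This is a minimal NumPy sketch, not the authors' implementation: `grad`, `loss`, and `project` are placeholders for $\nabla_{\theta_{u}}\mathcal{L}$, $\mathcal{L}$, and $P_{\theta_{u}}(\cdot)$, and the stopping criterion is a simple change threshold.

```python
import numpy as np

def agm(grad, loss, project, theta0, gamma0=1.0, eta=2.0,
        max_iter=100, tol=1e-6):
    """Accelerated proximal gradient (Nesterov-style) with
    backtracking on the step size, mirroring Algorithm 1 for a
    single matrix parameter. `project` enforces the constraints."""
    theta_prev, theta = theta0.copy(), theta0.copy()
    alpha_prev, alpha = 0.0, 1.0
    gamma = gamma0
    for _ in range(max_iter):
        a = (alpha_prev - 1.0) / alpha
        # extrapolated search point theta^S
        theta_s = theta + a * (theta - theta_prev)
        g = grad(theta_s)
        while True:
            theta_star = project(theta_s - g / gamma)
            # backtracking: accept when the quadratic upper bound holds
            diff = theta_star - theta_s
            bound = (loss(theta_s) + np.sum(g * diff)
                     + gamma / 2.0 * np.sum(diff ** 2))
            if loss(theta_star) <= bound + 1e-12:
                break
            gamma *= eta  # step too large: shrink 1/gamma
        theta_prev, theta = theta, theta_star
        if np.linalg.norm(theta - theta_prev) < tol:
            break
        alpha_prev, alpha = alpha, (1.0 + np.sqrt(1.0 + 4.0 * alpha ** 2)) / 2.0
    return theta
```

As a sanity check, minimizing $\frac{1}{2}\|\theta-c\|_{F}^{2}$ under a non-negativity projection recovers $\max(c,0)$.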
## 4\. Experiment Setup and Baselines
In this section, we first introduce the state-of-the-art approaches that we
used as baselines in Sec. 4.1. An introduction of both synthetic and real-
world datasets is given in Sec. 4.2. Finally, the experiment setup including
train-test splitting and hyperparameters tuning is presented in Sec. 4.3.
### 4.1. Baseline Approaches
In this work, we compare the proposed SSHP to the following $7$ baselines,
chosen to cover different aspects: model parameterization, modeling strategy
(i.e., whether the model can generate personalized predictions for unobserved
sequences), and application scenario.
Poisson (Kingman, 2005): We use the Poisson process model as the simplest
baseline, where the intensity function is characterized by the event arrival
rate.
HRPF (Hosseini et al., 2018): The state-of-the-art Poisson factorization model
proposed by Hosseini et al. (among the proposed models in the paper, this is
the version that does not require user-network as auxiliary features). All
sequences are modeled jointly; therefore, unobserved sequences can be
predicted as well.
RMTPP (Du et al., 2016): A state-of-the-art Neural Hawkes model that uses an
RNN to model the dependencies between past and future events in a sequence.
The intensity function of this Hawkes model is defined based on the hidden
states. All sequences are assumed to be independent.
ERPP (Xiao et al., 2017): Another state-of-the-art Neural Hawkes model, which
models auxiliary features as time series. These time series and the event
sequences are modeled by two separate LSTMs. Similar to RMTPP, all sequences
are modeled independently.
DHPR (Hosseini et al., 2018): A variation of HRPF, where an excitation
parameter is used to capture the self-excitement in the sequences. However,
the excitation is represented as a hyperparameter that is shared among all
sequences.
HPLR (Du et al., 2015): The state-of-the-art user-item recommendation model
using Hawkes processes. This model can be seen as an improvement of DHPR, in
which the excitation parameter can be learned for all sequences.
EdMPH (Yao et al., 2020): The most recent approach that studies student
procrastination using Hawkes processes. All activities of a student during the
course are modeled in a sequence, independent from other sequences.
A summary of the baselines is presented in Table 2.
Table 2. A summary of the baseline approaches and the proposed SSHP
Model | Self-exciting | Non-constant base of time | Infer completely missing seq. | Application in Education
---|---|---|---|---
Poisson | ✗ | ✗ | ✗ | ✗
HRPF | ✗ | ✗ | ✓ | ✗
RMTPP | ✓ | ✓ | ✗ | ✗
ERPP | ✓ | ✓ | ✗ | ✗
DHPR | ✓ | ✗ | ✓ | ✗
HPLR | ✓ | ✗ | ✓ | ✗
EdMPH | ✓ | ✗ | ✗ | ✓
SSHP | ✓ | ✓ | ✓ | ✓
### 4.2. Datasets
Synthetic Data. Assuming $500$ students and $20$ assignments, we created
$10$k $(500\times 20)$ simulated student-assignment pairs and sampled $\sim
100$ events for each pair using the Ogata thinning algorithm (Ogata, 1988),
the most commonly used sampling method in the related literature.
Specifically, we used the intensity function defined in Eq. 8 and sampled each
of its parameters from normal distributions, where
$A\sim\mathcal{N}(0.4,0.1)$, $M\sim\mathcal{N}(0,5)$,
$\Gamma^{d}\sim\mathcal{N}(15,3)$, $\Gamma^{o}\sim\mathcal{N}(5,3)$,
$\Gamma^{h}\sim\mathcal{N}(0.5,0.1)$, $\mathbf{v}\sim\mathcal{N}(20,10)$,
$\mathbf{b}\sim\mathcal{N}(0.5,0.3)$, $\mathbf{p}\sim\mathcal{N}(6,4)$, and
$\mathbf{c}\sim\mathcal{N}(1.2,0.1)$. We empirically set these distributions
to approximate the intensity patterns observed in real data. For
visualization, Fig. 1 shows a sequence generated by the open-source library
tick (Bacry et al., 2017), with all parameters set to their means. The solid
blue line shows the sequence intensity, where each blue dot represents a
sampled activity; the dashed orange line is the base rate; and the synthetic
deadline of $80$ is shown as the vertical red line.
Figure 1. Activity intensity of a sampled student-assignment pair.
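For reference, Ogata's thinning algorithm used to draw these sequences can be sketched as follows. This is a minimal NumPy sketch under simplifying assumptions: `lam_bar` must upper-bound the intensity over the whole horizon, whereas for a Hawkes intensity the bound is typically refreshed after every accepted event.

```python
import numpy as np

def ogata_thinning(intensity, T, lam_bar, seed=None):
    """Sample event times on [0, T) from a point process with
    conditional intensity `intensity(t, history)` via Ogata's
    thinning algorithm (Ogata, 1988)."""
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        # candidate from a homogeneous Poisson process with rate lam_bar
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        # accept the candidate with probability intensity(t) / lam_bar
        if rng.uniform() * lam_bar <= intensity(t, events):
            events.append(t)
    return events
```

With a constant intensity, this reduces to plain homogeneous Poisson sampling, which provides a quick correctness check.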
To simulate real-world scenarios, in which only some of the data sequences
are observed, we created two datasets by randomly masking $10\%$ (Syn-$10$)
and $90\%$ (Syn-$90$) of the sequences as unobserved. In other words, $10\%$
of the sequences in the Syn-$10$ dataset and $90\%$ of the sequences in the
Syn-$90$ dataset are unobserved.
Computer Science Course on Canvas Network (CANVAS). This real-world dataset
is from the Canvas Network online platform (Canvas-Network, 2016) that hosts
various open courses in different academic disciplines. The computer science
course we use runs for $\sim 6$ weeks. Each week, a graded assignment-style
quiz is published in the course, resulting in $6$ graded course assignments.
From this dataset, we obtain $\sim 729$K timestamps of
$384$ student-assignment pairs. Activities include submission activities,
module learning (reading, watching videos, etc.) activities, and discussions.
Big Data in Education on Coursera (MORF). Our second real-world dataset is
collected from an 8-week “Big Data in Education” course on the Coursera
platform. The dataset is available through the MOOC Replication Framework
(MORF) (Andres et al., 2016). In total, we extract $\sim 52$K activities from
$246$ student-assignment pairs, which contain quiz and assignment activities,
lecture video watching, and discussion-related activities.
### 4.3. Experiment Setup
We test our method in two scenarios according to our application: 1) when the
historical observations are available, we want to predict what will happen in
the future based on the history, and 2) when the whole sequence of activities
for a student-assignment pair is completely missing, we want to infer its
future without observing its history. To test the model’s performance in
predicting the future in these two scenarios, we split our data into the
following $3$ sets: a training set containing the initial historical
observations, used to train the model for parameter inference; a partially
missing test set containing the rest of the historical observations, used for
testing the first scenario; and a completely missing test set containing the
entire observations of the held-out sequences, used to examine the models'
ability to generate personalized and accurate predictions for unobserved
sequences, i.e., scenario 2.
For Syn-$10$, we naturally use the $10\%$ masked sequences as the completely
missing test set. Within the remaining $90\%$ unmasked sequences, the first
$70\%$ of the activities (i.e., synthetic past observations) form the
training set and the later $30\%$ (i.e., synthetic future activities to be
predicted) form the partially missing test set. We follow the same procedure
for Syn-$90$, with the $90\%$ masked sequences as the completely missing test
set and a $70\%$-$30\%$ split of the remaining sequences for training and
partially missing testing, respectively. For both real-world datasets, we
randomly hold out $20\%$ of the sequences as completely missing, and apply
the same $70\%$-$30\%$ split to the remaining $80\%$ of the sequences to
generate the training and partially missing test sets.
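The splitting procedure above can be summarized in a short sketch; the function name and arguments here are illustrative, with `mask_frac` and `train_frac` corresponding to the completely-missing ratio and the $70\%$-$30\%$ split.

```python
import numpy as np

def split_sequences(seqs, mask_frac=0.2, train_frac=0.7, seed=0):
    """Illustrative split: hold out `mask_frac` of the sequences as the
    completely missing test set; within each remaining sequence, the
    first `train_frac` of the events form the training set and the rest
    form the partially missing test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(seqs))
    n_mask = int(round(mask_frac * len(seqs)))
    completely_missing = [seqs[i] for i in idx[:n_mask]]
    train, partial = [], []
    for i in idx[n_mask:]:
        cut = int(round(train_frac * len(seqs[i])))
        train.append(seqs[i][:cut])   # earliest 70% of events
        partial.append(seqs[i][cut:])  # latest 30% to be predicted
    return train, partial, completely_missing
```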
For the baseline models that are not able to generate personalized predictions
of future times without historical observations (i.e. Poisson, RMTPP, ERPP,
and EdMPH), we report the root mean squared error (RMSE) of the time
prediction on the partially missing test set only, and for the other models,
we report the RMSE on both partially and completely missing test sets.
The hyperparameters of the proposed SSHP are tuned across all datasets via
grid search over the following values: global decay $\beta\in\{1,6,12\}$;
initial step size $\gamma_{0}\in\{1,10,100,200\}$; update speed
$\eta\in\{2,5,10,20\}$; and trace norm penalty $\rho\in\{0.01,0.1,1\}$ in the
trace norm projection. For the synthetic datasets, the best hyperparameters
are: decay $\beta=1$, step size $\gamma_{0}=100$, update speed $\eta=2$, and
trace norm penalty $\rho=1$. In CANVAS: $\beta=6$, $\gamma_{0}=200$,
$\eta=5$, and $\rho=1$. In MORF: $\beta=1$, $\gamma_{0}=200$, $\eta=2$, and
$\rho=0.1$. Similarly, the hyperparameters of the baseline approaches are
tuned via grid search according to the ranges provided in the original
papers.
## 5\. Fit and Arrival Time Prediction
In the following set of experiments, we study SSHP’s ability to recover the
correct parameters for the underlying processes, investigate its performance
in predicting the next activity time compared to the state-of-the-art
baselines, and analyze the contribution of different parts of the model in its
performance.
### 5.1. Model Fit on Synthetic Data
As a way to evaluate SSHP’s performance in capturing the sequence dynamics, we
investigate its ability to find the true parameters of the underlying
processes. Since these parameters are available from the synthetic datasets,
we calculate the root mean squared error (RMSE) between the estimated
parameter values by SSHP and the actual parameter values that have been used
to generate the synthetic datasets. The results are shown in Table 3.
Generally, SSHP performs better on the partially missing test set than on the
completely missing test set, because learning completely unobserved sequences
without histories is more challenging than learning sequences with partially
observed histories. Additionally, the results show
that the RMSEs in Syn-$90$ dataset are only marginally higher than in Syn-$10$
in both partially and completely missing test sets. This suggests the model’s
robustness and its potential to recover the parameters even when the ratio of
unobserved sequences is high in the dataset.
Datasets | Test set | $\mathbf{v}$ | $\mathbf{b}$ | $\mathbf{p}$ | $\mathbf{c}$ | $A$ | $M$ | $\Gamma^{d}$ | $\Gamma^{o}$ | $\Gamma^{h}$
---|---|---|---|---|---|---|---|---|---|---
Syn-10 | part. miss. | 1.33 | 0.10 | 1.33 | 0.09 | 0.05 | 2.64 | 1.65 | 1.08 | 0.16
Syn-10 | compl. miss. | 1.23 | 0.12 | 1.39 | 0.16 | 0.13 | 2.60 | 2.00 | 1.54 | 0.13
Syn-90 | part. miss. | 1.34 | 0.10 | 1.33 | 0.09 | 0.06 | 2.39 | 1.80 | 1.14 | 0.18
Syn-90 | compl. miss. | 1.31 | 0.12 | 1.38 | 0.16 | 0.12 | 2.61 | 1.97 | 1.51 | 0.17
Table 3. RMSE of parameters learned by proposed model SSHP in synthetic
datasets.
To provide a visual representation of these results, Fig. 2 shows the sampled
intensity of a ground-truth sequence from the Syn-$90$ dataset, together with
the predicted intensity sampled from the estimated parameters. This figure
demonstrates the model's ability to accurately capture the dynamics of the
sequence.
Figure 2. Predicted Intensity of a synthetic sequence.
### 5.2. Predicting Future Event Arrival Times
Predicting the arrival times of future events for a given sequence is the
most common evaluation method in the related literature. More formally, for a
student-assignment pair, the arrival time of the $z$-th future event after
the observation window, denoted $x_{z}$, can be computed as the expectation
over time $t$ under the sequence intensity. However, since time is continuous
and the intensity functions of Hawkes processes are usually complicated, the
analytic form of this expectation is hard to obtain.
Alternatively, in this work, we adopt another popular approach to predict
future event arrival times. We first use the Ogata thinning algorithm to
sample inter-arrival times $\Delta t_{z}$, i.e., the time difference between
the $z$-th and $(z-1)$-th events. Then, we compute the predicted time of the
$z$-th event $\hat{x}_{z}$ as
$\hat{x}_{z}=\hat{x}_{z-1}+\frac{1}{N_{t}}\sum_{i=1}^{N_{t}}\Delta t^{i}_{z}$,
where $N_{t}$ is the number of sampling trials and $\Delta t^{i}_{z}$ is the
inter-arrival time sampled at the $i$-th trial. The intuition is that
inter-arrival times are sampled $N_{t}$ times, and the sample mean over all
$N_{t}$ trials is used as the approximation of the actual inter-arrival time.
In this way, we can recursively sample the arrival times of future events
from the last historical observation and the learned intensity function.
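The recursive rule $\hat{x}_{z}=\hat{x}_{z-1}+\frac{1}{N_{t}}\sum_{i}\Delta t^{i}_{z}$ can be sketched as follows; the thinning-based sampler is abstracted into a callable `sample_dt`, a hypothetical name used only for illustration.

```python
import numpy as np

def predict_next_times(sample_dt, last_t, z_max=10, n_trials=100, seed=0):
    """Recursive arrival-time prediction: for each future event z, the
    inter-arrival time is sampled n_trials times (via thinning in the
    paper; abstracted here as `sample_dt(current_time, rng)`), and the
    sample mean gives x_z = x_{z-1} + mean(dt)."""
    rng = np.random.default_rng(seed)
    preds, x = [], last_t
    for _ in range(z_max):
        dts = [sample_dt(x, rng) for _ in range(n_trials)]
        x = x + float(np.mean(dts))
        preds.append(x)
    return preds
```

Since each sampled inter-arrival time is positive, the predicted times are strictly increasing, as expected of arrival times.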
In this work, we evaluate the models' performance in predicting the next $10$
future activities after the observation window ends, using the RMSE between
the actual and predicted times as our measure. As the number of future
activities grows, predicting their arrival times becomes more challenging.
Figure 3. Time prediction RMSE on partially missing test set with 95%
confidence interval on Syn-$10$ and Syn-$90$.
Figure 4. Time prediction RMSE on completely missing test set with 95%
confidence interval on Syn-$10$ and Syn-$90$.
Performance on synthetic datasets. In this section, we present the experiment
results for SSHP and the baseline approaches on the synthetic datasets.
Fig. 3 shows the model performances on the partially missing test set in both
Syn-$10$ and Syn-$90$, while Fig. 4 shows the performances on the completely
missing test set. The x-axis represents the future events' indices; for
example, $x=2$ represents the second future event after the end of the
observation period $T$. The y-axis is the time prediction RMSE on a log
scale, for a clearer separation between the models. Some baselines, such as
ERPP and RMTPP, are missing from the lower plots since they cannot predict
unobserved sequences (student-assignment sequences in the completely missing
test set). We can see that SSHP clearly achieves the smallest time prediction
RMSE compared with the baseline approaches in all settings. Even though the
neural models ERPP and RMTPP start to show better performances on later event
predictions, they cannot predict unobserved sequences (i.e., the completely
missing test set). As expected, since recovering completely unobserved
sequences is more challenging, SSHP's performance on the partially missing
test set is better than on the completely missing test set.
Performance on real-world datasets. Next, we evaluate each model's
performance on the two real-world datasets. It is worth mentioning that the
observed history in MORF is the shortest among all datasets, with an average
of fewer than $\sim 26$ observations per sequence and $\sim 18$ observations
for training. For this reason, the prediction window is set to $8$ in MORF
instead of $10$ to achieve a meaningful evaluation. The evaluations on the
partially missing and completely missing test sets are presented in Fig. 5
and Fig. 6, respectively. As shown in the figures, the proposed SSHP model
outperforms the baseline approaches, especially by a large margin on CANVAS's
completely missing test set. This is consistent with the synthetic dataset
results. In contrast, the performances of the neural models ERPP and RMTPP
are not as promising as on the synthetic datasets, especially in MORF. One
possible explanation is that the short training sequences in MORF limit the
ability of neural-based models.
Another observation is that for higher-indexed events in MORF's completely
missing test set and for lower-indexed events in CANVAS's partially missing
test set, the confidence intervals of HPLR and SSHP overlap, suggesting a
less significant difference between the two models' performances. However, as
HPLR's large confidence interval suggests, its results are not robust and
vary considerably across experiments. A potential explanation for HPLR's good
predictions is that for some student-assignment pairs the activity dynamics
are rather invariant, and a constant base rate, as in HPLR, is sufficient to
capture them.
Figure 5. Time prediction RMSE on partially missing test set with 95%
confidence interval on two real-world datasets.
Figure 6. Time prediction RMSE on completely missing test set with 95%
confidence interval on two real-world datasets.
In conclusion, SSHP has shown superior time prediction performance on both
synthetic and real-world datasets compared with the baseline approaches,
especially on the challenging task of predicting the future for the
completely missing test set.
### 5.3. Ablation Study
To verify each component's importance in the intensity function, we compare
SSHP to its variations SSHP-$s$, SSHP-$o$, SSHP-$h$ and SSHP-$d$, which
respectively represent the model obtained by removing the following
components: the self-excitement $s(t)$, the effect of assignment opening
$\mu^{o}(t)$, the effect of student habit $\mu^{h}(t)$, and the effect of
deadline $\mu^{d}(t)$.
Figure 7. Time prediction RMSE with 95% confidence interval of SSHP and
variations on partially missing test set.
Figure 8. Time prediction RMSE with 95% confidence interval of SSHP and
variations on completely missing test set.
Fig. 7 and Fig. 8 show the performance of these models in comparison with
each other and with SSHP on the partially and completely missing test sets,
respectively. In general, SSHP achieves lower time prediction errors in both
real-world datasets, indicating the importance of each individual component.
Furthermore, on the partially missing test set, as shown in Fig. 7, while the
improvement from modeling self-excitement is only marginal compared with SSHP
in CANVAS (left figure), self-excitement is shown to be a major factor in
MORF (right figure), as SSHP-$s$ has higher prediction errors in MORF than
the other variations.
Additionally, as shown in Fig. 8, the differences between SSHP and its
variations are much more distinct on the completely missing test set (i.e.,
when the history is unobserved). More specifically, in the CANVAS dataset
(left figure), we see that SSHP-$d$'s error is the highest among all models.
This provides strong evidence of the deadlines' effect on student activities
in CANVAS, which also underlines the importance of modeling $\mu^{d}(t)$. On
the other hand, in MORF, when comparing SSHP and SSHP-$h$ (right figure), we
can see that the effect of student habit is not present at the beginning of
the sequence, as the error is lower when this stimulus is not included.
However, including student habits in the model becomes important after the
second event. Another interesting observation is the wider confidence
interval of SSHP-$o$. One explanation is that some students are more
sensitive to assignment opening than others, so excluding $\mu^{o}(t)$ causes
higher errors for some sequences but not others. The differences observed in
the components' importance across the two datasets may stem from the
different nature of the two educational systems and the offered courses. For
example, one expects the effect of the deadline to be more prevalent in
courses with a high late-submission penalty, compared to ones with a more
flexible scheme.
To conclude, despite the different characteristics unveiled in the two
datasets, all three external stimuli and the self-excitement component are
important in modeling student activities.
## 6\. Procrastination Pattern Discovery
In Section 3, we described the intuition behind SSHP's parameterization. In
this section, we analyze these parameters to demonstrate their
interpretation and their association with student performance patterns.
### 6.1. Cluster Analysis
First, we investigate whether the learned parameters can describe students'
behaviors on assignments in a meaningful way that reveals their cramming and
procrastination behaviors. To do so, we cluster all student-assignment pairs
via the K-Means clustering algorithm, representing each student-assignment
pair by its learned parameters:
$(u_{i},a_{j})=(\alpha_{ij},m_{ij},\gamma^{d}_{ij},\gamma^{o}_{ij},\gamma^{h}_{ij},v_{i},b_{i},p_{i},c_{i})$.
To find the optimal number of clusters, we use the elbow method on the
clustering loss. In both the CANVAS and MORF datasets, the optimal number of
clusters is $3$, meaning that $3$ student-assignment interaction patterns are
uncovered in each dataset. Figures 9 and 10 show the parameter values of the
cluster centers in the CANVAS and MORF datasets, respectively. For a clearer
presentation, $m$ is scaled down by $24$ (changing the time unit from hours
to days), and $\alpha$ and $b$ are scaled up by $10$ in the figures. Error
bars show the $95\%$ confidence interval within each cluster.
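The clustering step can be sketched as follows. This is a generic Lloyd's k-means with a deterministic farthest-point initialization, not the authors' implementation; any standard K-Means routine works the same way, with the elbow read off the loss curve.

```python
import numpy as np

def init_centers(X, k):
    """Deterministic farthest-point initialization: robust when the
    clusters are well separated."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(X[:, None] - np.array(centers)[None],
                                  axis=2), axis=1)
        centers.append(X[int(d.argmax())])
    return np.array(centers)

def kmeans(X, k, n_iter=50):
    """Plain Lloyd's k-means over the rows of X (here: the learned
    parameter vector of each student-assignment pair)."""
    centers = init_centers(X, k)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    loss = float(np.sum(np.linalg.norm(X - centers[labels], axis=1) ** 2))
    return labels, centers, loss

def elbow_losses(X, ks=(1, 2, 3, 4, 5)):
    """Elbow method: fit k-means for increasing k and look for the
    'knee' in the clustering loss."""
    return [kmeans(X, k)[2] for k in ks]
```

On well-separated data, the loss drops sharply up to the true number of clusters and flattens afterwards, which is the knee the elbow method detects.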
Figure 9. Clusters of student learning dynamics characterized by SSHP in
CANVAS.
Specifically, by comparing CANVAS $3$ (cluster $3$ in the CANVAS dataset)
with clusters $1$ and $2$, we can see that the interactions between students
and assignments in CANVAS $3$ remain insensitive to the deadline until much
later, when the deadline is very close (smaller $v$ and larger
$\gamma^{d}$). Also, the negative $m$ in CANVAS $3$ indicates late
submissions or other assignment-related activities after the deadline.
Moreover, the burstiness of the events in this cluster is also shown to be
higher than in the other clusters (larger $\alpha$). One possible explanation is
that the students in this cluster procrastinated on the assignments in it and
only started to work on them much later than they should have, which explains
the bursty and intense activities close to the deadline. Furthermore, we can
see that the effect of assignment opening or availability wears off much
faster in CANVAS $3$ (smaller $b$), meaning that the period of time that this
cluster is affected by assignment opening is shorter. This suggests that
overall, this cluster is less sensitive to the assignment opening. When it
comes to student habit, the peak of periodicity shows up at a later time
(larger $p$), indicating that the students in CANVAS $3$ usually interact
with the course later during the day, compared with CANVAS $1$ and $2$. On
the other hand, even though the differences between the CANVAS $1$ and
CANVAS $2$ clusters are smaller, many of them are significant. In particular,
the results clearly show that CANVAS $2$ is more sensitive to the deadline,
in the sense that assignment-related activities are finished much earlier
(larger positive $m$ and $\gamma^{d}$). Their base activities triggered by
student habits are also more intense (higher $c$), even though their peak
time is usually later during the day (larger $p$). To conclude, the learning
pattern in CANVAS $3$ suggests procrastination-like behaviors, with less
sensitivity to the deadline and the assignment opening, as well as more
bursty and intense activities. On the other hand, the learning pattern in
CANVAS $2$ suggests an “early birds” type of behavior, in which
assignment-related activities are finished around $4$ days earlier. These
students also tend to be more sensitive to the assignment opening, with less
bursty behaviors, which can be interpreted as the opposite of
procrastination.
Figure 10. Clusters of student learning dynamics characterized by SSHP in
MORF.
Similarly in MORF (Figure 10), different characteristics are uncovered by the
discovered clusters. We can see that the effect of the deadline starts late
and ends late (smaller $v$ and smaller negative $m$) for MORF $1$ and $3$.
Also, student activities on assignments in MORF $1$ and $3$ are more bursty
(larger $\alpha$). On the other hand, MORF $2$ is more sensitive to the effect
of assignment opening for a longer period of time (larger $b$), and student
habits also seem to have a stronger effect on MORF $2$, suggested by larger
$c$ and $\gamma^{h}$. Overall, we can conclude that MORF $2$ activity patterns
represent the “early birds” type and MORF $1$ activity patterns show the most
procrastination-like behaviors among the 3 clusters.
By comparing the clusters in which procrastination-like behaviors are
suggested across the two datasets, we can see that the parameters reveal
different strategies. Specifically, in MORF $1$, less bursty and more delayed
submissions are observed (smaller negative $m$ and smaller $\alpha$) than in
CANVAS $3$, which can be an indication of procrastination. Another potential
explanation for this difference is the different nature of the courses, as
mentioned in the ablation study section, where the penalty for late
submissions may be stronger in CANVAS than in MORF.
Figure 11. Clusters of student learning dynamics characterized by SSHP in
CANVAS and MORF.
### 6.2. Association with Grades
To show the association between students' activity patterns on assignments
and their performance, we examine the student grades on assignments in each
cluster. The results are presented as box plots in Fig. 11. As we can see,
the median grades in CANVAS $3$ and MORF $1$ are visibly smaller than those
of the other clusters in their respective datasets. Moreover, the grade
distributions also differ across clusters. To verify that these differences
are significant, for each dataset we conduct a Kruskal-Wallis test on the
grades between every pair of clusters discovered by SSHP. We find that all
p-values are well below $0.05$, suggesting significant differences in the
grade distributions between all
clusters. Combining these observations with the conclusions from Figures 9
and 10, we see that the clusters showing procrastination behaviors with less
sensitivity to the deadlines and assignment openings (CANVAS $3$ and MORF
$1$) also have significantly lower grades. We can conclude that clusters with
more procrastination-like behaviors are associated with lower grades in both
datasets. This demonstrates that SSHP captures underlying student activity
patterns with meaningful parameters that serve as good indicators of
procrastination behaviors and student performance.
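For reference, the Kruskal-Wallis statistic used above can be computed as sketched below. This is a minimal NumPy sketch that omits the tie correction; in practice a library routine such as scipy.stats.kruskal also returns the p-value by comparing $H$ against a $\chi^{2}_{k-1}$ distribution.

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction): a rank-based
    test of whether, e.g., grade distributions differ across clusters.
    Compare H against a chi-squared distribution with k-1 degrees of
    freedom (critical value 5.99 for k=3 at alpha=0.05)."""
    data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    n = len(data)
    # 1-based ranks of the pooled sample
    order = np.argsort(data, kind="stable")
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += len(g) * r.mean() ** 2
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)
```

Perfectly separated groups maximize $H$, while interleaved groups with similar rank means drive it toward $0$.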
## 7\. Conclusion
In this work, we proposed a novel stimuli-sensitive Hawkes process model
(SSHP) to represent students' cramming and procrastination behaviors in
online courses, according to their activities. Our model captures three types
of external stimuli in addition to the internal stimuli between activities,
i.e., the effects of the assignment deadline, assignment availability, and
students' personal habits. SSHP models all student-assignment pairs jointly,
which enables the model to generate personalized predictions for both
partially missing and completely missing activity sequences. Our experiments
on both synthetic and real-world datasets demonstrated SSHP's superior
performance compared with the state-of-the-art baseline approaches,
especially on the more challenging task of future time prediction for
sequences whose history is completely missing. Our ablation studies showed
that each component of our model is necessary for achieving its superior
performance.
Finally, we demonstrated that not only does SSHP excel at future time
prediction, but its parameterization also provides meaningful interpretations
and insights into the association between students' procrastination patterns
and their grades. In particular, we discovered $3$ clusters of behaviors on
assignments: one with stronger procrastinating behaviors, showing less
sensitivity to the deadline and the assignment opening as well as more bursty
and intense activities; another with “early birds” type learning behaviors,
showing more sensitivity to deadlines and less bursty activities; and a third
in between the two. We showed that the grade distributions in these clusters
differ meaningfully, with the lowest grades associated with
procrastination-like behaviors.
## References
* Agnihotri et al. (2020) Lalitha Agnihotri, Ryan S Baker, and Steve Stalzer. 2020. A Procrastination Index for Online Learning Based on Assignment Start Time. In _The 13th International Conference on Educational Data Mining_.
* Andres et al. (2016) Juan Miguel L Andres, Ryan S Baker, George Siemens, Dragan Gašević, and Catherine A Spann. 2016. Replicating 21 findings on student success in online learning. _Technology, Instruction, Cognition, and Learning_ (2016), 313–333.
* Asarta and Schmidt (2013) Carlos J Asarta and James R Schmidt. 2013. Access patterns of online materials in a blended course. _Decision Sciences Journal of Innovative Education_ 11, 1 (2013), 107–123.
* Bacry et al. (2017) E. Bacry, M. Bompaire, S. Gaïffas, and S. Poulsen. 2017. tick: a Python library for statistical learning, with a particular emphasis on time-dependent modeling. _ArXiv e-prints_ (July 2017). arXiv:1707.03003
* Baker et al. (2016) Rachel Baker, Brent Evans, and Thomas Dee. 2016. A Randomized Experiment Testing the Efficacy of a Scheduling Nudge in a Massive Open Online Course (MOOC). _AERA Open_ 2, 4 (2016).
* Bao (2016) Peng Bao. 2016. Modeling and predicting popularity dynamics via an influence-based self-excited Hawkes process. In _Proceedings of the 25th ACM International Conference on Information and Knowledge Management_. 1897–1900.
* Cai et al. (2010) Jian-Feng Cai, Emmanuel J Candès, and Zuowei Shen. 2010. A singular value thresholding algorithm for matrix completion. _SIAM Journal on Optimization_ 20, 4 (2010), 1956–1982.
* Canvas-Network (2016) Canvas-Network. 2016. Canvas Network Courses, Activities, and Users (4/2014 – 9/2015) Restricted Dataset. https://doi.org/10.7910/DVN/XB2TLU
* Cerezo et al. (2017) Rebeca Cerezo, María Esteban, Miguel Sánchez-Santillán, and José C. Núñez. 2017. Procrastinating Behavior in Computer-Based Learning Environments to Predict Performance: A Case Study in Moodle. _Frontiers in Psychology_ 8 (Aug. 2017).
* Daley and Vere-Jones (2007) Daryl J Daley and David Vere-Jones. 2007. _An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure_. Springer Science & Business Media.
* Du et al. (2016) Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. 2016. Recurrent marked temporal point processes: Embedding event history to vector. In _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_. 1555–1564.
* Du et al. (2015) Nan Du, Yichen Wang, Niao He, and Le Song. 2015. Time-sensitive recommendation from recurrent user activities. _Advances in Neural Information Processing Systems_ (2015), 3492–3500.
* Hawkes (1971) Alan G Hawkes. 1971. Spectra of some self-exciting and mutually exciting point processes. _Biometrika_ 58, 1 (1971), 83–90.
* He et al. (2015) Xinran He, Theodoros Rekatsinas, James Foulds, Lise Getoor, and Yan Liu. 2015. HawkesTopic: A joint model for network inference and topic modeling from text-based cascades. In _International Conference on Machine Learning_. 871–880.
* Hosseini et al. (2018) Seyed Abbas Hosseini, Ali Khodadadi, Keivan Alizadeh, Ali Arabzadeh, Mehrdad Farajtabar, Hongyuan Zha, and Hamid R Rabiee. 2018. Recurrent Poisson factorization for temporal recommendation. _IEEE Transactions on Knowledge and Data Engineering_ 32, 1 (2018), 121–134.
* Kazerouni et al. (2017) Ayaan M. Kazerouni, Stephen H. Edwards, T. Simin Hall, and Clifford A. Shaffer. 2017. DevEventTracker: Tracking Development Events to Assess Incremental Development and Procrastination. In _Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE '17)_. ACM Press, Bologna, Italy, 104–109.
* Kingman (2005) John Frank Charles Kingman. 2005. Poisson processes. _Encyclopedia of Biostatistics_ 6 (2005).
* Lee and Choi (2011) Youngju Lee and Jaeho Choi. 2011. A review of online course dropout research: Implications for practice and future research. _Educational Technology Research and Development_ 59, 5 (2011), 593–618.
* Li et al. (2018) Tianbo Li, Pengfei Wei, and Yiping Ke. 2018. Transfer hawkes processes with content information. In _2018 IEEE International Conference on Data Mining (ICDM)_. IEEE, 1116–1121.
* Mei and Eisner (2017) Hongyuan Mei and Jason M Eisner. 2017. The neural hawkes process: A neurally self-modulating multivariate point process. In _Advances in Neural Information Processing Systems_. 6754–6764.
* Moon and Illingworth (2005) Simon M Moon and Alfred J Illingworth. 2005. Exploring the dynamic nature of procrastination: A latent growth curve analysis of academic procrastination. _Personality and Individual Differences_ 38, 2 (2005), 297–309.
* Nesterov (2013) Yu Nesterov. 2013\. Gradient methods for minimizing composite functions. _Mathematical Programming_ 140, 1 (2013), 125–161.
* Ogata (1988) Yosihiko Ogata. 1988\. Statistical models for earthquake occurrences and residual analysis for point processes. _Journal of the American Statistical association_ 83, 401 (1988), 9–27.
* Park et al. (2018) Jihyun Park, Renzhe Yu, Fernando Rodriguez, Rachel Baker, Padhraic Smyth, and Mark Warschauer. 2018\. Understanding Student Procrastination via Mixture Models. _International Educational Data Mining Society_ (2018).
* Perrin et al. (2011) Christopher J Perrin, Neal Miller, Alayna T Haberlin, Jonathan W Ivy, James N Meindl, and Nancy A Neef. 2011. Measuring and Reducing Colledge Students’ Procrastination. _Journal of applied behavior analysis_ 44, 3 (2011), 463–474.
* Rizoiu et al. (2017) Marian-Andrei Rizoiu, Lexing Xie, Scott Sanner, Manuel Cebrian, Honglin Yu, and Pascal Van Hentenryck. 2017. Expecting to be HIP: Hawkes intensity processes for social media popularity. In _Proceedings of the 26th International Conference on World Wide Web_. 735–744.
* Shang and Sun (2018) Jin Shang and Mingxuan Sun. 2018. Local low-rank Hawkes processes for modeling temporal user–item interactions. _Knowledge and Information Systems_ (2018), 1–24.
* Steel (2007) Piers Steel. 2007\. The nature of procrastination: A meta-analytic and theoretical review of quintessential self-regulatory failure. _Psychological bulletin_ 133, 1 (2007), 65\.
* Xiao et al. (2017) Shuai Xiao, Junchi Yan, Xiaokang Yang, Hongyuan Zha, and Stephen M Chu. 2017. Modeling the intensity function of point process via recurrent neural networks. In _Thirty-first aaai conference on artificial intelligence_.
* Yao et al. (2020) Mengfan Yao, Shaghayegh Sahebi, and Reza Feyzi-Behnagh. 2020\. Analyzing Student Procrastination in MOOCs: A Multivariate Hawkes Approach. In _The 13th International Conference on Educational Data Mining_.
|
# Time resolution and efficiency of SPADs and SiPMs for photons and charged
particles
W. Riegler (CERN) and P. Windischhofer (University of Oxford)
###### Abstract
We give an analytic treatment of the time resolution and efficiency of Single
Photon Avalanche Diodes (SPADs) and Silicon Photomultipliers (SiPMs). We
provide closed-form expressions for structures with uniform electric fields
and efficient numerical prescriptions for arbitrary electric field
configurations. We discuss the sensor performance for single photon detection
and also for charged particle detection.
## 1 Introduction
Single Photon Avalanche Diodes (SPADs) have been used for photon detection and
photon counting for many decades. These semiconductor devices contain a
highly doped p-n junction of $0.5{-}2\,\mu$m thickness, the so-called gain
region, that is biased above the breakdown voltage. This means that a single
primary electron or hole entering this region can produce a diverging
avalanche through impact ionization and therefore lead to a detectable signal.
The growth of the diverging avalanche is quenched by the breakdown of the
electric field, which leads to a digital type signal with an amplitude
independent of the number of primary charges. The restoration of the electric
field is governed by a quench resistor or quench circuit external to the
device.
Figure 1: a) p-in-n SPAD with a gain layer only. b) n-in-p SPAD with a gain
layer and a conversion layer.
Two basic SPAD geometries are shown in Fig. 1. For the p-in-n SPAD in Fig. 1a
the high field region is placed close to the surface of the silicon. Photons
with a wavelength of $<500$ nm have an absorption length of less than
$1\,\mu$m, so they will be efficiently absorbed in the gain region and trigger
the breakdown.
For photons with longer wavelengths and therefore larger absorption length,
the geometry of Fig. 1b with a so-called 'conversion region' will be more
efficient. This region consists of a depleted layer of silicon with a thickness
ranging from 10 to 100 $\mu$m that is adjacent to a gain region. The electrons
created in the conversion region will drift to the gain region where they
provoke the breakdown. The sensor can be illuminated from the front and the
back side. These SPADs are nowadays heavily used for LIDAR [1] applications
with wavelengths in the near-infrared above 750 nm.
While the SPAD was originally a single-channel device, advances in the silicon
industry have allowed the arrangement of many of these diodes in chessboard
structures. Depending on the readout circuitry these are called SPAD image
sensors or Silicon Photomultipliers (SiPMs). In SPAD image sensors, the SPAD
pixels are read out individually in order to form an image. In SiPMs the
individual pixels are connected in parallel through series resistors into a
single channel. The amplitude of the output signal is then proportional
to the total number of fired pixels and therefore to the number of
photons that have hit the pixel matrix, which represents the function of a
traditional photomultiplier. There are many technological challenges related
to the implementation of such pixel structures, specifically the elimination
of crosstalk between the channels and the minimization of the dark count rate.
Scope and outline
In this report, we discuss the time resolution and efficiency of SPADs and
SiPMs of the two types shown in Fig. 1 and evaluate their performance for the
detection of photons as well as charged particles. Our results are derived
from a series of fundamental equations that describe the movement and the
interactions of the participating charge carriers.
We provide analytic expressions for the simplified structures in Fig. 2 with
constant electric fields. Fig. 2a represents the situation where a photon
interacts directly inside a gain layer of constant electric field $E_{1}$ that
is above the breakdown value. Fig. 2b shows a SPAD where we represent the
conversion region by a silicon layer of thickness $w$ with constant electric
field $E_{0}$ below the breakdown field, which is adjacent to a gain layer
with constant field $E_{1}$.
Figure 2: Two simplified SPAD structures with constant electric fields
discussed in this report: a) A photon interacts directly in the gain layer and
produces an e-h pair that provokes breakdown. b) A photon produces an e-h pair
in the conversion layer and the electron moves to the gain layer where it
provokes breakdown. A thin gain layer of 0.5-2 $\mu$m is also highly efficient
for charged particle detection.
This report is structured as follows: Section 2 discusses the absorption of
photons in silicon, and the contribution of the conversion layer to the time
resolution. Section 3 then discusses the mechanisms of avalanche breakdown and
the average growth of the avalanche. Section 4 describes the contribution to
the efficiency resulting from the avalanche formation in the gain layer.
Section 5 discusses avalanche fluctuations and their contribution to the time
resolution. This discussion is based to a large degree on the companion paper
[2], which develops the statistics of electron-hole avalanches in great
detail. Section 6 then discusses the performance of SPADs for the detection of
charged particles. We finally drop the assumption of constant electric field
in Section 7 and return to the realistic field configurations of Fig. 1. We
give efficient numerical prescriptions that extend the analytic results
obtained previously.
Details of all calculations are given in several appendices. Although we focus
specifically on devices based on silicon, our results are expected to apply
equally to the basic geometries realised in other types of semiconductors.
## 2 Conversion layer
We assume a layer of silicon of thickness $w$ extending from $x=0$ to $x=w$ as
shown in Fig. 3. The probability for a photon to be absorbed between position
$x_{0}$ and $x_{0}+dx_{0}$ is given by
$P(x_{0})dx_{0}=1/l_{a}e^{-x_{0}/l_{a}}dx_{0}$, where $l_{a}$ is the photon
absorption length from Fig. 4a. The efficiency, i.e. the probability for a
photon to convert in the layer, is then given by $p=1-e^{-w/l_{a}}$ and the
numbers are shown in Fig. 4b. Photons of wavelength $<500$ nm are efficiently
absorbed in $<1\,\mu$m of silicon while infrared photons of ${>}750$ nm need
several tens of $\mu$m of silicon to be absorbed efficiently.
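The efficiency $p=1-e^{-w/l_{a}}$ is straightforward to tabulate. A minimal sketch, using rough illustrative absorption lengths (order-of-magnitude values only, not the tabulated data of [3, 4]):

```python
import math

def conversion_efficiency(w_um, l_a_um):
    """Probability p = 1 - exp(-w/l_a) that a photon converts in thickness w."""
    return 1.0 - math.exp(-w_um / l_a_um)

# Rough illustrative absorption lengths: ~0.5 um for blue light (<500 nm),
# ~10 um for near-infrared light (>750 nm).
for l_a in (0.5, 10.0):
    for w in (1.0, 10.0, 100.0):
        p = conversion_efficiency(w, l_a)
        print(f"l_a = {l_a:5.1f} um, w = {w:6.1f} um -> p = {p:.3f}")
```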
Figure 3: The conversion layer of a SPAD or SiPM. A photon is absorbed at
position $x_{0}$ producing an e-h pair. The electron will drift to the gain
layer at $x=w$.
Figure 4: a) Absorption length $l_{a}$ for photons of different wavelengths in
silicon [3, 4]. b) Photon absorption efficiency for different values of
silicon thickness.
First, we investigate the case where the photon arrives from the 'left'
side, as shown in Fig. 3a. Normalising the conversion probability to the
efficiency, the probability for a photon to be absorbed between position
$x_{0}$ and $x_{0}+dx_{0}$ is
$P(x_{0})dx_{0}=\frac{1}{1-e^{-w/l_{a}}}\frac{1}{l_{a}}e^{-x_{0}/l_{a}}\Theta(w-x_{0})dx_{0}$
(1)
The electron is then moving to the edge of the silicon layer at $x=w$ with a
velocity $v_{e}$, where it arrives at time $t=(w-x_{0})/v_{e}$. The velocity
of electrons and holes in silicon is shown in Fig. 5a and the parametrization
is given in Appendix A.
Figure 5: a) Drift velocity of electrons ($v_{e}$) and holes ($v_{h}$) as a
function of electric field in silicon. The velocity
$v^{*}=2v_{e}v_{h}/(v_{e}+v_{h})$ that is relevant for the avalanche growth in
the gain layer is shown as well. b) Probability for the electron to arrive at
$x=w$ between times $t$ and $t+dt$ for $w=10\,\mu$m, for a photon with
$l_{a}=1\,\mu$m entering the layer from the left side. The 'no diffusion'
curve refers to Eq. 2 and the 'diffusion' curve to Eq. 6.
The arrival time distribution of the electron at $x=w$ is therefore
$\rho_{1}(t)=\int_{0}^{w}P(x_{0})\delta[t-(w-x_{0})/v_{e}]dx_{0}=\frac{w}{l_{a}(e^{w/l_{a}}-1)}\frac{1}{T}e^{\frac{tw}{l_{a}T}}\,\Theta(T-t)$
(2)
where we have expressed the velocity $v_{e}$ by the maximum drift time
$T=w/v_{e}$ of the electrons inside the conversion layer. An example is shown
in Fig. 5b. The variance of the arrival time is then
$\sigma_{t}^{2}=\int_{0}^{T}t^{2}\,\rho_{1}(t)dt-\left(\int_{0}^{T}t\,\rho_{1}(t)dt\right)^{2}=T^{2}\left(\frac{l_{a}^{2}}{w^{2}}-\frac{1}{4\sinh[w/2l_{a}]^{2}}\right)$
(3)
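Eqs. 2 and 3 can be cross-checked with a short Monte Carlo: sample the conversion depth from the truncated exponential of Eq. 1 by inverting its cumulative distribution, convert to a drift time, and compare the sample variance with the closed form. A sketch for the parameters of Fig. 5b ($w=10\,\mu$m, $l_{a}=1\,\mu$m) and an assumed saturated drift velocity $v_{e}=0.1\,\mu$m/ps:

```python
import math
import random

w, l_a, v_e = 10.0, 1.0, 0.1        # um, um, um/ps (v_e is an assumed value)
T = w / v_e                          # maximum drift time in ps

rng = random.Random(1)

def sample_arrival_time():
    # Inverse-CDF sampling of the truncated exponential of Eq. 1,
    # followed by the drift of the electron to x = w.
    u = rng.random()
    x0 = -l_a * math.log(1.0 - u * (1.0 - math.exp(-w / l_a)))
    return (w - x0) / v_e

times = [sample_arrival_time() for _ in range(200_000)]
mean = sum(times) / len(times)
var = sum((t - mean) ** 2 for t in times) / len(times)

# Closed-form variance from Eq. 3.
var_exact = T**2 * (l_a**2 / w**2 - 1.0 / (4.0 * math.sinh(w / (2.0 * l_a))**2))
print(f"MC variance: {var:.2f} ps^2, Eq. 3: {var_exact:.2f} ps^2")
```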
To include the effect of diffusion, we use the fact that an electron deposited
at position $x_{0}$ at $t=0$ and moving with an average velocity of $v_{e}$
will be found at position $x$ after a time $t$ with a probability of
$p(x,x_{0},t)dx=\frac{1}{\sqrt{2\pi}\sqrt{2Dt}}\exp\left[-\frac{(x-(x_{0}+v_{e}t))^{2}}{2(2Dt)}\right]dx$
(4)
The standard deviation of the distribution is given by $\sigma(t)=\sqrt{2Dt}$.
The probability for an electron to arrive at $x=w$ between time $t$ and $t+dt$
is then $p(w,x_{0},t)v_{e}dt$. Since the probability of a photon absorption at
$x_{0}$ is given by $P(x_{0})$ from Eq. 1, we find the arrival time
distribution $\rho_{2}(t)$ as
$\rho_{2}(t)=\int_{0}^{w}P(x_{0})p(w,x_{0},t)v_{e}dx_{0}=$ (5)
$\frac{v_{e}}{2l_{a}}\frac{1}{e^{w/l_{a}}-1}e^{(D+l_{a}v_{e})t/l_{a}^{2}}\left(\mbox{Erf}\left[\left(\frac{\sqrt{Dt}}{l_{a}}+\frac{v_{e}}{2}\sqrt{\frac{t}{D}}\right)\right]-\mbox{Erf}\left[\frac{\sqrt{Dt}}{l_{a}}+\frac{v_{e}}{2}\sqrt{\frac{t}{D}}-\frac{w}{2\sqrt{Dt}}\right]\right)$
(6)
with Erf$(z)=2/\sqrt{\pi}\int_{0}^{z}e^{-t^{2}}dt$. The variance evaluates to
$\sigma_{t}^{2}=T^{2}\left(\frac{l_{a}^{2}}{w^{2}}-\frac{1}{4\sinh[w/2l_{a}]^{2}}\right)+T\frac{2D}{v_{e}^{2}}\left(\frac{1}{1-e^{-w/l_{a}}}-\frac{l_{a}}{w}\right)+\frac{8D^{2}}{v_{e}^{4}}$
(7)
The first term is the one from Eq. 3, caused by the varying position of the
photon interaction together with the average drift time; the second term is
due to diffusion. The third term is an artefact of the assumption that a charge
placed at $x=0$ can diffuse into a region of $x<0$, so the term does not
vanish even for $w=0$. In a realistic implementation of a conversion layer,
the region of $x<0$ in Fig. 3 will represent a region where the electric field
drops sharply to low values. The electron might spend a rather long time in
this region before moving back into the high field region or it might even get
lost. We therefore count the cases where the electron moves to this area as
additional small inefficiency and neglect the term. In practice, photons
interacting at the boundary of the conversion layer will cause a long tail in
the time distribution and it becomes a practical question whether to include
the tails in the calculation of the time resolution or count them as
inefficiency. In the limit of large and small values of $l_{a}/w$ the time
resolution approximates to
$\sigma_{t}^{2}=\frac{T^{2}}{12}+\frac{DT}{v_{e}^{2}}\quad l_{a}\gg
w\qquad\sigma_{t}^{2}=T^{2}\frac{l_{a}^{2}}{w^{2}}+\frac{2DT}{v_{e}^{2}}\quad
l_{a}\ll w$ (8)
We assume $D=35$ cm${}^{2}/$s for electrons in silicon. For electric fields in
excess of $5\times 10^{4}$ V/cm the electron velocity is close to saturation
and we have $D/(v_{sat}^{e})^{2}=0.35$ ps. For a conversion layer of
$w=1/10/100\,\mu$m the saturated drift time is $T\approx 10/100/1000$ ps.
For $l_{a}\gg w$ the probability for the photon conversion position becomes
uniform across $w$, diffusion is negligible and the time resolution is
$\sigma_{t}=T/\sqrt{12}\approx 2.89/28.9/289$ ps for $w=1/10/100\,\mu$m. For
$l_{a}\ll w$ the photon conversion point is always close to $x=0$, diffusion
dominates and the time resolution is equal to
$\sigma_{t}=\sqrt{2DT}/v_{e}\approx 2.65/8.37/26.46$ ps for $w=1/10/100\,\mu$m.
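These numbers follow directly from the limits in Eq. 8; a quick numerical check with the same assumed constants ($D=35$ cm$^{2}$/s and saturated drift, so $D/v_{e}^{2}=0.35$ ps):

```python
import math

D_over_v2 = 0.35   # D / v_e^2 in ps, assuming D = 35 cm^2/s and saturated drift
for w, T in ((1.0, 10.0), (10.0, 100.0), (100.0, 1000.0)):   # um, ps
    sigma_uniform = T / math.sqrt(12.0)               # l_a >> w limit of Eq. 8
    sigma_diffusion = math.sqrt(2.0 * D_over_v2 * T)  # diffusion term, l_a << w
    print(f"w = {w:6.1f} um: l_a >> w -> {sigma_uniform:6.2f} ps, "
          f"l_a << w -> {sigma_diffusion:5.2f} ps")
```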
If the illumination takes place from the ’right’ side as indicated in Fig. 3b,
the time resolution becomes
$\sigma_{t}^{2}=T^{2}\left(\frac{l_{a}^{2}}{w^{2}}-\frac{1}{4\sinh[w/2l_{a}]^{2}}\right)+T\frac{2D}{v_{e}^{2}}\left(\frac{l_{a}}{w}+\frac{1}{1-e^{w/l_{a}}}\right)+\frac{8D^{2}}{v_{e}^{4}}$
(9)
This expression differs from the previous one only by the diffusion term,
because the distance between the conversion point and $x=w$ is now different.
In the limit of large and small values of $l_{a}$ we have
$\sigma_{t}^{2}=\frac{T^{2}}{12}+\frac{DT}{v_{e}^{2}}\quad l_{a}\gg
w\qquad\sigma_{t}^{2}=T^{2}\frac{l_{a}^{2}}{w^{2}}+\frac{2DT}{v_{e}^{2}}\frac{l_{a}}{w}\quad
l_{a}\ll w$ (10)
For $l_{a}\gg w$ the expression is equal to the one from above, while for
$l_{a}\ll w$ the variance goes to zero because the conversion point is close
to $x=w$. The expression of course does not apply if the absorption length
$l_{a}$ is of the same order as or smaller than the gain layer thickness $d$,
because the photons will then interact directly in the gain layer.
## 3 Electron-hole avalanches and breakdown, average signal
Figure 6: The primary electrons and holes deposited at $x=x_{0}$ are
multiplying, which results in a diverging avalanche in case the electric field
$E_{1}$ is above the breakdown limit.
An electron drifting inside the conversion layer will move to the gain layer
and trigger an avalanche starting from $x=0$. Alternatively a photon can
convert inside the gain layer and the e-h pair at position $x=x_{0}$ will
trigger the avalanche. To cover both situations, we treat the general case
where $n^{0}_{e}$ electrons and $n^{0}_{h}$ holes are deposited at $x=x_{0}$
at time $t=0$, as shown in Fig. 6.
To derive equations describing the avalanche, we allow for general position-
dependent electric fields $E_{1}(x)$. With the field orientated as shown in
Fig. 6, electrons move to the right and holes move to the left with velocities
$v_{e}(x),v_{h}(x)$. The probability for an electron to create an e-h pair
when travelling a distance $dx$ is $\alpha(x)dx$ while the probability for a
hole to produce an e-h pair over distance $dx$ is $\beta(x)dx$, where
$\alpha(x)$ and $\beta(x)$ are called the impact ionization coefficients. The
values for silicon (Fig. 7) are reported in [7] with the parameters listed in
Appendix A. Since $1/\alpha$ and $1/\beta$ refer to the average distance that
an electron or a hole has to travel in order to produce one additional e-h
pair, we see that only for fields in excess of $2{-}3\times 10^{5}$ V/cm is
there an appreciable probability of provoking an avalanche in a few $\mu$m of
silicon.
Figure 7: a) Impact ionization coefficient $\alpha$ for electrons and $\beta$
for holes as a function of the electric field. b) Minimum electric field value
provoking breakdown for a gain layer with constant electric field across a
given thickness $d$.
Fig. 8 shows a Monte Carlo (MC) simulation of a few avalanches starting with a
single electron. After some initial fluctuations the avalanche just grows
exponentially. There is also a finite probability that no diverging avalanche
develops.
Figure 8: Monte Carlo simulation for an electron-hole avalanche starting with
a single electron at $x_{0}=0$ for a $1\,\mu$m diode at a field of 4 V/$\mu$m.
After some initial fluctuations the number of charge carriers increases
exponentially. The dominant term from Eq. 30 approximates the average signal
extremely well for times $t>d/v^{*}\approx 11$ ps. When the avalanche is still
small there is a finite probability that no breakdown occurs, as is the case
for the avalanche MC4.
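Such a Monte Carlo can be sketched as a small branching random walk: every carrier advances in small time steps, ionizes with probability $\alpha v_{e}dt$ (electrons) or $\beta v_{h}dt$ (holes) per step, and leaves the gain layer at its edges. The coefficients, velocities and thickness below are illustrative round numbers rather than the silicon parametrization of Appendix A:

```python
import random

alpha, beta = 3.0, 1.0   # impact ionization coefficients in 1/um (illustrative)
v_e = v_h = 0.1          # drift velocities in um/ps (illustrative)
d, dt = 1.0, 0.1         # gain layer thickness (um) and time step (ps)

def avalanche(rng, n_max=500, t_max=300.0):
    """Simulate one avalanche starting from a single electron at x = 0.

    Returns True if the avalanche diverges (reaches n_max carriers),
    False if all carriers leave the gain layer first."""
    electrons, holes = [0.0], []
    t = 0.0
    while electrons or holes:
        t += dt
        if t > t_max or len(electrons) + len(holes) >= n_max:
            return True
        new_e, new_h = [], []
        for x in electrons:
            x += v_e * dt                             # electrons drift to x = d
            if x < d:
                new_e.append(x)
                if rng.random() < alpha * v_e * dt:   # impact ionization
                    new_e.append(x)
                    new_h.append(x)
        for x in holes:
            x -= v_h * dt                             # holes drift to x = 0
            if x > 0.0:
                new_h.append(x)
                if rng.random() < beta * v_h * dt:
                    new_e.append(x)
                    new_h.append(x)
        electrons, holes = new_e, new_h
    return False

rng = random.Random(7)
runs = 150
frac = sum(avalanche(rng) for _ in range(runs)) / runs
print(f"fraction of diverging avalanches: {frac:.2f}")
```

The fraction of diverging runs approaches the breakdown probability $p_{0}$ discussed in Section 4, which for these illustrative parameters is about 0.89.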
We denote as $n_{e}(x,t)dx$ and $n_{h}(x,t)dx$ the average number of electrons
and holes between position $x$ and $x+dx$ at time $t$ (note that this is
different from the notation used in [2]). These charge densities result in
local average current densities of $j_{e}(x,t)=v_{e}(x)n_{e}(x,t)$ and
$j_{h}(x,t)=-v_{h}(x)n_{h}(x,t)$. By the continuity equation $\partial
j/\partial x+\partial n/\partial t=\sigma$, with $\sigma$ the generation rate,
we therefore have
$\displaystyle\frac{\partial n_{e}(x,t)}{\partial t}+\frac{\partial
v_{e}(x)\,n_{e}(x,t)}{\partial x}$ $\displaystyle=$
$\displaystyle\alpha(x)v_{e}(x)n_{e}(x,t)+\beta(x)v_{h}(x)n_{h}(x,t)$ (11)
$\displaystyle\frac{\partial n_{h}(x,t)}{\partial t}-\frac{\partial
v_{h}(x)\,n_{h}(x,t)}{\partial x}$ $\displaystyle=$
$\displaystyle\alpha(x)v_{e}(x)n_{e}(x,t)+\beta(x)v_{h}(x)n_{h}(x,t)$
The fact that electrons move to the right and holes move to the left gives the
boundary conditions
$n_{e}(0,t)=0\qquad n_{h}(d,t)=0$ (12)
Since Eq. 11 represents a set of linear equations we can use the Ansatz
$n_{e}(x,t)=f(x)e^{St}$ and $n_{h}(x,t)=g(x)e^{St}$ and we find
$\displaystyle Sf(x)+[v_{e}(x)f(x)]^{\prime}$ $\displaystyle=$
$\displaystyle\alpha(x)v_{e}(x)f(x)+\beta(x)v_{h}(x)g(x)$ (13) $\displaystyle
Sg(x)-[v_{h}(x)g(x)]^{\prime}$ $\displaystyle=$
$\displaystyle\alpha(x)v_{e}(x)f(x)+\beta(x)v_{h}(x)g(x)$
with $f(0)=0$ and $g(d)=0$. The multiplication of electrons and holes can lead
to a finite amount of total charge in the avalanche ($S<0$) or it can diverge
and cause breakdown ($S>0$). The boundary between the two regimes is at $S=0$,
so by setting $S=0$ in the above equations and solving them with the given
boundary conditions we find the breakdown condition (Appendix B)
$\int_{0}^{d}\alpha(x)\,\exp\left[-\int_{0}^{x}(\alpha(x^{\prime})-\beta(x^{\prime}))dx^{\prime}\right]dx=1$
(14)
The breakdown condition is independent of $v_{e}(x)$ and $v_{h}(x)$. This
relation is usually derived by evaluating the point at which the gain for a
constant current injected into the gain layer diverges [8]. Evaluating the
breakdown equation for constant $\alpha$ and $\beta$ shows that breakdown
occurs if
$d>\frac{1}{\alpha-\beta}\ln\frac{\alpha}{\beta}$ (15)
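For constant coefficients this threshold is easy to verify numerically: at $d=\frac{1}{\alpha-\beta}\ln\frac{\alpha}{\beta}$ the integral on the left-hand side of Eq. 14 evaluates to exactly one. A sketch with arbitrary illustrative values of $\alpha$ and $\beta$:

```python
import math

alpha, beta = 3.0, 1.0   # 1/um, illustrative values with alpha > beta
d_min = math.log(alpha / beta) / (alpha - beta)   # threshold thickness, Eq. 15

# Evaluate the left-hand side of Eq. 14 at d = d_min with the midpoint rule.
N = 10_000
dx = d_min / N
integral = sum(alpha * math.exp(-(alpha - beta) * (i + 0.5) * dx) * dx
               for i in range(N))
print(f"d_min = {d_min:.4f} um, Eq. 14 integral at threshold = {integral:.6f}")
```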
The electric field at which breakdown takes place for a gain layer with a
given thickness $d$ is shown in Fig. 7b. For general electric field profiles,
Eqs. 11 can be efficiently solved with numerical methods. The solution can be
given in analytical form for constant values of $\alpha,\beta,v_{e},v_{h}$ and
is derived in [2]. It is represented as an infinite sum of exponential terms
with (generally complex valued) time constants. At least one time constant is
guaranteed to be real-valued. The largest real-valued time constant defines
the long-term behaviour of the avalanche. Above the breakdown limit, this term
determines the rate of exponential growth of the avalanche. Starting with
$n_{e}^{0}$ electrons and $n_{h}^{0}$ holes at position $x_{0}$ at time $t=0$,
it reads
$n_{e}(x,t)=\frac{1}{d}\,a(x)\left[n_{e}^{0}u_{e}(x_{0})+n_{h}^{0}u_{h}(x_{0})\right]\,e^{\gamma
v^{*}t}\qquad
n_{h}(x,t)=\frac{1}{d}\,b(x)\left[n_{e}^{0}u_{e}(x_{0})+n_{h}^{0}u_{h}(x_{0})\right]\,e^{\gamma
v^{*}t}$ (16)
with
$\displaystyle u_{e}(x_{0})$ $\displaystyle=$ $\displaystyle
e^{-a_{1}x_{0}}\sin(k-kx_{0}/d)$ (17) $\displaystyle u_{h}(x_{0})$
$\displaystyle=$ $\displaystyle\,\frac{e^{-a_{1}x_{0}}}{\alpha
d}\left[k\cos\left(k-kx_{0}/d\right)+\lambda_{1}\sin(k-kx_{0}/d)\right]$ (18)
$\displaystyle a(x)$ $\displaystyle=$
$\displaystyle\frac{2v_{h}ke^{a_{1}x}\sin\left(k\frac{x}{d}\right)}{(v_{e}+v_{h})(1+\lambda_{1})\sin
k}$ (19) $\displaystyle b(x)$ $\displaystyle=$
$\displaystyle\frac{2kv_{e}e^{a_{1}x}\left[k\cos\left(k\frac{x}{d}\right)+\lambda_{1}\sin\left(k\frac{x}{d}\right)\right]}{(v_{e}+v_{h})\beta
d(1+\lambda_{1})\sin k}$ (20)
and the constants $v^{*},a_{1},\gamma,k$ are defined by
$\displaystyle v^{*}$ $\displaystyle=$
$\displaystyle\frac{2v_{e}v_{h}}{v_{e}+v_{h}}$ (21) $\displaystyle a_{1}$
$\displaystyle=$ $\displaystyle\frac{\alpha v_{e}-\beta
v_{h}}{v_{e}+v_{h}}+\frac{v_{e}-v_{h}}{v_{e}+v_{h}}\frac{1}{d}\ \,\lambda_{1}$
(22) $\displaystyle\gamma$ $\displaystyle=$
$\displaystyle\frac{\alpha+\beta}{2}+\frac{\lambda_{1}}{d}$ (23)
$\displaystyle k$ $\displaystyle=$ $\displaystyle\sqrt{\alpha\beta
d^{2}-\lambda_{1}^{2}}$ (24)
The parameter $\lambda_{1}$ is the largest real solution of the equation
$\lambda_{1}+\sqrt{\alpha\beta d^{2}-\lambda_{1}^{2}}\cot\sqrt{\alpha\beta
d^{2}-\lambda_{1}^{2}}=0$ (25)
It holds that $-\infty<\lambda_{1}<d\sqrt{\alpha\beta}$. For
$\lambda_{1}<-d\sqrt{\alpha\beta}$ the constant $k$ will become imaginary,
which will still lead to real valued expressions for $n_{e}(x,t)$ and
$n_{h}(x,t)$ with $\sin,\cos$ becoming $\sinh,\cosh$. The functional form of
$a(x)$ and $b(x)$ as well as the equation for $\gamma$ were already derived in
[11]. Fig. 9a shows the functions
$u_{e}(x_{0}),u_{h}(x_{0}),u_{e}(x_{0})+u_{h}(x_{0})$ that determine how the
average growth of the avalanche depends on the position of a primary electron,
hole or e-h pair. They are the mirror images of the functions $a(x),b(x)$ from
Eqs. 19, 20 that determine the distribution of the electrons and holes inside
the gain layer [2].
The parameter $\gamma$ defines the exponential growth of the avalanche and is
shown in Fig. 9b. It has the following properties:
$\displaystyle\gamma$ $\displaystyle=$ $\displaystyle
0\qquad\qquad\qquad\quad\,\,\alpha\beta
d^{2}=\frac{\alpha\beta}{(\alpha-\beta)^{2}}\ln^{2}\frac{\beta}{\alpha}\leq 1$
(26) $\displaystyle\gamma$ $\displaystyle=$
$\displaystyle\frac{\alpha+\beta}{2}-\sqrt{\alpha\beta}\qquad\alpha\beta
d^{2}=1$ (27) $\displaystyle\gamma$ $\displaystyle=$
$\displaystyle\frac{\alpha+\beta}{2}\phantom{+\sqrt{\alpha\beta}}\qquad\alpha\beta
d^{2}=\frac{\pi^{2}}{4}\approx 2.47$ (28) $\displaystyle\gamma_{max}$
$\displaystyle=$
$\displaystyle\frac{\alpha+\beta}{2}+\sqrt{\alpha\beta}\qquad\alpha\beta
d^{2}\rightarrow\infty$ (29)
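In practice $\lambda_{1}$, and with it $\gamma$, is obtained from a one-dimensional root search on Eq. 25. A sketch for the illustrative constant values $\alpha=3/\mu$m, $\beta=1/\mu$m, $d=1\,\mu$m (so $\alpha\beta d^{2}=3$ and $k$ is real):

```python
import math

alpha, beta, d = 3.0, 1.0, 1.0   # illustrative values, alpha*beta*d^2 = 3

def f(lam):
    # Left-hand side of Eq. 25.
    k = math.sqrt(alpha * beta * d**2 - lam**2)
    return lam + k / math.tan(k)

# Scan downwards from the upper bound d*sqrt(alpha*beta) to bracket the
# largest real root, then refine by bisection.
lo = math.sqrt(alpha * beta) * d - 1e-9
while f(lo) > 0.0:
    lo -= 1e-3
hi = lo + 1e-3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
lam1 = 0.5 * (lo + hi)

gamma = (alpha + beta) / 2.0 + lam1 / d                      # Eq. 23
gamma_max = (alpha + beta) / 2.0 + math.sqrt(alpha * beta)   # Eq. 29
print(f"lambda_1 = {lam1:.4f}, gamma = {gamma:.4f}, gamma_max = {gamma_max:.4f}")
```

Since $\alpha\beta d^{2}>\pi^{2}/4$ here, the resulting $\gamma$ lies between $(\alpha+\beta)/2$ and $\gamma_{max}$, as Eqs. 28 and 29 require.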
Figure 9: a) The functions
$u_{e}(x_{0}),u_{h}(x_{0}),u_{e}(x_{0})+u_{h}(x_{0})$ from Eqs. 17, 18 that
determine how the average growth of the avalanche depends on the position of a
primary electron, hole or e-h pair. The values are for a $1\,\mu$m gain layer
at $E_{1}=4.5\times 10^{5}$ V/cm. b) $\gamma$ as a function of electric field
in silicon for different values of the gain layer thickness $d$. At the
breakdown limit we have $\gamma=0$. For higher electric fields $\gamma$
quickly approaches $\gamma_{\max}$.
The total number of electrons and holes is then given by
$N=\int_{0}^{d}n(x,t)dx$ and evaluates to
$N_{e}(t)=B_{e}\,\left[n_{e}^{0}u_{e}(x_{0})+n_{h}^{0}u_{h}(x_{0})\right]\,e^{\gamma
v^{*}t}\qquad
N_{h}(t)=B_{h}\,\left[n_{e}^{0}u_{e}(x_{0})+n_{h}^{0}u_{h}(x_{0})\right]\,e^{\gamma
v^{*}t}\qquad$ (30)
where we have
$\displaystyle B_{e}$ $\displaystyle=$
$\displaystyle\frac{2kv_{h}\left[e^{a_{1}d}(a_{1}d-k\cot k)+k\csc
k\right]}{(v_{e}+v_{h})(a_{1}^{2}d^{2}+k^{2})(1+\lambda_{1})}$ (31)
$\displaystyle B_{h}$ $\displaystyle=$
$\displaystyle\frac{2kv_{e}\left[e^{a_{1}d}(k^{2}+a_{1}d\lambda_{1}+k(a_{1}d-\lambda_{1})\cot
k)+k(\lambda_{1}-a_{1}d)\csc k\right]}{(v_{e}+v_{h})\beta
d(a_{1}^{2}d^{2}+k^{2})(1+\lambda_{1})}$ (32)
The total induced current becomes
$\displaystyle I(t)$ $\displaystyle=$ $\displaystyle
e_{0}\frac{E_{w}}{V_{w}}\left[v_{e}N_{e}(t)+v_{h}N_{h}(t)\right]$ (33)
$\displaystyle=$ $\displaystyle
e_{0}\frac{E_{w}}{V_{w}}\left(v_{e}B_{e}+v_{h}B_{h}\right)\left[n_{e}^{0}u_{e}(x_{0})+n_{h}^{0}u_{h}(x_{0})\right]e^{\gamma
v^{*}t}$ (34)
where $e_{0}$ is the electron charge. Here we have assumed a constant
weighting field $E_{w}/V_{w}$ in the region in which the charges are moving.
For the single photon detection using the conversion layer we have only a
single electron at $x_{0}=0$ and therefore $n_{e}^{0}=1,n_{h}^{0}=0$ and the
expression is
$I(t)=e_{0}\frac{E_{w}}{V_{w}}\left[v_{e}B_{e}+v_{h}B_{h}\right]\sin
k\,e^{\gamma v^{*}t}$ (35)
Assuming a velocity $v_{e}\approx v_{h}\approx v_{sat}\approx 0.1\,\mu$m/ps
and a weighting field of $E_{w}/V_{w}=1/d=1/(1\,\mu$m), the current
corresponding to $10^{5}$ charges at $t=44$ ps in Fig. 8 is 1.6 mA.
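As a cross-check of Eq. 30, the average densities of Eq. 11 can also be propagated directly with a first-order upwind scheme; the growth rate extracted at late times should reproduce $\gamma v^{*}$. A sketch for constant, illustrative parameters ($\alpha=3/\mu$m, $\beta=1/\mu$m, $v_{e}=v_{h}=0.1\,\mu$m/ps, $d=1\,\mu$m, for which the root of Eq. 25 gives $\gamma\approx 2.25/\mu$m):

```python
import math

alpha, beta = 3.0, 1.0   # 1/um, illustrative
v_e = v_h = 0.1          # um/ps, illustrative
d, nx = 1.0, 200         # layer thickness (um) and number of cells
dx = d / nx
dt = 0.5 * dx / max(v_e, v_h)          # CFL-limited explicit time step

n_e = [0.0] * nx                       # cell-averaged electron density
n_h = [0.0] * nx                       # cell-averaged hole density
n_e[0] = 1.0 / dx                      # one electron deposited near x = 0

def step(n_e, n_h):
    # First-order upwind update of Eq. 11 with n_e(0,t) = n_h(d,t) = 0.
    src = [alpha * v_e * n_e[i] + beta * v_h * n_h[i] for i in range(nx)]
    ne2, nh2 = [0.0] * nx, [0.0] * nx
    for i in range(nx):
        up_e = n_e[i - 1] if i > 0 else 0.0
        ne2[i] = n_e[i] + dt * (-v_e * (n_e[i] - up_e) / dx + src[i])
        up_h = n_h[i + 1] if i < nx - 1 else 0.0
        nh2[i] = n_h[i] + dt * (-v_h * (n_h[i] - up_h) / dx + src[i])
    return ne2, nh2

def total(n_e, n_h):
    return (sum(n_e) + sum(n_h)) * dx

t = 0.0
while t < 60.0:                        # let the dominant eigenmode establish
    n_e, n_h = step(n_e, n_h)
    t += dt
N1, t1 = total(n_e, n_h), t
while t < 80.0:
    n_e, n_h = step(n_e, n_h)
    t += dt
rate = math.log(total(n_e, n_h) / N1) / (t - t1)
print(f"measured growth rate: {rate:.4f}/ps, expected gamma*v* ~ 0.225/ps")
```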
## 4 Efficiency
In this section we calculate the probabilities $P_{e}(x)$ and $P_{h}(x)$ for a
single electron or a single hole placed at position $x$ in the gain layer to
cause breakdown. We follow [9] to establish the equations for these
quantities. We start by considering a single electron at position $x-dx$ which
moves in positive $x$-direction in the applied electric field. The probability
for it to create a diverging avalanche is $P_{e}(x-dx)$. Between $x-dx$ and
$x$ two things can happen: 1) with probability $(1-\alpha dx)$ the electron
does not multiply, and then the electron at position $x$ must create a
diverging avalanche, or 2) the electron multiplies over the distance $dx$, and
at least one of the resulting two electrons and one hole must create
breakdown. This can be written as
$P_{e}(x-dx)=(1-\alpha dx)P_{e}(x)+\alpha
dx\left[1-(1-P_{e}(x))^{2}(1-P_{h}(x))\right]$ (36)
Writing the corresponding equation for $P_{h}(x)$ and expanding for small $dx$
gives
$\displaystyle\frac{d\,P_{e}(x)}{d\,x}$ $\displaystyle=$
$\displaystyle-\alpha(x)[1-P_{e}(x)]\left[P_{e}(x)+P_{h}(x)-P_{e}(x)P_{h}(x)\right]$
(37) $\displaystyle\frac{d\,P_{h}(x)}{d\,x}$ $\displaystyle=$
$\displaystyle\phantom{+}\beta(x)[1-P_{h}(x)]\left[P_{e}(x)+P_{h}(x)-P_{e}(x)P_{h}(x)\right]$
Provided $\alpha(x)$ and $\beta(x)$ are known, these equations can be
integrated with the boundary conditions $P_{e}(d)=0$ and $P_{h}(0)=0$.
Following [10] we define $P(x)=P_{e}(x)+P_{h}(x)-P_{e}(x)P_{h}(x)$ and by
differentiating this expression and using Eqs. 37 we have
$\frac{d\,P(x)}{d\,x}=-(\alpha-\beta)P(x)[1-P(x)]$ (38)
We use the boundary condition $P(0)=P_{e}(0)=p_{0}$, with $p_{0}$ still to be
determined, giving the solution
$P(x)=\frac{p_{0}}{p_{0}+(1-p_{0})\exp\left[\int_{0}^{x}(\alpha(x^{\prime})-\beta(x^{\prime}))dx^{\prime}\right]}$
(39)
Since $P(x)$ can be written as $P(x)=1-[(1-P_{e}(x))(1-P_{h}(x))]$ we see that
$P(x)$ refers to the breakdown efficiency of a single e-h pair, which we will
use later for the efficiency calculation for minimum ionizing particles (MIPs). Knowing $P(x)$ and using
$P_{h}(d)=0$ we can integrate Eqs. 37 and have
$P_{e}(x)=1-\exp\left(-\int_{x}^{d}\alpha(x^{\prime})P(x^{\prime})dx^{\prime}\right)\qquad
P_{h}(x)=1-\exp\left(-\int_{0}^{x}\beta(x^{\prime})P(x^{\prime})dx^{\prime}\right)$
(40)
And finally $P_{e}(0)=p_{0}$ gives the equation that allows us to determine
$p_{0}$
$\displaystyle
p_{0}=1-\exp\left(-\int_{0}^{d}\frac{p_{0}\,\alpha(x^{\prime})}{p_{0}+(1-p_{0})\exp\left[\int_{0}^{x^{\prime}}(\alpha(x^{\prime\prime})-\beta(x^{\prime\prime}))dx^{\prime\prime}\right]}dx^{\prime}\right)$
(41)
In general this equation can only be evaluated numerically. It also reveals
the breakdown condition once more: close to the threshold of breakdown the
value of $p_{0}$ will be small. For small values of $p_{0}$ the expression
$1-\exp\left[-p_{0}/(p_{0}+(1-p_{0})e^{v})\right]$ is approximated by
$p_{0}e^{-v}+O(p_{0}^{2})$, so the above relation turns into the breakdown
condition of Eq. 14.
For constant $\alpha$ and $\beta$ the above expressions evaluate to
$\displaystyle P(x)$ $\displaystyle=$
$\displaystyle\frac{p_{0}}{p_{0}+[1-p_{0}]e^{(\alpha-\beta)x}}$ (42)
$\displaystyle P_{e}(x)$ $\displaystyle=$ $\displaystyle
1-e^{-\alpha(d-x)}\left[\frac{(1-p_{0})e^{(\alpha-\beta)d}+p_{0}}{(1-p_{0})e^{(\alpha-\beta)x}+p_{0}}\right]^{\frac{\alpha}{\alpha-\beta}}$
(43) $\displaystyle P_{h}(x)$ $\displaystyle=$ $\displaystyle 1-e^{-\beta
x}\left[(1-p_{0})e^{(\alpha-\beta)x}+p_{0}\right]^{\frac{\beta}{\alpha-\beta}}$
(44)
Eq. 41, which determines $p_{0}$, reads as
$e^{-(\alpha-\beta)d}=\frac{1}{p_{0}}\left[(1-p_{0})^{1-\frac{\beta}{\alpha}}-(1-p_{0})\right]$
(45)
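Eq. 45 is a one-dimensional root-finding problem for $p_{0}$; once it is solved, the closed forms of Eqs. 42-44 follow. A sketch with the illustrative values $\alpha=3/\mu$m, $\beta=1/\mu$m, $d=1\,\mu$m, which also verifies the consistency condition $P_{e}(0)=p_{0}$:

```python
import math

alpha, beta, d = 3.0, 1.0, 1.0   # illustrative values above the breakdown limit

def rhs(p0):
    # Right-hand side of Eq. 45; monotonically decreasing in p0.
    return ((1.0 - p0) ** (1.0 - beta / alpha) - (1.0 - p0)) / p0

target = math.exp(-(alpha - beta) * d)        # left-hand side of Eq. 45
lo, hi = 1e-9, 1.0 - 1e-9
for _ in range(100):                          # bisection for p0
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if rhs(mid) > target else (lo, mid)
p0 = 0.5 * (lo + hi)

# Consistency check: P_e(0) from Eq. 43 must reproduce p0.
P_e0 = 1.0 - math.exp(-alpha * d) * (
    (1.0 - p0) * math.exp((alpha - beta) * d) + p0) ** (alpha / (alpha - beta))
print(f"p0 = {p0:.4f}, P_e(0) = {P_e0:.4f}")
```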
Fig. 10 shows a few examples.
Figure 10: a) Breakdown probability (efficiency) for a single electron and a
single hole deposited at position $x$ inside a gain layer of $d=1\,\mu$m for
two values of the electric field. b) Breakdown probability $p_{0}$
(efficiency) for a single electron placed at $x=0$ for different values of the
gain layer thickness $d$.
## 5 Time resolution
The statistics of electron-hole avalanches and the resulting contribution to
the time resolution are discussed in detail in [2], where the problem is
treated by using the theory of continuous-time Markov processes.
### 5.1 Avalanches in the absence of boundaries
For the case of an e-h avalanche in the absence of any boundaries and for a
constant electric field, an alternative approach can be used to derive the
avalanche fluctuations [12] that does not require the formalism developed in
[2] and which is closely related to the arguments resulting in Eq. 37. We
define $p_{e}(n,m,\Delta)$ to be the probability to find $n$ electrons and $m$
holes at time $t_{0}+\Delta$ for an avalanche starting with a single electron
at $t_{0}$. For an avalanche starting at $t=0$, there are two ways to reach
this state at the later time $t+dt$. First, the initial electron does not
multiply in the first small time interval $[0,dt]$ (with probability $1-\alpha
v_{e}dt$), but then produces $n$ electrons and $m$ holes during the subsequent
time interval $[dt,t+dt]$. This happens with a probability $p_{e}(n,m,t)$.
Second, the electron already multiplies in the interval $[0,dt]$ (with
probability $\alpha v_{e}dt$) and the resulting two electrons and one hole
multiply into $n$ electrons and $m$ holes during $[dt,t+dt]$. This is written
as
$\displaystyle p_{e}(n,m,t+dt)$ $\displaystyle=$ $\displaystyle(1-\alpha
v_{e}dt)p_{e}(n,m,t)$ (46) $\displaystyle+$ $\displaystyle\alpha
v_{e}dt\sum_{i=1}^{n}\sum_{j=1}^{i}\sum_{r=1}^{m}\sum_{s=1}^{r}p_{e}(n-i-j,m-r-s,t)p_{e}(i,r,t)p_{h}(j,s,t)$
(47)
where $p_{h}(n,m,\Delta)$ is the probability that an avalanche starting with a
single hole at time $t_{0}$ produces $n$ electrons and $m$ holes at
$t_{0}+\Delta$. Writing the corresponding equation for $p_{h}(n,m,t)$ and
expanding for small $dt$ results in the equations defining $p_{e}(n,m,t)$ and
$p_{h}(n,m,t)$. The equations have a structure similar to Eq. 37. Their
solution is given in Appendix C and equal to the one derived in [2]. Having
$n_{e}^{0}$ electrons and $n_{h}^{0}$ holes at $t=0$, the probability to have
$n$ additionally created e-h pairs at time $t$ is
$p(n,t)=\frac{\Gamma(A+n)}{\Gamma(A)\Gamma(1+n)}\left(\frac{1}{\nu(t)}\right)^{A}\left(1-\frac{1}{\nu(t)}\right)^{n}\qquad\sum_{n=0}^{\infty}p(n,t)=1$
(48) $A=\frac{n_{e}^{0}\alpha v_{e}+n_{h}^{0}\beta v_{h}}{\alpha v_{e}+\beta
v_{h}}\qquad\nu(t)=e^{\lambda_{t}t}\qquad\lambda_{t}=\alpha v_{e}+\beta v_{h}$
(49)
The average number of e-h pairs and the variance are
$\overline{n}(t)=[\nu(t)-1]A\qquad\sigma_{n}^{2}(t)=\nu(t)[\nu(t)-1]A$ (50)
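Eq. 48 is a negative-binomial (Polya) distribution in $n$ with shape parameter $A$ and success probability $1/\nu(t)$, so the moments of Eq. 50 can be checked by direct sampling. A minimal sketch (the values of $A$ and $\nu$ are arbitrary illustrative choices):

```python
import numpy as np

# Eq. 48 is a negative binomial: "shape" A, success probability 1/nu(t).
# Check the moments of Eq. 50 by direct sampling.
rng = np.random.default_rng(0)

A = 0.7        # e.g. a primary electron with alpha v_e/(alpha v_e + beta v_h) = 0.7
nu = 50.0      # nu(t) = exp(lambda_t t) at some fixed time t
samples = rng.negative_binomial(A, 1.0 / nu, size=1_000_000)

print(samples.mean(), (nu - 1.0) * A)          # n_bar = (nu - 1) A
print(samples.var(), nu * (nu - 1.0) * A)      # sigma_n^2 = nu (nu - 1) A
```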
In the continuous approximation for $n$, which is valid for large $n$ (and
therefore for large avalanches i.e. late times) we have
$p(n,t)=\frac{n^{A-1}}{\Gamma(A)}\,\left(\frac{A}{\overline{n}(t)}\right)^{A}e^{-nA/\overline{n}(t)}\qquad\int_{0}^{\infty}p(n,t)dn=1$
(51)
with
$\overline{n}(t)=Ae^{\lambda_{t}t}\qquad\sigma_{n}(t)=\frac{1}{\sqrt{A}}\overline{n}(t)$
(52)
Figure 11: a) A threshold is applied to the signal. The fluctuations of the
avalanche size result in a fluctuation of the threshold crossing time, which
defines the time resolution. b) Time response function $\rho(t)$ for different
values of the parameter $A$.
In order to measure the 'signal time' we apply a threshold to the signal,
which is proportional to the total number of charge carriers. Fig. 11a shows
how the avalanche fluctuations lead to fluctuations of the threshold crossing
time, which determine the time resolution. The probability that the signal
crosses the threshold of $n$ e-h pairs between time $t$ and $t+dt$, the so-
called time response function, is given by
$\rho(n,t)dt=\lambda_{t}\,\frac{\Gamma(1+A+n)}{\Gamma(A)\Gamma(1+n)}\left(\frac{1}{\nu(t)}\right)^{A}\left(1-\frac{1}{\nu(t)}\right)^{n}dt\qquad\int_{0}^{\infty}\rho(n,t)dt=1$
(53)
The average threshold crossing time and its variance are given by
$\overline{t}=\frac{1}{\lambda_{t}}\left[\psi_{0}(n+A+1)-\psi_{0}(A)\right]\qquad\sigma=\frac{1}{\lambda_{t}}\sqrt{\psi_{1}(A)-\psi_{1}(n+A+1)}$
(54)
where $\psi_{0}(z)=d\ln\Gamma(z)/dz$ is the digamma function and
$\psi_{1}(z)=d^{2}\ln\Gamma(z)/dz^{2}$ is the trigamma function. For large
values of $n$ the above time response function approximates to
$\rho(n,t)=\frac{\lambda_{t}}{\Gamma(A)}\exp\left[A\ln
n-A\lambda_{t}t-ne^{-\lambda_{t}t}\right]$ (55)
and the average threshold crossing time and the time resolution approximate to
$\overline{t}=\frac{1}{\lambda_{t}}\left[\log
n-\psi_{0}(A)\right]\qquad\sigma=\frac{1}{\lambda_{t}}\sqrt{\psi_{1}(A)}$ (56)
so the time resolution becomes independent of the threshold. This can be
understood when looking at Fig. 11a and it is a well-established fact for
detectors like Resistive Plate Chambers [13], where avalanche fluctuations
dominate the signal characteristics [14, 15]: scaling the threshold by a
constant $c_{1}$ will just shift the time response function by $\Delta t=(\ln
c_{1})/\lambda_{t}$ without altering its shape. If we are not interested in
the absolute time of the threshold crossing but just the time variations we
can arbitrarily set $n=1$ and have the time response function
$\rho(t)=\frac{\lambda_{t}}{\Gamma(A)}\exp\left[-A\lambda_{t}t-e^{-\lambda_{t}t}\right]\qquad\int_{-\infty}^{\infty}\rho(t)dt=1$
(57)
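The normalization of Eq. 57 and the threshold-independent resolution of Eq. 56 can be verified numerically. A small sketch (with $A=1$ and $\lambda_{t}=1$ in arbitrary units):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, polygamma

# Check that Eq. 57 is normalized and that its standard deviation equals
# sqrt(psi_1(A))/lambda_t (Eq. 56). A = 1 corresponds to a primary e-h pair;
# lambda_t = 1 sets the time unit.
A, lam = 1.0, 1.0

def rho(t):
    return lam / gamma(A) * np.exp(-A * lam * t - np.exp(-lam * t))

norm, _ = quad(rho, -20.0, 60.0)
mean, _ = quad(lambda t: t * rho(t), -20.0, 60.0)
var, _ = quad(lambda t: (t - mean) ** 2 * rho(t), -20.0, 60.0)

print(norm)                                            # ~ 1
print(np.sqrt(var), np.sqrt(polygamma(1, A)) / lam)    # both ~ pi/sqrt(6) ~ 1.28
```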
An example of the time response function for different values of the parameter
$A$ is shown in Fig. 11b. The function $\sqrt{\psi_{1}(x)}$ approaches
$\sqrt{1/x+1/x^{2}}$ for both small and large values of $x$, and the
approximation is accurate to within 10% over the entire range $x>0$, as can be
seen in Fig. 12a. The values of
$\sqrt{\psi_{1}(A)}$ for a primary electron, a primary hole and a primary e-h
pair are shown in Fig. 12b.
We see that when starting with a primary hole, the time resolution is
significantly worse when compared to a primary electron or a primary e-h pair.
An electron travels on average a distance of $1/\alpha$ and a hole travels on
average a distance of $1/\beta$ before creating an additional e-h pair. Since
$\alpha>\beta$ in silicon, the holes travel significantly farther than the
electrons before triggering the breakdown, so the time fluctuations are
larger. Silicon SPADs and SiPMs are therefore designed such that the electrons
trigger the avalanche in the gain region. Charge carrier mobilities are quite
different in other semiconductors, and the arrangement of doping layers can
therefore differ.
Figure 12: a) The trigamma function $\psi_{1}(x)$. b) Values of
$\sqrt{\psi_{1}(A)}$ for different initial conditions. Starting with a single
e-h pair we have $A=1$ and $\sqrt{\psi_{1}(1)}=\pi/\sqrt{6}\approx 1.28$.
### 5.2 Avalanches in a gain layer of finite thickness
Our central result in [2] is the conclusion that the finite thickness of the
gain layer will to first order not affect the avalanche fluctuations but will
only affect the average growth of the avalanche. This approximation works best
if the primary charge carrier has the larger impact ionization coefficient,
i.e. if the primary charge in the gain layer is either an electron or an e-h
pair for silicon. Then, the probability to have a total number of $n$
electrons and $m$ holes at time $t$ in the gain layer, starting with
$n_{e}^{0}$ electrons and $n_{h}^{0}$ holes at $x=x_{0}$ at time $t=0$, is
$p(n,t)\approx[1-\varepsilon(x_{0})]\delta(n)+\varepsilon(x_{0})\frac{n^{A-1}}{\Gamma(A)}\,\left(\frac{A}{N_{e}(t)}\right)^{A}e^{-nA/N_{e}(t)}\qquad
m=\frac{B_{h}}{B_{e}}n\qquad A=\frac{n_{e}^{0}\alpha v_{e}+n_{h}^{0}\beta
v_{h}}{\alpha v_{e}+\beta v_{h}}\qquad$ (58)
with $N_{e}(t)$ from Eq. 30 and $B_{e},B_{h}$ from Eqs. 31, 32. The number
of electrons $n$ and the number of holes $m$ are considered continuous
variables and taken to be fully correlated in this approximation (as shown in
[2], this is strictly true only at late times). The efficiency
$\varepsilon(x_{0})$ that the $n_{e}^{0}$ electrons and $n_{h}^{0}$ holes at
$x{=}x_{0}$ trigger a diverging avalanche is
$\varepsilon(x_{0})=1-[1-P_{e}(x_{0})]^{n_{e}^{0}}[1-P_{h}(x_{0})]^{n_{h}^{0}}$
(59)
with $P_{e}(x_{0})$ and $P_{h}(x_{0})$ from Eqs. 43, 44. The corresponding
time response function in reference to Eq. 57 is then
$\rho(t)\approx\frac{\gamma v^{*}\,h^{A}}{\Gamma(A)}\exp\left[-A\gamma
v^{*}t-he^{-\gamma v^{*}t}\right]$ (60)
with
$h=\frac{A\,\varepsilon(x_{0})}{B_{e}\left[n_{e}^{0}u_{e}(x_{0})+n_{h}^{0}u_{h}(x_{0})\right]}$
(61)
Here we have divided $N_{e}(t)$ by the efficiency in order to account for the
fact that only diverging avalanches cross the threshold. The
corresponding time resolution due to the avalanche fluctuations, when keeping
$n_{e}^{0},n_{h}^{0},x_{0}$ constant, is
$\sigma=\frac{\sqrt{\psi_{1}(A)}}{\gamma v^{*}}$ (62)
where $\gamma$ and $v^{*}$ are from Eq. 21 and Eq. 23. The value of $1/\gamma
v^{*}$ for different values of the thickness $d$ of the gain layer is shown in
Fig. 13a.
Figure 13: a) The factor $1/\gamma v^{*}$ for different values of the gain
layer thickness. b) Contributions to the time resolution for a photon
interacting in the gain layer. The value of $\pi/\sqrt{6}$ indicated by the
horizontal line is due to avalanche fluctuations and the other lines are due
to fluctuations of the position of the photon conversion assuming a uniform
distribution. The two components have to be added in quadrature, so we see
that for thick gain layers and high fields the position dependence starts to
dominate over the avalanche fluctuations.
For a photon interacting in the conversion layer (Fig. 2b), the electron
arriving at the gain layer will therefore start an avalanche from $x_{0}=0$
and the time resolution contribution of the gain layer will be
$\sigma\approx\frac{\sqrt{\psi_{1}(A)}}{\gamma v^{*}}\qquad A=\frac{\alpha
v_{e}}{\alpha v_{e}+\beta v_{h}}$ (63)
As seen in Fig. 12 we have $\sqrt{\psi_{1}(A)}\approx 1.5$ in the entire
electric field range, so e.g. for a $1\,\mu$m SPAD at $4\times 10^{5}\,$V/cm
the contribution to the time resolution will be around 6 ps. For higher fields
a time resolution of 1 ps should theoretically be achievable.
In case the photon interacts inside the gain layer and produces an e-h pair,
the varying position of the photon interaction will also contribute to the
time resolution. Having one e-h pair at position $x_{0}$ as shown in Fig. 2a
we have $n_{e}^{0}=n_{h}^{0}=1$ and therefore $A=1$,
$\varepsilon(x_{0})=P(x_{0})$ from Eq. 42 and the time response function is
$\rho(t,x_{0})=\gamma v^{*}\,h(x_{0})\exp\left[-\gamma
v^{*}t-h(x_{0})e^{-\gamma v^{*}t}\right]\qquad
h(x_{0})=\frac{P(x_{0})}{B_{e}\left[u_{e}(x_{0})+u_{h}(x_{0})\right]}\qquad$
(64)
The probability of conversion at position $x_{0}$ is given by
$p_{1}(x_{0})dx_{0}=\frac{1}{1-e^{-d/l_{a}}}\frac{1}{l_{a}}e^{-x_{0}/l_{a}}dx_{0}$
(65)
and the time response function including the fluctuation of the conversion
point is therefore defined by
$\overline{\rho}(t)=\int_{0}^{d}p_{1}(x_{0})\rho(t,x_{0})dx_{0}$ (66)
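For MC studies of Eq. 66, the conversion point of Eq. 65 can be sampled by inverting its CDF. A sketch (the values of $l_{a}$ and $d$ are illustrative):

```python
import numpy as np

# Inverse-CDF sampling of the photon conversion point of Eq. 65:
# a truncated exponential with absorption length l_a on [0, d].
rng = np.random.default_rng(1)

def sample_x0(n, l_a, d):
    u = rng.random(n)
    return -l_a * np.log(1.0 - u * (1.0 - np.exp(-d / l_a)))

x_uniform = sample_x0(100_000, l_a=100.0, d=1.0)   # l_a >> d: nearly uniform
x_surface = sample_x0(100_000, l_a=0.05, d=1.0)    # l_a << d: near x = 0
print(x_uniform.mean())   # ~ d/2
print(x_surface.mean())   # ~ l_a
```

The two parameter choices reproduce the two limits discussed below: a uniform conversion point for $l_{a}\gg d$ and conversion at the sensor edge for $l_{a}\ll d$.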
The time resolution $\sigma^{2}=\int t^{2}\overline{\rho}(t)dt-(\int
t\overline{\rho}(t)dt)^{2}$ has then two components, a contribution
$\sigma_{av}$ from avalanche fluctuations and a contribution $\sigma_{pos}$
from the varying position of the primary e-h pair in the gain layer
$\displaystyle(\gamma v^{*})^{2}\sigma_{av}^{2}$ $\displaystyle=$
$\displaystyle\frac{\pi^{2}}{6}$ (67) $\displaystyle(\gamma
v^{*})^{2}\sigma_{pos}^{2}$ $\displaystyle=$
$\displaystyle\int_{0}^{d}p_{1}(x_{0})\left[\ln
h(x_{0})\right]^{2}dx_{0}-\left(\int_{0}^{d}p_{1}(x_{0})\ln
h(x_{0})dx_{0}\right)^{2}$ (68)
In the case of a very small photon absorption length i.e. $l_{a}\ll d$ the
primary e-h pair will always be created at the very edge of the sensor which
is equal to the situation of $x_{0}=0$ and the time resolution is given by
$\sigma_{av}$. In the other extreme of $l_{a}\gg d$ there will be a uniform
distribution for the position of the photon interaction in the gain layer and
the contribution to the time resolution is given in Fig. 13b. Whether the
avalanche fluctuations or the position fluctuations dominate depends on the
gain layer thickness and the electric field. We can conclude that the time
resolution is well approximated by $\sigma_{1}\approx c_{0}/\gamma v^{*}$,
where the main dependence is given by the variation of $\gamma$ with sensor
thickness and electric field (Fig. 13a) while $c_{0}\approx 1{-}3$ and $v^{*}$
is saturated at $\approx 0.1\,\mu$m/ps.
The term $\sigma_{pos}$ can also be derived directly from the average
avalanche growth. Neglecting avalanche fluctuations the primary e-h pair will
simply trigger an average avalanche according to Eq. 30
$N(t)=B_{e}\frac{u_{e}(x_{0})+u_{h}(x_{0})}{P(x_{0})}e^{\gamma v^{*}t}$ (69)
where we have divided by the efficiency to account for the avalanches that do
not cross the threshold. Applying a threshold $N_{thr}$ to this signal gives a
threshold crossing time of
$t(x_{0})=\frac{1}{\gamma v^{*}}\left[\ln N_{thr}-\ln h(x_{0})\right]$ (70)
and the variance $\overline{t^{2}}-\overline{t}^{2}$ is the one from Eq. 68.
## 6 Charged particle detection with SPADs
Figure 14: A charged particle is leaving clusters of e-h pairs in the gain
layer.
SPADs can also be used for detection of charged particles. Charged particles
interacting with silicon produce clusters of e-h pairs along their track, with
an average distance of $\lambda\approx 0.21\,\mu$m for MIPs. A SPAD sensor of
1 $\mu$m or 2 $\mu$m thickness will therefore be highly efficient to charged
particles and there is no need for a conversion layer. The probability
$p_{clu}(n)$ for having $n>0$ e-h pairs in a single cluster is approximately
given by a $1/n^{2}$ distribution, but there are significant deviations at
small values of $n$ in silicon. Fig. 15a shows the cluster size distribution
for silicon as calculated with HEED [16].
Figure 15: a) Cluster size distribution i.e. the probability for a single
cluster to contain $n$ electron-hole pairs as calculated with HEED [16]. b)
Probability $p(n,d)$ to find $n$ e-h pairs inside a silicon layer of thickness
$d$. The solid lines refer to a $1/n^{2}$ distribution in both plots.
Assuming a sensor thickness $d$ and an average distance between clusters of
$\lambda$, the average number of clusters in the sensor is $n_{0}=d/\lambda$.
Assuming the cluster size distribution $p_{clu}(n)$, the probability $p(n,d)$
to find $n$ e-h pairs in the sensor can then be calculated by using the
Z-transform as [15]
$P_{clu}(z)=\sum_{n=1}^{\infty}\frac{p_{clu}(n)}{z^{n}}\qquad
G(z)=\frac{\exp\left[\frac{d}{\lambda}P_{clu}(z)\right]-1}{e^{d/\lambda}-1}\qquad
p(n,d)=\frac{1}{n!}\left[\frac{d^{n}G(1/z)}{dz^{n}}\right]_{z=0}$ (71)
It is shown in Fig. 15b.
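In practice $p(n,d)$ can also be built by direct convolution instead of the Z-transform of Eq. 71: weight the $k$-fold convolutions of the cluster size distribution by the Poisson probability for $k$ clusters, conditioned on at least one cluster. A sketch assuming a truncated $1/n^{2}$ cluster size distribution (the silicon distribution of Fig. 15a deviates from this at small $n$):

```python
import numpy as np
from math import exp, factorial

# Build p(n, d) of Eq. 71 by direct convolution: Poisson-distributed cluster
# count (conditioned on >= 1) compounded with the cluster size distribution.
# A truncated 1/n^2 distribution is assumed here for p_clu.
N_MAX = 200
n = np.arange(1, N_MAX + 1)
p_clu = 1.0 / n**2
p_clu /= p_clu.sum()
one_cluster = np.concatenate(([0.0], p_clu))      # index = number of e-h pairs

def p_n_d(n0, k_max=30):
    """n0 = d/lambda is the average number of clusters in the sensor."""
    weights = np.array([exp(-n0) * n0**k / factorial(k) for k in range(1, k_max + 1)])
    weights /= 1.0 - exp(-n0)                     # condition on k >= 1 clusters
    p = np.zeros(N_MAX + 1)
    conv = one_cluster.copy()                     # distribution for k clusters
    for w in weights:
        p += w * conv
        conv = np.convolve(conv, one_cluster)[:N_MAX + 1]
    return p

p = p_n_d(1.0 / 0.21)          # d = 1 um, lambda = 0.21 um
print(p[0], p.sum())           # p[0] = 0; sum < 1 only due to truncation at N_MAX
```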
### 6.1 Efficiency
First we want to calculate the efficiency for a charged particle to cause
breakdown in the gain layer. Since $P(x)$ from Eq. 39 is the probability for a
single e-h pair at position $x$ to trigger a diverging avalanche, the
probability $Q(x)$ that this single e-h pair does not cause breakdown is
$Q(x)=1-P(x)=\frac{1}{1+\frac{p_{0}}{1-p_{0}}\exp\left[-\int_{0}^{x}(\alpha(x^{\prime})-\beta(x^{\prime}))dx^{\prime}\right]}$
(72)
The probability $dq$ that there is no breakdown caused by the charged particle
traversing the interval $[x,x+\Delta x]$ is given by a) the probability
$1-\Delta x/\lambda$ that there is no interaction in $\Delta x$ and b) the
probability that there is an interaction but it does not lead to breakdown,
which we can write as
$dq=\left(1-\frac{\Delta x}{\lambda}\right)+\frac{\Delta
x}{\lambda}\sum_{n=1}^{\infty}p_{clu}(n)Q^{n}(x)$ (73)
From this, the probability $q$ that the charged particle does not create a
diverging avalanche in any of the slices $\Delta x$ is derived in Appendix D
and the efficiency $p=1-q$ is given by
$p=1-\exp\left[-\frac{1}{\lambda}\left(d-\sum_{n=1}^{\infty}p_{clu}(n)\int_{0}^{d}Q(x)^{n}dx\right)\right]$
(74)
If $Q(x)=0$ i.e. if an e-h pair deposited at $x$ will definitely cause
breakdown, we have $p=1-e^{-d/\lambda}$, which is the correct probability that
there is at least one interaction within $d$. The evaluation for constant
$\alpha,\beta$ is given in Appendix E and shown in Fig. 16 together with the
efficiency for a single electron starting at $x_{0}=0$ in the gain layer.
Above the breakdown field the efficiency rises steeply to the maximum level
$1-e^{-d/\lambda}$. For a gain layer thickness of $d>1\,\mu$m the efficiency
is larger than 99% above the breakdown limit.
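A numerical sketch of Eq. 74 for constant $\alpha,\beta$ follows; the values of $\alpha$, $\beta$ and the truncated $1/n^{2}$ cluster size distribution are illustrative assumptions, roughly corresponding to silicon around $4\times 10^{5}\,$V/cm.

```python
import math

# Numerical sketch of Eq. 74 for constant alpha, beta (illustrative values).
alpha, beta = 3.2, 1.0          # 1/um
d, lam = 1.0, 0.21              # gain layer thickness, cluster distance (um)

# p0 from Eq. 45 by bisection (rhs decreases from beta/alpha towards 0)
lhs = math.exp(-(alpha - beta) * d)
lo, hi = 1e-9, 1.0 - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    rhs = ((1.0 - mid) ** (1.0 - beta / alpha) - (1.0 - mid)) / mid
    lo, hi = (mid, hi) if rhs > lhs else (lo, mid)
p0 = 0.5 * (lo + hi)

def Q(x):   # Eq. 72: one e-h pair at x does NOT trigger a diverging avalanche
    return 1.0 / (1.0 + p0 / (1.0 - p0) * math.exp(-(alpha - beta) * x))

n_max = 200
norm = sum(1.0 / k**2 for k in range(1, n_max + 1))

# trapezoidal rule for sum_n p_clu(n) int_0^d Q(x)^n dx
steps = 1000
total = 0.0
for i in range(steps + 1):
    x = d * i / steps
    s = sum(Q(x) ** k / (k**2 * norm) for k in range(1, n_max + 1))
    total += s * (0.5 if i in (0, steps) else 1.0)
total *= d / steps

eff = 1.0 - math.exp(-(d - total) / lam)
print(eff, 1.0 - math.exp(-d / lam))   # efficiency and its upper bound
```

As expected from the discussion above, the efficiency stays below, but close to, the maximum level $1-e^{-d/\lambda}$.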
Figure 16: Efficiency for a MIP to provoke breakdown. Beyond the breakdown
field, the efficiency rises sharply from 0 to the level of $1-e^{-d/\lambda}$.
The thin lines correspond to the efficiency for a single electron at
$x_{0}=0$.
### 6.2 Time resolution
We first consider the simpler case where the MIP produces a single cluster of
$m$ e-h pairs at position $x_{0}$. Then, we have $A=m$,
$n_{e}^{0}=n_{h}^{0}=m$ and the time response function is
$\rho(t,m,x_{0})\approx\frac{\gamma v^{*}}{(m-1)!}\
h(x_{0})^{m}\exp\left[-m\gamma v^{*}t-h(x_{0})e^{-\gamma v^{*}t}\right]$ (75)
$h(x_{0})=\frac{1-[1-P(x_{0})]^{m}}{B_{e}[u_{e}(x_{0})+u_{h}(x_{0})]}\approx\frac{1}{B_{e}[u_{e}(x_{0})+u_{h}(x_{0})]}$
(76)
In case the cluster size varies according to $p_{clu}(m)$ and the probability
to have the cluster at position $x_{0}$ varies according to $p_{1}(x_{0})$,
the time response function becomes
$\overline{\rho}(t)=\sum_{m=1}^{\infty}\int_{0}^{d}p_{clu}(m)p_{1}(x_{0})\rho(t,m,x_{0})dx_{0}$
(77)
and the related time resolution has three contributions
$\displaystyle(\gamma v^{*})^{2}\sigma^{2}$ $\displaystyle=$
$\displaystyle\sum_{m=1}^{\infty}p_{clu}(m)\psi_{1}(m)$ (78) $\displaystyle+$
$\displaystyle\sum_{m=1}^{\infty}p_{clu}(m)\psi_{0}(m)^{2}-\left(\sum_{m=1}^{\infty}p_{clu}(m)\psi_{0}(m)\right)^{2}$
(79) $\displaystyle+$ $\displaystyle\int_{0}^{d}p_{1}(x_{0})[\ln
h(x_{0})]^{2}dx_{0}-\left(\int_{0}^{d}p_{1}(x_{0})\ln
h(x_{0})dx_{0}\right)^{2}$ (80)
The first term represents the average of the avalanche fluctuations, where
$\psi_{1}(m)$ is decreasing from $\psi_{1}(1)=\pi^{2}/6$ to zero with $\approx
1/m$ dependence. For the cluster size distribution in silicon from Fig. 15a
this first term evaluates to $\approx 0.36$.
The second term represents the fact that an avalanche starting with $m$ e-h
pairs will on average grow as $me^{\gamma v^{*}t}$. The term evaluates to
$\approx 1.39$ for the cluster size distribution in silicon, significantly
larger than the contribution from the avalanche fluctuations.
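The first two contributions can be evaluated directly once a cluster size distribution is chosen. The sketch below uses a truncated $1/m^{2}$ distribution as an assumption; with the silicon distribution of Fig. 15a the terms evaluate to the quoted $\approx 0.36$ and $\approx 1.39$ instead.

```python
import numpy as np
from scipy.special import polygamma

# Evaluate the first two contributions of Eqs. 78-79 for an assumed
# truncated 1/m^2 cluster size distribution.
m = np.arange(1, 2001)
p_clu = 1.0 / m**2
p_clu = p_clu / p_clu.sum()

term_avalanche = np.sum(p_clu * polygamma(1, m))                  # Eq. 78
psi0 = polygamma(0, m)
term_size = np.sum(p_clu * psi0**2) - np.sum(p_clu * psi0)**2     # Eq. 79
print(term_avalanche, term_size)
```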
The third term represents the dependence on the position of the primary
cluster and for a uniform probability this term evaluates to the values
already shown in Fig. 13b.
In conclusion we therefore observe that assuming a single e-h cluster at a
random position in the gain layer, the avalanche fluctuations are negligible
and only the average growth of the avalanche as well as the position
dependence play a role. In a regime where the contribution from the position
dependence is negligible ($<1$) only the fluctuation of the total charge is
important, which results in a universal dependence of the time resolution on
the thickness of the gain layer
$(\gamma v^{*})^{2}\sigma^{2}=\sum_{m=1}^{\infty}p(m,d)\psi_{0}(m)^{2}-\left(\sum_{m=1}^{\infty}p(m,d)\psi_{0}(m)\right)^{2}$
(81)
Here, $p(m,d)$ is the probability that the passing MIP produces $m$ e-h pairs
in the gain layer of thickness $d$. The resulting time resolution is shown in
Fig. 17a. Since $\gamma v^{*}\sigma$ is close to unity for typical dimensions
of the gain layer, the time resolution of a SPAD for a MIP is essentially
defined only by $\gamma$ and $v^{*}$ and the values are the ones given in Fig.
13a. Since the cluster size distribution $p_{clu}(m)$ has a long tail towards
large values of $m$, the same is true for the time response function. The
standard deviation of the threshold crossing time is then generally not
identical to the parameter $\sigma$ extracted from a Gaussian fit to the
distribution of the threshold crossing time. Both measures are compared in
Figure 17a.
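Eq. 81 can also be estimated by Monte Carlo: sample the number of clusters in the gain layer (conditioned on at least one), sample their sizes, and take the variance of $\psi_{0}(m)$ over events. A sketch assuming a truncated $1/n^{2}$ cluster size distribution and $d=1\,\mu$m, $\lambda=0.21\,\mu$m:

```python
import numpy as np
from scipy.special import polygamma

# Monte Carlo estimate of Eq. 81, assuming a truncated 1/n^2 cluster size
# distribution and d = 1 um, lambda = 0.21 um.
rng = np.random.default_rng(2)
n0 = 1.0 / 0.21                 # average number of clusters d/lambda

nmax = 500
sizes = np.arange(1, nmax + 1)
p_clu = 1.0 / sizes**2
p_clu = p_clu / p_clu.sum()

def sample_total(n_events):
    totals = np.empty(n_events, dtype=np.int64)
    for i in range(n_events):
        k = 0
        while k == 0:           # condition on at least one cluster
            k = rng.poisson(n0)
        totals[i] = rng.choice(sizes, size=k, p=p_clu).sum()
    return totals

m = sample_total(20_000)
contrib = polygamma(0, m).var()   # (gamma v* sigma)^2 from charge statistics
print(contrib)
```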
Figure 17: a) Time resolution for a MIP when the fluctuations of the cluster
position can be neglected, for different values of the gain layer thickness
$d$. The solid lines refer to $p_{clu}(n)$ for silicon from Fig. 15a, while
the dashed lines assume $p_{clu}(n)\sim 1/n^{2}$. The time resolution is
quantified by the standard deviation of the threshold crossing time (“r.m.s.”)
as well as by the parameter $\sigma$ extracted from a Gaussian fit. b) Time
resolution for a MIP for a gain layer of $0.5,1,2\,\mu$m thickness, taking the
fluctuation of the cluster positions into account. The dashed lines refer to
the numbers from a).
Next, we consider the general case where the MIP produces a variable number of
clusters, all of which fluctuate in size. Since avalanche fluctuations are
negligible we perform this calculation by following the discussion around Eq.
70. We divide the gain layer into $N+1$ slices of thickness $\Delta x=d/(N+1)$
and assume that a charged particle leaves $m_{n}$ primary e-h pairs in the
$n^{th}$ slice. The efficiency for this case is very close to unity, and the
additional small dependence on the cluster distribution is neglected here.
Then, the total average number of charges in the avalanche becomes
$N_{tot}(t)=\sum_{n=0}^{N}m_{n}h\left(n\Delta x\right)e^{\gamma v^{*}t}$ (82)
with $h(x)$ taken from Eq. 76. Applying a threshold $N_{thr}$ to this signal
and shifting it by a constant offset of $\ln N_{thr}$ we find a threshold
crossing time of
$t(m_{0},m_{1},...,m_{N})=-\frac{1}{\gamma
v^{*}}\ln\left[\sum_{n=0}^{N}m_{n}h\left(n\Delta x\right)\right]$ (83)
The probability $p(m,\Delta x)$ to find $m$ e-h pairs in a slice $\Delta x$ is
given by
$p(m,\Delta x)=\left(1-\frac{\Delta x}{\lambda}\right)\delta_{m0}+\frac{\Delta
x}{\lambda}p_{clu}(m)$ (84)
so the average threshold crossing time and the second moment are
$\displaystyle\overline{t}$ $\displaystyle=$
$\displaystyle-\lim_{N\rightarrow\infty}\frac{1}{\gamma
v^{*}}\sum_{m_{0}=0}^{\infty}\sum_{m_{1}=0}^{\infty}...\sum_{m_{N}=0}^{\infty}p(m_{0},\Delta
x)p(m_{1},\Delta x)...p(m_{N},\Delta
x)\ln\left[\sum_{n=0}^{N}m_{n}h\left(n\Delta x\right)\right]$ (85)
$\displaystyle\overline{t^{2}}$ $\displaystyle=$
$\displaystyle\lim_{N\rightarrow\infty}\frac{1}{(\gamma
v^{*})^{2}}\sum_{m_{0}=0}^{\infty}\sum_{m_{1}=0}^{\infty}...\sum_{m_{N}=0}^{\infty}p(m_{0},\Delta
x)p(m_{1},\Delta x)...p(m_{N},\Delta
x)\ln^{2}\left[\sum_{n=0}^{N}m_{n}h\left(n\Delta x\right)\right]$ (86)
These relations are evaluated in Appendix F, giving a contribution to the time
resolution of
$(\gamma
v^{*})^{2}\sigma^{2}=\left[\int_{0}^{\infty}w_{0}(y)\ln^{2}ydy-\left(\int_{0}^{\infty}w_{0}(y)\ln
ydy\right)^{2}\right]$ (87)
with
$W_{0}(s)=\frac{\exp\left[\frac{d}{\lambda}\,\frac{1}{d}\int_{0}^{d}P_{clu}\left(s\frac{h(x)}{h(0)}\right)dx\right]-1}{e^{d/\lambda}-1}\qquad\qquad
w_{0}(y)=L^{-1}[W_{0}(s)]$ (88)
Here, $P_{clu}(s)$ is the Laplace transform of the cluster size distribution
and the operator $L^{-1}$ denotes the inverse Laplace transform. The
evaluation is shown in Fig. 17b. The dashed lines show the time resolution
from Fig. 17a where the contribution from position fluctuations is neglected.
We see that for fields around the breakdown limit the effect from the
position variations is small and it increases with increasing field. Overall,
the time resolution for MIPs stays within $\sigma_{t}=(0.8{-}1.5)/\gamma
v^{*}$ for the parameters investigated.
## 7 Realistic field configuration
In this section we finally discuss a realistic field configuration of a SPAD
and we apply the insights from all previous sections to assess its
performance. Fig. 18a shows an example for the electric field in a SPAD
created by a highly doped p-n junction. The specific functional form is
defined in Appendix A, Eq. 95. With the impact ionization and drift velocity
parameters from Appendix A we obtain $\alpha(x)$, $\beta(x)$, $v_{e}(x)$ and
$v_{h}(x)$ as explicit functions throughout the sensor. The impact ionization
coefficients $\alpha(x)$ and $\beta(x)$ are shown in Fig. 18b. For the
purposes of our discussion here, we define $x_{1}=0.4\,\mu$m and
$x_{2}=1.9\,\mu$m as the boundaries of the gain layer, which thus has a
thickness of $d=1.5\,\mu$m.
Figure 18: a) Electric field in a realistic SPAD or SiPM. b) Impact ionization
coefficients inside the sensor with parameters from Appendix A.
#### Efficiency
For this geometry, the integral in Eq. 14 evaluates to 1.39 which is larger
than unity and therefore guarantees that breakdown can take place. To find the
efficiency of the sensor we solve Eqs. 37 numerically, using as boundary
conditions $P_{h}(x_{1})=0$ and $P_{e}(x_{2})=0$. The solution is shown in
Fig. 19a together with the corresponding efficiencies obtained from a MC
simulation of the avalanche development. The efficiency for a MIP passing this
sensor can be calculated with Eq. 74 using $Q(x)=1-P_{eh}(x)$. With the
cluster size distribution from Fig. 15 and $\lambda=0.21\,\mu$m, the
efficiency evaluates to $p=1-8.5\times 10^{-4}$. A SPAD of this kind is a
highly efficient detector for a MIP.
Figure 19: a) Breakdown probability for a primary electron ($P_{e}$), primary
hole ($P_{h}$) and primary e-h pair $(P_{eh})$ placed at position $x_{0}$ in
the gain layer. b) Time at which the average total number of charges in the
avalanche (proportional to the average signal) crosses a threshold of $10^{4}$
charges. The avalanche is initiated by a primary electron, a primary hole or a
primary e-h pair placed at position $x_{0}$ in the gain layer. For both plots,
the numerical solutions give rise to the lines, the markers correspond to the
values from MC simulations.
#### Average signal and contribution to time resolution
To study the average growth of the avalanche, we solve Eqs. 11 with
$n_{e}(x_{1})=0$ and $n_{h}(x_{2})=0$ as boundary conditions. This yields the
average charge densities in the gain layer, $n_{e}(x,t)$ and $n_{h}(x,t)$. The
average total charge present in the gain layer can be obtained through a
numerical integration of these densities. This quantity is proportional to the
average signal produced by the avalanche. As shown in Section 3, it grows
exponentially as $e^{St}$. The time constant $S$ can be directly extracted
from the numerical solution and evaluates to $S=0.48\,$ps$^{-1}$.
In case the position $x_{0}$ of the primary charge fluctuates, it generates a
contribution to the time resolution according to Eq. 80. The magnitude of this
effect can be estimated from Fig. 19b, which shows the time at which the
average signal crosses the applied threshold. If the position $x_{0}$ of the
initial charge is uniformly distributed, the resulting contribution to the
time resolution is 2.7/2.1/2.4 ps for an initial electron, an initial hole,
and an initial e-h pair.
#### Avalanche fluctuations and contribution to time resolution
According to the discussion in Section 6.2, we expect the contribution of the
time resolution from fluctuations in the avalanche development,
$\sigma_{\mathrm{av}}$, to be of the order of $1/S\approx 2$ ps. A more
precise estimate of the time resolution takes into account the primary charge
initiating the avalanche. Following the discussion leading to Eq. 62, we
approximate $\sigma_{\mathrm{av}}\approx\sqrt{\psi_{1}(A)}/S$. This formula
neglects effects due to the finite size of the gain region and was originally
derived for constant impact ionization coefficients, which enter into the
computation of the parameter $A$. For position-dependent electric fields, the
largest values of $\alpha$ and $\beta$ in the gain layer are relevant for the
formation of the avalanche fluctuations and can be used to compute $A$. As the
comparison with results from MC in Fig. 20 shows, this estimates the time
resolution for the case of an initial electron to within at most 20%. As
expected, the approximation becomes worse if the initial charge includes
holes, which have a low impact ionization coefficient. In this case,
corrections due to the finite size of the gain layer become important. These
can also be computed numerically, as shown in [2], but the calculations are
more involved.
Figure 20: Time resolution $\sigma_{\mathrm{av}}$ due to avalanche
fluctuations, for an avalanche initiated by a primary electron, a primary hole
or a primary e-h pair. The primary charge is placed at position $x_{0}$ in the
gain layer. The approximation from Eq. 62 neglects the $x_{0}$-dependence and
gives rise to the horizontal lines. The markers correspond to the values
obtained from MC simulations.
## 8 Conclusions
We have performed a detailed study of the time resolution and efficiency of
SPADs and SiPMs for the detection of photons and charged particles. Our
discussions start from a series of differential equations, which cover the
conversion and the drift of charges in the conversion layer as well as the
formation of the avalanche in the gain layer. For arbitrary electric field
profiles, the equations for the average avalanche development as well as the
breakdown efficiency can be easily solved using numerical solvers. The
calculation of the avalanche fluctuations and their impact on the time
resolution is more involved in this case and a detailed discussion is given in
[2]. We have provided analytic solutions for the case of constant electric
fields.
For the detection of single photons, the contribution of the conversion layer
of thickness $w$ to the time resolution for constant electric field is
$\sigma=w/(v_{e}\sqrt{12})$. This is valid in case the photon absorption
length is much larger than the conversion layer thickness, which simply
corresponds to a uniform distribution of the photon conversion point inside
the layer.
The contribution of the gain layer to the time resolution has the general form
$\sigma=\frac{c_{0}}{\gamma v^{*}}$ (89)
with $c_{0}=0.8{-}2.5$ for silicon and a gain layer thickness of
$0.5{-}2\mu$m. This relation holds for single photon detection and MIP
detection. It also extends to realistic non-uniform electric fields. Both
contributions from avalanche fluctuations as well as the variation of the
photon or MIP conversion point in the gain layer are captured. The constant
$\gamma v^{*}$ determines the average growth of the avalanche according to
$N(t)\propto e^{\gamma v^{*}t}$, with $v^{*}\approx 0.1\mu$m/ps and $\gamma$
saturating at $\gamma_{max}\approx\alpha+\beta$ at high fields. It should be
possible in practice to limit this contribution to the level of a few
picoseconds at high fields.
The efficiency of a SPAD or SiPM for photons has many contributions, including
the photon conversion probability, the geometry and fill factor of the sensor
as well as the breakdown probability in the gain layer. The contribution from
the breakdown probability can be easily calculated by numerically solving the
related equations. For SPADs with a conversion layer or for photons with
absorption length $<1\,\mu$m that are absorbed close to the edge of the gain
layer this efficiency quickly approaches values close to 100% when biasing the
sensor beyond the breakdown field.
SPADs or SiPMs with a gain layer of $1{-}2\,\mu$m thickness should be highly
efficient for MIP detection. A dedicated conversion layer is not necessary.
This report discussed 'one-dimensional' sensors. For realistic implementations
of SPAD pixel sensors and SiPMs, the pixel boundaries together with all the
elements that limit optical crosstalk make up complex three-dimensional
electric fields. To study these sensors, the 3D field map together with a full
MC simulation with programs like Garfield++ [17] has to be used, and our
results can serve as benchmarks for these simulations.
## 9 Appendix A
### 9.1 Velocity of electrons and holes in silicon
The velocity of electrons and holes in silicon is parametrized by
$v_{e}(E)=\frac{\mu_{e}\,E}{\left[1+\left(\frac{\mu_{e}\,E}{v^{e}_{sat}}\right)^{\beta_{e}}\right]^{1/\beta_{e}}}\qquad\qquad
v_{h}(E)=\frac{\mu_{h}\,E}{\left[1+\left(\frac{\mu_{h}\,E}{v^{h}_{sat}}\right)^{\beta_{h}}\right]^{1/\beta_{h}}}$
(90)
The parameters from [5] are $\mu_{e}=1417$ cm2/Vs, $\mu_{h}=471$ cm2/Vs,
$\beta_{e}=1.109$, $\beta_{h}=1.213$ and $v^{e}_{sat}=1.07\times 10^{7}$ cm/s
and $v^{h}_{sat}=0.837\times 10^{7}$ cm/s at 300 K.
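The velocity-field relation of Eq. 90 with the 300 K parameters from [5] is straightforward to evaluate; a minimal sketch (function names are ours):

```python
# Drift velocity of electrons and holes in silicon, Eq. 90,
# with the 300 K parameters from Canali et al. [5].
# E in V/cm, result in cm/s.

def v_drift(E, mu, v_sat, beta):
    """Velocity-field relation of Eq. 90."""
    x = mu * E
    return x / (1.0 + (x / v_sat) ** beta) ** (1.0 / beta)

def v_e(E):
    return v_drift(E, mu=1417.0, v_sat=1.07e7, beta=1.109)

def v_h(E):
    return v_drift(E, mu=471.0, v_sat=0.837e7, beta=1.213)
```

At gain-layer fields of a few $10^{5}$ V/cm both carriers approach their saturation velocities of about $10^{7}$ cm/s, i.e. roughly $0.1\,\mu$m/ps, consistent with the $v^{*}\approx 0.1\,\mu$m/ps used in the main text.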
### 9.2 Impact ionization coefficients for electrons and holes in silicon
The impact ionization parameters $\alpha$ and $\beta$ as reported in [7] are
given by
$\alpha(E)=\alpha_{\infty}e^{-a/E}\qquad\beta(E)=\beta_{\infty}e^{-b/E}$ (91)
with
$\displaystyle\alpha_{\infty}=7.030\times 10^{5}\,cm^{-1}$ $\displaystyle
a=1.231\times 10^{6}\,V/cm\qquad 1.75\times 10^{5}\leq E\leq 6.0\times
10^{5}\,V/cm$ (92) $\displaystyle\beta_{\infty}=1.582\times 10^{6}\,cm^{-1}$
$\displaystyle b=2.036\times 10^{6}\,V/cm\qquad 1.75\times 10^{5}\leq E\leq
4.0\times 10^{5}\,V/cm$ (93) $\displaystyle\beta_{\infty}=6.710\times
10^{5}\,cm^{-1}$ $\displaystyle b=1.693\times 10^{6}\,V/cm\qquad 4.00\times
10^{5}\leq E\leq 6.0\times 10^{5}\,V/cm$ (94)
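The coefficients of Eqs. 91-94 can be coded directly; a minimal sketch (note the two field ranges for $\beta$):

```python
import math

# Impact ionization coefficients for silicon, Eqs. 91-94 [7],
# valid for 1.75e5 <= E <= 6.0e5 V/cm; beta switches parameter
# sets at E = 4.0e5 V/cm. E in V/cm, result in 1/cm.

def alpha(E):
    return 7.030e5 * math.exp(-1.231e6 / E)

def beta(E):
    if E <= 4.0e5:
        return 1.582e6 * math.exp(-2.036e6 / E)
    return 6.710e5 * math.exp(-1.693e6 / E)
```

Over the quoted range electrons ionize considerably more strongly than holes, and the two $\beta$ branches match almost exactly at $4.0\times 10^{5}$ V/cm, so the parametrization is effectively continuous.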
### 9.3 Functional form of electric field
For a realistic SPAD we assume the following electric field:
$E(x)=E_{0}\,\exp\left[1-(x-\mu)/\sigma-e^{-(x-\mu)/\sigma}\right]\qquad
E_{0}=5\times
10^{5}\,\mbox{V/cm}\quad\mu=1\,\mu\mbox{m}\quad\sigma=0.5\,\mu\mbox{m}$ (95)
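Combining this field profile with the ionization coefficients of Eqs. 91-94, the breakdown condition of Eq. 101 can be checked numerically. The sketch below repeats the coefficient functions so it is self-contained; the integration window of $3\,\mu$m and the step count are our choices, not from the text:

```python
import math

# Numerical check of the breakdown condition, Eq. 101, for the assumed
# SPAD field profile of Eq. 95, using the impact ionization
# parametrization of Eqs. 91-94. Lengths in cm, fields in V/cm.

def E_field(x, E0=5e5, mu=1e-4, sigma=0.5e-4):
    u = (x - mu) / sigma
    return E0 * math.exp(1.0 - u - math.exp(-u))

def alpha(E):
    return 7.030e5 * math.exp(-1.231e6 / E)

def beta(E):
    if E <= 4.0e5:
        return 1.582e6 * math.exp(-2.036e6 / E)
    return 6.710e5 * math.exp(-1.693e6 / E)

def breakdown_integral(E0=5e5, d=3e-4, n=4000):
    """Left-hand side of Eq. 101; a value > 1 means the sensor
    is biased beyond breakdown."""
    dx = d / n
    acc = 0.0      # running integral of (alpha - beta)
    total = 0.0
    for i in range(n):
        E = E_field((i + 0.5) * dx, E0=E0)
        a, b = alpha(E), beta(E)
        total += a * math.exp(-acc) * dx
        acc += (a - b) * dx
    return total
```

For $E_{0}=5\times 10^{5}$ V/cm the integral exceeds 1, i.e. this profile is above breakdown, while reducing the peak field to $2.5\times 10^{5}$ V/cm brings it well below 1.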
## 10 Appendix B
For $S=0$ Eq. 13 reads as
$\displaystyle[v_{e}(x)f(x)]^{\prime}$ $\displaystyle=$
$\displaystyle\alpha(x)v_{e}(x)f(x)+\beta(x)v_{h}(x)g(x)$ (96)
$\displaystyle-[v_{h}(x)g(x)]^{\prime}$ $\displaystyle=$
$\displaystyle\alpha(x)v_{e}(x)f(x)+\beta(x)v_{h}(x)g(x)$ (97)
with boundary conditions $f(0)=0$ and $g(d)=0$. Subtracting the two equations
gives
$[v_{e}(x)f(x)]^{\prime}+[v_{h}(x)g(x)]^{\prime}=0\qquad\rightarrow\qquad
v_{e}(x)f(x)=-v_{h}(x)g(x)+c_{1}$ (98)
Inserting this expression into Eq. 96 we have
$[v_{h}(x)g(x)]^{\prime}-v_{h}(x)g(x)[\alpha(x)-\beta(x)]=-c_{1}\alpha(x)$
(99)
with the general solution
$v_{h}(x)g(x)=\frac{c_{2}-c_{1}\int_{0}^{x}\alpha(x^{\prime})\exp[-\int_{0}^{x^{\prime}}(\alpha(x^{\prime\prime})-\beta(x^{\prime\prime}))dx^{\prime\prime}]dx^{\prime}}{\exp[-\int_{0}^{x}(\alpha(x^{\prime})-\beta(x^{\prime}))dx^{\prime}]}$
(100)
The condition $f(0)=0$ implies $c_{1}=c_{2}$ and $g(d)=0$ then implies
$\int_{0}^{d}\alpha(x)\exp\left[-\int_{0}^{x}(\alpha(x^{\prime})-\beta(x^{\prime}))dx^{\prime}\right]dx=1$
(101)
This is the general breakdown condition which is independent of the electron
and hole velocities. Expressing $v_{h}g$ from Eq. 98 and inserting this
expression into Eq. 97 we have
$v_{h}(x)g(x)=-v_{e}(x)f(x)+c_{1}\qquad[v_{e}(x)f(x)]^{\prime}-v_{e}(x)f(x)[\alpha(x)-\beta(x)]=c_{1}\beta(x)$
(102)
and therefore
$v_{e}(x)f(x)=\frac{c_{1}\int_{0}^{x}\beta(x^{\prime})\exp[-\int_{0}^{x^{\prime}}(\alpha(x^{\prime\prime})-\beta(x^{\prime\prime}))dx^{\prime\prime}]dx^{\prime}+C}{\exp[-\int_{0}^{x}(\alpha(x^{\prime})-\beta(x^{\prime}))dx^{\prime}]}$
(103)
The condition $f(0)=0$ gives $C=0$ and $g(d)=0$ gives
$\int_{0}^{d}\beta(x)\exp\left[-\int_{0}^{x}(\alpha(x^{\prime})-\beta(x^{\prime}))dx^{\prime}\right]dx=\exp\left[-\int_{0}^{d}(\alpha(x^{\prime})-\beta(x^{\prime}))dx^{\prime}\right]$
(104)
which is equal to
$\int_{0}^{d}\beta(x)\exp\left[\int_{x}^{d}(\alpha(x^{\prime})-\beta(x^{\prime}))dx^{\prime}\right]dx=1$
(105)
As shown in [8] Eq. 101 and 105 are identical because
$\int_{0}^{d}(\alpha(x)-\beta(x))\exp\left[-\int_{0}^{x}(\alpha(x^{\prime})-\beta(x^{\prime}))dx^{\prime}\right]dx=1-\exp\left[-\int_{0}^{d}(\alpha(x^{\prime})-\beta(x^{\prime}))dx^{\prime}\right]$
(106)
which holds for any coefficient functions, since the integrand on the left is the derivative of $-\exp[-\int_{0}^{x}(\alpha-\beta)dx^{\prime}]$.
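The identity Eq. 106 is exact for any positive coefficient functions and is easy to verify numerically. In the sketch below, the linear $\alpha(x)$ and $\beta(x)$ used in the test are arbitrary test functions of our choosing, not physical coefficients:

```python
import math

# Numeric check of the identity Eq. 106, which makes the two breakdown
# conditions Eq. 101 and Eq. 105 equivalent.

def identity_sides(alpha, beta, d=1.0, n=100000):
    """Return (LHS, RHS) of Eq. 106 via midpoint integration."""
    dx = d / n
    acc = 0.0      # integral of (alpha - beta) up to the slice start
    lhs = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        diff = alpha(x) - beta(x)
        # evaluate the exponential weight at the slice midpoint
        lhs += diff * math.exp(-(acc + 0.5 * diff * dx)) * dx
        acc += diff * dx
    return lhs, 1.0 - math.exp(-acc)
```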
## 11 Appendix C
The relation that determines the number of electrons $n$ and the number of
holes $m$ at a given time when starting with a single electron is defined in
Eq. 46 as
$\displaystyle p_{e}(n,m,t+dt)$ $\displaystyle=$ $\displaystyle(1-\alpha
v_{e}dt)p_{e}(n,m,t)$ $\displaystyle+$ $\displaystyle\alpha
v_{e}dt\sum_{i=1}^{n}\sum_{j=1}^{i}\sum_{r=1}^{m}\sum_{s=1}^{r}p_{e}(n-i-j,m-r-s,t)p_{e}(i,r,t)p_{h}(j,s,t)$
Establishing the corresponding equation for $p_{h}(m,n,t)$ and expanding for
small $dx$ we have
$\displaystyle\frac{1}{v_{e}}\frac{dp_{e}(n,m,t)}{dt}$ $\displaystyle=$
$\displaystyle-\alpha p_{e}(n,m,t)$ $\displaystyle+$
$\displaystyle\alpha\sum_{i=1}^{n}\sum_{j=1}^{i}\sum_{r=1}^{m}\sum_{s=1}^{r}p_{e}(n-i-j,m-r-s,t)p_{e}(i,r,t)p_{h}(j,s,t)$
$\displaystyle\frac{1}{v_{h}}\frac{dp_{h}(n,m,t)}{dt}$ $\displaystyle=$
$\displaystyle-\beta p_{h}(n,m,t)$ $\displaystyle+$
$\displaystyle\beta\sum_{i=1}^{n}\sum_{j=1}^{i}\sum_{r=1}^{m}\sum_{s=1}^{r}p_{h}(n-i-j,m-r-s,t)p_{h}(i,r,t)p_{e}(j,s,t)$
The Z-transform of these equations is
$\frac{1}{v_{e}}\frac{\partial P_{e}(z_{1},z_{2},t)}{\partial t}=-\alpha
P_{e}(z_{1},z_{2},t)[1-P_{e}(z_{1},z_{2},t)P_{h}(z_{1},z_{2},t)]$ (110)
$\frac{1}{v_{h}}\frac{\partial P_{h}(z_{1},z_{2},t)}{\partial t}=-\beta
P_{h}(z_{1},z_{2},t)[1-P_{h}(z_{1},z_{2},t)P_{e}(z_{1},z_{2},t)]$ (111)
The equations have a structure similar to Eq. 37 and we therefore form the
function
$P(z_{1},z_{2},t)=1-P_{e}(z_{1},z_{2},t)P_{h}(z_{1},z_{2},t)$ (112)
Differentiating this equation and using the above relations gives
$\frac{\partial P}{\partial t}=(\alpha v_{e}+\beta v_{h})(1-P)P$ (113)
with the solution
$P(t)=\frac{e^{(\alpha v_{e}+\beta v_{h})t}}{e^{(\alpha v_{e}+\beta
v_{h})t}+c_{1}}$ (114)
The initial conditions that there is one electron at $t=0$ for $p_{e}$ and one
hole for $p_{h}$ reads as $p_{e}(n,m,t=0)=\delta_{n,1}\delta_{m,0}$ and
$p_{h}(n,m,t=0)=\delta_{n,0}\delta_{m,1}$ and we have therefore
$P_{e}(z_{1},z_{2},t=0)=\frac{1}{z_{1}}\qquad
P_{h}(z_{1},z_{2},t=0)=\frac{1}{z_{2}}\qquad\rightarrow\qquad
P(z_{1},z_{2},t=0)=1-\frac{1}{z_{1}z_{2}}$ (115)
and
$P(z_{1},z_{2},t)=\frac{e^{(\alpha v_{e}+\beta
v_{h})t}(z_{1}z_{2}-1)}{1+e^{(\alpha v_{e}+\beta v_{h})t}(z_{1}z_{2}-1)}$
(116)
We can now write Eqs. 110 and 111 as
$\frac{d\ln P_{e}}{dt}=-\alpha v_{e}P\qquad\frac{d\ln P_{h}}{dt}=-\beta
v_{h}P$ (117)
Integrating the equations with the above initial conditions we finally have
$\displaystyle P_{e}(z_{1},z_{2},t)$ $\displaystyle=$
$\displaystyle\frac{1}{z_{1}}\left[\frac{z_{1}z_{2}}{1+e^{(\alpha v_{e}+\beta
v_{h})t}(z_{1}z_{2}-1)}\right]^{\frac{\alpha v_{e}}{\alpha v_{e}+\beta
v_{h}}}$ (118) $\displaystyle P_{h}(z_{1},z_{2},t)$ $\displaystyle=$
$\displaystyle\frac{1}{z_{2}}\left[\frac{z_{1}z_{2}}{1+e^{(\alpha v_{e}+\beta
v_{h})t}(z_{1}z_{2}-1)}\right]^{\frac{\beta v_{h}}{\alpha v_{e}+\beta v_{h}}}$
(119)
In case the avalanche is initiated by $n_{e}^{0}$ electrons and $n_{h}^{0}$
holes at time $t=0$, we are interested in the probability $p_{0}(n,m,t)$ to
find $n$ electrons and $m$ holes at time $t$. In terms of $p_{e}(n,m,t)$ and
$p_{h}(n,m,t)$, it is expressed as an iterated convolution in analogy to the
right-hand sides of Eqs. 11 and 11. In the $z$-domain, $P_{0}(z_{1},z_{2},t)$
reads
$\displaystyle P_{0}(z_{1},z_{2},t)$ $\displaystyle=$ $\displaystyle
P_{e}(z_{1},z_{2},t)^{n_{e}^{0}}P_{h}(z_{1},z_{2},t)^{n_{h}^{0}}$ (120)
$\displaystyle=$
$\displaystyle\frac{1}{z_{1}^{n_{e}^{0}}}\frac{1}{z_{2}^{n_{h}^{0}}}\left[\frac{z_{1}z_{2}}{1+e^{(\alpha
v_{e}+\beta v_{h})t}(z_{1}z_{2}-1)}\right]^{\frac{n_{e}^{0}\alpha
v_{e}+n_{h}^{0}\beta v_{h}}{\alpha v_{e}+\beta v_{h}}}$ (121)
The inverse Z-Transform of this expression also gives access to $p(n,t)$ shown
in Eq. 48, which is defined as the probability to find $n$ e-h pairs that are
created in addition to the initial $n_{e}^{0}$ electrons and $n_{h}^{0}$
holes.
$p(n,t)=\frac{\Gamma(A+n)}{\Gamma(A)\Gamma(1+n)}\left(\frac{1}{\nu(t)}\right)^{A}\left(1-\frac{1}{\nu(t)}\right)^{n}\qquad\sum_{n=0}^{\infty}p(n,t)=1$
(122) $A=\frac{n_{e}^{0}\alpha v_{e}+n_{h}^{0}\beta v_{h}}{\alpha v_{e}+\beta
v_{h}}\qquad\nu(t)=e^{(\alpha v_{e}+\beta v_{h})t}$ (123)
## 12 Appendix D
We divide the sensor into $N+1$ slices of thickness $\Delta x=d/(N+1)$. The
probability that there is no breakdown caused by the particle traversing the
slice $[x,x+\Delta x]$ is given by the probability $1-\Delta x/\lambda$ that
there is no interaction in $\Delta x$ and the probability that there is an
interaction but it does not lead to breakdown.
$\displaystyle dq$ $\displaystyle=$ $\displaystyle\left(1-\frac{\Delta
x}{\lambda}\right)+\frac{\Delta
x}{\lambda}\sum_{n=1}^{\infty}p_{clu}(n)Q^{n}(x)$ (124) $\displaystyle=$
$\displaystyle 1-\frac{\Delta
x}{\lambda}\left(1-\sum_{n=1}^{\infty}p_{clu}(n)Q^{n}(x)\right)$ (125)
$\displaystyle:=$ $\displaystyle 1-\frac{\Delta x}{\lambda}f(x)$ (126)
The probability $q$ that there is no breakdown in any of the slices of $\Delta
x$ throughout the sensor is then
$\displaystyle q$ $\displaystyle=$ $\displaystyle\left[1-\frac{\Delta
x}{\lambda}f(0)\right]\left[1-\frac{\Delta x}{\lambda}f(\Delta
x)\right]\left[1-\frac{\Delta x}{\lambda}f(2\Delta
x)\right]...\left[1-\frac{\Delta x}{\lambda}f(N\Delta x)\right]$ (127)
$\displaystyle\ln q$ $\displaystyle=$
$\displaystyle\sum_{n=0}^{N}\ln\left[1-\frac{d}{N\lambda}f(nd/N)\right]$ (128)
$\displaystyle\approx$ $\displaystyle\sum_{n=0}^{N}-\frac{d}{N\lambda}f(nd/N)$
(129) $\displaystyle\approx$
$\displaystyle-\frac{1}{\lambda}\int_{0}^{d}f(x)dx$ (130)
$\displaystyle=$
$\displaystyle-\frac{1}{\lambda}\left(d-\sum_{n=1}^{\infty}p_{clu}(n)\int_{0}^{d}Q(x)^{n}dx\right)$
(131)
and the probability $p=1-q$ that the sensor is efficient is therefore
$p=1-\exp\left[-\frac{1}{\lambda}\left(d-\sum_{n=1}^{\infty}p_{clu}(n)\int_{0}^{d}Q(x)^{n}dx\right)\right]$
(132)
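Eq. 132 can be evaluated once $Q(x)$ and a cluster-size distribution are specified. The sketch below uses the closed-form $Q(x)$ of Eq. 133 from Appendix E and a geometric $p_{clu}(n)=(1-q)q^{n-1}$ as a hypothetical stand-in for the real (HEED) cluster statistics; $\lambda$ is the mean distance between clusters, and all lengths are in micrometers:

```python
import math

# Sketch of the MIP efficiency of Eq. 132. The geometric cluster-size
# distribution is an illustrative assumption, not the HEED statistics.
# Units: lengths in micrometers, (alpha - beta) in 1/micrometer.

def Q(x, p0, a_minus_b):
    """Non-breakdown probability of Eq. 133 (Appendix E)."""
    c = p0 / (1.0 - p0)
    return 1.0 / (1.0 + c * math.exp(-a_minus_b * x))

def mip_efficiency(d, lam, p0, a_minus_b, q_geo=0.5, n_int=2000):
    """Eq. 132 with p_clu(n) = (1 - q_geo) * q_geo**(n - 1)."""
    dx = d / n_int
    s = 0.0
    for i in range(n_int):
        Qx = Q((i + 0.5) * dx, p0, a_minus_b)
        # closed form of sum_n p_clu(n) Q^n for a geometric distribution
        s += (1.0 - q_geo) * Qx / (1.0 - q_geo * Qx) * dx
    return 1.0 - math.exp(-(d - s) / lam)
```

As expected, the efficiency grows with the single-carrier breakdown probability $p_{0}$.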
## 13 Appendix E
$Q(x)=1-P(x)=\frac{1}{1+\frac{p_{0}}{1-p_{0}}\exp\left[-(\alpha-\beta)x\right]}$
(133)
$\int_{0}^{d}Q(x)^{n}dx=\frac{1}{\alpha-\beta}\left[H\left(n,1+\frac{p_{0}\exp[-(\alpha-\beta)d]}{1-p_{0}}\right)-H\left(n,1+\frac{p_{0}}{1-p_{0}}\right)\right]$
(134)
$H(n,y)=\int\frac{dy}{y^{n}(1-y)}=\ln\frac{y}{1-y}-\sum_{m=1}^{n-1}\frac{1}{my^{m}}$
(135)
## 14 Appendix F
We assume the probability $p_{clu}(n)$ to be continuous in $n$ (which we can
imagine by expressing it as a sum of delta functions centered at integer
values of $n$) and write the expression in Eq. 84 as
$p(n,\Delta x)=\left(1-\frac{\Delta x}{\lambda}\right)\delta(n)+\frac{\Delta
x}{\lambda}p_{clu}(n)$ (136)
We can then replace the sums in Eq. 85 by integrals and have
$\overline{t}(N)=-\frac{1}{\gamma v^{*}}\int dm_{0}\int dm_{1}...\int
dm_{N}p(m_{0},\Delta x)p(m_{1},\Delta x)...p(m_{N},\Delta
x)\ln\left[\sum_{n=0}^{N}m_{n}h\left(n\Delta x\right)\right]$ (137)
We change variables according to
$m=\frac{1}{h_{0}}\sum_{n=0}^{N}m_{n}h_{n}\quad\rightarrow\quad
m_{0}=m-\frac{1}{h_{0}}\sum_{n=1}^{N}m_{n}h_{n}$ (138)
where we have written $h_{n}=h(n\Delta x)$, which gives
$\displaystyle\overline{t}(N)$ $\displaystyle=$ $\displaystyle-\frac{1}{\gamma
v^{*}}\int dm\left[\int dm_{1}...\int
dm_{N}p\left(m-\frac{1}{h_{0}}\sum_{n=1}^{N}m_{n}h_{n},\Delta
x\right)p(m_{1},\Delta x)...p(m_{N},\Delta x)\right]\ln(h_{0}m)$ (139)
$\displaystyle=$ $\displaystyle-\frac{1}{\gamma v^{*}}\int
w(m)\ln(h_{0}m)\,dm$
with
$w(m)=\int dm_{1}\int dm_{2}...\int
dm_{N}p\left(m-\frac{1}{h_{0}}\sum_{n=1}^{N}m_{n}h_{n},\Delta
x\right)p(m_{1},\Delta x)...p(m_{N},\Delta x)$ (140)
The Laplace transform of this expression is
$W(s)=P(s,\Delta x)P\left(\frac{h_{1}}{h_{0}}s,\Delta
x\right)P\left(\frac{h_{2}}{h_{0}}s,\Delta
x\right)...P\left(\frac{h_{N}}{h_{0}}s,\Delta
x\right)=\exp\left[\sum_{n=0}^{N}\ln P\left(\frac{h_{n}}{h_{0}}s,\Delta
x\right)\right]$ (141)
With $P(s,\Delta x)$ being the Laplace transform of Eq. 136
$P(s,\Delta x)=1+\frac{\Delta x}{\lambda}(P_{clu}(s)-1)$ (142)
we have
$\displaystyle W(s)$ $\displaystyle=$
$\displaystyle\exp\left(\sum_{n=1}^{N}\ln\left[1+\frac{\Delta
x}{\lambda}(P_{clu}\left(s\frac{h_{n}}{h_{0}}\right)-1)\right]\right)$ (143)
$\displaystyle\approx$ $\displaystyle\exp\left(\sum_{n=1}^{N}\frac{\Delta
x}{\lambda}\left[P_{clu}\left(s\frac{h_{n}}{h_{0}}\right)-1\right]\right)$
(144) $\displaystyle=$ $\displaystyle
e^{-d/\lambda}\exp\left[\frac{1}{\lambda}\int_{0}^{d}P_{clu}\left(s\frac{h(x)}{h(0)}\right)dx\right]$
(145)
Normalizing to the probability that there is at least one interaction inside
the gain layer we have
$W_{0}(s)=\frac{\exp\left[\frac{d}{\lambda}\,\frac{1}{d}\int_{0}^{d}P_{clu}\left(s\frac{h(x)}{h(0)}\right)dx\right]-1}{e^{d/\lambda}-1}$
(146)
The first and second moment of the threshold crossing time are therefore
$w_{0}(y)=L^{-1}[W_{0}(s)]\qquad\overline{t}=-\frac{1}{\gamma v^{*}}\int
w_{0}(y)\ln[h(0)y]dy\qquad\overline{t^{2}}=\frac{1}{(\gamma v^{*})^{2}}\int
w_{0}(y)\ln^{2}[h(0)y]dy$ (147)
Comparing the expression to Eq. 71 we see that the effect of the position
dependence $h(x)$ is equivalent to a change of the cluster size distribution
according to
$\overline{P}_{clu}(s)=\frac{1}{d}\int_{0}^{d}P_{clu}\left(s\frac{h(x)}{h(0)}\right)dx$
(148)
## 15 Bibliography
## References
* [1] K. T. Son, C. C. Lee, Multiple-Target Laser Range finding Receiver Using a Silicon Photomultiplier Array, IEEE Trans. Instrum. Meas. 59, 3005-3011, (2010)
* [2] P. Windischhofer, W. Riegler, The statistics of electron-hole avalanches, arXiv:2012.11285, (2020)
* [3] M. A. Green and M. Keevers, Optical properties of intrinsic silicon at 300 K, Progress in Photovoltaics, Vol. 3, 189-192, (1995)
* [4] M. A. Green, Self-consistent optical parameters of intrinsic silicon at 300 K including temperature coefficients, Solar Energy Materials & Solar Cells 92, 1305-1310, (2008)
* [5] C. Canali et al., Electron and hole drift velocity measurements in silicon and their empirical relation to electric field and temperature, IEEE Trans. Electron Dev. 22, 1045, (1975)
* [6] Synopsys, Inc., Sentaurus Device User Guide Version D-2010.03
* [7] R. Van Overstraeten and H. de Man, Measurement of the ionization rates in diffused silicon p-n junctions, Solid-St. Electron. 13, 583, (1969)
* [8] G. E. Stillman, C.M. Wolfe, Avalanche Photodiodes, Semiconductors and Semimetals, Chapter 5, Vol. 12, 291-393, (1977)
* [9] W. Oldham, R. Samuelson, P. Antognetti, Triggering Phenomena in Avalanche Diodes, IEEE, Vol. 19, No. 9, 1056-1060, (1972)
* [10] R. J. Mcintyre, On the Avalanche Initiation Probability of Avalanche Diodes Above the Breakdown Voltage, IEEE Trans. Electron. Dev., Vol 20, No. 7, 637-641, (1973)
* [11] L. H. Holway, Electron-Hole Avalanches with Constant Ionization Coefficients, IEEE Trans. Electron. Dev., Vol. 26, No. 6, 991-993, (1979)
* [12] W. Legler, Die Statistik der Elektronenlawinen in elektronegativen Gasen, bei hohen Feldstärken und bei grosser Gasverstärkung, Z. Naturforschg. 16a, 253-261, (1961)
* [13] R. Santonico, R. Cardarelli, Development of Resistive Plate Counters, NIMA 187, 377, (1981)
* [14] A. Mangiarotti, A. Gobbi, On the physical origin of tails in the time response of spark counters, NIMA 482, 192-215, (2002)
* [15] W. Riegler, Time response functions and avalanche fluctuations in resistive plate chambers, NIMA 602, 377-390, (2009)
* [16] I. Smirnov, HEED, Program to compute energy loss of fast particles in gases, Version 1.01, CERN
* [17] H. Schindler, R. Veenhof, et al., Garfield++ simulation of tracking detectors, https://garfieldpp.web.cern.ch/garfieldpp/
# Relaxed Clustered Hawkes Process for Procrastination Modeling in MOOCs
Mengfan Yao,1 Siqian Zhao,1 Shaghayegh Sahebi,1 Reza Feyzi Behnagh2
###### Abstract
Hawkes processes have been shown to be efficient in modeling bursty sequences
in a variety of applications, such as finance and social network activity
analysis. Traditionally, these models parameterize each process independently
and assume that the history of each point process can be fully observed. Such
models could, however, be inefficient or even prohibitive in certain real-world
applications, such as in the field of education, where such assumptions are
violated. Motivated by the problem of detecting and predicting student
procrastination in Massive Open Online Courses (MOOCs) with missing
and partially observed data, in this work, we propose a novel personalized
Hawkes process model (RCHawkes-Gamma) that discovers meaningful student
behavior clusters by jointly learning all partially observed processes
simultaneously, without relying on auxiliary features. Our experiments on both
synthetic and real-world education datasets show that RCHawkes-Gamma can
effectively recover student clusters and their temporal procrastination
dynamics, resulting in better predictive performance of future student
activities. Our further analyses of the learned parameters and their
association with student delays show that the discovered student clusters
unveil meaningful representations of various procrastination behaviors in
students.
## 1 Introduction
Academic procrastination, or postponing the starting of planned studies, has
been associated with negative side-effects on students’ academic performance,
psychological well-being, and health (Moon and Illingworth 2005; Steel 2007).
This behavior is more prevalent in online educational settings, which require
high levels of time-management and self-regulation skills (Lee and Choi 2011)
and can lead to low academic outcomes and course drop-outs (Vitiello et al.
2018). With the growth of online education, it is essential to devise
mechanisms to detect potential future procrastination in students, so as to
prevent this behavior and its associated negative consequences.
In studies on self-reported academic procrastination, this behavior is
indicated by cramming of studying activities: given a time interval followed
by a deadline, students show limited studying activities at the beginning of
the interval, followed by a burst of studying (cramming) closer to the
deadline (Perrin et al. 2011; Gelman et al. 2016). However, these studies do
not provide a unified quantitative definition of procrastination, other than
qualitative student self-reports, which can be scarce and hard to obtain. Prior
work also shows that although each student has their individual studying
habits, students can be clustered into a few distinct groups by their studying
behaviors (Yao, Sahebi, and Feyzi-Behnagh 2020; Uzir et al. 2020). In essence,
in highly procrastinating students, getting closer to the deadline may trigger
more intense studying activities, while in others, their studies are more
regulated and distributed across the time interval. Despite these findings,
most of the studies on student procrastination either ignored the temporal
aspects of students’ behavior (Cerezo et al. 2017; Kazerouni et al. 2017), or
were not personalized for students (Baker, Evans, and Dee 2016; Park et al.
2018; Backhage, Ojeda, and Sifa 2017). More importantly, current research
cannot predict when a student’s next activity will take place. Ideally, a
procrastination model can capture the underlying cluster structures in student
activity sequences, can be personalized to capture different students’
studying habits, and can deal with unseen data such as assignments that are
not yet started by students, and represent students’ activity burstiness.
We note that Hawkes processes (Hawkes 1971) have the potential to represent
students’ procrastination behavior, as they model activity burstiness, as
opposed to memoryless Poisson processes. However, when modeling one sequence
per user-item pair, conventional Hawkes processes model each item’s sequences
individually and do not rely on the similarities between different items.
Thus, they cannot infer parameters for items that have unseen data (Hosseini
et al. 2016; Choi et al. 2015; Mei and Eisner 2017; Du et al. 2016a). In some
recent work, low-rank personalized Hawkes models aim to address this problem
(Du et al. 2015b), usually with the help of auxiliary features to reinforce
the low-rank assumption (Shang and Sun 2018, 2019). Yet, to the best of our
knowledge, none of the previous Hawkes models were able to represent the
cluster structure between sequences, while being personalized and inferring
unseen data.
In this paper, we propose a novel Relaxed Clustered Hawkes process with a
Gamma prior (RCHawkes-Gamma) to model and predict the cramming procrastination
behavior in students of Massive Open Online Courses (MOOCs). To do this, we
model each student-assignment pair (that is, the interactions of a student with
a course assignment, characterized by activity times) as a uni-variate Hawkes
process. By modeling all student-assignment pairs jointly, our proposed model
is able to capture similarities shared among students (i.e. cluster
structures) by learning a low-dimensional representation of procrastination
(i.e. personalization). As a result, even for student-assignment pairs without
observed history (i.e. unseen data), their parameters can be inferred based on
the group structure, without relying on auxiliary features or historical
observations.
More specifically, our contributions are: (1) We propose a Relaxed Clustered
Hawkes model, driven by the problem of modeling academic procrastination in
MOOCs; (2) Our personalized model represents the similarity structure between
multiple event sequences without requiring auxiliary features (Section 4) and
infers unseen data in event sequences; (3) We experiment on both synthetic and
real-world datasets to show that the proposed model can recover clusters of
students and their temporal procrastination dynamics, resulting in a better
predictive performance of future activities (Section 5); and (4) We further
study the learned parameters to demonstrate that the discovered student
clusters are meaningful representations of various procrastination-related
behaviors in students (Section 5.4).
## 2 Related Work
Low-Rank Hawkes Processes Hawkes processes (Hawkes 1971) have been
successfully used in applications such as social networks (e.g. Chen et al.
2019), mobility patterns (e.g. Vassøy et al. 2019), and recommender systems
(e.g. Du et al. 2015b). Among them, the most relevant work to ours is low-rank
uni-variate Hawkes processes proposed by Du et al., to capture user-item
interaction and recommend items “at the right moment” (Du et al. 2015b).
However, this work does not incorporate the clustering behavior that is
essential in our problem domain.
Other relevant literature on Hawkes processes mainly falls into 3 categories:
(1) Multi-variate Hawkes processes that focus on modeling the mutual
excitation among sequences (Zhou, Zha, and Song 2013; Luo et al. 2015; Bacry,
Gaïffas, and Muzy 2015; Lemonnier, Scaman, and Kalogeratos 2017). (2) Uni-
variate Hawkes models that model each sequence independently and discard the
potential relatedness among all sequences, thus cannot infer sequence’s future
when its history is not observed e.g. (Mei and Eisner 2017; Du et al. 2016b,
2015a; Xiao et al. 2017; Li, Wei, and Ke 2018; Li and Ke 2020). For example,
Du et al. propose to use RNN to model the arrival times of a given sequence to
capture more complicated sequence dynamics compared to traditional Hawkes
models (Du et al. 2016b). Such RNN-based models predict future time after time
$t$ based on the observed history unfolded up to time $t$, therefore cannot
directly predict sequences that do not have historical observations; (3)
Approaches that jointly model different sequences as uni-variate Hawkes
processes by capturing the similarities among the sequences (e.g. via a low-
rank constraint). Therefore, they can predict the future events for the
sequences without historical observations, by utilizing histories from
sequences that are structurally similar. However, such methods usually rely on
auxiliary information (He et al. 2015; Li, Wei, and Ke 2018; Shang and Sun
2018, 2019). For example, in the recommender system setting, Shang et al.
impose a local low-rank constraint on the parameter matrix to model large-
scale user-item interactions by first computing user-item pairs’ similarities
via item features (Shang and Sun 2018). In contrast, due to privacy
constraints in our application, many educational datasets are highly
anonymized and scarce. Consequently, having a model that does not require such
information is valuable in our context.
Procrastination Modeling in Education Domain As there is no quantitative
definition for procrastination behavior, in most of the recent educational
data mining literature, procrastination-related behavior has been summarized
by curating time-related features from student interactions in the course.
These studies aim to evaluate the relationships between these time-related
features and student performance, and do not model temporal aspects of
procrastination (Baker, Evans, and Dee 2016; Cerezo et al. 2017; Kazerouni et
al. 2017; Agnihotri, Baker, and Stalzer 2020). The few recent works that model
student activity sequences are limited in their assumptions, do not capture
student activity intensities, are not personalized, do not model time
dependencies between student actions, and do not infer missing data (Park et
al. 2018; Yao, Sahebi, and Feyzi-Behnagh 2020). For example, Park et al.
classify students to procrastinators vs. non-procrastinators by formulating a
measure using a mixture model of per-day student activity counts during each
week of the course (Park et al. 2018). But, it cannot model non-homogeneously
spaced deadlines in a course. Furthermore, even though each student’s
activity is counted on a daily basis, it is not a temporal approach that
models activity time points. Indeed, none of these models can predict when the
future activities will happen. Similarly, Backhage et al. proposed a Poisson
distribution to model students’ daily activity counts in order to capture
procrastination-deadline cycles of all students in the course (Backhage,
Ojeda, and Sifa 2017). In their work, each day of the week is associated with
a Poisson rate parameter that is constant during the whole course. Despite
representing individual student activity counts, this model cannot
differentiate between different weeks in the course, does not have a
continuous time scale, and cannot capture non-homogeneously spaced deadlines
in a course.
To the best of our knowledge, the only model that can be compared to ours in
predicting activity times is a Hawkes process model by Yao et al. (Yao,
Sahebi, and Feyzi-Behnagh 2020) that relates procrastination to the mutual
excitation among activity types. This work does not model student behavior
clusters, and cannot infer unseen data. We use this model, called EdMPH, as
one of our baselines.
## 3 Problem Formulation
Our goal is to model partially observed student-assignment interactions and
predict two types of future student-assignment interactions: 1) future
assignments with no historical activities (unseen data requirement), and 2)
current assignments that students are working on (assignment-student pairs
with partially observed history).
Specifically, we consider a course that includes $N$ non-parallel assignments
and $M$ students. Each student $u_{j}$ can perform a sequence of activities
towards each assignment $a_{i}$, such that each sequence is indexed by a
student-assignment pair $(a_{i},u_{j})$. Activities in a sequence are
presented with a timestamp that marks their arrival time. We assume that the
activities within each student-assignment pair happen either because they are
a pre-requisite for another assignment-related activity (internal stimuli), or
because of a non-activity related reason (external stimuli). For an example of
internal stimuli, think of when students divide their big tasks (e.g.,
submitting the final assignment response) into smaller sub-tasks (e.g.,
solving a sub-problem of the assignment), within each sub-task, one activity
spontaneously leads to another related activity. Conversely, external stimuli
can come from the individual student’s tendency to study regularly or due to
the assignment deadline. (As student activities are triggered by the upcoming
deadlines in the future, not the past, we use, without loss of generality, a
reversed activity timeline for our data.) On the other hand, we
assume no causal relationship between student-assignment pairs: since
assignments are not parallel, activities towards assignments do not trigger
each other. Further, since students do not work in teams and are not in a
social setting, there are no triggering effects between student activities. We
also assume that while students have their individual learning patterns
towards each assignment (personalization assumption), their studying
activities follow a latent structure that can group students with similar
learning behaviors (cluster assumption).
## 4 Model: Relaxed Clustered Hawkes
According to our problem formulation and assumptions, we build our model based
on uni-variate Hawkes processes. The reason behind our choice of the model is
two-fold: (1) Unlike the memoryless Poisson process that assumes the
independence among activities, Hawkes can model the aforementioned internal
and external stimuli that exist in student activities; (2) Unlike the multi-
variate Hawkes processes that assume triggering effects between dimensions,
there are no exciting effects between assignments or student sequences. We
first present the intensity function that defines student-assignment pairs. We
then add low-rank and relaxed clustering constraints to capture our
personalization and cluster assumptions, and add a Gamma prior to address the
unseen data requirement.
### 4.1 Uni-Variate Hawkes Intensity Function
Formally, given a sequence of activities for an assignment-student pair
$(a_{i},u_{j})$, we model its activities’ arrival times
$X_{i}^{j}=\\{x_{i\tau}^{j}|\tau=1,...,n_{i}^{j}\\}$ by a uni-variate Hawkes
process, via the intensity function of time $t$, defined as follows (Hawkes
1971):
$\lambda(t)_{ij}=U_{ij}+A_{ij}\beta\sum_{\tau=1}^{n_{ij}}\exp(-\beta(t-x^{j}_{i,\tau})),\vspace{-5pt}$
(1)
where $x^{j}_{i,\tau}$ is the $\tau$-th element in the vector
$X^{j}_{i}\in\mathbb{R}^{n_{ij}}$, which denotes the arrival time of the
$\tau$-th activity that belongs to assignment-student pair $(a_{i},u_{j})$,
$n_{ij}$ is the total number of observed activities for $(a_{i},u_{j})$;
$U\in\mathbb{R}^{N\times M}$ is the non-negative base rate matrix, where
$U_{ij}$ quantifies the expected number of activities that are triggered
externally within $(a_{i},u_{j})$; $A\in\mathbb{R}^{N\times M}$ is the non-
negative self-excitement matrix, with $A_{ij}$ representing the self-exciting
or bursty nature of $(a_{i},u_{j})$, i.e., the expected number of activities
that are triggered by the past activities; and $\beta$ is a global decay rate
that represents how fast the historical activities stop affecting the future
activities.
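The intensity of Eq. 1 can be sketched directly for a single student-assignment pair; a minimal pure-Python version (the function name and parameter values in the usage note are ours, not from the paper):

```python
import math

# Uni-variate Hawkes intensity of Eq. 1 for one student-assignment pair:
# base rate U (externally triggered activities), self-excitement A
# (internally triggered activities), and global decay rate `decay`.

def hawkes_intensity(t, events, U, A, decay):
    """lambda(t) given the arrival times in `events` that precede t."""
    excitation = sum(math.exp(-decay * (t - x)) for x in events if x < t)
    return U + A * decay * excitation
```

Shortly after a burst of activities the intensity exceeds the base rate $U_{ij}$, and it relaxes back toward $U_{ij}$ once the history decays, which is exactly the cramming-type burstiness motivating the model.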
### 4.2 Relaxed Clustered Hawkes
Conventional uni-variate Hawkes processes model each process individually. In
this work, we assume that the occurrences of assignment activities and their
characteristics, parameterized by Hawkes process, are similar among some
students, but less similar to some others, i.e. parameter matrix $A$ exhibits
cluster structure on its columns.
Particularly, we assume that students form $k<M$ clusters according to their
behaviors towards all assignments represented in $A$’s column vectors. To
impose this, we add a clustering constraint to our model using the sum of
squared error (SSE) penalty, similar to K-means clustering:
$\displaystyle P(A,W)$
$\displaystyle=\rho_{1}\text{tr}(A^{\top}A-W^{\top}A^{\top}AW)+\rho_{2}\text{tr}(A^{\top}A)$
(2)
$\displaystyle=\text{tr}(A((1+\frac{\rho_{1}}{\rho_{2}})I-WW^{\top})A^{\top}),$
where $\rho_{1}$ and $\rho_{2}$ are regularization coefficients;
$W\in\mathbb{R}^{M\times k}$ is an orthogonal cluster indicator matrix, with
$W_{ij}=\frac{1}{\sqrt{n_{j}}}$ if $i$ is in $j$-th cluster, and 0 otherwise
(showing which students belong to which cluster); and $n_{j}$ is the size of
cluster $j$.
Since this strict constraint is non-convex, we follow Jacob et al.’s work
(Jacob, Vert, and Bach 2009) to obtain its convex relaxation problem:
$\displaystyle\min_{A,Z}$
$\displaystyle\mathcal{L}_{c}(A,Z)=\frac{\rho_{2}(\rho_{2}+\rho_{1})}{\rho_{1}}\text{tr}\big{(}A(\frac{\rho_{1}}{\rho_{2}}I+Z)^{-1}A^{\top}\big{)}$
(3) $\displaystyle\text{s.t. }\text{tr}(Z)=k,\ Z\preceq I,\ Z\in S_{+}^{M}.$
$Z=WW^{\top}\in\mathbb{R}^{M\times M}$ represents cluster-based similarity of
students, with $W$ defined in Eq. 2. Here, the trace norm is a surrogate of
the original assumption that there are $k$ clusters and the other two
constraints are the relaxation of $W$ being orthogonal. As a result, this
problem is jointly convex in both $A$ and $Z$. We call this model RCHawkes.
### 4.3 Mixture Gamma Prior
To improve our model's robustness to potential outliers and to reduce
overfitting, we add a mixture Gamma prior on the self-excitement matrix $A$.
As a result, the sum of the first three terms in Equation 8 is a maximum
a-posteriori (MAP) estimate, which is not only more robust than maximum
likelihood estimation but also gives each component's hyperparameters an
interpretation within student clusters: namely, the pseudo-counts of
externally and internally excited activities. Specifically, consider the
prior for $A_{ij}$ when student $j$ is in the $m$-th cluster:
$\displaystyle p(A_{ij};\Theta_{m})$
$\displaystyle=\frac{1}{\Gamma(s_{m})\theta_{m}^{s_{m}}}A_{ij}^{s_{m}-1}\exp(-\frac{A_{ij}}{\theta_{m}}),$
(4)
where $\Theta_{m}=(s_{m},\theta_{m})$ are hyperparameters which respectively
control the shape and the scale of the Gamma distribution in cluster $m$. The
loss brought by the mixture Gamma prior can be computed as follows:
$\displaystyle\mathcal{L}_{g}=\log p(A;\Theta_{1},..,\Theta_{k})$ (5)
$\displaystyle=\sum_{X_{i}^{j}\in\mathcal{O}}\big{[}\log\sum_{m=1}^{k}\frac{1}{k}\frac{1}{\Gamma(s_{m})\theta_{m}^{s_{m}}}A_{ij}^{s_{m}-1}\exp(-\frac{A_{ij}}{\theta_{m}})\big{]},$
where $\mathcal{O}$ is the collection of all observed $X_{i}^{j}$.
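A small sketch of evaluating this equal-weight mixture log-prior over a set of observed entries of $A$ (helper names are ours; the density follows the shape/scale parameterization above):

```python
import math

def gamma_pdf(x, shape, scale):
    """Gamma density with shape s and scale theta:
    x^{s-1} * exp(-x/theta) / (Gamma(s) * theta^s)."""
    return (x ** (shape - 1) * math.exp(-x / scale)
            / (math.gamma(shape) * scale ** shape))

def mixture_gamma_log_prior(A_vals, components):
    """Sum over observed entries of the log of the equal-weight (1/k)
    mixture density; `components` is a list of (shape, scale) pairs,
    one per student cluster."""
    k = len(components)
    total = 0.0
    for a in A_vals:
        mix = sum(gamma_pdf(a, s, th) for s, th in components) / k
        total += math.log(mix)
    return total
```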
### 4.4 Objective Function
For our model, we need to consider the multiple sequences (as in Eq. 1) and
add the introduced constraints. To ease the computation, we first introduce a
recursive function $R$ and a matrix $T$ that can be computed offline.
$\displaystyle
R_{ij}(\tau)=\begin{cases}\big{(}1+R_{ij}(\tau-1)\big{)}\exp(-\beta\big{(}x^{j}_{i,\tau}-x^{j}_{i,\tau-1})\big{)}&\text{if
$\tau>1$,}\\\ 0&\text{if $\tau=1$}.\end{cases}$ (6)
We also construct the matrix $T$ as follows to avoid repetitive computation in
iterations:
$\displaystyle
T=[\sum_{\tau=1}^{n_{ij}}(\exp(-\beta(x^{j}_{i,n_{ij}}-x^{j}_{i,\tau}))-1)]_{N\times
M}$ (7)
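The recursion in Eq. 6 collapses the $O(K^{2})$ direct sum $\sum_{\tau'<\tau}\exp(-\beta(x_{\tau}-x_{\tau'}))$ to $O(K)$ per sequence. The sketch below (our names) checks the recursion against the direct sum and computes one entry of $T$ for a single pair:

```python
import math

def recursive_R(times, beta):
    """R per Eq. 6: R(1) = 0 and
    R(tau) = (1 + R(tau-1)) * exp(-beta * (x_tau - x_{tau-1}))."""
    R = [0.0]
    for tau in range(1, len(times)):
        R.append((1.0 + R[tau - 1]) * math.exp(-beta * (times[tau] - times[tau - 1])))
    return R

def naive_R(times, beta):
    """Direct O(K^2) sum: R(tau) = sum_{tau' < tau} exp(-beta*(x_tau - x_tau'))."""
    return [sum(math.exp(-beta * (times[t] - times[s])) for s in range(t))
            for t in range(len(times))]

def T_entry(times, beta):
    """One entry of T (Eq. 7): sum_tau (exp(-beta*(x_n - x_tau)) - 1)."""
    last = times[-1]
    return sum(math.exp(-beta * (last - x)) - 1.0 for x in times)
```

Note that $T_{ij}$ relates to the recursion by $T_{ij}=R(n_{ij})+1-n_{ij}$, since the $\tau=n_{ij}$ term contributes $\exp(0)=1$.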
To this end, the final objective function of our proposed model, given the
observed activities $X$ for all assignment-student pairs, can be described as
in Eq. 8.
$\displaystyle\min_{A\geq 0,U\geq 0,Z}-L(X;A,U)$ (8)
$\displaystyle=-\sum_{X_{i}^{j}\in\mathcal{O}}\Big{[}\sum_{\tau=1}^{n_{ij}}\log\big{(}U_{ij}+A_{ij}\beta
R^{j}_{i}(\tau)\big{)}-U_{ij}x^{j}_{i,n_{ij}}+A_{ij}T_{ij}\Big{]}$
$\displaystyle-\mathcal{L}_{g}(A;\Theta_{1},..,\Theta_{k})+\mathcal{L}_{c}(A,Z)+\rho_{3}\text{tr}(A)$
$\displaystyle\text{s.t. }A\geq 0,U\geq 0,$
where $\mathcal{L}_{c}$ and $\mathcal{L}_{g}$ are the previously defined
losses introduced by the clustering constraint and the Gamma prior,
respectively, and $\rho_{3}$ is a regularization coefficient.
The trace norm regularization is a convex surrogate for the rank of $A$,
which enables knowledge transfer from the processes with observations to the
unseen assignment-student pairs that do not have any observed historical
activities. Finally, so as not to violate the definition of the Hawkes
process, we impose non-negativity constraints on $A$ and $U$.
### 4.5 Optimization
To solve the minimization problem in Eq. 8, we could use stochastic gradient
descent. However, the non-negativity constraints on $A$ and $U$, along with
the non-smooth trace norms, complicate the optimization. To tackle this
problem, we use the Accelerated Gradient Method (Nesterov 2013). The key
component of this method is computing the following proximal operator:
$\displaystyle\min_{A_{z},U_{z},Z_{z}}\|A_{z}-A_{s}\|_{F}^{2}+\|U_{z}-U_{s}\|_{F}^{2}+\|Z_{z}-Z_{s}\|_{F}^{2}$
(9) $\displaystyle\text{s.t. }tr(Z_{z})=k,tr(A_{z})\leq c,A_{z}\geq
0,U_{z}\geq 0,Z_{z}\preceq I,Z_{z}\in S_{+}^{M},$
where subscripts $z$ and $s$ respectively denote the corresponding parameter
value at the current iteration and at the search point (Nesterov 2013). We
present Algorithm 1, which efficiently solves the objective function within
the Accelerated Gradient Descent framework. (Details of the algorithm, its
complexity, convergence analyses, and our code can be found at
https://github.com/persai-lab/AAAI2020-RCHawkes-Gamma.)
Input: $\eta>1$, step size $\gamma_{0}$, $\rho_{3}$, MaxIter
1 initialization: $A_{1}=A_{0};U_{1}=U_{0};Z_{1}=\frac{k}{M}\times I$;
$\alpha_{0}=0;\alpha_{1}=1$;
2 for _$i=1$ to MaxIter_ do
3 $a_{i}=\frac{\alpha_{i-1}-1}{\alpha_{i}}$;
4 $S_{i}^{A}=A_{i}+a_{i}(A_{i}-A_{i-1})$;
5 $S_{i}^{U}=U_{i}+a_{i}(U_{i}-U_{i-1})$;
6 $S_{i}^{Z}=Z_{i}+a_{i}(Z_{i}-Z_{i-1})$;
7 while _True_ do
8 Compute $A_{*}=\mathcal{M}_{S^{A}_{i},\gamma_{i}}(A)$
9 =
$\big{(}\text{TrPro}(S_{i}^{A}-\nabla\mathcal{L}(A)/\gamma_{i},\rho_{3})\big{)}_{+}$
;
10 Compute $U_{*}=\mathcal{M}_{S^{U}_{i},\gamma_{i}}(U)$ ;
11 Eigen-decompose $S_{i}^{Z}=Q\Sigma Q^{-1}$;
12 Compute
$\underset{\sigma^{*}_{i}}{\operatorname{argmin}}\sum\limits_{i}(\sigma_{i}-\hat{\sigma}_{i})^{2}$,
$\sum\limits_{i}^{M}\sigma_{i}=k,~{}0\leq\sigma_{i}\leq 1$;
13 Compute $\Sigma_{*}=diag(\sigma^{*}_{1},...,\sigma^{*}_{M})$;
14 Compute $Z_{*}=Q\Sigma_{*}Q^{-1}$;
15 if
_$\mathcal{L}(A_{*},U_{*},Z_{*})\leq\mathcal{L}(S_{i}^{A},S_{i}^{U},Z_{i})+\sum_{x\in\\{A,U,Z\\}}\langle
S_{i}^{x},\delta\mathcal{L}(S_{i}^{x})\rangle+\alpha_{k}/2\|S_{i}^{x}-x_{*}\|_{F}^{2}$_
then
16 break;
17
18 else
19 $\gamma_{i}=\gamma_{i-1}\times\eta$;
20
21 end if
22
23 $A_{i+1}=A_{*}$; $U_{i+1}=U_{*}$; $Z_{i+1}=Z_{*}$;
24 if _stopping criterion satisfied_ then
25 break;
26 else
27 $\alpha_{i}=\frac{1+\sqrt{1+4\alpha_{i-1}^{2}}}{2}$
28 end if
29
30 end while
31
32 end for
Output: $A=A_{i+1},U=U_{i+1},Z=Z_{i+1}$
Algorithm 1 Accelerated PGA
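A stripped-down sketch of this accelerated scheme, applied to a toy nonnegativity-constrained quadratic rather than the full objective (the projection callback stands in for the TrPro and eigenvalue projections of lines 8-14; names are ours):

```python
import math
import numpy as np

def accelerated_projected_gradient(grad, project, x0, step, iters=50):
    """Skeleton of Algorithm 1's outer loop: extrapolate a search point S
    from the last two iterates, take a gradient step at S, then apply the
    projection (proximal) step.  alpha follows the Nesterov update."""
    x_prev = x = np.asarray(x0, dtype=float)
    alpha_prev, alpha = 0.0, 1.0
    for _ in range(iters):
        a = (alpha_prev - 1.0) / alpha          # momentum weight (line 3)
        S = x + a * (x - x_prev)                # search point (lines 4-6)
        x_prev, x = x, project(S - step * grad(S))
        alpha_prev, alpha = alpha, (1.0 + math.sqrt(1.0 + 4.0 * alpha ** 2)) / 2.0
    return x

# Toy instance: min 0.5*||x - b||^2 subject to x >= 0, whose solution is
# max(b, 0); clipping at zero plays the role of the (.)_+ operator.
```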
## 5 Experiments
In this section, we evaluate our approach against several state-of-the-art
competitors on both simulated and real datasets.
Setup. In the simulated data, we randomly select a ratio
$r\in\{0.1,0.3,0.5,0.7\}$ of students' last two assignments' activities to be
entirely missing (unseen set); for the rest of the student-assignment pairs,
the first $70\%$ of activities are used for training (training set) and the
last $30\%$ for testing (seen set). In both real datasets, the unit of time
is one hour, and we use activities that took place before the midpoint of the
last assignment for training. Hyperparameters of the proposed and baseline
models are tuned via grid search.
### 5.1 Baselines
We consider two sets of state-of-the-art baselines: the ones that are able to
infer unseen data, and the ones that cannot. A summary of all baseline
approaches is presented in Table 1. In the following we briefly introduce each
of the baselines.
EdMPH: A Hawkes model that was recently proposed to model student
procrastination in the Educational Data Mining domain (Yao, Sahebi, and
Feyzi-Behnagh 2020). It applies a multivariate Hawkes model that utilizes
student activity types as extra information, and it cannot infer unseen data.
RMTPP: A Recurrent Neural Network Hawkes model to represent user-item
interactions (Du et al. 2016b). It does not directly infer parameters of
unseen data and it uses activity markers (i.e. features) as an input.
ERPP: A similar approach to baseline RMTPP, but it includes time-series loss
in the loss function (Xiao et al. 2017).
HRPF and DRPF: Two Poisson factorization models proposed in (Hosseini et al.
2018) that do not require a user network as auxiliary features. These models,
however, do not directly model the time dependencies between the future and
the past, and thus cannot quantify activity self-excitement.
HPLR: An item recommendation model using Hawkes process (Du et al. 2015b). It
is the most similar to ours, as it imposes a low rank assumption on matrices
$A$ and $U$ and can infer unseen data. However, unlike our model, it does not
consider the cluster structure of parameter matrix $A$.
RCHawkes: A variation of our proposed model that does not use a Gamma prior.
Its objective is to find the maximum likelihood estimate rather than the
maximum a-posteriori estimate.
Table 1: A summary of baseline approaches.

| Application | Model | Infer Unseen Data | Require No External Features | Model Time Dependency |
|---|---|---|---|---|
| EDM | RCHawkes-Gamma | ✓ | ✓ | ✓ |
| EDM | RCHawkes | ✓ | ✓ | ✓ |
| EDM | EdMPH | ✗ | ✗ | ✓ |
| Rec-Sys | HPLR | ✓ | ✓ | ✓ |
| Rec-Sys | ERPP | ✗ | ✗ | ✓ |
| Rec-Sys | HRPF | ✓ | ✓ | ✗ |
| Rec-Sys | DRPF | ✓ | ✓ | ✗ |
| Rec-Sys | RMTPP | ✗ | ✗ | ✓ |
### 5.2 Datasets
Synthetic Data To create simulated student-assignment pairs, we first
construct the parameter matrices. We build $A_{s}$ by: a) sampling $k=3$ sets
of column $\alpha$’s from different Gamma distributions, for different student
procrastination behavior clusters; b) adding white noise ($\sigma^{2}=0.1$);
and c) shuffling all columns randomly to break the order. We build $U_{s}$ by
sampling it from a normal distribution. Then, we sample $150$ activities for
each assignment-student pair using the Ogata thinning algorithm (Ogata 1988),
which is the most commonly used sampling method in the related literature.
Finally, we obtain $5400$ simulated student-assignment pairs and $810$K
synthetic activities.
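A minimal version of Ogata's thinning for a single uni-variate Hawkes process, using the fact that the exponential-kernel intensity is non-increasing between events (our implementation sketch, not the paper's code):

```python
import math
import random

def ogata_thinning(mu, alpha, beta, horizon, seed=0):
    """Sample one uni-variate Hawkes process on [0, horizon) by Ogata's
    thinning: propose candidates from a homogeneous Poisson process whose
    rate upper-bounds lambda(t), then accept with probability
    lambda(candidate) / bound."""
    rng = random.Random(seed)

    def lam(t, events):
        return mu + alpha * beta * sum(math.exp(-beta * (t - x)) for x in events)

    t, events = 0.0, []
    while True:
        bound = lam(t, events)            # valid bound until the next event
        t += rng.expovariate(bound)       # candidate inter-arrival time
        if t >= horizon:
            break
        if rng.random() * bound <= lam(t, events):
            events.append(t)              # accepted: excitation kicks in
    return events
```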
Computer Science Course on Canvas Network (CANVAS) This real-world MOOC
dataset is from the Canvas Network platform (Canvas-Network 2016). Canvas
Network is an online platform that hosts various open courses in different
academic disciplines. The computer science course we use spans $\sim 6$
weeks. In each week, an assignment-style quiz is published in the course,
resulting in $6$ course assignments. In total, we extract $\sim 740$K
assignment-related activity timestamps from $471$ students.
Big Data in Education on Coursera (MORF) Our second real-world dataset is
collected from an 8-week “Big Data in Education” course on the Coursera
platform. The dataset is available through the MOOC Replication Framework
(MORF) (Andres et al. 2016). In total, we extract $\sim 102$K activities of
$675$ students related to $8$ assignments.
### 5.3 Fit and Prediction Performance of RCHawkes-Gamma
#### Estimated Parameters on Simulated Data
In the simulated dataset, as we know the true parameters (i.e., $A$ and $U$),
we compare the Root Mean Squared Error (RMSE) of the estimated $\hat{A}$ and
$\hat{U}$ while varying the unseen data ratio $r$. (Baselines ERPP, RMTPP,
and EdMPH cannot be used in this analysis, since they parameterize the
processes differently.) The results are presented in Tbl. 2. RCHawkes-Gamma
and RCHawkes outperform the baseline methods, usually by a large margin, for
both the sequences with seen and with unseen history. Also, even though all
models perform worse as $r$ increases, the RMSEs of RCHawkes-Gamma and
RCHawkes have a lower standard deviation, indicating less variation in their
performance even at high missing-data ratios. Additionally, the models'
performance on unseen data is generally worse than on the processes with
observed historical activities. One possible reason is that the Hawkes
parameters for unseen data in this simulation can only be inferred from
similar processes with observed data, by leveraging the row and column
relatedness, while the true characteristics of the unseen processes cannot be
entirely captured, as there are no observations the models can use.
Table 2: RMSE ($\pm$standard deviation) of $\hat{A}$ and $\hat{U}$ on seen and
unseen data, with various missing data ratios ($r$)
| | Model | r = 0.1 seen | r = 0.1 unseen | r = 0.3 seen | r = 0.3 unseen | r = 0.5 seen | r = 0.5 unseen | r = 0.7 seen | r = 0.7 unseen |
|---|---|---|---|---|---|---|---|---|---|
| RMSE for $A$ | RCHawkes-Gamma | 0.094$\pm$0.024 | 0.102$\pm$0.037 | 0.121$\pm$0.017 | 0.114$\pm$0.056 | 0.141$\pm$0.033 | 0.139$\pm$0.033 | 0.136$\pm$0.077 | 0.137$\pm$0.052 |
| | RCHawkes | 0.108$\pm$0.017 | 0.108$\pm$0.054 | 0.115$\pm$0.024 | 0.116$\pm$0.039 | 0.126$\pm$0.033 | 0.136$\pm$0.033 | 0.180$\pm$0.072 | 0.170$\pm$0.048 |
| | HPLR | 0.631$\pm$0.110 | 0.663$\pm$0.331 | 0.645$\pm$0.141 | 0.607$\pm$0.216 | 0.635$\pm$0.133 | 0.633$\pm$0.133 | 0.634$\pm$0.304 | 0.634$\pm$0.204 |
| | HRPF | 0.664$\pm$0.769 | 0.664$\pm$0.769 | 0.664$\pm$0.770 | 0.664$\pm$0.770 | 0.663$\pm$0.769 | 0.663$\pm$0.770 | 0.664$\pm$0.769 | 0.664$\pm$0.767 |
| | DRPF | 0.474$\pm$0.461 | 0.474$\pm$0.461 | 0.479$\pm$0.465 | 0.479$\pm$0.465 | 0.473$\pm$0.462 | 0.473$\pm$0.462 | 0.474$\pm$0.463 | 0.474$\pm$0.463 |
| RMSE for $U$ | RCHawkes-Gamma | 0.075$\pm$0.022 | 0.085$\pm$0.036 | 0.069$\pm$0.017 | 0.060$\pm$0.050 | 0.062$\pm$0.030 | 0.064$\pm$0.030 | 0.071$\pm$0.039 | 0.075$\pm$0.026 |
| | RCHawkes | 0.074$\pm$0.020 | 0.089$\pm$0.061 | 0.074$\pm$0.020 | 0.075$\pm$0.032 | 0.077$\pm$0.030 | 0.079$\pm$0.030 | 0.069$\pm$0.026 | 0.062$\pm$0.017 |
| | HPLR | 0.110$\pm$0.082 | 0.078$\pm$0.047 | 0.081$\pm$0.060 | 0.078$\pm$0.094 | 0.091$\pm$0.035 | 0.091$\pm$0.035 | 0.090$\pm$0.096 | 0.095$\pm$0.065 |
| | HRPF | 0.105$\pm$0.055 | 0.311$\pm$0.055 | 0.119$\pm$0.068 | 0.183$\pm$0.068 | 0.141$\pm$0.071 | 0.142$\pm$0.071 | 0.179$\pm$0.068 | 0.120$\pm$0.070 |
| | DRPF | 0.062$\pm$0.052 | 0.300$\pm$0.035 | 0.088$\pm$0.049 | 0.165$\pm$0.045 | 0.121$\pm$0.051 | 0.121$\pm$0.050 | 0.167$\pm$0.053 | 0.102$\pm$0.054 |
Figure 1: Time prediction error and 95% confidence interval on synthetic
datasets with varying data missing ratios ($r$)
Figure 2: Time prediction error on seen and unseen data with 95% confidence
interval on real-world datasets
#### Clustering Structure of Hawkes Parameters
To see if the cluster structure of students is well captured by each model, we
compute and present the correlation matrix of $\hat{A}$ between students with
the recovered cluster orders in Figure 3.
Figure 3: The ground truth of $A$’s correlation matrix (a), and the estimated
$\hat{A}$’s correlation matrix learned by each model.
Our proposed models recover this structure most closely to the ground truth
(Figure 3(a)), i.e., a higher correlation within clusters (darker blocks) and
a lower correlation between clusters (lighter blocks). HPLR introduces
unnecessary correlations between clusters, possibly because it lacks the
student-cluster assumption. HRPF simply assumes all assignment-student pairs
share the same parameter, and thus has a meaningless correlation of 1 among
all students. Finally, although DRPF improves on HRPF by considering activity
self-excitement, it fails to capture any meaningful correlation within
clusters.
#### Returning Time Prediction on Simulated and Real Data
In these experiments, we use a popular metric that has been used in many
Hawkes-based models, i.e. RMSE on Time Prediction (TP), where TP is defined on
the estimated next activity time, given the observed history (e.g. (Du et al.
2013)). The baselines that do not directly infer parameters on unseen data
(the future assignment scenario) are not included in the unseen data
evaluation. Following the method used in (Du et al. 2013), we sampled future
activities based on the learned parameters via Ogata’s thinning algorithm.
Figures 1 and 2 respectively present the prediction error and $95\%$
confidence intervals on simulated and real-world data. As we can see from
Figure 1, the proposed methods RCHawkes-Gamma and RCHawkes consistently
outperform the other baselines in all settings, except when the missing
ratio is 0.1, in which case RMTPP and ERPP achieve the smallest error on seen
data. However, unlike the proposed models, which are almost invariant to the
increase of the missing ratio, the performance of ERPP and RMTPP changes
dramatically with increasing $r$. More importantly, they lack the ability to
directly predict the next activity time when the activity history is unseen.
When comparing the baselines on the real datasets (Figure 2), all approaches
perform better on the CANVAS dataset than on MORF. One possible explanation
is that, in CANVAS, each assignment-student pair contains more historical
activity timestamps, and therefore provides all approaches with more
training data than the MORF dataset.
### 5.4 Student Procrastination in RCHawkes-Gamma
Our application goal is to study students’ cramming behaviors in MOOCs by
modeling students’ historical learning activities. In the following section,
we will switch our focus to finding the connections between the
characteristics of students’ learning activities (parameterized by our model)
and students’ cramming behaviors.
#### Students’ Cramming Behaviors
Since procrastination does not have a quantitative definition, in the first
step of our analysis, we define the following measure to describe the degree
of student procrastination presented in MOOCs:
$\text{delay}=\frac{t^{a}_{ij}-t^{s}_{ij}}{t^{d}_{ij}-t^{s}_{ij}}$ to quantify
student $j$’s normalized delay in starting any activity that is associated
with assignment $i$, where superscript $s,a,d$ respectively represents the
start of the assignment, the first, and the last activity in the student-
assignment pair. Intuitively, this measure is the absolute time that student
$j$ delays in starting assignment $i$, normalized by the duration that
assignment $i$ is available for student $j$. Note that this measure is just a
simple representation and cannot replace our model in predicting next activity
times or uncovering cluster structures.
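The measure itself is a one-liner; here we read $t^{d}$ as the end of the assignment's availability window (the deadline), per the normalization just described:

```python
def normalized_delay(t_start, t_first_activity, t_deadline):
    """delay = (t^a - t^s) / (t^d - t^s): the wait before the first activity,
    as a fraction of the time the assignment is available.  Values near 0
    mean an early start; values >= 1 mean starting at or after the deadline."""
    return (t_first_activity - t_start) / (t_deadline - t_start)
```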
#### Correlation Analysis
In order to show how students' activities, as parameterized by the Hawkes
model, and student delays are associated, we compute Spearman's rank
correlation coefficient between each pair of the variables.
Table 3: Spearman’s correlation between learned parameters and computed
normalized student delays. p$<$0.001*** p$<$0.01 ** p$<$0.05*
| $\alpha$ | $\mu$ | delay
---|---|---|---
CANVAS | $\alpha$ | 1 | |
$\mu$ | 0.284*** | 1 |
delay | 0.345*** | 0.144*** | 1
MORF | $\alpha$ | 1 | |
$\mu$ | 0.243*** | 1 |
delay | 0.264*** | 0.412*** | 1
We choose the Spearman’s correlation because it does not assume a normal
distribution on the parameters, nor a linear relationship between the
variables as Pearson correlation does. As we can see in Table 3, the two-sided
p-values suggest that the correlations between these variables are
statistically significant. We can also see that all the correlation
coefficients are positive, meaning that student delays are positively
associated with the base rate, i.e. expected number of occurrences per unit
time that are excited by external stimuli (for example deadlines), and the
burstiness of the occurrences. On the other hand, looking at the two courses
side by side, we can see that delay is more strongly associated with
$\alpha$ in CANVAS, while its association with the base rate $\mu$ is
stronger in MORF. This suggests two different kinds of relationships between
students and assignments: while in CANVAS big bursts of activities may signal
delays, in MORF small but frequent activities are associated with student
delays.
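In practice one would call `scipy.stats.spearmanr`; a dependency-free sketch of the statistic itself (the Pearson correlation of average ranks) is:

```python
import math

def _ranks(v):
    """Average ranks (1-based), with ties assigned their mean rank."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / math.sqrt(vx * vy)
```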
#### Clustering Analysis
To see if there are meaningful differences in the delay measure for students
in different clusters, we first cluster the students using the K-means
clustering algorithm, which has a similar objective to the cluster-structure
term in our model (Eq. 2), on the learned $\hat{A}$ matrix. Specifically,
student $u_{j}$ is represented by the vector of estimated self-excitement
parameters $(\hat{\alpha}_{1,j},...,\hat{\alpha}_{N,j})$ that are learned by
RCHawkes-Gamma, and the cluster number for K-means is decided via grid search
by looking at SSE. To examine the possible differences between clusters of
students in terms of student delays, we conduct the Kruskal-Wallis test on all
student delays across the clusters for each assignment. We report the average
delay of all students in each cluster and for each assignment. The results are
shown in Table 4 for CANVAS and in Table 5 for MORF dataset.
Table 4: Kruskal Wallis test on delays in different clusters in CANVAS
dataset. p$<$0.001*** p$<$0.01 ** p$<$0.05*
Assign. #. | cluster 1 | cluster 2 | cluster 3 | cluster 4 | p-value
---|---|---|---|---|---
size | 81 | 144 | 207 | 39 | -
1 | 0.3335 | 0.4583 | 0.6108 | 0.9064 | 1.34E-16***
2 | 0.6245 | 0.5788 | 0.8476 | 1.0854 | 3.59E-09***
3 | 0.6911 | 0.7143 | 0.8633 | 0.9655 | 4.36E-05***
4 | 0.6050 | 0.6958 | 0.8515 | 1.0717 | 0.0008***
5 | 0.5969 | 0.7080 | 0.9084 | 1.1217 | 0.0195*
6 | 0.5351 | 0.7647 | 0.9002 | 1.0970 | 0.0149*
Table 5: Kruskal Wallis test on delays in different clusters in MORF dataset.
p$<$0.001*** p$<$0.01 ** p$<$0.05*
Assign. #. | cluster 1 | cluster 2 | cluster 3 | p-value
---|---|---|---|---
size | 573 | 34 | 68 | -
1 | 0.4991 | 0.6710 | 0.4477 | 2.30E-09***
2 | 0.5120 | 0.7288 | 0.4855 | 1.90E-08***
3 | 0.5570 | 0.6904 | 0.6105 | 7.50E-05***
4 | 0.4699 | 0.6122 | 0.5360 | 0.0004***
5 | 0.5626 | 0.6358 | 0.6308 | 0.0070***
6 | 0.5329 | 0.6236 | 0.6642 | 8.56E-06***
7 | 0.4325 | 0.5598 | 0.7672 | 2.12E-20***
8 | 0.3974 | 0.5172 | 0.7629 | 3.84E-27***
In CANVAS, 4 student clusters are found. These clusters all have significant
differences in terms of delays. For example, students in cluster 1 have the
smallest delay, with a generally decreasing trend towards the later
assignments. On the other hand, delays are the worst for students in cluster
4, with an average delay greater than or close to 1 for all assignments,
which implies that this group of students tends to start the assignments very
close to, or even later than, the deadline. In the 3 clusters found in the
MORF dataset, the p-values of the Kruskal-Wallis tests show strong evidence
of cluster differences for each assignment. Specifically, the majority of the
students in the MORF course are in cluster 1, and their delays are overall
the lowest compared to the other two clusters. They tend to delay less and less over
time. On the other hand, students in cluster 3 start the course with a low
delay but increase their delay so fast that at the end of the course, they
turn out to be the students who delay the most. This analysis demonstrates
that the self-excitement parameters have strong associations with student
delays, which not only reinforces the findings from the correlation analysis,
but also suggests that they are good indicators in characterizing students’
cramming behaviors.
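The test statistic used above can be sketched as follows (a simplified version with no tie correction; in practice `scipy.stats.kruskal` also supplies the $\chi^{2}$-based p-value):

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic (no tie correction):
    H = 12/(N(N+1)) * sum_g n_g * (Rbar_g - (N+1)/2)^2,
    computed on the pooled ranks of all observations."""
    pooled = sorted((v, g) for g, vals in enumerate(groups) for v in vals)
    N = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, g) in enumerate(pooled, start=1):
        rank_sums[g] += rank
    h = 0.0
    for g, vals in enumerate(groups):
        n = len(vals)
        rbar = rank_sums[g] / n           # mean rank of group g
        h += n * (rbar - (N + 1) / 2.0) ** 2
    return 12.0 / (N * (N + 1)) * h
```

Well-separated groups of delays yield a large $H$ (and a small p-value), while interleaved groups yield a small $H$.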
## 6 Conclusion
In this paper, we proposed a novel uni-variate clustered Hawkes process model,
RCHawkes-Gamma to model procrastination behaviors of students in MOOCs.
Particularly, the proposed method models activities on all assignment-student
pairs jointly and assumes cluster structures between students and relatedness
between assignments. We test our proposed model on a synthetic dataset and two
real-world MOOC datasets. The results of our experiments show that our
proposed model can predict students’ next activity time with lower time
prediction error on both seen and unseen data, compared to the baseline
methods. We also study and analyze the parameters learned by the proposed
model on both MOOC datasets. Our analysis reveals the positive associations
between student delays with our model’s parameters. The model also discovers
meaningful clusters of students who show different delaying behavior trends.
It is also worth noting that our proposed approach can be useful in real-world
scenarios such as for professional educators or adaptive learning systems. As
an example, the prediction of future activities especially on the unseen
student-assignment pairs can provide teachers the opportunity to intervene
with students who show strong procrastination tendencies. For students, their
learning activities can be presented in formats such as a dashboard, for
visualization, summarization, and feedback generation, which in turn can be
beneficial in regularizing students’ learning behaviors. While our model is
created with the education domain in mind, it can be applied to other domains
such as recommender systems.
A limitation of this work is that the delay measure is used as a proxy for
procrastination, while self-reported procrastination measures could have
helped in labeling delays more accurately as procrastination. Furthermore, our
proposed method does not aim to differentiate between active procrastination
(i.e. due to the internal need to experience thrill by delaying the tasks to
the last minute) and passive procrastination (irrational delay despite
expecting negative results due to the delay), as indications such as
purposeful, strategic, and arousal delay can not be inferred from the datasets
used in this work.
## 7 Acknowledgement
This paper is based upon work supported by the National Science Foundation
under Grant Number 1917949.
## 8 Appendix
### 8.1 Algorithm 1
#### Algorithm Walk-through
In the following, we provide details of Algorithm 1, repeated below for
convenience. Specifically, the following subroutines are invoked repeatedly
in the algorithm:
Input: $\eta>1$, step size $\gamma_{0}$, $\rho_{3}$, MaxIter
1 initialization: $A_{1}=A_{0};U_{1}=U_{0};Z_{1}=\frac{k}{M}\times I$;
$\alpha_{0}=0;\alpha_{1}=1$;
2 for _$i=1$ to MaxIter_ do
3 $a_{i}=\frac{\alpha_{i-1}-1}{\alpha_{i}}$;
4 $S_{i}^{A}=A_{i}+a_{i}(A_{i}-A_{i-1})$;
5 $S_{i}^{U}=U_{i}+a_{i}(U_{i}-U_{i-1})$;
6 $S_{i}^{Z}=Z_{i}+a_{i}(Z_{i}-Z_{i-1})$;
7 while _True_ do
8 Compute $A_{*}=\mathcal{M}_{S^{A}_{i},\gamma_{i}}(A)$
9 =
$\big{(}\text{TrPro}(S_{i}^{A}-\nabla\mathcal{L}(A)/\gamma_{i},\rho_{3})\big{)}_{+}$
;
10 Compute $U_{*}=\mathcal{M}_{S^{U}_{i},\gamma_{i}}(U)$ ;
11 Eigen-decompose $S_{i}^{Z}=Q\Sigma Q^{-1}$;
12 Compute
$\underset{\sigma^{*}_{i}}{\operatorname{argmin}}\sum\limits_{i}(\sigma_{i}-\hat{\sigma}_{i})^{2}$,
$\sum\limits_{i}^{M}\sigma_{i}=k,~{}0\leq\sigma_{i}\leq 1$;
13 Compute $\Sigma_{*}=diag(\sigma^{*}_{1},...,\sigma^{*}_{M})$;
14 Compute $Z_{*}=Q\Sigma_{*}Q^{-1}$;
15 if
_$\mathcal{L}(A_{*},U_{*},Z_{*})\leq\mathcal{L}(S_{i}^{A},S_{i}^{U},Z_{i})+\sum_{x\in\\{A,U,Z\\}}\langle
S_{i}^{x},\delta\mathcal{L}(S_{i}^{x})\rangle+\alpha_{k}/2\|S_{i}^{x}-x_{*}\|_{F}^{2}$_
then
16 break;
17
18 else
19 $\gamma_{i}=\gamma_{i-1}\times\eta$;
20
21 end if
22
23 $A_{i+1}=A_{*}$; $U_{i+1}=U_{*}$; $Z_{i+1}=Z_{*}$;
24 if _stopping criterion satisfied_ then
25 break;
26 else
27 $\alpha_{i}=\frac{1+\sqrt{1+4\alpha_{i-1}^{2}}}{2}$
28 end if
29
30 end while
31
32 end for
Output: $A=A_{i+1},U=U_{i+1},Z=Z_{i+1}$
Algorithm 1 Accelerated PGA (repeated from Sec. 4.5)
(1) Computation of $A_{*}$ (lines 8-9): The objective of this part is defined
as follows:
$\displaystyle\min_{A_{z}}F_{A}(A_{z}):=\|A_{z}-A_{s}\|_{F}^{2}~{}\text{s.t.}~{}tr(A_{z})\leq
c,A_{z}\geq 0.$ (10)
Following the Accelerated Gradient Method schema, we compute
$A_{*}=\mathcal{M_{\gamma,S^{A}}}$ (line 8), where
$\mathcal{M_{\gamma,S^{A}}}:=\frac{1}{\gamma}\|A-\big{(}S^{A}-\frac{1}{\gamma}\nabla\mathcal{L}(A)\big{)}_{+}\|_{F}^{2}+\rho_{3}tr(A)$
(Ji and Ye 2009), where $S^{A}$ is the current search point, $\gamma$ is the
step size, and $\rho_{3}$ is the regularization coefficient. Specifically, we
use the trace norm projection (TrPro) (Cai, Candès, and Shen 2010) to solve
the above minimization problem. Finally, $(\cdot)_{+}$ projects negative
values to 0, as we constrain $A$ to be non-negative.
(2) Computation of $U_{*}$ (line 10): similarly to the computation of
$A_{*}$, we compute the optimal value of $U$,
$U_{*}=\mathcal{M}_{S^{U}_{i},\gamma_{i}}(U)$, where $S^{U}$ is the current
search point of $U$ and $(\cdot)_{+}$ is the non-negative projection.
Specifically, the objective of this computation is:
$\displaystyle\min_{U_{z}}F_{U}(U_{z}):=\|U_{z}-U_{s}\|_{F}^{2}~{}\text{s.t.}~{}U_{z}\geq
0.$ (11)
(3) Computation of $Z_{*}$ (lines 11-14): as the constraints on $Z$ are more
complicated, the proximal operator also has more terms. Specifically, the goal
is to solve the following optimization problem:
$\displaystyle\min_{Z_{z}}\|Z_{z}-\hat{Z_{s}}\|_{F}^{2},\text{ s.t.
}tr(Z_{z})=k,Z_{z}\preceq I,Z_{z}\in S_{+}^{M}$ (12)
To solve this problem, we apply an eigen-decomposition
$S_{i}^{Z}=Q\Sigma Q^{\prime}$, where
$\Sigma=diag(\hat{\sigma}_{1},...,\hat{\sigma}_{M})$. It has been shown that
$Z_{*}=Q\Sigma_{*}Q^{\prime}$, where
$\Sigma_{*}=diag(\sigma^{*}_{1},...,\sigma^{*}_{M})$, and $\sigma^{*}_{i}$ is
the optimal solution to the problem (Zha et al. 2002):
$\displaystyle\min_{\Sigma}\|\Sigma_{*}-\Sigma\|_{F}^{2},\text{ s.t.
}\sum_{i}^{M}\sigma_{i}=k,~{}0\leq\sigma_{i}\leq 1.$ (13)
To solve Eq. 13 with constraints, we apply the linear algorithm proposed in
(Kiwiel 2007).
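Concretely, the KKT conditions for Eq. 13 give $\sigma_{i}=\text{clip}(\hat{\sigma}_{i}-\lambda,0,1)$ for a scalar multiplier $\lambda$. The sketch below finds $\lambda$ by bisection, a simple stand-in for Kiwiel's linear-time breakpoint search (names are ours):

```python
def project_capped_simplex(sigma_hat, k, tol=1e-10):
    """Euclidean projection onto {sum sigma = k, 0 <= sigma_i <= 1}
    (the constraint set of Eq. 13).  The projection has the form
    sigma_i = clip(sigma_hat_i - lam, 0, 1); the clipped sum is
    nonincreasing in lam, so bisection locates the multiplier."""
    def total(lam):
        return sum(min(1.0, max(0.0, s - lam)) for s in sigma_hat)
    lo = min(sigma_hat) - 1.0      # total(lo) = n >= k  (requires k <= n)
    hi = max(sigma_hat)            # total(hi) = 0 <= k
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total(mid) > k:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return [min(1.0, max(0.0, s - lam)) for s in sigma_hat]
```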
Remark: we briefly show that solving problem 13 yields
$Q\Sigma_{*}Q^{\prime}$ as a closed-form solution to Eq. 12. If we denote the
eigen-decomposition of $Z_{z}$ as $P\Lambda P^{\prime}$, then by definition
$P^{\prime}P=PP^{\prime}=I$ and $\Lambda=diag(\lambda_{1},...,\lambda_{M})$,
where $\lambda_{i}$ for $i=1,...,M$ are the eigenvalues of $Z_{z}$. Then Eq.
12 can be equivalently written as:
$\displaystyle\min_{\Lambda,P}\|Q^{\prime}P\Lambda
P^{\prime}Q-\Sigma\|_{F}^{2}~{}\text{s.t.}~{}tr(\Lambda)=k,$ (14)
$\displaystyle\Lambda=diag(\lambda_{1},...,\lambda_{M}),\ 0\leq\lambda_{i}\leq
1,\ P^{\prime}P=PP^{\prime}=I.$
It is easy to see that the constraints of the two problems with respect to
$\Lambda$ and $\Sigma$ are equivalent. Furthermore, if we denote the
objectives of Eq. 13 and Eq. 14 as $f(\cdot)$ and $g(\cdot)$ respectively,
then by definition the feasible domain of Eq. 13 is a subset of the feasible
domain of Eq. 14, and therefore
$f(\Sigma_{*})\geq g(Q^{\prime}P_{*}\Lambda_{*}P_{*}^{\prime}Q)$. On the
other hand, knowing that $\Sigma$ is a diagonal matrix,
$\|Q^{\prime}P_{*}\Lambda_{*}P_{*}^{\prime}Q-\Sigma\|_{F}^{2}\geq\|(Q^{\prime}P_{*}\Lambda_{*}P_{*}^{\prime}Q)\circ
I-\Sigma\|_{F}^{2}$, meaning that the optimal objective value of Eq. 13 is no
greater than the optimal objective value of Eq. 14. Therefore, the two
problems are equivalent.
#### Complexity Analysis
Recall that we consider the setting where there are $M$ students and $N$
assignments. The complexity of the computation of $A_{*}$ (lines 8-9) is
$\mathcal{O}(MN^{2})$, where a truncated SVD is used. To solve Eq. 12, we
first apply an eigen-decomposition of the $M\times M$ matrix $S_{i}^{Z}$
(line 11), which has time complexity $\mathcal{O}(M^{3})$; then, by solving
Eq. 13, which has been shown to give the closed-form solution to Eq. 12 (line
12), a complexity of $\mathcal{O}(M)$ can be achieved (Kiwiel 2007). As we
introduce the recursive function $R$ in Sec. 4.4, the complexity of computing
the loss $\mathcal{L}$ (line 15) is $\mathcal{O}(MNK)$, letting $K$ denote
the number of activities of the longest student-assignment pair. Each line of
the other parts of the algorithm requires $\mathcal{O}(MN)$, as only basic
operations are involved. As a result, the time complexity per time step is
$\mathcal{O}(\max(M,N)^{2}M+MNK)$. In the cases where a conventional Hawkes
model is used, without the help of the recursive function $R$, computing the
loss per time step needs $\mathcal{O}(MNK^{2})$. Note that without operations
such as the truncated SVD, even though a complexity of $\mathcal{O}(MN^{2})$
can be avoided for conventional Hawkes models, the parameters of
student-assignment pairs that do not have observed activities cannot be
inferred.
When it comes to the number of parameters to be learned: for our model, due
to the low-rank and cluster-structure assumptions on
$A\in\mathbb{R}^{N\times M}$, the number of parameters required to meet these
two assumptions is $(M+N)c+2Mk$, where $c<\min(M,N)$ is the rank of $A$ and
$k<M$ is the number of clusters among students, i.e., the rank of
$Z\in\mathbb{R}^{M\times M}$. For conventional Hawkes models, each
student-assignment pair must be learned independently; as a result, the
number of parameters needed to complete matrix $A$ is $M\times N$.
#### Convergence Analysis
As mentioned earlier in this section, we have shown that Algorithm 1
repeatedly solves the subroutines defined in Eq. 10, 11 and 12, where solving
Eq. 12 is mathematically equivalent to solving Eq. 13. As it is known that
accelerated gradient descent achieves the optimal convergence rate of
$\mathcal{O}(1/k^{2})$ when the objective function is smooth, and only the
subroutine solving Eq. 10 involves the non-smooth trace norm, the focus of
the following is to provide a convergence analysis of this subroutine.
Specifically, by following the outline of the proof provided in Ji and Ye's
work (Ji and Ye 2009), we show that a rate of $\mathcal{O}(1/\epsilon^{2})$
can be achieved in solving Eq. 10, even in the presence of the trace norm in
the objective. Specifically, letting $A_{*}$ denote the optimal solution and
applying Lemma 3.1 from Ji and Ye's work, we obtain the following:
$\displaystyle\medmath{F_{A}(A_{*})-F_{A}(A_{1})}$
$\displaystyle\medmath{\geq\frac{\gamma_{1}}{2}\|A_{1}-S_{1}^{A}\|^{2}+\gamma_{1}\langle
S_{1}^{A}-A_{*},A_{1}-S^{A}_{1}\rangle}$ (15)
$\displaystyle=\medmath{\frac{\gamma_{1}}{2}\|A_{1}-A_{*}\|^{2}-\frac{\gamma_{1}}{2}\|S^{A}_{1}-A_{*}\|^{2}},$
which is equivalent to:
$\displaystyle\medmath{\frac{2}{\gamma_{1}}(F_{A}(A_{1})-F_{A}(A_{*}))\leq\|S^{A}_{1}-A_{*}\|^{2}-\|A_{1}-A_{*}\|^{2}.}$
(16)
Then by following the proof of Theorem 4 in Ji and Ye’s work, we can obtain
the following inequality, using the equality
$\alpha_{i}^{2}=\alpha_{i+1}^{2}-\alpha_{i+1}$ derived from the equation in
line $24$ of our algorithm and the definition of $S^{A}_{i}$ in line 4:
$\displaystyle\medmath{\frac{2}{\gamma_{i+1}}\big{[}\alpha_{i}^{2}(F_{A}(A_{i})-F_{A}(A_{*}))-\alpha_{i+1}^{2}\big{(}(F_{A}(A_{i+1})-F_{A}(A_{*}))\big{)}\big{]}}$
(17)
$\displaystyle\medmath{\geq\|\alpha_{i+1}A_{i+1}-(\alpha_{i+1}-1)A_{i}-A_{*}\|^{2}-\|\alpha_{i}A_{i}}$
$\displaystyle\medmath{-(\alpha_{i}-1)A_{i-1}-A_{*}\|^{2}}.$
As $\eta\geq 1$ and we update $\gamma_{i+1}$ by multiplying $\eta$ with
$\gamma_{i}$, we know that $\gamma_{i+1}\geq\gamma_{i}$. By plugging in this
inequality to Eq. 17, we can obtain the following:
$\displaystyle\medmath{\frac{2}{\gamma_{i}}\alpha_{i}^{2}(F_{A}(A_{i})-F_{A}(A_{*}))-\frac{2}{\gamma_{i+1}}\alpha_{i+1}^{2}(F_{A}(A_{i+1})-F_{A}(A_{*}))}$
(18)
$\displaystyle\medmath{\geq\|\alpha_{i+1}A_{i+1}-(\alpha_{i+1}-1)A_{i}-A_{*}\|^{2}-\|\alpha_{i}A_{i}}$
$\displaystyle\medmath{-(\alpha_{i}-1)A_{i-1}-A_{*}\|^{2}.}$
By summing each side of Eq. 18 over the iterations up to step $i$, then
combining with Eq. 16, we can obtain the following:
$\displaystyle\medmath{\frac{2}{\gamma_{i}}\alpha_{i}^{2}\big{(}F_{A}(A_{i})-F_{A}(A_{*})\big{)}\leq\|A_{1}-A_{*}\|^{2}}$
(19)
$\displaystyle\medmath{-\|\alpha_{i}A_{i}-(\alpha_{i}-1)A_{i-1}-A_{*}\|^{2}+\frac{2}{\gamma_{1}}\big{(}F_{A}(A_{1})-F_{A}(A_{*})\big{)}}$
$\displaystyle\medmath{\leq\|A_{1}-A_{*}\|^{2}-\|\alpha_{i}A_{i}-(\alpha_{i}-1)A_{i-1}-A_{*}\|^{2}}$
$\displaystyle\medmath{+\|A_{0}-A_{*}\|^{2}-\|A_{1}-A_{*}\|^{2}}$
$\displaystyle\medmath{\leq\|A_{0}-A_{*}\|^{2}}$
Using the fact that $\alpha_{i}\geq\frac{i+1}{2}$ (can be shown using
induction from line $24$ of the algorithm), we can obtain:
$\displaystyle\medmath{F_{A}(A_{i})-F_{A}(A_{*})\leq\frac{2\gamma_{i}\|A_{*}-A_{0}\|^{2}}{(i+1)^{2}}.}$
(20)
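The momentum weights used above can be checked numerically. Below is a minimal sketch (assuming the standard accelerated-gradient update $\alpha_{i+1}=\big(1+\sqrt{1+4\alpha_i^{2}}\,\big)/2$ in line 24 of the algorithm, which is consistent with the identity $\alpha_{i}^{2}=\alpha_{i+1}^{2}-\alpha_{i+1}$ used in the proof) verifying that identity and the lower bound $\alpha_{i}\geq(i+1)/2$:

```python
import math

def alpha_sequence(n, alpha_1=1.0):
    """First n momentum weights of the accelerated scheme."""
    alphas = [alpha_1]
    for _ in range(n - 1):
        a = alphas[-1]
        alphas.append((1.0 + math.sqrt(1.0 + 4.0 * a * a)) / 2.0)
    return alphas

alphas = alpha_sequence(50)
# Identity alpha_i^2 = alpha_{i+1}^2 - alpha_{i+1} (used to derive Eq. 17):
assert all(abs(a1 * a1 - (a2 * a2 - a2)) < 1e-9
           for a1, a2 in zip(alphas, alphas[1:]))
# Lower bound alpha_i >= (i + 1) / 2 (used to obtain Eq. 20):
assert all(alphas[i - 1] >= (i + 1) / 2 - 1e-12 for i in range(1, 51))
```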
### 8.2 Intuition Explained
Since our goal is to study sequences of student activities and their inter-
arrival times, point processes are a natural choice for our application. The
Poisson process assumes that past and future activities are completely
independent. Unlike the memoryless Poisson process, the Hawkes process allows
activities to be excited both externally (as in the Poisson process) and
internally; that is, activities are self-exciting.
From the branching-process point of view of the Hawkes process, activities are
assumed to have latent, or unobserved, branching structures, in which
offspring activities (i.e., future activities) are triggered by parent
activities (i.e., past activities) while immigrant activities arrive
independently. The offspring are therefore also said to be structured into
clusters. In the online learning setting, smaller activity chunks towards a
goal or deadline can be examples of offspring: students divide a big task (the
whole process) into small sub-tasks (offspring clusters). The deadline
(external stimulus) of a big task (such as an assignment) triggers the
follow-up activities related to the small tasks, which come one after another
in a so-called burst mode (self-excitement).
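To make the self-excitement mechanism concrete, here is a minimal sketch of simulating a univariate Hawkes process with an exponential kernel via Ogata's thinning method (Ogata 1988); the parameter names and values are illustrative and not tied to the paper's model:

```python
import math
import random

def simulate_hawkes(mu, a, beta, horizon, seed=0):
    """Univariate Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} a * beta * exp(-beta * (t - t_i)),
    simulated by Ogata's thinning. Requires a < 1 for stability."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        # The current intensity upper-bounds lambda until the next event,
        # since the exponential kernel only decays between events.
        lam_bar = mu + sum(a * beta * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)
        if t >= horizon:
            return events
        lam_t = mu + sum(a * beta * math.exp(-beta * (t - s)) for s in events)
        if rng.random() * lam_bar <= lam_t:  # accept with prob lam_t / lam_bar
            events.append(t)
```

Offspring events cluster shortly after their parents, producing the bursts described above; as $a\to 0$ the process degenerates to a homogeneous Poisson process with rate $\mu$.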
Figure 4: Two tests show the differences between a simulated Poisson process
vs. a true process extracted from CANVAS dataset in terms of (a) inter-arrival
times distributions and (b) inter-arrival times autocorrelation.
To empirically demonstrate that self-excitement, or burstiness, is observed in
the online course setting, we conducted two tests showing that Poissonian
properties are absent from real student activity sequences. The first test
examines the distribution of the inter-arrival times, defined as the
differences between the arrival times of consecutive activities. In Figure 4
(a), we plot, on a log-log scale, the inter-arrival times of a real student’s
sequence of activities for an assignment against those of a simulated Poisson
process generated with the same average rate as the real sequence. The Poisson
process almost forms a straight line, indicating an exponential distribution
of inter-arrival times, whereas the real data is non-Poissonian, i.e., it
includes short pauses followed by long ones. The second test examines the
1-lag autocorrelation of the inter-arrival times. As we can see in Figure 4
(b), no autocorrelation is present in the Poisson process, whereas the real
data exhibits a clear pattern: dense activities followed by long pauses.
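The two diagnostics above reduce to simple computations on the event sequence; a minimal sketch (independent of any particular dataset):

```python
def interarrival_times(events):
    """Differences between consecutive arrival times."""
    return [t2 - t1 for t1, t2 in zip(events, events[1:])]

def lag1_autocorrelation(xs):
    """Sample autocorrelation of a sequence at lag 1."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var
```

For i.i.d. exponential inter-arrival times (the Poisson case) the lag-1 autocorrelation fluctuates around zero, while bursty sequences, i.e. runs of short gaps followed by long pauses, yield a clearly positive value.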
## References
* Agnihotri, Baker, and Stalzer (2020) Agnihotri, L.; Baker, R. S.; and Stalzer, S. 2020. A Procrastination Index for Online Learning Based on Assignment Start Time. In _The 13th International Conference on Educational Data Mining_.
* Andres et al. (2016) Andres, J. M. L.; Baker, R. S.; Siemens, G.; Gašević, D.; and Spann, C. A. 2016. Replicating 21 findings on student success in online learning. _Technology, Instruction, Cognition, and Learning_ 313–333.
* Bauckhage, Ojeda, and Sifa (2017) Bauckhage, C.; Ojeda, C.; and Sifa, R. 2017. Circadian Cycles and Work Under Pressure: A Stochastic Process Model for E-learning Population Dynamics. In _Data Science–Analytics and Applications_ , 13–18. Springer.
* Bacry, Gaïffas, and Muzy (2015) Bacry, E.; Gaïffas, S.; and Muzy, J.-F. 2015. A generalization error bound for sparse and low-rank multivariate Hawkes processes. _arXiv preprint arXiv:1501.00725_ 14: 160–194.
* Baker, Evans, and Dee (2016) Baker, R.; Evans, B.; and Dee, T. 2016. A Randomized Experiment Testing the Efficacy of a Scheduling Nudge in a Massive Open Online Course (MOOC). _AERA Open_ 2(4). ISSN 2332-8584, 2332-8584.
* Cai, Candès, and Shen (2010) Cai, J.-F.; Candès, E. J.; and Shen, Z. 2010. A singular value thresholding algorithm for matrix completion. _SIAM Journal on optimization_ 20(4): 1956–1982.
* Canvas-Network (2016) Canvas-Network. 2016. Canvas Network Courses, Activities, and Users (4/2014 - 9/2015) Restricted Dataset. doi:10.7910/DVN/XB2TLU.
* Cerezo et al. (2017) Cerezo, R.; Esteban, M.; Sánchez-Santillán, M.; and Núñez, J. C. 2017. Procrastinating Behavior in Computer-Based Learning Environments to Predict Performance: A Case Study in Moodle. _Frontiers in Psychology_ 8\. ISSN 1664-1078.
* Chen et al. (2019) Chen, H.; Liu, R.; Park, N.; and Subrahmanian, V. 2019. Using twitter to predict when vulnerabilities will be exploited. In _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, 3143–3152. ACM.
* Choi et al. (2015) Choi, E.; Du, N.; Chen, R.; Song, L.; and Sun, J. 2015. Constructing disease network and temporal progression model via context-sensitive hawkes process. In _2015 IEEE International Conference on Data Mining_ , 721–726. IEEE.
* Du et al. (2016a) Du, N.; Dai, H.; Trivedi, R.; Upadhyay, U.; Gomez-Rodriguez, M.; and Song, L. 2016a. Recurrent marked temporal point processes: Embedding event history to vector. In _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , 1555–1564.
* Du et al. (2016b) Du, N.; Dai, H.; Trivedi, R.; Upadhyay, U.; Gomez-Rodriguez, M.; and Song, L. 2016b. Recurrent marked temporal point processes: Embedding event history to vector. In _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , 1555–1564.
* Du et al. (2015a) Du, N.; Farajtabar, M.; Ahmed, A.; Smola, A. J.; and Song, L. 2015a. Dirichlet-hawkes processes with applications to clustering continuous-time document streams. In _Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , 219–228.
* Du et al. (2013) Du, N.; Song, L.; Woo, H.; and Zha, H. 2013. Uncover topic-sensitive information diffusion networks. In _Artificial Intelligence and Statistics_ , 229–237.
* Du et al. (2015b) Du, N.; Wang, Y.; He, N.; Sun, J.; and Song, L. 2015b. Time-sensitive recommendation from recurrent user activities. In _Advances in Neural Information Processing Systems_ , 3492–3500.
* Gelman et al. (2016) Gelman, B.; Revelle, M.; Domeniconi, C.; Johri, A.; and Veeramachaneni, K. 2016\. Acting the Same Differently: A Cross-Course Comparison of User Behavior in MOOCs. _International Educational Data Mining Society_ .
* Hawkes (1971) Hawkes, A. G. 1971. Spectra of some self-exciting and mutually exciting point processes. _Biometrika_ 58(1): 83–90.
* He et al. (2015) He, X.; Rekatsinas, T.; Foulds, J.; Getoor, L.; and Liu, Y. 2015. Hawkestopic: A joint model for network inference and topic modeling from text-based cascades. In _International conference on machine learning_ , 871–880.
* Hosseini et al. (2018) Hosseini, S. A.; Khodadadi, A.; Alizadeh, K.; Arabzadeh, A.; Farajtabar, M.; Zha, H.; and Rabiee, H. R. 2018. Recurrent poisson factorization for temporal recommendation. _IEEE Transactions on Knowledge and Data Engineering_ 32(1): 121–134.
* Hosseini et al. (2016) Hosseini, S. A.; Khodadadi, A.; Arabzadeh, A.; and Rabiee, H. R. 2016. Hnp3: A hierarchical nonparametric point process for modeling content diffusion over social media. In _2016 IEEE 16th International Conference on Data Mining (ICDM)_ , 943–948. IEEE.
* Jacob, Vert, and Bach (2009) Jacob, L.; Vert, J.-p.; and Bach, F. R. 2009. Clustered multi-task learning: A convex formulation. In _Advances in neural information processing systems_ , 745–752.
* Ji and Ye (2009) Ji, S.; and Ye, J. 2009. An accelerated gradient method for trace norm minimization. In _Proceedings of the 26th annual international conference on machine learning_ , 457–464.
* Kazerouni et al. (2017) Kazerouni, A. M.; Edwards, S. H.; Hall, T. S.; and Shaffer, C. A. 2017. DevEventTracker: Tracking Development Events to Assess Incremental Development and Procrastination. In _Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education - ITiCSE ’17_ , 104–109. Bologna, Italy: ACM Press.
* Kiwiel (2007) Kiwiel, K. C. 2007. On linear-time algorithms for the continuous quadratic knapsack problem. _Journal of Optimization Theory and Applications_ 134(3): 549–554.
* Lee and Choi (2011) Lee, Y.; and Choi, J. 2011. A review of online course dropout research: Implications for practice and future research. _Educational Technology Research and Development_ 59(5): 593–618.
* Lemonnier, Scaman, and Kalogeratos (2017) Lemonnier, R.; Scaman, K.; and Kalogeratos, A. 2017. Multivariate Hawkes processes for large-scale inference. In _Thirty-First AAAI Conference on Artificial Intelligence_.
* Li and Ke (2020) Li, T.; and Ke, Y. 2020. Tweedie-Hawkes Processes: Interpreting the Phenomena of Outbreaks. In _AAAI_ , 4699–4706.
* Li, Wei, and Ke (2018) Li, T.; Wei, P.; and Ke, Y. 2018. Transfer hawkes processes with content information. In _2018 IEEE International Conference on Data Mining (ICDM)_ , 1116–1121. IEEE.
* Luo et al. (2015) Luo, D.; Xu, H.; Zhen, Y.; Ning, X.; Zha, H.; Yang, X.; and Zhang, W. 2015. Multi-task multi-dimensional hawkes processes for modeling event sequences. In _Twenty-Fourth International Joint Conference on Artificial Intelligence_.
* Mei and Eisner (2017) Mei, H.; and Eisner, J. M. 2017. The neural hawkes process: A neurally self-modulating multivariate point process. In _Advances in Neural Information Processing Systems_ , 6754–6764.
* Moon and Illingworth (2005) Moon, S. M.; and Illingworth, A. J. 2005. Exploring the dynamic nature of procrastination: A latent growth curve analysis of academic procrastination. _Personality and Individual Differences_ 38(2): 297–309.
* Nesterov (2013) Nesterov, Y. 2013. Gradient methods for minimizing composite functions. _Mathematical Programming_ 140(1): 125–161.
* Ogata (1988) Ogata, Y. 1988. Statistical models for earthquake occurrences and residual analysis for point processes. _Journal of the American Statistical association_ 83(401): 9–27.
* Park et al. (2018) Park, J.; Yu, R.; Rodriguez, F.; Baker, R.; Smyth, P.; and Warschauer, M. 2018. Understanding Student Procrastination via Mixture Models. _International Educational Data Mining Society_ .
* Perrin et al. (2011) Perrin, C. J.; Miller, N.; Haberlin, A. T.; Ivy, J. W.; Meindl, J. N.; and Neef, N. A. 2011. Measuring and Reducing College Students’ Procrastination. _Journal of applied behavior analysis_ 44(3): 463–474.
* Shang and Sun (2018) Shang, J.; and Sun, M. 2018. Local low-rank Hawkes processes for modeling temporal user–item interactions. _Knowledge and Information Systems_ 1–24.
* Shang and Sun (2019) Shang, J.; and Sun, M. 2019. Geometric Hawkes Processes with Graph Convolutional Recurrent Neural Networks. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, 4878–4885.
* Steel (2007) Steel, P. 2007. The nature of procrastination: A meta-analytic and theoretical review of quintessential self-regulatory failure. _Psychological bulletin_ 133(1): 65.
* Uzir et al. (2020) Uzir, N. A.; Gašević, D.; Jovanović, J.; Matcha, W.; Lim, L.-A.; and Fudge, A. 2020. Analytics of time management and learning strategies for effective online learning in blended environments. In _Proceedings of the Tenth International Conference on Learning Analytics & Knowledge_, 392–401.
* Vassøy et al. (2019) Vassøy, B.; Ruocco, M.; de Souza da Silva, E.; and Aune, E. 2019. Time is of the essence: a joint hierarchical rnn and point process model for time and item predictions. In _Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining_ , 591–599.
* Vitiello et al. (2018) Vitiello, M.; Walk, S.; Helic, D.; Chang, V.; and Guetl, C. 2018. User Behavioral Patterns and Early Dropouts Detection: Improved Users Profiling through Analysis of Successive Offering of MOOC. _J. UCS_ 24(8): 1131–1150.
* Xiao et al. (2017) Xiao, S.; Yan, J.; Yang, X.; Zha, H.; and Chu, S. M. 2017. Modeling the intensity function of point process via recurrent neural networks. In _Thirty-First AAAI Conference on Artificial Intelligence_.
* Yao, Sahebi, and Feyzi-Behnagh (2020) Yao, M.; Sahebi, S.; and Feyzi-Behnagh, R. 2020. Analyzing Student Procrastination in MOOCs: A Multivariate Hawkes Approach. In _The 13th International Conference on Educational Data Mining_.
* Zha et al. (2002) Zha, H.; He, X.; Ding, C.; Gu, M.; and Simon, H. D. 2002. Spectral relaxation for k-means clustering. In _Advances in neural information processing systems_ , 1057–1064.
* Zhou, Zha, and Song (2013) Zhou, K.; Zha, H.; and Song, L. 2013. Learning social infectivity in sparse low-rank networks using multi-dimensional hawkes processes. In _Artificial Intelligence and Statistics_ , 641–649.
# Bessel-Type Operators and a Refinement of Hardy’s Inequality
Fritz Gesztesy Department of Mathematics, Baylor University, Sid Richardson
Bldg., 1410 S. 4th Street, Waco, TX 76706, USA<EMAIL_ADDRESS>http://www.baylor.edu/math/index.php?id=935340 , Michael M. H. Pang
Department of Mathematics, University of Missouri, Columbia, MO 65211, USA
<EMAIL_ADDRESS>https://www.math.missouri.edu/people/pang and Jonathan
Stanfill Department of Mathematics, Baylor University, Sid Richardson Bldg.,
1410 S. 4th Street, Waco, TX 76706, USA<EMAIL_ADDRESS>http://sites.baylor.edu/jonathan-stanfill/ Dedicated with great pleasure to
Lance Littlejohn on the occasion of his 70th birthday.
###### Abstract.
The principal aim of this paper is to employ Bessel-type operators in proving
the inequality
$\displaystyle\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{\sin^{2}(x)}+\dfrac{1}{4}\int_{0}^{\pi}dx\,|f(x)|^{2},\quad
f\in H_{0}^{1}((0,\pi)),$
where both constants $1/4$ appearing in the above inequality are optimal. In
addition, this inequality is strict in the sense that equality holds if and
only if $f\equiv 0$. This inequality is derived with the help of the exactly
solvable, strongly singular, Dirichlet-type Schrödinger operator associated
with the differential expression
$\displaystyle\tau_{s}=-\dfrac{d^{2}}{dx^{2}}+\dfrac{s^{2}-(1/4)}{\sin^{2}(x)},\quad
s\in[0,\infty),\;x\in(0,\pi).$
The new inequality represents a refinement of Hardy’s classical inequality
$\displaystyle\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{x^{2}},\quad
f\in H_{0}^{1}((0,\pi)),$
and it also improves upon one of its well-known extensions in the form
$\displaystyle\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{d_{(0,\pi)}(x)^{2}},\quad
f\in H_{0}^{1}((0,\pi)),$
where $d_{(0,\pi)}(x)$ represents the distance from $x\in(0,\pi)$ to the
boundary $\\{0,\pi\\}$ of $(0,\pi)$.
###### Key words and phrases:
Hardy-type inequality, strongly singular differential operators, Friedrichs
extension.
###### 2020 Mathematics Subject Classification:
Primary: 26D10, 34A40, 34B20, 34B30; Secondary: 34L10, 34B24, 47A07.
Originally appeared in From Operator Theory to Orthogonal Polynomials,
Combinatorics, and Number Theory. A Volume in Honor of Lance Littlejohn’s 70th
Birthday, F. Gesztesy and A. Martinez-Finkelshtein (eds.), Operator Theory:
Advances and Applications, Vol. 285, Birkhäuser, Springer, Cham, 2021, pp.
143–172. This version contains some updates.
###### Contents
1. 1 Introduction
2. 2 An Exactly Solvable, Strongly Singular, Periodic Schrödinger Operator
3. 3 A Refinement of Hardy’s Inequality
4. A The Weyl–Titchmarsh–Kodaira $m$-Function Associated with $T_{s,F}$
5. B Remarks on Hardy-Type Inequalities
## 1\. Introduction
Happy Birthday, Lance! We hope this modest contribution to Hardy-type
inequalities will cause some joy.
In a nutshell, the aim of this note is to employ a Bessel-type operator in
deriving the Hardy-type inequality (see the footnote on p. 7),
$\displaystyle\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{\sin^{2}(x)}+\dfrac{1}{4}\int_{0}^{\pi}dx\,|f(x)|^{2},\quad
f\in H_{0}^{1}((0,\pi)).$ (1.1)
As is readily verified, (1.1) indeed represents an improvement over the
classical Hardy inequality
$\displaystyle\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{x^{2}},\quad
f\in H_{0}^{1}((0,\pi)),$ (1.2)
while also improving upon one of its well-known extensions in the form
$\displaystyle\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{d_{(0,\pi)}(x)^{2}},\quad
f\in H_{0}^{1}((0,\pi)).$ (1.3)
Here $d_{(0,\pi)}(x)$ represents the distance from $x\in(0,\pi)$ to the
boundary $\\{0,\pi\\}$ of the interval $(0,\pi)$, that is,
$\displaystyle d_{(0,\pi)}(x)=\begin{cases}x,&x\in(0,\pi/2],\\\
\pi-x,&x\in[\pi/2,\pi).\end{cases}$ (1.4)
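The implication chain among (1.1), (1.3), and (1.2) rests on an elementary pointwise comparison: since $\sin(x)\leq x$ and $\sin(x)=\sin(\pi-x)\leq\pi-x$ for $x\in(0,\pi)$, one has

```latex
\sin(x) \,\le\, d_{(0,\pi)}(x) \,\le\, x, \quad x\in(0,\pi),
\quad\text{hence}\quad
\frac{1}{\sin^{2}(x)} \,\ge\, \frac{1}{d_{(0,\pi)}(x)^{2}} \,\ge\, \frac{1}{x^{2}},
```

so the first integral on the right-hand side of (1.1) alone already dominates the right-hand sides of (1.3) and (1.2).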
We emphasize that all constants $1/4$ in (1.1)–(1.3) are optimal and all
inequalities are strict in the sense that equality holds in them if and only
if $f\equiv 0$.
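As a quick numerical illustration (a sketch only; the test functions and midpoint quadrature are our own choices, not part of the paper), one can spot-check (1.1) for smooth functions vanishing at both endpoints:

```python
import math

def midpoint_integral(f, a, b, n=20000):
    """Composite midpoint rule on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def hardy_gap(f, f_prime):
    """Left-hand side minus right-hand side of the refined inequality (1.1)."""
    lhs = midpoint_integral(lambda x: f_prime(x) ** 2, 0.0, math.pi)
    rhs = 0.25 * midpoint_integral(lambda x: f(x) ** 2 / math.sin(x) ** 2, 0.0, math.pi) \
        + 0.25 * midpoint_integral(lambda x: f(x) ** 2, 0.0, math.pi)
    return lhs - rhs

# f(x) = sin(x): lhs = pi/2, rhs = pi/4 + pi/8, so the gap is pi/8 > 0.
```

The gap is strictly positive for every nonzero test function, consistent with the strictness of (1.1).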
Our refinement (1.1) (and the optimality of both constants $1/4$ in (1.1))
rests on the exact solvability of the one-dimensional Schrödinger equation
with potential $q_{s}$, $s\in[0,\infty)$, given by
$q_{s}(x)=\frac{s^{2}-(1/4)}{\sin^{2}(x)},\quad x\in(0,\pi),$ (1.5)
as illustrated by Rosen and Morse [59] in 1932, Pöschl and Teller [56] in
1933, and Lotmar [48] in 1935. These authors were concerned either with the
following extension of (1.5)
$\frac{c_{1}}{\sin^{2}(x)}+\frac{c_{2}}{\cos^{2}(x)},\quad x\in(0,\pi/2),$
(1.6)
or its hyperbolic analog of the form
$\frac{c_{1}}{\sinh^{2}(x)}+\frac{c_{2}}{\cosh^{2}(x)},\quad
x\in{\mathbb{R}}\;(\text{or }\,x\in(0,\infty)).$ (1.7)
The upshot of these investigations for the purpose at hand was the realization
that such problems are exactly solvable in terms of the hypergeometric
function $F(a,b;c;\,\cdot\,)$ (frequently denoted by
$\mathstrut{}_{2}F_{1}(a,b;c;\,\cdot\,)$). These types of problems are further
discussed by Infeld and Hull [38] and summarized in [23, Sect. 38, 39, 93],
and more recently in [19]. A discussion of the underlying singular periodic
problem (1.5) on ${\mathbb{R}}$, including the associated Floquet (Bloch)
theory, was presented by Scarf [62]. These investigations exclusively focus on
aspects of ordinary differential equations as opposed to operator theory even
though Dirichlet problems associated with singular endpoints were formally
discussed (in this context see also [61]). The operator theoretic approach to
(1.5) and (1.6) over a finite interval bounded by singularities, and a variety
of associated self-adjoint boundary conditions including coupled boundary
conditions leading to energy bands (Floquet–Bloch theory) in the periodic
problem on ${\mathbb{R}}$, on the basis of generalized boundary values due to
Rellich [57] (see also [14]), was first discussed in [25] and [29]. Finally,
we briefly mention that the case of $n$-soliton potentials
$q_{(1/2)+n}(x)=n(n+1)/\cosh^{2}(x),\quad
n\in{\mathbb{N}},\;x\in{\mathbb{R}},$ (1.8)
has received special attention as it represents a solution of infinitely many
equations in the stationary Korteweg–de Vries (KdV) hierarchy (starting from
level $n$ upward).
Introducing the differential expression
$\tau_{s}=-\frac{d^{2}}{dx^{2}}+q_{s}(x)=-\frac{d^{2}}{dx^{2}}+\frac{s^{2}-(1/4)}{\sin^{2}(x)},\quad
x\in(0,\pi),$ (1.9)
the exact solvability of the differential equation $\tau_{s}y=zy$,
$z\in{\mathbb{C}}$, or a comparison with the well-known Bessel operator case
$-(d^{2}/dx^{2})+\big{[}s^{2}-(1/4)\big{]}x^{-2}$ near $x=0$ and
$-(d^{2}/dx^{2})+\big{[}s^{2}-(1/4)\big{]}(x-\pi)^{-2}$ near $x=\pi$ then
yields the nonoscillatory property of $\tau_{s}$ if and only if
$s\in[0,\infty)$. Very roughly speaking, nonnegativity of the Friedrichs
extension associated with the differential expression $\tau_{0}-(1/4)$,
implying nonnegativity of the underlying quadratic form defined on
$H_{0}^{1}((0,\pi))$, implies the refinement (1.1) of Hardy’s inequality.
In Section 2 we briefly discuss (principal and nonprincipal) solutions of the
exactly solvable Schrödinger equation $\tau_{s}y=0$ (solutions of the general
equation $\tau_{s}y=zy$, $z\in{\mathbb{C}}$, are discussed in Appendix A), and
introduce minimal $T_{s,min}$ and maximal $T_{s,max}=T_{s,min}^{*}$ operators
corresponding to $\tau_{s}$ as well as the Friedrichs extension $T_{s,F}$ of
$T_{s,min}$ and the boundary values associated with $T_{s,max}$, following
recent treatments in [27], [28]. Section 3 contains the bulk of this paper and
is devoted to a derivation of inequality (1.1). We also indicate how two
related results by Avkhadiev and Wirths [9], [10], involving Dirichlet
boundary conditions at both ends and a mixture of Dirichlet and Neumann
boundary conditions, naturally fit into the framework discussed in this paper. In
Appendix A we study solutions of $\tau_{s}y=zy$, $z\in{\mathbb{C}}$, in more
detail and also derive the singular Weyl–Titchmarsh–Kodaira $m$-function
associated with $T_{s,F}$. Finally, Appendix B collects some facts on Hardy-
type inequalities.
## 2\. An Exactly Solvable, Strongly Singular,
Periodic Schrödinger Operator
In this section we examine a slight variation of the example found in Section
4 of [25] by implementing the methods found in [27].
Let $a=0$, $b=\pi,$
$p(x)=r(x)=1,\quad q_{s}(x)=\frac{s^{2}-(1/4)}{\sin^{2}(x)},\quad
s\in[0,\infty),\;x\in(0,\pi).$ (2.1)
We now study the Sturm–Liouville operators associated with the corresponding
differential expression given by
$\displaystyle\tau_{s}=-\dfrac{d^{2}}{dx^{2}}+q_{s}(x)=-\dfrac{d^{2}}{dx^{2}}+\dfrac{s^{2}-(1/4)}{\sin^{2}(x)},\quad
s\in[0,\infty),\;x\in(0,\pi),$ (2.2)
which is in the limit circle case at the endpoints $x=0,\pi$ for $s\in[0,1)$,
and limit point at both endpoints for $s\in[1,\infty)$. The maximal and
preminimal operators, $T_{s,max}$ and $\overset{\textbf{\Large.}}{T}_{s,min}$,
associated to $\tau_{s}$ in $L^{2}((0,\pi);dx)$ are then given by
$\displaystyle T_{s,max}f=\tau_{s}f,\quad s\in[0,\infty),$ $\displaystyle
f\in\operatorname{dom}(T_{s,max})=\big{\\{}g\in
L^{2}((0,\pi);dx)\,\big{|}\,g,g^{\prime}\in AC_{loc}((0,\pi));$ (2.3)
$\displaystyle\hskip 180.67517pt\tau_{s}g\in L^{2}((0,\pi);dx)\big{\\}},$
and
$\displaystyle\overset{\textbf{\Large.}}{T}_{s,min}f=\tau_{s}f,\quad
s\in[0,\infty),$ $\displaystyle
f\in\operatorname{dom}\big{(}\overset{\textbf{\Large.}}{T}_{s,min}\big{)}=\big{\\{}g\in
L^{2}((0,\pi);dx)\,\big{|}\,g,g^{\prime}\in AC_{loc}((0,\pi));$ (2.4)
$\displaystyle\hskip 54.06006pt\operatorname{supp}\,(g)\subset(0,\pi)\text{ is
compact; }\tau_{s}g\in L^{2}((0,\pi);dx)\big{\\}}.$
Since
$q_{s}\in L^{2}_{loc}((0,\pi);dx),\quad s\in[0,\infty),$ (2.5)
one can replace $\overset{\textbf{\Large.}}{T}_{s,min}$ by
$\overset{\textbf{\Large..}}{T}_{s,min}=\tau_{s}\big{|}_{C_{0}^{\infty}((0,\pi))},\quad
s\in[0,\infty).$ (2.6)
For $s\in[0,1)$, we introduce principal and nonprincipal solutions
$u_{0,s}(0,\,\cdot\,)$ and $\widehat{u}_{0,s}(0,\,\cdot\,)$ of $\tau_{s}u=0$
at $x=0$ by
$\displaystyle u_{0,s}(0,x)$
$\displaystyle=[\sin(x)]^{(1+2s)/2}F\big{(}(1/4)+(s/2),(1/4)+(s/2);1+s;\sin^{2}(x)\big{)},$
$\displaystyle\hskip 237.58067pts\in[0,1),$
$\displaystyle\widehat{u}_{0,s}(0,x)$
$\displaystyle=\begin{cases}(2s)^{-1}[\sin(x)]^{(1-2s)/2}\\\ \quad\times
F\big{(}(1/4)-(s/2),(1/4)-(s/2);1-s;\sin^{2}(x)\big{)},\quad
s\in(0,1),\\\\[2.84526pt]
[\sin(x)]^{1/2}F\big{(}1/4,1/4;1;\sin^{2}(x)\big{)}\\\
\quad\times\displaystyle\int_{x}^{c}dx^{\prime}\,[\sin(x^{\prime})]^{-1}\big{[}F\big{(}1/4,1/4;1;\sin^{2}(x^{\prime})\big{)}\big{]}^{-2},\quad
s=0,\end{cases}$ (2.7)
and principal and nonprincipal solutions $u_{\pi,s}(0,\,\cdot\,)$ and
$\widehat{u}_{\pi,s}(0,\,\cdot\,)$ of $\tau_{s}u=0$ at $x=\pi$ by
$\displaystyle u_{\pi,s}(0,x)$
$\displaystyle=[\sin(x)]^{(1+2s)/2}F\big{(}(1/4)+(s/2),(1/4)+(s/2);1+s;\sin^{2}(x)\big{)},$
$\displaystyle\hskip 236.15796pts\in[0,1),$
$\displaystyle\widehat{u}_{\pi,s}(0,x)$
$\displaystyle=\begin{cases}-(2s)^{-1}[\sin(x)]^{(1-2s)/2}\\\ \quad\times
F\big{(}(1/4)-(s/2),(1/4)-(s/2);1-s;\sin^{2}(x)\big{)},\quad
s\in(0,1),\\\\[2.84526pt]
-[\sin(x)]^{1/2}F\big{(}1/4,1/4;1;\sin^{2}(x)\big{)}\\\
\quad\times\displaystyle\int_{c}^{x}dx^{\prime}\,[\sin(x^{\prime})]^{-1}\big{[}F\big{(}1/4,1/4;1;\sin^{2}(x^{\prime})\big{)}\big{]}^{-2},\quad
s=0.\end{cases}$ (2.8)
Here $F(\,\cdot\,,\,\cdot\,;\,\cdot\,;\,\cdot\,)$ $($frequently written as
$\mathstrut{}_{2}F_{1}(\,\cdot\,,\,\cdot\,;\,\cdot\,;\,\cdot\,)$$)$ denotes
the hypergeometric function $($see, e.g., [1, Ch. 15]$)$.
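As a numerical cross-check of (2.7) (a sketch only; the truncated power series for $F(a,b;c;\xi)$ and the finite-difference test are our own, and the series is only reliable when $\sin^{2}(x)$ is well inside the unit disk), one can verify that $u_{0,s}(0,\cdot)$ satisfies $\tau_{s}u=0$:

```python
import math

def hyp2f1(a, b, c, xi, terms=400):
    """Truncated Gauss hypergeometric series; adequate for |xi| < 1."""
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * xi
        total += term
    return total

def u0(s, x):
    """Principal solution u_{0,s}(0, x) of tau_s u = 0 from (2.7)."""
    return math.sin(x) ** ((1.0 + 2.0 * s) / 2.0) * hyp2f1(
        0.25 + s / 2.0, 0.25 + s / 2.0, 1.0 + s, math.sin(x) ** 2)

def tau_residual(s, x, h=1e-4):
    """Finite-difference value of (tau_s u0)(x); should be near zero."""
    u_pp = (u0(s, x - h) - 2.0 * u0(s, x) + u0(s, x + h)) / (h * h)
    q = (s * s - 0.25) / math.sin(x) ** 2
    return -u_pp + q * u0(s, x)
```

For $s=1/2$ the potential vanishes and $u_{0,1/2}(0,x)=\arcsin(\sin(x))=x$ on $(0,\pi/2)$, so the residual is exactly zero there, which provides an independent sanity check of the formula.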
###### Remark 2.1.
We note that the case $c=1$ in $F(a,b;c;\xi)$, corresponding to the case $s=0$
in (2.7), (2.8), is a special one in the sense that linearly independent
solutions of the hypergeometric differential equation are then of the form
(see, e.g., [1, Nos. 15.5.16, 15.5.17])
$\displaystyle y_{1}(\xi)=F(a,b;1;\xi),$ $\displaystyle
y_{2}(\xi)=F(a,b;1;\xi)\text{\rm ln}(\xi)$ (2.9)
$\displaystyle\quad+\sum_{n\in{\mathbb{N}}}\frac{(a)_{n}(b)_{n}}{(n!)^{2}}[\psi(a+n)-\psi(a)+\psi(b+n)-\psi(b)+2\psi(1)-2\psi(n+1)]\xi^{n}.$
Here $(d)_{n}$, $n\in{\mathbb{N}}_{0}$, represents Pochhammer’s symbol (see
(A.8)), and $\psi(\,\cdot\,)$ denotes the Digamma function. Since we wanted to
ensure (2.12) and our principal aim in connection with the boundary values
(2.14)–(2.17) was the derivation of the asymptotic relations (2.10) and
(2.11), our choice of $\widehat{u}_{0,0}$ and $\widehat{u}_{\pi,0}$ in (2.7)
and (2.8) is to be preferred over the use of the pair of functions in (2.9).
For more details in this connection see Appendix A. $\diamond$
Since
$\displaystyle u_{0,s}(0,x)$ $\displaystyle\underset{x\downarrow
0}{=}x^{(1+2s)/2}\big{\\{}1+\big{[}\big{(}4s^{2}-1\big{)}/\big{(}48+48s\big{)}\big{]}x^{2}+O\big{(}x^{4}\big{)}\big{\\}},\quad
s\in[0,1),$ $\displaystyle\widehat{u}_{0,s}(0,x)$
$\displaystyle\underset{x\downarrow
0}{=}\begin{cases}(2s)^{-1}x^{(1-2s)/2}\big{\\{}1+\big{[}\big{(}4s^{2}-1\big{)}/\big{(}48-48s\big{)}\big{]}x^{2}+O\big{(}x^{4}\big{)}\big{\\}},\\\
\hskip 205.7133pts\in(0,1),\\\\[2.84526pt] \text{\rm
ln}(1/x)x^{1/2}\big{\\{}1+\big{[}\big{(}[\text{\rm
ln}(x)]^{-1}-1\big{)}/48\big{]}x^{2}+O\big{(}x^{4}\big{)}\big{\\}},\quad
s=0,\end{cases}$ (2.10) $\displaystyle u_{\pi,s}(0,x)$
$\displaystyle\underset{x\uparrow\pi}{=}(\pi-x)^{(1+2s)/2}\big{\\{}1+\big{[}\big{(}4s^{2}-1\big{)}/\big{(}48+48s\big{)}\big{]}(\pi-x)^{2}$
$\displaystyle\hskip 88.2037pt+O\big{(}(\pi-x)^{4}\big{)}\big{\\}},\quad
s\in[0,1),$ $\displaystyle\widehat{u}_{\pi,s}(0,x)$
$\displaystyle\underset{x\uparrow\pi}{=}\begin{cases}-(2s)^{-1}(\pi-x)^{(1-2s)/2}\big{\\{}1+\big{[}\big{(}4s^{2}-1\big{)}/\big{(}48-48s\big{)}\big{]}(\pi-x)^{2}\\\
\quad+O\big{(}(\pi-x)^{4}\big{)}\big{\\}},\quad s\in(0,1),\\\\[2.84526pt]
\text{\rm ln}(\pi-x)(\pi-x)^{1/2}\big{\\{}1+\big{[}\big{(}[\text{\rm
ln}(\pi-x)]^{-1}-1\big{)}/48\big{]}(\pi-x)^{2}\\\
\quad+O\big{(}(\pi-x)^{4}\big{)}\big{\\}},\quad s=0,\end{cases}$ (2.11)
one deduces that
$\displaystyle
W(\widehat{u}_{0,s}(0,\,\cdot\,),u_{0,s}(0,\,\cdot\,))(0)=1=W(\widehat{u}_{\pi,s}(0,\,\cdot\,),u_{\pi,s}(0,\,\cdot\,))(\pi),\quad
s\in[0,1),$ (2.12)
and
$\displaystyle\lim_{x\downarrow
0}\dfrac{u_{0,s}(0,x)}{\widehat{u}_{0,s}(0,x)}=0,\quad\lim_{x\uparrow\pi}\dfrac{u_{\pi,s}(0,x)}{\widehat{u}_{\pi,s}(0,x)}=0,\quad
s\in[0,1).$ (2.13)
The generalized boundary values for $g\in\operatorname{dom}(T_{s,max})$ (the
maximal operator associated with $\tau_{s}$) are then of the form
$\displaystyle\widetilde{g}(0)$ $\displaystyle=\begin{cases}\lim_{x\downarrow
0}g(x)/\big{[}(2s)^{-1}x^{(1-2s)/2}\big{]},&s\in(0,1),\\\\[2.84526pt]
\lim_{x\downarrow 0}g(x)/\big{[}x^{1/2}\text{\rm
ln}(1/x)\big{]},&s=0,\end{cases}$ (2.14)
$\displaystyle\widetilde{g}^{\,\prime}(0)$
$\displaystyle=\begin{cases}\lim_{x\downarrow
0}\big{[}g(x)-\widetilde{g}(0)(2s)^{-1}x^{(1-2s)/2}\big{]}\big{/}x^{(1+2s)/2},&s\in(0,1),\\\\[2.84526pt]
\lim_{x\downarrow 0}\big{[}g(x)-\widetilde{g}(0)x^{1/2}\text{\rm
ln}(1/x)\big{]}\big{/}x^{1/2},&s=0,\end{cases}$ (2.15)
$\displaystyle\widetilde{g}(\pi)$
$\displaystyle=\begin{cases}\lim_{x\uparrow\pi}g(x)/\big{[}-(2s)^{-1}(\pi-x)^{(1-2s)/2}\big{]},&s\in(0,1),\\\\[2.84526pt]
\lim_{x\uparrow\pi}g(x)/\big{[}(\pi-x)^{1/2}\text{\rm
ln}(\pi-x)\big{]},&s=0,\end{cases}$ (2.16)
$\displaystyle\widetilde{g}^{\,\prime}(\pi)$
$\displaystyle=\begin{cases}\lim_{x\uparrow\pi}\big{[}g(x)+\widetilde{g}(\pi)(2s)^{-1}(\pi-x)^{(1-2s)/2}\big{]}\big{/}(\pi-x)^{(1+2s)/2},&s\in(0,1),\\\\[2.84526pt]
\lim_{x\uparrow\pi}\big{[}g(x)-\widetilde{g}(\pi)(\pi-x)^{1/2}\text{\rm
ln}(\pi-x)\big{]}\big{/}(\pi-x)^{1/2},&s=0.\end{cases}$ (2.17)
As a result, the minimal operator $T_{s,min}$ associated to $\tau_{s}$, that
is,
$T_{s,min}=\overline{\overset{\textbf{\Large.}}{T}_{s,min}}=\overline{\overset{\textbf{\Large..}}{T}_{s,min}},\quad
s\in[0,\infty),$ (2.18)
is thus given by
$\displaystyle T_{s,min}f=\tau_{s}f,$ $\displaystyle
f\in\operatorname{dom}(T_{s,min})=\big{\\{}g\in
L^{2}((0,\pi);dx)\,\big{|}\,g,g^{\prime}\in AC_{loc}((0,\pi));$ (2.19)
$\displaystyle\hskip
31.2982pt\widetilde{g}(0)={\widetilde{g}}^{\,\prime}(0)=\widetilde{g}(\pi)={\widetilde{g}}^{\,\prime}(\pi)=0;\,\tau_{s}g\in
L^{2}((0,\pi);dx)\big{\\}},\quad s\in[0,1),$
and satisfies $T_{s,min}^{*}=T_{s,max}$, $T_{s,max}^{*}=T_{s,min}$,
$s\in[0,\infty)$. Since $\tau_{s}$ is in the limit point case at both $x=0$ and
$x=\pi$ if and only if $s\in[1,\infty)$, one concludes that
$T_{s,min}=T_{s,max}\,\text{ if and only if }\,s\in[1,\infty).$ (2.20)
The Friedrichs extension $T_{s,F}$ of $T_{s,min}$, $s\in[0,1)$, permits a
particularly simple characterization in terms of the generalized boundary
conditions (2.14)–(2.17) and is then given by (cf. [41], [51], [58], [60] and
the extensive literature cited in [27], [33, Ch. 13])
$\displaystyle T_{s,F}f=\tau_{s}f,\quad
f\in\operatorname{dom}(T_{s,F})=\big{\\{}g\in\operatorname{dom}(T_{s,max})\,\big{|}\,\widetilde{g}(0)=\widetilde{g}(\pi)=0\big{\\}},\quad
s\in[0,1),$ (2.21)
moreover,
$T_{s,F}=T_{s,min}=T_{s,max},\quad s\in[1,\infty),$ (2.22)
is self-adjoint (resp., $\overset{\textbf{\Large.}}{T}_{s,min}$ and
$\overset{\textbf{\Large..}}{T}_{s,min}$, $s\in[1,\infty)$, are essentially
self-adjoint) in $L^{2}((0,\pi);dx)$. In this case the Friedrichs boundary
conditions in (2.21) are automatically satisfied and hence can be omitted.
By (A.21) one has
$\inf(\sigma(T_{s,F}))=[(1/2)+s]^{2},\quad s\in[0,\infty),$ (2.23)
in particular,
$T_{s,F}\geq[(1/2)+s]^{2}I_{(0,\pi)},\quad s\in[0,\infty),$ (2.24)
with $I_{(0,\pi)}$ abbreviating the identity operator in $L^{2}((0,\pi);dx)$.
All results on 2nd order differential operators employed in this section can
be found in classical sources such as [2, Sect. 129], [16, Chs. 8, 9], [20,
Sects. 13.6, 13.9, 13.10], [39, Ch. III], [49, Ch. V], [51], [54, Ch. 6], [63,
Ch. 9], [64, Sect. 8.3], [65, Ch. 13], [66, Chs. 4, 6–8]. In addition, [27]
and [33, Ch. 13] contain very detailed lists of references in this context.
## 3. A Refinement of Hardy’s Inequality
The principal purpose of this section is to derive a refinement of the
classical Hardy inequality
$\displaystyle\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{x^{2}},\quad
f\in H_{0}^{1}((0,\pi)),$ (3.1)
as well as of one of its well-known extensions in the form
$\displaystyle\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{d_{(0,\pi)}(x)^{2}},\quad
f\in H_{0}^{1}((0,\pi)),$ (3.2)
where $d_{(0,\pi)}(x)$ represents the distance from $x\in(0,\pi)$ to the
boundary $\\{0,\pi\\}$ of the interval $(0,\pi)$, that is,
$\displaystyle d_{(0,\pi)}(x)=\begin{cases}x,&x\in(0,\pi/2],\\\
\pi-x,&x\in[\pi/2,\pi).\end{cases}$ (3.3)
The constant $1/4$ in (3.1) and (3.2) is known to be optimal and both
inequalities are strict in the sense that equality holds in them if and only
if $f\equiv 0$.
For background on Hardy-type inequalities we refer, for instance, to [11, p.
3–5], [12], [15], [17, p. 104–105], [26, 34, 35], [36, Sect. 7.3, p. 240–243],
[46, Sect. 5], [44, Ch. 3], [45, Ch. 1], [47], [52, Ch. 1], [55].
The principal result of this section can then be formulated as follows. (After our paper was published, Professor Avkhadiev kindly pointed out to us that he had proved Theorem 3.1 in 2015; see [6, Lemma 1].)
###### Theorem 3.1.
Let $f\in H_{0}^{1}((0,\pi))$. Then,
$\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{\sin^{2}(x)}+\dfrac{1}{4}\int_{0}^{\pi}dx\,|f(x)|^{2},$
(3.4)
where both constants $1/4$ in (3.4) are optimal. In addition, the inequality
is strict in the sense that equality holds in (3.4) if and only if $f\equiv
0$.
###### Proof.
By Section 2 for $s=0$ and by [25, Sect. 4] for $s\in(0,\infty)$, one has
$\displaystyle\bigg{(}-\dfrac{d^{2}}{dx^{2}}+\dfrac{s^{2}-(1/4)}{\sin^{2}(x)}-[(1/2)+s]^{2}I_{(0,\pi)}\bigg{)}\bigg{|}_{C_{0}^{\infty}((0,\pi))}\geq
0,\quad s\in[0,\infty).$ (3.5)
Thus, setting $s=0$ in (3.5) yields
$\displaystyle\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{\sin^{2}(x)}+\dfrac{1}{4}\int_{0}^{\pi}dx\,|f(x)|^{2},\quad
f\in C_{0}^{\infty}((0,\pi)).$ (3.6)
Now denote by $H_{0}^{1}((0,\pi))$ the standard Sobolev space on $(0,\pi)$
obtained upon completion of $C_{0}^{\infty}((0,\pi))$ in the norm of
$H^{1}((0,\pi))$. Since $C_{0}^{\infty}((0,\pi))$ is dense in
$H_{0}^{1}((0,\pi))$, given $f\in H_{0}^{1}((0,\pi))$, there exists a sequence
$\\{f_{n}\\}_{n\in{\mathbb{N}}}\subset C_{0}^{\infty}((0,\pi))$ such that
$\lim_{n\to\infty}\|f_{n}-f\|^{2}_{H_{0}^{1}((0,\pi))}=0$. Hence, one can find
a subsequence $\\{f_{n_{p}}\\}_{p\in{\mathbb{N}}}$ of
$\\{f_{n}\\}_{n\in{\mathbb{N}}}$ such that $f_{n_{p}}$ converges to $f$
pointwise almost everywhere on $(0,\pi)$ as $p\to\infty$. Thus an application
of Fatou’s lemma (cf., e.g., [24, Corollary 2.19]) yields that (3.6) extends
to $f\in H_{0}^{1}((0,\pi))$, namely,
$\displaystyle\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{\sin^{2}(x)}+\dfrac{1}{4}\int_{0}^{\pi}dx\,|f(x)|^{2}$
$\displaystyle\quad\leq\liminf_{p\to\infty}\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f_{n_{p}}(x)|^{2}}{\sin^{2}(x)}+\liminf_{p\to\infty}\dfrac{1}{4}\int_{0}^{\pi}dx\,|f_{n_{p}}(x)|^{2}\quad\text{(by
Fatou's lemma)}$
$\displaystyle\quad\leq\liminf_{p\to\infty}\bigg{\\{}\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f_{n_{p}}(x)|^{2}}{\sin^{2}(x)}+\dfrac{1}{4}\int_{0}^{\pi}dx\,|f_{n_{p}}(x)|^{2}\bigg{\\}}$
$\displaystyle\quad\leq\liminf_{p\to\infty}\int_{0}^{\pi}dx\,|f^{\prime}_{n_{p}}(x)|^{2}\quad\text{(by (3.6))}$
$\displaystyle\quad=\lim_{p\to\infty}\int_{0}^{\pi}dx\,|f^{\prime}_{n_{p}}(x)|^{2}$
$\displaystyle\quad=\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}.$ (3.7)
The substitution $s\mapsto is$ in (2.7) results in solutions with oscillatory
behavior due to the factor $[\sin(x)]^{\pm is}$ in (A.3), (A.7), rendering all
solutions of $\tau_{s}y(\lambda,\,\cdot\,)=\lambda y(\lambda,\,\cdot\,)$
oscillatory for each $\lambda\in{\mathbb{R}}$ if and only if $s^{2}<0$.
Classical oscillation theory results (see, e.g., [27, Theorem 4.2]) then prove
that $\overset{\textbf{\Large.}}{T}_{s,min}$, and hence $T_{s,min}$, is bounded
from below if and only if $s^{2}\geq 0$. Consequently, the coefficient of
$[\sin(x)]^{-2}$ on the right-hand side of (3.4) cannot be increased beyond
$1/4$ (this would correspond to an operator with $s^{2}<0$ being bounded from
below), proving that the first constant $1/4$ in (3.4) is optimal.
Next we demonstrate that the second constant $1/4$ on the right-hand side of
(3.4) is also optimal, arguing by contradiction as follows: Suppose that for
some $\varepsilon>0$,
$\displaystyle\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{\sin^{2}(x)}+\bigg{(}\frac{1}{4}+\varepsilon\bigg{)}\int_{0}^{\pi}dx\,|f(x)|^{2},\quad
f\in\operatorname{dom}(T_{0,min}).$ (3.8)
Upon integrating by parts in the left-hand side of (3.8) this implies
$T_{0,min}\geq\bigg{(}\frac{1}{4}+\varepsilon\bigg{)}I_{(0,\pi)},$ (3.9)
implying
$T_{0,F}\geq\bigg{(}\frac{1}{4}+\varepsilon\bigg{)}I_{(0,\pi)}$ (3.10)
(as $T_{0,min}$ and $T_{0,F}$ share the same lower bound by general
principles), contradicting (2.23) for $s=0$. Hence the second constant $1/4$
on the right-hand side of (3.4) is also optimal.
It remains to prove strictness of inequality (3.4) if $f\not\equiv 0$: Arguing
again by contradiction, we suppose there exists $0\neq f_{0}\in
H^{1}_{0}((0,\pi))$ such that
$\int_{0}^{\pi}dx\,|f_{0}^{\prime}(x)|^{2}=\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f_{0}(x)|^{2}}{\sin^{2}(x)}+\frac{1}{4}\int_{0}^{\pi}dx\,|f_{0}(x)|^{2}.$
(3.11)
Since
$H_{0}^{1}((0,\pi))\subseteq\operatorname{dom}\big{(}T^{1/2}_{s,F}\big{)}$,
$s\in[0,\infty)$ (in fact, one even has the equality
$H_{0}^{1}((0,\pi))=\operatorname{dom}\big{(}T^{1/2}_{s,F}\big{)}$ for all
$s\in(0,\infty)$, see, e.g., [4], [13], [18], [31], [41], [43]), one concludes
via (3.11) that
$\big{(}T_{0,F}^{1/2}f_{0},T_{0,F}^{1/2}f_{0}\big{)}_{L^{2}((0,\pi);dx)}=(1/4)\|f_{0}\|_{L^{2}((0,\pi);dx)}^{2}.$
(3.12)
Moreover, since $T_{0,F}$ is self-adjoint with purely discrete and necessarily
simple spectrum, $T_{0,F}$ has the spectral representation
$T_{0,F}=\sum_{n\in{\mathbb{N}}_{0}}\lambda_{n}P_{n},\quad\lambda_{0}=1/4<\lambda_{1}<\lambda_{2}<\cdots,$
(3.13)
where $\sigma(T_{0,F})=\\{\lambda_{n}\\}_{n\in{\mathbb{N}}_{0}}$ and $P_{n}$
are the one-dimensional projections onto the eigenvectors associated with the
eigenvalues $\lambda_{n}$, $n\in{\mathbb{N}}_{0}$, explicitly listed in
(A.21), in particular, $\lambda_{0}=1/4$. Thus,
$\displaystyle\big{(}T_{0,F}^{1/2}f_{0},T_{0,F}^{1/2}f_{0}\big{)}_{L^{2}((0,\pi);dx)}$
$\displaystyle=\sum_{n\in{\mathbb{N}}_{0}}\lambda_{n}(f_{0},P_{n}f_{0})_{L^{2}((0,\pi);dx)}$
$\displaystyle>\lambda_{0}\sum_{n\in{\mathbb{N}}_{0}}(f_{0},P_{n}f_{0})_{L^{2}((0,\pi);dx)}$
$\displaystyle=\lambda_{0}\|f_{0}\|_{L^{2}((0,\pi);dx)}^{2}=(1/4)\|f_{0}\|_{L^{2}((0,\pi);dx)}^{2}$
(3.14)
contradicting (3.12) unless
$P_{n}f_{0}=0,\;n\in{\mathbb{N}},\,\text{ and hence, }\,P_{0}f_{0}=f_{0},$
(3.15)
that is,
$f_{0}\in\operatorname{dom}(T_{0,F})\,\text{ and }\,T_{0,F}f_{0}=(1/4)f_{0},$
(3.16)
employing $\lambda_{0}=1/4$. However, (3.16) implies that
$f_{0}(x)\underset{x\downarrow
0}{=}cx^{1/2}\big{[}1+O\big{(}x^{2}\big{)}\big{]}\,\text{ and hence,
}\,f_{0}\notin H_{0}^{1}((0,\pi)),$ (3.17)
a contradiction. ∎
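Although not part of the argument, the refined inequality (3.4) is easy to probe numerically. The following sketch (assuming Python with NumPy/SciPy available; the test function $f(x)=\sin(x)\in H_{0}^{1}((0,\pi))$ is our own illustrative choice) evaluates both sides of (3.4):

```python
import numpy as np
from scipy.integrate import quad

# Test function f(x) = sin(x), which lies in H^1_0((0, pi)).
f = np.sin
df = np.cos

lhs, _ = quad(lambda x: df(x)**2, 0, np.pi)                  # int |f'|^2
hardy, _ = quad(lambda x: (f(x) / np.sin(x))**2, 0, np.pi)   # int |f|^2 / sin^2
l2, _ = quad(lambda x: f(x)**2, 0, np.pi)                    # int |f|^2
rhs = 0.25 * hardy + 0.25 * l2

# Here lhs = pi/2 and rhs = pi/4 + pi/8 = 3*pi/8, so (3.4) holds strictly.
print(lhs, rhs)
```

As the theorem predicts for $f\not\equiv 0$, the inequality is strict for this choice of $f$.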
###### Remark 3.2.
$(i)$ That inequality (3.4) represents an improvement over the previously
well-known cases (3.1) and (3.2) can be shown as follows: Since trivially
$\sin(x)\leq x,\quad x\in[0,\pi],$ (3.18)
inequality (3.4) is obviously an improvement over the classical Hardy
inequality (3.1). On the other hand, since also
$\displaystyle\sin(x)\leq\begin{cases}x,&x\in[0,\pi/2],\\\
\pi-x,&x\in[\pi/2,\pi],\end{cases}$ (3.19)
that is (cf. (3.3)),
$\sin(x)\leq d_{(0,\pi)}(x),\quad x\in[0,\pi],$ (3.20)
inequality (3.4) also improves upon the refinement (3.2).
$(ii)$ Assuming $a,b\in{\mathbb{R}}$, $a<b$, the elementary change of
variables
$\displaystyle\begin{split}&(0,\pi)\ni
x\mapsto\xi(x)=[(b-a)x+a\pi]/\pi\in(a,b),\\\ &f(x)=F(\xi),\end{split}$ (3.21)
yields
$\displaystyle\begin{split}\int_{a}^{b}d\xi\,|F^{\prime}(\xi)|^{2}&\geq\frac{\pi^{2}}{4(b-a)^{2}}\int_{a}^{b}d\xi\,\frac{|F(\xi)|^{2}}{\sin^{2}(\pi(\xi-a)/(b-a))}\\\
&\quad+\frac{\pi^{2}}{4(b-a)^{2}}\int_{a}^{b}d\xi\,|F(\xi)|^{2},\quad F\in
H_{0}^{1}((a,b)).\end{split}$ (3.22)
These scaling arguments apply to all Hardy-type inequalities considered in
this paper and hence it suffices to restrict ourselves to convenient fixed
intervals such as $(0,\pi)$, etc. $\diamond$
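The rescaled inequality (3.22) can likewise be checked numerically; a minimal sketch (assuming Python with SciPy; the interval $(a,b)=(0,1)$ and the test function $F(\xi)=\sin(\pi\xi)$ are our own choices):

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0
F = lambda xi: np.sin(np.pi * xi)        # F in H^1_0((0, 1))
dF = lambda xi: np.pi * np.cos(np.pi * xi)

c = np.pi**2 / (4 * (b - a)**2)          # common constant pi^2 / (4 (b-a)^2)
lhs, _ = quad(lambda xi: dF(xi)**2, a, b)
hardy, _ = quad(lambda xi: F(xi)**2 / np.sin(np.pi*(xi - a)/(b - a))**2, a, b)
l2, _ = quad(lambda xi: F(xi)**2, a, b)
rhs = c * hardy + c * l2

# lhs = pi^2/2 and rhs = pi^2/4 + pi^2/8, so (3.22) holds strictly here.
print(lhs, rhs)
```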
###### Remark 3.3.
An earlier version of our preprint contained the following factorization of
$\tau_{s}-(s+(1/2))^{2}$:
$\delta^{+}_{s}\delta_{s}=\tau_{s}-(s+(1/2))^{2}=-\frac{d^{2}}{dx^{2}}+\frac{s^{2}-(1/4)}{\sin^{2}(x)}-(s+(1/2))^{2},\quad
s\in[0,\infty),\;x\in(0,\pi),$ (3.23)
where the differential expressions $\delta_{s}$, $\delta_{s}^{+}$ are given by
$\delta_{s}=\frac{d}{dx}-[s+(1/2)]\cot(x),\quad\delta^{+}_{s}=-\frac{d}{dx}-[s+(1/2)]\cot(x),\quad
s\in[0,\infty),\;x\in(0,\pi).$ (3.24)
Thus, $\delta^{+}_{s}\delta_{s}\big{|}_{C_{0}^{\infty}((0,\pi))}\geq 0$ yields
$\tau_{s}\big{|}_{C_{0}^{\infty}((0,\pi))}\geq(s+(1/2))^{2}I$ and taking $s=0$
implies inequality (3.4) for $f\in C_{0}^{\infty}((0,\pi))$ and hence for
$f\in H_{0}^{1}((0,\pi))$ by the usual Fatou-type argument. Hence, if one is
primarily interested in the refined Hardy inequality (3.4) itself, taking
$s=0$ in (3.23) appears to be its quickest derivation. We are indebted to Ari
Laptev for independently pointing this out to us, which resulted in our
reintroducing the factorization (3.23).
Considering
$y_{s}(x)=[\sin(x)]^{(1+2s)/2},\quad s\in[0,\infty),\;x\in(0,\pi),$ (3.25)
one confirms that $\delta_{s}y_{s}=0$, $s\in[0,\infty)$, and hence
$(\tau_{s}y_{s})(x)=[s+(1/2)]^{2}y_{s}(x),\quad s\in[0,\infty),\;x\in(0,\pi).$
(3.26)
A second linearly independent solution of $\tau_{s}y=[s+(1/2)]^{2}y$ is then
given by
$\displaystyle\begin{split}\widehat{y}_{s}(x)&=[\sin(x)]^{(1+2s)/2}\int_{x}^{\pi/2}dt\,[\sin(t)]^{-(1+2s)}\\\
&\underset{x\downarrow
0}{=}\begin{cases}(2s)^{-1}x^{(1-2s)/2}[1+o(1)],&s\in(0,\infty),\\\
x^{1/2}\text{\rm ln}(1/x)[1+o(1)],&s=0.\end{cases}\end{split}$ (3.27)
By inspection, $y_{s}^{\prime}\in L^{2}((0,\pi);dx)$ if and only if
$s\in(0,\infty)$, and hence there is a cancellation taking place in
$\delta_{0}y_{0}=0$ for $s=0$, whereas $\widehat{y}_{s}\notin
L^{2}((0,\pi);dx)$ for $s\in[1,\infty)$ (in accordance with $\tau_{s}$ being
in the limit point case for $s\in[1,\infty)$) and
$\widehat{y}_{s}^{\,\prime}\notin L^{2}((0,\pi);dx)$ for $s\in[0,\infty)$.
$\diamond$
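The factorization (3.23)–(3.24) and the relation $\delta_{s}y_{s}=0$ underlying (3.26) can be verified symbolically; a minimal sketch (assuming Python with SymPy available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
s = sp.symbols('s', nonnegative=True)
c = s + sp.Rational(1, 2)
f = sp.Function('f')(x)

# delta_s f = f' - (s + 1/2) cot(x) f, cf. (3.24)
delta = sp.diff(f, x) - c * sp.cot(x) * f
# delta_s^+ (delta_s f) = -(delta_s f)' - (s + 1/2) cot(x) (delta_s f)
lhs = -sp.diff(delta, x) - c * sp.cot(x) * delta
# tau_s f - (s + 1/2)^2 f, cf. (3.23)
rhs = -sp.diff(f, x, 2) + (s**2 - sp.Rational(1, 4)) / sp.sin(x)**2 * f - c**2 * f
print(sp.simplify(lhs - rhs))  # 0: the factorization (3.23) checks out

# y_s(x) = sin(x)^{(1+2s)/2} is annihilated by delta_s, cf. (3.25), (3.26)
y = sp.sin(x) ** c
print(sp.simplify(sp.diff(y, x) - c * sp.cot(x) * y))  # 0
```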
We emphasize once again that Theorem 3.1 was originally obtained by Avkhadiev
in 2015, see [6, Lemma 1].
A closer inspection of the proof of Theorem 3.1 reveals that $[\sin(x)]^{-2}$
is just a particularly convenient choice of a function with inverse-square
singularities at the interval endpoints, as it leads to the explicit optimal
constants $1/4$ in (3.4). To illustrate this point, we consider the
differential expressions
$\omega_{0}=-\frac{d^{2}}{dx^{2}}-\frac{1}{4x^{2}},\quad\alpha_{0}=\frac{d}{dx}-\frac{1}{2x},\quad\alpha_{0}^{+}=-\frac{d}{dx}-\frac{1}{2x},\quad
x\in(0,\pi),$ (3.28)
such that
$\alpha_{0}^{+}\alpha_{0}=\omega_{0}.$ (3.29)
The minimal and maximal $L^{2}((0,\pi);dx)$-realizations associated with
$\omega_{0}$ are then given by
$\displaystyle S_{0,min}f=\omega_{0}f,$ $\displaystyle
f\in\operatorname{dom}(S_{0,min})=\big{\\{}g\in
L^{2}((0,\pi);dx)\,\big{|}\,g,g^{\prime}\in AC_{loc}((0,\pi));$ (3.30)
$\displaystyle\hskip 48.36958pt\operatorname{supp}\,(g)\subset(0,\pi)\text{ is
compact; }\omega_{0}g\in L^{2}((0,\pi);dx)\big{\\}}.$ $\displaystyle
S_{0,max}f=\omega_{0}f,$ $\displaystyle
f\in\operatorname{dom}(S_{0,max})=\big{\\{}g\in
L^{2}((0,\pi);dx)\,\big{|}\,g,g^{\prime}\in AC_{loc}((0,\pi));$ (3.31)
$\displaystyle\hskip 179.25244pt\omega_{0}g\in L^{2}((0,\pi);dx)\big{\\}},$
implying $S_{0,min}^{*}=S_{0,max}$, $S_{0,max}^{*}=S_{0,min}$, and we also
introduce the following self-adjoint extensions of $S_{0,min}$, respectively,
restrictions of $S_{0,max}$ (see, e.g., [3], [5], [4], [13], [18], [22], [27],
[31], [41], [43], [60]),
$\displaystyle S_{0,D,N}f=\omega_{0}f,$ $\displaystyle
f\in\operatorname{dom}(S_{0,D,N})=\\{g\in\operatorname{dom}(S_{0,max})\,|\,\widetilde{g}(0)=g^{\prime}(\pi)=0\\}$
(3.32) $\displaystyle\hskip
73.97733pt=\big{\\{}g\in\operatorname{dom}(S_{0,max})\,\big{|}\,g^{\prime}(\pi)=0;\,\alpha_{0}g\in
L^{2}((0,\pi);dx)\big{\\}},$ $\displaystyle S_{0,F}f=\omega_{0}f,$
$\displaystyle
f\in\operatorname{dom}(S_{0,F})=\\{g\in\operatorname{dom}(S_{0,max})\,|\,\widetilde{g}(0)=g(\pi)=0\\}$
(3.33) $\displaystyle\hskip
64.01869pt=\big{\\{}g\in\operatorname{dom}(S_{0,max})\,\big{|}\,g(\pi)=0;\,\alpha_{0}g\in
L^{2}((0,\pi);dx)\big{\\}},$
with $S_{0,F}$ the Friedrichs extension of $S_{0,min}$. The quadratic forms
corresponding to $S_{0,D,N}$ and $S_{0,F}$ are of the form
$\displaystyle
Q_{S_{0,D,N}}(f,g)=(\alpha_{0}f,\alpha_{0}g)_{L^{2}((0,\pi);dx)},$
$\displaystyle f,g\in\operatorname{dom}(Q_{S_{0,D,N}})=\big{\\{}g\in
L^{2}((0,\pi);dx)\,\big{|}\,g\in AC_{loc}((0,\pi));\,g^{\prime}(\pi)=0,$
(3.34) $\displaystyle\hskip 224.7766pt\alpha_{0}g\in
L^{2}((0,\pi);dx)\big{\\}},$ $\displaystyle
Q_{S_{0,F}}(f,g)=(\alpha_{0}f,\alpha_{0}g)_{L^{2}((0,\pi);dx)},$
$\displaystyle f,g\in\operatorname{dom}(Q_{S_{0,F}})=\big{\\{}g\in
L^{2}((0,\pi);dx)\,\big{|}\,g\in AC_{loc}((0,\pi));\,g(\pi)=0,$ (3.35)
$\displaystyle\hskip 213.39566pt\alpha_{0}g\in L^{2}((0,\pi);dx)\big{\\}}.$
One verifies (see (B.6)) that for all $\varepsilon>0$ and $g\in
AC_{loc}((0,\varepsilon))$,
$\alpha_{0}g\in L^{2}((0,\varepsilon);dx)\,\text{ implies
}\,\widetilde{g}(0)=0.$ (3.36)
By inspection,
$\displaystyle\begin{split}f_{0}(\lambda,x)&=x^{1/2}J_{0}\big{(}\lambda^{1/2}x\big{)},\quad
x\in(0,\pi),\\\ &\underset{x\downarrow
0}{=}x^{1/2}\big{[}1+O\big{(}x^{2}\big{)}\big{]}\end{split}$ (3.37)
(where $J_{\nu}(\,\cdot\,)$ denotes the standard Bessel function of order
$\nu\in{\mathbb{C}}$, cf. [1, Ch. 9]), satisfies (cf. (3.32), (3.33))
$\widetilde{f}_{0}(0)=0.$ (3.38)
Thus, introducing Lamb’s constant, now denoted by $\lambda_{D,N,0}^{1/2}$, as
the first positive zero of
$(0,\infty)\ni x\mapsto J_{0}(x)+2xJ_{0}^{\prime}(x)=J_{0}(x)-2xJ_{1}(x)$ (3.39)
(see the brief discussion in [9]), one infers that $\lambda_{D,N,0}/\pi^{2}$
is the first positive zero of $f_{0}^{\prime}(\,\cdot\,,\pi)$, that is,
$f_{0}^{\prime}(\lambda_{D,N,0}/\pi^{2},\pi)=0.$ (3.40)
In addition, denoting by $\lambda_{F,0}/\pi^{2}$ the first strictly positive
zero of $f_{0}(\,\cdot\,,\pi)$, one has
$f_{0}(\lambda_{F,0}/\pi^{2},\pi)=0,$ (3.41)
and hence $\lambda_{D,N,0}/\pi^{2}$ and $\lambda_{F,0}/\pi^{2}$ are the first
eigenvalues of the mixed Dirichlet/Neumann operator $S_{0,D,N}$ and of the
Dirichlet operator (the Friedrichs extension of $S_{0,min}$) $S_{0,F}$,
respectively (in particular, $\lambda_{D,N,0}$ and $\lambda_{F,0}$ are the
first eigenvalues of the corresponding mixed Dirichlet/Neumann and Dirichlet
operators on the interval $(0,1)$). Equivalently,
$\inf(\sigma(S_{0,D,N}))=\lambda_{D,N,0}\pi^{-2},\quad\inf(\sigma(S_{0,F}))=\lambda_{F,0}\pi^{-2},$
(3.42)
in particular,
$\displaystyle S_{0,D,N}\geq\lambda_{D,N,0}\pi^{-2}I_{L^{2}((0,\pi);dx)},\quad
S_{0,F}\geq\lambda_{F,0}\pi^{-2}I_{L^{2}((0,\pi);dx)},$ (3.43) $\displaystyle
Q_{S_{0,D,N}}(f,f)\geq\lambda_{D,N,0}\pi^{-2}\|f\|^{2}_{L^{2}((0,\pi);dx)},\quad
f\in\operatorname{dom}(Q_{S_{0,D,N}}),$ (3.44) $\displaystyle
Q_{S_{0,F}}(f,f)\geq\lambda_{F,0}\pi^{-2}\|f\|^{2}_{L^{2}((0,\pi);dx)},\quad
f\in\operatorname{dom}(Q_{S_{0,F}}).$ (3.45)
Numerically, one confirms that
$\lambda_{D,N,0}=0.885...,\quad\lambda_{F,0}=5.783...\,.$ (3.46)
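The numerical values in (3.46) can be reproduced directly from the Bessel-function characterizations (3.39)–(3.41); a sketch assuming Python with SciPy available (recall $J_{0}^{\prime}=-J_{1}$):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0, j1, jn_zeros

# lambda_{F,0}^{1/2} is the first positive zero j_{0,1} of J_0, cf. (3.41)
lam_F = jn_zeros(0, 1)[0] ** 2            # j_{0,1}^2 = 5.783...

# Lamb's constant lambda_{D,N,0}^{1/2}: first positive zero of
# J_0(x) + 2 x J_0'(x) = J_0(x) - 2 x J_1(x), cf. (3.39), (3.40)
lam_DN = brentq(lambda x: j0(x) - 2 * x * j1(x), 0.5, 1.5) ** 2

print(lam_DN, lam_F)   # 0.885..., 5.783...
```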
Thus, arguments analogous to the ones in the proof of Theorem 3.1 yield the
following variants of (3.1), (3.2),
$\displaystyle\begin{split}\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}&\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{x^{2}}+\frac{\lambda_{D,N,0}}{\pi^{2}}\int_{0}^{\pi}dx\,|f(x)|^{2},\\\
&\hskip 54.06006ptf\in\operatorname{dom}(Q_{S_{0,D,N}})\cap
H^{1}((0,\pi)),\end{split}$ (3.47)
$\displaystyle\begin{split}\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}&\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{x^{2}}+\frac{\lambda_{F,0}}{\pi^{2}}\int_{0}^{\pi}dx\,|f(x)|^{2},\\\
&\hskip 113.81102ptf\in H_{0}^{1}((0,\pi)),\end{split}$ (3.48)
as well as,
$\displaystyle\begin{split}\int_{0}^{\pi}dx\,|f^{\prime}(x)|^{2}\geq\dfrac{1}{4}\int_{0}^{\pi}dx\,\dfrac{|f(x)|^{2}}{d_{(0,\pi)}(x)^{2}}+\frac{4\lambda_{D,N,0}}{\pi^{2}}\int_{0}^{\pi}dx\,|f(x)|^{2},&\\\
f\in H_{0}^{1}((0,\pi)).&\end{split}$ (3.49)
All constants in (3.47)–(3.49) are optimal and the inequalities are all
strict (for $f\not\equiv 0$).
In obtaining (3.47)–(3.49) one makes use of the fact that the domain of
a semibounded, self-adjoint operator $A$ in the complex, separable Hilbert
space ${\mathcal{H}}$ is a form core for $A$, equivalently, a core for
$|A|^{1/2}$. In addition, we used (cf. [22, Theorem 7.1]) that for
$f\in\operatorname{dom}(S_{0,D,N})\cup\operatorname{dom}(S_{0,F})$, there
exists $K_{0}(f)\in{\mathbb{C}}$ such that
$\lim_{x\downarrow 0}x^{-1/2}f(x)=K_{0}(f),\quad\lim_{x\downarrow
0}x^{1/2}f^{\prime}(x)=K_{0}(f)/2,\quad\lim_{x\downarrow
0}f(x)f^{\prime}(x)=K_{0}(f)^{2}/2.$ (3.50)
Moreover, if in addition $f^{\prime}\in L^{2}((0,1);dx)$ and $f(0)=0$, combining
(3.50) with estimate (B.10) yields $K_{0}(f)=0$, and hence
$\lim_{x\downarrow 0}f(x)f^{\prime}(x)=0$. This permits one to integrate by
parts in $Q_{S_{0,D,N}}(f,f)$ and $Q_{S_{0,F}}(f,f)$ and in the process verify
(3.47)–(3.49).
We note that inequalities (3.47) and (3.49) were first derived by
Avkhadiev and Wirths [9] (also recorded in [10] and [11, Sect. 3.6.3]; see
also [7], [8], [37], [50]) following a different approach applicable to the
multi-dimensional case. We have not found inequality (3.48) in the literature,
but expect it to be known.
###### Remark 3.4.
The arguments presented thus far might seem to indicate that Hardy-type
inequalities are naturally associated with underlying second-order
differential operators satisfying boundary conditions of the Dirichlet and/or
Neumann type at the interval endpoints. However, this is not quite the case as
the following result (borrowed, e.g., from [17, Lemma 5.3.1], [52, Sect. 1.1])
shows: Suppose $b\in(0,\infty)$, $f\in AC_{loc}((0,b))$, $f^{\prime}\in
L^{2}((0,b);dx)$, $f(0)=0$, then (with $f$ real-valued without loss of
generality),
$\displaystyle\int_{0}^{b}dx\,|f^{\prime}(x)|^{2}=\int_{0}^{b}dx\,\big{|}x^{1/2}\big{[}x^{-1/2}f(x)\big{]}^{\prime}+(2x)^{-1}f(x)\big{|}^{2}$
$\displaystyle\quad=\int_{0}^{b}dx\,\Big{\\{}4^{-1}x^{-2}f(x)^{2}+x^{-1/2}f(x)\big{[}x^{-1/2}f(x)\big{]}^{\prime}+x\big{[}\big{(}x^{-1/2}f(x)\big{)}^{\prime}\big{]}^{2}\Big{\\}}$
$\displaystyle\quad\geq\int_{0}^{b}dx\,\Big{\\{}4^{-1}x^{-2}f(x)^{2}+x^{-1/2}f(x)\big{[}x^{-1/2}f(x)\big{]}^{\prime}\Big{\\}}$
$\displaystyle\quad=\int_{0}^{b}dx\,\frac{|f(x)|^{2}}{4x^{2}}+2^{-1}\big{[}x^{-1/2}f(x)\big{]}^{2}\big{|}_{x=0}^{b}$
$\displaystyle\quad\geq\int_{0}^{b}dx\,\frac{|f(x)|^{2}}{4x^{2}}-2^{-1}\lim_{x\downarrow
0}\big{[}x^{-1/2}f(x)\big{]}^{2}$
$\displaystyle\quad=\int_{0}^{b}dx\,\frac{|f(x)|^{2}}{4x^{2}},$ (3.51)
employing the estimate (B.10) with $f(0)=0$. In particular, no boundary
conditions whatsoever are needed at the right end point $b$. One notes that
the hypotheses on $f$ imply that $f\in AC([0,b])$ and hence actually that $f$
behaves like an $H^{1}_{0}$-function in a right neighborhood of $x=0$,
equivalently, $f\widetilde{\chi}_{[0,b/2]}\in H^{1}_{0}((0,b))$, where
$\widetilde{\chi}_{[0,r/2]}(x)=\begin{cases}1,&x\in[0,r/4],\\\
0,&x\in[3r/4,r],\end{cases}\quad\widetilde{\chi}_{[0,r/2]}\in
C^{\infty}([0,r]),\;r\in(0,\infty).$ (3.52)
$\diamond$
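A quick numerical illustration of the computation (3.51) (a sketch assuming Python with SciPy; the choices $b=1$ and $f(x)=x$, which satisfies $f(0)=0$ but no boundary condition whatsoever at $x=b$, are our own):

```python
import numpy as np
from scipy.integrate import quad

b = 1.0
f = lambda x: x          # f(0) = 0, f' in L^2((0, b)), free at x = b
df = lambda x: 1.0

lhs, _ = quad(lambda x: df(x)**2, 0, b)              # int |f'|^2 = 1
rhs, _ = quad(lambda x: f(x)**2 / (4 * x**2), 0, b)  # int |f|^2 / (4 x^2) = 1/4
print(lhs, rhs)          # 1.0 >= 0.25, as (3.51) asserts
```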
###### Remark 3.5.
Employing locality of the operators involved, one can show (cf. [30]) that all
considerations in the bulk of this paper extend to the situation where
$q(x)=\frac{s^{2}-(1/4)}{x^{2}},\,\text{ respectively,
}\,q(x)=\frac{s^{2}-(1/4)}{\sin^{2}(x)},\quad s\in[0,\infty),$ (3.53)
is replaced by a potential $q$ satisfying $q\in L^{1}_{loc}((0,\pi);dx)$ and,
for some $s_{j}\in[0,\infty)$, $j=1,2$, and some sufficiently small
$\varepsilon>0$,
$\displaystyle
q_{s_{1},s_{2}}(x)=\begin{cases}\big{[}s_{1}^{2}-(1/4)\big{]}x^{-2},&x\in(0,\varepsilon),\\\\[2.84526pt]
\big{[}s_{2}^{2}-(1/4)\big{]}(x-\pi)^{-2},&x\in(\pi-\varepsilon,\pi).\end{cases}$
(3.54)
As discussed in [41], this can be replaced by $q\geq q_{s_{1},s_{2}}$ a.e.
In addition, we have only presented the tip of the iceberg in this section, as
these considerations naturally extend to more general Sturm–Liouville operators
in $L^{2}((a,b);dx)$ generated by differential expressions of the type
$-\frac{d}{dx}p(x)\frac{d}{dx}+q(x),\quad x\in(a,b),$ (3.55)
as discussed to some extent in [32]. We will return to this and the general
three-coefficient Sturm–Liouville operators in $L^{2}((a,b);rdx)$ generated by
$\frac{1}{r(x)}\bigg{[}-\frac{d}{dx}p(x)\frac{d}{dx}+q(x)\bigg{]},\quad
x\in(a,b),$ (3.56)
elsewhere. $\diamond$
## Appendix A. The Weyl–Titchmarsh–Kodaira $m$-Function Associated with $T_{s,F}$
We start by introducing a normalized fundamental system of solutions
$\phi_{0,s}(z,\,\cdot\,)$ and $\theta_{0,s}(z,\,\cdot\,)$ of $\tau_{s}u=zu$,
$s\in[0,1)$, $z\in{\mathbb{C}}$, satisfying (cf. the generalized boundary
values introduced in (2.14), (2.15))
$\displaystyle\widetilde{\theta}_{0,s}(z,0)=1,\quad\widetilde{\theta}^{\,\prime}_{0,s}(z,0)=0,\quad\widetilde{\phi}_{0,s}(z,0)=0,\quad\widetilde{\phi}^{\,\prime}_{0,s}(z,0)=1,$
(A.1)
with $\phi_{0,s}(\,\cdot\,,x)$ and $\theta_{0,s}(\,\cdot\,,x)$ entire for
fixed $x\in(0,\pi)$. To this end, we introduce the two linearly independent
solutions to $\tau_{s}y=zy$ (entire w.r.t. $z$ for fixed $x\in(0,\pi)$) given
by
$\displaystyle y_{1,s}(z,x)$ $\displaystyle=[\sin(x)]^{(1-2s)/2}$
$\displaystyle\quad\times
F\big{(}\big{[}(1/2)-s+z^{1/2}\big{]}\big{/}2,\big{[}(1/2)-s-z^{1/2}\big{]}\big{/}2;1/2;\cos^{2}(x)\big{)},$
$\displaystyle y_{2,s}(z,x)$ $\displaystyle=\cos(x)[\sin(x)]^{(1-2s)/2}$ (A.2)
$\displaystyle\quad\times
F\big{(}\big{[}(3/2)-s+z^{1/2}\big{]}\big{/}2,\big{[}(3/2)-s-z^{1/2}\big{]}\big{/}2;3/2;\cos^{2}(x)\big{)},$
$\displaystyle\hskip 157.91287pts\in[0,1),\;z\in{\mathbb{C}},\;x\in(0,\pi).$
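As a sanity check on (A.2), one can verify via finite differences that $y_{1,s}(z,\cdot)$ solves $\tau_{s}y=zy$; a numerical sketch (assuming Python with SciPy; the sample values $s=0.3$, $z=2$, $x=1$ are arbitrary choices of ours):

```python
import numpy as np
from scipy.special import hyp2f1

s, z = 0.3, 2.0          # sample parameters: s in [0, 1), z real
sz = np.sqrt(z)

def y1(x):
    # y_{1,s}(z, x) from (A.2)
    return (np.sin(x) ** ((1 - 2*s) / 2)
            * hyp2f1((0.5 - s + sz) / 2, (0.5 - s - sz) / 2, 0.5, np.cos(x)**2))

x, h = 1.0, 1e-4
y2nd = (y1(x + h) - 2 * y1(x) + y1(x - h)) / h**2      # central 2nd difference
residual = -y2nd + (s**2 - 0.25) / np.sin(x)**2 * y1(x) - z * y1(x)
print(abs(residual))     # small: y_{1,s} satisfies tau_s y = z y at x
```

The same test applies verbatim to $y_{2,s}(z,\cdot)$.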
Using the connection formula found in [1, Eq. 15.3.6] yields the behavior near
$x=0,\pi,$
$\displaystyle y_{1,s}(z,x)$
$\displaystyle=[\sin(x)]^{(1-2s)/2}\dfrac{\pi^{1/2}\Gamma(s)}{\Gamma\big{(}\big{[}(1/2)+s+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(1/2)+s-z^{1/2}\big{]}\big{/}2\big{)}}$
$\displaystyle\qquad\times
F\big{(}\big{[}(1/2)-s+z^{1/2}\big{]}\big{/}2,\big{[}(1/2)-s-z^{1/2}\big{]}\big{/}2;1-s;\sin^{2}(x)\big{)}$
$\displaystyle\quad\;+[\sin(x)]^{(1+2s)/2}\dfrac{\pi^{1/2}\Gamma(-s)}{\Gamma\big{(}\big{[}(1/2)-s+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(1/2)-s-z^{1/2}\big{]}\big{/}2\big{)}}$
$\displaystyle\qquad\times
F\big{(}\big{[}(1/2)+s+z^{1/2}\big{]}\big{/}2,\big{[}(1/2)+s-z^{1/2}\big{]}\big{/}2;1+s;\sin^{2}(x)\big{)},$
$\displaystyle\hskip 159.3356pts\in(0,1),\;z\in{\mathbb{C}},\;x\in(0,\pi),$
(A.3) $\displaystyle y_{2,s}(z,x)$ $\displaystyle=\cos(x)[\sin(x)]^{(1-2s)/2}$
$\displaystyle\qquad\times\dfrac{\pi^{1/2}\Gamma(s)}{2\Gamma\big{(}\big{[}(3/2)+s+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(3/2)+s-z^{1/2}\big{]}\big{/}2\big{)}}$
$\displaystyle\qquad\times
F\big{(}\big{[}(3/2)-s+z^{1/2}\big{]}\big{/}2,\big{[}(3/2)-s-z^{1/2}\big{]}\big{/}2;1-s;\sin^{2}(x)\big{)}$
$\displaystyle\quad\;+\cos(x)[\sin(x)]^{(1+2s)/2}$
$\displaystyle\qquad\times\dfrac{\pi^{1/2}\Gamma(-s)}{2\Gamma\big{(}\big{[}(3/2)-s+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(3/2)-s-z^{1/2}\big{]}\big{/}2\big{)}}$
$\displaystyle\qquad\times
F\big{(}\big{[}(3/2)+s+z^{1/2}\big{]}\big{/}2,\big{[}(3/2)+s-z^{1/2}\big{]}\big{/}2;1+s;\sin^{2}(x)\big{)},$
$\displaystyle\hskip
142.26378pts\in(0,1),\;z\in{\mathbb{C}},\;x\in(0,\pi)\backslash\\{\pi/2\\}.$
###### Remark A.1.
Before we turn to the case $s=0$, we recall Gauss’s identity (cf. [1, no.
15.1.20])
$F(\alpha,\beta;\gamma;1)=\frac{\Gamma(\gamma)\Gamma(\gamma-\alpha-\beta)}{\Gamma(\gamma-\alpha)\Gamma(\gamma-\beta)},\quad\gamma\in{\mathbb{C}}\backslash\\{-{\mathbb{N}}_{0}\\},\;\text{\rm
Re}(\gamma-\alpha-\beta)>0,$ (A.4)
and the differentiation formula (cf. [1, no. 15.2.1])
$\frac{d}{dz}F(\alpha,\beta;\gamma;z)=\frac{\alpha\beta}{\gamma}F(\alpha+1,\beta+1;\gamma+1;z),\quad\alpha,\beta,\gamma\in{\mathbb{C}},\;z\in\\{\zeta\in{\mathbb{C}}\,|\,|\zeta|<1\\},$
(A.5)
which imply that for $s\in(0,1)$, the two $F(\,\cdot\,,\,\cdot\,;\,\cdot\,;1)$
exist in (A.2) (indeed, for $j=1,2$ one obtains with $s\in(0,1)$, and with the
values of $\alpha,\beta,\gamma$ taken from (A.2), that
$\gamma-\alpha-\beta=s>0$) and hence the asymptotic behavior of
$y_{j,s}(z,x)$, $j=1,2$, as $x\downarrow 0$ and $x\uparrow\pi$ is dominated by
$x^{(1-2s)/2}$ and $(\pi-x)^{(1-2s)/2}$, respectively. However, the analogous
statement fails for $y_{j,s}^{\prime}(z,x)$, $j=1,2$, as, taking into account
(A.5), the analog of the 2nd condition in (A.4), namely, $\text{\rm
Re}[\gamma+1-(\alpha+1)-(\beta+1)]>0$, is not fulfilled (in this case the
values of $\alpha,\beta,\gamma$ taken from (A.2) with $s\in(0,1)$ yield
$[\gamma+1-(\alpha+1)-(\beta+1)]=s-1<0$). The situation is similar for the
first two $F(\,\cdot\,,\,\cdot\,;\,\cdot\,;x)$ for $y_{1,s}(z,x)$ in (A.3) as
$x\to\pi/2$ as in this case the two $F(\,\cdot\,,\,\cdot\,;\,\cdot\,;1)$
exist. Even though for $y_{2,s}(z,x)$ in (A.3) the two
$F(\,\cdot\,,\,\cdot\,;\,\cdot\,;1)$ do not exist individually, the limit of
each term does exist due to the multiplication by the factor $\cos(x)$. To see
this, one can instead consider the limit (cf. [53, no. 15.4.23])
$\displaystyle\lim_{z\to
1^{-}}\dfrac{F(\alpha,\beta;\gamma;z)}{(1-z)^{\gamma-\alpha-\beta}}=\dfrac{\Gamma(\gamma)\Gamma(\alpha+\beta-\gamma)}{\Gamma(\alpha)\Gamma(\beta)},\quad\gamma\in{\mathbb{C}}\backslash\\{-{\mathbb{N}}_{0}\\},\;\text{\rm
Re}(\gamma-\alpha-\beta)<0,$ (A.6)
which, through the appropriate change of variables, reveals that the connection
formula for $y_{2,s}(z,x)$ approaches $0$ as $x\to\pi/2$, as expected from
evaluating $y_{2,s}(z,\pi/2)$ in (A.2). But once again, the analog of the 2nd
condition in (A.4), namely, $\text{\rm Re}[\gamma+1-(\alpha+1)-(\beta+1)]>0$,
fails for the four $F^{\prime}(\,\cdot\,,\,\cdot\,;\,\cdot\,;x)$ in (A.3) as
$x\to\pi/2$. $\diamond$
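Gauss's identity (A.4) and the limit (A.6) are easy to confirm numerically; a minimal sketch (assuming Python with SciPy; the parameter values are arbitrary choices of ours within the stated constraints):

```python
import numpy as np
from scipy.special import hyp2f1, gamma

# Gauss's identity (A.4): requires Re(c - a - b) > 0
a, b, c = 0.3, 0.2, 1.1
lhs = hyp2f1(a, b, c, 1.0)
rhs = gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b))
print(lhs, rhs)          # agree

# The limit (A.6): requires Re(c - a - b) < 0
a2, b2, c2 = 0.7, 0.9, 1.1
zz = 1 - 1e-7
ratio = hyp2f1(a2, b2, c2, zz) / (1 - zz) ** (c2 - a2 - b2)
limit = gamma(c2) * gamma(a2 + b2 - c2) / (gamma(a2) * gamma(b2))
print(ratio, limit)      # close, up to O((1-z)^{|c-a-b|}) corrections
```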
Similarly, by [1, Eq. 15.3.10] one obtains for the remaining case $s=0$,
$\displaystyle
y_{1,0}(z,x)=\dfrac{\pi^{1/2}[\sin(x)]^{1/2}}{\Gamma\big{(}\big{[}(1/2)+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(1/2)-z^{1/2}\big{]}\big{/}2\big{)}}$
$\displaystyle\hskip
48.36958pt\times\sum_{n=0}^{\infty}\dfrac{\big{(}\big{[}(1/2)+z^{1/2}\big{]}\big{/}2\big{)}_{n}\big{(}\big{[}(1/2)-z^{1/2}\big{]}\big{/}2\big{)}_{n}}{(n!)^{2}}\big{[}2\psi(n+1)$
$\displaystyle\hskip
56.9055pt-\psi\big{(}n+\big{[}(1/2)+z^{1/2}\big{]}\big{/}2\big{)}-\psi\big{(}n+\big{[}(1/2)-z^{1/2}\big{]}\big{/}2\big{)}$
$\displaystyle\hskip 56.9055pt-\text{\rm
ln}(\sin^{2}(x))\big{]}[\sin(x)]^{2n},$ (A.7) $\displaystyle
y_{2,0}(z,x)=\dfrac{\pi^{1/2}\cos(x)[\sin(x)]^{1/2}}{2\Gamma\big{(}\big{[}(3/2)+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(3/2)-z^{1/2}\big{]}\big{/}2\big{)}}$
$\displaystyle\hskip
48.36958pt\times\sum_{n=0}^{\infty}\dfrac{\big{(}\big{[}(3/2)+z^{1/2}\big{]}\big{/}2\big{)}_{n}\big{(}\big{[}(3/2)-z^{1/2}\big{]}\big{/}2\big{)}_{n}}{(n!)^{2}}\big{[}2\psi(n+1)$
$\displaystyle\hskip
56.9055pt-\psi\big{(}n+\big{[}(3/2)+z^{1/2}\big{]}\big{/}2\big{)}-\psi\big{(}n+\big{[}(3/2)-z^{1/2}\big{]}\big{/}2\big{)}$
$\displaystyle\hskip 56.9055pt-\text{\rm
ln}\big{(}\sin^{2}(x)\big{)}\big{]}[\sin(x)]^{2n},$ $\displaystyle\hskip
61.17325pts=0,\;z\in{\mathbb{C}},\;x\in(0,\pi).$
Here $\psi(\,\cdot\,)=\Gamma^{\prime}(\,\cdot\,)/\Gamma(\,\cdot\,)$ denotes
the Digamma function, $\gamma_{E}=-\psi(1)=0.57721\dots$ represents Euler’s
constant, and
$(\zeta)_{0}=1,\quad(\zeta)_{n}=\Gamma(\zeta+n)/\Gamma(\zeta),\;n\in{\mathbb{N}},\quad\zeta\in{\mathbb{C}}\backslash(-{\mathbb{N}}_{0}),$
(A.8)
abbreviates Pochhammer’s symbol (see, e.g., [1, Ch. 6]). Direct
computation now yields
$\displaystyle\widetilde{y}_{1,s}(z,0)$
$\displaystyle=-\widetilde{y}_{1,s}(z,\pi)=\dfrac{2\pi^{1/2}\Gamma(1+s)}{\Gamma\big{(}\big{[}(1/2)+s+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(1/2)+s-z^{1/2}\big{]}\big{/}2\big{)}},$
$\displaystyle\widetilde{y}^{\,\prime}_{1,s}(z,0)$
$\displaystyle=\widetilde{y}^{\,\prime}_{1,s}(z,\pi)=\dfrac{\pi^{1/2}\Gamma(-s)}{\Gamma\big{(}\big{[}(1/2)-s+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(1/2)-s-z^{1/2}\big{]}\big{/}2\big{)}},$
$\displaystyle\widetilde{y}_{2,s}(z,0)$
$\displaystyle=\widetilde{y}_{2,s}(z,\pi)=\dfrac{\pi^{1/2}\Gamma(1+s)}{\Gamma\big{(}\big{[}(3/2)+s+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(3/2)+s-z^{1/2}\big{]}\big{/}2\big{)}},$
$\displaystyle\widetilde{y}^{\,\prime}_{2,s}(z,0)$
$\displaystyle=-\widetilde{y}^{\,\prime}_{2,s}(z,\pi)=\dfrac{\pi^{1/2}\Gamma(-s)}{2\Gamma\big{(}\big{[}(3/2)-s+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(3/2)-s-z^{1/2}\big{]}\big{/}2\big{)}},$
$\displaystyle\hskip 199.16928pts\in(0,1),\,z\in{\mathbb{C}},$ (A.9)
$\displaystyle\begin{split}\widetilde{y}_{1,0}(z,0)&=-\widetilde{y}_{1,0}(z,\pi)=\dfrac{2\pi^{1/2}}{\Gamma\big{(}\big{[}(1/2)+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(1/2)-z^{1/2}\big{]}\big{/}2\big{)}},\\\\[2.84526pt]
\widetilde{y}^{\,\prime}_{1,0}(z,0)&=\widetilde{y}^{\,\prime}_{1,0}(z,\pi)\\\
&=\dfrac{-\pi^{1/2}\big{[}2\gamma_{E}+\psi\big{(}\big{[}(1/2)+z^{1/2}\big{]}\big{/}2\big{)}+\psi\big{(}\big{[}(1/2)-z^{1/2}\big{]}\big{/}2\big{)}\big{]}}{\Gamma\big{(}\big{[}(1/2)+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(1/2)-z^{1/2}\big{]}\big{/}2\big{)}},\\\\[2.84526pt]
\widetilde{y}_{2,0}(z,0)&=\widetilde{y}_{2,0}(z,\pi)=\dfrac{\pi^{1/2}}{\Gamma\big{(}\big{[}(3/2)+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(3/2)-z^{1/2}\big{]}\big{/}2\big{)}},\\\\[2.84526pt]
\widetilde{y}^{\,\prime}_{2,0}(z,0)&=-\widetilde{y}^{\,\prime}_{2,0}(z,\pi)\\\
&=\dfrac{-\pi^{1/2}\big{[}2\gamma_{E}+\psi\big{(}\big{[}(3/2)+z^{1/2}\big{]}\big{/}2\big{)}+\psi\big{(}\big{[}(3/2)-z^{1/2}\big{]}\big{/}2\big{)}\big{]}}{2\Gamma\big{(}\big{[}(3/2)+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(3/2)-z^{1/2}\big{]}\big{/}2\big{)}},\\\\[2.84526pt]
&\hskip 213.39566pts=0,\,z\in{\mathbb{C}}.\end{split}$ (A.10)
In particular, one obtains
$\displaystyle\begin{split}\phi_{0,s}(z,x)&=\widetilde{y}_{2,s}(z,0)y_{1,s}(z,x)-\widetilde{y}_{1,s}(z,0)y_{2,s}(z,x),\\\\[2.84526pt]
\theta_{0,s}(z,x)&=\widetilde{y}^{\,\prime}_{1,s}(z,0)y_{2,s}(z,x)-\widetilde{y}^{\,\prime}_{2,s}(z,0)y_{1,s}(z,x),\\\\[2.84526pt]
&\hskip 59.75095pts\in[0,1),\;z\in{\mathbb{C}},\;x\in(0,\pi),\end{split}$
(A.11)
since
$\displaystyle
W(y_{1,s}(z,\,\cdot\,),y_{2,s}(z,\,\cdot\,))=\widetilde{y}_{1,s}(z,0)\widetilde{y}^{\,\prime}_{2,s}(z,0)-\widetilde{y}^{\,\prime}_{1,s}(z,0)\widetilde{y}_{2,s}(z,0)=-1,$
(A.12)
with the generalized boundary values given by (A.9), (A.10). To prove (A.12)
one recalls Euler’s reflection formula (cf. [1, no. 6.1.17])
$\displaystyle\Gamma(z)\Gamma(1-z)=\dfrac{\pi}{\sin(\pi z)},\quad
z\in{\mathbb{C}}\backslash{\mathbb{Z}},$ (A.13)
and hence concludes that
$\displaystyle\begin{split}&\Gamma\big{(}\big{[}(1/2)+\varepsilon s\pm
z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(3/2)-\varepsilon s\mp
z^{1/2}\big{]}\big{/}2\big{)}\\\
&\quad=\dfrac{\pi}{\sin\big{(}\pi\big{[}(1/2)+\varepsilon
s\pm z^{1/2}\big{]}\big{/}2\big{)}},\quad\varepsilon\in\\{-1,1\\}.\end{split}$
(A.14)
Thus one computes for $s\in(0,1)$,
$\displaystyle W(y_{1,s}(z,\,\cdot\,),y_{2,s}(z,\,\cdot\,))$
$\displaystyle\quad=-[\sin(\pi
s)]^{-1}\big{\\{}\sin\big{(}\pi\big{[}(1/2)+s+z^{1/2}\big{]}\big{/}2\big{)}\sin\big{(}\pi\big{[}(1/2)+s-z^{1/2}\big{]}\big{/}2\big{)}$
$\displaystyle\quad\hskip
71.13188pt-\sin\big{(}\pi\big{[}(1/2)-s+z^{1/2}\big{]}\big{/}2\big{)}\sin\big{(}\pi\big{[}(1/2)-s-z^{1/2}\big{]}\big{/}2\big{)}\big{\\}}$
$\displaystyle\quad=-[2\sin(\pi
s)]^{-1}\\{-\cos(\pi[(1/2)+s])+\cos(\pi[(1/2)-s])\\}=-1.$ (A.15)
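The normalization (A.12) can also be confirmed numerically from the boundary values (A.9). The following sketch (an illustration added here, not part of the argument; it uses the mpmath library) checks $W(y_{1,s}(z,\,\cdot\,),y_{2,s}(z,\,\cdot\,))=-1$ at a few sample points $s\in(0,1)$, $z\in{\mathbb{C}}$:

```python
from mpmath import mp, mpc, sqrt, gamma, pi

mp.dps = 30

def boundary_values(z, s):
    """Generalized boundary values at x = 0 from (A.9)."""
    w = sqrt(z)
    y1  = 2*sqrt(pi)*gamma(1 + s)/(gamma((mp.mpf('0.5') + s + w)/2)
                                   *gamma((mp.mpf('0.5') + s - w)/2))
    y1p = sqrt(pi)*gamma(-s)/(gamma((mp.mpf('0.5') - s + w)/2)
                              *gamma((mp.mpf('0.5') - s - w)/2))
    y2  = sqrt(pi)*gamma(1 + s)/(gamma((mp.mpf('1.5') + s + w)/2)
                                 *gamma((mp.mpf('1.5') + s - w)/2))
    y2p = sqrt(pi)*gamma(-s)/(2*gamma((mp.mpf('1.5') - s + w)/2)
                              *gamma((mp.mpf('1.5') - s - w)/2))
    return y1, y1p, y2, y2p

for s in (mp.mpf('0.25'), mp.mpf('0.7')):
    for z in (mpc(2, 3), mpc(-1, 0.5)):
        y1, y1p, y2, y2p = boundary_values(z, s)
        W = y1*y2p - y1p*y2          # Wronskian as in (A.12)
        assert abs(W + 1) < mp.mpf('1e-25')
```

Each boundary value is invariant under $z^{1/2}\to-z^{1/2}$, so the check is independent of the branch of the square root.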
For the case $s=0$, one recalls the reflection formula for the Digamma
function (cf. [1, no. 6.3.7])
$\displaystyle\psi(1-z)-\psi(z)=\pi\cot(\pi z),\quad
z\in{\mathbb{C}}\backslash{\mathbb{Z}},$ (A.16)
and applies trigonometric identities to obtain
$W(y_{1,0}(z,\,\cdot\,),y_{2,0}(z,\,\cdot\,))=-1$.
The singular Weyl–Titchmarsh–Kodaira function $m_{0,0,s}(z)$ is then uniquely
determined (cf. [28, Eq. (3.18)] and [27] for background on $m$-functions) to
be\footnote{Here the subscripts $0,0$ in $m_{0,0,s}$ indicate the Dirichlet
(i.e., Friedrichs) boundary conditions at $x=0,\pi$, a special case of the
$m_{\alpha,\beta}$-function discussed in [27] associated with separated
boundary conditions at $x=0,\pi$, indexed by boundary condition parameters
$\alpha,\beta\in[0,\pi]$.}
$\displaystyle
m_{0,0,s}(z)=-\frac{\widetilde{\theta}_{0,s}(z,\pi)}{\widetilde{\phi}_{0,s}(z,\pi)},\quad
s\in[0,1),\;z\in\rho(T_{s,F}).$ (A.17)
Direct calculation once again yields
$\displaystyle m_{0,0,s}(z)$
$\displaystyle=-\dfrac{\widetilde{y}^{\,\prime}_{2,s}(z,0)\widetilde{y}_{1,s}(z,\pi)-\widetilde{y}^{\,\prime}_{1,s}(z,0)\widetilde{y}_{2,s}(z,\pi)}{2\widetilde{y}_{1,s}(z,0)\widetilde{y}_{2,s}(z,0)}$
$\displaystyle=\begin{cases}\dfrac{\pi\Gamma(-s)}{4\Gamma(1+s)}\bigg{[}\dfrac{\Gamma\big{(}\big{[}(3/2)+s+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(3/2)+s-z^{1/2}\big{]}\big{/}2\big{)}}{\Gamma\big{(}\big{[}(3/2)-s+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(3/2)-s-z^{1/2}\big{]}\big{/}2\big{)}}\\\\[8.53581pt]
\qquad\qquad\quad+\dfrac{\Gamma\big{(}\big{[}(1/2)+s+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(1/2)+s-z^{1/2}\big{]}\big{/}2\big{)}}{\Gamma\big{(}\big{[}(1/2)-s+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(1/2)-s-z^{1/2}\big{]}\big{/}2\big{)}}\bigg{]},\\\
\hskip 233.3125pts\in(0,1),\\\\[8.53581pt]
-\big{[}4\gamma_{E}+\psi\big{(}\big{[}(1/2)+z^{1/2}\big{]}\big{/}2\big{)}+\psi\big{(}\big{[}(1/2)-z^{1/2}\big{]}\big{/}2\big{)}\\\\[2.84526pt]
\quad\,+\psi\big{(}\big{[}(3/2)+z^{1/2}\big{]}\big{/}2\big{)}+\psi\big{(}\big{[}(3/2)-z^{1/2}\big{]}\big{/}2\big{)}\big{]}/4,\quad
s=0,\end{cases}$ $\displaystyle\hskip 231.88979ptz\in\rho(T_{s,F}),$ (A.18)
which has simple poles precisely at the simple eigenvalues of $T_{s,F}$ given
by
$\displaystyle\sigma(T_{s,F})=\big{\\{}[(1/2)+s+n]^{2}\big{\\}}_{n\in{\mathbb{N}}_{0}},\quad
s\in[0,1).$ (A.19)
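The location of these poles can be confirmed numerically: by (A.18), the poles of $m_{0,0,s}$ occur where the denominator $2\widetilde{y}_{1,s}(z,0)\widetilde{y}_{2,s}(z,0)$ vanishes. The following sketch (added here for illustration, using mpmath; rgamma denotes $1/\Gamma$) checks this at the points (A.19) for a sample value of $s$:

```python
from mpmath import mp, sqrt, gamma, rgamma, pi

mp.dps = 30

def y1_tilde(z, s):
    # \tilde y_{1,s}(z,0) from (A.9); rgamma = 1/Gamma vanishes at Gamma's poles
    w = sqrt(z)
    return (2*sqrt(pi)*gamma(1 + s)
            *rgamma((mp.mpf('0.5') + s + w)/2)*rgamma((mp.mpf('0.5') + s - w)/2))

def y2_tilde(z, s):
    w = sqrt(z)
    return (sqrt(pi)*gamma(1 + s)
            *rgamma((mp.mpf('1.5') + s + w)/2)*rgamma((mp.mpf('1.5') + s - w)/2))

s = mp.mpf('0.3')
for n in range(6):
    z = (mp.mpf('0.5') + s + n)**2   # candidate eigenvalue from (A.19)
    # even n annihilates y1_tilde, odd n annihilates y2_tilde, so the
    # denominator of m_{0,0,s} in (A.18) vanishes at every point of (A.19)
    assert abs(y1_tilde(z, s)*y2_tilde(z, s)) < mp.mpf('1e-20')
```

Even $n$ makes $[(1/2)+s-z^{1/2}]/2$ a nonpositive integer (a zero of $\widetilde{y}_{1,s}(z,0)$), odd $n$ does the same for $\widetilde{y}_{2,s}(z,0)$, reproducing the union in (A.19).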
###### Remark A.2.
For the limit point case at both endpoints, that is, for $s\in[1,\infty)$, the
solutions $y_{j,s}(z,\,\cdot\,)$ in (A) remain linearly independent and also
the connection formulas (A.3) remain valid for
$s\in[1,\infty)\backslash{\mathbb{N}}$. Moreover, employing once again (A.4)
and (A.5) one verifies that the two $F(\,\cdot\,,\,\cdot\,;\,\cdot\,;1)$ as
well as $F^{\prime}(\,\cdot\,,\,\cdot\,;\,\cdot\,;1)$ are well defined in (A)
and hence for $s\in[1,\infty)$, the asymptotic behavior of $y_{j,s}(z,x)$ and
$y_{j,s}^{\prime}(z,x)$, $j=1,2$, as $x\downarrow 0$ and $x\uparrow\pi$ is
dominated by $x^{(1-2s)/2}$ and $x^{-(1+2s)/2}$ and $(\pi-x)^{(1-2s)/2}$ and
$(\pi-x)^{-(1+2s)/2}$, respectively. Since in connection with (A.3) one then
has $\gamma-\alpha-\beta=\pm 1/2$, independently of the value of
$s\in(0,\infty)$, the situation described in Remark A.1 for (A.3) and
$s\in(0,1)$ applies without change to the current case $s\in[1,\infty)$.
Actually, some of these failures (as $x\to\pi/2$ in $y_{j,s}^{\prime}(z,x)$,
$j=1,2$) are crucial for the following elementary reason: The function
$\displaystyle[\sin(x)]^{(1+2s)/2}\dfrac{\pi^{1/2}\Gamma(-s)}{\Gamma\big{(}\big{[}(1/2)-s+z^{1/2}\big{]}\big{/}2\big{)}\Gamma\big{(}\big{[}(1/2)-s-z^{1/2}\big{]}\big{/}2\big{)}}$
$\displaystyle\quad\times
F\big{(}\big{[}(1/2)+s+z^{1/2}\big{]}\big{/}2,\big{[}(1/2)+s-z^{1/2}\big{]}\big{/}2;1+s;\sin^{2}(x)\big{)},$
(A.20) $\displaystyle\hskip
159.3356pts\in[1,\infty),\;z\in{\mathbb{C}},\;x\in(0,\pi),$
(i.e., the analog of the second part of $y_{1,s}(z,\,\cdot\,)$ on the right-
hand side in (A.3)) generates an $L^{2}((0,\pi);dx)$-element near $x=0,\pi$,
and hence if this function and its $x$-derivative were locally absolutely
continuous in a neighborhood of $x=\pi/2$ (the only possibly nontrivial point
in the interval $(0,\pi)$), the self-adjoint maximal operator $T_{s,max}$,
$s\in[1,\infty)$, would have eigenvalues for all $z\in{\mathbb{C}}$, an
obvious contradiction. $\diamond$
Because of the subtlety pointed out in Remark A.2 we omit further details on
the limit point case $s\in[1,\infty)$ and refer to [25, Sect. 4], instead. In
particular, [25, Theorem 4.1 b)] extends (A.19) to $s\in[1,\infty)$ and hence
one actually has
$\displaystyle\sigma(T_{s,F})=\big{\\{}[(1/2)+s+n]^{2}\big{\\}}_{n\in{\mathbb{N}}_{0}},\quad
s\in[0,\infty).$ (A.21)
## Appendix B Remarks on Hardy-Type Inequalities
In this appendix we recall a Hardy-type inequality useful in Section 2.
Introducing the differential expressions $\alpha_{s}$, $\alpha^{+}_{s}$ (cf.
(3.28) for $s=0$),
$\alpha_{s}=\frac{d}{dx}-\frac{s+(1/2)}{x},\quad\alpha_{s}^{+}=-\frac{d}{dx}-\frac{s+(1/2)}{x},\quad
s\in[0,\infty),\;x\in(0,\pi),$ (B.1)
one confirms that
$\alpha_{s}^{+}\alpha_{s}=\omega_{s}=-\frac{d^{2}}{dx^{2}}+\frac{s^{2}-(1/4)}{x^{2}},\quad
s\in[0,\infty),\;x\in(0,\pi).$ (B.2)
Following the Hardy inequality considerations in [31], [40], [42], one obtains
the following basic facts.
###### Lemma B.1.
Suppose $f\in AC_{loc}((0,\pi))$, $\alpha_{s}f\in L^{2}((0,\pi);dx)$ for some
$s\in{\mathbb{R}}$, and $0<r_{0}<r_{1}<\pi<R<\infty$. Then,
$\displaystyle\begin{split}&\int_{r_{0}}^{r_{1}}dx\,|(\alpha_{s}f)(x)|^{2}\geq
s^{2}\int_{r_{0}}^{r_{1}}dx\,\frac{|f(x)|^{2}}{x^{2}}+\frac{1}{4}\int_{r_{0}}^{r_{1}}dx\,\frac{|f(x)|^{2}}{x^{2}[\text{\rm
ln}(R/x)]^{2}}\\\ &\hskip
92.47145pt-s\frac{|f(x)|^{2}}{x}\bigg{|}_{x=r_{0}}^{r_{1}}-\frac{|f(x)|^{2}}{2x[\text{\rm
ln}(R/x)]}\bigg{|}_{x=r_{0}}^{r_{1}},\end{split}$ (B.3)
$\displaystyle\int_{r_{0}}^{r_{1}}dx\,x\text{\rm
ln}(R/x)\bigg{|}\bigg{[}\frac{f(x)}{x^{1/2}[\text{\rm
ln}(R/x)]^{1/2}}\bigg{]}^{\prime}\bigg{|}^{2}$
$\displaystyle\quad=\int_{r_{0}}^{r_{1}}dx\,\bigg{[}|f^{\prime}(x)|^{2}-\frac{|f(x)|^{2}}{4x^{2}}-\frac{|f(x)|^{2}}{4x^{2}[\text{\rm
ln}(R/x)]^{2}}\bigg{]}$ (B.4)
$\displaystyle\qquad-\frac{|f(x)|^{2}}{2x}\bigg{|}_{x=r_{0}}^{r_{1}}+\frac{|f(x)|^{2}}{2x\text{\rm
ln}(R/x)}\bigg{|}_{x=r_{0}}^{r_{1}}\geq 0,$
$\displaystyle\begin{split}&\int_{r_{0}}^{r_{1}}dx\,|(\alpha_{s}f)(x)|^{2}=\int_{r_{0}}^{r_{1}}dx\,\bigg{[}|f^{\prime}(x)|^{2}+\big{[}s^{2}-(1/4)\big{]}\frac{|f(x)|^{2}}{x^{2}}\bigg{]}\\\
&\hskip 92.47145pt-[s+(1/2)]\frac{|f(x)|^{2}}{x}\bigg{|}_{x=r_{0}}^{r_{1}}\geq
0.\end{split}$ (B.5)
If $s=0$,
$\int_{0}^{r_{1}}dx\,\frac{|f(x)|^{2}}{x^{2}[\text{\rm
ln}(R/x)]^{2}}<\infty,\quad\lim_{x\downarrow 0}\frac{|f(x)|}{[x\text{\rm
ln}(R/x)]^{1/2}}=0.$ (B.6)
If $s\in(0,\infty)$, then
$\int_{0}^{r_{1}}dx\,|f^{\prime}(x)|^{2}<\infty,\quad\int_{0}^{r_{1}}dx\,\frac{|f(x)|^{2}}{x^{2}}<\infty,\quad\lim_{x\downarrow
0}\frac{|f(x)|}{x^{1/2}}=0,$ (B.7)
in particular,
$f\widetilde{\chi}_{[0,r_{1}/2]}\in H^{1}_{0}((0,r_{1})),$ (B.8)
where
$\widetilde{\chi}_{[0,r/2]}(x)=\begin{cases}1,&x\in[0,r/4],\\\
0,&x\in[3r/4,r],\end{cases}\quad\widetilde{\chi}_{[0,r/2]}\in
C^{\infty}([0,r]),\;r\in(0,\infty).$ (B.9)
###### Proof.
Relations (B.4) and (B.5) are straightforward (yet somewhat tedious)
identities; together they yield (B.3). The 1st relation in (B.6) is an instant
consequence of (B.3), so is the fact that $\lim_{x\downarrow
0}|f(x)|^{2}/[x\text{\rm ln}(R/x)]$ exists. Moreover, since $[x\text{\rm
ln}(R/x)]^{-1}$ is not integrable at $x=0$, the 1st relation in (B.6) yields
$\liminf_{x\downarrow 0}|f(x)|^{2}/[x\text{\rm ln}(R/x)]=0$, implying the 2nd
relation in (B.6).
Finally, if $s\in(0,\infty)$, then inequality (B.3) implies the 2nd relation
in (B.7); together with $\alpha_{s}f\in L^{2}((0,\pi);dx)$, this yields the
1st relation in (B.7). By inequality (B.3), $\lim_{x\downarrow 0}|f(x)|^{2}/x$
exists, but then the second relation in (B.7) yields $\liminf_{x\downarrow
0}|f(x)|^{2}/x=0$ and hence also $\lim_{x\downarrow 0}|f(x)|^{2}/x=0$. ∎
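For the reader's convenience, identity (B.5) can also be verified numerically. The following sketch (an illustration added here, not part of the proof) checks it for the sample choice $f(x)=\sin(x)$, $s=0.6$, $(r_0,r_1)=(0.2,1.0)$:

```python
import math

def simpson(g, a, b, n=2000):        # composite Simpson rule, n even
    h = (b - a)/n
    total = g(a) + g(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2)*g(a + k*h)
    return total*h/3

s, r0, r1 = 0.6, 0.2, 1.0
f, fp = math.sin, math.cos

def alpha_f(x):                      # (alpha_s f)(x) = f'(x) - (s + 1/2) f(x)/x
    return fp(x) - (s + 0.5)*f(x)/x

# left-hand side of (B.5)
lhs = simpson(lambda x: alpha_f(x)**2, r0, r1)
# right-hand side of (B.5): integral term minus boundary term
rhs = simpson(lambda x: fp(x)**2 + (s**2 - 0.25)*f(x)**2/x**2, r0, r1) \
      - (s + 0.5)*(f(r1)**2/r1 - f(r0)**2/r0)

assert abs(lhs - rhs) < 1e-9
assert lhs >= 0                      # the inequality in (B.5)
```

The identity reflects the pointwise computation $|(\alpha_s f)(x)|^2 = |f'(x)|^2 + [s^2-(1/4)]|f(x)|^2/x^2 - (s+1/2)\,(|f(x)|^2/x)'$.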
We also recall the following elementary fact.
###### Lemma B.2.
Suppose $f\in H^{1}((0,r))$ for some $r\in(0,\infty)$. Then, for all
$x\in(0,r)$,
$\displaystyle\begin{split}|f(x)-f(0)|&=\bigg{|}\int_{0}^{x}dt\,f^{\prime}(t)\bigg{|}\leq
x^{1/2}\bigg{(}\int_{0}^{x}dt\,|f^{\prime}(t)|^{2}\bigg{)}^{1/2}\\\ &\leq
x^{1/2}\|f^{\prime}\|_{L^{2}((0,x);dt)}\underset{x\downarrow
0}{=}o\big{(}x^{1/2}\big{)}.\end{split}$ (B.10)
Thus, if $f\in H^{1}((0,r))$, then $\int_{0}^{r}dx\,|f(x)|^{2}/x^{2}<\infty$
if and only if $f(0)=0$, that is, if and only if
$f\widetilde{\chi}_{[0,r/2]}\in H^{1}_{0}((0,r))$.
In particular, if $f\in H^{1}((0,r))$ and $f(0)=0$, then actually,
$\lim_{x\downarrow 0}\frac{|f(x)|}{x^{1/2}}=0.$ (B.11)
###### Proof.
Since (B.10) is obvious, we briefly discuss the remaining assertions in Lemma
B.2. If $f\in H^{1}((0,r))$ and $\int_{0}^{r}dx\,|f(x)|^{2}/x^{2}<\infty$ then
identity (B.5) for $s<-1/2$, that is,
$\displaystyle\begin{split}&\int_{r_{0}}^{r_{1}}dx\,|(\alpha_{s}f)(x)|^{2}=\int_{r_{0}}^{r_{1}}dx\,\bigg{[}|f^{\prime}(x)|^{2}+\big{[}s^{2}-(1/4)\big{]}\frac{|f(x)|^{2}}{x^{2}}\bigg{]}\\\
&\hskip 92.47145pt-[s+(1/2)]\frac{|f(x)|^{2}}{x}\bigg{|}_{x=r_{0}}^{r_{1}}\geq
0,\quad s<-1/2,\end{split}$ (B.12)
yields the existence of $\lim_{x\downarrow 0}|f(x)|^{2}/x$. Since
$\int_{0}^{r}dx\,|f(x)|^{2}/x^{2}<\infty$ implies that $\liminf_{x\downarrow
0}|f(x)|^{2}/x=0$, one concludes that $\lim_{x\downarrow 0}|f(x)|^{2}/x=0$ and
hence $f$ behaves locally like an $H^{1}_{0}$-function in a right neighborhood
of $x=0$. Conversely, if $f(0)=0$, then
$\int_{0}^{r}dx\,|f(x)|^{2}/x^{2}<\infty$ by Hardy’s inequality as discussed
in Remark 3.4. Relation (B.11) is clear from (B.10) with $f(0)=0$. ∎
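The bound (B.10) and the limit (B.11) admit a simple numeric illustration (added here, not part of the proof) with the sample choice $f(x)=\sin(x)$, which lies in $H^1((0,r))$ with $f(0)=0$:

```python
import math

f, fp = math.sin, math.cos

def l2_norm_fp(x, n=1000):           # (\int_0^x |f'(t)|^2 dt)^{1/2} via Simpson
    h = x/n
    total = fp(0.0)**2 + fp(x)**2
    for k in range(1, n):
        total += (4 if k % 2 else 2)*fp(k*h)**2
    return math.sqrt(total*h/3)

for x in (1.0, 0.1, 0.01):
    # Cauchy-Schwarz bound |f(x) - f(0)| <= x^{1/2} ||f'||_{L^2((0,x);dt)}
    assert abs(f(x) - f(0.0)) <= math.sqrt(x)*l2_norm_fp(x) + 1e-12

# |f(x)|/x^{1/2} decreases toward 0 as x -> 0, consistent with (B.11)
ratios = [abs(f(x))/math.sqrt(x) for x in (0.1, 0.01, 0.001)]
assert ratios[2] < ratios[1] < ratios[0]
```

Note that for $f(x)\approx x$ near $0$ the Cauchy-Schwarz bound is nearly saturated, so the small additive slack guards against quadrature rounding.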
###### Remark B.3.
$(i)$ If $f\in AC_{loc}((0,r))$ and $f^{\prime}\in L^{p}((0,r);dx)$ for some
$p\in[1,\infty)$, the Hölder estimate analogous to (B.10),
$\displaystyle\begin{split}|f(d)-f(c)|=\bigg{|}\int_{c}^{d}dt\,f^{\prime}(t)\bigg{|}\leq|d-c|^{1/p^{\prime}}\bigg{(}\int_{c}^{d}dt\,|f^{\prime}(t)|^{p}\bigg{)}^{1/p},\\\
(c,d)\subset(0,r),\;\frac{1}{p}+\frac{1}{p^{\prime}}=1,\end{split}$ (B.13)
implies the existence of $\lim_{c\downarrow 0}f(c)=f(0)$ and $\lim_{d\uparrow
r}f(d)=f(r)$ and hence yields $f\in AC([0,r])$.
$(ii)$ The fact that $f\in H^{1}((0,r))$ and
$\int_{0}^{r}dx\,|f(x)|^{2}/x^{2}<\infty$ implies
$f\widetilde{\chi}_{[0,r/2]}\in H^{1}_{0}((0,r))$ is a special case of a
multi-dimensional result recorded, for instance, in [21, Theorem 5.3.4].
$(iii)$ When replacing $x^{-2}$, $x\in(0,r)$, by $[\sin(x)]^{-2}$,
$x\in(0,\pi)$, due to locality, the considerations in Lemmas B.1 and B.2 at
the left endpoint $x=0$ apply of course to the right interval endpoint $\pi$.
$\diamond$
Acknowledgments. We are indebted to Jan Derezinski, Aleksey Kostenko, Ari
Laptev, and Gerald Teschl for very helpful discussions and to Farit Avkhadiev
for kindly pointing out to us references [6] and [8].
## References
* [1] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, Dover, New York, 1972.
* [2] N. I. Akhiezer and I. M. Glazman, Theory of Linear Operators in Hilbert Space, Volume II, Pitman, Boston, 1981.
* [3] V. S. Alekseeva and A. Yu. Ananieva, On extensions of the Bessel operator on a finite interval and a half-line, J. Math. Sci. 187, 1–8 (2012).
* [4] A. Yu. Anan’eva and V. S. Budyka, On the spectral theory of the Bessel operator on a finite interval and the half-line, Diff. Eq. 52, 1517–1522 (2016).
* [5] A. Yu. Ananieva and V. S. Budyika, To the spectral theory of the Bessel operator on finite interval and half-line, J. Math. Sci. 211, 624–645 (2015).
* [6] F. G. Avkhadiev, Integral inequalities in domains of hyperbolic type and their applications, Sbornik Math. 206, 1657–1681 (2015).
* [7] F. G. Avkhadiev, Brezis–Marcus problem and its generalizations, J. Math. Sci. 252, 291–301 (2021).
* [8] F. Avkhadiev, Selected results and open problems on Hardy–Rellich and Poincaré–Friedrichs inequalities, Anal. Math. Physics 11, No. 134 (2021), 20pp.
* [9] F. G. Avkhadiev and K. J. Wirths, Unified Poincaré and Hardy inequalities with sharp constants for convex domains, Angew. Math. Mech. 87, 632–642 (2007).
* [10] F. G. Avkhadiev and K. J. Wirths, Sharp Hardy-type inequalities with Lamb’s constant, Bull. Belg. Math. Soc. Simon Stevin 18, 723–736 (2011).
* [11] A. A. Balinsky, W. D. Evans, and R. T. Lewis, The Analysis and Geometry of Hardy’s Inequality, Universitext, Springer, 2015.
* [12] H. Brezis and M. Marcus, Hardy’s inequalities revisited, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 25, 217–237 (1997).
* [13] L. Bruneau, J. Dereziński, and V. Georgescu, Homogeneous Schrödinger operators on half-line, Ann. H. Poincaré 12, 547–590 (2011).
* [14] W. Bulla and F. Gesztesy, Deficiency indices and singular boundary conditions in quantum mechanics, J. Math. Phys. 26, 2520–2528 (1985).
* [15] R. S. Chisholm, W. N. Everitt, and L. L. Littlejohn, An integral operator inequality with applications, J. of Inequal. & Applications 3, 245–266 (1999).
* [16] E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, Krieger Publ., Malabar, FL, 1985.
* [17] E. B. Davies, Spectral Theory and Differential Operators, Cambridge Studies in Advanced Mathematics, Vol. 42, Cambridge University Press, Cambridge, UK, 1995.
* [18] J. Dereziński and V. Georgescu, On the domains of Bessel operators, Ann. H. Poincaré 22, 3291–3309 (2021).
* [19] J. Dereziński and M. Wrochna, Exactly solvable Schrödinger operators, Ann. H. Poincaré 12, 397–418 (2011); see also the update at arXiv:1009.0541.
* [20] N. Dunford and J. T. Schwartz, Linear Operators. Part II: Spectral Theory, Wiley, Interscience, New York, 1988.
* [21] D. E. Edmunds and W. D. Evans, Spectral Theory and Differential Operators, 2nd ed., Oxford University Press, 2018.
* [22] W. N. Everitt and H. Kalf, The Bessel differential equation and the Hankel transform, J. Comp. Appl. Math. 208, 3–19 (2007).
* [23] S. Flügge, Practical Quantum Mechanics, Vol. I, reprinted 2nd 1994 ed., Springer, Berlin, 1999.
* [24] G. B. Folland, Real Analysis. Modern Techniques and Their Applications, 2nd ed., Wiley-Interscience, New York, 1999.
* [25] F. Gesztesy and W. Kirsch, One-dimensional Schrödinger operators with interactions singular on a discrete set, J. Reine Angew. Math. 362, 28–50 (1985).
* [26] F. Gesztesy, L. L. Littlejohn, I. Michael, and R. Wellman, On Birman’s sequence of Hardy–Rellich-type inequalities, J. Diff. Eq. 264, 2761–2801 (2018).
* [27] F. Gesztesy, L. L. Littlejohn, and R. Nichols, On self-adjoint boundary conditions for singular Sturm–Liouville operators bounded from below, J. Diff. Eq. 269, 6448–6491 (2020).
* [28] F. Gesztesy, L. L. Littlejohn, M. Piorkowski, and J. Stanfill, The Jacobi operator and its Weyl–Titchmarsh–Kodaira $m$-functions, in preparation.
* [29] F. Gesztesy, C. Macedo, and L. Streit, An exactly solvable periodic Schrödinger operator, J. Phys. A18, L503–L507 (1985).
* [30] F. Gesztesy, M. M. H. Pang, and J. Stanfill, On domain properties of Bessel-type operators, arXiv:2107.09271.
* [31] F. Gesztesy and L. Pittner, On the Friedrichs extension of ordinary differential operators with strongly singular potentials, Acta Phys. Austriaca 51, 259–268 (1979).
* [32] F. Gesztesy and M. Ünal, Perturbative oscillation criteria and Hardy-type inequalities, Math. Nachr. 189, 121–144 (1998).
* [33] F. Gesztesy and M. Zinchenko, Sturm–Liouville Operators, Their Spectral Theory, and Some Applications, in preparation.
* [34] G. R. Goldstein, J. A. Goldstein, R. M. Mininni, and S. Romanelli, Scaling and variants of Hardy’s inequality, Proc. Amer. Math. Soc. 147, 1165–1172 (2019).
* [35] G. H. Hardy, Notes on some points in the integral calculus, LX. An inequality between integrals, Messenger Math. 54, 150–156 (1925).
* [36] G. H. Hardy, J. E. Littlewood, and G. Pólya, Inequalities, Cambridge University Press, Cambridge, UK, reprinted, 1988.
* [37] M. Hoffmann-Ostenhof, T. Hoffmann-Ostenhof, and A. Laptev, A geometrical version of Hardy’s inequality, J. Funct. Anal. 189, 539–548 (2002).
* [38] L. Infeld and T. E. Hull, The factorization method, Rev. Mod. Phys. 23, 21–68 (1951).
* [39] K. Jörgens and F. Rellich, Eigenwerttheorie Gewöhnlicher Differentialgleichungen, Springer-Verlag, Berlin, 1976.
* [40] H. Kalf, On the characterization of the Friedrichs extension of ordinary or elliptic differential operators with a strongly singular potential, J. Funct. Anal. 10, 230–250 (1972).
* [41] H. Kalf, A characterization of the Friedrichs extension of Sturm–Liouville operators, J. London Math. Soc. (2) 17, 511–521 (1978).
* [42] H. Kalf and J. Walter, Strongly singular potentials and essential self-adjointness of singular elliptic operators in $C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})$, J. Funct. Anal. 10, 114–130 (1972).
* [43] A. Kostenko and G. Teschl, On the singular Weyl–Titchmarsh function of perturbed spherical Schrödinger operators, J. Diff. Eq. 250, 3701–3739 (2011).
* [44] A. Kufner, L. Maligranda, and L.-E. Persson, The Hardy Inequality. About its History and Some Related Results, Vydavatelský Servis, Pilsen, 2007.
* [45] A. Kufner, L.-E. Persson, and N. Samko, Weighted Inequalities of Hardy Type, 2nd ed., World Scientific, Singapore, 2017.
* [46] A. Kufner, Weighted Sobolev Spaces, A Wiley-Interscience Publication, John Wiley & Sons, 1985.
* [47] E. Landau, A note on a theorem concerning series of positive terms: extract from a letter of Prof. E. Landau to Prof. I. Schur, J. London Math. Soc. 1, 38–39 (1926).
* [48] W. Lotmar, Zur Darstellung des Potentialverlaufs bei zweiatomigen Molekülen, Z. Physik 93, 528–533 (1935).
* [49] M. A. Naimark, Linear Differential Operators. Part II: Linear Differential Operators in Hilbert Space, Transl. by E. R. Dawson, Engl. translation edited by W. N. Everitt, Ungar Publishing, New York, 1968.
* [50] R. G. Nasibullin and R. V. Makarov, Hardy’s inequalities with remainders and Lamb-type equations, Siberian Math. J. 61, 1102–1119 (2020).
* [51] H.-D. Niessen and A. Zettl, Singular Sturm–Liouville problems: the Friedrichs extension and comparison of eigenvalues, Proc. London Math. Soc. (3) 64, 545–578 (1992).
* [52] B. Opic and A. Kufner, Hardy-Type Inequalities, Pitman Research Notes in Mathematics Series, Vol. 219. Longman Scientific & Technical, Harlow, 1990.
* [53] F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark (eds.), NIST Handbook of Mathematical Functions, National Institute of Standards and Technology (NIST), U.S. Dept. of Commerce, and Cambridge Univ. Press, 2010.
* [54] D. B. Pearson, Quantum Scattering and Spectral Theory, Academic Press, London, 1988.
* [55] L.-E. Persson and S. G. Samko, A note on the best constants in some Hardy inequalities, J. Math. Inequalities 9, 437–447 (2015).
* [56] G. Pöschl and E. Teller, Bemerkungen zur Quantenmechanik des anharmonischen Oszillators, Zeitschr. Physik 83, 143–151 (1933).
* [57] F. Rellich, Die zulässigen Randbedingungen bei den singulären Eigenwertproblemen der mathematischen Physik. (Gewöhnliche Differentialgleichungen zweiter Ordnung.), Math. Z. 49, 702–723 (1943/44).
* [58] F. Rellich, Halbbeschränkte gewöhnliche Differentialoperatoren zweiter Ordnung. Math. Ann. 122, 343–368 (1951). (German.)
* [59] N. Rosen and P. M. Morse, On the Vibrations of Polyatomic Molecules, Phys. Rev. 42, 210–217 (1932).
* [60] R. Rosenberger, A new characterization of the Friedrichs extension of semibounded Sturm–Liouville operators, J. London Math. Soc. (2) 31, 501–510 (1985).
* [61] F. L. Scarf, Discrete states for singular potential problems, Phys. Rev. 109, 2170–2176 (1958).
* [62] F. L. Scarf, New soluble energy band problem, Phys. Rev. 112, 1137–1140 (1958).
* [63] G. Teschl, Mathematical Methods in Quantum Mechanics. With Applications to Schrödinger Operators, 2nd ed., Graduate Studies in Math., Vol. 157, Amer. Math. Soc., RI, 2014.
* [64] J. Weidmann, Linear Operators in Hilbert Spaces, Graduate Texts in Mathematics, Vol. 68, Springer, New York, 1980.
* [65] J. Weidmann, Lineare Operatoren in Hilberträumen. Teil II: Anwendungen, Teubner, Stuttgart, 2003.
* [66] A. Zettl, Sturm–Liouville Theory, Mathematical Surveys and Monographs, Vol. 121, Amer. Math. Soc., Providence, RI, 2005.
# On the periodicity of cardiovascular fluid dynamics simulations
Martin R. Pfaller, Jonathan Pham, Nathan M. Wilson, David W. Parker, Alison L.
Marsden
###### Abstract
Three-dimensional cardiovascular fluid dynamics simulations typically require
computation of several cardiac cycles before they reach a periodic solution,
rendering them computationally expensive. Furthermore, there is currently no
standardized method to determine whether a simulation has yet reached that
periodic state. In this work, we propose the use of the asymptotic error measure
to quantify the difference between simulation results and their ideal periodic
state using lumped-parameter modeling. We further show that initial conditions
are crucial in reducing computational time and develop an automated framework
to generate appropriate initial conditions from a one-dimensional model of
blood flow. We demonstrate the performance of our initialization method using
six patient-specific models from the Vascular Model Repository. In our
examples, our initialization protocol achieves periodic convergence within one
or two cardiac cycles, leading to a significant reduction in computational
cost compared to standard methods. All computational tools used in this work
are implemented in the open-source software platform SimVascular.
Automatically generated initial conditions have the potential to significantly
reduce computation time in cardiovascular fluid dynamics simulations.
## 1 Introduction
Three-dimensional (3D) blood flow simulations are commonly coupled with zero-
dimensional (0D) lumped parameter models, representing the downstream
vasculature at the model’s boundary [1, 2, 3, 4]. These lumped parameter
models are analogous to an electric circuit, with resistors and capacitors
modeling the viscosity of the blood and the elasticity of the vessel wall,
respectively. A popular choice is the three-element Windkessel model, also
known as the RCR model [5]. The Windkessel consists of a proximal resistance
in series with a parallel distal resistance and capacitance (Figure 1).
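The transient "charging" behavior of such a boundary condition can be illustrated with a minimal 0D sketch (the parameter values and sinusoidal inflow below are assumptions for illustration, not patient data; $P = R_pQ + P_c$ with $C\,\dot{P}_c = Q - P_c/R_d$ and zero distal reference pressure):

```python
import math

# Illustrative (not patient-specific) Windkessel parameters, CGS-like units
Rp, C, Rd = 100.0, 1e-3, 1000.0       # proximal R, capacitance, distal R
T, dt = 1.0, 1e-4                     # cardiac period [s], time step [s]

def Q_in(t):                          # simple pulsatile inflow [cm^3/s]
    return 10.0 + 5.0*math.sin(2*math.pi*t/T)

def run_cycles(n_cycles, Pc0=0.0):
    """Forward-Euler integration; returns the Pc trace of each cycle."""
    Pc, steps = Pc0, int(round(T/dt))
    cycles = []
    for _ in range(n_cycles):
        trace = []
        for k in range(steps):
            Pc += dt*(Q_in(k*dt) - Pc/Rd)/C   # C dPc/dt = Q - Pc/Rd
            trace.append(Pc)
        cycles.append(trace)
    return cycles

# the cycle-to-cycle change in mean Pc decays as the capacitor charges,
# which is why several cycles are needed before a periodic state is reached
means = [sum(c)/len(c) for c in run_cycles(8)]
diffs = [abs(means[i+1] - means[i]) for i in range(len(means) - 1)]
assert all(diffs[i+1] < diffs[i] for i in range(len(diffs) - 1))
```

With time constant $R_dC = 1$ s, the transient decays by roughly a factor $e^{-1}$ per cardiac cycle, mirroring the slow approach to periodicity discussed below.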
The RCR boundary condition, like many other outflow boundary conditions,
contains a capacitor that stores blood volume. While these capacitors are
“charging”, it typically takes several cardiac cycles with a periodic
pulsatile inflow to reach a periodic state. A periodic state is here defined
as two consecutive cardiac cycles yielding results for pressure and flow rate
that agree within a given tolerance. Mathematically, this corresponds to the
limit cycle of the model. It is essential to extract simulation results only
when they have reached this periodic state, e.g., when comparing them to in
vivo measurements. However, there are currently no clear guidelines on how to
determine whether this periodic state has been achieved.
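The two-consecutive-cycle criterion above can be sketched in a few lines (the plain relative L2 metric and the tolerance are illustrative assumptions; the asymptotic error measure actually proposed in this work is developed in Section 2.4):

```python
import math

def cycle_difference(signal, steps_per_cycle):
    """Relative L2 difference between the last two cardiac cycles of a
    sampled pressure or flow signal."""
    last = signal[-steps_per_cycle:]
    prev = signal[-2*steps_per_cycle:-steps_per_cycle]
    num = math.sqrt(sum((a - b)**2 for a, b in zip(prev, last)))
    den = math.sqrt(sum(b*b for b in last))
    return num/den

def is_periodic(signal, steps_per_cycle, tol=1e-3):
    return cycle_difference(signal, steps_per_cycle) < tol

# toy signal: a sinusoid plus a decaying transient (a "charging" capacitor)
n = 200
sig = [math.sin(2*math.pi*k/n) + math.exp(-k/n) for k in range(10*n)]
assert not is_periodic(sig[:2*n], n)   # early cycles still drifting
assert is_periodic(sig, n)             # late cycles periodic within tol
```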
Unfortunately, running several cardiac cycles of a 3D simulation is
computationally expensive, typically requiring a high-performance computer.
The computation time scales linearly with the number of cardiac cycles since
they cannot be run in parallel. A common practice is initializing a simulation
with results from a steady-state solution [5]. Here, a computationally
inexpensive simulation with constant inflow is computed first. Its solution is
then used as an initial condition for the simulation with a periodic pulsatile
inflow condition. However, as we will show in this work, a simulation
initialized with a steady-state solution often still requires several cardiac
cycles to reach a reasonably periodic state.
Our goal is twofold. First, we introduce metrics and tools to quantify whether
a simulation has yet reached a periodic state within a given tolerance.
Second, we shorten the computation time of the computationally expensive 3D
simulation by leveraging reduced-order modeling. It was previously shown that
one-dimensional (1D) models of cardiovascular fluid dynamics could accurately
approximate integral quantities of 3D solutions such as velocity and pressure
at the outlets [6, 7, 8, 9, 10, 11, 12]. In this work, we propose a novel
method to initialize a 3D simulation using the results from an inexpensive 1D
simulation. This framework is fully automated and requires no user
interaction.
## 2 Methods
We begin by revisiting the governing equations of 3D, 1D, and 0D fluid
dynamics in Sections 2.1, 2.2, and 2.3. Following theoretical considerations
of the convergence of lumped parameter models, we define suitable error
metrics to determine the difference between a 3D simulation and its periodic
state in Section 2.4. In Section 2.5, we introduce a tool to determine whether
a simulation has yet reached its periodic state. Finally, we develop a method
to initialize a 3D simulation from a 1D simulation in Section 2.6 in order to
jumpstart initialization and reduce computational cost.
### 2.1 3D flow physics
The dynamics of blood flow in the cardiovascular system is mathematically
governed by the incompressible 3D Navier-Stokes equations,
$\displaystyle\rho\left(\dot{\boldsymbol{v}}+\boldsymbol{v}\cdot\nabla\boldsymbol{v}\right)$
$\displaystyle=-\nabla P+\mu\nabla^{2}\boldsymbol{v}+\boldsymbol{f},$ in
$\displaystyle\Omega,$ (1) $\displaystyle\nabla\cdot\boldsymbol{v}$
$\displaystyle=0,$ in $\displaystyle\Omega.$ (2)
The first equation in this system represents conservation of momentum for a
Newtonian, incompressible fluid, where $\boldsymbol{v}$ is the velocity of the
blood flow, $P$ is pressure, $\boldsymbol{f}$ is a body force, $\rho$ is the
density of the blood, and $\mu$ is the dynamic viscosity. The second equation
in this system represents conservation of mass. In the computational
cardiovascular modeling and simulation context, we typically numerically solve
the 3D Navier-Stokes equations in patient-specific models of vascular
anatomies to simulate hemodynamics. Simulation results are used to elucidate
the relationship between cardiovascular diseases and fluid mechanics, for
personalized treatment planning, and to aid the development of novel
biomedical technologies. The initial conditions for velocity and pressure are
$\displaystyle\boldsymbol{v}(\boldsymbol{x},t=0)=\boldsymbol{v}_{0}(\boldsymbol{x}),\quad
P(\boldsymbol{x},t=0)=P_{0}(\boldsymbol{x}).$ (3)
Boundary conditions that model the portion of the cardiovascular system not
captured by the anatomical model must be provided as well. A flow rate, $Q$,
is commonly prescribed at the inlet surfaces of the 3D vascular model, where
the flow rate is computed via integration of the normal velocity over each
inlet surface,
$\displaystyle Q(\boldsymbol{x},t)$
$\displaystyle=\int_{\Gamma}\boldsymbol{v}\cdot\boldsymbol{n}\,d\Gamma=Q_{in}(t),$
on $\displaystyle\Gamma_{in}.$ (4)
On the other hand, lumped parameter models, also known as 0D models, are
commonly used as boundary conditions at the outlets of the model. These lumped
parameter models usually relate the pressure to the flow rate via parametric
differential-algebraic equations,
$\displaystyle P(\boldsymbol{x},t)$
$\displaystyle=f(\boldsymbol{x},t,Q_{out}(t),\dot{Q}_{out}(t),\boldsymbol{\phi}),$
on $\displaystyle\Gamma_{out},$ (5)
where $\boldsymbol{\phi}$ represents the set of variables parametrizing the
differential equation. A discussion of 0D models and some commonly used outlet
boundary conditions is provided in section 2.3.
We generate 3D patient-specific models using SimVascular, an open-source,
comprehensive modeling, and simulation software for vascular anatomies
(simvascular.org) [13]. The models are simulated in our open-source solver
svSolver using the Finite Element Method (FEM)
(github.com/SimVascular/svSolver). It uses linear P1-P1 elements with a
streamline upwind Petrov-Galerkin and pressure-stabilizing Petrov-Galerkin
formulation (SUPG/PSPG) [14]. The pressure and momentum stabilization is
detailed in [15, 16]. Furthermore, it uses a linear solver with specialized
pre-conditioners tailored to handle large vascular resistances coupled at
outflow boundaries [17]. All 3D simulations in this work were run on
Stanford’s Sherlock supercomputing cluster using four 12-core Intel Xeon Gold
5118 CPUs.
### 2.2 1D flow physics
In contrast to 3D models, one-dimensional (1D) models have only a single
spatial dimension, the axial dimension along the centerline of the vessel [18,
6, 19]. Due to this lack of 3D spatial information, 1D models are capable of
simulating only bulk flow rate and cross-sectionally averaged pressure at
each centerline node of the 1D finite element model. We integrate
incompressible 3D Navier-Stokes equations (1) over the cross-section while
assuming Newtonian fluid properties for the blood and an axisymmetric
parabolic flow profile to obtain the governing equations for the 1D model.
This process yields
$\displaystyle\dot{S}+\frac{\partial Q}{\partial z}$ $\displaystyle=0,$ (6)
$\displaystyle\dot{Q}+\frac{4}{3}\frac{\partial}{\partial
z}\frac{Q^{2}}{S}+\frac{S}{\rho}\frac{\partial P}{\partial z}$
$\displaystyle=Sf-8\pi\nu\frac{Q}{S}+\nu\frac{\partial^{2}Q}{\partial z^{2}},$
(7)
with flow rate $Q$, cross-sectional area $S$, pressure $P$, density $\rho$,
body force $f$, and kinematic viscosity $\nu$. The coordinate $z$ represents
the axial dimension of the 1D model. To solve these equations, we also require a
constitutive law to relate the pressure to the cross-sectional area. In this
work, we use the constitutive relationship proposed by Olufsen [20],
$\displaystyle
P(z,t)=P^{0}(z)+\frac{4}{3}\frac{Eh}{r^{0}(z)}\left(1-\sqrt{\frac{S^{0}(z)}{S(z,t)}}\right),\quad\frac{Eh}{r^{0}(z)}=k_{1}e^{k_{2}r^{0}(z)}+k_{3},
(8)
where $E$ is the Young’s modulus of the blood vessel, $h$ is the wall
thickness, $P^{0}$ is the reference pressure, $r^{0}$ is the reference radius,
and $k_{1}$, $k_{2}$, and $k_{3}$ are empirically derived constants.
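The constitutive law (8) can be sketched as a simple area-to-pressure map. The $k$ values below are the constants commonly quoted for Olufsen's fit in CGS units, included here only for illustration; treat them as assumptions rather than values taken from this work:

```python
import math

# assumed illustrative constants for Eh/r0 = k1*exp(k2*r0) + k3 (CGS units)
k1, k2, k3 = 2.0e7, -22.53, 8.65e5

def pressure(S, r0, P0=0.0):
    """Cross-sectional area -> pressure via the constitutive law (8),
    with reference area S0 = pi*r0^2."""
    S0 = math.pi*r0**2
    Eh_over_r0 = k1*math.exp(k2*r0) + k3      # stiffness Eh/r0 from the fit
    return P0 + (4.0/3.0)*Eh_over_r0*(1.0 - math.sqrt(S0/S))

r0 = 0.3
S0 = math.pi*r0**2
assert pressure(S0, r0) == 0.0       # at the reference area, P = P0
assert pressure(1.1*S0, r0) > 0.0    # dilation raises pressure
assert pressure(0.9*S0, r0) < 0.0    # constriction lowers it
```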
Furthermore, for models with multiple vascular branches, mass conservation is
obeyed at the junction regions, and the pressure is assumed to be constant
between the inlet and outlets of the junctions [6]. As with 3D models, we need
initial conditions to initialize the simulation,
$\displaystyle Q(z,t=0)=Q_{0}(z),\quad P(z,t=0)=P_{0}(z),\quad
S(z,t=0)=S_{0}(z),$ (9)
as well as inlet and outlet boundary conditions, discussed in Section 2.3, to
represent the portion of the cardiovascular system not reflected in our 1D model,
$\displaystyle Q(z,t)$ $\displaystyle=Q_{in}(t),$ in
$\displaystyle\Gamma_{in},$ (10) $\displaystyle\quad P(z,t)$
$\displaystyle=f(z,t,Q_{out}(t),\dot{Q}_{out}(t),\boldsymbol{\phi}),$ in
$\displaystyle\Gamma_{out}.$ (11)
We generate and simulate the 1D centerline and finite element models using
SimVascular and VMTK [21]. We solve the set of differential equations using
our open-source solver svOneDSolver (github.com/SimVascular/svOneDSolver). We
employ a stabilized space-time finite element method based on the
discontinuous Galerkin method in time [6]. The spatial discretization employs
continuous piecewise linear polynomials whereas we use a piecewise constant
temporal discretization. For more background information, see [22, 23, 24].
### 2.3 0D flow physics
The third model fidelity we consider in this work is the zero-dimensional (0D)
model. Unlike 3D and 1D models, 0D models lack spatial information. However,
as with 1D models, 0D models can accurately simulate bulk flow rates and
pressures in the cardiovascular system [25, 26].
These 0D models are composed of individual lumped-parameter elements that
connect to form an entire complex lumped parameter network. There are many
lumped parameter elements commonly used in the context of cardiovascular
modeling and simulation. Some of these elements include resistors, capacitors,
and inductors [26]. Resistors model the viscous effects of blood flow,
capacitors represent the elastic nature of blood vessels, while inductors
capture the inertia of the blood flow. In lumped parameter networks that model
the heart, diode elements are also employed. These diodes mimic the behavior
of heart valves, where they allow flow to pass only when the valves are open
[27, 28]. The flow rates and pressures in each of these elements are
respectively governed by the following linear differential and algebraic
equations,
$\displaystyle\Delta P=RQ,\quad Q=C\Delta\dot{P},\quad\Delta P=L\dot{Q},\quad
Q=\frac{|Q|+Q}{2},$ (12)
where $R$ is the resistance, $C$ is the capacitance, $L$ is the inductance,
and $\Delta P$ is the pressure drop across the element. Lumped parameter 0D
models are also analogous to electrical circuits, where the flow rate and
pressure are representative of current and voltage, respectively. In this
work, we restrict our attention to resistors and capacitors.
Lumped parameter networks are commonly used in two contexts. First, 0D models
can be used as surrogate models of entire vascular anatomies [25]. In this
case, each blood vessel in the vascular system is represented by one or more
lumped-parameter elements in the 0D model, where the value of each element is
determined by the geometric and material properties of the blood and blood
vessel. Second, 0D models can be used to represent boundary conditions in 3D
and 1D models [27, 29, 17, 30, 31]. In this context, each 0D element
reflects a different downstream (or upstream) anatomical feature of the
cardiovascular system. To employ the 0D models as boundary conditions, they
must be numerically coupled to the 3D or 1D finite element models [17]. The
governing equations for these 0D models then can be numerically solved using
traditional time-stepping schemes, such as an explicit fourth-order Runge-
Kutta method.
Some of the 0D models most commonly employed as boundary conditions are the
resistance model and the 3-element Windkessel model [5, 31]. The resistance
model is composed of a single linear resistor element that captures the
downstream resistance of the vascular network not portrayed in the 3D or 1D
model. On the other hand, the 3-element Windkessel model, also known as the
RCR model, as shown in Figure 1, models the proximal resistance, distal
resistance, and compliance of the downstream vasculature using two resistor
elements and a capacitor. The RCR boundary condition is discussed further in
the next section.
To simulate our 0D surrogate models, which represent the vessels using linear
resistors, we solve the governing system of equations using an in-house,
modular numerical solver that employs the implicit generalized-$\alpha$ scheme
for time advancement [32].
Figure 1: Flowchart of method to check periodic state of 3D simulation
results.
### 2.4 The RCR boundary condition
Thus far, we have introduced three different model fidelities commonly used in
computational vascular modeling and simulation. Before any of the models and
simulation results can be used in predictive and scientific applications, the
quantities of interest, primarily the flow rates and pressures, must be
simulated until they converge to a periodic state. Here, a simulated quantity
of interest is considered periodic if its values in two adjacent periods are
the same, within a defined tolerance. In the cardiovascular context, a period
is typically defined as a single cardiac cycle. We
expound on the concept of a periodic state in this section, using the RCR
model to illustrate the primary concepts.
The three parameters of the RCR boundary condition are commonly tuned to
clinical measurements, e.g., phase-contrast magnetic resonance imaging and
pressure measurements, by solving an optimization problem that matches the
minimal and maximal pressures over a cardiac cycle, the measured flows, and
the flow distribution among the different outlets [33]. A common strategy is
to tune the total resistance and capacitance of the model to produce a
physiologic pressure waveform and then distribute them proportionally to the
vessel outlet areas [34].
We begin by reviewing the response of a single RCR model to pulsatile inflow.
The governing differential equation for the inlet pressure, $P$, of the
3-element Windkessel model, as a function of the inlet flow rate, $Q$, is
$\displaystyle\dot{P}+\frac{P}{\tau}=R_{p}\dot{Q}+\frac{1}{\tau}(R_{p}+R_{d})Q,\quad\tau=R_{d}C,$
(13)
where $R_{p}$ is the proximal resistance, $C$ is the capacitance, and $R_{d}$
is the distal resistance. We obtain the semi-analytical solution [29] for this
ordinary differential equation as
$\displaystyle
P(t)=\underbrace{[P(0)-R_{p}Q(0)]\,e^{-t/\tau}}_{\text{I}}+\underbrace{R_{p}Q(t)}_{\text{II}}+\underbrace{\int_{0}^{t}\frac{e^{-(t-\tilde{t})/\tau}}{C}Q(\tilde{t})\,\mathrm{d}\tilde{t}}_{\text{III}},\quad\tau=R_{d}C>0,$ (14)
which depends on the inflow $Q$ and the time constant $\tau$. We can identify
three different terms in this equation: (I) exponential decay of the initial
solution, (II) pressure drop at proximal resistance, (III) pressure drop at
sub-circuit $R_{d}C$. Assuming a constant inflow $\bar{Q}>0$ for $t>0$, we
obtain the pressure step response for the RCR boundary condition as
$\displaystyle P(t)=P_{\infty}+e^{-t/\tau}\,[P_{0}-P_{\infty}],\quad
P_{0}=P(0),\quad\lim_{t\to\infty}P(t)=P_{\infty}=\bar{Q}(R_{p}+R_{d}),$ (15)
starting at the initial pressure $P_{0}$ and exponentially approaching the
asymptotic pressure $P_{\infty}$ for the limit $t\to\infty$. We now define the
periodic inflow $Q(t)$ as
$\displaystyle
Q(t+T)=Q(t),\quad\bar{Q}=\frac{1}{T}\int_{0}^{T}Q(t)\,\mathrm{d}t,$ (16)
where the period $T$ is the length of a cardiac cycle and $\bar{Q}$ the time-
averaged mean flow. The mean pressure in the $n$-th cardiac cycle is denoted
by
$\displaystyle\bar{P}_{n}=\frac{1}{T}\int_{nT}^{(n+1)T}P(t)\,\mathrm{d}t,\quad n\in\mathbb{N}^{+}_{0}.$ (17)
With this notation, Equation (15) can be reformulated for a non-constant,
periodic pulsatile inflow as
$\displaystyle\bar{P}_{n}=\bar{P}_{\infty}+e^{-nT/\tau}\,[\bar{P}_{0}-\bar{P}_{\infty}],\quad\bar{P}_{\infty}=\lim_{n\to\infty}\bar{P}_{n},$
(18)
starting at the initial mean pressure $\bar{P}_{0}$ and approaching the
asymptotic mean pressure $\bar{P}_{\infty}$. Note that the asymptotic mean
pressure $\bar{P}_{\infty}$ cannot, in general, be determined analytically but
depends on the function $Q(t)$. Notably, $\bar{P}_{\infty}$ differs from the
asymptotic pressure $P_{\infty}$ that $P$ approaches for a steady inflow with
the same mean $\bar{Q}$.
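Equation (18) implies that the cycle means approach $\bar{P}_{\infty}$ geometrically, with consecutive increments shrinking by exactly $e^{-T/\tau}$. This can be checked numerically; the sketch below integrates the RCR ordinary differential equation (Equation 13) with a sinusoidal pulsatile inflow using a simple forward-Euler scheme. All parameter values and the inflow waveform are illustrative.

```python
import math

Rp, Rd, C = 100.0, 900.0, 1e-3           # illustrative values, tau = 0.9 s
T, nsteps = 1.0, 20000                   # cycle length [s], Euler steps per cycle
dt, tau = T / nsteps, Rd * C
Qbar = 1.0
Q = lambda t: Qbar * (1.0 + 0.5 * math.sin(2 * math.pi * t / T))   # periodic inflow
dQ = lambda t: Qbar * math.pi / T * math.cos(2 * math.pi * t / T)  # its derivative

P, t, means = 0.0, 0.0, []
for n in range(8):                       # eight cardiac cycles from zero pressure
    acc = 0.0
    for _ in range(nsteps):
        P += dt * (-P / tau + Rp * dQ(t) + (Rp + Rd) * Q(t) / tau)
        t += dt
        acc += P * dt
    means.append(acc / T)                # cycle-mean pressure Pbar_n

# Equation (18): increments between consecutive cycle means decay by exp(-T/tau)
ratio = (means[5] - means[4]) / (means[4] - means[3])
assert abs(ratio - math.exp(-T / tau)) < 1e-2
```

Because the ODE is linear, the solution is a periodic particular solution plus a homogeneous term decaying as $e^{-t/\tau}$; averaging over cycles preserves exactly this geometric decay, which the assertion verifies.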
In the remainder of this section, we will define an error metric to quantify
the difference between the pressure $\bar{P}_{n}$ in the $n$-th cardiac cycle
and the asymptotic pressure $\bar{P}_{\infty}$. For simplicity of notation, we
show the following derivations for $\bar{P}_{n}<\bar{P}_{\infty}$, i.e. the
pressure approaches the asymptotic pressure “from below”. However, the
conclusions hold for any choice of initial pressure. We define the asymptotic
error $\epsilon_{\infty}$ as
$\displaystyle
0<\epsilon_{\infty}\leq\frac{\bar{P}_{\infty}-\bar{P}_{n}}{\bar{P}_{\infty}}=e^{-nT/\tau}\cdot\left(1-\frac{\bar{P}_{0}}{\bar{P}_{\infty}}\right).$
(19)
Thus, we can calculate the number of cardiac cycles $n_{\infty}$ required for
the RCR boundary condition to reach a periodic state within the tolerance
$\epsilon_{\infty}$ as
$\displaystyle
n_{\infty}\leq-\frac{\tau}{T}\cdot\ln{\frac{\epsilon_{\infty}}{1-\bar{P}_{0}/\bar{P}_{\infty}}},\quad
n_{\infty}\in\mathbb{N}^{+}.$ (20)
Several observations can be made from Equation (20). First, the number of
cycles scales linearly with $\tau/T$, the ratio of the length of the time
constant $\tau$ to the length of one cardiac cycle. A two-fold increase in the
time constant doubles the number of cardiac cycles required to reach the same
tolerance $\epsilon_{\infty}$. Second, the closer the initial pressure
$\bar{P}_{0}$ to the asymptotic pressure $\bar{P}_{\infty}$ the fewer cardiac
cycles $n_{\infty}$ are required to achieve periodic convergence with an error
$\epsilon_{\infty}$. The error $\epsilon_{\infty}$ and the number of cardiac
cycles $n_{\infty}$ cannot, in general, be determined a priori since the
constant $\bar{P}_{\infty}$ cannot be evaluated analytically. Only in the
special case of starting from zero initial conditions, i.e., $\bar{P}_{0}=0$,
can the number of cardiac cycles to periodic convergence be given directly as
$\displaystyle n_{\infty}\leq-\frac{\tau}{T}\cdot\ln\epsilon_{\infty}.$ (21)
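Equation (21) gives an immediate, practical estimate of the simulation budget. A minimal helper, with the tolerance defaulting to $\epsilon_{\infty}=1\,\%$:

```python
import math

def cycles_to_convergence(tau_over_T, eps=0.01):
    """Upper bound on cardiac cycles to reach asymptotic error eps when
    starting from zero initial conditions (Equation 21)."""
    return math.ceil(-tau_over_T * math.log(eps))

# For eps = 1%, each unit of tau/T costs about -ln(0.01) ~ 4.6 cycles
assert cycles_to_convergence(1.0) == 5
assert cycles_to_convergence(9.6) == 45
```

For the model time constants $\bar{\tau}/T\in[0.3,9.6]$ reported later in Section 3.1.1, this bound gives 2 to 45 cycles, consistent with the observed range of 2 to 44 (the expression is an upper bound, not an equality).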
We can easily compute the difference $\bar{P}_{n}-\bar{P}_{n-1}$ between two
cardiac cycles numerically without the solution having reached a periodic
state. Using this difference, we define the cyclic error $\epsilon_{n}$
between two consecutive cardiac cycles as
$\displaystyle 0<\epsilon_{n}$
$\displaystyle\leq\frac{\bar{P}_{n}-\bar{P}_{n-1}}{\bar{P}_{\infty}}=e^{-nT/\tau}\cdot\left(1-\frac{\bar{P}_{0}}{\bar{P}_{\infty}}\right)\cdot[e^{T/\tau}-1],\quad
n\geq 2.$ (22)
We then define the ratio $\alpha$ between asymptotic error $\epsilon_{\infty}$
and cyclic error $\epsilon_{n}$ as
$\displaystyle\alpha=\frac{\epsilon_{\infty}}{\epsilon_{n}}=\frac{\bar{P}_{\infty}-\bar{P}_{n}}{\bar{P}_{n}-\bar{P}_{n-1}}=\frac{1}{e^{T/\tau}-1},\quad\Rightarrow\quad\epsilon_{\infty}>\epsilon_{n}\
\text{for}\ \frac{\tau}{T}>\frac{1}{\ln 2}\approx 1.44.$ (23)
Equation (23) shows that in general $\epsilon_{n}\neq\epsilon_{\infty}$. While
error metrics like $\epsilon_{n}$ are commonly used in practice to determine
whether a simulation has reached a periodic state within a given tolerance,
$\epsilon_{n}$ is not an adequate metric for this purpose. For time constants
$\tau$ larger than 1.44 times the length of the cardiac cycle $T$, the cyclic
error $\epsilon_{n}$ underestimates the asymptotic error $\epsilon_{\infty}$.
In simulations with a large time constant $\tau\gg T$, the error
$\epsilon_{n}$ will appear small, despite the solution being far from a
periodic state, due to its slow convergence. As a remedy, we propose a method
in Section 2.5 to estimate the asymptotic error $\epsilon_{\infty}$ while the
simulation is being computed.
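The ratio in Equation (23) is easy to evaluate. The sketch below reproduces the crossover at $\tau/T=1/\ln 2$ and the regime where the cyclic error understates the asymptotic error; the values are illustrative.

```python
import math

def alpha(tau_over_T):
    # Ratio of asymptotic to cyclic error (Equation 23)
    return 1.0 / (math.exp(1.0 / tau_over_T) - 1.0)

# Crossover: alpha = 1 exactly at tau/T = 1/ln(2) ~ 1.44
assert abs(alpha(1.0 / math.log(2.0)) - 1.0) < 1e-9
# For a time constant of tau/T ~ 4.4, the cyclic error understates the
# asymptotic error by a factor of about 3.9
assert abs(alpha(4.4) - 3.92) < 0.01
# alpha grows with tau/T: slowly converging models look deceptively converged
assert alpha(10.0) > alpha(2.0)
```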
### 2.5 Checking the periodicity of 3D simulations
The previous section introduced analytical methods to check the periodicity of
a single RCR model subjected to a prescribed inflow condition. This section
extends that discussion to computational vascular models and presents a method
to check the periodicity of patient-specific, multi-outlet 3D models, where
each outlet is coupled to a different RCR boundary condition. The steps of
this method are summarized in the flowchart shown in Figure 1.
As mentioned in the previous section, $\bar{P}_{\infty}$ generally cannot be
predicted analytically. However, a value of $\bar{P}_{\infty}$ is needed to
check if the simulated flow rate and pressure at each outlet of the 3D model
have converged to a periodic state. To remedy this issue, we estimate
$\bar{P}_{\infty}$ for each outlet by simulating a separate 0D model of the
attached RCR boundary condition. The inflow to each 0D RCR model is the
simulated 3D flow rate, $Q_{3D}$, corresponding to that outlet. We then
simulate these simple 0D RCR models for many cardiac cycles, to guarantee
periodic convergence of the 0D RCR model, and use the final simulated pressure
values, $P_{0D}$, as our estimates of $\bar{P}_{\infty}$.
Furthermore, as previously discussed, $\epsilon_{\infty}$ represents the
asymptotic error of the pressure solution. We therefore specify a desired
value of $\epsilon_{\infty}$ and use it as our criterion for periodic
convergence. Our 3D pressure solution, $P_{3D}$, is considered periodically
converged, as per Equation (19), if it matches $P_{0D}$ within this error
threshold. Mathematically, this condition is expressed as
$\displaystyle\frac{P_{3D}-P_{0D}(Q_{3D})}{P_{0D}(Q_{3D})}\leq\epsilon_{\infty}.$
(24)
Note that $Q_{3D}$ should be reasonably close to a periodic state before we
can use it as the inflow to our RCR models. Typically, $Q_{3D}$ converges much
faster than $\bar{P}_{3D}$, i.e., within one or two cardiac cycles. This point
is discussed further in Section 3.1.3.
Lastly, each outlet of our 3D models generally has a unique time constant
$\tau$, although, in practice, parameter tuning yields similar time constants
across outlets. It can be shown that the convergence of each outlet is determined by
a single model time constant $\bar{\tau}$ that can be approximated by the mean
of all individual time constants. As such, we can use this average time
constant to estimate the number of cardiac cycles for which our 3D models must
be simulated to achieve periodic convergence.
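The check described above can be sketched as follows: a standalone 0D RCR model is driven by one cycle of the 3D outlet flow, repeated for many cycles, and Equation (24) is then evaluated on cycle-averaged pressures. The forward-Euler integrator and parameter values are illustrative simplifications, not the coupled solver used in this work.

```python
def rcr_periodic_pressure(q_cycle, dt, Rp, Rd, C, n_cycles=200):
    """Drive a standalone 0D RCR model with one cycle of 3D outlet flow,
    repeated for many cycles, and return the pressure over the final cycle
    as an estimate of the periodic state (forward-Euler sketch)."""
    tau = Rd * C
    m = len(q_cycle)
    P = 0.0
    p_cycle = [0.0] * m
    for _ in range(n_cycles):
        for i, q in enumerate(q_cycle):
            dq = (q_cycle[(i + 1) % m] - q) / dt   # periodic flow derivative
            P += dt * (-P / tau + Rp * dq + (Rp + Rd) * q / tau)
            p_cycle[i] = P
    return p_cycle

def is_periodic(p3d_mean, p0d_mean, eps=0.01):
    # Convergence criterion of Equation (24), on cycle-averaged pressures
    return abs(p3d_mean - p0d_mean) / p0d_mean <= eps

# Sanity check with a constant inflow: the 0D estimate settles at Q*(Rp + Rd)
q = [1.0] * 1000
p = rcr_periodic_pressure(q, 1e-3, 100.0, 1000.0, 1e-4)
p_mean = sum(p) / len(p)
assert abs(p_mean - 1100.0) < 1.0
```

In practice each outlet of the 3D model gets its own such 0D companion model, and the 3D simulation is declared periodically converged once every outlet satisfies the criterion.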
### 2.6 Generating initial conditions
As previously discussed, 3D models must be simulated until the flow rates and
pressures converge to a periodic state before they can be applied in
scientific or clinical investigations, which currently requires running the
simulation for several cardiac cycles. This limits the utility of
computational 3D models and simulations in real-world clinical applications.
To alleviate this bottleneck, we introduce a novel method to jumpstart the
initialization of 3D simulations. In particular, we generate initial
conditions to minimize the number of cardiac cycles required to reach periodic
convergence within a chosen tolerance. This process is fully automated in
SimVascular [13] and requires no user input. Starting from a periodic 1D
solution (Section 2.6.1) we create a map from the centerline, the 1D
representation of the model, to the 3D volume mesh (Section 2.6.2), and
generate an artificial initial velocity vector field (Section 2.6.3) for
simulation initialization.
#### 2.6.1 Generating a periodic 1D solution
We automatically generate a 1D flow model of the high-fidelity 3D flow model
using the SimVascular 1D-plugin. We then run the model until it achieves
periodic convergence and extract the solution of the last cardiac cycle.
#### 2.6.2 Mapping centerline to volume mesh
To map the 1D solution to the 3D Finite Element mesh, we create a map
$\boldsymbol{I}$ from nodes $P_{\text{1D}}$ on the 1D centerline to nodes
$P_{\text{3D}}$ in the 3D volume mesh. This allows us to map quantities
defined on the centerline to the volume mesh, such as 1D flow and pressure,
cross-sectional area, and normal vectors. The iterative process is outlined in
Algorithm 1 and visualized in Figure 2. In Lines 1–10, we first create a set of
seed points $P_{\text{seed}}$ consisting of volume mesh nodes $P_{\text{3D}}$
that are closest to the centerline nodes $P_{\text{1D}}$ and store the
corresponding indices in $\boldsymbol{I}$. To do this, we find the shortest
Euclidean distances between centerline coordinates
$\boldsymbol{x}^{\text{1D}}_{p}$ and 3D mesh coordinates
$\boldsymbol{x}^{\text{3D}}_{j}$. We then employ a region growing algorithm
(Lines 11–41) to grow the 1D-3D map outwards, starting from the centerline seed
points $P_{\text{seed}}$. The loop in Lines 18–35 selects nodes
$P_{\text{new}}$ in the new layer from the previous layer $P_{\text{old}}$
using cell-connectivity. Finally, the map is expanded in Lines 36–38 by assigning
nodes in the new layer $P_{\text{new}}$ to the same 1D node as the closest 3D
node in the previous layer $P_{\text{old}}$. This results in centerline nodes
being roughly assigned to 3D mesh nodes within the same cross-section. For 3D
meshes with $\mathcal{O}(10^{6})$ nodes Algorithm 1 only takes a few seconds
to complete on a single CPU. We generate this map only once and use it
repeatedly to map various centerline quantities to the volume mesh, see Figure
3.
// initialize empty map
1 $\boldsymbol{I}\leftarrow[0,\dots,0]$
// initialize 3D seed points
2 $P_{\text{seed}}\leftarrow\\{\\}$
// loop all 1D points
3 for _$p\in P_{\text{1D}}$ _ do
// find closest 3D point
4 $q=\underset{j\in P_{\text{3D}}}{\text{arg\,min}}\leavevmode\nobreak\
||\boldsymbol{x}^{\text{1D}}_{p}-\boldsymbol{x}^{\text{3D}}_{j}||$
5 if _$q\notin P_{\text{seed}}$_ then
// add point to 3D seed points
6 $P_{\text{seed}}\leftarrow P_{\text{seed}}\cup\\{q\\}$
// assign map
7 $\boldsymbol{I}_{q}=p$
8 end if
9
10 end for
11 $C_{\text{all}}\leftarrow\\{\\}$
12 $P_{\text{all}}\leftarrow P_{\text{seed}}$
13 $P_{\text{new}}\leftarrow P_{\text{seed}}$
14 $i\leftarrow 0$
// in each iteration, grow seed points one layer outward
15 while _$|P_{\text{new}}| >0$ _ do
16 $P_{\text{old}}\leftarrow P_{\text{new}}$
17 $P_{\text{new}}\leftarrow\\{\\}$
// loop all 3D points in previous layer
18 for _$p\in P_{\text{old}}$ _ do
// loop all 3D cells connected to point
19 for _$c\in$ PointCells$(p)$_ do
// skip 3D cells in previous layers
20 if _$c\notin C_{\text{all}}$_ then
21 $C_{\text{all}}\leftarrow C_{\text{all}}\cup\\{c\\}$
// loop all 3D points connected to cell
22 for _$q\in\text{CellPoints}(c)$_ do
// skip 3D points in previous layers
23 if _$q\notin P_{\text{all}}$_ then
// add 3D points to new layer
24 $P_{\text{new}}\leftarrow P_{\text{new}}\cup\\{q\\}$
25 $P_{\text{all}}\leftarrow P_{\text{all}}\cup\\{q\\}$
26
27 end if
28
29 end for
30
31 end if
32
33 end for
34
35 end for
// loop all 3D points in new layer
36 for _$p\in P_{\text{new}}$ _ do
// assign map according to map of closest 3D point in previous layer
37 $\boldsymbol{I}_{p}=\boldsymbol{I}_{q}\text{\leavevmode\nobreak\
with\leavevmode\nobreak\ }q=\underset{j\in
P_{\text{old}}}{\text{arg\,min}}\leavevmode\nobreak\
||\boldsymbol{x}^{\text{3D}}_{p}-\boldsymbol{x}^{\text{3D}}_{j}||$
38 end for
39 $i\leftarrow i+1$
40
41 end while
Algorithm 1 Mapping centerline nodes to nodes in the 3D volume mesh.
Figure 2: Mapping centerline nodes to nodes in the 3D volume mesh. Colors
correspond to the node order of the centerline. The centerline within 3D
volume mesh is shown on the left. The figures from left to right show the
current set of all points $P_{\text{all}}$ for iterations $i\in\\{2,3,5,24\\}$
until Algorithm 1 converges.
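A toy Python sketch of Algorithm 1 is given below, assuming mesh coordinates and cells are plain lists; the function and variable names are hypothetical. It reproduces the seeding step and the layer-by-layer region growing via cell connectivity.

```python
import math

def map_centerline_to_mesh(xc, xm, cells):
    """Sketch of Algorithm 1. xc: centerline node coords, xm: 3D mesh node
    coords, cells: lists of mesh node indices. Returns I, mapping each mesh
    node to a centerline node index."""
    n = len(xm)
    I = [0] * n
    # Seeding (Lines 1-10): tag the mesh node closest to each centerline node
    seed = set()
    for p, x in enumerate(xc):
        q = min(range(n), key=lambda j: math.dist(x, xm[j]))
        if q not in seed:
            seed.add(q)
            I[q] = p
    # Invert cell connectivity: which cells touch each point (PointCells)
    point_cells = [[] for _ in range(n)]
    for c, pts in enumerate(cells):
        for p in pts:
            point_cells[p].append(c)
    # Region growing (Lines 11-41): expand one layer per iteration
    all_pts, new, used_cells = set(seed), set(seed), set()
    while new:
        old, new = new, set()
        for p in old:
            for c in point_cells[p]:
                if c not in used_cells:
                    used_cells.add(c)
                    for q in cells[c]:
                        if q not in all_pts:
                            new.add(q)
                            all_pts.add(q)
        # Inherit the map from the closest node in the previous layer
        for p in new:
            q = min(old, key=lambda j: math.dist(xm[p], xm[j]))
            I[p] = I[q]
    return I
```

On a toy line of four mesh nodes between two centerline nodes, the two nodes nearest each centerline point inherit that point's index, mimicking the cross-section assignment described above.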
#### 2.6.3 Extrapolating pressure and velocity
We directly map the pressure from the centerline to the volume mesh using the
map generated in Section 2.6.2, resulting in a pressure that is approximately
constant over the cross-section of the vessels. However, as the 1D solution
only provides a scalar flow along the centerline, we must generate a velocity
vector field from scratch. The ingredients for the velocity vector field are
visualized in Figure 3. From the mapped 1D flow rate and the cross-sectional
area we calculate the velocity magnitude. Assuming a Poiseuille flow, we apply
a parabolic flow profile to the velocity magnitude. With the help of a
normalized radial coordinate, we prescribe the flow profile to be maximal on
the centerline and zero on the boundary. Finally, we multiply the scalar
velocity magnitude with the centerline tangent vector to generate a vector
field. We aim to preserve the amount of flow through a cross-section of the
vessel as obtained from the 1D solution while approximating a somewhat
physiological velocity field. This velocity vector field can then be used for
simulation initialization.
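Under the Poiseuille assumption described above, the nodal velocity can be sketched as follows; the factor of 2 makes the parabolic profile integrate back to the 1D flow rate $Q$ over the cross-section. The function name and arguments are illustrative, and the tangent vector is assumed to have unit length.

```python
import math

def poiseuille_velocity(Q, S, r_hat, tangent):
    """Velocity vector at a mesh node from mapped 1D quantities: flow rate Q,
    cross-sectional area S, normalized radial coordinate r_hat (0 on the
    centerline, 1 on the wall), and a unit centerline tangent vector.
    The peak velocity 2*Q/S makes the parabolic profile integrate to Q."""
    v_mag = 2.0 * (Q / S) * (1.0 - r_hat ** 2)
    return [v_mag * t for t in tangent]

# Flow preservation check: integrate the profile over a circular cross-section
Q, R = 5.0, 1.0
S = math.pi * R ** 2
n = 20000
flux = 0.0
for i in range(n):                       # midpoint rule over annular rings
    r = (i + 0.5) / n * R
    v = poiseuille_velocity(Q, S, r / R, [1.0, 0.0, 0.0])[0]
    flux += v * 2.0 * math.pi * r * (R / n)
assert abs(flux - Q) < 1e-3
```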
Figure 3: Ingredients to generate the velocity field from a 1D solution. From
left to right: Flow mapped from 1D, cross-sectional area, radial coordinate,
and normal vectors
## 3 Results
Throughout this section, we consider a simulation periodically converged if
the asymptotic error is $\epsilon_{\infty}\leq 1\,\%$, as defined in Equation
(19). In this work, we utilize a subset of 52 models from the Vascular Model
Repository (vascularmodel.org) [35]. Six of these models are shown in
Table 1.
ID | 0003_0001 | 0097_0001 | 0107_0001 | 0111_0001 | 0130_0000 | 0156_0001
---|---|---|---|---|---|---
Type | aorta-femoral | pulmonary | aorta | aorta | aorta | aorta-femoral
State | normal | Glenn | end-to-end anastomosis | coarctation | aneurysm | aneurysm
Table 1: Subset of models from the Vascular Model Repository.
### 3.1 Convergence
In this section, we detail the convergence behavior of cardiovascular fluid
dynamics simulations with three-element Windkessel, or RCR, boundary
conditions. The models used in this section have different numbers of outlets,
ranging from 4 in the aortic models to 33 in the pulmonary artery models. For
each model, we automatically generate a reduced-order 0D model as outlined in
Section 2.3. Using a computationally inexpensive 0D model allows us to run a
large number of simulations for many cardiac cycles and analyze their
convergence behavior in detail. These 0D results are directly applicable to 1D
and 3D simulation models and form the foundation for generating initial
conditions in Section 3.2.2.
#### 3.1.1 Time constants
The time constant $\tau$ is the property of a boundary condition that determines
the rate of periodic convergence. It is thus essential to all numerical
experiments in this work. Throughout the remainder of this work, we normalize
$\tau$ by the length of the cardiac cycle $T$, with all values reported in
Figure 4. All normalized time constants fall within the range $[0,10]$, with
time constants in pulmonary models being the lowest. Furthermore, the time
constants of different outlets in one model all fall within a narrow range. In
addition to the outlet time constants, we also show the model time constants
$\bar{\tau}/T$ as crosses. We extract the model time constants from the slope
of the exponential decay of the asymptotic error $\epsilon_{\infty}$ (see
Equation (19)) when running the models for several cardiac cycles. Each model
has one unique model time constant $\bar{\tau}/T$ that is approximately the
mean value of the outlet time constants.
Figure 4: Time constants for models from the Vascular Model Repository. Bars
indicate the range of time constants $\tau/T$ for all outlets, crosses
indicate the model time constant $\bar{\tau}/T$. The colors indicate the model
category: aorta (blue), aorta-femoral (purple), and pulmonary (yellow).
#### 3.1.2 Initial conditions
We estimated the number of cardiac cycles $n_{\infty}$ required to reach a
periodic pressure in Equation (20). In the special case of zero initial
conditions, this relationship simplified to the inequality in Equation (21),
scaling linearly with the time constant $\tau/T$ and the logarithm of the
asymptotic error $\epsilon_{\infty}$. This relationship holds regardless of
model fidelity, i.e. for 0D, 1D, and 3D. In Figure 5(a), we record the number
of cardiac cycles required to reach an asymptotic error of
$\epsilon_{\infty}\leq 1\,\%$ when starting a simulation from zero initial
conditions, and report the number of cardiac cycles required for periodic
pressure (left) and flow (right) over the model time constant $\bar{\tau}/T$.
For pressure (left), the results confirm the linear relationship between the
single model time constant $\bar{\tau}/T$ and the number of cardiac cycles
$n_{\infty}$ from Equation (21) (black line). For a range of model time
constants $\bar{\tau}/T\ \in[0.3,9.6]$ we find the range of cardiac cycles
$n_{\infty}\in[2,44]$. The number of cardiac cycles to reach a periodic flow
solution (right) is not correlated to the model time constant and is, in
general, much lower than the number of cycles $n_{\infty}$ to reach a periodic
pressure. Flow rate commonly converges within one cardiac cycle, with a
maximum of eight cardiac cycles in our cohort of models.
The same study is repeated in Figure 5(b), starting from steady state initial
conditions. Compared to zero initial conditions, the number of cardiac cycles
required to reach an asymptotic pressure error of
$\epsilon_{\infty}\leq 1\,\%$ is much lower: $n_{\infty}\in[2,12]$. Flow is
converged within a maximum of 5 cardiac cycles. When starting from steady
state initial conditions, the number of cardiac cycles $n_{\infty}$ cannot be
given analytically, it depends on the prescribed inflow profile. However, the
number of cardiac cycles commonly still increases with the model time
constant.
(a) Starting from zero initial conditions
(b) Starting from steady state initial conditions
Figure 5: Number of cardiac cycles $n_{\infty}$ required to reach an
asymptotic error $\epsilon_{\infty}=1\,\%$ for 0D models from the Vascular Model
Repository. The colors indicate the model category: aorta (blue), aorta-
femoral (purple), and pulmonary (yellow). The black line (left) indicates the
number of cardiac cycles predicted by Equation (21).
#### 3.1.3 Comparison of error metrics
We visualize the convergence of pressure and flow for $n=30$ cardiac cycles in
Figure 6 for model 0107_0001 (normal aorta). The pressure curve (top left)
builds up slowly in each cardiac cycle, starting from zero, whereas the flow
curve (bottom) is close to periodic starting from the first cycle. Taking the
mean value over each cardiac cycle, both solutions exponentially approach
their periodic state, as shown in Equation (18). The logarithmic plots in the
two rightmost columns show the exponential decay of the cyclic error
$\epsilon_{n}$ and the asymptotic error $\epsilon_{\infty}$, as defined in
Equations (22) and (19), respectively. Note that the flow (bottom) exhibits
smaller errors than the pressure (top) and converges faster during the first
few cardiac cycles. After that, both flow and pressure at all outlets converge
with the model time constant $\bar{\tau}/T\approx 4.4$. This model time
constant yields a factor between cyclic and asymptotic error of
$\alpha=\epsilon_{\infty}/\epsilon_{n}\approx 3.9$. The threshold is indicated
by horizontal lines in the error plots. Here, the solution is converged after
$n_{\infty}=21$ cardiac cycles, reaching errors of $\epsilon_{\infty}=1\,\%$
and $\epsilon_{n}=\epsilon_{\infty}/\alpha\approx 0.26\,\%$. This example
demonstrates that for simulations with $\bar{\tau}/T>1/\ln 2\approx 1.44$ the
cyclic error $\epsilon_{n}$ underestimates the asymptotic error
$\epsilon_{\infty}$.
Figure 6: Convergence of 0D pressure (top) and flow (bottom) solution in
geometry 0107_0001 (normal aorta) for $n=30$ cardiac cycles. From left to
right: Solution, mean solution over one cardiac cycle (normalized by
asymptotic solution), cyclic error $\epsilon_{n}$, and asymptotic error
$\epsilon_{\infty}$. Each color refers to one of the outlets.
#### 3.1.4 Prediction of the periodic state
As explained in Section 2.4, the asymptotic error $\epsilon_{\infty}$, which
compares the current cardiac cycle to a perfectly periodic cycle, cannot in
general be computed analytically. It requires the periodic solution of the
model, which is not known _a priori_. We thus outlined a method in Section 2.5
to use the 0D
lumped-parameter boundary condition to estimate the periodic cycle
numerically. Figure 7 shows the pressure at all outlets of model 0107_0001
from cycle one to cycle $n_{\infty}=21$ (from blue to red), starting from zero
initial conditions as in Figure 6. The top row shows the pressure at the
outlets of the 3D model in each cardiac cycle. The bottom row shows the
prediction of the periodic state using the flow at the outlets of the 3D model
in each cardiac cycle. Since flow converges much faster than pressure in this
model, see Figure 6, the cycle-to-cycle variation is minimal. The periodic
state can be accurately predicted even from early cardiac cycles, where the
actual pressure of the model has not yet converged.
Figure 7: Pressure for model 0107_0001 for $n_{\infty}=21$ cardiac cycles
(blue to red) starting from zero initial conditions until reaching periodic
convergence. Pressure at the 3D outlets (top) and periodic state predicted
from 0D (bottom).
### 3.2 Initialization
To demonstrate the performance of our 3D initialization method, we compare
results for the 6 models shown in Table 1. The chosen models contain a wide
range of anatomies and physiological conditions and are specified by an eight-
digit ID. They include aorta and femoral arteries (0003_0001), pulmonary
arteries after a Glenn procedure (0097_0001), aortic coarctation post end-to-
end anastomosis (0107_0001), untreated aortic coarctation (0111_0001), aortic
aneurysm in a patient with Marfan Syndrome (0130_0000), and abdominal aortic
aneurysm (0156_0001).
#### 3.2.1 Generating initial conditions
In this section, we demonstrate the performance of our pipeline to generate 3D
initial conditions that greatly reduce the number of cardiac cycles, as
proposed in Section 2.6. As a ground truth, i.e. the ideal initial condition,
we use an asymptotic 3D solution extracted after reaching periodic
convergence. Using this initial condition, the 3D simulation would reach
periodic convergence within one cardiac cycle. We compare this solution to
initial conditions we generated from a periodic 1D solution, using the mapping
technique from Section 2.6.2. By using initial conditions mapped from 1D to
3D, we introduce two kinds of errors. First, the 1D solution is computed on a
highly simplified geometrical representation of the 3D geometry, assuming an
axisymmetric parabolic flow profile, see Section 2.2. Second, the mapping
process cannot guarantee local preservation of properties of the 1D solution,
such as incompressibility.
Figure 8 shows these errors by comparing different initial conditions: 3D
ground truth (blue), 1D solution (green), and 3D mapped from 1D (orange). We
integrate the 3D ground truth and the mapped 3D initial conditions over the
cross-section of the 3D geometry continuously along the centerline. This
allows us to plot pressure (top) and flow (bottom) continuously over the
vessel path. For a perfect 1D approximation of the 3D simulation, the blue and
green lines would overlap. For a perfect mapping from 1D to 3D, the blue and
orange lines would overlap. In general, pressure is approximated well by the
1D model and mapped well to the 3D domain. The differences between the models
($<1$ mmHg) are small compared to the overall pressure level ($\sim 75$ mmHg).
Flow is approximated well by the 1D model, whereas the mapping results in
oscillations and overestimates the actual flow. Since all three models should
represent a model with rigid walls and incompressible flow, flow should be
constant along the vessel path. However, the 3D mapped from 1D solution still
roughly represents the correct flow splits to the different vessel branches.
Figure 8: Initial conditions in aortic aneurysm model 0130_0000 for different
models: 3D ground truth (blue), 1D solution (green), 3D mapped from 1D
(orange).
The differences between 3D ground truth and 3D mapped from 1D initial
conditions are visualized in Figure 9(a) and are evident, especially within
the aneurysm. The mapping from 1D to 3D results in a pressure that is
approximately constant over the cross-sections of the vasculature. However,
the overall variation, 74-75 mmHg in the ground truth and 73-76 mmHg in the
extrapolated solution, is negligibly small compared to the overall pressure
level. Similarly, the velocity field in Figure 9(b) is visibly different for
the ground truth (left) and mapped solution (right).
(a) Pressure: ground truth (left), mapped from 1D (right)
(b) Velocity (mag.): ground truth (left), mapped from 1D (right)
Figure 9: Initial conditions mapped from 1D compared to the 3D asymptotic
solution in aortic aneurysm model 0130_0000.
#### 3.2.2 Initial conditions
Finally, we demonstrate that 3D initial conditions mapped from periodic 1D
solutions can greatly reduce the 3D simulation time. We compare three kinds of
initial conditions: Zero, Steady, and 1D. For zero initial conditions, we
prescribe uniformly $P_{0}(\boldsymbol{x})=0$ and
$\boldsymbol{v}_{0}(\boldsymbol{x})=10^{-4}\cdot[1,1,1]$. To generate steady initial
conditions, we first run a steady state simulation with constant mean inflow
$\bar{Q}$. We compared starting the pulsatile flow simulation at different
time steps for model 0003_0001. Common choices are either the first time step
$t=0$ or the time step where the inflow closely matches the mean flow [5]. We
found that the number of cardiac cycles required to reach periodic convergence
was not affected by the choice of starting time step, so for simplicity we
chose $t=0$ for all comparisons in this section. For 1D initial conditions, we
use the framework proposed in Section 2.6. The 3D simulations were run using
an incompressible Newtonian fluid, a rigid wall, and a no-slip condition at
the wall. All 1D simulations and the mapping from 1D to 3D were run on a
single CPU of a workstation computer.
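The zero initial condition described above can be written down directly. A minimal sketch, assuming a simple one-row-per-mesh-node array layout (the function name and layout are ours, not the solver's actual data structures):

```python
import numpy as np

def zero_initial_conditions(n_nodes):
    """Uniform 'zero' initial conditions as described in the text:
    P0(x) = 0 everywhere and a small nonzero velocity
    v0 = 1e-4 * [1, 1, 1] (hypothetical array layout)."""
    p0 = np.zeros(n_nodes)
    v0 = np.full((n_nodes, 3), 1e-4)
    return p0, v0
```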
Figure 10 shows the convergence of the asymptotic error $\epsilon_{\infty}$
for pressure (top) and flow (bottom) at all outlets (colors) for all three
initial conditions: zero (left), steady (middle), 1D (right). The threshold
$\epsilon_{\infty}\leq 1\,\%$ is indicated by a horizontal line, the number of
cardiac cycles $n_{\infty}$ required to reach that threshold is indicated in
each simulation by a vertical line. All simulations are shown for a total of
$n=12$ cardiac cycles. Note that the scale of the flow error is one order of
magnitude lower than that of the pressure error. From left to right, the flow and
pressure solutions start with a successively lower error in the first cardiac
cycle. In the case of 1D initial conditions (right), the pressure errors of
all outlets already fulfil the convergence criterion after only one cardiac
cycle. Both pressure and flow errors stagnate below $10^{-3}$ as other
numerical errors in the simulations outweigh the asymptotic error. As
previously observed, pressure converges faster than flow when using zero or
steady initial conditions. However, pressure and flow are converged in one and
two cardiac cycles, respectively, when using 1D initial conditions. Due to the
drastic reduction in the number of cardiac cycles required for the pressure
solution, flow is now slightly slower to converge.
Figure 10: Convergence of the 3D pressure in geometry 0003_0001 (aorta-
femoral) for different initial conditions: zero (left), steady (middle), 1D
(right). It shows pressure (top) and flow (bottom) over multiple cardiac
cycles. Each color refers to one of the outlets. The horizontal black line
indicates the asymptotic error of $\epsilon_{\infty}\leq 1\,\%$. The vertical
black line indicates after how many cardiac cycles the simulation results have
reached that asymptotic error.
Convergence results for all selected 3D geometries are reported in Table
2. While initializing with a steady state solution considerably reduces the
number of cardiac cycles to reach convergence, most of the models still
require at least five cardiac cycles for convergence. In all models, the
simulations initialized with the 1D solution converge in one or two cardiac
cycles. This leads to speedup factors between one and nine compared to the
steady state initialization, saving between 0 and 53 h of computation time on
48 CPU cores per 3D model. Only in model 0097_0001, which has a very low model
time constant and converges in two cardiac cycles even with zero initial
conditions, does the 1D initialization not yield a speedup.
ID | 0003_0001 | 0097_0001 | 0107_0001 | 0111_0001 | 0130_0000 | 0156_0001
---|---|---|---|---|---|---
$t_{\text{3D}}$ | 2.2 h | 1.7 h | 1.9 h | 6.3 h | 6.6 h | 6.4 h
$\bar{\tau}/T$ | 2.5 | 0.2 | 4.4 | 1.3 | 3.8 | 2.1
Zero | 12 / 6 | 2 / 2 | 20 / 1 | 7 / 1 | 16 / 1 | 10 / 3
Steady | 7 / 3 | 2 / 2 | 6 / 1 | 5 / 2 | 9 / 3 | 5 / 2
1D | 1 / 2 | 1 / 2 | 1 / 1 | 1 / 1 | 1 / 1 | 1 / 2
$t_{\text{saved}}$ | 11 h | 0 | 9.5 h | 25 h | 53 h | 19 h
Table 2: Simulation time $t_{\text{3D}}$ for one cardiac cycle of the 3D
simulation, model time constant $\bar{\tau}/T$, and number of cardiac cycles
(pressure / flow) for different initial conditions (zero, steady, 1D) with
$\epsilon_{\infty}\leq 1\,\%$. The maximum number of cardiac cycles in a
simulation is highlighted in bold. The last row shows the time savings on 48
CPU cores for each simulation when using the 1D initialization compared to the
steady state initialization.
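The last row of Table 2 follows directly from the other rows: the hours saved equal the reduction in cardiac cycles (taking the slower of pressure and flow to converge) times the per-cycle 3D simulation time. A sketch using the table's own numbers:

```python
# Per-cycle 3D simulation time in hours and (pressure, flow) cycle
# counts for steady and 1D initializations, copied from Table 2.
t_3d   = {"0003_0001": 2.2, "0097_0001": 1.7, "0107_0001": 1.9,
          "0111_0001": 6.3, "0130_0000": 6.6, "0156_0001": 6.4}
steady = {"0003_0001": (7, 3), "0097_0001": (2, 2), "0107_0001": (6, 1),
          "0111_0001": (5, 2), "0130_0000": (9, 3), "0156_0001": (5, 2)}
one_d  = {"0003_0001": (1, 2), "0097_0001": (1, 2), "0107_0001": (1, 1),
          "0111_0001": (1, 1), "0130_0000": (1, 1), "0156_0001": (1, 2)}

def hours_saved(model_id):
    """Time saved by 1D initialization: total cycles are governed by
    the slower of the pressure and flow errors."""
    dn = max(steady[model_id]) - max(one_d[model_id])
    return dn * t_3d[model_id]
```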
## 4 Discussion
We gave a detailed review of properties of lumped-parameter boundary
conditions in cardiovascular fluid dynamics simulations. The speed of
convergence to a periodic state solution is determined by a single model time
constant, which can be approximated as the mean of all individual time
constants of multiple outlet boundary conditions. We found that for 53 models
of the Vascular Model Repository, the model time constant $\bar{\tau}/T$
spanned from 0.3 in pulmonary models to 9.6 in aorta models. That means that
in the most extreme case, the time constant of the model is almost ten times
as large as the length of the cardiac cycle. Using zero initial conditions,
the number of cardiac cycles required to reach a periodic state scales
linearly with the model time constant, reaching a median of eight cardiac
cycles for our subset of models. Even when using initial conditions other than
zero, e.g., from a steady state solution, the number of cardiac cycles
required still scales with the model time constant.
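For a first-order (RCR-type) outlet, the periodicity error decays roughly as $\epsilon_{n}\approx\epsilon_{0}\,e^{-nT/\bar{\tau}}$, so the number of cycles needed for a given tolerance scales linearly with $\bar{\tau}/T$. An illustrative sketch under that assumption ($\epsilon_{0}=1$ is itself an assumption):

```python
import math

def cycles_needed(tau_over_T, tol=0.01, eps0=1.0):
    """Smallest n with eps0 * exp(-n / (tau/T)) <= tol, assuming pure
    first-order exponential decay of the periodicity error."""
    return math.ceil(tau_over_T * math.log(eps0 / tol))
```

This crude estimate is only meant to show the linear dependence on the time constant, consistent with the observation that the most extreme models are far from a periodic state even after many cycles.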
Figure 11: Pressure convergence of an outlet in model 0076_1001 with time
constant $\bar{\tau}/T=9.6$, starting from zero initial conditions. The red
cardiac cycle was selected at $\epsilon_{n}=1\,\%$, the blue one at
$\epsilon_{\infty}=1\,\%$.
We introduced the asymptotic error to determine whether a simulation has
reached a periodic state, calculating the relative difference of the results
in the current cardiac cycle to a perfectly periodic solution. As the periodic
solution is not known _a priori_ , we proposed to use a 0D model of the
boundary condition. Taking the flow at a 3D outlet as input, we estimate the
periodic pressure and compare it to the pressure at the 3D outlet. This method
is a quick and easy validation tool for any 3D vascular fluid dynamics
simulation with lumped-parameter boundary conditions. In models with time
constants larger than $1/\ln 2\approx 1.44$, the cycle-to-cycle difference is
lower than the asymptotic error. A low cycle-to-cycle difference can
erroneously lead the user to believe that the simulation has already reached a
periodic state when comparing two consecutive cardiac cycles. This is
visualized in Figure 11. Using the same error tolerance of 1 % but different
error norms, cyclic error (red) and asymptotic error (blue), yields a pressure
difference of 10 mmHg. Thus, only the asymptotic error is a suitable metric to
determine the distance of a simulation to its periodic state.
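The $1/\ln 2$ threshold quoted above follows from the same exponential model: if $p_{n}=p_{\infty}+A\,e^{-nT/\tau}$, the cycle-to-cycle difference equals the asymptotic error scaled by $e^{T/\tau}-1$, which is below 1 exactly when $\tau/T>1/\ln 2$. A small numerical check of this relation (the function is ours, for illustration):

```python
import math

def error_ratio(tau_over_T):
    """Ratio of cycle-to-cycle difference to asymptotic error for an
    exponentially converging signal p_n = p_inf + A * exp(-n/(tau/T))."""
    return math.exp(1.0 / tau_over_T) - 1.0
```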
To reduce 3D computation times, we proposed a method to generate good initial
conditions in an automated and computationally inexpensive way. We used
SimVascular to automatically generate a 1D replication of the 3D model, which
we ran until we achieved a periodic state. We then mapped the 1D solution onto
the 3D finite element mesh. Note that the mapping does, in general, not result
in a flow field that is divergence-free. However, we are not interested in
generating a physically or physiologically meaningful solution. Instead, the
initial conditions are “washed out” by the first iteration of our numerical
solver of the Navier–Stokes equations. Similarly, the pressure field does not
capture local variations. Here, it should be noted that it is much more
important to match the overall pressure level of the model, which is in
general much higher than any local variations. In a convergence analysis with
six different vascular models, we demonstrated that models with our 1D
initialization method converge in one or two cardiac cycles. This greatly
reduces the computation time for the 3D model over the standard method of
initializing pulsatile simulations with a steady-state solution, typically
dozens of hours in simulation time and hundreds of hours in CPU time. The code
for the 1D initialization is freely available on GitHub
(github.com/SimVascular/SimVascular).
We close with a discussion of the limitations and future perspectives of our
work. We considered only RCR boundary conditions in this work. However,
there are many more examples of lumped parameter 0D networks that are coupled
to 3D models, such as coronary boundary conditions [36, 37] and 0D closed loop
models used, e.g., in simulations for single ventricle [38, 39] and coronary
artery bypass graft (CABG) patients [40]. In the general case of lumped
parameter boundary conditions, the model time constant cannot be determined
analytically. However, it can still be calculated numerically from the time it
takes the boundary condition to respond to a step in the inflow condition. We
plan to include these boundary conditions in future work. It should be noted
that those boundary conditions not only require initialization of the velocity
and pressure field in the 3D model but also in unknowns that are internal to
the 0D model. Furthermore, we only considered rigid-wall simulations in this
work. We plan to test our 1D initialization method for deformable wall
simulations as well, using the coupled momentum method [41] or an arbitrary
Lagrangian–Eulerian formulation [42]. To further improve our method and
guarantee periodic convergence within one cardiac cycle, it will be necessary
to improve the mapping of the velocity field to the 3D model. This could be
accomplished for example by solving a Stokes flow problem, using the 1D
results as boundary conditions.
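As noted above, when the time constant of a general lumped-parameter boundary condition cannot be derived analytically, it can be estimated from the step response. A minimal sketch assuming approximately first-order behavior (the function name and the $1-1/e$ criterion are our own choices):

```python
import numpy as np

def time_constant_from_step(t, p):
    """Estimate tau as the time for the step response p(t) to cover a
    fraction 1 - 1/e of its total jump, assuming first-order dynamics."""
    p0, p_inf = p[0], p[-1]
    target = p0 + (1.0 - np.exp(-1.0)) * (p_inf - p0)
    return t[np.argmax(p >= target)]
```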
## 5 Acknowledgments
We thank Dr. Luca Pegolotti and Dr. Irene Vignon-Clementel for insightful
discussions. This work was supported by NIH grants R01LM013120 and
R01EB029362. The authors gratefully acknowledge the Stanford Research
Computing Center for providing the computational resources necessary to the
numerical simulations presented in this work.
## References
* [1] Ethan Kung, Alessia Baretta, Catriona Baker, Gregory Arbia, Giovanni Biglino, Chiara Corsini, Silvia Schievano, Irene E. Vignon-Clementel, Gabriele Dubini, Giancarlo Pennati, Andrew Taylor, Adam Dorfman, Anthony M. Hlavacek, Alison L. Marsden, Tain-Yen Hsia, and Francesco Migliavacca. Predictive modeling of the virtual hemi-fontan operation for second stage single ventricle palliation: Two patient-specific cases. Journal of Biomechanics, 46(2):423–429, jan 2013.
* [2] Jongmin Seo, Casey Fleeter, Andrew M. Kahn, Alison L. Marsden, and Daniele E. Schiavazzi. Multi-fidelity estimators for coronary circulation models under clinically-informed data uncertainty. arXiv, 2019.
* [3] Casey M. Fleeter, Gianluca Geraci, Daniele E. Schiavazzi, Andrew M. Kahn, and Alison L. Marsden. Multilevel and multifidelity uncertainty quantification for cardiovascular hemodynamics. Computer Methods in Applied Mechanics and Engineering, 365:113030, jun 2020.
* [4] Christopher J. Arthurs, Nan Xiao, Philippe Moireau, Tobias Schaeffter, and C. Alberto Figueroa. A flexible framework for sequential estimation of model parameters in computational hemodynamics. Advanced Modeling and Simulation in Engineering Sciences, 7(1), dec 2020.
* [5] I.E. Vignon-Clementel, C.A. Figueroa, K.E. Jansen, and C.A. Taylor. Outflow boundary conditions for 3d simulations of non-periodic blood flow and pressure fields in deformable arteries. Computer Methods in Biomechanics and Biomedical Engineering, 13(5):625–640, oct 2010.
* [6] Jing Wan, Brooke Steele, Sean A. Spicer, Sven Strohband, Gonzalo R. Feijóo, Thomas J.R. Hughes, and Charles A. Taylor. A one-dimensional finite element method for simulation-based medical planning for cardiovascular disease. Computer Methods in Biomechanics and Biomedical Engineering, 5(3):195–206, jan 2002.
* [7] S. M. Moore, K. T. Moorhead, J. G. Chase, T. David, and J. Fink. One-dimensional and three-dimensional models of cerebrovascular flow. Journal of Biomechanical Engineering, 127(3):440–449, sep 2004.
* [8] L. Grinberg, E. Cheever, T. Anor, J. R. Madsen, and G. E. Karniadakis. Modeling blood flow circulation in intracranial arterial networks: A comparative 3d/1d simulation study. Annals of Biomedical Engineering, 39(1):297–309, jul 2010.
* [9] Philippe Reymond, Fabienne Perren, François Lazeyras, and Nikos Stergiopulos. Patient-specific mean pressure drop in the systemic arterial tree, a comparison between 1-d and 3-d models. Journal of Biomechanics, 45(15):2499–2505, oct 2012.
* [10] Nan Xiao, Jordi Alastruey, and C. Alberto Figueroa. A systematic comparison between 1-d and 3-d hemodynamics in compliant arterial models. International Journal for Numerical Methods in Biomedical Engineering, 30(2):204–231, sep 2013.
* [11] S. Pant, B. Fabrèges, J-F. Gerbeau, and I. E. Vignon-Clementel. A methodological paradigm for patient-specific multi-scale CFD simulations: from clinical measurements to parameter estimates for individual analysis. International Journal for Numerical Methods in Biomedical Engineering, 30(12):1614–1648, nov 2014.
* [12] Giulia Bertaglia, Adrián Navas-Montilla, Alessandro Valiani, Manuel Ignacio Monge García, Javier Murillo, and Valerio Caleffi. Computational hemodynamics in arteries with the one-dimensional augmented fluid-structure interaction system: viscoelastic parameters estimation and comparison with in-vivo data. Journal of Biomechanics, 100:109595, feb 2020.
* [13] Adam Updegrove, Nathan M. Wilson, Jameson Merkow, Hongzhi Lan, Alison L. Marsden, and Shawn C. Shadden. SimVascular: An open source pipeline for cardiovascular simulation. Annals of Biomedical Engineering, 45(3):525–541, dec 2016.
* [14] Leopoldo P. Franca and Sérgio L. Frey. Stabilized finite element methods: II. the incompressible navier-stokes equations. Computer Methods in Applied Mechanics and Engineering, 99(2-3):209–233, sep 1992.
* [15] Charles A. Taylor, Thomas J.R. Hughes, and Christopher K. Zarins. Finite element modeling of blood flow in arteries. Computer Methods in Applied Mechanics and Engineering, 158(1-2):155–196, may 1998.
* [16] Christian H. Whiting and Kenneth E. Jansen. A stabilized finite element method for the incompressible navier-stokes equations using a hierarchical basis. International Journal for Numerical Methods in Fluids, 35(1):93–116, 2001.
* [17] Mahdi Esmaily-Moghadam, Yuri Bazilevs, and Alison L. Marsden. A new preconditioning technique for implicitly coupled multidomain simulations with applications to hemodynamics. Computational Mechanics, 52(5):1141–1152, may 2013.
* [18] Thomas J.R. Hughes and J. Lubliner. On the one-dimensional theory of blood flow in the larger vessels. Mathematical Biosciences, 18(1-2):161–170, oct 1973.
* [19] B.N. Steele, Jing Wan, J.P. Ku, T.J.R. Hughes, and C.A. Taylor. In vivo validation of a one-dimensional finite-element method for predicting blood flow in cardiovascular bypass grafts. IEEE Transactions on Biomedical Engineering, 50(6):649–656, jun 2003.
* [20] Mette S. Olufsen. Structured tree outflow condition for blood flow in larger systemic arteries. American Journal of Physiology-Heart and Circulatory Physiology, 276(1):H257–H268, jan 1999.
* [21] Luca Antiga, Marina Piccinelli, Lorenzo Botti, Bogdan Ene-Iordache, Andrea Remuzzi, and David A. Steinman. An image-based modeling framework for patient-specific computational hemodynamics. Medical & Biological Engineering & Computing, 46(11):1097–1112, nov 2008.
* [22] Alexander N. Brooks and Thomas J.R. Hughes. Streamline upwind/petrov-galerkin formulations for convection dominated flows with particular emphasis on the incompressible navier-stokes equations. Computer Methods in Applied Mechanics and Engineering, 32(1-3):199–259, sep 1982.
* [23] Thomas J.R. Hughes and Michel Mallet. A new finite element formulation for computational fluid dynamics: III. the generalized streamline operator for multidimensional advective-diffusive systems. Computer Methods in Applied Mechanics and Engineering, 58(3):305–328, nov 1986.
* [24] Thomas J.R. Hughes, Leopoldo P. Franca, and Gregory M. Hulbert. A new finite element formulation for computational fluid dynamics: VIII. the galerkin/least-squares method for advective-diffusive equations. Computer Methods in Applied Mechanics and Engineering, 73(2):173–189, may 1989.
* [25] Mehran Mirramezani and Shawn C. Shadden. A distributed lumped parameter model of blood flow. Annals of Biomedical Engineering, jul 2020.
* [26] Luca Formaggia, Alfio Quarteroni, and Alessandro Veneziani, editors. Cardiovascular Mathematics. Springer Milan, 2009.
* [27] H. J. Kim, I. E. Vignon-Clementel, C. A. Figueroa, J. F. LaDisa, K. E. Jansen, J. A. Feinstein, and C. A. Taylor. On coupling a lumped parameter heart model and a three-dimensional finite element aorta model. Annals of Biomedical Engineering, 37(11):2153–2169, jul 2009.
* [28] Daniele E. Schiavazzi, Alessia Baretta, Giancarlo Pennati, Tain-Yen Hsia, and Alison L. Marsden. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty. International Journal for Numerical Methods in Biomedical Engineering, 33(3):e02799, jun 2016.
* [29] Irene E. Vignon and Charles A. Taylor. Outflow boundary conditions for one-dimensional finite element modeling of blood flow and pressure waves in arteries. Wave Motion, 39(4):361–374, apr 2004.
* [30] Mehran Mirramezani, Scott L. Diamond, Harold I. Litt, and Shawn C. Shadden. Reduced order models for transstenotic pressure drop in the coronary arteries. Journal of Biomechanical Engineering, 141(3), jan 2019.
* [31] Irene E. Vignon-Clementel, C. Alberto Figueroa, Kenneth E. Jansen, and Charles A. Taylor. Outflow boundary conditions for three-dimensional finite element modeling of blood flow and pressure in arteries. Computer Methods in Applied Mechanics and Engineering, 195(29-32):3776–3796, jun 2006.
* [32] Kenneth E. Jansen, Christian H. Whiting, and Gregory M. Hulbert. A generalized-alpha method for integrating the filtered navier–stokes equations with a stabilized finite element method. Computer Methods in Applied Mechanics and Engineering, 190(3-4):305–319, oct 2000.
* [33] Ryan L. Spilker and Charles A. Taylor. Tuning multidomain hemodynamic simulations to match physiological measurements. Annals of Biomedical Engineering, 38(8):2635–2648, mar 2010.
* [34] Yifang Zhou, Ghassan S Kassab, and Sabee Molloi. On the design of the coronary arterial tree: a generalization of murray's law. Physics in Medicine and Biology, 44(12):2929–2945, nov 1999.
* [35] Nathan M. Wilson, Ana K. Ortiz, and Allison B. Johnson. The vascular model repository: A public resource of medical imaging data and blood flow simulation results. Journal of Medical Devices, 7(4), dec 2013.
* [36] H.J. Kim, I.E. Vignon-Clementel, C.A. Figueroa, K.E. Jansen, and C.A. Taylor. Developing computational methods for three-dimensional finite element simulations of coronary blood flow. Finite Elements in Analysis and Design, 46(6):514–525, jun 2010.
* [37] H. J. Kim, I. E. Vignon-Clementel, J. S. Coogan, C. A. Figueroa, K. E. Jansen, and C. A. Taylor. Patient-specific modeling of blood flow and pressure in human coronary arteries. Annals of Biomedical Engineering, 38(10):3195–3209, jun 2010.
* [38] E. L. Bove, M. R. de Leval, F. Migliavacca, R. Balossino, and G. Dubini. Toward optimal hemodynamics: Computer modeling of the fontan circuit. Pediatric Cardiology, 28(6):477–481, aug 2007.
* [39] Edward L. Bove, Francesco Migliavacca, Marc R. de Leval, Rossella Balossino, Giancarlo Pennati, Thomas R. Lloyd, Sachin Khambadkone, Tain-Yen Hsia, and Gabriele Dubini. Use of mathematic modeling to compare and predict hemodynamic effects of the modified blalock–taussig and right ventricle–pulmonary artery shunts for hypoplastic left heart syndrome. The Journal of Thoracic and Cardiovascular Surgery, 136(2):312–320.e2, aug 2008.
* [40] Abhay B. Ramachandra, Andrew M. Kahn, and Alison L. Marsden. Patient-specific simulations reveal significant differences in mechanical stimuli in venous and arterial coronary grafts. Journal of Cardiovascular Translational Research, 9(4):279–290, jul 2016.
* [41] C. Alberto Figueroa, Irene E. Vignon-Clementel, Kenneth E. Jansen, Thomas J.R. Hughes, and Charles A. Taylor. A coupled momentum method for modeling blood flow in three-dimensional deformable arteries. Computer Methods in Applied Mechanics and Engineering, 195(41-43):5685–5706, aug 2006.
* [42] Kathrin Bäumler, Vijay Vedula, Anna M. Sailer, Jongmin Seo, Peter Chiu, Gabriel Mistelbauer, Frandics P. Chan, Michael P. Fischbein, Alison L. Marsden, and Dominik Fleischmann. Fluid–structure interaction simulations of patient-specific aortic dissection. Biomechanics and Modeling in Mechanobiology, 19(5):1607–1628, jan 2020.
# Mock Lightcones and Theory Friendly Catalogs for the CANDELS Survey
Rachel S. Somerville1,2, Charlotte Olsen2, L. Y. Aaron Yung1,2, Camilla
Pacifici3, Henry C. Ferguson3, Peter Behroozi4, Shannon Osborne3, Risa H.
Wechsler5, Viraj Pandya6, Sandra M. Faber6, Joel R. Primack7, Avishai Dekel8
1Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue,
New York, NY 10010
2Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen
Road, Piscataway, NJ 08854, USA
3 Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD
21218, USA
4 Department of Astronomy and Steward Observatory, University of Arizona,
Tucson, AZ 85721, USA
5 Kavli Institute for Particle Astrophysics and Cosmology & Physics
Department, Stanford University, Stanford, CA 94305, USA;
SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA
6 Department of Astronomy and Astrophysics, University of California, Santa
Cruz, CA 95064, USA
7 Physics Department, University of California, Santa Cruz, CA 95064, USA
8 Racah Institute of Physics, The Hebrew University, Jerusalem 91904, Israel
e-mail<EMAIL_ADDRESS>
###### Abstract
We present mock catalogs created to support the interpretation of the CANDELS
survey. We extract halos along past lightcones from the Bolshoi Planck
dissipationless N-body simulations and populate these halos with galaxies
using two different independently developed semi-analytic models of galaxy
formation and the empirical model UniverseMachine. Our mock catalogs have
geometries that encompass the footprints of observations associated with the
five CANDELS fields. In order to allow field-to-field variance to be explored,
we have created eight realizations of each field. In this paper, we present
comparisons with observable global galaxy properties, including counts in
observed frame bands, luminosity functions, color-magnitude distributions and
color-color distributions. We additionally present comparisons with physical
galaxy parameters derived from SED fitting for the CANDELS observations, such
as stellar masses and star formation rates. We find relatively good agreement
between the model predictions and CANDELS observations for luminosity and
stellar mass functions. We find poorer agreement for colors and star formation
rate distributions. All of the mock lightcones as well as curated “theory
friendly” versions of the observational CANDELS catalogs are made available
through a web-based data hub.
###### keywords:
galaxies: formation, evolution, stellar content, high-redshift – astronomical
data base: surveys
## 1 Introduction
The Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) is a
multi-cycle treasury program on the Hubble Space Telescope (HST; Grogin et
al., 2011; Koekemoer et al., 2011). The CANDELS project111candels.ucolick.org
surveyed five widely separated fields, each $\sim 0.25$ square degrees,
building on the legacy of previous surveys such as the Great Observatories
Origins Deep Survey (GOODS; Giavalisco et al., 2004), the Hubble Ultra Deep
Field (HUDF; Beckwith et al., 2006), COSMOS (Scoville et al., 2007),
and the UKIDSS Ultra Deep Survey (UDS; Cirasuolo et al., 2007). The major new
contribution from CANDELS is Near-IR imaging with the Wide Field Camera 3
(WFC3) in a “wedding cake” configuration, with deeper imaging over two smaller
areas and shallower imaging over five wider areas. CANDELS-deep is sensitive
enough to reveal galaxy candidates viewed during “cosmic dawn” at redshifts of
$z\sim 6$–9. CANDELS-wide allows structural and morphological properties of
galaxies to be measured in the rest-frame optical back to “cosmic high noon”,
$z\sim 2$–3, at the peak of cosmic star formation activity. The HST component
of CANDELS is supplemented by a rich set of ancillary data from the UV through
the radio, including observations from the Chandra X-ray Observatory, the
Spitzer Space Telescope, the Herschel Space Telescope, and numerous ground-
based facilities.
An important goal of the CANDELS project is to provide to the community a
legacy database of high level data products such as object catalogs,
photometric redshifts, rest-frame photometry, and estimates of physical
parameters such as stellar masses and star formation rates. Because each
CANDELS field has a different set of ancillary data, the catalogs are
documented in separate papers (Guo et al., 2013; Galametz et al., 2013;
Stefanon et al., 2017; Nayyeri et al., 2017; Barro et al., 2019). Photometric
redshift and stellar mass estimates have been presented in Dahlen et al.
(2013), Mobasher et al. (2015), and Santini et al. (2015), and star formation
rate estimates are presented in Barro et al. (2019).
One goal of this paper is to document a set of “theory friendly” CANDELS high-
level science products, which we have curated in order to make it easier to
compare the CANDELS results with the predictions of theoretical models (or
with other surveys). The “theory friendly” catalogs (hereafter TF-CANDELS
catalogs) have a standard format, with the same set of observational and
derived quantities included, and have had a fairly generic set of data quality
cuts pre-applied. The quantities included in the TF-CANDELS catalogs have been
selected to comprise those that we expect to be of the most interest for
comparison with theoretical models.
Another important component of the CANDELS project has been the development of
custom theoretical models and simulations to aid in the interpretation of
CANDELS results. One major part of the theory effort has been the development
of detailed “mock catalogs” tailored to the characteristics of the CANDELS
survey. To build these mock catalogs, we have extracted “lightcones” from a
large dissipationless N-body simulation, with geometries matched to the five
CANDELS fields. The lightcones are lists of the masses, redshifts, and
positions on the sky (right ascension and declination) of halos extracted
along a past lightcone. We can then construct “merger trees” which describe
the build-up of these halos over time via merging of smaller halos. The
observable properties of the galaxies that form in these halos can then be
computed using an approach known as semi-analytic modeling.
Semi-analytic models (SAMs) of galaxy formation are a widely used tool for
studying the formation and evolution of galaxies in a cosmological context. In
this approach, one tracks bulk quantities such as diffuse hot gas, cold star
forming gas, stars, heavy elements, etc, using approximations and
phenomenological recipes. These models are set within the backbone of the dark
matter halo merger trees mentioned above, which track the build-up of
gravitationally collapsed structures. They typically include modeling the
shock heating and radiative cooling of gas, star formation and stellar
feedback, chemical evolution, and morphological transformation via mergers.
Some recent models also include the formation and growth of supermassive black
holes and feedback from Active Galactic Nuclei (AGN). The resulting star
formation and chemical enrichment histories can then be combined with stellar
population models (e.g. Bruzual & Charlot, 2003) and a treatment of
attenuation by dust in order to obtain estimates of luminosities at UV-NIR
wavelengths.
Semi-analytic models adopt many simplifications and approximations, and do not
provide information that is as detailed as the output from a numerical
hydrodynamic simulation. But SAMs have the advantage of much greater
computational efficiency, as well as flexibility. Moreover, numerical
simulations must still adopt phenomenological treatments of “sub-grid physics”
to describe physical processes that occur at scales smaller than the
resolution of the simulation (Somerville & Davé, 2015; Naab & Ostriker, 2017).
In many cases, these recipes are similar to those utilized in SAMs. Modern
SAMs and cosmological numerical hydrodynamic simulations apparently yield very
consistent results, at least for many key global quantities (Somerville &
Davé, 2015).
An alternative method of linking dark matter halo properties with observables
is to use empirical models such as sub-halo abundance matching models (SHAMs)
or their variants (see Wechsler & Tinker, 2018, for a recent review). Rather
than attempting to implement a priori all of the detailed physical processes
associated with galaxy formation, these models derive mappings between dark
matter halo properties and observationally derived quantities, such that
observational constraints are satisfied. The UniverseMachine developed by
Behroozi et al. (2019) is an example of such an approach.
We have created a set of mock catalogs based on the CANDELS lightcones using
three different approaches: the Santa Cruz SAM developed by R. Somerville and
collaborators (Somerville et al., 2015, and references therein), the SAM code
of Y. Lu and collaborators (Lu et al., 2011), and the UniverseMachine
(Behroozi et al., 2019). In Lu et al. (2014), we conducted an extensive
comparison of the predictions of the SC and Lu SAMs, as well as a third SAM by
Croton et al. (2006), for “intrinsic” galaxy properties over the redshift
range $z\sim 0$–6, such as stellar mass functions, the stellar mass versus
star formation rate and the fraction of quiescent galaxies, cold gas fraction
versus stellar mass, the mass-metallicity relation, and the outflow rates of
gas expelled by stellar feedback. The three models were run in the same merger
trees and were all calibrated to reproduce the $z=0$ stellar mass function.
Overall, we found that the models produced fairly similar results, although
with some significant differences particularly at the highest redshifts
investigated. However, we did not compare the model predictions with actual
CANDELS data in that work, as the high level data products were not yet
available.
The goals of this paper are three-fold: first, we document the details of the
construction and contents of the mock catalogs, which have already been used
in a number of CANDELS papers, and which we now release to the community.
Second, we present the predictions of the SC and Lu SAMs for standard
quantities such as observed counts, rest-frame luminosity functions, color-
magnitude relations, and color-color diagrams. We focus here on the redshift
range $0.5\lesssim z\lesssim 3$, which encompasses CANDELS
“cosmic noon” science results. We compare the predictions of the SC SAM with
the higher redshift Universe ($z\gtrsim 4$) in other work (Yung et al.,
2019a, b). Third, we document and release the new “theory friendly” versions
of the CANDELS observational catalogs.
The structure of this paper is as follows. In §2 we describe how we created
the mock catalogs, including providing background on the underlying N-body
simulations, the method used to extract the lightcones, and some brief
background about the models that are used to predict galaxy properties. In §3
we describe the contents of the mock catalogs and how to access them. In §4
we briefly describe the CANDELS observations and the new “theory friendly”
catalogs. In §5, we present a comparison of the predictions of the models with
observed and derived quantities from CANDELS. We discuss our results,
including a comparison with previous work, in §6 and summarize and conclude in
§7. Throughout, we adopt the cosmological parameters consistent with the
recent analysis of the Planck survey (as given in §2.1) and a Chabrier stellar
initial mass function (Chabrier, 2003). All magnitudes are quoted in the AB
system.
## 2 Building the Mock Catalogs
Figure 1: Approximate footprints of the five mock lightcones (left), and the observed CANDELS fields, for a slice $0.9<z<1.1$ with F160W$<26$. The color scale shows the density of galaxies on the sky in arcmin$^{-2}$. The mock lightcones subtend a much larger area than the CANDELS HST footprint, by design.

field | dimensions (arcmin) | area (arcmin$^2$)
---|---|---
COSMOS | $17\times 41$ | 697
EGS | $17\times 46$ | 782
GOODS-N | $32\times 32$ | 1024
GOODS-S | $39\times 41$ | 1599
UDS | $36\times 35$ | 1260
Table 1: Dimensions and areas of the mock lightcones for the five CANDELS
fields. Note that these dimensions are typically different from the HST
footprint of the observed CANDELS fields.
### 2.1 Simulations and Lightcones
The lightcones used to construct our mock catalogs are extracted from the
Bolshoi Planck (hereafter BolshoiP) N-body simulations (Klypin et al., 2016;
Rodríguez-Puebla et al., 2016). The cosmological parameters are: matter
density $\Omega_{m}=0.307$, baryon density $\Omega_{b}=0.048$, Hubble
parameter $H_{0}=67.8$ km s$^{-1}$ Mpc$^{-1}$, tilt $n_{s}=0.96$, and power
spectrum normalization $\sigma_{8}=0.823$. The BolshoiP simulation box is
369 comoving Mpc (250 $h^{-1}$ Mpc) on a side, with particle mass
$2.2\times 10^{8}$ ${\rm M}_{\odot}$ ($1.5\times 10^{8}\,h^{-1}$
${\rm M}_{\odot}$), and a force resolution of 1.5 kpc (1 $h^{-1}$ kpc) in
physical units.
Dark matter halos and subhalos were identified using the ROCKSTAR code
(Behroozi et al., 2013a). The halo catalogs are complete above a mass of
$\simeq 2.2\times 10^{10}\mbox{${\rm M}_{\odot}$}$ (50 km s$^{-1}$). Merger trees
have been constructed from these halo catalogs using the Consistent Trees code
(Behroozi et al., 2013b). All results presented here make use of the halo
virial mass definition of Bryan & Norman (1998), given in Eqn. 1 of Rodríguez-
Puebla et al. (2016).
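For reference, the Bryan & Norman (1998) virial overdensity (relative to the critical density) in a flat universe is well approximated by the standard fitting formula
$\Delta_{\rm vir}\simeq 18\pi^{2}+82x-39x^{2},\qquad x\equiv\Omega_{m}(z)-1;$
see Eqn. 1 of Rodríguez-Puebla et al. (2016) for the exact form adopted here.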
Lightcones are extracted from 164 snapshots between redshifts $0-10$.
Lightcone origins and orientations are chosen randomly within the simulation
volume. The simulation has been constructed with periodic boundary conditions,
and the simulated volume is replicated and tiled in all directions. Halos are
collected along each lightcone from the snapshot closest to their cosmological
redshift. As CANDELS comprises pencil-beam surveys, no restrictions on
sampling overlapping regions of the simulation volume are applied, as overlaps
typically occur at redshift spacings $\Delta z>1$. The source code for
creating lightcones is available online
(https://bitbucket.org/pbehroozi/universemachine/src/master/) in the
lightcone package, and the full description of the algorithm is in Behroozi et
al. (2020). For more information on how to use the lightcone package, please
see the online documentation at
https://bitbucket.org/pbehroozi/universemachine/src/master/README.md#markdown-
header-making-new-lightcones.
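The tiling-and-collection step can be illustrated with a minimal sketch. This is not the actual lightcone package: the beam geometry, box replication bookkeeping, and snapshot selection are greatly simplified, and all names and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

BOX = 250.0  # simulation box side in h^-1 Mpc (BolshoiP)

def make_lightcone(halos, origin, direction, r_max):
    """Collect halos along a pencil-beam lightcone through a
    periodically tiled box.

    halos: structured array with field 'pos' (N, 3), comoving h^-1 Mpc
    origin: (3,) point inside the box; direction: unit vector.
    Returns comoving distances along the beam of halos near its axis.
    """
    n_tiles = int(np.ceil(r_max / BOX)) + 1
    selected = []
    # Replicate the periodic volume in all directions and keep halos
    # whose positions fall within a thin cylinder around the beam.
    for i in range(-n_tiles, n_tiles + 1):
        for j in range(-n_tiles, n_tiles + 1):
            for k in range(-n_tiles, n_tiles + 1):
                shift = BOX * np.array([i, j, k], dtype=float)
                rel = halos["pos"] + shift - origin
                r = rel @ direction  # distance along the beam
                perp = np.linalg.norm(rel - np.outer(r, direction), axis=1)
                keep = (r > 0) & (r < r_max) & (perp < 5.0)  # 5 h^-1 Mpc beam
                selected.append(r[keep])
    return np.concatenate(selected)

# toy usage: random halos filling one box
halos = np.zeros(10000, dtype=[("pos", float, 3)])
halos["pos"] = rng.uniform(0.0, BOX, size=(10000, 3))
origin = rng.uniform(0.0, BOX, size=3)
direction = np.array([1.0, 0.0, 0.0])
dists = make_lightcone(halos, origin, direction, r_max=600.0)
```

In the real pipeline each halo would additionally be drawn from the simulation snapshot closest to its cosmological redshift, rather than from a single catalog as done here.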
The geometry of each field is chosen to encompass the largest footprint of any
dataset used by the CANDELS project — these are in general significantly
larger than the area of the HST mosaic. Figure 1 shows the footprint of the
mock catalogs in RA and DEC on the sky compared with the F160W footprint of
the actual CANDELS fields. The dimensions for each field are given in Table 1.
In order to investigate field-to-field variance, we have created eight
realizations of each field. Note that the lightcones used for the SC SAM and
the Lu-SAM are for the same lines of sight, and contain identical sets of
halos. The lightcones used for the UniverseMachine are different lines of
sight and contain different halos, but comprise a statistical representation
of the same halo population.
### 2.2 Semi-Analytic Models
The two semi-analytic models used in this work contain a similar suite of
physical processes, but these processes are parameterized and implemented in
different ways. Both models are based on merger trees, which describe how dark
matter halos collapse and merge to form larger structures over time. The
models contain prescriptions describing the cosmological accretion of gas into
halos, cooling of hot halo gas into the interstellar medium (ISM) of galaxies,
and the formation of stars from cold ISM gas. In addition, the models track
the return of mass and metals to the ISM from massive stars and supernovae,
and contain a schematic treatment of “stellar feedback”, the ejection of mass
and metals by stellar and supernova driven winds. The Santa Cruz model
contains a prescription for the formation and growth of supermassive black
holes, and associated “black hole feedback”, while the Lu models include a
phenomenological halo-based quenching model. Both models track the stellar
mass in a “disk” and “spheroid” component of each galaxy separately, allowing
for simplified estimates of galaxy morphology to be made. Both models
additionally contain estimates of the radial size of the disk component of
each galaxy. The Santa Cruz model also includes estimates of the size of the
spheroid component, based on the models developed by Porter et al. (2014).
Note that the version of the Santa Cruz models used here contains tracking of
multiphase gas and a molecular hydrogen based star formation recipe, as
described in Somerville et al. (2015). The model parameters have also been
updated relative to those presented in Somerville et al. (2015) to account for
the BolshoiP cosmology (see Yung et al., 2019a, for details). In addition, as
in Yung et al. (2019a), the filtering mass for photoionization squelching has
been updated to the results from Okamoto et al. (2008).
An important difference between the models is that the Lu models utilize the
merger trees extracted directly from the BolshoiP simulations, and therefore
the mass resolution is limited to $\sim 10^{11}\mbox{${\rm M}_{\odot}$}$ for
root halos. The Santa Cruz SAMs use the “root halos” along the lightcones from
BolshoiP, but construct the halo merger histories using the Extended Press-
Schechter formalism as presented in Somerville et al. (2008). Therefore, the
halo merger histories depend only on halo mass and redshift, and do not carry
a second-order dependence on the large scale environment. However, this means
that the Santa Cruz mocks extend an order of magnitude further down in mass
resolution, to root halos of $\sim 10^{10}\mbox{${\rm M}_{\odot}$}$.
Both SAMs carry out stellar population synthesis by combining the predicted
star formation and chemical enrichment histories with simple stellar
population models and analytic estimates of the effects of dust attenuation,
to predict galaxy spectral energy distributions. The Santa Cruz models
additionally utilize dust emission templates to extend the SED predictions to
longer wavelengths, where the light is dominated by dust emission rather than
starlight. More details are given in §2.2.2 below.
For a detailed description of the semi-analytic models used in this work,
please refer to Lu et al. (2014, hereafter L14) and references therein,
especially Lu et al. (2011), Somerville et al. (2008, hereafter S08),
Somerville et al. (2012, hereafter S12), Porter et al. (2014), and Somerville
et al. (2015).
#### 2.2.1 Substructure and Orphans
Sub-halos are halos that have become subsumed within another virialized halo.
In the typical terminology of SAMs, sub-halos are said to host “satellite”
galaxies. Sub-halos are tidally stripped as they orbit within their host halo.
They may be tidally destroyed before they merge, or they may merge with the
central galaxy or with another satellite. The SAMs used in this study treat
sub-halos (which host satellite galaxies) in different ways. The ROCKSTAR
catalogs provide merger trees for sub-halos as well as distinct halos.
However, as with any simulation, the ability to explicitly track the evolution
of sub-halos is limited by the mass and force resolution of the simulation
(van den Bosch et al., 2018; van den Bosch & Ogiya, 2018). Moreover, the
presence of baryons can affect the timescale for tidal stripping and
destruction of satellites (e.g. Garrison-Kimmel et al., 2017), yet these
effects are not accounted for self-consistently as our merger trees are based
on dark-matter only simulations. Sub-halos that can no longer be identified in
the N-body outputs, but which may still have surviving satellite galaxies
associated with them, are commonly referred to as “orphans”. Many SAMs utilize
semi-analytic recipes to continue to track the evolution of orphans until they
merge or are tidally destroyed. For a detailed discussion of these issues, and
a state of the art semi-analytic treatment of sub-halo evolution, see Jiang et
al. (2020).
The Santa Cruz SAM treats all satellite galaxies as “orphans” from the time
that they enter the host halo. A modified version of the Chandrasekhar
equation, which tracks the loss of orbital angular momentum due to dynamical
friction against the dark matter halo, is used to estimate the radial distance
of the satellite from the center of the host halo as a function of time
(Boylan-Kolchin et al., 2008). As the satellite orbits, a fixed amount of its
mass is stripped off in each orbit, following Taylor & Babul (2001). If the
sub-halo’s mass drops below $M(<f_{\rm strip}r_{s})$, where $f_{\rm strip}$ is
an adjustable parameter and $r_{s}$ is the Navarro-Frenk-White (Navarro et
al., 1996) scale radius, then the sub-halo is considered tidally destroyed.
Its stars are added to the “diffuse stellar halo” and its cold gas is added to
the hot gas reservoir. If the satellite survives until it reaches the center
of the halo, then the satellite is merged with the central galaxy (satellites
are not allowed to merge with other satellites). The details of the treatment
of mergers are described in S08 and L14.
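The destruction criterion described above can be sketched as follows, assuming an NFW profile for the sub-halo at infall. The concentration, the $f_{\rm strip}$ value, and the function names are all illustrative; the actual implementation follows Taylor & Babul (2001) and S08.

```python
import numpy as np

def nfw_enclosed_fraction(x, c):
    """Fraction of the virial mass enclosed within r = x * r_s for an
    NFW halo (Navarro et al. 1996) of concentration c = r_vir / r_s."""
    mu = lambda y: np.log(1.0 + y) - y / (1.0 + y)
    return mu(x) / mu(c)

def is_tidally_destroyed(m_sub, m_infall, c, f_strip=1.0):
    """Sketch of the criterion in the text: the sub-halo is considered
    destroyed once its stripped mass drops below M(< f_strip * r_s),
    evaluated here on the sub-halo's own infall profile."""
    return m_sub < m_infall * nfw_enclosed_fraction(f_strip, c)
```

For example, with $c=10$ and $f_{\rm strip}=1$, roughly 13% of the infall mass lies within $r_s$, so a sub-halo stripped to a few percent of its infall mass would be flagged as destroyed.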
The Lu SAM uses the sub-halo information from the N-body catalogs to follow
the satellite population for as long as the sub-halo can be resolved. When the
sub-halo disappears from the N-body merger tree catalog, its properties when
it was last identified are used in a formula that computes the dynamical
friction time using the Chandrasekhar formula as given in Binney & Tremaine
(1987). The orphan satellite is assumed to merge with the central galaxy after
this time has elapsed. Tidal stripping and destruction of orphan satellites
is not accounted for. See L14 section A.7.3 for details.
#### 2.2.2 Stellar Populations and Dust
Each semi-analytic model produces a prediction for the joint distribution of
ages and metallicities in each galaxy along the lightcone at its observation
time. These are obtained from the star formation and chemical enrichment
histories of all progenitors that have merged into that galaxy by the output
time. These age-metallicity distributions are then convolved with stellar
population synthesis models to obtain intrinsic (non-dust-attenuated) spectral
energy distributions (SED) which may be convolved with any desired filter
response functions. Both SAMs use the stellar population synthesis models of
Bruzual & Charlot (2003, BC03) with the Padova 1994 isochrones and a Chabrier
IMF. Note that the synthetic SEDs do not currently include nebular emission.
If we write the mass of stars formed in all progenitors of a given galaxy with
ages between $t$, $t+dt$ and metallicities between $Z$ and $Z+dZ$ as
$\Psi(t,Z)\,dt\,dZ$, then the SED of the galaxy is obtained by summing the
“simple stellar population” components provided by BC03 over all ages and
metallicities:
$F_{\lambda}(t_{\rm obs})=\int^{t_{\rm obs}}_{t_{0}}\int^{Z_{\rm max}}_{Z_{\rm
min}}T_{\rm dust}(\lambda)\Psi(t,Z)S_{\lambda}(t,Z)dt\,dZ$
where in practice, the SSPs are provided at a set of discrete ages and
metallicities (196 ages and 6 metallicities, in the case of the BC03 models)
so the integral is actually a sum. The timestep in the SAM is chosen such that
the time binning is at least as fine as that in $S_{\lambda}(t,Z)$ at any
point.
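On the discrete SSP grid, the double integral above reduces to a weighted sum, which can be sketched with random placeholder arrays standing in for the star formation history and the BC03 templates (the grid dimensions follow the text; everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

n_age, n_Z, n_lam = 196, 6, 1000  # BC03 grid: 196 ages, 6 metallicities

# Psi[i, j]: stellar mass formed in age bin i and metallicity bin j
Psi = rng.random((n_age, n_Z))
# S[i, j, :]: SSP spectrum per unit stellar mass for that (age, Z) bin
S = rng.random((n_age, n_Z, n_lam))
# T_dust: dust transmission as a function of wavelength
T_dust = np.exp(-rng.random(n_lam))

# F_lambda = T_dust(lambda) * sum_{i,j} Psi[i,j] * S[i,j,lambda]
F_lambda = T_dust * np.einsum("ij,ijl->l", Psi, S)
```

The `einsum` contraction is exactly the discrete form of the integral: a sum of each SSP spectrum weighted by the mass formed in its age-metallicity bin, attenuated by the dust transmission curve.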
Dust attenuation is included through the term $T_{\rm dust}(\lambda)$, which
is given by $T_{\rm dust}(\lambda)=10.0^{-0.4A_{V}k_{\lambda}}$ where $A_{V}$
is the attenuation in the rest-V band and $k_{\lambda}$ is the attenuation as
a function of wavelength relative to the $V$-band.
We model the rest V-band optical depth using the expressions:
$\displaystyle N_{\rm H}=m_{\rm{cold}}/(r_{\rm{gas}})^{2}$
$\displaystyle\tau_{V,0}=f_{\rm
dust}(z)\,\tau_{\rm{dust,0}}\,(Z_{\rm{cold}})^{\alpha_{\rm dust}}\,(N_{\rm
H})^{\beta_{\rm dust}}$
where $\tau_{\rm{dust,0}}$, $\alpha_{\rm dust}$, and $\beta_{\rm dust}$ are
free parameters, $Z_{\rm{cold}}$ is the metallicity of the cold gas,
$m_{\rm{cold}}$ is the mass of the cold gas in the disc, and $r_{\rm{gas}}$ is
the radius of the cold gas disc, which is assumed to be a fixed multiple of
the stellar scale length (see S08). We adopt $\tau_{\rm{dust,0}}=0.2$,
$\alpha_{\rm dust}=0.4$, and $\beta_{\rm dust}=1.0$.
Several works have found (see e.g. Somerville et al., 2012, and references
therein) that adopting this simple prescription with a fixed value of
$\tau_{V,0}$ results in attenuation that is too strong at high redshift. As a
result, we adopt an empirical redshift dependent functional form for
$\tau_{V,0}$. For $z<3.5$, we adopt the redshift dependent correction factor
$f_{\rm dust}(z)=(1+z)^{\gamma_{\rm dust}}\,$ (1)
and for $z>3.5$, we adopt the expression given in Section 2.4 of Yung et al.
(2019a). This empirical relation was adjusted by hand to achieve a reasonable
“by-eye” match to the observed rest-frame UV, B and V-band luminosity function
from $z\sim 0$–4, and the observed rest-frame UV luminosity function at
$z\gtrsim 4$.
To compute the attenuation we assign a random inclination to each disc galaxy
and use a standard ‘slab’ model; i.e. the extinction in the $V$-band for a
galaxy with inclination $i$ is given by:
$A_{V}=-2.5\log_{10}\left[\frac{1-\exp[-\tau_{V,0}/\cos(i)]}{\tau_{V,0}/\cos(i)}\right].$
(2)
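Putting the pieces together, a minimal sketch of the attenuation calculation is given below. The value of `gamma` is a placeholder, since the fitted $\gamma_{\rm dust}$ is not quoted in this section, and all function and variable names are illustrative.

```python
import numpy as np

def tau_V0(m_cold, r_gas, Z_cold, z,
           tau0=0.2, alpha=0.4, beta=1.0, gamma=-0.5):
    """Face-on rest-V optical depth following the recipe in the text:
    tau = f_dust(z) * tau0 * Z_cold^alpha * N_H^beta, with
    f_dust(z) = (1+z)^gamma for z < 3.5. gamma here is a placeholder."""
    N_H = m_cold / r_gas**2
    return (1.0 + z)**gamma * tau0 * Z_cold**alpha * N_H**beta

def A_V_slab(tau_v0, incl):
    """Slab-model V-band attenuation (Eq. 2) for inclination incl [rad]."""
    t = tau_v0 / np.cos(incl)
    return -2.5 * np.log10((1.0 - np.exp(-t)) / t)
```

As expected from the slab geometry, the attenuation vanishes in the optically thin limit and grows as the disc is tilted toward edge-on.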
For $k_{\lambda}$, we adopt the starburst attenuation curve of Calzetti et al.
(2000). We have also experimented with a two-component (cirrus plus
birthcloud) model for the attenuation, as presented in S12. However, we found
that the simpler Calzetti attenuation curve does a better job of reproducing
the colors of observed CANDELS galaxies over the whole redshift range that we
study here.
Dust emission modeling is included in the SC SAMs using the same approach
described in S12, but adopting the Chary & Elbaz (2001) emission templates.
Table 2: Summary of recalibrated SC SAM parameters.

Parameter | Description | Value
---|---|---
$\epsilon_{\text{SN}}$ | SN feedback efficiency | 1.7
$\alpha_{\text{rh}}$ | SN feedback slope | 3.0
$V_{\rm eject}$ | halo gas ejection scale | 130 km/s
$\tau_{*,0}$ | SF timescale normalization | 1.0
$y$ | Chemical yield (in solar units) | 2.0
$\kappa_{\text{AGN}}$ | Radio mode AGN feedback | $3.0\times 10^{-3}$
#### 2.2.3 Calibration
All cosmological models of galaxy formation contain parameterized recipes,
which are typically adjusted by tuning them to match a selected subset of
observations. For a detailed summary of the tunable parameters in the three
SAMs presented here, and the approach used to tune them, please see L14. Some
parameters were re-tuned relative to the values used in L14, due to the change
in cosmological parameters from the original Bolshoi simulations (used in L14)
to BolshoiP. Table 2 provides a summary of parameters for the Santa Cruz SAM
that have different values from those specified in Somerville et al. (2015).
Please see Somerville et al. (2015) for a full description of the parameters,
and Table 1 in that work for a complete table of parameter values. The
observations used for the calibration and the results of the calibration
comparison are shown in Yung et al. (2019a) Appendix B. The calibration
quantities include the stellar mass function, the stellar mass vs. cold gas
fraction, stellar mass vs. metallicity relation, and the bulge mass vs. black
hole mass relation.
field | reference | $\sigma$ | aperture | depth (W/D/UD) | RA | DEC | effective area
---|---|---|---|---|---|---|---
| | | [arcsec] | [AB mag] | [degree] | [degree] | [arcmin2]
COSMOS | Nayyeri et al. (2017) | 5 | 0.17 | 27.56 | 150.116321 | +62.238572 | 216
EGS | Stefanon et al. (2017) | 5 | 0.20 | 27.6 | 214.825000 | +52.825000 | 198.6
GOODS-N | Barro et al. (2019) | 5 | 0.17 | 27.8, 28.2, 28.7 | 189.228621 | +62.238572 | 163.13
GOODS-S | Guo et al. (2013) | 5 | 0.17 | 27.4, 28.2, 29.7 | 53.122751 | -27.805089 | 159.36
UDS | Galametz et al. (2013) | 1 | 1 | 27.9 | 34.406250 | -5.2000000 | 195.58
Table 3: References and image characteristics for the published papers on the
five observed CANDELS fields. The $\sigma$ column indicates whether limiting
magnitudes were computed at $5\sigma$ or $1\sigma$, and aperture provides the
aperture used to compute the limiting magnitude (see Equation 5). Depths are
for the F160W image. The quoted effective areas are for the “wide” images.
### 2.3 Empirical model: UniverseMachine
The UniverseMachine is an empirical model that connects galaxies’ star
formation rates to their host haloes’ masses ($M_{h}$), accretion rates
($\dot{M}_{h}$), and redshifts (Behroozi et al., 2019). Using an initial guess
for the distribution of galaxy SFRs as a function of host halo properties
(i.e., $P(SFR|M_{h},\dot{M}_{h},z)$), it populates all haloes in a dark matter
simulation with SFRs. These SFRs are then integrated along merger trees to
obtain galaxy stellar masses and luminosities. The statistics of the resulting
mock universe are compared to those from observations, including stellar mass
functions ($z=0-4$), quenched fractions ($z=0-4$), cosmic SFRs ($z=0-9$),
specific SFRs ($z=0-8$), UV luminosity functions ($z=4-10$), UV–stellar mass
relations ($z=4-8$), auto- and cross-correlation functions for star-forming
and quiescent galaxies ($z=0-1$), and quenched fractions of isolated galaxies
as a function of environment ($z=0$). Comparing these observables results in a
likelihood for the guess for $P(SFR|M_{h},\dot{M}_{h},z)$. This likelihood is
given to a Monte Carlo Markov Chain algorithm to generate a new guess, and the
process is repeated millions of times to obtain the posterior distribution of
galaxy–halo connections that are consistent with all input observations.
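The iterate-and-compare cycle can be illustrated with a toy Metropolis sampler. The real UniverseMachine uses a custom MCMC over the full parameterization of $P({\rm SFR}|M_{h},\dot{M}_{h},z)$ and a likelihood built from the observables listed above; everything here, including the Gaussian stand-in likelihood, is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(theta):
    """Stand-in for: populate the simulation using theta, build mock
    observables, and compare them to the observed statistics."""
    return -0.5 * np.sum(theta**2)  # toy Gaussian target

theta = np.zeros(3)  # toy parameter vector for P(SFR | M_h, Mdot_h, z)
logL = log_likelihood(theta)
chain = []
for _ in range(5000):
    prop = theta + 0.5 * rng.standard_normal(3)  # propose a new guess
    logL_prop = log_likelihood(prop)
    # Metropolis acceptance: keep better guesses, sometimes accept worse
    if np.log(rng.random()) < logL_prop - logL:
        theta, logL = prop, logL_prop
    chain.append(theta.copy())
posterior = np.array(chain)
```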
The UniverseMachine attempts to forward-model to available observations as
much as possible. This includes accounting for random and systematic errors in
both stellar masses and SFRs, which can both rise to levels of $\sim 0.3$ dex
even at intermediate redshifts ($1<z<3$). As with other models in this paper,
the UniverseMachine uses an orphan prescription to extend the lifetime of
infalling satellites. Specifically, satellite lifetimes are extended until (or
truncated after) their circular velocities reach $\sim 0.5$ of the value
reached at peak mass; this ratio is calibrated to match $z=0-1$ galaxy
autocorrelation functions.
## 3 Mock Catalogs: Contents, documentation and access
The mock catalogs contain a large number of “intrinsic” physical parameters,
such as halo mass, stellar mass, star formation rate, etc., as well as
observable parameters such as magnitudes in filter bands relevant to the
CANDELS survey. A full list of the quantities contained in the mock catalogs,
with their units, is given in the supplementary information that may be
downloaded at
https://users.flatironinstitute.org/~rsomerville/Data_Release/CANDELS/mocklc_pub1.0.pdf.
A very large number of different instruments have been used to observe the
CANDELS fields. In many cases, different filters have been used that are, in
practice, very similar to one another. In order to prevent our already very
large catalogs from becoming even more unwieldy, we have recorded only a
representative set of filter response functions. Typically, the slightly
different filters used in the different CANDELS fields differ from the
corresponding representative filter by less than 0.01 magnitude for all
galaxies in our mock catalogs. The full set of filter response functions may
be downloaded from
https://users.flatironinstitute.org/~rsomerville/Data_Release/CANDELS/filters/CANDELS.filters.tar.
The above documentation and supplementary information, as well as access to
flat files containing the Santa Cruz SAM mock lightcone files for all fields
and all realizations is available at https://www.simonsfoundation.org/candels-
survey. All three sets of mock catalogs (SC-SAM, Lu-SAM, and UniverseMachine)
may also be accessed through the Flatiron Institute Data Exploration and
Comparison Hub (Flathub), at
http://flathub.flatironinstitute.org/group/candels. Flathub allows the user to
interactively select a subset of models, fields, realizations, and/or catalog
columns for download, and to pre-filter the data before downloading. This is a
convenient option for users who are only interested in a subset of the models,
or specific quantities or types of objects, since the total data volume is
quite large.
Figure 2: Total counts as a function of apparent magnitude in different
observed frame filters as indicated by the labels in each panel, for the SC
SAM mock catalogs compared with CANDELS. Solid dark blue lines show the SC SAM
predictions with dust attenuation included, and dashed light blue lines show
the SC SAM predictions without dust. The grey shaded region shows the range of
values between the different CANDELS fields, and the black symbols show the
median of the values in all four fields. No magnitude cuts or completeness
corrections have been applied to either the models or observations, and the
observed counts become incomplete at around magnitude 25.5 or 26. The SC SAM
predictions match the observations well in the F435W, F606W, and F160W bands,
but a bit less well in the redder K and IRAC bands.
## 4 CANDELS observations and Theory Friendly Catalogs
The CANDELS survey is a 902-orbit legacy program which carried out imaging
with the WFC3 camera on HST in five fields: COSMOS, EGS, GOODS-N, GOODS-S, and
the UDS, over a combined area of about 0.22 deg2. Each field has a different
suite of ancillary imaging data from X-ray to radio from the ground and space,
which have been incorporated into multi-wavelength catalogs and used to
estimate photometric redshifts and physical properties such as stellar masses
and star formation rates. Please see Grogin et al. (2011) and Koekemoer et al.
(2011) for details of the survey design and basic image processing, and the
five “field” papers (summarized in Table 3) for details on the catalog
construction for each field. The CANDELS catalogs released by the team may be
accessed at https://archive.stsci.edu/prepds/candels/, and an interactive web-
based portal to some of the CANDELS catalog and image data is available at
https://rainbowx.fis.ucm.es/Rainbow_Database/Home.html.
We have created a curated version of the CANDELS high level science products,
which have been designed to be easy to use for comparisons with theoretical
models and simulations. The format and contents of the CANDELS “theory
friendly catalogs” (TF-CANDELS) have been standardized and homogenized over all
five fields, and the catalogs have had a standard set of flags and cuts
applied. Each theory friendly catalog contains a standardized set of observed
frame and rest-frame photometry, along with redshifts, structural parameters
(size and Sersic index), and multiple stellar mass and star formation rate
estimates. In the original catalogs, the “value added” quantities such as
photometric redshifts, structural parameters, and stellar masses are all in
separate files which must be joined.
The photometric redshifts in the TF catalogs are the updated estimates from
Kodra et al. (in prep), and these are used for all derived quantities in the
TF-CANDELS catalogs that depend on redshift (e.g. absolute magnitudes, stellar
masses, SFR). We have checked, however, that none of the results shown in this
paper differ significantly from those that are obtained using the published
team redshifts as documented in Dahlen et al. (2013).
Rest-frame absolute magnitudes were computed for the same filter response
functions used to compute rest frame photometry in the mock catalogs. Rest-
frame magnitudes were computed using the package EAZY (Brammer et al., 2010),
with the details of the set-up and parameter file as specified in the TFCD
(Appendix E). In addition, the TF-CANDELS catalogs provide alternate estimates
of the absolute magnitudes computed using the zphot package (Fontana et al.,
2000; Merlin et al., 2019; note that unlike all other magnitudes in the
catalogs, these are in the Vega system), and of $U-V$ and $V-J$ colors computed
using the SED-fitting method of Pacifici et al. (2012, hereafter P12). The TF-
CANDELS catalogs also include stellar masses estimated using both the zphot
and P12 approaches. For star formation rates (SFR), in addition to estimates
based on zphot and P12, the catalogs also include the SFR estimates presented
by Barro et al. (2019), which utilize either a combination of rest-UV and mid-
IR photometry, for galaxies that are detected in the IR, or a dust-corrected
estimate based on the rest-UV. A detailed comparison of how these derived
quantities differ for the different methods is beyond the scope of this paper;
however, we comment briefly on this issue in the discussion (Section 6).
The original files and catalog field names used to create each entry in the
TF-CANDELS catalogs are specified in the TF-CANDELS Documentation (TFCD
Appendix A;
https://users.flatironinstitute.org/~rsomerville/Data_Release/CANDELS/TFCD.pdf).
We have selected a “representative” observed U-band and K-band filter for each
field. The fields were observed with different telescopes and different
instruments, so in practice the actual filters differ a bit from field to
field. The details of the actual filters used for each field are provided in
Appendix B of the TFCD.
Appendix C of the TFCD describes how we carried out the calculation of F160W
limiting magnitude for each object in the TF-CANDELS catalogs. Using the F160W
weight maps for each field, we computed the average RMS as $\langle
RMS\rangle=\sqrt{1/\langle w_{i}\rangle}$, where $\langle w_{i}\rangle$ is
the average weight over a $6\times 6$ square of pixels surrounding the center
of each galaxy. We then computed the limiting magnitude as
$m_{\rm lim}=-2.5\log_{10}\left(\sqrt{A_{1}\langle RMS^{2}\rangle}\right)+z_{p}$ (3)
where $A_{1}=1/(0.06{\rm arcsec/pixel})^{2}$, and the zeropoint $z_{p}$ is
given by
$z_{p}=-2.5\log_{10}({\rm PHOTFLAM})-5\log_{10}({\rm PHOTPLAM})-2.408$
PHOTFLAM is the conversion factor from counts s$^{-1}$ to erg s$^{-1}$ cm$^{-2}$
$\AA^{-1}$, and PHOTPLAM is the pivot wavelength used to convert from flux per
unit wavelength $f_{\lambda}$ to flux per unit frequency $f_{\nu}$. The values
are taken from the image headers. In the
TF catalogs, the limiting magnitude is defined as the 1$\sigma$ limiting
magnitude within an aperture with area 1 arcsec2. Some of the CANDELS field
papers have used other definitions, such as the 5$\sigma$ limiting magnitude
within a different aperture (see Table 3). In order to compute the limiting
magnitude at 5$\sigma$ within some other aperture, one can adopt:
$m_{\rm lim}=-2.5\log_{10}\left(5\sqrt{A\langle RMS^{2}\rangle}\right)+z_{p}$ (4)
where $A$ is the area of the desired aperture in pixels.
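This calculation can be sketched as follows. The stamp size and pixel scale are taken from the text; the function name and interface are illustrative, and the significance factor is applied outside the square root (flux limit equals $n_\sigma$ times the aperture noise).

```python
import numpy as np

PIX_SCALE = 0.06  # arcsec per pixel (CANDELS F160W mosaics)

def limiting_mag(weight_stamp, zp, nsigma=1.0, area_arcsec2=1.0):
    """Limiting magnitude from a weight-map cutout around a source:
    <RMS> = sqrt(1/<w>) over the stamp (a 6x6-pixel square in the text),
    scaled to an aperture of the requested area in arcsec^2."""
    w_mean = weight_stamp.mean()
    rms = np.sqrt(1.0 / w_mean)
    A = area_arcsec2 / PIX_SCALE**2  # aperture area in pixels
    return -2.5 * np.log10(nsigma * np.sqrt(A * rms**2)) + zp
```

Going from a $1\sigma$ to a $5\sigma$ limit brightens the quoted depth by $2.5\log_{10}5\simeq 1.75$ mag, and doubling the aperture area brightens it by $1.25\log_{10}2$ mag.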
Cuts were applied to remove objects with flags indicating bad photometry, and
the field CLASS_STAR was used to remove likely stars (see TFCD Appendix D).
We computed effective areas for various F160W limiting magnitude cuts in the
following manner (see also Appendix F of the TFCD). The CANDELS weight maps
were transformed into limiting magnitude maps using the formulae given above,
and the effective area was calculated for each limiting magnitude bin by
adding up the number of pixels for which the limiting magnitude was at the
bin’s value or fainter. Tables of these calculated effective areas (in arcmin$^2$)
vs. the F160W limiting magnitude are provided at
https://users.flatironinstitute.org/~rsomerville/Data_Release/CANDELS/effarea/.
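The effective-area step described above can be sketched as a simple cumulative pixel count over the limiting-magnitude map (an illustrative helper, not the paper's pipeline code):

```python
import numpy as np

PIX_AREA = 0.06**2  # arcsec^2 per pixel (0.06 arcsec pixel scale)

def effective_area(mlim_map, mag_cuts):
    """Effective area (arcmin^2) reaching each limiting-magnitude cut:
    the number of pixels whose limiting magnitude is at the cut or
    fainter (i.e. at least as deep), times the pixel area."""
    areas = []
    for m in mag_cuts:
        n_pix = np.count_nonzero(mlim_map >= m)
        areas.append(n_pix * PIX_AREA / 3600.0)  # arcsec^2 -> arcmin^2
    return np.array(areas)
```

By construction the result is non-increasing with depth: fainter cuts are reached over ever smaller portions of the mosaic.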
All of the above documentation and supplementary information, as well as
access to flat files containing the theory friendly catalogs for all five
fields are linked from the landing page
https://www.simonsfoundation.org/candels-survey. In addition, the TF-CANDELS
catalogs can be previewed and downloaded through
http://flathub.flatironinstitute.org/group/candels.
## 5 Results
Figure 3: Counts as a function of apparent observed frame F160W magnitude
split into redshift bins, for the Lu (orange) and SC (blue) SAM mocks compared
with CANDELS (black symbols show the mean over all four fields; dark gray
shaded areas show the minimum and maximum value in each bin over the four
fields). Light gray vertical shaded regions show approximately where the
CANDELS Wide observations are expected to be incomplete. Dashed lines show the
intrinsic counts before dust attenuation has been added to the model galaxies,
and solid lines show the predictions including dust attenuation. The agreement
between the predicted and observed counts is qualitatively good in all
redshift bins.
In this section we compare the predictions of the mock catalogs with CANDELS
observations in different redshift bins. We investigate quantities in
‘observational’ space such as counts in the observed H160 (F160W) band as well
as derived quantities such as rest-frame luminosity functions and color-color
diagrams. We further investigate comparisons with physical properties derived
from SED fitting to the CANDELS data, such as stellar mass functions and SFR
functions. For this analysis, we adopt a standard set of redshift bin edges:
$0.1$, $0.5$, $1.24$, $1.72$, $2.15$, $2.57$, $3.0$. These have been chosen so
that all except for the lowest redshift bin have roughly equal comoving volume.
In
all of the results presented below, the model results are obtained by
averaging over all eight realizations of all five fields (40 lightcones in
all, covering a total area of 40 times 5362 arcmin2, or $\simeq 60$ sq. deg.).
The observational results shown in all figures to follow are obtained by
averaging over the EGS, GOODS-S, GOODS-N, and UDS fields, and the shaded areas
show the minimum and maximum values of the binned quantity in each bin from
field to field. We omitted the COSMOS field from this analysis because we
found that the counts in redshift bins $z\gtrsim 1.5$ were anomalously low
compared with the other four fields (we are in the process of investigating
the reason for this discrepancy). The TF catalogs were cut at a limiting magnitude of
27.6, and only objects with F160W$<25.5$ are plotted, unless noted otherwise.
Areas were computed using the effective area tables for the corresponding
depth provided in the supplementary materials. We adopt the P12 estimates for
the stellar mass and SFR as defaults.
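The claim that the adopted bin edges give roughly equal comoving volume per bin can be checked with a short calculation. The sketch below assumes an illustrative flat $\Lambda$CDM cosmology ($\Omega_m=0.3$, $H_0=70$ km s$^{-1}$ Mpc$^{-1}$), which is an assumption here and not necessarily the exact cosmology of the simulations:

```python
import math

def E(z, om=0.3):
    """Dimensionless Hubble parameter E(z) for a flat LCDM cosmology."""
    return math.sqrt(om * (1 + z) ** 3 + (1 - om))

def comoving_distance(z, h0=70.0, n=2000):
    """Line-of-sight comoving distance in Mpc (trapezoidal integration)."""
    dh = 299792.458 / h0  # Hubble distance c/H0 in Mpc
    dz = z / n
    s = 0.5 * (1.0 + 1.0 / E(z)) + sum(1.0 / E(i * dz) for i in range(1, n))
    return dh * s * dz

edges = [0.1, 0.5, 1.24, 1.72, 2.15, 2.57, 3.0]
# In a flat universe the comoving volume per steradian out to z is D_C(z)^3 / 3,
# so shell volumes between bin edges are proportional to differences of D_C^3.
d3 = [comoving_distance(z) ** 3 for z in edges]
shells = [b - a for a, b in zip(d3, d3[1:])]
for (z1, z2), v in zip(zip(edges, edges[1:]), shells):
    print(f"{z1:.2f} < z < {z2:.2f}: {v:.3e} (proportional to comoving volume)")
```

With these assumptions the five bins above $z=0.5$ agree in volume to within a few percent, while the lowest bin is much smaller, consistent with the statement above.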
### 5.1 Observed-frame counts as a function of apparent magnitude
Figure 2 shows the counts (number of objects per bin in apparent magnitude,
per sq. arcminute on the sky) integrated over all redshifts, for a selection
of the CANDELS observed frame filter bands. As the Lu mocks do not include
IRAC photometry, we only show the comparison with the SC SAMs here — the Lu
SAMs produce similar results in the F435W, F606W, F160W, and K bands. No
magnitude cuts or completeness corrections have been applied to either the
models or observations, and the observed counts become incomplete at around
magnitude 25.5 or 26. The SC SAM predictions match the observations well in
the F435W, F606W, and F160W bands, but less well in the redder K and IRAC
bands. This is in part because there is flexibility to match the bluer bands
by adjusting the dust correction. It may also reflect the presence of an older
stellar population in the model galaxies than is present in the real Universe,
as we will discuss further below. This figure illustrates the potential to
calibrate models using multiband photometry instead of derived quantities such
as stellar masses.
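The counts statistic plotted in Figure 2 (number of objects per apparent-magnitude bin per square arcminute, with no completeness correction) can be computed in a few lines. The function and toy inputs below are an illustrative sketch, not the actual CANDELS pipeline:

```python
def number_counts(mags, area_arcmin2, bin_width=0.5, mmin=20.0, mmax=27.0):
    """Raw number counts: N per magnitude per square arcminute.
    No magnitude cuts or completeness corrections are applied, matching
    the comparison described in the text."""
    nbins = round((mmax - mmin) / bin_width)
    counts = [0] * nbins
    for m in mags:
        i = int((m - mmin) / bin_width)
        if 0 <= i < nbins:
            counts[i] += 1
    centers = [mmin + (i + 0.5) * bin_width for i in range(nbins)]
    return centers, [c / (area_arcmin2 * bin_width) for c in counts]

# toy usage with made-up magnitudes over a hypothetical 10 arcmin^2 field
centers, dens = number_counts([20.1, 20.2, 24.9], area_arcmin2=10.0)
```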
Figure 3 shows the counts in the HST F160W filter (which is the detection
filter for CANDELS) split into redshift bins. The model results are shown for
the intrinsic fluxes without accounting for attenuation by dust, and including
the model for dust attenuation described in Section 2.2.2. Overall, there is
very good agreement between the models and observations once dust attenuation
is accounted for, at magnitudes where the CANDELS Wide catalogs are highly
complete (F160W $<25.5$). For fainter magnitudes, the CANDELS catalogs are
incomplete and the photometric redshifts also become unreliable. The turnover
in the model curves at faint magnitudes is due to the limited mass resolution
of the N-body simulation on which the SAM catalogs are built. As noted in
Section 2.2, the SC SAMs effectively have higher resolution than the Lu-SAMs,
as they make use of EPS-based merger trees rather than the N-body based trees,
which is the main reason that the counts turn over at a slightly fainter
magnitude.
### 5.2 Rest-frame Luminosity Functions
Figure 4: Luminosity functions in the rest-frame UV ($1500$ Å) divided into
redshift bins, for the SC SAM, compared with the corresponding distributions
from CANDELS (black symbols show the mean over all fields; shaded areas show
the minimum and maximum value in each bin over the four fields). Dashed lines
show the intrinsic luminosity functions with no dust attenuation; solid lines
show the model predictions with dust attenuation included. Dotted lines show
the dust attenuated model predictions with a cut of F160W$<25.5$ applied
(similar to the observations). The SC SAM predictions agree with the observed
distributions quite well in the regime where the observations are highly
complete.
Figure 4 shows the binned histograms of rest-frame absolute magnitude in the
rest-frame $1500$ Å band, for the CANDELS observations (using the EAZY-based
estimate) and the SC SAM, in our standard redshift bins. We note that although
we refer to these as “luminosity functions”, we have made no attempt to
correct these for incompleteness as is generally done in the literature. We
simply apply a cut of F160W$<25.5$ and bin in absolute luminosity. For
models, we show results both without and with modeling of dust attenuation. In
addition, we show the model predictions both for the full sample and with a
cut of observed frame F160W$<25.5$ (including dust), similar to the cut
applied to the observational sample. We remind the reader that the CANDELS
Wide sample is expected to be highly complete at this magnitude limit. Similar
comments apply to the V-band “luminosity functions”. Figure 13 shows the same
comparison for the Lu SAMs. The SC SAMs show very good agreement with the
bright end of the UV LF when dust attenuation modeling is included; and the
models with an F160W$<25.5$ cut show good agreement with the turnover and
faint end of the UV LF as well. The Lu SAMs also show good agreement, but
slightly underpredict the observed number density of faint galaxies in the
lowest redshift bins (see Fig. 13).
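The “luminosity functions” as defined in the text (a simple apparent-magnitude cut followed by binning in absolute magnitude per unit volume, with no completeness correction) can be sketched as follows; the function name and toy inputs are hypothetical:

```python
def binned_lf(abs_mags, app_mags, volume_mpc3, cut=25.5, bin_width=0.5,
              mlo=-24.0, mhi=-16.0):
    """Number density per absolute-magnitude bin [Mpc^-3 mag^-1], after a
    cut of F160W < `cut` in apparent magnitude; no incompleteness correction,
    matching the procedure described in the text."""
    nbins = round((mhi - mlo) / bin_width)
    phi = [0.0] * nbins
    for M, m in zip(abs_mags, app_mags):
        if m < cut:
            i = int((M - mlo) / bin_width)
            if 0 <= i < nbins:
                phi[i] += 1.0 / (volume_mpc3 * bin_width)
    centers = [mlo + (i + 0.5) * bin_width for i in range(nbins)]
    return centers, phi

# toy usage: three galaxies, one of which fails the apparent-magnitude cut
centers, phi = binned_lf([-20.1, -20.2, -18.0], [24.0, 26.0, 25.0], 1000.0)
```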
Figure 5: Luminosity functions in the rest-frame V-band divided into redshift
bins for the SC SAM, compared with the corresponding distributions from
CANDELS. Key is as in Fig. 4. The SC SAM predictions agree with the observed
rest-frame V-band magnitude distributions fairly well in the regime where the
observations are highly complete.
Figures 5 and 14 show the rest-V band luminosity functions for the CANDELS
observations (using the EAZY-based estimate) and the SC and Lu SAM models,
with the same permutations as before. Once again, both models are in very good
agreement with the observations once dust attenuation and a cut on F160W are
applied. In the SC SAMs, there is a small but significant excess of faint
galaxies (fainter than $L_{*}$) relative to the CANDELS observations at
redshifts $z\gtrsim 1.5$, but the agreement on the bright end is very good.
The Lu SAMs show better agreement for faint galaxies, but show a possible
deficit of bright galaxies at $z\gtrsim 2$; however, the model predictions are
mostly within the range of values seen across the four different fields (see
Fig. 14). The constraints on the models could be improved by adding
observations from larger-area surveys.
#### 5.2.1 Color-magnitude and color-color relations
Figure 6: The greyscale and overlaid contours show joint distributions of
rest-frame $U-V$ color versus rest $V$ magnitude for the CANDELS observations
(top) and the SC (middle) and Lu (bottom) SAMs, in three redshift bins. Both
the observed and model galaxies are selected to have F160W$<25.5$, where the
CANDELS Wide samples are highly complete. Both SAMs predict colors for low-
luminosity galaxies that are up to $\sim 0.5$ dex redder than the observed
colors, with an increasing discrepancy towards lower redshift.
For both results presented in this sub-section, a magnitude cut of
F160W$<25.5$ has been applied to both the model and observational samples, and
dust attenuation is included in the model magnitudes. The CANDELS Wide samples
used in this study should be highly complete at this magnitude limit. We use
the P12 estimates from CANDELS for both $U-V$ and $V-J$ colors, and the EAZY
estimate for the rest-frame V-band magnitude.
Figure 6 shows the distribution of the number density of galaxies in the
rest-$V$ magnitude versus $U-V$ color plane, in three redshift bins. The
overall distributions in color-magnitude space appear similar between the
models and observations, although some quantitative differences are apparent.
The main population of galaxies in the models is shifted to the red relative
to the observations, by as much as 0.5 dex. The discrepancy increases towards
lower redshifts. The colors in the SC SAM are slightly redder than in the Lu
SAM. The Lu SAM does not show as pronounced a trend between luminosity and
color as the observations or the SC SAM. Both the Lu SAM and, to a lesser
extent, the SC SAM, show a bimodal distribution of colors at faint
luminosities. The population with redder colors is associated with satellite
galaxies, and reflects a well-known tendency of SAMs to produce over-quenched
low-mass satellite galaxies. However, even when comparing only central
galaxies with the observed colors, there is a significant discrepancy in the
sense of the SAMs producing galaxies with colors that are too red. This is due
to the known tendency in these SAMs for low-mass galaxies to form too many
stars at high redshift (and so to have too large an old stellar population)
while being too inefficient at forming stars at low redshift (White et al.,
2015). The effect of dust reddening on colors is also very uncertain.
Figure 7: Distribution of galaxies in the $U-V$ vs. $V-J$ plane for the
CANDELS observations (top row), the SC SAM (middle row), and the Lu SAM
(bottom row). Both the observed and model galaxies are selected to have
F160W$<25.5$, where the CANDELS Wide samples are highly complete. The solid
black line shows the region of this diagram that is typically associated with
quiescent galaxies (Williams et al., 2009).
Figure 7 shows the distribution in rest-frame $U-V$ versus $V-J$ color space
for the CANDELS observational sample and the SC SAM and Lu SAM, in three
redshift bins. This diagram is often used to identify and separate star
forming galaxies and quiescent galaxies, where quiescent galaxies are expected
to be located in the upper left-hand region of the plot, and a nominal
dividing line is shown. Once again we can clearly see that the star forming
population in the models is too red in $U-V$, but is in better agreement in
$V-J$ (which is less sensitive to stellar age). We also see that the standard
dividing line between quiescent and star forming galaxies does not separate
these populations very effectively in the models, perhaps reflecting
shortcomings in the dust modeling, or differences in the ensemble of star
formation histories. See Brennan et al. (2015) for a detailed analysis of the
quiescent fraction in the Santa Cruz SAMs compared with CANDELS
observations, and Pandya et al. (2017) for an analysis of the transition
galaxy population and the quenching rate in the SC SAMs and CANDELS.
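The dividing line drawn in Figure 7 follows the rest-frame UVJ selection of Williams et al. (2009). A minimal sketch of that box is below; the zero-point constants are the commonly quoted values and should be treated as approximate, since different studies adopt slightly different cuts:

```python
def is_uvj_quiescent(u_v, v_j, z):
    """Quiescent-box selection in the U-V vs. V-J plane, in the spirit of
    Williams et al. (2009). The constants are approximate/assumed values."""
    if z < 0.5:
        c = 0.69
    elif z < 1.0:
        c = 0.59
    else:
        c = 0.49  # value commonly used at 1 < z < 2
    # quiescent galaxies sit in the upper-left region of the diagram
    return (u_v > 1.3) and (v_j < 1.6) and (u_v > 0.88 * v_j + c)
```

The diagonal cut is what separates genuinely quiescent galaxies from dusty star-forming ones, which are red in $U-V$ but also red in $V-J$.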
### 5.3 Stellar Mass functions, Star Formation Rate functions and the Star
Formation Sequence
Figure 8: Stellar mass functions divided into redshift bins, for the two SAMs
and UniverseMachine, compared with stellar mass distribution functions derived
from CANDELS. The CANDELS observations are shown for F160W$<25.5$ and are not
corrected for incompleteness. The SAM mass functions are shown for all
galaxies (solid) and for galaxies with F160W$<25.5$ (dashed). The stellar
masses in all models shown here do not account for observational errors. Both
SAMs predict an excess of low-mass galaxies at intermediate redshift
($1\lesssim z\lesssim 3$), and perhaps a deficit of massive galaxies at
$z\gtrsim 1.5$.
Figure 8 shows stellar mass functions for the CANDELS observations, using
stellar mass estimates based on the P12 method, compared with stellar mass
functions from the SC SAM, Lu SAM, and UniverseMachine. We note that
UniverseMachine is an empirical model that was calibrated to match previous
estimates of the galaxy stellar mass function over a wide range of redshifts
(see Behroozi et al., 2019, for details), so this is an indirect way to
compare the SAM predictions and the CANDELS measurements with previous
estimates of the stellar mass function. It is important to note that the
CANDELS observational “stellar mass functions” have not been corrected for
incompleteness and only galaxies with F160W$<25.5$ have been plotted. The SAM
predictions are shown for all galaxies and also with a cut of F160W$<25.5$, as
in the observations. The turnover in the stellar mass function occurs at a
similar mass in the SAMs and in the CANDELS observations. The UniverseMachine
mass functions are in good agreement with the CANDELS estimates, except at the
high mass end. This is due to errors on the stellar masses, which cause an
Eddington bias that makes the high-mass end of the SMF shallower. This is
illustrated in Figure 15, which shows the UniverseMachine predictions with
stellar mass errors included as described in Behroozi et al. (2019). Here it
can be seen that the stellar mass errors can have a significant effect on the
high-mass end of the SMF, especially at high redshift. With the observational
errors included, UniverseMachine is in near-perfect agreement with the CANDELS
stellar mass distributions, as expected. However, estimating the error in
stellar mass and how it depends on other galaxy properties in detail is highly
non-trivial, so we do not include errors on the stellar masses in the SAM mock
catalogs. The SC SAMs agree well with UniverseMachine intrinsic stellar masses
at the high mass end, but systematically overproduce low-mass galaxies (below
the knee in the SMF) at all redshifts, and to an increasing degree at high
redshift. This is a well-known and widespread problem with many current models
of galaxy formation, which is caused by too-early formation of stars in low-
mass galaxies (see Somerville et al., 2015; White et al., 2015, for a detailed
discussion). The Lu SAM shows better agreement with the abundance of low-mass
galaxies, but still overproduces them in the highest redshift bins, and may
underproduce massive galaxies at high redshift. However, it is impossible to
rigorously assess the agreement at the high-mass end for both SAMs due to the
uncertainty in the errors on the observational estimates of the stellar
masses, as well as uncertainties due to field-to-field variance, which are
only crudely indicated here.
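The Eddington bias invoked above (scatter in stellar mass estimates flattening the steep high-mass end of the SMF) is easy to demonstrate: convolving a toy Schechter function with Gaussian scatter in $\log M$ boosts the counts far above the knee while barely changing the power-law regime. The parameter values below are illustrative only, not the paper's fitted values:

```python
import math

def schechter(logm, logm_star=10.8, alpha=-1.3, phi_star=1e-3):
    """Toy Schechter mass function dn/dlogM (illustrative parameters)."""
    x = 10.0 ** (logm - logm_star)
    return math.log(10.0) * phi_star * x ** (alpha + 1) * math.exp(-x)

def convolve_logm(grid, phi, sigma_dex):
    """Convolve a mass function with Gaussian scatter in log M (Eddington bias)."""
    dm = grid[1] - grid[0]
    norm = dm / (sigma_dex * math.sqrt(2.0 * math.pi))
    return [norm * sum(pj * math.exp(-0.5 * ((mi - mj) / sigma_dex) ** 2)
                       for mj, pj in zip(grid, phi))
            for mi in grid]

grid = [8.0 + 0.05 * i for i in range(91)]          # logM = 8.0 ... 12.5
phi = [schechter(m) for m in grid]                  # intrinsic mass function
phi_obs = convolve_logm(grid, phi, sigma_dex=0.2)   # with 0.2 dex mass errors
```

Because the exponential cutoff is so steep, scattering even a small fraction of galaxies upward in mass dominates the counts at the massive end, which is why the "observed" high-mass slope looks shallower than the intrinsic one.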
Figure 9: Star formation rate distribution functions divided into redshift
bins, for the two SAMs and UniverseMachine, compared with SFR functions
derived from CANDELS. The CANDELS observations are shown for F160W$<25.5$ and
are not corrected for incompleteness. The SAM SFR distribution functions are
shown for all galaxies (solid) and for galaxies with F160W$<25.5$ (dashed).
The model SFR predictions do not account for observational errors. The CANDELS
SFR distribution has a large excess of high-SFR galaxies relative to both SAMs
and UniverseMachine predictions.
Figure 9 shows star formation rate distribution functions for the CANDELS
observations, using SFR estimates based on the method of P12, compared with
SFR functions from the SC SAM, Lu SAM, and UniverseMachine (intrinsic values
of SFR, without observational errors, are shown). The SC and Lu SAM SFR have
been averaged over a timescale of 100 Myr, while the P12 SFR estimates are
averaged over 10 Myr; however, we do not expect this to cause large
differences. CANDELS observational SFR functions have not been corrected for
incompleteness and only galaxies with F160W$<25.5$ have been plotted. The SAM
predictions are shown for all galaxies and also with a cut of F160W$<25.5$, as
in the observations. The amplitude and location of the “knee” of the SFR
distribution function agree well among the two SAMs, UniverseMachine, and the
observations. The predictions of both SAMs and UniverseMachine are very
similar for the high-SFR part of the distribution. At $z\gtrsim 1.7$, both
SAMs predict a higher amplitude and steeper distribution below the knee than
UM and the CANDELS
observations. However, the SFR distribution derived from CANDELS is
significantly higher in amplitude above SFR values of $\sim 100\,{\rm
M}_{\odot}\,{\rm yr}^{-1}$ than any of the model predictions, by as much as several
orders of magnitude. Fig. 16 again shows the SFR distribution for CANDELS and
for UniverseMachine predictions with and without observational errors added.
Again, the observational errors cause a small increase in the amplitude at the
high SFR end, but based on the assumed magnitude of the errors on SFR from
Behroozi et al. (2019), this cannot fully account for the discrepancy between
the models and observations.
Figure 10: Greyscale and overlaid contours show the conditional distribution
of star formation rate for a given stellar mass, for CANDELS observations (top
row), the two SAMs (SC; second row, and Lu; third rows), and UniverseMachine
with and without observational errors included (bottom two rows). Relations
from the literature (Speagle et al., 2014; Iyer et al., 2018) are overplotted.
In both SAMs, the predicted SFR are too low at fixed stellar mass,
particularly for low-mass galaxies at $z\lesssim 1$.
Fig. 10 shows the conditional distribution of SFR for a given stellar mass in
several redshift bins, for CANDELS using stellar mass and SFR estimates from
P12, and for both SAMs and for UniverseMachine without and with observational
errors included. SFR-stellar mass sequence relations from the literature
(Speagle et al., 2014; Iyer et al., 2018) are also overplotted. In the first
two redshift bins shown, the CANDELS results are consistent with the
literature sequence, while in the highest redshift bin, there is a population
of massive galaxies that lies above the literature sequence. The median SFR at
a given stellar mass is systematically lower than the literature relations and
CANDELS in both SAMs, more so in the SC SAM, and has a steeper slope, such
that low-mass galaxies lie below the observed SFR sequence. This is a further
reflection of the same problem that caused the colors in the SAMs to be overly
red, namely, SFR is too strongly suppressed in low-mass galaxies in the SAMs.
Interestingly, UniverseMachine also shows a mild steepening in the SFR
sequence at low mass, but to a much lesser extent than the SAMs. Observational
estimates of complete samples of such low-mass galaxies are extremely
challenging to obtain, but these results suggest it may be interesting to do
so.
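The literature star formation sequence overplotted in Fig. 10 from Speagle et al. (2014) is a simple fit in terms of the age of the Universe; a sketch is below (the coefficients are as commonly quoted, and should be treated as approximate):

```python
def speagle2014_logsfr(logmstar, t_gyr):
    """Star-forming main-sequence fit of Speagle et al. (2014):
    log SFR = (0.84 - 0.026 t) log M* - (6.51 - 0.11 t),
    with t the age of the Universe in Gyr and M*, SFR in solar units."""
    return (0.84 - 0.026 * t_gyr) * logmstar - (6.51 - 0.11 * t_gyr)
```

At fixed stellar mass the sequence moves downward with cosmic time, i.e. SFRs were higher at earlier epochs, which is the behavior the redshift panels of Fig. 10 trace.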
Figure 11: Greyscale and overlaid contours show the distribution of log SFR
vs. rest-frame UV magnitude, for CANDELS observations (top row) and the two
SAMs (SC; second row, and Lu; third row). The CANDELS SFR estimates at a given
rest-UV magnitude are significantly higher than the predictions of the SAMs.
In order to try to interpret the very different conclusions we might reach
from comparing the rest-UV luminosity function of galaxies (which shows
excellent agreement between model predictions and observations) and the SFR
function (which shows disagreement between the models predictions and
observations at the level of multiple orders of magnitude), we examine the
relationship between rest-UV luminosity and SFR in the CANDELS observations
(using the P12 estimates of SFR) and in the semi-analytic models (where the
rest-UV magnitude includes dust). Figure 11 shows this relationship, and
reveals that the P12 SFR estimates in CANDELS are significantly higher for a
given rest-UV magnitude than the predictions of the SAMs, especially at high
redshift. This helps to reconcile the different conclusions that we might
reach from comparing the observed and predicted rest-UV luminosity functions
and SFR functions, but raises the question of why the relationship
between rest-UV magnitude and SFR is so different. The main possibilities are
the assumed/estimated dust attenuation and the star formation history. We
investigate the former possibility in the next sub-section.
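For orientation, the standard conversion between rest-UV luminosity and dust-uncorrected SFR is the Kennicutt (1998) calibration, SFR $=1.4\times 10^{-28}\,L_\nu$ with $L_\nu$ in erg s$^{-1}$ Hz$^{-1}$. A sketch starting from absolute AB magnitude is below; this is an illustrative calibration, not the P12 estimator:

```python
import math

def sfr_from_muv(m_uv_ab):
    """Dust-uncorrected SFR [Msun/yr] from rest-frame UV absolute AB magnitude,
    via the Kennicutt (1998) calibration SFR = 1.4e-28 * L_nu."""
    pc_cm = 3.0857e18                           # 1 parsec in cm
    d_cm = 10.0 * pc_cm                         # absolute mags: flux at 10 pc
    f_nu = 3631e-23 * 10.0 ** (-0.4 * m_uv_ab)  # erg s^-1 cm^-2 Hz^-1
    l_nu = 4.0 * math.pi * d_cm ** 2 * f_nu     # erg s^-1 Hz^-1
    return 1.4e-28 * l_nu
```

Under this calibration $M_{\rm UV}=-20$ corresponds to roughly $6\,{\rm M}_\odot\,{\rm yr}^{-1}$ before any dust correction, so differing assumptions about attenuation or star formation history shift the inferred SFR at fixed UV magnitude directly, which is the discrepancy Figure 11 exposes.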
### 5.4 Dust Attenuation
Figure 12: The greyscale and overlaid contours show the joint distribution of
dust attenuation in the rest V-band and rest-V band magnitude, in different
redshift bins, as estimated from SED fitting using the method of P12 (top
row), and as added to the model galaxies in the SC (middle) and Lu (bottom
row) SAMs. It is encouraging that the dust attenuation adopted in the SAMs is
similar to the results from SED fitting.
Dust attenuation is an important ingredient in forward modeling the semi-
analytic models to the observational plane. By the same token, it is a
critical ingredient in SED fitting methods used to estimate physical
properties from galaxy photometry. As described in Section 2.2.2, the
normalization of the relationship between dust optical depth in the V-band and
galaxy properties such as gas surface density and metallicity in the SAMs has
been adjusted empirically to match the observed UV, B, and V-band luminosity
functions. Therefore it is interesting to see how this quantity compares with
the dust attenuation derived from SED fitting to the CANDELS observations
using the method of P12. Figure 12 shows the attenuation in the rest V-band
versus the (attenuated) V-band, in three redshift bins, for the SED-fitting
derived results from CANDELS and for the SC and Lu SAMs. The medians of the
distributions are quite similar. In the two higher redshift bins, the SAMs
show a stronger trend between V-band magnitude and attenuation than the
CANDELS estimates. Both the SAMs assume a fixed dust attenuation curve, while
the SED fitting procedure of P12 adopts a two component dust model.
## 6 Summary and Discussion
The mock lightcones that we have presented here enable a more detailed
comparison with observations than has often been done in the past. We find
extremely good agreement between the observed frame counts predicted by the
SAM and observations for the F435W, F606W, F160W, and K bands, and less
precise but still good agreement ($\sim$0.3 dex or better) for the counts in
the IRAC Ch1 and Ch2 bands. The agreement with observed frame F160W counts split by
redshift is excellent (better than 0.1 dex for the SC SAM where the
observations are highly complete). Agreement between predicted and observed
rest-frame UV and V-band luminosity functions from $0.1<z<3$ is also very
good, everywhere better than $\sim$0.3 dex. Agreement between predicted and
observed rest-frame U-V colors is less successful; the SAM predictions are as
much as $\sim 0.5$ dex too red for low-luminosity galaxies at low redshift
($z\lesssim 1$). The agreement between
the SAM predicted SMF and that derived from CANDELS via SED fitting is good
but notably poorer quantitatively than for the luminosity function comparison.
Even more dramatically, the SAM predictions for the SFR function show very
large discrepancies with the CANDELS SFR functions derived from SED fitting
using the method of P12 (up to several orders of magnitude), presenting a very
different picture from that obtained through comparing the rest-UV luminosity
functions, which are in excellent agreement as noted above. We show that, as
expected, the SFR-$L_{\rm UV}$ relationship predicted by the SAM and derived
in CANDELS via SED fitting are very different, likely reflecting either
different assumptions about dust attenuation and/or the galaxy star formation
histories.
Although semi-analytic models of galaxy formation are known to reproduce many
key observations, the current generation of models is also known to show some
tensions with observations that have been discussed extensively in the
literature. These tensions include 1) models tend to _overproduce_ low-mass
galaxies at intermediate redshifts $1\lesssim z\lesssim 2$ relative to
observations; 2) models tend to _underproduce_ massive galaxies at $z\gtrsim
1$; 3) model galaxies at $1\lesssim z\lesssim 4$ have SFR that tend to be
_lower_ than observational estimates. It can be seen in the compilation presented in
figure 4 and 5 of Somerville & Davé (2015) that these discrepancies are common
not only to most semi-analytic models but also to several large-volume
hydrodynamic simulations. So far, these problems have been overcome only by
explicitly tuning the models to match observational constraints at high
redshift, as in Henriques et al. (2015). Taken together, these discrepancies
suggest a picture in which star formation is _not efficient enough_ in massive
galaxies at intermediate redshifts ($1\lesssim z\lesssim 4$), and is _too
efficient_ in low-mass galaxies in this same redshift range. At the same time,
star formation rates are _too low_ in low-mass galaxies at low redshift
($z\lesssim 1$). This leads to low-mass
galaxies with too large an old stellar population (and too few young stars),
which is likely the reason that the predicted colors in the models are too red
for low-luminosity galaxies.
There of course remain many uncertainties in the physical processes that are
implemented in these models, as discussed extensively in recent reviews on the
subject of physics-based cosmological models of galaxy formation (Somerville &
Davé, 2015; Naab & Ostriker, 2017). Some of the most important processes
include the efficiency with which stellar driven winds can transport mass,
energy, momentum, and metals out of the ISM and into the CGM and IGM, and the
timescale on which ejected material returns to galactic halos or to the ISM.
The treatment of these processes in cosmological simulations and SAMs has been
highly phenomenological up until now. But recently, simulations in which the
Sedov-Taylor blastwaves from supernovae explosions are explicitly resolved (or
semi-resolved) have been used to extract the emergent wind launching
characteristics (Kim et al., 2020b, a, Pandya et al. in prep). Much progress
can be made by using these wind scalings as input into larger volume
cosmological simulations, and we can hope that this will help to resolve some
of the tension with observations seen in this study and others. Other key
areas where development effort is needed on the theoretical modeling side are
black hole formation, growth, and feedback, and the modeling of dust
attenuation and emission.
However, these past conclusions have mainly been reached via comparison of the
theoretical models and simulations with observations in the “theoretical
plane”, i.e., by comparing the predicted physical properties such as stellar
mass and SFR from simulations with _estimates_ of these physical properties
obtained from SED fitting to observations. This approach has many advantages,
including greater ease of interpretation in terms of intuitive physical
quantities, and greater ease in linking populations across different epochs.
However, it is very important to keep in mind that these estimates still carry
significant uncertainties (see e.g. Conroy, 2013; Leja et al., 2017, 2019).
Moreover, because of the complexity of the procedure used to obtain estimates
of these derived quantities, it is very difficult to accurately and completely
quantify their uncertainties, which are needed for a rigorous statistical
assessment of the “goodness of fit” of any theoretical model. The error budget
should include contributions from the systematic uncertainties in deriving
physical quantities such as stellar mass or SFR from SED fitting, as well as
errors due to photometric redshift errors, photometric noise, and field-to-
field variance. This detailed error budget has not been computed for
quantities commonly used to calibrate models, such as stellar mass functions
and SFR densities.
We can get a first order sense for the possible systematic errors in the
estimates of physical quantities by comparing the results from different
methods. For stellar mass estimates using the zphot code compared with the P12
method, which incorporates a more sophisticated prior on star formation
history, we find systematic differences between the two methods of $0.2-0.5$
dex, and a dispersion of $0.1-0.5$ dex, with dependencies on stellar mass and
redshift (similar to the findings of Leja et al. (2019)). For SFR estimates,
we find systematic errors for the P12 estimates compared with the Barro et al.
(2019) estimates of typically at least 0.3-0.5 dex up to 2 dex, and a scatter
of 0.5-1 dex. Both the systematic and random differences show dependencies on
stellar mass and redshift. The zphot-based SFR estimates show even larger
differences compared with the Barro et al. (2019) SFR estimates, with
different dependencies on stellar mass than the P12 SFR estimates. Clearly,
SFR estimates derived from rest-UV and available IR photometry with different
SED fitting approaches still show very significant discrepancies, which must
be better understood.
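The systematic-offset and scatter numbers quoted above can be computed for any pair of estimators. A minimal sketch using a robust (MAD-based) scatter in dex is below; the function name and toy inputs are hypothetical:

```python
import statistics

def offset_and_scatter_dex(log_a, log_b):
    """Median systematic offset and robust scatter (in dex) between two
    log-quantity estimates (e.g. stellar mass or SFR) for the same galaxies."""
    diffs = [a - b for a, b in zip(log_a, log_b)]
    med = statistics.median(diffs)
    mad = statistics.median([abs(d - med) for d in diffs])
    return med, 1.4826 * mad   # scale MAD to an equivalent Gaussian sigma

# hypothetical example: two stellar mass estimators for three galaxies
off, scat = offset_and_scatter_dex([10.0, 10.3, 10.6], [9.8, 10.0, 10.5])
```

Binning such offsets in stellar mass and redshift, as done above, is what reveals the mass- and redshift-dependent systematics between the P12, zphot, and Barro et al. (2019) estimates.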
An alternative approach is to _forward model_ the simulations into the
observational plane. This of course requires additional modeling steps, and
the inclusion of assumptions regarding additional ingredients such as stellar
population models and dust. However, estimating physical properties from SED
fitting also contains similar assumptions, and in some cases one has more
information about the conditions in the simulated galaxies than one does for
the real galaxies. An additional advantage to working in the observational
plane is that it is much easier to include modeling of observational errors
and selection effects in this plane. We advocate carrying out comparison in
both planes (theoretical and observational), as any differences in conclusions
may illuminate problems. One of the important results of this work is that a
comparison between theoretical predictions and observations in the
“theoretical plane” of stellar mass or SFR distribution functions versus the
“observational plane” of rest-UV and V-band luminosity functions appears to
yield quantitatively different assessments of the goodness of fit of models
compared with observations. (We note that it is currently impossible to make
rigorous statements about model goodness of fit due to the unavailability of
complete, accurate error budgets, as discussed above. It may be that if the
uncertainties on the observational quantities being used as constraints were
properly accounted for in both cases, this difference would not be present.)
One of our main long term goals is to work towards a full forward modeling
pipeline for multi-wavelength galaxy surveys. Over the next decade, wide area
surveys from DESI, VRO, Euclid, the Nancy Grace Roman Space Telescope, 4MOST,
and other facilities will be carried out. We can use the legacy observations
from surveys such as CANDELS, to build a foundation for interpreting these new
surveys. What we have shown here is that the current generation of semi-
analytic models produce decent broad agreement with key properties of galaxy
evolution as represented by CANDELS over the redshift range $0.5\lesssim
z\lesssim 3$. It has been shown elsewhere
that these models produce similar results to those of numerical cosmological
simulations and other semi-analytic models (Somerville & Davé, 2015), and that
they are also in agreement with higher redshift observations of galaxy
populations (Yung et al., 2019a, b), the reionization history (Yung et al.,
2020), and observational probes of the cold gas phase in galaxies (Popping et
al., 2014, 2019). While there are certainly remaining tensions with
observations, as seen here and also in e.g. Popping et al. (2019), there is
promising ongoing work to continue to improve the realism of the treatment of
physical processes in SAMs (e.g. Pandya et al., 2020). In work in progress, we
are using this framework to create similar mock observations for future
planned surveys with the James Webb Space Telescope and the Nancy Grace Roman
Space Telescope (L. Y. A. Yung et al. in prep). SAMs coupled with lightcones
extracted from large volume N-body simulations have recently been used to
create a 2 sq. deg. lightcone from $0<z<10$ (Yang et al., 2020, Yung et al. in
prep). In order to create mock surveys for even larger areas — tens to
hundreds of square degrees — that will be probed by the projects mentioned
above, it is likely that new, even more computationally efficient techniques
will need to be developed, perhaps enabled by machine learning based tools.
## 7 Conclusions
In this paper, we presented mock lightcones that were custom created to aid in
the interpretation of observations from the CANDELS program. We populated
these lightcones with galaxies using two different semi-analytic modeling
codes, and the empirical model UniverseMachine. In addition, we presented
specially curated “theory friendly” catalogs for the CANDELS observations,
which include a selection of the observed and rest-frame photometry as well as
estimates of physical galaxy properties such as redshift, stellar mass, and
star formation rate. We make all data products available through a web-based
data hub that allows users to preview and download the data.
We showed comparisons between the mock lightcones and the CANDELS observations
for a selection of key quantities in the “observational plane”, including
observed frame counts, rest-frame luminosity functions, color-magnitude and
color-color distributions. We also compared our model predictions with
physical quantities estimated via SED-fitting from the CANDELS photometry,
such as stellar mass functions, SFR distribution functions, the stellar mass
vs. SFR relation, and dust attenuation. Although there are some tensions
between the theoretical predictions and the observations, we conclude that
these mock catalogs reproduce the observational estimates accurately enough to
be useful for interpreting current observations and making predictions for
future ones.
## Acknowledgements
We thank the anonymous referee for helpful comments that improved the
manuscript. We thank the Flatiron Institute for providing computing resources
and data access. We warmly thank Dylan Simon, Elizabeth Lovero, and Austen
Gabrielpillai for building Flathub. We thank Yotam Cohen for useful comments
on the Flathub datahub. RSS is supported by the Simons Foundation. CP is
supported by the Canadian Space Agency under a contract with NRC Herzberg
Astronomy and Astrophysics. This work makes use of observations taken by the
CANDELS Multi-Cycle Treasury Program with the NASA/ESA HST, which is operated
by the Association of Universities for Research in Astronomy, Inc., under NASA
contract NAS5-26555. This work is based in part on observations made with the
Spitzer Space Telescope, which was operated by the Jet Propulsion Laboratory,
California Institute of Technology under a contract with NASA.
## Data Availability
The data underlying this article are available from the Flatiron Institute
Data Exploration and Comparison Hub (Flathub), at
http://flathub.flatironinstitute.org/group/candels.
## Appendix A Supplementary Figures
In this appendix we show results that supplement those in the main text. Figs. 13 and 14 show the rest-UV and V-band luminosity functions for the Lu SAMs compared with the CANDELS observations. Figs. 15 and 16 show the stellar mass function and SFRF from UniverseMachine with observational errors included. For details please see the main text.
Figure 13: Luminosity functions in the rest-frame UV ($1500$ Å) divided into redshift bins, for the Lu SAM, compared with the corresponding distribution from CANDELS (black symbols show the mean over all fields; shaded areas show the minimum and maximum value in each bin over the four fields). Dashed lines show the intrinsic luminosity functions with no dust attenuation; solid lines show the model predictions with dust attenuation included. Dotted lines show the dust attenuated model predictions with a cut of F160W$<25.5$ applied (similar to the observations). The Lu SAM predictions agree with the observed distributions quite well in the regime where the observations are highly complete.

Figure 14: Luminosity functions in the rest-frame V-band divided into redshift bins for the Lu SAM and CANDELS observations. Key is as in Fig. 13. The Lu SAM predictions agree with the observed rest-frame V-band magnitude distributions fairly well in the regime where the observations are highly complete.

Figure 15: Stellar mass functions divided into redshift bins, for UniverseMachine, compared with stellar mass distribution functions derived from CANDELS. Solid lines show the results for the intrinsic (error-free) stellar mass predictions in UniverseMachine, while dot-dashed lines show the predictions after modeling the expected errors on the stellar masses. Errors in stellar mass estimates can be significant, especially at high redshift, and lead to an Eddington bias that impacts the high-mass end of the distribution.

Figure 16: Star formation rate functions divided into redshift bins, for UniverseMachine, compared with SFR functions derived from CANDELS. Solid lines show the results for the intrinsic (error-free) SFR predictions in UniverseMachine, while dot-dashed lines show the predictions after modeling the expected errors on the SFR estimates.
## References
* Barro et al. (2019) Barro G., et al., 2019, ApJS, 243, 22
* Beckwith et al. (2006) Beckwith S. V. W., et al., 2006, AJ, 132, 1729
* Behroozi et al. (2019) Behroozi P., Wechsler R. H., Hearin A. P., Conroy C., 2019, MNRAS, 488, 3143
* Behroozi et al. (2020) Behroozi P., et al., 2020, arXiv e-prints, arXiv:2007.04988
* Behroozi et al. (2013a) Behroozi P. S., Wechsler R. H., Wu H.-Y., Busha M. T., Klypin A. A., Primack J. R., 2013a, ApJ, 763, 18
* Behroozi et al. (2013b) —, 2013b, ApJ, 763, 18
* Binney & Tremaine (1987) Binney J., Tremaine S., 1987, Galactic Dynamics. Princeton University Press
* Boylan-Kolchin et al. (2008) Boylan-Kolchin M., Ma C.-P., Quataert E., 2008, MNRAS, 383, 93
* Brammer et al. (2010) Brammer G. B., van Dokkum P. G., Coppi P., 2010, EAZY: A Fast, Public Photometric Redshift Code
* Brennan et al. (2015) Brennan R., et al., 2015, MNRAS, 451, 2933
* Bruzual & Charlot (2003) Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
* Bryan & Norman (1998) Bryan G. L., Norman M. L., 1998, ApJ, 495, 80
* Calzetti et al. (2000) Calzetti D., Armus L., Bohlin R. C., Kinney A. L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682
* Chabrier (2003) Chabrier G., 2003, PASP, 115, 763
* Chary & Elbaz (2001) Chary R., Elbaz D., 2001, ApJ, 556, 562
* Cirasuolo et al. (2007) Cirasuolo M., et al., 2007, MNRAS, 380, 585
* Conroy (2013) Conroy C., 2013, ARA&A, 51, 393
* Croton et al. (2006) Croton D. J., et al., 2006, MNRAS, 365, 11
* Dahlen et al. (2013) Dahlen T., Mobasher B., Faber S. M., Ferguson H. C., Barro G., Finkelstein S. L., 2013, ApJ, 775, 93
* Fontana et al. (2000) Fontana A., D’Odorico S., Poli F., Giallongo E., Arnouts S., Cristiani S., Moorwood A., Saracco P., 2000, AJ, 120, 2206
* Galametz et al. (2013) Galametz A., et al., 2013, ApJS, 206, 10
* Garrison-Kimmel et al. (2017) Garrison-Kimmel S., et al., 2017, MNRAS, 471, 1709
* Giavalisco et al. (2004) Giavalisco M., et al., 2004, ApJ, 600, L93
* Grogin et al. (2011) Grogin N. A., et al., 2011, ApJS, 197, 35
* Guo et al. (2013) Guo Y., et al., 2013, ApJS, 207, 24
* Henriques et al. (2015) Henriques B. M. B., White S. D. M., Thomas P. A., Angulo R., Guo Q., Lemson G., Springel V., Overzier R., 2015, MNRAS, 451, 2663
* Iyer et al. (2018) Iyer K., et al., 2018, ApJ, 866, 120
* Jiang et al. (2020) Jiang F., Dekel A., Freundlich J., van den Bosch F. C., Green S. B., Hopkins P. F., Benson A., Du X., 2020, arXiv e-prints, arXiv:2005.05974
* Kim et al. (2020a) Kim C.-G., Ostriker E. C., Fielding D. B., Smith M. C., Bryan G. L., Somerville R. S., Forbes J. C., Genel S., Hernquist L., 2020a, ApJ, 903, L34
* Kim et al. (2020b) Kim C.-G., Ostriker E. C., Somerville R. S., Bryan G. L., Fielding D. B., Forbes J. C., Hayward C. C., Hernquist L., Pandya V., 2020b, ApJ, 900, 61
* Klypin et al. (2016) Klypin A., Yepes G., Gottlöber S., Prada F., Hess S., 2016, MNRAS, 457, 4340
* Koekemoer et al. (2011) Koekemoer A. M., et al., 2011, ApJS, 197, 36
* Leja et al. (2017) Leja J., Johnson B. D., Conroy C., van Dokkum P. G., Byler N., 2017, ApJ, 837, 170
* Leja et al. (2019) Leja J., et al., 2019, ApJ, 877, 140
* Lu et al. (2011) Lu Y., Mo H. J., Weinberg M. D., Katz N., 2011, MNRAS, 416, 1949
* Lu et al. (2014) Lu Y., et al., 2014, ApJ, 795, 123
* Merlin et al. (2019) Merlin E., et al., 2019, MNRAS, 490, 3309
* Mobasher et al. (2015) Mobasher B., et al., 2015, ApJ, 808, 101
* Naab & Ostriker (2017) Naab T., Ostriker J. P., 2017, ARA&A, 55, 59
* Navarro et al. (1996) Navarro J. F., Frenk C. S., White S. D. M., 1996, ApJ, 462, 563
* Nayyeri et al. (2017) Nayyeri H., et al., 2017, ApJS, 228, 7
* Okamoto et al. (2008) Okamoto T., Gao L., Theuns T., 2008, MNRAS, 390, 920
* Pacifici et al. (2012) Pacifici C., Charlot S., Blaizot J., Brinchmann J., 2012, MNRAS, 421, 2002
* Pandya et al. (2020) Pandya V., Somerville R. S., Anglés-Alcázar D., Hayward C. C., Bryan G. L., Fielding D. B., Forbes J. C., Burkhart B., Genel S., Hernquist L., Kim C.-G., Tonnesen S., Starkenburg T., 2020, arXiv e-prints, arXiv:2006.16317
* Pandya et al. (2017) Pandya V., et al., 2017, MNRAS, 472, 2054
* Popping et al. (2014) Popping G., Somerville R. S., Trager S. C., 2014, MNRAS, 442, 2398
* Popping et al. (2019) Popping G., et al., 2019, ApJ, 882, 137
* Porter et al. (2014) Porter L. A., Somerville R. S., Primack J. R., Johansson P. H., 2014, MNRAS, 444, 942
* Rodríguez-Puebla et al. (2016) Rodríguez-Puebla A., Behroozi P., Primack J., Klypin A., Lee C., Hellinger D., 2016, MNRAS, 462, 893
* Santini et al. (2015) Santini P., et al., 2015, ApJ, 801, 97
* Scoville et al. (2007) Scoville N., et al., 2007, ApJS, 172, 1
* Somerville & Davé (2015) Somerville R. S., Davé R., 2015, ARA&A, 53, 31
* Somerville et al. (2012) Somerville R. S., Gilmore R. C., Primack J. R., Domínguez A., 2012, MNRAS, 423, 1992
* Somerville et al. (2008) Somerville R. S., Hopkins P. F., Cox T. J., Robertson B. E., Hernquist L., 2008, MNRAS, 391, 481
* Somerville et al. (2015) Somerville R. S., Popping G., Trager S. C., 2015, MNRAS, 453, 4337
* Speagle et al. (2014) Speagle J. S., Steinhardt C. L., Capak P. L., Silverman J. D., 2014, ApJS, 214, 15
* Stefanon et al. (2017) Stefanon M., et al., 2017, ApJS, 229, 32
* Taylor & Babul (2001) Taylor J. E., Babul A., 2001, ApJ, 559, 716
* van den Bosch & Ogiya (2018) van den Bosch F. C., Ogiya G., 2018, MNRAS, 475, 4066
* van den Bosch et al. (2018) van den Bosch F. C., Ogiya G., Hahn O., Burkert A., 2018, MNRAS, 474, 3043
* Wechsler & Tinker (2018) Wechsler R. H., Tinker J. L., 2018, ARA&A, 56, 435
* White et al. (2015) White C. E., Somerville R. S., Ferguson H. C., 2015, ApJ, 799, 201
* Williams et al. (2009) Williams R. J., Quadri R. F., Franx M., van Dokkum P., Labbé I., 2009, ApJ, 691, 1879
* Yang et al. (2020) Yang S., Somerville R. S., Pullen A. R., Popping G., Breysse P. C., Maniyar A. S., 2020, arXiv e-prints, arXiv:2009.11933
* Yung et al. (2019a) Yung L. Y. A., Somerville R. S., Finkelstein S. L., Popping G., Davé R., 2019a, MNRAS, 483, 2983
* Yung et al. (2020) Yung L. Y. A., Somerville R. S., Finkelstein S. L., Popping G., Davé R., Venkatesan A., Behroozi P., Ferguson H. C., 2020, MNRAS, 496, 4574
* Yung et al. (2019b) Yung L. Y. A., Somerville R. S., Popping G., Finkelstein S. L., Ferguson H. C., Davé R., 2019b, MNRAS, 490, 2855
# The mass radius of the proton
Dmitri E. Kharzeev

Center for Nuclear Theory, Department of Physics and Astronomy, Stony Brook University, New York 11794-3800, USA

Department of Physics and RIKEN-BNL Research Center, Brookhaven National Laboratory, Upton, New York 11973-5000, USA
###### Abstract
The mass radius is a fundamental property of the proton that so far has not
been determined from experiment. Here we show that the mass radius of the
proton can be rigorously defined through the formfactor of the trace of the
energy-momentum tensor (EMT) of QCD in the weak gravitational field
approximation, as appropriate for this problem. We then demonstrate that the
scale anomaly of QCD enables the extraction of the formfactor of the trace of
the EMT from the data on threshold photoproduction of $J/\psi$ and $\Upsilon$
quarkonia, and use the recent GlueX Collaboration data to extract the r.m.s.
mass radius of the proton ${\rm R_{m}=0.55\pm 0.03\ fm}$. The extracted mass
radius is significantly smaller than the r.m.s. charge radius of the proton
${\rm R_{C}=0.8409\pm 0.0004\ fm}$. We attribute this difference to the
interplay of asymptotic freedom and spontaneous breaking of chiral symmetry in
QCD, and outline future measurements needed to determine the mass radius more
precisely.
###### pacs:
12.38.Aw;12.40.Yx;13.60.Le
## I Introduction
The mass distribution is a fundamental property of a physical object. Yet,
while a lot of information is available about the charge distribution inside
the proton, nothing is known at present about its mass radius. In astrophysics
and cosmology, the study of the mass distribution in galaxies has led to
establishing the presence of Dark Matter that is believed to constitute about
$85\%$ of the total mass of matter in the Universe. Drawing an analogy to the
physics of the proton, electron scattering experiments reveal the spatial
distribution of quarks (matter visible to photons), but do not directly
constrain the spatial distribution of gluons – “dark matter of QCD” that is
not visible to photons. One may thus fully expect that an experimental
determination of the mass distribution would constitute a big advance in the
understanding of the proton structure.
Because of the extreme weakness of the gravitational field created by a single
proton, its direct measurement at short distances is clearly impossible.
Likewise, a study of graviton–proton scattering is off limits for present
experiments. Does this mean that the mass radius of the proton cannot be
measured? We believe that the possibility to measure the mass distribution
inside the proton is provided by the scale anomaly, reflecting the asymptotic
freedom of QCD Gross and Wilczek (1973); Politzer (1973). In the chiral limit
of massless quarks, the scale anomaly Ellis (1970); Chanowitz and Ellis (1972)
expresses the trace of the energy-momentum tensor (EMT) of QCD in terms of the
scalar gluon operator Collins et al. (1977); Nielsen (1977). It has been
proposed Kharzeev (1996); Kharzeev et al. (1999) that the matrix elements of
this operator (which is largely responsible for the mass of the proton) can be
extracted from the photoproduction of heavy quarkonia near the threshold.
Below, we will show how the formfactor of the scalar gluon operator can be
determined from the data on photoproduction of $J/\psi$ near the
threshold recently reported by the GlueX Collaboration Ali et al. (2019). We
will then use this formfactor to extract the mass radius of the proton from
the GlueX data.
## II The mass distribution and gravitational formfactors
As a first step, let us review how Newton’s law of gravitation emerges
from the Einstein theory Einstein (1915) in the weak gravitational field, non-
relativistic approximation. The Einstein equation reads
$\displaystyle R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi G\ T_{\mu\nu},$ (1)
where $g_{\mu\nu}$ is the metric tensor, $R_{\mu\nu}$ is the Ricci curvature
tensor, $R$ is the scalar curvature (Ricci scalar), $G$ is Newton’s
constant, and $T_{\mu\nu}$ is the EMT. We have omitted the cosmological
constant term that is not relevant for our present discussion, and put the
speed of light $c=1$.
Taking the trace w.r.t. the metric on both sides of (1), we get
$\displaystyle-R=8\pi G\ T,$ (2)
where $T\equiv T_{\mu}^{\mu}$ is the trace of the EMT. This equation relates
the trace of the EMT to the scalar curvature of space-time – in fact, this
relation first appeared in the extension of Nordström’s scalar gravity theory
Nordstrom (1912) proposed by Einstein and Fokker Einstein and Fokker (1914);
see Norton (1984) for a historical overview. In classical Maxwell
electrodynamics, $T_{\rm EM}=0$ without massive charges, so the
electromagnetic field does not curve space-time, and does not gravitate.
Moreover, light does not bend in the presence of massive bodies if they induce
only a scalar curvature – and thus the observation of light bending has ruled
out the scalar gravity in favor of the tensor one (1) proposed by Einstein
Einstein (1915) in 1915.
Nevertheless, in weak gravitational fields the trace of the EMT and the temporal
component of the EMT $T_{0}^{0}$ coincide. Therefore the distribution of mass
can be obtained from the formfactor of the trace of the EMT – we will call it
for brevity the “scalar gravitational formfactor”, because this would be the
only formfactor in Nordström’s scalar gravity Nordstrom (1912). To show the
equivalence of the distributions of $T$ and $T_{0}^{0}$ in a weak
gravitational field, let us review how the Newtonian limit emerges from the
Einstein theory, see e.g. Landau and Lifshitz (1951).
In the non-relativistic limit
$\displaystyle g_{00}=1+2\varphi,$ (3)
where $\varphi$ is the gravitational field potential, and the EMT is given by
$\displaystyle T_{\mu}^{\nu}=\mu\ u_{\mu}u^{\nu},$ (4)
where $\mu$ is the mass density, and the 4-velocity of the non-relativistic
body can be chosen as $u_{0}=u^{0}=1$, with all spatial components equal to
zero, $u_{i}=0$. Therefore, in this limit
$\displaystyle T_{0}^{0}=\mu;\hskip 28.45274ptT\equiv
T_{\mu}^{\mu}=T_{0}^{0}=\mu,$ (5)
so the distribution of mass and the distribution of the trace of the EMT
indeed coincide.
The equations (1) and (2) lead to
$\displaystyle R_{\mu\nu}=8\pi G(T_{\mu\nu}-\frac{1}{2}g_{\mu\nu}T);$ (6)
the temporal component $\mu=\nu=0$ of this equation is
$\displaystyle R_{0}^{0}=4\pi G\mu,$ (7)
and all other components vanish. For the metric (3), we get
$\displaystyle R_{0}^{0}=\frac{\partial^{2}\varphi}{\partial x^{\mu
2}}\equiv\Delta\varphi,$ (8)
and thus (7) yields the equation describing the gravitational field in
Newtonian mechanics:
$\displaystyle\Delta\varphi=4\pi G\mu.$ (9)
Its solution gives the gravitational field potential created by a distribution
of mass $\mu(R)$:
$\displaystyle\varphi=-G\ \int\frac{\mu(R)\ dV}{R}.$ (10)
For a particle of mass $M=\int\mu\ dV$, the total potential is
$\varphi=-GM/R$, and the force acting on a probe mass $m$ is $F_{g}=-m\
\partial\varphi/\partial R$, which yields Newton’s law of gravity:
$\displaystyle F_{g}=-G\ \frac{mM}{R^{2}}.$ (11)
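The steps above can be checked numerically. The following sketch (illustrative Python; $G=1$ and a Gaussian density are arbitrary choices, not taken from the text) integrates (10) for a spherically symmetric density and confirms that, well outside the distribution, the potential reduces to the point-mass form $-GM/R$:

```python
import math

# Numerical check of eqs. (10)-(11): for a localized, spherically symmetric
# mass density, the potential outside the distribution is -G M / R.
# Units with G = 1; the Gaussian density is an arbitrary illustrative choice.
G = 1.0

def mu(r):
    return math.exp(-r**2)  # Gaussian mass density (illustrative)

dr = 1e-3
rs = [i * dr for i in range(1, 10000)]  # radial grid up to r = 10
M = sum(4 * math.pi * r**2 * mu(r) * dr for r in rs)  # total mass

def phi(R):
    # shell theorem: shells with r <= R act as a point mass at the origin;
    # shells with r > R each contribute -G * (shell mass) / r
    inner = sum(4 * math.pi * r**2 * mu(r) * dr for r in rs if r <= R)
    outer = sum(4 * math.pi * r**2 * mu(r) * dr / r for r in rs if r > R)
    return -G * (inner / R + outer)

# at R = 5, far outside the Gaussian core, phi(R) agrees with -G M / R
```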
The purpose of reviewing this textbook derivation was to show that in the weak
gravitational field, non-relativistic, limit of gravity the distribution of
mass and the distribution of the trace of the EMT are identical, see (5).
Therefore, to measure the mass distribution of a microscopic object with a
weak gravitational field, instead of utilizing graviton scattering, we can
measure its scalar gravitational formfactor (i.e. the formfactor of the trace
of the EMT).
We can arrive at the same conclusion by comparing the formfactor of the EMT
with the scalar gravitational formfactor. Let us consider first the formfactor
of the EMT for a proton (spin $1/2$ particle of mass $M$) Pais and Epstein
(1949); Pagels (1966):
$\displaystyle\langle{\bf p}_{1}|T_{\mu\nu}|{\bf p}_{2}\rangle$
$\displaystyle=$ $\displaystyle\left(\frac{M^{2}}{p_{01}\
p_{02}}\right)^{1/2}\frac{1}{4M}\
{\bar{u}}(p_{1},s_{1})\Big{[}G_{1}(q^{2})(p_{\mu}\gamma_{\nu}+p_{\nu}\gamma_{\mu})+G_{2}(q^{2})\frac{p_{\mu}p_{\nu}}{M}+$
$\displaystyle{}+G_{3}(q^{2})\frac{(q^{2}g_{\mu\nu}-q_{\mu}q_{\nu})}{M}\Big{]}u(p_{2},s_{2}),$
where $p_{\mu}=(p_{1}+p_{2})_{\mu}$, $q_{\mu}=(p_{1}-p_{2})_{\mu}$,
$p_{1}^{2}=p_{2}^{2}=M^{2}$, and the four-component spinor $u(p,s)$ satisfies
the free Dirac equation $({\hat{p}}-M)u(p,s)=0$ and is normalized according to
$\sum_{s}{\bar{u}}(p,s)u(p,s)=({\hat{p}}+M)/2M$. The formfactors
$G_{i}(q^{2})$ completely describe the mechanical structure of the spin $1/2$
particle.
The energy-momentum conservation
$\displaystyle\partial^{\mu}T_{\mu\nu}=0$ (12)
implies
$\displaystyle q^{\mu}\langle{\bf p}_{1}|T_{\mu\nu}|{\bf p}_{2}\rangle=0;$
(13)
it is easy to check that (II) satisfies the condition (13) for on-shell
nucleons that obey the free Dirac equation; the symmetry of (II) w.r.t. the
interchange of $p_{1}$ and $p_{2}$ is necessary for (13) to hold.
In the limit of vanishing momentum transfer $q_{\mu}\to 0$, the forward matrix
element of the energy-momentum tensor takes the form
$\displaystyle\langle{\bf p}|T_{\mu\nu}|{\bf
p}\rangle=\left(\frac{M^{2}}{p_{0}^{2}}\right)^{1/2}\ {\bar{u}}(p,s)u(p,s)\
\frac{p_{\mu}p_{\nu}}{M^{2}}\left[G_{1}(0)+G_{2}(0)\right],$ (14)
characterized by the $q^{2}=0$ values of the formfactors $G_{1}(q^{2})$ and
$G_{2}(q^{2})$. The Hamiltonian $H$ is given by the temporal component of the
energy-momentum tensor:
$\displaystyle H=\int d^{3}x\ T_{00}(x);$ (15)
therefore, in the rest frame of the particle, the forward matrix element of
$T_{00}$ should yield the mass of the particle:
$\displaystyle\langle{\bf p}=0|T_{00}|{\bf p}=0\rangle=M.$ (16)
Imposing this constraint on (14), we get the condition
$\displaystyle G_{1}(0)+G_{2}(0)=M.$ (17)
The derivation of this condition is completely analogous to the derivation of
the condition on the electromagnetic formfactor $F(q^{2}=0)=e$, where $e$ is
the electric charge of the particle.
Let us now consider the matrix element of the trace of the EMT $T\equiv
T_{\mu}^{\mu}$; from (II) we find
$\displaystyle\langle{\bf p}_{1}|T|{\bf
p}_{2}\rangle=\left(\frac{M^{2}}{p_{01}\ p_{02}}\right)^{1/2}\
{\bar{u}}(p_{1},s_{1})u(p_{2},s_{2})\ G(q^{2}),$ (18)
with a new formfactor
$\displaystyle
G(q^{2})=G_{1}(q^{2})+G_{2}(q^{2})\left(1-\frac{q^{2}}{4M^{2}}\right)+G_{3}(q^{2})\frac{3q^{2}}{4M^{2}}.$
(19)
In the rest frame of the particle,
$\displaystyle\langle{\bf p}=0|T|{\bf p}=0\rangle=\langle{\bf p}=0|T_{00}|{\bf
p}=0\rangle=M,$ (20)
therefore
$\displaystyle G(0)=M$ (21)
which is obviously consistent with (19) and (17).
## III The mass radius
The charge radius of the proton is usually defined Feynman (1973) through the
derivative of its electromagnetic formfactor w.r.t. the momentum transfer
$t=q^{2}\equiv-Q^{2}$ evaluated at $t=0$:
$\displaystyle\langle R_{\rm C}^{2}\rangle=-6\ \frac{dG_{\rm
EM}}{dQ^{2}}\Big{|}_{Q^{2}=0}.$ (22)
To enable a direct comparison to the charge radius, we propose to define the
mass radius analogously through the formfactor of mass density $T_{00}$ given
by (II). Let us compare the derivatives of $T_{00}$ and of the scalar
gravitational formfactor (18) w.r.t. $t$ at $t=0$. Because the formfactor of
$T_{00}$ depends on the reference frame (strictly speaking, we can interpret
it in terms of mass distribution only in the rest frame of the proton), we
have to specify it.
It is natural to choose the Breit frame in which ${\bf p}_{2}=\frac{1}{2}{\bf
q}$, ${\bf p}_{1}=-\frac{1}{2}{\bf q}$. Evaluating the derivatives of the
formfactors of $T_{00}$ and of the trace of the EMT w.r.t. $t$ at $t=0$, we
find that they differ by terms of order $G_{i}(0)/(4M^{2})$, to be compared
with $dG_{i}/dt|_{t=0}\equiv G_{i}(0)/m_{i}^{2}$, which depend on the
compositeness scales $m_{i}^{2}$ of the corresponding formfactors.
This difference results from the frame dependence of the formfactor of
$T_{00}$. Indeed, the relativistic $\gamma$ factor for a nucleon moving with
momentum ${\bf p}=\frac{1}{2}{\bf q}$ is
$\gamma=E/M=\sqrt{M^{2}+(q^{2}/4)}/M=\sqrt{1+q^{2}/(4M^{2})}$, so for
$q\equiv|{\bf q}|\simeq m_{i}$, the nucleon is Lorentz-contracted by
$1/\gamma\simeq(1+m_{i}^{2}/(4M^{2}))^{-1/2}$. In the non-relativistic limit,
when $2M\gg m_{i}$, this is a negligible effect – the size $R\sim 1/m_{i}$ of
a massive non-relativistic body is much bigger than its Compton wavelength
$\lambda\sim 1/M$. Because of this, the mass distribution can be defined
through the scalar gravitational formfactor (19) instead of the formfactor of
$T_{00}$:
$\displaystyle\langle R_{\rm m}^{2}\rangle=\frac{6}{M}\
\frac{dG}{dt}\Big{|}_{t=0},$ (23)
where we took into account the normalization (21). This conclusion agrees with
the arguments given above on the basis of weak gravitational field limit of
the Einstein equation. Later we will verify that the compositeness scale of
the scalar gravitational formfactor $m_{s}^{2}\equiv G(0)/(dG/dt|_{t=0})$
extracted from the experimental data is indeed much smaller than $4M^{2}$.
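The definition (23) can be illustrated with a short numerical sketch (Python; the dipole parametrization $G(t)=M/(1-t/m_{s}^{2})^{2}$ and the value $m_{s}=1$ GeV below are assumptions made purely for illustration, not results of this analysis):

```python
import math

# Sketch of eq. (23): <R_m^2> = (6/M) dG/dt |_{t=0}, evaluated for an
# assumed dipole formfactor. The dipole form and m_s are illustrative.
hbar_c = 0.1973  # GeV*fm, conversion factor
M = 0.938        # GeV, proton mass
m_s = 1.0        # GeV, illustrative compositeness scale

def G(t):
    # dipole parametrization of the scalar gravitational formfactor
    return M / (1 - t / m_s**2) ** 2

# numerical slope at t = 0 vs the analytic value dG/dt|_0 = 2 M / m_s^2
h = 1e-6
slope = (G(h) - G(-h)) / (2 * h)
R_m_sq = 6 * slope / M            # <R_m^2> in GeV^-2
R_m = math.sqrt(R_m_sq) * hbar_c  # in fm
# analytically <R_m^2> = 12 / m_s^2, so R_m = sqrt(12) * hbar_c / m_s,
# which for m_s = 1 GeV gives about 0.68 fm
```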
## IV Scale anomaly of QCD and the mass distribution
Now that we have established that the mass radius of the proton can be
extracted from the formfactor of the trace of the EMT, let us discuss how this
formfactor can be measured. The key to this is the scale anomaly of QCD. In
this theory, quantum effects lead to non-vanishing trace of the EMT even for
massless quarks Collins et al. (1977); Nielsen (1977):
$\displaystyle T\equiv T_{\mu}^{\mu}=\frac{\beta(g)}{2g}\ G^{\mu\nu
a}G_{\mu\nu}^{a}+\sum_{l=u,d,s}m_{l}(1+\gamma_{m_{l}}){\bar{q}}_{l}q_{l}+\sum_{h=c,b,t}m_{h}(1+\gamma_{m_{h}}){\bar{Q}}_{h}Q_{h},$
(24)
where $G^{\mu\nu a}$ is the gluon field strength tensor with color index $a$,
the sum in the second and third terms runs over the light and heavy quark
flavors $q_{l}$ and $Q_{h}$ with masses $m_{l}$ and $m_{h}$ respectively, and
$\gamma_{m}$ are the anomalous mass dimensions. The beta-function of QCD Gross
and Wilczek (1973); Politzer (1973) $\beta(g)=\partial g/\partial(\log\mu)$
governs the renormalization group running of the QCD coupling $g$ with scale
$\mu$:
$\displaystyle\beta(g)=-b\frac{g^{3}}{16\pi^{2}}+...,\ \
b=11-\frac{2n_{l}}{3}-\frac{2n_{h}}{3}$ (25)
where the first term in $b$ is due to gluon loops, and the second and third
terms are the contributions from light and heavy quark loops.
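The counting in (25) can be made concrete with a short sketch (Python; the one-loop running formula is the standard textbook expression, and the reference values at the $Z$ mass are illustrative inputs, not fitted here):

```python
import math

def b_coeff(n_l, n_h):
    # one-loop coefficient b = 11 - 2 n_l/3 - 2 n_h/3: the first term comes
    # from gluon loops, the others from light and heavy quark loops
    return 11 - 2 * n_l / 3 - 2 * n_h / 3

def alpha_s(mu, mu0=91.19, alpha0=0.118, n_f=5):
    # standard one-loop running of alpha_s = g^2 / (4 pi); the reference
    # values at the Z mass are illustrative inputs
    b = b_coeff(n_f, 0)
    return alpha0 / (1 + alpha0 * b / (2 * math.pi) * math.log(mu / mu0))

# b = 9 for three light flavours and no active heavy flavours, the value
# used below in eq. (27); the coupling grows towards lower scales
```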
To determine the mass radius of the proton, we will be interested in the
matrix element of the operator (24) at small momentum transfer
$|t|<4m_{h}^{2}$; the lightest heavy quark is the charm with mass $m_{c}\simeq
1.25$ GeV, so this inequality implies $|t|\leq 6.25\ {\rm GeV}^{2}$. In this
kinematical region, heavy quarks appear only in virtual $Q{\bar{Q}}$ pairs; as
a result, the heavy quark part of (24) cancels the heavy quark contribution to
the gluon term Shifman et al. (1978):
$\displaystyle\sum_{h=c,b,t}m_{h}(1+\gamma_{m_{h}}){\bar{Q}}_{h}Q_{h}\simeq-\frac{2n_{h}}{3}\
\frac{g^{2}}{32\pi^{2}}\ G^{\mu\nu a}G_{\mu\nu}^{a}.$ (26)
As a result, the trace of the energy-momentum tensor that will determine the
mass radius of the proton contains only the contributions from light quarks
and gluons:
$\displaystyle T\equiv T_{\mu}^{\mu}=\frac{{\tilde{\beta}}(g)}{2g}\ G^{\mu\nu
a}G_{\mu\nu}^{a}+\sum_{l=u,d,s}m_{l}(1+\gamma_{m_{l}}){\bar{q}}_{l}q_{l},$
(27)
where ${\tilde{\beta}}$ is the beta-function with $b=11-2n_{l}/3=9$ for three
flavors of light quarks, $u,d$ and $s$.
It is well known that the chiral limit of massless quarks provides an accurate
approximation to the physical world; in this limit, the trace of the EMT (27)
contains only the gluon term. Therefore, since the forward matrix element of
(27) according to (20) yields the mass of the nucleon, we have to conclude
that the mass of the proton in the chiral limit is entirely due to gluons. The
contribution of the second term (“$\sigma$-term”) in (27) for physical values
of light quark masses can be extracted from the experimental data on pion and
kaon scattering amplitudes (for recent work, see Ruiz de Elvira et al. (2018))
or computed in lattice QCD Yang et al. (2016); it contributes about $80\ {\rm
MeV}$, or about $8\%$, to the total proton mass – so the chiral limit is
indeed reasonably accurate.
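The quoted fraction is simple arithmetic (a sketch; the proton mass is the standard value, and the $\sigma$-term contribution is the one quoted above):

```python
# sigma-term contribution to the proton mass, as quoted in the text
M_p = 938.0   # MeV, proton mass (standard value)
sigma = 80.0  # MeV, light-quark sigma-term contribution
fraction = sigma / M_p  # roughly 8%, consistent with the text
```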
## V Quarkonium photoproduction near the threshold
In the chiral limit, the information about the mass radius of the proton is
contained in the matrix element of the scalar gluon operator in (27) at non-
zero momentum transfer. The zero-momentum transfer, forward matrix element of
this operator yields the proton’s mass, and this can be used for evaluating
the scattering length in quarkonium-nucleon interaction Luke et al. (1992);
Kaidalov and Volkovitsky (1992); Kharzeev (1996).
At finite momentum transfer, the matrix element of the scalar gluon operator
in (27) can be measured in photoproduction of vector heavy quarkonium states,
$J/\psi$ and $\Upsilon$ close to the threshold Kharzeev (1996); Kharzeev et
al. (1999). This proposal is based on the following arguments:
1.
Because $J/\psi$ and $\Upsilon$ are made of a heavy quark and an antiquark,
and the proton at small momentum transfer contains only light quarks, the
corresponding amplitude is dominated by the exchange of gluons.
2.
Close to the threshold, the characteristic size of the heavy quark-antiquark
pair is $\sim 1/(2m_{h})$; for charm quarks, this is about $0.08\ {\rm fm}$.
Because this size is much smaller than the radius of the proton, the coupling
of gluons to the heavy quark is perturbative, is characterized by a small
coupling constant, and can be described by a local color-neutral gluon
operator of the lowest possible dimension Voloshin (1979); Appelquist and
Fischler (1978); Peskin (1979).
3.
Because of the vector quantum numbers $J^{PC}=1^{--}$ of $J/\psi$ and
$\Upsilon$, the threshold photoproduction is due to the $t$-channel exchange
of gluons in scalar $0^{++}$ and tensor $2^{++}$ states; the scalar exchange
is described by the operator that is proportional to the first term in (27).
Because of the scale anomaly, its matrix element does not depend on the QCD
coupling constant $g^{2}$, whereas the matrix element of the tensor operator
appears proportional to $g^{2}$, and is sub-leading at weak coupling Luke et
al. (1992); Kaidalov and Volkovitsky (1992); Kharzeev (1996); Kharzeev et al.
(1999); Fujii and Kharzeev (1999).
Let us now examine these arguments in more detail, and use them for the
extraction of the mass radius of the proton. Consider the interaction of the
heavy quark pair with the proton near the threshold, where the velocity of the
quarkonium in the center-of-mass is small: $v_{\psi}\ll c$. The coupling of a
small color-neutral heavy quark-antiquark state to gluons can be described by
the operator $g^{2}{\bf E}^{a2}$, where ${\bf E}^{a}$ is the chromo-electric
field – this is the quadratic QCD Stark effect (the first-order effect is
forbidden by color neutrality). The chromo-magnetic contribution is
proportional to $(v_{\psi}/c)^{2}$ and is suppressed; the operators that
contain covariant derivatives are suppressed by the powers of
$(v_{\psi}/c)^{2}$ as well. The $g^{2}{\bf E}^{a2}$ operator can be
identically represented as a sum of the scalar $0^{++}$ and tensor $2^{++}$
gluon operators Novikov and Shifman (1981):
$\displaystyle g^{2}{\bf E}^{a2}=\frac{g^{2}}{2}({\bf E}^{a2}-{\bf
B}^{a2})+\frac{g^{2}}{2}({\bf E}^{a2}+{\bf B}^{a2})=\frac{8\pi^{2}}{b}\
T+g^{2}T_{00}^{(g)},$ (28)
where $T$ is the trace of the EMT (27) in the chiral limit:
$\displaystyle T=\frac{{\tilde{\beta}}(g)}{2g}\ G^{\mu\nu
a}G_{\mu\nu}^{a}=-\frac{bg^{2}}{32\pi^{2}}G^{\mu\nu a}G_{\mu\nu}^{a},$ (29)
and $T_{00}^{(g)}$ is the temporal component of the gluon part of the EMT of
QCD.
The amplitude of $J/\psi$ photoproduction close to the threshold factorizes
into a short-distance part describing the electric polarizability of the
$c{\bar{c}}$ pair, and the matrix element of the operator (28) over a proton,
see Fig. 1 (left):
$\displaystyle{\cal M}_{\gamma P\to\psi P}(t)=-Qe\ c_{2}\ 2M\ \langle
P^{\prime}|g^{2}{\bf E}^{a2}|P\rangle,$ (30)
where $Qe=2e/3$ describes the coupling of the photon to the electric charge of
the charm quark, $c_{2}$ is the short-distance coefficient describing the
coupling of the chromoelectric fields to the heavy quark pair, and its
transition to the $J/\psi$, $t=(P^{\prime}-P)^{2}$ is the momentum transfer,
and the factor $2M$ is needed to reconcile the relativistic normalization of
states with our normalization of the EMT formfactors (16, 20).
The expression (30) holds only near the threshold, where the scalar gluon
operator dominates over the operators that contain derivatives; they are
suppressed by powers of $J/\psi$ velocity squared, $(v_{\psi}/c)^{2}$. Note
that in this kinematical region the scalar gluon formfactor that enters (30)
cannot be interpreted in terms of the gluon structure functions – indeed, the
gluon structure functions are defined through the matrix elements of traceless
gluon operators in the Operator Product Expansion, and we are interested in
the trace part.
Substituting the relation (28) in (30), we observe that the matrix element of
the first term does not contain the coupling $g^{2}$ as a consequence of scale
anomaly, whereas the second term in (28) is suppressed by $g^{2}$ which is
small at the scale of $Q^{2}=4m_{c}^{2}$; in addition, the first term is
enhanced by a numerical factor. Therefore we can re-write the amplitude (30)
in terms of the scalar gravitational formfactor:
$\displaystyle{\cal M}_{\gamma P\to\psi P}(t)=-Qe\ c_{2}\
\frac{16\pi^{2}M}{b}\ \langle P^{\prime}|T|P\rangle.$ (31)
The differential cross section and the integrated cross section of the
$J/\psi$ photoproduction can now be computed using the standard formulae, see
e.g. Group et al. (2020):
$\displaystyle\frac{d\sigma_{\gamma P\to\psi P}}{dt}=\frac{1}{64\pi s}\
\frac{1}{\left|{\bf p}_{\gamma cm}\right|^{2}}\ \left|{\cal M}_{\gamma
P\to\psi P}(t)\right|^{2},$ (32)
and
$\displaystyle\sigma_{\gamma P\to\psi P}(s)=\int_{t_{min}}^{t_{max}}dt\
\frac{d\sigma_{\gamma P\to\psi P}}{dt},$ (33)
where ${\bf p}_{\gamma cm}$ is the photon momentum in the c.m.s. of the
process, and $s=(p_{\gamma}+P_{p})^{2}$ is the square of the c.m.s. energy. We
expect that the short-distance coefficient $c_{2}$ is on the order of $\pi
r_{c{\bar{c}}}^{2}$, where the size of the $c{\bar{c}}$ pair
$r_{c{\bar{c}}}\simeq 1/2m_{c}\simeq 0.08$ fm. We will fit this parameter to
the GlueX Collaboration data Ali et al. (2019), and then check that it is in
the expected range.
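Equations (31), (32), and (34) combine into the model that is fitted to the data below. The following sketch shows only the $t$-dependence; `norm` is a hypothetical constant of our own that absorbs $Qe$, $c_{2}$, $16\pi^{2}M/b$, and the flux factors, and would be determined by the fit together with $m_{s}$:

```python
import math

M_P = 0.938  # proton mass, GeV

def dipole_G(t, m_s=1.24):
    """Dipole scalar gravitational formfactor of Eq. (34); t in GeV^2, t <= 0."""
    return M_P / (1.0 - t / m_s**2) ** 2

def dsigma_dt(t, s, p_gamma_cm, norm=1.0):
    """t-dependence of Eq. (32) with the amplitude of Eq. (31).

    `norm` is a hypothetical overall constant (absorbing Qe, c2, 16*pi^2*M/b);
    only the shape in t is meaningful in this sketch.
    """
    amp = dipole_G(t)  # matrix element <P'|T|P> = G(t)
    return norm / (64 * math.pi * s * p_gamma_cm**2) * amp**2
```

The dipole form guarantees $G(0)=M$ and a monotonic falloff with $|t|$, which is what the fit to the GlueX differential cross section probes.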
Figure 1: Left: the Feynman diagram of $J/\psi$ photoproduction off a proton.
Right: the differential cross section of $J/\psi$ photoproduction at the
center-of-mass energy $E_{cm}=4.58$ GeV (lab. energy of the photon
$E_{\gamma}=10.72$ GeV); the data is from the GlueX Collaboration Ali et al.
(2019); the theory curve corresponds to the dipole form of the scalar
gravitational formfactor with the parameter $m_{s}=1.24\pm 0.07\ {\rm GeV}$,
corresponding to the mass radius of the proton $R_{\rm m}=0.55\pm 0.03$ fm.
Let us briefly discuss the kinematics of the $\gamma+p\to J/\psi+p$ process.
Because of the large mass of $J/\psi$, close to the threshold the process is
characterized by a sizable minimal momentum transfer $t_{min}$; right at the
threshold, its value is $t_{min}=-M_{\psi}^{2}M/(M_{\psi}+M)\simeq-2.23\ {\rm
GeV}^{2}\simeq-(1.5\ {\rm GeV})^{2}$, where $M_{\psi}\simeq 3.097$ GeV is the
mass of $J/\psi$ and $M\simeq 0.938$ GeV is the mass of the proton. The large
magnitude of $t_{min}$ close to the threshold makes the use of the vector
meson dominance model questionable. On the other hand, $t$ is still much
smaller than $4m_{c}^{2}\simeq 6.25\ {\rm GeV}^{2}$ which justifies the
approach based on Eqs (31) and (32). In this kinematical domain, the factor
$c_{2}$ in (31) can indeed be treated as a constant; when the magnitude of $t$
becomes comparable to $4m_{c}^{2}\simeq 6.25\ {\rm GeV}^{2}$, this factor can
be expected to acquire a significant $t$ dependence. Because $t_{min}$ rapidly
varies with the c.m.s. energy close to the threshold, the energy dependence of
the integrated cross section (33) is sensitive to the scalar gravitational
formfactor. However the quantity that is most sensitive to the scalar
gravitational formfactor is the differential cross section (32).
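The kinematic numbers quoted above can be verified in a few lines (an illustrative check using the masses given in the text):

```python
M_PSI, M_P = 3.097, 0.938  # J/psi and proton masses, GeV

# Minimal momentum transfer exactly at threshold:
t_min = -M_PSI**2 * M_P / (M_PSI + M_P)   # about -2.23 GeV^2, |t_min| ~ (1.5 GeV)^2

def E_gamma_lab(E_cm):
    """Photon lab energy for a given c.m.s. energy, from s = M^2 + 2*M*E_gamma."""
    return (E_cm**2 - M_P**2) / (2 * M_P)

# E_cm = 4.58 GeV corresponds to E_gamma ~ 10.7 GeV, the GlueX kinematics.
```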
The dominance of the scalar gluon operator over the operators with covariant
derivatives in the QCD multipole expansion is justified by the smallness of
the heavy quark pair velocity in the c.m.s., $v_{\psi}$. The operators with
derivatives are suppressed by even powers of $v_{\psi}$ – therefore, to limit
their contributions by less than about $10\%$, we have to limit $v_{\psi}\leq
0.3$ – this translates into the limit on the c.m.s. energy $E_{cm}\leq 4.6$
GeV, or the photon lab. frame energy of $E_{\gamma}\leq 11.3$ GeV.
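The correspondence between $v_{\psi}\leq 0.3$ and $E_{cm}\leq 4.6$ GeV follows from two-body kinematics of the final state; a minimal sketch:

```python
import math

M_PSI, M_P = 3.097, 0.938  # J/psi and proton masses, GeV

def v_psi(E_cm):
    """J/psi velocity in the c.m.s. from two-body gamma p -> J/psi p kinematics."""
    s = E_cm**2
    # c.m.s. momentum of the J/psi p pair (square root of the Kallen function):
    p = math.sqrt((s - (M_PSI + M_P)**2) * (s - (M_PSI - M_P)**2)) / (2 * E_cm)
    return p / math.sqrt(p**2 + M_PSI**2)
```

At $E_{cm}=4.6$ GeV this gives $v_{\psi}\simeq 0.30$, so operator contributions suppressed by $v_{\psi}^{2}$ stay at the $10\%$ level.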
## VI Extracting the mass radius of the proton
To make a direct comparison with the charge radius of the proton which has
been traditionally extracted by using the dipole form factor Feynman (1973),
we will assume, as a first step, a simple dipole parameterization for the
scalar gravitational formfactor as well:
$\displaystyle G(t)=\frac{M}{\left(1-\frac{t}{m_{s}^{2}}\right)^{2}},$ (34)
where $m_{s}$ is the only adjustable parameter. With the standard definition
(23), this parameter relates to the r.m.s. mass radius of the proton in the
following way:
$\displaystyle\langle R_{\rm m}^{2}\rangle=\frac{12}{m_{s}^{2}}.$ (35)
Using (31) with the formfactor $\langle P^{\prime}|T|P\rangle=G(t)$ given by
(34) to evaluate the differential cross section (32), we can now perform the
fit of the recent data from the GlueX Collaboration Ali et al. (2019) at
Jefferson Lab that is available at $E_{\gamma}\simeq 10.72$ GeV, which is
within our desired kinematical range as described above. The resulting fit of
the data is shown in Fig. 1 (right); the extracted value
$\displaystyle m_{s}=1.24\pm 0.07\ {\rm GeV}$ (36)
corresponding to
$\displaystyle R_{\rm m}\equiv\sqrt{\langle R_{\rm m}^{2}\rangle}=0.55\pm
0.03\ {\rm fm}$ (37)
provides an excellent fit with adjusted ${\bar{r}}^{2}=0.99$
(${\bar{r}}^{2}=1.0$ implies a perfect fit).
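The conversion of the fitted $m_{s}$ into the mass radius (37) is a one-line application of Eq. (35), together with leading-order error propagation (the $\hbar c$ conversion constant is standard):

```python
import math

HBARC = 0.1973  # GeV*fm

m_s, dm_s = 1.24, 0.07                 # dipole parameter, Eq. (36)
R_m = math.sqrt(12.0) / m_s * HBARC    # Eq. (35), converted from GeV^-1 to fm
dR_m = R_m * dm_s / m_s                # leading-order error propagation
# R_m = 0.55 +- 0.03 fm, Eq. (37)
```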
The corresponding value of the short-distance coefficient in (31) describing
the coupling of the gluons to the heavy quark pair is $|c_{2}|^{2}=0.043\pm
0.006\ {\rm fm}^{4}$. Each gluon couples to the chromoelectric dipole moment
of the heavy quark pair, and so $c_{2}$ has the dimension of length squared.
As discussed above, we expect that $c_{2}$ is on the order of $\pi
r_{c{\bar{c}}}^{2}$, where the size of the $c{\bar{c}}$ pair
$r_{c{\bar{c}}}\simeq 1/2m_{c}\simeq 0.08$ fm. The extracted value of
$|c_{2}|^{2}=0.043\pm 0.006\ {\rm fm}^{4}$ corresponds to
$r_{c{\bar{c}}}\simeq 0.1$ fm, which is in line with our expectations.
We can compare the value of the mass radius (37) to the proton charge radius
Group et al. (2020)
$\displaystyle R_{\rm C}\equiv\sqrt{\langle R_{\rm C}^{2}\rangle}=0.8409\pm
0.0004\ {\rm fm}$ (38)
that is known with a much better precision, see Bernauer (2020) for a recent
review. It appears that the mass radius of the proton is about $50\%$ smaller
than its charge radius! This observation is statistically significant given
the statistical error bar of our value (37). Of course, there is also a
theoretical systematic uncertainty that we will discuss below; however it does
not appear large enough to explain the observed difference between the charge
and mass radii of the proton.
## VII The mass radius puzzle
At first glance, this difference may seem surprising – but only if one thinks
of the proton as a charged ball of fixed radius rather than as a quantum
object. First, the charge radius is extracted from the coupling of the photon
to quarks, whereas the mass radius results from the coupling to gluons – and
it appears that the gluon radius of the proton is significantly smaller than
its quark radius. If we write down the spectral representation for the scalar
and charge formfactors (see Fig. 2), the radii of the mass and charge
distributions can be seen to be set by the masses of the lightest physical
states excited from the vacuum by the scalar gluon and vector quark currents,
correspondingly. For the vector quark current, the lightest physical state is
the $\rho$ meson with mass of $m_{\rho}\simeq 770$ MeV, whereas for the scalar
gluon current it is the scalar glueball, with a much larger mass of
$m_{G}\simeq 1600$ MeV, see e.g. Athenodorou and Teper (2020).
While this argument does explain a much smaller mass radius, it is too naïve –
in the QCD vacuum, the scalar gluon current strongly mixes with the scalar
quark one. In fact, there exists a low energy theorem Novikov et al. (1981)
that can be used to relate the off-diagonal correlation function of the scalar
gluon and quark currents to the quark condensate. The analysis Fujii and
Kharzeev (1999); Ellis et al. (1999) based on this low energy theorem and the
experimental hadron spectrum shows a very strong mixing of the scalar glueball
state with the $f_{0}(980)$ meson and a broad $\sigma(500)$ resonance in the
$\pi\pi$ spectrum, which is lighter than the $\rho$ meson. Therefore, the
hadron spectrum in the scalar and vector channels alone cannot explain the
difference between the mass and charge radii.
The reason for the smallness of the mass radius in our opinion is the
interplay of scale anomaly and spontaneously broken chiral symmetry. The
dominant contribution at large distances in the dispersion representation of
the proton formfactor stems from the scalar $\pi\pi$ exchange, see Fig. 2,
left. Because the trace of the EMT is invariant under the Renormalization
Group (RG), its matrix element at small momentum transfer (responsible for the
long-range tail of the mass distribution and thus for the mass radius) can be
calculated Voloshin and Zakharov (1980) using the effective chiral theory.
Matching onto chiral perturbation theory makes it possible to evaluate the matrix
element of the trace of the EMT in a model-independent way; in the chiral
limit $T=-(\partial_{\mu}\pi)^{2}+...$, so we get Voloshin and Zakharov (1980)
$\displaystyle\langle 0|T|\pi^{+}\pi^{-}\rangle=q^{2},$ (39)
where $q^{2}$ is the invariant mass of the $\pi\pi$ pair. In QCD with $N_{f}$
light quark flavors, the r.h.s. of (39) gets multiplied by $(N_{f}^{2}-1)$
after summing over all possible pion combinations, so the corresponding
spectral density of the correlation function of $T$ is
$\displaystyle\rho^{\pi\pi}(q^{2})=\frac{N_{f}^{2}-1}{32\pi^{2}}\ q^{4}.$ (40)
The matrix element (39) and the spectral density (40) are strongly suppressed
at small invariant masses, corresponding to long-range tail of the proton mass
distribution – as a result, the spectral density of the correlation function
of $T$ peaks at masses about $\sim 1$ GeV Fujii and Kharzeev (1999),
consistent with our finding (36). The underlying reason for the suppression of
the matrix element (39) at low $q^{2}$ is the fact that Goldstone bosons
decouple from the scalar curvature induced by the scale anomaly – this is
analogous to the absence of light bending in scalar gravity discussed in
Section II.
At short distances, where the invariant masses in the spectral representation
are large, the relevant matrix element of the trace of the EMT can be computed
using the QCD perturbation theory; in this case, the two-gluon state dominates
(see Fig. 2, right), with the spectral density Fujii and Kharzeev (1999)
$\displaystyle\rho^{pert}(q^{2})=\left(\frac{bg^{2}}{32\pi^{2}}\right)^{2}\
\frac{N_{c}^{2}-1}{4\pi^{2}}\ q^{4},$ (41)
where $b=(11N_{c}-2N_{f})/3$ is the coefficient of the QCD beta-function. We
thus see that the mass distribution at short distances is governed by the
Renormalization Group flow, and at large distances – by the interplay of scale
anomaly and spontaneously broken chiral symmetry. Because pions are Goldstone
bosons, their couplings involve derivatives of the pion field – otherwise they
would not be invariant w.r.t. the chiral rotations. Because of this, at small
momenta pions decouple from the trace of the EMT, and this confines the mass
distribution of the proton to shorter distances, where it is dominated by the
RG flow of QCD.
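The two spectral densities, Eqs. (40) and (41), can be compared directly; note that $q^{4}=(q^{2})^{2}$, and the value of $\alpha_{s}$ in the sketch below is purely illustrative:

```python
import math

def rho_pipi(q2, N_f=3):
    """Pion-pair spectral density of the trace correlator, Eq. (40)."""
    return (N_f**2 - 1) / (32 * math.pi**2) * q2**2

def rho_pert(q2, alpha_s=0.3, N_c=3, N_f=3):
    """Perturbative two-gluon spectral density, Eq. (41)."""
    b = (11 * N_c - 2 * N_f) / 3          # QCD beta-function coefficient
    g2 = 4 * math.pi * alpha_s            # g^2 at an illustrative scale
    return (b * g2 / (32 * math.pi**2))**2 * (N_c**2 - 1) / (4 * math.pi**2) * q2**2
```

Both densities vanish like $q^{4}$ at small invariant mass, which is the suppression of the long-range tail of the mass distribution discussed above.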
Figure 2: Spectral representation of the scalar gravitational formfactor.
Left: large-distance, small invariant mass region, where the dominant
contribution is from the pion pair. Right: short-distance, large invariant
mass region that is dominated by gluon pairs.
It would be interesting to decompose the mass distribution into the quark and
gluon ones. The mass decomposition for the proton Ji (1995); Kharzeev (1996);
Yang et al. (2018); Polyakov and Schweitzer (2018); Lorcé (2018); Metz et al.
(2021); Alexandrou et al. (2020) is a subject of a lively debate at present,
and is subject to the renormalization scale and scheme dependence, as well as
to the frame dependence in the case of $T_{00}$. When quark and gluon
contributions are separated, an additional term appears in the decomposition
(II) that corresponds to the anomalous gravimagnetic moment and gravielectric
dipole moment. These terms, however, should cancel each other in the total EMT
of the nucleon to obey the Einstein equivalence principle, and this is why we
did not consider them.
The formfactor of the gluon term in the EMT decomposition has recently been
evaluated in lattice QCD Shanahan and Detmold (2019). The authors fit this
formfactor by the dipole form, and extract the effective mass of
$m_{s}=1.13\pm 0.06$ GeV – remarkably, this is consistent with the value
$m_{s}=1.24\pm 0.07$ GeV that we have extracted above from the GlueX data Ali et
al. (2019). The scalar and tensor gravitational formfactors that enter the
photoproduction amplitude have also been evaluated in the approaches based on
holography Hatta and Yang (2018); Hatta et al. (2019); Mamo and Zahed (2020).
In this case, the scale of the formfactor is encoded in the dilaton potential
in the bulk that is constructed to reproduce the hadron spectrum and Regge
trajectories.
A number of papers address the proton mass decomposition based on the vector
meson dominance Kharzeev (1996); Kharzeev et al. (1999); Gryniuk and
Vanderhaeghen (2016); Strakovsky et al. (2020); Gryniuk et al. (2020); Wang et
al. (2020). Near the threshold, the scattering amplitude of quarkonium
possesses a large real part directly related to the scale anomaly that
strongly affects the cross section, which thus becomes sensitive to the mass
decomposition of the nucleon Kharzeev (1996); Kharzeev et al. (1999). However,
a major problem of this approach stems from the large value of $t_{min}$ near the
threshold that necessitates taking account of excited $c{\bar{c}}$ states,
with unknown and interfering scattering amplitudes.
Perturbative approaches to the threshold photoproduction of quarkonium
introduce the two-gluon formfactors and relate them to gluon distributions
Frankfurt and Strikman (2002); Zeng et al. (2020). While these approaches are
close to ours in terms of phenomenology, we stress that the scalar
gravitational formfactor that we have considered cannot be interpreted in
terms of gluon structure functions, which are defined through the matrix
elements of traceless gluon operators; instead, it originates from the trace
operator that dominates near the threshold. In the language of Operator
Product Expansion (OPE), the trace terms correspond to the target mass
corrections to the parton model, see e.g. Kharzeev et al. (1996). Non-
perturbative instanton contributions to scalar formfactors have been recently
evaluated in Shuryak and Zahed (2020).
The contribution from $\Lambda{\bar{D}}^{*}$ $t$-channel exchanges has been
studied in Du et al. (2020); in our approach based on the OPE, this
contribution corresponds to higher dimensional operators that should be
suppressed by the heavy quark mass.
## VIII Theoretical uncertainties and an outlook
The error bar in the mass radius (37) originates only from the precision of
the current data Ali et al. (2019). But what is the “theoretical systematic
error” involved in its extraction? We can categorize the sources of the
uncertainty in (37) as follows:
* •
The contribution of gluon operators with derivatives: as mentioned above, the
contamination from these operators is suppressed by powers of the $J/\psi$
velocity in the c.m.s., $v_{\psi}$. The GlueX data Ali et al. (2019) on the
differential cross section that we used are at the energy of $E_{\gamma}\leq
11.3$ GeV, corresponding to $v_{\psi}\simeq 0.3$. This means that the
potential contribution of operators with derivatives is about $10\%$. This
contribution can be further reduced by measuring the differential cross
section at a lower energy – for example, at $E_{\gamma}\leq 9.4$ GeV it should
be less than $5\%$.
* •
Operators of higher dimension, e.g. the quark-gluon operators: the
contribution from these operators is suppressed by the powers of
$\Lambda^{2}/(4m_{c}^{2})$, where $\Lambda$ is a non-perturbative parameter.
Assuming $\Lambda\sim 1$ GeV, we expect these contributions to be on the order
of $\sim 10-15\ \%$. Note that in threshold photoproduction, the convergence
of the multipole expansion is much better than for the interaction of on-shell
quarkonia where it is governed not by the heavy quark mass but by the
quarkonium binding energy Peskin (1979); Kharzeev (1996). The contributions of
higher-dimensional operators can be further suppressed by extending the
studies of threshold photoproduction to the $\Upsilon$ states, which should
become possible with the advent of the Electron Ion Collider. It could also be
possible to study this process in ultra-peripheral collisions at RHIC and LHC.
* •
The extrapolation in momentum transfer $t$ and the $t$-dependence of the short
distance coefficient $c_{2}$: we have assumed that the coefficient $c_{2}$ is
$t$-independent. This is justified by the fact that the momentum transfer in
the threshold photoproduction is not large enough to resolve the internal
structure of the produced heavy quark pair with the size $\sim 1/(2m_{c})$,
$t\ll 4m_{c}^{2}$. The range of extrapolation to $t=0$ from $t_{min}$ is also
much smaller than $4m_{c}^{2}$, which justifies the assumption of a constant
$c_{2}$.
A careful evaluation and reduction of the uncertainty in (37) will require
substantial dedicated theoretical and experimental effort. However, the mass
distribution is definitely a fundamental property of the proton. Therefore the
measurements of this distribution, combined with measurements of other
“mechanical” properties of the proton, such as the pressure distribution
Burkert et al. (2018), will definitely advance our understanding of the
quantum origin of mass.
## Acknowledgements
This work was first presented at the “Origin of the Proton Mass” workshop
organized at ANL by I. Cloët, X. Ji, Z. Meziani, and J. Qiu; I am grateful to
the organizers and participants of this workshop for stimulating discussions.
I also thank J. Bernauer and Z. Meziani for useful comments on the
manuscript. This research was supported by the U.S. Department of Energy under
awards DE-FG88ER40388 and DE-SC0012704.
## References
* Gross and Wilczek (1973) D. J. Gross and F. Wilczek, Phys. Rev. Lett. 30, 1343 (1973).
* Politzer (1973) H. D. Politzer, Phys. Rev. Lett. 30, 1346 (1973).
* Ellis (1970) J. R. Ellis, Nucl. Phys. B 22, 478 (1970), [Erratum: Nucl.Phys.B 25, 639–639 (1971)].
* Chanowitz and Ellis (1972) M. S. Chanowitz and J. R. Ellis, Phys. Lett. B 40, 397 (1972).
* Collins et al. (1977) J. C. Collins, A. Duncan, and S. D. Joglekar, Phys. Rev. D 16, 438 (1977).
* Nielsen (1977) N. K. Nielsen, Nucl. Phys. B 120, 212 (1977).
* Kharzeev (1996) D. Kharzeev, Proc. Int. Sch. Phys. Fermi 130, 105 (1996), eprint nucl-th/9601029.
* Kharzeev et al. (1999) D. Kharzeev, H. Satz, A. Syamtomov, and G. Zinovjev, Eur. Phys. J. C 9, 459 (1999), eprint hep-ph/9901375.
* Ali et al. (2019) A. Ali, M. Amaryan, E. Anassontzis, A. Austregesilo, M. Baalouch, F. Barbosa, J. Barlow, A. Barnes, E. Barriga, T. Beattie, et al., Phys. Rev. Lett. 123, 072001 (2019).
* Einstein (1915) A. Einstein, Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin pp. 844–847 (1915).
* Nordstrom (1912) G. Nordstrom, Physikalische Zeitschrift 13, 1126 (1912).
* Einstein and Fokker (1914) A. Einstein and A. D. Fokker, Annalen der Physik 44, 321 (1914).
* Norton (1984) J. Norton, Historical studies in the physical sciences 14, 253 (1984).
* Landau and Lifshitz (1951) L. Landau and E. Lifshitz, 2 (1951).
* Pais and Epstein (1949) A. Pais and S. T. Epstein, Rev. Mod. Phys. 21, 445 (1949).
* Pagels (1966) H. Pagels, Phys. Rev. 144, 1250 (1966).
* Feynman (1973) R. P. Feynman, _Photon-hadron interactions_ (W.A. Benjamin, Inc., Reading, Massachusetts, 1973).
* Shifman et al. (1978) M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, Phys. Lett. B 78, 443 (1978).
* Ruiz de Elvira et al. (2018) J. Ruiz de Elvira, M. Hoferichter, B. Kubis, and U.-G. Meißner, J. Phys. G 45, 024001 (2018), eprint 1706.01465.
* Yang et al. (2016) Y.-B. Yang, A. Alexandru, T. Draper, J. Liang, and K.-F. Liu (xQCD), Phys. Rev. D 94, 054503 (2016), eprint 1511.09089.
* Luke et al. (1992) M. E. Luke, A. V. Manohar, and M. J. Savage, Phys. Lett. B 288, 355 (1992), eprint hep-ph/9204219.
* Kaidalov and Volkovitsky (1992) A. B. Kaidalov and P. E. Volkovitsky, Phys. Rev. Lett. 69, 3155 (1992).
* Voloshin (1979) M. B. Voloshin, Nucl. Phys. B 154, 365 (1979).
* Appelquist and Fischler (1978) T. Appelquist and W. Fischler, Phys. Lett. B 77, 405 (1978).
* Peskin (1979) M. E. Peskin, Nucl. Phys. B 156, 365 (1979).
* Fujii and Kharzeev (1999) H. Fujii and D. Kharzeev, Phys. Rev. D 60, 114039 (1999), eprint hep-ph/9903495.
* Novikov and Shifman (1981) V. A. Novikov and M. A. Shifman, Z. Phys. C 8, 43 (1981).
* Group et al. (2020) Particle Data Group (P. A. Zyla et al.), Prog. Theor. Exp. Phys. 2020, 083C01 (2020).
* Bernauer (2020) J. C. Bernauer, EPJ Web Conf. 234, 01001 (2020).
* Athenodorou and Teper (2020) A. Athenodorou and M. Teper, Journal of High Energy Physics 2020, 1 (2020).
* Novikov et al. (1981) V. A. Novikov, M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, Nucl. Phys. B 191, 301 (1981).
* Ellis et al. (1999) J. R. Ellis, H. Fujii, and D. Kharzeev (1999), eprint hep-ph/9909322.
* Voloshin and Zakharov (1980) M. B. Voloshin and V. I. Zakharov, Phys. Rev. Lett. 45, 688 (1980).
* Ji (1995) X.-D. Ji, Phys. Rev. Lett. 74, 1071 (1995), eprint hep-ph/9410274.
* Yang et al. (2018) Y.-B. Yang, J. Liang, Y.-J. Bi, Y. Chen, T. Draper, K.-F. Liu, and Z. Liu, Phys. Rev. Lett. 121, 212001 (2018), eprint 1808.08677.
* Polyakov and Schweitzer (2018) M. V. Polyakov and P. Schweitzer, Int. J. Mod. Phys. A 33, 1830025 (2018), eprint 1805.06596.
* Lorcé (2018) C. Lorcé, Eur. Phys. J. C 78, 120 (2018), eprint 1706.05853.
* Metz et al. (2021) A. Metz, B. Pasquini, and S. Rodini, Phys. Rev. D 102, 114042 (2021), eprint 2006.11171.
* Alexandrou et al. (2020) C. Alexandrou, S. Bacchio, M. Constantinou, J. Finkenrath, K. Hadjiyiannakou, K. Jansen, G. Koutsou, H. Panagopoulos, and G. Spanoudes, Phys. Rev. D 101, 094513 (2020), eprint 2003.08486.
* Shanahan and Detmold (2019) P. Shanahan and W. Detmold, Phys. Rev. D 99, 014511 (2019).
* Hatta and Yang (2018) Y. Hatta and D.-L. Yang, Phys. Rev. D 98, 074003 (2018).
* Hatta et al. (2019) Y. Hatta, A. Rajan, and D.-L. Yang, Phys. Rev. D 100, 014032 (2019).
* Mamo and Zahed (2020) K. A. Mamo and I. Zahed, Phys. Rev. D 101, 086003 (2020).
* Gryniuk and Vanderhaeghen (2016) O. Gryniuk and M. Vanderhaeghen, Phys. Rev. D 94, 074001 (2016).
* Strakovsky et al. (2020) I. I. Strakovsky, D. Epifanov, and L. Pentchev, Phys. Rev. C 101, 042201 (2020).
* Gryniuk et al. (2020) O. Gryniuk, S. Joosten, Z.-E. Meziani, and M. Vanderhaeghen, Phys. Rev. D 102, 014016 (2020).
* Wang et al. (2020) R. Wang, X. Chen, and J. Evslin, Eur. Phys. J. C 80, 1 (2020).
* Frankfurt and Strikman (2002) L. Frankfurt and M. Strikman, Phys. Rev. D 66, 031502 (2002), eprint hep-ph/0205223.
* Zeng et al. (2020) F. Zeng, X.-Y. Wang, L. Zhang, Y.-P. Xie, R. Wang, and X. Chen, Eur. Phys. J. C 80, 1 (2020).
* Kharzeev et al. (1996) D. Kharzeev, H. Satz, A. Syamtomov, and G. Zinovev, Phys. Lett. B 389, 595 (1996), eprint hep-ph/9605448.
* Shuryak and Zahed (2020) E. Shuryak and I. Zahed (2020), eprint 2008.06169.
* Du et al. (2020) M.-L. Du, V. Baru, F.-K. Guo, C. Hanhart, U.-G. Meißner, A. Nefediev, and I. Strakovsky, Eur. Phys. J. C 80, 1 (2020).
* Burkert et al. (2018) V. D. Burkert, L. Elouadrhiri, and F. X. Girod, Nature 557, 396 (2018).
This research was supported by SHREC industry and agency members and by the
IUCRC Program of the National Science Foundation under Grant No. CNS-1738783.
This work was performed, in part, at the Los Alamos Neutron Science Center
(LANSCE), a NNSA User Facility operated for the U.S. Department of Energy
(DOE) by Los Alamos National Laboratory (Contract 89233218CNA000001).
Corresponding authors: Seth Roffe (e-mail: seth.roffe@pitt.edu), Ryad B.
Benosman (e-mail: benosman@pitt.edu).
# Neutron-Induced, Single-Event Effects
on Neuromorphic Event-based Vision Sensor: A First Step and Tools to Space
Applications
SETH ROFFE1 HIMANSHU AKOLKAR2, ALAN D. GEORGE1, BERNABÉ LINARES-BARRANCO5,
AND RYAD BENOSMAN.2, 3, 4 University of Pittsburgh, 4420 Bayard St. Suite
560, Pittsburgh, PA 15213 (emails<EMAIL_ADDRESS>University of Pittsburgh, Biomedical Science Tower 3, Fifth Avenue,
Pittsburgh, PA 15260 (emails<EMAIL_ADDRESS> INSERM UMRI S 968;
Sorbonne Université, UPMC Univ. Paris 06, UMRS 968; CNRS, UMR 7210, Institut
de la Vision, F-75012, Paris, France Carnegie Mellon University , Robotics
Institute, 5000 Forbes Avenue Pittsburgh PA 15213-3890, USA Instituto de
Microelectrónica de Sevilla, CSIC and Universidad de Sevilla, Sevilla, Spain
(email<EMAIL_ADDRESS>
###### Abstract
In this paper, we study the suitability of neuromorphic event-based vision
cameras for spaceflight, and the effects of neutron radiation on their
performance. Neuromorphic event-based vision cameras are novel sensors that
implement asynchronous, clockless data acquisition, providing information
about changes in illuminance over a high dynamic range ($\geq 120$ dB) with
sub-millisecond temporal precision. These sensors have huge potential for space
applications as they provide an extremely sparse representation of visual
dynamics while removing redundant information, thereby conforming to low-
resource requirements. An event-based sensor was irradiated under wide-
spectrum neutrons at Los Alamos Neutron Science Center and its effects were
classified. Radiation-induced damage of the sensor under wide-spectrum
neutrons was tested, as was the radiative effect on the signal-to-noise ratio
of the output at different angles of incidence from the beam source. We found
that the sensor had very fast recovery during radiation, showing high
correlation of noise event bursts with respect to source macro-pulses. No
statistically significant differences were observed between the number of
events induced at different angles of incidence but significant differences
were found in the spatial structure of noise events at different angles. The
results show that event-based cameras are capable of functioning in a space-
like, radiative environment with a signal-to-noise ratio of 3.355. They also
show that radiation-induced noise does not affect event-level computation.
Finally, we introduce the Event-based Radiation-Induced Noise Simulation
Environment (Event-RINSE), a simulation environment based on the noise-
modelling we conducted and capable of injecting the effects of radiation-
induced noise from the collected data to any stream of events in order to
ensure that developed code can operate in a radiative environment. To the best
of our knowledge, this is the first time such an analysis of neutron-induced
noise has been performed on a neuromorphic vision sensor, and this
study shows the advantage of using such sensors for space applications.
## I INTRODUCTION
Neuromorphic event-based cameras are remarkably efficient, robust, and capable
of operating over a large range of light intensities. These sensors replicate
the design of biological retinas to make full use of their power efficiencies,
sparse output, large dynamic range, real-time computation, and low-data
bandwidth. Neuromorphic sensors are built by copying aspects of their
biological counterparts, and are therefore massively parallel and highly non-
redundant [1]. Each pixel of the sensor works independently, sensing changes
in light and providing output in the form of discrete events signifying
increasing or decreasing light intensity.
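A minimal sketch of this pixel-level behavior follows; the record fields and threshold value are our own illustrative choices, not the sensor's actual interface:

```python
from collections import namedtuple

# An address-event: pixel coordinates, a timestamp, and an ON/OFF polarity.
Event = namedtuple("Event", ["x", "y", "t_us", "polarity"])

def pixel_update(x, y, t_us, log_delta, threshold=0.15):
    """Each pixel fires independently when its log-illuminance change crosses
    a threshold; sub-threshold changes produce no output at all, which is
    what makes the event stream sparse and non-redundant."""
    if abs(log_delta) < threshold:
        return None
    return Event(x, y, t_us, +1 if log_delta > 0 else -1)
```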
Event-based cameras are perfectly suited to space missions where the
resource budget is limited and radiation can have catastrophic effects on
hardware. These sensors have the potential to improve numerous space
applications, including those involved in space situational awareness, target
tracking, observation and astronomical data collection [2]. Due to the harsh
conditions entailed, however, the performance of such sensors in space is yet
to be explored. The scope of this work is to test the resilience of
neuromorphic sensors to neutrons impacting the sensor in a highly radiative
environment. The goal is to determine the failure modes of the neuromorphic
camera as seen under the same spectrum as that produced by cosmic rays and to
measure the possible impact of neutrons on the temporal precision of output
events, noise levels, and computation.
Although studies have been carried out into the behavior of various
optoelectronic devices under neutron radiation [3][4][5][6][7][8], no work to
date has addressed the radiation-tolerance aspects of event-based visual
sensors to analyze if this technology is capable of retaining its efficacy
under radiative conditions. To observe and evaluate single-event effects, we
irradiated a neuromorphic event-based sensor at Los Alamos National Lab’s
(LANL) ICE-II neutron facility.
The measured neutron energy distribution at LANL-ICE-II is significantly more
intense than the flux of cosmic-ray-induced neutrons, and this allows for
testing at greatly accelerated rates. An ICE-II radiation test of less than an
hour is equivalent to many years of neutron exposure due to cosmic-rays [9].
Neutrons are known to interact with the materials in the semiconductor and
produce daughter particles, which may deposit or remove charge in sensitive
volumes of the chip. If the deposited charge is significant enough, it can
change the state of a bit in the system. In a digital system this change of
state is known as a bit-flip. Sensors include analog circuitry, and therefore
produce more complex behavior than simple bit-flips. Beam testing is popular
in sensor processing to classify single-event effects (SEEs) in new computing
systems and test the robustness of systems to single-event upsets (SEUs).
Different systems may respond in different ways to the radiation that brings
about SEEs, producing faults and errors of varying degrees. The effect of SEEs
can range from negligible, where an unused area of memory is affected, to
single-event latch-ups that could damage the system permanently.
Knowing how a system may respond to radiation is vital to the success of a
space mission insofar as it provides an overview of the kinds of upsets that
may arise. This information allows designers to plan for any problems that may
be encountered in flight. Single-event upsets (SEUs) are transient in that
they do not permanently damage the device, but they may cause silent data or control errors which, if uncaught, may lead to a loss of performance or
accuracy. To reduce risk, it is therefore vital to know how a new system will
respond to radiation before deployment.
In this paper, we measured the effect of radiation and categorized the SEEs
observed in the sensor. We also tested how radiation affects pure event-based
computation in the context of optical flow estimation, which is known to be
sensitive to noise and temporal imprecision, under both radiation and non-
radiation conditions. Finally, we also used this preliminary data to develop a
simulator that makes it possible to inject events with radiation-noise effects
into any data stream. We call this simulator the "Event-based Radiation-Induced Noise Simulation Environment" (Event-RINSE). Event-RINSE allows realistic neutron-beam effects to be added to any event-based data sequence. These simulated radiation effects enable designers to test developed algorithms prior to mission deployment.
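The core idea behind such a simulator can be sketched as a noise-injection pass over an event stream. The following illustrative Python is ours, not the Event-RINSE implementation; the function name, event layout, and burst parameters (loosely modeled on the macro-pulse structure reported later in this paper) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_burst_noise(events, sensor_shape=(640, 480),
                       burst_period_us=8250, burst_len_us=625,
                       events_per_burst=100, on_fraction=0.75):
    """Inject synthetic radiation-like noise into an event stream.

    `events` is an (N, 4) array of [x, y, t_us, polarity] rows sorted by
    timestamp. Noise is added as periodic bursts of mostly-ON events,
    loosely mimicking the neutron macro-pulse structure.
    """
    t_start, t_end = events[0, 2], events[-1, 2]
    noise = []
    t = t_start
    while t < t_end:
        # Each burst scatters events over the sensor within burst_len_us.
        n = events_per_burst
        xs = rng.integers(0, sensor_shape[0], n)
        ys = rng.integers(0, sensor_shape[1], n)
        ts = t + rng.integers(0, burst_len_us, n)
        ps = (rng.random(n) < on_fraction).astype(int)  # 1 = ON, 0 = OFF
        noise.append(np.stack([xs, ys, ts, ps], axis=1))
        t += burst_period_us
    merged = np.concatenate([events] + noise)
    return merged[np.argsort(merged[:, 2])]  # re-sort by timestamp

# Toy clean stream: 1000 events over 0.1 s.
clean = np.stack([rng.integers(0, 640, 1000),
                  rng.integers(0, 480, 1000),
                  np.sort(rng.integers(0, 100_000, 1000)),
                  rng.integers(0, 2, 1000)], axis=1)
noisy = inject_burst_noise(clean)
```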
## II Background
This section gives an overview of the neuromorphic event-driven visual sensor,
its data acquisition principles, and its data types. The use of event-driven
sensors for space applications is also discussed.
Figure 1: Event-based sensor operating principles: (A) The event-based sensor
used in this experiment. (B) When a given pixel’s luminosity change reaches a
given threshold, it produces a visual event with an x and y address, a
timestamp, and a polarity, which is either ON or OFF depending on the change
in relative luminosity. (C,D) The stream of events generated by three rotating
shapes, shown here in a color version of the sensor’s absolute light
measurement output that comes with every event.
### II-A Neuromorphic Event-Driven Visual Sensors
Biomimetic, event-based cameras [10] are a novel type of vision sensor that,
like their biological counterparts, are made of independent cells/pixels which
are driven by events taking place in their field of view, generating an
asynchronous stream of spikes/events. This method of data collection is in
contrast to conventional vision sensors which are driven by artificially
created timing and control signals (frame clock) to create full images that
have no relation to either the content or the temporal dynamics of the visual
scene. Over the past few years, several types of these event-based cameras
have been designed. These include temporal contrast vision sensors sensitive
to change in relative luminance, gradient-based sensors sensitive to static
edges, devices sensitive to edge-orientation, and optical-flow sensors.
Most of these vision sensors output visual information about the scene in the
form of discrete events using Address-Event Representation (AER) [11][12][13].
The data encodes the visual information by sending out tuples $[x;y;t;p]$ of space (the pixel where the change occurred), time (when the change occurred), and polarity (an ON or OFF event depending on whether luminance increased or decreased). The event-based camera used in this work is a time-domain
encoding event-based sensor with VGA resolution. The sensor contains a
640$\times$480 array of fully autonomous pixels, each relying on an
illuminance-change detector circuit. In this study, we will only consider the
luminance change circuit that is common to all existing event-based sensors
[14].
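The AER tuple format described above maps naturally onto a small record type. The sketch below is illustrative only; the `Event` and `parse_aer` names are ours, not from any event-camera SDK.

```python
from typing import NamedTuple

class Event(NamedTuple):
    """One AER event: pixel address, microsecond timestamp, polarity."""
    x: int  # column, 0..639 for the VGA sensor discussed here
    y: int  # row, 0..479
    t: int  # timestamp in microseconds
    p: int  # polarity: 1 = ON (luminance increased), 0 = OFF

def parse_aer(raw):
    """Turn raw [x; y; t; p] tuples into Event records, kept in time order."""
    events = [Event(*row) for row in raw]
    return sorted(events, key=lambda e: e.t)

stream = parse_aer([(10, 20, 305, 1), (11, 20, 104, 0), (10, 21, 512, 1)])
```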
The operating principle of an event-based pixel is shown in Figure 1. The
change detector of each pixel individually detects a change in brightness in
the field-of-view. Since event-based cameras are not clocked like conventional
cameras, the timing of events can be conveyed with a very accurate temporal resolution on the order of microseconds and below (the highest reported neuromorphic sensor event output rate to date is $1.3\times 10^{9}$ events per second [15]).
These sensors capture information predominantly in the time domain as opposed
to conventional frame-based cameras, which currently provide a greater amount of
spatial information. Since the pixels only detect temporal changes, redundant
information like static background is not captured or communicated, resulting
in a sparse representation of the scene. Consequently, event-based cameras can
have a high temporal-resolution with a very low data-rate [16] compared to
conventional cameras, thus conforming to low-resource requirements. Since the
pixels are independent of one another and do not need a clock, an error in a
few of them will not lead to a catastrophic failure of the device and the
sensor will be able to remain operational.
### II-B Conventional Space Situational Awareness
Space situational awareness (SSA) has been an important topic in military
applications for many years [2][17][18][19]. SSA is the ability to detect and
keep track of surrounding objects and debris to avoid collisions. For SSA,
vision systems with high temporal-resolution and low latency are required to
accurately detect objects. Event-based cameras are therefore the perfect
candidate to replace limited conventional sensing methods in satellite
awareness.
Ender et al. [20] detail the use of radar in SSA for collision detection,
orbit estimation, and propagation. The benefit of radar is that it has a very
large coverage, meaning it can consistently observe a wide area in an arc of
almost 5000 km. However, since radio uses long wavelengths, this methodology
would only work for larger objects [20]. Smaller objects would be impossible
to detect via radio waves.
One difficulty in object detection to avoid collisions in space is the
modeling of non-linear orbits in real-time. Several methods have been proposed
to predict non-linear orbits for SSA. One is to use Gaussian mixture modeling
to exploit properties of linear systems to extrapolate information about a
non-linear system, and then to use Gaussian splitting to reduce the errors
induced by that extrapolation [21]. The mixture model enables complex, non-
linear orbits to be mapped more accurately, providing a better judgment of
potential collisions. The issue arises when this kind of surveillance for
object avoidance needs to be done autonomously. The calculations presented are
too complex to be performed efficiently by a satellite’s embedded platform.
Also, since the analysis carried out by such platforms is based on statistical
manipulation, it needs to be verified by human intervention in order to avoid
any statistical anomalies that may cause potential collisions.
Abbot and Wallace [22] tackle the SSA problem of decision support for tracking
large amounts of orbiting space debris. They claim that the limited number of
sensors leads to inconsistent surveillance of the objects under observation,
and therefore propose a cooperative monitoring algorithm for geosynchronous
earth orbit satellites to address collision prevention and provide automated
alerts. However, this methodology relies on Bayesian modeling, which can be
computationally intensive for embedded platforms and requires publicly
available data to create the models. With satellites of unknown orbits,
unexpected collisions could therefore become an issue.
These techniques also require fast positional capture of the observed objects
which is difficult with the video cameras currently available for space
exploration. Event-based cameras could fill this gap by providing low-latency, low-resource sensing for SSA.
### II-C Event-Based Sensors for Space Situational Awareness
The high dynamic range of event-based sensors with both low-light and bright-
light sources allows visual information to be inferred even in the darkness of
space or when a bright sun is in the sensor’s field-of-view (FoV). It also
means that the area around the sun can be observed, even when the sun is
coming up over the horizon of a satellite’s orbit.
The use of event-based cameras in space-related applications is not well
developed. Most of the work has been carried out in the context of terrestrial
telescope observation of low brightness objects in Low-Earth Orbit (LEO) and
Geostationary-Earth Orbit (GEO) [23][24].
Event-based cameras can offer a promising solution to collision avoidance in
space provided their high temporal precision and sparsity of data are properly
taken into account when designing algorithms. The current trend of generating frames of events and gray levels, in order to recycle decades of conventional computer vision and machine learning techniques, has led to these sensors being used as simple high-dynamic-range conventional cameras. In this work we focus only on the
temporal properties of these sensors, considering cases of per-event
computation that preserve the temporal properties of event-based cameras that
have been shown to be the key to developing new applications [25].
There has been extensive research into event-based cameras for real-time
tracking and low-power computer systems within the last decade. Many
algorithms have been developed that allow for objects to be tracked within the
visual space of an event-driven sensor. Reverter et al. developed one such
method that makes it possible to track many different shapes, as long as the
pattern of the shapes is known a priori [26]. Similarly, Lagorce et al.
provide a multi-kernel Gaussian mixture model tracker for the detection and
tracking of different shaped objects [27]. Other methods use spatial matching
to allow object tracking even in occluded conditions [28][29] and provide
haptic stability by tracking gripper positions in microrobotics applications
[30]. The low computational requirements of event-based sensors even allow
tracking systems to be implemented on embedded platforms [31] and on FPGAs
[32]. Newer improved spatio-temporal feature detection could improve these
methods further [33]. Novel methods can even detect and track objects in
conditions where both the camera and the objects are moving independently [30][34][35].
### II-D Neutron-Beam Testing
Srour and McGarrity [36] detail the effects of space radiation on
microelectronic circuits, discussing damage, ionization, and SEEs on
optoelectronic devices. Modern models describe most of the radiation
experienced in the space environment as consisting of protons and heavy ions
[37]. However, this experiment primarily uses wide-spectrum neutrons to test
the sensor of interest. In general, neutron beam testing is useful for
classifying single-event effects in electronics. Since interest is focused on
the response of the device, the source of the upsets becomes irrelevant.
Neutron testing is also useful to test the robustness of systems to SEUs. As
an example, NASA Langley Research Center and Honeywell performed neutron beam
tests to study the robustness of their flight control computer architecture
[38]. Their primary goal was to show that they were able to recover from
neutron-induced SEUs. The recovery demonstrated the system's capabilities in a hazardous environment, even though neutrons are not the radiation source the system would encounter in flight.
When radiation impacts a device, energy is deposited into the target material,
causing various faults in the hardware. These faults can have different
effects such as memory corruption or glitches in analog and digital hardware
[39]. In an imaging sensor, these errors would manifest as corrupted pixels or
improper output. One type of effect, single-event effects (SEEs), occurs when
a high-energy particle strikes a microelectronic component and changes a
single state of the internals in the device [36]. These effects can then
manifest as transient-data errors, corrupting the data output.
## III METHODOLOGY
This section gives an overview of how the radiation experiment was performed,
explaining the Los Alamos Neutron Science Center’s neutron beam and detailing
how data was collected during irradiation.
Figure 2: (A) The event-driven sensor under test sitting on a stand that is
non-reactive to neutron radiation. To ensure that the neutrons passed through
the sensor, the green laser was used to aim the beam. (B) Schematics showing
the sensor placed at a fixed distance from the beam source in two conditions -
facing the beam directly and at a $90\degree$ angle of incidence.
### III-A Event-Camera
The sensor used for the experiments in this paper was an event-based sensor
based on [14] with VGA resolution (640$\times$480 pixels) fabricated in 180nm
CMOS-CIS technology. The chip has a total die size of 9.6$\times$7.2 mm$^{2}$, with
a pixel size of 15$\times$15$\mu m^{2}$, and a fill factor (ratio of photo-
diode area over total pixel area) of 25%. The maximum event-rate for this
camera is specified as 66 Meps (mega events per second). During recordings,
output events were time-stamped with micro-second resolution by the camera
interface and communicated via USB to a host computer for storage. In our
recordings we observed a maximum of about 30 events captured with the same
micro-second timestamp, meaning that the maximum sensor throughput was not
reached.
### III-B Irradiation
The event-camera under test was irradiated at ICE-II, Los Alamos Neutron
Science Center’s wide-spectrum neutron-beam facility. The Los Alamos Neutron
Science Center (LANSCE) provides the scientific community with intense sources
of neutrons, which can be used to perform experiments supporting civilian and
national security research. The ICE facility was built to perform accelerated
neutron testing of semiconductor devices. Flight Path 30L and 30R, known as
ICE House and ICE-II, allow users to efficiently set up and conduct
measurements [9]. The sensor was irradiated for two days, from November 23,
2019 to November 24, 2019 under wide-spectrum neutrons of energies ranging
from $0.1MeV$ to $>600MeV$. The general setup is shown in Figure 2.
An event-based camera was placed at a fixed distance in the beam to act as a
control on the effective neutron flux. The sensor was placed at different
angles of incidence from the beam as shown in Fig. 2(B) to detect any
potential differences in the effects observed. Data was collected at an angle
of $90^{\degree}$ from the beam and directly facing the beam source.
In this experiment, the event-camera was irradiated with the lens cap on to
avoid any light or environmental noise on the sensor. Thus, the noise recorded from the sensor in this experiment comes primarily from the effects of the radiation rather than from noise induced by light sources in the environment.
Figure 3: Average number of noise events per second induced due to radiation
compared to noise without irradiation over 2 days of irradiation. The
recordings were taken with the lens cap on the camera, so the induced events
were due either to the inherent thermal noise or to noise induced through the
neutrons. Radiation induced more ON events than OFF events (3:1 ratio).
### III-C Data Collection and Analysis
The sensor was connected to a computer running software which interfaced with
the sensor to record events. Events were later processed offline. Data was
taken with the beam on and off in order to observe the increase in noise
caused by irradiation. Radiation-induced noise can be seen in the form of
clustered noise-like patterns and line streaks of moving particles in the
focal plane, as will be detailed in the following sections. The recorded data
was parsed to get an event rate to measure the number of events generated by
the sensor per second. The counted events were then separated into ON and OFF
events. The average events per second were calculated for each experiment with
standard deviation as error.
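The per-second rate computation described above can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the study's code; the function name and the binning into non-overlapping one-second windows are our assumptions.

```python
import numpy as np

def event_rate_stats(timestamps_us, polarities, bin_s=1.0):
    """Per-second ON/OFF event rates with mean and standard deviation.

    timestamps_us: event times in microseconds; polarities: 1 = ON, 0 = OFF.
    """
    t = np.asarray(timestamps_us) / 1e6
    p = np.asarray(polarities)
    edges = np.arange(t.min(), t.max() + bin_s, bin_s)
    on_rate, _ = np.histogram(t[p == 1], bins=edges)
    off_rate, _ = np.histogram(t[p == 0], bins=edges)
    return {"on_mean": on_rate.mean(), "on_std": on_rate.std(),
            "off_mean": off_rate.mean(), "off_std": off_rate.std()}

# Synthetic stream: 5000 events over 10 s with a 3:1 ON:OFF bias, as
# reported for the radiation-induced noise in Figure 3.
rng = np.random.default_rng(1)
ts = np.sort(rng.uniform(0, 10e6, 5000))
ps = (rng.random(5000) < 0.75).astype(int)
stats = event_rate_stats(ts, ps)
```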
Figure 4: Probability density of events by location on the sensor for (A) 0
degree angle of incidence and (B) 90 degree angle of incidence. The graphs
show that the entire sensor was radiated uniformly over the field of view for
both conditions.
Data was collected with the sensor facing the beam source and at
$90^{\degree}$, to observe how the angle of incidence affected the incoming
radiation noise. The number of events was measured for both ON and OFF events
in each orientation and compared. A Mann-Whitney U test was used to determine
statistical significance in the differences between the two orientation
distributions [40].
This experiment measured patterns influenced by the effective neutron flux and
the number of ON events and OFF events. The patterns were analyzed using an
understanding of the sensor’s internal circuitry to determine the physical
effect of radiation on the sensor. This methodology presents a categorization
of SEEs in the form of radiation-induced noise.
To ensure the radiation-induced noise would not overwhelm signal integrity, a
pendulum was placed in the visual field to measure the signal-to-noise ratio.
Since the signal could be observed with and without radiation-induced noise,
the signal-to-noise ratio could be calculated by simply dividing the number of
signal events by the noise events produced by radiation. This ratio could then
be used to determine the robustness of the sensor to radiation in terms of
loss of signal integrity. To validate the signal-to-noise ratio, a correlation
test was performed between the radiated data and the non-radiated data.
## IV RESULTS
This section gives an overview of the results of the radiation experiment,
discussing noise rates, patterns, and analyses.
### IV-A Induced-Event Rate
Data was collected with the lens cap on the sensor to minimize environmental
influence from external lighting. First, the mean number of radiation-induced
ON and OFF events per second was measured. The average number of events can be
seen in Figure 3. A significant bias towards ON events was observed.
The induced-event probability density was plotted against the pixel
coordinates of the sensor to observe any location preferences for upsets. To
measure this, the pixel location of each induced event was divided by the
total number of events measured for both angles of incidence. These
measurements can be seen in Figures 4(A) and 4(B).
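The per-pixel probability density used in Figure 4 amounts to a normalized 2D histogram of event coordinates. A minimal sketch, assuming events arrive as separate x and y arrays (the function name is ours):

```python
import numpy as np

def event_density(xs, ys, shape=(640, 480)):
    """Per-pixel probability density of induced events (sums to 1)."""
    hist, _, _ = np.histogram2d(xs, ys,
                                bins=[shape[0], shape[1]],
                                range=[[0, shape[0]], [0, shape[1]]])
    return hist / hist.sum()

# Uniformly scattered synthetic events over the VGA sensor.
rng = np.random.default_rng(3)
density = event_density(rng.integers(0, 640, 10_000),
                        rng.integers(0, 480, 10_000))
```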
In both cases, the induced events are quite uniform across the sensor, with
the $0\degree$ angle of incidence tending to bias towards the location of the
neutron beam's 1 inch diameter. We can see that about twice as many events were produced for high $x$ and low $y$ values as for the opposite corner.
However, this is due to human error in placing the sensor in the beam path. In
other words, there is no particular area of the sensor that is more vulnerable
to neutron radiation effects than other areas. This is further demonstrated in
the $90\degree$ angle of incidence result. Every pixel across the sensor
showed a similar response.
Figure 5: Events observed at different angles of incidence. Data was collected
at $90^{\degree}$ from the beam and facing directly towards the beam. No
significant difference was found between the number of noise events generated
for the two conditions even though the sensor would be expected to interact
with more neutrons when facing the beam. At $0\degree$, more events were
produced at high $x$ and low $y$ values than for the opposite corner. This is
the result of human error in placing the sensor in the beam path. As expected,
every pixel across the sensor showed a similar response.
### IV-B Angle of Incidence Comparison
Data was collected at two orientations: facing the beam with an angle of
incidence of $0^{\degree}$ and at an angle of incidence of $90^{\degree}$ from
the beam source. These two distributions were then analyzed separately to
observe any significant differences.
Figure 5 shows that there was a slight difference between the number of OFF
events per second induced between the two orientations. A Mann-Whitney U test
was performed on the two distributions to test for statistical significance
[40] but no statistically significant difference was found.
Figure 6: Number of events induced in a $50\times 50$ pixel bounded box for a
light room vs. a dark room. Given the contrast-sensitive nature of the sensor, and as expected, we observed that more ON noise events were generated in the dark-room case, since the neutron interactions allowed the event
generation threshold to be crossed more often. The OFF noise events did not
increase significantly. Figure 7: Number of signal events observed vs
radiation-induced noise events. Signal events were calculated as the rate of
events while recording a cyclic pendulum, whereas the noise rate was computed from isolated radiation-induced events. The signal-to-noise ratio for the
sensor even under strong neutron radiation was found to be 3.355. Figure 8:
(A) An orbital pendulum recorded using the sensor and the event rate
calculated as the number of events within a 1 ms moving window with (red) and
without (blue) radiation turned on within a bounded box, as shown in the image
panels. The images show the event frames obtained within the time window at
different time points in the recording. Qualitatively, the sensor produced
similar images for both conditions. (B) Calculated frequency of the pendulum
using the event rates. The frequency of the pendulum’s motion could be
obtained using the FFT in each case.
### IV-C Effects of Room Brightness
When deployed in space, these vision sensors may be subject to varying levels
of background light intensity. To understand how neutron radiation would
affect the sensor under such varying conditions, we recorded background noise
events during radiation while placing the sensor in an artificially lit room
with illuminance levels of around 500 lux and with a lens cap covering the
sensor to simulate a low-light intensity condition with a light level close to
0 lux. The intrinsic characteristics of the sensor pixels allow them to be
invariant to the background lighting conditions thanks to the relative change
operation mode and the log scale. Figure 6 shows the number of ON and OFF events induced by neutron radiation in the artificially lit "light room" and in the low-intensity "dark room" case. We find that the number of ON events induced in the dark room was nearly $1.5$ times higher than in the light room. Conversely, no significant difference was observed in the OFF events induced in the two conditions. Details of this process are explained in Section V.
### IV-D Signal-to-Noise
In order to measure the signal-to-noise ratio, events were compared with the
beam ON and OFF while the sensor observed a dynamic scene composed of a
cyclic-pendulum, as shown in Figure 8(A). To calculate the ratio between the
two values, the number of signal events measured with the cyclic-pendulum was compared directly with the number of isolated radiation-induced noise events.
This comparison can be seen in Figure 7. Comparing these values gives a
signal-to-noise ratio of $3.355$.
To ensure the signal can be seen even when radiation is introduced, events in
a $50\times 50$-pixels bounding box (shown in green and red in Figure 8(A))
were measured and plotted to compare signal data with and without radiation.
Since the pendulum's movement is cyclic, we calculated the event-rate data over time using a moving window of 1 ms. The dominant frequency of this rate data, computed using the Fourier transform, should ideally give the frequency of oscillation of the pendulum. The Fourier transform of the signal with and
without radiation is shown in Figure 8(B). With the addition of radiation
noise, the signal’s major frequency can still be estimated with some slight
noise at low frequencies.
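Recovering the oscillation frequency from the windowed event rate can be sketched as follows. This is our illustrative reconstruction: non-overlapping 1 ms bins approximate the moving window, and the 2 Hz synthetic pendulum stream is hypothetical.

```python
import numpy as np

def dominant_frequency(timestamps_us, win_ms=1.0):
    """Estimate the dominant oscillation frequency from event timestamps.

    Bins events into win_ms windows and returns the FFT peak frequency
    of the mean-subtracted rate signal.
    """
    t = np.asarray(timestamps_us) / 1e6
    dt = win_ms / 1000.0
    edges = np.arange(t.min(), t.max() + dt, dt)
    rate, _ = np.histogram(t, bins=edges)
    spectrum = np.abs(np.fft.rfft(rate - rate.mean()))
    freqs = np.fft.rfftfreq(rate.size, d=dt)
    return freqs[spectrum.argmax()]

# Synthetic pendulum-like stream: event density modulated at 2 Hz over 10 s.
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 10e6, 200_000))
keep = rng.random(t.size) < 0.5 * (1 + np.sin(2 * np.pi * 2 * t / 1e6))
f = dominant_frequency(t[keep])
```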
Figure 9: A Pearson correlation test was performed for the events obtained
from the pendulum’s movement with and without radiation. The high correlation
and small standard deviation show that the signals obtained from the two
conditions were quantitatively similar. Figure 10: The movement directions of
different parts of the pendulum system computed from the recorded event
streams with and without radiation. (A) The colors represent the movement
directions of the events as indicated by the color wheel. (B) Graphs showing
the computed average movement directions for events occurring in a 5ms moving
window within the black bounding box shown in the images. The Pearson
correlation coefficient between two signals was 0.7189 indicating that the
direction computation was not affected by the radiation.
To validate the signal-noise ratio of the radiated sensor, a Pearson-
correlation test was performed between the radiation data and the non-
radiation data. With a high correlation, it can be shown that the two
distributions follow each other closely with minor linear transformations. Due
to the varying size of samples, sub-samples were taken and analyzed to
estimate the correlation R-value. The distribution of R-values can be seen in
Figure 9. The measured R-value was $0.70\pm 0.02$ with a negligible p-value.
It can therefore be deduced with high confidence that the radiation-induced
noise is not enough to significantly change the data output from the original,
non-radiated data.
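The sub-sampled correlation estimate described above can be sketched as repeated draws of aligned sub-samples. This is our hypothetical analogue of the procedure (function name, sub-sample sizes, and the synthetic "radiated" series are assumptions, not the study's code):

```python
import numpy as np

def subsampled_r(a, b, n_sub=100, n_rep=200, rng=None):
    """Estimate the Pearson R between two rate series by repeatedly
    drawing aligned random sub-samples, returning (mean, std) of R."""
    if rng is None:
        rng = np.random.default_rng(5)
    m = min(len(a), len(b))
    rs = []
    for _ in range(n_rep):
        idx = rng.choice(m, size=n_sub, replace=False)
        rs.append(np.corrcoef(a[idx], b[idx])[0, 1])
    rs = np.array(rs)
    return rs.mean(), rs.std()

rng = np.random.default_rng(5)
base = np.sin(np.linspace(0, 20 * np.pi, 2000))           # clean signal
with_noise = base + 0.5 * rng.standard_normal(2000)        # "radiated" series
r_mean, r_std = subsampled_r(base, with_noise)
```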
The ultimate goal of deploying sensors on missions is to obtain useful
information from them while in space. One of the most fundamental, low-level
features that can be extracted from the event stream is motion flow. The
optical flow provides the speed and direction of an object’s movement in the
camera plane, where its precision is related to the temporal properties of
events. We computed optical flow on events captured from the sensor recording
the moving pendulum system using the aperture-robust event-per-event optical
flow technique introduced in [41]. The average direction of movement of one
arm of the pendulum inside a bounding box (shown in Figure 10(A)) is plotted
in Figure 10(B). The average angle values follow the expected wave as the arm
of the pendulum moves up and down vertically. The Pearson correlation between
the two conditions was found to be 0.7189, showing that movement computation
is not affected by radiation.
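Averaging per-event flow directions within a window calls for a circular mean, since angles wrap at $\pm\pi$. A minimal sketch (our illustration, not the optical-flow method of [41]):

```python
import numpy as np

def mean_direction(angles_rad):
    """Circular mean of per-event flow directions, avoiding the
    wrap-around bias of a plain arithmetic mean near +/- pi."""
    return np.arctan2(np.sin(angles_rad).mean(), np.cos(angles_rad).mean())

# Directions clustered around +90 degrees, as for an upward-moving edge.
rng = np.random.default_rng(6)
angles = np.pi / 2 + 0.1 * rng.standard_normal(500)
avg = mean_direction(angles)
```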
Figure 11: Clustered patterns of noise obtained by searching for clusters with
minimum sizes of 10 pixels in 30-second recordings. Time slices of 5 ms were
processed consecutively, searching for 10-pixel clusters. All clusters
detected during 30 seconds are grouped in the plots. Events were recorded for
a $0\degree$ angle of incidence and a $90\degree$ angle of incidence.
Significantly more line segments can be seen at $90\degree$. At $0\degree$,
fewer, smaller (average 30 pixels) and more clustered noise patterns were
observed than at $90\degree$, where longer (up to 300 pixels) and more
frequent (up to 7 times) line segments were observed. Figure 12: Details of
20ms event capture when exposed to neutron beam without visual stimulus. The
blue dots represent positive events and the red dots represent negative
events. Positive events are mostly concentrated in 600-800$\mu s$ time
intervals separated by about 8 $ms$ intervals in which mostly negative events
are recorded. (A) 3D plot (x,y,time) of events captured during the 20 $ms$
interval. Small scattered dots/clusters can be observed plus a line segment in
the lower right part. (B) Time vs x-coordinate projection of the recorded
events. (C) Events corresponding to the line segment in (A) which have been
isolated for better visibility. Figure 13: Noise rates for different
conditions with and without neutron radiation. (A) The overall noise without
radiation is very low. (B, C) Radiation noise when the sensor was placed at
$0\degree$ (facing) (B) and at $90\degree$ (C) to the beam source. In each
case, we recorded the bursts of noise most likely due to neutron pulses from
beam generation. (D) Similar noise was found in the recording when a circular
pendulum was recorded with the camera. The burst noise was superimposed on the
low frequency events generated by the pendulum motion. (E) Details of the
neutron’s macro-pulse sequence can be observed from a zoomed-in plot of the
event bursts in (B). Each neutron macro-pulse produced positive event bursts
with duration of about 1.6 $ms$, and with peaks separated on average by 8.3
$ms$. Five macro-pulse responses appear, with a duration between the first and
the fifth of 33.25 $ms$, while the time between two 5-macro-pulse trains is
16.75 $ms$.
### IV-E Noise Patterns
Radiation-induced noise, as shown in Figure 11 for both orientations, can be
categorized into two main groups: clusters and line segments. Line segments
represent a line of events that appear across the frame due to a neutron
impacting the sensor at a non-zero angle of incidence. Clusters represent a
random burst of events in a small area. The angle of incidence between the
sensor and the radiation source affects the number of line segments. About 5-7
times more line segments appear with a $90\degree$ angle of incidence than
with a $0\degree$ angle of incidence. Conversely, about twice as many clusters
appear with a $0\degree$ angle of incidence than with a $90\degree$ angle of
incidence. Significantly longer lengths of line segments occurred at
$90\degree$, where streaks of up to 300 pixel lengths were observed, whereas
smaller streaks, with maximum lengths of 30-50 pixels, were seen at
$0\degree$. An example of differences in noise cluster patterns can be seen in
Figure 11. These figures were obtained by analyzing recordings of $10^{7}$ events, each with a duration of about 30 seconds, while searching for unconnected clusters of at least 10 pixels in size. Note that in Figure 11(A) more event
density can be seen in the corner of high $x$ and low $y$ values than in the
opposite corner. This is similar to what was observed in Figure 4 due to human
error in placing the sensor in the beam path.
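The cluster search over a time slice can be sketched as connected-component labeling over the active-pixel grid. This flood-fill version is our illustration of the idea (the 8-connectivity choice and function name are assumptions); the minimum cluster size matches the 10 pixels quoted in Figure 11.

```python
from collections import deque
import numpy as np

def find_clusters(xs, ys, shape=(640, 480), min_size=10):
    """Group active pixels in one time slice into 8-connected clusters
    and keep those with at least `min_size` pixels."""
    grid = np.zeros(shape, dtype=bool)
    grid[xs, ys] = True
    seen = np.zeros(shape, dtype=bool)
    clusters = []
    for x, y in zip(xs, ys):
        if seen[x, y]:
            continue
        comp, queue = [], deque([(x, y)])
        seen[x, y] = True
        while queue:  # BFS over the 8-neighbourhood
            cx, cy = queue.popleft()
            comp.append((cx, cy))
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nx, ny = cx + dx, cy + dy
                    if (0 <= nx < shape[0] and 0 <= ny < shape[1]
                            and grid[nx, ny] and not seen[nx, ny]):
                        seen[nx, ny] = True
                        queue.append((nx, ny))
        if len(comp) >= min_size:
            clusters.append(comp)
    return clusters

# One 12-pixel blob plus an isolated pixel: only the blob qualifies.
blob = [(100 + i % 4, 200 + i // 4) for i in range(12)]
xs = np.array([p[0] for p in blob] + [5])
ys = np.array([p[1] for p in blob] + [5])
clusters = find_clusters(xs, ys)
```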
Analysis of noise line segments showed a burst of ON events over a fast time
frame, followed by a long relaxation-period of OFF events after a short wait
time, as shown in Figure 12. This is due to an influx of positive current in
the sensor’s photo-diodes creating a burst of ON events, followed by a
relaxation period for the current to return to normal, creating OFF events.
The ON events burst over about 600-800 $\mu s$ and the negative-event tail is about 10 $ms$ long.
Viewing the event rate of the bursts, we see peaks of ON events followed by a
long tail of OFF events. This effect is seen within all noise-types and is
shown in Figures 13 (B) and (C). Figure 13(E) shows a zoomed view with finer
details. Bursts of 5 peaks separated by a time of 16.75 $ms$ can be seen. Each
peak has a duration of about 1.6 $ms$ of positive events. Consecutive bursts
are separated by 8.25 $ms$ within the 5 peaks. Consequently, on average, the
five peaks occur every 33.25 $ms$ + 16.75 $ms$ = 50 $ms$, which is equivalent
to 100Hz peaks. This coincides with the LANSCE neutron source description
[42], where the neutron source emits a pulse of neutrons at a rate of about
100 Hz. Each such neutron peak is referred to as a neutron “macro-pulse”.
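The macro-pulse timing above can be cross-checked with a few lines of arithmetic. The figures in the text are measurements, so the ideal values computed here differ slightly (33.0 $ms$ vs. the measured 33.25 $ms$):

```python
# Sanity check of the macro-pulse timing reported above.
peaks_per_train = 5
intra_train_gap_ms = 8.25    # spacing between consecutive peaks
inter_train_gap_ms = 16.75   # gap between successive 5-peak trains

# Span of one 5-peak train (4 gaps), then the full train period.
train_span_ms = (peaks_per_train - 1) * intra_train_gap_ms
train_period_ms = train_span_ms + inter_train_gap_ms  # close to 50 ms

# Five peaks per ~50 ms period -> roughly 100 peaks per second.
peak_rate_hz = peaks_per_train / (train_period_ms / 1000.0)
```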
## V Circuit-Level Interaction Interpretation
High-energy neutron beams are thought of as ionizing radiation, which can
instantaneously change the charge of an electric circuit node within the
camera sensor chip. Since an event-camera can capture internal changes with
microsecond resolution, these sensors provide a new way of “seeing” fine
interactions taking place between fast radiation particles and the electronic
chip while it is operating.
Figure 14: Three possible free neutron decays. (A) The neutron passes through
the sensor casing without decaying. (B) The neutron decays into a proton and
electron. (C) The neutron decays into a proton and an electron which emits
gamma radiation.
For the free neutrons passing through the sensor, there are three main
possibilities: the neutron can pass through as a neutron without decaying, the
neutron can decay into a proton and an electron, or the neutron can decay into
a proton and an electron which emits a gamma photon due to internal
bremsstrahlung [43]. A diagram of these three possibilities can be seen in
Figure 14. Due to quantum uncertainties and the inability to distinguish
between particles, it is impossible to distinguish the cases’ impact on the
sensor in this experiment. Further research must therefore be performed to
detail the exact cause of the induced noise patterns.
In digital circuits, high-energy charged particles and radiation beams tend to
mainly impact memory circuits, where charge is stored on tiny parasitic
capacitors, producing bit-flips and consequently altering system states and
data. In our sensor, however, we observed consistent sudden positive events
over many pixels followed by negative event tails, synchronously with the
macro-pulse neutron emission patterns of LANSCE [42]. The fact that most
responsive pixels produce a burst of positive events during each 625$\mu s$
LANSCE neutron macro-pulse rules out the possibility that the sensor is
suffering bit-flip effects at temporary memory-storing nodes. If this were the
case, we would expect to observe a random mix of positive and negative events
within each neutron macro-pulse. However, most of the affected pixels respond
by providing a synchronized burst of positive events. It can thus be inferred
that it is the pixels’ photo-diodes that are responding to the neutron macro-
pulses. Photo-diodes drive a photo-current proportional to incident light
intensity. If a high-energy proton or electron crosses the depletion region of
a photo-diode, it will interact, either by attraction or repulsion, with the
electrons flowing through it at that moment, thus producing a sudden decrease
in photo-current and, consequently, negative events. However, since we
observed a sudden, very significant increase in photo-current (resulting in
positive events), we hypothesize that the scattered pixels are sensing sudden
radiation at their locations. This would also explain the observation of
segments sensed simultaneously by consecutive pixels. Figure 12 shows one such
segment in a 20 $ms$ time slice of events, corresponding to three consecutive
625$\mu s$ neutron macro-pulses separated from each other by 8.25 $ms$. Most
of the pixel responses show small clusters of fewer than 10 pixels, the
exception being the 190-pixel long segment. Our hypothesis is that the sensor
is crossed by radiation bursts, most of them perpendicular to the chip plane,
but occasionally interacting with deflected radiation at other angles and
producing line segments. However, all radiation interactions occur precisely
during the beam’s macro-pulse times.
Figure 15: Examples of X-projections of noise line segments for events
recorded from (A) real data and (B) simulated noise data. All cases show a
burst of ON (blue) events with a long OFF (red) tail. As each example line
segment was detected, the events were separated from the recording and then
projected on the X-axis.
The electronic pixel circuitry of an event-camera chip has a limited response
time in the range of 0.1 $ms$ to 10 $ms$ depending on ambient light and bias
conditions [14][44]. The LANSCE neutron source macro-pulses have a time
duration of 625 $\mu s$, which is lower than the temporal resolution of the
event sensor. The macro-pulse radiation impinging on the destination pixels
produces a sudden over-stimulation of photo-current, resulting in the sudden
generation of a handful of positive events per pixel during the neutron macro-
pulse. After such strong over-stimulation, the pixel circuit relaxes to its
steady ambient-light-driven state with a time constant in the range of 10
$ms$, producing events of negative polarity over time. This behavior of sudden
positive stimulation of 600-800 $\mu s$, where positive events are produced,
followed by about 8-10 $ms$ of negative-event relaxation is systematically
observed in the recordings. Figure 12(A) shows the 20 $ms$ event capture with
scattered noise-like dots/clusters of fast positive events (shown in blue),
followed by negative event tails (shown in red). We hypothesize that each such
dot/cluster corresponds to a neutron crossing the chip. Figure 12(B) shows the
events in Figure 12(A), but displayed in their corresponding time vs
x-coordinate projection. We can clearly see the synchronized sequence of
neutron macro-pulse-induced positive events (shown in blue), of 600-800 $\mu
s$ duration, separated by about 8 $ms$ of inter-neutron macro-pulse time where
mainly negative relaxation events are produced. The figure also shows a
190-pixel long segment with the same time profile. The events for this segment
are isolated in Figure 12(C). In this plot there are 2,031 positive events
collected over about 800 $\mu s$, followed by 1,090 negative events collected
over about 20 $ms$.
The suddenly induced photo-current hypothesis also explains the observations
in Figure 6, where more positive events are produced under dark-room
conditions than under light-room conditions. Under light-room conditions,
the photo-diodes are already driving some current and consequently reach their
maximum saturation current earlier when suddenly impinged by high energy
particles, resulting in fewer induced positive events. Under dark conditions,
the photo-current can undergo a larger variation, resulting in more positive
events.
## VI Event-RINSE Simulator
Figure 16: Average single pixel radiation-induced event rate model for
observed and simulated data. From real data we observed that neutron
interactions induced bursts of ON events of about 1.6 $ms$, beginning within
the first 1 $ms$. These were followed by long tails of OFF events lasting up
to 10 $ms$.
The simulator was used to induce noise events into the stream of recorded non-
noisy data, and the noise characteristics for single-event noise were then
averaged to create the dashed curves. The simulator was able to match the real
noise model within a margin of acceptable error.
### VI-A Simulated Noise Generation
Given a stream of event-camera data as input, the simulator steps through each
event. For every time step in the data, a noise event is either generated or
passed. The probability of injection is the Poisson probability of observing
$k=1$ event in the step, with a variable event rate. Namely,
$P(\lambda)=\lambda e^{-\lambda}$ (1)
where $\lambda$ is the frequency of an event happening per microsecond. A
starting pixel is randomly chosen uniformly across the resolution of the
sensor. The simulator decides whether injected noise is in the form of a
cluster or a line segment based on the angle-of-incidence parameter.
Specifically, the chance of injecting a cluster is based on the cosine of the
angle of incidence with some jitter-error. Thus, the probability of the
injected noise pattern is given by Eq 2.
$\begin{gathered}P(Cluster)=|\cos(\theta+\varepsilon)|\\\ P(Line\
Segment)=1-P(Cluster)\end{gathered}$ (2)
where $\theta$ is the angle of incidence in radians and $\varepsilon$ is a
small amount of error. A cluster’s shape is modelled by randomly chosen pixels
around the neighborhood of the starting point. A line segment is modelled by a
straight line with an angle chosen uniformly between 0$\degree$ and
360$\degree$.
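As a concrete sketch of the two sampling steps above, the injection decision (Eq. 1) and the cluster-vs-line choice (Eq. 2) can be written as follows. The helper names are illustrative and not taken from the released simulator:

```python
import math
import random

def injection_probability(rate: float) -> float:
    """Poisson probability of observing exactly k = 1 event in one time
    step, as in Eq. (1): P(lambda) = lambda * exp(-lambda)."""
    return rate * math.exp(-rate)

def choose_noise_pattern(theta_rad: float, jitter: float,
                         rng: random.Random) -> str:
    """Pick the injected pattern per Eq. (2): the chance of a cluster is
    |cos(theta + eps)| for incidence angle theta and small jitter eps."""
    eps = rng.uniform(-jitter, jitter)
    p_cluster = abs(math.cos(theta_rad + eps))
    return "cluster" if rng.random() < p_cluster else "line"
```

At normal incidence ($\theta=0$) the cluster probability is 1, so injected noise appears as compact dots; at grazing angles line segments dominate.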
For each pixel in the shape of the generated event, the noise pattern is
modelled by sampling a time window for ON events from $\mathcal{N}(2000\mu
s,200\mu s)$ which represents the length of time for the burst of ON events.
OFF events are sampled from $\mathcal{N}(8000\mu s,1000\mu s)$. More
precisely,
$\begin{gathered}P(ON\ Noise\
Event;t)=e^{-\frac{1}{2}\frac{(t-t_{ON})^{2}}{\sigma_{ON}^{2}}}\\\
t\in[0,~{}t_{ON}]\\\ t_{ON}\sim\mathcal{N}(2000,200)\end{gathered}$ (3)
where the burst of ON events is simulated as a Gaussian model with the mean as
the sampled ON event time window ($t_{ON}$) and standard deviation
$\sigma_{ON}$ $=$ $340$ $\mu s$ is used to determine the probability of
generating an event over time $t$. The wait time between the burst of ON
events and the OFF-event relaxation period is sampled from
$\mathcal{N}(100,50)$. After the wait time ($t_{Wait}$), the current
relaxation of OFF events is modelled using an exponential with decay
parameter $\beta=5200$ $\mu s$, up to a total OFF time ($t_{OFF}$) as per Eq. (4).
$\begin{gathered}P(OFF\ Noise\
Event;t)=\frac{1}{\beta}e^{-\frac{1}{\beta}t}\\\
t\in[t_{ON}+t_{Wait},~{}t_{OFF}]\\\ t_{Wait}\sim\mathcal{N}(100,50)\\\
t_{OFF}\sim\mathcal{N}(8000,1000)\end{gathered}$ (4)
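A minimal per-pixel sketch of this two-phase model is given below. Parameter values come from the text; the 100 $\mu s$ sampling step and the use of the (unnormalized) profiles as per-step probabilities are illustrative choices, not the released implementation:

```python
import math
import random

def pixel_noise_events(rng: random.Random):
    """Per-pixel sketch of Eqs. (3)-(4): a Gaussian-shaped burst of ON
    events followed, after a short wait, by an exponentially decaying
    tail of OFF events. All times are in microseconds."""
    t_on = rng.gauss(2000.0, 200.0)    # ON-burst window ~ N(2000, 200) us
    t_wait = rng.gauss(100.0, 50.0)    # wait before relaxation ~ N(100, 50) us
    t_off = rng.gauss(8000.0, 1000.0)  # total OFF window ~ N(8000, 1000) us
    sigma_on = 340.0                   # std of the ON-burst Gaussian, us
    beta = 5200.0                      # OFF-tail exponential decay constant, us

    events = []  # list of (time_us, polarity) pairs
    for t in range(0, int(t_on), 100):
        if rng.random() < math.exp(-0.5 * (t - t_on) ** 2 / sigma_on ** 2):
            events.append((t, +1))     # ON event
    start = int(t_on + t_wait)
    for t in range(start, start + int(t_off), 100):
        if rng.random() < math.exp(-(t - start) / beta):
            events.append((t, -1))     # OFF event
    return events
```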
The generated events are then added to the data file and sorted by timestamp
in ascending order. Finally, the file is saved to be used in testing or
evaluation. The algorithm to generate radiation-induced noise events is
detailed in Algorithm 1.
### VI-B Pattern Validation
To validate the simulation environment, noise events were generated following
the pattern described in Algorithm 1 and compared with noise events from real
data. The noise events were plotted against time to compare them with noise
from observations. Figure 15 shows a sample of visual real noise events
(Figure 15(A)) vs simulated noise events (Figure 15(B)). The model used to
generate noise was compared to the average observed single-event noise. The
model shown in Figure 16 fits the observed pattern with a $5\%$ error rate for
ON noise profiles and a $12.3\%$ error rate for OFF noise profiles.
1:  for each time step $t$ do
2:      Compute the chance of radiation-induced noise using Eq. (1)
3:      if Generate Noise Event then
4:          Decide if noise is cluster or line using Eq. (2)
5:          Choose a random pixel $[x_{0},y_{0}]$
6:          if CLUSTER NOISE then
7:              Randomly sample a set of pixels $[X,Y]$ in the neighborhood of $[x_{0},y_{0}]$
8:              for each pixel $\in[X,Y]$ in the cluster do
9:                  Generate ON events using Eq. (3)
10:                 Generate OFF events using Eq. (4)
11:             end for
12:         end if
13:         if LINE NOISE then
14:             Randomly sample the angle of the line: $\theta\in[0,2\pi)$
15:             Select a set of pixels $[X,Y]$ forming a line $L$ starting at $[x_{0},y_{0}]$ with angle $\theta$
16:             for each pixel $[X,Y]$ of the line do
17:                 Generate ON events using Eq. (3)
18:                 Generate OFF events using Eq. (4)
19:             end for
20:         end if
21:     end if
22:     Append noise events to the stream
23: end for
24: Sort events by ascending timestamps
Algorithm 1: Radiation-Induced Noise Simulation Environment (Event-RINSE)
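Putting the pieces together, the loop of Algorithm 1 can be sketched end to end as below. All names are illustrative, not the released Event-RINSE API; the per-pixel response is simplified to one ON event plus one delayed OFF event, and ON = 1, OFF = 0 polarities are assumed:

```python
import math
import random

def inject_noise(events, rate, theta_rad, width, height, duration_us,
                 rng=None):
    """End-to-end sketch of Algorithm 1: walk the time axis in coarse
    100-us steps, decide per step whether to inject (Eq. 1), pick a
    cluster or a line segment (Eq. 2), and emit a simplified per-pixel
    response (one ON event, then one delayed OFF event)."""
    rng = rng or random.Random(0)
    noise = []
    for t in range(0, duration_us, 100):
        if rng.random() >= rate * math.exp(-rate):   # Eq. (1), k = 1
            continue
        x0, y0 = rng.randrange(width), rng.randrange(height)
        if rng.random() < abs(math.cos(theta_rad)):  # Eq. (2): cluster
            pixels = [(min(width - 1, x0 + dx), min(height - 1, y0 + dy))
                      for dx in range(2) for dy in range(2)]
        else:                                        # line segment
            ang = rng.uniform(0.0, 2.0 * math.pi)
            pixels = [(int(x0 + r * math.cos(ang)) % width,
                       int(y0 + r * math.sin(ang)) % height)
                      for r in range(20)]
        for x, y in pixels:
            noise.append((x, y, t, 1))                               # ON burst
            noise.append((x, y, t + int(rng.gauss(8000, 1000)), 0))  # OFF tail
    return sorted(events + noise, key=lambda e: e[2])  # ascending timestamps
```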
### VI-C Simulation Environment Usage
The Event-RINSE simulation environment is written in Python with many
supporting parameter flags that can be used to modify the simulation model.
Standard Python data-analysis modules are required, namely SciPy [45] and
NumPy [46], while OpenCV [47] is used to display videos of the event data.
The simulator runs in a Python 3 environment with runtime flags for campaign
customization. The currently available flags and their descriptions are
listed in Table I. The input data file is the only required input. Input
files are assumed to be plain-text files in $<x>\ <y>\ <timestamp\ (\mu s)>\
<polarity>$ format.
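Assuming that four-column plain-text layout, a reader for such files might look like the following hypothetical helper (not part of the released tool):

```python
def load_events(path):
    """Read an event stream in the plain-text format described above:
    one event per line, '<x> <y> <timestamp (us)> <polarity>'."""
    events = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 4:
                continue  # skip malformed or empty lines
            x, y, t, pol = (int(p) for p in parts)
            events.append((x, y, t, pol))
    return events
```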
TABLE I: Summary of Event-RINSE runtime options.

Command Flag | Description | Datatype
---|---|---
-h/--help | Display help message and exit | N/A
-f/--input-file | The input data file path to read from | String
-o/--output-file | Custom output data file path to write to | String
-aoi/--angle-of-incidence | Angle of incidence between the sensor and simulated beam; affects prevalence of lines vs. clusters | Integer
-s/--imgSize | The size of the images from the sensor data | List of 2 integers
-vi/--view-input | View the input data file as a video | N/A
-vo/--view-output | View the output data file as a video | N/A
-i/--inject | Perform injections on input file and write to output file | N/A
-d/--delta | Time step to hold in one frame when viewing video | Float
-n/--noise | The event rate of noise with standard deviation | List of 2 integers
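The flag set in Table I maps naturally onto Python's `argparse`; the wiring below is an illustrative reconstruction, not the released code (flag names and meanings come from the table, while defaults such as the 640x480 image size are assumptions):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of a command-line interface mirroring the flags in Table I."""
    p = argparse.ArgumentParser(description="Event-RINSE noise injector")
    p.add_argument("-f", "--input-file", required=True,
                   help="input event data file path")
    p.add_argument("-o", "--output-file", help="custom output file path")
    p.add_argument("-aoi", "--angle-of-incidence", type=int, default=0,
                   help="beam angle of incidence; affects lines vs. clusters")
    p.add_argument("-s", "--imgSize", type=int, nargs=2, default=[640, 480],
                   help="sensor resolution (width height); default assumed")
    p.add_argument("-vi", "--view-input", action="store_true",
                   help="view the input data file as a video")
    p.add_argument("-vo", "--view-output", action="store_true",
                   help="view the output data file as a video")
    p.add_argument("-i", "--inject", action="store_true",
                   help="inject noise into the input and write the output")
    p.add_argument("-d", "--delta", type=float, default=1.0,
                   help="time step to hold in one frame when viewing video")
    p.add_argument("-n", "--noise", type=int, nargs=2, default=[1, 1],
                   help="noise event rate and standard deviation")
    return p

# Example invocation, parsed from an explicit argument list:
args = build_parser().parse_args(["-f", "in.txt", "-i", "-aoi", "45"])
```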
## VII CONCLUSION
The purpose of this experiment was to irradiate an event-based camera under
wide-spectrum neutrons to view and classify any SEEs that may be observed. The
results show that the main SEU that affects the event-based camera is
radiation-induced noise in the form of uniformly-distributed events across the
sensor’s field of view. We found that noise induced on single pixels resulted
in both ON and OFF events with a ratio of 3:1. An average noise event rate was
found to generate peaks with lags in the range of 8-10 $ms$ which corresponded
directly with the macro-pulse patterns of the neutron source at LANSCE [42].
This shows that the sensor acted like a naive particle detector, and was only
affected by the radiation over short timescales. OFF events were also seen to
follow the ON-event peaks with exponentially decaying event-rate profiles.
These profiles seem to suggest that the neutrons interact with the photo-diode
in individual pixels causing energy dumps leading to large photo-current,
inducing the ON events in a short time period of about 1.6 $ms$. The residual
relaxation current after the radiation passes gives rise to the OFF events at
much lower rates, but with a longer duration of up to 10 $ms$. The radiation
did not cause any permanent, long-term damage to the sensor’s photo-diodes or
the hardware circuitry. This hypothesis was further confirmed when looking at
the noise events in brighter and darker background-illumination conditions,
where ON events were significantly higher in the dark environment due to the
sensor's higher contrast sensitivity, while OFF events were not found to change
significantly across the two conditions.
Focusing on induced noise, experiments were performed to observe correlations
with the angle of incidence and the event rate through the sensor.
Surprisingly, the null hypothesis that there is no correlation between the
number of events and the angle of incidence was supported.
angle of incidence, the cross-sectional area of the sensor is smaller to the
beam’s point-of-view, making it less likely to be hit. When a neutron does
impact the sensor, however, it travels across the field leaving a long streak
of events following its trajectory. At a smaller angle of incidence, the
sensor looks larger from the perspective of the beam, making it more likely
to be hit, but the induced events appear only as dots because each neutron
traverses only a short path through the sensor.
These two effects thus cancel each other out, showing no difference in the
induced event rate.
Comparing the number of events from a pendulum signal with radiation-induced
noise shows a signal-to-noise ratio of 3.355. This ratio demonstrates the
robustness of the event-based sensor to radiation in that the noise introduced
does not significantly impact its ability to extract features of the desired
signal. This is further illustrated by the sensor’s ability to clearly observe
the sinusoidal signal against the noisy background, and by the results of the
optical flow algorithm implemented on the recorded events, which show no
significant deterioration between the flow directions computed from the events
when the radiation is introduced.
The Event-RINSE simulation environment created using the recorded noise data
can be used to simulate the effects of radiation on pre-recorded data files.
Event-RINSE was used to inject noise into the event streams recorded without
radiation and was found to correspond well with the observed profile. The
noise examples generated from the simulator matched both the average single-
event noise model and the average noise across the sensor. This fault injector
makes it possible to test different neuromorphic-sensor algorithms, such as
object tracking, under a noisy radiation environment without the need for
expensive radiation testing, and thereby to assess an algorithm’s viability in
space and to evaluate noise-suppression techniques. Future work could look at
improving the parameters and probability models for more accurate noise
generation.
Further development of event-cameras for space should include research into
their efficacy under proton and heavy-ion radiation. These experiments will
show if the sensor, as it currently stands, is capable of survival under the
harsh conditions of space. Future work could also include testing the sensor’s
capability to perform basic object tracking under neutron irradiation. The
noise shown in this experiment could pose a small problem for SSA by
interfering with signal events in object tracking. However, since the noise
was seen to be fairly constant under various cases, it could be modeled for
background analysis. Also, the induced noise did not appear to deteriorate
signal analysis enough to cause detrimental effects. With minor background
suppression, the signal-to-noise ratio could therefore be improved enough to
perform the necessary algorithms and analysis for SSA on future spacecraft.
## ACKNOWLEDGMENTS
This research was supported by SHREC industry and agency members and by the
IUCRC Program of the National Science Foundation under Grant No. CNS-1738783.
This work was performed, in part, at the Los Alamos Neutron Science Center
(LANSCE), a NNSA User Facility operated for the U.S. Department of Energy
(DOE) by Los Alamos National Laboratory (Contract 89233218CNA000001). We would
also like to thank M. Lozano, M. Ullán, S. Hidalgo, C. Fleta, and G.
Pellegrini from the ”Instituto de Microelectrónica de Barcelona” (IMB-CSIC)
for insightful discussions.
## References
* [1] C. Posch, T. Serrano-Gotarredona, B. Linares-Barranco, and T. Delbruck, “Retinomorphic event-based vision sensors: Bioinspired cameras with spiking output,” _Proceedings of the IEEE_ , vol. 102, pp. 1470–1484, 10 2014.
* [2] D. Oltrogge and S. Alfano, “The technical challenges of better space situational awareness and space traffic management,” _Journal of Space Safety Engineering_ , vol. 6, 05 2019.
* [3] S. Watts, J. Matheson, I. Hopkins-Bond, A. Holmes-Siedle, A. Mohammadzadeh, and R. Pace, “A new model for generation-recombination in silicon depletion regions after neutron irradiation,” _Nuclear Science, IEEE Transactions on_ , vol. 43, pp. 2587 – 2594, 01 1997.
* [4] M. El Mashade, M. Ashry, S. Eladl, and M. Rageh, “Experimental measurements of some optoelectronic devices before and after gamma irradiation,” vol. 3, pp. 1–12, 01 2004.
* [5] Z. Kovalyuk, V. Katerynchuk, O. Politanska, O. Sydor, and V. Khomyak, “Effect of gamma radiation on the properties of inse photodiodes,” _Technical Physics Letters_ , vol. 31, pp. 359–360, 05 2005.
* [6] M. Vujisic, K. Stankovic, N. Marjanovic, and P. Osmokrovic, “Simulated effects of proton and ion beam irradiation on titanium dioxide memristors,” _Nuclear Science, IEEE Transactions on_ , vol. 57, pp. 1798 – 1804, 09 2010\.
* [7] D. Alexander, “Transient ionizing radiation effects in devices and circuits,” _Nuclear Science, IEEE Transactions on_ , vol. 50, pp. 565 – 582, 07 2003\.
* [8] B. Iricanin and D. Gvozdic, “The analytic approach in the modelling of one-dimensional electron concentration distribution in some two-valley semiconductor electron devices,” _Journal of Physics A: Mathematical and General_ , vol. 31, p. 2997, 01 1999.
* [9] S. Nowicki, S. A. Wender, and M. Mocko, “The Los Alamos Neutron Science Center spallation neutron sources,” _Physics Procedia_ , vol. 90, pp. 374–380, 12 2017.
* [10] P. Lichtsteiner, C. Posch, and T. Delbruck, “A $128\times 128$ 120 dB 15 $\mu$s Latency Asynchronous Temporal Contrast Vision Sensor,” _IEEE Journal of Solid-State Circuits_ , vol. 43, no. 2, pp. 566–576, 2008.
* [11] M. A. Mahowald, “VLSI analogs of neuronal visual processing: A synthesis of form and function,” Ph.D. dissertation, California Institute of Technology, 1992\.
* [12] J. Lazzaro and J. Wawrzynek, “A multi-sender asynchrounous extension to the AER protocol,” in _Conference on Advanced Research in VLSI_ , 1995.
* [13] K. A. Boahen, “Point-to-point connectivity between neuromorphic chips using address-events,” _IEEE Transactions on Circuits and Systems_ , vol. 47-5, pp. 416 – 434, 2000.
* [14] C. Posch, D. Matolin, and R. Wohlgenannt, “A QVGA 143 dB dynamic range frame-free PWM image sensor with lossless pixel-level video compression and time-domain CDS,” _IEEE Journal of Solid-State Circuits_ , vol. 46, no. 1, pp. 259–275, 2011.
* [15] Y. Suh, S. Choi, M. Ito, J. Kim, Y. Lee, J. Seo, H. Jung, D. H. Yeo, S. Namgung, J. Bong, S. Yoo, S. H. Shin, D. Kwon, P. Kang, S. Kim, H. Na, K. Hwang, C. Shin, J. S. Kim, P. K. J. Park, J. Kim, H. Ryu, and Y. Park, “A 1280x960 dynamic vision sensor with a 4.95-um pixel pitch and motion artifact minimization,” in _IEEE Int. Symp. Circuits and Systems (ISCAS)_ , 2020, pp. 1–5.
* [16] R. Benosman, S.-H. Ieng, C. Clercq, C. Bartolozzi, and M. Srinivasan, “Asynchronous frameless event-based optical flow,” _Neural Networks_ , vol. 27, pp. 32–37, 2012.
* [17] J. Kennewell and B.-N. Vo, “An overview of space situational awareness,” 01 2013, pp. 1029–1036.
* [18] G. Gasparini and V. Miranda, “Space situational awareness: an overview,” 01 2010\.
* [19] J. Pelton, “A path forward to better space security: Finding new solutions to space debris, space situational awareness and space traffic management,” _Journal of Space Safety Engineering_ , vol. 6, 05 2019.
* [20] E. Joachim, L. Ludger, B. Andreas, and W. Helmut, “Radar techniques for space situational awareness,” in _2011 12th International Radar Symposium (IRS)_. IEEE, 2011, pp. 21–26.
* [21] D. K. Jordan, “Nonlinear orbit uncertainty prediction and rectification for space situational awareness,” Ph.D. dissertation, The University of Texas at Austin, 2010.
* [22] I. A. Richard and P. W. Timothy, “Decision support in space situational awareness,” _Lincoln Laboratory Journal_ , vol. 16, no. 2, p. 297, 2007.
* [23] N. Ralph, D. Maybour, Y. Bethi, and G. Cohen, “Observations and design of a new neuromorphic event-based all-sky and fixed region imaging system,” _Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, AMOS_ , vol. PP, pp. 17–20, 09 2019.
* [24] S. Afshar, A. Nicholson, A. van Schaik, and G. Cohen, “Event-based object detection and tracking for space situational awareness,” _IEEE Sensors Journal_ , vol. PP, pp. 1–1, 07 2020.
* [25] H. Akolkar, C. Meyer, X. Clady, O. Marre, C. Bartolozzi, S. Panzeri, and R. Benosman, “What can neuromorphic event-driven precise timing add to spike-based pattern recognition?” _Neural computation_ , vol. 27, pp. 561–593, 03 2015.
* [26] D. R. Valeiras, X. Lagorce, X. Clady, C. Bartolozzi, S. Ieng, and R. Benosman, “An asynchronous neuromorphic event-driven visual part-based shape tracking,” _IEEE Trans. Neural Netw. Learn. Syst. (TNNLS)_ , vol. 26, pp. 3045–3059, 2015.
* [27] X. Lagorce, C. Meyer, S. H. Ieng, D. Filliat, and R. Benosman, “Asynchronous event-based multikernel algorithm for high-speed visual features tracking,” _IEEE Trans. Neural Netw. Learn. Syst. (TNNLS)_ , vol. 26, pp. 1710–1720, 2015.
* [28] Z. Ni, S. H. Ieng, C. Posch, S. Regnier, and R. Benosman, “Visual tracking using neuromorphic asynchronous event-based cameras,” _Neural Computation_ , vol. 27, pp. 925–953, 2015.
* [29] L. A. Camuñas-Mesa, T. Serrano-Gotarredona, S. Ieng, R. Benosman, and B. Linares-Barranco, “Event-driven stereo visual tracking algorithm to solve object occlusion,” _IEEE Transactions on Neural Networks and Learning Systems_ , vol. 29, no. 9, pp. 4223–4237, 2018.
* [30] Z. Ni, A. Bolopion, J. Agnus, R. Benosman, and S. Regnier, “Asynchronous event-based visual shape tracking for stable haptic feedback in microrobotics,” _IEEE Trans. Robot._ , vol. 28, pp. 1081–1089, 2012.
* [31] M. Litzenberger, C. Posch, D. Bauer, A. N. Belbachir, P. Schon, B. Kohn, and H. Garn, “Embedded vision system for real-time object tracking using an asynchronous transient vision sensor,” _IEEE 12th Digital Signal Proc. Workshop and 4th IEEE Signal Proc. Education Workshop_ , pp. 173–178, 2006.
* [32] A. Linares-Barranco, F. Gómez-Rodríguez, V. Villanueva, L. Longinotti, and T. Delbruck, “A USB3.0 FPGA event-based filtering and tracking framework for dynamic vision sensors,” _IEEE Int. Symp. Circuits and Systems (ISCAS)_ , 2015.
* [33] X. Lagorce, S.-H. Ieng, X. Clady, M. Pfeiffer, and R. Benosman, “Spatiotemporal features for asynchronous event-based data,” _Front. Neurosci._ , vol. 9, 2015.
* [34] B. Ramesh, S. Zhang, Z.-W. Lee, Z. Gao, G. Orchard, and C. Xiang, “Long-term object tracking with a moving event camera,” _British Machine Vision Conf. (BMVC)_ , 2018.
* [35] A. Mitrokhin, C. Fermüller, C. Parameshwara, and Y. Aloimonos, “Event-based moving object detection and tracking,” _IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS)_ , 2018.
* [36] J. R. Srour and J. M. McGarrity, “Radiation effects on microelectronics in space,” _Proceedings of the IEEE_ , vol. 76, no. 11, pp. 1443–1469, 1988\.
* [37] G. Badhwar and P. O’Neill, “An improved model of galactic cosmic radiation for space exploration missions,” _International Journal of Radiation Applications and Instrumentation. Part D. Nuclear Tracks and Radiation Measurements_ , vol. 20, no. 3, pp. 403–410, 1992.
* [38] E. Kenneth, B. C. M, K. Daniel, H. Richard, S. Kent, V. Linda, and F. Matthew, “Closed-loop neutron particle effects testing on a recoverable flight control computer,” in _The 23rd Digital Avionics Systems Conference (IEEE Cat. No. 04CH37576)_ , vol. 2. IEEE, 2004, pp. 6–.
* [39] M. Bagatin and S. Gerardin, _Ionizing Radiation Effects in Electronics: From Memories to Imagers_. CRC press, 2018\.
* [40] H. B. Mann and D. R. Whitney, “On a test of whether one of two random variables is stochastically larger than the other,” _Annals of Mathematical Statistics_ , vol. 18, no. 1, pp. 50–60, 1947.
* [41] H. Akolkar, S. H. Ieng, and R. Benosman, “Real-time high speed motion prediction using fast aperture-robust event-driven visual flow,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 99, no. 1, pp. 1–12, 2020.
* [42] S. Wender and L. Dominik, “Los alamos high-energy neutron testing handbook,” SAE Technical Paper, Tech. Rep., 2020.
* [43] K. Hagiwara, K. Nakamura, K. Hikasa, M. Tanabashi, M. Aguilar-Benitez, C. Amsler, R. Barnett, O. Dahl, D. Groom, R. Miquel _et al._ , “Review of particle physics,” _Physical Review. D, Particles Fields_ , vol. 66, no. 1, pp. 010 001–010 001, 2002.
* [44] T. Serrano-Gotarredona and B. Linares-Barranco, “A 128$\times$128 1.5% contrast sensitivity 0.9% FPN 3$\mu$s latency 4mW asynchronous frame-free dynamic vision sensor using transimpedance preamplifiers,” _IEEE Journal of Solid-State Circuits_ , vol. 48, no. 3, pp. 827–838, 2013\.
* [45] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt _et al._ , “SciPy 1.0: Fundamental algorithms for scientific computing in Python,” _Nature Methods_ , vol. 17, pp. 261–272, 2020.
* [46] T. E. Oliphant, _A guide to NumPy_. Trelgol Publishing USA, 2006, vol. 1.
* [47] G. Bradski, “The OpenCV Library,” _Dr. Dobb’s Journal of Software Tools_ , 2000.
Seth Roffe earned a Bachelor of Philosophy in Physics, Astronomy, and
Mathematics from the University of Pittsburgh in 2017 and an M.S. degree in
Electrical and Computer Engineering from the University of Pittsburgh in 2020.
He is currently pursuing a PhD in Electrical and Computer Engineering. He has
been a member of the NSF Center for Space, High-performance and Resilient
Computing (SHREC) since 2018 performing research in space computing under the
direction of Dr. Alan George. His main research interests involve resilience
in sensor processing including data reliability and error classification in
novel sensors.
Alan D. George is Department Chair, R&H Mickle Endowed Chair, and Professor
of Electrical and Computer Engineering (ECE) at the University of Pittsburgh.
He is Founder and Director of the NSF Center for Space, High-performance, and
Resilient Computing (SHREC) headquartered at Pitt. SHREC is an
industry/university cooperative research center (I/UCRC) featuring some 30
academic, industry, and government partners and is considered by many as the
leading research center in its field. Dr. George’s research interests focus
upon high-performance architectures, applications, networks, services,
systems, and missions for reconfigurable, parallel, distributed, and
dependable computing, from spacecraft to supercomputers. He is a Fellow of the
IEEE for contributions in reconfigurable and high-performance computing.
Himanshu Akolkar is currently a Postdoctoral Associate at the University
of Pittsburgh. He received his M.Tech. degree from IIT, Kanpur (India) in EE
and PhD from IIT, Genoa (Italy) in Robotics, after which he had a
postdoctoral stint at Université Pierre et Marie Curie. His primary interest is to
understand the neural basis of sensory and motor control to develop an
intelligent machine.
Bernabé Linares-Barranco (M’90-S’06-F’10) received the B.S. degree in
electronic physics in June 1986 and the M.S. degree in microelectronics in
September 1987, both from the University of Seville, Sevilla, Spain. From
September 1988 until August 1991 he was a Graduate Student at the Dept. of
Electrical Engineering of Texas A&M University. He received a first Ph.D.
degree in high-frequency OTA-C oscillator design in June 1990 from the
University of Seville, Spain, and a second Ph.D. degree in analog neural
network design in December 1991 from Texas A&M University, College-Station,
USA. Since June 1991, he has been a Tenured Scientist at the ”Instituto de
Microelectrónica de Sevilla”, IMSE-CNM (CSIC and Univ. de Sevilla), Sevilla,
Spain. In January 2003 he was promoted to Tenured Researcher, and in January
2004 to Full Professor. Since February 2018, he has been the Director of the
”Instituto de Microelectrónica de Sevilla”. He has been involved with circuit
design for telecommunication circuits, VLSI emulators of biological neurons,
VLSI neural based pattern recognition systems, hearing aids, precision circuit
design for instrumentation equipment, VLSI transistor mismatch parameters
characterization, and over the past 25 years has been deeply involved with
neuromorphic spiking circuits and systems, with strong emphasis on vision and
exploiting nanoscale memristive devices for learning. He is co-founder of two
start-ups, Prophesee SA (www.prophesee.ai) and GrAI-Matter-Labs SAS
(www.graimatterlabs.ai), both on neuromorphic hardware. He is an IEEE Fellow
since January 2010. He is Chief Editor of Frontiers in Neuromorphic
Engineering since 2021.
Ryad Benosman received the M.Sc. and Ph.D. degrees in applied mathematics
and robotics from University Pierre and Marie Curie in 1994 and 1999,
respectively. He is a Full Professor at the University of Pittsburgh,
Carnegie Mellon University, and Sorbonne University. His work pioneered the
field of event-based vision. He is the cofounder of several neuromorphic
companies including Prophesee and Pixium Vision, a French prosthetics
company. Ryad Benosman has authored more than 60 publications that are
considered foundational to the field of event-based vision and holds several
patents in the areas of vision, robotics, and image sensing. In 2013, he was
awarded the national best French scientific paper prize by the publication
La Recherche for his work on neuromorphic retinas applied to retina
prosthetics.
# Metal-insulator transition and low-density phases in a strongly-interacting
two-dimensional electron system
A. A. Shashkin
Institute of Solid State Physics, Chernogolovka, Moscow District 142432, Russia
S. V. Kravchenko
Physics Department, Northeastern University, Boston, Massachusetts 02115, USA
Corresponding author: <EMAIL_ADDRESS>
###### Abstract
We review recent experimental results on the metal-insulator transition and
low-density phases in strongly-interacting, low-disordered silicon-based two-
dimensional electron systems. Special attention is given to the metallic state
in ultra-clean SiGe quantum wells and to the evidence for a flat band at the
Fermi level and a quantum electron solid.
###### keywords:
Two-dimensional electron systems , strongly correlated electrons , spin-
polarized electron system , flat bands , Wigner crystallization
###### PACS:
71.30.+h , 73.40.Qv
## 1 Introduction
The metal-insulator transition (MIT) is an exceptional testing ground for
studying strong electron-electron correlations in two dimensions (2D) in the
presence of disorder. The existence of the metallic state and the MIT in
strongly interacting 2D electron systems (contrary to the famous conclusion by
the “Gang of Four” that only an insulating state is possible in non-
interacting 2D systems [1]) was predicted in Refs. [2, 3, 4]. The phenomenon
was experimentally discovered in silicon metal-oxide-semiconductor field-
effect transistors (MOSFETs) and subsequently observed in a wide variety of
other strongly-interacting 2D systems: $p$- and $n$-SiGe heterostructures,
$p$- and $n$-GaAs/AlGaAs heterostructures, AlAs heterostructures, ZnO-related
heterostructures, etc. (for reviews, see Refs. [5, 6, 7, 8, 9, 10, 11, 12, 13,
14] and references therein). Now it is widely accepted that the driving force
behind the MIT is the strong correlations between carriers. Here we review
recent progress in the studies of the MIT and related phenomena. Section 2 is
devoted to the MIT in zero magnetic field and behavior of the effective
electron mass in ultra-high mobility SiGe/Si/SiGe quantum wells. In section 3,
we show that in an exceptionally clean two-valley system, the metallic state
survives even when the spins of the electrons become completely polarized.
Section 4 describes the scaling analysis of the temperature dependences of the
resistance in the spirit of the dynamical mean-field theory and
renormalization-group theory. In section 5, the formation of the flat band at
the Fermi level in SiGe/Si/SiGe quantum well is discussed. Finally, transport
evidence for the formation of the quantum electron solid in silicon MOSFETs at
very low electron densities is presented in section 6.
We used two sets of samples. The first set consisted of CVD-grown ultraclean
SiGe/Si/SiGe quantum wells described in detail in Refs. [15, 16]. The maximum
electron mobility, $\mu$, in these samples reaches 240 m$^2$/Vs. The
approximately 15 nm wide silicon (001) quantum well was sandwiched between
Si$_{0.8}$Ge$_{0.2}$ potential barriers. The samples were patterned into
Hall-bar shapes using standard photolithography; the distance between the
potential probes and the width were 150 $\mu$m and 50 $\mu$m, respectively.
The second set consisted of (100)-silicon MOSFETs with a peak electron
mobility of 3 m$^2$/Vs, similar to those described in Ref. [17]. These samples
had a Hall-bar geometry with a width of 50 $\mu$m and a distance between the
potential probes of 120 $\mu$m. In both sets
of samples, the electron density was controlled by applying a positive dc
voltage to the gate relative to the contacts.
Measurements were carried out in Oxford TLM-400 and Kelvinox-100 dilution
refrigerators. On the metallic side of the transition, the data were taken by
a standard four-terminal lock-in technique in a frequency range 0.5–11 Hz in
the linear regime of response. On the insulating side, the resistance was
measured with a dc technique using an electrometer with a high input impedance.
## 2 Quantum phase transition in ultrahigh mobility SiGe/Si/SiGe two-
dimensional electron system in zero magnetic field
An important characteristic that defines the MIT is the magnitude of the
resistance drop with decreasing temperature on the metallic side of the
transition. Until recently, the strongest drop of the resistance (up to a
factor of 7) was reported in clean silicon MOSFETs. At the same time, in much
less disordered GaAs-based structures, the resistance drop has not exceeded a
factor of about three. This discrepancy has been attributed primarily to the
fact that electrons in silicon-based structures have two almost degenerate
valleys in the energy spectrum, which strengthens the effects of correlations
[18, 19].
Figure 1: Temperature dependences of the resistivity in an ultralow-disorder
SiGe/Si/SiGe quantum well at different electron densities in zero magnetic
field. Curves near the MIT are marked by the color-gradated area. In the
inset, a close-up view of $\rho(T)$ displaying a drop of the resistivity by
more than a factor of 12 is shown. From Ref. [20].
Resistivity $\rho$ of an ultraclean SiGe/Si/SiGe quantum well was measured in
Ref. [20] as a function of temperature $T$ in a wide range of electron
densities $n_{\text{s}}$, spanning both sides of the zero-magnetic-field MIT.
The data are plotted in Fig. 1. At the highest temperature, the resistivity
measured at the lowest electron density exceeds that at the highest density by
less than two orders of magnitude, while at the lowest temperature, this
difference becomes more than six orders of magnitude. The metal-insulator
transition occurs at $n_{\text{c}}=(0.88\pm 0.02)\times 10^{10}$ cm$^{-2}$, according
to the criterion of the sign change of the derivative ${\rm d}\rho/{\rm d}T$
(taking account of the tilted separatrix [19]).
The critical density determined in this way is almost an order of magnitude
lower than that in the cleanest Si MOSFETs, where it is equal to $\approx
8\times 10^{10}$ cm$^{-2}$. This difference can indeed be expected for an MIT
driven by interactions. In Si MOSFETs, the value of the interaction parameter
$r_{\text{s}}$, defined as the ratio between the Coulomb and Fermi energies,
$r_{\text{s}}=g_{\text{v}}/[(\pi n_{\text{s}})^{1/2}a_{\text{B}}]$, reaches
$\approx 20$ at the critical electron density (here $g_{\text{v}}=2$ is the
valley degeneracy and $a_{\text{B}}$ is the effective Bohr radius in
semiconductor). SiGe/Si/SiGe quantum wells differ from Si MOSFETs by the
strength of the disorder potential, the thickness of the 2D layer, and the
dielectric constant equal to 7.7 in Si MOSFETs and to 12.6 in SiGe/Si/SiGe
quantum wells. Since the dielectric constant is higher in the latter system,
the interaction parameter is smaller by a factor of approximately 1.6 at the
same electron density. The effective $r_{\text{s}}$ value is further reduced
in the SiGe/Si/SiGe quantum wells due to the greater thickness of the 2D
layer, which results in a smaller form-factor [21]. We assume that the
effective mass in the SiGe barrier is $\approx 0.5\,m_{\text{e}}$ and estimate
the barrier height at $\approx 25$ meV. Evaluating the penetration of the wave
function into the barrier, we obtain the effective thickness of the 2D layer
to be $\approx 200$ Å compared to $\approx 50$ Å in Si MOSFETs. This results
in the additional suppression of $r_{\text{s}}$ in SiGe/Si/SiGe quantum wells
by a factor of about 1.3 with respect to Si MOSFETs. Thus, at the critical
electron densities, the interaction parameters are close to 20 in both 2D
systems, which is consistent with the results of Ref. [22].
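As a rough numerical illustration of this bookkeeping (a sketch, not the authors' calculation: the band mass $0.19\,m_{\text{e}}$, the dielectric constants, and the form-factor suppression of 1.3 are taken from the text, and any finite-thickness corrections beyond that single factor are ignored):

```python
import math

A_BOHR = 0.529e-10  # hydrogen Bohr radius, m


def r_s(n_s_cm2, eps, m_band=0.19, g_v=2, suppression=1.0):
    """Interaction parameter r_s = g_v / [(pi*n_s)^{1/2} a_B], with the
    effective Bohr radius a_B = 0.529 A * eps / (m_band/m_e)."""
    n_s = n_s_cm2 * 1.0e4            # convert cm^-2 to m^-2
    a_b = A_BOHR * eps / m_band      # effective Bohr radius, m
    return g_v / (math.sqrt(math.pi * n_s) * a_b) / suppression


# Si MOSFET at its critical density ~8e10 cm^-2 (dielectric constant 7.7)
rs_mosfet = r_s(8.0e10, 7.7)
# SiGe/Si/SiGe well at n_c ~ 0.88e10 cm^-2 (dielectric constant 12.6),
# including the factor-of-1.3 form-factor suppression quoted above
rs_sige = r_s(0.88e10, 12.6, suppression=1.3)
print(round(rs_mosfet), round(rs_sige))
```

Both estimates come out of order 20, in line with the conclusion that the interaction parameters at the respective critical densities are comparable in the two systems.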
Figure 2: (a) Activation energy and the square root of the threshold voltage
as a function of the electron density in zero magnetic field. Vertical error
bars correspond to the experimental uncertainty. The solid lines are linear
fits yielding $n_{\text{c}}=(0.87\pm 0.02)\times 10^{10}$ cm$^{-2}$. Top inset:
Current-voltage characteristic measured at a temperature of 30 mK in zero
magnetic field. Bottom inset: Arrhenius plots of the resistivity in the
insulating phase for two electron densities. The densities in both insets are
indicated in cm$^{-2}$. (b) Dependence of the effective mass at the Fermi level,
$m_{\text{F}}$, on the electron density. The solid line is a linear fit. The
experimental uncertainty corresponds to the data dispersion. The inset shows a
close-up view of the dependence at low electron densities, where
$n_{\text{m}}=(1.1\pm 0.1)\times 10^{10}$ cm$^{-2}$. From Ref. [20].
An alternative way to determine the critical density of the MIT is to study
the insulating side of the transition. The resistance has an activated form
there, as shown in the bottom inset to Fig. 2(a). In the main panel of Fig.
2(a), the activation energy in temperature units, $\Delta/k_{\text{B}}$, is
plotted vs. the electron density (red circles). This dependence corresponds to
the constant thermodynamic density of states near the critical point and is
expected to be linear. The activation energy extrapolates to zero at
$n_{\text{c}}=(0.87\pm 0.02)\times 10^{10}$ cm$^{-2}$, which matches, within the
experimental uncertainty, the critical electron density determined from the
temperature derivative criterion used above. Additionally, a typical low-
temperature $I$-$V$ curve on the insulating side of the transition is a step-
like function: the voltage abruptly rises at low currents and almost saturates
at higher currents, as shown in the top inset to Fig. 2(a). The magnitude of
the step is $2\,V_{\text{th}}$, where $V_{\text{th}}$ is the threshold
voltage. In Ref. [23], such a threshold behavior of the $I$-$V$
characteristics has been attributed to the breakdown of the insulating phase
that occurs when the localized electrons at the Fermi level acquire enough
energy to reach the mobility edge in an electric field $V_{\text{th}}/d$ over
a distance of the temperature-independent localization length $L$ (here $d$ is
the distance between the potential probes). The values $\Delta/k_{\text{B}}$
and $V_{\text{th}}$ are related through the localization length, which
diverges near the transition as
$L(E_{\text{F}})\propto(E_{\text{c}}-E_{\text{F}})^{-s}$ with the exponent $s$
close to unity [23] (here $E_{\text{c}}$ is the mobility edge and
$E_{\text{F}}$ is the Fermi level). Therefore, the square root of
$V_{\text{th}}$ should be a linear function of $n_{\text{s}}$ near the MIT, as
indeed seen in Fig. 2(a) (blue squares). The $V_{\text{th}}(n_{\text{s}})$
dependence extrapolates to zero at the same electron density as
$\Delta/k_{\text{B}}$. The same analysis, yielding similar results, has been
previously made in a 2D electron system in Si MOSFETs [24], thus adding
confidence that the MIT in 2D is a genuine quantum phase transition.
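The two criteria reduce to independent linear fits that should extrapolate to zero at the same density. A minimal sketch of that procedure with synthetic, purely illustrative data (the slopes and units are made up, not measured values):

```python
import numpy as np

# Synthetic illustration of the two criteria: near the MIT both Delta(n_s)
# and sqrt(V_th)(n_s) are linear and vanish at the same critical density.
n_s = np.array([0.60, 0.65, 0.70, 0.75, 0.80])  # units of 10^10 cm^-2
n_c_true = 0.87                                 # 10^10 cm^-2
delta_K = 2.0 * (n_c_true - n_s)                # activation energy, K
sqrt_vth = 0.5 * (n_c_true - n_s)               # sqrt(V_th), arb. units


def zero_crossing(x, y):
    """Linear fit y = a*x + b; return the density where the fit vanishes."""
    a, b = np.polyfit(x, y, 1)
    return -b / a


n_c_from_delta = zero_crossing(n_s, delta_K)
n_c_from_vth = zero_crossing(n_s, sqrt_vth)
print(n_c_from_delta, n_c_from_vth)  # both extrapolate to the same n_c
```

In the experiment, agreement of the two zero crossings within the error bars is what identifies the MIT as a genuine quantum phase transition.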
We now compare the results for $n_{\text{c}}$ with the behavior of the
effective electron mass $m_{\text{F}}$ at the Fermi level (the latter was
determined by the analysis of the Shubnikov-de Haas oscillations; the detailed
procedure of measuring $m_{\text{F}}$ is described in Ref. [25]). In Fig.
2(b), the product $n_{\text{s}}g_{0}m_{0}/g_{\text{F}}m_{\text{F}}$ is plotted
as a function of $n_{\text{s}}$. Here $g_{0}=2$ is the Landé $g$-factor in the
bulk silicon, $m_{0}=0.19\,m_{\text{e}}$ is the band mass, $m_{\text{e}}$ is
the free electron mass, and $g_{\text{F}}\approx g_{0}$ is the $g$-factor at
the Fermi level; for more on this, see section 5. The inverse effective mass
extrapolates linearly to zero at a density $n_{\text{m}}=(1.1\pm 0.1)\times
10^{10}$ cm$^{-2}$ that is noticeably higher than $n_{\text{c}}$. This is in
contrast to the situation in Si MOSFETs, where a similar dependence of the
inverse effective mass on the electron density has been observed, but
$n_{\text{m}}$ has always been slightly below $n_{\text{c}}$ [26, 27]. A
natural conclusion is that as the residual disorder in a 2D electron system is
decreased, the critical electron density $n_{\text{c}}$, affected by the
residual disorder, becomes lower than the density $n_{\text{m}}$, at which the
effective mass at the Fermi level tends to diverge. This indicates that these
two densities are not directly related.
In closing this section, we would like to stress that the behavior of the
electron system in SiGe/Si/SiGe quantum wells is qualitatively different from
that in Si MOSFETs. Since the critical electron density in Si MOSFETs always
lies slightly above $n_{\text{m}}$, the MIT in this system occurs in a
strongly-interacting, but conventional Fermi liquid state. In contrast, in
ultraclean SiGe/Si/SiGe quantum wells, the opposite relation
$n_{\text{c}}<n_{\text{m}}$ holds, and the MIT occurs in an unconventional
Fermi liquid state at electron density below the topological phase transition
expected at $n_{\text{s}}=n_{\text{m}}$, where the Fermi surface breaks into
several separate surfaces [28]. This should strengthen the metallic
temperature dependence of the resistance [29], which is consistent with the
observation of the low-temperature drop in the resistance by a factor of 12,
the highest value reported so far in any 2D system.
## 3 Metallic state in a strongly interacting spinless two-valley electron
system
The existence of the $B=0$ metallic state and the MIT in 2D is intimately
related to the presence of spin and valley degrees of freedom in the electron
spectrum [18, 19, 30, 31, 32]. Once the electron spins in a single-valley 2D
system become fully polarized by an external magnetic field, the system was
predicted to become insulating [30]. On the other hand, the electron spectrum
in silicon-based 2D systems contains two almost degenerate valleys, which
should promote metallicity [18, 19, 31]. Therefore, the metallic state may, in
principle, survive in these systems in the presence of spin-polarizing
magnetic fields.
In Fig. 3, we plot $\rho(T)$ dependences, measured in an ultraclean
SiGe/Si/SiGe quantum well at different electron densities in magnetic fields
$B_{\parallel}$ parallel to the 2D plane and strong enough to polarize the
electron spins. The magnetic field of the complete spin polarization, $B^{*}$,
is density-dependent and has been determined by the saturation of the
$\rho(B_{\parallel})$ dependence, which corresponds to the lifting of the spin
degeneracy [33, 34]. The values of magnetic fields used in the experiments of
Ref. [35] fell within the range between approximately 1 and 2 T. As shown in
Fig. 3, at the lowest temperatures, the resistivity has a strong metallic
temperature dependence ($d\rho/dT>0$) at electron densities above a specific
critical value, $n_{\text{c}}(B^{*})$, and an insulating behavior
($d\rho/dT<0$) at lower densities. Assuming that the extrapolation of
$\rho(T)$ to zero temperature is valid and taking into account that the
dependence separating the metallic and insulating regimes should be tilted
[19], one can identify the critical density for the MIT at
$n_{\text{c}}(B^{*})=(1.11\pm 0.05)\times 10^{10}$ cm$^{-2}$. At electron densities
just above the critical value, the $\rho(T)$ dependences on the metallic side
of the transition are non-monotonic: at temperatures exceeding a density-
dependent value $T_{\text{max}}$, the derivative $d\rho/dT$ is negative, but
it changes sign at $T<T_{\text{max}}$. The measurements in Ref. [35] were
restricted to 0.5 K, the highest temperature at which the saturation of the
$\rho(B_{\parallel})$ dependences could still be achieved. This restriction is
likely to reflect the degeneracy condition for the dilute electron system with
low Fermi energy.
Figure 3: Resistivity of an electron system in a SiGe/Si/SiGe quantum well
placed in the spin-polarizing magnetic field $B^{*}$ as a function of
temperature for different electron densities. The critical region near the MIT
is color-gradated. The magnetic fields used span the range between
approximately 1 and 2 T. The inset shows a close-up view of $\rho(T)$ for
$n_{\text{s}}=2.09\times 10^{10}$ cm$^{-2}$. From Ref. [35].
The strongest resistivity drop with decreasing temperature below 0.5 K on the
metallic side of the transition reaches almost a factor of 2 (see the inset to
Fig. 3), which is weaker compared to the factor of more than 12 drop in this
system at $B=0$ (Fig. 1). Nevertheless, the metallic temperature behavior of
spinless electrons in SiGe/Si/SiGe quantum wells remains substantial and
comparable to that observed in $p$-type GaAs/AlGaAs heterostructures in zero
magnetic field [36, 37].
Similarly to the way it was done in the previous section, one can deduce the
critical density for the MIT from two additional criteria that do not require
the extrapolation of the data to $T=0$: vanishing of the activation energy and
nonlinearity of the $I$-$V$ characteristics on the insulating side of the
transition. At $n_{\text{s}}<n_{\text{c}}(B^{*})$, in the vicinity of the
critical point, the temperature dependences of the resistivity have an
activation character (see the lower inset to Fig. 4); the density dependence
of the activation energy $\Delta$ is plotted in the main panel of Fig. 4. The
dependence is linear and extrapolates to zero at the critical density
$n_{\text{c}}(B^{*})=(1.07\pm 0.03)\times 10^{10}$ cm$^{-2}$. Within the
experimental uncertainty, this value coincides with $n_{\text{c}}(B^{*})$
determined from the temperature derivative criterion. A typical $I$-$V$
characteristic measured on the insulating side of the MIT
($n_{\text{s}}<n_{\text{c}}(B^{*})$) is shown in the upper inset to Fig. 4.
The $V(I)$ dependence obeys Ohm’s law in a very narrow interval of currents
$\left|I\right|\lesssim 1$ pA and almost saturates at higher currents. The
square root of $V_{\text{th}}$ is a linear function of $n_{\text{s}}$ and
extrapolates to zero at the same electron density as the
$\Delta(n_{\text{s}})$ dependence.
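The activation energies quoted here follow from Arrhenius fits of the kind shown in the lower insets: a linear fit of $\ln\rho$ against $1/T$, whose slope gives $\Delta/k_{\text{B}}$. A sketch with synthetic data (the values of $\Delta$ and the prefactor $\rho_0$ are illustrative, not measured):

```python
import numpy as np

# Synthetic activated resistivity rho = rho0 * exp(Delta / (k_B * T)),
# with an illustrative activation energy Delta/k_B = 0.5 K
delta_over_kB = 0.5                    # K
rho0 = 1.0e4                           # Ohm (illustrative prefactor)
T = np.linspace(0.05, 0.3, 10)         # K
rho = rho0 * np.exp(delta_over_kB / T)

# Arrhenius fit: ln(rho) = ln(rho0) + (Delta/k_B) * (1/T)
slope, intercept = np.polyfit(1.0 / T, np.log(rho), 1)
print(slope)  # recovers Delta/k_B
```

Repeating the fit at each electron density yields the $\Delta(n_{\text{s}})$ dependence plotted in the main panel of Fig. 4.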
Figure 4: Main panel: The activation energy, $\Delta$, and the square root of
the threshold voltage, $V_{\text{th}}^{1/2}$, vs. electron density. Solid
lines correspond to the best linear fits. Upper inset: a typical $I$-$V$
dependence on the insulating side of the MIT at $T=30$ mK. Lower inset:
Arrhenius plots of the temperature dependence of the resistivity for two
electron densities on the insulating side. From Ref. [35].
As mentioned in the previous section, in zero magnetic field, the independent
criteria yield the same critical electron density for the MIT in SiGe/Si/SiGe
quantum wells. This is also the case in Si MOSFETs at $B=0$ [24]. However,
fully spin-polarized (or “spinless”) electrons behave differently in these two
systems. This difference can be attributed to different intervalley scattering
rates. In Si MOSFETs, where the level of the short-range disorder is some two
orders of magnitude higher than that in the ultraclean SiGe/Si/SiGe quantum
wells, strong intervalley scattering mixes the two valleys at low
temperatures, effectively producing a single valley [38, 39, 40], and the
derivative criterion fails to yield the critical density for spinless
electrons. However, the second criterion mentioned above holds, which leaves
the existence of the MIT in this system uncertain [24]. In contrast, in the
ultraclean SiGe/Si/SiGe quantum wells, the metallic temperature dependence of
the resistivity remains strong even when the electron spins are completely
polarized, and both of the above-mentioned criteria yield the same critical
density, confirming the existence of the MIT in this 2D system of spinless
electrons that retain the valley degree of freedom. The strength of the metallic
temperature dependence of the resistivity is comparable to that in spin-
unpolarized single-valley 2D systems in the least disordered $p$-type
GaAs/AlGaAs heterostructures, which indicates that the role of distinct
valleys in the electron spectrum is equivalent to the role of spins in regard
to the existence of the metallic state and the MIT in 2D.
The critical electron density for the MIT in the spinless electron system in
SiGe/Si/SiGe quantum wells exceeds that measured in zero magnetic field by a
factor of approximately 1.2. This increase is consistent with the theoretical
calculations [41]. According to this theory, the parallel-field-induced
increase in the critical electron density for the Anderson transition in a
strongly interacting 2D electron system is due to the exchange and correlation
effects; the ratio between the critical electron densities for fully spin-
polarized and unpolarized electron systems is independent of the density of
impurities and equal to $\approx 1.33$. A similar, although somewhat stronger,
suppression of the metallic regime was previously reported in Si MOSFETs where
the localization of fully spin-polarized electrons occurs at the electron
density by a factor of about 1.4 higher compared to that in zero magnetic
field [24, 42, 43, 44].
## 4 Manifestation of strong correlations in transport in ultra-clean
SiGe/Si/SiGe quantum wells
Early theories of the metallic state in strongly interacting 2D systems [2, 3,
4] were focused on the interplay between disorder and interactions using
renormalization-group scaling theory. Later, the theory was extended to
account for the existence of multiple valleys in the electron spectrum [18,
19]. At temperatures well below the Fermi temperature, the resistivity was
predicted to grow with decreasing temperature, reach a maximum at
$T=T_{\text{max}}$, and then decrease as $T\rightarrow 0$. The maximum in
$\rho(T)$ dependence corresponds to the temperature at which the interaction
effects become strong enough to stabilize the metallic state and overcome the
quantum localization. This theoretical prediction, which is applicable only
within the so-called diffusive regime (roughly, $k_{\text{B}}T\tau/\hbar<1$,
where $\tau$ is the mean-free time), was found to be consistent with the
experimental $\rho(T)$ data in silicon MOSFETs [18, 40, 45], but only in a
narrow range of electron densities near $n_{\text{c}}$. However, strong
temperature dependence of the resistivity has been experimentally observed in
a wide range of electron densities: up to five times the critical density,
including the so-called ballistic regime (roughly,
$k_{\text{B}}T\tau/\hbar>1$), where the renormalization-group scaling theory
is not relevant. (We emphasize that the ballistic regime introduced in Ref.
[29] is not related to the well-known ballistic transport, or Knudsen regime,
where the mean free path is larger than the sample dimensions.)
A similar physical mechanism — the elastic but temperature-dependent
scattering of electrons by the Friedel oscillations — works in principle in
both diffusive and ballistic regimes [29]. The interaction corrections to the
conductivity in the corresponding limits have different forms. In the
diffusive regime, they are logarithmic-in-$T$, as follows from the
renormalization-group scaling theory for diffusion modes [2, 3, 4, 18, 19, 30,
46]. In the ballistic regime, the corrections are linear-in-$T$, according to
earlier theories of temperature-dependent screening of the impurity potential
[47, 48, 49, 50], where the leading term has the form
$\sigma(T)-\sigma(0)\propto T/T_{\text{F}}$ (note that the Fermi temperature
$T_{\text{F}}$ is in general determined by the effective electron mass $m$
renormalized by interactions). (The behaviors of the effective electron mass
at the Fermi level and the energy-averaged effective electron mass are
qualitatively different at low electron densities in the strongly correlated
2D system in SiGe/Si/SiGe quantum wells; see section 5. For the sake of
simplicity, in this section we disregard this difference.) The theory of
interaction corrections [29] and the screening theory [48] in its general
form, which takes account of the renormalization of the mass, allowed one to
extract the effective mass from the slope of the linear-in-$T$ correction to
the conductivity in the ballistic regime [26, 51]. In Ref. [26], it was shown
that the so-obtained effective mass sharply increases with decreasing electron
density and that the $m(n_{\text{s}})$ dependence practically coincides with
that obtained by alternative measurement methods [52, 53]. However, the small
corrections calculated in the ballistic regime cannot convincingly explain the
experimentally observed order-of-magnitude changes in the resistivity with
temperature. In principle, in line with the screening theories [48, 50], one
can expect the resistivity to be a function of $T/T_{\text{F}}$ with a maximum
at $T_{\text{max}}\sim T_{\text{F}}$, above which the electrons are not
degenerate. As of now, there are no accepted theoretical calculations allowing
for a quantitative comparison with experiments.
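The mass extraction from the linear-in-$T$ slope can be illustrated schematically. Assuming the leading ballistic correction takes the form $\sigma(T)=\sigma(0)\,(1-C\,T/T_{\text{F}})$ with an order-unity prefactor $C$, and $T_{\text{F}}=2\pi\hbar^{2}n_{\text{s}}/(g_{\text{s}}g_{\text{v}}m k_{\text{B}})$ for a two-valley system, a linear fit of $\sigma(T)$ returns $T_{\text{F}}$ and hence the renormalized mass (the density, $C$, and the "true" mass below are illustrative choices, not data):

```python
import numpy as np

HBAR = 1.0546e-34  # J s
K_B = 1.3807e-23   # J/K
M_E = 9.109e-31    # kg


def fermi_temperature(n_s_cm2, m_over_me, g_s=2, g_v=2):
    """T_F = 2*pi*hbar^2*n_s / (g_s*g_v*m*k_B) for a 2D electron system."""
    n_s = n_s_cm2 * 1.0e4  # cm^-2 -> m^-2
    return 2 * np.pi * HBAR**2 * n_s / (g_s * g_v * m_over_me * M_E * K_B)


# Synthetic sigma(T) generated with a "true" mass, then recovered from the slope
n_s_cm2, m_true, C = 2.0e10, 0.5, 1.0   # density; mass in m_e units; prefactor
T = np.linspace(0.05, 0.2, 8)           # K, well below T_F
T_F = fermi_temperature(n_s_cm2, m_true)
sigma0 = 1.0
sigma = sigma0 * (1 - C * T / T_F)      # linear-in-T ballistic correction

slope, intercept = np.polyfit(T, sigma, 1)
# slope = -C*sigma0/T_F, so T_F = -C*intercept/slope, and m follows
# from T_F being inversely proportional to the mass
T_F_fit = -C * intercept / slope
m_fit = fermi_temperature(n_s_cm2, 1.0) / T_F_fit
print(m_fit)  # recovers the mass used to generate the data
```

In the real analysis the prefactor is fixed by the theories of Refs. [29, 48], and the dispersion of the extracted $m(n_{\text{s}})$ reflects the experimental uncertainty.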
An alternative interpretation of the temperature dependence of the resistivity
is based on the so-called Wigner-Mott scenario, which focuses on the role of
strong electron-electron interactions. The simplest theoretical approach to
tackling the interactions non-perturbatively as the main driving force behind
the MIT is based on dynamical mean-field theory (DMFT) methods [54, 55, 56] using
the Hubbard model at half-filling. On the metallic side near the MIT, the
resistivity is predicted to initially increase as the temperature is reduced,
reach a maximum, $\rho_{\text{max}}$, at temperature $T_{\text{max}}\sim
T_{\text{F}}$, and then decrease as $T\rightarrow 0$. It has also been shown
that the resistivity change $\rho(T)-\rho(0)$, normalized by its maximum
value, is a universal function of $T/T_{\text{max}}$.
Yet another approach to treat the strongly-interacting 2D electron systems was
proposed in Refs. [57, 58, 59]. It is based on the Pomeranchuk effect expected
within a phase coexistence region between the Wigner crystal and a Fermi
liquid. The predicted $\rho(T)$ dependences are also non-monotonic: the
resistivity increases with decreasing temperature at $T\gtrsim T_{\text{F}}$
and decreases at lower temperatures. To the best of our knowledge, currently
there is no theory allowing for a quantitative comparison with experiments.
Figure 5: Non-monotonic temperature dependences of the resistivity of the 2D
electron system in SiGe/Si/SiGe quantum wells on the metallic side near the
metal-insulator transition for samples A (a) and B (b). The electron densities
are indicated in units of $10^{10}$ cm$^{-2}$. The inset in (b) shows
$\rho^{-1}(T)$ dependences for four electron densities in sample B (the
symbols are the same as in the main figure). The solid lines are linear fits
to the data. From Ref. [60].
Figure 6: Left-hand side panel: the ratio
$(\rho(T)-\rho(0))/(\rho_{\text{max}}-\rho(0))$ as a function of
$T/T_{\text{max}}$ for samples A (a) and B (b). Solid lines show the results
of DMFT in the weak-disorder limit [54, 55, 56]. Right-hand side panel: the
ratio $\rho/\rho_{\text{max}}$ as a function of the product
$\rho_{\text{max}}\ln(T/T_{\text{max}})$ for samples A (c) and B (d). Solid
lines are the result of the scaling theory [18, 19]. In both panels, the
electron densities are indicated in units of $10^{10}$ cm$^{-2}$. From Ref. [60].
In Fig. 5, temperature dependences of the resistivity in the metallic regime
are shown for two SiGe/Si/SiGe samples in the range of electron densities
where the $\rho(T)$ curves are non-monotonic: at temperatures below a density-
dependent temperature $T_{\text{max}}$, the resistivity exhibits metallic
temperature behavior ($\mathrm{d}\rho/\mathrm{d}T>0$), while above
$T_{\text{max}}$, the behavior is insulating ($\mathrm{d}\rho/\mathrm{d}T<0$).
Note that the resistivity drop at $T<T_{\text{max}}$ in these samples is
strong and may exceed an order of magnitude (more than a factor of 12 for the
lowest curve in Fig. 5(b)), which is twice as large as that in the best
previously studied 2D electron systems. In the inset to Fig. 5(b), the data
recalculated into the conductivity as a function of temperature are plotted.
We also show linear fits to the data. The observed linear temperature
dependence of conductivity is consistent with the ballistic regime not too
close to the critical density. The temperature dependence of the conductivity
allows one to conclude that the crossover region between the ballistic and
diffusive regimes corresponds to electron densities around $1.1\times
10^{10}$ cm$^{-2}$.
According to DMFT, the resistivity data should scale when plotted in a form
$\delta\rho/\delta\rho_{\text{max}}$ vs. $T/T_{\text{max}}$ (here
$\delta\rho=\rho(T)-\rho(0)$ and
$\delta\rho_{\text{max}}=\rho_{\text{max}}-\rho(0)$). The results of the
scaling analysis of the data shown in Fig. 5, performed in Ref. [60], are
presented in Fig. 6. The data scale perfectly in a wide range of
$n_{\text{s}}$ and are described well by the theory (the solid curve) in the
weak-disorder limit. We emphasize that at some electron densities, the changes
of the resistivity with temperature exceed an order of magnitude. Deviations
from the theoretical curve become pronounced at $T>T_{\text{max}}$ at electron
densities within $\sim 10$% of the critical value $n_{\text{c}}\approx
0.88\times 10^{10}$ cm$^{-2}$. The fact that in the low-temperature limit, the same
data display linear-in-$T$ corrections to the conductivity (see the inset to
Fig. 5(b)) reveals the consistency of the DMFT and both the theory of
interaction corrections [29] and the generalized screening theory [51]. We
argue that the DMFT can be applied to strongly interacting 2D electron
systems: the Friedel oscillations near the impurities in real electron
systems, even weakened by strong electron correlations [61], imply the
existence of a short-range spatial charge order that plays the role of an
effective lattice. Note that the theory also quantitatively describes weaker
non-monotonic $\rho(T)$ dependences in silicon MOSFETs and $p$-GaAs
heterostructures [55, 56].
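The DMFT-style collapse described above is a purely numerical operation on the measured curves: each $\rho(T)$ trace is shifted by $\rho(0)$, normalized by its own maximum change, and plotted against $T/T_{\text{max}}$. A sketch with synthetic curves (the non-monotonic shape below is made up for illustration; real data would be the measured traces of Fig. 5):

```python
import numpy as np


def collapse(T, rho, rho0):
    """Rescale one rho(T) curve to the DMFT form:
    x = T/T_max, y = (rho - rho(0)) / (rho_max - rho(0))."""
    i = np.argmax(rho)
    return T / T[i], (rho - rho0) / (rho[i] - rho0)


# Two synthetic non-monotonic curves sharing one universal shape f(T/T_max)
# but with different amplitudes, offsets, and peak temperatures
f = lambda x: 4 * x / (1 + x) ** 2       # illustrative shape, peaks at x = 1
T = np.arange(0.02, 1.21, 0.02)          # K
curve_a = 0.3 + 1.0 * f(T / 0.2)         # rho(0) = 0.3, T_max = 0.2 K
curve_b = 0.5 + 2.5 * f(T / 0.4)         # rho(0) = 0.5, T_max = 0.4 K

xa, ya = collapse(T, curve_a, 0.3)
xb, yb = collapse(T, curve_b, 0.5)

# After rescaling, the two curves fall on the same universal function
x_common = np.arange(0.2, 2.01, 0.1)
max_dev = np.max(np.abs(np.interp(x_common, xa, ya)
                        - np.interp(x_common, xb, yb)))
print(max_dev)  # the rescaled curves coincide
```

For the experimental data, the quality of the collapse and the agreement with the theoretical weak-disorder curve are what Fig. 6(a,b) tests.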
We also scale the $\rho(T)$ data in the spirit of the renormalization-group
scaling theory [18, 19], according to which, the normalized resistivity
$\rho/\rho_{\text{max}}$ is expected to be a universal function of the product
$\rho_{\text{max}}\ln(T/T_{\text{max}})$. The results are plotted in the
right-hand panel of Fig. 6(c,d). In both samples, only the data obtained at
$n_{\text{s}}=1.18\times 10^{10}$ cm$^{-2}$ for sample A (Fig. 6(c)) and at
$n_{\text{s}}=1.17\times 10^{10}$ cm$^{-2}$ for sample B (Fig. 6(d)) coincide well
with the theoretical curve, although some deviations occur at the lowest
temperature. Pronounced deviations from the theory arise at both higher and
lower $n_{\text{s}}$. At lower electron densities, the scaled experimental
curves become wider than the theoretical one, while at higher densities, they
become narrower. A similar shrinkage of the scaled curves with increasing
$n_{\text{s}}$ was reported earlier in Refs. [18, 45, 55], where the
resistivity data obtained in Si MOSFETs were analyzed. One should take into
account, however, that theory [18, 19] has been developed for 2D electron
systems that, on the one hand, are in the diffusive regime and, on the other
hand, their resistivities are low compared to $\pi h/e^{2}$ because at higher
values of $\rho$, higher-order corrections become important and cause
deviations from the universal scaling curve. As a result, the applicable range
of parameters becomes very narrow. Note that an attempt to scale the
resistivity data in Si MOSFETs in the spirit of the renormalization-group
scaling theory was made in Refs. [62, 63] at high temperatures above 1 K for
the moderate change in $\rho(T)$. Thus, the $\rho(T)$ data are best described
by DMFT.
A question naturally arises of how DMFT and the renormalization-group scaling
theory are connected. Although both theories predict non-monotonic temperature
dependences of the resistivity, within the scaling theory [18, 19], the
maximum in the $\rho(T)$ dependences occurs at the temperature well below
$T_{\text{F}}$, at which the temperature-dependent interactions become strong
enough to overcome the effect of the quantum localization. This theory is
relevant only in the diffusive regime. In contrast, within the DMFT, the
maximum in $\rho(T)$ dependences corresponds to the quasiparticle coherence
temperature $T^{\ast}\sim T_{\text{F}}$, below which the elastic electron-
electron scattering corresponds to coherent transport, while at higher
temperatures the inelastic electron-electron scattering becomes strong and
gives rise to a fully incoherent transport. Even though the theoretical
estimates of the positions of the maxima may be crude, the origins of the
maxima are clearly different within these two theories in view of the role of
the disorder. On the other hand, the functional forms of $\rho(T)$
dependences, including the maximum at $T_{\text{max}}\sim T_{\text{F}}$,
expected from both the screening theory in its general form and DMFT, are
similar. In particular, the linear temperature dependence of the conductivity
at $T\ll T_{\text{F}}$ following from the generalized screening theory [51]
and from the theory of the corrections to the conductivity due to the
scattering on Friedel oscillations in the ballistic regime [29] is consistent
with the prediction of the DMFT. This similarity adds confidence in both
theories and gives a hint that the underlying microscopic mechanism may be the
same, i.e., electron-impurity or impurity-mediated electron-electron
scattering in the strongly interacting case.
Finally, we mention that similar non-monotonic $\rho(T)$ dependences are
observed [64, 65] in quasi-two-dimensional organic charge-transfer salts (so-
called Mott organics), as well as in 2D transition metal dichalcogenides [66,
67, 68]. Interestingly, DMFT is capable of quantitatively describing $\rho(T)$
dependences in these systems [56, 66], which demonstrates that this theory is
applicable to various strongly correlated systems.
## 5 Indication of band flattening at the Fermi level
Flat band materials have recently attracted much attention [69, 70, 71, 72,
73]. The interest stems, in particular, from the fact that, owing to the anomalously high density of states, the flattening of the band may be important for achieving room-temperature superconductivity. The formation of a
flat band at the Fermi level was theoretically predicted [54, 74, 75] in heavy-fermion compounds, high-temperature superconducting materials, $^{3}$He, and two-dimensional
electron systems. As the strength of fermion-fermion interactions increases,
the single-particle spectrum becomes progressively flatter in the vicinity of
the Fermi energy, eventually forming a plateau. The flattening of the spectrum
is related to the increase of the effective fermion mass $m_{\text{F}}$ at the
Fermi level and the corresponding peak in the density of states.
Experimental data obtained in strongly interacting 2D electron systems can be
divided into two groups: the data describing the electron system as a whole
(for example, the magnetic field of the complete spin polarization) and the
data related solely to the electrons at the Fermi level (like the amplitude of
the Shubnikov-de Haas oscillations that yields the effective mass
$m_{\text{F}}$ and Landé $g$-factor $g_{\text{F}}$ at the Fermi level). The
results for the energy-averaged values $m$ and $g$ in the first group often
turn out to be identical to the results for $m_{\text{F}}$ and $g_{\text{F}}$.
For example, a simultaneous increase of the energy-averaged effective mass and
that at the Fermi level was reported in strongly correlated 2D systems in Si
MOSFETs [7, 8, 9, 26, 27, 76, 77]. The strongly enhanced effective mass in Si
MOSFETs was previously interpreted in favor of the formation of the Wigner
crystal or an intermediate phase (e.g., a ferromagnetic liquid). The origin
and existence of possible intermediate phases preceding the formation of the
Wigner crystal can depend on the level of disorder in the electron system.
Since in SiGe/Si/SiGe quantum wells, the electron mobility is some two orders
of magnitude higher than that in Si MOSFETs, the origin of the low-density
phases in these electron systems can be different. Note that the experimental results obtained in the least-disordered Si MOSFETs exclude a localization-driven MIT. The effects of disorder in higher-mobility SiGe/Si/SiGe quantum wells should be even weaker.
In this section, we compare the energy-averaged product $g_{\text{F}}m$ and
the product $g_{\text{F}}m_{\text{F}}$ at the Fermi level in ultra-clean
SiGe/Si/SiGe quantum wells. The magnetic field of the complete spin
polarization, $B^{*}(n_{\text{s}})$, which corresponds to a distinct “knee” on
the magnetoresistance curves [33, 34], is in good agreement with the
theoretical dependence calculated using the quantum Monte Carlo method for the
clean limit $k_{\text{F}}l\gg 1$ [78] (here $k_{\text{F}}$ is the Fermi
wavevector and $l$ is the mean free path). The product $g_{\text{F}}m$ can be obtained in the clean limit from the analysis of the measured $B^{*}(n_{\text{s}})$ dependence. (Note that in the case of a strong disorder potential, the experimental dependence $B^{*}(n_{\text{s}})$ is shifted to higher electron densities due to the presence of localized electron moments [79, 80, 81, 82, 83].) At $B_{\parallel}=B^{*}$, the Zeeman splitting is equal to the Fermi energy of the spin-polarized electron system
$g_{\text{F}}\mu_{\text{B}}B^{*}=\frac{2\pi\hbar^{2}n_{\text{s}}}{mg_{\text{v}}},$ (1)
where $\mu_{\text{B}}$ is the Bohr magneton. On the other hand, the Landé
$g$-factor $g_{\text{F}}$ and effective mass $m_{\text{F}}$ at the Fermi level
can be determined by the analysis of the Shubnikov-de Haas oscillations in
relatively weak magnetic fields. For the details on how both masses have been
extracted, see Ref. [25].
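As a numerical illustration of Eq. (1), the sketch below evaluates the full-polarization field $B^{*}$ for a given product $gm$. This is a rough estimate, not data from the experiment: the density value and the band parameters $g_0=2$, $m_0=0.19\,m_{\text{e}}$ for (100) silicon are used purely as illustrative inputs.

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
MU_B = 9.2740100783e-24   # Bohr magneton, J/T
M_E = 9.1093837015e-31    # electron mass, kg

def full_polarization_field(n_s, gm, g_v=2):
    """Eq. (1): B* such that g*mu_B*B* equals the Fermi energy of the
    spin-polarized system, 2*pi*hbar^2*n_s/(m*g_v); it depends only on
    the product gm = g*m. n_s in m^-2, gm in kg, g_v = valley degeneracy."""
    return 2.0 * math.pi * HBAR**2 * n_s / (gm * MU_B * g_v)

# Illustrative inputs: band values g0 = 2 and m0 = 0.19*m_e for (100) Si
# at n_s = 1e15 m^-2 (not fitted experimental numbers)
gm_band = 2.0 * 0.19 * M_E
print(full_polarization_field(1.0e15, gm_band))  # roughly 11 T
```

An interaction-enhanced product $gm$ lowers $B^{*}$ in inverse proportion, which is why the measured $B^{*}(n_{\text{s}})$ dependence gives direct access to the energy-averaged product.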
In Fig. 7, the main result of this section is shown. The energy-averaged product $g_{\text{F}}m$ and the product $g_{\text{F}}m_{\text{F}}$ at the Fermi level behave
similarly at high electron densities, where electron-electron interactions are
relatively weak. However, at low densities, where the interactions become
especially strong, their behavior is qualitatively different: the product
$g_{\text{F}}m_{\text{F}}$ continues to monotonically increase as the electron
density is reduced, while the product $g_{\text{F}}m$ saturates at low
$n_{\text{s}}$. We emphasize that what matters here is the qualitative
difference in the behaviors of the two sets of data, rather than a comparison
of their absolute values. Since the exchange effects in the 2D electron
systems in silicon are negligible [7, 8], one can only attribute this
difference to the different behaviors of the two effective masses. Their
qualitatively different behavior indicates the interaction-induced band
flattening at the Fermi level in this electron system. To add confidence in
our results and conclusions, in the bottom inset to Fig. 7 we show the data
for the effective mass $m_{\text{F}}$ determined by the analysis of the
temperature-dependent amplitude of the Shubnikov-de Haas oscillations, as
described in Ref. [84]. Similar density dependence of $m_{\text{F}}$ and
$g_{\text{F}}m_{\text{F}}$ allows one to exclude any possible influence of the
$g$-factor on the behavior of the product of the effective mass and the
$g$-factor, consistent with the previous results in silicon MOSFETs.
We interpret these experimental results within the concept of the fermion
condensation [28, 85, 86] that occurs at the Fermi level in a range of
momenta, unlike the condensation of bosons. When the strength of the electron-
electron interactions increases, the single-particle spectrum flattens in a
region $\Delta p$ near the Fermi momentum $p_{\text{F}}$ (see top inset to
Fig. 7). At relatively high electron densities $n_{\text{s}}>0.7\times 10^{15}$ m$^{-2}$, this effect is unimportant because the single-particle spectrum does not change noticeably in the interval $\Delta p$, and the behaviors of the energy-averaged effective mass and that at the Fermi level are practically identical. Decreasing the electron density in the range $n_{\text{s}}<0.7\times 10^{15}$ m$^{-2}$ gives rise to the flattening of the
spectrum so that the effective mass at the Fermi level,
$m_{\text{F}}=p_{\text{F}}/v_{\text{F}}$, continues to increase (here
$v_{\text{F}}$ is the Fermi velocity). In contrast, the energy-averaged effective mass does not increase, because it is not particularly sensitive to this flattening.
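The distinction between the two masses can be made concrete with a toy dispersion. This is purely illustrative: the flattening window and the velocity-suppression factor below are invented parameters, not quantities extracted from the data. Reducing the group velocity in a narrow window around $p_{\text{F}}$ strongly enhances $m_{\text{F}}=p_{\text{F}}/v_{\text{F}}$, while the average of $p/v(p)$ over the occupied states changes far less:

```python
import numpy as np

# Toy model in dimensionless units: p_F = 1, band mass m0 = 1.
# The group velocity v(p) = p/m0 is suppressed by a factor r inside a
# Gaussian window of width dp around p_F, mimicking band flattening.
def velocity(p, r=0.2, dp=0.1):
    return p * (1.0 - (1.0 - r) * np.exp(-((p - 1.0) / dp) ** 2))

def mass(p, **kw):
    # local mass m(p) = p / v(p); at p = p_F this equals m0 / r
    return p / velocity(p, **kw)

p = np.linspace(1e-3, 1.0, 200001)       # occupied states, 0 < p <= p_F
w = 2.0 * p                               # 2D phase-space weight ~ p dp
m_avg = np.sum(mass(p) * w) / np.sum(w)   # energy-averaged mass
m_F = mass(np.array([1.0]))[0]            # mass at the Fermi level

# m_F/m0 = 1/r = 5, while the averaged mass is enhanced far less:
print(m_F, m_avg)
```

The narrow flattening region dominates $m_{\text{F}}$ but contributes little weight to the average, which is the qualitative content of Fig. 7.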
Figure 7: Product of the Landé factor and effective mass as a function of
electron density determined by measurements of the field of full spin
polarization, $B^{*}$ (squares), and Shubnikov-de Haas oscillations (circles)
at $T\approx 30$ mK. The empty and filled symbols correspond to two samples.
The experimental uncertainty corresponds to the data dispersion and is about
2% for the squares and about 4% for the circles. ($g_{0}=2$ and
$m_{0}=0.19\,m_{\text{e}}$ are the values for noninteracting electrons). The
top inset schematically shows the single-particle spectrum of the electron
system in a state preceding the band flattening at the Fermi level (solid
black line). The dashed violet line corresponds to an ordinary parabolic
spectrum. The occupied electron states at $T=0$ are indicated by the shaded
area. Bottom inset: the effective mass $m_{\text{F}}$ versus electron density
determined by analysis of the temperature dependence of the amplitude of
Shubnikov-de Haas oscillations. The dashed line is a guide to the eye. From
Ref. [25].
## 6 Transport evidence for a sliding two-dimensional quantum electron solid
Experimental studies [25, 27, 87, 88] of the transport and thermodynamic
properties of strongly correlated 2D electron systems have suggested that at
low electron densities, these systems approach a phase transition to a new,
unknown state that could be a quantum Wigner crystal or a precursor [89, 90,
91, 92, 93]. (The term quantum means that the kinetic energy of 2D electrons
is determined by the Fermi energy in contrast to the classical Wigner crystal
[94], in which the kinetic energy of electrons is determined by temperature.)
The phase transition point in the least-disordered 2D electron systems was
found to be close to the critical electron density for the MIT. Although the
low-density insulating state has been extensively studied in different 2D
systems [95, 96, 97, 98, 99, 100, 101, 102, 103], no definitive conclusion has
been reached about its origin. While many authors have interpreted the observed nonlinear current-voltage ($I$-$V$) curves as a manifestation of the depinning of the Wigner crystal, alternative explanations of the breakdown of the insulating phase have been proposed based on traditional scenarios such as electron heating and subsequent thermal runaway [104], as well as Efros-Shklovskii variable-range hopping in a strong electric field or percolation [8, 105, 106].
In this section, we discuss double-threshold voltage-current characteristics
and accompanying noise recently observed in a strongly interacting 2D electron
system in silicon MOSFETs at very low electron densities [107]. In Fig. 8, a set of low-temperature voltage-current curves, measured at different electron densities in the insulating regime, is shown. The critical electron density for the MIT in this sample is $n_{\text{c}}\approx 8\times 10^{10}$ cm$^{-2}$; the corresponding interaction parameter at this density is $r_{\text{s}}\approx 20$. At electron densities below $\approx 6\times 10^{10}$ cm$^{-2}$, two threshold
voltages are observed: with increasing bias voltage, the current remains near
zero up to the first threshold voltage $V_{\text{th1}}$; then it sharply
increases until a second threshold voltage $V_{\text{th2}}$ is reached, above
which the slope of the $V$-$I$ curve is significantly reduced, and the
dependence becomes linear, although not ohmic (see also the top inset to Fig.
8). As the electron density is increased, the value of $V_{\text{th1}}$
decreases while the second threshold becomes less pronounced and eventually
disappears. No hysteresis was observed in the entire range of electron
densities studied. We emphasize that the observed two-threshold behavior is
quite distinct from that reported in the insulating state in amorphous InO
films, where the current was found to jump at the threshold voltage by as much
as five orders of magnitude and the $V$-$I$ curves exhibited hysteresis
consistent with bistability and electron overheating [108, 109]. Furthermore,
in the experiments of Ref. [107], the power dissipated near the onset
$V_{\text{th1}}$ was less than $10^{-16}$ W, which is unlikely to cause
substantial electron overheating, while the power dissipated near the
threshold voltage in Ref. [108] was more than three orders of magnitude
higher. Note also that the double-threshold $V$-$I$ characteristics cannot be explained within the percolation picture, according to which a single threshold is expected [8]. Thus, the existing traditional mechanisms [8, 104,
105, 106] cannot account for the double-threshold behavior reported in Ref.
[107].
Figure 8: $V$-$I$ curves are shown for different electron densities in the insulating state at a temperature of 60 mK. The dashed lines are fits to the data using Eq. (2). The top inset shows the $V$-$I$ curve for $n_{\text{s}}=5.20\times 10^{10}$ cm$^{-2}$ on an expanded scale; also shown are
the threshold voltages $V_{\text{th1}}$ and $V_{\text{th2}}$, the static
threshold $V_{\text{s}}=V_{\text{th2}}$, and the dynamic threshold
$V_{\text{d}}$ that is obtained by the extrapolation of the linear region of
the $V$-$I$ curve to zero current. Bottom inset: activation energy
$U_{\text{c}}$ vs. electron density. Vertical error bars represent standard
deviations in the determination of $U_{\text{c}}$ from the fits to the data
using Eq. (2). The dashed line is a linear fit. From Ref. [107].
It is important that at bias voltages between the two thresholds, the current exhibits strong temporal fluctuations comparable to its mean value. This
is shown in Fig. 9, where the current is plotted as a function of time for
density $n_{\text{s}}=5.2\times 10^{10}$ cm$^{-2}$. Above the second threshold,
however, these anomalously large fluctuations disappear, and the noise is
barely perceptible.
Figure 9: Current is plotted as a function of time for $n_{\text{s}}=5.2\times 10^{10}$ cm$^{-2}$ and $T=60$ mK at voltages $V=4.90$ mV (which lies between $V_{\text{th1}}$ and $V_{\text{th2}}$; lower curve) and $V=5.44$ mV (above $V_{\text{th2}}$). Inset: color map of the broad-band noise at $n_{\text{s}}=5.36\times 10^{10}$ cm$^{-2}$ on a $(V,T)$ plane. From Ref. [107].
The measured broad-band noise as a function of voltage is shown in Fig. 10(b)
for different temperatures at electron density $n_{\text{s}}=5.36\times 10^{10}$ cm$^{-2}$. The inset to Fig. 9 is a color map of the broad-band noise on a
$(V,T)$ plane. At the lowest temperature, a large increase in the noise is
observed between the thresholds $V_{\text{th1}}$ and $V_{\text{th2}}$. This
large noise decreases rapidly with increasing temperature in agreement with
the two-threshold behavior of the $V$-$I$ curves shown in Fig. 10(a).
Figure 10: (a) $V$-$I$ characteristics at $n_{\text{s}}=5.36\times 10^{10}$ cm$^{-2}$
for different temperatures. The dashed lines are fits to the data using Eq.
(2). (b) The broad-band noise as a function of voltage for the same electron
density and temperatures. The three upper curves are shifted vertically for
clarity. From Ref. [107].
These results have been analyzed in Ref. [107] in light of a phenomenological
theory based on pinned elastic structures. There is a conspicuous similarity
between the double-threshold $V$-$I$ dependences displayed in Fig. 8 and those
(with the voltage and current axes interchanged) known for the collective
depinning of the vortex lattice in Type-II superconductors (for a
comprehensive review, see Ref. [110]). The physics of the vortex lattice
depinning, in which the existence of two thresholds is well known, was adapted
in Ref. [107] for the case of an electron solid. In a superconductor, current flows at zero voltage, and the depinning of the vortex lattice occurs when a non-zero voltage appears. Here the situation is reciprocal: a bias voltage is applied, but at first, in the limit of zero temperature, no current flows; the depinning of the electron solid is indicated by the
appearance of a non-zero current. In the transient region between the dynamic
($V_{\text{d}}$) and static ($V_{\text{s}}$) thresholds, the collective
pinning of the solid occurs at the centers with different energies, and the
current is thermally activated:
$I\propto\exp\left[-\frac{U(V)}{k_{\text{B}}T}\right],$
where $U(V)$ is the activation energy. The static threshold
$V_{\text{s}}=V_{\text{th2}}$ signals the onset of the regime of solid motion
with friction. This corresponds to the condition
$eEL_{\text{c}}=U_{\text{c}}$, where $E$ is the electric field and
$L_{\text{c}}$ is the characteristic distance between the pinning centers with
maximal activation energy $U_{\text{c}}$. From the balance of the electric,
pinning, and friction forces in the regime of solid motion with friction, one
expects a linear $V$-$I$ dependence offset by the threshold $V_{\text{d}}$
corresponding to the pinning force: $I=\sigma_{0}(V-V_{\text{d}})$, where
$\sigma_{0}$ is a coefficient. Assuming that the activation energy for the
electron solid is equal to
$U(V)=U_{\text{c}}-eEL_{\text{c}}=U_{\text{c}}(1-V/V_{\text{s}}),$
one obtains the expression for the current
$I=\begin{cases}\sigma_{0}(V-V_{\text{d}})\exp\left[-\frac{U_{\text{c}}(1-V/V_{\text{s}})}{k_{\text{B}}T}\right]&\text{for }V_{\text{d}}<V\leq V_{\text{s}},\\ \sigma_{0}(V-V_{\text{d}})&\text{for }V>V_{\text{s}}.\end{cases}$ (2)
In Figs. 8 and 10(a), the fits to the data using Eq. (2) are shown by dashed
lines. The experimental two-threshold $V$-$I$ characteristics are described
well by Eq. (2). The extracted value of $U_{\text{c}}$ decreases approximately
linearly with electron density and tends to zero at $n_{\text{s}}\approx 6\times 10^{10}$ cm$^{-2}$ (see the bottom inset to Fig. 8). This is in contrast
with the vanishing activation energy $\Delta$ of electron-hole pairs at
$n_{\text{c}}$ obtained by measurements of the resistance in the limit of zero
$I$ and $V$ [24]; see also section 2. The vanishing $U_{\text{c}}$ is likely
to be related to the minimum number of the strong pinning centers for which
the collective pinning is still possible. The approximate constancy of the coefficient $\sigma_{0}\approx 1.6\times 10^{-7}$ Ohm$^{-1}$ indicates that the motion of the solid with friction is controlled by weak pinning centers [110].
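The two-threshold behavior of Eq. (2) is easy to reproduce numerically. In the sketch below, $\sigma_0$ is taken from the text, while the thresholds and $U_{\text{c}}$ are hypothetical values of the right order of magnitude, not the fitted parameters of Ref. [107]:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def vi_current(V, sigma0, V_d, V_s, U_c, T):
    """Eq. (2): zero current below the dynamic threshold V_d, thermally
    activated current with U(V) = U_c*(1 - V/V_s) for V_d < V <= V_s,
    and sliding with friction, I = sigma0*(V - V_d), for V > V_s. SI units."""
    V = np.asarray(V, dtype=float)
    I = np.where(V > V_d, sigma0 * (V - V_d), 0.0)
    activated = (V > V_d) & (V <= V_s)
    boltzmann = np.exp(-U_c * (1.0 - V / V_s) / (K_B * T))
    return np.where(activated, I * boltzmann, I)

# Hypothetical parameters: sigma0 from the text; thresholds of a few mV
# and U_c of order 1e-22 J chosen for illustration only
sigma0, V_d, V_s = 1.6e-7, 4.0e-3, 5.4e-3
U_c, T = 1.0e-22, 0.06

low = vi_current(4.2e-3, sigma0, V_d, V_s, U_c, T)   # deep in the pinned regime
high = vi_current(V_s, sigma0, V_d, V_s, U_c, T)     # onset of sliding
```

Since $U(V_{\text{s}})=0$, the activated branch matches $\sigma_{0}(V-V_{\text{d}})$ continuously at $V=V_{\text{s}}$, reproducing the crossover to the linear regime observed above $V_{\text{th2}}$.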
We argue that the strong noise seen in the regime of the collective pinning of
the solid between $V_{\text{d}}$ and $V_{\text{s}}$ should be suppressed in
the regime of solid motion with friction at $V>V_{\text{s}}$. Indeed, in the
regime of the collective pinning, the solid deforms locally when the depinning
occurs at some center, and then this process repeats at another center and so
on, leading to the generation of a strong noise. In contrast, in the regime of
solid motion with friction, the solid slides as a whole due to the over-
barrier motion, and, therefore, the noise is suppressed. Thus, the physics of
pinned periodic/elastic objects is relevant for the low-density state in a 2D
electron system in silicon MOSFETs. These experimental results are also
consistent with numerical simulations of the dynamics of a 2D electron system
forming a Wigner solid in the presence of charged impurities [111, 112].
Although the model proposed in Ref. [107] successfully describes the
experimental results, further comprehensive theoretical studies are needed.
## 7 Conclusions
We have reviewed recent studies of the MIT and low-density phases in strongly correlated ultraclean silicon-based structures. Although much progress has been made in the field, many challenges remain to be addressed.
## Acknowledgments
We acknowledge useful discussions with I. S. Burmistrov, V. Dobrosavljević, V.
T. Dolgopolov, A. M. Finkel’stein, D. Heiman, and M. M. Radonjić. A.A.S. was
supported by RFBR Grant No. 19-02-00196 and a Russian Government contract.
S.V.K. was supported by NSF Grant No. 1904051.
## References
* [1] E. Abrahams, P. W. Anderson, D. C. Licciardello, T. V. Ramakrishnan, Scaling theory of localization: Absence of quantum diffusion in two dimensions, Phys. Rev. Lett. 42 (1979) 673–676.
* [2] A. M. Finkel’stein, Influence of Coulomb interaction on the properties of disordered metals, Sov. Phys. JETP 57 (1983) 97–108.
* [3] A. M. Finkel’stein, Weak localization and Coulomb interaction in disordered systems, Z. Phys. B 56 (1984) 189–196.
* [4] C. Castellani, C. Di Castro, P. A. Lee, M. Ma, Interaction-driven metal-insulator transitions in disordered fermion systems, Phys. Rev. B 30 (1984) 527–543.
* [5] M. P. Sarachik, S. V. Kravchenko, Novel phenomena in dilute electron systems in two dimensions, Proc. Natl. Acad. Sci. USA 96 (1999) 5900–5902.
* [6] E. Abrahams, S. V. Kravchenko, M. P. Sarachik, Metallic behavior and related phenomena in two dimensions, Rev. Mod. Phys. 73 (2001) 251–266.
* [7] S. V. Kravchenko, M. P. Sarachik, Metal-insulator transition in two-dimensional electron systems, Rep. Prog. Phys. 67 (2004) 1–44.
* [8] A. A. Shashkin, Metal-insulator transitions and the effects of electron-electron interactions in two-dimensional electron systems, Phys.-Usp. 48 (2005) 129–149.
* [9] V. M. Pudalov, Metal-insulator transitions and related phenomena in a strongly correlated two-dimensional electron system, Phys.-Usp. 49 (2006) 203–208.
* [10] B. Spivak, S. V. Kravchenko, S. A. Kivelson, X. P. A. Gao, Transport in strongly correlated two dimensional electron fluids, Rev. Mod. Phys. 82 (2010) 1743–1766.
* [11] S. V. Kravchenko, Strongly Correlated Electrons in Two Dimensions, Pan Stanford Publishing, 2017.
* [12] A. A. Shashkin, S. V. Kravchenko, Metal-insulator transition in a strongly correlated two-dimensional electron system, in: S. V. Kravchenko (Ed.), Strongly Correlated Electrons in Two Dimensions, Pan Stanford Publishing, 2017, Ch. 2, pp. 47–64.
* [13] A. A. Shashkin, S. V. Kravchenko, Recent developments in the field of the metal-insulator transition in two dimensions, Appl. Sci. 9 (2019) 1169.
* [14] V. T. Dolgopolov, Two-dimensional system of strongly interacting electrons in silicon (100) structures, Phys. Usp. 62 (2019) 633–648.
* [15] M. Y. Melnikov, A. A. Shashkin, V. T. Dolgopolov, S.-H. Huang, C. W. Liu, S. V. Kravchenko, Ultra-high mobility two-dimensional electron gas in a SiGe/Si/SiGe quantum well, Appl. Phys. Lett. 106 (2015) 092102.
* [16] M. Y. Melnikov, V. T. Dolgopolov, A. A. Shashkin, S.-H. Huang, C. W. Liu, S. V. Kravchenko, Unusual anisotropy of in-plane field magnetoresistance in ultra-high mobility SiGe/Si/SiGe quantum wells, J. Appl. Phys. 122 (2017) 224301.
* [17] R. Heemskerk, T. M. Klapwijk, Nonlinear resistivity at the metal-insulator transition in a two-dimensional electron gas, Phys. Rev. B 58 (1998) R1754–R1757.
* [18] A. Punnoose, A. M. Finkel’stein, Dilute electron gas near the metal-insulator transition: Role of valleys in silicon inversion layers, Phys. Rev. Lett. 88 (2001) 016802.
* [19] A. Punnoose, A. M. Finkel’stein, Metal-insulator transition in disordered two-dimensional electron systems, Science 310 (2005) 289–291.
* [20] M. Y. Melnikov, A. A. Shashkin, V. T. Dolgopolov, A. Y. X. Zhu, S. V. Kravchenko, S.-H. Huang, C. W. Liu, Quantum phase transition in ultrahigh mobility SiGe/Si/SiGe two-dimensional electron system, Phys. Rev. B 99 (2019) 081106(R).
* [21] T. Ando, A. B. Fowler, F. Stern, Electronic properties of two-dimensional systems, Rev. Mod. Phys. 54 (1982) 437–672.
* [22] A. A. Shashkin, A. A. Kapustin, E. V. Deviatov, V. T. Dolgopolov, Z. D. Kvon, Strongly enhanced effective mass in dilute two-dimensional electron systems: System-independent origin, Phys. Rev. B 76 (2007) 241302.
* [23] A. A. Shashkin, V. T. Dolgopolov, G. V. Kravchenko, Insulating phases in a two-dimensional electron system of high-mobility Si MOSFET’s, Phys. Rev. B 49 (1994) 14486–14495.
* [24] A. A. Shashkin, S. V. Kravchenko, T. M. Klapwijk, Metal-insulator transition in a 2D electron gas: Equivalence of two approaches for determining the critical point, Phys. Rev. Lett. 87 (2001) 266402.
* [25] M. Y. Melnikov, A. A. Shashkin, V. T. Dolgopolov, S.-H. Huang, C. W. Liu, S. V. Kravchenko, Indication of band flattening at the Fermi level in a strongly correlated electron system, Sci. Rep. 7 (2017) 14539.
* [26] A. A. Shashkin, S. V. Kravchenko, V. T. Dolgopolov, T. M. Klapwijk, Sharp increase of the effective mass near the critical density in a metallic two-dimensional electron system, Phys. Rev. B 66 (2002) 073303.
* [27] A. Mokashi, S. Li, B. Wen, S. V. Kravchenko, A. A. Shashkin, V. T. Dolgopolov, M. P. Sarachik, Critical behavior of a strongly interacting 2D electron system, Phys. Rev. Lett. 109 (2012) 096405.
* [28] M. V. Zverev, V. A. Khodel, S. S. Pankratov, Microscopic theory of a strongly correlated two-dimensional electron gas, JETP Lett. 96 (2012) 192–202.
* [29] G. Zala, B. N. Narozhny, I. L. Aleiner, Interaction corrections at intermediate temperatures: Longitudinal conductivity and kinetic equation, Phys. Rev. B 64 (2001) 214204.
* [30] P. A. Lee, T. V. Ramakrishnan, Disordered electronic systems, Rev. Mod. Phys. 57 (1985) 287–337.
* [31] G. Fleury, X. Waintal, Many-body localization study in low-density electron gases: Do metals exist in two dimensions?, Phys. Rev. Lett. 101 (2008) 226803.
* [32] I. S. Burmistrov, N. M. Chtchelkatchev, Electronic properties in a two-dimensional disordered electron liquid: Spin-valley interplay, Phys. Rev. B 77 (2008) 195319.
* [33] T. Okamoto, K. Hosoya, S. Kawaji, A. Yagi, Spin degree of freedom in a two-dimensional electron liquid, Phys. Rev. Lett. 82 (1999) 3875–3878.
* [34] S. A. Vitkalov, H. Zheng, K. M. Mertes, M. P. Sarachik, T. M. Klapwijk, Small-angle Shubnikov-de Haas measurements in a 2D electron system: The effect of a strong in-plane magnetic field, Phys. Rev. Lett. 85 (2000) 2164–2167.
* [35] M. Y. Melnikov, A. A. Shashkin, V. T. Dolgopolov, S.-H. Huang, C. W. Liu, A. Y. X. Zhu, S. V. Kravchenko, Metallic state in a strongly interacting spinless two-valley electron system in two dimensions, Phys. Rev. B 101 (2020) 045302.
* [36] Y. Hanein, U. Meirav, D. Shahar, C. C. Li, D. C. Tsui, H. Shtrikman, The metalliclike conductivity of a two-dimensional hole system, Phys. Rev. Lett. 80 (1998) 1288–1291.
* [37] X. P. A. Gao, G. S. Boebinger, A. P. Mills, Jr., A. P. Ramirez, L. N. Pfeiffer, K. W. West, Spin-polarization-induced tenfold magnetoresistivity of highly metallic two-dimensional holes in a narrow GaAs quantum well, Phys. Rev. B 73 (2006) 241315(R).
* [38] A. Punnoose, Renormalization group study of intervalley scattering and valley splitting in a two-valley system, Phys. Rev. B 81 (2010) 035306.
* [39] A. Punnoose, Renormalization group study of a two-valley system with spin splitting, Phys. Rev. B 82 (2010) 115310.
* [40] A. Punnoose, A. M. Finkel’stein, A. Mokashi, S. V. Kravchenko, Test of the scaling theory in two dimensions in the presence of valley splitting and intervalley scattering in Si-MOSFETs, Phys. Rev. B 82 (2010) 201308(R).
* [41] V. T. Dolgopolov, A. A. Shashkin, S. V. Kravchenko, Spin polarization and exchange-correlation effects in transport properties of two-dimensional electron systems in silicon, Phys. Rev. B 96 (2017) 075307.
* [42] V. T. Dolgopolov, G. V. Kravchenko, A. A. Shashkin, S. V. Kravchenko, Properties of electron insulating phase in Si inversion layers at low-temperatures, JETP Lett. 55 (1992) 733–737.
* [43] K. Eng, X. G. Feng, D. Popović, S. Washburn, Effects of a parallel magnetic field on the metal-insulator transition in a dilute two-dimensional electron system, Phys. Rev. Lett. 88 (2002) 136402.
* [44] J. Jaroszyński, D. Popović, T. M. Klapwijk, Magnetic-field dependence of the anomalous noise behavior in a two-dimensional electron system in silicon, Phys. Rev. Lett. 92 (2004) 226403.
* [45] S. Anissimova, S. V. Kravchenko, A. Punnoose, A. M. Finkel’stein, T. M. Klapwijk, Flow diagram of the metal-insulator transition in two dimensions, Nat. Phys. 3 (2007) 707–710.
* [46] C. Castellani, C. Di Castro, P. A. Lee, Metallic phase and metal-insulator transition in two-dimensional electronic systems, Phys. Rev. B 57 (1998) R9381–R9384.
* [47] F. Stern, Calculated temperature dependence of mobility in silicon inversion layers, Phys. Rev. Lett. 44 (1980) 1469–1472.
* [48] A. Gold, V. T. Dolgopolov, Temperature dependence of the conductivity for the two-dimensional electron gas: Analytical results for low temperatures, Phys. Rev. B 33 (1986) 1076–1084.
* [49] S. Das Sarma, Theory of finite-temperature screening in a disordered two-dimensional electron gas, Phys. Rev. B 33 (1986) 5401–5405.
* [50] S. Das Sarma, E. H. Hwang, Charged impurity-scattering-limited low-temperature resistivity of low-density silicon inversion layers, Phys. Rev. Lett. 83 (1999) 164–167.
* [51] A. A. Shashkin, V. T. Dolgopolov, S. V. Kravchenko, Comment on “Interaction effects in conductivity of Si inversion layers at intermediate temperatures”, Phys. Rev. Lett. 93 (2004) 269705.
* [52] A. A. Shashkin, M. Rahimi, S. Anissimova, S. V. Kravchenko, V. T. Dolgopolov, T. M. Klapwijk, Spin-independent origin of the strongly enhanced effective mass in a dilute 2D electron system, Phys. Rev. Lett. 91 (2003) 046403.
* [53] S. Anissimova, A. Venkatesan, A. A. Shashkin, M. R. Sakr, S. V. Kravchenko, T. M. Klapwijk, Magnetization of a strongly interacting two-dimensional electron system in perpendicular magnetic fields, Phys. Rev. Lett. 96 (2006) 046409.
* [54] A. Camjayi, K. Haule, V. Dobrosavljević, G. Kotliar, Coulomb correlations and the Wigner-Mott transition, Nat. Phys. 4 (2008) 932–935.
* [55] M. M. Radonjić, D. Tanasković, V. Dobrosavljević, K. Haule, G. Kotliar, Wigner-Mott scaling of transport near the two-dimensional metal-insulator transition, Phys. Rev. B 85 (2012) 085133.
* [56] V. Dobrosavljević, D. Tanasković, Wigner-Mott quantum criticality: From 2D-MIT to $^{3}$He and Mott organics, in: S. V. Kravchenko (Ed.), Strongly Correlated Electrons in Two Dimensions, Pan Stanford Publishing Pte Ltd., 2017, Ch. 1, pp. 1–46.
* [57] B. Spivak, Phase separation in the two-dimensional electron liquid in MOSFET’s, Phys. Rev. B 67 (2003) 125205.
* [58] B. Spivak, S. A. Kivelson, Phases intermediate between a two-dimensional electron liquid and Wigner crystal, Phys. Rev. B 70 (2004) 155114.
* [59] B. Spivak, S. A. Kivelson, Transport in two dimensional electronic micro-emulsions, Ann. Phys. 321 (2006) 2071–2115.
* [60] A. A. Shashkin, M. Y. Melnikov, V. T. Dolgopolov, M. M. Radonjić, V. Dobrosavljević, S.-H. Huang, C. W. Liu, A. Y. X. Zhu, S. V. Kravchenko, Manifestation of strong correlations in transport in ultraclean SiGe/Si/SiGe quantum wells, Phys. Rev. B 102 (2020) 081119(R).
* [61] E. C. Andrade, E. Miranda, V. Dobrosavljević, Quantum ripples in strongly correlated metals, Phys. Rev. Lett. 104 (2010) 236401.
* [62] D. A. Knyazev, O. E. Omel’yanovskii, V. M. Pudalov, I. S. Burmistrov, Critical behavior of transport and magnetotransport in a 2D electron system in Si near the metal-insulator transition, JETP Lett. 84 (2007) 662–666.
* [63] D. A. Knyazev, O. E. Omel’yanovskii, V. M. Pudalov, I. S. Burmistrov, Metal-insulator transition in two dimensions: Experimental test of the two-parameter scaling, Phys. Rev. Lett. 100 (2008) 046405.
* [64] P. Limelette, P. Wzietek, S. Florens, A. Georges, T. A. Costi, C. Pasquier, D. Jérome, C. Mézière, P. Batail, Mott transition and transport crossovers in the organic compound $\kappa$-(BEDT-TTF)$_2$Cu[N(CN)$_2$]Cl, Phys. Rev. Lett. 91 (2003) 016401.
* [65] Y. Kurosaki, Y. Shimizu, K. Miyagawa, K. Kanoda, G. Saito, Mott transition from a spin liquid to a Fermi liquid in the spin-frustrated organic conductor $\kappa$-(ET)$_2$Cu$_2$(CN)$_3$, Phys. Rev. Lett. 95 (2005) 177001.
* [66] B. H. Moon, G. H. Han, M. M. Radonjić, H. Ji, V. Dobrosavljević, Quantum critical scaling for finite-temperature Mott-like metal-insulator crossover in few-layered MoS2, Phys. Rev. B 102 (2020) 245424.
* [67] B. H. Moon, Metal-insulator transition in two-dimensional transition metal dichalcogenides, Emergent Materials (Mar 2021).
URL https://doi.org/10.1007/s42247-021-00202-9
* [68] T. Li, S. Jiang, L. Li, Y. Zhang, K. Kang, J. Zhu, K. Watanabe, T. Taniguchi, D. Chowdhury, L. Fu, J. Shan, K. F. Mak, Continuous Mott transition in semiconductor moiré superlattices (2021). arXiv:2103.09779.
* [69] T. T. Heikkilä, N. B. Kopnin, G. E. Volovik, Flat bands in topological media, JETP Lett. 94 (2011) 233–239.
* [70] K. Bennemann, J. Ketterson (Eds.), Novel Superfluids, Oxford University Press, 2013.
* [71] S. Peotta, P. Törmä, Superfluidity in topologically nontrivial flat bands, Nat. Commun. 6 (2015) 8944.
* [72] G. E. Volovik, From standard model of particle physics to room-temperature superconductivity, Phys. Scr. T164 (2015) 014014.
* [73] V. A. Khodel, J. W. Clark, M. V. Zverev, Metamorphoses of electron systems hosting a fermion condensate, JETP Lett. 111 (2020) 96–103.
* [74] M. Amusia, K. Popov, V. Shaginyan, W. Stefanowicz, Theory of Heavy-Fermion Compounds, Springer International Publishing, 2015.
* [75] D. Yudin, D. Hirschmeier, H. Hafermann, O. Eriksson, A. I. Lichtenstein, M. I. Katsnelson, Fermi condensation near van Hove singularities within the Hubbard model on the triangular lattice, Phys. Rev. Lett. 112 (2014) 070403\.
* [76] V. T. Dolgopolov, Two-dimensional electrons in (100)-oriented silicon field-effect structures in the region of low concentrations and high mobilities, JETP Lett. 101 (2015) 282–287.
* [77] A. Y. Kuntsevich, Y. V. Tupikov, V. M. Pudalov, I. S. Burmistrov, Strongly correlated two-dimensional plasma explored from entropy measurements, Nat. Commun. 6 (2015) 7298.
* [78] G. Fleury, X. Waintal, Energy scale behind the metallic behaviors in low-density Si MOSFETs, Phys. Rev. B 81 (2010) 165117.
* [79] V. T. Dolgopolov, A. Gold, Comment on “Weak anisotropy and disorder dependence of the in-plane magnetoresistance in high-mobility (100) Si-inversion layers”, Phys. Rev. Lett. 89 (2002) 129701.
* [80] A. Gold, V. T. Dolgopolov, On the role of disorder in transport and magnetic properties of the two-dimensional electron gas, J. Phys. Condens. Matter 14 (2002) 7091–7096.
* [81] O. Prus, Y. Yaish, M. Reznikov, U. Sivan, V. Pudalov, Thermodynamic spin magnetization of strongly correlated two-dimensional electrons in a silicon inversion layer, Phys. Rev. B 67 (2003) 205407.
* [82] N. Teneh, A. Y. Kuntsevich, V. M. Pudalov, M. Reznikov, Spin-droplet state of an interacting 2D electron system, Phys. Rev. Lett. 109 (2012) 226403.
* [83] V. M. Pudalov, A. Y. Kuntsevich, M. E. Gershenson, I. S. Burmistrov, M. Reznikov, Probing spin susceptibility of a correlated two-dimensional electron system by transport and magnetization measurements, Phys. Rev. B 98 (2018) 155109.
* [84] M. Y. Melnikov, A. A. Shashkin, V. T. Dolgopolov, S. V. Kravchenko, S.-H. Huang, C. W. Liu, Effective electron mass in high-mobility SiGe/Si/SiGe quantum wells, JETP Lett. 100 (2014) 114–119.
* [85] V. A. Khodel, V. R. Shaginyan, Superfluidity in system with fermion condensate, JETP Lett. 51 (1990) 553–555.
* [86] P. Nozières, Properties of Fermi liquids with a finite range interaction, J. Phys. I (France) 2 (1992) 443–458.
* [87] A. A. Shashkin, S. V. Kravchenko, V. T. Dolgopolov, T. M. Klapwijk, Indication of the ferromagnetic instability in a dilute two-dimensional electron system, Phys. Rev. Lett. 87 (2001) 086801.
* [88] S. A. Vitkalov, H. Zheng, K. M. Mertes, M. P. Sarachik, T. M. Klapwijk, Scaling of the magnetoconductivity of silicon mosfets: Evidence for a quantum phase transition in two dimensions, Phys. Rev. Lett. 87 (2001) 086401.
* [89] E. Wigner, On the interaction of electrons in metals, Phys. Rev. 46 (1934) 1002–1011.
* [90] A. V. Chaplik, Possible crystallization of charge carriers in low-density inversion layers, Sov. Phys. JETP 35 (1972) 395–398.
* [91] B. Tanatar, D. M. Ceperley, Ground state of the two-dimensional electron gas, Phys. Rev. B 39 (1989) 5005–5016.
* [92] C. Attaccalite, S. Moroni, P. Gori-Giorgi, G. B. Bachelet, Correlation energy and spin polarization in the 2D electron gas, Phys. Rev. Lett. 88 (2002) 256601\.
* [93] V. Kagalovsky, S. V. Kravchenko, D. Nemirovsky, Hartree-Fock description of a Wigner crystal in two dimensions, Physica E 119 (2020) 114016.
* [94] C. C. Grimes, G. Adams, Evidence for a liquid-to-crystal phase transition in a classical, two-dimensional sheet of electrons, Phys. Rev. Lett. 42 (1979) 795–798.
* [95] E. Y. Andrei, G. Deville, D. C. Glattli, F. I. B. Williams, E. Paris, B. Etienne, Observation of a magnetically induced Wigner solid, Phys. Rev. Lett. 60 (1988) 2765–2768.
* [96] V. J. Goldman, M. Santos, M. Shayegan, J. E. Cunningham, Evidence for two-dimensional quantum Wigner crystal, Phys. Rev. Lett. 65 (1990) 2189–2192.
* [97] F. I. B. Williams, P. A. Wright, R. G. Clark, E. Y. Andrei, G. Deville, D. C. Glattli, C. Dorin, C. T. Foxon, J. J. Harris, Conduction threshold and pinning frequency of magnetically induced Wigner solid, Phys. Rev. Lett. 66 (1991) 3285–3288.
* [98] M. B. Santos, Y. W. Suen, M. Shayegan, Y. P. Li, L. W. Engel, D. C. Tsui, Observation of a reentrant insulating phase near the 1/3 fractional quantum Hall liquid in a two-dimensional hole system, Phys. Rev. Lett. 68 (1992) 1188–1191.
* [99] V. M. Pudalov, M. D’Iorio, S. V. Kravchenko, J. W. Campbell, Zero-magnetic-field collective insulator phase in a dilute 2D electron system, Phys. Rev. Lett. 70 (1993) 1866–1869.
* [100] J. Yoon, C. C. Li, D. Shahar, D. C. Tsui, M. Shayegan, Wigner crystallization and metal-insulator transition of two-dimensional holes in GaAs at $\mathit{B}\phantom{\rule{0.0pt}{0.0pt}}=\phantom{\rule{0.0pt}{0.0pt}}0$, Phys. Rev. Lett. 82 (1999) 1744–1747.
* [101] R. Chitra, T. Giamarchi, Zero field Wigner crystal, Eur. Phys. J. B 44 (2005) 455–467.
* [102] T. Knighton, Z. Wu, J. Huang, A. Serafin, J. S. Xia, L. N. Pfeiffer, K. W. West, Evidence of two-stage melting of Wigner solids, Phys. Rev. B 97 (2018) 085135.
* [103] J. Huang, L. Pfeiffer, K. West, Metal-to-insulator transitions in strongly correlated regime, Appl. Sci. 9 (2018) 80.
* [104] H. W. Jiang, H. L. Stormer, D. C. Tsui, L. N. Pfeiffer, K. W. West, Magnetotransport studies of the insulating phase around $\nu=1/5$ Landau-level filling, Phys. Rev. B 44 (1991) 8107–8114.
* [105] S. Marianer, B. I. Shklovskii, Effective temperature of hopping electrons in a strong electric field, Phys. Rev. B 46 (1992) 13100–13103.
* [106] V. T. Dolgopolov, G. V. Kravchenko, A. A. Shashkin, S. V. Kravchenko, Metal-insulator transition in Si inversion layers in the extreme quantum limit, Phys. Rev. B 46 (1992) 13303–13308.
* [107] P. Brussarski, S. Li, S. V. Kravchenko, A. A. Shashkin, M. P. Sarachik, Transport evidence for a sliding two-dimensional quantum electron solid, Nat. Commun. 9 (2018) 3803.
* [108] M. Ovadia, B. Sacepe, D. Shahar, Electron-phonon decoupling in disordered insulators, Phys. Rev. Lett. 102 (2009) 176802.
* [109] B. L. Altshuler, V. E. Kravtsov, I. V. Lerner, I. L. Aleiner, Jumps in current-voltage characteristics in disordered films, Phys. Rev. Lett. 102 (2009) 176803.
* [110] G. Blatter, M. Y. Feigel’man, Y. B. Geshkenbein, A. I. Larkin, V. M. Vinokur, Vortices in high-temperature superconductors, Rev. Mod. Phys. 66 (1994) 1125–1388.
* [111] C. Reichhardt, C. J. Olson, N. Grønbech-Jensen, F. Nori, Moving Wigner glasses and smectics: Dynamics of disordered Wigner crystals, Phys. Rev. Lett. 86 (2001) 4354–4357.
* [112] C. Reichhardt, C. J. Olson Reichhardt, Depinning and nonequilibrium dynamic phases of particle assemblies driven over random and ordered substrates: A review, Rep. Prog. Phys. 80 (2017) 026501.
|
# How Secure are Two-Way Ping-Pong and LM05 QKD Protocols under a Man-in-the-
Middle Attack?
Mladen Pavičić<EMAIL_ADDRESS>Center of Excellence for Advanced Materials
and Sensors, Research Unit Photonics and Quantum Optics, Institute Ruder
Bošković, 10000 Zagreb, Croatia;
Nanooptics, Department of Physics, Humboldt-Universität zu Berlin, 12489
Berlin, Germany
###### Abstract
We consider a man-in-the-middle attack on the two-way quantum key distribution
ping-pong and LM05 protocols in which an eavesdropper copies all messages in
the message mode while remaining undetectable in that mode. Under the attack
there is therefore no disturbance in the message mode, the mutual information
between the sender and the receiver is always constant and equal to one, and
all messages copied by the eavesdropper are genuine. The attack can only be
detected in the control mode, but the level of detection at which the protocol
should be aborted is not defined. We examine the steps of the protocol to
evaluate its security and find that the protocol should be redesigned. We also
compare it with a one-way asymmetric BB84-like protocol in which one basis
serves as the message mode and the other as the control mode, and which does
define the level of detection at which the protocol should be aborted.
quantum cryptography; quantum key distribution; two-way communication
###### pacs:
03.67.Dd, 03.67.Ac, 42.50.Ex
## I Introduction
Quantum cryptography, in particular quantum key distribution (QKD) protocols,
offers us, in contrast to the classical one, provably unbreakable
communication based on the quantum physical properties of the information
carriers Elliott et al. (2005); Sasaki, M. et al. (2011); Peev, M. et al.
(2009). So far, implementations were mostly based on the one-way BB84 protocol
Bennett and Brassard (1984) which is unconditionally secure provided the
quantum bit error rate (QBER) is low enough. However, the QBER in BB84-like
protocols might be high, and since we cannot discriminate eavesdropper's
(Eve's) bit flips from bit flips caused by noise in the line, the requirement
of a QBER low enough for processing the bits is often difficult to satisfy.
E.g., 4-state BB84 with a disturbance ($D$) of more than 0.11 Scarani et al.
(2009) and 6-state BB84 Bruß (1998) with more than 0.126 Scarani et al. (2009)
have to be aborted ($D$ is defined as the amount of polarization flips caused
by Eve, the maximum being 0.5). $D$ includes the inherent QBER as well as a
possible Eve in the line. If Eve were the only cause of $D$, the mutual
information between the sender (Alice) and Eve ($I_{AE}$) would surpass that
between Alice and the receiver (Bob) ($I_{AB}$): $I_{AE}>I_{AB}$ for
$D>0.11$ and $D>0.126$, respectively.
Protocols using two-way quantum communication have also been proposed. Since
they are less efficient versions of BB84 protocols, they offer no meaningful
advantage. Here we show that the security of some two-way protocols is
vulnerable under a man-in-the-middle (MITM) attack: in particular, the
entangled-photon two-way protocols based on two Boström and Felbinger (2002)
(also called the ping-pong (pp) protocol) or four ($\Psi^{\mp},\Phi^{\mp}$)
Cai and Li (2004) Bell states, on the one hand, and the single-photon
deterministic Lucamarini-Mancini (LM05) protocol Lucamarini and Mancini
(2005); Beaudry et al. (2013), on the other. Several varieties, modifications,
and generalisations of the latter protocol are given in Henao and Serra
(2015); Khir et al. (2012); Shaari and Mancini (2015); Pirandola et al.
(2008). Two varieties were implemented in Cerè et al. (2006) and Kumar et al.
(2008). The
former pp protocol was implemented by Ostermeyer and Walenta in 2008
Ostermeyer and Walenta (2008) while the protocol with four Bell states cannot
be implemented with linear optics elements Lütkenhaus et al. (1999); Vaidman
and Yoran (1999). In the aforementioned references various security
estimations have been obtained.
In Lu et al. (2011), Lu, Fung, Ma, and Cai provide a security proof of the
LM05 deterministic QKD protocol for the kind of attack proposed in Lucamarini
and Mancini (2005); Beaudry et al. (2013). Nevertheless, they claim it to be a
proof of the unconditional security of LM05. In Han et al. (2007), Han, Yin,
Li, Chen, Wang, Guo, and Han provide a security proof for a modified pp
protocol and prove its security against collective attacks in a noisy and
lossy channel.
All previous elaborations of the security of two-way protocols assume that Eve
attacks each signal twice, once on the way from Bob to Alice, and later on its
way back from Alice to Bob, and that in doing so she disturbs the signal in
the message mode.
However, there is another attack, which we elaborate on in this paper, in
which an undetectable Eve encodes Bob's signals by mimicking Alice's encoding
of a decoy signal sent to her. We consider the two-way deterministic QKD
protocols under a MITM attack in which Eve, undetectable in the message mode
(MM), can acquire all the messages, meaning that there is no disturbance in
the MM ($D_{\rm MM}$) at all. In the control mode (CM) there is a disturbance
($D_{\rm CM}$), but there is no critical $D$ at which Alice and Bob should
abort the protocol. The only way to delete bits of the raw key snatched by Eve
is via privacy amplification, and for disturbances close to $D_{\rm CM}=0.5$,
when Eve is in the line all the time, it seems impossible to distinguish
whether Eve has or has not obtained the whole key. To verify that conjecture,
we prove that the security proof carried out in Lu et al. (2011) does not
cover a MITM attack and therefore cannot be called "unconditional."
We also compare two-way protocols under a MITM attack with a recent one-way
asymmetric BB84-like protocol Bunandar et al. (2018), in which the
$\\{|0\rangle,|1\rangle\\}$ basis serves as the MM and the $\\{|\pm\rangle\\}$
basis as the CM. The latter protocol resolves the problem of
absence of inherent critical disturbance by introducing a predetermined
threshold disturbance after which Alice and Bob abort the protocol. This makes
the protocol conditionally secure and we propose a similar solution for the
two-way protocols.
In Sec. II we present the protocols and MITM attacks on them. In Sec. III we
discuss the security of two-way protocols and analyze their proof of
unconditional security; we also compare properties of two-way protocols with
those of the standard BB84 and the aforementioned asymmetrical BB84-like one.
In Sec. IV we present some concluding points and a summary of the results
achieved in the paper.
## II Protocols and Attacks on Them
The ping-pong (pp) protocol is based on two Bell states Boström and Felbinger
(2002). Bob prepares entangled photons in one of the Bell states, sends one of
the photons to Alice, and keeps the other one in a quantum memory (qm). Alice
either returns the photon as is or acts on it so as to put both photons into
the other Bell state. Bob combines the photon he receives from Alice with the
one he kept in qm at a beam splitter (BS) to decode Alice's messages. The
messages are said to be sent in the message mode (MM). There is also a control
mode (CM) in which Alice measures Bob's photon and announces her outcomes over
a public channel.
The Bell basis used in the pp protocol consists of the two Bell states
$\displaystyle|\Psi^{\mp}\rangle=\frac{1}{\sqrt{2}}(|H\rangle_{1}|V\rangle_{2}\mp|V\rangle_{1}|H\rangle_{2}),$
(1)
where $|H\rangle_{i}$ ($|V\rangle_{i}$), $i=1,2$, represent horizontal
(vertical) polarized photon states.
Photon pairs in the state $|\Psi^{-}\rangle$ are generated by a
down-conversion entangled photon source. To send the $|\Psi^{-}\rangle$ state
Alice just returns her photon to Bob. To send $|\Psi^{+}\rangle$ she puts a
half-wave plate (${\rm HWP}(0^{\circ})$) in the path of her photon. The HWP
changes the sign of the vertical polarization.
At Bob’s BS the photons in state $|\Psi^{-}\rangle$ will split and those in
state $|\Psi^{+}\rangle$ will bunch together.
Eavesdropper Eve carries out a MITM attack, designed by Nguyen Nguyen (2004),
as follows. She puts Bob's photon in a qm (delays it in a fiber coil) and
makes use of a copy of Bob's device to send Alice a photon from a
down-converted pair in the state $|\Psi^{-}\rangle$, as shown in Fig. 1(a).
When Eve receives the photon from Alice she combines it with the other photon
from the pair and determines the Bell state in the same way Bob would. She
uses this result to generate the same Bell state for Bob by putting the
appropriate HWPs in the path of Bob's photon.
Figure 1: (a) Nguyen’s attack Nguyen (2004) by which Eve is able to
deterministically (and undetectably in the MM) copy every one of the Bell-
state messages in the pp protocol Boström and Felbinger (2002); (b)
Lucamarini’s attack (Lucamarini, 2003, p. 61, Fig. 5.5) by which Eve is able
to deterministically (and undetectably in the MM) copy every message in the
LM05 protocol; (c) Common schematics of both attacks; The green dashed line
shows the path of photons when Eve is not in the line.
Thus, Eve is able to copy every single message in the MM undetectably and
therefore sending messages in the MM is equivalent to sending plain text
“secured” by the CM.
In the LM05 protocol Lucamarini (2003); Lucamarini and Mancini (2005) Bob
prepares a qubit in one of the four states $|0\rangle$, $|1\rangle$ (the Pauli
$\boldsymbol{Z}$ eigenstates), $|+\rangle$, or $|-\rangle$ (Pauli
$\boldsymbol{X}$ eigenstates) and sends it to his counterpart Alice. In the MM
she modifies the qubit state by applying either $\boldsymbol{I}$, which leaves
the qubit unchanged and encodes the logical 0, or by applying
$i{\boldsymbol{Y}}={\boldsymbol{Z}}{\boldsymbol{X}}$, which flips the qubit
state and encodes the logical 1. ($i{\boldsymbol{Y}}|0\rangle=-|1\rangle$,
$i{\boldsymbol{Y}}|1\rangle=|0\rangle$,
$i{\boldsymbol{Y}}|+\rangle=|-\rangle$,
$i{\boldsymbol{Y}}|-\rangle=-|+\rangle$.) Alice now sends the qubit back to
Bob who measures it in the same basis in which he prepared it and
deterministically infers Alice’s operations, i.e., her messages, without basis
reconciliation procedure.
Eavesdropper Eve carries out a MITM attack, designed by Lucamarini
(Lucamarini, 2003, p. 61, Fig. 5.5), as follows. As shown in Fig. 1(b), Eve
delays Bob's photon (qubit) in a fiber spool (a quantum memory) and instead
sends Alice her own decoy photon in one of the four states $|0\rangle$,
$|1\rangle$, $|+\rangle$, or $|-\rangle$. Alice encodes her message via
$\boldsymbol{I}$ or $i{\boldsymbol{Y}}$ and sends the photon back. Eve
measures it in the same basis in which she prepared it, reads off the message,
encodes Bob's delayed photon via ${\boldsymbol{I}}$ if she read 0, or via
$i{\boldsymbol{Y}}$ if she read 1, and sends it back to Bob.
Eve never learns the states in which Bob sent his photons, but that is
irrelevant in the MM since only flipping or not flipping the polarization
encodes the messages. Alice also need not know Bob's states Lucamarini and
Mancini (2005).
Eve could only be revealed in CM in which Alice carries out a projective
measurement of the qubit along a basis randomly chosen between
${\boldsymbol{Z}}$ and ${\boldsymbol{X}}$, prepares a new qubit in the same
state as the outcome of the measurement, sends it back to Bob, and reveals
this over a classical public channel Lucamarini and Mancini (2005).
## III Security of the Protocols
To reach the main point of the paper, let us first discuss the one-way
asymmetric (aBB84) and symmetric (sBB84, i.e., standard BB84) protocols.
A recent definition of aBB84 Tomamichel1 et al. (2012) reads: “Alice [asks
her] entanglement-based source to [randomly] prepare quantum states in two
bases, ${\mathbb{X}}=\\{|0\rangle,|1\rangle\\}$ and
${\mathbb{Z}}=\\{(|0\rangle+|1\rangle)/\sqrt{2},(|0\rangle-|1\rangle)/\sqrt{2}\\}$…
Bob [randomly] measure[s] quantum systems in [these two] bases… The protocol
is asymmetric [meaning that] the number of bits measured in the two bases ($n$
bits in the ${\mathbb{X}}$ basis and $k$ bits in the ${\mathbb{Z}}$ basis) are
not necessarily equal Lo et al. (2005)… Sifting: Alice and Bob broadcast their
basis choices over the classical channel… Error correction: (EC) A
reconciliation scheme that broadcasts [chosen] bits of classical error
correction data is applied. Bob compute[s] an estimate $\hat{\mathbf{Y}}$ of
the raw key string $\mathbf{Y}$. Alice computes universal2 hash function of
$\mathbf{Y}$ [and] sends [it] to Bob. If the hash[es] of $\hat{\mathbf{Y}}$
and $\mathbf{Y}$ disagree, the protocol aborts. Privacy amplification: (PA)
Alice extracts $l$ bits of secret key $\mathbf{S}$ from $\mathbf{Y}$ using a
random universal2 hash function. The choice of function is communicated to
Bob, who uses it to calculate $\mathbf{S}$.” (Tomamichel1 et al., 2012, p. 3)
There are other similar definitions of aBB84 in the literature Lo et al.
(2005); Scarani and Renner (2008); Cai and Scarani (2009); Zhou et al. (2014);
Mizutani et al. (2015).
When $n=k$, aBB84 turns into sBB84, i.e., it becomes identical to the original
BB84. In what follows, when not explicitly stated otherwise, under BB84 we
mean sBB84.
What is essential for the standard aBB84 and sBB84 is that Eve cannot avoid
introducing a disturbance ($D$) when Alice sends messages in the
${\mathbb{X}}$ and ${\mathbb{Z}}$ bases. E.g., Alice sends $|1\rangle$ in the
${\mathbb{X}}$ basis, Eve reads it as $(|0\rangle+|1\rangle)/\sqrt{2}$ in the
${\mathbb{Z}}$ basis and resends it to Bob who, say in the ${\mathbb{X}}$
basis, reads it either as $|0\rangle$ or as $|1\rangle$. If the former, it
will be discarded in the EC procedure; if the latter, it will be accepted as a
valid message. As for Eve, no public information can enable her to find out
what Bob actually measured, hence she loses information.
However, Alice and Bob lose their information too, and as shown in Fig. 2(a)
in a BB84 protocol, when the level of disturbance approaches $D_{\rm MM}=0.11$
(the MM is the only mode of the standard BB84 protocol) the mutual information
between Alice and Eve $I_{AE}$ approaches the mutual information between Alice
and Bob $I_{AB}$ and they have to abort the protocol. Note that
$I_{AB}=1+D\log_{2}D+(1-D)\log_{2}(1-D)$ and
$I_{AE}=-D\log_{2}D-(1-D)\log_{2}(1-D)$ Fuchs et al. (1997) and that, ideally,
for $D_{\rm MM}<0.11$, EC can eliminate all errors induced by Eve and that PA
can remove all key bits Eve might have collected, no matter how close to 0.11
$D_{\rm MM}$ is. This is so because both $I_{AB}$ and $I_{AE}$ are functions
of $D_{\rm MM}$, i.e., functions of the disturbance in the message mode for
which the mutual information in the very same message mode is calculated. The
closer $D_{\rm MM}$ is to 0.11, the more difficult is for Alice and Bob, after
PA, to extract the secure key from the raw key, since the former becomes
smaller and smaller. “The efficiency of privacy amplification rapidly
decreases when [$D_{\rm MM}$] increases” (Gisin et al., 2002, p. 165, mid
right column). “At $D_{\rm MM}=0.11$ the secure-key length formally vanishes”
(Molotkov and Timofeev, 2007, p. 524). See also Fuchs et al. (1997).
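The two curves and their crossing can be computed directly from these formulas; a short numerical sketch (ours) recovers the critical disturbance $D_{\rm MM}\approx 0.11$, at which $I_{AB}=I_{AE}$:

```python
from math import log2

def h(x):
    """Binary Shannon entropy."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def I_AB(D):
    return 1 - h(D)          # = 1 + D log2 D + (1 - D) log2 (1 - D)

def I_AE(D):
    return h(D)              # = -D log2 D - (1 - D) log2 (1 - D)

# Bisection for the crossing I_AB(D) = I_AE(D), i.e. h(D) = 1/2, on [0, 0.5].
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if h(mid) < 0.5 else (lo, mid)
D_crit = (lo + hi) / 2

assert abs(D_crit - 0.110) < 0.001            # BB84 critical disturbance
assert abs(I_AB(D_crit) - I_AE(D_crit)) < 1e-9
```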
Figure 2: Mutual information plots for (a) one-way protocol BB84; (b) two-way
protocols with either pp entangled Bell states or with LM05-like single photon
states under a MITM attack; (c) one-way asymmetric BB84-like protocol, in
which one basis serves as MM and the other as CM, under a MITM attack
(mcasBB84-MITM); $I_{AEc}$ stands for $I_{AE}(D_{pd\rm-CM})$.
For a MITM attack on two-way protocols (which are without any sifting), when
Eve is in the line all the time, there is no $D_{\rm MM}$ that Eve induces and
the mutual information between Alice and Bob as well as between Alice and Eve
is equal to unity: $I_{AB}=1$. In Fig. 2(b) $D_{\rm CM}$ indicates the
presence of Eve, where $D_{\rm CM}=0.5$ would mean that Eve is always in the
line. For $D_{\rm CM}$ slightly below $D_{\rm CM}=0.5$ one cannot exclude the
possibility that Eve has all the messages.
When Eve has all the messages then there is no Alice-Bob “privacy” they could
amplify. When Eve snatches only a portion of messages, then, for $D_{\rm CM}$
close to 0.5, there is still a question whether Eve has all messages or not
and whether Alice and Bob can erase Eve’s messages with their PA. With that in
mind, we can examine the security evaluation for the MITM and verify whether
the proofs of unconditional security carried out for other kind of attack on
LM05 in Lu et al. (2011); Beaudry et al. (2013) might apply to it as well.
In the aforementioned security proof Lu et al. (2011), which is claimed to be
unconditional, the authors assume that Eve probes the qubit, entangling it
with an ancilla. However, their approach does not cover the MITM attacks. To
show this we point to the following steps in their proof of unconditional
security vs. two-way-protocol-under-MITM counter-steps:
* Lu et al. (2011)
p. 2, 2nd paragraph from the top: “Alice announces partial of her key bits in
the encoding mode [MM]. They compute the error rate $e$ in the Alice-Bob
channel.”
* MITM
$I_{AB}=1$ and Eve does not induce any error in the MM, ever.
* Lu et al. (2011)
p. 2, Sec. III.A: “Eve cannot gain any information about Alice’s key bits if
she only attacks the qubits after Alice’s encoding operation.”
* MITM
Since Eve in her MITM attack sends her own photons to Alice and then reads off
$\boldsymbol{I}$ or $i{\boldsymbol{Y}}$ from Alice's encoding of those qubits,
Eve gains all information from Alice's qubits, more precisely, from Eve's
qubits encoded by Alice. Note that neither Alice nor Eve knows which states
Bob's qubits are in; they only control $\boldsymbol{I}$ and
$i{\boldsymbol{Y}}$.
* Lu et al. (2011)
Eve’s most general quantum operation can be described by a unitary operation
together with an ancilla. In the Bob-Alice channel, when Bob sends a qubit in
state $|0\rangle$ and Alice measures in the basis $|0\rangle$,$|1\rangle$, she
will get the measurement outcomes $|0\rangle$ with probability $c_{00}^{2}$ or
$|1\rangle$ with probability $c_{01}^{2}$.
* MITM
Alice does not measure qubits. She just applies $\boldsymbol{I}$ and
$i{\boldsymbol{Y}}$.
* Lu et al. (2011)
Eve's most general attack (with ancillas) is
$U_{BE}|0\rangle_{B}|E\rangle=c_{00}|0\rangle_{B}|E_{00}\rangle+c_{01}|1\rangle_{B}|E_{01}\rangle$,
$U_{BE}|1\rangle_{B}|E\rangle=c_{11}|1\rangle_{B}|E_{11}\rangle+c_{10}|0\rangle_{B}|E_{10}\rangle$,
$U_{BE}|+\rangle_{B}|E\rangle=c_{++}|+\rangle_{B}|E_{++}\rangle+c_{+-}|-\rangle_{B}|E_{+-}\rangle$,
$U_{BE}|-\rangle_{B}|E\rangle=c_{--}|-\rangle_{B}|E_{--}\rangle+c_{-+}|+\rangle_{B}|E_{-+}\rangle$.
The fidelities are $f_{0}=c_{00}^{2}$, $f_{1}=c_{11}^{2}$, $f_{+}=c_{++}^{2}$,
and $f_{-}=c_{--}^{2}$; $f_{0}=f_{1}$ and $f_{-}=f_{+}$ are assumed…
Bob's qubit is in a mixed state $\rho^{B}=(|0\rangle\langle
0|+|1\rangle\langle 1|)/2$. The joint state of the forward qubit and Eve's
ancilla becomes $\rho_{BA}^{BE}=U_{BE}(\rho^{B}\otimes|E\rangle\langle
E|)U_{BE}^{\dagger}$. Alice's encoded qubit together with Eve's ancillas is:
$\rho^{ABE}=\frac{1}{2}|0\rangle\langle
0|^{A}\otimes\rho^{BE}_{BA}+\frac{1}{2}|1\rangle\langle
1|^{A}\otimes{\boldsymbol{Y}}_{B}\rho^{BE}_{BA}{\boldsymbol{Y}}_{B}$…
The asymptotic key generation rate is $r=\lim_{m\to\infty}\frac{k(m)}{m}$,
where $m$ is the size of the raw key and $k(m)$ is the number of the final key
bits. Alice sends Bob EC information over a classical channel so that he can
correct his raw key to match Alice’s.
* MITM
Eve does not induce any disturbance, so there is no EC.
* Lu et al. (2011)
The final key is then derived by applying two-universal hashing to their
common raw key as PA. The secure key rate $r_{\rm PA}$ for secret key
generation is bounded by the conditional entropy of Alice and Bob's key bits
given Eve's quantum information about the key bits: $r_{\rm
PA}=S(\rho^{A}|\rho^{BE})=-{\rm tr}\,\rho^{ABE}\log_{2}\rho^{ABE}+{\rm
tr}\,\rho^{BE}\log_{2}\rho^{BE}=1-h(\xi)$, where $\xi=c_{++}^{2}-c_{1}^{2}$,
$c_{1}=c_{01}=c_{10}$, and $h(x)=-x\log_{2}x-(1-x)\log_{2}(1-x)$ is the binary
Shannon entropy. In particular, if Eve does not attack the forward qubits in
the Bob-Alice channel, i.e., $f_{0}=f_{1}=f_{+}=f_{-}=1$, one can find that
$r_{\rm PA}(\xi)=1$. This states that Eve cannot gain any information about
Alice’s key bits if she does not attack the travel qubit in the Bob-Alice
channel first. Consider the case that Eve measures each forward qubit in the
Bob-Alice channel in the basis $|0\rangle,|1\rangle$. Alice and Bob can verify
that $f_{0}=f_{1}=1$, and $f_{+}=f_{-}=\frac{1}{2}$. In this case, we have
$r_{\rm PA}(\xi)=0$. On the other hand, Eve can also measure each forward
qubit in the Bob-Alice channel in the basis $|+\rangle,|-\rangle$, which gives
$f_{+}=f_{-}=1$ and $f_{0}=f_{1}=\frac{1}{2}$, and thus $r_{\rm PA}(\xi)=0$.
That is, Eve can gain full information of Alice’s key bits if she has exactly
known the forward states before Alice’s encoding operations.
* MITM
Eve does not measure qubits (or ancillas). When she is in the line all the
time, she just reads off the $\boldsymbol{I}$ and $i{\boldsymbol{Y}}$ Alice
executed on her qubits and applies them to Bob's qubits she stored, i.e., she
copies the whole key, so both sides have the whole key. There is no privacy
which could be amplified. That means we have $r_{\rm PA}=1$, not 0. This
deserves a clarification. $r_{\rm PA}=\lim_{m\to\infty}\frac{k(m)}{m}=1$
states that the secret key is equivalent to the raw key in the infinite limit
for Alice, Bob, and Eve alike, which corresponds to
$I_{AB}=I_{AE}(D_{Max\rm-CM})=1$ for $D_{Max\rm-CM}=0.5$. So, $k_{\rm PA}(m)$
should not be used as a secret key, but that does not mean that we can infer
$k_{\rm PA}(m)=0$. After PA both parties have the same $r_{\rm PA}=1$, and
discarding $k_{\rm PA}(m)$ does not turn $r_{\rm PA}$ to zero. Discarding the
key is based on Alice and Bob's estimation from the CM, i.e., from outside the
MM space of calculation. A way of calculating $k_{\rm PA}(m)$ that includes
discarding the estimated bits both parties might possess should follow from an
adequately elaborated PA procedure and its algorithms. A starting step would
be a predefined $D_{Max\rm-CM}<0.5$ and its inclusion in the protocol via
$I_{MaxAE}=I_{AE}(D_{Max\rm-CM})$. That would give us conditional security of
the protocol.
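The three $r_{\rm PA}=1-h(\xi)$ cases quoted above from Lu et al. (2011) can be checked numerically, using $\xi=c_{++}^{2}-c_{1}^{2}$ together with the identification $c_{1}^{2}=c_{01}^{2}=1-f_{0}$ (the latter is our reading, implied by normalization of the attack coefficients):

```python
from math import log2

def h(x):
    """Binary Shannon entropy."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def r_pa(f0, f_plus):
    """Key rate r_PA = 1 - h(xi), with xi = c_++^2 - c_1^2 = f_+ - (1 - f_0)."""
    xi = f_plus - (1 - f0)
    return 1 - h(xi)

assert r_pa(1.0, 1.0) == 1.0    # Eve absent from the Bob-Alice channel
assert r_pa(1.0, 0.5) == 0.0    # Eve measures forward qubits in |0>, |1>
assert r_pa(0.5, 1.0) == 0.0    # Eve measures forward qubits in |+>, |->
```

Both measurement attacks give $\xi=1/2$, hence $h(\xi)=1$ and a vanishing key rate, matching the quoted conclusions.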
Taken together, the analysis carried out in Lu et al. (2011) is applicable and
correct for attacks on two-way protocols in which Eve reads off the states of
qubits with the help of ancillas, but it is inapplicable to MITM attacks.
Therefore, their proof is not universal, i.e., it cannot be considered
unconditional.
Can Alice and Bob still achieve reliable security of their two-way protocol?
To answer this, let us first compare one-way (e.g., BB84) and two-way
protocols. Under standard attacks on one-way protocols, Eve is left with less
and less information about the key as she approaches the critical disturbance
$D_{crit\rm{-MM}}=0.11$, i.e., the messages she snatches end up scrambled up
to 50% of the time. But Eve also scrambles Bob and Alice's messages, so that
after PA half of the messages are deleted and half coincide. So, neither party
is left with any usable bit.
In two-way protocols under a MITM attack the situation is different. Eve does
not scramble Bob and Alice's messages at all, no matter how high $D_{\rm CM}$
is, and the longer she is in the line the more messages she copies; for
$D_{Max\rm-CM}=0.5$ their secret keys are identical and no bare PA (hashing
only) can change that.
However, with two-way protocols, when Eve is not in the line all the time,
Alice and Bob carry out the PA guided by the level of disturbance, i.e., the
error rate in the CM. The standard PA uses a binary string of obtained
messages to produce a new, shorter string via universal hashing. Alice
randomly chooses a (permutation) function $f:\\{0,1\\}^{m}\to\\{0,1\\}^{k(m)}$
from some universal2 class of functions. She then sends both $f(x)$ and a
description of $f$ to Bob. After computing $f(y)$, where $y$ is his
corresponding string, Bob checks whether it agrees with $f(x)$. If it does, a
basic property of universal hashing allows them to assume that $x=y$ (Bennett
et al., 1995, p. 214).
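The hashing-based comparison can be sketched with a random binary matrix over GF(2), a standard universal2 family (a minimal illustration of ours; the parameter values are arbitrary):

```python
import secrets

def random_gf2_matrix(k, m):
    """Random k x m binary matrix: h(x) = M.x over GF(2) is a universal2 family."""
    return [[secrets.randbelow(2) for _ in range(m)] for _ in range(k)]

def hash_gf2(M, bits):
    """Matrix-vector product over GF(2)."""
    return tuple(sum(r * b for r, b in zip(row, bits)) % 2 for row in M)

m, k = 64, 16                                   # raw-key length m, hash length k
x = [secrets.randbelow(2) for _ in range(m)]    # Alice's string
y = list(x)                                     # Bob's string (here identical)

M = random_gf2_matrix(k, m)                     # Alice's random choice of f
# Alice sends M and hash_gf2(M, x); Bob compares it with his own hash.
assert hash_gf2(M, x) == hash_gf2(M, y)         # agreement -> assume x == y

# A string differing from x escapes detection only with probability 2**-k,
# which is exactly the universal2 property this PA step relies on.
```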
The problem with this version of the PA, i.e., with the algorithms it makes
use of, is that Alice and Bob should be able to estimate the length $k(m)$ of
the secure key with respect to the length $m$ of the raw key that would
guarantee them that Eve is not in possession of a single bit of $k(m)$; but
via their bare ("blind") PA they always get $x=y$, i.e., they do not have a
benchmark for estimating the amount of bits Eve lost. The PA procedures
elaborated in the literature do not help, since they are designed for one-way
protocols (BB84, B92, etc.) and are rather involved and intricate; cf. Renner
and König (2005). We have not found a PA procedure elaborated for two-way
protocols in the literature. What makes the task challenging is, on the one
hand, the asymptotic approach of $I_{AE}(D_{\rm CM})$ to $I_{AB}=1$ shown in
Fig. 2(b), which is absent in the BB84 protocol (see Fig. 2(a)), and, on the
other, the high $D$. Whether an efficient PA algorithm for two-way protocols
can be found remains to be seen.
A special kind of an aBB84-like protocol in which the $\mathbb{X}$ basis
serves as MM and $\mathbb{Z}$ as CM proposed by Bunandar et al. Bunandar et
al. (2018) can help us to better understand the problem of unlimited $D$. We
call the protocol a message-control-(a)symmetric BB84 (mcasBB84) protocol. In
Table 1 we compare the properties of the BB84, two-way, and mcasBB84 protocols
under MITM attacks.
| | BB84 | pp | LM05 | mcasBB84-MITM
---|---|---|---|---
mode(s) | MM | MM + CM | MM + CM | MM + CM
disturbance | $0\leq D_{\rm MM}\leq 0.5$ | $D_{\rm MM}=0$, $0\leq D_{\rm CM}\leq 0.5$ | $D_{\rm MM}=0$, $0\leq D_{\rm CM}\leq 0.5$ | $D_{\rm MM}=0$, $0\leq D_{\rm CM}\leq D_{pd{\rm-CM}}$
maximal disturbance | $D_{critical{\rm-MM}}=0.11$ | ? | ? | $D_{pd{\rm-CM}}$
secure | for $D_{\rm MM}<0.11$ | for $D_{\rm CM}<\ ?$ | for $D_{\rm CM}<\ ?$ | for $D_{\rm CM}<D_{pd{\rm-CM}}$
mutual information | $I_{AB}(D_{\rm MM})$, $I_{AE}(D_{\rm MM})$ | $I_{AB}=1$, $0\leq I_{AE}(D_{\rm CM})<1$ | $I_{AB}=1$, $0\leq I_{AE}(D_{\rm CM})<1$ | $I_{AB}=1$, $0\leq I_{AE}(D_{\rm CM})<I_{AE}(D_{pd{\rm-CM}})$
photon distance | $L$ | $4L$ | $2L$ | $L$
transmittance | $\cal T$ | ${\cal T}^{4}$ | ${\cal T}^{2}$ | $\cal T$
Table 1: Properties of a symmetric BB84-like protocol under an arbitrary
attack compared with properties of pp, LM05, and asymmetric mcasBB84 protocols
under MITM. For the pp and LM05 protocols $D<0.5$ means that Eve is in the
line only a portion of the time and $D=0.5$ that she is in the line all the
time. $D_{pd\rm-CM}$ is a predetermined threshold value of $D<0.5$ for the
mcasBB84-MITM Bunandar et al. (2018) protocol.
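The last two rows of Table 1 follow from exponential attenuation in optical fibre; a minimal numerical illustration (the 0.2 dB/km loss and the 50 km distance below are our own illustrative choices):

```python
def transmittance(distance_km, loss_db_per_km=0.2):
    """Single-pass fibre transmittance T = 10^(-alpha * L / 10)."""
    return 10 ** (-loss_db_per_km * distance_km / 10)

L = 50  # one-way Alice-Bob distance in km (illustrative)
T = transmittance(L)

# Effective channel transmittance per protocol (photon path from Table 1):
print(f"BB84/mcasBB84 (L):  {T:.4f}")    # -> 0.1000
print(f"LM05 (2L):          {T**2:.4f}")  # -> 0.0100
print(f"pp (4L):            {T**4:.4f}")  # -> 0.0001
```

Doubling or quadrupling the photon path squares or fourth-powers the transmittance, which is why the table penalizes LM05 and pp so heavily.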
Let us consider a MITM attack on mcasBB84 (mcasBB84-MITM) which Eve carries
out so as to measure and resend all qubits in the $\mathbb{X}$ basis. Since
Eve receives only $|0\rangle$ and $|1\rangle$ messages and resends them
unchanged, she does not introduce any disturbance in the MM and $I_{AB}=1$.
Eve’s $I_{AE}$ rises with her increased presence in the line. If she were in
the line all the time, we would have $D_{\rm CM}=0.5$ and $I_{AE}=1$. But the
protocol does not allow that. Instead, it predetermines a threshold value
$D_{pd\rm CM}$ and if $D_{\rm CM}>D_{pd\rm CM}$ Alice and Bob will abort it as
specified by Bunandar et al. (Bunandar et al., 2018, p. 7). The protocol is an
adaptation of the three-state aBB84 protocol, which makes use of both the $\mathbb{X}$ and $\mathbb{Z}$ bases for MM as in the standard BB84, only with two additional decoy settings put forward by Lim, Curty, Walenta, Xu and Zbinden Lim et al. (2014), which itself builds on earlier pioneering decoy-state methods, e.g., the one proposed by Wang Wang (2005). The idea of a
predetermined threshold value $D_{pd\rm CM}$ is taken over from Mizutani et
al. (2015). $D_{pd\rm CM}$ serves Bunandar et al. (2018) to calculate a
conditional security. The calculation determines which maximal $D_{pd\rm CM}$ is acceptable for an implementation. Apart from solving the two-way-protocol maximal-$D$ problem, the mcasBB84-MITM has another big advantage (with respect to the exponential attenuation of photons in optical fibres): its photons cover the same distance as in the original BB84 ($L$), i.e., half the distance LM05 photons cover ($2L$) and a quarter of the distance pp photons cover ($4L$).
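The dependence of $D_{\rm CM}$ on Eve's presence can be made concrete with a small Monte Carlo sketch of the intercept-resend model just described (an illustrative simulation of ours, not a security analysis):

```python
import random

random.seed(1)

def control_mode_error_rate(presence, trials=200_000):
    """Fraction of control-mode (Z-basis) qubits flipped when Eve,
    present a given fraction of the time, measures and resends in X."""
    errors = 0
    for _ in range(trials):
        bit = random.randrange(2)          # Z-basis control qubit
        if random.random() < presence:     # Eve intercepts this round
            # An X measurement erases the Z information; the resent
            # qubit gives Bob a uniformly random Z outcome.
            received = random.randrange(2)
        else:
            received = bit                 # untouched channel
        errors += received != bit
    return errors / trials

for p in (0.0, 0.5, 1.0):
    print(p, round(control_mode_error_rate(p), 2))  # D_CM ~ p / 2
```

The error rate grows linearly as $D_{\rm CM}\approx p/2$ with Eve's presence fraction $p$, reaching 0.5 only if she is in the line all the time, exactly the regime the $D_{pd\rm CM}$ threshold is designed to cut off.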
## IV Conclusion
To summarise, we considered man-in-the-middle (MITM) attacks on two kinds of two-
way QKD protocols (pp with entangled photons and LM05 with single photons) in
which an undetectable Eve can decode all the messages in the message mode (MM)
and showed that the mutual information between Alice and Bob is not a function
of disturbance in the MM, since there is no disturbance in the MM, but is
equal to unity no matter whether Eve is in the line or not. Eve induces a
disturbance ($D_{\rm CM}$) only in the control mode (CM). In a way, Alice’s
sending of the key is equivalent to sending an unencrypted plain text (via
photons obtained by and returned to Bob) secured by an indicator of Eve’s
presence. That burdens the protocols with the following drawbacks under a MITM
attack:
* •
the photons must cover twice the distance of an equivalent one-way BB84-like protocol (mcasBB84), which also has analogous MM and CM modes;
* •
while the BB84 protocol is unconditionally secure, and its recently proposed revised version, the mcasBB84 protocol, is only conditionally secure, the proof of unconditional security of the LM05 protocol given in Lu et al. (2011) is not valid, as shown in detail in Sec. III; the mcasBB84 protocol has a predetermined threshold value of the CM disturbance ($D_{pdCM}$) at which Bob and Alice must abort the protocol, whilst the considered two-way protocols have no such critical CM disturbance at which to abort;
* •
since there are no errors in the MM mode, the privacy amplification (PA) is
the only way to establish the security of the protocols and again in contrast
to the mcasBB84 no PA procedures for the two-way protocols have been provided
in the literature;
Let us elaborate on these points in reverse order.
In the two-way protocols the mutual information between Alice and Bob is
always greater than or equal to the one between Alice and Eve. When they are
equal, i.e., when Eve is in the line all the time, then Alice and Bob and Eve
have identical messages and there is no privacy which can be amplified and PA
cannot erase key bits Eve has snatched. For $D_{\rm CM}<0.5$ but close to 0.5, Alice and Bob have no procedure or algorithm for obtaining a secret key of which Eve possesses almost all bits. Note that $I_{AE}(D_{\rm CM})$ approaches $I_{AE}=1$ asymptotically and that a maximal $D_{\rm CM}$, after which Alice and Bob would have to abort the protocol, is not defined.
This is related to our analysis (in Sec. III) of the security proof given in Lu et al. (2011), which the authors call unconditional. There we show that their proof does not cover the man-in-the-middle (MITM) attack and that it therefore cannot be called “unconditional.”
To better understand the problem of the absence of a maximal tolerable $D_{\rm CM}$ (after which Alice and Bob have to abort the protocol) in two-way protocols, in Sec. III we compare the two-way protocols with a newly proposed one-way asymmetric BB84-like protocol Bunandar et al. (2018) (mcasBB84), in which the $\mathbb{X}$ basis serves as MM and the $\mathbb{Z}$ basis as CM, under a MITM attack (mcasBB84-MITM). We show that mcasBB84-MITM without a defined maximal $D_{\rm CM}$ would be completely equivalent to the two-way protocols under MITM.
But the mcasBB84 protocol resolves the problem of a maximal $D_{\rm CM}$ by
means of a predetermined threshold value $D_{pd\rm CM}$. When $D_{\rm
CM}>D_{pd\rm CM}$ Alice and Bob abort the protocol (Bunandar et al., 2018, p.
7). The security calculated for such $D_{pd\rm CM}$ (Bunandar et al., 2018,
pp. 7-10), i.e., an elaborated PA procedure, can be called a “conditional
security.” An additional advantage (with respect to exponential attenuation of
photons in optical fibres) of mcasBB84 is that photons do not travel from Bob
to Alice and back to Bob, but only from Alice to Bob (see Table 1).
A similar solution for two-way protocols would be to redesign the protocol so
as to either calculate a critical $D_{crit\rm-CM}$ at which Alice and Bob
would be able to erase all bits Eve might have possessed via privacy
amplification (PA) or to predetermine threshold value of the disturbance in
the CM, $D_{pd\rm-CM}$, for which PA calculations might be carried out. The
former calculation, if possible, would provide us with an unconditionally
security and the latter one would provide us with a conditional security of
the protocols. How to do either of the calculations is an open question, but
we conjecture that the former calculation is not feasible.
###### Acknowledgements.
Supported by the Ministry of Science and Education of Croatia through the
Center of Excellence for Advanced Materials and Sensing Devices (CEMS)
funding, and by MSE grants Nos. KK.01.1.1.01.0001 and 533-19-15-0022.
Computational support was provided by the cluster Isabella of the Zagreb
University Computing Centre and by the Croatian National Grid Infrastructure
(CRO-NGI). Financial supports by the Alexander von Humboldt Foundation as well
as by the German Research Foundation (DFG) and the Open Access Publication
Fund of Humboldt-Universität zu Berlin are acknowledged.
## References
* Elliott et al. (2005) Elliott, C.; Colvin, A.; Pearson, D.; Pikalo, O.; Schlafer, J.; Yeh, H. Current Status of the DARPA Quantum Network. In SPIE Quantum Information and Computation III; Donkor, E.J.; Pirich, A.R.; Brandt, H.E., Eds.; SPIE: Bellingham, Washington, 2005; Vol. 5815, Proceedings of SPIE, pp. 138–149.
* Sasaki, M. et al. (2011) Sasaki, M. et al.. Field test of quantum key distribution in the Tokyo QKD Network. Optics Express 2011, 19, 10387–10409.
* Peev, M. et al. (2009) Peev, M. et al.. The SECOQC Quantum Key Distribution Network in Vienna. New J. Phys. 2009, 11, 075001–1–37.
* Bennett and Brassard (1984) Bennett, C.H.; Brassard, G. Quantum Cryptography, Public Key Distribution and Coin Tossing. In International Conference on Computers, Systems & Signal Processing, Bangalore, India, December 10-12, 1984; IEEE: New York, 1984; pp. 175–179.
* Scarani et al. (2009) Scarani, V.; Bechmann-Pasquinucci, H.; Cerf, N.J.; Dušek, M.; Lütkenhaus, N.; Peev, M. The Security of Practical Quantum Key Distribution. Rev. Mod. Phys. 2009, 81, 1301–1350.
* Bruß (1998) Bruß, D. Optimal Eavesdropping in Quantum Cryptography with Six States. Phys. Rev. Lett. 1998, 81, 3018–3021.
* Boström and Felbinger (2002) Boström, K.; Felbinger, T. Deterministic Secure Direct Communication Using Entanglement. Phys. Rev. Lett. 2002, 89, 187902–1–4.
* Cai and Li (2004) Cai, Q.; Li, B. Improving the Capacity of the Boström-Felbinger Protocol. Phys. Rev. A 2004, 69, 054301–1–3.
* Lucamarini and Mancini (2005) Lucamarini, M.; Mancini, S. Secure Deterministic Communication without Entanglement. Phys. Rev. Lett. 2005, 94, 140501–1—4.
* Beaudry et al. (2013) Beaudry, N.J.; Lucamarini, M.; Mancini, S.; Renner, R. Security of Two-Way Quantum Key Distribution. Phys. Rev. A 2013, 88, 062302–1–9.
* Henao and Serra (2015) Henao, C.I.; Serra, R.M. Practical Security Analysis of Two-Way Quantum-Key-Distribution Protocols Based on Nonorthogonal States. Phys. Rev. A 2015, 92, 052317–1–9.
* Khir et al. (2012) Khir, M.A.; Zain, M.M.; Bahari, I.; Suryadi.; Shaari, S. Implementation of Two Way Quantum Key Distribution Protocol with Decoy State. Opt. Commun. 2012, 285, 842–845.
* Shaari and Mancini (2015) Shaari, J.S.; Mancini, S. Finite Key Size Analysis of Two-Way Quantum Cryptography. Entropy 2015, 17, 2723–2740.
* Pirandola et al. (2008) Pirandola, S.; Mancini, S.; Lloyd, S.; Braunstein, S.L. Continuous-Variable Quantum Cryptography Using Two-Way Quantum Communication. Nature Phys. 2008, 4, 726–730.
* Cerè et al. (2006) Cerè, A.; Lucamarini, M.; Di Giuseppe, G.; Tombesi, P. Experimental Test of Two-Way Quantum Key Distribution in the Presence of Controlled Noise. Phys. Rev. Lett. 2006, 96, 200501–1–4.
* Kumar et al. (2008) Kumar, R.; Lucamarini, M.; Giuseppe, G.D.; Natali, R.; Mancini, G.; Tombesi, P. Two-Way Quantum Key Distribution at Telecommunication Wavelength. Phys. Rev. A 2008, 77, 022304–1–10.
* Ostermeyer and Walenta (2008) Ostermeyer, M.; Walenta, N. On the Implementation of a Deterministic Secure Coding Protocol Using Polarization Entangled Photons. Opt. Commun. 2008, 281, 4540–4544.
* Lütkenhaus et al. (1999) Lütkenhaus, N.; Calsamiglia, J.; Suominen, K.A. Bell Measurements for Teleportation. Phys. Rev. A 1999, 59, 3295–3300.
* Vaidman and Yoran (1999) Vaidman, L.; Yoran, N. Methods for Reliable Teleportation. Phys. Rev. A 1999, 59, 116–125.
* Lu et al. (2011) Lu, H.; Fung, C.H.F.; Ma, X.; Cai, Q.Y. Unconditional Security Proof of a Deterministic Quantum Key Distribution with a Two-Way Quantum Channel. Phys. Rev. A 2011, 84, 042344–1–5.
* Han et al. (2007) Han, Y.G.; Yin, Z.Q.; Li, H.W.; Chen, W.; Wang, S.; Guo, G.C.; Han, Z.F. Security of Modified Ping-Pong Protocol in Noisy and Lossy Channel. Sci. Rep. 2007, 4, 4936–1–4.
* Bunandar et al. (2018) Bunandar, D.; Lentine, A.; Lee, C.; Cai, H.; Long, C.M.; Boynton, N.; Martinez, N.; DeRose, C.; Chen, C.; Grein, M.; Trotter, D.; Starbuck, A.; Pomerene, A.; Hamilton, S.; Wong, F.N.C.; Camacho, R.; Davids, P.; Urayama, J.; Englund, D. Metropolitan Quantum Key Distribution with Silicon Photonics. Phys. Rev. X 2018, 8, 021009–1–12.
* Nguyen (2004) Nguyen, B.A. Quantum Dialogue. Phys. Lett. A 2004, 328, 6–10.
* Lucamarini (2003) Lucamarini, M. Quantum Decoherence and Quantum Cryptography. PhD thesis, University of Rome La Sapienza, 2003.
* Tomamichel1 et al. (2012) Tomamichel1, M.; Lim, C.C.W.; Gisin, N.; Renner, R. Tight finite-key analysis for quantum cryptography. Nature Comm. 2012, 3, 1–6.
* Lo et al. (2005) Lo, H.; Chau, H.F.; Ardehali, M. Efficient Quantum Key Distribution Scheme and a Proof of Its Unconditional Security. J. Cryptol. 2005, 82, 133–166.
* Scarani and Renner (2008) Scarani, V.; Renner, R. Quantum Cryptography with Finite Resources: Unconditional Security Bound for Discrete-Variable Protocols with One-Way Postprocessing. Phys. Rev. Let. 2008, 100, 200501–1–4.
* Cai and Scarani (2009) Cai, R.Y.Q.; Scarani, V. Finite-Key Analysis for Practical Implementations of Quantum Key Distribution. New J. Phys. 2009, 11, 045024–1–20.
* Zhou et al. (2014) Zhou, C.; Bao, W.S.; Li, H.W.; Wang, Y.; Fu, X.Q. Key-Leakage Evaluation of Authentication in Quantum Key Distribution with Finite Resources. Quantum Inf. Process. 2014, 13, 935–955.
* Mizutani et al. (2015) Mizutani, A.; Curty, M.; Ci, C.; Lim, W.; Imoto, N.; Tamaki, K. Finite-Key Security Analysis of Quantum Key Distribution with Imperfect Light Sources. New J. Phys. 2015, 17, 093011–1–27.
* Fuchs et al. (1997) Fuchs, C.A.; Gisin, N.; Griffiths, R.B.; Niu, C.S.; Peres, A. Optimal Eavesdropping in Quantum Cryptography. I. Information Bound and Optimal Strategy. Phys. Rev. A 1997, 56, 1163–1172.
* Gisin et al. (2002) Gisin, N.; Ribordy, G.; Tittel, W.; Zbinden, H. Quantum Cryptography. Rev. Mod. Phys. 2002, 74, 145–195.
* Molotkov and Timofeev (2007) Molotkov, S.N.; Timofeev, A.V. Explicit Attack on the Key in Quantum Cryptography (BB84 Protocol) Reaching the Theoretical Error Limit $Q_{c}\approx 11\%$. JETP Lett. 2007, 85, 524–525.
* Bennett et al. (1995) Bennett, C.H.; Brassard, G.; Crépeau, C.; Maurer, U.M. Generalized Privacy Amplification. IEEE Trans. Inf. Theory 1995, 41, 1915–1923.
* Renner and König (2005) Renner, R.; König, R. Universally Composable Privacy Amplification Against Quantum Adversaries. In Theory of Cryptography; 2nd Theory of Cryptography Conference, TCC 2005, Cambridge, MA, USA, February 10-12, 2005. Proceedings; Kilian, J., Ed.; Springer, 2005; Vol. 3378, chapter Quantum Cryptography and Universal Composability, pp. 407–425.
* Lim et al. (2014) Lim, C.C.W.; Curty, M.; Walenta, N.; Xu, F.; Zbinden, H. Concise Security Bounds for Practical Decoy-State Quantum Key Distribution. Phys. Rev. A 2014, 89, 022307–1–7.
* Wang (2005) Wang, X.B. Beating the Photon-Number-Splitting Attack in Practical Quantum Cryptography. Phys. Rev. Lett. 2005, 94, 230503–1–4.
# The Dimension Spectrum Conjecture for Planar Lines
D. M. Stull
Department of Computer Science
Northwestern University
<EMAIL_ADDRESS>
###### Abstract
Let $L_{a,b}$ be a line in the Euclidean plane with slope $a$ and intercept
$b$. The dimension spectrum $\operatorname{sp}(L_{a,b})$ is the set of all
effective dimensions of individual points on $L_{a,b}$. Jack Lutz, in the
early 2000s, posed the dimension spectrum conjecture. This conjecture states
that, for every line $L_{a,b}$, the spectrum of $L_{a,b}$ contains a unit
interval.
In this paper we prove that the dimension spectrum conjecture is true.
Specifically, let $(a,b)$ be a slope-intercept pair, and let
$d=\min\\{\dim(a,b),1\\}$. For every $s\in[0,1]$, we construct a point $x$
such that $\dim(x,ax+b)=d+s$. Thus, we show that $\operatorname{sp}(L_{a,b})$
contains the interval $[d,1+d]$.
## 1 Introduction
The effective dimension, $\dim(x)$, of a point $x\in\mathbb{R}^{n}$ gives a
fine-grained measure of the algorithmic randomness of $x$. Effective dimension
was first defined by J. Lutz [5], and was originally used to quantify the
sizes of complexity classes. Unsurprisingly, because of its strong connection
to (classical) Hausdorff dimension, effective dimension has proven to be
geometrically meaningful [3, 15, 1, 9]. Indeed, an exciting line of research
has shown that one can prove classical results in geometric measure theory
using effective dimension [7, 10, 11, 13]. Importantly, these are not
effectivizations of known results, but new results whose proofs rely on
effective methods. Thus, it is of considerable interest to investigate the
effective dimensions of points of geometric objects such as lines.
Let $L_{a,b}$ be a line in the Euclidean plane with slope $a$ and intercept
$b$. Given the point-wise nature of effective dimension, one can study the
dimension spectrum of $L_{a,b}$. That is, the set
$\operatorname{sp}(L_{a,b})=\\{\dim(x,ax+b)\,|\,x\in\mathbb{R}\\}$
of all effective dimensions of points on $L_{a,b}$. In the early 2000s, Jack
Lutz posed the dimension spectrum conjecture for lines. That is, he
conjectured that the dimension spectrum of every line in the plane contains a
unit interval.
The first progress on this conjecture was made by Turetsky.
###### Theorem 1 (Turetsky [18]).
The set of points $x\in\mathbb{R}^{n}$ with $\dim(x)=1$ is connected.
This immediately implies that $1\in\operatorname{sp}(L_{a,b})$ for every line
$L_{a,b}$. The next progress on the dimension spectrum conjecture was by Lutz
and Stull [11]. They showed that the effective dimension of points on a line
is intimately connected to problems in fractal geometry. Among other things,
they proved that $1+d\in\operatorname{sp}(L_{a,b})$ for every line $L_{a,b}$,
where $d=\min\\{\dim(a,b),1\\}$. Shortly thereafter, Lutz and Stull [12]
proved the dimension spectrum conjecture for the special case where the
effective dimension and strong dimension of $(a,b)$ agree.
In this paper, we prove that the dimension spectrum conjecture is true. For every
$s\in(0,1)$, we construct a point $x$ such that $\dim(x,ax+b)=d+s$, where
$d=\min\\{\dim(a,b),1\\}$. This, combined with the results of Lutz and Stull,
implies that
$[d,1+d]\subseteq\operatorname{sp}(L_{a,b})$,
for every planar line $L_{a,b}$. The proof of the conjecture builds on the
techniques of [11]. The primary difficulty of the conjecture is the case when
the dimension of $x$ is less than the difficulty of the line $(a,b)$. We
expand on the nature of this $\dim(x)<\dim(a,b)$ obstacle in Section 3.1. Our
main technical contribution is showing how to overcome this difficulty by
encoding the information of $a$ into our point $x$. Further complications
arise in the “high-dimensional” case, i.e., when $\dim(a,b)>1$. In this case,
we combine the encoding idea with a non-constructive argument.
Apart from its intrinsic interest, recent work has shown that the effective
dimensions of points have deep connections to problems in classical analysis
[10, 11, 13, 17, 8]. Lutz and Lutz [7] proved the point-to-set principle,
which characterizes the Hausdorff dimension of a set by the effective dimension of
its individual points. Lutz and Stull [11], using the point-to-set principle,
showed that lower bounds on the effective dimensions of points on a line are
intimately related to well-known problems of classical geometric measure
theory such as the Kakeya and Furstenberg conjectures.
The structure of the paper is as follows. In Section 2, we recall the basic
definitions and results of Kolmogorov complexity and effective dimension we
need. In Section 3, we recall the strategy of Lutz and Stull [11] to give
strong lower bounds on the effective dimension of points on a line. In
Sections 3 and 3.1 we give intuition about this strategy, and discuss why it
is not enough to settle the dimension spectrum conjecture.
In Section 4, we prove the dimension spectrum conjecture for lines with
effective dimension at most one. We also give a brief overview of this proof,
and how it overcomes the strategy discussed in Section 3. In Section 5, we
prove the dimension spectrum conjecture for lines with effective dimension
greater than one. We also give intuition of this proof, and how it overcomes
the difficulties when the line is high-dimensional.
Finally, in the conclusion, we discuss open questions and avenues for future
research.
## 2 Preliminaries
The _conditional Kolmogorov complexity_ of a binary string
$\sigma\in\\{0,1\\}^{*}$ given binary string $\tau\in\\{0,1\\}^{*}$ is
$K(\sigma|\tau)=\min_{\pi\in\\{0,1\\}^{*}}\left\\{\ell(\pi):U(\pi,\tau)=\sigma\right\\}\,,$
where $U$ is a fixed universal prefix-free Turing machine and $\ell(\pi)$ is
the length of $\pi$. The _Kolmogorov complexity_ of $\sigma$ is
$K(\sigma)=K(\sigma|\lambda)$, where $\lambda$ is the empty string. Thus, the
Kolmogorov complexity of a string $\sigma$ is the minimum length program
which, when run on a universal Turing machine, eventually halts and outputs
$\sigma$. We stress that the choice of universal machine affects the
Kolmogorov complexity by at most an additive constant (which, especially for
our purposes, can be safely ignored). See [4, 16, 2] for a more comprehensive
overview of Kolmogorov complexity.
We can extend these definitions to Euclidean spaces by introducing “precision”
parameters [9, 7]. Let $x\in\mathbb{R}^{m}$, and $r,s\in\mathbb{N}$. The
_Kolmogorov complexity of $x$ at precision $r$_ is
$K_{r}(x)=\min\left\\{K(p)\,:\,p\in
B_{2^{-r}}(x)\cap\mathbb{Q}^{m}\right\\}\,.$
The _conditional Kolmogorov complexity of $x$ at precision $r$ given
$q\in\mathbb{Q}^{m}$_ is
$\hat{K}_{r}(x|q)=\min\left\\{K(p|q)\,:\,p\in B_{2^{-r}}(x)\cap\mathbb{Q}^{m}\right\\}\,.$
The _conditional Kolmogorov complexity of $x$ at precision $r$ given
$y\in\mathbb{R}^{n}$ at precision $s$_ is
$K_{r,s}(x|y)=\max\big\\{\hat{K}_{r}(x|q)\,:\,q\in B_{2^{-s}}(y)\cap\mathbb{Q}^{n}\big\\}\,.$
We abbreviate $K_{r,r}(x|y)$ by $K_{r}(x|y)$.
The _effective Hausdorff dimension_ and _effective packing dimension_
111Although effective Hausdorff dimension was originally defined by J. Lutz [6] using
martingales, it was later shown by Mayordomo [14] that the definition used
here is equivalent. For more details on the history of connections between
Hausdorff dimension and Kolmogorov complexity, see [2, 15]. of a point
$x\in\mathbb{R}^{n}$ are
$\dim(x)=\liminf_{r\to\infty}\frac{K_{r}(x)}{r}\quad\text{and}\quad\operatorname{Dim}(x)=\limsup_{r\to\infty}\frac{K_{r}(x)}{r}\,.$
Intuitively, these dimensions measure the density of algorithmic information
in the point $x$.
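Kolmogorov complexity is uncomputable, so $\dim(x)$ and $\operatorname{Dim}(x)$ cannot be evaluated directly; still, the liminf/limsup picture can be illustrated with a general-purpose compressor as a crude stand-in for $K_{r}$ (purely a heuristic of ours, not the definition):

```python
import os
import zlib

def ratio(prefix: bytes) -> float:
    """Compressed length over raw length: a rough stand-in for K_r(x)/r."""
    return len(zlib.compress(prefix, 9)) / len(prefix)

# A sequence alternating incompressible blocks with all-zero blocks.
blocks = []
for j in range(6):
    blocks.append(os.urandom(2 ** (j + 8)))  # dense-information block
    blocks.append(bytes(2 ** (j + 8)))       # zero block
x = b"".join(blocks)

# The proxy for K_r(x)/r oscillates with the prefix length r: it dips
# after each zero block (liminf-like behaviour) and rises after each
# random block (limsup-like behaviour).
n = 0
for j, b in enumerate(blocks):
    n += len(b)
    print("after", "random" if j % 2 == 0 else "zeros ", round(ratio(x[:n]), 2))
```

For a point built this way, the liminf and limsup of the complexity density genuinely differ, which is the gap between $\dim(x)$ and $\operatorname{Dim}(x)$.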
By letting the underlying fixed prefix-free Turing machine $U$ be a universal
_oracle_ machine, we may _relativize_ the definition in this section to an
arbitrary oracle set $A\subseteq\mathbb{N}$. The definitions of
$K^{A}_{r}(x)$, $\dim^{A}(x)$, $\operatorname{Dim}^{A}(x)$, etc. are then all
identical to their unrelativized versions, except that $U$ is given oracle
access to $A$. Note that taking oracles as subsets of the naturals is quite
general. We can, and frequently do, encode a point $y$ into an oracle, and
consider the complexity of a point relative to $y$. In these cases, we
typically forgo explicitly referring to this encoding, and write e.g.
$K^{y}_{r}(x)$.
Among the most used results in algorithmic information theory is the symmetry
of information. In Euclidean spaces, this was first proved, in a slightly
weaker form in [7], and in the form presented below in [11].
###### Lemma 2.
For every $m,n\in\mathbb{N}$, $x\in\mathbb{R}^{m}$, $y\in\mathbb{R}^{n}$, and
$r,s\in\mathbb{N}$ with $r\geq s$,
1. (i)
$\displaystyle|K_{r}(x|y)+K_{r}(y)-K_{r}(x,y)|\leq O(\log r)+O(\log\log\|y\|)\,.$
2. (ii)
$\displaystyle|K_{r,s}(x|x)+K_{s}(x)-K_{r}(x)|\leq O(\log
r)+O(\log\log\|x\|)\,.$
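While the Kolmogorov version above carries logarithmic error terms, its Shannon-entropy counterpart $H(X|Y)+H(Y)=H(X,Y)$ is exact, and checking it numerically on a small joint distribution (our own toy example) conveys the shape of item (i):

```python
from math import log2

# An arbitrary joint distribution p(x, y) on {0,1} x {0,1}.
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
p_y = {y: p[(0, y)] + p[(1, y)] for y in (0, 1)}

def H(dist):
    """Shannon entropy of a distribution given as {outcome: prob}."""
    return -sum(q * log2(q) for q in dist.values() if q > 0)

# H(X|Y) computed directly from its definition,
# H(X|Y) = sum over (x, y) of p(x, y) log2(p(y) / p(x, y)).
H_x_given_y = sum(q * log2(p_y[y] / q) for (x, y), q in p.items() if q > 0)

# Symmetry of information, with no error term in the Shannon setting.
assert abs(H_x_given_y + H(p_y) - H(p)) < 1e-12
```

The algorithmic version of Lemma 2 replaces the exact identity by one that holds up to $O(\log r)$, which is negligible after dividing by $r$ in the dimension definitions.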
### 2.1 Initial segments versus $K$-optimizing rationals
For $x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}$ and a precision
$r\in\mathbb{N}$, let
$x{\upharpoonright}r=(x_{1}{\upharpoonright}r,\ldots,x_{n}{\upharpoonright}r)$,
where each
$x_{i}{\upharpoonright}r=2^{-r}\lfloor 2^{r}x_{i}\rfloor$
is the truncation of $x_{i}$ to $r$ bits to the right of the binary point. For
$r\in(0,\infty)$, let $x{\upharpoonright}r=x{\upharpoonright}\lceil r\rceil$.
We can relate the complexity $K_{r}(x)$ of $x$ at precision $r$ and the
_initial segment complexity_ $K(x{\upharpoonright}r)$ of the binary
representation of $x$. Lutz and Stull [11] proved the following lemma, and its
corollaries, relating these two quantities. Informally, it shows that, up to a
logarithmic error, the two quantities are equivalent.
###### Lemma 3.
For every $m,n\in\mathbb{N}$, there is a constant $c$ such that for all
$x\in\mathbb{R}^{m}$, $p\in\mathbb{Q}^{n}$, and $r\in\mathbb{N}$,
$|\hat{K}_{r}(x|p)-K(x{\upharpoonright}r\,|\,p)|\leq K(r)+c\,.$
This has the following two useful corollaries.
###### Corollary 4.
For every $m\in\mathbb{N}$, there is a constant $c$ such that for every
$x\in\mathbb{R}^{m}$ and $r\in\mathbb{N}$,
$|K_{r}(x)-K(x{\upharpoonright}r)|\leq K(r)+c\,.$
###### Corollary 5.
For every $m,n\in\mathbb{N}$, there is a constant $c$ such that for all
$x\in\mathbb{R}^{m}$, $y\in\mathbb{R}^{n}$, and $r,s\in\mathbb{N}$,
$|K_{r,s}(x|y)-K(x{\upharpoonright}r\,|\,y{\upharpoonright}s)|\leq
K(r)+K(s)+c\,.$
## 3 Previous Work
The proof of our main theorem will use the tools and techniques introduced by
Lutz and Stull [11]. In this section we will state the main lemmas needed for
this paper. We will devote some time giving intuition about each lemma. In
Subsection 3.1, we give an informal discussion on how to combine these lemmas
to give bounds on the effective dimensions of points on a line. We will also
discuss where these tools break down, motivating the techniques introduced in
this paper.
The first lemma, informally, states the following. Suppose that $L_{a,b}$
intersects $(x,ax+b)$ and the complexity of $(a,b)$ is low (item (i)). Further
assume that (item (ii)), if $L_{u,v}$ is any other line intersecting
$(x,ax+b)$ such that $\|(a,b)-(u,v)\|<2^{-m}$ then either
1. 1.
$u,v$ is of high complexity, or
2. 2.
$u,v$ is very close to $a,b$.
Then it is possible to compute an approximation of $(a,b)$ given an
approximation of $(x,ax+b)$ and first $m$ bits of $(a,b)$. Indeed, we can
simply enumerate over all low complexity lines, since we know that the only
candidate is essentially $(a,b)$.
###### Lemma 6 (Lutz and Stull [11]).
Suppose that $A\subseteq\mathbb{N}$, $a,b,x\in\mathbb{R}$, $m,r\in\mathbb{N}$,
$\delta\in\mathbb{R}_{+}$, and $\varepsilon,\eta\in\mathbb{Q}_{+}$ satisfy
$r\geq\log(2|a|+|x|+5)+1$ and the following conditions.
* (i)
$K^{A}_{r}(a,b)\leq\left(\eta+\varepsilon\right)r$.
* (ii)
For every $(u,v)\in B_{2^{-m}}(a,b)$ such that $ux+v=ax+b$,
$K^{A}_{r}(u,v)\geq\left(\eta-\varepsilon\right)r+\delta\cdot(r-t)\,,$
whenever $t=-\log\|(a,b)-(u,v)\|\in(0,r]$.
Then,
$K_{r}^{A}(a,b,x)\leq
K_{r}(x,ax+b)+K_{m,r}(a,b\,|\,x,ax+b)+\frac{4\varepsilon}{\delta}r+K(\varepsilon,\eta)+O(\log
r)\,.$
The second lemma which will be important in proving our main theorem is the
following. It is essentially the approximation version of the simple geometric
fact that any two lines intersect at a single point. In other words, if
$ax+b=ux+v$ and you are given an approximation of $(a,b)$ and an approximation
of $(u,v)$, then you can compute an approximation of $x$. Moreover, the
quality of the approximation of $x$ depends linearly on the distance between
$(u,v)$ and $(a,b)$.
###### Lemma 7 ([11]).
Let $a,b,x\in\mathbb{R}$. For all $u,v\in B_{1}(a,b)$ such that $ux+v=ax+b$,
and for all $r\geq t:=-\log\|(a,b)-(u,v)\|$,
$K_{r}(u,v)\geq K_{t}(a,b)+K_{r-t,r}(x|a,b)-O(\log r)\,.$
The primary function of this lemma is to give a lower bound on the complexity
of any line intersecting $(x,ax+b)$, i.e., ensuring condition (ii) of the
previous lemma.
Finally, we also need the following oracle construction of Lutz and Stull. The
purpose of this lemma is to show that we can lower the complexity of our line
$(a,b)$, thus ensuring item (i) of Lemma 6. Crucially, we can lower this
complexity using only the information contained in $(a,b)$.
###### Lemma 8 ([11]).
Let $r\in\mathbb{N}$, $z\in\mathbb{R}^{2}$, and
$\eta\in\mathbb{Q}\cap[0,\dim(z)]$. Then there is an oracle $D=D(r,z,\eta)$
satisfying
* (i)
For every $t\leq r$, $K^{D}_{t}(z)=\min\\{\eta r,K_{t}(z)\\}+O(\log r)$.
* (ii)
For every $m,t\in\mathbb{N}$ and $y\in\mathbb{R}^{m}$,
$K^{D}_{t,r}(y|z)=K_{t,r}(y|z)+O(\log r)$ and
$K_{t}^{z,D}(y)=K_{t}^{z}(y)+O(\log r)$.
### 3.1 Combining the lemmas
We now briefly discuss the strategy of [11] which combines the above lemmas to
give non-trivial bounds on the effective dimension of points on a line.
Suppose $(a,b)$ is a line with $\dim(a,b)=d$, and $x$ is a point with
$\dim^{a,b}(x)=s$. We will also make the crucial assumption that $d\leq s$.
Roughly, Lutz and Stull showed that, for sufficiently large $r$,
$K_{r}(x,ax+b)\geq(s+d)r$.
The strategy is as follows. Note that to simplify the exposition, all
inequalities in this discussion will be approximate. Using Lemma 8, we find an
oracle $D$ which reduces the complexity of $(a,b)$ to some $\eta\leq d$,
i.e., $K^{D}_{r}(a,b)=\eta r$. Combining this with Lemma 7, we get a lower
bound on every line $(u,v)$ intersecting $(x,ax+b)$. That is, we show for any
such line,
$K^{D}_{r}(u,v)\geq\eta t+s(r-t)-O(\log r)$
By our choice of $\eta$, we can simplify this inequality to get
$K^{D}_{r}(u,v)\geq sr-O(\log r)$
In particular, relative to $D$, both conditions of Lemma 6 are satisfied and
we have the sufficient lower bound.
In the previous sketch, it was crucial that the dimension of $(a,b)$ was less
than $s$, in order for the lower bound from Lemma 7 to be useful. In the case
where $\dim(a,b)$ is much larger than $\dim(x)$, this strategy breaks down,
and further techniques are required.
We also note that this seems to be a very deep issue. As discussed in the
Introduction, the point-to-set principle of J. Lutz and N. Lutz [7] allows us
to translate problems from (classical) geometric measure theory into problems
of effective dimension. The same issue discussed in this section occurs when
attacking the notorious Kakeya and Furstenberg set conjectures using the
point-to-set principle. While resolving this obstacle in full generality is
still elusive, we are able to get around it in the context of the Dimension
Spectrum Conjecture.
## 4 Low-Dimensional Lines
In this section, we prove the spectrum conjecture for lines with
$\dim(a,b)\leq 1$.
###### Theorem 9.
Let $(a,b)\in\mathbb{R}^{2}$ be a slope-intercept pair with $\dim(a,b)\leq 1$. Then for every $s\in[0,1]$, there is a point $x\in\mathbb{R}$ such that
$\dim(x,ax+b)=s+\dim(a,b)$.
We begin by giving an intuitive overview of the proof.
### 4.1 Overview of the proof
As mentioned in Section 3.1, the main obstacle of the Dimension Spectrum Conjecture occurs when the dimension of $x$ is lower than the dimension of the line $(a,b)$, and, in general, this issue remains formidable. However, in the Dimension Spectrum Conjecture we are given the freedom to construct the point $x$ ourselves, which allows us to overcome the obstacle.
The most natural way to construct a sequence $x$ with $\dim^{a,b}(x)=s$ is to
start with a random sequence, and pad it with long strings of zeros. This
simple construction, unfortunately, does not seem to work.
We are able to overcome the obstacle by padding the random sequence with the
bits of $a$, instead of with zeros. Thus, given an approximation of $(x,ax+b)$ we
trivially have a decent approximation of $a$ (formalized in Lemma 10). This
allows us, using Lemma 6, to restrict our search for $(a,b)$ to a smaller set
of candidate lines.
### 4.2 Proof for low-dimensional lines
Fix a slope-intercept pair $(a,b)$, and let $d=\dim(a,b)$. Let $s\in(0,d)$.
Let $y\in\mathbb{R}$ be random relative to $(a,b)$. Thus, for every
$r\in\mathbb{N}$,
$K^{a,b}_{r}(y)\geq r-O(\log r)$.
Define the sequence of natural numbers $\\{h_{j}\\}_{j\in\mathbb{N}}$
inductively as follows. Define $h_{0}=1$. For every $j>0$, let
$h_{j}=\min\left\\{h\geq
2^{h_{j-1}}:K_{h}(a,b)\leq\left(d+\frac{1}{j}\right)h\right\\}.$
Note that $h_{j}$ always exists. For every $r\in\mathbb{N}$, let
$\displaystyle x[r]=\begin{cases}a[r-\lfloor sh_{j}\rfloor]&\text{ if
}r\in(\lfloor sh_{j}\rfloor,h_{j}]\text{ for some }j\in\mathbb{N}\\\
y[r]&\text{ otherwise}\end{cases}$
where $x[r]$ is the $r$th bit of $x$. Define $x\in\mathbb{R}$ to be the real
number with this binary expansion.
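To make the construction concrete, the following small Python sketch (our own illustration, not part of the paper; the function `build_x`, the finite truncation, and the floor-based indexing are all assumptions made for the example) pads a "random" bit sequence $y$ with prefixes of $a$ on the intervals $(\lfloor sh_j\rfloor, h_j]$:

```python
# Illustration (not from the paper): build a finite truncation of the
# sequence x, whose bits follow the "random" sequence y except on the
# intervals (floor(s*h_j), h_j], where they copy a prefix of a.
def build_x(y_bits, a_bits, h, s):
    """y_bits, a_bits: lists of 0/1 bits; h: the cut points h_j; s in (0,1)."""
    x = list(y_bits)
    for h_j in h:
        lo = int(s * h_j)  # floor(s * h_j)
        for r in range(lo + 1, min(h_j, len(x) - 1) + 1):
            x[r] = a_bits[r - lo]  # x[r] = a[r - floor(s*h_j)]
    return x
```

Note that the intervals for distinct $j$ never overlap, since $h_j\geq 2^{h_{j-1}}$ grows so quickly that $\lfloor sh_{j+1}\rfloor>h_j$ for all large $j$.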
One of the most important aspects of our construction is that we encode (a
subset of) the information of $a$ into our point $x$. This is formalized in
the following lemma.
###### Lemma 10.
For every $j\in\mathbb{N}$, and every $r$ such that $sh_{j}<r\leq h_{j}$,
$K_{r-sh_{j},r}(a,b\,|\,x,ax+b)\leq O(\log h_{j})$.
###### Proof.
By definition, the last $r-sh_{j}$ bits of $x$ are equal to the first
$r-sh_{j}$ bits of $a$. That is,
$\displaystyle x[sh_{j}]\,x[sh_{j}+1]\,\ldots x[r]$
$\displaystyle=a[0]\,a[1]\,\ldots a[r-sh_{j}]$
$\displaystyle=a{\upharpoonright}(r-sh_{j}).$
Therefore, since additional information cannot increase Kolmogorov complexity,
$\displaystyle K_{r-sh_{j},r}(a\,|\,x,ax+b)$ $\displaystyle\leq
K_{r-sh_{j},r}(a\,|\,x)$ $\displaystyle\leq O(\log h_{j}).$
Note that, given $2^{-(r-sh_{j})}$-approximations of $a$, $x$, and $ax+b$, it
is possible to compute an approximation of $b$. That is,
$K_{r-sh_{j}}(b\,|\,a,x,ax+b)\leq O(\log h_{j})$.
Therefore, by Lemma 2 and the two above inequalities,
$\displaystyle K_{r-sh_{j},r}(a,b\,|\,x,ax+b)$
$\displaystyle=K_{r-sh_{j},r}(a\,|\,x,ax+b)$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;+K_{r-sh_{j},r}(b\,|\,a,x,ax+b)+O(\log
r)$ $\displaystyle\leq O(\log h_{j})+K_{r-sh_{j},r}(b\,|\,a,x,ax+b)+O(\log r)$
$\displaystyle\leq O(\log h_{j}).$
∎
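The step in the proof where an approximation of $b$ is computed from approximations of $a$, $x$, and $ax+b$ is elementary arithmetic: $b=(ax+b)-a\cdot x$, and multiplication is Lipschitz on bounded intervals. The following toy numeric check (our own illustration; the constants and the helper `approx` are arbitrary choices, not from the paper) makes this concrete:

```python
# Toy numerical check (not from the paper): given 2^-r-approximations of a,
# x, and a*x+b, the intercept b is recovered to precision O(2^-r), since
# b = (a*x+b) - a*x and multiplication is Lipschitz on bounded intervals.
import math

def approx(val, r):
    """Truncate val to a 2^-r-approximation."""
    return math.floor(val * 2**r) / 2**r

a, b, x = 0.6180339887, 0.4142135623, 0.7320508075
r = 30
a_r, x_r, v_r = approx(a, r), approx(x, r), approx(a * x + b, r)
b_est = v_r - a_r * x_r
# The error grows only by a constant factor over the input precision.
assert abs(b_est - b) < 4 * 2**-r
```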
The other important property of our construction is that $(a,b)$ gives no
information about $x$, beyond the information specifically encoded into $x$.
###### Lemma 11.
For every $j\in\mathbb{N}$, the following hold.
1. 1.
$K^{a,b}_{t}(x)\geq t-O(\log h_{j})$, for all $t\leq sh_{j}$.
2. 2.
$K^{a,b}_{r}(x)\geq sh_{j}+r-h_{j}-O(\log h_{j})$, for all $h_{j}\leq r\leq
sh_{j+1}$.
###### Proof.
We first prove item (1). Let $t\leq sh_{j}$. Then, by our construction of $x$,
and choice of $y$,
$\displaystyle K^{a,b}_{t}(x)$ $\displaystyle\geq
K^{a,b}_{t}(y)-h_{j-1}-O(\log t)$ $\displaystyle\geq t-O(\log t)-\log
h_{j}-O(\log t)$ $\displaystyle\geq t-O(\log h_{j}).$
For item (2), let $h_{j}\leq r\leq sh_{j+1}$. Then, by item (1), Lemma 2 and
our construction of $x$,
$\displaystyle K^{a,b}_{r}(x)$
$\displaystyle=K^{a,b}_{h_{j}}(x)+K^{a,b}_{r,h_{j}}(x)-O(\log r)$ [Lemma 2]
$\displaystyle\geq sh_{j}+K^{a,b}_{r,h_{j}}(x)-O(\log r)$ [Item (1)]
$\displaystyle\geq sh_{j}+K^{a,b}_{r,h_{j}}(y)-O(\log r)$ $\displaystyle\geq
sh_{j}+r-h_{j}-O(\log r),$
and the proof is complete.
∎
We now prove bounds on the complexity of our constructed point. We break the
proof into two parts.
In the first, we give lower bounds on $K_{r}(x,ax+b)$ at precisions
$sh_{j}<r\leq h_{j}$. Intuitively, the proof proceeds as follows. Since
$r>sh_{j}$, given $(x,ax+b)$ to precision $r$ immediately gives a
$2^{-r+sh_{j}}$ approximation of $(a,b)$. Thus, we only have to search for
candidate lines $(u,v)$ which satisfy $\|(a,b)-(u,v)\|<2^{-r+sh_{j}}$, i.e.,
$t:=-\log\|(a,b)-(u,v)\|>r-sh_{j}$. Then,
because of the lower bound on $t$, the complexity $K_{r-t}(x)$ is maximal. In
other words, we are essentially in the case that the complexity of $x$ is
high. Thus, we are able to use the method described in Section 3.1. We now
formalize this intuition.
###### Lemma 12.
For every $\gamma>0$ and all sufficiently large $j\in\mathbb{N}$,
$K_{r}(x,ax+b)\geq(s+d)r-\gamma r$,
for every $r\in(sh_{j},h_{j}]$.
###### Proof.
Let $\eta\in\mathbb{Q}$ such that
$d-\gamma/4<\eta<d-\gamma^{2}$.
Let $\varepsilon\in\mathbb{Q}$ such that
$\varepsilon<\gamma(d-\eta)/16$.
Note that
$\frac{4\varepsilon}{1-\eta}\leq\frac{\gamma}{4}$
We also note that, since $\eta$ and $\varepsilon$ are constant,
$K(\eta,\varepsilon)=O(1)$.
Let $D=D(r,(a,b),\eta)$ be the oracle of Lemma 8 and let $\delta=1-\eta$.
Let $(u,v)$ be a line such that $t:=-\log\|(a,b)-(u,v)\|\geq r-sh_{j}$, and
$ux+v=ax+b$. Note that $r-t\leq sh_{j}$. Then, by Lemma 7, Lemma 8 and Lemma
11(1),
$\displaystyle K^{D}_{r}(u,v)$ $\displaystyle\geq
K^{D}_{t}(a,b)+K^{D}_{r-t,r}(x\,|\,a,b)-O(\log r)$ [Lemma 7]
$\displaystyle\geq K^{D}_{t}(a,b)+K_{r-t,r}(x\,|\,a,b)-O(\log r)$ [Lemma 8]
$\displaystyle\geq K^{D}_{t}(a,b)+r-t-O(\log r).$ [Lemma 11(1)]
There are two cases by Lemma 8. For the first, assume that
$K^{D}_{t}(a,b)=\eta r$. Then
$\displaystyle K^{D}_{r}(u,v)$ $\displaystyle\geq\eta r+r-t-O(\log r)$
$\displaystyle\geq(\eta-\varepsilon)r+r-t$ [$r$ is large]
$\displaystyle\geq(\eta-\varepsilon)r+(1-\eta)(r-t).$
For the second, assume that $K^{D}_{t}(a,b)=K_{t}(a,b)$. Then
$\displaystyle K^{D}_{r}(u,v)$ $\displaystyle\geq K_{t}(a,b)+r-t-O(\log r)$
$\displaystyle\geq dt-o(t)+r-t-O(\log r)$ [Definition of $\dim$]
$\displaystyle\geq\eta r+(1-\eta)r-t(1-d)-\varepsilon r$ [$r$ is large]
$\displaystyle\geq\eta r-\varepsilon r+(1-\eta)(r-t)$ [$d>\eta$]
$\displaystyle\geq(\eta-\varepsilon)r+(1-\eta)(r-t).$ (1)
Therefore, in either case, we may apply Lemma 6, which yields
$\displaystyle K_{r}(x,ax+b)$ $\displaystyle\geq
K_{r}^{D}(a,b,x)-K_{r-sh_{j},r}(a,b\,|\,x,ax+b)$ [Lemma 6]
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-\frac{4\varepsilon}{1-\eta}r-K(\eta,\varepsilon)-O(\log
r)$ $\displaystyle\geq
K_{r}^{D}(a,b,x)-K_{r-sh_{j},r}(a,b\,|\,x,ax+b)-\frac{\gamma}{4}r-\frac{\gamma}{8}r$
$\displaystyle=K_{r}^{D}(a,b,x)-K_{r-sh_{j},r}(a,b\,|\,x,ax+b)-\frac{3\gamma}{8}r.$
(2)
By Lemma 11(1), our construction of oracle $D$, and the symmetry of
information,
$\displaystyle K_{r}^{D}(a,b,x)$
$\displaystyle=K_{r}^{D}(a,b)+K_{r}^{D}(x\,|\,a,b)-O(\log r)$ [Lemma 2]
$\displaystyle=K_{r}^{D}(a,b)+K_{r}(x\,|\,a,b)-O(\log r)$ [Lemma 8(ii)]
$\displaystyle\geq\eta r+K_{r}(x\,|\,a,b)-O(\log r)$ [Lemma 8(i)]
$\displaystyle\geq\eta r+sh_{j}-O(\log r)$ [Lemma 11(1)]
$\displaystyle\geq\eta r+sh_{j}-\frac{\gamma}{4}r.$ (3)
Finally, by Lemma 10,
$K_{r-sh_{j},r}(a,b\mid x,ax+b)\leq\frac{\gamma}{8}r.$ (4)
Together, inequalities (2), (3) and (4) imply that
$\displaystyle K_{r}(x,ax+b)$ $\displaystyle\geq
K_{r}^{D}(a,b,x)-K_{r-sh_{j},r}(a,b\,|\,x,ax+b)-\frac{3\gamma}{8}r$
$\displaystyle\geq\eta
r+sh_{j}-\frac{\gamma}{4}r-\frac{\gamma}{8}r-\frac{3\gamma}{8}r$
$\displaystyle\geq dr-\frac{\gamma}{4}r+sh_{j}-\frac{3\gamma}{4}r$
$\displaystyle\geq dr+sh_{j}-\gamma r$ $\displaystyle\geq(s+d)r-\gamma r,$
and the proof is complete. ∎
We now give lower bounds on the complexity of our point, $K_{r}(x,ax+b)$, when
$h_{j}<r\leq sh_{j+1}$. Intuitively, the proof proceeds as follows. Using the
previous lemma, we can, given a $2^{-h_{j}}$-approximation of $(x,ax+b)$,
compute a $2^{-h_{j}}$-approximation of $(a,b)$. Thus, we only have to compute
the last $r-h_{j}$ bits of $(a,b)$. Importantly, since $r>h_{j}$, the last
$r-h_{j}$ bits of $x$ are maximal. Hence, we can simply lower the complexity
of the last $r-h_{j}$ bits of $(a,b)$ to roughly $s(r-h_{j})$. Thus, we are
again, essentially, in the case where $\dim(x)\geq\dim(a,b)$ and the
techniques of Section 3.1 work. We now formalize this intuition.
###### Lemma 13.
For every $\gamma>0$ and all sufficiently large $j\in\mathbb{N}$,
$K_{r}(x,ax+b)\geq(s+d)r-\gamma r$,
for every $r\in(h_{j},sh_{j+1}]$.
###### Proof.
Recall that we are assuming that $s<d$. Let $\hat{s}\in\mathbb{Q}\cap(0,s)$ be
a dyadic rational such that
$\gamma/8<s-\hat{s}<\gamma/4$.
Let $\hat{d}\in\mathbb{Q}\cap(0,\dim(a,b))$ be a dyadic rational such that
$\gamma/8<\dim(a,b)-\hat{d}<\gamma/4$.
Define
$\alpha=\frac{s(r-h_{j})+\dim(a,b)h_{j}}{r}$,
and $\eta\in\mathbb{Q}\cap(0,\alpha)$ by
$\eta=\frac{\hat{s}(r-h_{j})+\hat{d}h_{j}}{r}$.
Finally, let $\varepsilon=\gamma^{2}/64$. Note that
$\displaystyle\alpha-\eta$
$\displaystyle=\frac{s(r-h_{j})+dh_{j}-\hat{s}(r-h_{j})-\hat{d}h_{j}}{r}$
$\displaystyle=\frac{(s-\hat{s})(r-h_{j})+(d-\hat{d})h_{j}}{r}$
$\displaystyle\leq\frac{\frac{\gamma}{4}(r-h_{j})+\frac{\gamma}{4}h_{j}}{r}$
$\displaystyle=\frac{\gamma}{4}$ (5)
Similarly,
$\displaystyle\alpha-\eta$
$\displaystyle=\frac{s(r-h_{j})+\dim(a,b)h_{j}-\hat{s}(r-h_{j})-\hat{d}h_{j}}{r}$
$\displaystyle=\frac{(s-\hat{s})(r-h_{j})+(\dim(a,b)-\hat{d})h_{j}}{r}$
$\displaystyle>\frac{\frac{\gamma}{8}(r-h_{j})+\frac{\gamma}{8}h_{j}}{r}$
$\displaystyle=\frac{\gamma}{8}$ (6)
In particular,
$\frac{4\varepsilon}{\alpha-\eta}\leq\gamma/4.$ (7)
We also note that
$K(\varepsilon,\eta)\leq K(\gamma,\hat{s},\hat{d},r,h_{j})\leq O(\log r),$ (8)
since $j$ was chosen to be sufficiently large and $\gamma$ is constant.
Finally, let $D=D(r,(a,b),\eta)$ be the oracle of Lemma 8. Note that we chose
$D$ so that, roughly, $D$ lowers the complexity of the last $r-h_{j}$ bits of
$(a,b)$ to $s(r-h_{j})$.
Let $(u,v)$ be a line such that $t:=-\log\|(a,b)-(u,v)\|\geq h_{j}$, and
$ux+v=ax+b$. Then, by Lemmas 7, 8 and 11,
$\displaystyle K^{D}_{r}(u,v)$ $\displaystyle\geq
K^{D}_{t}(a,b)+K^{D}_{r-t,r}(x\,|\,a,b)-O(\log r)$ [Lemma 7]
$\displaystyle\geq K^{D}_{t}(a,b)+K_{r-t,r}(x\,|\,a,b)-O(\log r)$ [Lemma 8]
$\displaystyle\geq K^{D}_{t}(a,b)+s(r-t)-O(\log r).$ [Lemma 11(1)]
There are two cases. In the first, $K^{D}_{t}(a,b)=\eta r$. Then,
$\displaystyle K^{D}_{r}(u,v)$ $\displaystyle\geq\eta r+s(r-t)-O(\log r)$
$\displaystyle\geq(\eta-\varepsilon)r+s(r-t)$
$\displaystyle\geq(\eta-\varepsilon)r+(\alpha-\eta)(r-t).$
In the other case, $K^{D}_{t}(a,b)=K_{t}(a,b)$. Then,
$\displaystyle K^{D}_{r}(u,v)$ $\displaystyle\geq K_{t}(a,b)+s(r-t)-O(\log r)$
$\displaystyle\geq dt-o(t)+s(r-t)-O(\log r)$ [Definition of $\dim$]
$\displaystyle=dh_{j}+d(t-h_{j})+s(r-t)-o(r)$
$\displaystyle=dh_{j}+d(t-h_{j})+s(r-h_{j})-s(t-h_{j})-o(r)$
$\displaystyle=\alpha r+(d-s)(t-h_{j})-o(r)$ $\displaystyle=\eta
r+(\alpha-\eta)r+(d-s)(t-h_{j})-o(r)$ $\displaystyle\geq\eta
r+(\alpha-\eta)(r-t)-o(r)$
$\displaystyle\geq(\eta-\varepsilon)r+(\alpha-\eta)(r-t).$
Therefore we may apply Lemma 6, which yields
$\displaystyle K^{D}_{r}(a,b,x)$ $\displaystyle\leq
K_{r}(x,ax+b)+K^{D}_{h_{j},r}(a,b,x\,|\,x,ax+b)$ [Lemma 6]
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;+\frac{4\varepsilon}{\alpha-\eta}r+K(\varepsilon,\eta)+O(\log
r)$ $\displaystyle\leq K_{r}(x,ax+b)+K^{D}_{h_{j},r}(a,b,x\,|\,x,ax+b)$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;+\frac{\gamma}{4}r+\frac{\gamma}{8}r$
[Choice of $\eta,\varepsilon$]
$\displaystyle=K_{r}(x,ax+b)+K^{D}_{h_{j},r}(a,b,x\,|\,x,ax+b)+\frac{3\gamma}{8}r.$
(9)
By Lemma 11, and our construction of oracle $D$,
$\displaystyle K_{r}^{D}(a,b,x)$
$\displaystyle=K_{r}^{D}(a,b)+K_{r}^{D}(x\,|\,a,b)-O(\log r)$ [Lemma 2]
$\displaystyle=\eta r+K_{r}(x\,|\,a,b)-O(\log r)$ [Lemma 8]
$\displaystyle\geq\eta r+sh_{j}+r-h_{j}-O(\log r)$ [Lemma 11(2)]
$\displaystyle\geq\alpha r-\frac{\gamma}{4}r+sh_{j}+r-h_{j}-O(\log r)$
$\displaystyle\geq s(r-h_{j})+dh_{j}-\frac{\gamma}{4}r+sh_{j}+r-h_{j}-O(\log
r)$ $\displaystyle\geq(1+s)r-(1-d)h_{j}-\frac{\gamma}{4}r.$ (10)
By Lemmas 12, and 2, and the fact that additional information cannot increase
Kolmogorov complexity
$\displaystyle K_{h_{j},r}(a,b,x\,|\,x,ax+b)$ $\displaystyle\leq
K_{h_{j},h_{j}}(a,b,x\,|\,x,ax+b)$
$\displaystyle=K_{h_{j}}(a,b,x)-K_{h_{j}}(x,ax+b)$ [Lemma 2]
$\displaystyle=K_{h_{j}}(a,b)+K_{h_{j}}(x\mid a,b)$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-K_{h_{j}}(x,ax+b)$ [Lemma 2]
$\displaystyle=K_{h_{j}}(a,b)+sh_{j}-K_{h_{j}}(x,ax+b)$ [Lemma 11]
$\displaystyle\leq K_{h_{j}}(a,b)+sh_{j}-(s+d)h_{j}+\frac{\gamma}{16}h_{j}$
[Lemma 12] $\displaystyle\leq dh_{j}+h_{j}/j-dh_{j}+\frac{\gamma}{16}r$
[Definition of $h_{j}$] $\displaystyle\leq\frac{\gamma}{8}r$ (11)
Combining inequalities (9), (10) and (11) , we see that
$\displaystyle K_{r}(x,ax+b)$ $\displaystyle\geq
K_{r}^{D}(a,b,x)-\frac{\gamma}{8}r-\frac{3\gamma}{8}r$
$\displaystyle\geq(1+s)r-(1-d)h_{j}-\frac{\gamma}{4}r-\frac{\gamma}{2}r$
$\displaystyle\geq(1+s)r-(1-d)h_{j}-\gamma r.$
Note that, since $d\leq 1$, and $h_{j}\leq r$,
$\displaystyle(1+s)r-h_{j}(1-d)-(s+d)r$ $\displaystyle=r(1-d)-h_{j}(1-d)$
$\displaystyle=(r-h_{j})(1-d)$ $\displaystyle\geq 0.$
Thus,
$\displaystyle K_{r}(x,ax+b)$ $\displaystyle\geq(1+s)r-h_{j}(1-d)-\gamma r$
$\displaystyle\geq(s+d)r-\gamma r,$
and the proof is complete for the case $s<\dim(a,b)$.
∎
We are now able to prove our main theorem.
###### Proof of Theorem 9.
Let $(a,b)\in\mathbb{R}^{2}$ be a slope-intercept pair with
$d=\dim(a,b)\leq 1$.
Let $s\in[0,1]$. If $s=0$, take $x=a$; then
$\displaystyle K_{r}(a,a^{2}+b)$
$\displaystyle=K_{r}(a)+K_{r}(a^{2}+b\,|\,a)+O(\log r)$
$\displaystyle=K_{r}(a)+K_{r}(b\,|\,a)+O(\log r)$
$\displaystyle=K_{r}(a,b)+O(\log r),$
and so the conclusion holds.
If $s=1$, then by [11], for any point $x$ which is random relative to $(a,b)$,
$\dim(x,ax+b)=1+d$,
and the claim follows.
If $s\geq d$, then Lutz and Stull [11] showed that for any $x$ such that
$\dim^{a,b}(x)=\dim(x)=s$,
we have $\dim(x,ax+b)=s+d$.
Therefore, we may assume that $s\in(0,1)$ and $s<d$. Let $x$ be the point
constructed in this section. Let $\gamma>0$. Let $j$ be large enough so that
the conclusions of Lemmas 12 and 13 hold for these choices of $(a,b)$, $x$,
$s$ and $\gamma$. Then, by Lemmas 12 and 13,
$\displaystyle\dim(x,ax+b)$
$\displaystyle=\liminf_{r\rightarrow\infty}\frac{K_{r}(x,ax+b)}{r}$
$\displaystyle\geq\liminf_{r\rightarrow\infty}\frac{(s+d)r-\gamma r}{r}$
$\displaystyle=s+d-\gamma.$
Since we chose $\gamma$ arbitrarily, we see that
$\dim(x,ax+b)\geq s+d$.
For the upper bound, let $j\in\mathbb{N}$ be sufficiently large. Then
$\displaystyle K_{h_{j}}(x,ax+b)$ $\displaystyle\leq K_{h_{j}}(x,a,b)$
$\displaystyle=K_{h_{j}}(a,b)+K_{h_{j}}(x\,|\,a,b)$ $\displaystyle\leq
dh_{j}+sh_{j}$ $\displaystyle=(d+s)h_{j}.$
Therefore,
$\dim(x,ax+b)\leq s+d$,
and the proof is complete. ∎
## 5 High-Dimensional Lines
In this section we show that the Dimension Spectrum Conjecture holds for lines
of high dimension, i.e., for lines $L_{a,b}$ such that $\dim(a,b)>1$. That is,
we will prove the following theorem.
###### Theorem 14.
Let $(a,b)\in\mathbb{R}^{2}$ be a slope-intercept pair with $\dim(a,b)>1$.
Then for every $s\in[0,1]$, there is a point $x\in\mathbb{R}$ such that
$\dim(x,ax+b)=1+s$.
### 5.1 Overview of proof
In this case, we again apply the essential insight of the proof for low-
dimensional lines, namely, encoding (a subset of) the information of $a$ into
$x$. However, when $\dim(a,b)>1$, constructing $x$ as before potentially causes
a problem. Specifically, in this case, the previous construction might cause
$\dim(x,ax+b)$ to become too large.
To overcome this, we rely on a non-constructive argument. More specifically,
we begin as in the construction of $x$ in the low-dimensional case. However, at
stage $j$ of our construction, we do not add all $h_{j}-sh_{j}$ bits of $a$ to
$x$. Instead, we consider the $m=h_{j}-sh_{j}$ strings
$\mathbf{x_{0}},\ldots,\mathbf{x_{m}}$, where
$\displaystyle\mathbf{x_{n}}[i]=\begin{cases}0&\text{ if }0\leq i<m-n\\\
\frac{1}{a}[i-(m-n)]&\text{ if }m-n\leq i\leq m\end{cases}$ (*)
and look at the extension of $x$ with the bits of $\mathbf{x_{n}}$.
Using a discrete, approximate, version of the mean value theorem, we are able
to conclude that there is some extension $x^{\prime}=x\mathbf{x_{n}}$ such
that
$\min\limits_{sh_{j}\leq r\leq h_{j}}|K_{r}(x^{\prime},ax^{\prime}+b)-(1+s)r|$
is sufficiently small. We then carry on with the argument of the low-
dimensional lines until $sh_{j+1}$.
### 5.2 Proof for high-dimensional lines
In order to prove Theorem 14, we will, given any slope-intercept pair $(a,b)$
and $s\in(0,1)$, construct a point $x\in[0,1]$ such that $\dim(x,ax+b)=1+s$.
Our construction is best phrased as constructing an infinite binary sequence
$\mathbf{x}$, and then taking $x$ to be the unique real number whose binary
expansion is $\mathbf{x}$. We now recall terminology needed in the
construction. We will use bold variables to denote binary strings and
(infinite) binary sequences. If $\mathbf{x}$ is a (finite) binary string and
$\mathbf{y}$ is a binary string or sequence, we write
$\mathbf{x}\prec\mathbf{y}$ if $\mathbf{x}$ is a prefix of $\mathbf{y}$.
Let $(a,b)$ be a slope intercept pair and let $d=\dim(a,b)$. Define the
sequence of natural numbers $\\{h_{j}\\}_{j\in\mathbb{N}}$ inductively as
follows. Define $h_{0}=2$. For every $j>0$, let
$h_{j}=\min\left\\{h\geq
2^{h_{j-1}}:K_{h}(a,b)\leq\left(d+2^{-j}\right)h\right\\}.$
We define our sequence $\mathbf{x}$ inductively. Let $\mathbf{y}$ be a random,
relative to $(a,b)$, binary sequence. That is, there is some constant $c$ such
that
$K^{a,b}(\mathbf{y}{\upharpoonright}r)\geq r-c,$ (12)
for every $r\in\mathbb{N}$. We begin our inductive definition by setting
$\mathbf{x}[0\ldots 2]=\mathbf{y}[0\ldots 2]$. Suppose we have defined
$\mathbf{x}$ up to $h_{j-1}$. We then set
$\mathbf{x}[r]=\mathbf{y}[r]$, for all $h_{j-1}<r\leq sh_{j}$.
To specify the next $h_{j}-sh_{j}$ bits of $\mathbf{x}$, we use the following
lemma, which we will prove in the next section.
###### Lemma 15.
For every sufficiently large $j$, there is a binary string $\mathbf{z}$ of
length $h_{j}-sh_{j}$ such that
$\min\limits_{sh_{j}<r\leq
h_{j}}\left|K_{r}(x,ax+b)-(1+s)r\right|<\frac{r}{j},$
where $x$ is any real such that $\mathbf{x}\mathbf{z}\prec x$. Moreover,
$\mathbf{z}$ is of the form (*) of Section 5.1.
For now, we assume the truth of this lemma. If the current $j$ is not
sufficiently large, take $\mathbf{z}$ to be the string of all zeros.
Otherwise, if $j$ is sufficiently large, we let $\mathbf{z}$ be such a binary
string. We then set
$\mathbf{x}[r]=\mathbf{z}[r-sh_{j}]$, for all $sh_{j}<r\leq h_{j}$,
completing the inductive step. Finally, we let $x_{a,b,s}$ be the real number
with binary expansion $\mathbf{x}$.
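Since $K$ is uncomputable, the selection of $\mathbf{z}$ above cannot be carried out algorithmically; it exists only by the non-constructive argument of Lemma 15. Purely as an illustration (our own, not part of the paper), the following sketch mimics the selection step using a compressed-length proxy in place of $K$; the proxy, the helper `pick_suffix`, and the target value are all assumptions made for the example:

```python
# Illustration only: Kolmogorov complexity is uncomputable, so we use a
# crude compressed-length proxy (zlib) as a stand-in for K when mimicking
# the selection of a suffix z among candidate strings.
import zlib

def proxy_K(bits):
    """Compressed length of a 0/1 list -- a rough stand-in for K."""
    return len(zlib.compress(bytes(bits)))

def pick_suffix(prefix, candidates, target):
    """Return the candidate suffix whose concatenation with the prefix has
    proxy complexity closest to the target (a stand-in for (1+s)r)."""
    return min(candidates, key=lambda z: abs(proxy_K(prefix + z) - target))
```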
###### Proposition 16.
Let $x=x_{a,b,s}$ be the real we just constructed. Then for every $j$,
1. 1.
$K^{a,b}_{sh_{j}}(x)\geq sh_{j}-O(\log h_{j})$, and
2. 2.
$K_{r}(x\mid a,b)\geq sh_{j}+r-h_{j}$, for every $h_{j}\leq r<sh_{j+1}$.
###### Proof.
To see (1), by our construction, $\mathbf{x}[h_{j-1}\ldots
sh_{j}]=\mathbf{y}[h_{j-1}\ldots sh_{j}]$. Thus, by Corollary 4,
$\displaystyle K^{a,b}_{sh_{j}}(x)$ $\displaystyle=K^{a,b}(\mathbf{x}{\upharpoonright}sh_{j})-O(\log
h_{j})$ $\displaystyle\geq K^{a,b}(\mathbf{y}{\upharpoonright}sh_{j})-h_{j-1}-O(\log h_{j})$
$\displaystyle\geq sh_{j}-h_{j-1}-O(\log h_{j})$ $\displaystyle\geq
sh_{j}-O(\log h_{j}).$
For item (2), by our construction $\mathbf{x}[h_{j}\ldots
r]=\mathbf{y}[h_{j}\ldots r]$. Therefore, by Corollary 4 and our construction
of $x$,
$\displaystyle K_{r}(x\mid a,b)$ $\displaystyle\geq K^{a,b}_{r}(x)$
$\displaystyle=K^{a,b}(\mathbf{x}{\upharpoonright}r)-O(\log h_{j})$ $\displaystyle\geq
K^{a,b}(\mathbf{y}[0\ldots sh_{j}]\mathbf{z}\mathbf{y}[h_{j}\ldots
r])-h_{j-1}-O(\log h_{j})$ $\displaystyle\geq K^{a,b}(\mathbf{y}[0\ldots
sh_{j}]\mathbf{y}[h_{j}\ldots r])$
$\displaystyle\;\;\;\;\;\;\;\;\;-h_{j-1}-O(\log h_{j})$ [$\mathbf{z}$
computable from $a$] $\displaystyle\geq sh_{j}+r-h_{j}-O(\log h_{j})$
[Definition of $\mathbf{y}$]
∎
We now show, again assuming Lemma 15, that $\dim(x,ax+b)=1+s$, where
$x=x_{a,b,s}$ is the point we have just constructed.
We begin by proving an upper bound on $\dim(x,ax+b)$. Note that this
essentially follows from our choice of $\mathbf{z}$.
###### Proposition 17.
Let $(a,b)$ be a slope intercept pair, $s\in(0,1)$ and $\gamma\in\mathbb{Q}$
be positive. Let $x=x_{a,b,s}$ be the point we have just constructed. Then
$\dim(x,ax+b)\leq(1+s)+\gamma$.
###### Proof.
Let $j$ be sufficiently large. By our construction of $x$,
$\min\limits_{sh_{j}<r\leq
h_{j}}\left|K_{r}(x,ax+b)-(1+s)r\right|<\frac{\gamma r}{4}$ (13)
Therefore,
$\displaystyle\dim(x,ax+b)$
$\displaystyle=\liminf\limits_{r}\frac{K_{r}(x,ax+b)}{r}$
$\displaystyle\leq\liminf\limits_{j}\min\limits_{sh_{j}<r\leq
h_{j}}\frac{K_{r}(x,ax+b)}{r}$
$\displaystyle\leq\liminf\limits_{j}\min\limits_{sh_{j}<r\leq
h_{j}}\frac{(1+s)r+\gamma r/4}{r}$
$\displaystyle=\liminf\limits_{j}\min\limits_{sh_{j}<r\leq h_{j}}1+s+\gamma/4$
$\displaystyle=1+s+\frac{\gamma}{4}.$
∎
We break the proof of the lower bound on $\dim(x,ax+b)$ into two parts. In the
first, we give lower bounds on $K_{r}(x,ax+b)$ on the interval
$r\in(sh_{j},h_{j}]$. Note that this essentially follows from inequality (13).
###### Proposition 18.
Let $(a,b)$ be a slope intercept pair, $s\in(0,1)$, $\gamma\in\mathbb{Q}$ be
positive and $j$ be sufficiently large. Let $x=x_{a,b,s}$ be the point we have
just constructed. Then
$K_{r}(x,ax+b)\geq(1+s-\gamma)r$
for all $sh_{j}<r\leq h_{j}$
###### Proof.
Let $j$ be sufficiently large and $sh_{j}<r\leq h_{j}$. Then, by (13),
$K_{r}(x,ax+b)\geq(1+s)r-\gamma r$.
∎
We now give lower bounds on $K_{r}(x,ax+b)$ on the interval
$r\in(h_{j-1},sh_{j}]$. The proof of this lemma is very similar to the
analogous lemma for low-dimensional lines (Lemma 13). Intuitively, the proof
is as follows. Using the previous lemma, we can, given a
$2^{-h_{j-1}}$-approximation of $(x,ax+b)$, compute a
$2^{-h_{j-1}}$-approximation of $(a,b)$ with a small number of extra bits.
Having done so, we have to compute the last $r-h_{j-1}$ bits of $(a,b)$.
Importantly, since $r>h_{j-1}$, the last $r-h_{j-1}$ bits of $x$ are maximal.
Thus, we can simply lower the complexity of the last $r-h_{j-1}$ bits of
$(a,b)$ so that the complexity of these bits is roughly $s(r-h_{j-1})$. We are
then again, morally, in the case where $\dim(x)\geq\dim(a,b)$ and the
techniques of Section 3.1 work. We now formalize this intuition.
###### Lemma 19.
Let $(a,b)$ be a slope intercept pair, $s\in(0,1)$, $\gamma\in\mathbb{Q}$ be
positive and $j$ be sufficiently large. Let $x=x_{a,b,s}$ be the point we have
just constructed. Then
$K_{r}(x,ax+b)\geq(1+s-\gamma)r$
for all $h_{j-1}<r\leq sh_{j}$
###### Proof.
Intuitively, we will use the approximation of $(x,ax+b)$ at precision
$h_{j-1}$ to compute $(a,b)$ at precision $h_{j-1}$. Then we will only search
for candidate lines within $2^{-h_{j-1}}$ of $(a,b)$. Formally, the argument
proceeds as follows.
We first show that we can compute $(a,b)$ to within $2^{-h_{j-1}}$ with an
approximation of $(x,ax+b)$, with few additional bits of information. By Lemma
2 and inequality (13)
$\displaystyle K_{h_{j-1},r}(a,b,x\,|\,x,ax+b)$ $\displaystyle\leq
K_{h_{j-1},h_{j-1}}(a,b,x\,|\,x,ax+b)+O(\log h_{j})$
$\displaystyle=K_{h_{j-1}}(a,b,x)-K_{h_{j-1}}(x,ax+b)$ [Lemma 2]
$\displaystyle\leq K_{h_{j-1}}(a,b,x)-(1+s)h_{j-1}+\frac{\gamma}{4}h_{j-1}$
[(13)] $\displaystyle=K_{h_{j-1}}(a,b)+K_{h_{j-1}}(x\mid a,b)$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-(1+s)h_{j-1}+\frac{\gamma}{4}h_{j-1}$
[Lemma 2] $\displaystyle\leq dh_{j-1}+h_{j}2^{-j}+K_{h_{j-1}}(x\mid a,b)$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-(1+s)h_{j-1}+\frac{\gamma
h_{j-1}}{4}$ [Definition $h_{j}$] $\displaystyle\leq
dh_{j-1}+h_{j}2^{-j}+sh_{j-1}$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-(1+s)h_{j-1}+\frac{\gamma
h_{j-1}}{4}$ [Proposition 16] $\displaystyle\leq dh_{j}+sh_{j-1}$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;-(1+s)h_{j-1}+\frac{\gamma
h_{j-1}}{2}$ [$j$ large] $\displaystyle\leq dh_{j}-h_{j}+\frac{\gamma
h_{j-1}}{2}.$ (14)
Thus, we can, given a $2^{-r}$ approximation of $(x,ax+b)$, compute a
$2^{-h_{j-1}}$-approximation of $(a,b)$ with
$(d-1)h_{j}+\frac{\gamma h_{j-1}}{2}$
additional bits of information. Knowing $(a,b)$ to precision $h_{j-1}$ allows
us to search for candidate lines within $2^{-h_{j-1}}$ of $(a,b)$, i.e., using
Lemma 6 with $m=h_{j-1}$.
Let $\hat{s}\in\mathbb{Q}\cap(0,s)$ be a dyadic rational such that
$\gamma/8<s-\hat{s}<\gamma/4$.
Let $\hat{d}\in\mathbb{Q}\cap(0,\dim(a,b))$ be a dyadic rational such that
$\gamma/8<\dim(a,b)-\hat{d}<\gamma/4$.
Define
$\alpha=\frac{s(r-h_{j-1})+dh_{j-1}}{r}$.
Define
$\eta=\frac{\hat{s}(r-h_{j-1})+\hat{d}h_{j-1}}{r}$.
Finally, let $\varepsilon=\gamma^{2}/64$. Note that
$\displaystyle\alpha-\eta$
$\displaystyle=\frac{s(r-h_{j-1})+dh_{j-1}-\hat{s}(r-h_{j-1})-\hat{d}h_{j-1}}{r}$
$\displaystyle=\frac{(s-\hat{s})(r-h_{j-1})+(d-\hat{d})h_{j-1}}{r}$
$\displaystyle\leq\frac{\frac{\gamma}{4}(r-h_{j-1})+\frac{\gamma}{4}h_{j-1}}{r}$
$\displaystyle=\frac{\gamma}{4}$ (15)
Similarly,
$\displaystyle\alpha-\eta$
$\displaystyle=\frac{s(r-h_{j-1})+dh_{j-1}-\hat{s}(r-h_{j-1})-\hat{d}h_{j-1}}{r}$
$\displaystyle=\frac{(s-\hat{s})(r-h_{j-1})+(d-\hat{d})h_{j-1}}{r}$
$\displaystyle>\frac{\frac{\gamma}{8}(r-h_{j-1})+\frac{\gamma}{8}h_{j-1}}{r}$
$\displaystyle=\frac{\gamma}{8}$ (16)
In particular,
$\frac{4\varepsilon}{\alpha-\eta}\leq\gamma/4.$ (17)
We also note that
$K(\varepsilon,\eta)\leq K(\gamma,\hat{s},\hat{d},r,h_{j-1})\leq O(\log r),$
(18)
since $j$ was chosen to be sufficiently large and $\gamma$ is constant.
Let $D=D(r,(a,b),\eta)$ be the oracle of Lemma 8. We now show that the
conditions of Lemma 6 are satisfied for these choices $a,b,\eta,\varepsilon,r$
and $\delta=\alpha-\eta$, $m=h_{j-1}$ and $A=D$.
Let $(u,v)$ be a line such that $t:=-\log\|(a,b)-(u,v)\|\geq h_{j-1}$, and
$ux+v=ax+b$. Then, by Lemmas 7, 8, and Proposition 16,
$\displaystyle K^{D}_{r}(u,v)$ $\displaystyle\geq
K^{D}_{t}(a,b)+K^{D}_{r-t,r}(x\,|\,a,b)-O(\log r)$ [Lemma 7]
$\displaystyle\geq K^{D}_{t}(a,b)+K_{r-t,r}(x\,|\,a,b)-O(\log r)$ [Lemma 8]
$\displaystyle\geq K^{D}_{t}(a,b)+s(r-t)-O(\log r).$ [Proposition 16]
By Lemma 8, there are two cases. In the first, $K^{D}_{t}(a,b)=\eta r$, and so
$\displaystyle K^{D}_{r}(u,v)$ $\displaystyle\geq K^{D}_{t}(a,b)+s(r-t)-O(\log
r)$ $\displaystyle=\eta r+s(r-t)-O(\log r)$
$\displaystyle\geq(\eta-\varepsilon)r+s(r-t)$ [$j$ is large]
$\displaystyle\geq(\eta-\varepsilon)r+\delta(r-t)$ [$\gamma$ is small]
In the second case, $K^{D}_{t}(a,b)=K_{t}(a,b)$, and so
$\displaystyle K^{D}_{r}(u,v)$ $\displaystyle\geq K^{D}_{t}(a,b)+s(r-t)-O(\log
r)$ $\displaystyle\geq dt-o(t)+s(r-t)-O(\log r)$ [Definition of $\dim$]
$\displaystyle=dh_{j-1}+d(t-h_{j-1})+s(r-t)-o(r)$
$\displaystyle=dh_{j-1}+d(t-h_{j-1})+s(r-h_{j-1})-s(t-h_{j-1})-o(r)$
$\displaystyle=\alpha r+d(t-h_{j-1})-s(t-h_{j-1})-o(r)$ [Definition of
$\alpha$] $\displaystyle=\eta r+(\alpha-\eta)r+(d-s)(t-h_{j-1})-o(r)$
$\displaystyle\geq\eta r+(\alpha-\eta)r-o(r)$ [$d>1$, $t>h_{j-1}$]
$\displaystyle\geq\eta r+(\alpha-\eta)(r-t)-o(r)$ [$\alpha>\eta$]
$\displaystyle\geq(\eta-\varepsilon)r+\delta(r-t)$ [$j$ is large]
Therefore, in either case, we may apply Lemma 6, relative to $D$ which yields
$\displaystyle K^{D}_{r}(a,b,x)$ $\displaystyle\leq
K_{r}(x,ax+b)+K_{h_{j},r}(a,b,x\,|\,x,ax+b)$
$\displaystyle\;\;\;\;\;\;\;+\frac{4\varepsilon}{\alpha-\eta}r+K(\varepsilon,\eta)+O(\log
r)$ [Lemma 6] $\displaystyle\leq K_{r}(x,ax+b)+dh_{j}-h_{j}+\frac{\gamma
h_{j-1}}{2}$
$\displaystyle\;\;\;\;\;\;\;+\frac{4\varepsilon}{\alpha-\eta}r+K(\varepsilon,\eta)+O(\log
r)$ [(14)] $\displaystyle\leq K_{r}(x,ax+b)+dh_{j}-h_{j}+\frac{\gamma
h_{j-1}}{2}$
$\displaystyle\;\;\;\;\;\;\;+\frac{4\varepsilon}{\alpha-\eta}r+O(\log r)$
[(18)] $\displaystyle\leq K_{r}(x,ax+b)+dh_{j}-h_{j}+\frac{\gamma h_{j-1}}{2}$
$\displaystyle\;\;\;\;\;\;\;+\frac{\gamma r}{4}+O(\log r)$ [(17)]
$\displaystyle\leq K_{r}(x,ax+b)+dh_{j}-h_{j}+\frac{3\gamma r}{4}+O(\log r)$
(20)
By Lemma 11, and our construction of oracle $D$,
$\displaystyle K_{r}^{D}(a,b,x)$
$\displaystyle=K_{r}^{D}(a,b)+K_{r}^{D}(x\,|\,a,b)-O(\log r)$ [Lemma 2]
$\displaystyle=\eta r+K_{r}(x\,|\,a,b)-O(\log r)$ [Lemma 8]
$\displaystyle\geq\eta r+sh_{j}+r-h_{j}-O(\log r)$ [Lemma 11(2)]
$\displaystyle\geq\alpha r-\frac{\gamma}{4}r+sh_{j}+r-h_{j}-O(\log r)$
$\displaystyle\geq s(r-h_{j})+dh_{j}-\frac{\gamma}{4}r+sh_{j}+r-h_{j}-O(\log
r)$ $\displaystyle\geq(1+s)r-(1-d)h_{j}-\frac{\gamma}{4}r.$ (21)
Rearranging (20) and combining this with (21), we see that
$\displaystyle K_{r}(x,ax+b)$ $\displaystyle\geq
K^{D}_{r}(a,b,x)-dh_{j}+h_{j}-\frac{3\gamma r}{4}-O(\log r)$ [(20)]
$\displaystyle\geq(1+s)r-(1-d)h_{j}-\frac{\gamma}{4}r$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-dh_{j}+h_{j}-\frac{3\gamma
r}{4}-O(\log r)$ [(21)] $\displaystyle=(1+s)r-\gamma r-O(\log r)$
∎
We are now able to prove that the Dimension Spectrum Conjecture holds for high
dimensional lines.
###### Proof of Theorem 14.
Let $(a,b)\in\mathbb{R}^{2}$ be a slope-intercept pair with
$d=\dim(a,b)>1$.
Let $s\in[0,1]$. In the case where $s=0$, Turetsky showed (Theorem 1) that
$1\in\operatorname{sp}(L_{a,b})$, i.e., there is a point $x$ such that
$\dim(x,ax+b)=1$. In the case where $s=1$, Lutz and Stull [11] showed that any
point $x$ which is random relative to $(a,b)$ satisfies
$\dim(x,ax+b)=2$.
Therefore, we may assume that $s\in(0,1)$. Let $x=x_{a,b,s}$ be the point
constructed in this section. By Propositions 17 and 18 and Lemma 19, for every
$\gamma$,
$|\dim(x,ax+b)-(1+s)|<\gamma$.
Thus, by the definition of effective dimension,
$\dim(x,ax+b)=1+s$,
and the proof is complete. ∎
### 5.3 Proof of Lemma 15
To complete the proof of the main theorem of this section, we now prove Lemma
15. Recall that this states that, for every $j$, after setting
$\mathbf{x}[h_{j-1}\ldots sh_{j}]=\mathbf{y}[h_{j-1}\ldots sh_{j}]$, the
following holds.
Lemma 15. For every sufficiently large $j$ there is a binary string
$\mathbf{z}$ of length $h_{j}-sh_{j}$ such that
$\min\limits_{sh_{j}<r\leq
h_{j}}\left|K_{r}(x,ax+b)-(1+s)r\right|<\frac{r}{j},$
where $x$ is any real such that $\mathbf{x}\mathbf{z}\prec x$. Moreover,
$\mathbf{z}$ is of the form (*) of Section 5.1.
Let $m=h_{j}-sh_{j}$. For each $0\leq n\leq m$, define the binary string
$\mathbf{x_{n}}$ of length $m$ by
$\displaystyle\mathbf{x_{n}}[i]=\begin{cases}0&\text{ if }0\leq i<m-n\\\
\frac{1}{a}[i-(m-n)]&\text{ if }m-n\leq i\leq m\end{cases}$
Thus, for example $\mathbf{x_{0}}$ is the binary string of $m$ zeros, while
$\mathbf{x_{m}}$ is the binary string containing the $m$-bit prefix of
$\frac{1}{a}$.
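A minimal sketch (our own, with the hypothetical helper `candidate_suffixes`; not part of the paper) of this family of strings:

```python
# Sketch (not from the paper) of the candidate suffixes (*): the string
# x_n consists of m - n zeros followed by the first n bits of 1/a.
def candidate_suffixes(inv_a_bits, m):
    """inv_a_bits: at least the first m bits of the binary expansion of 1/a."""
    return [[0] * (m - n) + inv_a_bits[:n] for n in range(m + 1)]
```

As in the text, the first candidate is the all-zeros string and the last is the $m$-bit prefix of $\frac{1}{a}$.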
Let $x$ be the real number such that $\mathbf{x}\mathbf{x_{0}}\prec x$, and
whose binary expansion contains only zeros after $sh_{j}$. For each $1\leq
n\leq m$, let $x_{n}$ be the real number defined by
$x_{n}=x+2^{-h_{j}+n}/a$.
Therefore, for every $n$,
$(x_{n},ax_{n}+b)=(x_{n},ax+b+2^{-h_{j}+n})$.
Since the binary expansion of $x$ satisfies $x[r]=0$ for all $r\geq sh_{j}$,
we have, for every $n$,
$\mathbf{x}\mathbf{x_{n}}\prec x_{n}$ (22)
In other words, the binary expansion of $x_{n}$ up to index $h_{j}$ is just
the concatenation of $\mathbf{x}$ and $\mathbf{x_{n}}$.
We now collect a few facts about our points $x_{n}$.
###### Lemma 20.
For every $n,r$ such that $0\leq n\leq m$ and $sh_{j}\leq r\leq h_{j}$ the
following hold.
1. 1.
$K_{n,h_{j}}(a\,|\,x_{n})\leq O(\log h_{j})$.
2. 2.
For every $n$ and $n^{\prime}>n$,
$|K_{r}(x_{n^{\prime}},ax_{n^{\prime}}+b)-K_{r}(x_{n},ax_{n}+b)|<n^{\prime}-n+\log(r)$.
3. 3.
$K_{r-sh_{j},r}(a,b\mid x_{m},ax_{m}+b)\leq O(\log r)$.
Note that the constants implied by the big oh notation depend only on $a$.
###### Proof.
From the definition of $\mathbf{x_{n}}$,
$\mathbf{x_{n}}[(m-n)\ldots m]=\frac{1}{a}[0\ldots n]$.
Since the map $a\mapsto\frac{1}{a}$ is bi-Lipschitz on an interval, there is some
constant $c$ depending only on $a$, such that we can compute a
$2^{-n+c}$-approximation of $a$ given the first $n$ bits of $1/a$. Thus, by
Corollary 5,
$\displaystyle K_{n,h_{j}}(a\mid x_{n})$
$\displaystyle=K(a{\upharpoonright}n\mid x_{n}{\upharpoonright}h_{j})+O(\log
h_{j})$ [Corollary 5]
$\displaystyle=K(a{\upharpoonright}n\mid\mathbf{x}\mathbf{x_{n}})+O(\log
h_{j})$ [(22)] $\displaystyle\leq
K(a{\upharpoonright}n\mid\mathbf{x_{n}})+O(\log h_{j})$ $\displaystyle\leq
O(\log h_{j}).$
For item (2), let $n^{\prime}>n$. We first assume that $r<h_{j}-n^{\prime}$.
Then
$\displaystyle|x_{n^{\prime}}-x_{n}|$
$\displaystyle=x+2^{-h_{j}+n^{\prime}}/a-(x+2^{-h_{j}+n}/a)$
$\displaystyle=2^{-h_{j}}(2^{n^{\prime}}-2^{n})/a$
$\displaystyle=2^{-h_{j}+n^{\prime}}(1-2^{n-n^{\prime}})/a$ $\displaystyle\leq
2^{-h_{j}+n^{\prime}}/2a$ $\displaystyle\leq O(2^{-h_{j}+n^{\prime}}).$
Thus,
$\displaystyle\|(x_{n^{\prime}},ax_{n^{\prime}}+b)-(x_{n},ax_{n}+b)\|$
$\displaystyle=\|(x_{n^{\prime}},ax+b+2^{-h_{j}+n^{\prime}})$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;-(x_{n},ax+b+2^{-h_{j}+n})\|$
$\displaystyle\leq O(2^{-h_{j}+n^{\prime}}).$
Therefore, under our assumption that $r<h_{j}-n^{\prime}$,
$K_{r}(x_{n^{\prime}},ax_{n^{\prime}}+b)=K_{r}(x_{n},ax_{n}+b)-O(1)$,
and the claim follows. Now assume that $r\geq h_{j}-n^{\prime}$. By Corollary
4,
$\displaystyle K_{r}(x_{n^{\prime}},ax_{n^{\prime}}+b)$
$\displaystyle=K_{r}(x_{n^{\prime}},ax+b+2^{-h_{j}+n^{\prime}})$
$\displaystyle=K_{r}(\mathbf{x}\mathbf{x_{n^{\prime}}},ax+b+2^{-h_{j}+n^{\prime}}).$
Similarly,
$\displaystyle K_{r}(x_{n},ax_{n}+b)$
$\displaystyle=K_{r}(x_{n},ax+b+2^{-h_{j}+n})$
$\displaystyle=K_{r}(\mathbf{x}\mathbf{x_{n}},ax+b+2^{-h_{j}+n}).$
By our definition of $\mathbf{x_{n}}$ and $\mathbf{x_{n^{\prime}}}$,
$\mathbf{x_{n^{\prime}}}$ contains all the bits of $\frac{1}{a}$ that
$\mathbf{x_{n}}$ has. Thus,
$K(\mathbf{x_{n}}{\upharpoonright}r\mid\mathbf{x_{n^{\prime}}}{\upharpoonright}r)=O(\log
n^{\prime})$.
Given the first $r$ bits of $\mathbf{x_{n}}$, we can compute the first $r$
bits of $\mathbf{x_{n^{\prime}}}$ by simply giving the $n^{\prime}-n$ bits of
$\frac{1}{a}$ that $\mathbf{x_{n^{\prime}}}$ contains but $\mathbf{x_{n}}$
does not. Thus,
$K(\mathbf{x_{n^{\prime}}}{\upharpoonright}r\mid\mathbf{x_{n}}{\upharpoonright}r)\leq
n^{\prime}-n+O(\log n^{\prime})$.
Similarly, since $ax_{n}+b$ and $ax_{n^{\prime}}+b$ are translates of each
other, to compute one given the other, we just need to give the logarithm of
their distance. Therefore,
$\displaystyle K_{r}(ax+b+2^{-h_{j}+n^{\prime}}\,|\,ax+b+2^{-h_{j}+n})$
$\displaystyle\leq O(\log h_{j}+\log n^{\prime})$ $\displaystyle
K_{r}(ax+b+2^{-h_{j}+n}\,|\,ax+b+2^{-h_{j}+n^{\prime}})$ $\displaystyle\leq
O(\log h_{j}+\log n^{\prime})$
Taken together, these four inequalities show that
$\displaystyle K_{r}(x_{n},ax_{n}+b\mid x_{n^{\prime}},ax_{n^{\prime}}+b)$
$\displaystyle=K_{r}(x_{n}\mid x_{n^{\prime}},ax_{n^{\prime}}+b)$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;+K_{r}(ax_{n}+b\mid
x_{n^{\prime}},ax_{n^{\prime}}+b)$ $\displaystyle\leq O(\log
n^{\prime})+O(\log h_{j}+\log n^{\prime}).$
Similarly,
$\displaystyle K_{r}(x_{n^{\prime}},ax_{n^{\prime}}+b\mid x_{n},ax_{n}+b)$
$\displaystyle=K_{r}(x_{n^{\prime}}\mid x_{n},ax_{n}+b)$
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;+K_{r}(ax_{n^{\prime}}+b\mid
x_{n},ax_{n}+b)$ $\displaystyle\leq n^{\prime}-n+O(\log n^{\prime})+O(\log
h_{j}+\log n^{\prime}),$
and the proof of the second item is complete.
For item (3), note that the first $r$ bits of $x_{m}$ satisfy
$x_{m}{\upharpoonright}r=\mathbf{x}\mathbf{x_{m}}[0\ldots(r-sh_{j})]$.
By definition,
$\mathbf{x_{m}}[0\ldots(r-sh_{j})]=\frac{1}{a}[0\ldots(r-sh_{j})]$. Thus,
given a $2^{-r}$ approximation of $x_{m}$, we can compute a
$2^{-r+sh_{j}+c}$-approximation of $a$. Finally, given a
$2^{-r+sh_{j}+c}$-approximation of $x_{m}$, $a$ and $ax_{m}+b$, we can compute
a $2^{-r+sh_{j}+c}$-approximation of $b$. ∎
###### Proof of Lemma 15.
The proof of this lemma is essentially a discrete, approximate version of
the mean value theorem. In particular, we will show that $x_{0}$ satisfies
$K_{r}(x_{0},ax_{0}+b)\leq(1+s)r$,
for every $r\in[sh_{j},h_{j}]$, and $x_{m}$ satisfies
$K_{r}(x_{m},ax_{m}+b)\geq(1+s)r$,
for every $r\in[sh_{j},h_{j}]$. By Lemma 20, the map $n\mapsto
K_{r}(x_{n},ax_{n}+b)$ is “continuous”, and therefore there must be a point
$x_{n}$ which satisfies
$\min\limits_{sh_{j}<r\leq
h_{j}}\left|K_{r}(x,ax+b)-(1+s)r\right|<\frac{r}{j}$.
We now formalize this intuition. For each $n$, define
$M_{n}=\min\\{\frac{K_{r}(x_{n},ax_{n}+b)}{r}\,|\,sh_{j}\leq r\leq h_{j}\\}$.
To prove our theorem, it suffices to show that there is an $n$ such that
$1+s-\frac{1}{j}\leq M_{n}\leq 1+s+\frac{1}{j}.$
To begin, note, by our construction of $h_{j}$, $j<\log h_{j}$. We also note
that $K_{h_{j}}(x_{0})\leq sh_{j}$, since the bits of $x_{0}$ after $sh_{j}$
are all zero. Therefore,
$\displaystyle K_{h_{j}}(x_{0},ax_{0}+b)$
$\displaystyle=K_{h_{j}}(x_{0})+K_{h_{j}}(ax_{0}+b\mid x_{0})+O(\log h_{j})$
[Lemma 2] $\displaystyle\leq sh_{j}+K_{h_{j}}(ax_{0}+b\mid x_{0})+O(\log
h_{j})$ $\displaystyle\leq sh_{j}+K_{h_{j}}(ax_{0}+b)+O(\log h_{j})$
$\displaystyle\leq sh_{j}+h_{j}+O(\log h_{j})$
where the constant of the $O(\log h_{j})$ term is independent of $x_{0}$.
Thus, $M_{0}<1+s+\frac{1}{j}$, for all $j$ such that $\frac{h_{j}}{j}>O(\log
h_{j})$.
We now show that
$K_{r}(x_{m},ax_{m}+b)\geq(1+s)r-\frac{r}{j}$,
for all sufficiently large $j$, and for every $sh_{j}\leq r\leq h_{j}$.
Let $sh_{j}\leq r\leq h_{j}$, $\eta=1-\frac{1}{2j}$ and let
$D=D(r,(a,b),\eta)$. Let $\epsilon=\frac{1}{32j^{2}}$ and $\delta=1-\eta$. We
now show that the conditions of Lemma 6 are satisfied.
Let $(u,v)$ be a line such that $ux_{m}+v=ax_{m}+b$, and let
$t:=-\log\|(a,b)-(u,v)\|\geq r-sh_{j}$. Note that $r-t\leq sh_{j}$. Then, by
Lemma 7, Lemma 8 and Lemma 11(1),
$\displaystyle K^{D}_{r}(u,v)$ $\displaystyle\geq
K^{D}_{t}(a,b)+K^{D}_{r-t,r}(x_{m}\,|\,a,b)-O(\log r)$ [Lemma 7]
$\displaystyle\geq K^{D}_{t}(a,b)+K_{r-t,r}(x_{m}\,|\,a,b)-O(\log r)$ [Lemma
8] $\displaystyle=\min\\{\eta r,K_{t}(a,b)\\}+K_{r-t,r}(x_{m}\,|\,a,b)-O(\log
r)$ [Lemma 8] $\displaystyle=\min\\{\eta r,K_{t}(a,b)\\}+r-t-O(\log r).$
[construction of $x$]
There are two cases guaranteed by Lemma 8. If $\eta r\leq K_{t}(a,b)$, then
$\displaystyle K^{D}_{r}(u,v)$ $\displaystyle\geq\eta r+r-t-O(\log r)$
$\displaystyle\geq(\eta-\epsilon)r+r-t$ [$j$ is large]
$\displaystyle\geq(\eta-\epsilon)r+\delta(r-t),$ [$\delta<1$]
since $\varepsilon r\geq O(\log r)$. Hence, we may apply Lemma 6 in this case.
Otherwise, $K^{D}_{t}(a,b)=K_{t}(a,b)$. Then
$\displaystyle K^{D}_{r}(u,v)$ $\displaystyle\geq K_{t}(a,b)+r-t-O(\log r)$
$\displaystyle\geq(d-2^{-j})t+r-t-O(\log r)$ [Definition of $h_{j}$]
$\displaystyle=\eta r+(1-\eta)r-t(1+2^{-j}-d)-O(\log r)$
$\displaystyle\geq\eta r+(1-\eta)r-t(1-\eta)-O(\log r)$ $\displaystyle=\eta
r+\delta(r-t)-O(\log r)$ $\displaystyle\geq(\eta-\varepsilon)r+\delta(r-t).$
Therefore we may apply Lemma 6.
$\displaystyle K_{r}(x_{m},ax_{m}+b)$ $\displaystyle\geq
K_{r}^{D}(a,b,x_{m})-K_{r-sh_{j},r}(a,b\,|\,x_{m},ax_{m}+b)$ [Lemma 6]
$\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-\frac{4\varepsilon}{1-\eta}r-K(\varepsilon,\eta)-O(\log
r)$ $\displaystyle\geq
K_{r}^{D}(a,b,x_{m})-\frac{4\varepsilon}{1-\eta}r-K(\varepsilon,\eta)-O(\log r)$ [Lemma 20(3)]
$\displaystyle=K_{r}^{D}(a,b,x_{m})-\frac{r}{4j}-K(j)-O(\log r)$
$\displaystyle\geq K_{r}^{D}(a,b,x_{m})-\frac{r}{2j}.$
By our construction of oracle $D$, and the symmetry of information,
$\displaystyle K_{r}^{D}(a,b,x_{m})$
$\displaystyle=K_{r}^{D}(a,b)+K_{r}^{D}(x_{m}\,|\,a,b)-O(\log r)$ [Lemma 2]
$\displaystyle=K_{r}^{D}(a,b)+K_{r}(x_{m}\,|\,a,b)-O(\log r)$ [Lemma 8(ii)]
$\displaystyle\geq\eta r+K_{r}(x_{m}\,|\,a,b)-O(\log r)$ [Lemma 8(i)]
$\displaystyle\geq\eta r+sh_{j}-O(\log r).$ (23)
Therefore, we have
$\displaystyle K_{r}(x_{m},ax_{m}+b)$ $\displaystyle\geq\eta r+sh_{j}-O(\log
r)-\frac{r}{2j}$ $\displaystyle\geq(1+s)r-\frac{r}{2j}-\frac{r}{2j}$
$\displaystyle>(1+s)r-\frac{r}{j}.$
Thus we have shown that
$K_{r}(x_{m},ax_{m}+b)>(1+s)r-\frac{r}{j}$,
for every $sh_{j}\leq r\leq h_{j}$.
If either
* •
$M_{m}\leq(1+s)+\frac{1}{j}$, or
* •
$M_{0}\geq(1+s)-\frac{1}{j}$
our claim would follow. We will therefore assume that
$M_{0}<1+s-\frac{1}{j}$, and $1+s+\frac{1}{j}<M_{m}$.
Define
$\displaystyle L$ $\displaystyle=\\{n\,|\,M_{n}<1+s-\frac{1}{j}\\}$
$\displaystyle G$ $\displaystyle=\\{n\,|\,M_{n}>1+s+\frac{1}{j}\\}.$
By our assumption, $L$ and $G$ are non-empty. Suppose that $L$ and $G$
partition $\\{0,\ldots,m\\}$. Then there is an $n$ such that $n\in L$ and
$n+1\in G$. However, by Lemma 20,
$|K_{r}(x_{n+1},ax_{n+1}+b)-K_{r}(x_{n},ax_{n}+b)|\leq 1+O(\log r)$,
for every $r$. Let $r$ be a precision testifying to $x_{n}\in L$. Then
$\displaystyle(1+s-\frac{1}{j})r$ $\displaystyle>K_{r}(x_{n},ax_{n}+b)$
$\displaystyle>K_{r}(x_{n+1},ax_{n+1}+b)-1-O(\log r).$
That is,
$\displaystyle\frac{K_{r}(x_{n+1},ax_{n+1}+b)}{r}$
$\displaystyle<1+s-\frac{1}{j}+\frac{1}{r}+\frac{O(\log r)}{r}$
$\displaystyle<1+s+\frac{1}{j},$
which contradicts our assumption that $x_{n+1}\in G$ and the proof is
complete. ∎
## 6 Conclusion and Future Directions
The behavior of the effective dimension of points on a line is not only
interesting from the algorithmic randomness viewpoint, but also because of its
deep connections to geometric measure theory. There are many avenues for
future research in this area.
The results of this paper show that, for any line $L_{a,b}$, the dimension
spectrum $\operatorname{sp}(L_{a,b})$ contains a unit interval. However, this
is not, in general, a tight bound. It would be very interesting to have a more
thorough understanding of the “low end” of the dimension spectrum. Stull [17]
showed that the Hausdorff dimension of points $x$ such that
$\dim(x,ax+b)\leq\alpha+\frac{\dim(a,b)}{2}$
is at most $\alpha$. Further investigation of the low-end of the spectrum is
needed.
It seems plausible that, for certain lines, the dimension spectrum contains an
interval of length greater than one. For example, are there lines in the plane
such that $\operatorname{sp}(L)$ contains an interval of length strictly
greater than $1$?
Another interesting direction is to study the dimension spectrum of particular
classes of lines. One natural class is the lines $L_{a,b}$ whose slope and
intercept are both in the Cantor set. Restricting the lines to the Cantor set,
or, more generally, to self-similar fractals, might give enough structure to
prove tight bounds not possible in the general case.
Additionally, the focus has been on the effective (Hausdorff) dimension of
points. Very little is known about the effective strong dimension of points on
a line. The known techniques do not seem to apply to this question. New ideas
are needed to understand the strong dimension spectrum of planar lines.
Finally, it would be interesting to broaden this direction by considering the
dimension spectra of other geometric objects. For example, can anything be
said about the dimension spectrum of a polynomial?
## References
* [1] Randall Dougherty, Jack Lutz, R Daniel Mauldin, and Jason Teutsch. Translating the Cantor set by a random real. Transactions of the American Mathematical Society, 366(6):3027–3041, 2014.
* [2] Rod Downey and Denis Hirschfeldt. Algorithmic Randomness and Complexity. Springer-Verlag, 2010.
* [3] Xiaoyang Gu, Jack H Lutz, Elvira Mayordomo, and Philippe Moser. Dimension spectra of random subfractals of self-similar fractals. Annals of Pure and Applied Logic, 165(11):1707–1726, 2014.
* [4] Ming Li and Paul M.B. Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Springer, third edition, 2008.
* [5] Jack H. Lutz. Dimension in complexity classes. SIAM J. Comput., 32(5):1236–1259, 2003.
* [6] Jack H. Lutz. The dimensions of individual strings and sequences. Inf. Comput., 187(1):49–79, 2003.
* [7] Jack H. Lutz and Neil Lutz. Algorithmic information, plane Kakeya sets, and conditional dimension. ACM Trans. Comput. Theory, 10(2):Art. 7, 22, 2018.
* [8] Jack H Lutz and Neil Lutz. Who asked us? how the theory of computing answers questions about analysis. In Complexity and Approximation, pages 48–56. Springer, 2020.
* [9] Jack H. Lutz and Elvira Mayordomo. Dimensions of points in self-similar fractals. SIAM J. Comput., 38(3):1080–1112, 2008.
* [10] Neil Lutz. Fractal intersections and products via algorithmic dimension. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017), 2017.
* [11] Neil Lutz and D. M. Stull. Bounding the dimension of points on a line. In Theory and applications of models of computation, volume 10185 of Lecture Notes in Comput. Sci., pages 425–439. Springer, Cham, 2017.
* [12] Neil Lutz and D. M. Stull. Dimension spectra of lines. In Unveiling dynamics and complexity, volume 10307 of Lecture Notes in Comput. Sci., pages 304–314. Springer, Cham, 2017.
* [13] Neil Lutz and D. M. Stull. Projection theorems using effective dimension. In 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018), 2018.
* [14] Elvira Mayordomo. A Kolmogorov complexity characterization of constructive Hausdorff dimension. Inf. Process. Lett., 84(1):1–3, 2002.
* [15] Elvira Mayordomo. Effective fractal dimension in algorithmic information theory. In S. Barry Cooper, Benedikt Löwe, and Andrea Sorbi, editors, New Computational Paradigms: Changing Conceptions of What is Computable, pages 259–285. Springer New York, 2008.
* [16] Andre Nies. Computability and Randomness. Oxford University Press, Inc., New York, NY, USA, 2009.
* [17] D. M. Stull. Results on the dimension spectra of planar lines. In 43rd International Symposium on Mathematical Foundations of Computer Science, volume 117 of LIPIcs. Leibniz Int. Proc. Inform., pages Art. No. 79, 15. Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2018.
* [18] Daniel Turetsky. Connectedness properties of dimension level sets. Theor. Comput. Sci., 412(29):3598–3603, 2011.
# Smoothly varying ridge regularization
Daeju Kim$^{a}$, Shuichi Kawano$^{b}$, Yoshiyuki Ninomiya$^{c,*}$
*Corresponding author. Tel & Fax: +81-50-5533-8527. E-mail address: <EMAIL_ADDRESS> (Y. Ninomiya).
$^{a}$AI Strategy Office, Technology Unit, Softbank Corp.
1-9-1 Higashi-Shimbashi, Minato-ku, Tokyo 105-7317, Japan
$^{b}$Graduate School of Informatics and Engineering, The University of Electro-Communications
1-5-1 Chofugaoka, Chofu-shi, Tokyo 182-8585, Japan
$^{c}$Department of Statistical Inference and Mathematics, The Institute of Statistical Mathematics
10-3 Midori-cho, Tachikawa-shi, Tokyo 190-8562, Japan
Abstract: Basis expansion with regularization methods is appealing for
building flexible or robust nonlinear regression models for data with complex
structures. When the underlying function has inhomogeneous smoothness, it is
well known that conventional regularization methods do not perform well. In
this case, an adaptive procedure such as a free-knot spline or a local
likelihood method is often introduced as an effective alternative, but both
methods require intensive computational loads. In this study, we consider a
new efficient basis expansion by proposing a smoothly varying regularization
method constructed from special penalties, which we call adaptive-type
penalties. In our modeling, adaptive-type penalties play key roles, and they
succeed in giving good estimates for functions with inhomogeneous smoothness.
A crucial issue in the modeling process is the choice of a suitable model
among candidates. To select a suitable model, we derive an approximate
generalized information criterion (GIC). The proposed method is investigated
through Monte Carlo simulations and real data analysis. Numerical results
suggest that our method performs well in various situations.
Keywords: basis expansion, curve and surface fitting, information criterion,
model selection, smoothness, tuning parameter
## 1 Introduction
Recently, nonlinear regression models with basis expansion have received
considerable attention in various statistical and engineering fields. Basis
expansion is widely used as an effective approach for data with complex
structure. The essential idea behind basis expansion is to represent the
underlying regression function as a linear combination of known nonlinear
functions, which are called basis functions. In constructing the statistical
model, various basis functions, such as natural cubic splines (Green and
Silverman 1994), $B$-splines (de Boor 2001), and radial basis functions
(Powell 1981) are used according to the structure of data or the purpose of
analysis. Usually in basis expansion, we avoid under-fitting by placing many
basis functions and prevent over-fitting by using a regularization method to
estimate their coefficients. As the regularization, the ridge method (Hoerl
and Kennard 1970) or the lasso method (Tibshirani 1996) is typically used.
While basis expansion with such a regularization method works well in many
situations, it is often inappropriate when the underlying regression function
has inhomogeneous smoothness. Let us call the region where the function is
smoother the strongly smooth one and the region where the function is less
smooth the weakly smooth one. Basis expansion as described above often leads
to over-fitting in the strongly smooth region and under-fitting in the weakly
smooth region. The local likelihood method (Fan and Gijbels 1996, Loader
1999), which defines a locally weighted log-likelihood at each explanatory
variable and provides a predictor for that explanatory variable by maximizing
it, can handle this problem. Multivariate adaptive regression
splines (MARS, Friedman 1991), adaptive wavelet filtering (Donoho and
Johnstone 1994), adaptive sparse grids (Garcke et al. 2001, Hegland 2002) and
free-knot splines (Denison et al. 1998, DiMatteo et al. 2001, Miyata and Shen
2003) can also handle it using a kind of optimal set of basis functions. To
find the optimal set, however, we must solve an optimization problem with
respect to the positions and shapes of the basis functions. In general, the
objective function does not have tractable properties such as the unimodality.
Therefore, in order to implement the optimization problem completely, it is
necessary to compute the objective function for all the positions and shapes,
which will be almost impossible if the number of basis functions is large.
Needless to say, the local likelihood method also needs a huge amount of
computations to be implemented completely because it requires an optimization
at each point in the domain of the function.
The purpose of this study is to propose a simple new approach for the above-
mentioned problem, one that requires neither a calculation for each
explanatory variable nor a search for optimal basis functions. To achieve our
aim, we propose a new efficient
nonlinear regression modeling with basis expansions. The proposed method is
based on the following idea. To begin with, we prepare a tuning parameter for
each coefficient of the basis functions, while conventional regularization
methods usually have one or a few tuning parameters for the whole set of
model parameters. These many tuning parameters would lead to over-fitting,
and so we penalize them so that tuning parameters for basis functions whose
positions are close take close values. Then we obtain estimates of the
coefficients and appropriate values of the tuning parameters simultaneously
by minimizing the sum of the negative log-likelihood function, the penalty
term for the coefficients, and the penalty term for the tuning parameters.
We still have the problem of how to penalize the tuning parameters. To
determine the weight of the penalization, we put a coefficient on the penalty
term, called a hyper-tuning parameter. Note that a larger value of this
coefficient gives more weight to the penalization. Then the problem comes
down to the selection of
the hyper-tuning parameter, and its appropriate value can be obtained by
minimizing an information criterion. As the information criterion, we consider
the generalized information criterion (GIC, Konishi and Kitagawa 1996) because
the estimator considered in this method is not the maximum likelihood one.
However, it is not possible to obtain an analytical expression of the GIC,
because the estimation function for the proposed method is not expressed
explicitly, unlike the estimation function of the ridge method. We therefore
propose an estimating function with an explicit expression and derive an
approximate GIC, which is justified by asymptotic arguments. Our proposed nonlinear
regression modeling is investigated through some numerical examples and real
data analysis. Numerical results suggest that our proposed method performs
well in various situations.
The rest of this article is organized as follows. In Section 2, we describe a
framework of Gaussian regression modeling based on basis expansion with the
ridge method. Section 3 studies a new nonlinear regression modeling by
proposing a regularization method with adaptive-type penalties and derives the
approximate GIC. Section 4 investigates the performance of our procedure by
Monte Carlo simulations and real data analysis, while some concluding remarks
are described in Section 5.
## 2 Regression modeling via basis expansion
Let $\\{(\bm{x}_{i},y_{i})\mid i=1,\ldots,n\\}$ be $n$ sets of data obtained
in terms of explanatory variables $\bm{x}\ (\in\mathbb{R}^{p})$ and response
variable $y\ (\in\mathbb{R})$. We consider a nonlinear regression model based
on basis expansion given by
$\displaystyle y_{i}=\bm{\beta}^{\rm
T}\bm{\phi}(\bm{x}_{i})+\varepsilon_{i}=\sum_{j=1}^{m}\beta_{j}{\phi}_{j}(\bm{x}_{i})+\varepsilon_{i},\qquad
i=1,\ldots,n.$ (1)
Here,
$\bm{\phi}(\bm{x}_{i})=(\phi_{1}(\bm{x}_{i}),\phi_{2}(\bm{x}_{i}),\ldots,\phi_{m}(\bm{x}_{i}))^{\rm
T}$ is a known basis function vector,
$\bm{\beta}=(\beta_{1},\ldots,\beta_{m})^{\rm T}$ is its unknown coefficient
vector, and $\varepsilon_{i}$ is an error term. Assuming that the error terms
are independently and identically distributed according to ${\rm N}(0,\alpha)$
with unknown parameter $\alpha$, the log-likelihood function is
$\sum_{i=1}^{n}l_{i}(\bm{\theta})$ from (1), where
$\displaystyle
l_{i}(\bm{\theta})=-\frac{1}{2}\log(2\pi\alpha)-\frac{1}{2\alpha}\\{y_{i}-\bm{\beta}^{{\rm
T}}\bm{\phi}(\bm{x}_{i})\\}^{2}.$ (2)
Here, $\bm{\theta}=(\alpha,\bm{\beta}^{\rm T})^{\rm T}$ is an unknown
parameter vector to be estimated.
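For illustration, the model (1) and the log-likelihood (2) can be sketched in Python with hypothetical Gaussian radial basis functions; the centers, width, and simulated data below are our own assumptions, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-dimensional setting: m Gaussian radial basis functions.
n, m = 50, 10
x = np.linspace(0.0, 1.0, n)
centers = np.linspace(0.0, 1.0, m)
width = 0.1

def phi(xi):
    """Basis-function vector phi(x) = (phi_1(x), ..., phi_m(x))^T."""
    return np.exp(-(xi - centers) ** 2 / (2.0 * width ** 2))

Phi = np.vstack([phi(xi) for xi in x])           # n x m design matrix

beta_true = rng.normal(size=m)
alpha_true = 0.05                                # error variance alpha
y = Phi @ beta_true + rng.normal(scale=np.sqrt(alpha_true), size=n)

def log_likelihood(alpha, beta):
    """Sum over i of l_i(theta) in (2)."""
    resid = y - Phi @ beta
    return (-0.5 * n * np.log(2.0 * np.pi * alpha)
            - 0.5 * np.sum(resid ** 2) / alpha)
```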
The maximum likelihood estimator often gives poor prediction accuracy,
especially when the sample size $n$ is smaller than the number of basis
functions $m$ (see, e.g., Konishi and Kitagawa 2008). Regularization is one
of the most popular and preferred approaches to overcome this problem, and it
has been applied successfully to estimation problems when constructing
flexible models within a generalization framework. Hoerl and Kennard (1970),
who proposed the $L_{2}$-norm penalty called the ridge, successfully applied
regularized regression to improve prediction performance through a
bias-variance trade-off. The ridge estimator can be obtained through
minimizing the following regularized loss function:
$\displaystyle-2\sum_{i=1}^{n}l_{i}(\bm{\theta})+n\lambda\bm{\beta}^{\rm
T}\bm{\beta},$ (3)
where $\lambda$ is a non-negative tuning parameter that balances between
goodness of fit and model complexity. The ridge estimator
$\hat{\bm{\theta}}=(\hat{\alpha},\hat{\bm{\beta}}^{\rm T})^{\rm T}$ of
$\bm{\theta}=(\alpha,\bm{\beta}^{\rm T})^{\rm T}$ is expressed by
$\displaystyle\hat{\alpha}=\frac{1}{n}\sum_{i=1}^{n}\\{y_{i}-\hat{\bm{\beta}}^{\rm
T}\bm{\phi}(\bm{x}_{i})\\}^{2}\quad{\rm
and}\quad\hat{\bm{\beta}}=\bigg{\\{}\sum_{i=1}^{n}\bm{\phi}(\bm{x}_{i})\bm{\phi}(\bm{x}_{i})^{\rm
T}+n\lambda\hat{\alpha}\bm{I}_{m}\bigg{\\}}^{-1}\sum_{i=1}^{n}\bm{\phi}(\bm{x}_{i})y_{i},$
where $\bm{I}_{m}$ is the $m$-dimensional identity matrix. Note that
$\hat{\alpha}$ and $\hat{\bm{\beta}}$ depend on each other. Therefore, we
provide an appropriate initial value for $\hat{\bm{\beta}}$ first, then
$\hat{\alpha}$ and $\hat{\bm{\beta}}$ are updated until convergence.
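A minimal Python sketch of this alternating scheme, assuming a precomputed $n\times m$ design matrix; the function name, initial value, and convergence tolerance are our own choices:

```python
import numpy as np

def ridge_fit(Phi, y, lam, n_iter=100, tol=1e-10):
    """Alternating updates of alpha-hat and beta-hat for the ridge loss (3).

    Phi is the n x m matrix whose rows are phi(x_i)^T, and lam is the
    tuning parameter lambda.
    """
    n, m = Phi.shape
    G = Phi.T @ Phi                       # sum_i phi(x_i) phi(x_i)^T
    c = Phi.T @ y                         # sum_i phi(x_i) y_i
    beta = np.linalg.solve(G + n * lam * np.eye(m), c)   # initial value
    for _ in range(n_iter):
        alpha = np.mean((y - Phi @ beta) ** 2)           # alpha-hat update
        beta_new = np.linalg.solve(G + n * lam * alpha * np.eye(m), c)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    alpha = np.mean((y - Phi @ beta) ** 2)
    return alpha, beta
```

As expected from the penalty $n\lambda\hat{\alpha}\bm{I}_{m}$, a larger $\lambda$ shrinks the estimated coefficients more strongly.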
In order to complete the formulation of the ridge method, we need to determine
an appropriate value of the tuning parameter $\lambda$. This determination can
be viewed as a model selection problem, and here we regard the minimizer of
the so-called GIC proposed by Konishi and Kitagawa (1996) as the optimal value of
$\lambda$. The GIC is an AIC-type information criterion generalized for
$M$-estimation methods, and actually this is an asymptotically unbiased
estimator of twice the Kullback-Leibler divergence (Kullback and Leibler 1951)
between the true distribution and the estimated distribution minus some
constant. Let $\sum_{i=1}^{n}\bm{\psi}_{i}({\bm{\theta}})$ and
$\sum_{i=1}^{n}l_{i}({\bm{\theta}})$ be respectively an estimating function
and a log-likelihood function with respect to a parameter vector
$\bm{\theta}$, where $\bm{\psi}_{i}(\cdot)$ and $l_{i}(\cdot)$ are functions
depending on the $i$-th data. Then the GIC is provided by
$\displaystyle-2\sum_{i=1}^{n}l_{i}(\hat{\bm{\theta}})+2{\rm tr}\bigg{\\{}{\rm
E}\bigg{(}-\sum_{i=1}^{n}\frac{\partial\bm{\psi}_{i}}{\partial\bm{\theta}^{\rm
T}}\bigg{|}_{\bm{\theta}=\bm{\theta}^{*}}\bigg{)}^{-1}{\rm
E}\bigg{(}\sum_{i=1}^{n}\bm{\psi}_{i}\frac{\partial
l_{i}}{\partial\bm{\theta}^{\rm
T}}\bigg{|}_{\bm{\theta}=\bm{\theta}^{*}}\bigg{)}\bigg{\\}}\bigg{|}_{\bm{\theta}^{*}=\hat{\bm{\theta}}},$
(4)
where $\hat{\bm{\theta}}$ and $\bm{\theta}^{*}$ are respectively the
$M$-estimator satisfying
$\sum_{i=1}^{n}\bm{\psi}_{i}(\hat{\bm{\theta}})=\bm{0}$ and its limit
satisfying ${\rm E}\\{\sum_{i=1}^{n}\bm{\psi}_{i}(\bm{\theta}^{*})\\}=\bm{0}$.
When these expectations cannot be calculated completely, we simply replace
them by their empirical versions and use the resulting criterion as the GIC.
For example,
$\displaystyle-2\sum_{i=1}^{n}l_{i}(\hat{\bm{\theta}})+2{\rm
tr}\bigg{\\{}\bigg{(}-\sum_{i=1}^{n}\frac{\partial\bm{\psi}_{i}}{\partial\bm{\theta}^{\rm
T}}\bigg{|}_{\bm{\theta}=\bm{\theta}^{*}}\bigg{)}^{-1}\bigg{(}\sum_{i=1}^{n}\bm{\psi}_{i}\frac{\partial
l_{i}}{\partial\bm{\theta}^{\rm
T}}\bigg{|}_{\bm{\theta}=\bm{\theta}^{*}}\bigg{)}\bigg{\\}}$
is a simple candidate to use, but it is better to calculate the expectations
whenever possible in order to stabilize the criterion.
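For illustration, the empirical version can be computed directly from the per-observation quantities; the following generic Python sketch uses our own helper and argument names:

```python
import numpy as np

def empirical_gic(loglik, psi, sum_dpsi, dl):
    """Empirical GIC of (4): -2 * sum_i l_i + 2 * tr{J^{-1} I}, where
    J = -(sum_i d psi_i / d theta^T) and I = sum_i psi_i (d l_i / d theta)^T.

    loglik:   length-n array of l_i at the estimate;
    psi:      (n, d) array whose rows are psi_i;
    sum_dpsi: (d, d) matrix, the sum of d psi_i / d theta^T;
    dl:       (n, d) array whose rows are d l_i / d theta.
    """
    J = -np.asarray(sum_dpsi, dtype=float)
    I = psi.T @ dl                    # sum_i psi_i (dl_i / dtheta)^T
    return -2.0 * np.sum(loglik) + 2.0 * np.trace(np.linalg.solve(J, I))
```

In the maximum likelihood case $\bm{\psi}_{i}=\partial l_{i}/\partial\bm{\theta}$, the trace term reduces to an AIC-type penalty.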
For the above-mentioned ridge method in the Gaussian linear regression with
basis expansion, $l_{i}({\bm{\theta}})$ is defined in (2), and
$\sum_{i=1}^{n}\bm{\psi}_{i}({\bm{\theta}})$ is the derivative of (3), that
is, $\bm{\psi}_{i}({\bm{\theta}})=-2\partial
l_{i}(\bm{\theta})/\partial\bm{\theta}+2\lambda(0,\bm{\beta}^{\rm T})^{\rm
T}$. In this setting, the expectations in (4) can be calculated to some
extent. Actually, letting $\varepsilon_{i}^{*}\equiv y_{i}-\bm{\beta}^{*{\rm
T}}\bm{\phi}(\bm{x}_{i})$, it holds
$\displaystyle{\rm
E}\bigg{(}-\sum_{i=1}^{n}\frac{\partial\bm{\psi}_{i}}{\partial\bm{\theta}^{\rm
T}}\bigg{|}_{\bm{\theta}=\bm{\theta}^{*}}\bigg{)}=\begin{pmatrix}\displaystyle\frac{n}{\alpha^{*2}}&\displaystyle\frac{2}{\alpha^{*2}}\sum_{i=1}^{n}{\rm
E}(\varepsilon_{i}^{*})\bm{\phi}(\bm{x}_{i})^{\rm T}\\\\[6.45831pt]
\displaystyle\frac{2}{\alpha^{*2}}\sum_{i=1}^{n}{\rm
E}(\varepsilon_{i}^{*})\bm{\phi}(\bm{x}_{i})&\displaystyle\frac{2}{\alpha^{*}}\sum_{i=1}^{n}\bm{\phi}(\bm{x}_{i})\bm{\phi}(\bm{x}_{i})^{\rm
T}+2n\lambda\bm{I}_{m}\end{pmatrix}$
and
$\displaystyle{\rm E}\bigg{(}\sum_{i=1}^{n}\bm{\psi}_{i}\frac{\partial
l_{i}}{\partial\bm{\theta}^{\rm
T}}\bigg{|}_{\bm{\theta}=\bm{\theta}^{*}}\bigg{)}=\begin{pmatrix}\displaystyle\frac{1}{2\alpha^{*4}}\sum_{i=1}^{n}{\rm
E}(\varepsilon_{i}^{*4})-\frac{n}{2\alpha^{*2}}&\displaystyle\sum_{i=1}^{n}{\rm
E}\bigg{(}\frac{1}{\alpha^{*3}}\varepsilon_{i}^{*3}-\frac{1}{\alpha^{*2}}\varepsilon_{i}^{*}\bigg{)}\bm{\phi}(\bm{x}_{i})^{\rm
T}\\\\[6.45831pt] \displaystyle\sum_{i=1}^{n}{\rm
E}\bigg{(}\frac{1}{\alpha^{*3}}\varepsilon_{i}^{*3}-\frac{1}{\alpha^{*2}}\varepsilon_{i}^{*}\bigg{)}\bm{\phi}(\bm{x}_{i})&\displaystyle\frac{2}{\alpha^{*2}}\sum_{i=1}^{n}{\rm
E}(\varepsilon_{i}^{*2})\bm{\phi}(\bm{x}_{i})\bm{\phi}(\bm{x}_{i})^{\rm
T}\end{pmatrix},$
and therefore we obtain the GIC based on these expectations. When $\alpha$ is
known, we can see from these expectations that half the penalty term in the
GIC reduces to
$\displaystyle{\rm
tr}\bigg{[}\bigg{\\{}\sum_{i=1}^{n}\bm{\phi}(\bm{x}_{i})\bm{\phi}(\bm{x}_{i})^{\rm
T}+n\lambda\alpha\bm{I}_{m}\bigg{\\}}^{-1}\bigg{\\{}\sum_{i=1}^{n}\bm{\phi}(\bm{x}_{i})\bm{\phi}(\bm{x}_{i})^{\rm
T}\bigg{\\}}\bigg{]},$
which is known as the effective degrees of freedom for the ridge method
(see, e.g., Chapter 5 in Hastie et al. 2009).
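In code, this effective degrees of freedom is a single trace; a sketch assuming an $n\times m$ design matrix `Phi` as in the sketches above (with $\lambda=0$ it returns $m$, and it decreases toward zero as $\lambda$ grows):

```python
import numpy as np

def effective_df(Phi, lam, alpha):
    """Effective degrees of freedom of the ridge fit:
    tr[(sum_i phi phi^T + n*lam*alpha*I)^{-1} (sum_i phi phi^T)]."""
    n, m = Phi.shape
    G = Phi.T @ Phi
    return np.trace(np.linalg.solve(G + n * lam * alpha * np.eye(m), G))
```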
When the underlying regression function has inhomogeneous smoothness,
conventional regularization methods often lead to over-fitting in strongly
smooth regions and/or under-fitting in weakly smooth regions. See the
estimated curves by the ridge method in Figure 1 as an illustrative example.
We observe under-fitting in the left panel, which imposes a strong
regularization, and over-fitting in the right panel, which imposes a weak
regularization. The local
likelihood method is a tool to overcome this problem. For each point
$\bm{x}_{i}\in\mathbb{R}^{p}$ ($i=1,\ldots,n$), it assigns large weights to
observations close to $\bm{x}_{i}$, constructs a local likelihood based on the
weights and obtains estimators by maximizing it. Multivariate adaptive
regression splines, adaptive wavelet filtering, adaptive sparse grids and
free-knot splines can also overcome the problem by selecting the positions
or/and shapes of the basis functions
$\bm{\phi}(\bm{x})=(\phi_{1}(\bm{x}),\ldots,\phi_{m}(\bm{x}))^{\rm T}$ data-
adaptively. However, such a local likelihood and such adaptive methods need
intensive computational loads and will be almost impossible to implement
completely if the data size $n$ or the number of basis functions $m$ is
large.
Figure 1: Estimated curves based on conventional basis expansion with a strong
regularization (left) and a weak regularization (right). Solid lines, broken
lines, and dots depict the estimated curves, the true regression functions,
and the data, respectively.
## 3 Proposed method
### 3.1 Statistical Setup
Let $\\{(\bm{x}_{i},y_{i})\ |\ i=1,\ldots,n\\}$ be $n$ sets of data obtained
in terms of explanatory variables $\bm{x}$ and response variable $y$. We
assume that the behavior of $y$ is characterized by an unknown parameter
vector $\bm{\alpha}=(\alpha_{1},\ldots,\alpha_{\ell})$ and a linear
combination $\bm{\beta}^{\rm T}\bm{\phi}(\bm{x})$ with an unknown coefficient
vector $\bm{\beta}=(\beta_{1},\ldots,\beta_{m})$ and a vector of known
functions $\bm{\phi}(\cdot)=(\phi_{1}(\cdot),\ldots,\phi_{m}(\cdot))$. For
measuring the goodness of fit, we use a loss function
$L\\{y,\bm{\alpha},\bm{\beta}^{\rm T}\bm{\phi}(\bm{x})\\}$. For this setting,
let us consider the case where the estimation of $\bm{\alpha}$ and
$\bm{\beta}$ obtained by minimizing
$\sum_{i=1}^{n}L\\{y_{i},\bm{\alpha},\bm{\beta}^{\rm
T}\bm{\phi}(\bm{x}_{i})\\}$ leads to over-fitting. For example, it often
occurs when $n$ is not large compared to $m$. Regularization methods are
popular and preferred to overcome this problem, and first we consider a
regularization method which provides the following type of estimators:
$\displaystyle\mathop{\rm
argmin}_{(\bm{\alpha},\bm{\beta})}\bigg{[}\sum_{i=1}^{n}L\\{y_{i},\bm{\alpha},\bm{\beta}^{\rm
T}\bm{\phi}(\bm{x}_{i})\\}+n\lambda\sum_{j=1}^{m}r(\beta_{j})\bigg{]},$
where $r(\cdot)$ is a positive-value penalty function and $\lambda$ is a
tuning parameter. Note that the regularization by putting
$\lambda\bm{\beta}^{\rm T}K\bm{\beta}$ for a positive definite matrix $K$ can
be included in this formulation by reparametrization.
This estimation method is required in a wide class of modern statistical
problems. Letting $g\\{\bm{\alpha},\bm{\beta}^{\rm T}\bm{\phi}(\bm{x})\\}$ be
a predictor for $y$, for example, $L(y,s,t)=(y-t)^{2}$ and $g(u,v)=v$ are used
in regression analysis with linear basis expansion models,
$L(y,s,t)=-yt+\log(1+e^{t})$ and $g(u,v)=\\{{\rm sign}(v)+1\\}/2$ are used in
discriminant analysis with a nonparametric logistic regression model. In the
context of support vector machine, $L(y,s,t)=(|y-t|-\epsilon)_{+}$ and
$g(u,v)=v$ for a positive value $\epsilon$ or $L(y,s,t)=(1-yt)_{+}$ and
$g(u,v)={\rm sign}(v)$ are used in regression or discriminant analysis,
respectively. See, e.g., Chapters 5 and 12 in Hastie et al. (2009) and Chapter
7 in Bishop (2006).
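As a quick illustration, the loss functions listed above can be written down directly. The sketch below is ours (the names and the choice of Python are our assumptions, not from the paper); each function takes the response $y$ and the linear score $t=\bm{\beta}^{\rm T}\bm{\phi}(\bm{x})$:

```python
import numpy as np

# Loss functions L(y, t) for the linear score t = beta' phi(x), as listed above.
sq_loss = lambda y, t: (y - t) ** 2                        # regression
logistic_loss = lambda y, t: -y * t + np.log1p(np.exp(t))  # logistic, y in {0, 1}

def eps_insensitive(y, t, eps=0.1):                        # SVM regression
    return np.maximum(np.abs(y - t) - eps, 0.0)

hinge_loss = lambda y, t: np.maximum(1 - y * t, 0.0)       # SVM classification, y in {-1, 1}
```

All four are convex in $t$, which matters later when Step 4 of the estimation algorithm is solved numerically.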
### 3.2 Estimation
In this section, we first describe the basic idea behind our proposed method
in a general framework. The main aim of this approach is to appropriately
estimate an underlying regression function with inhomogeneous smoothness
while reducing the computational load. To achieve this aim, we propose a new
nonlinear regression modeling approach with a smoothly varying regularization
method, whose objective function is
$\displaystyle\sum_{i=1}^{n}\bigg{[}L\\{y_{i},\bm{\alpha},\bm{\beta}^{\rm
T}\bm{\phi}(\bm{x}_{i})\\}+\sum_{j=1}^{m}\lambda_{j}r(\beta_{j})+\gamma_{1}\sum_{j=2}^{m}(\lambda_{j}-\lambda_{j-1})^{2}-\gamma_{2}\sum_{j=1}^{m}\log\lambda_{j}\bigg{]},$
(5)
where $\bm{\lambda}=(\lambda_{1},\ldots,\lambda_{m})^{\rm T}$ is an
$m$-dimensional tuning parameter vector, and $\gamma_{1}$ and $\gamma_{2}$ are
hyper-tuning parameters.
In our approach, the proposed adaptive-type penalties in (5) play key roles.
The first penalty assigns a different tuning parameter to each coefficient,
which promotes adaptability of the tuning parameters. The second penalty is a
sum of squared differences between adjacent tuning parameters, which
encourages their continuity. The third penalty is a sum of logarithms of the
tuning parameters, which keeps their values from being shrunk toward zero. By
finding a $\bm{\lambda}$ that gives a small value of the regularized objective
function in (5), we obtain smoothly varying tuning parameters, which are
expected to impose strong regularization on strongly smooth regions and weak
regularization on weakly smooth regions. The hyper-tuning parameters
$\gamma_{1}$ and $\gamma_{2}$ in (5) control the magnitudes of the second and
third penalties, respectively: for smaller values of $\gamma_{1}$ the tuning
parameters are determined more freely, while for larger values they become
more continuous; for smaller values of $\gamma_{2}$ the tuning parameters tend
toward zero, while for larger values they tend to be large.
If the hyper-tuning parameters $\gamma_{1}$ and $\gamma_{2}$ are given,
estimators of $\bm{\alpha}$ and $\bm{\beta}$ and an appropriate value of
$\bm{\lambda}$ can be obtained by minimizing the proposed function in (5) with
respect to $\bm{\alpha}$, $\bm{\beta}$ and $\bm{\lambda}$. We denote them by
$\hat{\bm{\alpha}}$, $\hat{\bm{\beta}}$ and $\hat{\bm{\lambda}}$, which
satisfy
$\displaystyle(\hat{\bm{\alpha}},\hat{\bm{\beta}})=\mathop{\rm
argmin}_{(\bm{\alpha},\bm{\beta})}\bigg{(}\sum_{i=1}^{n}\bigg{[}L\\{y_{i},\bm{\alpha},\bm{\beta}^{\rm
T}\bm{\phi}(\bm{x}_{i})\\}+\sum_{j=1}^{m}\hat{\lambda}_{j}r(\beta_{j})\bigg{]}\bigg{)}$
and
$\displaystyle\hat{\lambda}_{j}=(2\gamma_{1}\hat{\lambda}_{j+1}+2\gamma_{1}\hat{\lambda}_{j-1}-r(\hat{\beta}_{j})+[\\{2\gamma_{1}\hat{\lambda}_{j+1}+2\gamma_{1}\hat{\lambda}_{j-1}-r(\hat{\beta}_{j})\\}^{2}+16\gamma_{1}\gamma_{2}]^{1/2})/(8\gamma_{1}).$
Since the solutions depend on each other, an iterative procedure is
required. To obtain $\hat{\bm{\alpha}}$, $\hat{\bm{\beta}}$ and
$\hat{\bm{\lambda}}$, we use the following numerical estimation algorithm.
Step 1 Set the values of hyper-tuning parameters $\gamma_{1}$ and
$\gamma_{2}$.
Step 2 Give an appropriate initial value of the tuning parameters,
$\hat{\bm{\lambda}}^{(0)}=(\hat{\lambda}_{1}^{(0)},\ldots,\hat{\lambda}_{m}^{(0)})$,
such as the optimal value of the tuning parameter for the naive ridge method.
Step 3 Substitute $0$ for $t$.
Step 4 Give $(\hat{\bm{\alpha}}^{(t+1)},\hat{\bm{\beta}}^{(t+1)})$ by
$\mathop{\rm
argmin}_{\bm{\alpha},\bm{\beta}}(\sum_{i=1}^{n}[L\\{y_{i},\bm{\alpha},\bm{\beta}^{\rm
T}\bm{\phi}(\bm{x}_{i})\\}+\sum_{j=1}^{m}\hat{\lambda}_{j}^{(t)}r(\beta_{j})])$.
Step 5 Set $\hat{\lambda}_{0}^{(t+1)}=\hat{\lambda}_{m+1}^{(t+1)}=0$ and give
$\hat{\lambda}_{j}^{(t+1)}$ by
$(2\gamma_{1}\hat{\lambda}_{j+1}^{(t)}+2\gamma_{1}\hat{\lambda}_{j-1}^{(t+1)}-r(\hat{\beta}_{j}^{(t+1)})+[\\{2\gamma_{1}\hat{\lambda}_{j+1}^{(t)}+2\gamma_{1}\hat{\lambda}_{j-1}^{(t+1)}-r(\hat{\beta}_{j}^{(t+1)})\\}^{2}+16\gamma_{1}\gamma_{2}]^{1/2})/(8\gamma_{1})$
for $j=1,\ldots,m$.
Step 6 Substitute $t+1$ for $t$.
Step 7 Repeat Steps 4, 5 and 6 until the values converge.
Note that $L(\cdot)$ and $r(\cdot)$ are convex in many cases, so Step 4 can
easily be carried out using the Newton–Raphson method.
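For the Gaussian setting with squared loss $L(y,t)=(y-t)^{2}$ and $r(\beta)=\beta^{2}$, Step 4 has a closed form, and Steps 1–7 can be sketched compactly. The Python sketch below is ours (the function name and convergence defaults are assumptions); since the penalty in (5) is summed over the $n$ observations, the Step 4 solution is $\hat{\bm{\beta}}=(\Phi^{\rm T}\Phi+n\,{\rm diag}(\hat{\bm{\lambda}}))^{-1}\Phi^{\rm T}\bm{y}$, and $\gamma_{1}$ must be positive for the Step 5 update to be well defined:

```python
import numpy as np

def fit_smoothly_varying_ridge(Phi, y, gamma1, gamma2, lam_init,
                               max_iter=200, tol=1e-8):
    """Steps 1-7 for squared loss L(y, t) = (y - t)^2 and r(b) = b^2.

    Requires gamma1 > 0.  Phi is the n x m design matrix of basis
    function values phi_j(x_i).
    """
    n, m = Phi.shape
    lam = np.full(m, float(lam_init))   # Step 2: initial tuning parameters
    for _ in range(max_iter):           # Steps 3-7
        # Step 4: weighted ridge solution for the current lam.
        beta = np.linalg.solve(Phi.T @ Phi + n * np.diag(lam), Phi.T @ y)
        # Step 5: coordinate-wise update with lam_0 = lam_{m+1} = 0;
        # lam_new[j-1] is the already-updated value, lam[j+1] the previous one.
        lam_new = lam.copy()
        for j in range(m):
            left = lam_new[j - 1] if j > 0 else 0.0
            right = lam[j + 1] if j < m - 1 else 0.0
            a = 2.0 * gamma1 * (right + left) - beta[j] ** 2
            lam_new[j] = (a + np.sqrt(a ** 2 + 16.0 * gamma1 * gamma2)) / (8.0 * gamma1)
        # Step 7: stop when the tuning parameters have converged.
        if np.max(np.abs(lam_new - lam)) < tol:
            lam = lam_new
            break
        lam = lam_new
    return beta, lam
```

Because $\sqrt{a^{2}+16\gamma_{1}\gamma_{2}}>|a|$ whenever $\gamma_{1},\gamma_{2}>0$, every updated $\hat{\lambda}_{j}$ stays strictly positive, as the logarithmic penalty intends.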
### 3.3 Model selection
Our proposed regularization method depends on the hyper-tuning parameters
$\gamma_{1}$ and $\gamma_{2}$, and it is indispensable to determine
appropriate values for them. We therefore consider the GIC introduced in
Section 2 for our method. In our framework, however, it is difficult to derive
an analytical form of the GIC directly. Therefore, here we focus on the
Gaussian linear regression setting with a ridge-type regularization and
approximate the GIC based on some asymptotics. Concretely, we suppose that the
loss function is
twice the negative log-likelihood, that is,
$\displaystyle L\\{y_{i},\alpha,\bm{\beta}^{\rm
T}\bm{\phi}(\bm{x}_{i})\\}=-2l_{i}(\bm{\theta})=\log(2\pi\alpha)+\\{y_{i}-\bm{\beta}^{\rm
T}\bm{\phi}(\bm{x}_{i})\\}^{2}/\alpha,$
and the penalty function is $r(\beta_{j})=\beta_{j}^{2}$. For other
settings, we recommend simply using cross-validation (Stone 1974), accepting
its computational cost.
In this setting, the estimation function is
$\sum_{i=1}^{n}\bm{\psi}_{i}(\bm{\theta})$, where
$\bm{\theta}=(\alpha,\bm{\beta}^{\rm T})^{\rm T}$ and
$\displaystyle\bm{\psi}_{i}(\bm{\theta})=-\frac{\partial}{\partial\bm{\theta}}\min_{\bm{\lambda}}\bigg{[}L\\{y_{i},\alpha,\bm{\beta}^{\rm
T}\bm{\phi}(\bm{x}_{i})\\}+\sum_{j=1}^{m}\lambda_{j}\beta_{j}^{2}+\gamma_{1}\sum_{j=2}^{m}(\lambda_{j}-\lambda_{j-1})^{2}-\gamma_{2}\sum_{j=1}^{m}\log\lambda_{j}\bigg{]}.$
(6)
We can obtain an AIC-type information criterion from this estimation function,
but it is difficult to express the criterion exactly because the estimation
function depends on the estimated tuning parameter vector
$\hat{\bm{\lambda}}$, the minimizing value of $\bm{\lambda}$ in (6). Here, we
propose an estimation function using an approximate vector
$\tilde{\bm{\lambda}}=(\tilde{\lambda}_{1},\ldots,\tilde{\lambda}_{m})$
instead of $\hat{\bm{\lambda}}$ in (6). We utilize a simple approximation as
follows: First, to obtain the optimal value of $\bm{\lambda}$, we consider the
derivative of (5) with respect to $\lambda_{j}$,
$\displaystyle\beta_{j}^{2}+2\gamma_{1}(\lambda_{j+1}-2\lambda_{j}+\lambda_{j-1})-\gamma_{2}/\lambda_{j}.$
(7)
Setting (7) equal to zero, we obtain the equation
$\displaystyle{\lambda}_{j}=\frac{\gamma_{2}}{\beta_{j}^{2}+2\gamma_{1}(\lambda_{j+1}-2\lambda_{j}+\lambda_{j-1})}.$
(8)
When $\gamma_{1}=0$, this reduces to
${\lambda}_{j}=\gamma_{2}/\beta_{j}^{2}$. Substituting this into
${\lambda}_{j}$ on the right-hand side of (8), we get the following approximation,
$\displaystyle\tilde{\lambda}_{j}=\frac{\gamma_{2}}{{\beta_{j}^{2}+2\gamma_{1}(\gamma_{2}/\beta_{j+1}^{2}-2\gamma_{2}/\beta_{j}^{2}+\gamma_{2}/\beta_{j-1}^{2})}}.$
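The approximation $\tilde{\lambda}_{j}$ above can be computed directly. The Python sketch below is ours; the function name and the zero boundary treatment for $j=1$ and $j=m$ are assumptions, mirroring the convention $\hat{\lambda}_{0}=\hat{\lambda}_{m+1}=0$ used in the estimation algorithm:

```python
import numpy as np

def lambda_tilde(beta, gamma1, gamma2):
    # One-step approximation of the optimal tuning parameters: eq. (8)
    # with lambda_j ~= gamma2 / beta_j^2 substituted on the right-hand side.
    b2 = beta ** 2
    inv = gamma2 / b2          # the gamma1 = 0 solution, gamma2 / beta_j^2
    m = len(beta)
    out = np.empty(m)
    for j in range(m):
        left = inv[j - 1] if j > 0 else 0.0        # boundary terms set to 0
        right = inv[j + 1] if j < m - 1 else 0.0
        out[j] = gamma2 / (b2[j] + 2.0 * gamma1 * (right - 2.0 * inv[j] + left))
    return out
```

Setting `gamma1 = 0` recovers $\tilde{\lambda}_{j}=\gamma_{2}/\beta_{j}^{2}$ exactly, which is a convenient sanity check.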
As a result, the estimation function in (6) is approximated by
$\displaystyle\tilde{\bm{\psi}}_{i}({\bm{\theta}})=-\frac{\partial}{\partial\bm{\theta}}[L\\{y_{i},\alpha,\bm{\beta}^{\rm
T}\bm{\phi}(\bm{x}_{i})\\}+\bm{\beta}^{\rm
T}\tilde{\bm{\Lambda}}\bm{\beta}+\gamma_{1}\tilde{\bm{\lambda}}^{\rm
T}\bm{D}\tilde{\bm{\lambda}}-\gamma_{2}\log\det(\tilde{\bm{\Lambda}})],$ (9)
where
$\displaystyle\tilde{\bm{\Lambda}}={\rm
diag}(\tilde{\bm{\lambda}})\equiv\begin{pmatrix}\tilde{\lambda}_{1}&0&\cdots&\cdots&0\\\
0&\tilde{\lambda}_{2}&\ddots&&\vdots\\\ \vdots&\ddots&\ddots&\ddots&\vdots\\\
\vdots&&\ddots&\tilde{\lambda}_{m-1}&0\\\
0&\cdots&\cdots&0&\tilde{\lambda}_{m}\end{pmatrix}\quad{\rm
and}\quad\bm{D}\equiv\begin{pmatrix}1&-1&0&\cdots&\cdots&\cdots&0\\\
-1&2&-1&\ddots&&&\vdots\\\ 0&-1&2&\ddots&\ddots&&\vdots\\\
\vdots&\ddots&\ddots&\ddots&\ddots&\ddots&\vdots\\\
\vdots&&\ddots&\ddots&2&-1&0\\\ \vdots&&&\ddots&-1&2&-1\\\
0&\cdots&\cdots&\cdots&0&-1&1\end{pmatrix}.$
Letting $\bm{S}\equiv 2(\partial\tilde{\bm{\lambda}}^{\rm
T}/\partial\bm{\beta})^{-1}{\rm diag}(\bm{\beta})+2{\rm
diag}(\bm{\beta})(\partial\tilde{\bm{\lambda}}/\partial\bm{\beta}^{\rm
T})^{-1}+2\gamma_{1}\bm{D}+\gamma_{2}\tilde{\Lambda}^{-2}$,
$t_{j}\equiv\beta_{j}^{2}+2\gamma_{1}(\bm{D}\tilde{\bm{\lambda}})_{j}-\gamma_{2}/\tilde{\lambda}_{j}$,
$\bm{u}\equiv{\rm
diag}(\bm{\beta})^{2}\bm{1}_{m}+2\gamma_{1}\bm{D}\tilde{\bm{\lambda}}+\gamma_{2}\tilde{\bm{\Lambda}}^{-1}\bm{1}_{m}$
and $\varepsilon_{i}^{*}\equiv y_{i}-\bm{\beta}^{*\rm
T}\bm{\phi}(\bm{x}_{i})$, where $\bm{1}_{m}$ is the $m$-dimensional
$1$-vector, we have
$\displaystyle-\frac{\partial\tilde{\bm{\psi}}_{i}}{\partial\bm{\theta}^{\rm
T}}\bigg{|}_{\bm{\theta}=\bm{\theta}^{*}}=\begin{pmatrix}\displaystyle\frac{2}{\alpha^{*3}}\varepsilon_{i}^{*2}-\frac{1}{\alpha^{*2}}&\displaystyle\frac{2}{\alpha^{*2}}\varepsilon_{i}^{*}\bm{\phi}(\bm{x}_{i})^{\rm
T}\\\\[6.45831pt]
\displaystyle\frac{2}{\alpha^{*2}}\varepsilon_{i}^{*}\bm{\phi}(\bm{x}_{i})&\displaystyle\frac{2}{\alpha^{*}}\bm{\phi}(\bm{x}_{i})\bm{\phi}(\bm{x}_{i})^{\rm
T}+\bigg{(}2\tilde{\bm{\Lambda}}+\frac{\partial\tilde{\bm{\lambda}}^{\rm
T}}{\partial\bm{\beta}}\bm{S}\frac{\partial\tilde{\bm{\lambda}}}{\partial\bm{\beta}^{\rm
T}}+\sum_{j=1}^{m}t_{j}\frac{\partial^{2}\tilde{\lambda}_{j}}{\partial\bm{\beta}\partial\bm{\beta}^{\rm
T}}\bigg{)}\bigg{|}_{\bm{\beta}=\bm{\beta}^{*}}\end{pmatrix}$
and
$\displaystyle\tilde{\bm{\psi}}_{i}\frac{\partial
l_{i}}{\partial\bm{\theta}^{\rm
T}}\bigg{|}_{\bm{\theta}=\bm{\theta}^{*}}=\begin{pmatrix}\displaystyle\frac{1}{2}\bigg{(}\frac{1}{\alpha^{*2}}\varepsilon_{i}^{*2}-\frac{1}{\alpha^{*}}\bigg{)}^{2}&\displaystyle\bigg{(}\frac{1}{\alpha^{*3}}\varepsilon_{i}^{*3}-\frac{1}{\alpha^{*2}}\varepsilon_{i}^{*}\bigg{)}\bm{\phi}(\bm{x}_{i})^{\rm
T}\\\\[6.45831pt]
\displaystyle\bigg{(}\frac{1}{\alpha^{*3}}\varepsilon_{i}^{*3}-\frac{1}{\alpha^{*2}}\varepsilon_{i}^{*}\bigg{)}\bm{\phi}(\bm{x}_{i})&\displaystyle\frac{1}{\alpha^{*}}\varepsilon_{i}^{*}\bm{\phi}(\bm{x}_{i})\bigg{\\{}\frac{2}{\alpha^{*}}\varepsilon_{i}^{*}\bm{\phi}(\bm{x}_{i})-\bigg{(}2\tilde{\bm{\Lambda}}\bm{\beta}+\frac{\partial\tilde{\bm{\lambda}}^{\rm
T}}{\partial\bm{\beta}}\bm{u}\bigg{)}\bigg{|}_{\bm{\beta}=\bm{\beta}^{*}}\bigg{\\}}^{\rm
T}\end{pmatrix}$
from (2) and (9). Similarly to the naive ridge method introduced in Section 2,
the expectations of these quantities can be calculated to some extent.
Actually, we obtain the following:
$\displaystyle{\rm
E}\bigg{(}-\sum_{i=1}^{n}\frac{\partial\tilde{\bm{\psi}}_{i}}{\partial\bm{\theta}^{\rm
T}}\bigg{|}_{\bm{\theta}=\bm{\theta}^{*}}\bigg{)}$
$\displaystyle=\begin{pmatrix}\displaystyle\frac{n}{\alpha^{*2}}&\displaystyle\frac{2}{\alpha^{*2}}\sum_{i=1}^{n}{\rm
E}(\varepsilon_{i}^{*})\bm{\phi}(\bm{x}_{i})^{\rm T}\\\\[6.45831pt]
\displaystyle\frac{2}{\alpha^{*2}}\sum_{i=1}^{n}{\rm
E}(\varepsilon_{i}^{*})\bm{\phi}(\bm{x}_{i})&\displaystyle\frac{2}{\alpha^{*}}\sum_{i=1}^{n}\bm{\phi}(\bm{x}_{i})\bm{\phi}(\bm{x}_{i})^{\rm
T}+n\bigg{(}2\tilde{\bm{\Lambda}}+\frac{\partial\tilde{\bm{\lambda}}^{\rm
T}}{\partial\bm{\beta}}\bm{S}\frac{\partial\tilde{\bm{\lambda}}}{\partial\bm{\beta}^{\rm
T}}+\sum_{j=1}^{m}t_{j}\frac{\partial^{2}\tilde{\lambda}_{j}}{\partial\bm{\beta}\partial\bm{\beta}^{\rm
T}}\bigg{)}\bigg{|}_{\bm{\beta}=\bm{\beta}^{*}}\end{pmatrix}$
and
$\displaystyle{\rm E}\bigg{(}\sum_{i=1}^{n}\tilde{\bm{\psi}}_{i}\frac{\partial
l_{i}}{\partial\bm{\theta}^{\rm
T}}\bigg{|}_{\bm{\theta}=\bm{\theta}^{*}}\bigg{)}=\begin{pmatrix}\displaystyle\frac{1}{2\alpha^{*4}}\sum_{i=1}^{n}{\rm
E}(\varepsilon_{i}^{*4})-\frac{n}{2\alpha^{*2}}&\displaystyle\sum_{i=1}^{n}{\rm
E}\bigg{(}\frac{1}{\alpha^{*3}}\varepsilon_{i}^{*3}-\frac{1}{\alpha^{*2}}\varepsilon_{i}^{*}\bigg{)}\bm{\phi}(\bm{x}_{i})^{\rm
T}\\\\[6.45831pt] \displaystyle\sum_{i=1}^{n}{\rm
E}\bigg{(}\frac{1}{\alpha^{*3}}\varepsilon_{i}^{*3}-\frac{1}{\alpha^{*2}}\varepsilon_{i}^{*}\bigg{)}\bm{\phi}(\bm{x}_{i})&\displaystyle\frac{2}{\alpha^{*2}}\sum_{i=1}^{n}{\rm
E}(\varepsilon_{i}^{*2})\bm{\phi}(\bm{x}_{i})\bm{\phi}(\bm{x}_{i})^{\rm
T}\end{pmatrix}$
from ${\rm
E}\\{\sum_{i=1}^{n}\tilde{\bm{\psi}}_{i}(\bm{\theta}^{*})\\}=\bm{0}$. Using
these expectations and (4), we can obtain an approximate GIC and provide the
hyper-tuning parameters through minimizing it.
This approximation can be justified asymptotically if we consider the case
$(\gamma_{1},\gamma_{2})={\rm O}(n^{-\delta})$ for $\delta>0$. In this case,
$\lambda_{j}={\rm O}(n^{-\delta})$ but
$\lambda_{j}-\tilde{\lambda}_{j}={\rm O}(n^{-4\delta})$; that is,
$\tilde{\lambda}_{j}$ can be regarded as a good approximation of $\lambda_{j}$.
## 4 Numerical studies
### 4.1 Curve fitting
Monte Carlo simulations are conducted to investigate the effectiveness of our
method. We generate random samples $\\{(x_{i},y_{i})\mid i=1,\ldots,n\\}$ from
the model $y_{i}=g(x_{i})+\varepsilon_{i}$ with a regression function $g(x)$
and errors $\\{\varepsilon_{i}\mid i=1,\ldots,n\\}$. We consider two simple
examples: the true functions with inhomogeneous smoothness are specified by
$\displaystyle g(x)=\sin(x)+2\exp(-30x^{2}),\qquad-2\leq x\leq 2,$ (10)
and
$\displaystyle g(x)=\sin\\{32\exp(x)^{3}\\},\qquad 0\leq x\leq 1.$ (11)
Here, the function in (10), which is also used in DiMatteo et al. (2001), has
a relatively smooth structure with a sharp peak around $x=0$, and the function
in (11) has strongly and weakly smooth regions in $0\leq x\leq 0.5$ and
$0.5\leq x\leq 1$, respectively.
Simulation results are obtained from 100 Monte Carlo trials, and then we
evaluate mean squared errors defined by
$\displaystyle{\rm MSE}=\frac{1}{n}\sum_{i=1}^{n}\\{\hat{\bm{\beta}}^{\rm
T}\bm{\phi}(x_{i})-g(x_{i})\\}^{2}$ (12)
to measure the goodness of fit. It is assumed that the design points
$\\{x_{i}\mid i=1,\ldots,n\\}$ are uniformly spaced on each domain and that
the errors $\\{\varepsilon_{i}\mid i=1,\ldots,n\\}$ are independently and
identically distributed according to ${\rm N}(0,\alpha)$. The sample size $n$
is 50, 100, 150 or 200, and the variance of errors $\alpha$ is 0.025, 0.05,
0.075 or 0.1.
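The simulation design above can be reproduced directly. The Python sketch below (with function names of our choosing) encodes the true functions (10) and (11), the data-generating model, and the MSE in (12); note that $\alpha$ is the error *variance*, so the noise standard deviation is $\sqrt{\alpha}$:

```python
import numpy as np

def g_peak(x):
    # Eq. (10): relatively smooth with a sharp peak around x = 0.
    return np.sin(x) + 2.0 * np.exp(-30.0 * x ** 2)

def g_oscillating(x):
    # Eq. (11): slowly varying for small x, rapidly oscillating toward x = 1.
    return np.sin(32.0 * np.exp(x) ** 3)

def simulate(g, n, alpha, lo, hi, rng):
    # Uniformly spaced design points; errors are N(0, alpha), alpha = variance.
    x = np.linspace(lo, hi, n)
    y = g(x) + rng.normal(0.0, np.sqrt(alpha), size=n)
    return x, y

def mse(fitted, g, x):
    # Eq. (12): mean squared distance between fitted values and the truth.
    return np.mean((fitted - g(x)) ** 2)
```

Looping `simulate` over 100 seeds and averaging `mse` reproduces the Monte Carlo quantities reported in Tables 1 and 2.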
Our method is compared with three conventional regularization methods: the
ridge (Hoerl and Kennard 1970), whose tuning parameter is chosen by the GIC in
(4), and the lasso (Tibshirani 1996) and adaptive lasso (Zou 2006), whose
tuning parameters are chosen by five-fold cross-validation. The means of
MSEs together with their standard deviations are reported in Tables 1 and 2
for the regression functions in (10) and (11), respectively.
Table 1: Means and standard deviations of MSEs for the regression function in (10). | | | proposed | ridge | lasso | ada-lasso
---|---|---|---|---|---|---
| $n=$50 | mean [SD] ($\times 10^{2}$) | 1.08 [0.34] | 1.80 [0.42] | 1.44 [0.60] | 1.26 [0.43]
$\alpha=$ | $n=$100 | mean [SD] ($\times 10^{3}$) | 6.11 [1.80] | 9.49 [2.06] | 9.11 [2.57] | 7.84 [2.43]
0.025 | $n=$150 | mean [SD] ($\times 10^{3}$) | 3.92 [1.13] | 6.21 [1.37] | 7.00 [1.87] | 5.73 [1.63]
| $n=$200 | mean [SD] ($\times 10^{3}$) | 3.02 [0.86] | 4.51 [0.97] | 5.95 [1.60] | 5.02 [1.41]
| $n=$50 | mean [SD] ($\times 10^{2}$) | 1.93 [0.64] | 3.55 [0.84] | 2.64 [1.11] | 2.26 [0.76]
$\alpha=$ | $n=$100 | mean [SD] ($\times 10^{2}$) | 1.09 [0.33] | 1.87 [0.41] | 1.67 [0.49] | 1.26 [0.39]
0.05 | $n=$150 | mean [SD] ($\times 10^{3}$) | 7.73 [4.96] | 12.24 [2.71] | 12.67 [3.65] | 9.02 [2.72]
| $n=$200 | mean [SD] ($\times 10^{3}$) | 5.68 [2.98] | 8.87 [1.92] | 10.62 [2.99] | 7.18 [2.09]
| $n=$50 | mean [SD] ($\times 10^{2}$) | 2.93 [1.45] | 5.29 [1.26] | 3.77 [1.59] | 3.26 [1.43]
$\alpha=$ | $n=$100 | mean [SD] ($\times 10^{2}$) | 1.73 [1.05] | 2.79 [0.62] | 2.43 [0.74] | 1.78 [0.57]
0.075 | $n=$150 | mean [SD] ($\times 10^{2}$) | 1.11 [0.63] | 1.82 [0.40] | 1.82 [0.56] | 1.21 [0.38]
| $n=$200 | mean [SD] ($\times 10^{3}$) | 8.69 [5.71] | 13.19 [2.87] | 15.09 [4.39] | 9.58 [3.35]
| $n=$50 | mean [SD] ($\times 10^{2}$) | 4.04 [2.58] | 7.01 [1.67] | 4.87 [2.00] | 4.31 [2.02]
$\alpha=$ | $n=$100 | mean [SD] ($\times 10^{2}$) | 2.17 [0.86] | 3.70 [0.83] | 3.20 [1.03] | 2.29 [0.72]
0.1 | $n=$150 | mean [SD] ($\times 10^{2}$) | 1.35 [0.38] | 2.41 [0.54] | 2.38 [0.74] | 1.52 [0.48]
| $n=$200 | mean [SD] ($\times 10^{2}$) | 1.09 [0.44] | 1.75 [0.38] | 1.97 [0.59] | 1.18 [0.41]
Table 2: Means and standard deviations of MSEs for the regression function in (11). | | | proposed | ridge | lasso | ada-lasso
---|---|---|---|---|---|---
| $n=$50 | mean [SD] ($\times 10^{2}$) | 1.32 [0.37] | 1.92 [0.44] | 8.58 [12.38] | 14.79 [10.98]
$\alpha=$ | $n=$100 | mean [SD] ($\times 10^{3}$) | 7.42 [1.95] | 10.32 [2.16] | 8.79 [2.35] | 17.74 [26.95]
0.025 | $n=$150 | mean [SD] ($\times 10^{3}$) | 4.89 [1.17] | 6.78 [1.42] | 6.84 [1.84] | 5.95 [3.11]
| $n=$200 | mean [SD] ($\times 10^{4}$) | 2.66 [0.32] | 2.18 [0.25] | 4.90 [0.64] | 6.53 [8.50]
| $n=$50 | mean [SD] ($\times 10^{2}$) | 2.64 [0.72] | 3.84 [0.87] | 12.07 [11.80] | 17.80 [10.14]
$\alpha=$ | $n=$100 | mean [SD] ($\times 10^{2}$) | 1.49 [0.42] | 2.05 [0.43] | 1.67 [0.48] | 2.54 [2.91]
0.05 | $n=$150 | mean [SD] ($\times 10^{3}$) | 9.76 [2.44] | 13.45 [2.84] | 14.03 [14.28] | 10.56 [3.91]
| $n=$200 | mean [SD] ($\times 10^{4}$) | 5.25 [1.08] | 5.86 [1.04] | 8.80 [1.65] | 9.09 [6.20]
| $n=$50 | mean [SD] ($\times 10^{2}$) | 4.10 [1.13] | 5.76 [1.31] | 16.36 [12.85] | 20.55 [10.05]
$\alpha=$ | $n=$100 | mean [SD] ($\times 10^{2}$) | 2.29 [0.61] | 3.07 [0.65] | 2.53 [0.81] | 3.40 [3.31]
0.075 | $n=$150 | mean [SD] ($\times 10^{2}$) | 1.51 [0.37] | 2.01 [0.43] | 2.05 [1.79] | 1.53 [0.57]
| $n=$200 | mean [SD] ($\times 10^{4}$) | 9.24 [2.18] | 11.83 [2.21] | 15.36 [3.26] | 13.51 [3.33]
| $n=$50 | mean [SD] ($\times 10^{2}$) | 5.63 [1.51] | 7.68 [1.75] | 19.84 [11.91] | 23.46 [9.57]
$\alpha=$ | $n=$100 | mean [SD] ($\times 10^{2}$) | 3.10 [0.82] | 4.09 [0.87] | 3.45 [1.18] | 4.13 [3.60]
0.1 | $n=$150 | mean [SD] ($\times 10^{2}$) | 2.07 [0.51] | 2.68 [0.57] | 2.80 [2.28] | 2.05 [1.11]
| $n=$200 | mean [SD] ($\times 10^{3}$) | 1.52 [0.36] | 2.03 [0.39] | 2.51 [0.59] | 1.98 [0.45]
For the cases $(\alpha,n)=(0.025,200)$ and $(\alpha,n)=(0.1,150)$ in Table 2,
the ridge and the adaptive lasso, respectively, are better than our method;
however, in all other cases in Tables 1 and 2, our method provides smaller
MSEs than the other three methods. Figures 2 and 3 show typical estimated curves for the
case of $(\alpha,n)=(0.05,100)$ and the true regression functions in (10) and
(11), respectively. From Figure 2, we can see that our method captures the
sharp peak well and avoids over-fitting in the regions away from the peak. We
can also see from Figure 3 that, compared with the other three methods, our
method avoids over-fitting on $x<0.5$ and under-fitting on $x>0.5$.
Figure 2: Estimated curves based on the proposed method (top left), ridge (top
right), lasso (bottom left) and adaptive lasso (bottom right) for the
regression function in (10). The solid lines draw the estimated curves, and
the broken lines draw the true curves.
Figure 3: Estimated curves based on the proposed method (top left), ridge (top
right), lasso (bottom left) and adaptive lasso (bottom right) for the
regression function in (11). The solid lines draw the estimated curves, and
the broken lines draw the true curves.
The ridge is clearly inferior to our method in Table 1 and when $n$ is
small in Table 2. The reason is that it over-fits in strongly smooth regions,
as seen in the top right panels of Figures 2 and 3. On the other hand, it is
competitive with our method when $n$ is large in Table 2 because it does not
cause over-fitting in this case. The lasso is always clearly worse than our
method in Tables 1 and 2. Note that its MSEs are similar to those of the ridge
method in Table 1, but their properties are different: as seen in the bottom
left panel of Figure 2, the lasso causes under-fitting. The adaptive lasso is
always superior to the ridge and lasso and comparable to our method for
$\alpha=0.1$ in Table 1. Also, in Table 2, it is competitive with our method
for $n=150$. When $n$ is small in Table 2, however, its MSEs are much larger
than those of the other three methods. This is because the adaptive lasso
sometimes cannot capture the true curve at all, as seen in the bottom right
panel of Figure 3.
### 4.2 Surface fitting
Next we illustrate the effectiveness of our method by fitting surfaces to
simulated data. Random samples $\\{(\bm{x}_{i},y_{i})\mid i=1,\ldots,n\\}$
are generated from the model $y_{i}=g(x_{i1},x_{i2})+\varepsilon_{i}$. We
consider
$\displaystyle g(x_{i1},x_{i2})=$
$\displaystyle\sum_{j=1}^{3}\exp[-30\\{(x_{i1}-a_{j1})^{2}+(x_{i2}-a_{j2})^{2}\\}]$
$\displaystyle+\sum_{k=1}^{4}\exp[-100\\{(x_{i1}-b_{k1})^{2}+(x_{i2}-b_{k2})^{2}\\}]$
(13)
as the true surface, where $(a_{11},a_{12})=(0.25,0.25)$,
$(a_{21},a_{22})=(0.25,0.75)$, $(a_{31},a_{32})=(0.75,0.25)$,
$(b_{11},b_{12})=(0.6,0.6)$, $(b_{21},b_{22})=(0.6,0.9)$,
$(b_{31},b_{32})=(0.9,0.6)$ and $(b_{41},b_{42})=(0.9,0.9)$. The function has
strongly and weakly smooth regions in $[0,1]^{2}\setminus[0.5,1]^{2}$ and
$[0.5,1]^{2}$, respectively. Simulation results are obtained from one hundred
Monte Carlo trials, and then we evaluate mean squared errors as in
(12). It is assumed that the design points $\\{(x_{i1},x_{i2})\mid
i=1,\ldots,n\\}$ are uniformly spaced on the domain $[0,1]^{2}$ and that the
errors $\\{\varepsilon_{i}\mid i=1,\ldots,n\\}$ are independently and
identically distributed according to ${\rm N}(0,\alpha)$. The sample size $n$
is 900, 1600, 2500 or 3600, and the variance of errors $\alpha$ is 0.05, 0.1,
0.15 or 0.2.
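The true surface (13) can be encoded directly from the bump centres listed above. The Python sketch below is ours (names are our choice); the three broad bumps use scale 30 and the four narrow bumps use scale 100, which is what creates the strongly and weakly smooth regions:

```python
import numpy as np

# Bump centres from (13): broad bumps (a, scale 30), narrow bumps (b, scale 100).
A_CENTRES = [(0.25, 0.25), (0.25, 0.75), (0.75, 0.25)]
B_CENTRES = [(0.6, 0.6), (0.6, 0.9), (0.9, 0.6), (0.9, 0.9)]

def g_surface(x1, x2):
    # Eq. (13): sum of Gaussian-shaped bumps on the unit square.
    val = sum(np.exp(-30.0 * ((x1 - a1) ** 2 + (x2 - a2) ** 2))
              for a1, a2 in A_CENTRES)
    val += sum(np.exp(-100.0 * ((x1 - b1) ** 2 + (x2 - b2) ** 2))
               for b1, b2 in B_CENTRES)
    return val
```

Evaluating `g_surface` on a uniform grid over $[0,1]^{2}$ and adding ${\rm N}(0,\alpha)$ noise reproduces the data-generating setup described above.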
Our method is compared with the ridge, whose tuning parameter is chosen by the
GIC in (4), and the lasso and adaptive lasso, whose tuning parameters are
chosen by five-fold cross-validation. The means of MSEs together with their
standard deviations are reported in Table 3. Figure 4 shows the true surface
in (13) for the case $(\alpha,n)=(0.1,1600)$, and typical estimated surfaces
are drawn in Figure 5.
Table 3: Mean and standard deviation of MSEs for the regression function in (13). | | | proposed | ridge | lasso | ada-lasso
---|---|---|---|---|---|---
| $n=$900 | mean [SD] ($\times 10^{3}$) | 3.60 [0.68] | 6.67 [0.87] | 5.84 [1.25] | 4.60 [0.76]
$\alpha=$ | $n=$1600 | mean [SD] ($\times 10^{3}$) | 2.03 [0.31] | 3.76 [0.48] | 3.83 [0.74] | 3.05 [0.53]
0.05 | $n=$2500 | mean [SD] ($\times 10^{3}$) | 1.35 [0.19] | 2.41 [0.31] | 2.82 [0.56] | 2.22 [0.59]
| $n=$3600 | mean [SD] ($\times 10^{4}$) | 9.78 [1.40] | 16.54 [2.18] | 21.69 [3.88] | 17.90 [6.16]
| $n=$900 | mean [SD] ($\times 10^{3}$) | 6.04 [1.23] | 13.55 [1.79] | 10.42 [2.29] | 7.60 [1.34]
$\alpha=$ | $n=$1600 | mean [SD] ($\times 10^{3}$) | 3.70 [0.56] | 7.51 [0.96] | 7.21 [1.51] | 5.02 [0.81]
0.1 | $n=$2500 | mean [SD] ($\times 10^{3}$) | 2.57 [0.38] | 4.82 [0.62] | 5.41 [1.15] | 3.61 [0.59]
| $n=$3600 | mean [SD] ($\times 10^{3}$) | 1.88 [0.28] | 3.31 [0.44] | 4.27 [0.79] | 2.68 [0.50]
| $n=$900 | mean [SD] ($\times 10^{3}$) | 9.05 [1.78] | 20.01 [2.61] | 15.30 [3.67] | 10.51 [1.95]
$\alpha=$ | $n=$1600 | mean [SD] ($\times 10^{3}$) | 5.53 [0.85] | 11.27 [1.44] | 10.32 [2.25] | 6.78 [1.05]
0.15 | $n=$2500 | mean [SD] ($\times 10^{3}$) | 3.87 [0.59] | 7.24 [0.93] | 7.79 [1.68] | 4.84 [0.67]
| $n=$3600 | mean [SD] ($\times 10^{3}$) | 2.83 [0.42] | 4.96 [0.65] | 6.24 [1.17] | 3.64 [0.56]
| $n=$900 | mean [SD] ($\times 10^{2}$) | 1.25 [0.27] | 2.67 [0.35] | 1.99 [0.52] | 1.32 [0.30]
$\alpha=$ | $n=$1600 | mean [SD] ($\times 10^{3}$) | 7.53 [1.22] | 15.03 [1.93] | 13.34 [3.01] | 8.26 [1.26]
0.2 | $n=$2500 | mean [SD] ($\times 10^{3}$) | 5.28 [0.84] | 9.65 [1.24] | 10.02 [2.23] | 5.94 [0.89]
| $n=$3600 | mean [SD] ($\times 10^{3}$) | 3.84 [0.59] | 6.62 [0.87] | 8.11 [1.54] | 4.53 [0.72]
Figure 4: True surface with inhomogeneous smoothness given in (13).
Figure 5: Estimated surfaces based on the proposed method (top left), ridge
(top right), lasso (bottom left) and adaptive lasso (bottom right) for the
regression function in (13).
For all cases, our method provides clearly smaller MSEs than the other three
methods, while the adaptive lasso is slightly better than the ridge and lasso.
From Figures 4 and 5, we can see that the ridge causes over-fitting and that
the lasso and adaptive lasso cause under-fitting. On the other hand, our
method captures the true surface well over the entire region; that is, it
avoids over-fitting on $[0,1]^{2}\setminus[0.5,1]^{2}$ and under-fitting on
$[0.5,1]^{2}$.
## 5 Real data analysis
We illustrate our procedure by analysing the Earth temperature data
treated in Rohde et al. (2012). The data $\\{(x_{i},y_{i})\mid
i=1,\ldots,3109\\}$ consist of monthly averages of Earth temperatures for
the period from 1753 to 2011, where $x_{i}$ is the $i$-th time value and
$y_{i}$ is the temperature at $x_{i}$.
First, we applied the ridge method, choosing the tuning parameter by the
GIC in (4). The data and the estimated curve are reported in Figure 6. Of
course, we do not know the true curve, but the ridge method appears to
over-fit in the period from 1850 to 2000, as seen in our simulation
studies. To avoid this over-fitting, we next applied our method, choosing
the hyper-tuning parameters by the approximated GIC derived in Section 3.3.
The result is also reported in Figure 6. Our method clearly removes the
fluctuation in the period in which over-fitting is observed, and therefore we
can say that the difference between our method and the ridge method is not
negligible.
Figure 6: The earth temperature data (top) and estimated curves with proposed
method (bottom left) and ridge (bottom right).
## 6 Concluding remarks
We have proposed a new, efficient nonlinear regression modeling approach with
a smoothly varying regularization method. The main substantive contribution of
this work is to introduce a varying degree of smoothness into nonlinear
regression modeling via regularization with adaptive-type penalties.
The values of the tuning parameters and the estimates of the unknown
parameters are obtained by minimizing the proposed regularized objective
function. The estimated curves based on our method tend to be flexible in
weakly smooth regions and smooth in strongly smooth regions. To obtain
appropriate values of the hyper-tuning parameters, we also propose a model
selection criterion for evaluating the constructed models from an
information-theoretic point of view. Our method was applied to the analysis of
simulated data and Earth surface temperature data. The conventional methods
cause over-fitting and under-fitting in strongly and weakly smooth regions,
respectively, but our method considerably reduces this problem. From these
results, we conclude that our method is effective when the underlying function
has inhomogeneous smoothness.
In this paper, we have focused on the ridge penalty as the regularization.
Recently, as can be seen from the special issue edited by Joseph (2020) in
Technometrics, the ridge method has recaptured the spotlight. For example, in
the over-parametrized linear regression framework exhibiting the “double
descent” behavior of the prediction risk, which is also observed in deep
neural networks, Hastie et al. (2019) showed that the minimum $\ell_{2}$ norm
least squares estimator achieves good generalization despite having zero
training error, and that the optimally tuned ridge estimator additionally
dominates it in risk. They derived asymptotic properties of the ridge
estimator in a high-dimensional setting, similarly to Dicker (2016) and
Dobriban and Wager (2018). The close relationship between ridge
regularization and early stopped gradient descent is another interesting
recent topic (Wei et al. 2017, Ali et al. 2019). Our method can be regarded as
a development of the ridge method. Investigating its asymptotic properties in
a high-dimensional setting, including the over-parametrized case, and
establishing a relationship with early stopped gradient descent are topics for
future work.
## Acknowledgment
SK was supported by JSPS KAKENHI Grant Numbers JP19K11854 and JP20H02227, and
YN was supported by JSPS KAKENHI Grant Number JP16K00050.
## References
* Ali et al. (2019) Ali, A., Kolter, J. Z. and Tibshirani, R. J. (2019), A continuous-time view of early stopping for least squares regression, in International Conference on Artificial Intelligence and Statistics, 1370–1378.
* Bishop (2006) Bishop, C. M. (2006), Pattern Recognition and Machine Learning: Springer.
* de Boor (2001) de Boor, C. (2001), A Practical Guide to Splines: Springer-Verlag New York, 2nd edition.
* Denison et al. (1998) Denison, D. G. T., Mallick, B. K. and Smith, A. F. M. (1998), Automatic Bayesian curve fitting, Journal of the Royal Statistical Society: Series B, 60, 333–350.
* Dicker (2016) Dicker, L. H. (2016), Ridge regression and asymptotic minimax estimation over spheres of growing dimension, Bernoulli, 22, 1–37.
* DiMatteo et al. (2001) DiMatteo, I., Genovese, C. R. and Kass, R. E. (2001), Bayesian curve-fitting with free-knot splines, Biometrika, 88, 1055–1071.
* Dobriban and Wager (2018) Dobriban, E. and Wager, S. (2018), High-dimensional asymptotics of prediction: Ridge regression and classification, The Annals of Statistics, 46, 247–279.
* Donoho and Johnstone (1994) Donoho, D. L. and Johnstone, J. M. (1994), Ideal spatial adaptation by wavelet shrinkage, Biometrika, 81, 425–455.
H. Belbachir: USTHB, Faculty of Mathematics, RECITS Laboratory, P.O. Box 32, El-Alia, 16111 Bab-Ezzouar, Algiers, Algeria. Email:<EMAIL_ADDRESS>or <EMAIL_ADDRESS>
Y. Djemmada (corresponding author): USTHB, Faculty of Mathematics, RECITS Laboratory, P.O. Box 32, El-Alia, 16111 Bab-Ezzouar, Algiers, Algeria. Email:<EMAIL_ADDRESS>or <EMAIL_ADDRESS>
S. Hadj-Brahim: USTHB, Faculty of Mathematics, RECITS Laboratory, P.O. Box 32, El-Alia, 16111 Bab-Ezzouar, Algiers, Algeria. Email: <EMAIL_ADDRESS>
# Unified Bernoulli-Euler polynomials of Apostol type
Hacène Belbachir Yahia Djemmada Slimane Hadj-Brahim
(Received: date / Accepted: date)
###### Abstract
The object of this paper is to introduce and study properties of the unified
Apostol-Bernoulli and Apostol-Euler polynomials, denoted by
$\left\\{\mathfrak{V_{n}}(x;\lambda;\mu)\right\\}_{n\geq 0}$. We study some
arithmetic properties of
$\left\\{\mathfrak{V_{n}}(x;\lambda;\mu)\right\\}_{n\geq 0}$, such as their
connection to the Apostol-Euler and Apostol-Bernoulli polynomials.
We also give derivation and integration representations of
$\left\\{\mathfrak{V_{n}}(x;\lambda;\mu)\right\\}_{n\geq 0}$. Finally, we use
the umbral calculus approach to deduce symmetric identities.
###### Keywords:
Euler polynomials, Bernoulli polynomials, Apostol-Bernoulli and Apostol-Euler
polynomials, generating function.
###### MSC:
11B68, 11B83, 11C08, 11C20.
## 1 Introduction
The Bernoulli $\\{B_{n}(x)\\}_{n\geq 0}$ and the Euler $\\{E_{n}(x)\\}_{n\geq
0}$ polynomials respectively are generated by the following power series (see
Bernoulli-1713 ; Euler-1738 ):
$\dfrac{t}{e^{t}-1}e^{xt}=\sum_{n=0}^{\infty}B_{n}(x)\dfrac{t^{n}}{n!}\qquad(|t|<2\pi)$
and
$\dfrac{2}{e^{t}+1}e^{xt}=\sum_{n=0}^{\infty}E_{n}(x)\dfrac{t^{n}}{n!}\qquad(|t|<\pi).$
As a particular case, for $x=0$, we denote $B_{n}:=B_{n}(0)$ and
$E_{n}:=E_{n}(0)$, which are called the Bernoulli and the Euler numbers
respectively. They have numerous important applications in various fields of
mathematics, as number theory, analysis and combinatorics.
Apostol Apostol1951 introduced and investigated the extended form of the
classical Bernoulli polynomials and numbers, which are known as the Apostol-
Bernoulli polynomials and numbers. The Apostol-Euler and the Apostol-Genocchi
polynomials were introduced by Srivastava Srivastava2011 . Belbachir et al.
BH-2019 ; BHR-2019 proposed a new family of polynomials called Euler-Genocchi
polynomials and studied their properties, such as linear recurrences and
difference equations, using a determinantal approach and generating functions.
## 2 Determinantal representation of the Bernoulli-Euler polynomials of
Apostol type
According to Luo-Srivastava , the Apostol-Bernoulli polynomials
$\\{\mathfrak{B}_{n}(x;\lambda)\\}_{n\geq 0}$ and the Apostol-Euler
polynomials $\\{\mathfrak{E}_{n}(x;\lambda)\\}_{n\geq 0}$ are generated by the
following power series:
$\dfrac{t}{\lambda
e^{t}-1}e^{xt}=\sum_{n=0}^{\infty}\mathfrak{B}_{n}(x;\lambda)\dfrac{t^{n}}{n!}\qquad\left(\left|t+\ln\lambda\right|<2\pi,\
\lambda\in\mathbb{R}_{+}^{*}\right)$ (1)
and
$\dfrac{2}{\lambda
e^{t}+1}e^{xt}=\sum_{n=0}^{\infty}\mathfrak{E}_{n}(x;\lambda)\dfrac{t^{n}}{n!}\qquad\left(\left|t+\ln\lambda\right|<\pi,\
\lambda\in\mathbb{R}_{+}^{*}\right).$ (2)
The Apostol-Bernoulli numbers $\mathfrak{B_{n}}(\lambda)$ and the Apostol-
Euler numbers $\mathfrak{E_{n}}(\lambda)$ are given by
$\mathfrak{B_{n}}(\lambda)=\mathfrak{B}_{n}(0;\lambda)$ and
$\mathfrak{E_{n}}(\lambda)=\mathfrak{E_{n}}(0;\lambda)$.
Letting
$T(x,\lambda,t)=\dfrac{2}{\lambda e^{t}+1}e^{xt}\times\dfrac{t}{\lambda
e^{t}-1}e^{xt}=\dfrac{2t}{\lambda^{2}e^{2t}-1}e^{2xt}.$
Taking into account the right hand side of (1) and (2), a direct computation
gives
$\lambda^{2}T(x+1,\lambda,t)-T(x,\lambda,t)=\sum_{n=0}^{\infty}\left\\{\sum_{k=0}^{n}\binom{n}{k}\left[\lambda^{2}\mathfrak{B}_{n-k}(x+1,\lambda)\mathfrak{E}_{k}(x+1,\lambda)-\mathfrak{B}_{n-k}(x,\lambda)\mathfrak{E}_{k}(x,\lambda)\right]\right\\}\dfrac{t^{n}}{n!}.$
On the other hand, we have
$\lambda^{2}T(x+1,\lambda,t)-T(x,\lambda,t)=\lambda^{2}\dfrac{2t}{\lambda^{2}e^{2t}-1}e^{2(x+1)t}-\dfrac{2t}{\lambda^{2}e^{2t}-1}e^{2xt}=2te^{2xt}=\sum_{n=0}^{+\infty}n2^{n}x^{n-1}\dfrac{t^{n}}{n!}.$
Comparing the two expansions of $\lambda^{2}T(x+1,\lambda,t)-T(x,\lambda,t)$,
we formulate the next result.
###### Theorem 2.1
Let $x$ be a real number and $n$ a nonnegative integer. Then
$x^{n}=\sum_{k=0}^{n+1}\Lambda_{n,k}\times\Delta_{n+1-k,k}(x,\lambda),$ (3)
where $\Lambda_{n,k}=\dfrac{1}{2^{n+1}(n+1)}\binom{n+1}{k}$ and
$\Delta_{n,k}(x,\lambda)=\left|\begin{array}[]{cc}\lambda\mathfrak{B}_{n}(x+1,\lambda)&\mathfrak{E}_{k}(x,\lambda)\\\
\mathfrak{B}_{n}(x,\lambda)&\lambda\mathfrak{E}_{k}(x+1,\lambda)\end{array}\right|$.
In particular, taking $\lambda=1$ in (3), we get the following result in terms
of the Bernoulli and the Euler polynomials.
###### Corollary 1
BHR-2019 Let $x$ be a real number and $n\geq 0$ an integer. Then
$x^{n}=\dfrac{1}{2^{n+1}(n+1)}\sum_{k=0}^{n+1}\binom{n+1}{k}\left|\begin{array}[]{cc}B_{n-(k-1)}(x+1)&E_{k}(x)\\\
B_{n-(k-1)}(x)&E_{k}(x+1)\end{array}\right|.$ (4)
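Identity (4) is easy to check symbolically for small $n$; the SymPy sketch below (an illustrative verification using SymPy's built-in Bernoulli and Euler polynomials) confirms it:

```python
import sympy as sp

x = sp.symbols('x')

def delta(n, k):
    # 2x2 determinant from Corollary 1 (the lambda = 1 case of Theorem 2.1)
    return (sp.bernoulli(n, x + 1) * sp.euler(k, x + 1)
            - sp.bernoulli(n, x) * sp.euler(k, x))

for n in range(6):
    rhs = sum(sp.binomial(n + 1, k) * delta(n + 1 - k, k)
              for k in range(n + 2)) / (2**(n + 1) * (n + 1))
    assert sp.expand(rhs - x**n) == 0
```

For $n=0$, for instance, both determinants $\Delta_{1,0}$ and $\Delta_{0,1}$ equal $1$, so the sum is $2$ and dividing by $2^{1}\cdot 1$ gives $x^{0}=1$.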
## 3 Unified Bernoulli-Euler polynomials of Apostol type
In this section, we give a definition of the unified Bernoulli-Euler
polynomials of Apostol type and study their properties using power series.
###### Definition 1
Let $\lambda\in\mathbb{R}_{+}^{*}$ and $\mu\in\mathbb{R}_{+}-\\{1\\}$, we
define the unified Bernoulli-Euler polynomials of Apostol type
$\mathfrak{V_{n}}(x;\lambda;\mu)$ by the following power series:
$\dfrac{2-\mu+\frac{\mu}{2}t}{\lambda e^{t}+(1-\mu)}e^{xt}=\sum_{n\geq
0}\mathfrak{V_{n}}(x;\lambda;\mu)\dfrac{t^{n}}{n!},$ (5)
where
$\left\\{\begin{array}[]{ll}\left|\ln\left(\frac{\lambda}{1-\mu}\right)+t\right|<2\pi,&\hbox{for
}0\leq\mu<1;\\\ {}&{}\\\
\left|\ln\left(\frac{\lambda}{\mu-1}\right)+t\right|<\pi,&\hbox{otherwise}.\end{array}\right.$
Furthermore, the unified Bernoulli-Euler numbers of Apostol type, denoted
$\mathfrak{V_{n}}(\lambda;\mu)$, are given by
$\mathfrak{V_{n}}(\lambda;\mu):=\mathfrak{V_{n}}(0;\lambda;\mu).$ (6)
We summarize in the following table some special polynomials related to this
extension.
Parameters | Generating functions | Polynomials
---|---|---
$\mu=0,\lambda=1$ | $\displaystyle\dfrac{2}{e^{t}+1}e^{xt}=\sum_{n\geq 0}\limits E_{n}(x)\dfrac{t^{n}}{n!},\qquad\left|t\right|<\pi$ | Euler polynomials
$\mu=2,\lambda=1$ | $\displaystyle\dfrac{t}{e^{t}-1}e^{xt}=\sum_{n\geq 0}\limits B_{n}(x)\dfrac{t^{n}}{n!},\qquad\left|t\right|<2\pi$ | Bernoulli polynomials
$\mu=2$ | $\displaystyle\dfrac{t}{\lambda e^{t}-1}e^{xt}=\sum\limits_{n\geq 0}\mathfrak{B}_{n}(x;\lambda)\dfrac{t^{n}}{n!},\qquad\left|t+\ln\lambda\right|<2\pi$ | Apostol-Bernoulli polynomials
$\mu=0$ | $\displaystyle\dfrac{2}{\lambda e^{t}+1}e^{xt}=\sum\limits_{n\geq 0}\mathfrak{E}_{n}(x;\lambda)\dfrac{t^{n}}{n!},\qquad\left|t+\ln\lambda\right|<\pi$ | Apostol-Euler polynomials
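The first two rows of the table can be confirmed directly by expanding the generating function (5); a small SymPy sketch (any computer algebra system would do):

```python
import sympy as sp

t, x = sp.symbols('t x')

def V(n, lam, mu):
    # n-th unified Bernoulli-Euler polynomial of Apostol type, from Eq. (5)
    G = (2 - mu + sp.Rational(mu, 2)*t) / (lam*sp.exp(t) + 1 - mu) * sp.exp(x*t)
    return sp.expand(G.series(t, 0, n + 1).removeO().coeff(t, n) * sp.factorial(n))

# mu = 0, lambda = 1 recovers the Euler polynomials E_n(x)
for n in range(5):
    assert sp.expand(V(n, 1, 0) - sp.euler(n, x)) == 0

# mu = 2, lambda = 1 recovers the Bernoulli polynomials B_n(x)
for n in range(5):
    assert sp.expand(V(n, 1, 2) - sp.bernoulli(n, x)) == 0
```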
We list some properties of the unified Bernoulli-Euler polynomials of Apostol
type using generating function approach.
###### Theorem 3.1
Let $n$ be a nonnegative integer. Then
$\mathfrak{V}_{n}(x+y;\lambda;\mu)=\sum_{k=0}^{n}\binom{n}{k}\mathfrak{V}_{k}(x;\lambda;\mu)y^{n-k}.$
(7)
In particular, for $x:=0$ and $y:=x$, the above relation becomes
$\mathfrak{V}_{n}(x;\lambda;\mu)=\sum_{k=0}^{n}\binom{n}{k}\mathfrak{V}_{k}(\lambda;\mu)x^{n-k}.$
(8)
###### Proof
Expanding the power series defined in (5) at the point $x+y$, we have
$\sum_{n\geq
0}\mathfrak{V}_{n}(x+y;\lambda;\mu)\dfrac{t^{n}}{n!}=\left(\dfrac{2-\mu+\frac{\mu}{2}t}{\lambda
e^{t}+(1-\mu)}\right)e^{(x+y)t}=\sum_{n\geq 0}\sum_{k\geq
0}\mathfrak{V}_{n}(x;\lambda;\mu){y}^{k}\dfrac{t^{n+k}}{n!k!}.$
Applying the product series and then comparing the coefficients of $t^{n}$ on
both sides, we obtain Identity (7).
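The addition formula (7) can likewise be verified by series expansion for small $n$; in the SymPy check below, $\lambda=3$ and $\mu=1/2$ are arbitrary admissible parameters:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
lam, mu = sp.Integer(3), sp.Rational(1, 2)   # illustrative admissible choices

def V(n, z):
    # coefficient extraction from the generating function (5)
    G = (2 - mu + mu*t/2) / (lam*sp.exp(t) + 1 - mu) * sp.exp(z*t)
    return sp.expand(G.series(t, 0, n + 1).removeO().coeff(t, n) * sp.factorial(n))

for n in range(5):
    lhs = V(n, x + y)
    rhs = sum(sp.binomial(n, k) * V(k, x) * y**(n - k) for k in range(n + 1))
    assert sp.expand(lhs - rhs) == 0
```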
###### Remark 1
Expression (7) allows us to obtain the unified Bernoulli-Euler numbers of
Apostol type $\mathfrak{V}_{n}(\lambda;\mu)$ in terms of the unified
Bernoulli-Euler polynomials of Apostol type. Indeed, replacing $y$ by $-x$ in
Formula (7) gives the following expression:
$\mathfrak{V}_{n}(\lambda;\mu)=\sum_{k=0}^{n}\binom{n}{k}(-1)^{n-k}\mathfrak{V}_{k}(x;\lambda;\mu)x^{n-k}.$
As a first consequence of Theorem 2.1, we show that the unified Bernoulli-
Euler polynomials of Apostol type,
$\\{\mathfrak{V}_{n}(x,\lambda;\mu)\\}_{n\geq 0}$ given by the power series in
(5), can be expressed in terms of the Apostol-Bernoulli and the Apostol-Euler
polynomials. That is, by a straightforward calculation, the substitution of
$x^{n}$ given by (3) in Expression (8) allows us to obtain the following
formula:
###### Proposition 1
Let $n$ be a nonnegative integer. Then
$\mathfrak{V}_{n}(x;\lambda;\mu)=\sum_{k=0}^{n}\binom{n}{k}\mathfrak{V}_{n-k}(\lambda;\mu)\sum_{j=0}^{k+1}\Lambda_{k,j}\times\Delta_{k+1-j,j}(x,\lambda).$
## 4 Generalized Raabe Theorem
In this section, we give an extension of Raabe's theorem to the unified
Bernoulli-Euler polynomials of Apostol type.
###### Theorem 4.1
Let $n$ and $m$ be nonnegative integers with $m$ odd. For $\lambda=1-\mu$ and
$\mu\not=1$, we have
$\sum\limits_{k=0}^{m-1}(-1)^{k}\mathfrak{V}_{n}\left(\frac{x+k}{m};1-\mu;\mu\right)=\frac{1-m}{m^{n}}\left(\dfrac{\mu-2}{2(\mu-1)}\right)E_{n}(x)+\frac{1}{m^{n-1}}\mathfrak{V}_{n}(x;1-\mu;\mu).$
###### Proof
It follows from (5) that
$\displaystyle\sum_{n=0}^{\infty}\sum\limits_{k=0}^{m-1}(-1)^{k}$
$\displaystyle\mathfrak{V}_{n}\left(\frac{x+k}{m};1-\mu;\mu\right)\frac{t^{n}}{n!}$
$\displaystyle=\sum_{k=0}^{m-1}(-1)^{k}\sum_{n=0}^{\infty}\mathfrak{V}_{n}\left(\frac{x+k}{m};1-\mu;\mu\right)\frac{t^{n}}{n!}$
$\displaystyle=\sum_{k=0}^{m-1}(-1)^{k}\dfrac{2-\mu+\frac{\mu}{2}t}{(1-\mu)e^{t}+(1-\mu)}e^{(x+k)t/m}$
$\displaystyle=\dfrac{2-\mu+\frac{\mu}{2}t}{(1-\mu)(e^{t}+1)}e^{xt/m}\dfrac{\left[1-(-e^{t/m})^{m}\right]}{1+e^{t/m}}$
$\displaystyle=\dfrac{1}{(1-\mu)}\left(\dfrac{2-\mu+\frac{\mu}{2}t}{e^{t/m}+1}\right)e^{xt/m}$
$\displaystyle=\dfrac{1-m}{2}\left(\dfrac{2-\mu}{1-\mu}\right)\left(\dfrac{2}{e^{t/m}+1}\right)e^{xt/m}+\dfrac{m}{1-\mu}\left(\dfrac{2-\mu+\frac{\mu
t}{2m}}{e^{t/m}+1}\right)e^{xt/m}$
$\displaystyle=\dfrac{1-m}{2}\left(\dfrac{2-\mu}{1-\mu}\right)\sum_{n=0}^{\infty}E_{n}(x)\dfrac{\left(t/m\right)^{n}}{n!}+\sum_{n=0}^{\infty}\frac{1}{m^{n-1}}\mathfrak{V}_{n}(x;1-\mu;\mu)\dfrac{t^{n}}{n!}.$
By simple manipulations and equating the coefficients of $t^{n}$ on both
sides, we get the result.
As a consequence of Theorem 4.1, for $\mu=2$ and $\mu=0$ respectively, we
recover the multiplication theorems for the Bernoulli and the Euler
polynomials proved by Raabe in Raabe-1851 , as specified by Kargin and Kurt
Kargin-Kurt-2011 . They are given as follows:
$\sum\limits_{k=0}^{m-1}B_{n}\left(\frac{x+k}{m}\right)=\dfrac{1}{m^{n-1}}B_{n}(x)$
and
$\sum\limits_{k=0}^{m-1}(-1)^{k}E_{n}\left(\frac{x+k}{m}\right)=\frac{1}{m^{n}}E_{n}(x).$
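Theorem 4.1 itself can be checked symbolically for a small odd $m$. In the SymPy sketch below, the choices $\mu=1/2$ (hence $\lambda=1-\mu=1/2$) and $m=3$ are illustrative:

```python
import sympy as sp

t, x = sp.symbols('t x')
mu = sp.Rational(1, 2)
lam = 1 - mu        # Theorem 4.1 requires lambda = 1 - mu
m = 3               # m must be odd

def V(n, z):
    # coefficient extraction from the generating function (5)
    G = (2 - mu + mu*t/2) / (lam*sp.exp(t) + 1 - mu) * sp.exp(z*t)
    return sp.expand(G.series(t, 0, n + 1).removeO().coeff(t, n) * sp.factorial(n))

for n in range(4):
    lhs = sum((-1)**k * V(n, (x + k)/m) for k in range(m))
    rhs = ((1 - m) * sp.Integer(m)**(-n) * (mu - 2)/(2*(mu - 1)) * sp.euler(n, x)
           + sp.Integer(m)**(1 - n) * V(n, x))
    assert sp.expand(lhs - rhs) == 0
```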
## 5 Some explicit formulas
In this section, we give some explicit formulas of the unified Bernoulli-Euler
polynomials of Apostol type.
###### Theorem 5.1
For $\lambda\in\mathbb{R}_{+}^{*}$ and $\mu\in\mathbb{R}_{+}-\\{1\\}$, it
holds that
$\mathfrak{V}_{n}(x;\lambda;\mu)=\frac{1}{2(\mu-1)}\left[(\mu-2)\mathfrak{E}_{n}\left(x;\frac{\lambda}{1-\mu}\right)-\frac{\mu
n}{2}\mathfrak{E}_{n-1}\left(x;\frac{\lambda}{1-\mu}\right)\right]\quad(n\in\mathbb{N}).$
(9)
###### Proof
We can reformulate (5) as follows
$\displaystyle\sum_{n=0}^{\infty}\mathfrak{V_{n}}(x;\lambda;\mu)\dfrac{t^{n}}{n!}$
$\displaystyle=\left(\frac{1}{2(\mu-1)}\right)\left((\mu-2)-\frac{\mu}{2}t\right)\left(\frac{2}{1+\frac{\lambda}{1-\mu}e^{t}}\right)e^{xt}$
$\displaystyle=\frac{1}{2(\mu-1)}\left[(\mu-2)\sum_{n=0}^{\infty}\mathfrak{E}_{n}\left(x;\frac{\lambda}{1-\mu}\right)\dfrac{t^{n}}{n!}-\frac{\mu}{2}\sum_{n=0}^{\infty}\mathfrak{E}_{n}\left(x;\frac{\lambda}{1-\mu}\right)\dfrac{t^{n+1}}{n!}\right]$
$\displaystyle=\frac{1}{2(\mu-1)}\sum_{n=0}^{\infty}\left[(\mu-2)\mathfrak{E}_{n}\left(x;\frac{\lambda}{1-\mu}\right)-\frac{\mu
n}{2}\mathfrak{E}_{n-1}\left(x;\frac{\lambda}{1-\mu}\right)\right]\dfrac{t^{n}}{n!},$
with the convention $n\,\mathfrak{E}_{n-1}=0$ for $n=0$.
Equating the coefficients of $\frac{t^{n}}{n!}$ on both sides, we obtain
Identity (9).
Here, we give an explicit formula for $\mathfrak{V}_{n}(x;\lambda;\mu)$ as a
weighted combination of the Apostol-Euler and the Apostol-Bernoulli
polynomials.
###### Theorem 5.2
Let $n$ be a nonnegative integer and $\mu\not=1$ a real number. Then
$\mathfrak{V}_{n}(x;\lambda;\mu)=\frac{1}{1-\mu}\left[\left(1-\frac{\mu}{2}\right)\mathfrak{E}_{n}\left(x;\frac{\lambda}{1-\mu}\right)-\frac{\mu}{2}\mathfrak{B}_{n}\left(x;\frac{\lambda}{\mu-1}\right)\right].$
(10)
###### Proof
From (5), we have
$\sum_{n=0}^{\infty}\mathfrak{V_{n}}(x;\lambda;\mu)\dfrac{t^{n}}{n!}=\dfrac{2-\mu+\frac{\mu}{2}t}{\lambda
e^{t}+(1-\mu)}e^{xt}=\dfrac{2-\mu}{2(1-\mu)}\dfrac{2}{\frac{\lambda}{1-\mu}e^{t}+1}e^{xt}+\dfrac{\mu}{2(\mu-1)}\dfrac{t}{\frac{\lambda}{\mu-1}e^{t}-1}e^{xt}.$
Using (1) and (2) leads to get (10).
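Formula (10) can be confirmed by expanding all three generating functions involved; the parameters $\lambda=1$, $\mu=1/2$ below are illustrative:

```python
import sympy as sp

t, x = sp.symbols('t x')
lam, mu = sp.Integer(1), sp.Rational(1, 2)

def coeff(G, n):
    # n-th Taylor coefficient of G in t, scaled by n!
    return sp.expand(G.series(t, 0, n + 1).removeO().coeff(t, n) * sp.factorial(n))

def V(n):
    return coeff((2 - mu + mu*t/2)/(lam*sp.exp(t) + 1 - mu) * sp.exp(x*t), n)

def apostol_euler(n, a):
    return coeff(2/(a*sp.exp(t) + 1) * sp.exp(x*t), n)

def apostol_bernoulli(n, a):
    return coeff(t/(a*sp.exp(t) - 1) * sp.exp(x*t), n)

for n in range(5):
    rhs = ((1 - mu/2)*apostol_euler(n, lam/(1 - mu))
           - mu/2*apostol_bernoulli(n, lam/(mu - 1)))/(1 - mu)
    assert sp.expand(V(n) - rhs) == 0
```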
###### Theorem 5.3
For $\mu\not=1$ and $n\geq 1$, the following formula holds:
$(\mu-1)\sum_{k=1}^{n}\binom{n}{k}\Upsilon_{n-k,k}\left(\frac{x}{2},\lambda,\mu\right)+\left(\frac{\mu}{2}-1\right)\Delta_{n,0}\left(\frac{x}{2},\frac{\lambda}{1-\mu}\right)=n(\mu-2)x^{n-1}-n(n-1)\mu
x^{n-2},$
where
$\Upsilon_{n,k}(x,\lambda,\mu)=\left|\begin{array}[]{cc}\frac{\lambda}{1-\mu}\mathfrak{B}_{n}(x+1,\lambda)&\mathfrak{V}_{k}(x;\lambda;\mu)\\\
\mathfrak{B}_{n}(x,\lambda)&\frac{\lambda}{1-\mu}\mathfrak{V}_{k}(x+1;\lambda;\mu)\end{array}\right|$.
###### Proof
From Theorem 5.1, we have
$\displaystyle\sum_{k=1}^{n}\binom{n}{k}\Upsilon_{n-k,k}\left(x,\lambda,\mu\right)$
$\displaystyle=\frac{1}{2(\mu-1)}\left\\{(\mu-2)\sum_{k=1}^{n}\binom{n}{k}\Delta_{n-k,k}\left(x,\frac{\lambda}{1-\mu}\right)-\mu\sum_{k=1}^{n}k\binom{n}{k}\Delta_{n-k,k-1}\left(x,\frac{\lambda}{1-\mu}\right)\right\\}$
$\displaystyle=\frac{1}{2(\mu-1)}\left\\{(\mu-2)\sum_{k=1}^{n}\binom{n}{k}\Delta_{n-k,k}\left(x,\frac{\lambda}{1-\mu}\right)-\mu
n\sum_{k=1}^{n}\binom{n-1}{k-1}\Delta_{n-k,k-1}\left(x,\frac{\lambda}{1-\mu}\right)\right\\}.$
Applying Theorem 2.1 and a straightforward computation, we obtain
$\displaystyle\sum_{k=1}^{n}\binom{n}{k}\Upsilon_{n-k,k}\left(x,\lambda,\mu\right)$
$\displaystyle=\frac{1}{\mu-1}\left\\{n(\mu-2)(2x)^{n-1}-n(n-1)\mu(2x)^{n-2}\right\\}$
$\displaystyle-\frac{\mu-2}{2(\mu-1)}\Delta_{n,0}\left(x,\frac{\lambda}{1-\mu}\right).$
Multiplying both sides by $(\mu-1)$, we get the desired identity.
## 6 Derivation and integration representations of unified Bernoulli-Euler
polynomials of Apostol type
In this section, we present derivation and integration representations for the
unified Bernoulli-Euler polynomials of Apostol type.
###### Theorem 6.1
Let $l,n$ be two nonnegative integers. Then
$\dfrac{d^{l}}{dx^{l}}\mathfrak{V}_{n}(x;\lambda;\mu)=(n)_{l}\mathfrak{V}_{n-l}(x;\lambda;\mu),$
(11)
$\int_{x}^{y}\mathfrak{V}_{n}(z;\lambda;\mu)dz=\dfrac{1}{(n+1)}\left[\mathfrak{V}_{n+1}(y;\lambda;\mu)-\mathfrak{V}_{n+1}(x;\lambda;\mu)\right],$
(12)
where $(x)_{n}:=x(x-1)\cdots(x-n+1)$ with $(x)_{0}=1$.
###### Proof
The assertion (11) follows from (5) by successive differentiation with
respect to $x$ and induction on $l$. Furthermore, taking $l=1$ in (11) and
integrating both sides of the resulting equation with respect to $z$ over the
interval $[x,y]$, $(y>x)$, we obtain the Integral Formula (12).
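A quick symbolic check of the derivative formula (11), with illustrative parameters $\lambda=1$, $\mu=3$ and $n=5$, $l=2$:

```python
import sympy as sp

t, x = sp.symbols('t x')
lam, mu = sp.Integer(1), sp.Integer(3)   # illustrative admissible choices

def V(n):
    # coefficient extraction from the generating function (5)
    G = (2 - mu + mu*t/2)/(lam*sp.exp(t) + 1 - mu) * sp.exp(x*t)
    return sp.expand(G.series(t, 0, n + 1).removeO().coeff(t, n) * sp.factorial(n))

n, l = 5, 2
# sp.ff(n, l) is the falling factorial (n)_l = n(n-1)...(n-l+1)
assert sp.expand(sp.diff(V(n), x, l) - sp.ff(n, l)*V(n - l)) == 0
```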
###### Remark 2
Setting $\lambda=1,\mu=2$ in (11) and (12), we obtain known results due to Luo
and Srivastava Luo-Srivastava .
###### Corollary 2
Let $n$ be a nonnegative integer. Then
$\int_{x}^{x+y}\mathfrak{V}_{n}(z;\lambda;\mu)dz=\dfrac{1}{(n+1)}\sum_{k=0}^{n}\binom{n}{k}\mathfrak{V}_{k}(x;\lambda;\mu)y^{n-(k-1)}.$
(13)
###### Proof
Replacing $y$ by $x+y$ in the Integral Formula (12) and using Formula (7), by
successive calculations, we obtain the Integral Formula (13).
###### Theorem 6.2
For $\mu\in\mathbb{R}_{+}^{*}-\\{1,2\\}$ and $n$ a nonnegative integer, the
following formula holds:
$\displaystyle\mathfrak{V}_{n+1}(x;\lambda;\mu)-x\mathfrak{V}_{n}(x;\lambda;\mu)=\dfrac{1}{2-\mu}\sum_{i=0}^{n}\binom{n}{i}(n-i)!$
$\displaystyle\times\left(\dfrac{\mu}{2(\mu-2)}\right)^{n-i}\left[\frac{\mu}{2}\mathfrak{V}_{i}(x;\lambda;\mu)-\lambda\sum_{k=0}^{i}\binom{i}{k}\mathfrak{V}_{k}(\lambda;\mu)\mathfrak{V}_{i-k}(x+1;\lambda;\mu)\right].$
(14)
###### Proof
Differentiating both sides of (5) with respect to $t$, expanding the factor
$\left(1+\frac{\mu}{2(2-\mu)}t\right)^{-1}$ as a geometric series for
$\left|t\right|<\frac{2}{\mu}\left|2-\mu\right|$, and using Formulas (5) and
(6), we obtain
$\displaystyle\sum_{n=0}^{\infty}\mathfrak{V}_{n+1}(x;\lambda;\mu)\dfrac{t^{n}}{n!}=x\sum_{n=0}^{\infty}\mathfrak{V}_{n}(x;\lambda;\mu)\dfrac{t^{n}}{n!}+\dfrac{1}{(2-\mu)}\left(\sum_{n=0}^{\infty}\frac{\mu^{n}}{2^{n}(\mu-2)^{n}}t^{n}\right)$
$\displaystyle\times\left[\frac{\mu}{2}\sum_{n=0}^{\infty}\mathfrak{V}_{n}(x;\lambda;\mu)\dfrac{t^{n}}{n!}-\lambda\left(\sum_{n=0}^{\infty}\mathfrak{V}_{n}(\lambda;\mu)\dfrac{t^{n}}{n!}\right)\left(\sum_{n=0}^{\infty}\mathfrak{V}_{n}(x+1;\lambda;\mu)\dfrac{t^{n}}{n!}\right)\right].$
(15)
Then taking into account the series product in (15) and equating the
coefficients of $t^{n}$, we get Identity (14).
## 7 Identities inspired via umbral calculus
The umbral calculus approach is a useful tool for obtaining and guessing
arithmetic and combinatorial identities; see for instance Gessel gess on some
applications of the classical umbral calculus and Di Crescenzo et al. cre on
umbral calculus. See also classical references such as that of Roman and Rota
Roman-Rota-1978 .
Let $\mathbf{B}^{n}(\lambda;\mu)$ be the umbra defined by
$\mathbf{B}^{n}(\lambda;\mu):=\mathfrak{V}_{n}\left(\lambda;\mu\right)$ and
$\left(\mathfrak{V}_{n}\left(x;\lambda;\mu\right)\right)_{n\geq 0}\ $ defined
by
$\sum_{n\geq
0}\mathfrak{V}_{n}\left(x;\lambda;\mu\right)\frac{t^{n}}{n!}=F\left(t\right)e^{xt}=\exp\left(\left(\mathbf{B}(\lambda;\mu)+x\right)t\right),$
where $F\left(t\right):=\sum\limits_{n\geq
0}\mathfrak{V}_{n}\left(\lambda;\mu\right)\frac{t^{n}}{n!}=\exp\left(\mathbf{B}(\lambda;\mu)t\right).$
So, $\mathfrak{V}_{n}\left(x;\lambda;\mu\right)$ admits the umbral
representation
$\mathfrak{V}_{n}\left(x;\lambda;\mu\right)=\left(\mathbf{B}(\lambda;\mu)+x\right)^{n}.$
###### Theorem 7.1
Let $n$ be a nonnegative integer. Then
$\mathfrak{V}_{n}(x+1;\lambda;\mu)=\sum_{k=0}^{n}\binom{n}{k}\mathfrak{V}_{k}(x;\lambda;\mu).$
###### Proof
By the umbral representation
$\mathfrak{V}_{n}\left(x;\lambda;\mu\right)=\left(\mathbf{B}(\lambda;\mu)+x\right)^{n}$,
we have
$\mathfrak{V}_{n}(x+1;\lambda;\mu)=\left(\mathbf{B}(\lambda;\mu)+(x+1)\right)^{n}=\sum_{k=0}^{n}\binom{n}{k}\left(\mathbf{B}(\lambda;\mu)+x\right)^{k}=\sum_{k=0}^{n}\binom{n}{k}\mathfrak{V}_{k}(x;\lambda;\mu).$
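The identity of Theorem 7.1 can also be verified by series expansion; $\lambda=2$, $\mu=1/3$ below are illustrative parameters:

```python
import sympy as sp

t, x = sp.symbols('t x')
lam, mu = sp.Integer(2), sp.Rational(1, 3)

def V(n, z):
    # coefficient extraction from the generating function (5)
    G = (2 - mu + mu*t/2)/(lam*sp.exp(t) + 1 - mu) * sp.exp(z*t)
    return sp.expand(G.series(t, 0, n + 1).removeO().coeff(t, n) * sp.factorial(n))

for n in range(5):
    lhs = V(n, x + 1)
    rhs = sum(sp.binomial(n, k) * V(k, x) for k in range(n + 1))
    assert sp.expand(lhs - rhs) == 0
```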
###### Theorem 7.2
Let $n,m$ be nonnegative integers. Then
$\sum_{k=0}^{n}\binom{n}{k}y^{n-k}\mathfrak{V}_{m+k}\left(x;\lambda;\mu\right)=\sum_{k=0}^{m}\binom{m}{k}\left(-y\right)^{m-k}\mathfrak{V}_{n+k}\left(x+y;\lambda;\mu\right).$
###### Proof
By the umbral representation
$\mathfrak{V}_{n}\left(x;\lambda;\mu\right)=\left(\mathbf{B}(\lambda;\mu)+x\right)^{n}$,
on the one hand, we have
$\displaystyle(\mathbf{B}(\lambda;\mu)+(x+y))^{n}(\mathbf{B}(\lambda;\mu)+x)^{m}$
$\displaystyle=\left(\mathbf{B}(\lambda;\mu)+(x+y)\right)^{n}\left(\mathbf{B}(\lambda;\mu)+(x+y)-y\right)^{m}$
$\displaystyle=\sum_{k=0}^{m}\binom{m}{k}\left(-y\right)^{m-k}\left(\mathbf{B}(\lambda;\mu)+(x+y)\right)^{n+k}$
$\displaystyle=\sum_{k=0}^{m}\binom{m}{k}\left(-y\right)^{m-k}\mathfrak{V}_{n+k}\left(x+y;\lambda;\mu\right),$
and on the other hand, we have
$\displaystyle(\mathbf{B}(\lambda;\mu)+(x+y))^{n}(\mathbf{B}(\lambda;\mu)+x)^{m}$
$\displaystyle=\sum_{k=0}^{n}\binom{n}{k}y^{n-k}\left(\mathbf{B}(\lambda;\mu)+x\right)^{m+k}$
$\displaystyle=\sum_{k=0}^{n}\binom{n}{k}y^{n-k}\mathfrak{V}_{m+k}(x;\lambda;\mu).$
Hence, the two expressions give the desired identity.
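The symmetric identity of Theorem 7.2 can be checked directly for small $n,m$; the parameters below are illustrative:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
lam, mu = sp.Integer(2), sp.Rational(1, 3)   # illustrative admissible choices

def V(n, z):
    # coefficient extraction from the generating function (5)
    G = (2 - mu + mu*t/2)/(lam*sp.exp(t) + 1 - mu) * sp.exp(z*t)
    return sp.expand(G.series(t, 0, n + 1).removeO().coeff(t, n) * sp.factorial(n))

n, m = 3, 2
lhs = sum(sp.binomial(n, k) * y**(n - k) * V(m + k, x) for k in range(n + 1))
rhs = sum(sp.binomial(m, k) * (-y)**(m - k) * V(n + k, x + y) for k in range(m + 1))
assert sp.expand(lhs - rhs) == 0
```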
## References
* (1) Apostol, T. M. On the Lerch zeta function. Pacific Journal of Mathematics 1, 2 (1951), 161–167.
* (2) Belbachir, H., and Hadj-Brahim, S. Some explicit formulas for Euler-Genocchi polynomials. Integers 19 (2019), A28.
* (3) Belbachir, H., Hadj-Brahim, S., and Rachidi, M. On another approach for a family of Appell polynomials. Filomat 32, 12 (2018), 4155–4164.
* (4) Bernoulli, J. Ars Conjectandi. Impensis Thurnisiorum, fratrum, 1713.
* (5) Di Crescenzo, A., and Rota, G.-C. Sul calcolo umbrale. Ricerche di Matematica 43 (1994), 129–162.
* (6) Euler, L. Methodus generalis summandi progressiones. Commentarii Academiae Scientiarum Petropolitanae (1738), 68–97.
* (7) Gessel, I. M. Applications of the classical umbral calculus. Algebra Universalis 49, 4 (2003), 397–434.
* (8) Kargin, L., and Kurt, V. On the generalization of the Euler polynomials with the real parameters. Applied Mathematics and Computation 218, 3 (2011), 856–859.
* (9) Luo, Q.-M., and Srivastava, H. M. Some generalizations of the Apostol-Bernoulli and Apostol-Euler polynomials. Journal of Mathematical Analysis and Applications 308, 1 (2005), 290–302.
* (10) Raabe, J. L. Zurückführung einiger Summen und bestimmten Integrale auf die Jacob-Bernoullische Function. Journal für die reine und angewandte Mathematik 1851, 42 (1851), 348–367.
* (11) Roman, S. M., and Rota, G.-C. The umbral calculus. Advances in Mathematics 27, 2 (1978), 95–188.
* (12) Srivastava, H. M. Some generalizations and basic (or q-) extensions of the Bernoulli, Euler and Genocchi polynomials. Appl. Math. Inform. Sci. 5, 3 (2011), 390–444.
# Latent-Space Inpainting for Packet Loss Concealment in Collaborative Object Detection
This work was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada.
Ivan V. Bajić
School of Engineering Science
Simon Fraser University
Burnaby, BC, Canada
###### Abstract
Edge devices, such as cameras and mobile units, are increasingly capable of
performing sophisticated computation in addition to their traditional roles in
sensing and communicating signals. The focus of this paper is on collaborative
object detection, where deep features computed on the edge device from input
images are transmitted to the cloud for further processing. We consider the
impact of packet loss on the transmitted features and examine several ways for
recovering the missing data. In particular, through theory and experiments, we
show that methods for image inpainting based on partial differential equations
work well for the recovery of missing features in the latent space. The
obtained results represent the new state of the art for missing data recovery
in collaborative object detection.
###### Index Terms:
Collaborative object detection, collaborative intelligence, latent space,
missing data recovery, loss resilience
## I Introduction
In video surveillance and monitoring systems, input video is usually sent to
the cloud for temporary storage or further visual analysis. With the emergence
of “smart cameras,” simpler forms of visual analysis can now be performed on-
board, without the need to incur costs related to video transmission to the
cloud or potential privacy breaches. Still, computational resources available
in the cloud far outmatch those available at the edge, enabling much more
sophisticated analysis in the cloud compared to what is possible in edge
devices.
In between the two extremes mentioned above – cloud-based and edge-based
analytics – stands _collaborative intelligence_ (CI) [1, 2, 3], a framework in
which a machine learning model, usually a deep model, is split between the
edge and the cloud. The front-end of the model is deployed at the edge and
computes intermediate features, which are sent to the back-end in the cloud to
complete the inference task, as shown in Fig. 1. The CI approach has shown
potential for better energy efficiency and lower latency compared to cloud-
based or edge-based analytics [1]. Moreover, CI is better suited to privacy
protection than the cloud-based approach, because the original input signal
never leaves the edge device – only the intermediate features are sent to the
cloud.
Figure 1: Collaborative intelligence over a lossy channel
To make efficient use of the communication channel in CI, intermediate
features need to be compressed. There has been increasing interest in feature
compression recently, both in the academic [4, 5, 6, 7, 8, 9, 10, 11, 12] and
standardization community [13]. Our focus here is on another aspect of the
communication channel, namely its imperfections. At the network/transport
layer, channel imperfections will manifest themselves as packet loss, leading
to missing feature values. This is illustrated by dark regions in the feature
tensor shown in Fig. 1. For successful inference, this data loss in the
feature tensors needs to be mitigated. Yet, there has been very limited amount
of work on this topic. The authors in [14, 15] studied joint source-channel
coding of intermediate features to improve their robustness against bit
errors. In [16], simple interpolation methods (e.g., nearest neighbor and bi-
linear) were explored for recovering missing features, while [17] proposed
low-rank tensor completion for this purpose. All the aforementioned studies
focused on image classification models.
In this paper we focus on the task of object detection, where the model needs
to simultaneously localize multiple objects within an input image and classify
each one of them. This is arguably more challenging than image classification,
where one label per input image needs to be produced, and the corresponding
detection models may be expected to be more sensitive to errors and feature
loss than image classification models. Indeed, as will be seen in the results,
object detection accuracy drops quickly as the packet loss increases, unless
something is done to recover lost features: with only 5% loss, detection
accuracy drops by about 20%, whereas the results in [16, 17] show that for
classification models, 5% loss leads to only about a 5% drop in accuracy,
depending on the model. Hence, missing feature recovery is crucial for
collaborative object detection.
In order to recover lost features, we borrow an idea from image inpainting
[18], specifically the Partial Differential Equation (PDE)-based inpainting
[19, 20, 21]. Such methods operate by solving a PDE-based model of surface
flow, as briefly described in Section II. In order to understand what is the
equivalent of image surface flow in the latent space of features, in Section
III we analyze the effects of typical operations found in deep convolutional
models on the PDE describing the surface flow. The conclusion is that latent-
space flow is described by the same PDE as the input flow, but with an
appropriately scaled flow field. This analysis is put to the test in Section
IV, where we compare the efficacy of latent-space inpainting against the
current state-of-the-art for missing feature recovery. Finally, the paper is
concluded in Section V.
## II Preliminaries
### II-A Surface flow
Image inpainting is the problem of filling in the missing details of an image.
PDE-based methods have been prominent in this area, initially developed as
models that attempt to mimic the techniques used by professional painting
restorers [18]. Several PDE formulations have been used for this purpose,
but our focus here is on a particular formulation from [19, 21] that we will
refer to as “surface flow.” This formulation allows image surface to “flow”
into the missing area as the inpainting progresses. Let $I(x,y,t)$ denote
pixel intensity at time $t$, at spatial position $(x,y)$, then the surface
flow can be expressed as [19, 21]:
$\frac{\partial I}{\partial x}v_{x}+\frac{\partial I}{\partial
y}v_{y}+\frac{\partial I}{\partial t}=0,$ (1)
where $(v_{x},v_{y})$ represents the flow vector. Here, spatial coordinates
$(x,y)$ and iterations $t$ are expressed as continuous quantities to allow
PDE-based formulation, but in practice they are discrete. We also note that
(1) is the same equation as optical flow [22], but there, $(v_{x},v_{y})$
represents motion between video frames. This analogy is not surprising,
because both surface flow and optical flow represent conservation laws for
image intensity. In fact, latent space motion analysis based on optical flow
has recently been performed in [23]. In Section III we will explore what
happens to (1) as the image $I$ passes through common operations found in deep
convolutional models.
### II-B Feature packetization
In order to transmit the feature tensor produced by the edge sub-model over a
packet network, tensor elements need to be packetized. There is currently no
“standard” way of doing this, and there are many possible ways of forming
packets. For the purposes of this study, we adopt the following approach.
First, tensor channels are tiled into an image, as in [4]. Then packets are
formed by taking eight rows at a time from such a tiled image. This is similar
to how packets are usually formed in video streaming [24], and also resembles
the way packets are formed from feature tensors in [16, 17]. With such a
packetization scheme, each lost packet creates an 8-row gap in the channels of
the feature tensor, which needs to be filled in. An example will be seen in
Fig. 4 in Section IV. Note that the analysis in Section III is independent of
the packetization scheme and none of the feature recovery methods examined
here crucially depend on it. However, numerical results may change if a
different packetization scheme is used.
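The packetization described above can be sketched as follows; the tiling layout and packet size here are illustrative, and the exact arrangement in [4] may differ:

```python
import numpy as np

def tile_channels(tensor):
    """Tile a (C, H, W) feature tensor into a 2D image by stacking
    channels in a square-ish grid (an illustrative layout)."""
    C, H, W = tensor.shape
    cols = int(np.ceil(np.sqrt(C)))
    rows = int(np.ceil(C / cols))
    img = np.zeros((rows * H, cols * W), dtype=tensor.dtype)
    for c in range(C):
        r, q = divmod(c, cols)
        img[r*H:(r+1)*H, q*W:(q+1)*W] = tensor[c]
    return img

def packetize(img, rows_per_packet=8):
    """Split the tiled image into packets of 8 rows each."""
    return [img[i:i+rows_per_packet] for i in range(0, img.shape[0], rows_per_packet)]

tensor = np.random.rand(16, 16, 16).astype(np.float32)
img = tile_channels(tensor)          # 64 x 64 tiled image
pkts = packetize(img)                # 8 packets of shape (8, 64)
assert len(pkts) == 8 and pkts[0].shape == (8, 64)

# A lost packet leaves an 8-row gap (marked NaN here) in the tiled image.
lost = 3
rec = np.concatenate([p if i != lost else np.full_like(p, np.nan)
                      for i, p in enumerate(pkts)])
assert np.isnan(rec[24:32]).all() and not np.isnan(rec[:24]).any()
```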
## III Latent-space surface flow
The success of PDE-based image inpainting has demonstrated that (1) represents
a good model for natural images, so it can be used to fill in the gaps in such
images. If that is the case, what would be a good analogous model for latent-
space feature tensors, whose gaps we need to fill in? This is not immediately
clear, since latent-space feature tensors may look quite different from
natural images, as will be seen in Fig. 4. In this section, our goal is to
find the latent-space equivalent of input-space surface flow, as illustrated
in Fig. 2.
Figure 2: What is the latent-space equivalent of input-space surface flow?
To do this, we look at the processing pipeline between the input image and a
given channel of a feature tensor in an edge sub-model. In most deep
convolutional networks, this processing pipeline consists of a sequence of
basic operations: convolutions, nonlinear pointwise activations, and pooling.
We will analyze the effect of each of these operations on (1).
### III-A Convolution
Let $f$ be a (spatial) filter kernel, then the surface flow after convolution
can be described by
$\frac{\partial}{\partial x}(f*I)v^{\prime}_{x}+\frac{\partial}{\partial
y}(f*I)v^{\prime}_{y}+\frac{\partial}{\partial t}(f*I)=0,$ (2)
where $*$ represents convolution and $(v^{\prime}_{x},v^{\prime}_{y})$ is the
flow after the convolution. Since the convolution and differentiation are
linear operations, we have
$f*\left(\frac{\partial I}{\partial x}v^{\prime}_{x}+\frac{\partial
I}{\partial y}v^{\prime}_{y}+\frac{\partial I}{\partial t}\right)=0.$ (3)
Hence, a solution to the surface flow after the convolution satisfies the same
type of equation as (1). This means that if we had a method for solving (1),
the same method would be able to find a solution to the post-convolution flow
(3).
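The step from (2) to (3) uses only the linearity of convolution and differentiation. This commutation can be checked numerically (a sketch using SciPy's `ndimage.convolve` with circular boundary handling, so that the identity holds exactly for finite differences; the kernel and signal are arbitrary placeholders):

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)

def dx(a):
    # circular central difference along axis 1 (the x direction)
    return (np.roll(a, -1, axis=1) - np.roll(a, 1, axis=1)) / 2

I = rng.random((32, 32))                   # one frame of I(x, y, t)
f = rng.random((3, 3))                     # arbitrary spatial filter kernel

lhs = dx(convolve(I, f, mode='wrap'))      # d/dx (f * I)
rhs = convolve(dx(I), f, mode='wrap')      # f * (dI/dx)
assert np.allclose(lhs, rhs)               # differentiation commutes with convolution
```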
### III-B Nonlinear activation
Nonlinear activations such as sigmoid, tanh, ReLU, etc., are usually applied
on the output of convolutions in deep models [25]. These are point-wise
operations, applied to each sample separately. Let $\sigma(\cdot)$ denote such
a nonlinear activation, then the surface flow after this operation is
described by the following PDE:
$\frac{\partial}{\partial
x}\left[\sigma(I)\right]v^{\prime}_{x}+\frac{\partial}{\partial
y}\left[\sigma(I)\right]v^{\prime}_{y}+\frac{\partial}{\partial
t}\left[\sigma(I)\right]=0,$ (4)
where $(v^{\prime}_{x},v^{\prime}_{y})$ is the flow after the nonlinear
activation. By using the chain rule of differentiation, the above equation can
be rewritten as
$\sigma^{\prime}(I)\cdot\left(\frac{\partial I}{\partial
x}v^{\prime}_{x}+\frac{\partial I}{\partial y}v^{\prime}_{y}+\frac{\partial
I}{\partial t}\right)=0.$ (5)
Hence, again, $(v^{\prime}_{x},v^{\prime}_{y})$ satisfies the same type of
equation as (1), and a method that solves pre-activation flow (1) should be
able to find a solution to post-activation flow (5).
Note that (5) may also have other solutions, not just those that satisfy the
surface flow equation (1). For example, consider the ReLU activation defined
by $\sigma(x)=\max(0,x)$. In the regions of $I$ where the values are
negative, the corresponding outputs of ReLU will be zero, so $\sigma(I)=0$,
and its derivative is also zero, $\sigma^{\prime}(I)=0$. Hence, in those
regions, (5) can be satisfied for arbitrary flow
$(v^{\prime}_{x},v^{\prime}_{y})$. However, in practice this does not matter,
because in regions where the signal is constant, propagating signal values in
any direction will produce the same result.
### III-C Pooling
Various forms of pooling are used in deep models [25], such as max-pooling,
mean-pooling, learnt pooling via strided convolutions, etc. All these forms of
pooling can be decomposed into a sequence of two operations as follows:
$\underbrace{\text{spatial operation}\to\text{scale change}}_{\text{pooling}}$
In the case of mean pooling or learnt pooling, spatial operation is a
convolution. In the case of max-pooling, spatial operation is a local maximum
operation. Scale change is simply implemented using regular downsampling.
Since the effect of convolution on surface flow was discussed in Section
III-A, here we further discuss the effect of local maximum and scale change on
(1).
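For max-pooling, the decomposition above can be verified directly (a small NumPy sketch for a 2×2, stride-2 max-pool; the window alignment is an assumption of the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
I = rng.random((8, 8))

# spatial operation: local maximum over a 2x2 window at every position
local_max = np.maximum.reduce([
    I,
    np.roll(I, -1, axis=0),
    np.roll(I, -1, axis=1),
    np.roll(np.roll(I, -1, axis=0), -1, axis=1),
])

# scale change: regular downsampling by 2 in each direction
pooled = local_max[::2, ::2]

# reference: direct 2x2 / stride-2 max pooling over disjoint blocks
ref = I.reshape(4, 2, 4, 2).max(axis=(1, 3))
assert np.allclose(pooled, ref)
```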
#### III-C1 Local maximum
Consider the maximum of function $I(x,y,t)$ over a local spatial region
$R_{h}=[x_{0}-h,x_{0}+h]\times[y_{0}-h,y_{0}+h],$
at a given time $t$. We can approximate $I(x,y,t)$ over this region as a
locally-linear function, whose slope is determined by the spatial derivatives
of $I$ at $(x_{0},y_{0})$, namely $\frac{\partial}{\partial x}I$ and
$\frac{\partial}{\partial y}I$. Depending on the sign of these derivatives,
the local maximum of $I$ over $R_{h}$ will be somewhere on the boundary of
$R_{h}$. In the special case where both derivatives are zero, $I$ is constant
over $R_{h}$ and any point in $R_{h}$, including boundary points, is a local
maximum.
From the first-order Taylor series expansion of $I(x,y,t)$ around
$(x_{0},y_{0},t)$ we have
$\begin{split}I(x_{0}+\epsilon_{x},y_{0}+\epsilon_{y},t)\approx\>&I(x_{0},y_{0},t)\\\
&+\frac{\partial}{\partial x}I(x_{0},y_{0},t)\cdot\epsilon_{x}\\\
&+\frac{\partial}{\partial y}I(x_{0},y_{0},t)\cdot\epsilon_{y},\end{split}$
(6)
for $|\epsilon_{x}|,|\epsilon_{y}|\leq h$. With such linear approximation, the
local maximum of $I(x,y,t)$ over $R_{h}$ (which, as we saw above, is somewhere
on the boundary of $R_{h}$) can be approximated as
$\begin{split}\max_{(x,y)\in R_{h}}&I(x,y,t)\approx I(x_{0},y_{0},t)\\\
&+\operatorname{sign}\left(\frac{\partial}{\partial
x}I(x_{0},y_{0},t)\right)\cdot\frac{\partial}{\partial x}I(x_{0},y_{0},t)\cdot
h\\\ &+\operatorname{sign}\left(\frac{\partial}{\partial
y}I(x_{0},y_{0},t)\right)\cdot\frac{\partial}{\partial y}I(x_{0},y_{0},t)\cdot
h.\end{split}$ (7)
Let (7) be the definition of $M(x_{0},y_{0},t)$, the function that takes on
local maximum values of $I(x,y,t)$ over spatial windows of size $2h\times 2h$.
The surface flow after such a local maximum operation is described by
$\frac{\partial M}{\partial x}v^{\prime}_{x}+\frac{\partial M}{\partial
y}v^{\prime}_{y}+\frac{\partial M}{\partial t}=0,$ (8)
where $(v^{\prime}_{x},v^{\prime}_{y})$ represents the flow after local
spatial maximum operation. From (7) we have
$\displaystyle\frac{\partial M}{\partial x}$ $\displaystyle=\frac{\partial
I}{\partial x}+\operatorname{sign}\left(\frac{\partial I}{\partial
x}\right)\frac{\partial^{2}I}{\partial
x^{2}}h+\operatorname{sign}\left(\frac{\partial I}{\partial
y}\right)\frac{\partial^{2}I}{\partial x\partial y}h,$ (9)
$\displaystyle\frac{\partial M}{\partial y}$ $\displaystyle=\frac{\partial
I}{\partial y}+\operatorname{sign}\left(\frac{\partial I}{\partial
x}\right)\frac{\partial^{2}I}{\partial x\partial
y}h+\operatorname{sign}\left(\frac{\partial I}{\partial
y}\right)\frac{\partial^{2}I}{\partial y^{2}}h,$ $\displaystyle\frac{\partial
M}{\partial t}$ $\displaystyle=\frac{\partial I}{\partial
t}+\operatorname{sign}\left(\frac{\partial I}{\partial
x}\right)\frac{\partial^{2}I}{\partial x\partial
t}h+\operatorname{sign}\left(\frac{\partial I}{\partial
y}\right)\frac{\partial^{2}I}{\partial y\partial t}h,$
and substituting (9) in (8) gives the following PDE
$\begin{split}\frac{\partial I}{\partial x}v^{\prime}_{x}&+\frac{\partial
I}{\partial y}v^{\prime}_{y}+\frac{\partial I}{\partial t}\\\
&+\operatorname{sign}\left(\frac{\partial I}{\partial
x}\right)\cdot\frac{\partial}{\partial x}\left(\frac{\partial I}{\partial
x}v^{\prime}_{x}+\frac{\partial I}{\partial y}v^{\prime}_{y}+\frac{\partial
I}{\partial t}\right)\cdot h\\\ &+\operatorname{sign}\left(\frac{\partial
I}{\partial y}\right)\cdot\frac{\partial}{\partial y}\left(\frac{\partial
I}{\partial x}v^{\prime}_{x}+\frac{\partial I}{\partial
y}v^{\prime}_{y}+\frac{\partial I}{\partial t}\right)\cdot h=0.\end{split}$
(10)
Note that if
$\frac{\partial I}{\partial x}v^{\prime}_{x}+\frac{\partial I}{\partial
y}v^{\prime}_{y}+\frac{\partial I}{\partial t}=0,$ (11)
then (10) will automatically be satisfied. But (11) is the same PDE as (1),
with $(v^{\prime}_{x},v^{\prime}_{y})$ now playing the role of
$(v_{x},v_{y})$. Hence, if $(v^{\prime}_{x},v^{\prime}_{y})$ satisfies the
same type of surface flow equation as (1), it will also satisfy (10). So a
method that solves (1) should be able to find, at least approximately, the
surface flow after the local maximum operation.
#### III-C2 Scale change
Finally, consider the change of spatial scale by factors $s_{x}$ and $s_{y}$
in x- and y-directions, such that the new signal is
$I^{\prime}(x,y,t)=I(s_{x}\cdot x,s_{y}\cdot y,t)$. The surface flow equation
after spatial scaling is
$\frac{\partial I^{\prime}}{\partial x}v^{\prime}_{x}+\frac{\partial
I^{\prime}}{\partial y}v^{\prime}_{y}+\frac{\partial I^{\prime}}{\partial
t}=0.$ (12)
Since $\frac{\partial I^{\prime}}{\partial x}=s_{x}\cdot\frac{\partial
I}{\partial x}$, $\frac{\partial I^{\prime}}{\partial
y}=s_{y}\cdot\frac{\partial I}{\partial y}$, and $\frac{\partial
I^{\prime}}{\partial t}=\frac{\partial I}{\partial t}$, it is easy to see that
post-scaling flow satisfies the same equation as (1), but with the
correspondingly scaled flow field:
$(v^{\prime}_{x},v^{\prime}_{y})=(v_{x}/s_{x},v_{y}/s_{y})$, where
$(v_{x},v_{y})$ is the pre-scaling flow. In deep convolutional models, scaling
factors of $s_{x}=s_{y}=2$ are commonly found, so the surface flow in the
downscaled signal is correspondingly scaled by a factor of $2$ as well.
The analysis presented above suggests that the surface flow equation is
largely left intact by the common operations found in deep convolutional
models, such as convolutions, nonlinear activations, and pooling. In the case
of max-pooling, the conclusion is only approximate, but over small windows
such as $2\times 2$, which are common, it is expected to be a good
approximation. Hence, a method that solves the surface flow PDE (1) should be
a good solution to the latent-space surface flow as well. We put this
conclusion to the test in the next section, where we deploy two algorithms for
finding surface flow, from [19] and [20], to recover missing features in the
latent space.
## IV Experiments
### IV-A Setup
The experiments are carried out on the YOLOv3 object detector [26].
Specifically, a Python implementation of this
model111https://github.com/experiencor/keras-yolo3 based on Keras and
Tensorflow was used. Details of the experimental testbed are shown in Table I.
YOLOv3 has a complex
architecture222https://towardsdatascience.com/yolo-v3-object-
detection-53fb7d3bfe6b with a number of skip connections. Depending on where
the model is split, one or more tensors need to be transmitted. If one wishes
to transmit only a single tensor per input image, there are two points where a
sensible split can be made: layer 12 and layer 36. In this paper we picked
the deeper of these two split points, namely layer 36, so that layers 1-36
form the edge sub-model in Fig. 1 and the remaining layers 37-105 constitute
the cloud sub-model.
Input images were resized to $512\times 512$ before feeding them to the edge
sub-model. With this input size, the feature tensor produced by the edge sub-
model is $64\times 64\times 256$, i.e., $256$ channels of size $64\times 64$.
These were tiled into a square image of size $1024\times 1024$, and quantized
to 8-bit precision, similarly to [4, 6]. No further compression of feature
values was employed.
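A minimal sketch of this tiling and quantization (the row-major channel order and the min-max 8-bit quantizer are assumptions of the sketch; the text specifies only the tiling into a 1024×1024 image and 8-bit precision):

```python
import numpy as np

rng = np.random.default_rng(3)
tensor = rng.standard_normal((64, 64, 256)).astype(np.float32)

# tile the 256 channels of size 64x64 into a 16x16 grid -> 1024x1024 image
tiles = [tensor[:, :, c] for c in range(256)]
tiled = np.block([[tiles[r * 16 + c] for c in range(16)] for r in range(16)])

# quantize to 8-bit precision over the tensor's value range
lo, hi = tiled.min(), tiled.max()
q = np.round((tiled - lo) / (hi - lo) * 255).astype(np.uint8)
```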
TABLE I: Experimental testbed

CPU | Intel(R) Core(TM) i7-6700K @ 4.00GHz
---|---
GPU | NVIDIA GeForce GTX Titan X
CUDA | 10.2
RAM | 64 GB
Operating system | Ubuntu 18.04 LTS
Language | Python 3.5.6
Tensorflow | 2.2.0
Keras | 2.3.1
OpenCV | 3.1.0
We used two methods for solving surface flow (1): one, from [19], will be
referred to as “Navier-Stokes,” and the other, from [20], will be referred to
as “Telea.” These methods were compared against the current state of the art
in missing feature recovery [17]. In [17], several tensor completion methods
were used to recover tensor features missing due to packet loss. These tensor
completion methods make no assumptions on how the feature tensor is produced.
The only underlying assumption is that the tensor lies in a low-rank manifold
which the completion method tries to reach. In [17], tensor completion methods
were tested on recovering features from image classification models VGG-16
[27] and ResNet-34 [28], and found to offer relatively similar performance. As
a representative of these methods, we use Simple Low-Rank Tensor Completion
(SiLRTC) [29]. It is an iterative method that refines its estimates of missing
values at each iteration.
### IV-B Quantitative results
Recovery of missing features was tested on the 2017 version of the validation
set from the COCO dataset [30]. It contains 5,000 images spanning 80 object
classes. Independent packet loss was simulated with
loss probabilities $p\in\\{0.05,0.10,0.15,0.20,0.25,0.30\\}$. For each input
image, the loss was applied to the tiled feature tensor, and the missing
values were recovered by various methods. Model accuracy was measured in terms
of the mean Average Precision (mAP) [26] at the Intersection over Union (IoU)
threshold of 0.5. These are common settings in the literature for quantifying
object detection accuracy. The results are shown in Fig. 3.
Figure 3: mAP vs. loss for various feature recovery methods
In the figure, the blue line represents the case where no feature recovery is
performed, and the missing values are simply set to zero. As shown by this
curve, the accuracy of object detection quickly drops from its loss-free value
of 0.56 down to about 0.36 with just 5% loss. The curve labeled SiLRTC-50 is
the performance obtained when missing features are recovered using SiLRTC with
50 iterations. This provides some improvement. Much better accuracy is
obtained when running SiLRTC with 250 iterations, as shown by the curve
labeled SiLRTC-250. It should be noted that in [17], 50 iterations of SiLRTC
were found to be sufficient to provide solid accuracy for VGG-16 and
ResNet-34, whereas here, with YOLOv3, a larger number of iterations is needed.
One factor that may play a role in this is tensor size; here we are dealing
with tensors of size $64\times 64\times 256$, whereas in [17], tensors were
much smaller: $14\times 14\times 512$ for VGG-16 and $28\times 28\times 128$
for ResNet-34.
The best accuracy in Fig. 3 is obtained by the two PDE-based inpainting
methods, Navier-Stokes and Telea. Their performance in the figure is virtually
indistinguishable, which is not surprising considering that they are based on
the same principles. As seen in the figure, PDE-based inpainting methods are
able to achieve significant improvement in object detection accuracy over
tensor completion methods. Part of the reason for this is that they reasonably
manage to capture the structure of the tensor via surface flow (1), whereas
tensor completion methods do not use any such insight and try to discover this
structure iteratively.
In the second column of Table II we summarize the average mAP gain from Fig. 3
of various methods over the case of no recovery. These values are obtained by
numerically integrating the area between the “No recovery” curve and the curve
corresponding to a particular method using the trapezoidal rule, and then
dividing this value by the range of packet loss probabilities, which is 0.3.
Hence, these values represent the average mAP gain over this range of packet
loss probabilities. As seen in the table, the two PDE-based inpainting methods
provide significant average mAP improvement of over 0.38. SiLRTC is also able
to provide solid gains of about 0.31, if executed for 250 iterations.
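The averaging procedure can be sketched as follows (the mAP values are made-up placeholders, not the actual curves behind Fig. 3; including the loss-free point at $p=0$ is an assumption that makes the integration range equal 0.3):

```python
import numpy as np

# loss probabilities and illustrative mAP curves (placeholder numbers)
p        = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
map_none = np.array([0.56, 0.36, 0.25, 0.17, 0.12, 0.08, 0.05])  # "No recovery"
map_rec  = np.array([0.56, 0.55, 0.53, 0.51, 0.48, 0.45, 0.41])  # a recovery method

# trapezoidal rule over the gap between the curves, divided by the 0.3 range
gap = map_rec - map_none
avg_gain = np.sum((gap[1:] + gap[:-1]) / 2 * np.diff(p)) / 0.3
```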
Figure 4: Top row: object detections overlaid on the original image. Bottom
row: part of the latent space (9 tensor channels) corresponding to the object
detections in the top row. Tensor channels have been mapped to grayscale and
enhanced for visualization purposes. From left to right: (1) Without any data
recovery (locations of lost data indicated as black), no detections are made;
SiLRTC-50 (not shown) also does not provide sufficient recovery for any
detections to be made. (2) With SiLRTC-150, one STOP sign is detected with
confidence 0.52. (3) With SiLRTC-250, two STOP signs are detected, and the
confidence for the smaller sign is improved to 0.82. (4) With Navier-Stokes
[19] and Telea [20] inpainting, two STOP signs are detected, and the
confidence for the smaller sign is improved to 0.91. (5) Without any loss, a
truck is also detected, and the confidence for the smaller STOP sign is 0.94.
The last column in Table II shows the average execution time per tensor of
various methods. As seen here, the PDE-based methods are not only more
accurate, but much faster than SiLRTC: over 100$\times$ faster than SiLRTC-50
and almost 600$\times$ faster than SiLRTC-250. The reason for this is that
SiLRTC performs a global decomposition of the tensor using Singular Value
Decomposition (SVD), which is quite expensive computationally. Meanwhile, the
PDE-based inpainting methods operate locally. While it might be possible for
SiLRTC to reach the accuracy provided by the PDE-based methods using more
iterations, this would run counter to the goals of collaborative intelligence,
where latency is an important aspect; with 250 iterations, it is already much
slower than the PDE-based methods.
TABLE II: Average mAP gain and execution time of various feature recovery methods

Method | Avg. mAP gain | Time per tensor (sec.)
---|---|---
SiLRTC-50 | 0.1028 | 17.0793
SiLRTC-250 | 0.3101 | 83.2044
Navier-Stokes | 0.3823 | 0.1408
Telea | 0.3837 | 0.1356
### IV-C Visual examples
Finally, we show several visual examples in Fig. 4. The top row in the figure
shows object detections overlaid on the original image, and the bottom row
shows nine channels from the feature tensor produced by the edge sub-model.
Note that the entire tensor contains 256 channels, so the nine channels in the
figure are only a small part of the latent space.
The leftmost images in the figure correspond to the case where no feature
recovery is performed. One can see the black lines in the tensor channels,
indicating the locations of the missing packets. No objects are detected when
such a tensor is fed to the cloud sub-model. The next case to the right is
SiLRTC-150, which manages to provide some recovery of the missing features,
resulting in the detection of the small STOP sign in the background with a
confidence of 0.52. In the next case to the right,
SiLRTC-250 provides somewhat better recovery and the resulting tensor allows
detection of the large STOP sign in the foreground, as well as the small sign
in the background, now with increased confidence of 0.82. Finally, Telea
inpainting provides even better feature recovery, with increased confidence of
0.91 for the small STOP sign. The Navier-Stokes inpainting gave almost the
same results. The rightmost image corresponds to the case when there are no
lost features; in this case, the confidence about the small STOP sign is 0.94,
and a small truck is also detected in the background.
## V Conclusions
In this paper, the problem of missing feature recovery in collaborative object
detection was studied. Starting with the surface flow model of natural images,
which is known to work well for image inpainting, we analyzed the effect of
various processing steps used in deep convolutional models on such flow, and
concluded that the flow equation remains approximately intact under such
operations. Hence, methods that work well for image inpainting should work
well for latent-space inpainting as well. This conclusion was tested on the
YOLOv3 object detector. The results showed that the resulting latent-space
inpainting methods provide significant improvement over existing tensor
completion-based methods for missing feature recovery, in both accuracy and
speed.
## References
* [1] Y. Kang, J. Hauswald, C. Gao, A. Rovinski, T. Mudge, J. Mars, and L. Tang, “Neurosurgeon: Collaborative intelligence between the cloud and mobile edge,” in _Proc. 22nd ACM Int. Conf. Arch. Support Programming Languages and Operating Syst._ , 2017, pp. 615–629.
* [2] A. E. Eshratifar, A. Esmaili, and M. Pedram, “Towards collaborative intelligence friendly architectures for deep learning,” in _20th Int. Symp. Quality Electronic Design (ISQED)_ , 2019, pp. 14–19.
* [3] I. V. Bajić, W. Lin, and Y. Tian, “Collaborative intelligence: Challenges and opportunities,” in _Proc. IEEE ICASSP_ , 2021, to appear.
* [4] H. Choi and I. V. Bajić, “Deep feature compression for collaborative object detection,” in _Proc. IEEE ICIP_ , 2018, pp. 3743–3747.
* [5] A. E. Eshratifar, A. Esmaili, and M. Pedram, “BottleNet: a deep learning architecture for intelligent mobile cloud computing services,” in _2019 IEEE/ACM Int. Symp. Low Power Electr. Design (ISLPED)_ , 2019.
* [6] H. Choi and I. V. Bajić, “Near-lossless deep feature compression for collaborative intelligence,” in _Proc. IEEE MMSP_ , 2018, pp. 1–6.
* [7] Z. Chen, K. Fan, S. Wang, L. Duan, W. Lin, and A. C. Kot, “Toward intelligent sensing: Intermediate deep feature compression,” _IEEE Trans. Image Processing_ , vol. 29, pp. 2230–2243, 2019.
* [8] L. Duan, J. Liu, W. Yang, T. Huang, and W. Gao, “Video coding for machines: A paradigm of collaborative compression and intelligent analytics,” _IEEE Trans. Image Processing_ , vol. 29, pp. 8680–8695, 2020.
* [9] S. R. Alvar and I. V. Bajić, “Multi-task learning with compressible features for collaborative intelligence,” in _Proc. IEEE ICIP_ , Sep. 2019, pp. 1705–1709.
* [10] H. Choi, R. A. Cohen, and I. V. Bajić, “Back-and-forth prediction for deep tensor compression,” in _Proc. IEEE ICASSP_ , 2020, pp. 4467–4471.
* [11] S. R. Alvar and I. V. Bajić, “Bit allocation for multi-task collaborative intelligence,” in _Proc. IEEE ICASSP_ , May 2020, pp. 4342–4346.
* [12] R. A. Cohen, H. Choi, and I. V. Bajić, “Lightweight compression of neural network feature tensors for collaborative intelligence,” in _Proc. IEEE ICME_ , Jul. 2020, pp. 1–6.
* [13] ISO/IEC, “Draft call for evidence for video coding for machines,” Jul. 2020, ISO/IEC JTC 1/SC 29/WG 11 W19508.
* [14] K. Choi, K. Tatwawadi, A. Grover, T. Weissman, and S. Ermon, “Neural joint source-channel coding,” in _Proc. ICML_ , Jun. 2019, pp. 1182–1192.
* [15] J. Shao and J. Zhang, “Bottlenet++: An end-to-end approach for feature compression in device-edge co-inference systems,” in _Proc. IEEE ICC Workshops_ , 2020, pp. 1–6.
* [16] H. Unnibhavi, H. Choi, S. R. Alvar, and I. V. Bajić, “DFTS: Deep feature transmission simulator,” in _IEEE MMSP_ , 2018, demo.
* [17] L. Bragilevsky and I. V. Bajić, “Tensor completion methods for collaborative intelligence,” _IEEE Access_ , vol. 8, pp. 41 162–41 174, 2020.
* [18] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, “Image inpainting,” in _Proc. SIGGRAPH_ , Jul. 2000, pp. 417–424.
* [19] M. Bertalmio, A. L. Bertozzi, and G. Sapiro, “Navier-Stokes, fluid dynamics, and image and video inpainting,” in _Proc. IEEE CVPR_ , vol. 1, Dec. 2001, pp. I–355–I–362.
* [20] A. Telea, “An image inpainting technique based on the fast marching method,” _J. Graphics Tools_ , vol. 9, no. 1, pp. 23–34, Jan. 2004.
* [21] M. Bertalmío, V. Caselles, G. Haro, and G. Sapiro, “PDE-based image and surface inpainting,” in _Handbook of Mathematical Models in Computer Vision_ , N. Paragios, Y. Chen, and O. Faugeras, Eds. Boston, MA: Springer US, 2006, pp. 33–61.
* [22] B. K. P. Horn and B. G. Schunck, “Determining optical flow,” _Artificial Intelligence_ , vol. 17, no. 1, pp. 185 – 203, 1981.
* [23] M. Ulhaq and I. V. Bajić, “Latent space motion analysis for collaborative intelligence,” in _Proc. IEEE ICASSP_ , 2021, to appear.
* [24] Y. Wang, J. Ostermann, and Y.-Q. Zhang, _Video Processing and Communications_. Upper Saddle River, NJ, USA: Prentice-Hall, 2002.
* [25] I. Goodfellow, Y. Bengio, and A. Courville, _Deep Learning_. MIT Press, 2016.
* [26] J. Redmon and A. Farhadi, “YOLOv3: an incremental improvement,” _arXiv preprint arXiv:1804.02767_ , 2018.
* [27] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in _Proc. ICLR_ , May 2015.
* [28] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proc. IEEE CVPR_ , Jun. 2016, pp. 770–778.
* [29] J. Liu, P. Musialski, P. Wonka, and J. Ye, “Tensor completion for estimating missing values in visual data,” _IEEE Trans. Pattern Analysis and Machine Intelligence_ , vol. 35, no. 1, pp. 208–220, Jan. 2013.
* [30] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: common objects in context,” in _Proc. ECCV_ , 2014, pp. 740–755.
# An Axiomatization of Stochastic Utility
Ricky Li
I am a senior at Harvard College, and my email is <EMAIL_ADDRESS>. I thank
Tomasz Strzalecki for introducing me to decision theory and for his invaluable
guidance throughout my research career.
(January 29, 2021)
###### Abstract
I provide an axiomatization of stochastic utility over two periods, stating
testable necessary and sufficient conditions under which an agent’s choice
behavior under exogenous menu selection can be modeled by a pair of random
utility functions. Although static random utility is characterized by a single
axiom, Block-Marschak nonnegativity, I demonstrate that an additional notion
of marginal consistency is needed for the two-period axiomatization. In
particular, when each period’s choice set has size three, I restate the
characterization using the simpler axiom of stochastic regularity. I conclude
by stating several corollaries, including an axiomatization of stochastic
utility with full support and an axiomatization of $n$-period stochastic
utility.
## 1 Introduction
As stated in Sen (1971), it is well-known how to characterize the
deterministic choice functions that can be represented by a unique, strict
preference relation. However, in economics, agents’ choices often display some
element of randomness. Instead of observing a mapping from each menu to an
element of the menu, the analyst may observe a mapping from each menu to a
probability distribution over the menu. Analogously, the analyst may wish to
represent this stochastic choice function with a probability distribution over
strict preference relations, also known as a random utility (RU)
representation. Block et al. (1959) and Falmagne (1978) showed that a single
axiom characterizes the existence of such a representation (see Section 2
for the formal result).
Agents also frequently make choices over time. Given dynamic nondeterministic
choice data, the analyst may similarly wish to microfound the data with a
multiperiod random utility representation. Depending on the primitive, there
are multiple variants of this model. One type is to treat menus as endogenous:
at any given period, the agent chooses a lottery over the set of pairs of
immediate consumption and a menu of lotteries for the next period. Given
dynamic choice data of this type, Frick et al. (2019) obtained an
axiomatization of dynamic random expected utility, as well as sharper sub-
models in which agents are forward-looking.
However, there are also settings in which menus may be exogenously selected,
such as research studies in which the authors present menus to the subjects.
There are also settings in which the choice set is finite, ruling out
lotteries. In particular, the analyst may wonder when this variant of dynamic
choice data can be modeled by a stochastic process of random preferences. The
main result of this paper is a characterization of these representations,
which I name stochastic utility (SU). The rest of the paper proceeds as
follows. Section 2 provides an overview of RU and its axiomatization. Section
3 formally defines SU and states its axiomatization. Section 4 provides some
corollaries to the main result. Section 5 contains some relevant combinatorics
results and all proofs.
## 2 Random Utility
Before introducing SU, I provide a brief overview of RU and its
axiomatization. Let $X$ be a finite choice set, and let
$\mathcal{M}:=2^{X}\backslash\\{\emptyset\\}$ be the set of all nonempty
menus. Given a (exogenously-chosen) menu $A\in\mathcal{M}$, the agent makes a
choice $x\in A$. The agent’s choice data for all nonempty menus is encoded in
the following primitive:
###### Definition 1.
A stochastic choice function is a mapping
$\rho:\mathcal{M}\rightarrow\Delta(X)$ satisfying $\text{supp
}\rho(A)\subseteq A$ for all $A\in\mathcal{M}$.
Stochastic choice functions must satisfy $\text{supp }\rho(A)\subseteq A$
because the agent can only pick from choices within the menu. As in Strzalecki
(2021), I use $\rho$ to denote a stochastic choice function and $\rho(x,A)$ to
denote the probability that $\rho(A)$ assigns $x$. Let $P$ be the set of
strict preference relations over $X$, and let $C(x,A):=\big{\\{}\succ\in
P:x\succ A\backslash\\{x\\}\big{\\}}$. ($P$ can also be viewed as the set of
permutations of $X$; the notation $x\succ A\backslash\\{x\\}$ denotes $x\succ
y\ \forall\ y\in A\backslash\\{x\\}$.) Observe that $\\{C(x,A)\\}_{x\in A}$
form a partition of $P$.
###### Definition 2.
$\mu\in\Delta(P)$ is a random utility (RU) representation of $\rho$ if
$\rho(x,A)=\mu\big{(}C(x,A)\big{)}$ for all $x\in A\in\mathcal{M}$.
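Definition 2 can be made concrete by enumerating rankings directly (a minimal Python sketch for $|X|=3$; the choice set and the weights defining $\mu$ are arbitrary placeholders):

```python
from itertools import permutations
import random

random.seed(0)
X = ('a', 'b', 'c')
P = list(permutations(X))       # each ranking lists X from best to worst

w = [random.random() for _ in P]
mu = {r: wi / sum(w) for r, wi in zip(P, w)}    # a random mu in Delta(P)

def rho(x, A):
    """rho(x, A) = mu(C(x, A)): mass of rankings where x beats A \\ {x}."""
    return sum(m for r, m in mu.items()
               if all(r.index(x) < r.index(y) for y in A if y != x))

A = ('a', 'b')
assert abs(sum(rho(x, A) for x in A) - 1) < 1e-12   # rho(A) in Delta(A)
assert rho('a', ('a', 'b')) >= rho('a', X)          # regularity (Axiom 1)
```

The two assertions check that $\{C(x,A)\}_{x\in A}$ partition $P$ and that the induced $\rho$ satisfies regularity, since $C(x,A)\supseteq C(x,B)$ for $A\subseteq B$.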
In the deterministic case, it is well-known that a choice function can be
represented by a strict preference relation if and only if it satisfies Sen’s
$\alpha$ condition, as shown in Sen (1971). I will now state the analogous
axiomatization of RU, first for $|X|\leq 3$ and second for any finite $X$.
###### Axiom 1.
$\rho$ satisfies regularity if $\rho(x,A)\geq\rho(x,B)$ for all $x\in
A\subseteq B\in\mathcal{M}$.
As stated in Strzalecki (2021), regularity serves as the stochastic analog of
Sen’s $\alpha$. Regularity is necessary for RU because $C(x,A)\supseteq
C(x,B)$ for all $x\in A\subseteq B$. In particular, when the choice set is of
size three or less, regularity characterizes RU.
###### Lemma 1 (Block et al. (1959)).
Suppose $|X|\leq 3$. $\rho$ has a RU representation if and only if it
satisfies regularity.
If the choice set satisfies these cardinalities, Strzalecki (2021) shows that
the RU representation is also unique. For higher cardinalities, a stronger
axiom is needed to characterize RU.
###### Definition 3 (Chambers and Echenique (2016)).
For any $A\subsetneq X$ and $x\in A^{C}$, define their Block-Marschak sum to
be
$\displaystyle M_{x,A}:=\sum_{B\supseteq A^{C}}(-1)^{|B\backslash
A^{C}|}\rho(x,B)$
###### Axiom 2 (Chambers and Echenique (2016)).
$\rho$ satisfies Block-Marschak nonnegativity if $M_{x,A}\geq 0$ for all $x\in
A^{C}\neq\emptyset$.
###### Lemma 2 (Block et al. (1959); Falmagne (1978)).
$\rho$ has a RU representation if and only if it satisfies Block-Marschak
nonnegativity.
With these two axiomatizations in hand, I turn to the dynamic model.
## 3 Stochastic Utility
### 3.1 Preliminaries
I begin by generalizing the agent’s static problem to two periods, denoted
$t=1,2$, with finite choice sets $X_{t}$. As before, define
$\mathcal{M}_{t}:=2^{X_{t}}\backslash\\{\emptyset\\}$. In period $t$, the
agent is offered a (exogenously-chosen) menu $A_{t}\in\mathcal{M}_{t}$ and
makes a choice $x_{t}\in A_{t}$. Importantly, in the dynamic case, the analyst
sequentially observes the agent’s choices. We can thus encode the agent’s
choice data as follows. As before, the analyst observes the (first-period)
stochastic choice function $\rho_{1}$. As in Strzalecki (2021), define
$\mathcal{H}:=\\{(A_{1},x_{1})\in\mathcal{M}_{1}\times
X_{1}:\rho_{1}(x_{1},A_{1})>0\\}$ as the set of observable choice histories.
In addition to $\rho_{1}$, the analyst also observes a family of period-2
stochastic choice functions $\\{\rho_{2}(\cdot|h)\\}_{h\in\mathcal{H}}$,
indexed by choice histories. Thus, the primitive is the vector
$\boldsymbol{\rho}^{2}:=\big{(}\rho_{1},\\{\rho_{2}(\cdot|h)\\}_{h\in\mathcal{H}}\big{)}$.
Since the analyst does not have access to data describing the agent’s period-2
choices after making zero-probability period-1 choices, WLOG let
$\rho_{2}(\cdot,A_{2}|A_{1},x_{1})\in\Delta(A_{2})$ be an arbitrarily chosen
probability distribution when $\rho_{1}(x_{1},A_{1})=0$. (This is to ensure
that expressions like
“$\rho_{2}(x_{2},A_{2}|A_{1},x_{1})\rho_{1}(x_{1},A_{1})=0$” make sense when
$\rho_{1}(x_{1},A_{1})=0$, so that the forthcoming axioms are well-defined. As
long as $\rho_{2}(\cdot,A_{2}|A_{1},x_{1})$ is a genuine probability
distribution over $A_{2}$, its values will not affect the axiomatization.) As
before, let $P_{t}$ be the set of strict preference relations over $X_{t}$.
For $x_{t}\in A_{t}$, define $C_{t}(x_{t},A_{t}):=\big{\\{}\succ_{t}\ \in
P_{t}:x_{t}\succ_{t}A_{t}\backslash\\{x_{t}\\}\big{\\}}$ and
$C(x_{t},A_{t}):=C_{t}(x_{t},A_{t})\times P_{-t}$, where
$\\{t,-t\\}=\\{1,2\\}$.
###### Definition 4 (Strzalecki (2021)).
$\mu\in\Delta(P_{1}\times P_{2})$ is a (two-period) stochastic utility (SU)
representation of $\boldsymbol{\rho}^{2}$ if
$\rho_{1}(x_{1},A_{1})=\mu\big{(}C(x_{1},A_{1})\big{)}$ for all $x_{1}\in
A_{1}\in\mathcal{M}_{1}$ and
$\rho_{2}(x_{2},A_{2}|A_{1},x_{1})=\mu\big{(}C(x_{2},A_{2})|C(x_{1},A_{1})\big{)}$
for all $x_{2}\in A_{2}\in\mathcal{M}_{2}$ and $(A_{1},x_{1})\in\mathcal{H}$.
If $\mu$ is a SU representation of $\boldsymbol{\rho}^{2}$, note that its
marginal over $P_{1}$ is a RU representation of $\rho_{1}$ and its conditional
$\mu\big{(}\cdot|C(x_{1},A_{1})\big{)}$ is a RU representation of
$\rho_{2}(\cdot|A_{1},x_{1})$ for $(A_{1},x_{1})\in\mathcal{H}$.
###### Definition 5.
For each $t=1,2$ and $x_{t}\in A_{t}^{C}\neq\emptyset$, define their joint
Block-Marschak sum to be
$\displaystyle M_{x_{1},A_{1};x_{2},A_{2}}:=\sum_{B_{2}\supseteq
A_{2}^{C}}\sum_{B_{1}\supseteq A_{1}^{C}}(-1)^{|B_{1}\backslash
A_{1}^{C}|+|B_{2}\backslash
A_{2}^{C}|}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})\rho_{1}(x_{1},B_{1})$
and their joint upper edge set (joint upper edge sets are a generalization of
what Chambers and Echenique (2016) define as upper contour sets) to be
$\displaystyle E(x_{1},A_{1};x_{2},A_{2}):=\\{(\succ_{1},\succ_{2})\in
P_{1}\times P_{2}:A_{t}\succ_{t}x_{t}\succ_{t}A_{t}^{C}\backslash\\{x_{t}\\},\
t=1,2\\}$
### 3.2 Axiomatization
###### Axiom 3.
$\boldsymbol{\rho}^{2}$ satisfies stochastic Block-Marschak nonnegativity if
$M_{x_{1},A_{1};x_{2},A_{2}}\geq 0$ for each $t=1,2$ and $x_{t}\in
A_{t}^{C}\neq\emptyset$.
As its name suggests, Axiom 3 serves as the two-period analog of Axiom 2.
Unlike the static case, Axiom 3 is not sufficient, and another axiom that
enforces consistency between periods is needed to complete the
characterization.
###### Axiom 4.
$\boldsymbol{\rho}^{2}$ satisfies marginal consistency (in Strzalecki (2021),
this is stated as the LTP axiom) if
$\displaystyle P(x_{2},A_{2};A_{1}):=\sum_{x_{1}\in
A_{1}}\rho_{2}(x_{2},A_{2}|A_{1},x_{1})\rho_{1}(x_{1},A_{1})$
is invariant in $A_{1}$ for all $x_{2}\in A_{2}\in\mathcal{M}_{2}$.
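Marginal consistency is simply the Law of Total Probability applied to the joint choice probabilities, so it is easy to check numerically for data generated from a stochastic-utility model. The following sketch is illustrative only, with made-up alternatives and an arbitrarily weighted (non-uniform, correlated) generating model; all identifiers are mine:

```python
from itertools import permutations
from fractions import Fraction

X1, X2 = ("a", "b"), ("p", "q", "r")
P1, P2 = list(permutations(X1)), list(permutations(X2))
raw = {(p1, p2): 1 + i + 2 * j for i, p1 in enumerate(P1)
       for j, p2 in enumerate(P2)}
Z = sum(raw.values())
mu = {k: Fraction(v, Z) for k, v in raw.items()}  # a correlated SU model

def top(p, A):
    return next(x for x in p if x in A)

def rho1(x1, A1):
    return sum(w for (p1, _), w in mu.items() if top(p1, A1) == x1)

def rho2(x2, A2, A1, x1):
    joint = sum(w for (p1, p2), w in mu.items()
                if top(p1, A1) == x1 and top(p2, A2) == x2)
    return joint / rho1(x1, A1)

def P(x2, A2, A1):
    """P(x2, A2; A1) from Axiom 4."""
    return sum(rho2(x2, A2, A1, x1) * rho1(x1, A1) for x1 in A1)

# The sum collapses to the mu-mass of the period-2 event alone, so it
# cannot depend on the period-1 menu A1.
menus1 = [{"a"}, {"b"}, {"a", "b"}]
for x2, A2 in [("p", {"p", "q"}), ("q", {"p", "q", "r"}), ("r", {"q", "r"})]:
    assert len({P(x2, A2, A1) for A1 in menus1}) == 1
```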
The main result of this paper is that Axioms 3 and 4 characterize two-period
SU:
###### Theorem 1.
$\boldsymbol{\rho}^{2}$ has a SU representation if and only if it satisfies
stochastic Block-Marschak nonnegativity and marginal consistency.
The full proof of Theorem 1 is in Section 5, but I will provide a sketch here.
First, I will state several helpful propositions. Propositions 1 and 2 serve
as useful identities for the joint Block-Marschak sums (Proposition 1 is the
two-period analog of Lemma 7.4.I in Chambers and Echenique (2016), while
Proposition 2 is a partial generalization of Lemma 7.4.II in the same book),
and Proposition 3 (the two-period analog of Proposition 7.3 in Chambers and
Echenique (2016)) characterizes SU representations as probability measures
that assign each joint upper edge set its corresponding joint Block-Marschak
sum.
###### Proposition 1.
For each $t=1,2$ and $x_{t}\in A_{t}^{C}\neq\emptyset$
$\displaystyle\rho_{2}(x_{2},A_{2}^{C}|A_{1}^{C},x_{1})\rho_{1}(x_{1},A_{1}^{C})=\sum_{B_{2}\subseteq
A_{2}}\sum_{B_{1}\subseteq A_{1}}M_{x_{1},B_{1};x_{2},B_{2}}$
###### Proposition 2.
For any $x_{1}\in A_{1}^{C}\neq\emptyset$ and $\emptyset\subsetneq
A_{2}\subsetneq X_{2}$
$\displaystyle\sum_{x_{2}\in
A_{2}^{C}}M_{x_{1},A_{1};x_{2},A_{2}}=\sum_{y_{2}\in
A_{2}}M_{x_{1},A_{1};y_{2},A_{2}\backslash\\{y_{2}\\}}$
###### Proposition 3.
$\mu$ is a SU representation of $\boldsymbol{\rho}^{2}$ if and only if
$\mu\big{(}E(x_{1},A_{1};x_{2},A_{2})\big{)}=M_{x_{1},A_{1};x_{2},A_{2}}$ for
each $t=1,2$ and $x_{t}\in A_{t}^{C}\neq\emptyset$.
To prove the forwards direction of Theorem 1, note that Proposition 3
immediately implies that stochastic Block-Marschak nonnegativity is necessary,
since probability measures assign nonnegative values to all
events. (Analogous reasoning provides intuition for why Block-Marschak
nonnegativity is necessary for static RU.) Marginal consistency is necessary
because of the Law of Total Probability. With Proposition 3 in hand, it
follows that to prove the backwards direction, it suffices to construct a
probability measure $\mu\in\Delta(P_{1}\times P_{2})$ that assigns each joint
upper edge set its corresponding joint Block-Marschak sum. I do this in
Section 5 via the following steps (the proof strategy for this direction is
adapted from the proof of the static case in Chambers and Echenique (2016)):
1. Using marginal consistency, prove the period-1 equivalent of Proposition 2.
2. Using stochastic Block-Marschak nonnegativity, recursively define a
“partial measure” $\nu$. $\nu$ is “partial” in the following sense: it is not
defined on all subsets of $P_{1}\times P_{2}$, but rather on pairs of subsets
called $t$-cylinders.
3. Verify that $\nu$ satisfies two crucial additivity properties over the
pairs of $t$-cylinders.
4. Use both additivity properties to define a probability measure $\mu$ that
is an extension of $\nu$, and verify that $\mu$ assigns each joint upper edge
set its corresponding joint Block-Marschak sum.
### 3.3 Axiomatization for $\boldsymbol{|X_{1}|=|X_{2}|=3}$
At lower choice set cardinalities, we can restate stochastic Block-Marschak
nonnegativity as a simpler axiom.
###### Axiom 5.
$\boldsymbol{\rho}^{2}$ satisfies stochastic regularity if for each $t=1,2$
and $x_{t}\in A_{t}\subseteq B_{t}$,
$\displaystyle\frac{\rho_{1}(x_{1},A_{1})}{\rho_{1}(x_{1},B_{1})}\geq\frac{\rho_{2}(x_{2},A_{2}|B_{1},x_{1})-\rho_{2}(x_{2},B_{2}|B_{1},x_{1})}{\rho_{2}(x_{2},A_{2}|A_{1},x_{1})-\rho_{2}(x_{2},B_{2}|A_{1},x_{1})}$
Stochastic regularity is necessary because $x_{t}\in A_{t}\subseteq B_{t}$ for
each $t=1,2$ implies
$\displaystyle C(x_{1},A_{1})\cap\bigg{(}C(x_{2},A_{2})\backslash
C(x_{2},B_{2})\bigg{)}\supseteq
C(x_{1},B_{1})\cap\bigg{(}C(x_{2},A_{2})\backslash C(x_{2},B_{2})\bigg{)}$
When $|X_{1}|=|X_{2}|=3$ and marginal consistency holds, it is also
sufficient.
###### Proposition 4.
Suppose $|X_{1}|=|X_{2}|=3$. $\boldsymbol{\rho}^{2}$ has a unique SU
representation if and only if it satisfies stochastic regularity and marginal
consistency.
## 4 Corollaries
### 4.1 SU with Full Support
As shown by Fishburn (1998), for arbitrary finite $X$, RU representations need
not be unique. This also implies that, in general, SU representations need not
be unique. (To see this, let $\mu_{1},\mu_{1}^{\prime}\in\Delta(P_{1})$ be
distinct RU representations of $\rho_{1}$, and let $\mu_{2}\in\Delta(P_{2})$.
Let $\mu=\mu_{1}\times\mu_{2}$ and $\mu^{\prime}=\mu_{1}^{\prime}\times\mu_{2}$:
it follows that
$\mu\big{(}C(x_{2},A_{2})|C(x_{1},A_{1})\big{)}=\mu_{2}\big{(}C_{2}(x_{2},A_{2})\big{)}=\mu^{\prime}\big{(}C(x_{2},A_{2})|C(x_{1},A_{1})\big{)}$.)
Indeed, sometimes it may be desirable to represent $\boldsymbol{\rho}^{2}$
using a SU representation with full support over $P_{1}\times P_{2}$. (The
existence of such a representation is equivalent to the existence of a
distribution over $\mathbb{R}^{X_{1}}\times\mathbb{R}^{X_{2}}$ with positive
density.)
###### Definition 6.
$\mu\in\Delta(P_{1}\times P_{2})$ has full support if
$\mu(\succ_{1},\succ_{2})>0$ for all $(\succ_{1},\succ_{2})\in P_{1}\times
P_{2}$.
It turns out that characterizing this case requires only a slightly stronger
version of Axiom 3.
###### Axiom 6.
$\boldsymbol{\rho}^{2}$ satisfies stochastic Block-Marschak positivity if
$M_{x_{1},A_{1};x_{2},A_{2}}>0$ for each $t=1,2$ and $x_{t}\in
A_{t}^{C}\neq\emptyset$.
###### Corollary 1.
$\boldsymbol{\rho}^{2}$ has a SU representation with full support if and only
if it satisfies stochastic Block-Marschak positivity and marginal consistency.
(Analogously, $\rho$ has a RU representation with full support if and only if
it satisfies the strict version of Axiom 2; the proof proceeds analogously to
the proof of this corollary.)
Observe that stochastic Block-Marschak positivity is necessary because
probability measures with full support assign strictly positive probability to
all nonempty events.
### 4.2 More Periods
I am currently working on extending Theorem 1 to more than two periods with
multiperiod versions of both axioms. Fix $n>2$. The primitive is now the
vector
$\displaystyle\boldsymbol{\rho}^{n}:=\big{(}\rho_{1},\\{\rho_{2}(\cdot|h_{1})\\}_{h_{1}\in\mathcal{H}_{1}},\ldots,\\{\rho_{n}(\cdot|h_{n-1})\\}_{h_{n-1}\in\mathcal{H}_{n-1}}\big{)}$
where $\mathcal{H}_{1}:=\\{(A_{1},x_{1}):\rho_{1}(x_{1},A_{1})>0\\}$ and
$\mathcal{H}_{t}:=\\{(A_{t},x_{t};h_{t-1})\in\mathcal{M}_{t}\times
X_{t}\times\mathcal{H}_{t-1}:\rho_{t}(x_{t},A_{t}|h_{t-1})>0\\}$ for all
$t>1$. As before, WLOG let
$\rho_{t}(\cdot,A_{t}|A_{t-1},x_{t-1};\cdots;A_{1},x_{1})\in\Delta(A_{t})$ be
an arbitrary probability distribution if
$\rho_{t^{\prime}}(x_{t^{\prime}},A_{t^{\prime}}|A_{t^{\prime}-1},x_{t^{\prime}-1};\cdots;A_{1},x_{1})=0$
for some $t^{\prime}<t$.
Define $P_{t},C_{t}(x_{t},A_{t}),C(x_{t},A_{t})$ as before. Given $t>1$ and
$h_{t}=(A_{t},x_{t};h_{t-1})$, define $C(h_{t}):=C(x_{t},A_{t})\cap
C(h_{t-1})$.
###### Definition 7 (Strzalecki (2021)).
$\mu\in\Delta\big{(}\bigtimes_{t=1}^{n}P_{t}\big{)}$ is an ($n$-period)
stochastic utility (SU) representation of $\boldsymbol{\rho}^{n}$ if
$\rho_{1}(x_{1},A_{1})=\mu\big{(}C(x_{1},A_{1})\big{)}$ for all $x_{1}\in
A_{1}\in\mathcal{M}_{1}$ and
$\rho_{t}(x_{t},A_{t}|h_{t-1})=\mu\big{(}C(x_{t},A_{t})|C(h_{t-1})\big{)}$ for
all $x_{t}\in A_{t}\in\mathcal{M}_{t}$ and $h_{t-1}\in\mathcal{H}_{t-1}$.
Now, we generalize the axioms. Let
$(\boldsymbol{x},\boldsymbol{A}):=(x_{t},A_{t})_{t=1}^{n}$ and
$(\boldsymbol{x}_{-t},\boldsymbol{A}_{-t})=(x_{t^{\prime}},A_{t^{\prime}})_{t^{\prime}=1,t^{\prime}\neq
t}^{n}$. Let $\boldsymbol{A}^{C}=(A_{t}^{C})_{t=1}^{n}$, and say
$\boldsymbol{B}\geq\boldsymbol{A}\iff B_{t}\supseteq A_{t}$ for each
$t=1,\ldots,n$. Let
$j(\boldsymbol{x},\boldsymbol{A})=\rho_{1}(x_{1},A_{1})\prod_{t=2}^{n}\rho_{t}(x_{t},A_{t}|A_{t-1},x_{t-1};\ldots,A_{1},x_{1})$.
###### Axiom 7.
$\boldsymbol{\rho}^{n}$ satisfies ($n$-period) stochastic Block-Marschak
nonnegativity if
$\displaystyle
M_{(\boldsymbol{x},\boldsymbol{A})}:=\sum_{\boldsymbol{B}\geq\boldsymbol{A}^{C}}(-1)^{\sum_{t=1}^{n}|B_{t}\backslash
A_{t}^{C}|}j(\boldsymbol{x},\boldsymbol{B})\geq 0$
for all $(\boldsymbol{x},\boldsymbol{A})$ satisfying $x_{t}\in
A_{t}^{C}\neq\emptyset$ for each $t=1,\ldots,n$.
###### Axiom 8.
$\boldsymbol{\rho}^{n}$ satisfies ($n$-period) marginal consistency if for any
$(\boldsymbol{x},\boldsymbol{A})$ and $t=1,\ldots,n-1$,
$\displaystyle
P(\boldsymbol{x}_{-t},\boldsymbol{A}_{-t};A_{t}):=\sum_{x_{t}\in
A_{t}}j(\boldsymbol{x}_{-t},\boldsymbol{A}_{-t};x_{t},A_{t})$
is invariant in $A_{t}$.
###### Corollary 2 (Conjecture).
$\boldsymbol{\rho}^{n}$ has a SU representation if and only if it satisfies
stochastic Block-Marschak nonnegativity and marginal consistency.
As before, to prove Corollary 2 it will be helpful to have the following
generalization of Proposition 3 in hand.
###### Corollary 3 (Conjecture).
$\mu$ is a SU representation of $\boldsymbol{\rho^{n}}$ if and only if
$\mu\big{(}E(\boldsymbol{x},\boldsymbol{A})\big{)}=M_{(\boldsymbol{x},\boldsymbol{A})}$
for all $(\boldsymbol{x},\boldsymbol{A})$ satisfying $x_{t}\in
A_{t}^{C}\neq\emptyset$ for each $t=1,\ldots,n$, where
$\displaystyle
E(\boldsymbol{x},\boldsymbol{A}):=\big{\\{}(\succ_{1},\ldots,\succ_{n})\in\bigtimes_{t=1}^{n}P_{t}:A_{t}\succ_{t}x_{t}\succ_{t}A_{t}^{C}\backslash\\{x_{t}\\},\
t=1,\ldots,n\big{\\}}$
## 5 Appendix
### 5.1 The Möbius Inversion
Let $(L,\leq)$ be a finite, partially ordered set (poset).
###### Definition 8 (Van Lint et al. (2001), 25.2).
The Möbius function $m_{L}:L^{2}\rightarrow\mathbb{Z}$ is
$\displaystyle m_{L}(a,b)=\begin{cases}1&a=b\\\ 0&a\nleq b\\\ -\sum_{a\leq
c<b}m_{L}(a,c)&a<b\end{cases}$
###### Lemma 3 (Van Lint et al. (2001), 25.5).
Given a function $f:L\rightarrow\mathbb{R}$, define $F(a):=\sum_{b\geq
a}f(b)$. Then
$\displaystyle f(a)=\sum_{b\geq a}m_{L}(a,b)F(b)$
This is known as the Möbius inversion.
I close this section with two more lemmas that will help for the following
proofs.
###### Lemma 4 (Van Lint et al. (2001), 25.1).
Fix finite $X$ and let $L=2^{X}$, $\leq=\subseteq$. Then
$\displaystyle m_{L}(A,B)=\begin{cases}(-1)^{|B|-|A|}&A\subseteq B\\\
0&\text{else}\end{cases}$
###### Lemma 5 (Godsil (2018), 3.1).
Let $L,S$ be posets with respective Möbius functions $m_{L},m_{S}$. Then
$\displaystyle m_{L\times
S}\big{(}(a_{L},a_{S}),(b_{L},b_{S})\big{)}=m_{L}(a_{L},b_{L})m_{S}(a_{S},b_{S})$
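These lemmas are easy to sanity-check computationally. The following sketch (illustrative only; all identifiers are mine) computes the Möbius function of Definition 8 by direct recursion on the subset lattice, confirms the closed form of Lemma 4, and verifies the inversion formula of Lemma 3 for an arbitrary function $f$:

```python
from itertools import combinations

X = ("x", "y", "z")
subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def mobius(a, b):
    """The Mobius function of Definition 8, computed recursively."""
    if a == b:
        return 1
    if not a <= b:          # the partial order here is set inclusion
        return 0
    return -sum(mobius(a, c) for c in subsets if a <= c and c <= b and c != b)

# Lemma 4: on the subset lattice, m_L(A, B) = (-1)^{|B| - |A|} for A <= B.
for a in subsets:
    for b in subsets:
        expect = (-1) ** (len(b) - len(a)) if a <= b else 0
        assert mobius(a, b) == expect

# Lemma 3 (Mobius inversion): for any f, with F(a) = sum_{b >= a} f(b),
# we recover f(a) = sum_{b >= a} m_L(a, b) F(b).
f = {s: 7 * len(s) + ("x" in s) + 2 * ("y" in s) for s in subsets}
F = {a: sum(f[b] for b in subsets if a <= b) for a in subsets}
for a in subsets:
    assert f[a] == sum(mobius(a, b) * F[b] for b in subsets if a <= b)
```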
### 5.2 Proof of Proposition 1
###### Proof.
Let $L=2^{X_{1}}\times 2^{X_{2}}$ and $(A_{1},A_{2})\leq(B_{1},B_{2})\iff
A_{1}\subseteq B_{1},\ A_{2}\subseteq B_{2}$. Then $(L,\leq)$ is the (finite)
product poset of $(2^{X_{1}},\subseteq)$ and $(2^{X_{2}},\subseteq)$. By
Lemmas 4 and 5, its Möbius function is
$\displaystyle
m_{L}\big{(}(A_{1},A_{2}),(B_{1},B_{2})\big{)}=\begin{cases}(-1)^{|B_{1}|-|A_{1}|+|B_{2}|-|A_{2}|}&A_{1}\subseteq
B_{1},\ A_{2}\subseteq B_{2}\\\ 0&\text{else}\end{cases}$
Now, for each $t=1,2$, fix any $x_{t}\in A_{t}^{C}\neq\emptyset$ and define
$f:L\rightarrow\mathbb{R}$ as
$\displaystyle
f(B_{1},B_{2}):=(-1)^{|B_{1}|-|A_{1}^{C}|+|B_{2}|-|A_{2}^{C}|}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})\rho_{1}(x_{1},B_{1})$
and $F:L\rightarrow\mathbb{R}$ as
$\displaystyle F(D_{1},D_{2})=\sum_{B_{2}\supseteq D_{2}}\sum_{B_{1}\supseteq
D_{1}}f(B_{1},B_{2})$
By Lemma 3,
$\displaystyle f(D_{1},D_{2})=\sum_{B_{2}\supseteq D_{2}}\sum_{B_{1}\supseteq
D_{1}}(-1)^{|B_{1}|-|D_{1}|+|B_{2}|-|D_{2}|}F(B_{1},B_{2})$
$\displaystyle\implies
f(A_{1}^{C},A_{2}^{C})=\rho_{2}(x_{2},A_{2}^{C}|A_{1}^{C},x_{1})\rho_{1}(x_{1},A_{1}^{C})=\sum_{B_{2}\supseteq
A_{2}^{C}}\sum_{B_{1}\supseteq
A_{1}^{C}}(-1)^{|B_{1}|-|A_{1}^{C}|+|B_{2}|-|A_{2}^{C}|}F(B_{1},B_{2})$
To see that
$\displaystyle\sum_{B_{2}\supseteq A_{2}^{C}}\sum_{B_{1}\supseteq
A_{1}^{C}}(-1)^{|B_{1}|-|A_{1}^{C}|+|B_{2}|-|A_{2}^{C}|}F(B_{1},B_{2})=\sum_{D_{2}\subseteq
A_{2}}\sum_{D_{1}\subseteq A_{1}}M_{x_{1},D_{1};x_{2},D_{2}}$
we can match terms as follows. Fix any $D_{2}\subseteq A_{2}$ and
$D_{1}\subseteq A_{1}$. Then
$\displaystyle(-1)^{|D_{1}^{C}|-|A_{1}^{C}|+|D_{2}^{C}|-|A_{2}^{C}|}F(D_{1}^{C},D_{2}^{C})=\sum_{B_{2}\supseteq
D_{2}^{C}}\sum_{B_{1}\supseteq
D_{1}^{C}}(-1)^{|D_{1}^{C}|-|A_{1}^{C}|+|D_{2}^{C}|-|A_{2}^{C}|}f(B_{1},B_{2})$
$\displaystyle=\sum_{B_{2}\supseteq D_{2}^{C}}\sum_{B_{1}\supseteq
D_{1}^{C}}(-1)^{|B_{1}|-|D_{1}^{C}|+|B_{2}|-|D_{2}^{C}|}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})\rho_{1}(x_{1},B_{1})=M_{x_{1},D_{1};x_{2},D_{2}}$
where the second equality follows by observing that $(-1)^{n}=(-1)^{-n}$. ∎
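As a sanity check on Proposition 1 (illustrative only: the generating model, alternatives, and identifiers below are mine), the following sketch builds $\boldsymbol{\rho}^{2}$ from a non-uniform SU model and verifies the identity exhaustively on a small domain:

```python
from itertools import permutations, combinations
from fractions import Fraction

X1, X2 = ("a", "b", "c"), ("p", "q")
P1, P2 = list(permutations(X1)), list(permutations(X2))
raw = {(p1, p2): 1 + i + 3 * j for i, p1 in enumerate(P1)
       for j, p2 in enumerate(P2)}
Z = sum(raw.values())
mu = {k: Fraction(v, Z) for k, v in raw.items()}  # non-uniform SU model

def top(p, A):
    return next(x for x in p if x in A)

def powerset(X):
    return [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def rho1(x1, A1):
    return sum(w for (p1, _), w in mu.items() if top(p1, A1) == x1)

def rho2(x2, A2, A1, x1):
    joint = sum(w for (p1, p2), w in mu.items()
                if top(p1, A1) == x1 and top(p2, A2) == x2)
    return joint / rho1(x1, A1)

def M(x1, A1, x2, A2):
    """Joint Block-Marschak sum (Definition 5)."""
    A1c, A2c = set(X1) - A1, set(X2) - A2
    return sum((-1) ** (len(B1 - A1c) + len(B2 - A2c))
               * rho2(x2, B2, B1, x1) * rho1(x1, B1)
               for B1 in powerset(X1) if B1 >= A1c
               for B2 in powerset(X2) if B2 >= A2c)

# Proposition 1: rho2(x2, A2^C | A1^C, x1) rho1(x1, A1^C)
#              = sum over B1 subset A1, B2 subset A2 of M(x1,B1;x2,B2).
for A1 in powerset(X1):
    for A2 in powerset(X2):
        A1c, A2c = set(X1) - A1, set(X2) - A2
        for x1 in A1c:
            for x2 in A2c:
                lhs = rho2(x2, A2c, A1c, x1) * rho1(x1, A1c)
                rhs = sum(M(x1, B1, x2, B2)
                          for B1 in powerset(X1) if B1 <= A1
                          for B2 in powerset(X2) if B2 <= A2)
                assert lhs == rhs
```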
### 5.3 Proof of Proposition 2
###### Proof.
Fix any $x_{1}\in A_{1}^{C}\neq\emptyset$ and $\emptyset\subsetneq
A_{2}\subsetneq X_{2}$. I will use the notation
$\rho_{2}(D_{2},B_{2}|B_{1},x_{1}):=\sum_{x_{2}\in
D_{2}}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})$. We can write
$\displaystyle\sum_{x_{2}\in
A_{2}^{C}}M_{x_{1},A_{1};x_{2},A_{2}}=\sum_{x_{2}\in
A_{2}^{C}}\bigg{(}\sum_{B_{1}\supseteq A_{1}^{C}}\sum_{B_{2}\supseteq
A_{2}^{C}}(-1)^{|B_{2}|-|A_{2}^{C}|+|B_{1}|-|A_{1}^{C}|}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})\rho_{1}(x_{1},B_{1})\bigg{)}$
$\displaystyle=\sum_{B_{1}\supseteq
A_{1}^{C}}\rho_{1}(x_{1},B_{1})(-1)^{|B_{1}|-|A_{1}^{C}|}\bigg{(}\sum_{B_{2}\supseteq
A_{2}^{C}}(-1)^{|B_{2}|-|A_{2}^{C}|}\rho_{2}(A_{2}^{C},B_{2}|B_{1},x_{1})\bigg{)}$
and
$\displaystyle\sum_{y_{2}\in
A_{2}}M_{x_{1},A_{1};y_{2},A_{2}\backslash\\{y_{2}\\}}=\sum_{B_{1}\supseteq
A_{1}^{C}}\rho_{1}(x_{1},B_{1})(-1)^{|B_{1}|-|A_{1}^{C}|}\bigg{(}\sum_{y_{2}\in
A_{2}}\sum_{B_{2}\supseteq
A_{2}^{C}\cup\\{y_{2}\\}}(-1)^{|B_{2}|-|A_{2}^{C}|-1}\rho_{2}(y_{2},B_{2}|B_{1},x_{1})\bigg{)}$
Thus, to complete the proof it suffices to show
$\displaystyle\sum_{B_{2}\supseteq
A_{2}^{C}}(-1)^{|B_{2}|-|A_{2}^{C}|}\rho_{2}(A_{2}^{C},B_{2}|B_{1},x_{1})=\sum_{y_{2}\in
A_{2}}\sum_{B_{2}\supseteq
A_{2}^{C}\cup\\{y_{2}\\}}(-1)^{|B_{2}|-|A_{2}^{C}|-1}\rho_{2}(y_{2},B_{2}|B_{1},x_{1})$
To see this, observe that
$\displaystyle\sum_{B_{2}\supseteq
A_{2}^{C}}(-1)^{|B_{2}|-|A_{2}^{C}|}\rho_{2}(A_{2}^{C},B_{2}|B_{1},x_{1})$
$\displaystyle=\rho_{2}(A_{2}^{C},A_{2}^{C}|B_{1},x_{1})-\sum_{B_{2}=A_{2}^{C}\cup\\{a_{2}\\}}\rho_{2}(A_{2}^{C},A_{2}^{C}\cup\\{a_{2}\\}|B_{1},x_{1})+\ldots+(-1)^{|A_{2}|}\rho_{2}(A_{2}^{C},X_{2}|B_{1},x_{1})$
$\displaystyle=1-\sum_{B_{2}=A_{2}^{C}\cup\\{a_{2}\\}}\big{(}1-\rho_{2}(a_{2},A_{2}^{C}\cup\\{a_{2}\\}|B_{1},x_{1})\big{)}+\ldots+(-1)^{|A_{2}|}\big{(}1-\rho_{2}(A_{2},X_{2}|B_{1},x_{1})\big{)}$
Since there are $\binom{|A_{2}|}{k}$ sets of the form
$B_{2}=A_{2}^{C}\cup\\{a_{2}^{1},\ldots,a_{2}^{k}\\}$ and for $|A_{2}|\geq 1$,
$\displaystyle\sum_{k=0}^{|A_{2}|}(-1)^{k}\binom{|A_{2}|}{k}=0$
we can separate out an alternating sum of binomial coefficients:
$\displaystyle=\sum_{B_{2}=A_{2}^{C}\cup\\{a_{2}\\}}\rho_{2}(a_{2},A_{2}^{C}\cup\\{a_{2}\\}|B_{1},x_{1})-\sum_{B_{2}=A_{2}^{C}\cup\\{a_{2}^{1},a_{2}^{2}\\}}\rho_{2}(\\{a_{2}^{1},a_{2}^{2}\\},A_{2}^{C}\cup\\{a_{2}^{1},a_{2}^{2}\\}|B_{1},x_{1})$
$\displaystyle+\ldots+(-1)^{|A_{2}|+1}\rho_{2}(A_{2},X_{2}|B_{1},x_{1})$
Observe that there is a bijection between nonempty $D_{2}\subseteq A_{2}$ and
terms in this sum:
$\displaystyle
D_{2}\xleftrightarrow{}(-1)^{|D_{2}|+1}\rho_{2}(D_{2},A_{2}^{C}\cup
D_{2}|B_{1},x_{1})$
Similarly, there is a bijection between nonempty $D_{2}\subseteq A_{2}$ and
terms in the following sum
$\displaystyle\sum_{y_{2}\in A_{2}}\sum_{B_{2}\supseteq
A_{2}^{C}\cup\\{y_{2}\\}}(-1)^{|B_{2}|-|A_{2}^{C}|-1}\rho_{2}(y_{2},B_{2}|B_{1},x_{1})$
given by
$\displaystyle D_{2}\xleftrightarrow{}\sum_{y_{2}\in
D_{2}}(-1)^{|A_{2}^{C}\cup D_{2}|-|A_{2}^{C}|-1}\rho_{2}(y_{2},A_{2}^{C}\cup
D_{2}|B_{1},x_{1})$
$\displaystyle=(-1)^{|D_{2}|+1}\rho_{2}(D_{2},A_{2}^{C}\cup
D_{2}|B_{1},x_{1})$
Since both sums consist of precisely the same terms, they are equal. ∎
### 5.4 Proof of Proposition 3
###### Proof.
Forwards direction: suppose $\mu$ is a SU representation of
$\boldsymbol{\rho}^{2}$. For each $t=1,2$, fix any $x_{t}\in
A_{t}^{C}\neq\emptyset$. Since $x_{t}\succ_{t}A_{t}^{C}\backslash\\{x_{t}\\}$
if and only if $B_{t}^{C}\succ_{t}x_{t}\succ_{t}B_{t}\backslash\\{x_{t}\\}$
for some $B_{t}\supseteq A_{t}^{C}$,
$\displaystyle C(x_{1},A_{1}^{C})\cap
C(x_{2},A_{2}^{C})=\bigcup_{B_{2}\supseteq A_{2}^{C}}\bigcup_{B_{1}\supseteq
A_{1}^{C}}E(x_{1},B_{1}^{C};x_{2},B_{2}^{C})$
Furthermore, this union is disjoint, so
$\displaystyle\rho_{2}(x_{2},A_{2}^{C}|A_{1}^{C},x_{1})\rho_{1}(x_{1},A_{1}^{C})=\sum_{B_{2}\supseteq
A_{2}^{C}}\sum_{B_{1}\supseteq
A_{1}^{C}}\mu\big{(}E(x_{1},B_{1}^{C};x_{2},B_{2}^{C})\big{)}$
By Lemmas 4 and 5,
$m\big{(}(A_{1},A_{2}),(B_{1},B_{2})\big{)}=(-1)^{|B_{1}|-|A_{1}|+|B_{2}|-|A_{2}|}$.
By Lemma 3 with
$f(B_{1},B_{2})=\mu\big{(}E(x_{1},B_{1}^{C};x_{2},B_{2}^{C})\big{)}$ and
$F(A_{1},A_{2})=\rho_{2}(x_{2},A_{2}|A_{1},x_{1})\rho_{1}(x_{1},A_{1})$, we
get
$\displaystyle\mu\big{(}E(x_{1},A_{1};x_{2},A_{2})\big{)}=\sum_{B_{2}\supseteq
A_{2}^{C}}\sum_{B_{1}\supseteq
A_{1}^{C}}(-1)^{|B_{2}|-|A_{2}^{C}|+|B_{1}|-|A_{1}^{C}|}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})\rho_{1}(x_{1},B_{1})=M_{x_{1},A_{1};x_{2},A_{2}}$
as desired. (The proof strategy for this direction is adapted from Strzalecki
(2021).)
Backwards direction: suppose there exists $\mu\in\Delta(P_{1}\times P_{2})$
satisfying
$\mu\big{(}E(x_{1},A_{1};x_{2},A_{2})\big{)}=M_{x_{1},A_{1};x_{2},A_{2}}$ for
all $t=1,2$ and $x_{t}\in A_{t}^{C}\neq\emptyset$, and fix any $y_{t}\in
D_{t}\in\mathcal{M}_{t}$. As before, observe that
$y_{t}\succ_{t}D_{t}\backslash\\{y_{t}\\}$ if and only if
$B_{t}\succ_{t}y_{t}\succ_{t}B_{t}^{C}\backslash\\{y_{t}\\}$ for some
$B_{t}\subseteq D_{t}^{C}$, so
$\displaystyle C(y_{1},D_{1})\cap C(y_{2},D_{2})=\bigcup_{B_{2}\subseteq
D_{2}^{C}}\bigcup_{B_{1}\subseteq D_{1}^{C}}E(y_{1},B_{1};y_{2},B_{2})$
Furthermore, this union is disjoint, so
$\displaystyle\mu\big{(}C(y_{1},D_{1})\cap
C(y_{2},D_{2})\big{)}=\sum_{B_{2}\subseteq D_{2}^{C}}\sum_{B_{1}\subseteq
D_{1}^{C}}M_{y_{1},B_{1};y_{2},B_{2}}=\rho_{2}(y_{2},D_{2}|D_{1},y_{1})\rho_{1}(y_{1},D_{1})$
where the first equality follows from the hypothesis (which applies because
$D_{t}\neq\emptyset\implies B_{t}\neq X_{t}$), and the second equality follows
from Proposition 1. Since
$\displaystyle\mu\big{(}C(y_{1},D_{1})\big{)}=\sum_{y_{2}\in
D_{2}}\mu\big{(}C(y_{1},D_{1})\cap
C(y_{2},D_{2})\big{)}=\rho_{1}(y_{1},D_{1})$
and
$\displaystyle\mu\big{(}C(y_{2},D_{2})|C(y_{1},D_{1})\big{)}=\frac{\mu(C(y_{1},D_{1})\cap
C(y_{2},D_{2}))}{\mu(C(y_{1},D_{1}))}=\rho_{2}(y_{2},D_{2}|D_{1},y_{1})$
we conclude that $\mu$ is a SU representation of $\boldsymbol{\rho}^{2}$.
∎
### 5.5 Proof of Theorem 1 (Backwards Direction)
###### Proof.
Suppose $\boldsymbol{\rho}^{2}$ satisfies stochastic Block-Marschak
nonnegativity and marginal consistency. As outlined in Section 3, the proof
rests on the following series of claims, whose proofs are included in the
forthcoming subsections.
###### Claim 1.
For any $x_{2}\in A_{2}^{C}\neq\emptyset$ and $\emptyset\subsetneq
A_{1}\subsetneq X_{1}$,
$\displaystyle\sum_{x_{1}\in
A_{1}^{C}}M_{x_{1},A_{1};x_{2},A_{2}}=\sum_{y_{1}\in
A_{1}}M_{y_{1},A_{1}\backslash\\{y_{1}\\};x_{2},A_{2}}$
Claim 1 is the first-period analog of Proposition 2 and follows from a similar
argument by using marginal consistency. Now, I define the $t$-cylinders.
###### Definition 9.
Given an ordered, distinct $\boldsymbol{k}$-sequence
$(x_{t}^{1},\ldots,x_{t}^{k})$, its $\boldsymbol{t}$-cylinder (the two-period
analog of Chambers and Echenique (2016)’s definition of cylinders) is
$\displaystyle I_{(x_{t}^{1},\ldots,x_{t}^{k})}=\big{\\{}\succ_{t}\ \in
P_{t}:x_{t}^{1}\succ_{t}\cdots\succ_{t}x_{t}^{k}\succ_{t}X_{t}\backslash\\{x_{t}^{1},\ldots,x_{t}^{k}\\}\big{\\}}$
Given a menu $A_{t}$, let $\pi(A_{t})$ denote the set of permutations of
$A_{t}$. Let
$\mathcal{I}_{t}(k)=\\{I_{(x_{t}^{1},\ldots,x_{t}^{k})}:(x_{t}^{1},\ldots,x_{t}^{k})\in\pi(A_{t}),A_{t}\in\mathcal{M}_{t},|A_{t}|=k\\}$
be the set of all $t$-cylinders induced by $k$-sequences, and let
$\mathcal{I}_{t}=\bigcup_{k=1}^{|X_{t}|}\mathcal{I}_{t}(k)$. Observe that
$\mathcal{I}_{t}$ contains all singletons, since
$\displaystyle I_{(x_{t}^{1},\ldots,x_{t}^{|X_{t}|})}=\\{\succ_{t}\\}\iff
x_{t}^{1}\succ_{t}\cdots\succ_{t}x_{t}^{|X_{t}|}$
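For intuition, a preference lies in $I_{(x_{t}^{1},\ldots,x_{t}^{k})}$ exactly when its top $k$ alternatives are $x_{t}^{1},\ldots,x_{t}^{k}$ in that order, so for each fixed $k$ the $k$-sequence cylinders partition $P_{t}$. This can be checked directly (illustrative sketch; identifiers mine):

```python
from itertools import permutations

X = ("a", "b", "c")
P = list(permutations(X))   # each strict preference listed best-first

def cylinder(seq):
    """I_seq: preferences ranking seq[0] > ... > seq[-1] > everything else."""
    k = len(seq)
    return {p for p in P if p[:k] == tuple(seq)}

# For each k, the k-sequence cylinders partition P.
for k in range(1, len(X) + 1):
    cyls = [cylinder(s) for s in permutations(X, k)]
    assert sum(len(c) for c in cyls) == len(P)
    assert set().union(*cyls) == set(P)

# Full-length sequences give singletons, so the cylinders include all
# singletons, matching the observation above.
assert all(len(cylinder(s)) == 1 for s in permutations(X, len(X)))
```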
Next, I recursively define a function
$\nu:\mathcal{I}_{1}\times\mathcal{I}_{2}\rightarrow\mathbb{R}_{\geq 0}$.
(Again, my definition of $\nu$ is the two-period analog of Chambers and
Echenique (2016), (7.4).) Define
$\displaystyle\nu\big{(}I_{x_{1}}\times
I_{x_{2}}\big{)}:=M_{x_{1},\emptyset;x_{2},\emptyset}=\rho_{2}(x_{2},X_{2}|X_{1},x_{1})\rho_{1}(x_{1},X_{1})$
Now, let (distinct) $i,j\in\\{1,2\\}$. For any $1\leq k<|X_{j}|$ and
$(x_{j}^{1},\ldots,x_{j}^{k},x_{j}^{k+1})$, let
$A_{j}=\\{x_{j}^{1},\ldots,x_{j}^{k}\\}$ and define
$\displaystyle\nu\big{(}I_{x_{i}}\times
I_{(x_{j}^{1},\ldots,x_{j}^{k},x_{j}^{k+1})}\big{)}:=\begin{cases}0&\sum_{\tau_{j}\in\pi(A_{j})}\nu(I_{x_{i}}\times
I_{\tau_{j}})=0\\\ \frac{\nu(I_{x_{i}}\times
I_{(x_{j}^{1},\ldots,x_{j}^{k})})M_{x_{i},\emptyset;x_{j}^{k+1},A_{j}}}{\sum_{\tau_{j}\in\pi(A_{j})}\nu(I_{x_{i}}\times
I_{\tau_{j}})}&\text{else}\end{cases}$
Similarly, for any $1\leq k<|X_{1}|$, $1\leq\ell<|X_{2}|$ and
$(x_{1}^{1},\ldots,x_{1}^{k},x_{1}^{k+1})$,
$(x_{2}^{1},\ldots,x_{2}^{\ell},x_{2}^{\ell+1})$, let
$A_{1}=\\{x_{1}^{1},\ldots,x_{1}^{k}\\}$,
$A_{2}=\\{x_{2}^{1},\ldots,x_{2}^{\ell}\\}$ and define
$\displaystyle\nu\big{(}I_{(x_{1}^{1},\ldots,x_{1}^{k},x_{1}^{k+1})}\times
I_{(x_{2}^{1},\ldots,x_{2}^{\ell},x_{2}^{\ell+1})}\big{)}:=\begin{cases}0&\sum_{\tau_{1}\in\pi(A_{1})}\sum_{\tau_{2}\in\pi(A_{2})}\nu(I_{\tau_{1}}\times
I_{\tau_{2}})=0\\\ \frac{\nu(I_{(x_{1}^{1},\ldots,x_{1}^{k})}\times
I_{(x_{2}^{1},\ldots,x_{2}^{\ell})})M_{x_{1}^{k+1},A_{1};x_{2}^{\ell+1},A_{2}}}{\sum_{\tau_{1}\in\pi(A_{1})}\sum_{\tau_{2}\in\pi(A_{2})}\nu(I_{\tau_{1}}\times
I_{\tau_{2}})}&\text{else}\end{cases}$
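The recursion defining $\nu$ can be traced end-to-end in the smallest nontrivial case. The sketch below is illustrative only: it assumes $|X_{1}|=|X_{2}|=2$ and data generated from a uniform SU model, and all identifiers are mine. It implements the base case, both mixed cases, and the joint case, extends $\nu$ to $\mu$ on full-length sequences, and confirms that the result is a probability measure recovering the generating model:

```python
from itertools import permutations, combinations
from fractions import Fraction

X1, X2 = ("a", "b"), ("c", "d")
P1, P2 = list(permutations(X1)), list(permutations(X2))
mu0 = {(p1, p2): Fraction(1, 4) for p1 in P1 for p2 in P2}  # generating model

def top(p, A):
    return next(x for x in p if x in A)

def powerset(X):
    return [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def rho1(x1, A1):
    return sum(w for (p1, _), w in mu0.items() if top(p1, A1) == x1)

def rho2(x2, A2, A1, x1):
    joint = sum(w for (p1, p2), w in mu0.items()
                if top(p1, A1) == x1 and top(p2, A2) == x2)
    return joint / rho1(x1, A1)

def M(x1, A1, x2, A2):
    """Joint Block-Marschak sum (Definition 5)."""
    A1c, A2c = set(X1) - A1, set(X2) - A2
    return sum((-1) ** (len(B1 - A1c) + len(B2 - A2c))
               * rho2(x2, B2, B1, x1) * rho1(x1, B1)
               for B1 in powerset(X1) if B1 >= A1c
               for B2 in powerset(X2) if B2 >= A2c)

def nu(s1, s2):
    """Partial measure on cylinder pairs, following the recursion."""
    if len(s1) == 1 and len(s2) == 1:          # base case
        return M(s1[0], set(), s2[0], set())
    if len(s1) == 1:                           # mixed case, extend period 2
        A2 = s2[:-1]
        den = sum(nu(s1, t) for t in permutations(A2))
        return 0 if den == 0 else nu(s1, A2) * M(s1[0], set(), s2[-1], set(A2)) / den
    if len(s2) == 1:                           # mixed case, extend period 1
        A1 = s1[:-1]
        den = sum(nu(t, s2) for t in permutations(A1))
        return 0 if den == 0 else nu(A1, s2) * M(s1[-1], set(A1), s2[0], set()) / den
    A1, A2 = s1[:-1], s2[:-1]                  # joint case, extend both
    den = sum(nu(t1, t2) for t1 in permutations(A1) for t2 in permutations(A2))
    return 0 if den == 0 else nu(A1, A2) * M(s1[-1], set(A1), s2[-1], set(A2)) / den

# mu gives each preference pair the nu-value of its full-length cylinder pair.
mu = {(p1, p2): nu(p1, p2) for p1 in P1 for p2 in P2}
assert sum(mu.values()) == 1
assert mu == mu0   # here the construction recovers the generating model
```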
###### Definition 10.
For any $0\leq k<|X_{1}|$, $0\leq\ell<|X_{2}|$, the first additive property
$\boldsymbol{p_{1}(k,\ell)}$ holds if for all $A_{1},A_{2}$ s.t.
$|A_{1}|=k,|A_{2}|=\ell$ and all $x_{t}\in A_{t}^{C}$
$\displaystyle\sum_{\tau_{1}\in\pi(A_{1})}\sum_{\tau_{2}\in\pi(A_{2})}\nu\big{(}I_{\tau_{1},x_{1}}\times
I_{\tau_{2},x_{2}}\big{)}=M_{x_{1},A_{1};x_{2},A_{2}}$
For any $0<k\leq|X_{1}|$, $0<\ell\leq|X_{2}|$, the second additive property
$\boldsymbol{p_{2}(k,\ell)}$ holds if for all $A_{1},A_{2}$ s.t.
$|A_{1}|=k,|A_{2}|=\ell$ and all $\tau_{t}\in\pi(A_{t})$
$\displaystyle\sum_{x_{1}\in A_{1}^{C}}\sum_{x_{2}\in
A_{2}^{C}}\nu\big{(}I_{\tau_{1},x_{1}}\times
I_{\tau_{2},x_{2}}\big{)}=\nu\big{(}I_{\tau_{1}}\times I_{\tau_{2}}\big{)}$
Observe that
$\bigcup_{\tau_{1}\in\pi(A_{1})}\bigcup_{\tau_{2}\in\pi(A_{2})}(I_{\tau_{1},x_{1}}\times
I_{\tau_{2},x_{2}})=E(x_{1},A_{1};x_{2},A_{2})$ and $\bigcup_{x_{1}\in
A_{1}^{C}}\bigcup_{x_{2}\in A_{2}^{C}}(I_{\tau_{1},x_{1}}\times
I_{\tau_{2},x_{2}})=I_{\tau_{1}}\times I_{\tau_{2}}$, and these are disjoint
unions, so these additive properties are necessary. (These additive properties
are the two-period analogs of Chambers and Echenique (2016), (7.2) and (7.3),
respectively.)
###### Claim 2.
$p_{1}(k,\ell)$ holds for all $0\leq k<|X_{1}|$, $0\leq\ell<|X_{2}|$.
###### Claim 3.
$p_{2}(k,\ell)$ holds for all $0<k\leq|X_{1}|$, $0<\ell\leq|X_{2}|$.
With these additive properties in hand, I am ready to define the candidate SU
representation. Given $(\succ_{1},\succ_{2})\in P_{1}\times P_{2}$, denote
$x_{t}^{1}\succ_{t}\cdots\succ_{t}x_{t}^{|X_{t}|}$ and define
$\mu:2^{P_{1}\times P_{2}}\rightarrow\mathbb{R}$ (the two-period analog of
Chambers and Echenique (2016)’s definition of “$\nu$”) as
$\displaystyle\mu\big{(}\\{\succ_{1},\succ_{2}\\}\big{)}=\nu\big{(}I_{(x_{1}^{1},\ldots,x_{1}^{|X_{1}|})}\times
I_{(x_{2}^{1},\ldots,x_{2}^{|X_{2}|})}\big{)}$
$\displaystyle\mu\big{(}S\big{)}=\sum_{(\succ_{1},\succ_{2})\in
S}\mu\big{(}\\{\succ_{1},\succ_{2}\\}\big{)}$
###### Claim 4.
$\mu$ is a probability measure.
###### Claim 5.
$\mu=\nu$ on $\mathcal{I}_{1}\times\mathcal{I}_{2}$.
Claim 5 shows that $\mu$ is an extension of $\nu$. Thus, I can leverage Claim
2 as follows. Recall from Proposition 3 that to show that $\mu$ is a SU
representation of $\boldsymbol{\rho}^{2}$, it suffices to show
$\mu\big{(}E(x_{1},A_{1};x_{2},A_{2})\big{)}=M_{x_{1},A_{1};x_{2},A_{2}}$ for
each $t=1,2$ and $x_{t}\in A_{t}^{C}\neq\emptyset$. For each $t=1,2$, fix any
$x_{t}\in A_{t}^{C}\neq\emptyset$ (note this implies that $|A_{t}|<|X_{t}|$):
then
then
$\displaystyle\mu\big{(}E(x_{1},A_{1};x_{2},A_{2})\big{)}=\sum_{\tau_{1}\in\pi(A_{1})}\sum_{\tau_{2}\in\pi(A_{2})}\mu\big{(}I_{\tau_{1},x_{1}}\times
I_{\tau_{2},x_{2}}\big{)}$
$\displaystyle=\sum_{\tau_{1}\in\pi(A_{1})}\sum_{\tau_{2}\in\pi(A_{2})}\nu\big{(}I_{\tau_{1},x_{1}}\times
I_{\tau_{2},x_{2}}\big{)}=M_{x_{1},A_{1};x_{2},A_{2}}$
where the penultimate equality follows because $\mu=\nu$ on
$\mathcal{I}_{1}\times\mathcal{I}_{2}$, and the last equality follows from
Claim 2.
#### 5.5.1 Proof of Claim 1
###### Proof.
We can write
$\displaystyle\sum_{x_{1}\in
A_{1}^{C}}M_{x_{1},A_{1};x_{2},A_{2}}=\sum_{B_{2}\supseteq
A_{2}^{C}}(-1)^{|B_{2}|-|A_{2}^{C}|}\Bigg{(}\sum_{B_{1}\supseteq
A_{1}^{C}}(-1)^{|B_{1}|-|A_{1}^{C}|}\bigg{(}\sum_{x_{1}\in
A_{1}^{C}}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})\rho_{1}(x_{1},B_{1})\bigg{)}\Bigg{)}$
and
$\displaystyle\sum_{y_{1}\in
A_{1}}M_{y_{1},A_{1}\backslash\\{y_{1}\\};x_{2},A_{2}}=\sum_{B_{2}\supseteq
A_{2}^{C}}(-1)^{|B_{2}|-|A_{2}^{C}|}\Bigg{(}\sum_{y_{1}\in
A_{1}}\sum_{B_{1}\supseteq
A_{1}^{C}\cup\\{y_{1}\\}}(-1)^{|B_{1}|-|A_{1}^{C}|-1}\rho_{2}(x_{2},B_{2}|B_{1},y_{1})\rho_{1}(y_{1},B_{1})\Bigg{)}$
Since marginal consistency is satisfied, we can write
$\displaystyle\sum_{B_{1}\supseteq
A_{1}^{C}}(-1)^{|B_{1}|-|A_{1}^{C}|}\bigg{(}\sum_{x_{1}\in
A_{1}^{C}}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})\rho_{1}(x_{1},B_{1})\bigg{)}$
$\displaystyle=\sum_{B_{1}\supseteq
A_{1}^{C}}(-1)^{|B_{1}|-|A_{1}^{C}|}\bigg{(}P(x_{2},B_{2})-\sum_{x_{1}\in
B_{1}\backslash
A_{1}^{C}}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})\rho_{1}(x_{1},B_{1})\bigg{)}$
$\displaystyle=P(x_{2},B_{2})\sum_{B_{1}\supseteq
A_{1}^{C}}(-1)^{|B_{1}|-|A_{1}^{C}|}+\sum_{B_{1}\supsetneq
A_{1}^{C}}(-1)^{|B_{1}|-|A_{1}^{C}|+1}\sum_{x_{1}\in B_{1}\backslash
A_{1}^{C}}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})\rho_{1}(x_{1},B_{1})$
Again observing that
$\displaystyle\sum_{B_{1}\supseteq
A_{1}^{C}}(-1)^{|B_{1}|-|A_{1}^{C}|}=\sum_{k=0}^{|A_{1}|}(-1)^{k}\binom{|A_{1}|}{k}=0$
it thus suffices to show that
$\displaystyle\sum_{B_{1}\supsetneq
A_{1}^{C}}(-1)^{|B_{1}|-|A_{1}^{C}|+1}\sum_{x_{1}\in B_{1}\backslash
A_{1}^{C}}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})\rho_{1}(x_{1},B_{1})$
$\displaystyle=\sum_{y_{1}\in A_{1}}\sum_{B_{1}\supseteq
A_{1}^{C}\cup\\{y_{1}\\}}(-1)^{|B_{1}|-|A_{1}^{C}|-1}\rho_{2}(x_{2},B_{2}|B_{1},y_{1})\rho_{1}(y_{1},B_{1})$
which immediately follows from matching terms. ∎
Note that Proposition 2 and Claim 1 together imply that for any
$\emptyset\subsetneq A_{t}\subsetneq X_{t}$,
$\displaystyle\sum_{x_{1}\in A_{1}^{C}}\sum_{x_{2}\in
A_{2}^{C}}M_{x_{1},A_{1};x_{2},A_{2}}=\sum_{x_{1}\in A_{1}^{C}}\sum_{y_{2}\in
A_{2}}M_{x_{1},A_{1};y_{2},A_{2}\backslash\\{y_{2}\\}}=\sum_{y_{2}\in
A_{2}}\sum_{x_{1}\in
A_{1}^{C}}M_{x_{1},A_{1};y_{2},A_{2}\backslash\\{y_{2}\\}}$
$\displaystyle=\sum_{y_{2}\in A_{2}}\sum_{y_{1}\in
A_{1}}M_{y_{1},A_{1}\backslash\\{y_{1}\\};y_{2},A_{2}\backslash\\{y_{2}\\}}$
#### 5.5.2 Proof of Claim 2
###### Proof.
I will prove this via induction.
Base case: fix $A_{1}=A_{2}=\emptyset$ (the only menus satisfying
$|A_{1}|=|A_{2}|=0$; here $A_{t}^{C}=X_{t}$, so $x_{t}$ ranges over all of
$X_{t}$). Then, by definition,
$\displaystyle\sum_{\tau_{1}\in\pi(A_{1})}\sum_{\tau_{2}\in\pi(A_{2})}\nu\big{(}I_{\tau_{1},x_{1}}\times
I_{\tau_{2},x_{2}}\big{)}=\nu\big{(}I_{x_{1}}\times
I_{x_{2}}\big{)}=M_{x_{1},\emptyset;x_{2},\emptyset}$
First inductive step: Fix $A_{i}=\emptyset$, $x_{i}\in X_{i}$, and
$A_{j}=\\{x_{j}^{1},\ldots,x_{j}^{k}\\}$ for any $k>0$ and observe that
$\displaystyle\sum_{\tau_{j}\in\pi(A_{j})}\nu\big{(}I_{x_{i}}\times
I_{\tau_{j}}\big{)}=\sum_{y_{j}\in
A_{j}}\bigg{(}\sum_{(y_{j}^{1},\ldots,y_{j}^{k-1})\in\pi(A_{j}\backslash\\{y_{j}\\})}\nu\big{(}I_{x_{i}}\times
I_{(y_{j}^{1},\ldots,y_{j}^{k-1},y_{j})}\big{)}\bigg{)}$
$\displaystyle=\sum_{y_{j}\in
A_{j}}M_{x_{i},\emptyset;y_{j},A_{j}\backslash\\{y_{j}\\}}=\sum_{x_{j}\in
A_{j}^{C}}M_{x_{i},\emptyset;x_{j},A_{j}}$
where the first equality follows because permuting $A_{j}$ is equivalent to
picking the last element and permuting the remaining $k-1$ elements, the
second equality follows from the inductive hypothesis (which is
“$p_{1}(0,k-1)$ holds” if $i=1,j=2$ and “$p_{1}(k-1,0)$ holds” otherwise), and
the third equality follows from Proposition 2 and Claim 1. There are two
cases:
1. Suppose
$\displaystyle\sum_{\tau_{j}\in\pi(A_{j})}\nu\big{(}I_{x_{i}}\times
I_{\tau_{j}}\big{)}=0$
Since stochastic Block-Marschak nonnegativity holds by assumption,
$M_{x_{i},\emptyset;x_{j},A_{j}}=0$ for each $x_{j}\in A_{j}^{C}$. Fix any
$x_{j}\in A_{j}^{C}$: then we have
$\displaystyle\sum_{\tau_{i}\in\pi(A_{i})}\sum_{\tau_{j}\in\pi(A_{j})}\nu\big{(}I_{\tau_{i},x_{i}}\times
I_{\tau_{j},x_{j}}\big{)}=\sum_{\tau_{j}\in\pi(A_{j})}\nu\big{(}I_{x_{i}}\times
I_{\tau_{j},x_{j}}\big{)}=0=M_{x_{i},\emptyset;x_{j},A_{j}}$
where the penultimate equality follows by definition of $\nu$. Thus,
$p_{1}(0_{i},k_{j})$ holds.
2. Suppose
$\displaystyle\sum_{\tau_{j}\in\pi(A_{j})}\nu\big{(}I_{x_{i}}\times
I_{\tau_{j}}\big{)}>0$
Again by definition of $\nu$, we have
$\displaystyle\sum_{\tau_{i}\in\pi(A_{i})}\sum_{\tau_{j}\in\pi(A_{j})}\nu\big{(}I_{\tau_{i},x_{i}}\times
I_{\tau_{j},x_{j}}\big{)}=\sum_{\tau_{j}\in\pi(A_{j})}\nu\big{(}I_{x_{i}}\times
I_{\tau_{j},x_{j}}\big{)}$
$\displaystyle=\sum_{\tau_{j}\in\pi(A_{j})}\bigg{(}\frac{\nu(I_{x_{i}}\times
I_{\tau_{j}})}{\sum_{\alpha_{j}\in\pi(A_{j})}\nu(I_{x_{i}}\times
I_{\alpha_{j}})}M_{x_{i},\emptyset;x_{j},A_{j}}\bigg{)}=M_{x_{i},\emptyset;x_{j},A_{j}}$
Thus, $p_{1}(0_{i},k_{j})$ holds.
Second inductive step: This step proceeds similarly to the first inductive
step. For any $0<k<|X_{1}|$ and $0<\ell<|X_{2}|$, fix
$A_{1}=\\{x_{1}^{1},\ldots,x_{1}^{k}\\}$,
$A_{2}=\\{x_{2}^{1},\ldots,x_{2}^{\ell}\\}$, and $x_{t}\in A_{t}^{C}$. Our
inductive hypothesis is that $p_{1}(k-1,\ell-1)$ holds. Again, observe that
$\displaystyle\sum_{\tau_{1}\in\pi(A_{1})}\sum_{\tau_{2}\in\pi(A_{2})}\nu\big{(}I_{\tau_{1}}\times
I_{\tau_{2}}\big{)}$ $\displaystyle=\sum_{y_{1}\in A_{1}}\sum_{y_{2}\in
A_{2}}\bigg{(}\sum_{(y_{1}^{1},\ldots,y_{1}^{k-1})\in\pi(A_{1}\backslash\\{y_{1}\\})}\sum_{(y_{2}^{1},\ldots,y_{2}^{\ell-1})\in\pi(A_{2}\backslash\\{y_{2}\\})}\nu\big{(}I_{(y_{1}^{1},\ldots,y_{1}^{k-1},y_{1})}\times
I_{(y_{2}^{1},\ldots,y_{2}^{\ell-1},y_{2})}\big{)}\bigg{)}$
$\displaystyle=\sum_{y_{1}\in A_{1}}\sum_{y_{2}\in
A_{2}}M_{y_{1},A_{1}\backslash\\{y_{1}\\};y_{2},A_{2}\backslash\\{y_{2}\\}}=\sum_{x_{1}\in
A_{1}^{C}}\sum_{x_{2}\in A_{2}^{C}}M_{x_{1},A_{1};x_{2},A_{2}}$
where the penultimate equality follows from the inductive hypothesis, and the
last equality follows from Proposition 2 and Claim 1. Again, there are two
cases:
1. 1.
Suppose
$\displaystyle\sum_{\tau_{1}\in\pi(A_{1})}\sum_{\tau_{2}\in\pi(A_{2})}\nu\big{(}I_{\tau_{1}}\times
I_{\tau_{2}}\big{)}=0$
Then weak Block-Marschak positivity implies $M_{x_{1},A_{1};x_{2},A_{2}}=0$
for each $x_{t}\in A_{t}^{C}$. Fix any $x_{t}\in A_{t}^{C}$: by definition of
$\nu$, we have
$\displaystyle\sum_{\tau_{1}\in\pi(A_{1})}\sum_{\tau_{2}\in\pi(A_{2})}\nu\big{(}I_{\tau_{1},x_{1}}\times
I_{\tau_{2},x_{2}}\big{)}=0=M_{x_{1},A_{1};x_{2},A_{2}}$
and $p_{1}(k,\ell)$ holds.
2. 2.
Suppose
$\displaystyle\sum_{\tau_{1}\in\pi(A_{1})}\sum_{\tau_{2}\in\pi(A_{2})}\nu\big{(}I_{\tau_{1}}\times
I_{\tau_{2}}\big{)}>0$
Again by definition of $\nu$, it follows that
$\displaystyle\sum_{\tau_{1}\in\pi(A_{1})}\sum_{\tau_{2}\in\pi(A_{2})}\nu\big{(}I_{\tau_{1},x_{1}}\times
I_{\tau_{2},x_{2}}\big{)}=M_{x_{1},A_{1};x_{2},A_{2}}$
and $p_{1}(k,\ell)$ holds.
∎
#### 5.5.3 Proof of Claim 3
###### Proof.
Fix any $0<k\leq|X_{1}|$ and $0<\ell\leq|X_{2}|$,
$A_{1}=\\{x_{1}^{1},\ldots,x_{1}^{k}\\}$ and
$A_{2}=\\{x_{2}^{1},\ldots,x_{2}^{\ell}\\}$, and $\tau_{t}\in\pi(A_{t})$. As
before, there are two cases:
1. 1.
Suppose
$\displaystyle\sum_{\alpha_{1}\in\pi(A_{1})}\sum_{\alpha_{2}\in\pi(A_{2})}\nu\big{(}I_{\alpha_{1}}\times
I_{\alpha_{2}}\big{)}=0$
Since $\nu\geq 0$ by definition, in particular $\nu\big{(}I_{\tau_{1}}\times
I_{\tau_{2}}\big{)}=0$. Furthermore, by definition, for each $x_{t}\in
A_{t}^{C}$
$\displaystyle\nu\big{(}I_{\tau_{1},x_{1}}\times I_{\tau_{2},x_{2}}\big{)}=0$
$\displaystyle\implies\sum_{x_{1}\in A_{1}^{C}}\sum_{x_{2}\in
A_{2}^{C}}\nu\big{(}I_{\tau_{1},x_{1}}\times
I_{\tau_{2},x_{2}}\big{)}=0=\nu\big{(}I_{\tau_{1}}\times I_{\tau_{2}}\big{)}$
as desired.
2. 2.
Suppose
$\displaystyle\sum_{\alpha_{1}\in\pi(A_{1})}\sum_{\alpha_{2}\in\pi(A_{2})}\nu\big{(}I_{\alpha_{1}}\times
I_{\alpha_{2}}\big{)}>0$
Since $0<k\leq|X_{1}|$ and $0<\ell\leq|X_{2}|$, $0\leq k-1<|X_{1}|$ and
$0\leq\ell-1<|X_{2}|$, so by Claim 2 we can apply $p_{1}(k-1,\ell-1)$ as
before to write
$\displaystyle\sum_{\alpha_{1}\in\pi(A_{1})}\sum_{\alpha_{2}\in\pi(A_{2})}\nu\big{(}I_{\alpha_{1}}\times
I_{\alpha_{2}}\big{)}=\sum_{x_{1}\in A_{1}^{C}}\sum_{x_{2}\in
A_{2}^{C}}M_{x_{1},A_{1};x_{2},A_{2}}$
which implies
$\displaystyle\sum_{x_{1}\in A_{1}^{C}}\sum_{x_{2}\in
A_{2}^{C}}\nu\big{(}I_{\tau_{1},x_{1}}\times
I_{\tau_{2},x_{2}}\big{)}=\sum_{x_{1}\in A_{1}^{C}}\sum_{x_{2}\in
A_{2}^{C}}\bigg{(}\frac{\nu(I_{\tau_{1}}\times
I_{\tau_{2}})}{\sum_{\alpha_{1}\in\pi(A_{1})}\sum_{\alpha_{2}\in\pi(A_{2})}\nu(I_{\alpha_{1}}\times
I_{\alpha_{2}})}M_{x_{1},A_{1};x_{2},A_{2}}\bigg{)}$
$\displaystyle=\frac{\nu(I_{\tau_{1}}\times I_{\tau_{2}})}{\sum_{x_{1}\in
A_{1}^{C}}\sum_{x_{2}\in
A_{2}^{C}}M_{x_{1},A_{1};x_{2},A_{2}}}\bigg{(}\sum_{x_{1}\in
A_{1}^{C}}\sum_{x_{2}\in
A_{2}^{C}}M_{x_{1},A_{1};x_{2},A_{2}}\bigg{)}=\nu(I_{\tau_{1}}\times
I_{\tau_{2}})$
∎
#### 5.5.4 Proof of Claim 4
###### Proof.
I have already shown that $\mu\geq 0$. First, fix $x_{1}\in B_{1}$ and observe
that
$\displaystyle\sum_{x_{2}\in
X_{2}}\sum_{B_{2}\supseteq\\{x_{2}\\}}(-1)^{|B_{2}|-1}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})=\sum_{i=1}^{|X_{2}|}(-1)^{i-1}\binom{|X_{2}|}{i}=1$
To see this, fix $1\leq i\leq|X_{2}|$ and
$B_{2}=\\{x_{2}^{1},\ldots,x_{2}^{i}\\}$. Then the terms
$\sum_{k=1}^{i}(-1)^{i-1}\rho_{2}(x_{2}^{k},B_{2}|B_{1},x_{1})=(-1)^{i-1}$
appear exactly once in the sum above, and there are $\binom{|X_{2}|}{i}$ menus
of size $i$. Thus,
$\displaystyle\sum_{\succ_{1}\in P_{1}}\sum_{\succ_{2}\in
P_{2}}\mu(\succ_{1},\succ_{2})=\sum_{\tau_{1}\in\pi(X_{1})}\sum_{\tau_{2}\in\pi(X_{2})}\nu\big{(}I_{\tau_{1}}\times
I_{\tau_{2}}\big{)}=\sum_{x_{1}\in X_{1}}\sum_{x_{2}\in
X_{2}}M_{x_{1},X_{1}\backslash\\{x_{1}\\};x_{2},X_{2}\backslash\\{x_{2}\\}}$
$\displaystyle=\sum_{x_{1}\in X_{1}}\sum_{x_{2}\in
X_{2}}\sum_{B_{2}\supseteq\\{x_{2}\\}}\sum_{B_{1}\supseteq\\{x_{1}\\}}(-1)^{|B_{1}|-1+|B_{2}|-1}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})\rho_{1}(x_{1},B_{1})$
$\displaystyle=\sum_{x_{1}\in
X_{1}}\sum_{B_{1}\supseteq\\{x_{1}\\}}\bigg{(}\sum_{x_{2}\in
X_{2}}\sum_{B_{2}\supseteq\\{x_{2}\\}}(-1)^{|B_{2}|-1}\rho_{2}(x_{2},B_{2}|B_{1},x_{1})\bigg{)}(-1)^{|B_{1}|-1}\rho_{1}(x_{1},B_{1})$
$\displaystyle=\sum_{x_{1}\in
X_{1}}\sum_{B_{1}\supseteq\\{x_{1}\\}}(-1)^{|B_{1}|-1}\rho_{1}(x_{1},B_{1})=1$
where the last equality follows for the same reason as above. ∎
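The alternating binomial identity used twice in this proof, $\sum_{i=1}^{n}(-1)^{i-1}\binom{n}{i}=1$, can be checked numerically; the snippet below is a sanity check only, not part of the argument.

```python
from math import comb

def alternating_binomial_sum(n: int) -> int:
    # sum_{i=1}^{n} (-1)^(i-1) * C(n, i); the proof of Claim 4
    # asserts this equals 1 for every n >= 1.
    return sum((-1) ** (i - 1) * comb(n, i) for i in range(1, n + 1))

# By the binomial theorem, (1 - 1)^n = sum_{i=0}^{n} (-1)^i C(n, i) = 0,
# so the i >= 1 terms must sum to the negative of the i = 0 term, i.e. to 1.
for n in range(1, 12):
    assert alternating_binomial_sum(n) == 1
```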
#### 5.5.5 Proof of Claim 5
###### Proof.
I will use induction.
Base case: for any $|X_{1}|$-sequence $(x_{1}^{1},\ldots,x_{1}^{|X_{1}|})$ and
$|X_{2}|$-sequence $(x_{2}^{1},\ldots,x_{2}^{|X_{2}|})$, let $\succ_{t}$ be
the (unique) preference satisfying
$x_{t}^{1}\succ_{t}\cdots\succ_{t}x_{t}^{|X_{t}|}$. Then, by definition,
$\displaystyle\mu\big{(}I_{(x_{1}^{1},\ldots,x_{1}^{|X_{1}|})}\times
I_{(x_{2}^{1},\ldots,x_{2}^{|X_{2}|})}\big{)}=\mu\big{(}\\{\succ_{1},\succ_{2}\\}\big{)}=\nu\big{(}I_{(x_{1}^{1},\ldots,x_{1}^{|X_{1}|})}\times
I_{(x_{2}^{1},\ldots,x_{2}^{|X_{2}|})}\big{)}$
First inductive step: suppose
$\mu\big{(}I_{(x_{i}^{1},\ldots,x_{i}^{|X_{i}|})}\times
I_{(x_{j}^{1},\ldots,x_{j}^{k})}\big{)}=\nu\big{(}I_{(x_{i}^{1},\ldots,x_{i}^{|X_{i}|})}\times
I_{(x_{j}^{1},\ldots,x_{j}^{k})}\big{)}$ for all $|X_{i}|$-sequences
$(x_{i}^{1},\ldots,x_{i}^{|X_{i}|})$ and $k$-sequences
$(x_{j}^{1},\ldots,x_{j}^{k})$, where $k>1$. Then, for any $|X_{i}|$-sequence
$(x_{i}^{1},\ldots,x_{i}^{|X_{i}|})$ and $k-1$-sequence
$(x_{j}^{1},\ldots,x_{j}^{k-1})$,
$\displaystyle\mu\big{(}I_{(x_{i}^{1},\ldots,x_{i}^{|X_{i}|})}\times
I_{(x_{j}^{1},\ldots,x_{j}^{k-1})}\big{)}=\sum_{y_{j}\in\\{x_{j}^{1},\ldots,x_{j}^{k-1}\\}^{C}}\mu\big{(}I_{(x_{i}^{1},\ldots,x_{i}^{|X_{i}|})}\times
I_{(x_{j}^{1},\ldots,x_{j}^{k-1},y_{j})}\big{)}$
$\displaystyle=\sum_{y_{j}\in\\{x_{j}^{1},\ldots,x_{j}^{k-1}\\}^{C}}\nu\big{(}I_{(x_{i}^{1},\ldots,x_{i}^{|X_{i}|})}\times
I_{(x_{j}^{1},\ldots,x_{j}^{k-1},y_{j})}\big{)}=\nu\big{(}I_{(x_{i}^{1},\ldots,x_{i}^{|X_{i}|})}\times
I_{(x_{j}^{1},\ldots,x_{j}^{k-1})}\big{)}$
where the first equality follows from $\mu$ being a probability measure, the
second equality follows from the inductive hypothesis, and the third equality
follows because $p_{2}(|X_{1}|,k-1)$ and $p_{2}(k-1,|X_{2}|)$ hold, by Claim
3.
Second inductive step: suppose
$\mu\big{(}I_{(x_{1}^{1},\ldots,x_{1}^{k})}\times
I_{(x_{2}^{1},\ldots,x_{2}^{\ell})}\big{)}=\nu\big{(}I_{(x_{1}^{1},\ldots,x_{1}^{k})}\times
I_{(x_{2}^{1},\ldots,x_{2}^{\ell})}\big{)}$ for all $k$-sequences
$(x_{1}^{1},\ldots,x_{1}^{k})$ and $\ell$-sequences
$(x_{2}^{1},\ldots,x_{2}^{\ell})$, where $k,\ell>1$. Then for any
$k-1$-sequence $(x_{1}^{1},\ldots,x_{1}^{k-1})$ and $\ell-1$-sequence
$(x_{2}^{1},\ldots,x_{2}^{\ell-1})$,
$\displaystyle\mu\big{(}I_{(x_{1}^{1},\ldots,x_{1}^{k-1})}\times
I_{(x_{2}^{1},\ldots,x_{2}^{\ell-1})}\big{)}=\sum_{y_{1}\in\\{x_{1}^{1},\ldots,x_{1}^{k-1}\\}^{C}}\sum_{y_{2}\in\\{x_{2}^{1},\ldots,x_{2}^{\ell-1}\\}^{C}}\mu\big{(}I_{(x_{1}^{1},\ldots,x_{1}^{k-1},y_{1})}\times
I_{(x_{2}^{1},\ldots,x_{2}^{\ell-1},y_{2})}\big{)}$
$\displaystyle=\sum_{y_{1}\in\\{x_{1}^{1},\ldots,x_{1}^{k-1}\\}^{C}}\sum_{y_{2}\in\\{x_{2}^{1},\ldots,x_{2}^{\ell-1}\\}^{C}}\nu\big{(}I_{(x_{1}^{1},\ldots,x_{1}^{k-1},y_{1})}\times
I_{(x_{2}^{1},\ldots,x_{2}^{\ell-1},y_{2})}\big{)}=\nu\big{(}I_{(x_{1}^{1},\ldots,x_{1}^{k-1})}\times
I_{(x_{2}^{1},\ldots,x_{2}^{\ell-1})}\big{)}$
Since every pair of cylinders in $\mathcal{I}_{1}\times\mathcal{I}_{2}$ is
induced by a pair of $k,\ell$-sequences with $1\leq k\leq|X_{1}|$ and
$1\leq\ell\leq|X_{2}|$, I have shown that $\mu=\nu$ on
$\mathcal{I}_{1}\times\mathcal{I}_{2}$. ∎
Now that each Claim has been verified, the proof is complete. ∎
### 5.6 Proof of Proposition 4 (Backwards Direction)
###### Proof.
Suppose $|X_{t}|=3$ and denote $X_{1}=\\{a,b,c\\}$ and $X_{2}=\\{d,e,f\\}$.
Suppose $\boldsymbol{\rho}^{2}$ satisfies stochastic regularity and marginal
consistency. Then in particular, for any $x_{t}\in X_{t}$,
$A_{t}=\\{x_{t}\\}^{C}=\\{y_{t},z_{t}\\}$, and $B_{t}=X_{t}$,
$\displaystyle\frac{\rho_{1}(y_{1},A_{1})}{\rho_{1}(y_{1},B_{1})}\geq\frac{\rho_{2}(y_{2},A_{2}|B_{1},y_{1})-\rho_{2}(y_{2},B_{2}|B_{1},y_{1})}{\rho_{2}(y_{2},A_{2}|A_{1},y_{1})-\rho_{2}(y_{2},B_{2}|A_{1},y_{1})}$
$\displaystyle\iff\rho_{2}(y_{2},z_{2}|\\{y_{1},z_{1}\\},y_{1})\rho_{1}(y_{1},z_{1})-\rho_{2}(y_{2},X_{2}|\\{y_{1},z_{1}\\},y_{1})\rho_{1}(y_{1},z_{1})$
$\displaystyle-\bigg{(}\rho_{2}(y_{2},z_{2}|X_{1},y_{1})\rho_{1}(y_{1},X_{1})-\rho_{2}(y_{2},X_{2}|X_{1},y_{1})\rho_{1}(y_{1},X_{1})\bigg{)}\geq
0$ $\displaystyle\iff
M_{y_{1},\\{x_{1}\\};y_{2},\\{x_{2}\\}}=\sum_{B_{2}\supseteq\\{y_{2},z_{2}\\}}\sum_{B_{1}\supseteq\\{y_{1},z_{1}\\}}(-1)^{|B_{1}|+|B_{2}|-4}\rho_{2}(y_{2},B_{2}|B_{1},y_{1})\rho_{1}(y_{1},B_{1})\geq
0$
Since marginal consistency is satisfied, Proposition 2 and Claim 1 are
satisfied, so for any $i,j\in\\{1,2\\}$
$\displaystyle
M_{x_{i},A_{i};y_{j},\\{x_{j}\\}}+M_{x_{i},A_{i};z_{j},\\{x_{j}\\}}=M_{x_{i},A_{i};x_{j},\emptyset}$
$\displaystyle
M_{x_{i},A_{i};x_{j},\\{y_{j},z_{j}\\}}=M_{x_{i},A_{i};y_{j},\\{z_{j}\\}}+M_{x_{i},A_{i};z_{j},\\{y_{j}\\}}$
which implies that stochastic Block-Marschak nonnegativity is satisfied. Thus,
by Theorem 1, $\boldsymbol{\rho}^{2}$ has a SU representation. Of course, we
can also directly verify this by defining
$\displaystyle\mu(x_{1}y_{1}z_{1},x_{2}y_{2}z_{2}):=M_{y_{1},\\{x_{1}\\};y_{2},\\{x_{2}\\}}\geq
0$
It is then straightforward to verify
$\displaystyle\mu(x_{1}y_{1}z_{1}):=\sum_{\tau_{2}\in\pi(X_{2})}\mu(x_{1}y_{1}z_{1},\tau_{2})=\rho_{1}(y_{1},z_{1})-\rho_{1}(y_{1},X_{1})$
$\displaystyle\implies\sum_{\tau_{1}\in\pi(X_{1})}\sum_{\tau_{2}\in\pi(X_{2})}\mu(\tau_{1},\tau_{2})=1$
so $\mu$ is indeed a probability measure. Furthermore,
$\displaystyle E(x_{i},A_{i};y_{j},\\{x_{j}\\})\cup
E(x_{i},A_{i};z_{j},\\{x_{j}\\})=E(x_{i},A_{i};x_{j},\emptyset)$
$\displaystyle
E(x_{i},A_{i};x_{j},\\{y_{j},z_{j}\\})=E(x_{i},A_{i};y_{j},\\{z_{j}\\})\cup
E(x_{i},A_{i};z_{j},\\{y_{j}\\})$
and these unions are disjoint, so by Proposition 3 it follows that $\mu$ is a
SU representation. Finally, let $\mu^{\prime}$ be a SU representation. Then by
Proposition 3,
$\displaystyle\mu^{\prime}(x_{1}y_{1}z_{1},x_{2}y_{2}z_{2})=\mu^{\prime}\big{(}E(y_{1},\\{x_{1}\\};y_{2},\\{x_{2}\\})\big{)}=M_{y_{1},\\{x_{1}\\};y_{2},\\{x_{2}\\}}=\mu(x_{1}y_{1}z_{1},x_{2}y_{2}z_{2})$
so $\mu$ is unique.
∎
### 5.7 Proof of Corollary 1
###### Proof.
Define $\nu$ as in the proof of Theorem 1. Since this definition is recursive
and the base case is a joint Block-Marschak sum, it immediately follows that
$\nu$ is strictly positive on $\mathcal{I}_{1}\times\mathcal{I}_{2}$, so $\mu$
is strictly positive on $P_{1}\times P_{2}$. All other parts of the proof of
Theorem 1 still hold, so we conclude that $\mu$ is a SU representation of
$(\rho_{1},\\{\rho_{2}(\cdot|h)\\}_{h\in\mathcal{H}})$ with full support. ∎
# Actor-Critic Learning Based QoS-Aware Scheduler for Reconfigurable Wireless
Networks
Shahram Mollahasani School of Electrical Engineering and Computer Science
University of Ottawa
<EMAIL_ADDRESS>Melike Erol-Kantarci School of Electrical Engineering
and Computer Science
University of Ottawa
<EMAIL_ADDRESS>Mahdi Hirab VMware Inc.
<EMAIL_ADDRESS>Hoda Dehghan VMware Inc.
<EMAIL_ADDRESS>Rodney Wilson,
School of Electrical Engineering Ciena Corp.
<EMAIL_ADDRESS>
###### Abstract
The flexibility offered by reconfigurable wireless networks provides new
opportunities for applications such as online AR/VR gaming, high-quality video
streaming, and autonomous vehicles that demand high-bandwidth, reliable, and
low-latency communications. These applications come with very stringent
Quality of Service (QoS) requirements and increase the burden on mobile
networks. There is currently a severe spectrum scarcity due to the massive
growth in data traffic, and this problem can be addressed with the help of
Reconfigurable Wireless Networks (RWNs), in which nodes have reconfiguration
and perception capabilities. This, in turn, creates a need for AI-assisted
resource block allocation algorithms. To tackle this challenge, in this
paper, we propose an actor-critic learning-based scheduler for allocating
resource blocks in a RWN. Various traffic types with different QoS levels are
assigned to our agents to provide more realistic results. We also include
mobility in our simulations to increase the dynamicity of networks. The
proposed model is compared with another actor-critic model and with other
traditional schedulers: proportional fair (PF) and Channel and QoS Aware (CQA)
techniques. The proposed models are evaluated by considering the delay
experienced by user equipment (UEs), successful transmissions and head-of-the-
line delays. The results show that the proposed model noticeably outperforms
other techniques in different aspects.
###### Index Terms:
5G, AI-enabled networks, Reinforcement learning, Resource allocation
## I Introduction
The growing demand for social media platforms, Ultra-High-Definition (UHD)
video, Virtual Reality (VR) and Augmented Reality (AR) applications, the
ever-growing data traffic of the Internet-of-Things (IoT), and the rapid
advances in mobile devices and smartphones have led to exponential growth in
the traffic load over wireless networks [1]. Due to these new applications,
availability of various traffic types, and unpredictability of physical
channels, including fading, path loss, etc., maintaining quality of service
(QoS) has become more challenging than ever before. In recent years, with the
introduction of Open Radio Access Network (O-RAN), Artificial Intelligence (AI)
and Machine Learning (ML) have found applications in wireless networks, and
Reconfigurable Wireless Networks (RWNs) have emerged. In RWNs, local
networking nodes are controlled by groups of communicating nodes equipped with
reconfigurable software, hardware, or protocols. Software reconfiguration is
useful for updating, inclusion, and exclusion of tasks while hardware
reconfiguration enables manipulating the physical infrastructure. AI and ML
techniques can provide automation at a higher degree than before (further than
self-organized networks (SON) concept of 3GPP) and manage the growing
complexity of the RWNs [2].
Reinforcement learning (RL) is a machine learning technique that allows
optimal control of systems by directing the system to a desired state by
interacting with the environment and using feedback from the environment [3].
RL techniques are widely used in cellular networks with various use cases such
as video flow optimization [4], improving energy-efficiency in mobile networks
[5], and optimizing resource allocation [6]. A majority of RL-based methods
used in wireless networks focus on Q-learning or Deep Q-learning. Although
promising results have been obtained with these techniques, they offer only
single-level control. Actor-critic learning, another type of RL, can instead
be implemented in multiple hierarchies and offer control from multiple points
of view.
In this work, we propose an Actor-Critic Learning approach [7], to allocate
resource blocks in a way that the communication reliability is enhanced and
the required QoS by each UE is satisfied. In the proposed model, we formulate
the choice of the number of resource blocks (RBs) and the location of them in
the RBs’ map as a Markov Decision Process (MDP) [8], and we solve this problem
by using an actor-critic model. We consider channel quality, packets
priorities, and the delay budget of each traffic stream in our reward
function. We adopt two Advantage Actor-Critic (A2C) models [9]. The first
technique solely schedules packets by giving priority to their scheduling
delay budget (called as D-A2C) while second technique considers channel
quality, delay budget, and packet types (called as CDPA-A2C). We evaluate the
performance of the proposed models using NS3 [10] with fixed and mobile
scenarios. Our results show that, in the fixed scenario, the proposed model
can reduce the mean delay significantly with respect to proportional fair (PF)
[11], Channel and QoS Aware (CQA) [12] and D-A2C schedulers. Additionally,
CDPA-A2C can increase the packet delivery rate in the mobile scenario up to
92% and 53% in comparison with PF and CQA.
The main contributions of this paper are:
* •
Proposing an actor-critic learning technique that can be implemented on
disaggregated RAN functions and provide control in two levels.
* •
Proposing a comprehensive reward function which takes care of channel quality,
packet priorities, and the delay budget of each traffic stream.
* •
Proposing two A2C models, where the first schedules packets solely based on
their scheduling delay budget (referred to as D-A2C) and the second considers
channel quality, delay budget, and packet types (referred to as CDPA-A2C).
The rest of the paper is organized as follows. In Section II, we summarize
the related work. In Section III, the system model is described. In Section
IV, the proposed actor-critic resource block scheduler is explained. Numerical
results and evaluation of the proposed model are presented in Section V, and
finally, we conclude the paper in Section VI.
## II Related Work
Providing ubiquitous connectivity for various devices with different QoS
requirements is one of the most challenging issues for mobile network
operators [13]. This problem is amplified in future 5G applications with
strict QoS requirements [14]. Additionally, in order to be capable of handling
all the new immersive applications (which are known for their heterogeneous
QoS properties), advanced techniques are required to maintain quality of
experience (QoE) among the network’s entities. To this end, packet schedulers
need to allow sharing the network bandwidth dynamically among UEs in a way
that UEs achieve their target QoS. Many scheduling algorithms that
incorporate QoS in their models have been introduced previously. In [15], a scheduler
is proposed, which encapsulates different features of scheduling strategies
for the downlink of cellular networks to guarantee multi-dimensional QoS for
various radio channels and traffic types. However, most QoS-based schedulers
prioritize some traffic types while ignoring the rest. For instance, in [16],
a prioritized traffic scheduler named the frame level scheduler (FLS) is
introduced, which gives higher priority to real-time traffic over elastic
traffic (such as HTTP or file transfer). Additionally, in [17] the required
activity detection (RADS) scheduler is proposed, which prioritizes UEs based
on fairness and packet delay. However, most prioritizing schedulers cannot
react quickly to the dynamics of cellular networks; consequently, some traffic
classes may suffer QoS degradation while others are over-provisioned.
RL-based models are also applied in different ways in order to optimize
resource allocation in networks. In [18], an RL-based scheduler is presented
for resource allocation in a reliable vehicle-to-vehicle (V2V) network. The
presented RL scheduler interacts with the environment frequently to learn and
optimize resource allocation to the vehicles. In this work, it is assumed that
the whole network structure is connected to a centralized scheduler.
Additionally, in [19], resource allocation and computation offloading in
multi-channel multi-user mobile edge cloud (MEC) systems are evaluated. In
this work, the authors presented a deep reinforcement network to jointly
optimize the total delay and energy consumption of all UEs. Moreover, in [20],
an RL controller is implemented to schedule deadline-driven data transfers. In
this paper, it is assumed that the requests will be sent to a central network
controller, where flows can be scheduled with respect to their pacing rates.
In [21, 22], the authors introduce a traffic predictor for network
slices with a low complexity, based on a soft gated recurrent unit. They also
use the traffic predictor to feed several deep learning models, which are
trained offline to apply end-to-end reliable and dynamic resource allocation
under dataset-dependent generalized service level agreement (SLA) constraints.
The authors have considered resource bounds-based SLA and violation rate-based
SLA in order to estimate the required resources in the network.
Traditional radio resource management (RRM) algorithms are not able to handle
the stringent QoS requirements of users while adapting to fast varying
conditions of RWNs. Recently, machine learning algorithms have been employed
in schedulers to benefit from data in optimizing resource allocation, as
opposed to using models [23, 24, 25, 26]. An ML-based RRM algorithm is
proposed in [27] to estimate the required resources in the network for
handling on-demand traffic over HTTP. ML has been used in resource
allocation by considering various combinations of QoS objectives such as packet
loss [23], delay [24], and user fairness [25]. However, these models are
defined for improving delay or enhancing throughput for Ultra-Reliable and
Low-Latency Communications (URLLC) and throughput of enhanced Mobile Broadband
(eMBB) UEs, while traffic types are considered homogeneous [26]. Hence, they
ignored the effect of traffic types with various QoS requirements in their
model. In comparison with previous works, we propose actor-critic learning for
resource allocation which can be used at different levels of disaggregated
RANs. We use a reward function that addresses channel quality, packet
priorities, and the delay budget of each traffic stream. We evaluate our
proposed scheme under various traffic types, interference levels and mobility.
We provide detailed results on key performance indicators (KPIs) collected
from NS3 simulator and integrated Open-AI Gym.
Figure 1: A Distributed RL-based RB Scheduler.
## III System Model
We assume the overall downlink bandwidth is divided into the total number of
available RBs, and each RB contains 12 contiguous subcarriers. Moreover, a
resource block group (RBG) will be formed when consecutive RBs are grouped
together. In order to reduce the number of state’s in our reinforcement
learning approach, we consider RBG as a unit for resource allocation in the
frequency domain. The aim of the proposed actor-critic model is to assign RBGs
by considering traffic types, their QoS requirement, and their priorities
during each transmission time interval (TTI). Based on our system model, Base
Stations (BS) are actors and each actor schedules the packets located in the
transmission buffer of its associated UEs during each time interval such that
the amount of time the packets stay in UE buffers is reduced. The
scheduling decision will be made every TTI, by considering the number of
pending packets in the transmission buffers of active UEs. The overall delay
experienced by a packet can be broken down into three main factors as shown
below:
$Packet_{Latency}=T_{HOL}+T_{tx}+T_{HARQ}$ (1)
where $T_{HOL}$ is the time duration that a packet waits in the queues to get
a transmission opportunity (scheduling delay). HOL stands for head-of-the-line
delay. $T_{tx}$ is communication delay, and $T_{HARQ}$ is a round-trip time
which is required to retransmit a packet. $T_{tx}$ and $T_{HARQ}$ are
basically based on the environment (path loss, shadowing, fading, etc.), UEs
locations (propagation distance) and channel condition (noise, interference,
etc.). In order to satisfy the packets with low latency requirements (e.g.,
URLLC UEs), the scheduler needs to handle those packets in UE buffers as soon
as they arrive, thus minimizing HOL. We also need to limit the number of HARQ
retransmissions to achieve lower delay during communication. However, limiting retransmissions
can increase the packet drop rate and reduce the reliability in the network
[28]. Low reliability particularly affects UEs located at the cell edge. The
proposed RL-based scheduler aims to address this trade-off by enhancing
reliability and meeting the required latency budget.
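As a toy illustration of the latency decomposition in eq. (1) (the numeric values below are hypothetical placeholders, not measurements from our simulations):

```python
def packet_latency(t_hol: float, t_tx: float, t_harq: float) -> float:
    # Eq. (1): total latency = scheduling (HOL) delay + transmission
    # delay + HARQ retransmission round-trip time.
    return t_hol + t_tx + t_harq

# A hypothetical URLLC packet (times in ms): queueing dominates, so the
# scheduler's main lever for meeting a tight budget is minimizing T_HOL.
total = packet_latency(t_hol=0.7, t_tx=0.2, t_harq=0.0)
```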
## IV Actor-Critic Scheduling Framework
It is well-known that due to the scheduler’s multi-dimensional and continuous
state space, we cannot enumerate the scheduling problem exhaustively [29]. We
can tackle this issue by employing RL and learning the proper scheduling rule.
In actor-critic learning, the policy model and the value function are the two
main components of the policy gradient. In order to reduce the gradient
variance, the value function is learned and used to update and assist the
policy during each time interval; this is known as the Actor-Critic model.
At the exploitation step, in order to make decisions, the learnt actor
function is used. In our model, we aim to prioritize certain traffic types and
reduce their HOL during scheduling decisions. The obtained $M$-dimensional
decision matrix is employed to schedule and prioritize the available traffic
classes at each TTI. To do so, a neural network (NN) framework is employed to
tackle the complexity and obtain an approximation for achieving the best
possible prioritizing decision during each time interval. At the learning
stage, the weights of the neural network are updated every TTI by considering
the interactions occurring between the actor and the critic. During the
exploitation stage, the updated weights are saved, and the NN serves as a
non-linear function approximator.
An RL agent basically tries to achieve higher value from each state $s_{t}$
and it can be obtained through the state-value ($V(s)$) and action-value
functions $Q(s,a)$. Using the action-value function, we can estimate the
output of action $a$ in state $s$ during time interval $t$, and the average
expected output of state $s$ can be obtained by using the state-value
function. In this work, instead of approximating both of action-value and
state-value functions, we estimate only $V(s)$ by employing the Advantage
Actor-Critic (A2C) model, which simplifies the learning process and reduces
the number of required parameters. More specifically, the advantage refers to
a value which measures how much better the performed action is than the
expected $V(s)$ ($A(s_{t},a_{t})=Q(s_{t},a_{t})-V(s_{t})$). Moreover, the A2C
model is synchronous and, compared to the asynchronous advantage actor-critic
(A3C) model [30], it provides better consistency among agents, making it suitable
for disaggregated deployments.
The proposed A2C model contains two neural networks:
* •
A neural network in the critic for estimating the value function to criticize
the actors’ actions during each state.
* •
A neural network in the actor for approximating scheduling rules and
prioritizing packet streams during each state.
In the presented model, the critic is responsible for inspecting the actors’
actions and enhancing their decisions at each time interval; it is located at
an edge cloud while the actor is at the BS. This flexible architecture is
complemented by recent O-RAN efforts around disaggregation of network
functions. A high-level perspective of the proposed model’s architecture is
presented in Fig.1.
In the following, we present the actor-critic architecture step by step.
Actor: We employ the actor as an agent that explores the required policy
$\pi$ (with policy parameter $\theta$) based on its observation ($O$) to
obtain and apply the corresponding action ($A$).
$\pi_{\theta}(O)=O\rightarrow A$ (2)
Therefore, the action chosen by an agent can be expressed as:
$a=\pi_{\theta}(O),$ (3)
where $a\in A$. An action consists of choosing a proper resource block in the
resource block map of each agent. Due to the discrete nature of the actions,
we employ a softmax function at the last (output) layer of the actor to obtain
the score of each action. The action scores sum to 1 and are interpreted as
the probabilities of achieving a high reward value with the chosen action.
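The softmax output described above can be sketched as follows (a minimal NumPy stand-in for the actor's output layer; the observation features, weights, and RBG count are hypothetical, not our trained network):

```python
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    # Numerically stable softmax: maps raw action scores to
    # probabilities that sum to 1.
    z = scores - scores.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
obs = rng.standard_normal(8)         # hypothetical observation vector
W = rng.standard_normal((4, 8))      # hypothetical weights: 4 candidate RBGs
probs = softmax(W @ obs)             # one probability per RBG choice
action = int(np.argmax(probs))       # greedy pick at exploitation time
```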
Critic: We employ the critic to obtain the value function $V(O)$. During each
time interval $t$, after the agent executes the action chosen by the actor
($a_{t}$), it sends the action to the critic along with the current
observation ($O_{t}$). Then, the critic estimates the temporal difference (TD)
error by considering the next state ($O_{t+1}$) and the reward value
($R_{t}$) as follows:
$\delta_{t}=R_{t}+\gamma V(O_{t+1})-V(O_{t}).$ (4)
Here, $\delta_{t}$ is TD error for the action-value at time $t$, and $\gamma$
is a discount factor. At the updating step, the least squares temporal
difference (LSTD) needs to be minimized to update the critic during each step:
$V^{*}=arg\ \underset{V}{min}(\delta_{t})^{2},$ (5)
Here, the optimal value function is presented as $V^{*}$. The actor can be
updated by policy gradient which can be obtained by using the TD error as
follows:
$\bigtriangledown_{\theta}J(\theta)=E_{\pi_{\theta}}[\bigtriangledown_{\theta}log{\pi_{\theta}}(O,a)\delta_{t}],$
(6)
where, $\bigtriangledown_{\theta}J(\theta)$ is the gradient of the cost
function with respect to $\theta$, and the value of action $a$ under the
current policy is shown as $\pi_{\theta}(O,a)$. Then, the difference of
parameters’ weights at the actor during time interval $t$ can be calculated
as:
$\Delta_{\theta_{t}}=\alpha\bigtriangledown_{\theta_{t}}log\pi_{\theta_{t}}(O_{t},a_{t})\delta_{t},$
(7)
Here, the gradient is estimated per time step
($\bigtriangledown_{\theta_{t}}$) and the parameters are updated in this
gradient direction; the learning rate $\alpha$ is between 0 and 1. Finally,
the actor network can be updated using the policy gradient as follows:
$\theta_{t+1}=\theta_{t}+\alpha\bigtriangledown_{\theta_{t}}log\pi_{\theta_{t}}(O_{t},a_{t})\delta_{t}.$
(8)
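Steps (4)-(8) can be sketched with tabular stand-ins for the two networks (a toy illustration under simplifying assumptions of discrete states and a tabular softmax parameterization, not the NS3 implementation):

```python
import numpy as np

def a2c_update(V, theta, obs, action, reward, next_obs, gamma=0.95, alpha=0.01):
    """One synchronous actor-critic step on tabular critic V and
    per-state actor logits theta (softmax policy).

    Eq. (4): delta = R + gamma * V(O') - V(O);
    eq. (5): move V(O) toward the TD target (squared-error gradient step);
    eqs. (6)-(8): theta += alpha * grad log pi(O, a) * delta, where for a
    softmax policy grad log pi = one_hot(a) - pi(O, .).
    """
    delta = reward + gamma * V[next_obs] - V[obs]
    V[obs] += alpha * delta
    logits = theta[obs]
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()
    grad_log = -pi
    grad_log[action] += 1.0
    theta[obs] += alpha * delta * grad_log
    return delta

n_states, n_actions = 3, 4
V = np.zeros(n_states)
theta = np.zeros((n_states, n_actions))
delta = a2c_update(V, theta, obs=0, action=2, reward=1.0, next_obs=1)
```

A positive TD error raises both the value estimate of the visited state and the logit of the action taken there, which is exactly the direction of eq. (8).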
Our main goal is to provide a channel-, delay- and priority-aware actor-critic
learning-based scheduler (CDPA-A2C). Actors are located at the BSs; during
their observation they can therefore access the channel condition through the
Channel Quality Indicator (CQI) feedback of UEs on the control channels, as
well as the amount of time packets have remained in the buffer while waiting
to be scheduled. Moreover, actors can tune their future actions with respect
to the reward value received at each iteration. To train the agents according
to the network requirements, the reward function must incorporate information
about the channel condition, the transmission-buffer load, and the priority of
packets. The reward function at actor $i$ ($BS_{i}$) is designed as follows:
$\displaystyle R=\varphi R_{1}+\tau R_{2}+\lambda R_{3},$ (9)
$\displaystyle R_{1}=\max\left(\operatorname{sgn}\left(cqi_{k}-\frac{\sum_{j=0}^{K}cqi_{j}}{K}\right),0\right)$ (10)
$\displaystyle R_{2}=\begin{cases}1&Packet_{URLLC}\\ 0&\text{otherwise}\end{cases}$ (11)
$\displaystyle R_{3}=\operatorname{sinc}\left(\pi\left\lfloor\frac{Packet_{delay}}{Packet_{budget}}\right\rfloor\right)$ (12)
Here, $cqi_{k}$ is the feedback sent by $UE_{k}$ to agent $i$ at time interval
$t$, $K$ is the total number of UEs associated with agent $i$,
$Packet_{URLLC}$ is an identifier for packets with a low delay budget and is
associated with the QCI, $Packet_{delay}$ is the HOL delay at the RLC buffer,
and $Packet_{budget}$ is the maximum tolerable delay for the corresponding
packet type. The scalar weights $\varphi$, $\tau$ and $\lambda$ control the
trade-off among traffic-type priority, maintaining the delay budget, and UE
channel condition (CQI feedback) [31, 32]. The reward function is tuned so
that the UEs' received signal strength and the packet delivery ratio are
maximized, while giving higher priority to critical traffic to increase the
QoS of URLLC users.
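A minimal sketch of the reward computation in eqs. (9)-(12) might look as follows. The function name and the default weights are illustrative (Table II ties $\varphi$ to the packet's delay ratio), and the arguments are assumed inputs, not the paper's exact interfaces:

```python
import math

def reward(cqi_k, cqi_all, is_urllc, hol_delay, delay_budget,
           phi=1.0, tau=5.0, lam=5.0):
    """Sketch of R = phi*R1 + tau*R2 + lam*R3 from eqs. (9)-(12)."""
    # R1, eq. (10): 1 if UE k's CQI exceeds the cell average, else 0
    mean_cqi = sum(cqi_all) / len(cqi_all)
    diff = cqi_k - mean_cqi
    r1 = max(math.copysign(1.0, diff), 0.0) if diff != 0 else 0.0
    # R2, eq. (11): 1 for URLLC (low delay-budget) packets, else 0
    r2 = 1.0 if is_urllc else 0.0
    # R3, eq. (12): sinc(pi * floor(delay/budget)) -- 1 while the packet is
    # within its budget, (numerically) 0 once the budget is exceeded
    n = math.floor(hol_delay / delay_budget)
    r3 = 1.0 if n == 0 else math.sin(math.pi * n) / (math.pi * n)
    return phi * r1 + tau * r2 + lam * r3
```

For a URLLC packet on a UE with above-average CQI and an HOL delay within budget, all three terms contribute, giving the maximum reward $\varphi+\tau+\lambda$.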
In this paper, to examine the effect of the proposed reward function, we
define two A2C models as follows:
* •
The first scheduler schedules packets based only on their delay budget and is
named Delay-aware A2C (D-A2C). In this model, the priority of packets (URLLC
packets in our scenario) is ignored by setting $\tau=0$ in eq. (9).
* •
The second scheduler takes all performance metrics into consideration using
eqs. (10), (11) and (12), and is called CDPA-A2C. In this model, instead of
prioritizing a single metric, the scheduler is equipped with a more
comprehensive model capable of handling RB allocation under different
conditions.
## V Simulation Environment
In this work, we implemented the proposed algorithms in ns3-gym [33], a
framework for connecting ns-3 with OpenAI Gym (a toolkit for integrating
machine learning libraries) [34]. The neural networks of CDPA-A2C and D-A2C
were implemented in PyTorch. In our simulations, three BSs are considered, and
the number of UEs varies between 30 and 90, distributed randomly in the
environment. Simulation results are averaged over 30 runs, and each run
contains 5000 iterations. The simulations were run on a PC equipped with a
$Core^{TM}$ i7-8700 CPU and 32 GB of RAM. The simulation time depends on the
number of assigned UEs and BSs, the traffic types, and UE mobility; it varies
between 30 and 90 minutes for simulating 5000 TTIs under the scenarios defined
in this paper. Numerology zero is employed, with 15 kHz subcarrier spacing, 12
subcarriers per resource block, and 14 symbols per subframe, and our
scheduling interval is set to 1 $ms$ [35].
We deployed two scenarios. In the first scenario, we assume the number of UEs
varies between 30 and 90 and that UEs are not mobile. Three traffic types
(voice, video, and IP Multimedia Subsystem (IMS)) with different QoS
requirements are distributed uniformly at random among UEs. The UE
applications generate traffic with Poisson arrivals; each traffic type has its
own properties and a different arrival time with respect to the other packet
types.
In our simulations, a packet whose time in a UE's buffer is below its delay
budget is considered a satisfied packet; when this value exceeds the delay
budget, the packet is counted as unsatisfied. Table I details the required QoS
metrics for each traffic type considered in this paper.
TABLE I: The employed packet types and their properties [36].
QCI | Resource Type | Priority | Packet Delay Budget | Service Example
---|---|---|---|---
1 | GBR | 2 | 100 ms | Voice
5 | Non-GBR | 1 | 100 ms | IMS
6 | Non-GBR | 6 | 300 ms | Video
75 | GBR | 2.5 | 20 ms | V2X
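Using the delay budgets of Table I, the satisfied/unsatisfied classification described above can be sketched as follows; the table constant and helper names are hypothetical, introduced only for illustration:

```python
# Delay budgets (ms) from Table I; QCI -> (budget_ms, service example)
QCI_TABLE = {
    1:  (100, "Voice"),
    5:  (100, "IMS"),
    6:  (300, "Video"),
    75: (20,  "V2X"),
}

def is_satisfied(qci, hol_delay_ms):
    """A packet is 'satisfied' if its head-of-line delay stays within
    the delay budget for its QCI (the definition used above)."""
    budget_ms, _service = QCI_TABLE[qci]
    return hol_delay_ms <= budget_ms

def delivery_ratio(packets):
    """Packet delivery ratio over an iterable of (qci, hol_delay_ms) pairs."""
    packets = list(packets)
    ok = sum(is_satisfied(q, d) for q, d in packets)
    return ok / len(packets)
```

This is the metric reported in Figs. 4 and 6: the fraction of packets meeting their per-QCI delay budget.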
In the second scenario, we have 70-110 UEs in our vehicular network, of which
10% are vehicular UEs ($UE_{v}$) and the rest are fixed users ($UE_{c}$). The
$UE_{c}$ require a high-capacity link, while the $UE_{v}$ demands must be
satisfied by a low-latency link. In this scenario, in addition to voice,
video, and IMS traffic, we have Vehicle-to-everything (V2X) packets defined
based on 5GAA standards [37]. Table II presents the assigned network
parameters and neural network settings used in our simulations.
TABLE II: Simulation parameters.
Parameters | Value
---|---
Number of neurons | 256 x 3 layers (Actor + Critic)
Scheduler algorithm | CDPA-A2C, D-A2C, PF, CQA
Number of BSs | 3
Number of UEs | 30-110
Maximum traffic load per UE (downlink) | 256 kbps
Traffic types | Voice, Video, IMS, V2X
Traffic streams per TTI | 50
D-A2C reward weights | $\varphi=1-\frac{packet_{delay}}{traffic_{delay-budget}}$, $\tau=0$ and $\lambda=5$
CDPA-A2C reward weights | $\varphi=1-\frac{packet_{delay}}{traffic_{delay-budget}}$, $\tau=5$ and $\lambda=5$
Discount factor | 0.9
Actor learning rate | 0.01
Critic learning rate | 0.05
Before presenting network-performance results, in Fig. 2 we show the
convergence of the proposed reward. The figure shows the behavior of the
reward function (eq. 9) when the number of UEs is 90 and 10% of them are
mobile. In this work, an epsilon-greedy policy is used: during the exploration
phase, actors act either randomly or by choosing the RB with the highest
weight assigned by the proposed actor-critic model. As shown in Fig. 2, the
exploration phase ends after almost 3700 rounds, and the model converges after
that.
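The epsilon-greedy RB selection described above can be sketched as a few lines; the function name is illustrative, and `weights` stands in for the per-RB weights assigned by the actor:

```python
import random

def choose_rb(weights, epsilon):
    """Epsilon-greedy choice over resource blocks: with probability epsilon
    pick a random RB (exploration), otherwise pick the RB with the highest
    actor-assigned weight (exploitation)."""
    if random.random() < epsilon:
        return random.randrange(len(weights))
    return max(range(len(weights)), key=lambda i: weights[i])
```

Annealing `epsilon` toward zero over the training rounds reproduces the behavior in Fig. 2, where exploration ends and the reward settles.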
Figure 2: The convergence of the CDPA-A2C algorithm's reward function when the
number of UEs is 90 and 10% of them are mobile.
Figure 3: The HOL delay for different number of UEs (with no mobility).
Figure 4: Packet delivery ratio (with no mobility).
### V-A Simulation Results with no mobility
Figure 5: The HOL delay for different number of UEs (with mobility).
In this scenario, we assume UEs are fixed, and we have three packet streams
(voice, video, IMS) with different QoS requirements (delay budget, packet loss
ratio, etc.). As mentioned previously, we consider two A2C models with
different reward functions, named D-A2C and CDPA-A2C, respectively. In
addition to these two models, we compare our results with the traditional
Proportional Fair (PF) scheduler and the Channel and QoS Aware (CQA) scheduler
described in [12].
In Fig. 3, we present the mean delay for various traffic types and numbers of
UEs. PF and CQA can satisfy the required delay budget for all traffic types
when the number of UEs is below 50 (below 70 for the CQA scheduler); however,
as the number of UEs increases, the mean HOL delay grows significantly and
exceeds the target delay for the voice and IMS traffic types. With higher
numbers of UEs, D-A2C outperforms the traditional PF and CQA schedulers, while
employing a more comprehensive reward function, as in CDPA-A2C, remarkably
enhances network performance. Overall, the CDPA-A2C scheduler reduces the mean
delay by up to 100 ms, 225 ms and 375 ms compared with the D-A2C, PF and CQA
schedulers, respectively.
In Fig. 4, we present the packet delivery ratio (the ratio of packets that
satisfy the predefined delay budget). As shown, the packet delivery ratio of
CDPA-A2C and D-A2C can be up to 117% and 73% higher than that of PF and CQA,
considering various numbers of UEs and traffic types. Moreover, CDPA-A2C
considerably enhances the packet delivery ratio in comparison to D-A2C (by up
to 63% for IMS packets). Note that we assume all applications are
delay-sensitive and count packets that miss their service targets as
undelivered.
(a) The effect of UEs’ number on packet delivery ratio when the ratio of
mobile UEs is fixed to 10%.
(b) The effect of percentage of mobile UEs on packet delivery ratio when the
total number of UEs is 90.
Figure 6: Packet delivery ratio (with mobility).
### V-B Simulation Results with mobility
In the second scenario, we consider a case where 10% of UEs are mobile. We
include V2X traffic based on 5GAA standards in addition to the traffic types
used in the previous simulations. Owing to the better performance of
CDPA-A2C, we omit D-A2C from the second scenario.
In Fig. 5, we evaluate the mean HOL delay for various numbers of UEs (70-110)
when mobile UEs are present in the network. Although the PF and CQA schedulers
can maintain the mean HOL delay below the delay budget for small numbers of
UEs, the mean HOL delay increases dramatically as the number of UEs grows.
Moreover, owing to the high delay sensitivity of V2X packets (V2X delay budget
= 20 $ms$), PF and CQA cannot satisfy V2X packets in any of these cases. In
contrast, the proposed CDPA-A2C model, by scheduling packets on time and
preventing congestion in the UEs' buffers, provides a lower delay for the
presented traffic types than PF and CQA.
In Fig. 6, we present the packet delivery ratio (packets that satisfy the
delay target) of the proposed model with respect to the other schedulers. In
Fig. 6(a), we evaluate the effect of increasing the number of UEs on the
packet delivery ratio when 10% of UEs are mobile. As shown, the packet
delivery ratio of CDPA-A2C is up to 92% and 53% higher than that of PF and
CQA, respectively, across traffic types and numbers of UEs. Although CQA
performs well in delivering voice, video, and IMS packets, CDPA-A2C
considerably enhances the packet delivery ratio for delay-sensitive traffic
such as V2X packets in comparison with CQA. In Fig. 6(b), we also examine the
effect of increasing the ratio of mobile UEs on the packet delivery ratio when
the total number of UEs is fixed at 90. As we can see, the proposed model
significantly enhances the packet delivery ratio for different traffic types
with respect to PF and CQA. Moreover, as the density of mobile UEs increases,
the packet loss ratio of V2X packets under the CQA and PF schedulers increases
by up to 40% and 75%, respectively. Therefore, the CDPA-A2C scheduler
noticeably enhances QoS by reducing the HOL delay and increasing the packet
delivery ratio in comparison with the PF and CQA schedulers.
## VI Conclusion
In this paper, we propose two actor-critic learning-based schedulers, namely
the Delay-aware actor-critic (D-A2C) and the channel-, delay- and
priority-aware actor-critic (CDPA-A2C), for reconfigurable wireless networks,
in which the network can autonomously learn and adapt to the dynamics of the
wireless environment to optimize the utility of network resources.
Reconfigurable wireless networks play a vital role in network automation, and
machine learning algorithms are an appropriate candidate for building
versatile models and making future networks smarter. The proposed model is
constructed from two neural networks (actor and critic): actors apply actions
(RB allocation), while the critic monitors the agents' actions and tunes
their behavior in subsequent states to speed up convergence and optimize the
model. The comprehensive model (CDPA-A2C), in addition to considering the
channel condition and the delay budget of each packet, prioritizes received
packets according to their types and QoS requirements. We evaluate the
proposed schemes in both fixed and mobile scenarios, comparing the
learning-based schemes with two well-known algorithms: the traditional
proportional fair scheduler and the QoS-aware CQA algorithm. Our results show
that CDPA-A2C significantly reduces the mean delay with respect to the PF and
D-A2C schedulers. Additionally, CDPA-A2C increases the packet delivery ratio
in the mobile scenario by up to 92% and 53% in comparison with PF and CQA,
respectively.
## VII Acknowledgement
This work is supported by Ontario Centers of Excellence (OCE) 5G ENCQOR
program.
## References
* [1] J. Navarro-Ortiz, P. Romero-Diaz, S. Sendra, P. Ameigeiras, J. J. Ramos-Munoz, and J. M. Lopez-Soler, “A Survey on 5G Usage Scenarios and Traffic Models,” IEEE Communications Surveys Tutorials, vol. 22, no. 2, pp. 905–929, 2020.
* [2] M. Polese, R. Jana, V. Kounev, K. Zhang, S. Deb, and M. Zorzi, “Machine learning at the edge: A data-driven architecture with applications to 5G cellular networks,” IEEE Transactions on Mobile Computing, 2020.
* [3] N. C. Luong, D. T. Hoang, S. Gong, D. Niyato, P. Wang, Y.-C. Liang, and D. I. Kim, “Applications of deep reinforcement learning in communications and networking: A survey,” IEEE Communications Surveys & Tutorials, vol. 21, no. 4, pp. 3133–3174, 2019.
* [4] M. Polese, R. Jana, V. Kounev, K. Zhang, S. Deb, and M. Zorzi, “Machine learning at the edge: A data-driven architecture with applications to 5G cellular networks,” IEEE Transactions on Mobile Computing, 2020.
* [5] R. Li, Z. Zhao, X. Zhou, G. Ding, Y. Chen, Z. Wang, and H. Zhang, “Intelligent 5G: When cellular networks meet artificial intelligence,” IEEE Wireless communications, vol. 24, no. 5, pp. 175–183, 2017.
* [6] S. Chinchali, P. Hu, T. Chu, M. Sharma, M. Bansal, R. Misra, M. Pavone, and S. Katti, “Cellular network traffic scheduling with deep reinforcement learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, 2018.
* [7] Y. Wei, F. R. Yu, M. Song, and Z. Han, “User scheduling and resource allocation in HetNets with hybrid energy supply: An actor-critic reinforcement learning approach,” IEEE Transactions on Wireless Communications, vol. 17, no. 1, pp. 680–692, 2017.
* [8] C. C. White, “A survey of solution techniques for the partially observed Markov decision process,” Annals of Operations Research, vol. 32, no. 1, pp. 215–230, 1991.
* [9] O. Nachum, M. Norouzi, K. Xu, and D. Schuurmans, “Bridging the gap between value and policy based reinforcement learning,” in Advances in Neural Information Processing Systems, pp. 2775–2785, 2017.
* [10] A. K. Saluja, S. A. Dargad, and K. Mistry, “A Detailed Analogy of Network Simulators—NS1, NS2, NS3 and NS4,” Int. J. Future Revolut. Comput. Sci. Commun. Eng, vol. 3, pp. 291–295, 2017.
* [11] M. T. Kawser, H. Farid, A. R. Hasin, A. M. Sadik, and I. K. Razu, “Performance comparison between round robin and proportional fair scheduling methods for LTE,” International Journal of Information and Electronics Engineering, vol. 2, no. 5, pp. 678–681, 2012.
* [12] B. Bojovic and N. Baldo, “A new channel and QoS aware scheduler to enhance the capacity of voice over LTE systems,” in 2014 IEEE 11th International Multi-Conference on Systems, Signals & Devices (SSD14), pp. 1–6, IEEE, 2014.
* [13] M. F. Audah, T. S. Chin, Y. Zulfadzli, C. K. Lee, and K. Rizaluddin, “Towards Efficient and Scalable Machine Learning-Based QoS Traffic Classification in Software-Defined Network,” in International Conference on Mobile Web and Intelligent Information Systems, pp. 217–229, Springer, 2019.
* [14] M. A. Habibi, M. Nasimi, B. Han, and H. D. Schotten, “A comprehensive survey of RAN architectures toward 5G mobile communication system,” IEEE Access, vol. 7, pp. 70371–70421, 2019.
* [15] S. Abedi, “Efficient radio resource management for wireless multimedia communications: a multidimensional QoS-based packet scheduler,” IEEE Transactions on Wireless Communications, vol. 4, no. 6, pp. 2811–2822, 2005.
* [16] G. Piro, L. A. Grieco, G. Boggia, R. Fortuna, and P. Camarda, “Two-level downlink scheduling for real-time multimedia services in LTE networks,” IEEE Transactions on Multimedia, vol. 13, no. 5, pp. 1052–1065, 2011.
* [17] G. Monghal, D. Laselva, P.-H. Michaelsen, and J. Wigard, “Dynamic packet scheduling for traffic mixes of best effort and VoIP users in E-UTRAN downlink,” in 2010 IEEE 71st Vehicular Technology Conference, pp. 1–5, IEEE, 2010.
* [18] T. Şahin, R. Khalili, M. Boban, and A. Wolisz, “Reinforcement learning scheduler for vehicle-to-vehicle communications outside coverage,” in 2018 IEEE Vehicular Networking Conference (VNC), pp. 1–8, IEEE, 2018.
* [19] S. Nath, Y. Li, J. Wu, and P. Fan, “Multi-user Multi-channel Computation Offloading and Resource Allocation for Mobile Edge Computing,” in ICC 2020-2020 IEEE International Conference on Communications (ICC), pp. 1–6, IEEE, 2020.
* [20] G. R. Ghosal, D. Ghosal, A. Sim, A. V. Thakur, and K. Wu, “A Deep Deterministic Policy Gradient Based Network Scheduler For Deadline-Driven Data Transfers,” in 2020 IFIP Networking Conference (Networking), pp. 253–261, IEEE, 2020.
* [21] H. Chergui and C. Verikoukis, “Offline SLA-constrained deep learning for 5G networks reliable and dynamic end-to-end slicing,” IEEE Journal on Selected Areas in Communications, vol. 38, no. 2, pp. 350–360, 2019.
* [22] H. Chergui and C. Verikoukis, “Big Data for 5G Intelligent Network Slicing Management,” IEEE Network, vol. 34, no. 4, pp. 56–61, 2020.
* [23] I.-S. Comşa, S. Zhang, M. E. Aydin, P. Kuonen, Y. Lu, R. Trestian, and G. Ghinea, “Towards 5G: A reinforcement learning-based scheduling solution for data traffic management,” IEEE Transactions on Network and Service Management, vol. 15, no. 4, pp. 1661–1675, 2018.
* [24] I.-S. Comsa, S. Zhang, M. Aydin, P. Kuonen, R. Trestian, and G. Ghinea, “A comparison of reinforcement learning algorithms in fairness-oriented OFDMA schedulers,” Information, vol. 10, no. 10, p. 315, 2019.
* [25] M. Elsayed and M. Erol-Kantarci, “AI-enabled radio resource allocation in 5G for URLLC and eMBB users,” in 2019 IEEE 2nd 5G World Forum (5GWF), pp. 590–595, IEEE, 2019.
* [26] M. Mohammadi and A. Al-Fuqaha, “Enabling cognitive smart cities using big data and machine learning: Approaches and challenges,” IEEE Communications Magazine, vol. 56, no. 2, pp. 94–101, 2018.
* [27] A. Martin, J. Egaña, J. Flórez, J. Montalbán, I. G. Olaizola, M. Quartulli, R. Viola, and M. Zorrilla, “Network resource allocation system for QoE-aware delivery of media services in 5G networks,” IEEE Transactions on Broadcasting, vol. 64, no. 2, pp. 561–574, 2018.
* [28] X. Du, Y. Sun, N. B. Shroff, and A. Sabharwal, “Balancing Queueing and Retransmission: Latency-Optimal Massive MIMO Design,” IEEE Transactions on Wireless Communications, vol. 19, no. 4, pp. 2293–2307, 2020.
* [29] I.-S. Comşa, G.-M. Muntean, and R. Trestian, “An Innovative Machine-Learning-Based Scheduling Solution for Improving Live UHD Video Streaming Quality in Highly Dynamic Network Environments,” IEEE Transactions on Broadcasting, 2020.
* [30] M. Sewak, “Actor-Critic Models and the A3C,” in Deep Reinforcement Learning, pp. 141–152, Springer, 2019.
* [31] M. Elsayed and M. Erol-Kantarci, “Learning-based resource allocation for data-intensive and immersive tactile applications,” in 2018 IEEE 5G World Forum (5GWF), pp. 278–283, IEEE, 2018.
* [32] M. Elsayed, M. Erol-Kantarci, B. Kantarci, L. Wu, and J. Li, “Low-latency communications for community resilience microgrids: A reinforcement learning approach,” IEEE Transactions on Smart Grid, vol. 11, no. 2, pp. 1091–1099, 2019.
* [33] P. Gawłowicz and A. Zubow, “NS-3 meets openai gym: The playground for machine learning in networking research,” in Proceedings of the 22nd International ACM Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems, pp. 113–120, 2019.
* [34] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, “Openai gym,” arXiv preprint arXiv:1606.01540, 2016.
* [35] J. Vihriälä, A. A. Zaidi, V. Venkatasubramanian, N. He, E. Tiirola, J. Medbo, E. Lähetkangas, K. Werner, K. Pajukoski, A. Cedergren, et al., “Numerology and frame structure for 5G radio access,” in 2016 IEEE 27th annual international symposium on personal, indoor, and mobile radio communications (PIMRC), pp. 1–5, IEEE, 2016.
* [36] “Table 6.1.7-A: Standardized QCI characteristics,” 3GPP TS 23.203 V16.1.0.
* [37] 5GAA: Paving the Way towards 5G, accessed on 03 September 2020. Available online: https://5gaa.org/5g-technology/paving-the-way.
Astronomy Reports, 2021, Vol. 98, No. 1
# Dual-temperature acceleration radiation
Paper presented by MG at the Fourth Zeldovich meeting, an international
conference in honor of Ya. B. Zeldovich held in Minsk, Belarus on September
7–11, 2020. Published by the recommendation of the special editors: S. Ya.
Kilin, R. Ruffini and G. V. Vereshchagin.
© 2021.
M. R. R. Good, Nazarbaev University, Nur-Sultan, Qazaqstan
A. Mitra, Nazarbaev University, Nur-Sultan, Qazaqstan; Kazakh-British Technical University, Almaty, Qazaqstan
V. Zarikas, Nazarbaev University, Nur-Sultan, Qazaqstan
###### Abstract
We solve for a system that emits acceleration radiation at two different
temperatures. The equilibrium states occur asymptotically in Planck
distributions, and the transition between them is non-thermal. The model is
simple enough that a global solution for the radiation spectrum can be
obtained analytically. We present it as a potentially useful model for
investigating non-thermal vacuum acceleration radiation between asymptotic
equilibrium states: a horizonless initial state and a final state with a
thermodynamic horizon.
## I Introduction
As one of the simplest theoretical models of acceleration radiation, the
moving mirror of DeWitt DeWitt:1975ys , and Davies-Fulling Davies:1976hi ;
Davies:1977yv elucidates the process of field perturbation by a perfectly
reflecting accelerated boundary, which transmits both particles and energy.
The mirror model has developed carlitz1987reflections into a textbook case
Birrell:1982ix ; Fabbri of vacuum acceleration radiation, notably used in
analog to describe Hawking radiation from black holes good2013time . On the
experimental side, it has been a decade since moving mirror radiation has been
detected Wilson_2011 and further interesting observational setups are planned
Dodonov . Recent studies are making progress on a variety of topics including
accelerating boundary entropy production Chen:2017lum , relativistic plasma
detection Chen:2015bcg ; a , de Sitter cosmology correspondence Good:2020byh ,
motion induction Silva:2020odg , extremal black holes good2020extreme , and
Casimir free-fall x ; y ; z and other future prospects emanating from black
hole information loss paradox mitra .
One aspect of the moving mirror model that has not been explored is the
transition between equilibrium radiation states. Equilibrium in nature is the
exception rather than the rule, and spectral changes (which constitute a
significant portion of interesting radiative phenomena) take place under
non-equilibrium conditions. Thus there is much to be learned about the complex
evolutionary emission process occurring far from equilibrium.
Moreover, while very little is understood about the general aspects of non-
equilibrium radiative systems, equilibrium spectra (e.g. Bose-Einstein
statistics of a thermal Planck distribution) have been much studied and have
been shown to display general universal features understood dynamically
carlitz1987reflections . This universality (e.g. in black hole thermalization)
is thought to macroscopically emerge from large-scale fluctuations in such a
robust way that one can expect that similar mathematical machinery and
physical mechanisms will work to describe non-equilibrium radiative systems as
well Zoltan . Thus, investigating the similarities and differences of
equilibrium and non-equilibrium emission may help to discover the
distinguishing but still robust properties of non-equilibrium radiation. A
simple soluble system with both phases can help fill this gap. We present such
a model and ask these questions:
* •
What kind of dynamics can be responsible for radiation at different
temperatures?
* •
How can we explicitly illustrate temperature with and without a horizon?
* •
What is the spectrum describing the intermediate non-thermal phase?
Our paper is organized as follows: in Sec. II, we review the details of the
proposed accelerated mirror trajectory, computing only the crucial and
minimally needed relativistic dynamical properties: rapidity, speed, and
acceleration. In Sec. III, we derive the energy flux radiated by analysis of
the quantum stress tensor. In Sec. IV, we derive the particle spectrum,
finding a unique form for the radiation. Throughout we use natural units,
$\hbar=c=1$.
## II Trajectory Motion
We start with light-cone coordinates $(u,v)$, retarded time $u=t-x$, to
express the one-parameter ($\kappa$), $1+1$-dimensional trajectory of the
moving mirror as
$f(v)=-\frac{1}{\kappa}\ln\left[\kappa v(\kappa v-1)\right],$ (1)
where $f$ is the retarded-time position. Here $f$ is a function, not a
coordinate, so the symbol “$u$” is not used. Eq. (1) is the retarded-time
‘position’, or trajectory, of the mirror, where the independent variable is
the advanced time $v=t+x$. A spacetime plot of the trajectory, with time $t$
on the vertical axis, is given in Figure 1a, and a conformal Penrose diagram
is plotted in Figure 1b. What are the important dynamics of our a priori
chosen trajectory, Eq. (1)?
(a)
(b)
Figure 1: Left : A spacetime diagram of the mirror trajectory, Eq. (1) in a
contour plot. It starts off asymptotically inertial with zero acceleration and
light-speed velocity and decelerates eventually reaching zero speed and then
accelerates again approaching the speed of light receding with asymptotic
infinite acceleration. This divergence happens along the null ray $v_{H}$
which is the advanced time horizon. Here we have set $v_{H}=0$. Notice how
field modes with $v<v_{H}$ moving to the left will always hit the mirror.
Those field modes with $v>v_{H}$, never hit the mirror due to the horizon,
geometrically illustrating information loss to an observer at
$\mathscr{I}^{+}_{R}$. The appearance of an initial horizon is an illusion in
the spacetime diagram because the mirror starts asymptotically time-like which
can be seen in the Penrose diagram of Figure 1b. The future light-cone
centered at $t=x=0$ is marked in solid black. Here $\kappa=1$. Right : A
Penrose diagram of the mirror trajectory, Eq. (1). The mirror is moving at
light-speed as $v\to(-\infty,0^{-})$. Starting asymptotically time-like in the
past, it proceeds to accelerate to an asymptotic light-like horizon at
$v_{H}=0$. The various colors correspond to different system scaling with
$\kappa=1/2,1,2,4,8$ from orange, red, blue, black, to green.
### II.1 Rapidity, Speed, Acceleration
We compute the rapidity $\eta(v)$, by $2\eta(v)\equiv-\ln f^{\prime}(v)$ where
the prime is a derivative with respect to the argument, plugging in Eq. (1),
$\eta(v)=-\frac{1}{2}\ln\left(\frac{1-2\kappa v}{\kappa v(\kappa
v-1)}\right).$ (2)
With rapidity, we may easily compute the velocity, $V\equiv\tanh\eta$,
plugging in Eq. (2),
$V(v)=-\tanh\left(\frac{1}{2}\ln\left(\frac{1}{1-\kappa v}-\frac{1}{\kappa
v}\right)\right),$ (3)
and the proper acceleration which follows from $\alpha(v)\equiv
e^{\eta(v)}\eta^{\prime}(v)$, using Eq. (2) again, giving the leading order
terms of the acceleration at both $v\to-\infty$ and $v\to 0^{-}$,
respectively,
$\alpha_{\textrm{past}}(v)=-\frac{1}{2\sqrt{2}}\sqrt{-\frac{\kappa}{v}},\quad\alpha_{\textrm{horizon}}(v)=-\frac{1}{2}\sqrt{-\frac{\kappa}{v}}.$
(4)
The horizon proper acceleration has the greater magnitude,
$\alpha_{\textrm{horizon}}=\sqrt{2}\alpha_{\textrm{past}}$, which intuitively
suggests a hotter radiative temperature. The magnitude of the velocity, Eq.
(3), along with the proper acceleration, $|\alpha(v)|$, are plotted in Figure
2.
Figure 2: The speed, $|V(v)|$ and proper acceleration $|\alpha(v)|$, as a
function of light-cone coordinate advanced time $v=t+x$ for the mirror
trajectory, Eq. (1). It is readily seen that at $v=0^{-}$, the velocity, $V$,
is asymptotically the speed of light, $|V|\to c=1$ and the acceleration
diverges, $|\alpha|\to\infty$. Here $\kappa=8$ for illustration. The key take-
away from this graph is the asymmetry between the asymptotic states of the
mirror: the past state and horizon state are both asymptotically light speed,
but only the $v\to v_{H}=0^{-}$ has infinite acceleration with a light-like
horizon.
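As a numerical sanity check (not part of the original analysis), the asymptotic accelerations in Eq. (4) can be verified directly from Eqs. (1)-(2) with $\kappa=1$, estimating $\eta'$ by a central difference:

```python
import math

KAPPA = 1.0

def eta(v):
    # eq. (2) via 2*eta = -ln f'(v), with f'(v) = (1-2kv)/(kv(kv-1)), v < 0
    fp = (1 - 2 * KAPPA * v) / (KAPPA * v * (KAPPA * v - 1))
    return -0.5 * math.log(fp)

def alpha(v, h=None):
    # proper acceleration alpha = e^eta * eta', with eta' estimated
    # by a central finite difference of step h
    h = h or abs(v) * 1e-6
    etap = (eta(v + h) - eta(v - h)) / (2 * h)
    return math.exp(eta(v)) * etap

# near the horizon (v -> 0^-): alpha ~ -(1/2) * sqrt(-kappa/v)
v = -1e-8
print(alpha(v) / (-0.5 * math.sqrt(-KAPPA / v)))                 # ratio ≈ 1

# in the far past (v -> -inf): alpha ~ -(1/(2*sqrt(2))) * sqrt(-kappa/v)
v = -1e8
print(alpha(v) / (-math.sqrt(-KAPPA / v) / (2 * math.sqrt(2))))  # ratio ≈ 1
```

Both ratios approach unity in the respective limits, confirming the factor of $\sqrt{2}$ between the horizon and past accelerations.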
## III Energy Flux
### III.1 Energy Flux
The quantum stress tensor reveals the energy flux, $F(u)$, emitted by the
mirror Davies:1976hi ,
$F(u)=-\frac{1}{24\pi}\\{p(u),u\\},$ (5)
where $p(u)$ is the advanced time as a function of retarded time,
Davies:1977yv ; Birrell:1982ix . However, since we employ $f(v)$, retarded
time as a function of advanced time, Eq. (1), Good:2016atu ; Good:2020byh we
need
$F(v)=\frac{1}{24\pi}\\{f(v),v\\}f^{\prime}(v)^{-2},$ (6)
where the Schwarzian brackets are defined as usual,
$\\{f(v),v\\}\equiv\frac{f^{\prime\prime\prime}}{f^{\prime}}-\frac{3}{2}\left(\frac{f^{\prime\prime}}{f^{\prime}}\right)^{2}\,,$
(7)
which gives, using $f(v)$ of Eq. (1),
$F(v)=\frac{\kappa^{2}(4\kappa v(\kappa v-1)(\kappa v(\kappa
v-1)+2)+1)}{48\pi(1-2\kappa v)^{4}}.$ (8)
The leading order terms of the energy flux at both $v\to-\infty$ and $v\to
0^{-}$ are respectively,
$F_{\textrm{past}}=\frac{\kappa^{2}}{192\pi},\qquad
F_{\textrm{horizon}}=\frac{\kappa^{2}}{48\pi},$ (9)
demonstrating that the asymptotic fluxes are constant (thermal) and differ by
a factor of 4. A plot of the energy flux, $F(v)$, as a function of advanced
time $v$ is given in Figure 3.
Figure 3: The energy flux, Eq. (8), is asymptotically constant at both
$v\to(-\infty,0^{-})$. The scale has been set so that the maximum flux is
$F_{\textrm{max}}=1$, where $\kappa^{2}=48\pi$. Notice that in the asymptotic
past, $v\to-\infty$, the flux is $F=\kappa^{2}/192\pi$, or in our scale,
$F=1/4$. The radiation transitions from constant flux in the past to constant
flux near the horizon (by a factor of $4$). The temperature correspondingly
increases (by a factor of $2$).
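The closed-form flux of Eq. (8) can likewise be checked numerically against the asymptotic values in Eq. (9); the short script below is a sanity check, not part of the original derivation:

```python
import math

KAPPA = 1.0

def flux(v):
    # eq. (8): energy flux F(v) radiated by the mirror of eq. (1), v < 0
    k = KAPPA
    w = k * v * (k * v - 1)
    return k**2 * (4 * w * (w + 2) + 1) / (48 * math.pi * (1 - 2 * k * v)**4)

F_past = KAPPA**2 / (192 * math.pi)     # eq. (9), limit v -> -infinity
F_horizon = KAPPA**2 / (48 * math.pi)   # eq. (9), limit v -> 0^-

print(flux(-1e6) / F_past)      # ratio ≈ 1
print(flux(-1e-6) / F_horizon)  # ratio ≈ 1
print(F_horizon / F_past)       # ≈ 4
```

The factor of 4 between the two constant fluxes matches the factor of 2 between the corresponding temperatures, consistent with a flux scaling as $T^2$ in $1+1$ dimensions.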
## IV Particle Spectrum
The particle spectrum can be obtained from the beta Bogoliubov coefficient
Birrell:1982ix ,
$\beta_{\omega\omega^{\prime}}=\frac{1}{2\pi}\sqrt{\frac{\omega^{\prime}}{\omega}}\int_{-\infty}^{v_{H}=0}\operatorname{d}\\!{v}\>e^{-i\omega^{\prime}v-i\omega
f(v)}\,,$ (10)
where $\omega$ and $\omega^{\prime}$ are the frequencies of the outgoing and
incoming modes respectively carlitz1987reflections . To obtain the particle
spectrum, we take the modulus square,
$N_{\omega\omega^{\prime}}\equiv|\beta_{\omega\omega^{\prime}}|^{2}$, which
gives
$N_{\omega\omega^{\prime}}=\frac{\left|K_{\frac{1}{2}-\frac{i\omega}{\kappa}}\left(\frac{i\omega^{\prime}}{2\kappa}\right)\right|^{2}}{2\pi^{2}\kappa^{2}\left(e^{2\pi\omega/\kappa}-1\right)}.$
(11)
The spectrum of Eq. (11), $|\beta_{\omega\omega^{\prime}}|^{2}$, is
asymptotically thermal and is shown as a contour plot in Figure 4. In both the
early-time ($\omega^{\prime}\ll\omega$) and late-time
($\omega^{\prime}\gg\omega$) regimes, we have, to leading order, respectively,
$N_{\omega\omega^{\prime}}^{\textrm{past}}=\frac{1}{\pi\omega^{\prime}\kappa}\frac{1}{e^{4\pi\omega/\kappa}-1},\quad
N_{\omega\omega^{\prime}}^{\textrm{horizon}}=\frac{1}{2\pi\omega^{\prime}\kappa}\frac{1}{e^{2\pi\omega/\kappa}-1}.$
(12)
This demonstrates that the particles at early-times (horizonless) have a
different temperature than at late-times (horizon).
Figure 4: The dual-temperature spectrum, Eq. (11),
$|\beta_{\omega\omega^{\prime}}|^{2}$, as a contour plot; here $\kappa=1$. The
color scheme depicts the lightest color as one order of magnitude larger than
the darkest color. The key take-away is the asymmetry between the two
frequencies, $\omega$ and $\omega^{\prime}$, which underscores the temperature
difference between the past-future equilibrium states.
This spectrum, Eq. (11), demonstrates a new form of Hawking radiation
emanating from a moving mirror trajectory. Here the temperatures are
respectively, $T_{\textrm{past}}=\kappa/(4\pi)$, and
$T_{\textrm{horizon}}=\kappa/(2\pi)$. Eq. (11) can be compared to the late-time
spectra of non-extremal black holes, which take the same form as
$N_{\omega\omega^{\prime}}^{\textrm{horizon}}$ (e.g. see Eq. 2.13 of
good2013time ), while the Bessel function is reminiscent of the radiation
characterizing extremal black holes at late times good2020extreme ;
Liberati:2000sq ; Foo:2020bmv .
The small-frequency infrared divergence demonstrates the infinite total count
of soft particles (some of which are zero-energy Rindler particles
Cozzella:2020gci ; Landulfo:2019tqj ) commonly associated with drifting moving
mirror solutions Good:2016atu ; Good:2018ell ; Good:2018zmx ; Myrzakul:2018bhy
; Good_2015BirthCry ; Good:2016yht (remnant analogs), that are not strictly
asymptotically static Walker_1982 ; Good:2019tnf ; GoodMPLA ; Good:2017kjr ;
good2013time ; Good:2017ddq ; Good:2018aer (effervescent analogs).
## V Conclusions
We have presented a model of acceleration radiation with an exactly solvable
spectrum and energy flux, whose particles are asymptotically in a Planck
distribution and whose energy is emitted at a constant equilibrium rate. The
different asymptotic temperatures explicitly demonstrate the disconnect
between the horizon and non-horizon radiating states of the system.
Starting from the motion, we have used the well-established moving mirror
model to compute the precise nature of the radiation, which we liken to a type
of ‘mirror defogger’. In practice, a store-bought mirror defogger transmits
gentle warmth across the mirror’s surface, preventing steam’s moisture from
collecting on the cold glass. Our system gently (asymptotically) warms up so
that the moving mirror’s radiation has an increased temperature, preventing
particles from maintaining their original ultra-relativistic cold equilibrium
in the asymptotic past. The mirror moves in a
precise non-thermal way, with dynamics that asymptotically evolve to a higher
temperature, re-establishing an ultra-relativistic hot equilibrium in the
asymptotic future. The spectral solution describes non-equilibrium emission
similar to that in early-time gravitational collapse or the particle creation
in cosmologies where the asymptotic early universe and remote future exhibit
thermal equilibrium.
Future studies investigating the universal out-of-equilibrium behaviour of
Hawking radiation, and comparisons with other approaches such as holographic
ones Sonner:2012if , may prove fruitful. Regardless, recent phenomenological
proposals, e.g. concerning Hawking points in the sky An:2018utx or the
cosmological consequences of primordial black holes, make apparent the need
for a detailed study of vacuum radiation with a variety of possible methods,
including accelerating mirror models.
## Funding
MG is funded by grant no. BR05236454 by the Ministry of Education and Science
of the Republic of Kazakhstan and the FY2018-SGP-1-STMM Faculty Dev.
Competitive Research Grant No. 090118FD5350. AM acknowledges the support of
Grant No. 110119FD4534.
## References
* (2) B. S. DeWitt, Phys. Rept. 19, 295 (1975).
* (3) S. A. Fulling and P. C. W Davies, Proc. of the Royal Society of London A 348, 393, (1976).
* (4) P. Davies and S. Fulling, Proc. of the Royal Society of London A A356, 237 (1977).
* (5) R. D. Carlitz and R. S. Willey, Phys. Rev. D 36, 2327 (1987).
* (6) R. Zoltan, Les Houches Lecture Notes, (2002), [arXiv:0210435].
* (7) N. Birrell and P. Davies, _Quantum Fields in Curved Space_ , Cambridge Monographs on Mathematical Physics (Cambridge Univ. Press, Cambridge, UK, 1984).
* (8) A. Fabbri and J. Navarro-Salas, _Modeling Black Hole Evaporation_ (Imp. Coll. Press, 2005).
* (9) M. R. R. Good, P. R. Anderson, and C. R. Evans, Phys. Rev. D 88, 025023 (2013).
* (10) C. M. Wilson, G. Johansson, A. Pourkabirian, M. Simoen, J. R. Johansson, T. Duty, F. Nori, and P. Delsing, Nature 479, 376–379 (2011).
* (11) V. Dodonov, Physics 2, 67 (2020).
* (12) P. Chen and D.-h. Yeom, Phys. Rev. D 96, 025016 (2017).
* (13) P. Valenta et al., Physics of Plasmas 27, 032109 (2020).
* (14) P. Chen and G. Mourou, Phys. Rev. Lett. 118, 045001 (2017).
* (15) M. R. R. Good, A. Zhakenuly, and E. V. Linder, Phys. Rev. D 102 045020 (2020a).
* (16) J. D. L. Silva, A. N. Braga, A. L. C. Rego, and D. T. Alves (2020), [arXiv:2009.07421].
* (17) M. R. R. Good, Phys. Rev. D 101, 104050 (2020).
* (18) S. A. Fulling and J. H. Wilson, Physica Scripta 94, 014004 (2018).
* (19) J. H. Wilson, F. Sorge, and S. A. Fulling, Phys. Rev. D 101, 065007 (2020).
* (20) A. Mitra, P. Chattopadhyay, G. Paul, and V. Zarikas, Entropy 22, 1387 (2020).
* (21) F. Sorge and J. H. Wilson, Phys. Rev. D 100, 105007 (2019).
* (22) S. Liberati, T. Rothman, and S. Sonego, Phys. Rev. D 62, 024005 (2000).
* (23) J. Foo and M. R. R. Good (2020), [arXiv:2006.09681].
* (24) M. R. R. Good, K. Yelshibekov, and Y. C. Ong, JHEP 03, 013 (2017).
* (25) G. Cozzella, S. A. Fulling, A. G. Landulfo, and G. E. Matsas (2020), [arXiv:2009.13246].
* (26) A. G. Landulfo, S. A. Fulling, and G. E. Matsas, Phys. Rev. D 100, 045020 (2019).
* (27) M. R. Good Y. C. Ong, A. Myrzakul, and K. Yelshibekov, Gen. Rel. Grav. 51, 92 (2019).
* (28) M. R. Good, Universe 4, 122 (2018).
* (29) A. Myrzakul and M. R. Good, in _15th Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories_ (2018), [arXiv:1807.10627].
* (30) M. R. R. Good and Y. C. Ong, J. of High Energ. Phys. 1507, 145 (2015).
* (31) M. R. R. Good, _Reflecting at the Speed of Light_ Memorial Volume for Kerson Huang (World Scientific, Singapore, 2017). [arXiv:1612.02459].
* (32) W. R. Walker and P. C. W. Davies, J. of Phys. A 15, L477 (1982).
* (33) M. R. R. Good, E. V. Linder, and F. Wilczek, Phys. Rev. D 101, 025012 (2020b).
* (34) M. R. R. Good, E. V. Linder, and F. Wilczek, Mod. Phys. Lett. A 35, 2040006 (2020c).
* (35) M. R. R. Good and E. V. Linder, Phys. Rev. D 96, 125010 (2017).
* (36) M. R. R. Good and E. V. Linder, Phys. Rev. D 97, 065006 (2018).
* (37) M. R. R. Good and E. V. Linder, Phys. Rev. D 99, 025009 (2019).
* (38) J. Sonner and A. G. Green, Phys. Rev. Lett. 109, 091601 (2012).
* (39) D. An, K. A. Meissner, P. Nurowski and R. Penrose, Mon. Not. Roy. Astron. Soc. 495, 3403-3408, (2020).
# Localization properties in Lieb lattices and their extensions
Jie Liu, Xiaoyu Mao, Jianxin Zhong, Rudolf A. Römer
School of Physics and Optoelectronics, Xiangtan University, Xiangtan 411105, China
Department of Physics, University of Warwick, Coventry, CV4 7AL, United Kingdom
###### Abstract
We study the localization properties of generalized, two- and three-
dimensional Lieb lattices, $\mathcal{L}_{2}(n)$ and $\mathcal{L}_{3}(n)$,
$n=1,2,3$ and $4$, at energies corresponding to flat and dispersive bands
using the transfer matrix method (TMM) and finite size scaling (FSS). We find
that the scaling properties of the flat bands are different from scaling in
dispersive bands for all $\mathcal{L}_{d}(n)$. For the $d=3$ dimensional case,
states are extended for disorders $W$ down to $W=0.01t$ at the flat bands,
indicating that the disorder can lift the degeneracy of the flat bands
quickly. The phase diagram with periodic boundary condition for
$\mathcal{L}_{3}(1)$ looks similar to the one for hard boundaries [1]. We
present the critical disorder $W_{c}$ at energy $E=0$ and find a decreasing
$W_{c}$ for increasing $n$ for $\mathcal{L}_{3}(n)$, up to $n=3$. Last, we
present a table of FSS parameters including so-called irrelevant variables;
the results indicate that the accuracy is too low to determine these reliably.
###### keywords:
Localization , Flat band , Phase diagram , Finite size scaling
Journal: Annals of Physics Special Issue: Localisation 2020
## 1 Introduction
Flat band systems, in which the absence of dispersion in the band structure
leads to a macroscopic degeneracy, have an effectively quenched kinetic energy
at the flat band energy. Hence other terms in the Hamiltonian, such as
many-body interactions, can become prominent. This mechanism provides a
convenient construction of various platforms for studying many-body physics,
such as the fractional quantum Hall effect [2, 3, 4], spin liquids [5, 6],
ferromagnetism [7, 8], and superconductivity [9, 10, 11]. In recent years,
artificial lattices [12], for instance in photonic [13, 14, 15, 16, 17, 18]
and cold atom systems [19, 20, 21], have also made it possible to probe such
many-body problems experimentally.
As is well known, states in a flat band are localized [14] because of the high
degeneracy. Hence disorder, which one should expect to lift this degeneracy,
might also, at least initially, destroy the localization. What happens once
disorder is included in the Hamiltonians describing these localized flat bands
is therefore an interesting question.
Flat band systems can be constructed in many models [2, 3, 4, 8, 22]. The Lieb
lattice [10, 23, 24] is one of the simplest and best-known two-dimensional
flat band systems. Indeed, the $CuO_{2}$ plane of cuprate superconductors is
itself a Lieb lattice, namely $\mathcal{L}_{2}(1)$ in our notation. It contains
three atoms per unit cell as shown in Fig. 1(a). In the figure, we also
introduce its extensions $\mathcal{L}_{2}(n)$, $n=2,3$ and $4$, shown in
panels (b)-(d), respectively. The three-dimensional Lieb lattices and their
extensions are also shown in Fig. 1(a)-(d). The number of flat bands is
related to the number of central red atoms between two nearest blue atoms. In
short, for $\mathcal{L}_{d}(n)$, the number of flat bands is $n$, each with
$(d-1)$-fold degeneracy. The central energy $E=0$ is part of a flat band for
$\mathcal{L}_{2}(1)$, $\mathcal{L}_{2}(3)$, $\mathcal{L}_{3}(1)$ and
$\mathcal{L}_{3}(3)$, while it lies in a dispersive band for
$\mathcal{L}_{2}(2)$, $\mathcal{L}_{2}(4)$, $\mathcal{L}_{3}(2)$ and
$\mathcal{L}_{3}(4)$ [25, 1].
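The flat band of the basic Lieb lattice $\mathcal{L}_{2}(1)$ can be made explicit with a few lines of code. The sketch below uses one common convention for the nearest-neighbour Bloch Hamiltonian (our labelling of the corner site A and the two edge-centre sites B, C is an assumption for illustration, not the paper's own construction); the vector $(0, b, -a)$ is annihilated by $H(k)$ at every $k$, which is precisely the band pinned at $E=0$:

```python
import cmath
import random

def lieb_flat_band_residual(kx, ky, t=1.0):
    """For L_2(1), a common Bloch Hamiltonian couples the corner site A to the
    two edge-centre sites B, C via a = t(1 + e^{i kx}) and b = t(1 + e^{i ky}):
        H(k) = [[0, a, b], [conj(a), 0, 0], [conj(b), 0, 0]].
    The vector psi = (0, b, -a) satisfies H psi = 0, i.e. E = 0 for all k."""
    a = t * (1 + cmath.exp(1j * kx))
    b = t * (1 + cmath.exp(1j * ky))
    psi = (0.0, b, -a)
    # H psi, computed row by row
    Hpsi = (a * psi[1] + b * psi[2],
            a.conjugate() * psi[0],
            b.conjugate() * psi[0])
    return max(abs(c) for c in Hpsi)

random.seed(0)
residual = max(lieb_flat_band_residual(random.uniform(-cmath.pi, cmath.pi),
                                       random.uniform(-cmath.pi, cmath.pi))
               for _ in range(100))
print(residual)  # vanishes: a flat band at E = 0 across the Brillouin zone
```

The same zero-mode structure underlies the compactly localized states discussed for the $\mathcal{L}_{3}(n)$ lattices below.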
This paper is organized as follows. In section 2 and section 3 we will discuss
the two-dimensional and three-dimensional Lieb lattices and their extensions,
respectively. The conclusion is given in section 4.
Figure 1: Schematic representation of Lieb and extended Lieb lattices
$\mathcal{L}_{d}(n)$. The lightly coloured spheres highlight the situation in
2D while together with the fully coloured spheres they represent the 3D
lattices. Blue spheres denote undecimated lattice sites while the red spheres
will be decimated in the TMM formulation. The dark lines are guides to the eye
only. The labels (a), (b), (c) and (d) denote
$\mathcal{L}_{2}(1)$/$\mathcal{L}_{3}(1)$,
$\mathcal{L}_{2}(2)$/$\mathcal{L}_{3}(2)$,
$\mathcal{L}_{2}(3)$/$\mathcal{L}_{3}(3)$ and
$\mathcal{L}_{2}(4)$/$\mathcal{L}_{3}(4)$, respectively.
## 2 The two-dimensional Lieb lattice and its extensions
### 2.1 Previous results for disordered $\mathcal{L}_{2}(n)$ lattices
In our previous paper [25], we studied the localization properties of
$\mathcal{L}_{2}(n)$ in detail. Using direct diagonalization for small system
sizes, we computed the density of states (DOS). In the presence of disorder,
the interplay between flat bands and dispersive bands is prominent for all
$\mathcal{L}_{2}(n)$, $n=1,2,3$ and $4$. The disorder quickly lifts the
degeneracy of the flat bands and makes the states merge with the neighboring
dispersive bands. When $W\gtrsim 2$, the flat band DOS loses its
distinguishing peaks and becomes part of a large bulk DOS.
Next, we used a renormalized transfer-matrix method (TMM) to compute the
reduced localization lengths $\Lambda_{M}(E,W)=\lambda(E,W)/M$, where $M$
corresponds to the width of the quasi-1D transfer-matrix strip. For all
$\mathcal{L}_{d}(n)$, we find that for $W\gtrsim t$ all states are localized.
The localization lengths for states at flat band energies are about one order
of magnitude smaller than for states from dispersive bands. We employed
traditional one-parameter finite-size scaling (FSS) methods to estimate a
scaling parameter $\xi(E,W)$ [26]. After scaling, the $\Lambda_{M}(E,W)$ data
collapse onto a single scaling branch, corresponding to fully localized
behaviour. We then fitted $\xi$ as a function of disorder $W$ using three
fitting forms: a power-law form $\xi(W)\propto W^{-2}$ [27], a non-universal
form $\xi(W)=aW^{-\alpha}\exp(\beta W^{-\gamma})$ and a constrained form
$\xi(W)=aW^{-2}\exp(\beta W^{-1})$. For $\mathcal{L}_{2}(1)$ and
$\mathcal{L}_{2}(3)$ at the flat band energy $E=0$, we find that the usual
power-law form $\xi(W)\propto W^{-2}$ for 1D localization fits well for
disorders around $1<W<2$. However, for $\mathcal{L}_{2}(2)$ and
$\mathcal{L}_{2}(4)$ at $E=0$, which is an energy in a dispersive band for
these lattices, none of the fits gives a convincing result.
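The TMM principle referenced above can be illustrated in its simplest setting, a 1D Anderson chain (this is only a stand-in sketch, not the renormalized Lieb-lattice TMM of Refs. [25, 1]): iterate $\psi_{n+1}=(E-\varepsilon_n)\psi_n-\psi_{n-1}$ with $\varepsilon_n$ uniform in $[-W/2,W/2]$ and read off the localization length $\lambda$ from the growth rate of the transfer-matrix product:

```python
import math
import random

def loc_length_1d(E, W, steps=200_000, seed=1):
    """Lyapunov exponent gamma of the 1D Anderson chain from the transfer map
    (psi_{n+1}, psi_n) = ((E - eps_n) psi_n - psi_{n-1}, psi_n);
    the localization length is lambda = 1/gamma."""
    rng = random.Random(seed)
    psi_prev, psi = 1.0, 1.0
    log_growth = 0.0
    for _ in range(steps):
        eps = rng.uniform(-W / 2, W / 2)
        psi_prev, psi = psi, (E - eps) * psi - psi_prev
        norm = math.hypot(psi, psi_prev)  # renormalize to avoid overflow
        psi /= norm
        psi_prev /= norm
        log_growth += math.log(norm)
    return steps / log_growth  # lambda = 1/gamma

# At weak disorder, lambda ~ 100/W^2 near the band centre, so lambda should
# grow roughly as W^{-2} as W decreases:
print(loc_length_1d(0.0, 1.0), loc_length_1d(0.0, 2.0))
```

The same renormalize-and-accumulate loop, applied to quasi-1D strips of width $M$, yields the $\Lambda_M=\lambda/M$ data analysed by FSS in the text.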
### 2.2 Scaling function $\Lambda_{M}$ vs reduced correlation length
$\xi_{M}$ for $\mathcal{L}_{2}(n)$
The scaled localization lengths $\Lambda_{M}(0,W)$ as a function of scaled
correlation length $\xi/M$ for $\mathcal{L}_{2}(n)$, $n=1,2,3$ and $4$ are
shown in Fig. 2(a) at energy $E=0$. This corresponds to flat bands for
$\mathcal{L}_{2}(1)$ and $\mathcal{L}_{2}(3)$ and dispersive band for
$\mathcal{L}_{2}(2)$ and $\mathcal{L}_{2}(4)$. The $\Lambda_{M}(0,W)$ data all
show the behaviour expected for localized states, scaling as
$\Lambda_{M}(0,W)\propto\xi(0,W)/M$ for large system sizes and large
disorders. In this regime, the behaviour of states in the flat bands and in
the dispersive bands is similar, as also shown in the inset of the figure.
Figure 2: (a) Scaled localization length $\Lambda_{M}(0,W)$ versus $\xi/M$ at
energy $E=0$ for $\mathcal{L}_{2}(1)$ (red $\bigcirc$), $\mathcal{L}_{2}(2)$
(blue $\square$), $\mathcal{L}_{2}(3)$ (green $\triangledown$) and
$\mathcal{L}_{2}(4)$ (purple $\diamondsuit$). For clarity, lines show all data
points while symbols denote only about $15\%$ of all data. Inset: detail at
small reduced correlation lengths. (b) Scaling parameters $\xi(0,W)$ for the
same Lieb lattices as in (a). The dashed lines represent the power law fit
functions $ax^{b}$. Error bars are within symbol size in both panels.
### 2.3 Scaling parameter $\xi$ vs disorder $W$ for $\mathcal{L}_{2}(n)$
The disorder dependence of the scaling parameter $\xi$ for small disorders
$t\leq W\leq 2t$ is shown in Fig. 2(b), computed from the $\Lambda_{M}(E=0,W)$
data of Fig. 2(a). We see that the behaviour of $\xi$ for $\mathcal{L}_{2}(1)$
is comparable to $\mathcal{L}_{2}(3)$; both are well-described by a power law
$ax^{b}$ with exponent approximately $b\sim-2$, similar to localization
properties of a standard 1D Anderson model [27]. This might suggest that the
localization behaviour of these flat band states, at least for weak disorder,
is similar to the 1D behaviour. On the other hand, for the dispersive states of
$\mathcal{L}_{2}(2)$ and $\mathcal{L}_{2}(4)$ at $E=0$, we find that the
values of $\xi$ are orders of magnitude larger than for $\mathcal{L}_{2}(1)$
and $\mathcal{L}_{2}(3)$. The simple power-law also does not fit anymore and
we rather see the more standard behaviour of a 2D Anderson model [27] with a
quickly diverging $\xi$ when $W\rightarrow 0$. Nevertheless, for both flat
band and dispersive band energies, the fits are not very robust and have
rather small $p$ values of $<10^{-10}$. This shows that the true form of the
behaviour of $\xi(W\rightarrow 0)$ is yet to be determined.
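The power-law comparison in this section amounts to a straight-line fit of $\log\xi$ versus $\log W$. A minimal stdlib sketch, with synthetic data standing in for the measured $\xi(W)$ (the actual data live in Fig. 2(b), so the amplitude and noise level below are assumptions):

```python
import math
import random

def fit_power_law(Ws, xis):
    """Least-squares fit of xi = a * W^b on log-log axes:
    log xi = log a + b log W."""
    xs = [math.log(W) for W in Ws]
    ys = [math.log(x) for x in xis]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

# Synthetic xi(W) ~ 3 W^{-2} with 5% multiplicative noise, W in [1, 2]
rng = random.Random(42)
Ws = [1.0 + 0.1 * i for i in range(11)]
xis = [3.0 * W ** -2 * math.exp(rng.gauss(0.0, 0.05)) for W in Ws]
a, b = fit_power_law(Ws, xis)
print(a, b)  # the fitted exponent b should come out close to -2
```

Fitting on log-log axes weights all decades of $\xi$ equally, which is why a diverging $\xi(W\rightarrow 0)$, as for the dispersive bands, shows up as a clear departure from a straight line.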
### 2.4 Density of states without Gaussian broadening for
$\mathcal{L}_{2}(n)$
Figure 3: Normalized bar chart histograms of the $(E,W)$-dependence of the
density of states (DOS) for (a) $\mathcal{L}_{2}(1)$ and $\mathcal{L}_{3}(1)$,
(b) $\mathcal{L}_{2}(2)$ and $\mathcal{L}_{3}(2)$, (c) $\mathcal{L}_{2}(3)$
and $\mathcal{L}_{3}(3)$, and (d) $\mathcal{L}_{2}(4)$ and
$\mathcal{L}_{3}(4)$. The colors denote different DOS values ranging from zero
(deep purple) to maximal (white). Bin widths in $(E,W)$ directions have been
chosen for representational clarity.
The results for the disorder-averaged density of states (DOS), calculated with
direct diagonalization, are shown in Fig. 3. The system sizes are
$M^{2}=13^{2}$, $10^{2}$, $9^{2}$, $8^{2}$ for $\mathcal{L}_{2}(n)$, $n=1,2,3$
and $4$, respectively. The disorder ranges from $W=0$ to $W=5.2$ in steps of
$0.05$ using $300$ independent random samples. We can see that the prominent
flat band peaks close to $W=0$ have largely vanished once the disorder reaches
$W=2$ for all $\mathcal{L}_{2}(n)$, $n=1,2,3$ and $4$.
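A disorder-averaged DOS histogram of the kind shown in Fig. 3 can be sketched in stdlib Python. As a stand-in for the $\mathcal{L}_{2}(n)$ diagonalization (which is not reproduced here), the example uses a 1D Anderson chain, whose tridiagonal Hamiltonian lets us count eigenvalues below each bin edge via the Sturm sequence instead of full diagonalization:

```python
import random

def sturm_count(diag, off, x):
    """Number of eigenvalues below x of a symmetric tridiagonal matrix
    (diagonal `diag`, off-diagonal `off`), via the Sturm/LDL^T sequence."""
    count, d = 0, 1.0
    for i, a in enumerate(diag):
        b2 = off[i - 1] ** 2 if i > 0 else 0.0
        d = (a - x) - b2 / d
        if d == 0.0:
            d = 1e-300  # avoid division by zero at an exact pivot
        if d < 0:
            count += 1
    return count

def averaged_dos(N=200, W=2.0, samples=50, bins=40, Emax=4.0, seed=7):
    """Disorder-averaged, normalized DOS histogram of a 1D Anderson chain."""
    rng = random.Random(seed)
    edges = [-Emax + 2 * Emax * k / bins for k in range(bins + 1)]
    hist = [0.0] * bins
    for _ in range(samples):
        diag = [rng.uniform(-W / 2, W / 2) for _ in range(N)]
        off = [1.0] * (N - 1)  # hopping t = 1
        counts = [sturm_count(diag, off, x) for x in edges]
        for k in range(bins):
            hist[k] += (counts[k + 1] - counts[k]) / (N * samples)
    return hist

dos = averaged_dos()
print(sum(dos))  # total weight 1: all eigenvalues fall inside [-Emax, Emax]
```

By Gershgorin's theorem the spectrum lies within $|E|\leq W/2+2t$, so the chosen window captures all states and the histogram integrates to one.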
## 3 Three-dimensional Lieb lattice and its extensions
### 3.1 Previous results for disordered $\mathcal{L}_{3}(n)$ lattices
In Ref. [1], we investigated the DOS, the localization properties and the
phase diagrams for the 3D Lieb lattices $\mathcal{L}_{3}(n)$, $n=1,2,3$ and
$4$ as shown in Fig. 1. Obviously, the main difference to the 2D case is the
existence of an Anderson metal-insulator transition (MIT) in 3D Lieb lattices
[1]. Details of the TMM construction for $\mathcal{L}_{3}(n)$ can also be
found in Ref. [1] along with finite-size scaling results for the universal
critical exponent $\nu$ of the localization lengths. We found $\nu$ to be in
good agreement with the currently accepted value of $\nu=1.590(1.579,1.602)$
[28, 29] for the Anderson transition. In the following, we shall elaborate on
the stability of the phase diagrams to a change in boundary conditions,
highlight the various positions of the transitions in the phase diagrams,
comment on the possibility of FSS with irrelevant scaling parameters and also
provide the DOS without Gaussian-broadening.
### 3.2 Phase diagram with periodic boundary condition for
$\mathcal{L}_{3}(1)$
The phase diagrams given in Ref. [1] have been computed for the
$\mathcal{L}_{3}(n)$ lattices with hard wall boundaries. In Fig. 4(a) we now
show a phase diagram for $\mathcal{L}_{3}(1)$ with periodic boundary
condition. The phases have been determined by the scaling behaviour of the
$\Lambda_{M}(E,W)$ for system sizes $M=4$, $M=6$ and $M=8$ with error $\leq
0.1\%$ [30]. Comparing this to the results obtained with hard boundary
conditions, we find that, as expected, the phase diagram looks very similar,
although the extended region is a little wider along the $E$ axis. As for the
hard wall case, we can identify a re-entrant region around disorder $W=4$ and
a shoulder at $E\sim 3$ and $W\sim 6$. Hence the change in boundary conditions
does not alter the phase diagram appreciably, even with the small system sizes
and modest disorder averages used here and in Ref. [1].
Figure 4: (a) Phase diagram for $\mathcal{L}_{3}(1)$ in case of periodic
boundaries. The three solid and colored lines represent the approximate
location of the phase boundary estimated from small $M$, i.e. the blue
line/$\bigcirc$ comes from widths $M=4$ and $M=6$, the red line/$\times$ from
$M=4$ and $M=8$, and the green line/$+$ from $M=6$ and $M=8$. The shaded area
in the center contains extended states while states outside the phase boundary
are localized. The dashed lines on both sides are guides-to-the-eye for the
expected continuation of the phase boundary for $W<2$. The red short vertical
line at $E=0$ represents the position of the doubly-degenerate flat band. The
diamonds ($\blacklozenge$) denote the band edges for $W=0$, i.e.
$E_{\mathrm{min}}=-2\sqrt{3}$ and $E_{\mathrm{max}}=2\sqrt{3}$. The dotted
lines are the theoretical band edges $\pm\left(|E_{\mathrm{min}}|+W/2\right)$
and the forbidden areas below those band edges have been shaded. (b) Scaling
parameters $\xi$ versus disorder $W$ for $\mathcal{L}_{3}(1)$(red $\bigcirc$),
$\mathcal{L}_{3}(2)$ (blue $\square$) and $\mathcal{L}_{3}(3)$ (green
$\triangledown$) at energy $E=0$. The expansion parameters $n_{r}$, $n_{i}$,
$m_{r}$ and $m_{i}$ are the same as the highlighted line in Table 1.
### 3.3 Localization and extended transition with 0.01 $\leq$ $W$ $\leq$ 2.0
Figure 5: (a) Small $W$ behaviour of $\Lambda_{M}$ for $\mathcal{L}_{3}(1)$
with (a) energy $E=0.05$, (b) $E=0.1$, (c) $E=0.15$, and (d) for
$\mathcal{L}_{3}(2)$ at $E=1.05$ with disorder down to $0.01$ in steps of
$0.01$ and with error less than $1.0\%$. System sizes $M$ are $4$ (black
$\oplus$), $6$ (red $\diamond$), $8$ (green $\square$), $10$ (dark-blue
$\times$), $12$ (brown $\triangleleft$), $14$ (purple $\triangleright$). Error
bars are denoted with a solid line. Insets: increased disorder range up to
$W=2$ for the corresponding cases in the $4$ main panels.
For $W<1$, it is well known that the convergence of the TMM is very slow.
Hence results with appropriately small errors are hard to obtain. Usually,
this is not a problem since, e.g., in the 3D Anderson model, the limit as
$W\rightarrow 0$ belongs trivially to the extended phase. However, for the
$\mathcal{L}_{3}(n)$ lattices, we know that at the flat band energies, even at
$W=0$, we expect compactly localized states [31, 32]. Hence it is interesting
to see whether the localization properties at $W<1$ for flat band energies
indicate a possible “inverse” Anderson transition from extended states at
$W\sim 1$ to localized states at small, but finite $W>0$.
In Ref. [1], we had shown that at the flat band energy $E=0$ for
$\mathcal{L}_{3}(1)$ and at $E=1$ for $\mathcal{L}_{3}(2)$, the $\Lambda_{M}$
increases with increasing $M$, indicating extended behaviour, down to
disorders as small as $W=0.01$. In Fig. 5(a-c), we now augment that result by
studying energies close by. For $\mathcal{L}_{3}(1)$ and $E=0.05$ we initially
find localized behaviour, e.g., $\Lambda_{M}$ decreasing with increasing $M$
up to $M=10$, but then reversing to extended behaviour for larger $M$. For
$E=0.1$ the reversal to the extended behaviour already starts at $M=8$ while
for $E=0.15$, only the extended behaviour remains. In Fig. 5(d), we see that
for $\mathcal{L}_{3}(2)$ at $E=1.05$, the $\Lambda_{M}$ decreases with
increasing $M$. This localized behaviour should vanish for larger $M$ values,
but it is at present beyond our computational capabilities.
We conclude that the presence of the compactly localized states at the flat
band energies certainly has an effect at small disorder. For larger disorder
values, however, the broadening of the flat bands and the mixing with the
dispersive bands becomes dominant, such that the character of the states is
extended down to disorders $W=0.01$. This is usually already the case at
$W\sim 0.1$.
### 3.4 Divergence of the scaling parameter $\xi(W)$
The behaviour of $\xi(W)$ for $\mathcal{L}_{3}(1)$, $\mathcal{L}_{3}(2)$ and
$\mathcal{L}_{3}(3)$ is given in Fig. 4(b). We can clearly see how the critical
disorder $W_{c}$ decreases from $8.59$ for $\mathcal{L}_{3}(1)$ to $5.96$ for
$\mathcal{L}_{3}(2)$ and finally to $4.79$ for $\mathcal{L}_{3}(3)$. This
suggests that a larger $n$ in $\mathcal{L}_{3}(n)$, i.e. a larger number of
additional (red, cp. Fig. 1) atoms, leads to stronger localization and hence
an MIT already for smaller values of $W_{c}$. It could be an interesting study
to estimate $W_{c}(n)$, particularly the limiting behaviour when
$n\rightarrow\infty$.
### 3.5 Scaling with irrelevant variables $n_{i}$, $m_{i}$
For high-precision estimates of critical properties, including $\nu$, it is by
now state of the art to include irrelevant scaling contributions, i.e. scaling
as $M^{-y}$ with $y>0$, in the FSS analysis. However, such FSS methods also
require large $M$ values to reliably model the irrelevant scaling. Due to the
complexity of the $\mathcal{L}_{3}(n)$ systems, only values of $M\leq 20$ have
been computed in Ref. [1]. For such sizes, adding irrelevant scaling variables
is usually not a net benefit. In Table 1, we show the results for FSS with and
without irrelevant scaling contributions. We note that although acceptable $p$
values can be obtained for the fits with the irrelevant scaling exponent $y$
included, in nearly all cases this results either in increased error estimates
for the relevant exponent $\nu$, or in estimates for $y$ with very large
errors or very large values. Except for one case, the final estimate for the
physical quantity $\nu$ hardly changes. Hence we conclude that for the
available $\Lambda_{M}$ data, the inclusion of the irrelevant scaling
parameter $y$ does not necessarily improve the accuracy of the estimates for
$\nu$. This confirms the choice made in Ref. [1].
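For reference, the fit function behind the expansion orders $(n_r, n_i, m_r, m_i)$ in Table 1 follows the standard FSS form with one relevant and one irrelevant scaling field. We quote the commonly used parametrization (in the style of Slevin and Ohtsuki); the precise conventions of Ref. [1] may differ in detail:

```latex
% Reduced localization length in terms of relevant (phi_r) and
% irrelevant (phi_i) scaling fields, with irrelevant exponent y > 0:
\Lambda_M = F\bigl(\phi_r M^{1/\nu},\, \phi_i M^{-y}\bigr)
          = \sum_{j=0}^{n_i} \phi_i^{\,j}\, M^{-j y}\,
            F_j\bigl(\phi_r M^{1/\nu}\bigr),
\qquad
F_j(x) = \sum_{k=0}^{n_r} a_{jk}\, x^{k},
% with the fields expanded in w = (W - W_c)/W_c (or (E - E_c)/E_c):
\phi_r(w) = \sum_{m=1}^{m_r} b_m\, w^{m},
\qquad
\phi_i(w) = \sum_{m=0}^{m_i} c_m\, w^{m}.
```

Setting $n_i=m_i=0$ (and hence $y=0$) recovers the one-parameter scaling used in the first row of each block of Table 1.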
$\mathcal{L}_{3}(1)$
---
$\Delta M$ | $E$ | $\delta W$ | $n_{r}$ | $n_{i}$ | $m_{r}$ | $m_{i}$ | $W_{c}$ | CI($W_{c}$) | $\nu$ | CI($\nu$) | $y$ | CI($y$) | $p$
16-20 | 0 | 8.25-8.9 | 3 | 0 | 1 | 0 | 8.59 | $(58,61)$ | 1.6 | $(4,7)$ | 0 | 0 | $0.15$
16-20 | 0 | 8.25-8.9 | 2 | 1 | 1 | 1 | ${{8.71}}$ | $(57,84)$ | ${{1.3}}$ | $(0.8,1.8)$ | 4 | $(-2,10)$ | $0.86$
14-20 | 1 | 8.0-8.8 | 3 | 0 | 1 | 0 | 8.44 | $(42,45)$ | 1.6 | $(5,7)$ | 0 | 0 | $0.18$
14-20 | 1 | 8.0-8.8 | 3 | 2 | 1 | 1 | $8.48$ | $(45,50)$ | $1.8$ | $(6,9)$ | $6.9$ | $(6.6,7.1)$ | $0.77$
$\Delta M$ | $W$ | $\delta E$ | $n_{r}$ | $n_{i}$ | $m_{r}$ | $m_{i}$ | $E_{c}$ | CI($E_{c}$) | $\nu$ | CI($\nu$) | $y$ | CI($y$) | $p$
16-20 | 3 | 3.725-3.785 | 2 | 0 | 1 | 0 | 3.75 | $(74,75)$ | 1.7 | $(6,9)$ | 0 | 0 | $0.88$
16-20 | 3 | 3.725-3.785 | 3 | 2 | 1 | 2 | $3.75$ | $(74,75)$ | $1.5$ | $(0.6,2.5)$ | $2$ | $(-3,8)$ | $0.7$
16-20 | 6 | 3.04-3.11 | 1 | 0 | 1 | 0 | 3.08 | $(07,09)$ | 1.5 | $(1.0,2.1)$ | 0 | 0 | $0.14$
16-20 | 6 | 3.04-3.11 | 1 | 1 | 2 | 1 | $3.08$ | $(06,09)$ | $1.5$ | $(0.7,2.4)$ | $47$ | $(44,50)$ | $0.13$
$\mathcal{L}_{3}(2)$
$\Delta M$ | $E$ | $\delta W$ | $n_{r}$ | $n_{i}$ | $m_{r}$ | $m_{i}$ | $W_{c}$ | CI($W_{c}$) | $\nu$ | CI($\nu$) | $y$ | CI($y$) | $p$
12,14,18 | 0 | 5.85-6.05 | 2 | 0 | 2 | 0 | 5.96 | $(95,97)$ | 1.8 | $(1.5,2.0)$ | 0 | 0 | $0.08$
12,14,18 | 0 | 5.85-6.05 | 2 | 1 | 1 | 4 | $5.97$ | $(96,98)$ | $1.7$ | $(1.3,2.1)$ | $9$ | $(2,16)$ | $0.89$
$\Delta M$ | $W$ | $\delta E$ | $n_{r}$ | $n_{i}$ | $m_{r}$ | $m_{i}$ | $E_{c}$ | CI($E_{c}$) | $\nu$ | CI($\nu$) | $y$ | CI($y$) | $p$
10-14 | 4 | 1.6-1.8 | 2 | 0 | 1 | 0 | 1.70 | $(70,71)$ | 1.6 | $(4,7)$ | 0 | 0 | $0.18$
10-14 | 4 | 1.6-1.8 | 1 | 1 | 2 | 1 | $1.72$ | $(67,78)$ | $1.6$ | $(1.1,2.1)$ | $6$ | $(-18,31)$ | $0.38$
$\mathcal{L}_{3}(3)$
$\Delta M$ | $E$ | $\delta W$ | $n_{r}$ | $n_{i}$ | $m_{r}$ | $m_{i}$ | $W_{c}$ | CI($W_{c}$) | $\nu$ | CI($\nu$) | $y$ | CI($y$) | $p$
12-18 | 0 | 4.7–4.875 | 2 | 0 | 1 | 0 | 4.79 | $(78,80)$ | 1.6 | $(4,8)$ | 0 | 0 | $0.43$
12-18 | 0 | 4.7–4.875 | 2 | 1 | 1 | 2 | $4.79$ | $(78,80)$ | $1.6$ | $(4,8)$ | $8284$ | $(0,1)$ | $0.11$
Table 1: Critical parameters at the MIT for $\mathcal{L}_{3}(n)$, $n=1,2$ and
$3$. The columns denote the system width $M$, fixed $E$ (or $W$), and the
range of $W$ (or $E$). The expansion orders $n_{r}$, $n_{i}$, $m_{r}$, $m_{i}$
are listed, as well as the obtained critical disorders $W_{c}$ (or energies
$E_{c}$), their 95$\%$ confidence intervals (CI), the critical exponent $\nu$,
its CI, the irrelevant parameter $y$, its CI, and the goodness-of-fit
probability $p$. The confidence intervals are given to one significant
decimal; for instance, $1.6(4,8)$ indicates that the CI is $(1.4,1.8)$.
### 3.6 Density of states without Gaussian broadening for
$\mathcal{L}_{3}(n)$
The results for the DOS, calculated with exact diagonalization and without
applied Gaussian smoothing, are shown in Fig. 3. The system sizes are
$M^{3}=5^{3}$, $5^{3}$, $4^{3}$, $4^{3}$ for $\mathcal{L}_{3}(n)$, $n=1,2,3$
and $4$, respectively. The disorder ranges all from $W=0$ to $W=5.2$ in steps
of $0.05$, with $300$ samples for $n=1,2,3$ but only $100$ samples for
$\mathcal{L}_{3}(4)$ because of computing time limits. Again, the results are
very similar to the Gaussian-broadened DOS shown in Ref. [1].
## 4 Conclusions
We have studied the localization properties of the 2D and 3D extended Lieb
lattices. Clearly, the Lieb lattices exhibit stronger localization than the
more standard square and cubic Anderson lattices. This can be understood by
noting that the transport along the “extra” sites, shown by red spheres in
Fig. 1, is essentially one-dimensional and hence subject to stronger
localization. Consequently, in 2D rather small $W$ values can still be studied
(most earlier TMM studies of the 2D Anderson model already stop around $W\sim
2$, cp. Fig. 3 of Ref. [33]). Details of the resulting FSS curves for small
$W$ are given in Fig. 2. In 3D, we similarly see that $W_{c}(n)$ decreases as
a function of $n$. Results for particularly small $W$ are shown in Fig. 5. Due
to the numerical complexity of the $\mathcal{L}_{d}(n)$ systems, scaling is
more challenging than in the Anderson models and only relatively small $M$
values can be reached. Table 1 shows that for the available data, there is no
need to include irrelevant scaling variables — within the accuracy of the
calculation, all estimates of the critical exponent agree with the value found
for the Anderson universality class [28, 29].
## References
* [1] J. Liu, X. Mao, J. Zhong, R. A. Römer, Localization, phases, and transitions in three-dimensional extended Lieb lattices, Physical Review B 102 (17) (2020) 174207. doi:10.1103/PhysRevB.102.174207.
URL https://link.aps.org/doi/10.1103/PhysRevB.102.174207
* [2] E. Tang, J.-W. Mei, X.-G. Wen, High-Temperature Fractional Quantum Hall States, Physical Review Letters 106 (23) (2011) 236802\. doi:10.1103/PhysRevLett.106.236802.
URL https://link.aps.org/doi/10.1103/PhysRevLett.106.236802
* [3] T. Neupert, L. Santos, C. Chamon, C. Mudry, Fractional Quantum Hall States at Zero Magnetic Field, Physical Review Letters 106 (23) (2011) 236804. doi:10.1103/PhysRevLett.106.236804.
URL https://link.aps.org/doi/10.1103/PhysRevLett.106.236804
* [4] K. Sun, Z. Gu, H. Katsura, S. Das Sarma, Nearly Flatbands with Nontrivial Topology, Physical Review Letters 106 (23) (2011) 236803\. doi:10.1103/PhysRevLett.106.236803.
URL https://link.aps.org/doi/10.1103/PhysRevLett.106.236803
* [5] L. Savary, L. Balents, Quantum spin liquids: a review, Reports on Progress in Physics 80 (1) (2017) 016502\. doi:10.1088/0034-4885/80/1/016502.
# Gravitational instability of non-isothermal filamentary molecular clouds, in
presence of external pressure
Mohammad Mahdi Motiei,1 Mohammad Hosseinirad1,2 and Shahram Abbassi1
1Department of Physics, School of Sciences, Ferdowsi University of Mashhad,
Mashhad, PO Box 91775-1436, Iran
2School of Astronomy, Institute for Research in Fundamental Sciences (IPM), PO
Box 19395-5531, Tehran, Iran
E-mail: <EMAIL_ADDRESS> (MMM); <EMAIL_ADDRESS> (MH); <EMAIL_ADDRESS> (SA)
(Accepted . Received ; in original form )
###### Abstract
Filamentary molecular clouds are omnipresent in the cold interstellar medium.
Observational evidence shows that non-isothermal equations of state describe
the properties of filaments better than the isothermal one. In this paper we
use the logatropic and the polytropic equations of state to study the
gravitational instability of pressure-confined filaments in the presence of a
uniform axial magnetic field. To fully explore the parameter space, we carry
out very large surveys of stability analysis that cover filaments with
different radii in various magnetic fields. Our results show that, for all the
equations of state, the instability of thinner filaments is more sensitive to
magnetic field variations than that of thicker ones. Moreover, for all the
equations of state, an intermediate magnetic field can entirely stabilize the
thinner filaments, whereas for the thicker ones this effect is suppressed for
magnetic fields stronger than $B\simeq 70$ $\upmu$G.
###### keywords:
instabilities – MHD – ISM: clouds – methods: numerical.
(Pubyear: 2021)
## 1 Introduction
The cold interstellar molecular gas in the Galaxy has been revealed to have
filamentary structures of parsec-scale (0.5 - 100 pc) (e.g. Schneider &
Elmegreen, 1979; Bally et al., 1987; Goldsmith et al., 2008; André, 2017),
regularly harbouring clumps and dense cores (e.g. Bergin & Tafalla, 2007;
Miettinen & Harju, 2010; Jackson et al., 2010; Wang et al., 2011; Miettinen,
2012; Wang et al., 2014; Contreras et al., 2016; Henshaw et al., 2016; Wang et
al., 2016; Feng et al., 2016; Kainulainen et al., 2017). The filamentary
molecular clouds (MCs) unveiled by the unprecedented images of the _Herschel
Space Observatory_ (Pilbratt et al., 2010) exhibit a common width of $\sim 0.1$
pc (Arzoumanian et al., 2019; see also Panopoulou et al., 2017; André, 2017;
Roy et al., 2019, for a recent debate on the existence of such a universal
width), at least in the nearby Gould belt, while they extend over a wide range
in column density. Filamentary MCs are identified both in non-star-forming
(Men’shchikov et al., 2010; Miville-Deschênes et al., 2010; Ward-Thompson et
al., 2010) and star-forming (Könyves et al., 2010; Bontemps et al., 2010)
regions, which emphasizes their importance for a better understanding of the
theory of star formation (André et al., 2014).
In the filamentary picture of star formation, large-scale turbulent flows are
assembled into a network of filaments by supersonic shocks (e.g. Klessen et
al., 1998; McKee & Ostriker, 2007; Dib et al., 2007; Padoan et al., 2014;
Pudritz & Kevlahan, 2013), possibly in combination with a magnetic field that
is most probably perpendicular to the filaments (e.g. Nakamura & Li, 2008;
Chen & Ostriker, 2014; Inutsuka et al., 2015; Federrath, 2016; Klassen et al.,
2017; Li & Klein, 2019). It is also possible that the global collapse of the
parent cloud under its self-gravity governs the formation process (e.g.
Burkert & Hartmann, 2004; Hartmann & Burkert, 2007; Vázquez-Semadeni et al.,
2007; Gómez & Vázquez-Semadeni, 2014; Wareing et al., 2016; Camacho et al.,
2016). If the filaments are gravitationally unstable, they will fragment into
cores and finally form clusters of stars (Lada & Lada, 2003), provided that
the conditions for subsequent fragmentation are met.
Filaments have been the subject of many investigations since almost the
mid-point of the twentieth century, when the groundbreaking work by
Chandrasekhar & Fermi (1953) showed that a poloidal magnetic field is able to
completely stabilize a very long uniform incompressible cylinder of gas. Ten
years later, Stodólkiewicz (1963) derived the magnetostatic equilibrium of an
isothermal gas cylinder threaded by a longitudinal magnetic field proportional
to the square root of its initial density. The physical explanation of
filamentary clouds was interesting enough to encourage other authors to carry
out more detailed theoretical investigations (see e.g. Larson, 1985; Nagasawa,
1987; Inutsuka & Miyama, 1992; Nakamura et al., 1993; Matsumoto et al., 1994;
Gehman et al., 1996a, b; Inutsuka & Miyama, 1997; Fischera & Martin, 2012a;
Freundlich et al., 2014; Hanawa & Tomisaka, 2015; Sadhukhan et al., 2016;
Hosseinirad et al., 2017; Hanawa et al., 2017, 2019). In addition, various
simulations of the cylindrical geometry have been performed for more realistic
studies in the non-linear regime (see e.g. Steinacker et al., 2016;
Gritschneder et al., 2016; Heigl et al., 2016, 2018, 2020; Ntormousi &
Hennebelle, 2019; Clarke et al., 2017; Clarke et al., 2020).
Filaments are seldom found in isolation, but rather under the external
pressure of the ambient medium (Fischera & Martin, 2012a, b). Nagasawa (1987)
performed a global linear stability analysis for an infinitely long isothermal
magnetized filament and showed that both a non-confined and a
pressure-confined filament are gravitationally unstable for a specific range
of wavelengths. More specifically, he found that a poloidal magnetic field can
increase the stability of a filament and, interestingly, entirely stabilize it
if the filament is thin enough. The linear stability analysis of
self-gravitating objects under the effect of external pressure in other
environments has also been the subject of many studies (see e.g. Miyama et al.
1987; Nagai et al. 1998; Durrive & Langer 2019, for sheet-like gas layers; and
Chou et al. 2000; Lee & Hong 2007; Kim et al. 2012, for gas disks). Recently,
Anathpindika & Di Francesco (2020) reported that the external pressure can
affect the peak central density, the column density, the morphology and the
star formation of filaments.
Based on the column density maps extracted from _Herschel_ images (Arzoumanian
et al., 2011; Juvela et al., 2012; Palmeirim et al., 2013), the radial density
profiles of the filaments in the Gould belt cannot be properly described by a
simple isothermal model (Stodólkiewicz, 1963; Ostriker, 1964); instead, they
are best fitted by softer polytropic models (i.e. with profiles that are
shallower away from the centre) with indices $\gamma_{p}<1$ (Palmeirim et al.,
2013; Toci & Galli, 2015). Moreover, for the filaments in the IC5146 region, a
modification of the simple isothermal model that supposes a very long
subcritical pressure-confined cylinder with different masses per unit length
can account for this discrepancy (Fischera & Martin 2012a; see also Heitsch
2013 for a similar but accreting model), as can a near-critical cylinder
wrapped by a helical magnetic field (Fiege & Pudritz, 2000b). In addition,
polytropic filaments with indices less than but near unity that are undergoing
gravitational collapse also have shallow density profiles (Kawachi & Hanawa,
1998; Nakamura & Umemura, 1999; Shadmehri, 2005).
In a recent paper, Hosseinirad et al. (2018, hereafter H18) carried out an
analysis similar to that of Nagasawa (1987), but for the aforementioned
polytropic equation of state (PEOS) as well as the logatropic equation of
state (LEOS) (Lizano & Shu, 1989). They used the non-ideal magnetohydrodynamic
(MHD) framework for a filament threaded by a poloidal magnetic field in the
absence of external pressure. They found that, without the effect of a
magnetic field, filaments with these two softer types of equations of state
(EOSs) are more susceptible to gravitational instability than a filament with
the isothermal EOS (IEOS). More specifically, they realized that while the
gravitational instability in a moderately magnetized filament is generally
sensitive to the type of EOS, the instability is suppressed in a strongly
magnetized one, regardless of its EOS type. Here, we aim to elucidate how a
pressure-confined filament with the LEOS or the PEOS responds to linear
perturbations, thereby combining the studies by Nagasawa (1987) and H18,
albeit in ideal MHD for simplicity. We will investigate this problem in
non-ideal MHD in a forthcoming paper.
The outline of this paper is as follows. In Section 2.1, we present the ideal
MHD equations including self-gravity. Section 2.2 introduces the
non-isothermal EOSs used in this paper. The equilibrium state and the
perturbations are described in Sections 2.3 and 2.4, respectively. Section 2.5
deals with the boundary conditions, and the computation method is given in
Section 2.6. Sections 3 and 4 contain the results and conclusions of this
investigation.
## 2 Basic equations and formulations
### 2.1 Ideal MHD equations considering self-gravity
We consider an infinitely long cylinder of gas with finite radius as the
filament. The filament is threaded by a uniform magnetic field parallel to its
long axis, $\boldsymbol{B}=(0,0,B_{z})$, which does not affect the unperturbed
structure. Our set of equations includes the equation of motion (1), the
induction equation (2), the continuity equation (3) and Poisson's equation
(4):
$\rho\frac{\partial\,\boldsymbol{u}}{\partial t}+\rho\left(\boldsymbol{u}\cdot\nabla\right)\boldsymbol{u}+\nabla p+\rho\nabla\psi-\frac{1}{4\pi}\left(\nabla\times\boldsymbol{B}\right)\times\boldsymbol{B}=0,$ (1)
$\frac{\partial\,\boldsymbol{B}}{\partial t}+\nabla\times\left(\boldsymbol{B}\times\boldsymbol{u}\right)=0,$ (2)
$\frac{\partial\,\rho}{\partial t}+\nabla\cdot\left(\rho\boldsymbol{u}\right)=0,$ (3)
$\nabla^{2}\psi=4\pi G\rho.$ (4)
In equations (1) to (4), $\rho$, $\boldsymbol{u}$, $p$, $\boldsymbol{B}$ and
$\psi$ indicate the neutral gas density, the velocity of the fluid, the gas
pressure, the magnetic field strength and the gravitational potential,
respectively.
### 2.2 Equations of state
Equations (1) to (4) must be completed by a relation between the pressure and
the density. Many studies use the IEOS (see e.g. Nagasawa 1987; Inutsuka &
Miyama 1992; Fischera & Martin 2012b; Heigl et al. 2016; Hosseinirad et al.
2017). Here we consider three different types of non-isothermal EOSs, which we
briefly describe in the following.
#### 2.2.1 Gehman equation of state (GEOS)
By applying a non-isothermal barotropic equation of state, Gehman et al.
(1996a, b) studied the effect of the observed turbulence in MCs. They added a
term to the IEOS in order to model this effect. This EOS is softer than the
IEOS, and there is theoretical and empirical support for using it (Gehman et
al., 1996a, b; Lizano & Shu, 1989). They proposed the GEOS in the form
$p=c_{\rm s}^{2}\rho+\mathcal{P}_{0}\ln(\rho/\rho_{\rm c}).$ (5)
In this equation $c_{\rm s}$ and $\rho_{\rm c}$ are the isothermal sound speed
and the density at the filament axis, respectively, and $\mathcal{P}_{0}$ is
an empirical constant whose value ranges between 10 and 70 picodyne cm$^{-2}$
(Gehman et al., 1996b).
#### 2.2.2 McLaughlin & Pudritz equation of state (MPEOS)
McLaughlin & Pudritz (1996) considered a pure logarithmic EOS as
$p=p_{\rm c}[1+A\ln(\rho/\rho_{\rm c})],$ (6)
where $p_{\rm c}$ and $\rho_{\rm c}$ are the pressure and the density along
the filament axis, respectively, and $A$ is an empirical constant of about 0.2
for molecular cloud cores. They claimed that this EOS is the simplest and most
successful model that captures the important properties of giant MCs and their
internal structures, such as cores. Fiege & Pudritz (2000a) also found that
this logatropic model agrees with the existing data, and although the value of
$A$ was based on core data, they used the same value for filamentary clouds.
This EOS is the softest among the EOSs used in this text.
#### 2.2.3 Polytropic equation of state (PEOS)
Palmeirim et al. (2013) argued that the structure of the B211 filament in the
Taurus MC is well described by a polytropic cylindrical filament with an EOS
of the form $p\propto\rho^{\gamma}$, where $\gamma=0.97\pm 0.01$. Toci & Galli
(2015) analysed the observational properties of filamentary clouds in
cylindrical symmetry with the PEOS and the polytropic exponent
$1/3\apprle\gamma_{\rm p}\apprle 2/3$ (polytropic indices $-3\apprle
n\apprle-1.5$), where $\gamma_{\rm p}=1+1/n$. More generally, $-\infty<n<-1$
(Viala & Horedt, 1974; Maloney, 1988). In this paper we use the PEOS with
negative index,
$p=p_{\rm c}(\rho/\rho_{\mathrm{c}})^{1+1/n},$ (7)
where $p_{\rm c}$ and $\rho_{\rm c}$ are the same as in equation (6). The
dimensionless forms of equations (5) to (7) are available in appendix A.
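As an illustration, the dimensionless pressure-density relations that reappear
in the surface boundary condition, equation (27), can be evaluated directly.
The following is a minimal sketch assuming the normalization $\rho_{\rm
c}=p_{\rm c}=c_{\rm s}=1$; the function names are ours, not from the paper.

```python
import numpy as np

def p_geos(rho, kappa=0.2):
    """Gehman EOS, dimensionless: p = rho + kappa * ln(rho) (cf. eq. 27)."""
    return rho + kappa * np.log(rho)

def p_mpeos(rho, A=0.2):
    """McLaughlin & Pudritz logatropic EOS: p = 1 + A * ln(rho)."""
    return 1.0 + A * np.log(rho)

def p_peos(rho, n=-2.0):
    """Polytropic EOS with negative index: p = rho**(1 + 1/n)."""
    return rho ** (1.0 + 1.0 / n)

# All three reduce to p = 1 on the filament axis (rho = 1):
print(p_geos(1.0), p_mpeos(1.0), p_peos(1.0))  # 1.0 1.0 1.0

# At high density the MPEOS is the softest, as stated in the text:
print(p_mpeos(10.0), p_peos(10.0), p_geos(10.0))
```

For $\rho=10$ the pressures order as MPEOS < PEOS < GEOS, consistent with the
relative softness of the three EOSs described above.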
### 2.3 Equilibrium state
We use cylindrical coordinates $(r,\phi,z)$ with the filament centre at the
origin. The filament is very long and its radius is confined. The initial
magnetic field $\boldsymbol{B_{0}}=B_{0}\,\hat{z}$ is uniform and has no
effect on the equilibrium of the filament. Solving a combination of equations
(1) and (4) gives the density profile at the equilibrium state. For a filament
with the IEOS we have
$\rho(r)=\rho_{\mathrm{c}}(1+\dfrac{r^{2}}{8H^{2}})^{-2},$ (8)
where $H$ is the radial scale length (Stodólkiewicz, 1963; Ostriker, 1964),
$H=\dfrac{c_{\rm s}}{\sqrt{4\pi G\rho_{\mathrm{c}}}},$ (9)
and $G$ is the gravitational constant. For a typical MC with a central density
of $4\times 10^{-20}$ g cm$^{-3}$ and a thermal sound speed of 0.2 km
s$^{-1}$, $H\approx 0.035$ pc. The magnetic field strength $B$ is in units of
$(4\pi\rho_{\mathrm{c}})^{1/2}c_{\rm s}$; for the above values of
$\rho_{\mathrm{c}}$ and $c_{\rm s}$, $B=1$ is equivalent to $B\simeq 14.2$
$\upmu$G. See Appendix A for more details. Since there are no analytical
solutions for the non-isothermal EOSs presented in $\S$ 2.2, we obtain them
numerically (H18). We set the dimensionless turbulence parameter
$\kappa=0.1,0.2,0.5,1$ in the GEOS, where $\kappa=\mathcal{P}_{\rm 0}/(c_{\rm
s}^{2}\rho_{\mathrm{c}})$, the dimensionless parameter $A=0.2$ in the MPEOS,
and the polytropic indices $n=-1.5,-2,-3,-4$ in the PEOS.
### 2.4 Perturbations
By applying a small perturbation $\delta r$ to the surface of the filament
(Gehman et al., 1996b), the perturbed version of equations (1) to (4), in
dimensionless form and to first order, gives
$\rho_{0}\frac{\partial\,\boldsymbol{u}_{1}}{\partial t}+\nabla p_{1}+\rho_{0}\nabla\psi_{1}+\rho_{1}\nabla\psi_{0}-\left(\nabla\times\boldsymbol{B}_{1}\right)\times\boldsymbol{B}_{0}=0,$ (10)
$\frac{\partial\,\boldsymbol{B}_{1}}{\partial t}+\nabla\times\left(\boldsymbol{B}_{0}\times\boldsymbol{u}_{1}\right)=0,$ (11)
$\frac{\partial\,\rho_{1}}{\partial t}+\nabla\rho_{0}\cdot\boldsymbol{u}_{1}+\rho_{0}\nabla\cdot\boldsymbol{u}_{1}=0,$ (12)
$\nabla^{2}\psi_{1}=\rho_{1}.$ (13)
In these equations, the subscript 0 indicates the unperturbed parameters and
the subscript 1 shows the perturbed quantities. Since our EOSs are barotropic,
we can linearize them as
$p_{1}=\dfrac{dp}{d\rho}(\rho_{0})\rho_{1}\equiv
p^{\prime}(\rho_{0})\rho_{1}.$ (14)
On the other hand the density, the velocity, the magnetic field and the
gravitational potential are as
$\rho_{1}(\boldsymbol{x},t)=\mathbb{R}\left[f(r)\exp{(ikz-i\omega t)}\right],$ (15)
$\boldsymbol{u}_{1}(\boldsymbol{x},t)=\mathbb{R}\left[\boldsymbol{v}(r)\exp{(ikz-i\omega t)}\right],$ (16)
$\boldsymbol{B}_{1}(\boldsymbol{x},t)=\mathbb{R}\left[\boldsymbol{b}(r)\exp{(ikz-i\omega t)}\right],$ (17)
$\psi_{1}(\boldsymbol{x},t)=\mathbb{R}\left[\phi(r)\exp{(ikz-i\omega t)}\right],$ (18)
where $\mathbb{R}$ refers to the real part, $k$ is the wave number (along the
filament axis) and $\omega$ denotes the angular frequency. $f(r)$,
$\boldsymbol{v}(r)$, $\boldsymbol{b}(r)$ and $\phi(r)$ are the amplitudes of
the perturbations. Substituting these forms into equations (10) to (12) and
defining $w=i\omega v_{r}$, we obtain the linearised dimensionless equations
$\rho_{0}(\frac{d}{dr}+\frac{1}{r})w+(\omega^{2}-k^{2}P^{\prime})f-k^{2}\rho_{0}\phi+w\frac{d\rho_{0}}{dr}=0,$ (19)
$\bigg{[}P^{\prime}+\frac{B_{0}^{2}}{\rho_{0}}\bigg{(}1-\frac{k^{2}}{\omega^{2}}P^{\prime}\bigg{)}\bigg{]}\frac{df}{dr}+A_{1}f+\bigg{(}\rho_{0}-\frac{k^{2}}{\omega^{2}}B_{0}^{2}\bigg{)}\frac{d\phi}{dr}+A_{2}\phi+A_{3}w=0,$ (20)
$(\frac{d}{dr}+\frac{1}{r})\frac{d\phi}{dr}-k^{2}\phi-f=0,$ (21)
where
$A_{1}=-\frac{k^{2}B_{0}^{2}}{\omega^{2}\rho_{0}}\bigg{(}P^{\prime\prime}-\frac{2P^{\prime}}{\rho_{0}}\bigg{)}\frac{d\rho_{0}}{dr}+\bigg{(}P^{\prime\prime}-\frac{2B_{0}^{2}}{\rho_{0}^{2}}\bigg{)}\frac{d\rho_{0}}{dr}+\frac{d\psi_{0}}{dr},$ (22)
$A_{2}=\frac{k^{2}B_{0}^{2}}{\omega^{2}\rho_{0}}\frac{d\rho_{0}}{dr},$ (23)
$A_{3}=\frac{B_{0}^{2}}{\omega^{2}}\bigg{[}k^{2}+\frac{1}{\rho_{0}}\frac{d^{2}\rho_{0}}{dr^{2}}-2\bigg{(}\frac{1}{\rho_{0}}\frac{d\rho_{0}}{dr}\bigg{)}^{2}-\frac{1}{r\rho_{0}}\frac{d\rho_{0}}{dr}\bigg{]}-\rho_{0}.$ (24)
### 2.5 Boundary conditions
In this section we closely follow the approach of Nagasawa (1987) for
obtaining the boundary conditions. The external pressure confines the filament
to the finite radius $R$. The perturbed surface of the filament then has
radius
$r=R+\delta r\exp(ikz-i\omega t).$ (25)
On the deformed surface of the filament, the $r$-component of the velocity is
then defined as
$v_{r}(R)=-i\omega\delta r.$ (26)
On the other hand the pressure on the boundary must be equal to the external
pressure. This leads to
$p_{0}(R)=\begin{cases}\rho_{0}(R)+\kappa\ln[\rho_{0}(R)]&\text{(GEOS)},\\ 1+A\ln[\rho_{0}(R)]&\text{(MPEOS)},\\ \rho_{0}(R)^{1+1/n}&\text{(PEOS)},\end{cases}$ (27)
in dimensionless form; to first order in the perturbation, this gives
$\frac{dp_{0}}{dr}\bigg{|}_{R}\delta r+p_{1}(R)+B_{0}B_{1z}(R)=B_{0}B_{1z}^{ext}(R).$ (28)
It is necessary for the gravitational potential and its radial derivative to
be continuous on the border. So
$\psi_{1}(R)=\psi_{1}^{ext}(R),$ (29)
$\frac{d\psi_{1}}{dr}\bigg{|}_{R}+\rho_{0}(R)\delta
r=\frac{d\psi_{1}^{ext}}{dr}\bigg{|}_{R}.$ (30)
We consider a very hot, low-density environment outside the filament $(r>R)$.
We solve Laplace's equation for the exterior gravitational potential in
cylindrical coordinates and write the solution in terms of the modified
Bessel function of the second kind and order $m$ ($K_{m}$). Since
non-axisymmetric modes ($m\geqslant 1$) were shown to be stable against
perturbations (Nagasawa, 1987), we restrict ourselves to the axisymmetric mode
($m=0$). This lets us recast equation (30) into
$\frac{d\psi_{1}}{dr}\bigg{|}_{R}+\rho_{0}(R)\delta r=-k\frac{K_{1}(kR)}{K_{0}(kR)}\psi_{1}(R).$ (31)
Furthermore, we assume that there is no electric current outside the filament,
so $B_{1z}$ should be continuous on the boundary. Combining equations (27) and
(28), we obtain for the GEOS
$\bigg{(}1+\frac{\kappa}{\rho_{0}}\bigg{)}\bigg{(}\rho_{1}+\frac{d\rho_{0}}{dr}\delta r\bigg{)}\bigg{|}_{R}+B_{0}B_{1z}(R)=-B_{0}\frac{iK_{0}(kR)}{K_{1}(kR)}B_{1r}(R),$ (32)
for the MPEOS
$\bigg{(}\frac{A}{\rho_{0}}\bigg{)}\bigg{(}\rho_{1}+\frac{d\rho_{0}}{dr}\delta r\bigg{)}\bigg{|}_{R}+B_{0}B_{1z}(R)=-B_{0}\frac{iK_{0}(kR)}{K_{1}(kR)}B_{1r}(R),$ (33)
and for the PEOS
$\bigg{(}1+\frac{1}{n}\bigg{)}\rho_{0}^{1/n}\bigg{(}\rho_{1}+\frac{d\rho_{0}}{dr}\delta r\bigg{)}\bigg{|}_{R}+B_{0}B_{1z}(R)=-B_{0}\frac{iK_{0}(kR)}{K_{1}(kR)}B_{1r}(R).$ (34)
We set the boundary conditions along the filament axis $(r=0)$ as
$f=1,\quad\frac{d\phi}{dr}=0,\quad w=0.$ (35)
Eventually, equation (31), one of equations (32) to (34) and equation (35)
constitute our boundary conditions.
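The exterior-potential coefficient in equation (31) involves only the ratio of
modified Bessel functions $K_{1}(kR)/K_{0}(kR)$, which is straightforward to
evaluate numerically. A short sketch (the function name is ours):

```python
import numpy as np
from scipy.special import k0, k1

def exterior_potential_coeff(k, R):
    """Coefficient relating psi_1(R) to d psi_1/dr at the surface, eq. (31):
    d psi_1/dr |_R + rho_0(R)*delta_r = -k * K_1(kR)/K_0(kR) * psi_1(R)."""
    return -k * k1(k * R) / k0(k * R)

# Evaluate the coefficient for a few wave numbers at R = 1:
for kk in (0.1, 0.5, 1.0, 2.0):
    print(kk, exterior_potential_coeff(kk, R=1.0))
```

In the short-wavelength limit $kR\gg 1$ the ratio $K_{1}/K_{0}$ tends to
unity, so the coefficient approaches $-k$.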
### 2.6 Computation method
Applying the boundary conditions at the centre and at the surface of the
filament, one can solve equations (19) to (21), which constitute a disguised
eigenvalue problem. This can be done by different methods. Here, we
finite-difference our equations on a 2000-point equally spaced mesh. This
gives rise to a system of algebraic block-tridiagonal matrix equations that
can be solved with any standard matrix solver. We take advantage of a flexible
relaxation technique based on the standard Newton-Raphson-Kantorovich (NRK)
algorithm (Garaud, 2001). This algorithm needs an initial guess to advance.
For non-magnetic calculations, the algorithm converges rapidly after a few
iterations using a reasonable guess for each of the dependent variables. For
cases with a magnetic field, we start with the previous non-magnetic results
as the initial guess. To calculate the dispersion relation, we fix $\omega$
and consider the eigenvalue $k$ as a dependent variable. The algorithm
successively adjusts $k$ along with the other dependent variables until it
converges.
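The "fix $\omega$, solve for $k$" strategy can be illustrated on a toy
closed-form dispersion relation. This is purely illustrative: a Jeans-type
relation $\omega^{2}=k^{2}-1$ stands in for the discretized system (19) to
(21), and the NRK relaxation is replaced by a plain Newton iteration.

```python
def dispersion_residual(k, omega2):
    # Toy dimensionless Jeans-type relation omega^2 = k^2 - 1: unstable
    # (omega^2 < 0) for k < k_critic = 1. The real problem replaces this
    # closed form with the discretized boundary-value system (19)-(21).
    return k**2 - 1.0 - omega2

def solve_k(omega2, k_guess=0.5, tol=1e-12, max_iter=50):
    """Newton iteration treating k as the eigenvalue for fixed omega."""
    k = k_guess
    for _ in range(max_iter):
        f = dispersion_residual(k, omega2)
        df = 2.0 * k                  # analytic Jacobian of the toy relation
        k_new = k - f / df
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    raise RuntimeError("Newton iteration did not converge")

print(solve_k(-0.5))   # sqrt(0.5) ~ 0.7071, an unstable wave number
print(solve_k(0.0))    # k_critic = 1 for the toy relation
```

As in the paper's procedure, each converged solution at one $\omega$ can seed
the initial guess for the next, tracing out the dispersion curve.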
## 3 Results
Table 1: Units of the filament radius $(R)$, the magnetic field strength $(B)$, the fastest perturbation growth rate $(\omega_{\rm fast})$ and the critical wave number $(k_{\rm critic})$ for a typical MC with a central density of $4\times 10^{-20}$ g cm$^{-3}$ and a thermal sound speed of 0.2 km s$^{-1}$.

Parameter | Unit | Parameter = 1 is equivalent to
---|---|---
$R$ | $H$ | 0.035 pc
$B$ | $c_{\rm s}\sqrt{4\pi\rho_{\rm c}}$ | 14.180 $\upmu$G
$\omega_{\rm fast}$ | $\sqrt{4\pi G\rho_{\rm c}}$ | $5.780$ $\text{Myr}^{-1}$
$k_{\rm critic}$ | $1/H$ | 28.409 $\text{pc}^{-1}$
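The unit conversions in Table 1 follow directly from the definitions of $H$
and the field, growth-rate and wave-number units. A quick numerical check in
cgs units (the values agree with the table up to rounding in the adopted
physical constants):

```python
import numpy as np

# Physical constants (cgs)
G   = 6.674e-8            # gravitational constant, cm^3 g^-1 s^-2
pc  = 3.086e18            # parsec in cm
Myr = 3.156e13            # megayear in s

rho_c = 4e-20             # central density, g cm^-3 (typical MC, as in Table 1)
c_s   = 0.2e5             # thermal sound speed, cm s^-1

H          = c_s / np.sqrt(4 * np.pi * G * rho_c)   # radial scale length, eq. (9)
B_unit     = c_s * np.sqrt(4 * np.pi * rho_c)       # magnetic field unit
omega_unit = np.sqrt(4 * np.pi * G * rho_c)         # growth-rate unit
k_unit     = 1.0 / H                                # wave-number unit

print(f"H          = {H / pc:.3f} pc")               # ~0.035 pc
print(f"B_unit     = {B_unit * 1e6:.2f} uG")         # ~14.18 uG
print(f"omega_unit = {omega_unit * Myr:.2f} / Myr")  # ~5.78 / Myr
print(f"k_unit     = {k_unit * pc:.1f} / pc")        # ~28.3 / pc (cf. Table 1)
```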
Figure 1: The GEOS dispersion relations for $\kappa=0$ (isothermal), $0.1$,
$0.2$, $0.5$ and different filament radii ($R$). $R=1$ is equivalent to
$R\simeq 0.035$ pc and $B=5$ is equivalent to $B\simeq 71$ $\upmu$G. In each
panel the vertical axis is $\omega^{2}$ in units of $4\pi G\rho_{\mathrm{c}}$
and the horizontal axis is the wave number ($k$), multiplied by
$(1+\kappa)^{1/2}$ to account for using the thermal sound speed as the
velocity unit. Units are given in Table 1. For $R=1$, $B=5$ and
$\kappa\geqslant 0.2$, $|\omega^{2}|$ is too small to be discerned.
By considering the effect of the external pressure of the environment, which
confines the filament boundary, we study the stability of the filament under
the influence of various magnetic field strengths and different EOSs. For each
EOS dispersion relation, we select the minimum of $\omega^{2}$, or
equivalently the maximum of $|\omega^{2}|$, and substitute it in $\omega_{\rm
fast}=\sqrt{|\omega^{2}|}$ to find the fastest perturbation growth rate. A
system with larger $\omega_{\rm fast}$ is more prone to instability, and vice
versa, so $\omega_{\rm fast}$ is a good indicator for studying the instability
of filaments. We also define $k_{\rm critic}$ as the non-zero wave number
corresponding to $\omega=0$, which is also the largest unstable wave number.
Wave numbers with $\omega^{2}<0$ and $k<k_{\rm critic}$ are unstable.
The units of the filament radius $(R)$, the magnetic field strength $(B)$, the
fastest perturbation growth rate $(\omega_{\rm fast})$ and the critical wave
number $(k_{\rm critic})$, together with their equivalent values when equal to
1, are summarized in Table 1. We adopt different values for the filament
radius, the magnetic field strength, the turbulence parameter ($\kappa$ in the
GEOS) and the polytropic index ($n$ in the PEOS) to obtain the highest
possible resolution for the dispersion relations of the different EOSs. For
example, we set $R=0.25,0.5,0.75,1,1.25,...,3.75,4,4.25$,
$B=0,0.01,0.02,0.05,0.1,0.2,0.5,1,2,5,10$ and $\kappa=0,0.1,0.2,0.5,1$ for the
GEOS to produce the dispersion relations and investigate the effect of $R$,
$B$ and $\kappa$ on the instability of this type of filament. However, setting
$\kappa$ to values larger than 1 drives the minimum value of $|\omega^{2}|$ to
approximately zero for radii less than 1 and for all magnetic field strengths.
It is worth noting that $R=1$ is equivalent to $R=H\simeq 0.035$ pc and, for a
density of $4\times 10^{-20}$ g cm$^{-3}$ and a thermal sound speed of 0.2 km
s$^{-1}$, $B=1$ is equivalent to $B\simeq 14.2$ $\upmu$G. Similarly, we set
$R=0.25,0.5,0.75,1,2,5,10,20,50$ and $A=0.2$ for the MPEOS with the same
magnetic fields. For the PEOS filaments, we use the same $R$ and $B$ values as
for the MPEOS and $n=-1.5,-2,-3,-4$.
Figure 2: Effect of the filament radius ($R$) and the magnetic field ($B$) on
the GEOS filament instability with $\kappa=0.1$ (upper left panel),
$\kappa=0.2$ (upper right panel), $\kappa=0.5$ (lower left panel) and
$\kappa=1$ (lower right panel). In each panel, the horizontal axis shows the
radius of filaments, the vertical axis shows the logarithm of magnetic field
strength and the colour bar represents $\omega_{\rm fast}$. Units are given in
Table 1. The darker shaded areas are more unstable.
Fig. 1 shows the dispersion relations for the IEOS (a GEOS with $\kappa=0$)
and the GEOS. Although we calculate the dispersion relations for many values
of the filament radius, the magnetic field strength and the turbulence
parameter $(\kappa)$, here we draw them only for $R=1,2,3,4$ and
$B=0,0.2,0.5,5$ in order to observe their impact on the stability of the
filament. Panels in each column of Fig. 1 correspond to a specific radius, and
those in each row to a specific turbulence parameter. As Nagasawa (1987),
Gehman et al. (1996b) and H18 reported, because the magnetic field is unable
to prevent fluid contraction along the field direction, the dispersion
relation curves nearly overlap for magnetic fields larger than $B\simeq 5$, or
$\simeq 71$ $\upmu$G. We compare the dispersion relations of our models with
the work of Nagasawa (1987) and find that the two studies reproduce fairly
similar results.
Figure 3: Effect of the filament radius ($R$) and the magnetic field ($B$) on
the critical wave number ($k_{\rm critic}$) in the GEOS dispersion relations
with $\kappa=0.1$ (upper left panel), $\kappa=0.2$ (upper right panel),
$\kappa=0.5$ (lower left panel) and $\kappa=1$ (lower right panel). In each
panel, the horizontal and vertical axes show the radius of the filaments and
the logarithm of the magnetic field strength, respectively, and the colour bar
represents $k_{\rm critic}$. Units are given in Table 1. The horizontal dashed
line in each panel separates models with magnetic fields larger than $B\simeq
0.5$, or $\log_{10}B\simeq-0.3$, from the others.
In all the panels of Fig. 1 it is obvious that increasing the magnetic field
leads to more stability by decreasing $\omega_{\rm fast}$; it also reduces the
value of $k_{\rm critic}$. For each $\kappa$ and at a fixed magnetic field,
when the radius of a filament increases, $\omega_{\rm fast}$ increases as
well, so the filament becomes more unstable. Focusing on the filament radius,
the instability of filaments with smaller radii is more sensitive to the
magnetic field strength than that of larger ones. On the other hand, for a
filament with a specific radius in a fixed magnetic field, when $\kappa$
increases, $\omega_{\rm fast}$ decreases and the filament becomes more stable.
Interestingly, this is in contrast to the behaviour of infinite filaments
(here by infinite we mean large in radius) (H18). In addition, none of the
models in any panel exhibits the radial instability (RI), which occurs at a
non-zero $\omega^{2}$ corresponding to $k=0$; this is again in contrast to the
infinite filaments (H18).
To understand the problem in more detail, we performed very large surveys of
stability analyses that include the effect of various magnetic field
strengths, different radii and several values of $\kappa$. Fig. 2 presents the
results of four surveys for the GEOSs with $\kappa=0.1,0.2,0.5$ and 1. In each
panel, the colour bar shows $\omega_{\rm fast}$: the darker the colour, the
larger $\omega_{\rm fast}$ and the more unstable the filament. Areas with
lighter colour have lower $\omega_{\rm fast}$ and thus represent more stable
models. It is clear that the lower right region of each panel, which
represents the larger radii, i.e. the thicker filaments in the weaker magnetic
fields, is the most unstable area. In contrast, the upper left region of each
panel, which corresponds to the thinner filaments in the stronger magnetic
fields, is the most stable. In particular, if we compare the panels with each
other, we find that when $\kappa$ increases, the colour of a given area
becomes lighter, meaning the corresponding filament is more stable. It is also
of note that there are some areas with different radii and magnetic fields but
the same colour, i.e. the same $\omega_{\rm fast}$. This means that knowing
$\omega_{\rm fast}$ alone is not enough to determine the magnetic field of a
filament or its radius uniquely.
In Fig. 3 we can see the effect of the filament radius and the magnetic field
on $k_{\rm critic}$ in the GEOS dispersion relation for the same $\kappa$
values as in Fig. 2. This figure shows that $k_{\rm critic}$ has, on the
whole, a very weak dependence on $\kappa$. Furthermore, if we compare this
figure with Fig. 2, we find that models in the upper left corner of the
panels, i.e. the thinner filaments in the stronger magnetic fields, which have
the smallest $\omega_{\rm fast}$, also have the smallest $k_{\rm critic}$. On
the other hand, there is an area with larger $k_{\rm critic}$ that contains
the thinner filaments in the weaker magnetic fields (the lower left region in
all the panels of Fig. 3). It is also observed that in all the panels, when
the filament radius increases in magnetic fields $B\gtrsim 0.5$
($\log_{10}B\gtrsim-0.3$, indicated by the horizontal dashed line), $k_{\rm
critic}$ also increases, whereas for $B<0.5$ an inverse trend is observed.
$B=0.5$ is equivalent to $B\simeq 7.1$ $\upmu$G. Notwithstanding this
different behaviour in the stronger and weaker magnetic field regimes, $k_{\rm
critic}$ converges to about 0.5 for $R\gtrsim 3.5$ in all the panels.
Figure 4: The MPEOS dispersion relations for different filament radii ($R$)
and magnetic field strengths ($B$). The vertical axes are as in Fig. 1. The
horizontal axes are the wave numbers $(k)$ multiplied by $A^{1/2}$ to account
for using the thermal sound speed as the velocity unit. Units are given in
Table 1. For $R=0.25$ and $B>0.2$, $|\omega^{2}|$ is too small to be
recognized. Note the different scales of the horizontal axes in the first and
second rows. The $R=1$ panel is repeated in the second row for easier
comparison with the larger radii.
Figure 5: Left-hand panel: Effect of the filament radius ($R$) and the
magnetic field ($B$) on the instability of the MPEOS filaments. The horizontal
axis shows the logarithm of the filament radius, the vertical axis shows the
logarithm of the magnetic field strength and the colour bar represents
$\omega_{\rm fast}$. The units of $R$, $B$ and $\omega_{\rm fast}$ are as in
Fig. 2. The darker shaded areas are more unstable. Right-hand panel: Effect of
the filament radius ($R$) and the magnetic field ($B$) on the critical wave
number ($k_{\rm critic}$) in the MPEOS dispersion relation. The horizontal and
vertical axes are as in the left-hand panel and the unit of $k_{\rm critic}$
is as in Fig. 3. The horizontal dashed line separates models with magnetic
fields larger than $B\simeq 0.5$ or $\log_{10}B\simeq-0.3$ from the others.
Figure 6: The PEOS dispersion relations for polytropic indices ($n$) of
$-1.5,-2,-3$ and $-4$, different filament radii ($R$) and magnetic field
strengths ($B$). The vertical axes are as in Fig. 1. The horizontal axes are
the wave numbers $(k)$ multiplied by $(1+1/n)^{1/2}$ to account for using the
thermal sound speed as the velocity unit. Units are given in Table 1.
Figure 7: Effect of the filament radius ($R$) and the magnetic field ($B$) on
the PEOS filament instability with $n=-1.5$ (upper left panel), $n=-2$ (upper
right panel), $n=-3$ (lower left panel) and $n=-4$ (lower right panel). In
each panel the horizontal axis shows the logarithm of the filament radius, the
vertical axis shows the logarithm of the magnetic field strength and the
colour bar represents $\omega_{\rm fast}$. The units of $R$, $B$ and
$\omega_{\rm fast}$ are as in Fig. 2. The darker shaded areas are more
unstable.
Fig. 4 shows the dispersion relations for the MPEOS. Similar to the GEOS, the
dispersion relations were plotted for several values of the filament radius,
the same magnetic field strengths and $A=0.2$. Because there are no noticeable
changes in the dispersion relations for $R\gtrsim 10$, in this figure we
select only $R=0.25,0.5,0.75,1,2,5,10$ in order to see the variations of
$\omega_{\rm fast}$ and $k_{\rm critic}$ clearly. Each panel shows the
dispersion relations for one of these filament radii. As with the GEOS
dispersion relations (see Fig. 1), for all radii, increasing the magnetic
field increases the stability of the filament, and the stabilizing effect of
the magnetic field saturates for $B>5$ (Nagasawa, 1987; Gehman et al., 1996b,
H18). For small radii, the effect of the magnetic field on the dispersion
relation is stronger. For example, a filament with radius $R=0.5$ is almost
stable when $B=5$. The same holds for the smaller radius $R=0.25$, but at an
order of magnitude weaker magnetic field; this filament is completely stable
when $B=5$. It should also be noted that a filament with radius $R=5$ in the
absence of a magnetic field, and one with radius $R=10$ even in the presence
of a relatively weak field of $B=0.2$, are radially unstable. In addition, it
is noticeable that RI disappears entirely when $B\geqslant 0.5$.
Figure 8: Effect of the filament radius ($R$) and the magnetic field ($B$) on
the critical wave number ($k_{\rm critic}$) in the PEOS dispersion relations
with $n=-1.5$ (upper left panel), $n=-2$ (upper right panel), $n=-3$ (lower
left panel) and $n=-4$ (lower right panel). In each panel the horizontal axis
shows the logarithm of the filament radius, the vertical axis shows the
logarithm of the magnetic field strength and the colour bar represents $k_{\rm
critic}$. The units of $R$, $B$ and $k_{\rm critic}$ are as in Fig. 3. The
horizontal dashed line in each panel separates magnetic fields larger than
$B\simeq 0.5$, or $\log_{10}B\simeq-0.3$, from smaller ones.
Much the same as for the GEOS, the stability analysis is elaborated in the
left-hand panel of Fig. 5, which shows the instability of the MPEOS filaments
with different radii in various magnetic fields in more detail. Comparing this
panel with those of Fig. 2, one can see that the general behaviour is the same
but the MPEOS is slightly more unstable than the GEOSs. As for the GEOS,
incrementally increasing the magnetic field gradually stabilizes all the
filaments. Moreover, the upper left region has the lightest colour and clearly
shows that the most stable filaments are the thinnest ones in the strongest
magnetic field regime. Furthermore, it should be noted that, similar to Fig. 2
for the GEOS, some models have the same $\omega_{\rm fast}$ in spite of having
different radii, because of the effect of the magnetic field. As can be seen
in the panel, as for the GEOS, increasing the radius leads to more instability
regardless of the magnetic field strength. For $R>5$, increasing the radius
has no noticeable effect on the instability for any magnetic field strength.
The right-hand panel of Fig. 5 shows the effect of the filament radius and the
magnetic field on $k_{\rm critic}$. Here, the overall picture is very similar
to the GEOSs, especially the one with $\kappa=0.1$. The only difference is
that the very thin MPEOS filaments, in the absence of or in very weak magnetic
fields, have a $k_{\rm critic}$ almost twice as large as their GEOS
counterparts.
Fig. 6 shows the dispersion relations for the PEOS. As already mentioned in
$\S$ 2.2.3, we set $n=-1.5,-2$ and $-3$ as the polytropic indices and use the
same $R$ and $B$ values as in the MPEOS. We also compute the dispersion
relation for $n=-4$ in order to examine the effect of smaller $n$ on the
instability. In this figure, panels in each row correspond to filaments with a
specific polytropic index, while those in each column correspond to specific
radii. Filaments with radii $R>10$ do not show noticeable changes in their
dispersion relations, so we select only $R=0.25,0.5,0.75,1,2,5$ and $10$ in
order to see their effect on the filament instability. Nevertheless, to see
the impact of these parameters on the shape of the dispersion relations more
clearly, we draw the plots only for $R=1,2,5$ and $10$ and $B=0,0.2,0.5$ and
$5$. Here, once more, magnetic field strengths $B>5$ cause the dispersion
relations to nearly overlap, similar to the GEOS and the MPEOS (Nagasawa,
1987; Gehman et al., 1996b, H18). In all the panels of Fig. 6 it is obvious
that increasing the magnetic field leads to more stability by decreasing
$\omega_{\rm fast}$ and, for $R\lesssim 5$, reducing the value of $k_{\rm
critic}$. H18 showed that the infinite filaments with the PEOS are prone to
the radial instability and that a sufficiently strong magnetic field can
suppress it. Focusing on the $R=5$ and $R=10$ panels, we see that this is also
the case for the pressure-confined filaments with PEOSs and relatively large
radii. It should be noted that the smaller radius is radially unstable only
for $n=-1.5$, while the larger one is unstable for all the polytropic indices.
The occurrence of RI is probably due to the fact that the effective sound
speed decreases when the density of the filament increases. For $R<5$ there is
no sign of RI in the dispersion relations.
In the following, the effect of $R$, $B$ and $n$ on the instability of
filaments with the PEOS is studied in more detail, using the results of four
large surveys. Fig. 7 illustrates the outcome for $\omega_{\rm fast}$. The
stable upper left region is a clear feature common to all the panels, showing
that, similar to the previous GEOS and MPEOS cases, the thinner filaments in
the stronger magnetic fields are the most stable. Looking at the panels, it is
clear that the stability patterns are more or less similar to those of the
GEOS and MPEOS. All four panels have a distinctive more stable region at the
upper left and an unstable region at the lower right corner. By decreasing
$n$, the former becomes a little larger while the latter fades out. This could
indicate that the softer PEOSs (ones with larger $n$) are more unstable,
possibly because these filaments have more mass per unit length. Moreover, as
for the GEOS and MPEOS, increasing the radius of a filament in a constant
magnetic field decreases the stability until $R=10$, or $\simeq 0.35$ pc;
beyond this radius the stability does not change noticeably. Regarding the
stabilizing effect of the magnetic field, the stability of filaments with
smaller radii is more sensitive to the magnetic field strength than that of
larger ones. It is also worth noting that, similar to Figs. 2 and 5, there is
a degeneracy in determining $B$ and $R$ from a specific $\omega_{\rm fast}$.
Fig. 8 shows how the filament radius and the magnetic field affect $k_{\rm
critic}$ in the PEOSs with different $n$. The results are very similar to the
GEOS and MPEOS; however, one can see that for the thinner filaments in the low
magnetic field regime, the GEOS has the greatest $k_{\rm critic}$ while the
MPEOS has the smallest. In addition, by decreasing $n$, $k_{\rm critic}$
increases slightly for the thinner filaments in the low magnetic field regime.
For all the models, $k_{\rm critic}$ converges to $\simeq 0.5$ for $R>5$.
To further study the relationship between the critical wavelength of
fragmentation $(\lambda_{\rm critic}=2\pi/k_{\rm critic})$ and the EOS, we
calculate $\lambda_{\rm critic}$ for selected values of $R$, $B$, $\kappa$,
$n$ and $A$ (Table 2). Because $k_{\rm critic}$ is approximately zero for some
radii, magnetic fields and EOSs, we cannot calculate $\lambda_{\rm critic}$
for those models (indicated as N/A in the table). As expected, $\lambda_{\rm
critic}$ is more sensitive to the magnetic field and the type of EOS for
thinner filaments. Also, the difference between the maximum and minimum values
of $\lambda_{\rm critic}$ across the various EOSs is greater for filaments
with smaller radii.
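The relation $\lambda_{\rm critic}=2\pi/k_{\rm critic}$, with N/A entries where $k_{\rm critic}\approx 0$, can be sketched as follows. This is a minimal Python illustration; the function name and the zero tolerance are our own choices, not from the paper's code:

```python
import numpy as np

def lambda_critic(k_critic, tol=1e-8):
    """Critical fragmentation wavelength, lambda = 2*pi / k_critic.

    Entries where k_critic is numerically zero are returned as NaN,
    mirroring the N/A entries of Table 2.
    """
    k = np.asarray(k_critic, dtype=float)
    lam = np.full(k.shape, np.nan)
    nonzero = np.abs(k) > tol
    lam[nonzero] = 2.0 * np.pi / k[nonzero]
    return lam

# For the convergent value k_critic ~ 0.5 (R > 5), lambda ~ 4*pi in code units
print(lambda_critic([0.5, 0.0]))
```

Multiplying the dimensionless result by the length unit of Table 1 gives the values in parsecs quoted in Table 2.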
Table 2: $\lambda_{\rm critic}$ values in parsecs for sample filament radii
($R$), magnetic field strengths ($B$), turbulence parameters ($\kappa$) in the
GEOS, polytropic indices ($n$) in the PEOS and the empirical constant ($A$) in
the MPEOS.
| $R$ (pc) | $B$ ($\upmu$G) | $\kappa=0.0$ | $\kappa=1.0$ | $n=-1.5$ | $n=-4.0$ | $A=0.2$ |
|---|---|---|---|---|---|---|
| 0.018 | 0.0 | 0.102 | 0.073 | 0.171 | 0.117 | 0.214 |
| 0.018 | 7.1 | N/A$^{a}$ | N/A | 0.771 | 0.801 | 0.713 |
| 0.018 | 70.9 | N/A | N/A | 2.697 | N/A | 1.250 |
| 0.035 | 0.0 | 0.193 | 0.142 | 0.301 | 0.219 | 0.358 |
| 0.035 | 7.1 | 0.392 | 0.314 | 0.478 | 0.422 | 0.491 |
| 0.035 | 70.9 | 1.324 | 6.505 | 0.599 | 0.970 | 0.536 |
| 0.100 | 0.0 | 0.369 | 0.330 | 0.446 | 0.394 | 0.480 |
| 0.100 | 7.1 | 0.395 | 0.381 | 0.452 | 0.416 | 0.480 |
| 0.100 | 70.9 | 0.408 | 0.520 | 0.453 | 0.425 | 0.480 |
| 0.175 | 0.0 | 0.391 | 0.419 | 0.452 | 0.414 | 0.482 |
| 0.175 | 7.1 | 0.392 | 0.433 | 0.452 | 0.415 | 0.482 |
| 0.175 | 70.9 | 0.393 | 0.451 | 0.452 | 0.416 | 0.482 |

$^{a}$ Not available for this model (see the text).
## 4 Summary and conclusion
Filamentary structures seem to be a natural early stage in the formation of
stars and star clusters. This has stimulated many investigations of the
properties and evolution of these structures. In a pioneering work, Nagasawa
(1987) showed that pressure-confined filaments are gravitationally unstable
for a specific range of wavelengths and that a poloidal magnetic field can
increase their stability and, interestingly, entirely stabilize them if the
filaments are thin enough.
Recent observations show that the IEOS is not always the best EOS for
interpreting filament properties. Building on the work by Nagasawa (1987), H18
studied the instability of filamentary MCs without the effect of external
pressure, using three previously proposed non-isothermal EOSs, namely the
GEOS, the MPEOS and the PEOS.
In this paper, in continuation of the previous work by H18, we have added the
effect of external pressure. To this aim, we use these three non-isothermal
EOSs (described in $\S$ 2.2) to study the instability of magnetized
pressure-confined filaments. We solve the equations as described in $\S$ 2.6
and extract the dispersion relations for these non-isothermal filaments with
various radii ($R$) and magnetic fields ($B$). Moreover, by exploiting the
growth rate of the fastest growing mode ($\omega_{\rm fast}$) as a
gravitational instability indicator, we are able to investigate the effect of
the filament radius, the magnetic field and the type of EOS on the instability
of the filaments. In summary, the results show that:
1. Similar to the infinite filaments, for all the EOSs, increasing the magnetic field strength makes the pressure-confined filaments more stable.
2. The instability of the thinner filaments is more sensitive to the magnetic field strength than that of the thicker ones.
3. Unlike the infinite filaments, for the GEOS pressure-confined models considered in this study, which have $R\lesssim 0.15$ pc, for all the radii (especially the larger ones) in a fixed magnetic field, when $\kappa$ increases, the filaments become more stable.
4. For all the EOSs, the thinner filaments are totally stabilized even by an intermediate magnetic field strength (e.g. models with $R\lesssim 0.03$ pc in $B\gtrsim 14$ $\upmu$G), while for the thicker ones this effect is suppressed for magnetic fields stronger than $B\simeq 70$ $\upmu$G.
5. There is no RI in the GEOS pressure-confined filaments. This is in contrast to the infinite GEOS filaments.
6. In the absence of a magnetic field, the MPEOS and the PEOS with $n=-1.5$ and $R\gtrsim 0.17$ pc are radially unstable. The twice broader filaments of these two EOSs can also be radially unstable in the presence of a weak magnetic field $B\simeq 3\,\upmu$G. The RI in the other, less soft PEOSs ($n=-2,-3$ and $-4$) with radius $R=0.35$ pc is still suppressed by a weak magnetic field of $B\simeq 7\,\upmu$G for the first and $B\simeq 3\,\upmu$G for the other two.
7. In the PEOS, decreasing $n$ has the same effect on the filament instability as increasing $\kappa$ in the GEOS.
8. Comparing filaments with the same radius in the same magnetic field, the MPEOS filaments are the most unstable ones, because of their softer EOS.
9. The minimum spacing between clumps in filamentary MCs is often compared with $\lambda_{\rm critic}$ (e.g. Hacar & Tafalla, 2011; Contreras et al., 2016; Zhang et al., 2020) and shows a diverse range of lengths (see Table 2). The predicted $\lambda_{\rm critic}$ clearly depends on the filament radius, the EOS and the magnetic field strength. This dependence is more pronounced for the thinner filaments and is strongest for the thinnest ones. Caution is therefore needed in interpreting this length scale. More specifically, it would be interesting to investigate the fragmentation spacing within the thinner filaments (e.g. Sánchez-Monge et al., 2014).
10. In all the models it is observed that decreasing the filament radius (which means a higher external pressure) decreases $\omega_{\rm fast}$, or equivalently increases the minimum time needed for fragmentation ($\tau_{\rm min}=1/\omega_{\rm fast}$). This could correspond to a longer time needed for a clump to become unstable and finally form protostars. Interestingly, this has also been reported recently by Anathpindika & Di Francesco (2020). They performed hydrodynamical simulations of accreting filaments in media with different external pressures and showed that a higher external pressure leads to a lower star formation rate.
11. By comparing $\omega_{\rm fast}$ for the PEOSs, one can see that the softer ones have smaller $\tau_{\rm min}$. Remarkably, in agreement with this result, in a hydrodynamical simulation of an initially uniform polytropic gas within a periodic box with driven turbulence, Federrath & Banerjee (2015) found that the star formation rate increases for the softer PEOSs.
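The fragmentation timescale used in the last two points is a simple reciprocal of the growth rate. The helper below is our own minimal sketch, in the dimensionless units of Table 1:

```python
def tau_min(omega_fast):
    """Minimum fragmentation time, tau_min = 1 / omega_fast.

    A smaller omega_fast (e.g. a thinner filament under higher external
    pressure) gives a longer time for a clump to become unstable.
    """
    if omega_fast <= 0.0:
        raise ValueError("filament is stable: no fastest growing mode")
    return 1.0 / omega_fast

# Halving the growth rate doubles the fragmentation timescale
print(tau_min(0.25) / tau_min(0.5))
```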
## Data availability
No new data were generated or analysed in support of this research.
## Acknowledgements
Mohammad Mahdi Motiei and Mohammad Hosseinirad thank Mahmood Roshan and Najme
Mohammad-Salehi for useful discussions. The authors thank the anonymous
referees for the careful reading of the manuscript and their insightful and
constructive comments. This research made use of Scipy (Jones et al., 2001),
Jupyter (Kluyver et al., 2016) and Numpy (Walt et al., 2011). All figures were
generated using Matplotlib (Hunter, 2007). Also we have made extensive use of
the NASA Astrophysical Data System Abstract Service. This work was supported
by the Ferdowsi University of Mashhad under grant no. 50729 (1398/06/26).
## References
* Anathpindika & Di Francesco (2020) Anathpindika S., Di Francesco J., 2020, arXiv e-prints, p. arXiv:2012.01794
* André (2017) André P., 2017, Comptes Rendus Geoscience, 349, 187
* André et al. (2014) André P., Di Francesco J., Ward-Thompson D., Inutsuka S.-I., Pudritz R. E., Pineda J., 2014, in Protostars and Planets VI. University of Arizona Press (arXiv:1312.6232), doi:10.2458/azu_uapress_9780816531240-ch002, http://muse.jhu.edu/books/9780816598762/9780816598762-8.pdf
* Arzoumanian et al. (2011) Arzoumanian D., et al., 2011, A&A, 529, L6
* Arzoumanian et al. (2019) Arzoumanian D., et al., 2019, A&A, 621, A42
* Bally et al. (1987) Bally J., Stark A. A., Wilson R. W., Langer W. D., 1987, ApJ, 312, L45
* Bergin & Tafalla (2007) Bergin E. A., Tafalla M., 2007, ARA&A, 45, 339
* Bontemps et al. (2010) Bontemps S., et al., 2010, A&A, 518, L85
* Burkert & Hartmann (2004) Burkert A., Hartmann L., 2004, ApJ, 616, 288
* Camacho et al. (2016) Camacho V., Vázquez-Semadeni E., Ballesteros-Paredes J., Gómez G. C., Fall S. M., Mata-Chávez M. D., 2016, ApJ, 833, 113
* Chandrasekhar & Fermi (1953) Chandrasekhar S., Fermi E., 1953, ApJ, 118, 116
* Chen & Ostriker (2014) Chen C.-Y., Ostriker E. C., 2014, ApJ, 785, 69
* Chou et al. (2000) Chou W., Matsumoto R., Tajima T., Umekawa M., Shibata K., 2000, ApJ, 538, 710
* Clarke et al. (2017) Clarke S. D., Whitworth A. P., Duarte-Cabral A., Hubber D. A., 2017, MNRAS, 468, 2489
* Clarke et al. (2020) Clarke S. D., Williams G. M., Walch S., 2020, arXiv e-prints, p. arXiv:2007.15358
* Contreras et al. (2016) Contreras Y., Garay G., Rathborne J. M., Sanhueza P., 2016, MNRAS, 456, 2041
* Dib et al. (2007) Dib S., Kim J., Vázquez-Semadeni E., Burkert A., Shadmehri M., 2007, ApJ, 661, 262
* Durrive & Langer (2019) Durrive J.-B., Langer M., 2019, Journal of Fluid Mechanics, 859, 362
* Federrath (2016) Federrath C., 2016, MNRAS, 457, 375
* Federrath & Banerjee (2015) Federrath C., Banerjee S., 2015, Monthly Notices of the Royal Astronomical Society, 448, 3297
* Feng et al. (2016) Feng S., Beuther H., Zhang Q., Henning T., Linz H., Ragan S., Smith R., 2016, A&A, 592, A21
* Fiege & Pudritz (2000a) Fiege J. D., Pudritz R. E., 2000a, MNRAS, 311, 85
* Fiege & Pudritz (2000b) Fiege J. D., Pudritz R. E., 2000b, MNRAS, 311, 105
* Fischera & Martin (2012a) Fischera J., Martin P. G., 2012a, A&A, 542, A77
* Fischera & Martin (2012b) Fischera J., Martin P. G., 2012b, A&A, 547, A86
* Freundlich et al. (2014) Freundlich J., Jog C. J., Combes F., 2014, A&A, 564, A7
* Garaud (2001) Garaud P., 2001, PhD thesis, https://users.soe.ucsc.edu/~pgaraud/Work/thesis.pdf
* Gehman et al. (1996a) Gehman C. S., Adams F. C., Fatuzzo M., Watkins R., 1996a, ApJ, 457, 718
* Gehman et al. (1996b) Gehman C. S., Adams F. C., Watkins R., 1996b, ApJ, 472, 673
* Goldsmith et al. (2008) Goldsmith P. F., Heyer M., Narayanan G., Snell R., Li D., Brunt C., 2008, ApJ, 680, 428
* Gómez & Vázquez-Semadeni (2014) Gómez G. C., Vázquez-Semadeni E., 2014, ApJ, 791, 124
* Gritschneder et al. (2016) Gritschneder M., Heigl S., Burkert A., 2016, ApJ, 834, 202
* Hacar & Tafalla (2011) Hacar A., Tafalla M., 2011, A&A, 533, A34
* Hanawa & Tomisaka (2015) Hanawa T., Tomisaka K., 2015, ApJ, 801, 11
* Hanawa et al. (2017) Hanawa T., Kudoh T., Tomisaka K., 2017, ApJ, 848, 2
* Hanawa et al. (2019) Hanawa T., Kudoh T., Tomisaka K., 2019, ApJ, 881, 97
* Hartmann & Burkert (2007) Hartmann L., Burkert A., 2007, ApJ, 654, 988
* Heigl et al. (2016) Heigl S., Burkert A., Hacar A., 2016, MNRAS, 463, 4301
* Heigl et al. (2018) Heigl S., Gritschneder M., Burkert A., 2018, MNRAS, 481, L1
* Heigl et al. (2020) Heigl S., Gritschneder M., Burkert A., 2020, 15, 1
* Heitsch (2013) Heitsch F., 2013, ApJ, 776, 62
* Henshaw et al. (2016) Henshaw J. D., et al., 2016, MNRAS, 463, 146
* Hosseinirad et al. (2017) Hosseinirad M., Naficy K., Abbassi S., Roshan M., 2017, MNRAS, 465, 1645
* Hosseinirad et al. (2018) Hosseinirad M., Naficy K., Abbassi S., Roshan M., 2018, MNRAS, 465, 1645
* Hunter (2007) Hunter J. D., 2007, Computing In Science & Engineering, 9, 90
* Inutsuka & Miyama (1992) Inutsuka S.-I., Miyama S. M., 1992, ApJ, 388, 392
* Inutsuka & Miyama (1997) Inutsuka S., Miyama S. M., 1997, ApJ, 480, 681
* Inutsuka et al. (2015) Inutsuka S.-i., Inoue T., Iwasaki K., Hosokawa T., 2015, A&A, 580, A49
* Jackson et al. (2010) Jackson J. M., Finn S. C., Chambers E. T., Rathborne J. M., Simon R., 2010, ApJ, 719, L185
* Jones et al. (2001) Jones E., Oliphant T., Peterson P., et al., 2001, SciPy: Open source scientific tools for Python, http://www.scipy.org/
* Juvela et al. (2012) Juvela M., et al., 2012, A&A, 541, A12
* Kainulainen et al. (2017) Kainulainen J., Stutz A. M., Stanke T., Abreu-Vicente J., Beuther H., Henning T., Johnston K. G., Megeath S. T., 2017, A&A, 600, A141
* Kawachi & Hanawa (1998) Kawachi T., Hanawa T., 1998, Publications of the Astronomical Society of Japan, 50, 577
* Kim et al. (2012) Kim J.-g., Kim W.-t., Seo Y. M., Hong S. S., 2012, ApJ, 761, 131
* Klassen et al. (2017) Klassen M., Pudritz R. E., Kirk H., 2017, MNRAS, 465, 2254
* Klessen et al. (1998) Klessen R. S., Burkert A., Bate M. R., 1998, ApJ, 501, L205
* Kluyver et al. (2016) Kluyver T., et al., 2016, in ELPUB. pp 87–90
* Könyves et al. (2010) Könyves V., et al., 2010, A&A, 518, L106
* Lada & Lada (2003) Lada C. J., Lada E. A., 2003, ARA&A, 41, 57
* Larson (1985) Larson R. B., 1985, MNRAS, 214, 379
* Lee & Hong (2007) Lee S. M., Hong S. S., 2007, ApJS, 169, 269
* Li & Klein (2019) Li P. S., Klein R. I., 2019, MNRAS, 485, 4509
* Lizano & Shu (1989) Lizano S., Shu F. H., 1989, ApJ, 342, 834
* Maloney (1988) Maloney P., 1988, ApJ, 334, 761
* Matsumoto et al. (1994) Matsumoto T., Nakamura F., Hanawa T., 1994, PASJ, 46, 243
* McKee & Ostriker (2007) McKee C. F., Ostriker E. C., 2007, ARA&A, 45, 565
* McLaughlin & Pudritz (1996) McLaughlin D. E., Pudritz R. E., 1996, ApJ, 469, 194
* Men’shchikov et al. (2010) Men’shchikov A., et al., 2010, A&A, 518, L103
* Miettinen (2012) Miettinen O., 2012, A&A, 542, A101
* Miettinen & Harju (2010) Miettinen O., Harju J., 2010, A&A, 520, A102
* Miville-Deschênes et al. (2010) Miville-Deschênes M.-A., et al., 2010, A&A, 518, L104
* Miyama et al. (1987) Miyama S. M., Narita S., Hayashi C., 1987, Progress of Theoretical Physics, 78, 1051
* Nagai et al. (1998) Nagai T., Inutsuka S.-i., Miyama S. M., 1998, ApJ, 506, 306
* Nagasawa (1987) Nagasawa M., 1987, Progress of Theoretical Physics, 77, 635
* Nakamura & Li (2008) Nakamura F., Li Z.-Y., 2008, ApJ, 687, 354
* Nakamura & Umemura (1999) Nakamura F., Umemura M., 1999, ApJ, 515, 239
* Nakamura et al. (1993) Nakamura F., Hanawa T., Nakano T., 1993, PASJ, 45, 551
* Ntormousi & Hennebelle (2019) Ntormousi E., Hennebelle P., 2019, A&A, 625, A82
* Ostriker (1964) Ostriker J., 1964, ApJ, 140, 1056
* Padoan et al. (2014) Padoan P., Haugbølle T., Nordlund Å., 2014, ApJ, 797, 32
* Palmeirim et al. (2013) Palmeirim P., et al., 2013, A&A, 550, A38
* Panopoulou et al. (2017) Panopoulou G. V., Psaradaki I., Skalidis R., Tassis K., Andrews J. J., 2017, MNRAS, 466, 2529
* Pilbratt et al. (2010) Pilbratt G. L., et al., 2010, A&A, 518, L1
* Pudritz & Kevlahan (2013) Pudritz R. E., Kevlahan N. K.-R., 2013, Philosophical Transactions of the Royal Society of London Series A, 371, 20120248
* Roy et al. (2019) Roy A., et al., 2019, A&A, 626, A76
* Sadhukhan et al. (2016) Sadhukhan S., Mondal S., Chakraborty S., 2016, MNRAS, 459, 3059
* Sánchez-Monge et al. (2014) Sánchez-Monge Á., et al., 2014, A&A, 569, A11
* Schneider & Elmegreen (1979) Schneider S., Elmegreen B. G., 1979, ApJS, 41, 87
* Shadmehri (2005) Shadmehri M., 2005, MNRAS, 356, 1429
* Steinacker et al. (2016) Steinacker J., Bacmann A., Henning T., Heigl S., 2016, A&A, 593, A6
* Stodólkiewicz (1963) Stodólkiewicz J. S., 1963, Acta Astron., 13, 30
* Toci & Galli (2015) Toci C., Galli D., 2015, MNRAS, 446, 2110
* Vázquez-Semadeni et al. (2007) Vázquez-Semadeni E., Gómez G. C., Jappsen A. K., Ballesteros-Paredes J., González R. F., Klessen R. S., 2007, ApJ, 657, 870
* Viala & Horedt (1974) Viala Y., Horedt G. P., 1974, A&AS, 16, 173
* Walt et al. (2011) Walt S. v. d., Colbert S. C., Varoquaux G., 2011, Computing in Science & Engineering, 13, 22
* Wang et al. (2011) Wang K., Zhang Q., Wu Y., Zhang H., 2011, ApJ, 735, 64
* Wang et al. (2014) Wang K., et al., 2014, MNRAS, 439, 3275
* Wang et al. (2016) Wang K., Testi L., Burkert A., Walmsley C. M., Beuther H., Henning T., 2016, ApJS, 226, 9
* Ward-Thompson et al. (2010) Ward-Thompson D., et al., 2010, A&A, 518, L92
* Wareing et al. (2016) Wareing C. J., Pittard J. M., Falle S. A. E. G., Van Loo S., 2016, MNRAS, 459, 1803
* Zhang et al. (2020) Zhang S., et al., 2020, arXiv e-prints, p. arXiv:2012.07738
## Appendix A Dimensionless form of the equations of state
We convert quantities and equations to dimensionless form as in H18. The units
are

$[\rho]=\rho_{\rm c},$ (36)

$[t]={\sqrt{4\pi G[\rho]}}^{-1},$ (37)

$[p]=p_{c},$ (38)

$[\boldsymbol{u}]=\sqrt{\dfrac{[p]}{[\rho]}},$ (39)

$[r]=[t][\boldsymbol{u}],$ (40)

$[\psi]=[\boldsymbol{u}]^{2}.$ (41)

The velocity unit is equal to the isothermal sound speed $c_{\mathrm{s}}$ for
the IEOS and the GEOS. For the MPEOS and the PEOS, it is likewise taken to be
the thermal sound speed $c_{\mathrm{s}}$ (hence the factors $A^{1/2}$ and
$(1+1/n)^{1/2}$ applied to the wave numbers in Figs. 4 and 6). The magnetic
field unit is defined as
$[\boldsymbol{B}]=\sqrt{4\pi[p]},$ (42)
$B=1$ is equivalent to $B\simeq 14.2$ $\upmu$G. Using these units, we can
rewrite the analytical solution of the density and gravitational potential of
the isothermal filament as
$\rho(r)=\bigg{(}1+\dfrac{r^{2}}{8}\bigg{)}^{-2},$ (43)
and
$\psi(r)=2\ln\bigg{(}1+\dfrac{r^{2}}{8}\bigg{)}.$ (44)
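As a sanity check, one can verify symbolically that the density profile (43) and the potential (44) satisfy the dimensionless cylindrical Poisson equation $\nabla^{2}\psi=\rho$ implied by the units above. A small SymPy sketch (our own code, not from the paper):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
rho = (1 + r**2 / 8)**-2          # eq. (43)
psi = 2 * sp.log(1 + r**2 / 8)    # eq. (44)

# Cylindrical Laplacian for a profile depending only on r:
# (1/r) d/dr ( r dpsi/dr )
lap_psi = sp.simplify(sp.diff(r * sp.diff(psi, r), r) / r)

print(sp.simplify(lap_psi - rho))  # 0, so Poisson's equation holds
```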
Moreover, the dimensionless form of the GEOS is
$p=\rho+\kappa\ln\,(\rho),$ (45)
where $\kappa=\dfrac{\mathcal{P}_{0}}{c_{\rm s}^{2}\rho_{\mathrm{c}}}$ and
$\kappa=0$ gives the IEOS. The MPEOS dimensionless form is
$p=1+A\ln\,(\rho),$ (46)
and finally the dimensionless form of the PEOS is
$p=\rho^{1+1/n}.$ (47)
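The three dimensionless EOSs (45)-(47) are simple enough to encode directly. In these units the central density and pressure are both 1, so every EOS returns $p=1$ on the filament axis. A minimal Python sketch with our own function names:

```python
import numpy as np

def p_geos(rho, kappa):
    """Dimensionless GEOS, eq. (45); kappa = 0 recovers the IEOS p = rho."""
    return rho + kappa * np.log(rho)

def p_mpeos(rho, A):
    """Dimensionless MPEOS, eq. (46)."""
    return 1.0 + A * np.log(rho)

def p_peos(rho, n):
    """Dimensionless PEOS, eq. (47), for a negative polytropic index n."""
    return rho**(1.0 + 1.0 / n)

# All three EOSs agree on the filament axis, where rho = p = 1:
print(p_geos(1.0, 0.5), p_mpeos(1.0, 0.2), p_peos(1.0, -1.5))
```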
# Global Existence of Solutions to Reaction Diffusion Systems with Mass
Transport Type Boundary Conditions on an Evolving Domain
Vandana Sharma Department of Mathematics, Indian Institute of Technology
Jodhpur, Rajasthan, India<EMAIL_ADDRESS>and Jyotshana V. Prajapat
Department of Mathematics, University of Mumbai, Vidyanagari, Santacruz east,
Mumbai 400 098, India<EMAIL_ADDRESS>
###### Abstract.
We consider reaction diffusion systems where components diffuse inside the
domain and react on the surface through mass transport type boundary
conditions on an evolving domain. Using a Lyapunov functional and duality
arguments, we establish the existence of component-wise non-negative global
solutions.
The first author acknowledges IIT Jodhpur for research grant support through a
SEED grant and for infrastructural support.
Keywords: reaction-diffusion equations, mass transport, conservation of mass,
global wellposedness, linear estimates, evolving domain.
AMS Classification: Primary: 35K57, 35B45
## 1. Introduction
The reaction–diffusion mechanism is one of the simplest and most elegant
pattern formation models. Turing (1952) [34] first proposed the mechanism in
the context of biological morphogenesis, showing that reactions between two
diffusible chemicals (morphogens) could give rise to spatially heterogeneous
concentrations through an instability driven by diffusion. Recently there have
been numerous studies on models that involve coupled bulk–surface dynamics
[7], [9], [11], [22], [23]. Hahn et al. [10] model the surfactant
concentration by means of a coupled bulk–surface model, and Rätz and Röger
[27], [28] studied symmetry breaking in a bulk–surface reaction–diffusion
model for signalling networks. In the former work, a
reaction–convection–diffusion model is proposed that couples the concentration
of the surfactants in the bulk and on the free surfaces, while in the latter,
a single diffusion partial differential equation is formulated inside the bulk
of a cell and a system of two membrane reaction–diffusion equations is
formulated on the cell surface.
Sharma and Morgan [31] studied a coupled reaction–diffusion system with $m$
components in the bulk coupled with $n$ components on the boundary and, under
certain conditions, established the local and global wellposedness of the
model. They further established the uniform boundedness of the solution.
Recent advances in mathematical modelling and developmental biology identify
the evolution of the domain during the reaction process as central to the
formation of patterns, both empirically (Kondo and Asai [12]) and
computationally (Comanici and Golubitsky [5]; Crampin et al. [3]).
Experimental observations on the skin pigmentation of certain species of fish
have shown that patterns evolve in a dynamic manner during the growth of the
developing animal. Kondo and Asai [12] describe observations on the marine
angelfish Pomacanthus semicirculatus, where juveniles display a regular array
of vertical stripes which increase in number during growth, with new stripes
appearing in the gaps between existing ones as the animal doubles in length.
Further, in [3], Crampin et al. investigated the sequence of patterns
generated by a reaction–diffusion system on a growing domain. They derived a
general evolution equation to incorporate domain growth in reaction–diffusion
models and considered the case of slow, isotropic domain growth in one spatial
dimension. The results presented by Crampin et al. [3] suggest, at least in
one dimension, that growth may in fact stabilize the frequency-doubling
sequence and hence may be a mechanism for robust pattern formation. Also in
this respect, many numerical studies of RDS's on evolving domains are
available, such as Barrass et al. [2], Madzvamuse and Maini [19], and
Madzvamuse [21]. Kulesa et al. [14] have incorporated exponential domain
growth into a model for the spatio-temporal sequence of initiation of tooth
primordia in Alligator mississippiensis.
In the model, domain growth plays a central role in establishing the order in
which tooth precursors appear.
A specific feature of reaction-diffusion patterns on growing domains is the
tendency of stripe patterns to double the number of stripes each time the
domain doubles in length, called mode-doubling. Since their seminal
introduction by Turing [34], reaction-diffusion systems (RDS’s) have
constituted a standard framework for the mathematical modelling of pattern
formation in chemistry and biology. Numerous studies on the stability of
solutions of RDS’s on fixed domains are available, for example, Hollis et al.
[11]; Rothe [29]; Sharma and Morgan [31], but there is very little literature
regarding the global wellposedness of solutions of RDS's on evolving domains.
In the direction of stability, Madzvamuse et al. [20] provide a linear
stability analysis of RDS's on continuously evolving domains, and Labadie [17]
examines the stability of solutions of RDS's on monotonically growing
surfaces. Chandrashekar et al. [35] showed that if an RDS fulfils a restricted
version of certain stability conditions, introduced by Morgan [22] for a fixed
domain, then the RDS fulfils the same stability conditions on any bounded,
spatially linear, isotropic evolution of the domain. They prove that, under
certain conditions, the existence and uniqueness for an RDS on a fixed domain
implies the existence
and uniqueness for the corresponding RDS on an evolving domain. This is, to
our best knowledge, the first result that holds independently of the growth
rate and is thus valid on growing or contracting domains as well as domains
that exhibit periods of growth and periods of contraction. Such models also
arise in tissue engineering and regenerative medicine, for instance in
electrospun membranes, which are useful in applications such as filtration
systems and sensors for chemical detection.
In [4], [24], [13] and the references therein, the authors derive the equation
for the reaction diffusion equation on a growing manifold with or without
boundary. They imposed special growth conditions, such as isotropic (including
exponential) or anisotropic growth, and studied the behaviour of solutions.
More precisely, they studied pattern formation on a manifold beginning with an
initial static pattern and compared it with the final pattern after the
manifold stops growing. The main focus of their work has been stability
analysis and numerical simulations to study the development of patterns with
growth on curved surfaces.
In this paper, we prove global existence of solutions of a reaction diffusion
system on a domain in $\mathbb{R}^{n}$ evolving with time. In [11], the
authors proved global existence and uniform boundedness for a class of two
component reaction diffusion systems where one of the components is given to
be a priori bounded as long as the solution exists. This was extended in [30]
to more general systems involving two components and with Neumann boundary
conditions, using a Lyapunov type functional to derive the a priori estimates.
Keeping in mind possible applications to systems such as the Brusselator (see
Section 7), here we use the techniques of [30] to obtain global existence for
a two component reaction diffusion system on an evolving domain in the case
when one of the components remains a priori bounded. Extension of the
estimates of Fabes-Riviere [8] to a more general operator and construction of
a suitable Lyapunov functional are crucial ingredients in our proof to obtain
Hölder and $L_{p}$ estimates. These results, as well as local existence, are
proved here for the $m$ component system of reaction diffusion equations. We
also define a Lyapunov functional, different from the one used in [31] and
[1], which can be used to obtain $L_{p}$ estimates for the $m$ component
system, as in these references. Once this is done, the global existence for
the general case of a system of $m$ components on an evolving domain will
follow from arguments similar to [31].
As in many of the existing works, we consider here dilational anisotropic as
well as isotropic growth, though the arguments extend to a more general
growth. Consider compact domains $\Omega_{t}\subset\mathbb{R}^{n}$, $t\geq 0$
with boundary $\partial\Omega_{t}=\Gamma_{t}$ evolving according to the given
law (flow) $y(x,t)$ so that we can represent
$\Omega_{t}=y_{t}(\Omega_{0})=y_{t}(\Omega)$, $t\geq 0$ where
$\Omega_{0}=\Omega$ is the initial domain. We assume that $y$ is a
diffeomorphism and, as in [13], it is separable in the $t$ and $x$ variables. In
practice, one expects that for an arbitrary domain, at a future time $t$, the
boundary $\Gamma_{t}$ may begin self intersecting, or the domain $\Omega_{t}$
may split. Here we are interested in modeling situations where the domain does
not break up and the boundary evolves in such a way that $\Gamma_{t}$
continues to remain smooth. So, without loss of generality we assume that the
domain and hence the boundary remain asymptotically close to a fixed domain,
which we denote by $\Omega_{\infty}$ with boundary $\Gamma_{\infty}$, and that
for each $t\geq 0$, $\Gamma_{t}$ is $C^{2+\mu}$. Letting $c_{i}$ denote the
concentration of the $i$-th component, we consider the system of equations
$\displaystyle\frac{\partial c_{i}}{\partial t}$ $\displaystyle=$
$\displaystyle d_{i}\Delta c_{i}+f_{i}(c_{1},\ldots,c_{m})\quad
y\in\Omega_{t},t\in(0,T),1\leq i\leq m,$ $\displaystyle d_{i}\frac{\partial
c_{i}}{\partial\eta_{t}}$ $\displaystyle=$ $\displaystyle
g_{i}(c_{1},\ldots,c_{m})\quad y\in\Gamma_{t},t\in(0,T),1\leq i\leq m,$ (1.1)
$\displaystyle c_{i}$ $\displaystyle=$ $\displaystyle c^{0}_{i}\quad
y\in\Omega_{0}=\Omega,t=0,1\leq i\leq m$
where $\eta_{t}$ denotes the outward unit normal vector to the boundary
$\Gamma_{t}$ and $\Omega_{0}=\Omega$ is the initial domain.
Note that our results can be generalised to a manifold $(\Omega,{\it g})$ with
boundary where Laplacian $\Delta$ is replaced by the Laplace-Beltrami operator
$\Delta_{g}$ corresponding to the Riemannian metric ${\it g}$. This will be
done in our future work [33], where we prove the global existence of solutions
for volume-surface reaction diffusion systems on manifolds.
The first step is to transform the system of equations (1) on $\Omega_{t}$ to
an equivalent system on the initial domain $\Omega$, as in [4], [24], [13].
The information on how the domain evolves is captured in the diffusion term of
the transformed equation, and generally the evolution of the domain is
described by a flow which is separable in the time and spatial variables. We
note that in [13] a suitable transformation was used so that the diffusion
term in the resulting equation does not have a time dependent coefficient. In
contrast, we analyze the transformed equation whose diffusion term depends on
the time variable.
The plan of the paper is as follows. We begin with the derivation of the
equations on the evolving domain and the reduction to a pull back system on
the initial domain $\Omega$ in Section 2. Section 3 contains the primary
assumptions on the
vector fields $f$ and $g$, and statements of our local and global existence
results. In addition to quasi-positivity assumptions that guarantee the
componentwise nonnegativity of solutions, we also assume polynomial bounds,
and that the reaction vector fields satisfy a condition that is similar to the
condition given in [1] and [30]. In Section 4 we discuss the Hölder estimates
which will be useful in establishing the global wellposedness of the model on
a growing domain. Local existence is established in Section 5, and in Section
6 we develop a bootstrapping process based upon duality estimates and provide
a proof of our global existence result. Section 7 contains a few examples.
## 2\. Equation for evolving domains
Here we show how to reduce the system (1) on $\Omega_{t}$ to a system on the
fixed domain $\Omega$. For simplicity of notations and keeping in mind
practical applications, we show the derivation for domains
$\Omega_{t}\subset\mathbb{R}^{3}$. Let $y_{t}:\Omega\to\mathbb{R}^{3}$ be a
one parameter family of diffeomorphisms such that $y_{0}=Id$, the identity map
and $\Omega_{t}=y_{t}(\Omega)$ denote the domain evolving with time $t\geq 0$
such that $\Omega_{0}=\Omega$. We obtain a parametrization for $\Omega_{t}$ by
writing $y\in\Omega_{t}$ as
$y=y(x,t)=(y_{1}(x,t),y_{2}(x,t),y_{3}(x,t))=y_{t}(x)$ for $t\geq 0$, so that
$y(x,0)=x\in\Omega_{0}=\Omega.$
If $c$ denotes the chemical concentration in the domain $\Omega_{t}$, then the
diffusion process for $c$ is driven by the equation
$\frac{d}{dt}\int\limits\limits\limits_{\Omega_{t}}\left(c(y)\,d\Omega_{t}\right)=D\int\limits\limits\limits_{\partial\Omega_{t}}\nabla
c(y)\cdot\nu_{t}\,d\sigma_{t}$ (2.1)
where $d\Omega_{t}=dy$ is the volume element in $\Omega_{t}$, $\sigma_{t}$ is
a parametrization for $\partial\Omega_{t}$ and $d\sigma_{t}$ is the surface
area element for $\partial\Omega_{t}$. Since $y$ is a diffeomorphism, we have
$dy={\sqrt{det(Dy(x,t))}dx}$ and hence
$\displaystyle\frac{d}{dt}\int\limits\limits\limits_{\Omega_{t}}\left(c(y,t)\,dy\right)=\frac{d}{dt}\int\limits_{\Omega}c(y(x,t),t)\,{\sqrt{det(Dy(x,t))}}\,dx$
(2.2) $\displaystyle=$
$\displaystyle\int\limits\limits\limits_{\Omega}\left\\{\frac{dy}{dt}(x,t)\cdot\nabla
c(y(x,t),t)+\frac{dc}{dt}(y(x,t),t)\right\\}\,{\sqrt{det(Dy(x,t))}}\,dx$
$\displaystyle+\int\limits\limits\limits_{\Omega}c(y(x,t),t)\frac{d}{dt}\left({\sqrt{det(Dy(x,t))}}\right)\,dx,$
while using the Stokes theorem and change of variables, we see that
$\displaystyle D\int\limits\limits\limits_{\partial\Omega_{t}}\nabla
c(y,t)\cdot\nu_{t}\,d\sigma_{t}(y)$ $\displaystyle=$ $\displaystyle
D\int\limits\limits\limits_{\Omega_{t}}\Delta c(y)\,dy$ (2.3) $\displaystyle=$
$\displaystyle
D\int\limits\limits\limits_{\Omega}\Delta_{t}c((y(x),t),t)\,{\sqrt{det(Dy(x,t))}}dx.$
Note, $\frac{\partial}{\partial
y_{i}}c(y,t)=\sum\limits_{j=1}^{3}\frac{\partial x_{j}}{\partial
y_{i}}\frac{\partial c}{\partial x_{j}}(y(x,t),t)$ so that
$\frac{\partial^{2}}{\partial y_{i}^{2}}c(y,t)=\sum_{j,k=1}^{3}\frac{\partial
x_{j}}{\partial y_{i}}\frac{\partial x_{k}}{\partial
y_{i}}\frac{\partial^{2}c}{\partial x_{j}\partial
x_{k}}(y(x,t),t)+\sum_{j=1}^{3}\frac{\partial^{2}x_{j}}{\partial
y_{i}^{2}}\frac{\partial c}{\partial x_{j}}(y(x,t),t).$
Thus,
$\Delta_{t}=\sum_{i=1}^{3}\sum_{j,k=1}^{3}\frac{\partial x_{j}}{\partial
y_{i}}\frac{\partial x_{k}}{\partial y_{i}}\frac{\partial^{2}}{\partial
x_{j}\partial
x_{k}}+\sum_{i=1}^{3}\sum_{j=1}^{3}\frac{\partial^{2}x_{j}}{\partial
y_{i}^{2}}\frac{\partial}{\partial x_{j}}$ (2.4)
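To make (2.4) concrete, note that for the diagonal (dilational) flow $y=A(t)x$ considered below, $\Delta_{t}$ reduces to $\sum_{i}\lambda_{i}^{-2}\,\partial^{2}/\partial x_{i}^{2}$. The following sketch (with hypothetical values of $\lambda_{i}$ and a hypothetical test concentration $c$, chosen purely for illustration) checks numerically that the pulled-back operator reproduces $\Delta_{y}c$:

```python
# Check Delta_t u(x) = Delta_y c(y) for the diagonal flow y = A x.
# lam and c are illustrative choices, not from the paper.
lam = (2.0, 3.0, 5.0)

def c(y):
    # test concentration on Omega_t; Delta_y c = 2*(1 + 2 + 3) = 12 everywhere
    return y[0]**2 + 2.0 * y[1]**2 + 3.0 * y[2]**2

def u(x):
    # pulled-back concentration u(x) = c(A x)
    return c([lam[i] * x[i] for i in range(3)])

def lap_t(f, x, h=1e-4):
    # Delta_t f = sum_i lambda_i^{-2} * (second central difference in x_i)
    total = 0.0
    for i in range(3):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        total += (f(xp) - 2.0 * f(x) + f(xm)) / h**2 / lam[i]**2
    return total

# Delta_t u must reproduce Delta_y c = 12 at the corresponding point
assert abs(lap_t(u, [0.3, -0.7, 1.1]) - 12.0) < 1e-4
```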
Combining equations (2.1)–(2.3), we get that the concentration in the domain
$\Omega_{t}$ satisfies the equation
$\displaystyle\frac{dc}{dt}(y(x,t),t)+\frac{dy}{dt}(x,t)\cdot\nabla
c(y(x,t),t))$ (2.5) $\displaystyle=$ $\displaystyle
D\Delta_{t}c((y(x),t),t)-\frac{1}{{\sqrt{det(Dy(x,t))}}}\frac{d}{dt}\left({\sqrt{det(Dy(x,t))}}\right)c(y(x,t),t)$
for $x\in\Omega$ and $t\geq 0$. Define the function $u:\Omega\times[0,T)\to\mathbb{R}$ as
$u(x,t):=c(y(x,t),t),$
then
$\frac{d}{dt}u(x,t)=\frac{d}{dt}c(y(x,t),t)=\frac{dc}{dt}(y(x,t),t)+\frac{dy}{dt}(x,t)\cdot\nabla
c(y(x,t),t)$. The equation (2.5) can now be written in terms of $u$ as
$\frac{du}{dt}(x,t)=D\Delta_{t}u(x,t)-\frac{1}{{\sqrt{det(Dy(x,t))}}}\frac{d}{dt}\left({\sqrt{det(Dy(x,t))}}\right)u(x,t),~{}\mbox{for}~{}(x,t)\in\Omega\times[0,t).$
(2.6)
In particular, consider the flow $y(x,t):=A(t)x$, $t\geq 0$, where
$A(t):\mathbb{R}^{3}\to\mathbb{R}^{3}$ is a family of diffeomorphisms with
$A(0)=Id$, the identity map, so that $\Omega_{t}=A(t)\Omega$. As a special
case we let
$A(t)=\left(\begin{array}[]{ccc}\lambda_{1}(t)&0&0\\\ 0&\lambda_{2}(t)&0\\\
0&0&\lambda_{3}(t)\end{array}\right)$
so that there is dilational growth, which is isotropic if
$\lambda_{1}(t)=\lambda_{2}(t)=\lambda_{3}(t)$ and anisotropic otherwise. The
equation (2.6) in this case is
$\frac{du}{dt}(x,t)=D\Delta_{t}u(x,t)-\frac{({\sqrt{\lambda_{1}(t)\lambda_{2}(t)\lambda_{3}(t)}})^{\prime}}{{\sqrt{\lambda_{1}(t)\lambda_{2}(t)\lambda_{3}(t)}}}u(x,t),~{}\mbox{for}~{}(x,t)\in\Omega\times[0,t)$
(2.7)
with
$\Delta_{t}=\sum\limits_{i=1}^{3}\frac{1}{\lambda_{i}(t)^{2}}\frac{\partial^{2}}{\partial
x_{i}^{2}}$
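For spatially uniform $u$ with no reaction term, (2.7) reduces to the dilution ODE $u'=-a(t)u$, whose solution (with $\lambda_{i}(0)=1$) is $u(t)=u_{0}/\sqrt{\lambda_{1}\lambda_{2}\lambda_{3}}$, so that the pulled-back quantity $u\sqrt{\lambda_{1}\lambda_{2}\lambda_{3}}$ is conserved. A minimal numerical check, assuming hypothetical isotropic exponential growth $\lambda_{i}(t)=e^{\rho t}$ (the value of $\rho$ below is an arbitrary illustration):

```python
import math

rho_g = 0.3  # illustrative growth rate (hypothetical value)

def lam(t):
    # isotropic exponential growth lambda_i(t) = exp(rho_g * t)
    return math.exp(rho_g * t)

def a(t, h=1e-5):
    # a(t) = (sqrt(lam1*lam2*lam3))' / sqrt(lam1*lam2*lam3), central differences
    s = lambda t: math.sqrt(lam(t) ** 3)
    return (s(t + h) - s(t - h)) / (2.0 * h) / s(t)

def integrate(u0, T, steps=2000):
    # RK4 for the dilution ODE u' = -a(t) u (uniform, reaction-free (2.7))
    dt = T / steps
    u, t = u0, 0.0
    for _ in range(steps):
        k1 = -a(t) * u
        k2 = -a(t + dt / 2) * (u + dt / 2 * k1)
        k3 = -a(t + dt / 2) * (u + dt / 2 * k2)
        k4 = -a(t + dt) * (u + dt * k3)
        u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return u

# u(t) = u0 / sqrt(lam(t)^3), i.e. u * sqrt(det A(t)) stays equal to u0
assert abs(integrate(1.0, 2.0) - 1.0 / math.sqrt(lam(2.0) ** 3)) < 1e-6
```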
Let
$\sigma:[\alpha_{1},\alpha_{2}]\times[\beta_{1},\beta_{2}]\to\mathbb{R}^{3}$
be a parametrization of $\partial\Omega=\Gamma$ with
$\sigma(\alpha,\beta)=(x(\alpha,\beta),y(\alpha,\beta),z(\alpha,\beta))$
so that we can express parametrization for $\partial\Omega_{t}=\Gamma_{t}$ as
$\sigma^{t}:[\alpha_{1},\alpha_{2}]\times[\beta_{1},\beta_{2}]\to\mathbb{R}^{3}$
with
$\sigma^{t}(\alpha,\beta)=\sigma(\alpha,\beta,t)=(x(\alpha,\beta,t),y(\alpha,\beta,t),z(\alpha,\beta,t))$
where for $t=0$,
$\sigma^{0}(\alpha,\beta)=\sigma(\alpha,\beta,0)=\sigma(\alpha,\beta)$
For a point $p_{t}=\sigma^{t}(\alpha_{0},\beta_{0})\in\Gamma_{t}$, the tangent
plane is
$T_{p_{t}}\Gamma_{t}=span\\{\sigma^{t}_{\beta}(\alpha_{0},\beta_{0}),\sigma^{t}_{\alpha}(\alpha_{0},\beta_{0})\\}$
and the normal at this point is
$\nu_{t}(\alpha_{0},\beta_{0})=\sigma^{t}_{\alpha}\times\sigma^{t}_{\beta}(\alpha_{0},\beta_{0}).$
(2.8)
The boundary condition is given by
$D\nabla c\cdot\nu_{t}=G(u,v,t)\quad\mbox{on}\quad\Gamma_{t},$ (2.9)
which can be transformed to fixed boundary $\Gamma$ as
$D\nabla_{t}u(\sigma(\alpha,\beta,t))\cdot\nu=G(u,v,t)\quad\mbox{on}\quad\Gamma$
(2.10)
where $\nu_{t}$ is defined as in (2.8). With our parametrization,
$\Gamma_{t}=y_{t}(\Gamma)$ can be simply expressed as
$\sigma_{t}(\alpha,\beta)=y_{t}(x(\alpha,\beta),y(\alpha,\beta),z(\alpha,\beta))$
so that the area element for $\Gamma_{t}$ is $\sqrt{\det y_{t}}d\sigma$ and
hence the pull back of this surface measure on $\Gamma$ will be $\sqrt{\det
A(t)}d\sigma$. We may also consider a general $A(t)=(a_{ij}(t))$, in which
case we obtain a more complicated expression for $\Delta_{t}$, and the rest of
the arguments follow similarly.
## 3\. Notations and Main results
Throughout this paper, $n\geq 2$ and $\Omega$ is a bounded domain in
$\mathbb{R}^{n}$ with smooth boundary $\Gamma$ ($\partial\Omega$) belonging to
the class $C^{2+\mu}$ with $\mu>0$ such that $\Omega$ lies locally on one side
of its boundary, $\eta$ is the unit outward normal (from $\Omega$) to
$\partial\Omega$, and $\Delta$ is the Laplace operator. In addition, $m,k,n,i$
and $j$ are positive integers.
### 3.1. Basic Function Spaces
Let $\mathcal{C}$ be a bounded domain in $\mathbb{R}^{m}$ with smooth boundary
such that $\mathcal{C}$ lies locally on one side of $\partial\mathcal{C}$. We
define all function spaces on $\mathcal{C}$ and
$\mathcal{C}_{T}=\mathcal{C}\times(0,T)$. $L_{q}(\mathcal{C})$ is the Banach
space consisting of all measurable functions on $\mathcal{C}$ that are
$q^{th}(q\geq 1)$ power summable on $\mathcal{C}$. The norm is defined as
$\|u\|_{q,\mathcal{C}}=\left(\int_{\mathcal{C}}|u(x)|^{q}dx\right)^{\frac{1}{q}}$
Also,
$\|u\|_{\infty,\mathcal{C}}=ess\sup\\{|u(x)|:x\in\mathcal{C}\\}.$
Measurability and summability are to be understood everywhere in the sense of
Lebesgue.
If $p\geq 1$, then $W^{2}_{p}(\mathcal{C})$ is the Sobolev space of functions
$u:\mathcal{C}\rightarrow\mathbb{R}$ with generalized derivatives
$\partial_{x}^{s}u$ (in the sense of distributions), $|s|\leq 2$, belonging to
$L_{p}(\mathcal{C})$. Here
$s=(s_{1},s_{2},\ldots,s_{n})$, $|s|=s_{1}+s_{2}+\cdots+s_{n}$, $|s|\leq 2$, and
$\partial_{x}^{s}=\partial_{1}^{s_{1}}\partial_{2}^{s_{2}}$…$\partial_{n}^{s_{n}}$
where $\partial_{i}=\frac{\partial}{\partial x_{i}}$. The norm in this space
is
$\|u\|_{p,\mathcal{C}}^{(2)}=\sum_{|s|=0}^{2}\|\partial_{x}^{s}u\|_{p,\mathcal{C}}$
Similarly, $W^{2,1}_{p}(\mathcal{C}_{T})$ is the Sobolev space of functions
$u:\mathcal{C}_{T}\rightarrow\mathbb{R}$ with generalized derivatives,
$\partial_{x}^{s}\partial_{t}^{r}u$ (in the sense of distributions) where
$2r+|s|\leq 2$ and each derivative belonging to $L_{p}(\mathcal{C}_{T})$. The
norm in this space is
$\|u\|_{p,\mathcal{C}_{T}}^{(2)}=\sum_{2r+|s|=0}^{2}\|\partial_{x}^{s}\partial_{t}^{r}u\|_{p,\mathcal{C}_{T}}$
In addition to $W^{2,1}_{p}(\mathcal{C}_{T})$, we will encounter other spaces
with two different ratios of upper indices, $W_{2}^{1,0}(\mathcal{C}_{T})$,
$W_{2}^{1,1}(\mathcal{C}_{T})$, $V_{2}(\mathcal{C}_{T})$,
$V_{2}^{1,0}(\mathcal{C}_{T})$, and $V_{2}^{1,\frac{1}{2}}(\mathcal{C}_{T})$
as defined in [16].
We also introduce $W^{l}_{p}(\mathcal{C})$, where $l>0$ is not an integer,
because initial data will be taken from these spaces. The space
$W^{l}_{p}(\mathcal{C})$ with nonintegral $l$, is a Banach space consisting of
elements of $W^{[l]}_{p}$ ([$l$] is the largest integer less than $l$) with
the finite norm
$\|u\|_{p,\mathcal{C}}^{(l)}=\langle
u\rangle_{p,\mathcal{C}}^{(l)}+\|u\|_{p,\mathcal{C}}^{([l])}$
where
$\|u\|_{p,\mathcal{C}}^{([l])}=\sum_{s=0}^{[l]}\|\partial_{x}^{s}u\|_{p,\mathcal{C}}$
and
$\langle u\rangle_{p,\mathcal{C}}^{(l)}=\sum_{|s|=[l]}\left(\int_{\mathcal{C}}dx\int_{\mathcal{C}}\frac{|\partial_{x}^{s}u(x)-\partial_{y}^{s}u(y)|^{p}}{|x-y|^{n+p(l-[l])}}\,dy\right)^{\frac{1}{p}}$
$W^{l,\frac{l}{2}}_{p}(\partial\mathcal{C}_{T})$ spaces with nonintegral $l$
also play an important role in the study of boundary value problems with
nonhomogeneous boundary conditions, especially in the proof of exact estimates
of their solutions. For $p\geq 1$ this is a Banach space, defined by means of
a parametrization of the surface $\partial\mathcal{C}$. For a rigorous
treatment of these spaces, we refer the reader to page 81 of Chapter 2 of
[16].
The use of the spaces $W^{l,\frac{l}{2}}_{p}(\partial\mathcal{C}_{T})$ is
connected to the fact that the differential properties of the boundary values
of functions from $W^{2,1}_{p}(\mathcal{C}_{T})$ and of certain of its
derivatives, $\partial_{x}^{s}\partial_{t}^{r}$, can be exactly described in
terms of the spaces $W^{l,\frac{l}{2}}_{p}(\partial\mathcal{C}_{T})$, where
$l=2-2r-|s|-\frac{1}{p}$.
For $0<\alpha,\beta<1$, $C^{\alpha,\beta}(\overline{\mathcal{C}_{T}})$ is the
Banach space of Hölder continuous functions $u$ with the finite norm
$|u|^{(\alpha)}_{\overline{\mathcal{C}_{T}}}=\sup_{(x,t)\in{\mathcal{C}_{T}}}|u(x,t)|+[u]^{(\alpha)}_{x,\mathcal{C}_{T}}+[u]^{(\beta)}_{t,\mathcal{C}_{T}}$
where
$[u]^{(\alpha)}_{x,{\overline{\mathcal{C}_{T}}}}=\sup_{\begin{subarray}{c}(x,t),(x^{\prime},t)\in{\mathcal{C}_{T}}\\\
x\neq
x^{\prime}\end{subarray}}\frac{|u(x,t)-u(x^{\prime},t)|}{|x-x^{\prime}|^{\alpha}}$
and
$[u]^{(\beta)}_{t,{\overline{\mathcal{C}_{T}}}}=\sup_{\begin{subarray}{c}(x,t),(x,t^{\prime})\in{\mathcal{C}_{T}}\\\
t\neq
t^{\prime}\end{subarray}}\frac{|u(x,t)-u(x,t^{\prime})|}{|t-t^{\prime}|^{\beta}}$
We shall denote the space
$C^{\frac{\alpha}{2},\frac{\alpha}{2}}(\overline{\mathcal{C}_{T}})$ by
$C^{\frac{\alpha}{2}}(\overline{\mathcal{C}}_{T})$.
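As a standalone numerical illustration of the seminorm $[u]^{(\alpha)}_{x}$ (an example of ours, not taken from [16]): for $u(x)=\sqrt{x}$ on $[0,1]$ and $\alpha=\frac{1}{2}$, the supremum equals $1$, attained against the endpoint $x'=0$ since $(\sqrt{x}-\sqrt{y})/\sqrt{x-y}=\sqrt{x-y}/(\sqrt{x}+\sqrt{y})\leq 1$:

```python
import math

# discretized Hölder-1/2 seminorm of u(x) = sqrt(x) on [0, 1]
xs = [i / 200 for i in range(201)]
u = [math.sqrt(x) for x in xs]
ratios = [abs(u[i] - u[j]) / math.sqrt(abs(xs[i] - xs[j]))
          for i in range(201) for j in range(i)]
seminorm = max(ratios)

# the supremum 1 is attained exactly at the pair (x, 0)
assert abs(seminorm - 1.0) < 1e-9
```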
$C(\mathcal{C}_{T},\mathbb{R}^{n})$ is the set of all continuous functions
$u:\mathcal{C}_{T}\rightarrow\mathbb{R}^{n}$, and
$C^{1,0}(\mathcal{C}_{T},\mathbb{R}^{n})$ is the set of all continuous
functions $u:\mathcal{C}_{T}\rightarrow\mathbb{R}^{n}$ for which $u_{x_{i}}$
is continuous for all $1\leq i\leq n$.
$C^{2,1}(\mathcal{C}_{T},\mathbb{R}^{n})$ is the set of all continuous
functions $u:\mathcal{C}_{T}\rightarrow\mathbb{R}^{n}$ having continuous
derivatives $u_{x_{i}},u_{{x_{i}}{x_{j}}}\ \text{and}\ u_{t}$ in
$\mathcal{C}_{T}$. Note that similar definitions can be given on
$\overline{\mathcal{C}_{T}}$.
Assumptions on the system: Let $\Omega\subset\mathbb{R}^{n}$ with $C^{2+\mu}$
boundary $\Gamma$ for some $\mu>0$. We consider the system
$\left.\begin{array}[]{rll}\frac{du}{dt}(x,t)&=&{\mathcal{L}}u(x,t)+f_{1}(u,v),\quad~{}\mbox{in}~{}\Omega\times[0,t)\\\
\frac{dv}{dt}(x,t)&=&\tilde{\mathcal{L}}v(x,t)+f_{2}(u,v),\quad~{}\mbox{in}~{}\Omega\times[0,t)\\\
\nabla_{t}u\cdot\eta=g_{1}(u,v),&&\nabla_{t}v\cdot\eta=g_{2}(u,v)\quad~{}\mbox{on}~{}\partial\Omega\times[0,t)\\\
u=u_{0},&&v=v_{0}\quad~{}\mbox{in}~{}(x,t)\in\Omega\times\\{0\\}\end{array}\right\\}$
(3.1)
with the operator
${\mathcal{L}}=D\sum\limits_{i=1}^{n}\frac{1}{\lambda_{i}(t)^{2}}\frac{\partial^{2}}{\partial
x_{i}^{2}}-\frac{\left({\sqrt{\prod\limits_{i=1}^{n}\lambda_{i}(t)}}\right)^{\prime}}{{\sqrt{\prod\limits_{i=1}^{n}\lambda_{i}(t)}}}=D\Delta_{t}-a(t)$
(3.2)
and
$\tilde{\mathcal{L}}=\tilde{D}\Delta_{t}-a(t)$ (3.3)
where
$a(t)=\frac{\left({\sqrt{\prod\limits_{i=1}^{n}\lambda_{i}(t)}}\right)^{\prime}}{{\sqrt{\prod\limits_{i=1}^{n}\lambda_{i}(t)}}}.$
(3.4)
We assume that there exist constants $\Lambda_{1}$, $\Lambda_{2}>0$ and
$k_{1}$, $k_{2}>0$ such that
$\left.\begin{array}[]{lll}&&\Lambda_{1}\leq\frac{1}{\lambda_{i}^{2}(t)}\leq\Lambda_{2},\,i=1,\ldots,n\\\
&&k_{1}\leq a(t)\leq k_{2}.\end{array}\right\\}$ (3.5)
Here
$\nabla_{t}=A(t)^{-1}\nabla=\left(\frac{1}{\lambda_{1}(t)}\partial_{x_{1}},\frac{1}{\lambda_{2}(t)}\partial_{x_{2}},\ldots,\frac{1}{\lambda_{n}(t)}\partial_{x_{n}}\right)$
(3.6)
and $\eta$ is the unit outward normal vector on $\Gamma$.
For the sake of completeness, we also mention here the extension to $m$
components for evolving domains. That is, let ${\bf u}=(u_{1},\ldots,u_{m})$
be a solution of the system
$\left.\begin{array}[]{rll}\frac{\partial u_{i}}{\partial
t}(x,t)&=&{\mathcal{L}}u_{i}(x,t)+f_{i}(u),\quad~{}\mbox{in}~{}\Omega\times[0,t)\\\
\nabla_{t}u_{i}\cdot\eta&=&g_{i}(u)\quad~{}\mbox{on}~{}\partial\Omega\times[0,t)\\\
u_{i}&=&u_{0},\quad~{}\mbox{in}~{}(x,t)\in\Omega\times\\{0\\}\end{array}\right\\}1\leq
i\leq m,$ (3.7)
with
${\mathcal{L}}=D\sum\limits_{k=1}^{n}\frac{1}{\lambda_{k}(t)^{2}}\frac{\partial^{2}}{\partial
x_{k}^{2}}-\frac{\left({\sqrt{\prod\limits_{i=1}^{n}\lambda_{i}(t)}}\right)^{\prime}}{{\sqrt{\prod\limits_{i=1}^{n}\lambda_{i}(t)}}}=D\Delta_{t}-a(t)$
(3.8)
$a(t)$ and $\nabla_{t}$ as before. We remark that throughout,
$\mathbb{R}^{m}_{+}$ is the nonnegative orthant in $\mathbb{R}^{m}$, $m\geq
2$. Here we list the assumptions required to prove our results for a general
$m$ component system, with the understanding that for $m=2$, we denote
$u_{1}=u$, $u_{2}=v$ and $d_{1}=D$, $d_{2}=\tilde{D}$.
1. ($V_{\text{N}}$)
${u_{0}}=({u_{0}}_{i})\in C^{2}(\overline{\Omega})$ and is componentwise
nonnegative on $\overline{\Omega}$. Moreover, $u_{0}$ satisfies the
compatibility condition
$d_{i}\frac{\partial{u_{0}}_{i}}{\partial\eta}=g_{i}({u_{0}})\quad\text{on}\
\Gamma,\quad\mbox{for all }i=1,\ldots,m.$
2. ($V_{\text{F}}$)
$f_{i},g_{i}:\mathbb{R}^{m}\rightarrow\mathbb{R}$, for $i=1,\ldots,m$ are
locally Lipschitz.
3. ($V_{\text{QP}}$)
$f$ and $g$ are quasi positive. That is, for each $i=1,...,m$, if
$u\in\mathbb{R}^{m}_{+}$ with $u_{i}=0$ then $f_{i}(u),g_{i}(u)\geq 0$.
4. ($V_{\text{L1 }}$)
There exist constants $b_{j}>0$ and $L_{1}>0$ such that
$\sum_{j=1}^{m}b_{j}f_{j}(z),\sum_{j=1}^{m}b_{j}g_{j}(z)\leq
L_{1}\left(\sum_{j=1}^{m}z_{j}+1\right)\quad\mbox{for all
}z\in\mathbb{R}^{m}_{+}.$
5. ($V_{\text{Poly}}$)
For $i=1,\ldots,m$, $f_{i}$ and $g_{i}$ are polynomially bounded. That is,
there exist $K_{fg}>0$ and a natural number $l$ such that
$f_{i}(z),g_{i}(z)\leq K_{fg}\left(\sum_{j=1}^{m}z_{j}+1\right)^{l}\ \text{for all}\ z\in\mathbb{R}^{m}_{+}.$
Under the assumption that $f=(f_{1},\ldots,f_{m})$ and
$g=(g_{1},\ldots,g_{m})$ are locally Lipschitz, we are able to prove the
following local existence result.
###### Theorem 3.1.
(Local Existence) Suppose $(V_{N})$, $(V_{F})$, and $(V_{QP})$ hold. Then
there exists $T_{\max}>0$ such that $\left(\ref{grow}\right)$ has a unique,
maximal, component-wise nonnegative solution $(u,v)$ with $T=T_{\max}$.
Moreover, if $T_{\max}<\infty$ then
$\displaystyle\limsup_{t\to
T^{-}_{\max}}\|u(\cdot,t)\|_{\infty,\Omega}+\limsup_{t\to
T^{-}_{\max}}\|v(\cdot,t)\|_{\infty,\Omega}=\infty.$
We remark that this local existence result is true for $m$ components with
$m\geq 2$ though we have indicated the proof here only for $m=2$. The
following result gives global existence of solutions of (3.1) in case we know
that one of the components is bounded by a suitable function.
###### Theorem 3.2.
Suppose $(V_{N})$, $(V_{F})$, $(V_{QP})$, $(V_{L1})$ and $(V_{Poly})$ hold,
and let $T_{\max}>0$ be given in Theorem 3.1. If there exists a nondecreasing
function $h\in C(\mathbb{R}_{+},\mathbb{R}_{+})$ such that
$\|v(\cdot,t)\|_{\infty,\Omega}\leq h(t)$ for all $0\leq t<T_{\max}$, and
there exists $K>0$ so that whenever $a\geq K$ there exists $L_{a}\geq 0$ so
that
$\displaystyle af_{1}(u,v)+f_{2}(u,v),\ ag_{1}(u,v)+g_{2}(u,v)\leq
L_{a}(u+v+1),\quad\text{for all}\quad(u,v)\in\mathbb{R}_{+}^{2},$ (3.9)
then (3.1) has a unique component-wise nonnegative global solution.
In order to prove global wellposedness, we need Hölder estimates of the
solution of the associated linearized problem. The Hölder estimates in Theorem
3.6 of [31] are extended to a more general operator ${\mathcal{L}}$ described
in (3.2).
Thus, consider the equation
$\left.\begin{array}[]{rll}\frac{\partial\varphi}{\partial
t}&={{\mathcal{L}}\varphi}+\theta&\mbox{ for}\quad(x,t)\in\Omega\times(0,T)\\\
d\nabla_{t}\varphi\cdot\eta&=\varphi_{1}&\mbox{
for}\quad(x,t)\in\Gamma\times(0,T)\\\ \varphi(x,0)&=\varphi_{0}(x)&\mbox{
for}\quad x\in\Omega\end{array}\right\\}$ (3.10)
where $\eta$ denotes the outward unit normal vector on $\Gamma$. Then,
###### Theorem 3.3.
Let $p>n+1$, $T>0$, $\theta\in L_{p}(\Omega\times(0,T))$, $\varphi_{1}\in
L_{p}(\Gamma\times(0,T))$ and $\varphi_{0}\in W^{2}_{p}(\Omega)$ such that
$d\frac{\partial\varphi_{0}}{\partial\eta}=\varphi_{1}(x,0)\quad\mbox{on}\quad\Gamma.$
(3.11)
Then, there exists a unique weak solution $\varphi\in
V^{1,\frac{1}{2}}_{2}(\Omega_{T})$ of (3.10) and a constant
$C_{p,T,||detA(t)||_{\infty}}>0$ independent of $\theta$, $\varphi_{1}$,
$\varphi_{0}$ such that for $0<\beta<1-\frac{n+1}{p}$
$|\varphi|^{\beta}_{\Omega_{T}}\leq
C_{p,T,||detA(t)||_{\infty}}\left(||\theta||_{p,\Omega_{T}}+||\varphi_{1}||_{p,\Gamma_{T}}+||\varphi_{0}||_{p,\Omega}^{(2)}\right).$
(3.12)
###### Theorem 3.4.
(Global Existence) Suppose $(V_{N})$, $(V_{F})$, $(V_{QP})$, $(V_{Poly})$ and
the condition $(V_{\text{L}})$ described below hold.
$\displaystyle(V_{\text{L}})~{}~{}~{}$ $\displaystyle\mbox{ There exists a
constant }K>0,\mbox{ such that for any
}a=(a_{1},...,a_{m-1},a_{m})\in\mathbb{R}^{m}$ $\displaystyle\mbox{ with
}a_{1},...,a_{m-1}\geq K,\mbox{ and }a_{m}=1,\mbox{ there exists a constant
}L_{a}\geq 0\mbox{ such that }$
$\displaystyle\sum_{j=1}^{m}a_{j}f_{j}(z),\sum_{j=1}^{m}a_{j}g_{j}(z)\leq
L_{a}\left(\sum_{j=1}^{m}z_{j}+1\right)\quad\text{for all}\quad
z\in\mathbb{R}^{m}_{+}.$
Then, $(\ref{m-grow})$ has a unique component-wise nonnegative global solution.
Note that, defining $c(y,t)=u(A(t)^{-1}y,t)$ for $y\in\Omega_{t}$, the above
results can be translated to the solutions ${\bf c}=(c_{1},\ldots,c_{m})$ of
(1) as follows:
###### Theorem 3.5.
(Local existence for evolving domain) Suppose $(V_{N})$, $(V_{F})$, and
$(V_{QP})$ hold. Then there exists $T_{\max}>0$ such that $(\ref{c0})$ has a
unique, maximal, component-wise nonnegative solution ${\bf c}$ with
$T=T_{\max}$. Moreover, if $T_{\max}<\infty$ then for all $i=1,\ldots,m$,
$\displaystyle\limsup_{t\to
T^{-}_{\max}}\|c_{i}(\cdot,t)\|_{\infty,\Omega_{t}}=\infty.$
###### Theorem 3.6.
For $m=2$, suppose $(V_{N})$, $(V_{F})$, $(V_{QP})$, $(V_{L1})$ and
$(V_{Poly})$ hold, and let $T_{\max}>0$ be given in Theorem 3.1. If there
exists a nondecreasing function $h\in C(\mathbb{R}_{+},\mathbb{R}_{+})$ such
that $\|c_{2}(\cdot,t)\|_{\infty,\Omega_{t}}\leq h(t)$ for all $0\leq t<T_{\max}$,
and there exists $K>0$ so that whenever $a\geq K$ there exists $L_{a}\geq 0$
so that
$\displaystyle af_{1}(u,v)+f_{2}(u,v),ag_{1}(u,v)+g_{2}(u,v)\leq
L_{a}(u+v+1),\quad\text{for all}\quad(u,v)\in\mathbb{R}_{+}^{2},$ (3.13)
then (1) has a unique component-wise nonnegative global solution ${\bf
c}=(c_{1},c_{2})$.
###### Theorem 3.7.
(Global existence for evolving domain) Suppose $(V_{N})$, $(V_{F})$,
$(V_{QP})$, $(V_{Poly})$ and the condition $(V_{\text{L}})$ described below
hold.
$\displaystyle(V_{\text{L}})~{}~{}~{}$ $\displaystyle\mbox{ There exists a
constant }K>0,\mbox{ such that for any
}a=(a_{1},...,a_{m-1},a_{m})\in\mathbb{R}^{m}$ $\displaystyle\mbox{ with
}a_{1},...,a_{m-1}\geq K,\mbox{ and }a_{m}=1,\mbox{ there exists a constant
}L_{a}\geq 0\mbox{ such that }$
$\displaystyle\sum_{j=1}^{m}a_{j}f_{j}(z),\sum_{j=1}^{m}a_{j}g_{j}(z)\leq
L_{a}\left(\sum_{j=1}^{m}z_{j}+1\right)\quad\text{for all}\quad
z\in\mathbb{R}^{m}_{+}.$
Then, $(\ref{c0})$ has a unique component-wise nonnegative global solution.
In the next section we obtain estimates for the linearized problem associated
to (3.7).
## 4\. Hölder Estimates: proof of Theorem 3.3
We begin by listing some of the results from [16] which will be used in this
as well as upcoming sections. Using the notations therein, we let
$\mathcal{L}(x,t,\partial_{x},\partial_{t})=\partial_{t}-\sum_{i,j=1}^{n}a_{i,j}(x,t)\frac{\partial^{2}}{\partial
x_{i}\partial x_{j}}+\sum_{i=1}^{n}a_{i}(x,t)\frac{\partial}{\partial
x_{i}}+a(x,t)$
denote a uniformly parabolic operator. Consider the Dirichlet problem
$\displaystyle\mathcal{L}u$ $\displaystyle=f(x,t)$ $\displaystyle
u|_{\partial\Omega_{T}}=\Phi(x,t)$ $\displaystyle~{}~{}u|_{t=0}=\phi(x)$ (4.1)
Then Theorem 9.1 from [16] states:
###### Lemma 4.1.
Let $q>1$. Suppose that the coefficients $a_{ij}$ of the operator
$\mathcal{L}$ are bounded continuous functions in $\overline{\mathcal{C}}_{T}$,
while $a_{i}$ and $a$ have finite norms
$\|a_{i}\|_{r,\mathcal{C}_{T}}^{(loc)}$ and
$\|a\|_{s,\mathcal{C}_{T}}^{(loc)}$ respectively, where
$r=\begin{cases}\max(q,n+2)&\quad\text{for}\ q\neq n+2\\\
n+2+\epsilon&\quad\text{for}\ q=n+2\\\ \end{cases}$
and
$s=\begin{cases}\max(q,\frac{n+2}{2})&\quad\text{for}\ q\neq\frac{n+2}{2}\\\
\frac{n+2}{2}+\epsilon&\quad\text{for}\ q=\frac{n+2}{2}\\\ \end{cases}$
where $\epsilon>0$ is arbitrarily small. Let $\partial\Omega\in C^{2+\mu}$, and
suppose the quantities $\|a_{i}\|_{r,\mathcal{C}_{t,t+\tau}}^{(loc)}$ and
$\|a\|_{s,\mathcal{C}_{t,t+\tau}}^{(loc)}$ tend to zero as $\tau\rightarrow
0$. Then for any $f\in L_{q}(\mathcal{C}_{T})$, $\phi\in
W_{q}^{2-\frac{2}{q}}(\Omega)$ and $\Phi\in
W_{q}^{2-\frac{1}{q},1-\frac{1}{2q}}(\partial\mathcal{C}_{T})$ with
$q\neq\frac{3}{2}$, satisfying in the case $q>\frac{3}{2}$ the zero-order
compatibility condition
$\phi|_{\partial\Omega}=\Phi|_{t=0},$
the system (4.1) has a unique solution $u\in W^{2,1}_{q}(\mathcal{C}_{T})$. Moreover,
it satisfies the estimate
$\|u\|_{q,Q_{T}}^{(2)}\leq
c\left(\|f\|_{q,\mathcal{C}_{T}}+\|\phi\|_{q,\Omega}^{(2-\frac{2}{q})}+\|\Phi\|_{q,\partial\mathcal{C}_{T}}^{(2-\frac{1}{q})}\right)$
Now for the Neumann problem
$\displaystyle\mathcal{L}u$ $\displaystyle=f(x,t)$ $\displaystyle u|_{t=0}$
$\displaystyle=\phi(x)$ (4.2)
$\displaystyle\sum\limits_{i=1}^{n}b_{i}(x,t)\partial_{x_{i}}u+b(x,t)u\mid_{\partial\mathcal{C}_{T}}$
$\displaystyle=\Phi(x,t)$
where we assume $|\sum\limits_{i=1}^{n}b_{i}(x,t)\eta_{i}(x)|\geq\delta>0$
everywhere on $\partial\mathcal{C}_{T}$, $\eta$ denoting the unit outward
normal vector to $\partial\mathcal{C}_{T}$. Then the Neumann counterpart of the above
Lemma can be stated as follows.
###### Lemma 4.2.
Let $p>1$ and suppose that $\theta\in L_{p}(\Omega\times(0,T))$,
$\varphi_{0}\in W^{2-\frac{2}{p}}_{p}(\Omega),$ and $\gamma\in
W_{p}^{1-\frac{1}{p},\frac{1}{2}-\frac{1}{2p}}(\Gamma\times(0,T))$ with $p\neq
3$. In addition, when $p>3$ assume
$d\frac{\partial\varphi_{0}}{\partial\eta}=\gamma(x,0)\quad\mbox{ on }\Gamma.$
Then (4.2) has a unique solution $\varphi\in W^{2,1}_{p}(\mathcal{C}_{T})$ and
there exists $C$ independent of $\theta$, $\varphi_{0}$ and $\gamma$ such that
$||\varphi||_{p,\mathcal{C}_{T}}^{(2)}\leq
C\left(||\theta||_{p,\mathcal{C}_{T}}+||\varphi||^{(2-\frac{2}{p})}_{p,\Omega}+||\gamma||^{(1-\frac{1}{p},\frac{1}{2}-\frac{1}{2p})}_{p,\partial\mathcal{C}_{T}}\right)$
We will also need the following Corollary from [16].
###### Corollary 4.3.
If the conditions of Lemma 4.1 are fulfilled for $q>\frac{n+2}{2}$, then the
solution of problem $(\ref{Dirchlet})$ satisfies a Hölder condition in $x$ and
$t$. Moreover, when $q>n+2$, the derivatives of the solution of the associated
Neumann boundary value problem also satisfy a Hölder condition in $x$ and $t$.
Next, we will prove the Hölder estimates for the solution of the linearized
Neumann problem associated to (3.7) corresponding to the operator
$\Delta_{t}$. The ideas for these estimates were developed in Section 5 of
[31] and here we adapt those techniques for our operator $\Delta_{t}$. For
this section, we will make a further reduction by writing
$\tilde{u}=\left({\sqrt{\prod\limits_{i=1}^{n}\lambda_{i}(t)}}\right)u$ so
that $\tilde{u}$ solves the equation
$\frac{d\tilde{u}}{dt}(x,t)=D\Delta_{t}\tilde{u}(x,t)~{}\mbox{for}~{}(x,t)\in\Omega\times[0,t)$
(4.3)
where
$\Delta_{t}=\sum\limits_{i=1}^{n}\frac{1}{\lambda_{i}(t)^{2}}\frac{\partial^{2}}{\partial
x_{i}^{2}}.$ (4.4)
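To see the mechanism behind this reduction in a model case (a remark, not needed for the proofs below): if $u$ satisfies $u_{t}=D\Delta_{t}u-a(t)u$ for some dilution coefficient $a(t)$, and $\mu(t)=\exp\left(\int_{0}^{t}a(s)\,ds\right)$, then $\tilde{u}=\mu u$ satisfies
$\frac{d\tilde{u}}{dt}=\mu^{\prime}u+\mu u_{t}=a(t)\mu u+\mu\left(D\Delta_{t}u-a(t)u\right)=D\Delta_{t}\tilde{u},$
since $\Delta_{t}$ acts only in the $x$ variables. The factor $\sqrt{\prod_{i=1}^{n}\lambda_{i}(t)}$ plays the role of such an integrating factor for the dilution term in (3.10).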
With this reduction, instead of working with equation (3.10), it suffices to
obtain estimates of the equation
$\left.\begin{array}[]{rll}\frac{\partial\varphi}{\partial
t}&={\Delta_{t}\varphi}+\theta&\mbox{ for}\quad(x,t)\in\Omega\times(0,T)\\\
d\nabla_{t}\varphi\cdot\eta&=\varphi_{1}&\mbox{
for}\quad(x,t)\in\Gamma\times(0,T)\\\ \varphi(x,0)&=\varphi_{0}(x)&\mbox{
for}\quad x\in\Omega.\end{array}\right\\}$ (4.5)
In fact, the results of this section hold for a general
$\Delta_{t}=\sum_{i,j=1}^{n}a_{i,j}(t)\frac{\partial^{2}}{\partial
x_{i}\partial x_{j}}$ with a positive definite $A(t)=(a_{ij}(t))$, where the
coefficient matrix is a function of $t$ alone. The extension of these results to
more general operators will appear in a forthcoming work.
The proof of Theorem 3.3 will follow by arguing as in the proof of Theorem 3.6 of
[31] for (4.5). Here we point out the necessary changes when we replace the
usual Laplacian with $\Delta_{t}$ in the results used. First, the following
Lemma from p. 351 of [16] gives $W_{p}^{2,1}(\mathcal{C}_{T})$ bounds on the solutions
of (3.10).
###### Lemma 4.4.
Let $p>1$. Suppose $\theta\in L_{p}(\mathcal{C}_{T})$, $\varphi_{0}\in
W^{2-\frac{2}{p}}_{p}(\Omega)$ and $\varphi_{1}\in
W^{1-\frac{1}{p},\frac{1}{2}-\frac{1}{2p}}_{p}(\Gamma\times(0,T))$ with $p\neq
3$. In case $p>3$ we further assume that
$d\frac{\partial\varphi_{0}}{\partial\eta}=\varphi_{1}\quad\mbox{on}\quad{\Gamma\times\\{0\\}}.$
Then equation (3.10) has a unique solution $\varphi\in
W_{p}^{2,1}(\mathcal{C}_{T})$ and there exists a constant $C(\Omega,p,T)$,
independent of $\theta$, $\varphi_{0}$ and $\varphi_{1}$ such that
$||\varphi||_{p,\mathcal{C}}^{(2)}\leq
C(\Omega_{T},p,T)\left(||\theta||_{p,\mathcal{C}}+||\varphi_{0}||_{p,\Omega}^{(2-\frac{2}{p})}+||\varphi_{1}||_{p,\mathcal{C}_{T}}^{(1-\frac{1}{p},\frac{1}{2}-\frac{1}{2p})}\right)$
Following pg. 356 of [16], the fundamental solution of the operator
$\Delta_{t}$ is given by
$Z_{0}(x-y,y,t,s)=\frac{1}{|4\pi(t-s)|^{\frac{n}{2}}\sqrt{det{A(s)}}}\exp\left(-\frac{\left<{\tilde{A}}(s)(x-y),(x-y)\right>}{4\pi(t-s)}\right)$
(4.6)
where $\tilde{A}(s)=A(s)^{-1}$.
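In the specific case of the operator $\Delta_{t}$ in (4.4), the coefficient matrix is $A(t)=\mathrm{diag}\left(\lambda_{1}(t)^{-2},\ldots,\lambda_{n}(t)^{-2}\right)$, so that
$\tilde{A}(s)=\mathrm{diag}\left(\lambda_{1}(s)^{2},\ldots,\lambda_{n}(s)^{2}\right),\qquad\det A(s)=\prod\limits_{i=1}^{n}\lambda_{i}(s)^{-2},$
and the quadratic form in the exponent reads $\left<\tilde{A}(s)(x-y),(x-y)\right>=\sum\limits_{i=1}^{n}\lambda_{i}(s)^{2}(x_{i}-y_{i})^{2}$.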
For $0<\varepsilon<t$, we can define the operator $J_{\varepsilon}(f)$
corresponding to our equation as
$J_{\varepsilon}(f)(Q,t)=\int\limits_{0}^{t-\varepsilon}\int\limits_{\Gamma}\frac{<{\tilde{A}(s)}(y-Q),\eta_{Q}>}{\sqrt{det{A(s)}}(t-s)^{\frac{n}{2}+1}}\exp\left(-\frac{\left<\tilde{A}(s)(Q-y),(Q-y)\right>}{4\pi(t-s)}\right)f(s,y)\,d\sigma
ds.$ (4.7)
Noting that a change of variables gives
$J_{\varepsilon}(f)=\int\limits_{0}^{t-\varepsilon}\int\limits_{\Gamma_{t}}\frac{<{\tilde{A}(s)}(y-Q),\eta_{Q}>}{(t-s)^{\frac{n}{2}+1}}\exp\left(-\frac{\left<(Q-y),(Q-y)\right>}{4\pi(t-s)}\right)F(s,y)\,d\sigma
ds$
where $F(s,y)=f(s,A(s)^{-1}y)$, the estimates and properties of the operator
$J_{\varepsilon}$ can be summarized as follows.
###### Proposition 4.5.
(Fabes-Riviere) Assume $\Omega$ is a $C^{1}$ domain, $Q\in\Gamma$ and
$\eta_{Q}$ denote the unit outward normal to $\Gamma$ at $Q$. For
$0<\varepsilon<t$, let the functional $J_{\varepsilon}(f)$ be defined as in
$(\ref{J})$. Then
1\. for $1<p<\infty$ there exists $C(p,||A||_{\infty})>0$ such that
$J(f)(Q,t)=\sup\limits_{0<\varepsilon<t}|J_{\varepsilon}(f)(Q,t)|$
satisfies
$||J(f)||_{L_{p}(\Gamma\times(0,T))}\leq
C(p,||A||_{\infty})||f||_{L_{p}(\Gamma\times(0,T))}\quad\mbox{for all }f\in
L_{p}(\Gamma\times(0,T));$
2\. $\lim\limits_{\varepsilon\to 0^{+}}J_{\varepsilon}(f)=J(f)$ exists in
$L_{p}(\Gamma\times(0,T))$ and pointwise for almost every
$(Q,t)\in\Gamma\times(0,T)$ provided $f\in L_{p}(\Gamma\times(0,T))$,
$1<p<\infty$;
3\. $cI+J$ is invertible on $L_{p}(\Gamma\times(0,T))$ for each $1<p<\infty$,
where $I$ is the identity operator and $c\neq 0$ in $\mathbb{R}$.
Note that now the constants will also depend on the matrix $A$. For
$Q\in\Gamma$, $(x,t)\in\mathcal{C}_{T}$ and $t>s$, define
$\displaystyle W(t-s,x,Q)$
$\displaystyle=\frac{1}{\sqrt{det{A(s)}}(t-s)^{\frac{n}{2}+1}}\exp\left(-\frac{\left<\tilde{A}(s)(x-Q),(x-Q)\right>}{4\pi(t-s)}\right),$
$\displaystyle\mbox{ and }g(Q,t)$
$\displaystyle=-2(-c_{n}I+J)^{-1}\gamma(Q,t)$ (4.8)
where $c_{n}=\frac{\omega_{n}H(0)}{2}$, $\omega_{n}$ is the surface area of the unit
sphere in $\mathbb{R}^{n}$, and
$H(0)=\int\limits_{0}^{\infty}\frac{1}{s^{n/2+1}}\exp({-\frac{1}{4s}})\,ds$.
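For reference, the substitution $r=\frac{1}{4s}$ evaluates $H(0)$ in closed form:
$H(0)=\int\limits_{0}^{\infty}\frac{1}{s^{n/2+1}}\exp\left(-\frac{1}{4s}\right)ds=4^{\frac{n}{2}}\int\limits_{0}^{\infty}r^{\frac{n}{2}-1}e^{-r}\,dr=2^{n}\Gamma\left(\tfrac{n}{2}\right),$
so that, under the standard convention $\omega_{n}=\frac{2\pi^{n/2}}{\Gamma(n/2)}$ for the surface area of the unit sphere, $c_{n}=\frac{\omega_{n}H(0)}{2}=(4\pi)^{\frac{n}{2}}$.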
Referring to Theorem 2.4 in [8], we have the following definition.
###### Definition 4.6.
A function $\varphi$ is a classical solution of the system $(\ref{eqn3.6})$
with $d=1$ and $\gamma\in L_{p}(\Gamma\times(0,T))$ for $p>1$ if and only if
$\varphi(x,t)=\int\limits_{0}^{t}\int\limits_{\Gamma}W(t-s,x,Q)g(Q,s)\,d\sigma\,ds\quad\mbox{for
all }(x,t)\in\mathcal{C}_{T}.$ (4.9)
We claim that the classical solution $\varphi$ of (4.5) defined as in (4.9) is
Hölder continuous. For $(x,T)$, $(y,\tau)\in\mathcal{C}_{T}$, $0<\tau<T$,
consider the difference
$\varphi(x,T)-\varphi(y,\tau)=\int\limits_{0}^{T}\int\limits_{\Gamma}\left[W(T-s,x,Q)-W(\tau-s,y,Q)\right]g(Q,s)\,d\sigma\,ds.$
The following three Lemmas provide the required estimates.
###### Lemma 4.7.
Let $p>n+1$. Suppose $(x,T)$, $(y,\tau)\in\mathcal{C}$ with $0<\tau<T$ and
${\mathcal{R}}^{c}=\\{(Q,s)\in\Gamma\times(0,\tau):|x-Q|+|T-s|^{\frac{1}{2}}<2|x-y|+|T-\tau|^{\frac{1}{2}}\\}.$
Then for $0<a<1-\frac{n+1}{p}$, there exists
$C(p,n,\overline{\Omega},T,||A||_{\infty})>0$ independent of $g\in
L_{p}(\Gamma\times(0,T))$ such that
$\int\limits_{{\mathcal{R}}^{c}}|\left(W(T-s,x,Q)-W(\tau-s,y,Q)\right)g(Q,s)|\,d\sigma\,ds\leq
C\left(|x-y|+|T-\tau|^{\frac{1}{2}}\right)^{a}||g||_{p,\Gamma\times[0,\tau]}.$
(4.10)
###### Lemma 4.8.
Let $p>n+1$. Suppose $(x,T)$, $(y,\tau)\in\mathcal{C}$ with $0<\tau<T$ and
${\mathcal{R}}=\\{(Q,s)\in\Gamma\times(0,\tau):2(|x-y|+|T-\tau|^{\frac{1}{2}})<|x-Q|+|T-s|^{\frac{1}{2}}\\}.$
Then for $0<a<1-\frac{n+1}{p}$, there exists
$C(p,n,\overline{\Omega},T,||A||_{\infty})>0$ independent of $g\in
L_{p}(\Gamma\times(0,T))$ such that
$\int\limits_{{\mathcal{R}}}|\left(W(T-s,x,Q)-W(\tau-s,y,Q)\right)g(Q,s)|\,d\sigma\,ds\leq
C\left(|x-y|+|T-\tau|^{\frac{1}{2}}\right)^{a}||g||_{p,\Gamma\times[0,\tau]}.$
(4.11)
###### Lemma 4.9.
Let $p>n+1$ and suppose $(x,T)$, $(y,\tau)\in\mathcal{C}$ with $0<\tau<T$.
Then for $0<a<1-\frac{n+1}{p}$, there exists
$C(p,n,\overline{\Omega},T,||A||_{\infty})>0$ independent of $g\in
L_{p}(\Gamma\times(0,T))$ such that
$\int\limits_{\tau}^{T}\int\limits_{\Gamma}|W(T-s,x,Q)g(Q,s)|\,d\sigma
ds\leq C(|T-\tau|)^{a}||g||_{p,\Gamma\times[\tau,T]}.$ (4.12)
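These three estimates together control the whole of $\Gamma\times(0,T)$: Lemmas 4.7 and 4.8 bound the contribution of $\Gamma\times(0,\tau)$, split according to whether $(Q,s)$ is close to $(x,T)$ in the parabolic distance (the region $\mathcal{R}^{c}$) or far from it (the region $\mathcal{R}$), while Lemma 4.9 bounds the contribution of $\Gamma\times(\tau,T)$, where only the term $W(T-s,x,Q)$ appears. Each piece is bounded by
$C\left(|x-y|+|T-\tau|^{\frac{1}{2}}\right)^{a}||g||_{p,\Gamma\times(0,T)}$
(after enlarging $C$ if necessary), and passing from $g$ back to $\varphi_{1}$ via the invertibility of $-c_{n}I+J$ from Proposition 4.5 yields the Hölder estimate of Proposition 4.10 below.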
We refer to the proofs of Lemmas 5.5, 5.6 and 5.7 respectively in [31] which
can be repeated verbatim for the above three Lemmas. Similar to Proposition
5.8 in [31], we have the following Hölder estimates for the solution of
(3.10).
###### Proposition 4.10.
Let $\gamma\in L_{p}(\Gamma\times(0,T))$ for $p>n+1$. Then a solution of
$(\ref{eqn3.6})$ is Hölder continuous on $\bar{\Omega}\times(0,T)$ with Hölder
exponent $0<a<1-\frac{n+1}{p}$ and there is a constant $K_{p}>0$, depending on
$p$, $\bar{\Omega}$, $T$ and $||A||_{\infty}$, independent of $\varphi_{1}$
such that
$|\varphi(x,T)-\varphi(y,\tau)|\leq
K_{p}\left(|T-\tau|^{\frac{1}{2}}+|x-y|\right)^{a}||\varphi_{1}||_{p,\Gamma\times(0,T)}$
(4.13)
for all $(x,T),(y,\tau)\in\mathcal{C}$.
The proof of Theorem 3.3 can now be completed.
## 5\. Local existence of the solution
Here we illustrate the proof of local existence of solutions of (3.7) for the
case $m=2$, which can easily be extended to the $m$-component case. In order to
prove local existence of the solution we need the following result.
###### Theorem 5.1.
If $f=(f_{1},f_{2})$ and $g=(g_{1},g_{2})$ are Lipschitz functions, then
$(\ref{grow})$ has a unique global solution.
###### Proof.
Here we sketch the first few steps of the proof to indicate that the linear term
can be controlled. Let $T>0$ and let $(u_{0},v_{0})\in W_{p}^{2}(\Omega)\times
W_{p}^{2}(\Omega)$ satisfy the compatibility condition
$\frac{\partial u_{0}}{\partial\eta}=g_{1}(u_{0},v_{0})\,\text{and}\
\frac{\partial v_{0}}{\partial\eta}=g_{2}(u_{0},v_{0})$
Set
$X=\\{(u,v)\in C(\overline{\Omega}\times[0,T])\times
C(\overline{\Omega}\times[0,T]):u(x,0)=0\ \text{and}\ v(x,0)=0\ \text{for all }\
x\in\overline{\Omega}\\}$
Note that $(X,\|\cdot\|_{\infty})$ is a Banach space. Let $(u,v)\in X$ and
consider the problem
$\displaystyle\frac{dU}{dt}(x,t)={\mathcal{L}}U(x,t)+f_{1}(u+u_{0},v+v_{0}),$
$\displaystyle~{}\mbox{in}~{}\Omega\times[0,t)$
$\displaystyle\frac{dV}{dt}(x,t)=\tilde{\mathcal{L}}V(x,t)+f_{2}(u+u_{0},v+v_{0}),$
$\displaystyle~{}\mbox{in}~{}\Omega\times[0,t)$ (5.1)
$\displaystyle\nabla_{t}U\cdot\eta=g_{1}(u+u_{0},v+v_{0}),\
\nabla_{t}V\cdot\eta=g_{2}(u+u_{0},v+v_{0})$
$\displaystyle~{}\mbox{on}~{}\partial\Omega\times[0,t)$ $\displaystyle
U=u_{0},\ V=v_{0}$ $\displaystyle~{}\mbox{in}~{}\Omega\times\\{0\\}$
From Lemma 4.1, $(\ref{grow1})$ possesses a unique solution $(U,V)\in
W_{q}^{2,1}(\Omega_{T})\times W_{q}^{2,1}(\Omega_{T})$. Furthermore, from the
embedding, $(U,V)\in C(\overline{\Omega}\times[0,T])\times
C(\overline{\Omega}\times[0,T])$. Define $S:X\rightarrow X$ as
$S(u,v)=(U-u_{0},V-v_{0})$
where $(U,V)$ solves $(\ref{grow1})$. We will show that $S$ is continuous and
compact. For $(\tilde{u},\tilde{v})\in X$, let $(\tilde{U},\tilde{V})$ denote the
corresponding solution of $(\ref{grow1})$. Using linearity, $(U-\tilde{U},V-\tilde{V})$ solves
$\left.\begin{array}[]{rll}\frac{d}{dt}(U-\tilde{U})(x,t)&=&{\mathcal{L}}(U-\tilde{U})(x,t)+f_{1}(u+u_{0},v+v_{0})-f_{1}(\tilde{u}+u_{0},\tilde{v}+v_{0}),~{}\mbox{in}~{}\Omega\times[0,t)\\\
\frac{d}{dt}(V-\tilde{V})(x,t)&=&\tilde{\mathcal{L}}(V-\tilde{V})(x,t)+f_{2}(u+u_{0},v+v_{0})-f_{2}(\tilde{u}+u_{0},\tilde{v}+v_{0}),~{}\mbox{in}~{}\Omega\times[0,t)\\\
\nabla_{t}(U-\tilde{U})\cdot\eta&=&g_{1}(u+u_{0},v+v_{0})-g_{1}(\tilde{u}+u_{0},\tilde{v}+v_{0})~{}\mbox{on}~{}\partial\Omega\times[0,t),\\\
\nabla_{t}(V-\tilde{V})\cdot\eta&=&g_{2}(u+u_{0},v+v_{0})-g_{2}(\tilde{u}+u_{0},\tilde{v}+v_{0})~{}\mbox{on}~{}\partial\Omega\times[0,t)\\\
U-\tilde{U}=0,&&V-\tilde{V}=0~{}\mbox{in}~{}\Omega\times\\{0\\}.\end{array}\right\\}$
(5.2)
From Corollary 4.3, if $q>n+2$ then the solution of $(\ref{grow3})$ is Hölder
continuous. Therefore there exists $C$ independent of $f_{i}$ and $g_{i}$,
$i=1,2$, such that
$\begin{array}[]{lll}\|U-\tilde{U}\|_{\infty,\Omega_{T}}+\|V-\tilde{V}\|_{\infty,\Omega_{T}}&\leq&C\left\\{\|f_{1}(u+u_{0},v+v_{0})-f_{1}(\tilde{u}+u_{0},\tilde{v}+v_{0})\|_{q,\Omega_{T}}\right.\\\
&&+\|f_{2}(u+u_{0},v+v_{0})-f_{2}(\tilde{u}+u_{0},\tilde{v}+v_{0})\|_{q,\Omega_{T}}\\\
&&+\|g_{1}(u+u_{0},v+v_{0})-g_{1}(\tilde{u}+u_{0},\tilde{v}+v_{0})\|_{q,\partial\Omega_{T}}\\\
&&\left.+\|g_{2}(u+u_{0},v+v_{0})-g_{2}(\tilde{u}+u_{0},\tilde{v}+v_{0})\|_{q,\partial\Omega_{T}}\right\\}\end{array}$
Using boundedness of $\Omega$, there exists $\tilde{C}>0$ such that
$\begin{array}[]{lll}\|U-\tilde{U}\|_{\infty,\Omega_{T}}+\|V-\tilde{V}\|_{\infty,\Omega_{T}}&\leq&\tilde{C}\left\\{\|f_{1}(u+u_{0},v+v_{0})-f_{1}(\tilde{u}+u_{0},\tilde{v}+v_{0})\|_{\infty,\Omega_{T}}\right.\\\
&&+\|f_{2}(u+u_{0},v+v_{0})-f_{2}(\tilde{u}+u_{0},\tilde{v}+v_{0})\|_{\infty,\Omega_{T}}\\\
&&+\|g_{1}(u+u_{0},v+v_{0})-g_{1}(\tilde{u}+u_{0},\tilde{v}+v_{0})\|_{\infty,\partial\Omega_{T}}\\\
&&\left.+\|g_{2}(u+u_{0},v+v_{0})-g_{2}(\tilde{u}+u_{0},\tilde{v}+v_{0})\|_{\infty,\partial\Omega_{T}}\right\\}\\\
\end{array}$
Since $f_{i}$, $g_{i}$, $i=1,2$, are Lipschitz, $S$ is continuous with respect
to the sup norm. It remains to show that $S$ is compact. For $q>n+2$,
Corollary 4.3 implies that the solution is in fact Hölder continuous;
therefore $S$ maps bounded sets in $X$ to precompact sets, and hence $S$ is
compact with respect to the sup norm. The uniqueness of the solution follows by
deriving Gronwall’s inequality on $\Omega_{T}$ by arguments similar to those
in the proof of Theorem 6.1 of [31]. Since $T>0$ was arbitrary, we further
conclude the existence of a unique global solution.
Proof of Theorem 3.1: The proof of the theorem involves truncating the given
functions $f$, $g$ so that the truncated functions are Lipschitz. Precisely,
for each $r>k$, we define cut-off functions $\phi_{r}\in
C_{0}^{\infty}(\mathbb{R}^{2},[0,1])$ and $\psi_{r}\in
C_{0}^{\infty}(\mathbb{R}^{2},[0,1])$ such that $\phi_{r}(z,w)=1$ when
$|z|\leq r$ and $|w|\leq r$, and $\phi_{r}(z,w)=0$ whenever $|z|\geq 2r$ or
$|w|\geq 2r$, and similarly for $\psi_{r}$. Define $f_{r}=f\phi_{r}$ and
$g_{r}=g\psi_{r}$. We also have $u_{0}\in W^{2}_{p}(\Omega)$ and $v_{0}\in
W^{2}_{p}(\partial\Omega)$ with $p>n$, and $u_{0}$, $v_{0}$ satisfy the
compatibility condition for $p>3$.
Hence, from the Sobolev imbedding theorem, $u_{0}$ and $v_{0}$ are bounded
functions, i.e., there exists $k>0$ such that
$\|u_{0}(\cdot)\|_{\infty,\Omega}\leq k$ and
$\|v_{0}(\cdot)\|_{\infty,\partial\Omega}\leq k$. Applying Theorem 5.1, we
obtain a global solution $(u_{r},v_{r})$ for each $r$. Then letting $r\to\infty$
we obtain the solution $(u,v)$ with required properties. We refer to [31] for
details.
## 6\. Existence of global solution
In this section we will prove global existence of solutions of the system
(3.1) under the given conditions. We begin by proving a priori estimates, in
particular, $L_{1}$ estimates for the solutions of $(\ref{grow}).$
###### Lemma 6.1.
($L_{1}$-estimates) Let $(u,v)$ be the unique maximal nonnegative solution to
$(\ref{grow})$ and suppose that $T_{max}<\infty$. If $(V_{N})$, $(V_{F})$ and
$(V_{L1})$ hold, then there exists $C_{1}(t)=C_{1}(t;D,\tilde{D},L_{1},k_{2})$ such that
$\|u(\cdot,t)\|_{1,\Omega},\|u(\cdot,t)\|_{1,\Gamma},\|v(\cdot,t)\|_{1,\Gamma}\leq
C_{1}(t)\ \forall\ 0\leq t<T_{max}$ (6.1)
###### Proof.
Adding the two equations in $(\ref{grow})$ and integrating the equation over
$\Omega$, we get
$\displaystyle\frac{d}{dt}\int\limits_{\Omega}(u+v)$
$\displaystyle=\int\limits_{\Omega}D\Delta_{t}u+\int\limits_{\Omega}\tilde{D}\Delta_{t}v+\int\limits_{\Omega}a(t)(u+v)+\int\limits_{\Omega}(f_{1}(u,v)+f_{2}(u,v))$
$\displaystyle\leq\int\limits_{\Omega}(f_{1}(u,v)+f_{2}(u,v))+\int\limits_{\Gamma}(g_{1}(u,v)+g_{2}(u,v))+\int\limits_{\Omega}a(t)(u+v)$
$\displaystyle\leq\int\limits_{\Omega}(L_{1}+k_{2})\left({u+v+1}\right)+\int\limits_{\Gamma}L_{1}\left({u+v+1}\right).$
(6.2)
where we recall
$a(t)=\frac{(\lambda_{1}(t)\lambda_{2}(t)\lambda_{3}(t))^{\prime}}{(\lambda_{1}(t)\lambda_{2}(t)\lambda_{3}(t))}$
and $a(t)\leq k_{2}$ for all $t$ by assumption. Fix $0<T<T_{max}$, $d>0$ a
constant (to be chosen later), $L_{1}>0$ and consider the system
$\displaystyle\varphi_{t}$
$\displaystyle=-d\Delta_{t}\varphi-(L_{1}+k_{2})\varphi$
$\displaystyle(x,t)\in\Omega\times(0,T)$ $\displaystyle
d\nabla_{t}\varphi\cdot\eta$ $\displaystyle=(L_{1}+k_{2})\varphi+1$
$\displaystyle(x,t)\in\Gamma\times(0,T)$ $\displaystyle\varphi$
$\displaystyle=\varphi_{T}$ $\displaystyle x\in\Omega,\ t=T.$ (6.3)
Here, $\varphi_{T}\in C^{2+\gamma}(\overline{\Omega})$ for some $\gamma>0$ is
strictly positive and satisfies the compatibility condition
$d\nabla_{T}\varphi_{T}\cdot\eta=(L_{1}+k_{2})\varphi_{T}+1\ \text{on}\
\Gamma\times\\{T\\}.$
From Theorem 5.3 in chapter 4 of [16], $\varphi\in
C^{2+\gamma,1+\frac{\gamma}{2}}(\overline{\Omega}\times[0,T])$, and therefore
$\varphi\in C^{2+\gamma,1+\frac{\gamma}{2}}(\Gamma\times[0,T])$. Moreover,
arguing as in the previous section, we conclude $\varphi\geq 0$. Now, consider
$\displaystyle 0$
$\displaystyle=\int\limits_{0}^{T}\int\limits_{\Omega}u(-\varphi_{t}-d\Delta_{t}\varphi-(L_{1}+k_{2})\varphi)$
$\displaystyle=\int\limits_{0}^{T}\int\limits_{\Omega}\varphi(u_{t}-D\Delta_{t}u)-(L_{1}+k_{2})\int\limits_{0}^{T}\int_{\Omega}u\varphi-\int\limits_{0}^{T}\int\limits_{\Gamma}ud\nabla_{t}\varphi\cdot\eta+(D-d)\int\limits_{0}^{T}\int_{\Omega}u\Delta_{t}\varphi$
$\displaystyle+\int\limits_{0}^{T}\int\limits_{\Gamma}\varphi
D\nabla_{t}u\cdot\eta+\int\limits_{\Omega}u(x,0)\varphi(x,0)-\int\limits_{\Omega}u(x,T)\varphi(x,T)$
$\displaystyle\leq\int_{0}^{T}\int\limits_{\Omega}\varphi(f_{1}(u,v)-L_{1}u)-\frac{D}{d}\int\limits_{0}^{T}\int\limits_{\Gamma}(u(L_{1}+k_{2})\varphi+u)+(D-d)\int\limits_{0}^{T}\int_{\Omega}u\Delta_{t}\varphi$
$\displaystyle+\int\limits_{0}^{T}\int\limits_{\Gamma}\varphi
g_{1}(u,v)+\int\limits_{\Omega}u(x,0)\varphi(x,0)-\int\limits_{\Omega}u(x,T)\varphi(x,T).$
(6.4)
For $v$ we have the similar equation with $f_{1}$ replaced by $f_{2}$ and
$g_{1}$ replaced by $g_{2}$, i.e.,
$\displaystyle 0$ $\displaystyle\leq$
$\displaystyle\int_{0}^{T}\int\limits_{\Omega}\varphi(f_{2}(u,v)-L_{1}v)-\frac{\tilde{D}}{d}\int\limits_{0}^{T}\int\limits_{\Gamma}(v(L_{1}+k_{2})\varphi+v)+(\tilde{D}-d)\int\limits_{0}^{T}\int_{\Omega}v\Delta_{t}\varphi$
(6.5)
$\displaystyle+\int\limits\limits_{0}^{T}\int\limits\limits_{\Gamma}\varphi
g_{2}(u,v)+\int\limits\limits_{\Omega}v(x,0)\varphi(x,0)-\int\limits\limits_{\Omega}v(x,T)\varphi(\cdot,T).$
Summing these equations, and making use of $(V_{L1})$ and choosing
$d=\min\\{D,\tilde{D}\\}$, gives
$\displaystyle\int\limits_{0}^{T}\int_{\Gamma}(u+v)\leq\int\limits_{0}^{T}\int_{\Gamma}(u+v)(1+(L_{1}+k_{2})\varphi)\leq$
$\displaystyle\int\limits_{0}^{T}\int_{\Omega}L_{1}\varphi+\int\limits_{0}^{T}\int_{\Gamma}L_{1}\varphi+(D-d)\int\limits_{0}^{T}\int_{\Omega}u\Delta_{t}\varphi+(\tilde{D}-d)\int\limits\limits_{0}^{T}\int_{\Omega}v\Delta_{t}\varphi$
$\displaystyle+\int\limits\limits_{\Omega}u_{0}(x)\varphi(x,0)-\int\limits_{\Gamma}u(x,T)\varphi_{T}(x)+\int\limits_{\Omega}v_{0}(x)\varphi(x,0)-\int\limits_{\Gamma}v(x,T)\varphi_{T}(x).$
(6.6)
Since $\varphi_{T}$ is strictly positive, we can choose a $\delta>0$ such that
$\delta\leq\varphi_{T}(x)$ for all $x\in\overline{\Omega}$. Then (6.6) implies
$\displaystyle\delta\int\limits_{\Gamma}(u(x,T)+v(x,T))+\int\limits_{0}^{T}\int\limits_{\Gamma}(u+v)\leq$
$\displaystyle\int\limits_{0}^{T}\int\limits_{\Omega}L_{1}\varphi+\int\limits_{0}^{T}\int_{\Gamma}L_{1}\varphi+(D-d)\int\limits_{0}^{T}\int\limits_{\Omega}u\Delta_{t}\varphi+(\tilde{D}-d)\int\limits_{0}^{T}\int\limits_{\Omega}v\Delta_{t}\varphi+\int\limits_{\Omega}(u_{0}+v_{0})\varphi(x,0).$
(6.7)
Thus, there exist constants $C_{1},C_{2}>0$, depending on $L_{1}$, $d$,
$\varphi_{T}$, $u_{0},v_{0}$, $D,\tilde{D}$, and at most exponentially on $T$,
such that
$\displaystyle\delta\int\limits_{\Gamma}(u(x,T)+v(x,T))+\int\limits_{0}^{T}\int\limits_{\Gamma}(u+v)\leq
C_{1}+C_{2}\int_{0}^{T}\int\limits_{\Omega}(u+v)$ (6.8)
Now, return to (6.2), and integrate both sides with respect to $t$ to obtain
$\displaystyle\int\limits_{\Omega}(u+v)dx\leq
L_{1}\left(\int\limits_{0}^{t}\int\limits_{\Omega}(u+v)+\int\limits_{0}^{t}\int\limits_{\Gamma}(u+v)+t|\Gamma|+t|\Omega|\right)+\int\limits_{\Omega}(u_{0}+v_{0})$
(6.9)
The second term on the right hand side of (6.9) can be bounded above by
$L_{1}$ times the right hand side of (6.8). Using this estimate, and
Gronwall’s inequality, we can obtain a bound for
$\int\limits_{0}^{T}\int_{\Omega}(u+v)$ that depends on $T$. Placing this on
the right hand side of (6.8) gives a bound for
$\int\limits_{\Gamma}(u(x,T)+v(x,T))$ that depends on $T$. Applying this to
the second integral on the right hand side of (6.9), and using Gronwall’s
inequality, gives the result.
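Schematically, writing $y(t)=\int\limits_{\Omega}(u+v)(\cdot,t)\,dx$, the last two steps combine (6.8) and (6.9) into an inequality of the form
$y(t)\leq C_{3}+C_{4}\int\limits_{0}^{t}y(s)\,ds,\qquad 0\leq t<T_{max},$
with $C_{3},C_{4}$ depending on the data and on $T$, so Gronwall’s inequality gives $y(t)\leq C_{3}e^{C_{4}t}$; the boundary bounds then follow from (6.8).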
Remark: The above proof can be imitated for $m$ components to obtain $L_{1}$
estimates for solutions of (3.7).
For the sake of completeness of our arguments, we state below Lemma 3.3 proved
in [30].
###### Lemma 6.2.
Given $\gamma\geq 1$ and $\epsilon>0$, there exists $C_{\epsilon,\gamma}>0$
such that
$\|v\|_{2,\partial\Omega}^{2}\leq\epsilon\|\nabla
v\|_{2,\Omega}^{2}+C_{\epsilon,\gamma}\|v^{\frac{2}{\gamma}}\|_{1,\Omega}^{\gamma}$
(6.10)
and
$\|v\|_{2,\Omega}^{2}\leq\epsilon\|\nabla
v\|_{2,\Omega}^{2}+C_{\epsilon,\gamma}\|v^{\frac{2}{\gamma}}\|_{1,\Omega}^{\gamma}$
(6.11)
for all $v\in H^{1}(\Omega)$.
Proof of Theorem 3.2: If $T_{\max}=\infty$, then there is nothing to prove.
So, assume $T=T_{\max}<\infty$. We first claim that under the given
assumptions,
$||u||_{p,\Omega_{T}}\leq
C(p,h(T),L_{1},D,\tilde{D},\Omega,||\det A(t)||_{\infty}).$ (6.12)
We can assume without loss of generality that $b_{1}=b_{2}=1$ in $(V_{L1})$.
Let $1<p<\infty$, set $p^{\prime}=\frac{p}{p-1}$ and choose $\xi\in
L_{p^{\prime}}(\Omega_{T})$ such that
$\xi\geq 0\mbox{ and }\|\xi\|_{p^{\prime},\Omega_{T}}=1.$ (6.13)
Furthermore, let $L_{2}\geq\max\\{\frac{\tilde{D}L_{1}}{D},L_{1}\\}$ and
suppose $\varphi$ solves
$\displaystyle\varphi_{t}+D\Delta_{t}\varphi$
$\displaystyle=-L_{1}\varphi-\xi\quad\text{in }\Omega_{T},$ $\displaystyle
D\nabla_{t}\varphi\cdot\eta$ $\displaystyle=L_{2}\varphi\quad\text{on
}\Gamma_{T},$ (6.14) $\displaystyle\varphi$ $\displaystyle=0\quad\text{in
}\Omega\times\left\\{T\right\\}.$
Though (6.14) may appear to be a backwards heat equation, the substitution
$\tau=T-t$ immediately reveals that it is actually a forward heat equation.
Arguing as in the proof of Theorem 3.1, we conclude that $\varphi\geq 0$.
In addition, from Lemma 4.1, there is a constant
$C(p,D,\tilde{D},\Omega,L_{1})>0$, and independent of $\xi$ such that
$\|\varphi\|_{p^{\prime},\Omega_{T}}^{(2,1)}\leq C.$ (6.15)
Multiplying (6.14) by $(u+v)$ and integrating by parts, we have
$\displaystyle\int\limits_{0}^{T}\int\limits_{\Omega}(u+v)\xi dxdt$
$\displaystyle=\int\limits_{0}^{T}\int\limits_{\Omega}(u+v)(-\varphi_{t}-D\Delta_{t}\varphi-
L_{1}\varphi)dxdt$
$\displaystyle\leq\int\limits_{\Omega}(u_{0}+v_{0})\varphi(x,0)dx-\int\limits_{0}^{T}\int\limits_{\Omega}(u+v)D\Delta_{t}\varphi
dxdt-\int\limits_{0}^{T}\int\limits_{\Omega}L_{1}(u+v)\varphi
dxdt.$ (6.16)
Multiplying equation (3.1) by $\varphi$, integrating by parts and using $(V_{L1})$, we get
$\displaystyle\int\limits_{0}^{T}\int\limits_{\Omega}\varphi(u_{t}+v_{t})dxdt$
$\displaystyle=\int\limits_{0}^{T}\int\limits_{\Omega}\varphi(f+g)dxdt+\int\limits_{0}^{T}\int\limits_{\Omega}\varphi(D\Delta_{t}u+\tilde{D}\Delta_{t}v)dxdt-\int\limits_{0}^{T}\int\limits_{\Omega}a(t)(u+v)dxdt$
$\displaystyle\leq\int\limits_{0}^{T}\int\limits_{\Omega}\varphi
L_{1}(u+v+1)dxdt+\int\limits_{0}^{T}\int\limits_{\Omega}\varphi(D\Delta_{t}u+\tilde{D}\Delta_{t}v)dxdt-\int\limits_{0}^{T}\int\limits_{\Omega}a(t)(u+v)dxdt$
$\displaystyle\leq\int\limits_{0}^{T}\int\limits_{\Omega}\varphi
L_{1}(u+v+1)dxdt+\int\limits_{0}^{T}\int\limits_{\Omega}(uD\Delta_{t}\varphi+\tilde{D}v\Delta_{t}\varphi)dxdt$
$\displaystyle-\int\limits_{0}^{T}\int\limits_{\Omega}a(t)(u+v)dxdt-\int\limits_{0}^{T}\int\limits_{\Gamma}L_{2}\varphi
u-\int\limits_{0}^{T}\int\limits_{\Gamma}\frac{\tilde{D}}{D}L_{2}\varphi v$
(6.17)
Combining (6.16) and (6.17), we have
$\displaystyle\int\limits_{0}^{T}\int\limits_{\Omega}(u+v)\xi dxdt$
$\displaystyle\leq\int\limits_{\Omega}(u_{0}+v_{0})\varphi(x,0)dx+\int\limits_{0}^{T}\int\limits_{\Omega}(\tilde{D}-D)v\Delta_{t}\varphi
dxdt$ $\displaystyle+\int\limits_{0}^{T}\int\limits_{\Omega}L_{1}\varphi
dxdt+\int\limits_{0}^{T}\int\limits_{\Gamma}L_{1}\varphi
dxd\sigma-\int\limits_{0}^{T}\int\limits_{\Omega}a(t)(u+v)dxdt$
$\displaystyle-\int\limits_{0}^{T}\int\limits_{\Gamma}L_{2}\varphi
u-\int\limits_{0}^{T}\int\limits_{\Gamma}\frac{\tilde{D}}{D}L_{2}\varphi v$
$\displaystyle\leq\int\limits_{\Omega}(u_{0}+v_{0})\varphi(x,0)dx+\int\limits_{0}^{T}\int\limits_{\Omega}(\tilde{D}-D)v\Delta_{t}\varphi
dxdt$ $\displaystyle+\int\limits_{0}^{T}\int\limits_{\Omega}L_{1}\varphi
dxdt+\int\limits_{0}^{T}\int\limits_{\Gamma}L_{1}\varphi
dxd\sigma-\int\limits_{0}^{T}\int\limits_{\Omega}a(t)(u+v)dxdt$ (6.18)
By assumption, $\|v(\cdot,t)\|_{\infty,\Omega}\leq h(t)$, and (6.15) implies
$\|\varphi\|_{p^{\prime},\Omega_{T}}^{(2,1)}\leq C_{0}$. Also, integrating (6.14)
reveals that
$\displaystyle\int\limits_{\Omega}\varphi(\cdot,0)=-\int\limits_{0}^{T}\int\limits_{\partial\Omega}L_{2}\varphi+\int\limits_{0}^{T}\int\limits_{\Omega}L_{1}\varphi+\int\limits_{0}^{T}\int\limits_{\Omega}\xi.$
(6.19)
Therefore, $\|\varphi(\cdot,0)\|_{1,\Omega}$ can be bounded independent of
$\xi$ by using the norm bound on $\varphi$ and the fact that
$\|\xi\|_{p^{\prime},\Omega_{T}}=1$. In addition, the trace embedding theorem
implies $\|\varphi\|_{1,\Gamma_{T}}$ can be bounded in terms of
$\|\varphi\|_{p^{\prime},\Omega_{T}}^{(2,1)}$, which can be bounded
independent of $\xi$, for the same reason as above.
Therefore, by applying duality to (6.18), we see that
$\|u\|_{p,\Omega_{T}}\leq C(p,h(T),L_{1},D,\tilde{D},\Omega).$ (6.20)
Also, since $1<p<\infty$ is arbitrary, we have this estimate for every
$1<p<\infty$. Moreover, the sup norm bound on $v$, the $L_{p}(\Omega_{T})$
bounds on $u$ for all $1<p<\infty$, and $(V_{Poly})$ imply we have
$L_{q}(\Omega_{T})$ bounds on $f(u,v)$ and $g(u,v)$ for all $1<q<\infty$.
Now, we use the bounds above and assumption (3.9) to show
$\|u\|_{p,\Gamma_{T}}$ is bounded for all $1<p<\infty$. To this end, we employ
a modification of an argument given in [1] for the case $m=2$. Suppose
$p\in\mathbb{N}$ such that $p\geq 2$, and choose a constant
$\Theta>\max\left\\{K,\frac{D+\tilde{D}}{2\sqrt{D\tilde{D}}}\right\\}$. For
$a,b\geq 0$ we denote $w^{(a,b)}:=u^{a}v^{b}$ and define the polynomial
$P(u,v,p,\Theta^{\beta^{2}})=\sum_{\beta=0}^{p}\frac{p!}{\beta!(p-\beta)!}\Theta^{\beta^{2}}w^{(\beta,p-\beta)}.$
(6.21)
In general, to fix notation we let
$P(u,v,p,\Theta^{c(\beta)})=\sum_{\beta=0}^{p}\frac{p!}{\beta!(p-\beta)!}\Theta^{c(\beta)}w^{(\beta,p-\beta)}.$
(6.22)
where $c(\beta)$ is a prescribed function of $\beta$. Note that
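Since $\Theta>\frac{D+\tilde{D}}{2\sqrt{D\tilde{D}}}\geq 1$ by the arithmetic-geometric mean inequality, the weights satisfy $1\leq\Theta^{\beta^{2}}\leq\Theta^{p^{2}}$, and the binomial theorem gives the two-sided comparison
$(u+v)^{p}\leq P(u,v,p,\Theta^{\beta^{2}})\leq\Theta^{p^{2}}(u+v)^{p}\quad\text{for all}\quad u,v\geq 0,$
so bounding $\int_{\Omega}P$ is equivalent to bounding $\|u+v\|_{p,\Omega}^{p}$.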
$\displaystyle\frac{\partial P}{\partial t}$
$\displaystyle=\sum\limits_{\beta=0}^{p}\frac{p!}{\beta!(p-\beta)!}\Theta^{\beta^{2}}\left(\beta
w^{(\beta-1,p-\beta)}u_{t}+(p-\beta)w^{(\beta,p-\beta-1)}v_{t}\right)$
$\displaystyle=\left(pv^{p-1}v_{t}+p\Theta^{p^{2}}u^{p-1}u_{t}\right)+X_{1}+X_{2},$
(6.23)
where
$\displaystyle X_{1}$
$\displaystyle=\sum\limits_{\beta=1}^{p-1}\frac{p!}{(\beta-1)!(p-\beta)!}\Theta^{\beta^{2}}w^{(\beta-1,p-\beta)}u_{t}$
$\displaystyle=p\Theta
v^{p-1}u_{t}+\sum\limits_{\beta=2}^{p-1}\frac{p!}{(\beta-1)!(p-\beta)!}\Theta^{\beta^{2}}w^{(\beta-1,p-\beta)}u_{t}$
$\displaystyle=p\Theta
v^{p-1}u_{t}+\sum\limits_{\beta=1}^{p-2}\frac{p!}{\beta!(p-\beta-1)!}\Theta^{(\beta+1)^{2}}w^{(\beta,p-\beta-1)}u_{t}$
$\displaystyle=p\Theta
v^{p-1}u_{t}+\sum\limits_{\beta=1}^{p-2}\frac{p!}{\beta!(p-\beta-1)!}\Theta^{\beta^{2}}w^{(\beta,p-\beta-1)}\Theta^{2\beta+1}u_{t}$
(6.24)
and
$\displaystyle X_{2}$
$\displaystyle=\sum\limits_{\beta=1}^{p-1}\frac{p!}{\beta!(p-\beta-1)!}\Theta^{\beta^{2}}w^{(\beta,p-\beta-1)}v_{t}$
$\displaystyle=p\Theta^{(p-1)^{2}}u^{p-1}v_{t}+\sum\limits_{\beta=1}^{p-2}\frac{p!}{\beta!(p-\beta-1)!}\Theta^{\beta^{2}}w^{(\beta,p-\beta-1)}v_{t}.$
(6.25)
Combining (6.23)-(6.25) gives
$\frac{\partial P}{\partial
t}=\sum\limits_{\beta=0}^{p-1}\frac{p!}{\beta!(p-1-\beta)!}\Theta^{\beta^{2}}w^{(\beta,p-1-\beta)}\left(\Theta^{2\beta+1}u_{t}+v_{t}\right)$
(6.26)
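As a quick check of (6.26), take $p=2$: then $P=v^{2}+2\Theta uv+\Theta^{4}u^{2}$, and
$\frac{\partial P}{\partial t}=2vv_{t}+2\Theta(u_{t}v+uv_{t})+2\Theta^{4}uu_{t}=2v\left(\Theta u_{t}+v_{t}\right)+2\Theta u\left(\Theta^{3}u_{t}+v_{t}\right),$
which is the right hand side of (6.26) with $\beta=0,1$.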
Clearly, the above steps hold even if we differentiate with respect to any
variable $x_{i}$, $i=1,2,3$, i.e.,
$\frac{\partial P}{\partial
x_{i}}=\sum\limits_{\beta=0}^{p-1}\frac{p!}{\beta!(p-1-\beta)!}\Theta^{\beta^{2}}w^{(\beta,p-1-\beta)}\left(\Theta^{2\beta+1}\frac{\partial
u}{\partial x_{i}}+\frac{\partial v}{\partial x_{i}}\right)$ (6.27)
Using the fact that $(u,v)$ satisfies equation (3.1), we obtain
$\frac{\partial P}{\partial
t}=\sum\limits_{\beta=0}^{p-1}\frac{p!}{\beta!(p-1-\beta)!}\Theta^{\beta^{2}}w^{(\beta,p-1-\beta)}\left(\Theta^{2\beta+1}(\mathcal{L}u+f_{1}(u,v))+\tilde{\mathcal{L}}v+f_{2}(u,v)\right).$
(6.28)
Integrating over $\Omega$, we have
$\int\limits_{\Omega}\frac{\partial P}{\partial t}\,dx=I+II+III$
(6.29)
where
$\displaystyle
I=\int\limits_{\Omega}\sum_{\beta=0}^{p-1}\frac{p!}{\beta!(p-1-\beta)!}\Theta^{\beta^{2}}w^{(\beta,p-1-\beta)}\left(\Theta^{2\beta+1}f_{1}(u,v)+f_{2}(u,v)\right)dx$
(6.30)
and
$\displaystyle
II=\int\limits_{\Omega}\sum_{\beta=0}^{p-1}\frac{p!}{\beta!(p-1-\beta)!}\Theta^{\beta^{2}}w^{(\beta,p-1-\beta)}\left(\Theta^{2\beta+1}D\Delta_{t}u+\tilde{D}\Delta_{t}v\right)dx.$
(6.31)
$\displaystyle III$
$\displaystyle=-\int\limits_{\Omega}\sum_{\beta=0}^{p-1}\frac{p!}{\beta!(p-1-\beta)!}\Theta^{\beta^{2}}w^{(\beta,p-1-\beta)}a(t)\left(\Theta^{2\beta+1}u+v\right)dx$
$\displaystyle\leq-k_{1}C(p,\Theta)\int\limits_{\Omega}(u+v)^{p}\,dx.$ (6.32)
Choosing $\Theta\geq K$ and applying (3.9), we have
$\displaystyle I$
$\displaystyle\leq\int\limits_{\Omega}\sum_{\beta=0}^{p-1}\frac{p!}{\beta!(p-1-\beta)!}\Theta^{\beta^{2}}w^{(\beta,p-1-\beta)}L_{\Theta}\left(u+v+1\right)dx$
$\displaystyle\leq L_{\Theta}\int\limits_{\Omega}[(u+v)^{p}+(u+v)^{p-1}]\,dx$
$\displaystyle\leq
C(p,h(T),L_{1},D,\tilde{D},\Omega,\|\det A(t)\|_{\infty},L_{\Theta})\quad\mbox{from
(\ref{ubound}).}$ (6.33)
Meanwhile,
$\displaystyle II$
$\displaystyle=\int\limits_{\Omega}\sum_{\beta=0}^{p-1}\frac{p!}{\beta!(p-1-\beta)!}\Theta^{\beta^{2}}w^{(\beta,p-1-\beta)}\left(\Theta^{2\beta+1}D\Delta_{t}u+\tilde{D}\Delta_{t}v\right)dx$
$\displaystyle=\int\limits_{\Gamma}\sum_{\beta=0}^{p-1}\frac{p!}{\beta!(p-1-\beta)!}\Theta^{\beta^{2}}w^{(\beta,p-1-\beta)}\left(\Theta^{2\beta+1}g_{1}(u,v)+g_{2}(u,v)\right)dx$
$\displaystyle-\int\limits_{\Omega}\sum_{\beta=0}^{p-1}\frac{p!}{\beta!(p-1-\beta)!}\Theta^{\beta^{2}}<\Theta^{2\beta+1}D\nabla_{t}w^{(\beta,p-1-\beta)},\nabla_{t}u>\,dx$
$\displaystyle-\int\limits_{\Omega}\sum_{\beta=0}^{p-1}\frac{p!}{\beta!(p-1-\beta)!}\Theta^{\beta^{2}}<\tilde{D}\nabla_{t}w^{(\beta,p-1-\beta)},\nabla_{t}v>\,dx.$
(6.34)
We have $\nabla_{t}w^{(\beta,p-1-\beta)}=\beta
u^{\beta-1}v^{(p-1-\beta)}\nabla_{t}u+u^{\beta}(p-1-\beta)v^{(p-2-\beta)}\nabla_{t}v$,
hence we can write
$\displaystyle\int\limits_{\Omega}\sum_{\beta=0}^{p-1}\frac{p!}{\beta!(p-1-\beta)!}\Theta^{\beta^{2}}\left(<\Theta^{2\beta+1}D\nabla_{t}w^{(\beta,p-1-\beta)},\nabla_{t}u>+<\tilde{D}\nabla_{t}w^{(\beta,p-1-\beta)},\nabla_{t}v>\right)\,dx$
$\displaystyle\geq\int\limits_{\Omega}\sum_{\beta=0}^{p-2}\frac{p!}{\beta!(p-2-\beta)!}\Theta^{\beta^{2}}w^{(\beta,p-2-\beta)}\sum\limits_{k=1}^{3}\frac{1}{\lambda_{k}(t)^{2}}\left<B(\Theta,D,\tilde{D})\left(\begin{array}[]{c}\partial_{x_{k}}u\\\
\partial_{x_{k}}v\end{array}\right),(\partial_{x_{k}}u,\partial_{x_{k}}v)\right>$
(6.37)
where
$B(\Theta,D,\tilde{D})=\begin{pmatrix}D\Theta^{4\beta+4}&\frac{(D+\tilde{D})}{2}\Theta^{2\beta+1}\\\
\frac{(D+\tilde{D})}{2}\Theta^{2\beta+1}&\tilde{D}\end{pmatrix}.$
Again choosing $\Theta$ sufficiently large so that the matrix
$B(\Theta,D,\tilde{D})$ is positive definite and recalling that
$0<\Lambda_{1}\leq\frac{1}{\lambda_{i}^{2}(t)}\leq\Lambda_{2}$, there exists
$\alpha_{\Theta,p}>0$ such that
$\displaystyle\int\limits_{\Omega}\sum_{\beta=0}^{p-1}\frac{p!}{\beta!(p-1-\beta)!}\Theta^{\beta^{2}}\left(<\Theta^{2\beta+1}D\nabla_{t}w^{(\beta,p-1-\beta)},\nabla_{t}u>+<\tilde{D}\nabla_{t}w^{(\beta,p-1-\beta)},\nabla_{t}v>\right)\,dx$
$\displaystyle\geq\Lambda_{1}^{2}\alpha_{\Theta,p}\int\limits_{\Omega}\sum_{\beta=0}^{p-2}\frac{p!}{\beta!(p-2-\beta)!}\Theta^{\beta^{2}}w^{(\beta,p-2-\beta)}\sum\limits_{k=1}^{3}\left((\partial_{x_{k}}u)^{2}+(\partial_{x_{k}}v)^{2}\right)$
$\displaystyle\geq\Lambda_{1}^{2}\alpha_{\Theta,p}\int\limits_{\Omega}\left(|\nabla(u)^{p/2}|^{2}+|\nabla(v)^{p/2}|^{2}\right)dx$
(6.38)
Substituting (6.32), (6.33), (6.34) and (6.38) into (6.29), we get
$\displaystyle\frac{\partial P}{\partial
t}+\Lambda_{1}^{2}\alpha_{\Theta,p}\int\limits_{\Omega}\left(|\nabla(u)^{p/2}|^{2}+|\nabla(v)^{p/2}|^{2}\right)dx+k_{1}C(p,\Theta)\int\limits_{\Omega}(u+v)^{p}\,dx$
$\displaystyle\leq
C(p,h(T),L_{1},D,\tilde{D},\Omega,L_{\Theta})+N_{p,\Theta,\Gamma}\left[\int\limits_{\Gamma}\left(u^{p}+v^{p}\right)d\sigma+1\right].$
(6.39)
for some constant $N_{p,\Theta,\Gamma}$. Applying Lemma 6.2 to the functions
$u^{p/2}$ and $v^{p/2}$ with $\gamma=p$ and using (6.10), we find that there
exists $\tilde{N}_{p,\Theta,\Gamma}>0$ such that
$\displaystyle
2N_{p,\Theta,\Gamma}\int_{\Gamma}\left(u^{p}+v^{p}\right)d\sigma\leq\alpha_{\Theta,p}\Lambda_{1}\int\limits_{\Omega}\left(|\nabla(u)^{p/2}|^{2}+|\nabla(v)^{p/2}|^{2}\right)dx+\tilde{N}_{p,\Theta,\Gamma}\left(\int_{\Omega}(u+v)dx\right)^{p}$
(6.40)
Adding (6.39) and (6.40), we get
$\displaystyle\frac{\partial P}{\partial
t}+N_{p,\Theta,\Gamma}\int\limits_{\Gamma}(u^{p}+v^{p})d\sigma\leq
C(p,h(T),L_{1},D,\tilde{D},M,L_{\Theta})+\tilde{N}_{p,\Theta,\Gamma}\left(\int\limits_{\Omega}(u+v)\
dx\right)^{p}+N_{p,\Theta,\Gamma}.$ (6.41)
Finally, if we integrate over time, we find that $\|u\|_{p,\Gamma_{T}}$ is
bounded in terms of $p$, $\Gamma$, $\Omega$, $\Theta$, $h(T)$, $w_{1}$,
$w_{2}$ and $\|v\|_{p,\Omega_{T}}$. Since this holds for every natural number
$p\geq 2$, we can use the assumption $(V_{Poly})$ and the bounds above, along
with Proposition 4.10 to conclude that $\|(u,v)\|_{\infty,\Omega_{T}}<\infty$.
From Theorem 3.1, this contradicts our assumption that $T_{\max}<\infty$.
Therefore, $T_{\max}=\infty$, and Theorem 3.2 is proved.
For $m\geq 2$ components, we first obtain the following $L_{p}$ estimates.
###### Lemma 6.3.
Suppose that $(V_{N})$, $(V_{F})$, $(V_{QP})$ and $(V_{L})$ are satisfied, and
$u$ is the unique, componentwise nonnegative, maximal solution to
(3.1). If $1<p<\infty$ and $T=T_{\max}<\infty$, then
$\|u\|_{p,\Omega_{T}}$ and $\|u\|_{p,\Gamma_{T}}$ are bounded.
The proof of Lemma 6.3 uses a Lyapunov function that extends the polynomial
(6.21) to $m$ components, i.e.,
$P=\sum\limits_{|\beta|=0}^{p}\frac{p!}{\beta!(p-|\beta|)!}\theta_{1}^{\beta_{1}^{2}}\ldots\theta_{m-1}^{\beta_{m-1}^{2}}u_{1}^{\beta_{1}}\ldots
u_{m-1}^{\beta_{m-1}}u_{m}^{p-|\beta|}$ (6.42)
where $|\beta|=|{\beta_{1}}|+\ldots+|\beta_{m-1}|$ and
$\beta!=\beta_{1}!\ldots\beta_{m-1}!$. The estimates are obtained by following
the steps of Lemma 5.3 of [30] with this polynomial, which is considerably
simpler than the function $H$ defined in [1] and used in [30].
Proof of Theorem 3.4: From Theorem 3.1, we already have a componentwise
nonnegative, unique, maximal solution of (3.1). If $T_{\max}=\infty$, then we
are done. So, by way of contradiction, assume $T_{\max}<\infty$. From Lemma
6.3, we have $L_{p}$ estimates for our solution for all $p\geq 1$ on
$\Omega\times(0,T_{\max})$ and $M\times(0,T_{\max})$. We know from
$(V_{Poly})$ that the $F_{i}$ and $G_{i}$ are polynomially bounded above for
each $i$. Then, proceeding as in the proof of Theorem 3.3 in [31] with the
bounds from Lemma 6.3, we conclude that $T_{\max}=\infty$, a contradiction.
## 7\. Examples
### Example 1
Here, we give an example related to the well-known Brusselator. Consider the
system
$\displaystyle u_{1_{t}}$ $\displaystyle=d_{1}\Delta u_{1}\quad$
$\displaystyle y\in\Omega_{t},t>0$ $\displaystyle u_{2_{t}}$
$\displaystyle=d_{2}\Delta u_{2}\quad$ $\displaystyle y\in\Omega_{t},t>0$
$\displaystyle d_{1}\frac{\partial u_{1}}{\partial\eta_{t}}$
$\displaystyle=\alpha u_{2}-u_{2}^{2}u_{1}\quad$ $\displaystyle
y\in\partial\Omega_{t},t>0$ (7.1) $\displaystyle d_{2}\frac{\partial
u_{2}}{\partial\eta_{t}}$
$\displaystyle=\beta-(\alpha+1)u_{2}+u_{2}^{2}u_{1}\quad$ $\displaystyle
y\in\partial\Omega_{t},t>0$ $\displaystyle u_{i}(y,0)$
$\displaystyle=w_{i}(y)$ $\displaystyle
y\in\overline{\Omega}_{0}\times\\{0\\}$
where $d_{1},d_{2},\alpha,\beta>0$ and $w$ is sufficiently smooth and
componentwise nonnegative. If we define
$f(u)=\begin{pmatrix}0\\\ 0\end{pmatrix}\quad\text{and }\quad
g(u)=\begin{pmatrix}\alpha u_{2}-u_{2}^{2}u_{1}\\\
\beta-(\alpha+1)u_{2}+u_{2}^{2}u_{1}\end{pmatrix}$
for all $u\in\mathbb{R}_{+}^{2}$, then $(V_{N})$, $(V_{F})$, $(V_{QP})$ and
$(V_{Poly})$ are satisfied with $a_{1}\geq 1$ and
$L_{a}=\max\\{\beta,\alpha\cdot a_{1}\\}$. Therefore, Theorem 3.4 implies that
(7.1) has a unique, componentwise nonnegative, global solution.
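The origin of the constant $L_{a}$ can be sketched by a direct computation (our own verification sketch from the definitions of $f$ and $g$ above; the intermediate rearrangement is not taken from the text): for $a_{1}\geq 1$ and $u\in\mathbb{R}_{+}^{2}$,

```latex
a_{1}g_{1}(u)+g_{2}(u)
  = a_{1}\bigl(\alpha u_{2}-u_{2}^{2}u_{1}\bigr)
    + \beta-(\alpha+1)u_{2}+u_{2}^{2}u_{1}
  = \beta+\bigl((a_{1}-1)\alpha-1\bigr)u_{2}
    - (a_{1}-1)u_{1}u_{2}^{2}
  \le \beta + a_{1}\alpha\,u_{2}
  \le \max\{\beta,\alpha\cdot a_{1}\}\,\bigl(1+u_{1}+u_{2}\bigr),
```

while $a_{1}f_{1}(u)+f_{2}(u)=0$ holds trivially since $f\equiv 0$; the quadratic term is dropped because $a_{1}\geq 1$ makes it nonpositive.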
### Example 2
We next consider a general reaction mechanism of the form
$R_{1}+R_{2}\;\rightleftharpoons\;P_{1}$
where $R_{i}$ and $P_{i}$ represent reactant and product species,
respectively. If we set $u_{i}=[R_{i}]$ for $i=1,2$, and $u_{3}=[P_{1}]$, and
let $k_{f},k_{r}$ be the (nonnegative) forward and reverse reaction rates,
respectively, then we can model the process by applying the law of mass
action and Fick's second law of diffusion, obtaining the following
reaction–diffusion system:
$\displaystyle u_{i_{t}}$ $\displaystyle=d_{i}\Delta u_{i}\quad$
$\displaystyle y\in\Omega_{t},t>0,i=1,2,3$ $\displaystyle d_{1}\frac{\partial
u_{1}}{\partial\eta_{t}}$ $\displaystyle=-k_{f}u_{1}u_{2}+k_{r}u_{3}\quad$
$\displaystyle y\in\partial\Omega_{t},t>0$ $\displaystyle d_{2}\frac{\partial
u_{2}}{\partial\eta_{t}}$ $\displaystyle=-k_{f}u_{1}u_{2}+k_{r}u_{3}\quad$
$\displaystyle y\in\partial\Omega_{t},t>0$ (7.2) $\displaystyle
d_{3}\frac{\partial u_{3}}{\partial\eta_{t}}$
$\displaystyle=k_{f}u_{1}u_{2}-k_{r}u_{3}\quad$ $\displaystyle
y\in\partial\Omega_{t},t>0$ $\displaystyle u_{i}(y,0)$
$\displaystyle=w_{i}(y)$ $\displaystyle
y\in\overline{\Omega}_{0}\times\\{0\\},i=1,2,3,$
where $d_{i}>0$ and the initial data $w$ is sufficiently smooth and
componentwise nonnegative. If we define
$f(u)=\begin{pmatrix}0\\\ 0\\\ 0\end{pmatrix}\quad\text{and}\quad
g(u)=\begin{pmatrix}-k_{f}u_{1}u_{2}+k_{r}u_{3}\\\
-k_{f}u_{1}u_{2}+k_{r}u_{3}\\\ k_{f}u_{1}u_{2}-k_{r}u_{3}\end{pmatrix}$
for all $u\in\mathbb{R}_{+}^{3}$, then $(V_{N})$, $(V_{F})$, $(V_{QP})$ and
$(V_{Poly})$ are satisfied. In addition, $(V_{L1})$ is satisfied with
$L_{1}=0$ since
$\frac{1}{2}f_{1}(z)+\frac{1}{2}f_{2}(z)+f_{3}(z)=0\quad\text{and}\quad\frac{1}{2}g_{1}(z)+\frac{1}{2}g_{2}(z)+g_{3}(z)=0$
for all $z\in\mathbb{R}^{3}_{+}$. Therefore, the hypothesis of Theorem 3.4 is
satisfied. As a result, (7.2) has a unique, componentwise nonnegative, global
solution.
### Example 3
Finally, we consider a system that satisfies the hypothesis of Theorem
3.4, where the boundary reaction vector field does not satisfy a linear
intermediate sums condition. Let
$\displaystyle u_{1_{t}}$ $\displaystyle=d_{1}\Delta u_{1}\quad$ $\displaystyle
y\in\Omega_{t},t>0$ $\displaystyle u_{2_{t}}$ $\displaystyle=d_{2}\Delta
u_{2}\quad$ $\displaystyle y\in\Omega_{t},t>0$ $\displaystyle d_{1}\frac{\partial
u_{1}}{\partial\eta_{t}}$ $\displaystyle=\alpha
u_{1}u_{2}^{3}-u_{1}u_{2}^{2}\quad$ $\displaystyle y\in\partial\Omega_{t},t>0$
(7.3) $\displaystyle d_{2}\frac{\partial u_{2}}{\partial\eta_{t}}$
$\displaystyle=u_{1}u_{2}^{2}-\beta u_{1}u_{2}^{6}\quad$ $\displaystyle
y\in\partial\Omega_{t},t>0$ $\displaystyle u(y,0)$ $\displaystyle=w(y)$
$\displaystyle y\in\overline{\Omega}_{0}\times\\{0\\}$
where $d_{1},d_{2},\alpha,\beta>0$ and $w$ is sufficiently smooth and
componentwise nonnegative. In this setting
$f(u)=\begin{pmatrix}0\\\ 0\end{pmatrix}\quad\text{and}\quad
g(u)=\begin{pmatrix}\alpha u_{1}u_{2}^{3}-u_{1}u_{2}^{2}\\\
u_{1}u_{2}^{2}-\beta u_{1}u_{2}^{6}\end{pmatrix}$
for all $u\in\mathbb{R}_{+}^{2}$. It is a simple matter to see that $(V_{N})$,
$(V_{F})$, $(V_{QP})$ and $(V_{Poly})$ are satisfied. Also, if $a\geq 1$ then
$af_{1}(u)+f_{2}(u)=0\quad\text{and}\quad
ag_{1}(u)+g_{2}(u)\leq a\alpha u_{1}u_{2}^{3}-\beta u_{1}u_{2}^{6}\leq\frac{(a\alpha)^{2}}{4\beta}u_{1}$
for all $u\in\mathbb{R}^{2}_{+}$, where the last inequality follows by
completing the square. Consequently, $(V_{L})$ is satisfied. Therefore,
Theorem 3.4 implies that (7.3) has a unique, componentwise nonnegative,
global solution.
## References
* [1] S. Abdelmalek and S. Kouachi, Proof of existence of global solutions for m-component reaction–diffusion systems with mixed boundary conditions via the Lyapunov functional method, J. Phys. A: Math. Theor. 40 (2007) 12335–12350.
* [2] I. Barrass, E. J. Crampin and P. K. Maini, Mode Transitions in a Model Reaction–Diffusion System Driven by Domain Growth and Noise, Bulletin of Mathematical Biology 68 (2006) 981–995.
* [3] E.J. Crampin, E.A. Gaffney and P.K. Maini, Reaction and diffusion on growing domains: Scenarios for robust pattern formation, Bulletin of Mathematical Biology 61 (1999) 1093–1120. https://doi.org/10.1006/bulm.1999.0131
* [4] E. J. Crampin, E. A. Gaffney and P. K. Maini, Mode-doubling and tripling in reaction-diffusion patterns on growing domains: A piecewise linear model, Journal of Mathematical Biology 44, (2002) 107–128
* [5] A. Comanici and M. Golubitsky, Patterns on growing square domains via mode interactions, Dynamical Systems 23:2 (2008) 167-206, DOI: 10.1080/14689360801945327
* [6] J. Ding and S. Li, Blow-up and global solutions for nonlinear reaction–diffusion equations with Neumann boundary conditions, Nonlinear Analysis 68(3) (2008) 507-514.
* [7] K. Fellner, J. Morgan and B. Q. Tang, Uniform-in-time Bounds for Quadratic Reaction-Diffusion Systems with Mass Dissipation in Higher Dimensions, to appear in Discrete and Continuous Dynamical Systems Series S. ArXiv version: https://arxiv.org/abs/1906.06902.
* [8] E. B. Fabes and N. M. Riviere, Dirichlet and Neumann problems for the heat equation in $C^{1}$ cylinders, “Proc. Sympos. Pure Math” Vol. 35 (1979) pp.179-196.
* [9] K. Fellner, J. Morgan and B. Q. Tang, Global Classical Solutions to Quadratic Systems with Mass Control in Arbitrary Dimensions, Annales de l’Institut Henri Poincaré, Volume 37, Issue 2 (March-April 2020) 281-307.
* [10] A. Hahn, K. Held, L. Tobiska, Modelling of surfactant concentration in a coupled bulk surface problem, PAMM Proc. Appl. Math. Mech Vol. 14 (2014) 525–526
* [11] S. L. Hollis, R. H. Martin, Jr., and M. Pierre, Global Existence and Boundedness in Reaction-Diffusion Systems, SIAM Journal of Mathematical Analysis 18(3) (1987)744–761.
* [12] S. Kondo, R. Asai, A reaction–diffusion wave on the skin of the marine angelfish Pomacanthus, Nature 376 (1995) 765–768. https://doi.org/10.1038/376765a0
* [13] A. L. Krause, M. A. Ellis and R. A. Van Gorder, Influence of Curvature, Growth, and Anisotropy on the Evolution of Turing Patterns on Growing Manifolds, Bulletin of Mathematical Biology 81:3 (2019) 759–799.
* [14] P. M. Kulesa, G. C. Cruywagen, S. R. Lubkin, P. K. Maini, J. Sneyd, M. W. J. Ferguson and J. D. Murray, On a model mechanism for the spatial pattering of teeth primordia in the alligator, Journal of Theoretical Biology 180 (1996) 287–296.
* [15] O. A. Ladyzhenskaia and N. N. Ural’tseva, “Linear and quasilinear elliptic equations”, Academic Press, New York, 1968.
* [16] O. A. Ladyzhenskaya, V. A. Solonnikov and N. N. Uraltseva, “Linear and quasilinear equations of parabolic type”, American Mathematical Society, Providence, R.I., 1968.
* [17] M. Labadie, The stabilizing effect of growth on pattern formation (2008). Preprint.
* [18] A. Madzvamuse and A. H. Chung, Analysis and Simulations of Coupled Bulk-surface Reaction-Diffusion Systems on Exponentially Evolving Volumes, Mathematical Modelling of Natural Phenomena Vol. 11, No. 5 (2016) pp. 4-32
* [19] A. Madzvamuse and P. K. Maini, Velocity-induced numerical solutions of reaction-diffusion systems on continuously growing domains, Journal of Computational Physics, Volume 225, Issue 1 (1 July 2007) 100–119.
* [20] A. Madzvamuse, E. Gaffney and P. Maini, Stability analysis of non-autonomous reaction-diffusion systems: the effects of growing domains. Journal of Mathematical Biology 61(1), (2010) 133–164.
* [21] A. Madzvamuse, Time-stepping schemes for moving grid finite elements applied to reaction–diffusion systems on fixed and growing domains, Journal of Computational Physics, Volume 214, Issue 1 (1 May 2006) 239–263.
* [22] J. Morgan, Global Existence for Semilinear Parabolic Systems, SIAM Journal on Mathematical Analysis 20(5) (1989) 1128–1144.
* [23] J. Morgan and V. Sharma, Global Existence of Solutions to Volume-Surface Reaction Diffusion Systems with Dynamic Boundary Conditions, Differential and Integral Equations, Volume 33 (March-April 2020) 113–139.
* [24] R. G. Plaza, F. Sànchez-Garduño, P. Padilla, R. A. Barrio, and P. K. Maini, The Effect of Growth and Curvature on Pattern Formation, Journal of Dynamics and Differential Equations Vol. 16, No. 4 (2004).
* [25] M. Pierre and D. Schmitt, Blowup in Reaction-Diffusion Systems with Dissipation of Mass, SIAM Review, Vol. 42, No. 1 (2000) 93-106.
* [26] M. Pierre, Global Existence in Reaction-Diffusion Systems with Control of Mass: a survey, Milan Journal of Mathematics 78(2) (2010) 417–455.
* [27] A. Rätz and M. Röger, Turing instabilities in a mathematical model for signaling networks, Journal of Mathematical Biology 65 (2012) 1215–1244.
* [28] A. Rätz and M. Röger, Symmetry breaking in a bulk-surface reaction-diffusion model for signaling networks, Nonlinearity, Volume 27, Number 8 (2013)
* [29] F. Rothe, Global solutions of reaction-diffusion systems “Lecture Notes in Math” 1072 (1984).
* [30] V. Sharma, Global Existence and Uniform Estimates for Solutions to Reaction-Diffusion Systems with Mass Transport Type of Boundary Conditions, Communications on Pure and Applied Analysis, accepted (2020).
* [31] V. Sharma and J. Morgan, Global Existence of Solutions to Coupled Reaction-Diffusion Systems with Mass Transport Type of Boundary Conditions, SIAM Journal on Mathematical Analysis, 48(6) (2016) 4202-4240.
* [32] V. Sharma and J. Morgan Uniform Bounds for Solutions to Volume-Surface Reaction Diffusion Systems, Differential and Integral Equations, 30(5-6) (2017) 423-442.
* [33] V. Sharma and J. V. Prajapat, Global Existence of Solution to Volume Surface Reaction Diffusion System with Evolving Domain, In Preparation
* [34] A. M. Turing, The chemical basis of morphogenesis, Philosophical Transactions of the Royal Society of London, Series B 237 (1952) 37–72.
* [35] C. Venkataraman, O. Lakkis and A. Madzvamuse, Global existence for semilinear reaction–diffusion systems on evolving domains, Journal of Mathematical Biology 64 (2012) 41–67.
# Shapley-Scarf Housing Markets: Respecting Improvement, Integer Programming,
and Kidney Exchange111This work is financed by COST Action CA15210 ENCKEP,
supported by COST (European Cooperation in Science and Technology) –
http://www.cost.eu/.
Péter Biró Centre for Economic and Regional Studies, and Corvinus University
of Budapest, 1097 Budapest, Tóth Kálmán utca 17, Hungary. P. Biró is supported
by the Hungarian Scientific Research Fund – OTKA (no. K129086). Flip Klijn
Institute for Economic Analysis (CSIC) and Barcelona GSE, Campus UAB, 08193
Bellaterra (Barcelona), Spain; e-mail<EMAIL_ADDRESS>F. Klijn
gratefully acknowledges financial support from AGAUR–Generalitat de Catalunya
(2017-SGR-1359) and the Spanish Ministry of Science and Innovation through
grant ECO2017-88130-P AEI/FEDER, UE and the Severo Ochoa Programme for Centres
of Excellence in R&D (CEX2019-000915-S). Xenia Klimentova INESC TEC, Porto,
Portugal. Ana Viana INESC TEC and ISEP – School of Engineering, Polytechnic of
Porto, Porto, Portugal.
(August 28, 2024)
###### Abstract
In a housing market of Shapley and Scarf [41], each agent is endowed with one
indivisible object and has preferences over all objects. An allocation of the
objects is in the (strong) core if there exists no (weakly) blocking
coalition. In this paper we show that in the case of strict preferences the
unique strong core allocation (or competitive allocation) “respects
improvement”: if an agent’s object becomes more attractive for some other
agents, then the agent’s allotment in the unique strong core allocation weakly
improves. We obtain a general result in the case of ties in the preferences and
provide new integer programming formulations for computing (strong) core and
competitive allocations. Finally, we conduct computer simulations to compare
the game-theoretical solutions with maximum size and maximum weight exchanges
for markets that resemble the pools of kidney exchange programmes.
Keywords: housing market, respecting improvement, core, competitive
allocations, integer programming, kidney exchange programmes.
## 1 Introduction
Shapley and Scarf [41] introduced so-called “housing markets” to model trading
in commodities that are inherently indivisible. Specifically, in a housing
market each agent is endowed with an object (e.g., a house or a kidney donor)
and has ordinal preferences over all objects, including her own. The aim is to
find plausible or desirable allocations where each agent is assigned one
object. A standard approach in the literature is to discard allocations that
can be blocked by a coalition of agents. Specifically, a coalition of agents
blocks an allocation if they can trade their endowments so that each of the
agents in the coalition obtains a strictly preferred allotment. Similarly, a
coalition of agents weakly blocks an allocation if they can trade their
endowments so that each of the agents in the coalition obtains a weakly
preferred allotment and at least one of them obtains a strictly preferred
allotment. Thus, an allocation is in the (strong) core if it is not (weakly)
blocked. A distinct but also well-studied solution concept is obtained from
competitive equilibria, each of which consists of a vector of prices for the
objects and a (competitive) allocation such that each agent’s allotment is one
of her most preferred objects among those that she can afford. Interestingly,
the three solution concepts are entwined: the strong core is contained in the
set of competitive allocations, and each competitive allocation pertains to
the core.
In a separate line of research, Balinski and Sönmez [6] studied the classical
two-sided college admissions model of Gale and Shapley [20] and proved that
the student-optimal stable matching mechanism (SOSM) respects improvement of
student’s quality. This means that under SOSM, an improvement of a student’s
rank at a college will, ceteris paribus, lead to a weakly preferred match for
the student. The natural transposition of this property to (one-sided) housing
markets requires that an agent obtains a weakly preferred allotment whenever
her object becomes more desirable for other agents. We study the following
question: Do the most prominent solution concepts for Shapley and Scarf’s [41]
housing market “respect improvement”? We obtain several positive answers to
this question, which we describe in more detail in the next subsection.
The respecting improvement property is important in many applications where
centralised clearinghouses use mechanisms to implement barter exchanges. A
leading example is kidney exchange programmes (KEPs), where end-stage renal
patients exchange their willing but immunologically incompatible donors (Roth
et al. [37]). In the context of KEPs, the respecting improvement property
means that whenever a patient brings a “better” donor (e.g., younger or with
universal blood type 0 instead of A, B, or AB) or registers an additional
donor, the KEP should assign her the same or a better exchange donor. However,
in current KEPs, the typical objective is to maximise the number of
transplants and their overall qualities (see, e.g., [11]) which can lead to
violations of the respecting improvement property. As an illustration,
consider the maximisation of the number of transplants in Figure 1, where each
node represents a patient-donor pair. Directed edges represent compatibility
between the donor in one pair and the patient in another, and patients may
have different levels of preference over their set of compatible donors.
Figure 1: The maximisation of the number of transplants does not
respect improvement.
Initially there are only continuous edges, where a thick (thin) edge points to
the most (least) preferred donor. For example, patient 3 has two compatible
donors: donors 1 and 4, and donor 1 is preferred to donor 4. Obviously, the
unique way to maximise the number of (compatible) transplants is obtained by
picking the three-cycle (1,2,3). Suppose that patient 4 receives antigen
desensitisation treatment so that donor 3 becomes compatible for her, or
patient 3 succeeds in bringing a second donor to the KEP and this donor turns
out to be compatible for patient 4. Then, the discontinuous edge is included
and patient-donor pair 3 “improves.” But now the unique way to maximise the
number of (compatible) transplants is obtained by picking the two 2-cycles
(1,2) and (3,4), which means that patient 3 receives a kidney that is strictly
worse than the kidney she would have received initially.
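The reversal above can be checked mechanically. The following brute-force sketch is our own encoding of the Figure 1 instance (the edge set and the helper `best_allocation` are illustrative, not from the paper); an arc `(d, p)` means the donor of pair `d` is compatible with the patient of pair `p`:

```python
from itertools import permutations

def best_allocation(edges, n=4):
    # An allocation is a permutation sigma: sigma[p] is the pair whose donor
    # patient p receives; sigma[p] == p means no transplant for patient p.
    # We maximise the number of transplants, as current KEPs typically do.
    best, best_count = None, -1
    for perm in permutations(range(1, n + 1)):
        sigma = dict(zip(range(1, n + 1), perm))
        if all(d == p or (d, p) in edges for p, d in sigma.items()):
            count = sum(1 for p, d in sigma.items() if d != p)
            if count > best_count:
                best, best_count = sigma, count
    return best, best_count

# Continuous edges of Figure 1: patient 3 is compatible with donors 1 and 4
# (donor 1 preferred); a 3-cycle on pairs {1,2,3} and the 2-cycle (1,2) exist.
initial = {(1, 3), (3, 2), (2, 1), (1, 2), (4, 3)}
alloc0, n0 = best_allocation(initial)
# After the improvement, donor 3 becomes compatible with patient 4.
alloc1, n1 = best_allocation(initial | {(3, 4)})
print(n0, alloc0[3])  # 3 transplants; patient 3 receives her preferred donor 1
print(n1, alloc1[3])  # 4 transplants; patient 3 now receives the worse donor 4
```

In both markets the maximiser is unique, so the output is deterministic: the improvement raises the total from 3 to 4 transplants while strictly worsening patient 3's allotment.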
Similarly, the allocations induced by the standard objectives of KEPs need not
be in the (strong) core. We refer to Example 1 for an illustration of this for
the case of the maximisation of the number of transplants. As a consequence,
blocking coalitions may exist. This is an undesirable feature because patient
groups could make a potentially justified claim that the matching procedure is
not in their best interest. A particular instance could occur in the
organisation of international kidney exchanges if a group of patient-donor
pairs, all citizens of the same country, learn that an internal (i.e.,
national) matching would give all of them a more preferred kidney.
We conduct simulations to determine to what extent the typical KEP objectives
lead to the violation of the respecting improvement property and the existence
of blocking coalitions. The simulations also include the three standard game-
theoretical solution concepts (as they also do not “completely” satisfy the
respecting improvement property), for which we first develop novel integer
programming formulations to speed up the computations.
Next, we describe our contributions in more detail and review the related
literature.
### 1.1 Contributions
Section 3 contains our theoretical results on the respecting improvement
property. First, we show that for strict preferences the unique strong core
allocation (which coincides with the unique competitive allocation) respects
improvement (Theorem 1). In the case of preferences with ties, since the
strong core can be empty, we focus on the set of competitive allocations.
Since typically multiple competitive allocations exist, we have to make
setwise comparisons. We obtain a natural extension of our first result:
focusing on the agent’s allotments obtained at competitive allocations, we
establish that her most preferred allotment in the new market is weakly
preferred to her most preferred allotment in the initial market; and
similarly, her least preferred allotment in the new market is weakly preferred
to her least preferred allotment in the initial market (Proposition 1).
Finally, we also prove that when preferences have ties the strong core
respects improvement conditional on the strong core being non-empty. More
precisely, under the assumption that strong core allocations exist in both the
initial and new markets, we show that the agent under consideration weakly
prefers each allotment in the new strong core to each allotment in the initial
strong core (Theorem 2).
Then we tackle an important assumption in the housing market of Shapley and
Scarf, namely that allocations can contain exchange cycles of any length,
i.e., cycles are unbounded. As a consequence, some allocations obtained from
the theory of housing markets might be difficult to implement in the case of
KEPs. The reason is that all transplants in a cycle are usually carried out
simultaneously to avoid reneging. So, if the number of surgical teams and
operation rooms is small, some of the transplants would necessarily have to be
conducted non-simultaneously. In many countries, this “risky”
solution is not allowed [8]. The definition of core, set of competitive
allocations, and strong core can be adjusted to the requirement that the
length of exchange cycles does not exceed an exogenously given maximum.
However, in this case the core (and hence also the set of competitive
allocations and the strong core) can be empty.222The corresponding decision
problem is NP-hard [10, 23] even for tripartite graphs (also known as the
cyclic 3D stable matching problem [34]). Conditional on the existence of a
core, competitive, or strong core allocation, we show that even if preferences
are strict, when the length of exchange cycles is limited (upper bound 3 or
higher), the core, the set of competitive allocations, and the strong core do
not respect improvement in terms of the most preferred allotment (Proposition
2).
In view of practical applications such as KEPs, we provide as a second main
contribution, in Section 4, novel integer programming (IP) formulations for
finding core, competitive, and strong core allocations. Our simple sets of
constraints for the three solution concepts clearly show the hierarchy between
them by pinpointing the additional requirements needed when moving from one
solution concept to a stronger one. Our formulations are concise and useful
for practical computations.
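The paper's exact constraint sets appear in Section 4. For orientation only, the baseline that such formulations build on, a standard edge-variable IP for the maximum-size exchange with unbounded cycle lengths (well known in the KEP literature, and not the authors' novel formulation), uses a binary variable $x_{(i,j)}$ for each compatibility arc $(i,j)$ of the directed graph $(V,A)$:

```latex
\max \sum_{(i,j)\in A} x_{(i,j)}
\qquad\text{subject to}\qquad
\sum_{j:\,(i,j)\in A} x_{(i,j)} \;=\; \sum_{j:\,(j,i)\in A} x_{(j,i)} \;\le\; 1
\quad \text{for all } i\in V,
\qquad x_{(i,j)}\in\{0,1\}.
```

The equality forces each pair to donate a kidney exactly when it receives one, so the selected arcs decompose into vertex-disjoint exchange cycles; stronger solution concepts can then be imposed by adding no-blocking constraints on top of such a feasibility system.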
Section 5 contains our third main contribution, which complements our
theoretical analysis and consists of computer simulations comparing
core/competitive/strong core with maximum number of transplants and maximum
total weight allocations for both bounded length and unbounded length
exchanges. The total weight of an exchange is the sum of the weights
associated with the arcs involved in the exchange. In the simulations we use the
IP models developed in [25] for bounded length exchanges. For unbounded length
exchanges we use our novel IP formulations. To carry out our simulations we
draw markets from pools similar to those observed in KEPs. We analyse the
impact on the objective functions of the stability requirements associated
with core, competitive, and strong core allocations. For unbounded length we
also study the price of fairness: the decrease in the number of transplants of
maximum weight, core, competitive and strong core allocations, when compared
with the maximum size solution. The analysis proceeds with an indirect
assessment of how far other solutions would be from the strong core. Such an
indicator provides some insight into the number of patients in a pool who
could get a strictly better match. The section concludes with a computational
analysis of the frequency of violations of the respecting improvement
property. We observed that even though cases of violation of the property
exist for the (Wako, strong) core, their number is dramatically lower than
for the maximum size and maximum weight solutions.
### 1.2 Literature review
##### Housing markets
The non-emptiness of the core was proved in [41] by showing the balancedness
of the corresponding NTU-game, and also in a constructive way, by showing that
David Gale’s famous Top Trading Cycles algorithm (TTC) always yields
competitive allocations. Roth and Postlewaite [36] later showed that for
strict preferences the TTC results in the unique strong core allocation, which
coincides with the unique competitive allocation in this case. However, if
preferences are not strict (i.e., ties are present), the strong core can be
empty or contain more than one allocation, but the TTC still produces all
competitive allocations. Wako [43] showed that the strong core is always a
subset of the set of competitive allocations. Quint and Wako [35] provided an
efficient algorithm for finding a strong core allocation whenever there exists
one. Their work was further generalised and simplified by Cechlárová and
Fleiner [14] who used graph models. Wako [45] showed that the set of
competitive allocations coincides with a core defined by antisymmetric weak
domination. This equivalence is key for our extension of the definition of
competitive allocations to the case of bounded exchange cycles.
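As a concrete reference point for the algorithm discussed above, here is a compact textbook-style sketch of TTC for strict preferences (our own illustrative implementation, not the one analysed in the papers cited): each remaining agent points at her most preferred remaining object, some pointer cycle necessarily exists, and its members trade along the cycle and leave the market.

```python
def top_trading_cycles(prefs):
    # prefs[a]: agent a's strict ranking of objects, most preferred first.
    # Object i is the endowment of agent i. Returns agent -> object received.
    remaining = set(prefs)
    allocation = {}
    while remaining:
        # Each remaining agent points at her best remaining object (= its owner).
        point = {a: next(o for o in prefs[a] if o in remaining)
                 for a in remaining}
        # Follow the pointers from an arbitrary agent until a node repeats;
        # the repeated node lies on a trading cycle.
        seen, a = set(), next(iter(remaining))
        while a not in seen:
            seen.add(a)
            a = point[a]
        # Collect that cycle and let its members trade along it.
        cycle = [a]
        b = point[a]
        while b != a:
            cycle.append(b)
            b = point[b]
        for c in cycle:
            allocation[c] = point[c]
            remaining.discard(c)
    return allocation
```

For example, with `prefs = {1: [2, 1, 3], 2: [1, 2, 3], 3: [1, 2, 3]}`, agents 1 and 2 trade in the first round and agent 3 is left with her own object, which for strict preferences is exactly the unique strong core allocation.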
##### Respecting improvement
For Gale and Shapley’s [20] college admissions model, Balinski and Sönmez [6]
proved that the student-optimal stable matching mechanism (SOSM) respects
improvement of student’s quality. Kominers [28] generalised this result to
more general settings. Balinski and Son̈mez [6] also showed that SOSM is the
unique stable mechanism that respects improvement of student quality.
Abdulkadiroglu and Son̈mez [1] proposed and discussed the use of TTC in a
model of school choice, which is closely related to the college admissions
model. Klijn [24] proved that TTC respects improvement of student quality.
Focusing on the other side of the market, Hatfield et al. [22] studied the
existence of mechanisms that respect improvement of a college’s quality. The
fact that colleges can match with multiple students leads to a strong
impossibility result: Hatfield et al. [22] proved that there is no stable nor
Pareto-efficient mechanism that respects improvement of a college’s quality.
In particular, the (Pareto-efficient) TTC mechanism does not respect
improvement of a college’s quality.
In the context of KEPs with pairwise exchanges, the incentives for bringing an additional donor to the exchange pool were first studied by Roth et al. [38].
In the model of housing markets their donor-monotonicity property boils down
to the respecting improvement property. Roth et al. [38] showed that so-called
priority mechanisms are donor-monotonic if each agent’s preferences are
dichotomous, i.e., she is indifferent between all acceptable donors. However,
if agents have non-dichotomous preferences, then any mechanism that maximises
the number of pairwise exchanges (so, in particular any priority mechanism)
does not respect improvement. This can be easily seen by means of Example 5 in
Section 3.
##### IP formulations for matching
Quint and Wako [35] already gave IP formulations for finding core and strong core allocations, but the number of constraints is exponential, as their formulations contain a no-blocking condition for each set of agents and each possible exchange among these agents. Other studies
provided IP formulations for other matching problems. In particular, for Gale
and Shapley’s [20] college admissions model, Baïou and Balinski [5] already
described the stable admissions polytope, which can be used as a basic IP
formulation. Further recent papers in this line of research focused on college
admissions with special features [3], stable project allocation under
distributional constraints [4], the hospital–resident problem with couples
[9], and ties [30, 17].
##### Kidney exchange programmes
Starting from the seminal work by Saidman et al. [39], initial research on
KEPs focused on integer programming (IP) models for selecting pairs for
transplantation in such a way that maximum (social) welfare, generally
measured by the number of patients transplanted, is achieved. Authors in [15,
18, 31] proposed new, compact formulations that, besides extending the models
in [39] to accommodate non-directed donors and patients with multiple
donors, also aimed to efficiently solve problems of larger size. Later studies
[26, 19, 12, 32] modelled the possibility of pair dropout or cancellation of
transplants (if e.g. new incompatibilities are revealed). While [26] and [19]
aimed to find a solution that maximises expected welfare, [12] proposed robust
optimisation models that search for a solution that, in the event of a
cancellation, can be replaced by an alternative (recourse) solution that in
terms of selected patients is as “close” as possible to the initial solution.
McElfresh et al. [32] also addressed the robustness of solutions, but they did
not consider the possibility of recourse.
In a different line of research, [13, 27, 33] studied KEPs where agents (e.g.
hospitals, regional and national programmes) can collaborate. Allowing agents
to control their internal exchanges, Carvalho et al. [13] studied strategic
interaction using non-cooperative game theory. Specifically, for the two-agent
case, they designed a game such that some Nash equilibrium maximises the
overall social welfare. Considering multiple matching periods, Klimentova et
al. [27] assumed agents to be non-strategic. Taking into account that at each
period there can be multiple optimal solutions, each of which can benefit
different agents, the authors proposed an integer programming model to achieve
an overall fair allocation. Finally, Mincu et al. [33] proposed integer
programming formulations for the case where optimisation goals and constraints
can be distinct for different agents.
A recent line of research acknowledges the importance of considering patients’
preferences (associated with e.g. graft quality) over matches. Biró and
Cechlárová [7] considered a model for unbounded length kidney exchanges, where
patients care most about the graft they receive, but as a secondary factor
(whenever there is a tie) they prefer to be involved in an exchange cycle that
is as short as possible. The authors showed that although core allocations can
still be found by the TTC algorithm, finding a core allocation with maximum
number of transplants is a computationally hard problem (inapproximable,
unless $P=NP$). Recently, Klimentova et al. [25] provided integer programming
formulations when each patient has preferences over the organs that she can
receive. The authors focused on allocations that among all (strong) core
allocations have maximum cardinality. Moving away from the (strong) core, they
also analysed the trade-off between maximum cardinality and the number of
blocking cycles. Finally, the reader is referred to [11] for a recent review
of KEPs.
## 2 Preliminaries
We consider housing markets as introduced by Shapley and Scarf [41]. Let
$N=\\{1,\ldots,n\\}$, $n\geq 2$, be the set of agents. Each agent $i\in N$ is
endowed with one object denoted by $e_{i}=i$. Thus, $N$ also denotes the set
of objects. Each agent $i\in N$ has complete and transitive (weak) preferences
$R_{i}$ over objects. We denote the strict part of $R_{i}$ by $P_{i}$, i.e.,
for all $j,k\in N$, $jP_{i}k$ if and only if $jR_{i}k$ and not $kR_{i}j$.
Similarly, we denote the indifference part of $R_{i}$ by $I_{i}$, i.e., for
all $j,k\in N$, $jI_{i}k$ if and only if $jR_{i}k$ and $kR_{i}j$. Let
$R\equiv{(R_{i})}_{i\in N}$. A (housing) market is a pair $(N,R)$. Object
$j\in N$ is acceptable to agent $i\in N$ if $jR_{i}i$. Agent $i$’s preferences
are called strict if they do not exhibit ties between acceptable objects,
i.e., for all acceptable $j,k\in N$ with $j\neq k$ we have $jP_{i}k$ or
$kP_{i}j$. A housing market has strict preferences if each agent has strict
preferences. A housing market where agents do not necessarily have strict
preferences is referred to as a housing market with weak preferences.
Given a housing market $M=(N,R)$ and a set $S\subseteq N$, the submarket
$M_{S}$ is the housing market where $S$ is the set of agents/objects and where
the preferences ${(R_{i})}_{i\in S}$ are restricted to the objects in $S$.
The acceptability graph of a housing market $M=(N,R)$ is the directed graph
$G_{M}=(N,E)$, or $G$ for short, where the set of nodes is $N$ and where
$(i,j)$ is a directed edge in $E$ if $j$ is an acceptable object for $i$,
i.e., $jR_{i}i$. In particular, all self-loops $(i,i)$ are in the graph (but
for convenience they are omitted in all figures). Let $\tilde{N}\subseteq N$
and $\tilde{E}\subseteq E\cap(\tilde{N}\times\tilde{N})$. For each
$i\in\tilde{N}$, the set of agent $i$’s most preferred edges in graph
$\tilde{G}\equiv(\tilde{N},\tilde{E})$ or simply $\tilde{E}$ is the set
$\tilde{E}^{T,i}\equiv\\{(i,j):(i,j)\in\tilde{E}\mbox{ and for each
}(i,k)\in\tilde{E},jR_{i}k\\}$. The most preferred edges in graph $\tilde{G}$
is the set $\cup_{i\in\tilde{N}}\tilde{E}^{T,i}$.
Let $M=(N,R)$ be a housing market. An allocation is a redistribution of the
objects such that each agent receives exactly one object. Formally, an
allocation is a vector $x=\left(x_{i}\right)_{i\in N}\in N^{N}$ such that:
* •
for each $i\in N$, $x_{i}\in N$ denotes agent $i$’s allotment, i.e., the object that she receives, and
* •
no object is assigned to more than one agent, i.e., $\cup_{i\in N}\\{x_{i}\\}=N$.
We will focus on individually rational allocations, i.e., allocations where each agent receives an acceptable object. Then, an
allocation $x$ can equivalently be described by its corresponding cycle cover
$g^{x}$ of the acceptability graph $G$. Formally, $g^{x}=(N,E^{x})$ is the
subgraph of $G$ where $(i,j)\in E^{x}$ if and only if $x_{i}=j$. Thus, the
graph $g^{x}$ consists of disconnected trading cycles or exchange cycles that
cover $G$. We will often write an (individually rational) allocation in cycle-
notation, i.e., as a set of exchange cycles (where we sometimes omit self-
cycles). We refer to Example 1 for an illustration.
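The decomposition of an allocation into its exchange cycles can be sketched in a few lines (Python; the dictionary encoding, with object $j$ identified with its owner $j$, is an assumption of this illustration):

```python
def exchange_cycles(x):
    """Decompose an allocation into its exchange cycles by following
    each agent i to the owner of the object x[i] she receives
    (recall that object j is endowed to agent j)."""
    seen, cycles = set(), []
    for start in x:
        if start in seen:
            continue
        cycle, i = [], start
        while i not in seen:
            seen.add(i)
            cycle.append(i)
            i = x[i]
        cycles.append(tuple(cycle))
    return cycles

# Allocation x^d of Example 1: agent 1 gets object 3, 2 gets 1, 3 gets 4, 4 gets 2.
x_d = {1: 3, 2: 1, 3: 4, 4: 2, 5: 5, 6: 6}
print(exchange_cycles(x_d))  # → [(1, 3, 4, 2), (5,), (6,)]
```

The self-cycles $(5)$ and $(6)$ are exactly the ones omitted in the cycle-notation of Example 1.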
An allocation $x$ Pareto-dominates an allocation $z$ if for each $i\in N$,
$x_{i}R_{i}z_{i}$, and for some $j\in N$, $x_{j}P_{j}z_{j}$. An allocation is
Pareto-efficient if it is not Pareto-dominated by any allocation. Two
allocations $x,z$ are welfare-equivalent if for each $i\in N$,
$x_{i}I_{i}z_{i}$.
Next, we recall the definition of solution concepts that have been studied in
the literature. A non-empty coalition $S\subseteq N$ blocks an allocation $x$
if there is an allocation $z$ such that (1) $\\{z_{i}:i\in S\\}=S$ and (2) for
each $i\in S$, $z_{i}P_{i}x_{i}$. An allocation $x$ is in the core of the market if there is no coalition that blocks $x$. (Footnote 3: In the literature the core is sometimes called the weak core or “regular” core.)
A non-empty coalition $S\subseteq N$ weakly blocks an allocation $x$ if there
is an allocation $z$ such that (1) $\\{z_{i}:i\in S\\}=S$, (2) for each $i\in
S$, $z_{i}R_{i}x_{i}$, and (3) for some $j\in S$, $z_{j}P_{j}x_{j}$. An allocation $x$ is in the strong core of the market if there is no coalition that weakly blocks $x$. (Footnote 4: In the literature the strong core is sometimes called the strict core.)
A price-vector is a vector $p={(p_{i})}_{i\in N}\in\mathbb{R}^{N}$ where
$p_{i}$ denotes the price of object $i$. A competitive equilibrium is a pair
$(x,p)$ where $x$ is an allocation and $p$ is a price-vector such that:
* •
for each agent $i\in N$, object $x_{i}$ is affordable, i.e., $p_{x_{i}}\leq p_{i}$, and
* •
for each agent $i\in N$, each object she prefers to $x_{i}$ is not affordable, i.e., $jP_{i}x_{i}$ implies $p_{j}>p_{i}$.
An allocation is a competitive
allocation if it is part of some competitive equilibrium. Since there are $n$
objects, we can assume, without loss of generality, that prices are integers
in the set $\\{1,2,\ldots,n\\}$.
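The two equilibrium conditions can be checked mechanically; in the sketch below each $R_i$ is encoded as a list of indifference classes over acceptable objects, best first (this encoding is an assumption of the illustration, and $x$ is assumed individually rational):

```python
def is_competitive(x, p, pref):
    """Check whether (x, p) is a competitive equilibrium.
    pref[i] lists agent i's indifference classes (sets) over her acceptable
    objects, most preferred first; i's own object appears in some class."""
    for i, classes in pref.items():
        if p[x[i]] > p[i]:                      # allotment must be affordable
            return False
        for cls in classes:
            if x[i] in cls:                     # no later class is strictly preferred
                break
            if any(p[j] <= p[i] for j in cls):  # a strictly preferred object is affordable
                return False
    return True

# The three-agent market of Footnote 8: 2 P_1 3 P_1 1,  1 I_2 3 P_2 2,  2 P_3 1 P_3 3.
pref = {1: [{2}, {3}, {1}], 2: [{1, 3}, {2}], 3: [{2}, {1}, {3}]}
x = {1: 2, 2: 1, 3: 3}   # the competitive allocation {(1,2),(3)}
p = {1: 2, 2: 2, 3: 1}   # equal prices along the trading cycle, a lower price for agent 3
print(is_competitive(x, p, pref))  # → True
```

Note that the prices are equal along the exchange cycle $(1,2)$, as Remark 1 below requires.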
###### Remark 1.
If $(x,p)$ is such that
* •
for each $i\in N$, $p_{x_{i}}\leq p_{i}$, or
* •
for each $i\in N$, $p_{i}\leq p_{x_{i}}$,
then for each $i\in N$, $p_{x_{i}}=p_{i}$. This follows immediately by looking at each exchange cycle separately (see, e.g., the proof of Lemma 1 in [14]). Hence, at each competitive equilibrium $(x,p)$, for each $i\in N$, $p_{x_{i}}=p_{i}$.
Wako [45] proved that the set of competitive allocations can be defined
equivalently as a different type of core. Formally, a non-empty coalition
$S\subseteq N$ antisymmetrically weakly blocks an allocation $x$ if there is
an allocation $z$ such that (1) $\\{z_{i}:i\in S\\}=S$, (2) for each $i\in S$,
$z_{i}R_{i}x_{i}$, (3) for some $j\in S$, $z_{j}P_{j}x_{j}$, and (4) for each
$i\in S$, if $z_{i}I_{i}x_{i}$ then $z_{i}=x_{i}$. Requirements (1–3) say that
coalition $S$ weakly blocks $x$. The additional requirement (4) is that if an
agent in $S$ is indifferent between her allotments at $x$ and $z$ then she
must get the very same object, i.e., $z_{i}=x_{i}$. An allocation $x$ is in
the core defined by antisymmetric weak domination if there is no coalition
that antisymmetrically weakly blocks $x$. Wako [45] proved that the set of
competitive allocations coincides with the core defined by antisymmetric weak
domination. Henceforth, we will often refer to the set of competitive
allocations as the Wako-core.
###### Lemma 1.
The strong core, the set of competitive allocations (i.e., the Wako-core), and
the core consist of individually rational allocations. Moreover, the cores are
equivalently characterised by the absence of blocking cycles in the
acceptability graph $G=(N,E)$. In other words, in the definition of each of
the three cores, it is sufficient to require no-blocking by coalitions $S$,
say $S=\\{i_{1},\ldots,i_{k}\\}$, such that for each $l=1,\ldots,k$ (mod $k$),
$z_{i_{l}}=i_{l+1}$ and $(i_{l},i_{l+1})\in E$.
###### Proof.
Individual rationality is immediate. To prove the statement for the strong
core, let $x$ be an individually rational allocation. Suppose there is a non-
empty coalition $T$ that weakly blocks $x$ through some allocation $w$. Let
$j\in T$ be such that $w_{j}P_{j}x_{j}$. Let $S\subseteq T$ be the agents that
constitute the exchange cycle, say $(i_{1},\ldots,i_{k})$, in $w$ that
involves agent $j$, i.e., without loss of generality, $j=i_{1}$. Since $w$ is
individually rational, $S=\\{i_{1},\ldots,i_{k}\\}$ weakly blocks $x$ through
the allocation $z$ defined by
$z_{i}\equiv\left\\{\begin{array}[]{ll}w_{i}&\mbox{if $i\in S$;}\\\
x_{i}&\mbox{if $i\not\in S$}\end{array}\right.$
and for each $l=1,\ldots,k$ (mod $k$), $z_{i_{l}}=i_{l+1}$ and
$(i_{l},i_{l+1})\in E$. This proves the statement for the strong core. The
statements for the core and the Wako-core follow similarly. ∎
An individually rational allocation $x$ is a maximum size allocation if for
each individually rational allocation $z$, $|\\{i\in N:x_{i}\neq
i\\}|\geq|\\{i\in N:z_{i}\neq i\\}|$. Below we provide an example to
illustrate the three cores and maximum size allocation.
###### Example 1.
Let $N=\\{1,\ldots,6\\}$ and let preferences be given by Figure 2, where each
agent’s own object and all her unacceptable objects are not displayed. For
instance, agent 1 is indifferent between objects 2 and 3, and strictly prefers
both objects to object 5.
1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---
2,3 | 1 | 2 | 3 | 2 | 1
5 | 3 | 4 | 2 | 6 |
Figure 2: Preferences
Figure 3: Acceptability graph
$x^{a}=\\{(1,3,2)\\}$
---
$x^{b}=\\{(1,2),(3,4)\\}$
$x^{c}=\\{(1,5,2),(3,4)\\}$
$x^{d}=\\{(1,3,4,2)\\}$
$x^{e}=\\{(1,5,6),(2,3,4)\\}$
Figure 4: Allocations
Figure 3 displays the induced acceptability graph. (Footnote 5: Throughout the paper, self-loops are omitted from the acceptability graphs in the examples.) Here, a thick edge denotes the most preferred object(s) and a thin edge denotes the second most preferred object (if any).
Consider the allocations defined in Figure 4. For instance, $x^{d}$ (in cycle-notation, but without self-cycles) is the allocation
$x^{d}=(x^{d}_{1},x^{d}_{2},x^{d}_{3},x^{d}_{4},x^{d}_{5},x^{d}_{6})=(3,1,4,2,5,6)$.
It can be verified that $x^{a}$ is the unique strong core allocation, $x^{a}$
and $x^{b}$ are the competitive allocations, while $x^{a}$, $x^{b}$, $x^{c}$,
and $x^{d}$ form the core. Hence, the strong core is a singleton and a proper
subset of the set of competitive allocations, while the latter set is also a
proper subset of the core. Finally, $x^{e}$ is the unique maximum size allocation and does not belong to the core. $\diamond$
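By Lemma 1, core membership can be tested by enumerating blocking cycles only, which makes a brute-force check of the claims in Example 1 feasible for small markets. A sketch (the indifference-class encoding of preferences is an assumption of this illustration):

```python
from itertools import permutations

def rank(pref_i, obj):
    """Index of the indifference class of obj in pref_i (None if unacceptable)."""
    for r, cls in enumerate(pref_i):
        if obj in cls:
            return r
    return None

def in_core(x, pref):
    """Test core membership of an individually rational allocation x by
    searching for a blocking cycle (Lemma 1): a cyclically ordered coalition
    in which every member strictly prefers the object she would receive."""
    agents = sorted(pref)
    for k in range(2, len(agents) + 1):
        for cyc in permutations(agents, k):
            gets = {cyc[l]: cyc[(l + 1) % k] for l in range(k)}
            if all(rank(pref[i], gets[i]) is not None
                   and rank(pref[i], gets[i]) < rank(pref[i], x[i]) for i in cyc):
                return False                     # cyc blocks x
    return True

# Example 1's preferences (own object appended as the least preferred acceptable one).
pref = {1: [{2, 3}, {5}, {1}], 2: [{1}, {3}, {2}], 3: [{2}, {4}, {3}],
        4: [{3}, {2}, {4}], 5: [{2}, {6}, {5}], 6: [{1}, {6}]}
x_a = {1: 3, 2: 1, 3: 2, 4: 4, 5: 5, 6: 6}      # {(1,3,2)}
x_e = {1: 5, 2: 3, 3: 4, 4: 2, 5: 6, 6: 1}      # {(1,5,6),(2,3,4)}
print(in_core(x_a, pref), in_core(x_e, pref))  # → True False
```

The enumeration visits every cyclic ordering of every coalition, which is only practical for small $n$; for $x^{e}$ the pairwise swap $(1,2)$ already blocks.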
Shapley and Scarf [41] (see also page 135, Roth and Postlewaite [36]) showed that the set of competitive allocations is non-empty and coincides with the set of allocations that are obtained through David Gale’s Top Trading Cycles algorithm, which is discussed in the next subsection. (Footnote 6: If preferences are not strict, then the Top Trading Cycles algorithm is applied to the preference profiles that can be obtained by breaking ties in all possible ways.) Roth and Postlewaite [36] showed that if preferences are strict, then there is a unique strong core allocation which coincides with the unique competitive allocation. In general, when preferences are not strict, the strong core can be empty (see, e.g., Footnote 8) or contain more than one allocation (see, e.g., Example 1). However, Wako [43] showed that the strong core is always a subset of the set of competitive allocations. (Footnote 7: Wako [44] showed that the strong core coincides with the set of competitive allocations if and only if any two competitive allocations are welfare-equivalent. Hence, whenever the set of competitive allocations is a singleton it coincides with the strong core.) Furthermore, it is easy to see that the set of competitive allocations is always a subset of the core (Shapley and Scarf [41]).
If preferences are strict, the unique competitive allocation is Pareto-efficient (because it is in the strong core) and Pareto-dominates any other allocation (Lemma 1, Roth and Postlewaite [36]); in particular, any other core allocation is Pareto-inefficient. If preferences are not strict, it is possible that each competitive allocation is Pareto-dominated by some allocation that is not competitive. (Footnote 8: Example 1 in Sotomayor [42], which is attributed to Jun Wako, is illustrative: $N=\\{1,2,3\\}$ with $2P_{1}3P_{1}1$, $1I_{2}3P_{2}2$, $2P_{3}1P_{3}3$. The set of competitive allocations consists of $x=\\{(1,2),(3)\\}$ and $x^{\prime}=\\{(1),(2,3)\\}$, which are Pareto-dominated by core allocations $\\{(1,2,3)\\}$ and $\\{(1,3,2)\\}$, respectively. Moreover, $x_{1}P_{1}x_{1}^{\prime}$ and $x_{3}^{\prime}P_{3}x_{3}$. The strong core is empty.)
Finally, competitive allocations need not be welfare-equivalent: in fact,
different agents can strictly prefer distinct competitive allocations (see,
e.g., Footnote 8). However, Wako [44] showed that all strong core allocations
are welfare-equivalent. The latter result also immediately follows from Quint
and Wako’s [35] algorithm, which is discussed in the next subsection.
### Definitions for bounded length exchanges
Motivated by kidney exchange programmes, here we consider housing markets
where the length of allowed exchange cycles in allocations is limited.
Assuming that blocking coalitions are subject to the same limitation, the core
and strong core can be adjusted straightforwardly (see also [10]). In view of
Wako’s [45] result we similarly adjust the set of competitive allocations by
using the (equivalent) Wako-core.
For a housing market $M=(N,R)$, let $k$ be an integer that indicates the
maximal allowed length of exchange cycles. An allocation is a $k$-allocation
if each exchange cycle has length at most $k$. Formally, an allocation $x$ is
a $k$-allocation if there exists a partition of $N=S_{1}\cup
S_{2}\cup\dots\cup S_{q}$ such that for each $p\in\\{1,\dots,q\\}$,
$|S_{p}|\leq k$ and $\\{x_{i}:i\in S_{p}\\}=S_{p}$. The definition of the
three cores can be adjusted accordingly as well. Specifically, the $k$-core
consists of the $k$-allocations for which there is no blocking coalition of
size at most $k$; the strong $k$-core consists of the $k$-allocations for
which there is no weakly blocking coalition of size at most $k$; the
Wako-$k$-core consists of the $k$-allocations that are not antisymmetrically
weakly dominated through a coalition of size at most $k$. Due to the
“nestedness” of the three blocking notions, it follows that the strong
$k$-core is a subset of the Wako-$k$-core, and that the Wako-$k$-core is a
subset of the $k$-core. It is also easy to verify that, similarly to the
unbounded case, for strict preferences the strong $k$-core coincides with the
Wako-$k$-core.
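Whether a given allocation is a $k$-allocation reduces to measuring its exchange cycles, as in this sketch (the dictionary encoding of an allocation is an assumption of the illustration):

```python
def is_k_allocation(x, k):
    """True iff every exchange cycle of allocation x has length at most k.
    x maps each agent to the object she receives; the cycles of a
    permutation are disjoint, so each agent is visited exactly once."""
    seen = set()
    for start in x:
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            length += 1
            i = x[i]
        if length > k:
            return False
    return True

# The allocation {(1,3,4,2)} has one exchange cycle of length 4 (plus self-cycles).
x_d = {1: 3, 2: 1, 3: 4, 4: 2, 5: 5, 6: 6}
print(is_k_allocation(x_d, 4), is_k_allocation(x_d, 3))  # → True False
```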
To keep notation as simple as possible, whenever the context is clear, we will
omit “$k$” from $k$-allocation, $k$-core, etc. and instead refer to
$k$-housing markets to invoke the above restriction on exchange cycles,
blocking coalitions, allocations, and cores.
The absence of blocking coalitions is also called stability in the literature, a term that is widely used especially for bounded length exchanges. In the case of pairwise exchanges (i.e., for $k=2$), the problem is equivalent to the so-called stable roommates problem, and to the stable marriage problem if the graph is bipartite, as introduced in [20]. For strict preferences the core, Wako-core, and strong core are all equivalent and they correspond to the set of stable matchings. For weak preferences the core and Wako-core are the same, and correspond to weakly stable matchings, whilst the strong core corresponds to strongly stable matchings. See [16] for more about these concepts.
### 2.1 Algorithms to find all strong core allocations
In this section we describe the TTC algorithm and its extension by Quint and
Wako [35] for finding strong core allocations. Our concise and standardised
descriptions provide an easy summary of the current state of the art.
Moreover, the graphs defined in the algorithms are crucial tools in Section 3
where we prove that the strong core “respects improvement.” We consider
housing markets with strict preferences and weak preferences separately. In
the first case the strong core is always a singleton (which consists of the
unique competitive allocation), while in the second case it can be empty.
#### Strict preferences
Let $M=(N,R)$ be a housing market with strict preferences. We will construct a
subgraph $G^{CP}$ of $G$ by using the Top Trading Cycles (TTC) algorithm of
David Gale [41]. The node set of $G^{CP}$ is $N$ and its directed edges
$E^{CP}=E^{C}\cup E^{P}$ are partitioned into two sets $E^{C}$ and $E^{P}$,
where $E^{C}$ will denote the edges in the TTC cycles and $E^{P}$ will denote
a particular subset of edges pointing to more preferred objects.
TTC algorithm, construction of $G^{CP}$
Set $E^{C}\equiv\emptyset$, $E^{P}\equiv\emptyset$, and $M_{1}\equiv M$. Let
$G_{1}=(N_{1},E_{1})\equiv(N,E)$ denote the acceptability graph of $M_{1}$. We
iteratively construct “shrinking” submarkets $M_{t}$ ($t=2,3,\ldots$) whose
acceptability graph will be denoted by $G_{t}=(N_{t},E_{t})$. Set $t\equiv 1$.
1. Let $E^{T}_{t}$ be the set of most preferred edges in $G_{t}$.
2. Let $c_{t}$ be a (top trading) cycle in $(N_{t},E^{T}_{t})$. Let $C_{t}$ and $E(c_{t})$ denote the node set and edge set of $c_{t}$, respectively.
3. Add the edges of $c_{t}$ to $E^{C}$, i.e., $E^{C}\equiv E^{C}\cup E(c_{t})$.
4. Let $E^{T}_{t}(\vec{C_{t}})$ denote the subset of edges of $E^{T}_{t}$ pointing to $C_{t}$ from outside $C_{t}$. Formally, $E^{T}_{t}(\vec{C_{t}})\equiv\\{(i,j)\in E^{T}_{t}:i\in N_{t}\setminus C_{t}\mbox{ and }j\in C_{t}\\}$. Add $E^{T}_{t}(\vec{C_{t}})$ to $E^{P}$, i.e., $E^{P}\equiv E^{P}\cup E^{T}_{t}(\vec{C_{t}})$.
5. If $N_{t}=C_{t}$, stop. Otherwise, let $N_{t+1}\equiv N_{t}\setminus C_{t}$, denote the submarket $M_{N_{t+1}}$ by $M_{t+1}$, and go to step 1.
When the algorithm terminates the set of cycles in $E^{C}$ is the unique
competitive allocation and hence the unique strong core allocation.
We classify the relation between any two agents through graph $G^{CP}$ as
follows. Let $i,j\in N$ with $i\neq j$. Then, exactly one of the following
situations holds:
* •
$i$ and $j$ are independent: there is no directed path from $i$ to $j$ or from
$j$ to $i$;
* •
$i$ and $j$ are cycle-members: there is a path from $i$ to $j$ that entirely consists of edges in $E^{C}$, i.e., $i$ and $j$ are in the same top trading cycle (Footnote 9: obviously, in this case there is also a path from $j$ to $i$ that consists of edges in $E^{C}$);
* •
$i$ is a predecessor of $j$ (and $j$ is a successor of $i$): there is a path from $i$ to $j$ in $G^{CP}$ using at least one edge from $E^{P}$ (Footnote 10: note that in this case $j$ was removed from the market before $i$; hence, there is no path from $i$ to $j$ using only edges in $E^{C}$ and $j$ is not a predecessor of $i$); or
* •
$j$ is a predecessor of $i$ (and $i$ is a successor of $j$): there is a path
from $j$ to $i$ in $G^{CP}$ using at least one edge from $E^{P}$.
In case $i$ is a predecessor of $j$, we define the best path from $i$ to $j$
to be the path from $i$ to $j$ in $G^{CP}$ where at each node $k\neq j$ on the
path, the path follows agent $k$’s (unique) most preferred edge in
$\\{(k,l)\in E^{CP}:\mbox{ there is a path from $l$ to $j$ using edges in
$E^{CP}$}\\}.$
Let $p^{b}(i,j)$ denote the unique best path from $i$ to $j$ in $G^{CP}$. For
each node $k\neq j$ on $p^{b}(i,j)$, if there are multiple paths from $k$ to
$j$, then $p^{b}(i,j)$ follows the edge that points to the object that is part
of the earliest top trading cycle.
#### Weak preferences
Let $M=(N,R)$ be a housing market with weak preferences. We will now describe the
efficient algorithm of Quint and Wako [35] for finding a strong core
allocation whenever there exists one. We use the simplified interpretation of
Cechlárová and Fleiner [14] and construct a subgraph $G^{SP}$ of $G$ with node
set $N$ and edge set $E^{SP}\equiv E^{S}\cup E^{P}$, which will be useful for
our later analysis.
We first recall two definitions. A strongly connected component of a directed
graph is a subgraph where there is a directed path from each node to every
other node. An absorbing set is a strongly connected component with no
outgoing edge. Note that each directed graph has at least one absorbing set.
Quint-Wako algorithm, construction of $G^{SP}$
Set $E^{S}\equiv\emptyset$, $E^{P}\equiv\emptyset$, and $M_{1}=M$. Let
$G_{1}=(N_{1},E_{1})\equiv(N,E)$ denote the acceptability graph of $M_{1}$. We
iteratively construct “shrinking” submarkets $M_{t}$ ($t=2,3,\ldots$) whose
acceptability graph will be denoted by $G_{t}=(N_{t},E_{t})$. Set $t\equiv 1$.
1. Let $E^{T}_{t}$ be the set of most preferred edges in $G_{t}$.
2. Let $S_{t}$ be an absorbing set in $(N_{t},E^{T}_{t})$. Let $N_{t}(S_{t})$ and $E^{T}_{t}(S_{t})$ denote the node set and edge set of $S_{t}$, respectively.
3. Add the edges of $S_{t}$ to $E^{S}$, i.e., $E^{S}\equiv E^{S}\cup E^{T}_{t}(S_{t})$.
4. Let $E^{T}_{t}(\vec{S_{t}})$ denote the subset of edges of $E^{T}_{t}$ pointing to $N_{t}(S_{t})$ from outside $N_{t}(S_{t})$. Formally, $E^{T}_{t}(\vec{S_{t}})\equiv\\{(i,j)\in E^{T}_{t}:i\in N_{t}\setminus N_{t}(S_{t})\mbox{ and }j\in N_{t}(S_{t})\\}$. Add $E^{T}_{t}(\vec{S_{t}})$ to $E^{P}$, i.e., $E^{P}\equiv E^{P}\cup E^{T}_{t}(\vec{S_{t}})$.
5. If $N_{t}=N_{t}(S_{t})$, stop. Otherwise, let $N_{t+1}\equiv N_{t}\setminus N_{t}(S_{t})$, denote the submarket $M_{N_{t+1}}$ by $M_{t+1}$, and go to step 1.
Quint and Wako [35] proved that there is a strong core allocation for $M$ if
and only if for each absorbing set $S_{t}$ defined in the above algorithm
there exists a cycle cover, i.e., a set of cycles covering all the nodes of
$S_{t}$. Finding a cycle cover, if one exists, can be done with the classical
Hungarian method [29] for finding a perfect matching for the corresponding
bipartite graph where the objects are on one side, the agents are on the other
side, and there is an undirected arc between an object-agent pair if the
object is among the agent’s most preferred objects (which might include her
own object). We refer to [2], [35], and [14] for further details on this
reduction.
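The whole procedure, absorbing sets plus a cycle-cover test, fits in a short sketch (the indifference-class encoding of preferences is an assumption of this illustration, and a plain augmenting-path bipartite matching stands in for the Hungarian method):

```python
def top_objects(pref_i, remaining):
    """Agent's most preferred objects among the remaining ones."""
    for cls in pref_i:
        cur = cls & remaining
        if cur:
            return cur
    return set()

def sccs(nodes, adj):
    """Strongly connected components (Kosaraju's algorithm; recursion is
    fine for small markets)."""
    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                dfs(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs(u)
    radj = {u: [] for u in nodes}
    for u in nodes:
        for v in adj[u]:
            radj[v].append(u)
    comps, done = [], set()
    for u in reversed(order):
        if u in done:
            continue
        stack, comp = [u], set()
        done.add(u)
        while stack:
            w = stack.pop()
            comp.add(w)
            for v in radj[w]:
                if v not in done:
                    done.add(v)
                    stack.append(v)
        comps.append(comp)
    return comps

def has_cycle_cover(S, tops):
    """Perfect matching between the agents of S and their most preferred
    objects in S, via augmenting paths."""
    match = {}
    def augment(a, visited):
        for o in tops[a]:
            if o not in visited:
                visited.add(o)
                if o not in match or augment(match[o], visited):
                    match[o] = a
                    return True
        return False
    return all(augment(a, set()) for a in S)

def strong_core_nonempty(pref):
    """Quint-Wako test: iteratively pick an absorbing set of the
    most-preferred-edge graph and check that it admits a cycle cover."""
    remaining = set(pref)
    while remaining:
        tops = {i: top_objects(pref[i], remaining) for i in remaining}
        comps = sccs(remaining, tops)
        absorbing = next(C for C in comps
                         if all(j in C for i in C for j in tops[i]))
        if not has_cycle_cover(absorbing, tops):
            return False
        remaining -= absorbing
    return True

# Example 1's market has strong core {x^a}; Footnote 8's market has an empty one.
ex1 = {1: [{2, 3}, {5}, {1}], 2: [{1}, {3}, {2}], 3: [{2}, {4}, {3}],
       4: [{3}, {2}, {4}], 5: [{2}, {6}, {5}], 6: [{1}, {6}]}
fn8 = {1: [{2}, {3}, {1}], 2: [{1, 3}, {2}], 3: [{2}, {1}, {3}]}
print(strong_core_nonempty(ex1), strong_core_nonempty(fn8))  # → True False
```

In the second market the single absorbing set is $\\{1,2,3\\}$, where object 2 is demanded by both agents 1 and 3 while agent 2 can absorb only one of objects 1 and 3, so no cycle cover exists.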
###### Remark 2.
If for each absorbing set $S_{t}$ defined in the above algorithm there exists
a cycle cover, then the set of cycle covers (one cycle cover for each
absorbing set) constitutes a strong core allocation. Conversely, as shown in
the proof of Theorem 5.5 in Quint and Wako [35], each strong core allocation
can be written as a set of cycle covers (one for each absorbing set $S_{t}$).
Hence, if the strong core is non-empty, all its allocations can be obtained by
selecting all possible cycle covers in the algorithm. $\diamond$
###### Remark 3.
In the Quint-Wako algorithm, each agent obtains the same welfare at any two
cycle covers in which she is involved (because the agent is indifferent
between any two of her outgoing edges in an absorbing set). Together with
Remark 2, this immediately proves Theorem 2(2) in Wako [44], which states that
all strong core allocations are welfare-equivalent. $\diamond$
We classify the relation between any two agents through graph $G^{SP}$ as
follows. Let $i,j\in N$ with $i\neq j$. Then, exactly one of the following
situations holds:
* •
$i$ and $j$ are independent: there is no directed path from $i$ to $j$ or from
$j$ to $i$;
* •
$i$ and $j$ are absorbing set members: there is a path from $i$ to $j$ that entirely consists of edges in $E^{S}$, i.e., $i$ and $j$ are in the same absorbing set (Footnote 11: obviously, in this case there is also a path from $j$ to $i$ that consists of edges in $E^{S}$);
* •
$i$ is a predecessor of $j$ (and $j$ is a successor of $i$): there is a path from $i$ to $j$ in $G^{SP}$ using at least one edge from $E^{P}$ (Footnote 12: note that in this case $j$ was removed from the market before $i$; hence, there is no path from $i$ to $j$ using only edges in $E^{S}$ and $j$ is not a predecessor of $i$); or
* •
$j$ is a predecessor of $i$ (and $i$ is a successor of $j$): there is a path
from $j$ to $i$ in $G^{SP}$ using at least one edge from $E^{P}$.
In case $i$ is a predecessor of $j$, a path from $i$ to $j$ in $G^{SP}$ is
said to be a best path from $i$ to $j$ if at each node $k\neq j$ on the path,
the path follows one of agent $k$’s most preferred edges in
$\\{(k,l)\in E^{SP}:\mbox{ there is a path from $l$ to $j$ using edges in
$E^{SP}$}\\}.$
Let $P^{b}(i,j)$ denote the set of best paths from $i$ to $j$ in $G^{SP}$.
## 3 Respecting Improvement
Let $R,\tilde{R}$ be two preference profiles over objects $N$. Let $i\in N$.
We say that $\tilde{R}$ is an improvement for $i$ with respect to $R$ if
* •
$\tilde{R}_{i}\,=\,R_{i}$;
* •
for all $j\neq i$ and all $k$ with $k\,R_{j}\,j$, $i\,I_{j}\,k\Longrightarrow i\,\tilde{R}_{j}\,k$ and $i\,P_{j}\,k\Longrightarrow i\,\tilde{P}_{j}\,k$; and
* •
for all $j\neq i$ and all $k,l\neq i$ with $k,l\,R_{j}\,j$, $k\,R_{j}\,l\Longleftrightarrow k\,\tilde{R}_{j}\,l$.
In other words, (1) only agents different from $i$ have possibly different preferences at $\tilde{R}$ and ${R}$, (2) for each agent $j\neq i$, object $i$ can become more preferred than some acceptable objects, and (3) for each agent $j\neq i$ and for each pair of acceptable objects different from $i$, preferences remain unchanged.
As a simple example with $N=\\{1,2,3,4,5\\}$, let $R$ be any preference
profile such that $4\,P_{5}\,1\,I_{5}\,2\,I_{5}\,3\,P_{5}5$. Let $\tilde{R}$
be the preference profile where agents $1,2,3$, and $4$ have the same
preferences as at $R$ and let $\tilde{R}_{5}$ be defined by
$1\,I_{5}4\,P_{5}\,2\,I_{5}\,3\,P_{5}5$. Then, $\tilde{R}$ is an improvement
for agent $1$ with respect to $R$.
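The three conditions can be checked mechanically. In the sketch below each $R_j$ is encoded as a list of indifference classes over acceptable objects, best first; this encoding, and the trivial preferences assigned to agents 1-4 in the sample profiles, are assumptions of the illustration:

```python
def rank(pref_j, obj):
    """Index of the indifference class containing obj (None if unacceptable)."""
    for r, cls in enumerate(pref_j):
        if obj in cls:
            return r
    return None

def is_improvement(R, Rt, i):
    """Check whether profile Rt is an improvement for agent i w.r.t. R:
    (1) i's preferences are unchanged, (2) object i only moves up in the
    other agents' preferences, and (3) the relative order of the other
    acceptable objects is unchanged."""
    if R[i] != Rt[i]:                                    # condition (1)
        return False
    for j in R:
        if j == i:
            continue
        acc = set().union(*R[j])                         # j's acceptable objects at R
        ri, rti = rank(R[j], i), rank(Rt[j], i)
        for k in acc:                                    # condition (2)
            rk, rtk = rank(R[j], k), rank(Rt[j], k)
            if ri is not None and rtk is not None:
                if ri == rk and not (rti is not None and rti <= rtk):
                    return False
                if ri < rk and not (rti is not None and rti < rtk):
                    return False
        others = [k for k in acc if k != i]              # condition (3)
        for k in others:
            if rank(Rt[j], k) is None:                   # k must stay acceptable
                return False
        for k in others:
            for l in others:
                if ((rank(R[j], k) <= rank(R[j], l)) !=
                        (rank(Rt[j], k) <= rank(Rt[j], l))):
                    return False
    return True

# The example from the text: only agent 5's preferences change, and
# object 1 moves up into a tie with object 4.
R = {j: [{j}] for j in (1, 2, 3, 4)}
R[5] = [{4}, {1, 2, 3}, {5}]
Rt = {j: [{j}] for j in (1, 2, 3, 4)}
Rt[5] = [{1, 4}, {2, 3}, {5}]
print(is_improvement(R, Rt, 1), is_improvement(R, Rt, 2))  # → True False
```

The second call fails because object 2, which was tied with object 1 in $R_5$, falls strictly below object 1 in $\tilde{R}_5$, violating condition (2) for agent 2.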
### 3.1 Strict preferences
For each profile of strict preferences $R$, let $\tau(R)$ denote the unique
competitive allocation (or strong core allocation). We show that $\tau$
respects improvement on the domain of strict preferences:
###### Theorem 1.
For each $i\in N$ and each pair of profiles of strict preferences
$R,\tilde{R}$ such that $\tilde{R}$ is an improvement for $i$ with respect to
$R$, $\tau_{i}(\tilde{R})\,R_{i}\,\tau_{i}(R)$.
###### Proof.
Let $x=\tau(R)$ and $\tilde{x}=\tau(\tilde{R})$. We can assume that there is a
unique agent $j\neq i$ with $\tilde{R}_{j}\neq R_{j}$ and prove that
$\tilde{x}_{i}R_{i}x_{i}$. (If there is more than one such agent, we
repeatedly apply the one-agent result to obtain the result.) We can also
assume that $i\tilde{R}_{j}x_{j}$. (Otherwise $x_{j}\tilde{P}_{j}i$ and hence,
from the TTC algorithm, $\tilde{x}=x$.)
We distinguish among three cases, depending on the relation between agents $i$
and $j$ in the graph $G^{CP}$ for the market $(N,R)$, i.e., the graph that is
obtained in the TTC algorithm for $x$.
(Three schematic panels: Case I, Case II, and Case III.)
Figure 5: Graph $G^{CP}$ (simplified) in the proof of Theorem 1. Each ellipse
represents a top trading cycle.
Case I: $i$ and $j$ are independent or $j$ is a predecessor of $i$. Let $F(i)$ be the set of followers of $i$ in the graph $G^{CP}$, where we use the convention $i\in F(i)$. (Footnote 13: We avoid the usually equivalent nomenclature “successor”, as the latter term already has a particular, and different, meaning.) Then, $j\not\in F(i)$. In the TTC algorithm for $R$, the agents
in $F(i)$ form, among themselves, trading cycles. Since for each agent $k\in
F(i)$, $\tilde{R}_{k}=R_{k}$, it follows that the trading cycles formed by
$F(i)$ in the TTC algorithm for $R$ are also formed in the TTC algorithm for
$\tilde{R}$. Hence, $\tilde{x}_{i}=x_{i}$.
Case II: $i$ and $j$ are cycle-members. Let $C$ be the cycle in the graph
$G^{CP}$ that contains $i$ and $j$. Let $p(i,j)$ be the unique path from $i$
to $j$ in the graph $G^{CP}$. Obviously, $p(i,j)$ is part of $C$. Let $N(i,j)$
be the nodes on $p(i,j)$. (So, $i,j\in N(i,j)$.) Let $F^{*}(N(i,j))$ be the
followers outside of $N(i,j)$ that can be reached by some path in $G^{CP}$
that (1) starts from a node in $N(i,j)$ and (2) does not contain edges in $C$.
Then, the nodes in $F^{*}(N(i,j))$ constitute trading cycles at $x$. Moreover,
the nodes in $F^{*}(N(i,j))$ are neither predecessors of $j$ nor cycle-members
with $j$. Hence, by the same arguments as in Case I, at $\tilde{x}$ the nodes
in $F^{*}(N(i,j))$ constitute the same trading cycles as at $x$. Since
$i\tilde{R}_{j}x_{j}$, the trading cycle of agent $i$ at $\tilde{x}$ is the
cycle that consists of the path $p(i,j)$ and the edge $(j,i)$. Since $p(i,j)$
is part of $C$, it follows that $\tilde{x}_{i}=x_{i}$.
Case III: $i$ is a predecessor of $j$. Since for each $k\neq j$,
$\tilde{R}_{k}=R_{k}$, $i\tilde{R}_{j}x_{j}$, and $\tilde{R}_{j}$ is obtained
from $R_{j}$ by shifting $i$ up, it follows that at some step in the TTC
algorithm for $\tilde{R}$, agent $j$ will start pointing to agent $i$ and will
keep doing so if and as long as agent $i$ is present. Next, consider the
predecessor of $j$ on the path $p^{b}(i,j)$ (in $G^{CP}$ for the market
$(N,R)$), say agent $l$. Let $k\in N$ with $kP_{l}j$. By definition of
$p^{b}(i,j)$, $k$ and $j$ are not cycle-members nor is $k$ a predecessor of
$j$. From the same arguments as in Case I, the nodes in $F(k)$ (the followers
of $k$, where $k\in F(k)$) form among themselves the same trading cycles at
$x$ and $\tilde{x}$. Hence, at some step in the TTC algorithm for $\tilde{R}$,
agent $l$ will start pointing to agent $j$ and will keep doing so if and as
long as agent $j$ is present. We can repeat the same arguments until we
conclude that each node in the cycle formed by $p^{b}(i,j)$ and the edge
$(j,i)$ will, at some step, start pointing to its follower and will keep doing
so if and as long as the follower is present. Thus, the cycle is a trading
cycle at $\tilde{x}$. Let $i^{\prime}$ be the follower of $i$ in this cycle.
Note that in the graph $G^{CP}$, $(i,i^{\prime})\in E^{C}$ or
$(i,i^{\prime})\in E^{P}$. If $(i,i^{\prime})\in E^{C}$, then
$i^{\prime}=x_{i}$, in which case $\tilde{x}_{i}=i^{\prime}=x_{i}$. If
$(i,i^{\prime})\in E^{P}$, then by definition of $E^{P}$,
$\tilde{x}_{i}=i^{\prime}P_{i}x_{i}$. ∎
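For readers who wish to replay the cycle-tracing arguments above, the TTC algorithm for strict preferences can be sketched as follows. This is an illustrative Python implementation of our own, not part of the formal development; we encode objects by their owners, so agent $i$ owns object $i$, and `prefs[i]` lists agent $i$'s acceptable objects, best first, with her own object included.

```python
def ttc(prefs):
    """Top Trading Cycles for strict preferences.

    prefs[i] lists agent i's acceptable objects, best first; agent i
    owns object i, which appears in prefs[i] (individual rationality).
    Returns the allocation tau(prefs) as a dict agent -> object."""
    remaining = set(prefs)
    assignment = {}
    while remaining:
        # Each remaining agent points to (the owner of) her best remaining object.
        point = {i: next(o for o in prefs[i] if o in remaining) for i in remaining}
        # Follow pointers from an arbitrary agent until a node repeats;
        # since every agent points somewhere, a cycle must exist.
        i = next(iter(remaining))
        seen = []
        while i not in seen:
            seen.append(i)
            i = point[i]
        for j in seen[seen.index(i):]:   # the top trading cycle found
            assignment[j] = point[j]
            remaining.discard(j)
    return assignment
```

For instance, in the three-agent market where each agent most prefers the next agent's object, `ttc` assigns every agent her top choice in a single trading cycle.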
### 3.2 Weak preferences
For each profile of preferences $R$, let $\mathcal{T}(R)$ denote the set of
competitive allocations.
###### Proposition 1.
For each $i\in N$ and each pair of profiles of preferences $R,\tilde{R}$ such
that $\tilde{R}$ is an improvement for $i$ with respect to $R$,
there is $\tilde{x}\in\mathcal{T}(\tilde{R})$ such that for each
$x\in\mathcal{T}(R)$, $\tilde{x}_{i}R_{i}x_{i}$; and
there is $x\in\mathcal{T}(R)$ such that for each
$\tilde{x}\in\mathcal{T}(\tilde{R})$, $\tilde{x}_{i}R_{i}x_{i}$.
In other words, if agent $i$ compares her best allotment at the allocations in
$\mathcal{T}(R)$ with her best allotment at the allocations in
$\mathcal{T}(\tilde{R})$, then she prefers the latter. Similarly, if agent $i$
compares her worst allotment at the allocations in $\mathcal{T}(R)$ with her
worst allotment at the allocations in $\mathcal{T}(\tilde{R})$, then she
prefers again the latter. Note that in general there is no competitive
allocation where each agent receives her most preferred allotment (among those
that are obtained at competitive allocations), i.e., agents do not unanimously
agree on the “best” competitive allocation (see, e.g., agents 3 and 4 and
competitive allocations $x^{a}$ and $x^{b}$ in Example 1). Nonetheless,
Proposition 1 shows that any individual agent that is systematically
optimistic or pessimistic about the specific competitive allocation that is
chosen subscribes to the thesis that “the competitive mechanism” would respect
any of her potential improvements.
###### Proof.
Let $R^{1},R^{2},\ldots,R^{m}$ be the profiles of strict preferences that are
obtained from $R$ by breaking ties between acceptable objects in each possible
way. Similarly, let $\tilde{R}^{1},\tilde{R}^{2},\ldots,\tilde{R}^{p}$ be the
profiles of strict preferences that are obtained from $\tilde{R}$ by breaking
ties between acceptable objects in each possible way, where possibly $p\neq
m$. Then, from Shapley and Scarf [41] (see also page 306 in Wako [44]),
$\mathcal{T}(R)=\\{\tau(R^{1}),\tau(R^{2}),\ldots,\tau(R^{m})\\}$ and
$\mathcal{T}(\tilde{R})=\\{\tau(\tilde{R}^{1}),\tau(\tilde{R}^{2}),\ldots,\tau(\tilde{R}^{p})\\}$.
It is not difficult to see that for each $R^{k}$, there is some
$\tilde{R}^{l}$ such that $\tilde{R}^{l}$ is an improvement for $i$ with
respect to $R^{k}$. Similarly, for each $\tilde{R}^{l}$, there is some $R^{k}$
such that $\tilde{R}^{l}$ is an improvement for $i$ with respect to $R^{k}$.
We can assume, without loss of generality, that for each $k=1,\ldots,m$,
$\tau_{i}(R^{1})R_{i}\tau_{i}(R^{k})$. Let $\tilde{R}^{l}$ be an improvement
for $i$ with respect to $R^{1}$. Then, from Theorem 1,
$\tau_{i}(\tilde{R}^{l})R^{1}_{i}\tau_{i}({R}^{1})$. Since $R^{1}_{i}=R_{i}$,
$\tau_{i}(\tilde{R}^{l})R_{i}\tau_{i}({R}^{1})$. This proves the first
statement.
We can also assume, without loss of generality, that for each $l=1,\ldots,p$,
$\tau_{i}(\tilde{R}^{l})R_{i}\tau_{i}(\tilde{R}^{p})$. Let $R^{k}$ be such
that $\tilde{R}^{p}$ is an improvement for $i$ with respect to $R^{k}$. Then,
from Theorem 1, $\tau_{i}(\tilde{R}^{p})R^{k}_{i}\tau_{i}({R}^{k})$. Since
$R^{k}_{i}=R_{i}$, $\tau_{i}(\tilde{R}^{p})R_{i}\tau_{i}({R}^{k})$. This
proves the second statement. ∎
The following example illustrates Proposition 1.
###### Example 2.
Let $N=\\{1,2,3,4,5\\}$ and let preferences $R$ and $\tilde{R}$ be given by
Figures 6 and 7, where only acceptable objects are displayed. Note that
$\tilde{R}$ is an improvement for agent $3$ with respect to $R$.
1 | 2 | 3 | 4 | 5
---|---|---|---|---
4 | 1 | 4 | 1 | 2
2 | 3,5 | 2 | 4 | 5
1 | 2 | 3 | |
Figure 6: $R$
1 | 2 | 3 | 4 | 5
---|---|---|---|---
4 | 3 | 4 | 1,3 | 2
2 | 1 | 2 | 4 | 5
1 | 5 | 3 | |
| 2 | | |
Figure 7: $\tilde{R}$
$x^{a}=\\{(1,4),(2,5)\\}$
---
$x^{b}=\\{(1,4),(2,3)\\}$
$x^{c}=\\{(1,2),(3,4)\\}$
Figure 8: Competitive allocations
By applying the TTC algorithm to the strict preferences obtained by breaking
all ties in all possible ways, we compute the competitive allocations (Figure
8). In the case of $R$, the two competitive allocations are $x^{a}$ and
$x^{b}$, and in the case of $\tilde{R}$, the two competitive allocations are
$x^{b}$ and $x^{c}$. Hence, both of agent 3’s best and worst competitive
allotment strictly improve, and at both $R$ and $\tilde{R}$ her best allotment
is different from her worst allotment. $\diamond$
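The tie-breaking construction from the proof of Proposition 1 can be replayed mechanically: enumerate all strict profiles obtained by permuting each indifference class, run TTC on each, and collect the resulting allocations. The following illustrative Python sketch (the function names and the encoding of indifference classes as tuples are ours; a minimal TTC routine is included for self-containment) reproduces the allocations $x^{a}$, $x^{b}$, and $x^{c}$ of this example.

```python
from itertools import permutations, product

def ttc(prefs):
    """Top Trading Cycles for strict preferences; agent i owns object i."""
    remaining, assignment = set(prefs), {}
    while remaining:
        point = {i: next(o for o in prefs[i] if o in remaining) for i in remaining}
        i, seen = next(iter(remaining)), []
        while i not in seen:
            seen.append(i)
            i = point[i]
        for j in seen[seen.index(i):]:          # one top trading cycle
            assignment[j] = point[j]
            remaining.discard(j)
    return assignment

def competitive_allocations(weak_prefs):
    """Competitive allocations of a housing market with weak preferences.

    weak_prefs[i] is a list of indifference classes (tuples), best first,
    containing exactly agent i's acceptable objects (own object included).
    By Shapley-Scarf/Wako, T(R) = { tau(R') : R' breaks all ties of R }."""
    agents = list(weak_prefs)
    per_agent = []
    for i in agents:
        # all strict orders obtained by permuting each indifference class
        orders = [sum(p, ()) for p in
                  product(*(permutations(c) for c in weak_prefs[i]))]
        per_agent.append(orders)
    return {frozenset(ttc(dict(zip(agents, combo))).items())
            for combo in product(*per_agent)}
```

Running it on the two profiles of this example yields exactly the two-element sets $\{x^{a},x^{b}\}$ and $\{x^{b},x^{c}\}$.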
In Example 2, for the “improving agent” (agent 3), each competitive allotment
in the new market is weakly preferred to each competitive allotment in the
initial market. However, it is easy to construct housing markets without this
feature.
Next, we turn to the strong core, which in the case of weak preferences is a
(possibly proper) subset of the set of competitive allocations. Formally, for
a profile of preferences $R$, let $\mathcal{SC}(R)$ denote the (possibly
empty) strong core of $R$. Since strong core allocations are welfare-
equivalent (Remark 3), we can show that the correspondence $\mathcal{SC}$
conditionally respects improvement:
###### Theorem 2.
Let $i\in N$. Let $R,\tilde{R}$ be a pair of profiles of preferences such that
$\tilde{R}$ is an improvement for $i$ with respect to $R$. If
$\mathcal{SC}(R),\mathcal{SC}(\tilde{R})\neq\emptyset$, then for each
$\tilde{x}\in\mathcal{SC}(\tilde{R})$ and for each $x\in\mathcal{SC}(R)$,
$\tilde{x}_{i}R_{i}x_{i}$.
###### Proof.
Let $x\in\mathcal{SC}(R)$. From Remark 3 it follows that it is sufficient to
show that there exists $\tilde{x}\in\mathcal{SC}(\tilde{R})$ with
$\tilde{x}_{i}R_{i}x_{i}$. We can assume that there is a unique agent $j\neq
i$ with $\tilde{R}_{j}\neq R_{j}$. (If there is more than one such agent, we
repeatedly apply the one-agent result to obtain the result.) We can also
assume that $i\tilde{R}_{j}x_{j}$. (Otherwise $x_{j}\tilde{P}_{j}i$ and hence,
from the Quint-Wako algorithm, $x\in\mathcal{SC}(R)=\mathcal{SC}(\tilde{R})$.)
We distinguish among three cases, depending on the relation between agents $i$
and $j$ in the graph $G^{SP}$ for the market $(N,R)$, i.e., the graph that is
generated in the Quint-Wako algorithm to obtain $x$.
Case I: $i$ and $j$ are independent or $j$ is a predecessor of $i$. Let $F(i)$
be the followers of $i$ in the graph $G^{SP}$, where we use the convention
$i\in F(i)$. Then, $j\not\in F(i)$. In the Quint-Wako algorithm for $R$, the
nodes in $F(i)$ are exactly the nodes of a collection of absorbing sets. Since
for each agent $k\in F(i)$, $\tilde{R}_{k}=R_{k}$, it follows that in the
Quint-Wako algorithm for $\tilde{R}$ the nodes in $F(i)$ are exactly the nodes
of the same collection of absorbing sets. Since
$\mathcal{SC}(\tilde{R})\neq\emptyset$, it follows from Remark 2 that there
exists $\tilde{x}\in\mathcal{SC}(\tilde{R})$ such that for each agent $k\in
F(i)$, $\tilde{x}_{k}=x_{k}$. In particular, $\tilde{x}_{i}=x_{i}$.
Case II: $i$ and $j$ are absorbing set members. Let $S_{t}$ be the absorbing
set that contains $i$ and $j$ in the Quint-Wako algorithm for $R$. Let $(i,l)$
be an edge in $S_{t}$, where possibly $l=i$. Let $F^{*}(N_{t}(S_{t}))$ be the
followers outside of $N_{t}(S_{t})$ that can be reached by some path in
$G^{SP}$ that starts from a node in $N_{t}(S_{t})$. Then, the nodes in
$F^{*}(N_{t}(S_{t}))$ are exactly the nodes of a collection of absorbing sets
in the Quint-Wako algorithm for $R$. Moreover, the nodes in
$F^{*}(N_{t}(S_{t}))$ are neither predecessors of $j$ nor absorbing set
members with $j$. Hence, by the same arguments as in Case I, in the Quint-Wako
algorithm for $\tilde{R}$ the nodes in $F^{*}(N_{t}(S_{t}))$ are again the
nodes of the same collection of absorbing sets. Therefore, when the Quint-Wako
algorithm is applied to $\tilde{R}$, the absorbing set that contains $i$ will
again contain $l$. Since $\mathcal{SC}(\tilde{R})\neq\emptyset$, at each
$\tilde{x}\in\mathcal{SC}(\tilde{R})$, agent $i$ will receive an object
$\tilde{x}_{i}$ such that $\tilde{x}_{i}\,I_{i}l$. Since also
${x}_{i}\,I_{i}l$, we obtain $\tilde{x}_{i}\,I_{i}{x}_{i}$.
Case III: $i$ is a predecessor of $j$. Since for each $k\neq j$,
$\tilde{R}_{k}=R_{k}$, $i\tilde{R}_{j}x_{j}$, and $\tilde{R}_{j}$ is obtained
from $R_{j}$ by shifting $i$ up, it follows that at some step in the Quint-
Wako algorithm for $\tilde{R}$, agent $j$ will start pointing to agent $i$ and
will keep doing so if and as long as agent $i$ is present. Next, consider the
predecessor of $j$ on a best path $p^{b}(i,j)\in P^{b}(i,j)$ (in $G^{SP}$ for
the market $(N,R)$), say agent $l$. Let $k\in N$ with $kP_{l}j$. By definition
of $p^{b}(i,j)$, $k$ and $j$ are not absorbing set members nor is $k$ a
predecessor of $j$. From the same arguments as in Case I, the nodes in $F(k)$
(the followers of $k$, where $k\in F(k)$) form among themselves the same
absorbing sets in the Quint-Wako algorithm for both $R$ and $\tilde{R}$.
Hence, during the Quint-Wako algorithm for $\tilde{R}$, at some step agent $l$
will start pointing to agent $j$ and will keep doing so if and as long as
agent $j$ is present. We can repeat the same arguments until we conclude that
each node in the cycle formed by $p^{b}(i,j)$ and the edge $(j,i)$ will, at
some step, start pointing to its follower and will keep doing so if and as
long as the follower is present. Hence, at some step of the algorithm the
cycle formed by $p^{b}(i,j)$ and the edge $(j,i)$ is part of an absorbing set.
Let $i^{b}$ denote the follower of agent $i$ in path $p^{b}(i,j)$. Since
$\mathcal{SC}(\tilde{R})\neq\emptyset$, at each
$\tilde{x}\in\mathcal{SC}(\tilde{R})$, agent $i$ will receive an object
$\tilde{x}_{i}$ such that $\tilde{x}_{i}\,I_{i}i^{b}$. Note that in the graph
$G^{SP}$, $(i,i^{b})\in E^{S}$ or $(i,i^{b})\in E^{P}$. If $(i,i^{b})\in
E^{S}$, then $i^{b}I_{i}x_{i}$, in which case $\tilde{x}_{i}I_{i}x_{i}$. If
$(i,i^{b})\in E^{P}$, then by definition of $E^{P}$, $i^{b}R_{i}x_{i}$, in
which case $\tilde{x}_{i}R_{i}x_{i}$. ∎
###### Corollary 1.
For each $i\in N$ and each pair of profiles of strict preferences
$R,\tilde{R}$ such that $\mathcal{SC}(R),\mathcal{SC}(\tilde{R})\neq\emptyset$
and $\tilde{R}$ is an improvement for $i$ with respect to $R$,
there is $\tilde{x}\in\mathcal{SC}(\tilde{R})$ such that for each
$x\in\mathcal{SC}(R)$, $\tilde{x}_{i}R_{i}x_{i}$; and
there is $x\in\mathcal{SC}(R)$ such that for each
$\tilde{x}\in\mathcal{SC}(\tilde{R})$, $\tilde{x}_{i}R_{i}x_{i}$.
### 3.3 Bounded length exchanges
In this subsection we provide several examples to demonstrate the possible
violations of the respecting improvement property (or variants/extensions of
the property) in the setting of bounded length exchanges.
#### Pairwise exchanges
As mentioned in Section 1, the maximisation of the number of pairwise
exchanges does not respect improvement. Example 3 below proves this formally.
A consequence is that the priority mechanisms studied by Roth et al. [38] need
not be donor-monotonic if agents’ preferences can be non-dichotomous.
###### Example 3.
Let $N=\\{1,\ldots,4\\}$ and let preferences $R$ be given by Figure 9 and the
new preferences $\tilde{R}$ by Figure 10, where the only improvement is that
agent 1 becomes acceptable for agent 3.
1 | 2 | 3 | 4
---|---|---|---
2 | 1 | 3 | 2
3 | 4 | | 4
1 | 2 | |
Figure 9: $R$
1 | 2 | 3 | 4
---|---|---|---
2 | 1 | 1 | 2
3 | 4 | 3 | 4
1 | 2 | |
Figure 10: $\tilde{R}$
Figure 11: Acceptability graph
Initially, at $R$, there are two ways to maximise the number of pairwise
exchanges, namely by picking either of the two-cycles $(1,2)$ and $(2,4)$.
Assume, without loss of generality, that $(1,2)$ is selected. (In case $(2,4)$
is selected, similar arguments can be employed.) Suppose the discontinuous
edge (in Figure 11) is included so that agent 1 “improves” and we obtain
$\tilde{R}$. Then, the unique way to maximise the number of pairwise exchanges
is obtained by picking the two two-cycles $(1,3)$ and $(2,4)$, which means that
agent 1 is strictly worse off than in the initial situation. $\diamond$
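The computation behind this example can be verified by brute force: list the mutually acceptable pairs and enumerate all maximum-cardinality sets of disjoint pairs. An illustrative Python sketch of our own (the dictionaries in the test encode the acceptability relations of this example, mapping each agent to the set of objects she finds acceptable other than her own):

```python
from itertools import combinations

def max_pairwise_matchings(acceptable):
    """All maximum-cardinality sets of disjoint, mutually acceptable pairs.

    acceptable[i]: the objects agent i finds acceptable (own object omitted).
    A pair (i, j) is usable iff each agent accepts the other's object."""
    agents = sorted(acceptable)
    edges = [(i, j) for i, j in combinations(agents, 2)
             if j in acceptable[i] and i in acceptable[j]]
    # try sizes from largest to smallest; return all matchings of the first
    # size for which a set of pairwise-disjoint edges exists
    for r in range(len(edges), 0, -1):
        matchings = [set(m) for m in combinations(edges, r)
                     if len({a for e in m for a in e}) == 2 * r]
        if matchings:
            return matchings
    return [set()]
```

At $R$ the maximum is one exchange, attained by $(1,2)$ or $(2,4)$; at $\tilde{R}$ the unique maximum is $\{(1,3),(2,4)\}$, in line with the example.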
We now illustrate that the respecting improvement property can be violated in
a weaker sense: the improving agent’s best allotment remains the same, but a
worse allotment is created for her.
###### Example 4.
Let $N=\\{1,\ldots,4\\}$ and let preferences $R$ be given by Figure 12 and the
new preferences $\tilde{R}$ by Figure 13, where the only improvement is that
agent 1 becomes acceptable for agent 3.
1 | 2 | 3 | 4
---|---|---|---
2 | 4 | 4 | 3
3 | 1 | 3 | 2
1 | 2 | | 4
Figure 12: $R$
1 | 2 | 3 | 4
---|---|---|---
2 | 4 | 1 | 3
3 | 1 | 4 | 2
1 | 2 | 3 | 4
Figure 13: $\tilde{R}$
Figure 14: Acceptability graph
Initially, at $R$, the unique (strong) core allocation is
$x^{a}=\\{(1,2),(3,4)\\}$. Suppose the discontinuous edge (in Figure 14) is
included so that agent 1 “improves” and we obtain $\tilde{R}$. Then, another
(strong) core allocation is created, $x^{b}=\\{(1,3),(2,4)\\}$, which is
strictly worse for agent 1. $\diamond$
The following example illustrates, for pairwise exchanges with ties, a
violation of the respecting improvement property in terms of the improving
agent’s best allotment, for both the core and the Wako-core.
###### Example 5.
Let $N=\\{1,\ldots,4\\}$ and let preferences $R$ be given by Figure 15. In the
new preferences $\tilde{R}$, given by Figure 16, agent 4 makes an improvement and
becomes acceptable for agent 1.
1 | 2 | 3 | 4
---|---|---|---
3 | 4 | 1,4 | 1
| | | 3
| | | 4
Figure 15: $R$
1 | 2 | 3 | 4
---|---|---|---
3 | 4 | 1,4 | 1
4 | | | 3
| | | 4
Figure 16: $\tilde{R}$
Figure 17: Acceptability graph
For the original preferences $R$ there exist two (Wako-)core allocations
$x^{a}=\\{(3,4)\\}$ and $x^{b}=\\{(1,3),(2,4)\\}$, i.e.,
$\mathcal{C}(R)=\mathcal{WC}(R)=\\{x^{a},x^{b}\\}$. The best allotment for
agent 4 is object 3, i.e., allocation $x^{a}$ would be chosen.
For the preferences $\tilde{R}$, the newly formed cycle $(1,4)$ blocks
allocation $x^{a}$, while $(1,3)$ blocks any allocation containing $(1,4)$.
Then $\mathcal{C}(\tilde{R})=\mathcal{WC}(\tilde{R})=\\{x^{b}\\}$, hence the
improvement is not respected. $\diamond$
#### Three-way exchanges
The following example exhibits three housing markets where for each housing
market the three cores coincide (and are non-empty). Subsequently, we will use
the example to show that the three cores do not respect improvement when the
maximal allowed length of exchange cycles is 3.
For a housing market $(N,R)$, let, with a slight abuse of notation,
$\mathcal{SC}(R)$, $\mathcal{WC}(R)$, and $\mathcal{C}(R)$ denote the
(possibly empty) strong core, Wako-core, and core of $R$, respectively.
###### Example 6.
Throughout the example we focus on the core. However, since all blocking
arguments can be replaced by weak blocking arguments, all statements also hold
for the strong core, and hence also for the Wako-core. Let
$N=\\{1,\ldots,10\\}$ be the set of agents. We consider three different
housing markets that only differ in preferences. First, consider the housing
market $(N,R)$, or simply $R$ for short, with the following “cyclic” strict
preferences (where unacceptable objects are not displayed):
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---
2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 1
10 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Since only directly neighbouring objects (and one’s own object) are
acceptable, it follows that the only exchange cycles where each agent is
assigned an acceptable object are the 10 self-cycles and the 10 two-cycles
$(i,i+1)$ (mod 10) where agents $i$ and $i+1$ swap their objects. (So, the
core coincides with the set of stable matchings of the corresponding “roommate
problem” of Gale and Shapley, 1962.) The core $\mathcal{C}(R)=\\{x^{a},x^{b}\\}$
consists of the following two allocations:
$\displaystyle x^{a}$ $\displaystyle=$
$\displaystyle\\{(1,2),(3,4),(5,6),(7,8),(9,10)\\}\mbox{ and}$ $\displaystyle
x^{b}$ $\displaystyle=$ $\displaystyle\\{(10,1),(2,3),(4,5),(6,7),(8,9)\\}.$
Next, we create an extended housing market $R^{b}$ by inserting one three-
cycle into $R$.
Preferences $R^{b}$ are provided in Figure 18, where the changes with respect
to $R$ are bold-faced, and the acceptability graph is depicted in Figure 19.
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---
**4** | 3 | 4 | 5 | 6 | 7 | 8 | **1** | 10 | 1
2 | 1 | 2 | **8** | 4 | 5 | 6 | 9 | 8 | 9
10 | 2 | 3 | 3 | 5 | 6 | 7 | 7 | 9 | 10
1 | | | 4 | | | | 8 | |
Figure 18: Preferences $R^{b}$
Figure 19: Acceptability graph for $R^{b}$
Apart from the earlier mentioned self-cycles and two-cycles, the only
additional exchange cycle with only acceptable objects in $R^{b}$ is
$c^{b}=(1,4,8)$. Allocation $x^{b}$ is in the core of $R^{b}$ because $c^{b}$
does not block $x^{b}$: agent $4$ obtains object $8$ in $c^{b}$, which she
finds strictly worse than her allotment at $x^{b}$. In fact, $x^{b}$ is
the unique core allocation of $R^{b}$. To see this, note first that $x^{a}$ is
not in the core of $R^{b}$ as $c^{b}$ blocks it. And second, the only new
exchange cycle created in $R^{b}$, i.e., $c^{b}$, cannot be part of a core
allocation, because if it were, then to prevent the cycle $(4,5)$ from
blocking, the two-cycle $(5,6)$ would have to be part of the allocation, in which case $7$
would remain unmatched (i.e., be a self-cycle) and cycle $(6,7)$ would block
the allocation. Therefore, $x^{b}$ is the unique core allocation of $R^{b}$,
i.e., $\mathcal{C}(R^{b})=\\{x^{b}\\}$. $\diamond$
Using the above example we can easily prove the following result. For $i\in
N$, preferences $R_{i}$, and a set of allocations $X$, agent $i$’s most
preferred allotment in $X$ is her most preferred allotment among those that
she receives at the allocations in $X$.
###### Proposition 2.
Suppose the maximum allowed length of exchange cycles is 3. Then, there are
3-housing markets with strict preferences $(N,R)$ and $(N,\tilde{R})$ with
$X(R)\equiv\mathcal{SC}(R)=\mathcal{WC}(R)=\mathcal{C}(R)\neq\emptyset$ and
$X(\tilde{R})\equiv\mathcal{SC}(\tilde{R})=\mathcal{WC}(\tilde{R})=\mathcal{C}(\tilde{R})\neq\emptyset$
such that for some $i\in N$, $\tilde{R}$ is an improvement for $i$ with
respect to $R$ but
$X(\tilde{R})\subseteq X(R)$,
for the unique $x\in X(R)\backslash X(\tilde{R})$ and for each $\tilde{x}\in
X(\tilde{R})\backslash X(R)$, $x_{i}\,P_{i}\tilde{x}_{i}$.
###### Proof.
Let $(N,\tilde{R})$ be the 3-housing market with $N=\\{1,\ldots,10\\}$ and
$\tilde{R}=R^{b}$ from Example 6. Let $(N,R)$ be the 3-housing market that is
obtained from $(N,\tilde{R})$ by making object 1 unacceptable for agent 8.
Obviously, $\tilde{R}$ is an improvement for agent 1 with respect to $R$. As
shown in Example 6,
$\mathcal{SC}(\tilde{R})=\mathcal{WC}(\tilde{R})=\mathcal{C}(\tilde{R})=\\{x^{b}\\}\neq\emptyset$.
One also easily verifies that
$\mathcal{SC}(R)=\mathcal{WC}(R)=\mathcal{C}(R)=\\{x^{a},x^{b}\\}\neq\emptyset$.
Finally, agent 1’s most preferred allotment in
$\mathcal{SC}(R)=\mathcal{WC}(R)=\mathcal{C}(R)$ is object 2, while agent 1’s
unique (hence, most preferred) allotment in
$\mathcal{SC}(\tilde{R})=\mathcal{WC}(\tilde{R})=\mathcal{C}(\tilde{R})$ is
object 10. Since agent 1 strictly prefers object 2 to object 10, the result
follows. ∎
## 4 Integer Programming Formulations
In this section we propose novel integer programming (IP) formulations for the
core, the set of competitive allocations (the Wako-core), and the strong core.
First, we propose models for the unbounded case, for all three solution
concepts. Second, we present alternative cycle-formulations of Quint and
Wako’s formulations for the core and the strong core. Finally, we propose a
new formulation for the set of competitive allocations.
### 4.1 Novel edge-formulations
Let $(N,R)$ be a housing market and $G\equiv G(N,R)=(N,E)$ its acceptability
graph. Since all three cores only contain individually rational allocations,
we can restrict attention to the edges of the acceptability graph.
Specifically, with each edge $(i,j)\in E$ we associate a variable $y_{ij}$ as
follows:
$y_{ij}=\left\\{\begin{array}[]{ll}1&\mbox{if agent $i$ receives object
$j$;}\\\ 0&\mbox{otherwise.}\end{array}\right.$
Then, the base model reads as follows:
$\displaystyle\sum_{j:(i,j)\in E}y_{ij}=1$ $\displaystyle\forall i\in N$ (1)
$\displaystyle\sum_{j:(j,i)\in E}y_{ji}=1$ $\displaystyle\forall i\in N$ (2)
$\displaystyle y_{ij}\in\\{0,1\\}$ $\displaystyle\forall(i,j)\in E$ (3)
Constraints (1) guarantee that agent $i$ receives exactly one (acceptable)
object (possibly her own). Constraints (2) guarantee that object $i$ is given
to exactly one agent. Each vector ${(y_{ij})}_{(i,j)\in E}$ that satisfies
(1), (2), and (3) yields an allocation $x$ defined by $x_{i}=j$ if and only if
$y_{ij}=1$. Moreover, each allocation can be obtained in this way. So, there
is a one-to-one correspondence between allocations and vectors that satisfy
(1), (2), and (3).
We introduce for each $i\in N$ an additional integer variable $p_{i}$ that
represents the price of object $i$.
$\displaystyle p_{i}\in\\{1,\dots,n\\}$ $\displaystyle\forall i\in N$ (4)
In what follows we give our IP formulations for the general case of weak
preferences and explain how they can be simplified for strict preferences. We
tackle the core, the set of competitive allocations, and the strong core (in
this order), by subsequently adding constraints. Given an allocation $x$, we
say that $x$ dominates an edge $(i,j)$ in the acceptability graph $G$ if agent
$i$ weakly prefers her allotment $x_{i}$ to object $j$, i.e.,
$x_{i}\,R_{i}\,j$.
#### IP for the core
It follows from Lemma 1 that an allocation $x$ is in the core if and only if
each cycle in $G$ contains an edge that is dominated by $x$. Or equivalently,
there exists no cycle in $G$ that consists of undominated edges. Note that the
undominated edges form a cycle-free subgraph if and only if there is a
topological order of the objects. The existence of this topological order is
equivalent to the existence of prices of the objects such that for each
undominated edge $(i,j)$, $p_{i}<p_{j}$. Therefore, an allocation $x$ is in
the core if and only if there exist prices ${(p_{i})}_{i\in N}$ such that
$(i,j)\in E\mbox{ is not dominated by }x\,\,\Longrightarrow\,\,p_{i}<p_{j}.$
(*)
Thus, core allocations are characterised by constraints (1)–(4) together with
(5) below:
$\displaystyle p_{i}+1\leq p_{j}+n\cdot\sum_{k:kR_{i}j}y_{ik}$
$\displaystyle\forall(i,j)\in E$ (5)
###### Proposition 3.
Let $x$ be an allocation. Let $y$ be the corresponding vector that satisfies
(1), (2), and (3). Allocation $x$ is in the core if and only if there are
prices ${(p_{i})}_{i\in N}$ such that (4) and (5) hold.
###### Proof.
First observe that for each $(i,j)\in E$,
$(i,j)$ is dominated by $x$
$\displaystyle\Longleftrightarrow\,\,x_{i}\,R_{i}\,j$
$\displaystyle\Longleftrightarrow\,\,\mbox{there is }k\in N\mbox{ with
}k\,R_{i}\,j\mbox{ and }y_{ik}=1$
$\displaystyle\Longleftrightarrow\,\,\sum_{k:kR_{i}j}y_{ik}=1.$ (**)
Suppose $x$ is in the core. Then, there exist prices ${(p_{i})}_{i\in N}$ that
satisfy (4) and (* ‣ 4.1). We verify that (5) holds. Let $(i,j)\in E$. If
$(i,j)$ is not dominated by $x$, then (5) follows immediately from (* ‣ 4.1).
Suppose $(i,j)$ is dominated by $x$. From (** ‣ 4.1),
$\sum_{k:kR_{i}j}y_{ik}=1$. Hence,
$p_{i}+1\leq n+1\leq p_{j}+n=p_{j}+n\cdot\sum_{k:kR_{i}j}y_{ik}.$
Suppose that there exist prices ${(p_{i})}_{i\in N}$ such that (4) and (5)
hold. We verify that (* ‣ 4.1) holds. Let $(i,j)\in E$ and suppose it is not
dominated by $x$. From (** ‣ 4.1), $\sum_{k:kR_{i}j}y_{ik}=0$. Hence, from
(5), $p_{i}+1\leq p_{j}+n\cdot 0$, i.e., $p_{i}<p_{j}$. ∎
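For strict preferences, condition (* ‣ 4.1) can also be tested directly without searching for prices: collect the undominated edges and check that they contain no directed cycle (a topological order, and hence suitable prices, then exists). The following illustrative Python sketch is ours; the test applies it to the market $R^{b}$ of Example 6 with unbounded cycle lengths, where $x^{b}$ is in the core and $x^{a}$ is blocked by the three-cycle $(1,4,8)$.

```python
def in_core(prefs, x):
    """Core test via condition (*), for strict preferences.

    prefs[i]: agent i's acceptable objects, best first, own object last.
    x: an individually rational allocation (dict agent -> object).
    x is in the core iff the undominated edges of the acceptability
    graph contain no directed cycle."""
    rank = {i: {o: r for r, o in enumerate(prefs[i])} for i in prefs}
    # edge (i, j) is undominated by x iff j P_i x_i
    succ = {i: [j for j in rank[i] if rank[i][j] < rank[i][x[i]]]
            for i in prefs}
    # detect a directed cycle with a three-colour depth-first search
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {i: WHITE for i in prefs}
    def reaches_cycle(v):
        colour[v] = GREY
        for w in succ[v]:
            if colour[w] == GREY or (colour[w] == WHITE and reaches_cycle(w)):
                return True
        colour[v] = BLACK
        return False
    return not any(colour[v] == WHITE and reaches_cycle(v) for v in prefs)
```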
#### IP for the set of competitive allocations, i.e., the Wako-core
The set of competitive allocations is characterised by constraints (1)–(5)
together with (6) below:
$\displaystyle p_{i}\leq p_{j}+n\cdot(1-y_{ij})$ $\displaystyle\forall(i,j)\in
E$ (6)
###### Proposition 4.
Let $x$ be an allocation. Let $y$ be the corresponding vector that satisfies
(1), (2), and (3). Allocation $x$ is competitive if and only if there exist
prices ${(p_{i})}_{i\in N}$ such that (4), (5), and (6) hold. Moreover, if
such prices exist, then together with $x$ they constitute a competitive
equilibrium.
###### Proof.
Suppose $x$ is competitive. Let ${(p_{i})}_{i\in N}$ be prices such that
$(x,p)$ is a competitive equilibrium. Then, (4) and (* ‣ 4.1) hold. From the
first part of the proof of Proposition 3 it follows that (5) holds. We now
prove that (6) holds as well. Let $(i,j)\in E$. If $y_{ij}=0$, then
immediately $p_{i}\leq p_{j}+n=p_{j}+n\cdot(1-y_{ij})$. If $y_{ij}=1$, then
$x_{i}=j$, and since $(x,p)$ is a competitive equilibrium it follows from
Remark 1 that $p_{i}=p_{x_{i}}=p_{j}$.
Suppose that there exist prices ${(p_{i})}_{i\in N}$ such that (4), (5), and
(6) hold. We verify that $(x,p)$ is a competitive equilibrium. First, it
follows from (6) that for each $i\in N$, taking $j=x_{i}$ yields $p_{i}\leq
p_{x_{i}}+n\cdot(1-1)=p_{x_{i}}$, i.e., $p_{i}\leq p_{x_{i}}$. Hence, from
Remark 1, for each $i\in N$, $p_{i}=p_{x_{i}}$. Second, let $j\in N$ be an
object such that $j\,P_{i}\,x_{i}$. Then, $(i,j)\in E$ is not dominated by
$x$. From the second part of the proof of Proposition 3 it follows that (* ‣
4.1) holds. Hence, we obtain $p_{i}<p_{j}$. ∎
#### IP for the strong core
The strong core is characterised by constraints (1)–(6) together with (7)
below:
$\displaystyle p_{i}\leq p_{j}+n\cdot\left(\sum_{k:kP_{i}j}y_{ik}\right)$
$\displaystyle\forall(i,j)\in E$ (7)
###### Proposition 5.
Let $x$ be an allocation. Let $y$ be the corresponding vector that satisfies
(1), (2), and (3). Allocation $x$ is in the strong core if and only if there
exist prices ${(p_{i})}_{i\in N}$ such that (4), (5), (6), and (7) hold.
Moreover, if such prices exist, then together with $x$ they constitute a
competitive equilibrium.
###### Proof.
Suppose $x$ is in the strong core. By Remark 2, $x$ can be obtained in the
Quint-Wako algorithm by choosing for each absorbing set in the algorithm a
particular cycle cover. Hence, there exist price ${(p_{i})}_{i\in N}$ such
that (i) constraints (4) are satisfied, (ii) all objects in the same absorbing
set have the same price, and (iii) an absorbing set that is processed earlier
by the algorithm has a strictly higher associated price (of its objects). It
is easy to verify that $(x,p)$ is a competitive allocation. Hence, from the
first part of the proof of Proposition 4 it follows that (5) and (6) hold.
Finally, to see that (7) holds note that from the definition of the prices it
follows that (i) if $jR_{i}x_{i}$ then $p_{i}\leq p_{j}$ and (ii) if
$x_{i}P_{i}j$ then $p_{i}\leq n=n(\sum_{k:kP_{i}j}y_{ik})$.
Suppose that there exist prices ${(p_{i})}_{i\in N}$ such that (4), (5), (6),
and (7) hold. It follows from Proposition 4 that $(x,p)$ is a competitive
equilibrium. We prove that $x$ is a strong core allocation. Suppose there is a
coalition $S$ that weakly blocks $x$ through an allocation $z$. From Lemma 1
it follows that we can assume, without loss of generality, that
$S=\\{1,\ldots,r\\}$ and that for each $i=1,\ldots,r-1$, $z_{i}=i+1$,
$z_{r}=1$, and $z_{1}P_{1}x_{1}$. Since $x$ is individually rational, $r>1$.
Since $(x,p)$ is a competitive equilibrium, $p_{1}<p_{2}$. Since
$3=z_{2}\,R_{2}\,x_{2}$, we have $\sum_{k:kP_{2}3}y_{2k}=0$. Hence, from (7),
$p_{2}\leq p_{3}+n\cdot\left(\sum_{k:kP_{2}3}y_{2k}\right)=p_{3}.$
So, $p_{2}\leq p_{3}$. By repeatedly applying the same arguments we find
$p_{2}\leq p_{3}\leq\cdots\leq p_{r}\leq p_{1}$. Since $p_{1}<p_{2}$, we
obtain a contradiction. Therefore, there is no coalition that weakly blocks
$x$. Hence, $x$ is a strong core allocation. ∎
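On very small markets, the characterisations of Propositions 3–5 can be verified by exhaustively searching the price grid $\{1,\dots,n\}^{n}$ for vectors satisfying (4)–(7). The following Python sketch is ours, restricted to strict preferences for simplicity; the test uses the market $R$ of Example 4, whose unique (strong) core allocation is $x^{a}=\{(1,2),(3,4)\}$. Consistent with Remark 4, for strict preferences the competitive and strong-core flags coincide.

```python
from itertools import product

def classify_allocation(prefs, x):
    """Brute-force check of Propositions 3-5 for strict preferences.

    Searches the whole price grid {1,...,n}^n for a vector satisfying
    (4)-(5) (core), (4)-(6) (competitive), and (4)-(7) (strong core).
    Only sensible for tiny markets."""
    agents = list(prefs)
    n = len(agents)
    rank = {i: {o: r for r, o in enumerate(prefs[i])} for i in agents}
    edges = [(i, j) for i in agents for j in rank[i]]

    def feasible(p, comp, strong):
        for i, j in edges:
            undominated = rank[i][j] < rank[i][x[i]]            # j P_i x_i
            weakly_pref = rank[i][j] <= rank[i][x[i]]           # j R_i x_i
            if undominated and not p[i] < p[j]:                 # constraint (5)
                return False
            if comp and x[i] == j and not p[i] <= p[j]:         # constraint (6)
                return False
            if strong and weakly_pref and not p[i] <= p[j]:     # constraint (7)
                return False
        return True

    grid = [dict(zip(agents, v)) for v in product(range(1, n + 1), repeat=n)]
    return (any(feasible(p, False, False) for p in grid),
            any(feasible(p, True, False) for p in grid),
            any(feasible(p, True, True) for p in grid))
```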
###### Remark 4.
We note that in the case of strict preferences, constraints (7) are satisfied
by any competitive equilibrium $(x,p)$. To see this note that if $y_{ij}=1$
then (6) implies (7), since $1-y_{ij}=0$, and hence
$p_{i}\leq p_{j}+n\cdot(1-y_{ij})=p_{j}\leq
p_{j}+n\cdot\left(\sum_{k:kP_{i}j}y_{ik}\right).$
Otherwise, if $y_{ij}=0$ then (5) implies (7), since for strict preferences
$\sum_{k:kP_{i}j}y_{ik}=\sum_{k:kP_{i}j}y_{ik}+y_{ij}=\sum_{k:kR_{i}j}y_{ik}$,
and hence
$p_{i}<p_{i}+1\leq
p_{j}+n\cdot\left(\sum_{k:kR_{i}j}y_{ik}\right)=p_{j}+n\cdot\left(\sum_{k:kP_{i}j}y_{ik}\right).$
Therefore, in either case, constraints (7) are satisfied. This reflects the
fact that for strict preferences the strong core is a singleton that consists
of the unique competitive allocation. $\diamond$
### 4.2 Quint and Wako’s IP formulations
To compare our IP formulations with the IP formulations for the core and the
strong core given by Quint and Wako [35], we describe the latter IP
formulations using our notation.
First, for both the core and the strong core, Quint and Wako [35] used the
“basic” constraints (1), (2), and (3). We refer to their formulas (9.2),
(9.3), (9.4), as well as (8.2), (8.3), (8.4), together with an integrality
condition.
Next, to obtain the core Quint and Wako [35] imposed the following additional
no-blocking condition (see (9.1) in [35]):
$\displaystyle\sum_{i\in S}\left(\sum_{j:jR_{i}\pi_{i}}y_{ij}\right)\geq 1$
$\displaystyle\forall S\subseteq N,\pi\in\Pi_{S}$ (8)
Finally, to obtain the strong core Quint and Wako [35] imposed the following
additional no-blocking condition (see (8.1) in [35]):
$\displaystyle\sum_{i\in
S}\left(\sum_{j:jP_{i}\pi_{i}}y_{ij}+\frac{1}{|S|}\sum_{j:jI_{i}\pi_{i}}y_{ij}\right)\geq
1$ $\displaystyle\forall S\subseteq N,\pi\in\Pi_{S},$ (9)
where $\Pi_{S}$ is the set of allocations in the submarket $M_{S}$ (so that
$\pi$ is an allocation in $M_{S}$).
Constraints (8) and (9) directly describe that no coalition $S$ can block /
weakly block through an allocation $\pi$, respectively. Both sets of
constraints contain exponentially many inequalities (in the number of agents), since they are
required not only for all subsets $S$ of $N$, but also for all possible
redistributions within each $S$.
#### Alternative cycle-formulations
In view of Lemma 1, it is sufficient to impose constraints (8) and (9) for the
cycles of the acceptability graph $G$. Based on this observation and results
in [25], we will describe alternative cycle-formulations for the core and the
strong core. Furthermore, we will provide a new proposition and IP formulation
for the Wako-core.
Let $M=(N,R)$ be a housing market. Let $\mathcal{C}$ denote the set of
exchange cycles in $G(N,R)$. For a cycle $c\in\mathcal{C}$, let $N(c)$ and
$A(c)$ denote the set of nodes and edges in $c$, respectively, and let $|c|$
denote the size/length of $c$. We write $c_{i}=j$ if agent $i$ receives object
$j$ in the exchange cycle $c$, i.e., $(i,j)\in A(c)$.
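As a concrete illustration of this notation, the sketch below (our own illustrative helper, not the paper's implementation) enumerates the exchange cycles of an acceptability graph by depth-first search; the optional `max_len` argument restricts the search to cycles of bounded length, as needed in Section 4.3.

```python
def exchange_cycles(G, max_len=None):
    """Enumerate simple directed cycles of the acceptability graph G.

    G maps each agent to the set of agents whose objects it finds
    acceptable; an exchange cycle is a simple directed cycle in G.
    Each cycle is returned once, as a tuple rotated so that its
    smallest agent comes first.
    """
    cycles = []

    def dfs(start, node, path, on_path):
        for nxt in sorted(G[node]):
            if nxt == start:
                if max_len is None or len(path) <= max_len:
                    cycles.append(tuple(path))
            elif nxt > start and nxt not in on_path:
                if max_len is None or len(path) < max_len:
                    dfs(start, nxt, path + [nxt], on_path | {nxt})

    for s in sorted(G):          # each cycle is rooted at its smallest agent
        dfs(s, s, [s], {s})
    return cycles
```

For example, for `G = {1: {2}, 2: {1, 3}, 3: {1}}` the exchange cycles are `(1, 2)` and `(1, 2, 3)`, and only `(1, 2)` survives the bound `max_len=2`.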
###### Proposition 6 ([25]).
An allocation $x$ is in the core if and only if for each cycle
$c\in\mathcal{C}$, for some agent $i\in N(c)$, $x_{i}R_{i}c_{i}$.
The corresponding IP constraints, which reduce the constraints (8) to cycles,
are as follows:
$\displaystyle\sum_{(i,j)\in A(c)}\sum_{k:kR_{i}j}y_{ik}\geq 1$
$\displaystyle\forall c\in\mathcal{C}$ (10)
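To make Proposition 6 concrete, the following sketch (an illustration we add, not code from the paper) checks the cycle condition for an integral allocation, with each agent's preferences encoded as a rank dictionary in which a lower rank means a more preferred object:

```python
def in_core(x, cycles, rank):
    """Proposition 6 check: x is in the core iff every exchange cycle c
    contains some agent i with x_i R_i c_i, i.e. i weakly prefers her
    allotment under x to the object c would give her.

    x: agent -> allotted object; rank: agent -> {object: rank}, lower is
    better; cycles: tuples of agents, where c[t] receives the object of
    c[(t + 1) % len(c)].
    """
    for c in cycles:
        blocks = True
        for t, i in enumerate(c):
            offer = c[(t + 1) % len(c)]          # object i gets in c
            if rank[i][x[i]] <= rank[i][offer]:  # x_i R_i c_i
                blocks = False
                break
        if blocks:
            return False
    return True
```

In a three-agent market where agents 1 and 2 each strictly prefer the other's object to their own, the initial endowment is blocked by the cycle (1, 2), while the allocation that swaps their objects is in the core.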
Next, we describe the alternative cycle-formulation for the strong core. First
we focus on the special case of strict preferences.
###### Proposition 7 ([25]).
Suppose preferences are strict. Then, an allocation $x$ is in the strong core
if and only if for each cycle $c\in\mathcal{C}$, $c$ is an exchange cycle in
$x$ or for some agent $i\in N(c)$, $x_{i}P_{i}c_{i}$.
Proposition 7 leads to the following constraints:
$\displaystyle\sum_{(i,j)\in A(c)}y_{ij}+|c|\cdot\left[\sum_{(i,j)\in
A(c)}\,\,\sum_{k:kP_{i}j}y_{ik}\right]\geq|c|$ $\displaystyle\forall
c\in\mathcal{C}$ (11)
The alternative cycle-formulation for the strong core in the general case
(where preferences can have ties) is as follows.
###### Proposition 8 ([25]).
An allocation $x$ is in the strong core if and only if for each cycle
$c\in\mathcal{C}$,
(i) $c$ is an exchange cycle in $x$, or
(ii) for some agent $i\in N(c)$, $x_{i}P_{i}c_{i}$, or
(iii) for each agent $i\in N(c)$, $c_{i}I_{i}x_{i}$.
The corresponding IP constraints, which reduce the constraints (9) to cycles,
are as follows:
$\displaystyle\sum_{(i,j)\in
A(c)}\sum_{k:kI_{i}j}y_{ik}+|c|\cdot\left[\sum_{(i,j)\in
A(c)}\,\,\sum_{k:kP_{i}j}y_{ik}\right]\geq|c|$ $\displaystyle\forall
c\in\mathcal{C}$ (12)
Finally, similarly to the core and strong core, we provide a new alternative
characterisation for the Wako-core.
###### Proposition 9.
An allocation $x$ is in the Wako-core if and only if for each cycle
$c\in\mathcal{C}$,
(i) $c$ is an exchange cycle in $x$, or
(ii) for some agent $i\in N(c)$, $x_{i}P_{i}c_{i}$, or
(iii) for some agent $i\in N(c)$, $c_{i}I_{i}x_{i}$ and $c_{i}\neq x_{i}$.
The proof of Proposition 9 is omitted as it is very similar to that of
Proposition 8 (see [25]). Proposition 9 leads to the following constraints,
which can be used to find competitive allocations (i.e., allocations in the
Wako-core):
$\displaystyle\sum_{(i,j)\in A(c)}y_{ij}+|c|\cdot\left[\sum_{(i,j)\in
A(c)}\,\,\sum_{k:kR_{i}j,k\neq j}y_{ik}\right]\geq|c|$ $\displaystyle\forall
c\in\mathcal{C}$ (13)
To see the correctness of this new formulation, observe that the first term of
(13) is equal to $|c|$ if condition (i) of Proposition 9 is satisfied and less
than $|c|$ otherwise; and the second term has value at least $|c|$ if
condition (ii) or (iii) of Proposition 9 is satisfied and 0 otherwise.
Therefore, constraint (13) is satisfied if and only if at least one of the
three conditions of Proposition 9 holds.
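The counting argument above can also be checked numerically. The sketch below (our illustration, assuming an integral solution with $y_{ij}=1$ iff $x$ gives object $j$ to agent $i$) evaluates the left-hand side of (13) for a single cycle:

```python
def lhs_13(c, x, rank):
    """Left-hand side of constraint (13) for cycle c under allocation x.

    The first term counts edges of c used by x, so it equals len(c)
    exactly when c is an exchange cycle in x.  The second term adds
    len(c) for every agent i in c whose allotment x[i] differs from c_i
    but satisfies x[i] R_i c_i (conditions (ii)/(iii) of Proposition 9).
    """
    n = len(c)
    edges = [(c[t], c[(t + 1) % n]) for t in range(n)]
    first = sum(1 for i, j in edges if x[i] == j)
    second = n * sum(1 for i, j in edges
                     if x[i] != j and rank[i][x[i]] <= rank[i][j])
    return first + second
```

A value below $|c|$ signals that none of the three conditions of Proposition 9 holds for $c$, i.e. that $c$ blocks $x$ in the Wako sense.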
### 4.3 Bounded length exchanges
Note that the above cycle-formulations are not very practical due to the
exponentially large number of cycles. In fact, this is what motivated the
novel IP formulations proposed in Section 4.1. However, the cycle-formulations
are
practical for the case of bounded length exchanges.
One easily verifies that Lemma 1 can be extended to bounded length exchanges
in a natural way: the strong core, Wako-core, and core of a $k$-housing market
can be defined equivalently by the absence of corresponding blocking cycles of
size at most $k$. In fact, Klimentova et al. [25] proposed associated IP
formulations by adapting constraints (10) and (12) to bounded exchange cycles.
One can similarly adapt constraints (13) to obtain an IP formulation for the
Wako-core of a $k$-housing market. In our simulations we used the most
efficient cycle-edge formulations by Klimentova et al. (see the detailed
description in section 3.3 of [25]).
## 5 Computational Experiments
In this section we perform a computational analysis of the models proposed in
Section 4 and compare them with the models for bounded length exchange cycles,
proposed in [25]. Furthermore, we estimate the frequency of violations of the
respecting improvement property for all models by computational simulations.
The models are run for both strict and weak preferences and considering two
objective functions: maximisation of the size of the exchange (corresponding
to the maximisation of the number of transplants in the context of KEPs),
denoted by Maxt, and maximisation of total weight (where weights can mean the
scores given to the corresponding transplants in a KEP, reflecting the
qualities of the transplants), denoted by Maxw. For bounded length exchanges,
the maximum length considered are $k=2$ and $k=3$.
In Section 5.1, we compare the size/weight of the maximum size/weight
allocation to the core, competitive and strong core allocations under the same
objective. For the unbounded case, we further analyse the price of fairness:
the difference in percentage in the number of transplants of the maximum total
weight solution, and the core, competitive and strong core allocations for
both objectives, when compared to the maximum size solution.
In Section 5.2 we calculate the average number of weakly blocking cycles for
allocations provided by each formulation. By doing so, we give a rough
indication of how far each solution is from the strong core. We complement
that analysis with the quantification of the average number of vertices of an
instance that may obtain a strictly better allotment in at least one weakly
blocking cycle.
Average CPU times required to solve an instance of a given size for each of
the formulations in Section 4 are presented in subsection 5.3.
Finally, in subsection 5.4 we provide results on the frequency of violations
of the respecting improvement property for all of our models.
All programs were implemented in the Python programming language, using Gurobi
as the optimisation solver [21]. Tests were executed on a Mac Mini 8 running
macOS version 10.14.3, with an Intel Core i7 CPU (6 cores at 3.2 GHz) and 8 GB
of RAM.
Test instances were generated with the generator proposed in [40, 27] and are
available from http: TBA. The number of pairs of an instance ranges from 20 to
150; 50 instances of each size were generated. The weights associated with the
arcs of the graph were generated randomly within the interval $(0,1)$, and
preferences were assigned in accordance with those weights: the higher the
weight of an outgoing arc for a given vertex, the more preferred the
corresponding good. For weak preferences, outgoing arcs with weights within
each interval of length $\frac{1}{|V|}$ were considered equally preferable.
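A minimal sketch of this instance-generation scheme (our reading of the description above, not the exact generator of [40, 27]):

```python
import random

def generate_instance(n, seed=0):
    """Random weights in (0, 1) on all arcs of a complete digraph on n
    vertices; each vertex prefers outgoing arcs with higher weight, and
    for weak preferences arcs whose weights fall in the same interval of
    length 1/n are tied.  Returns (weights, ranks); lower rank = more
    preferred.
    """
    rng = random.Random(seed)
    w = {(i, j): rng.random() for i in range(n) for j in range(n) if i != j}
    rank = {}
    for i in range(n):
        band = {j: int(w[i, j] * n) for j in range(n) if j != i}  # interval index
        top = max(band.values())
        rank[i] = {j: top - b for j, b in band.items()}           # invert: 0 = best
    return w, rank
```

By construction, a strictly better rank always corresponds to a strictly larger weight, while weights in the same band of length $1/|V|$ are tied.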
### 5.1 Impact of stability on the number of transplants
Figure 20 presents average results for the maximum size and maximum weight
objectives for weak preferences under different settings: no stability
requirements (Max), core, competitive and strong core allocations. We refrain
from presenting the results for the case of strict preferences, as all curves
are similar, except that for strict preferences the competitive and strong
core allocations are the same.
As expected, both the number of transplants and the total weight decrease as
we increase the number of constraints, from Max to Core, then to Competitive,
and then to Strong Core allocations. The strong core curve is non-monotonic,
which is explained by the absence of feasible solutions for several instances.
Next to the curve we present the number of instances out of the total 50 where
a feasible solution existed.
Figure 20: Number of transplants (left) and total weight of transplants
(right) for unbounded length and weak preferences. The numbers in the chart
reflect the number of instances where a feasible solution existed.
Figure 21 makes a similar analysis for the bounded case, when $k=2$ and $k=3$.
Maxk=∞ refers to the unbounded exchange problem, whilst Maxk=2 and Maxk=3
correspond to the bounded problem for $k=2$ and $k=3$, respectively. The same
reasoning is used for the notation associated with the Wako-core (W.-Core) and
the strong core (S.Core). (As mentioned before, for the unbounded case
competitive allocations are equivalent to the Wako-core, while for the bounded
case we only have the Wako-core.) For ease of comparison between the bounded
and the unbounded cases, we again plot the two curves from Figure 20
bounded and the unbounded cases, we again plot the two curves from Figure 20
associated with maximum utility (Max) which, in both cases, represent an upper
bound for our solutions. Naturally, the curves associated with $k=2$ are
dominated by those associated with $k=3$. We can observe that the maximum
number of transplants for $k=3$ and for unbounded $k$ are very similar (see
Figure 21 (left)). Notice also that even though some curves overlap and seem
identical, there are minor differences among them, except for the core and
Wako-core allocations for $k=2$, which coincide. Again, we only present
results for weak preferences, as this is the more general case. For strict
preferences, the curves for $k=3$ are similar; for $k=2$, core, competitive
and strong core allocations coincide, and the latter two are also the same for
unbounded exchanges.
Figure 21: Comparison of the number of transplants (left) and the total weight
of transplants (right) for bounded length exchanges ($k=2,3$) and weak
preferences. A solid line is used for the unbounded case, dotted lines are
used for $k=2$ and dashed lines for $k=3$.
From a practical point of view an interesting question is to study the impact
of stability requirements on the number of transplants achievable. Although
KEPs have many other key performance indicators, this is unarguably the most
relevant one, as this criterion is optimised as the first objective in all the
European KEPs [11]. Figure 22 presents the price of fairness, that is, the
percentage difference in the number of transplants of the Maxw allocation, and
of the Core, Competitive and Strong Core allocations under both objectives,
when compared to the maximum number of transplants achievable (Maxt). Subscripts
$t$ and $w$ identify the objective functions used for each allocation. As
shown, the price of fairness for competitive and strong core allocations is
extremely high, when compared to the core. It decreases with problem size for
both objective functions and for all allocation models, being slightly higher
for the core for the total weight objective (see curve Corew). For the
maximisation of the number of transplants (curve Coret), for instances with
more than 50 nodes the reduction is less than 3%, decreasing to 1% for the
largest instance. This result is of major practical relevance, as it indicates
that with increasing size of the programs one can consider pairs’ preferences
in the matching with no significant reduction in the number of pairs
transplanted.
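For concreteness, the price of fairness reported in Figure 22 is a plain percentage reduction; a minimal sketch of the computation (our illustration):

```python
def price_of_fairness(maxt_size, allocation_size):
    """Percentage reduction in the number of transplants of a given
    allocation (core, competitive, strong core, or Maxw) relative to
    the maximum achievable number of transplants (Maxt)."""
    return 100.0 * (maxt_size - allocation_size) / maxt_size
```

For example, a core allocation with 97 transplants against a maximum of 100 has a price of fairness of 3%.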
Figure 22: Price of fairness with respect to maximum number of transplants for
core, competitive and strong core allocations, with maximum number of
transplants and maximum total weight, and solution with maximum total weight;
for strict (left) and weak (right) preferences.
### 5.2 Assessing the distance of different solutions from the strong core
Figure 23 (left) presents the average number of weakly blocking cycles of size
2 in Max, Core, and Competitive (Wako-core) allocations. We denote the maximum
length of the blocking cycles considered by $l$. For the bounded case,
following the same reasoning as in [25], the figure also reports the minimum
average number of weakly blocking cycles for the cases where the strong core
does not exist, i.e., for the maximum number of transplants/total weight
solution with minimum number of weakly blocking cycles. Interestingly, when
the objective function is the number of transplants, the “instability” of the
solutions barely depends on the size of exchanges allowed. The same does not
hold for the core, where the number of blocking cycles is considerably smaller
for $k=2$. For this and all the remaining cases, the average number of weakly
blocking cycles is very low, in most cases below 1. It is worth noting that
the average number of blocking cycles tends to be smaller when the objective
is to maximise the total weight (Figure 23 (right)). A plausible justification
for this is that the weights reflect patients’ preferences and therefore a
solution obtained by considering that objective will be closer to a stable
solution.
Figure 23: Number of weakly blocking cycles of size ${l=2}$ for solutions with
maximum number of transplants (left) and maximum total weight of transplants
(right), for unbounded exchanges and exchanges of size up to $k=2$ and $k=3$
for weak preferences. A solid line is used for the unbounded case, dotted
lines are used for $k=2$ and dashed lines for $k=3$.
Figure 24 presents the same analysis, now considering weakly blocking cycles
of size up to 3. Naturally, the solutions for $k=2$ are excluded from this
analysis, as they are fully reflected in Figure 23. The conclusions drawn for
$l=2$ remain valid for this case.
Figure 24: Number of weakly blocking cycles of size ${l=3}$ for solutions with
maximum number of transplants (left) and maximum total weight of transplants
(right), for unbounded exchanges and exchanges of size up to $k=3$ for weak
preferences.
For the unbounded case, the number of blocking cycles is larger, since one
must consider also the cases when $l>3$. Figure 25 provides information on the
number of weakly blocking cycles of size up to 4 and up to 5. We do not
present results for $l>5$, as searching for those blocking cycles would exceed
our CPU time limit.
Figure 25: Number of weakly blocking cycles of size ${l=4}$ and $l=5$ for
solutions with maximum number of transplants (left) and maximum total weight
of transplants (right), for unbounded exchanges and weak preferences.
Although the information above is already insightful, to complement our
analysis we provide in Figure 26 information on the average number of vertices
of an instance that receive a strictly better allotment in at least one weakly
blocking cycle (i.e., on the number of patients that could receive a strictly
better kidney in a deviating allocation). An important conclusion can be drawn
from the results in the figure: the maximisation of total weight decreases the
number of agents that can get a better allotment in a blocking cycle when
compared to the maximum size solutions (compare curves Max in Figure 26 (left)
and (right)). Comparison with Figure 22 also allows one to analyse the
trade-off, in terms of the reduction in the total number of transplants, that
would be necessary to meet a certain level of patients’ preferences.
Figure 26: Average number of agents per instance for which there exists at
least one weakly blocking cycle in which the agent receives a strictly better
allotment, for weak preferences. The grey line is a reference line showing the
number of vertices in an instance.
### 5.3 CPU time for unbounded models
In Table 1 we present the average CPU time for solving an instance of a given
size with one of the three newly proposed IP models for the unbounded case.
| Strict preferences | Weak preferences
| Max # transplants | Max total weight | Max # transplants | Max total weight
$|V|$ | Core | Compet. | S.Core | Core | Compet. | S.Core | Core | Compet. | S.Core | Core | Compet. | S.Core
---|---|---|---|---|---|---|---|---|---|---|---|---
20 | 0.00 | 0.03 | 0.01 | 0.00 | 0.02 | 0.01 | 0.00 | 0.04 | 0.01 | 0.00 | 0.03 | 0.01
30 | 0.03 | 0.13 | 0.04 | 0.02 | 0.11 | 0.03 | 0.02 | 0.28 | 0.04 | 0.02 | 0.17 | 0.03
40 | 0.08 | 0.48 | 0.12 | 0.06 | 0.25 | 0.11 | 0.09 | 0.63 | 0.10 | 0.06 | 0.44 | 0.08
50 | 0.24 | 1.74 | 0.38 | 0.16 | 0.58 | 0.34 | 0.20 | 2.15 | 0.25 | 0.17 | 1.06 | 0.21
60 | 0.47 | 2.39 | 0.87 | 0.28 | 0.91 | 0.79 | 0.52 | 6.03 | 0.44 | 0.26 | 2.87 | 0.39
70 | 1.06 | 3.91 | 1.94 | 0.66 | 2.29 | 1.50 | 0.84 | 16.99 | 1.09 | 0.53 | 7.35 | 0.77
80 | 1.62 | 6.54 | 3.26 | 0.82 | 3.39 | 2.32 | 1.41 | 32.21 | 1.63 | 0.76 | 17.47 | 1.01
90 | 3.14 | 36.34 | 5.31 | 3.27 | 5.38 | 3.59 | 3.29 | 167.15 | 2.36 | 1.82 | 80.88 | 1.49
100 | 3.53 | 16.19 | 19.26 | 2.43 | 6.15 | 9.81 | 4.51 | 188.35 | 8.87 | 3.08 | 95.39 | 4.62
110 | 8.73 | 21.42 | 28.26 | 4.97 | 9.01 | 13.79 | 6.68 | 331.64 | 16.40 | 5.92 | 159.12 | 7.24
120 | 17.84 | 72.87 | 57.36 | 6.81 | 15.36 | 24.32 | 20.14 | 392.88 | 19.60 | 6.79 | 218.58 | 10.87
130 | 14.34 | 46.92 | 84.49 | 14.24 | 22.68 | 34.11 | 14.78 | 586.27 | 21.75 | 12.32 | 438.23 | 10.42
140 | 29.50 | 61.99 | 110.82 | 21.51 | 34.33 | 46.67 | 41.59 | 708.92 | 40.97 | 16.43 | 539.56 | 14.89
150 | 41.99 | 161.10 | 214.32 | 30.66 | 52.61 | 70.77 | 57.13 | 786.43 | 61.79 | 27.82 | 682.99 | 23.91
Table 1: Average CPU time (in seconds) for solving an instance of a given size
with the proposed formulation.
The instances with weak preferences are harder to solve for the core model
and, in particular, for the competitive allocation model. However, finding the
strong core was faster for weak than for strict preferences. Moreover,
surprisingly, finding the strong core is the most time-consuming task for
strict preferences, while it is the least time-consuming for weak preferences.
Finally, we note that, with respect to CPU time, the models for finding core
and strong core allocations perform within the same order of magnitude as the
corresponding models for the bounded case analysed in [25].
### 5.4 Violation of respecting improvement property
In this section we computationally analyse how often the respecting
improvement (RI) property is violated for the different models, for both the
unbounded and bounded cases. To do so, for each model and for instances with
20 and 30 vertices we run the procedure presented in Algorithm 1. For the
unbounded case we considered the Max and Core models under both objectives.
Let $r^{i}_{k}\in\\{1,\dots,|V|\\}$ be the rank of good $k$ for agent $i$,
reflecting the preferences of $i$, i.e. if $r^{i}_{k}\leq(<,=)r^{i}_{j}$, then
$kR_{i}(P_{i},I_{i})j$.
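For example, the preference relations can be recovered from the rank vectors as follows (illustrative helper, not part of the paper's code):

```python
def relation(r_i, k, j):
    """Agent i's relation between goods k and j from the rank vector r_i
    (lower rank = more preferred): 'P' if k P_i j, 'I' if k I_i j,
    otherwise 'jPk' (j is strictly preferred to k)."""
    if r_i[k] < r_i[j]:
        return "P"
    if r_i[k] == r_i[j]:
        return "I"
    return "jPk"
```

Under this encoding, $kR_{i}j$ holds exactly when the returned relation is `'P'` or `'I'`.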
Result: $N$ number of violations of RI property
$N\leftarrow 0$;
for $i\in V$, $j\in V$, $i\neq j$ do
Let $R$ be the current preferences of agents;
Find allocation with the best allotment for $i$ with respect to $R$, denote
the solution by $\bar{y}$;
For $\bar{y}_{il}=1$ denote $\bar{r}\leftarrow r^{i}_{l}$;
while $\exists k$ such that $kP_{j}i$ do
Let $k$ be the first strictly preferred agent for $j$ that precedes $i$ in
$R$;
if Strict preferences then
Swap $i$ with $k$ in the list of preferences of $j$;
end if
if Weak preferences then
Let $i$ become equally preferred for $j$ as $k$ (i.e. $r^{j}_{i}=r^{j}_{k}$);
end if
Denote modified preferences by $\tilde{R}$;
Find allocation with the best allotment for $i$ with respect to $\tilde{R}$,
denote solution by $\tilde{y}$;
For $\tilde{y}_{it}=1$, denote $\tilde{r}=r^{i}_{t}$;
if $\bar{r}<\tilde{r}$ then
The respecting improvement property is violated: $N\leftarrow N+1$;
end if
$\bar{r}\leftarrow\tilde{r}$; $R\leftarrow\tilde{R}$;
end while
end for
Algorithm 1 Procedure for Checking RI property
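The improvement step inside the while loop can be sketched as follows (our illustrative reading of Algorithm 1; finding the best allotment itself requires solving the corresponding IP model and is not shown):

```python
def improve(prefs_j, i, strict):
    """One improvement step for agent i in agent j's rank vector prefs_j
    (object -> rank, lower = more preferred).  Let k be the least
    preferred agent that j still strictly prefers to i: under strict
    preferences i and k swap ranks; under weak preferences i joins k's
    indifference class.  Returns the new rank vector, or None once i
    has reached the top of j's list.
    """
    better = [k for k in prefs_j if prefs_j[k] < prefs_j[i]]
    if not better:
        return None                                  # the while loop ends
    k = max(better, key=lambda a: prefs_j[a])        # k immediately precedes i
    new = dict(prefs_j)
    if strict:
        new[i], new[k] = prefs_j[k], prefs_j[i]      # swap i and k
    else:
        new[i] = prefs_j[k]                          # tie i with k
    return new
```

Each call performs one iteration of the while loop; the violation count is then updated by comparing the rank of $i$'s best allotment before and after the improvement.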
For each pair of agents $i$ and $j$, agent $i$ makes successive improvements,
moving up in the preference list of agent $j$ until reaching its top. In each
step (see the while loop in the algorithm), for the case of strict
preferences, $i$ is swapped with $k$, the least preferred agent that $j$
strictly prefers to $i$. For the case of ties, agent $i$ first becomes equally
preferred by $j$ to $k$. After the improvements, the best allotments for $i$
under the original ($R$) and improved ($\tilde{R}$) preferences are compared.
A violation of the RI property is recorded if $i$ obtains a strictly worse
allotment in the allocation for $\tilde{R}$.
Figures 27 and 28 present box plots for the number of violations of the RI
property for instances of a given size for strict and weak preferences,
respectively, for those models where the RI property is violated at least
once. Models whose results are the same, independently of the objective
considered, are plotted together. That is the case, for example, of Coret and
Corew for $k=3$ and strict preferences (see Figure 27), or of the Wako-core
and Core for weak preferences and $k=2$ and $k=3$ (see Figure 28).
Figure 27: Number of violations of the respecting improvement property for all
instances in total of a given size, $|V|=20,30$, for strict preferences.
Figure 28: Number of violations of the respecting improvement property for all
instances in total of a given size, $|V|=20,30$, for weak preferences.
For (Wako-, Strong) Core models there were few cases of violations of the RI
property, as reflected in the figure. To give an indication, the total number
of violations for all instances with 30 vertices for the weak preferences and
$k=3$ was 4549 for Maxt, 3145 for Maxw, 10 for Coret,w, 20 for W.-Coret,w, and
2 for S.Coret,w. For maximum size and maximum weight solutions (Maxt and Maxw,
respectively), both for the unbounded and the bounded cases, one can observe a
significant number of violations. Those numbers increase with instance size.
Interestingly, for the unbounded case the number of violations for Maxt was
lower than that for Maxw. This can be explained by the fact that the former
problem has a larger number of alternative best allocations, while for the
weighted objective problem the solution is usually unique. On the contrary,
for the bounded case maximum weight solutions violated the property fewer
times than maximum size solutions.
## 6 Conclusion
This paper advances the current state of the art in several lines of research.
We first prove that in the case of strict preferences the unique competitive
allocation respects improvement; an extension of that result is provided for
the case of ties.
We also advance the work on the housing market of Shapley and Scarf presented
in [35] by providing Integer Programming models that do not require an
exponential number of constraints for the weak core, the strong core, and the
set of competitive allocations. These models assume that there is no limit on
the maximum size of an exchange cycle. However, since there are problems where
such an assumption may not hold (e.g. Kidney Exchange Programmes), we further
propose alternative IP models for bounded cycles. This contribution is
inspired by the definition of competitive equilibrium allocations provided in
[45].
We proceed with computational experiments that provide insights on the trade-
off between stability requirements and the maximum number of transplants.
Results show that this trade-off decreases with increasing instance size: for
instances with more than 50 nodes the impact of core allocations on the
reduction of transplants is less than 3%, decreasing to 1% for the largest
instance. Furthermore, results show that when the objective is to maximise the
number of transplants, the “instability” of the solutions, measured by the
number of weakly blocking cycles, barely depends on the length of the
exchanges. Additionally, the maximisation of total weight instead of the
number of transplants leads to solutions where patients’ preferences matter
more.
The main question we leave open is whether the respecting improvement property
with regard to the best allotment holds for a) the core for unbounded
exchanges and b) stable matchings in the roommates problem with strict
preferences. It would also be interesting to study whether the respecting
improvement property can be used to characterise the TTC mechanism for
classical housing markets with strict preferences.
## Acknowledgements
We thank Antonio Nicoló for his contribution to an earlier version of this
paper, and Tayfun Sönmez and Utku Ünver for valuable comments.
## References
* [1] Atila Abdulkadiroğlu and Tayfun Sönmez. School choice: A mechanism design approach. American Economic Review, 93(3):729–747, 2003.
* [2] David J. Abraham, Avrim Blum, and Tuomas Sandholm. Clearing algorithms for barter exchange markets: Enabling nationwide kidney exchanges. In Proceedings of the 8th ACM Conference on Electronic Commerce, EC-07, pages 295–304, New York, NY, USA, 2007. ACM.
* [3] Kolos Csaba Ágoston, Péter Biró, and Iain McBride. Integer programming methods for special college admissions problems. Journal of Combinatorial Optimization, 32(4):1371–1399, 2016.
* [4] Kolos Csaba Ágoston, Péter Biró, and Richárd Szántó. Stable project allocation under distributional constraints. Operations Research Perspectives, 5:59–68, 2018.
* [5] Mourad Baïou and Michel Balinski. The stable admissions polytope. Mathematical Programming, 87(3):427–439, 2000.
* [6] Michel Balinski and Tayfun Sönmez. A tale of two mechanisms: Student placement. Journal of Economic Theory, 84(1):73–94, 1999.
* [7] Péter Biró and Katarína Cechlárová. Inapproximability of the kidney exchange problem. Information Processing Letters, 101(5):199–202, 2007.
* [8] Péter Biró, Bernadette Haase-Kromwijk, Joris van de Klundert, and et al. Building kidney exchange programmes in Europe – an overview of exchange practice and activities. Transplantation, 103(7):1514–1522, 2019.
* [9] Péter Biró, David F. Manlove, and Iain McBride. The hospitals/residents problem with couples: Complexity and integer programming models. In International Symposium on Experimental Algorithms, pages 10–21. Springer, 2014.
* [10] Péter Biró and Eric McDermid. Three-sided stable matchings with cyclic preferences. Algorithmica, 58(1):5–18, 2010.
* [11] Péter Biró, Joris van de Klundert, David F. Manlove, and et al. Modelling and optimisation in European Kidney Exchange Programmes. European Journal of Operational Research, 2019.
* [12] Margarida Carvalho, Xenia Klimentova, Kristiaan Glorie, Ana Viana, and Miguel Constantino. Robust models for kidney exchange programs. Accepted for publication in INFORMS Journal on Computing, 2020.
* [13] Margarida Carvalho, Andrea Lodi, João P. Pedroso, and Ana Viana. Nash equilibria in the two-player kidney exchange game. Mathematical Programming, 161(1-2):389–417, 2017.
* [14] Katarína Cechlárová and Tamás Fleiner. Housing markets through graphs. Algorithmica, 58(1):19–33, 2010.
* [15] Miguel Constantino, Xenia Klimentova, Ana Viana, and Abdur Rais. New insights on integer-programming models for the kidney exchange problem. European Journal of Operational Research, 231(1):57–68, 2013.
* [16] David F. Manlove. Algorithmics of matching under preferences, volume 2. World Scientific, 2013.
* [17] Maxence Delorme, Sergio García, Jacek Gondzio, Jörg Kalcsics, David F. Manlove, and William Pettersson. Mathematical models for stable matching problems with ties and incomplete lists. European Journal of Operational Research, 277(2):426–441, 2019.
* [18] John P. Dickerson, David F. Manlove, Benjamin Plaut, Tuomas Sandholm, and James Trimble. Position-indexed formulations for kidney exchange. CoRR, abs/1606.01623, 2016.
* [19] John P. Dickerson, Ariel D. Procaccia, and Tuomas Sandholm. Failure-aware kidney exchange (extended version of the EC-13 paper). 2014\.
* [20] David Gale and Lloyd S. Shapley. College admissions and the stability of marriage. American Mathematical Monthly, 69(1):9–15, 1962.
* [21] LLC Gurobi Optimization. Gurobi optimizer reference manual, 2018.
* [22] John William Hatfield, Fuhito Kojima, and Yusuke Narita. Improving schools through school choice: A market design approach. Journal of Economic Theory, 166:186–211, 2016.
* [23] Chien-Chung Huang. Circular stable matching and 3-way kidney transplant. Algorithmica, 58(1):137–150, 2010.
* [24] Flip Klijn. The top trading cycles algorithm respects improvement of student quality in school choice. Mimeo, 2019.
* [25] Xenia Klimentova, Péter Biró, Virginia Costa, João P. Pedroso, and Ana Viana. Novel IP formulations for the stable kidney exchange problem. Working paper available at https://arxiv.org/abs/2012.04918, 2020.
* [26] Xenia Klimentova, João P. Pedroso, and Ana Viana. Maximising expectation of the number of transplants in kidney exchange programmes. Computers & OR, 73:1–11, 2016.
* [27] Xenia Klimentova, Ana Viana, João P. Pedroso, and Nicolau Santos. Fairness models for multi-agent kidney exchange programs. Accepted for publication in Omega, 2019.
* [28] Scott D. Kominers. Respect for improvements and comparative statics in matching markets. Working paper, 2019.
* [29] Harold W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2):83–97, 1955.
* [30] Augustine Kwanashie and David F. Manlove. An integer programming approach to the hospitals/residents problem with ties. In Operations Research Proceedings 2013, pages 263–269. Springer, 2014.
* [31] Vicky Mak-Hau. On the kidney exchange problem: cardinality constrained cycle and chain problems on directed graphs: a survey of integer programming approaches. Journal of Combinatorial Optimization, 33(1):35–59, 2017.
* [32] Duncan C. McElfresh, Hoda Bidkhori, and John P. Dickerson. Scalable robust kidney exchange. CoRR, Presented at AAAI 19, abs/1811.03532, 2018.
* [33] Radu-Stefan Mincu, Péter Biró, Márton Gyetvai, Alexandru Popa, and Utkarsh Verma. Ip solutions for international kidney exchange programmes. Accepted for publication in Central European Journal of Operations Research, 2020.
* [34] Cheng Ng and Daniel S. Hirschberg. Three-dimensional stable matching problems. SIAM Journal on Discrete Mathematics, 4(2):245–252, 1991.
* [35] Thomas Quint and Jun Wako. On houseswapping, the strict core, segmentation, and linear programming. Mathematics of Operations Research, 29(4):861–877, 2004.
* [36] Alvin E. Roth and Andrew Postlewaite. Weak versus strong domination in a market with indivisible goods. Journal of Mathematical Economics, 4(2):131–137, 1977.
* [37] Alvin E. Roth, Tayfun Sönmez, and M Utku Ünver. Kidney exchange. Quarterly Journal of Economics, 119(2):457–488, 2004.
* [38] Alvin E. Roth, Tayfun Sönmez, and M Utku Ünver. Pairwise kidney exchange. Journal of Economic Theory, 125(2):151–188, 2005.
* [39] Susan L. Saidman, Alvin E. Roth, Tayfun Sönmez, M. Utku Ünver, and Francis L. Delmonico. Increasing the opportunity of live kidney donation by matching for two and three way exchanges. Transplantation, pages 773–782, 2006.
* [40] Nicolau Santos, Paolo Tubertini, Ana Viana, and João P. Pedroso. Kidney exchange simulation and optimization. Journal of the Operational Research Society, 68(12):1521–1532, 2017.
* [41] Lloyd S. Shapley and Herbert Scarf. On cores and indivisibility. Journal of Mathematical Economics, 1(1):23–37, 1974.
* [42] Marilda Sotomayor. An elementary non-constructive proof of the non-emptiness of the core of the housing market of Shapley and Scarf. Mathematical Social Sciences, 50(3):298–303, 2005.
* [43] Jun Wako. A note on the strong core of a market with indivisible goods. Journal of Mathematical Economics, 13(2):189–194, 1984.
* [44] Jun Wako. Some properties of weak domination in an exchange market with indivisible goods. Economics Studies Quarterly, 42(4):303–314, 1991.
* [45] Jun Wako. Coalition-proofness of the competitive allocations in an indivisible goods market. Fields Institute Communications, 23:277–283, 1999.
# Can We Automate Scientific Reviewing?
Weizhe Yuan
Carnegie Mellon University
<EMAIL_ADDRESS>
Pengfei Liu
Carnegie Mellon University
<EMAIL_ADDRESS>
Graham Neubig
Carnegie Mellon University
<EMAIL_ADDRESS>
Corresponding author.
###### Abstract
The rapid development of science and technology has been accompanied by an
exponential growth in peer-reviewed scientific publications. At the same time,
the review of each paper is a laborious process that must be carried out by
subject matter experts. Thus, providing high-quality reviews of this growing
number of papers is a significant challenge. In this work, we ask the question
“can we automate scientific reviewing?”, discussing the possibility of using
state-of-the-art natural language processing (NLP) models to generate first-
pass peer reviews for scientific papers. Arguably the most difficult part of
this is defining what a “good” review is in the first place, so we first
discuss possible evaluation measures for such reviews. We then collect a
dataset of papers in the machine learning domain, annotate them with different
aspects of content covered in each review, and train targeted summarization
models that take in papers to generate reviews. Comprehensive experimental
results show that system-generated reviews tend to touch upon more aspects of
the paper than human-written reviews, but the generated text can suffer from
lower constructiveness for all aspects except the explanation of the core
ideas of the papers, which are largely factually correct. We finally summarize
_eight_ challenges in the pursuit of a good review generation system together
with potential solutions, which, hopefully, will inspire more future research
on this subject. We make all code and the dataset publicly available at
https://github.com/neulab/ReviewAdvisor, together with a ReviewAdvisor system at
http://review.nlpedia.ai/ (see demo screenshot in LABEL:sec:screenshot). The
review of this paper (without the TL;QR section), written by the system of this
paper, can be found in LABEL:sec:self-review.
TL;QR
> This paper proposes to use NLP models to generate reviews for scientific
> papers . The model is trained on the ASAP-Review dataset and evaluated on a
> set of metrics to evaluate the quality of the generated reviews . It is
> found that the model is not very good at summarizing the paper , but it is
> able to generate more detailed reviews that cover more aspects of the paper
> than those created by humans . The paper also finds that both human and
> automatic reviewers exhibit varying degrees of bias and biases , and that
> the system generate more biased reviews than human reviewers.(“Too Long;
> Quick Read”, this paragraph, is generated by our system.)
## 1 Introduction
The number of published papers is growing exponentially tabah1999literature;
de2009bibliometrics; bornmann2015growth. While this may be positively viewed
as indicating acceleration of scientific progress, it also poses great
challenges for researchers, both in reading and synthesizing the relevant
literature for one’s own benefit, and for performing _peer review_ of papers
to vet their correctness and merit. With respect to the former, a large body
of existing work explores automatic summarization of a paper or a set of
papers for automatic survey generation mohammad-etal-2009-using; jha-
etal-2013-system; jha-etal-2015-content; DBLP:conf/aaai/JhaCR15;
DBLP:conf/aaai/YasunagaKZFLFR19; cohan-etal-2018-discourse; xing-
etal-2020-automatic. However, despite the fact that peer review is an
important, but laborious part of our scientific process, automatic systems to
aid in the peer review process remain relatively underexplored.
bartoli2016your investigated the feasibility of generating reviews by surface-
level term replacement and sentence reordering, and wang2020reviewrobot
(contemporaneously and independently) proposed a two-stage information
extraction and summarization pipeline to generate paper reviews. However,
neither work extensively evaluates the quality or features of the generated
review text.
In this work, we are concerned with providing at least a preliminary answer to
the ambitious over-arching question: can we automate scientific reviewing?
Given the complexity of understanding and assessing the merit of scientific
contributions, we do not expect an automated system to be able to match a
well-qualified and meticulous human reviewer at this task any time soon.
However, some degree of review automation may assist reviewers in their
assessments, or provide guidance to junior reviewers who are just learning the
ropes of the reviewing process. Towards this goal, we examine two concrete
research questions, the answers to which are prerequisites to building a
functioning review assistant:
Q1: What are the desiderata of a good automatic reviewing system, and how can
we quantify them for evaluation? Before developing an automatic review system,
we first must quantify what constitutes a good review in the first place. The
challenge of answering this question is that a review commonly involves both
objective (e.g. “lack of details necessary to replicate the experimental
protocol”) and subjective aspects (e.g. “lack of potential impact”). Due to
this subjectivity, defining a “good” review is itself somewhat subjective.
As a step towards tackling this challenge, we argue that it is possible to
view review generation as a task of _aspect-based scientific paper
summarization_ , where the summary not only tries to summarize the core idea
of a paper, but also assesses specific aspects of that paper (e.g. novelty or
potential impact). We evaluate review quality from multiple perspectives,
claiming that a good review should not only summarize a paper well but also
consist of factually correct and fair comments on diverse aspects, together
with informative evidence.
To operationalize these concepts, we build a dataset of reviews from the
machine learning domain, named ASAP-Review,111ASpect-enhAnced Peer Review
dataset and make fine-grained annotations of aspect information for each
review, which enables a richer evaluation of generated reviews.
Q2: Using state-of-the-art NLP models, to what extent can we realize these
desiderata? We provide an initial answer to this question by using the
aforementioned dataset to train state-of-the-art summarization models to
generate reviews from scientific papers, and evaluate the output according to
our evaluation metrics described above. We propose different architectural
designs for this model, which we dub ReviewAdvisor (§LABEL:sec:model), and
comprehensively evaluate them, interpreting their relative advantages.
Lastly, we highlight our main observations and conclusions:
(1) What are review generation systems (not) good at? Most importantly, we
find the constructed automatic review system _generates non-factual
statements_ regarding many aspects of the paper assessment, which is a serious
flaw in a high-stakes setting such as reviewing. However, there are some
bright points as well. For example, it _can often precisely summarize the core
idea_ of the input paper, which can be either used as a draft for human
reviewers or help them (or general readers) quickly understand the main idea
of the paper to be reviewed (or pre-print papers). It can also generate
reviews that _cover more aspects_ of the paper’s quality than those created by
humans, and provide evidence sentences from the paper. These could potentially
provide a preliminary template for reviewers and help them quickly identify
salient information in making their assessment.
(2) Will the system generate biased reviews? Yes. We present methods to
identify and quantify potential biases in reviews (§LABEL:sec:bias-analysis),
and find that both human and automatic reviewers exhibit varying degrees of
bias. (i) regarding native vs. non-native English speakers: papers by native
English speakers tend to obtain higher scores on “Clarity” from human
reviewers than those by non-native speakers,222Whether this actually qualifies
as “bias” is perhaps arguable: papers written by native English speakers may
genuinely be clearer due to fewer confusing grammatical errors, or a paper may
be perfectly clear yet give the impression of being unclear because of
grammatical errors. but the automatic review generators narrow this gap.
Additionally, system reviewers are harsher than human reviewers when
commenting regarding the paper’s “Originality” for non-native English
speakers. (ii) regarding anonymous vs. non-anonymous submissions: both human
reviewers and system reviewers favor non-anonymous papers, which have been
posted on non-blind preprint servers such as arXiv333https://arxiv.org/ before
the review period, more than anonymous papers in all aspects.
Based on the above-mentioned issues, we claim that a review generation system
cannot replace human reviewers at this time; instead, it may be helpful as
part of a machine-assisted human review process. Our research also sheds light
on what’s next in pursuing better methods for automatic review generation or
assistance, and we summarize _eight challenges_ that can be explored as future
directions in §LABEL:sec:challenges.
## 2 What Makes a Good Peer Review?
Although peer review has been adopted by most journals and conferences to
identify important and relevant research, its effectiveness is being
continuously questioned (Smith2006PeerRA; langford2015arbitrariness;
tomkins2017reviewer; gao-etal-2019-rebuttal; rogers-augenstein-2020-improve).
As concluded by jefferson2002measuring: “Until we have properly defined the
objectives of peer-review, it will remain almost impossible to assess or
improve its effectiveness.” Therefore we first discuss the possible objectives
of peer review.
### 2.1 Peer Review for Scientific Research
A research paper is commonly first reviewed by several committee members who
usually assign one or several scores and give detailed comments. The comments,
and sometimes scores, cover diverse aspects of the paper (e.g. “clarity,”
“potential impact”; detailed in §3.2.1), and these aspects are often directly
mentioned in review forms of scientific conferences or journals.444For
example, one example from ACL can be found at:
https://acl2018.org/downloads/acl_2018_review_form.html
Then a senior reviewer will often make a final decision (i.e., “reject” or
“accept”) and provide comments summarizing the decision (i.e., a meta-review).
After going through many review
guidelines555https://icml.cc/Conferences/2020/ReviewerGuidelines
https://NeurIPS.cc/Conferences/2020/PaperInformation/ReviewerGuidelines,
https://iclr.cc/Conferences/2021/ReviewerGuide and resources about how to
write a good
review666https://players.brightcove.net/3806881048001/rFXiCa5uY_default/index.html?videoId=4518165477001,
https://soundcloud.com/nlp-highlights/77-on-writing-quality-peer-reviews-with-
noah-a-smith, https://www.aclweb.org/anthology/2020.acl-tutorials.4.pdf,
https://2020.emnlp.org/blog/2020-05-17-write-good-reviews we summarize _some_
of the most frequently mentioned desiderata below:
1. 1.
Decisiveness: A good review should take a clear stance, selecting high-quality
submissions for publication and suggesting others not be accepted
(jefferson2002effects; Smith2006PeerRA).
2. 2.
Comprehensiveness: A good review should be well-organized, typically starting
with a brief summary of the paper’s contributions, then following with
opinions gauging the quality of a paper from different aspects. Many review
forms explicitly require evaluation of different aspects to encourage
comprehensiveness.
3. 3.
Justification: A good review should provide specific reasons for its
assessment, particularly whenever it states that the paper is lacking in some
aspect. This justification also makes the review more constructive (another
oft-cited desiderata of reviews), as these justifications provide hints about
how the authors could improve problematic aspects in the paper (xiong-
litman-2011-automatically).
4. 4.
Accuracy: A review should be factually correct, with the statements contained
therein not being demonstrably false.
5. 5.
Kindness: A good review should be kind and polite in language use.
Based on the above desiderata, we take a first step towards the evaluation of
reviews for scientific papers and characterize a “good” review from multiple
perspectives.
### 2.2 Multi-Perspective Evaluation
Desiderata | Metrics | Range | Automated
---|---|---|---
Decisiveness | RAcc | [-1, 1] | No
Comprehensiveness | ACov | [0, 1] | Yes
Comprehensiveness | ARec | [0, 1] | Yes
Justification | Info | [0, 1] | No
Accuracy | SAcc | [0, 1] | No
Accuracy | ACon | [0, 1] | No
Others | ROUGE | [0, 1] | Yes
Others | BERTScore | [-1, 1] | Yes
Table 1: Evaluation metrics from different perspectives. “Range” gives the
range of values of each metric; “Automated” denotes whether the metric can be
computed automatically.
Given input paper $D$ and meta-review $R^{m}$, our goal is to evaluate the
quality of review $R$, which can be either manually or automatically
generated. We also introduce a function $\textsc{Dec}(D)\in\\{1,-1\\}$ that
indicates the final decision of a given paper reached by the meta-review:
“accept” or “reject”. Further, $\textsc{Rec}(R)\in\\{1,0,-1\\}$ represents the
acceptance recommendation of a particular review: “accept,” “neutral,” or
“reject” (see Appendix LABEL:app:detail-metric for details).
Below, we discuss evaluation metrics that can be used to approximate the
desiderata of reviews described in the previous section; they are summarized
in Tab. 1.
#### 2.2.1 D1: Decisiveness
First, we tackle the _decisiveness_ , as well as accuracy of the review’s
recommendation, through Recommendation Accuracy (RAcc). Here we use the final
decision regarding a paper and measure whether the acceptance implied by the
review $R$ is consistent with the actual accept/reject decision of the
reviewed paper. It is calculated as:
$\text{RAcc}(R)=\textsc{Dec}(D)\times\textsc{Rec}(R)$ (1)
A higher score indicates that the review more decisively and accurately makes
an acceptance recommendation.
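As a minimal sketch of Eq. 1 under the definitions of Dec and Rec above (the string-to-value mapping is our own illustrative encoding, not the paper's implementation):

```python
def racc(decision: str, recommendation: str) -> int:
    """Recommendation Accuracy (Eq. 1): RAcc(R) = Dec(D) * Rec(R).

    decision: the paper's final decision, mapped to Dec(D) in {1, -1}.
    recommendation: the review's stance, mapped to Rec(R) in {1, 0, -1}.
    """
    dec = {"accept": 1, "reject": -1}[decision]
    rec = {"accept": 1, "neutral": 0, "reject": -1}[recommendation]
    return dec * rec
```

A review recommending acceptance of an ultimately rejected paper thus scores -1, while a neutral review scores 0 regardless of the decision.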
#### 2.2.2 D2: Comprehensiveness
A comprehensive review should touch on the quality of different aspects of the
paper, which we measure using a metric dubbed Aspect Coverage (ACov).
Specifically, given a review $R$, aspect coverage measures how many aspects
(e.g. clarity) in a predefined aspect typology (in our case, §3.2.1) have been
covered by $R$.
In addition, we propose another metric Aspect Recall (ARec), which explicitly
takes the meta-review $R^{m}$ into account. Because the meta-review is an
authoritative summary of all the reviews for a paper, it provides an
approximation of which aspects, and with which sentiment polarity, should be
covered in a review. Aspect recall counts how many aspects in meta-review
$R^{m}$ are covered by general review $R$, with higher aspect recall
indicating better agreement with the meta-review.777 Notably, this metric
potentially biases towards high scores for reviews that were considered in the
writing of the meta-review. Therefore, higher aspect recall is not the only
goal, and should be taken together with other evaluation metrics.
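One way to operationalize ACov and ARec is sketched below; the aspect names and the edge case for a meta-review that mentions no typology aspects are our assumptions, not specifications from the paper:

```python
# Hypothetical aspect names standing in for the typology of Section 3.2.1.
ASPECTS = {"motivation", "originality", "soundness", "substance",
           "replicability", "comparison", "clarity"}

def aspect_coverage(review_aspects):
    """ACov: fraction of the predefined aspect typology covered by a review."""
    return len(set(review_aspects) & ASPECTS) / len(ASPECTS)

def aspect_recall(review_aspects, meta_aspects):
    """ARec: fraction of the meta-review's aspects covered by the review.

    Returning 1.0 when the meta-review mentions no aspects is our assumption.
    """
    meta = set(meta_aspects) & ASPECTS
    if not meta:
        return 1.0
    return len(set(review_aspects) & meta) / len(meta)
```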
#### 2.2.3 D3: Justification
As defined in §2.1, a good peer review should provide hints about how the
author could improve problematic aspects. For example, when reviewers comment:
“this paper lacks important references”, they should also list these relevant
works. To satisfy this justification desideratum, we define a metric called
Informativeness (Info) to quantify how many negative comments888We only
consider whether the reviewer has provided enough evidence for negative
opinions since we find that most human reviewers rarely provide evidence for
their positive comments. are accompanied by corresponding evidence.
First, let $n_{\text{na}}(R)$ denote the number of aspects in $R$ with
negative sentiment polarity. $n_{\text{nae}}(R)$ denotes the number of aspects
with negative sentiment polarity that are supported by evidence. The judgement
of supporting evidence is conducted manually (details in Appendix
LABEL:app:detail-metric). Info is calculated as:
$\text{Info}(R)=\frac{n_{\text{nae}}(R)}{n_{\text{na}}(R)}$ (2)
We set Info to 1 when no negative aspects are mentioned in a review.
#### 2.2.4 D4: Accuracy
We use two measures to evaluate the accuracy of assessments. First, we use
Summary Accuracy (SAcc) to measure how well a review summarizes contributions
of a paper. It takes value of 0, 0.5, or 1, which evaluates the summary part
of the review as incorrect/absent, partially correct, and correct. The
correctness judgement is performed manually, with details listed in Appendix
LABEL:app:detail-metric.
Info implicitly requires that negative aspects should be supported with
evidence, ignoring the quality of this evidence. However, to truly help to
improve the quality of a paper, the evidence for negative aspects should be
factual as well. Here we propose Aspect-level Constructiveness (ACon), the
percentage of the supporting statements $n_{\text{nae}}(R)$ that are judged as
valid support by human annotators. If $n_{\text{nae}}(R)$ is 0, we set its
ACon as 1. This metric will implicitly favor reviews that do not provide
enough evidence for negative aspects. However, in this case, the Info of those
reviews will be rather low. The details of evaluating “validity” are also
described in Appendix LABEL:app:detail-metric.
#### 2.2.5 D5: Kindness
While kindness is very important in maintaining a positive research community,
accurately measuring it computationally in a nuanced setting such as peer
review is non-trivial. Thus, we leave the capturing of kindness in evaluation
to future work.
#### 2.2.6 Similarity to Human Reviews
For automatically generated reviews, we also use Semantic Equivalence metrics
to measure the similarity between generated reviews and reference reviews. The
intuition is that while human reviewers are certainly not perfect, knowing how
close our generated reviews are to existing human experts may be informative.
Here, we investigate two specific metrics: ROUGE lin2003automatic and
BERTScore zhang2019bertscore. The former measures the surface-level word match
while the latter measures the distance in embedding space. Notably, for each
source input, there are multiple reference reviews. When aggregating ROUGE and
BERTScore, we take the maximum instead of average since it is not necessary
for generated reviews to be close to all references.
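The max-over-references aggregation can be sketched as follows; `unigram_f1` is a toy surface-overlap scorer standing in for an actual ROUGE or BERTScore implementation:

```python
def unigram_f1(candidate, reference):
    """Toy unigram-overlap F1, standing in for ROUGE-1 (illustrative only)."""
    ca, ref = set(candidate.lower().split()), set(reference.lower().split())
    overlap = len(ca & ref)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(ca), overlap / len(ref)
    return 2 * p * r / (p + r)

def aggregate_over_references(score_fn, generated, references):
    """Take the maximum over reference reviews: a generated review need not
    be close to all human reviews of a paper, only to one of them."""
    return max(score_fn(generated, ref) for ref in references)
```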
## 3 Dataset
Next, we introduce how we construct a review dataset with more fine-grained
metadata, which can be used for system training and for the multi-perspective
evaluation of reviews.
### 3.1 Data Collection
Figure 1: Data annotation pipeline.
The advent of the Open Peer Review system999https://openreview.net/ makes it
possible to access review data for analysis or model training/testing. One
previous work kang18naacl attempts to collect reviews from several prestigious
publication venues including the Conference of the Association of
Computational Linguistics (ACL) and the International Conference on Learning
Representations (ICLR). However, there were not nearly as many reviews
accumulated in OpenReview at that time101010At that time, there were no
reviews of ICLR from 2018 to 2020, nor reviews of NeurIPS from 2018 to 2019.
and other private reviews only accounted for a few hundred. Therefore we
decided to collect our own dataset Aspect-enhanced Peer Review (ASAP-Review).
We crawled ICLR papers from 2017-2020 through
OpenReview111111https://openreview.net and NeurIPS papers from 2016-2019
through NeurIPS Proceedings.121212http://papers.NeurIPS.cc For each paper’s
review, we keep as much metadata information as possible. Specifically, for
each paper, we include following metadata information that we can obtain from
the review web page:
* •
Reference reviews, which are written by a committee member.
* •
Meta reviews, which are commonly written by an area chair (senior committee
member).
* •
Decision, which denotes a paper’s final “accept” or “reject” decision.
* •
Other information like url, title, author, etc.
We used Allenai Science-parse131313https://github.com/allenai/science-parse to
parse the pdf of each paper and keep the structured textual information (e.g.,
titles, authors, section content, and references). The basic statistics of our
ASAP-Review dataset are shown in Tab. 2.
| ICLR | NeurIPS | Both
---|---|---
Accept | 1,859 | 3,685 | 5,544
Reject | 3,333 | 0 | 3,333
Total | 5,192 | 3,685 | 8,877
Avg. Full Text Length | 7,398 | 5,916 | 6,782
Avg. Review Length | 445 | 411 | 430
# of Reviews | 15,728 | 12,391 | 28,119
# of Reviews per Paper | 3.03 | 3.36 | 3.17
Table 2: Basic statistics of the ASAP-Review dataset. Note that NeurIPS only
provides reviews for accepted papers to the public.
### 3.2 Aspect-enhanced Review Dataset
Reviews exhibit internal structure: for example, as shown in Fig.
LABEL:fig:multi-view, they commonly start with a paper summary, followed by
opinions on different aspects, together with evidence. In practice, this
useful structural information cannot be obtained directly. Considering that
fine-grained information about the various aspects touched on by a review
plays an essential role in review evaluation, we conduct aspect annotation of
these reviews. To this end, we first (i) introduce an aspect typology and then
(ii) perform human annotation.
#### 3.2.1 Aspect Typology and Polarity
We define a typology that contains 8 aspects, following the ACL review
guidelines141414https://acl2018.org/downloads/acl_2018_review_form.html (we
manually inspected several review guidelines from ML conferences and found the
typology in the ACL review guideline both general and comprehensive) with
small modifications: Summary (SUM), Motivation/Impact (MOT), Originality
(ORI), Soundness/Correctness (SOU), Substance (SUB), Replicability (REP),
Meaningful Comparison (CMP), and Clarity (CLA). Detailed elaborations of each
aspect can be found in Supplemental Material B.1; inside the parentheses are
the abbreviations we use for brevity. To take into account whether the
comments regarding each aspect are positive or negative, we also mark the
sentiment polarity of every aspect (except summary).
#### 3.2.2 Aspect Annotation
Overall, the data annotation involves four steps that are shown in Fig. 1.
##### Step 1: Manual Annotation
To manually annotate aspects in reviews, we first set up a data annotation
platform using Doccano.151515https://github.com/doccano/doccano We asked 6
students from ML/NLP backgrounds to annotate the dataset. We asked them to tag
an appropriate text span that indicates a specific aspect. For example, “The
results are new${}_{\text{[Positive Originality]}}$ and important to this
field${}_{\text{[Positive Motivation]}}$”. The detailed annotation guideline
can be found in Supplemental Material B.1. Each review is annotated by two
annotators, and the lowest pair-wise Cohen’s kappa is 0.653, indicating
substantial agreement. In the end, we obtained 1,000 human-annotated reviews
in total. The aspect statistics in this dataset are shown in Fig. 2-(a).
Figure 2: (a) and (b) represent distributions over seven aspects obtained by
human and BERT-based tagger respectively. Red bins represent positive
sentiment while green ones suggest negative sentiment. We omit “Sum” aspect
since there is no polarity definition of it.
##### Step 2: Training an Aspect Tagger
Since there are over 20,000 reviews in our dataset, using human labor to
annotate them all is unrealistic. Therefore, we use the annotated data we do
have to train an aspect tagger and use it to annotate the remaining reviews.
The basic architecture of our aspect tagger contains a pre-trained model BERT
devlin2019bert and a multi-layer perceptron. The training details can be found
in Appendix LABEL:app:train-tagger.
##### Step 3: Post-processing
However, after inspecting the automatically labeled dataset, we found some
common problems such as interleaving of different aspects and inappropriate
boundaries. To address these problems, we applied seven heuristic rules,
executed sequentially, to refine the prediction results. The detailed
heuristics can be found in Appendix
LABEL:sec:heuristics. An example of our model prediction after applying
heuristic rules is shown in Appendix LABEL:sec:example-annotate. Fig. 2-(b)
shows the distribution of all reviews over different aspects. As can be seen,
the relative number of different aspects and the ratio of positive to negative
are very similar across human and automatic annotation.
##### Step 4: Human Evaluation
To evaluate the data quality of reviews’ aspects, we conduct human evaluation.
Specifically, we measure both aspect precision and aspect recall for our
defined 15 aspects.
We randomly chose 300 samples from our automatically annotated dataset and
assigned each sample to three different annotators to judge the annotation
quality. As before, these annotators are all from ML/NLP backgrounds.
The detailed calculation for aspect precision and aspect recall can be found
in Appendix LABEL:app:asp-pre-and-asp-recall. Under these criteria, we
achieved $92.75\%$ aspect precision and $85.19\%$ aspect recall. The
fine-grained aspect precision and recall for each aspect are shown in Tab.
LABEL:tab:asp-cov-rec. The aspect recall for positive replicability is low
because there are very few mentions of positive replicability; in our human
evaluation, the system identified one out of two such mentions, which results
in 50%. For all other aspects, precision and recall are much
higher.161616 The recall numbers for negative aspects are lower than
positive aspects. However, we argue that this will not affect the fidelity of
our analysis much because (i) we observe that the imperfect recall is mostly
(over $85\%$) caused by partial recognition of the same negative aspect in a
review instead of inability to recognize at least one. This will not affect
our calculation of Aspect Coverage and Aspect Recall very much. (ii) The
imperfect recall will slightly pull up Aspect Score (will discuss in
§LABEL:sec:measure_bias), but the trend will remain the same.
Note also that our evaluation criterion is very strict, so these numbers act
as lower bounds for the two metrics.
However, we found that none of these adjustments could generate satisfying
fluent and coherent texts according to our experiments. Common problems
include interchanges between first and third person narration (They… Our
model…), contradiction between consecutive sentences, more descriptive texts
and fewer opinions, etc.
### A.9 CE Extraction Details
The basic sentence statistics of our ASAP-Review dataset are listed in Tab. 12.
| ICLR | NeurIPS | Both
---|---|---|---
Avg. Sentence Num. | 216 | 198 | 209
Table 12: Sentence statistics of the ASAP-Review dataset. “Avg. Sentence
Num.” denotes the average number of sentences in a paper.
We use two steps to extract salient sentences from a source document: (i)
keywords filtering, and (ii) the cross-entropy method.
#### A.9.1 Keywords Filtering
We have predefined 48 keywords and in the first stage, we select sentences
containing those keywords as well as their inflections. The 48 keywords are
shown in Tab. LABEL:table:_keywords. After applying keywords filtering, the
statistics of selected sentences are shown in Tab. 13.
| ICLR | NeurIPS | Both
---|---|---|---
Avg. Sentence Num. | 97 | 85 | 92
Table 13: Sentence statistics of selected sentences after keywords filtering.
“Avg. Sentence Num.” denotes average selected sentence number in a paper.
#### A.9.2 Cross Entropy Method
Following the approach of 10.1145/3077136.3080690 to unsupervised
summarization, we formalize sentence extraction as a combinatorial
optimization problem. Specifically, we define the performance function $R$ as
below.
$R(S)=-\sum_{w\in S}p_{S}(w)\log p_{S}(w)$ (8)
$p_{S}(w)=\frac{\text{Count}(w)}{\text{Len}(S)}$ (9)
where $S$ represents the concatenation of the selected sentences,
$\text{Len}(S)$ is the number of words in $S$, and $\text{Count}(w)$ is the
number of times $w$ appears in $S$. The intuition behind this performance
function is that we want to select sentences that cover more diverse words.
Note that before calculating $R(S)$ we apply preprocessing steps (lowercasing,
removing punctuation, removing stop words, etc.).
For each paper containing $n$ sentences, we aim to find a binary vector
$p=(p_{1},\cdots,p_{n})$, in which $p_{i}$ indicates whether the $i$-th
sentence is selected, such that the combination of selected sentences achieves
the highest performance score and contains fewer than 30
sentences.282828This number is chosen according to our empirical observations;
we need to extract sentences that fit BART’s input length (1024). We do this
using the Cross-Entropy Method rubinstein2013cross. The algorithm is shown
below.
1. 1.
For each paper containing $n$ sentences, we first assume that each sentence is
equally likely to be selected. We start with $p_{0}=(1/2,1/2,...,1/2)$. Let
$t:=1$.
2. 2.
Draw a sample $X_{1},\cdots,X_{N}$ of Bernoulli vectors with success
probability vector $p_{t-1}$. For each vector, concatenate the sentences
selected and get $N$ sequences $S_{1},\cdots,S_{N}$. Calculate the performance
scores $R(S_{i})$ for all $i$, and order them from smallest to biggest,
$R_{(1)}\leq R_{(2)}\leq\cdots\leq R_{(N)}$. Let $\gamma_{t}$ be $(1-\rho)$
sample quantile of the performances: $\gamma_{t}=R_{(\lceil(1-\rho)N\rceil)}$.
3. 3.
Use the same sample to calculate
$\hat{p_{t}}=(\hat{p}_{t,1},\cdots,\hat{p}_{t,n})$ via
$\hat{p}_{t,j}=\frac{\sum_{i=1}^{N}I_{\\{R(S_{i})\geq\gamma_{t}\\}}I_{\\{X_{ij}=1\\}}}{\sum_{i=1}^{N}I_{\\{R(S_{i})\geq\gamma_{t}\\}}}$
(10)
where $I_{\\{c\\}}$ takes the value 1 if $c$ is satisfied, otherwise 0.
4. 4.
Perform a smoothed update.
$p_{t}=\alpha\hat{p_{t}}+(1-\alpha)p_{t-1}$ (11)
5. 5.
If the value of $\gamma_{t}$ hasn’t changed for 3 iterations, then stop.
Otherwise, set $t:=t+1$ and return to step 2.
The elements in $p_{t}$ will converge to either very close to 0 or very close
to 1. And we can sample from the converged $p_{t}$ to get our extraction.
We chose $N=1000$, $\rho=0.05$ and $\alpha=0.7$ when we ran this algorithm. If
we happen to select more than 30 sentences in a sample, we drop this sample.
Note that we slightly decrease the initial probability when there are more
than 90 sentences after filtering to ensure enough sample number in the first
few iterations.
### A.10 Detailed Analysis and Case Study
We take our aspect-enhanced model with CE extraction to conduct a case study.
Tab. 16 lists five examples for each aspect the model mentions. It can be seen
that the language use of the generated reviews is quite close to that of real
reviewers.
##### Evidence-sensitive
For the aspect-enhanced model, it is also interesting to trace back to the
evidence when the model generates a specific aspect. To do so, we inspect
where the model attends when it generates a specific aspect by looking at the
attention values with respect to the source input.292929We aggregate attention
values by taking the maximum, whether aggregating tokens into a word,
aggregating across attention heads, or aggregating words into an aspect.
Interestingly, we found that the model attends to reasonable places when it
generates a specific aspect. Fig. 9 presents attention heatmaps for several
text segments; the bottom of the figure shows the aspects the model generates.
We found some common patterns when examining the attention values between the
source input and the output.
1. 1.
When the model generates summary, it will attend to sentences that contain
strong indicators like “we propose” or “we introduce”.
2. 2.
When it generates originality, it will attend to previous work part as well as
places describing contributions of this work.
3. 3.
When it generates substance, it will attend to experiment settings and the
number of experiments conducted.
4. 4.
When it generates meaningful comparison, it will attend to places containing
“et al.”
Figure 9: Attention heatmap between source document and generated reviews. $+$
denotes positive sentiment and $-$ denotes negative sentiment.
### A.11 Calculation of Aspect Score
For accepted (rejected) papers, we calculate the average aspect score for each
aspect.
The aspect score of a review is calculated as follows.
* •
If an aspect does not appear in a review, then we count the score for this
aspect as 0.5 (which stands for neutral)
* •
If an aspect appears in a review, we denote its occurrences as
$\mathcal{O}=\\{o_{1},o_{2},\cdots,o_{n}\\}$ where $n$ is the total number of
occurrences. And we denote the positive occurrences of this aspect as
$\mathcal{O}_{p}=\\{o_{p_{1}},o_{p_{2}},\cdots,o_{p_{n}}\\}$ where $p_{n}$ is
the total number of positive occurrences. The aspect score is calculated using
Formula 12.
$\text{Aspect Score}=\frac{p_{n}}{n}$ (12)
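A minimal sketch of this computation (function name ours; the input is the list of sentiment labels for one aspect's occurrences in one review):

```python
def aspect_score(occurrences):
    """Formula 12: fraction of positive occurrences of an aspect.
    An aspect absent from the review scores 0.5 (neutral)."""
    if not occurrences:
        return 0.5
    positive = sum(1 for o in occurrences if o == "+")
    return positive / len(occurrences)
```

Averaging these per-review scores over all accepted (or rejected) papers gives the per-aspect averages reported above.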
### A.12 Bias Analysis for All Models
Here, following the methods we proposed in §LABEL:sec:measure_bias, we list
the bias analysis for all models in Fig. 10, Fig. 11, Tab. 14, Tab. 15.
Figure 10: Spider chart of aspect scores for all models with regard to nativeness.
| MOT | ORI | SOU | SUB | REP | CLA | CMP | Total
---|---|---|---|---|---|---|---|---
INTRO | -0.72 | +18.71 | +3.84 | -3.66 | +0.73 | -13.32 | +2.40 | 43.39
INTRO+ASPECT | +3.12 | +15.75 | +6.14 | +0.66 | -10.61 | -13.50 | +19.05 | 68.84
CE | +2.56 | +18.33 | +11.16 | -13.41 | -3.71 | -9.94 | +13.49 | 72.58
CE+ASPECT | +1.13 | +24.77 | +28.78 | -2.92 | -3.18 | -12.02 | +18.36 | 91.18
ABSTRACT+CE | +1.77 | +23.01 | +3.79 | +0.44 | +0.37 | -15.18 | -2.13 | 46.69
ABSTRACT+CE+ASPECT | +1.72 | +22.23 | +12.94 | -8.30 | -0.38 | -13.40 | +0.89 | 59.86
Table 14: Disparity differences regarding nativeness. Total is the sum of the absolute values of the disparity differences.
Figure 11: Spider chart of aspect scores for all models with regard to anonymity.
| MOT | ORI | SOU | SUB | REP | CLA | CMP | Total
---|---|---|---|---|---|---|---|---
INTRO | -5.69 | -4.43 | +2.76 | -0.64 | +5.65 | +5.80 | +3.02 | 28.00
INTRO + ASPECT | -3.53 | -1.65 | +7.85 | +0.01 | +5.93 | +11.02 | +4.20 | 34.20
CE | +1.89 | -1.18 | +0.05 | -0.44 | +13.09 | +8.00 | -2.56 | 27.21
CE+ASPECT | -4.20 | -12.32 | -0.52 | -2.57 | +2.70 | +8.75 | -10.31 | 41.37
ABSTRACT+CE | +3.18 | -0.05 | -7.96 | -3.73 | +2.25 | +8.69 | -12.02 | 37.88
ABSTRACT+CE+ASPECT | +5.45 | +2.49 | +2.80 | +5.69 | +1.33 | +8.03 | -3.79 | 29.59
Table 15: Disparity differences regarding anonymity. Total is the sum of the absolute values of the disparity differences.
Motivation
1. The motivation of using the conditional prior is unclear.
2. I think this paper will be of interest to the NeurIPS community.
3. The idea of continual learning is interesting and the method is well motivated.
4. Overall, I think this paper is a good contribution to the field of adversarial robustness.
5. It is hard to understand the motivation of the paper and the motivation behind the proposed methods.
Originality
1. This paper presents a novel approach to cross-lingual language model learning.
2. The novelty of the paper is limited. The idea of using low rank matrices is not new.
3. The proposed method seems to be very similar to the method of Dong et al. (2018).
4. The idea of using neural networks to learn edit representations is interesting and novel.
5. The proposed method seems to be a simple extension of the batched-E-step method proposed by Shazeer et al.
Soundness
1. This assumption is not true in practice.
2. The experimental results are not very convincing.
3. But the authors do not provide any theoretical justification for this claim.
4. The theoretical results are sound and the experimental results are convincing.
5. The paper does not provide any insights on the reasons for the success of the supervised methods.
Substance
1. The experiments are well-conducted.
2. The ablation study in Section A.1.1 is not sufficient.
3. It would be better to show the performance on a larger dataset.
4. The authors should show the performance on more difficult problems.
5. The experiments are extensive and show the effectiveness of the proposed method.
Replicability
1. It is not clear how the network is trained.
2. The authors should provide more details about the experiments.
3. The authors should provide more details about the hyperparameters.
4. The authors should provide more details about the training procedure.
5. It would be better if the authors can provide more details about the hyperparameters of LST.
Meaningful Comparison
1. The author should compare with [1, 2, 3] and [4].
2. The authors should compare the proposed method with existing methods.
3. It would be more convincing if the authors can compare with other methods such as AdaGrad.
4. authors should compare the performance with the state-of-the-art methods in real-world applications.
5. I also think the paper should compare the performance of intrinsic fear with the other methods proposed in [1, 2, 3, 4, 5].
Clarity
1. There are some typos in the paper.
2. The paper is well-written and easy to follow.
3. It is not clear to me how to interpret the results in Table 1.
4. It would be better if the authors can provide a more detailed explanation of the difference.
5. The paper is not well organized. It is hard to follow the description of the proposed method.
Table 16: Examples for different aspect mention from generated reviews.
## Appendix B Supplemental Material
### B.1 Dataset Annotation Guideline
The annotation guideline for annotating aspects in reviews can be found at
https://github.com/neulab/ReviewAdvisor/blob/main/materials/AnnotationGuideline.pdf
|
# IMPROVED OBSERVATION OF TRANSIENT PHENOMENA WITH DOPPLER RADARS: A COMMON
FRAMEWORK FOR OCEANIC AND ATMOSPHERIC SENSING
###### Abstract
Doppler radars are routinely used for the remote sensing of oceanic surface
currents and atmospheric wind profiles. Even though they operate at different
frequencies and address different media, they follow very similar processing
for the extraction of measured velocities. In particular they both face the
challenging issue of capturing geophysical phenomena which vary rapidly with
respect to the typical integration time. Recently, the authors applied a non-
spectral formalism based on autoregressive processes to model the
backscattered time series obtained from High-Frequency oceanic radars. They
showed that it allows the calculation of Doppler spectra for very short
integration times without loss of frequency resolution or signal-to-noise ratio. We
apply this technique to synthetic and experimental data within a common
framework and show for the first time the strong potential of the method for
the study of transient atmospheric phenomena.
Index Terms— Doppler Radar, High-Frequency Radar, Wind Profiler, Bragg
Scattering, Autoregressive Modeling
## 1 Introduction
Doppler radars have been customarily used for decades for measuring wind
profiles in the air column as well as oceanic currents at the sea surface (e.g
[1, 2]). Even though the physical mechanisms driving the backscattering from
atmospheric and oceanic media are very different, there are many formal
analogies in the description of the received time signal and its conversion to
geophysical variables. In both cases the derivation of a radial velocity,
which can be further interpreted in terms of wind speed or surface current,
relies on measuring a Doppler shift with respect to some reference frequency,
namely the zero Doppler in the atmospheric case and the Bragg frequency in the
oceanic case. In either situation the accuracy of the measurement is limited
by the coherent observation time which is necessary to produce a Doppler
spectrum. As it is well known, the choice of the observation time results from
a trade-off between the required duration for sufficient Doppler frequency
resolution and Signal-to-Noise Ratio (SNR) and the maximum time scale over
which the geophysical observables can be assumed stationary. The typically
employed observation times are of the order of a few tens of seconds for
VHF/UHF radar Wind Profilers (WP) and a few tens of minutes for oceanographic
High-Frequency Radars (HFR). This is satisfactory for the vast majority of
situations where the main atmospheric and oceanic features are only slowly
varying with respect to the temporal scale of observation. However, there are
some specific instances which do not comply with this observation scheme. This
is the case whenever transient phenomena or rapidly evolving fields of
velocities are involved, such as 1) wind gusts, storm surges or tsunamis
in the oceanic context; 2) landing planes, bird swarms and wake turbulence
echoes in the atmospheric context. This calls for specific processing of the
time echo to overcome the classical time-frequency dilemma.
The authors recently applied a non-spectral, parametric approach, referred to
as the Time-Varying Autoregressive Maximum Entropy Method (TVAR-MEM) to
process rapidly changing oceanic data [3, 4]. It is based on an Auto-
Regressive (AR) representation of the received time series that allows
maintaining high Doppler resolution and elevated SNR even with short samples.
Due to the similarity of the scattering formalism for oceanic and atmospheric
sensing (Section 2), the method can also be employed for atmospheric sensing,
and we present here its first utilization in this context. We illustrate the
performance of this analysis with synthetic (Section 3) as well as original
experimental data (Section 4). We provide high-resolution Time-Frequency
imaging of the radar time series that can capture some hitherto hidden
signatures of bird and plane echoes.
## 2 Theoretical background
As it is well known, the backscattered time series $s(t)$ from an atmospheric
turbulent layer and from the sea surface share the same remarkable property,
once resolved in direction: within a single-scattering approximation they are
proportional to the spatial Fourier Transform of the perturbing quantity
$X(\mathbf{r},t)$ in the medium:
$s(t)\sim\int_{\textrm{medium}}X(\mathbf{r},t)e^{-2\textrm{i}\mathbf{K_{0}}\cdot\mathbf{r}}\,d\mathbf{r}$
(1)
In the former case, this is obtained with the Born approximation for weak
permittivity contrast (e.g. [5]), $X$ is a contrast induced by the atmospheric
particles and $\mathbf{K}_{0}$ is the (three-dimensional) incident EM wave
vector; in the latter case, this results from the perturbation theory for
shallow rough surfaces (e.g. [6]), $X$ is the contrast of elevation induced by
waves at the sea surface and $\mathbf{K}_{0}$ is the (two-dimensional)
horizontal projection of the incident EM wave vector. In both cases, the
backscattering echo is mainly caused by resonant structures having a typical
length comparable to half the radar wavelength, a result known as “Bragg law”.
For clear-air scattering, such structures are “blobs” of turbulent air moving
with the wind. They are seen in the Doppler spectrum as a single broad peak
around the central Doppler shift $f_{c}$ induced by the radial wind speed
$U_{r}=-\lambda f_{c}/2$. For a clean sea surface observed with a coastal
oceanographic radar, the resonant features are the so-called Bragg waves [7]
which are the gravity waves at half the radar wavelength. As they can be
possibly propagating in two opposite directions, the resulting Doppler
spectrum generally exhibits 2 Doppler peaks $f_{c}^{\pm}=\pm f_{B}$ at the so-
called Bragg frequency $f_{B}=\sqrt{g/(\pi\lambda)}$ and its opposite. Any
additional surface current $U_{r}$ translates the 2 Bragg peaks by the same
shift $-2U_{r}/\lambda$, so that the latter can be inverted from the residual
Doppler shift.
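These relations are straightforward to check numerically. The sketch below (our own helper functions, not part of the cited works) computes the Bragg frequency for a given carrier frequency and inverts a residual Doppler shift into a radial current:

```python
import math

C_LIGHT = 2.998e8  # speed of light (m/s)
G = 9.81           # gravitational acceleration (m/s^2)

def bragg_frequency(radar_freq_hz):
    """First-order Bragg frequency f_B = sqrt(g / (pi * lambda)),
    with lambda the radar wavelength."""
    wavelength = C_LIGHT / radar_freq_hz
    return math.sqrt(G / (math.pi * wavelength))

def current_from_shift(delta_f_hz, radar_freq_hz):
    """Invert the residual Doppler shift delta_f = -2 * U_r / lambda
    into a radial surface current U_r."""
    wavelength = C_LIGHT / radar_freq_hz
    return -delta_f_hz * wavelength / 2.0
```

For the $20\,\mathrm{MHz}$ carrier used in Section 3, `bragg_frequency` gives $f_{B}\approx 0.46\,\mathrm{Hz}$.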
Digital computation of the Doppler spectrum is routinely achieved from the
range-resolved complex voltage time series $s(t)$ using a Fast Fourier
Transform (FFT) algorithm. Best frequency resolution and SNR are thus obtained
for “long” integration times. Conversely, short integration times, such as
those needed to observe transient phenomena, strongly deteriorate the quality
of the spectrum and eventually the Doppler estimate. Here, we use the TVAR-MEM
approach [4] to model the backscattered Doppler spectrum at high temporal and
frequency resolution. The full time series are split into sequences of $N$
samples, overlapping by half of their length. Each sequence is then modeled as
an autoregressive (AR) process of order $p$ [8]:
$s(n\Delta t)=-\sum_{k=1}^{p}a_{k}s\big{(}(n-k)\Delta
t\big{)}+\varepsilon_{n}$ (2)
where $a_{k}$ are the modeling AR coefficients and $\varepsilon_{n}$ is a
white noise. In the context of oceanographic measurements, the authors
experimentally demonstrated that the best choice for the AR order $p$ is $N/2$
[3], a criterion we extend to the context of atmospheric analysis. The AR
coefficients are here evaluated using the Maximum Entropy Method (MEM) or
“Burg method” [9], which was found efficient for short integration times. The
Power Spectral Density (PSD) is finally computed from the AR coefficients:
$P_{AR}(\omega)=P_{\varepsilon}\left|1+\sum_{k=1}^{p}a_{k}e^{-ik\omega\Delta
t}\right|^{-2}$ (3)
The fast updating of the AR coefficients makes them “time-varying” (TV) and we
will refer to this method as TVAR-MEM. The temporal fluctuations of the
backscattered Doppler spectrum, evaluated at rapid scale with the TVAR-MEM,
can visually be assessed by representing the PSD in the Time-Doppler plane. We
will further refer to this representation as the “Time-Doppler spectrogram”.
Despite being commonly used in the radar community, this representation has
found little to no application to HFR or WP until now, because of the
“long” integration times usually required. The TVAR-MEM approach alleviates
this issue.
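The AR estimation and PSD evaluation above can be sketched as follows; this is a minimal illustrative implementation of Burg's recursion in the sign convention of Eqs. (2)-(3), not the authors' code:

```python
import numpy as np

def burg_ar(x, order):
    """Burg (maximum entropy) estimate of the AR coefficients a_k of
    Eq. (2) and of the residual noise power P_eps."""
    x = np.asarray(x, dtype=complex)
    f, b = x.copy(), x.copy()      # forward / backward prediction errors
    a = np.zeros(0, dtype=complex)
    p_eps = np.mean(np.abs(x) ** 2)
    for _ in range(order):
        ff, bb = f[1:], b[:-1]
        # reflection coefficient minimizing forward + backward error power
        refl = -2.0 * np.sum(ff * np.conj(bb)) / (
            np.sum(np.abs(ff) ** 2) + np.sum(np.abs(bb) ** 2))
        f = ff + refl * bb          # update prediction errors
        b = bb + np.conj(refl) * ff
        a = np.append(a + refl * np.conj(a[::-1]), refl)  # Levinson step
        p_eps *= 1.0 - np.abs(refl) ** 2
    return a, p_eps

def mem_psd(a, p_eps, freqs, dt):
    """Eq. (3): P(omega) = P_eps * |1 + sum_k a_k exp(-i k omega dt)|^-2."""
    k = np.arange(1, len(a) + 1)
    denom = 1.0 + np.exp(-1j * np.outer(2 * np.pi * freqs * dt, k)) @ a
    return p_eps / np.abs(denom) ** 2
```

Applying `burg_ar` and `mem_psd` to half-overlapping blocks of $N$ samples with $p=N/2$ reproduces the TVAR-MEM spectrogram construction described above.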
## 3 Assessment with Synthetic Data
We will first assess the performance of the TVAR approach in a common
formalism including both atmospheric and oceanic remote sensing. For this, we
simulate radar time series following the approach proposed by [10]. A typical
backscattered Doppler spectrum $P$ can be written as
$P(\omega)=\big{(}S(\omega)+N\big{)}X(\omega)$ where $S$ is the signal PSD,
$N$ is the uniform PSD of white noise and $X$ is an exponentially distributed
random variable. The complex voltage time series can then be obtained (up to a
scaling factor) with a Discrete Fourier Transform of the complex spectral
components:
$s(t)\sim\sum_{j}\sqrt{P(\omega_{j})}e^{\textrm{i}\big{(}\omega_{j}t+\varphi_{j}\big{)}}e^{\mathrm{i}\Phi_{D}(t)}$
(4)
where $\varphi_{j}$ are uniform independent random phases. By construction,
the amplitudes $\sqrt{P(\omega_{j})}$ are Rayleigh distributed and the
individual frequency components are complex Gaussian variables. The
deterministic varying phase $\Phi_{D}(t)$ represents the phase shift induced
by the velocity of perturbations,
$\Phi_{D}(t)=\frac{4\pi}{\lambda}\int_{0}^{t}U_{r}(\tau)\,d\tau$. In the oceanic context,
$U_{r}(\tau)$ is the instantaneous radial surface current and the integral
$\Phi_{D}(t)$ is referred to as the “Memory Term”, see e.g. [11]; in the
atmospheric context, $U_{r}(\tau)$ is the radial wind speed. The memory term
accounts for the possible fluctuations of the velocity $U_{r}$ during the
integration time and reduces to the classical Doppler shift,
$\Phi_{D}(t)=(4\pi/\lambda)\,U_{r}t=\omega_{D}t$, whenever the velocity can be
assumed constant over the integration time. The resonant frequency peaks in
the signal PSD $S$ are modeled with a pair of Gaussian functions centered at
plus or minus the Bragg frequency (HFR) or a single Gaussian shape centered at
the null Doppler frequency (WP). As a generic example we have generated a
backscattered time series corresponding to a $20\,\mathrm{MHz}$ radar
carrier frequency at a sampling rate $\Delta t=100\,\mathrm{ms}$. A
single positive Bragg line of width $\sigma=3\times 10^{-3}\,\mathrm{m\,s^{-1}}$ has been
assumed with a rapidly varying velocity
$U_{r}(t)=U_{0}\cos\left(\omega_{0}t\right)$, where $U_{0}=0.1\,\mathrm{m\,s^{-1}}$ and $\omega_{0}=6\times 10^{-2}\,\mathrm{rad\,s^{-1}}$. The
instantaneous PSD has been recalculated from the time series using either the
TVAR-MEM or the classical FFT approach by processing half-overlapping series
of $N=128$ samples (i.e. $33\text{\,}\mathrm{s}$). The chosen values correspond to the typical
case of a HFR observing surface currents but could simply be rescaled to be
consistent with the case of a WP sensing wind velocity. Figure 1 shows the
Time-Doppler spectrograms obtained with the two methods. The temporal
variations of the Bragg line are accurately rendered with the TVAR-MEM, while
barely visible using FFT.
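A minimal sketch of this simulation scheme (our own function and parameter names; the line shapes are supplied by the caller) is:

```python
import numpy as np

def synthetic_series(signal_psd, noise_level, u_r, wavelength, dt, rng):
    """Random realization of a prescribed Doppler spectrum (Eq. 4),
    modulated by the memory term Phi_D(t) of a varying velocity u_r(t).
    signal_psd: PSD samples on the FFT frequency grid, length n."""
    n = len(signal_psd)
    # exponentially distributed draw for each spectral component
    p = (signal_psd + noise_level) * rng.exponential(size=n)
    # Rayleigh amplitudes with uniform independent random phases
    spec = np.sqrt(p) * np.exp(1j * rng.uniform(0, 2 * np.pi, n))
    omega = 2 * np.pi * np.fft.fftfreq(n, dt)
    t = np.arange(n) * dt
    s = (spec[None, :] * np.exp(1j * np.outer(t, omega))).sum(axis=1)
    # memory term: Phi_D(t) = (4*pi/lambda) * integral of u_r
    phi_d = 4 * np.pi / wavelength * np.cumsum(u_r(t)) * dt
    return s * np.exp(1j * phi_d)
```

With a constant $U_{r}$, the spectral peak of the output is shifted by $\omega_{D}=4\pi U_{r}/\lambda$, matching the constant-velocity limit quoted above.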
Fig. 1: Simulation of the normalized PSD (colorscale; $\mathrm{dB}$) that
would be obtained with a HFR. The representation is in the Time-Doppler plane,
where the vertical axis is the residual Doppler frequency. The instantaneous
PSD is computed from overlapping synthetic time series of $N=128$ points
($33\,\mathrm{s}$) in the presence of a rapidly-varying radial surface
current $U_{r}(t)$, limited to the positive Bragg line: (a) TVAR-MEM; (b)
FFT. The simulated Doppler shift $f_{D}$ is superimposed as a dashed line.
## 4 Application to Experimental Data
Next we present an application of the TVAR-MEM approach to two experimental
data sets. The first has been routinely acquired by the WERA HFR (Helzel GmbH)
of Tofino, on the Pacific Coast of Vancouver Island, British Columbia; the
selected time series was recorded during the passage of an abnormal
transitory oceanic and atmospheric event. The second has been acquired with
the Degreane Horizon PCL-1300 WP during the SESAR experiment that took place
near the landing runways of Paris Charles de Gaulle Airport.
### 4.1 The October 2016 “Meteotsunami” in Tofino
On October 14, 2016, the HFR of Tofino raised a tsunami alert based on the
measurements of strong abnormal surface currents. Due to the absence of any
recorded seismic activity, this event was related to the family of
atmospheric-induced tsunamis [11, 4] and can be used as benchmark for tsunami
detection algorithms. Here, we apply the TVAR-MEM to model the backscattered
Doppler spectrum at a high temporal scale. The time series $s(t)$ were processed
by half-overlapping blocks of $N=128$ samples (i.e. $33\,\mathrm{s}$).
Figure 2 is the resulting Time-Doppler
spectrogram centered on the positive Bragg line. The fine frequency resolution
reveals a micro-Doppler jump of $2.5\times 10^{-2}\,\mathrm{Hz}$
(i.e. $25\,\mathrm{cm\,s^{-1}}$) at 05:40 UTC, corresponding
to a sudden surge of surface current. Furthermore, the increase of the
positive Bragg line amplitude corresponds to a strengthening of the Bragg
waves advancing towards the radar, confirmed by a sudden
$20\,\mathrm{cm}$ rise in sea level measured by coastal tide gauges.
Fig. 2: Normalized HFR PSD (same colorscale as Figure 1; $\mathrm{dB}$) in the
Time-Doppler plane, computed from overlapping time series of $N=128$ points
($33\,\mathrm{s}$) every $\tau=16.5\,\mathrm{s}$, from data
acquired with the HFR of Tofino on October 14, 2016, and limited to the
positive Bragg line within a
$f_{B}\,\pm\,7.5\times 10^{-3}\,\mathrm{Hz}$ window (i.e.
$\pm\,75\,\mathrm{cm\,s^{-1}}$). The vertical bar is an
interruption in acquisition for quality control.
### 4.2 Various Transient Atmospheric Events
Fig. 3: Data acquired with the PCL-1300 WP during the SESAR experiment. Top:
Normalized PSD (colorscale; $\mathrm{dB}$) in the Time-Doppler plane, computed
with TVAR-MEM from overlapping samples of $N=32$ points
($0.15\,\mathrm{s}$) every $\tau=75\,\mathrm{ms}$. Bottom:
Real part of the radar time series $s(t)$ showing the “contamination” by
transient phenomena.
The SESAR P12.2.2 XP1 campaign was conducted in autumn 2012 at the Paris CDG
airport. The Degreane Horizon PCL-1300 WP was installed vertically below the
“Outer Marker” of the landing runway 26L, which is located about
$10\,\mathrm{km}$ East of the runway and marks the beginning of the final
approach segment. We have analyzed the complex voltage time series received on
an antenna pointed towards the landing axis at a
$73^{\circ}$ site angle. The time series were
processed using TVAR-MEM from overlapping blocks of $N=32$ samples
($0.15\,\mathrm{s}$), updated every $\tau=75\,\mathrm{ms}$.
We selected for illustration 3 specific events,
acquired on September 24, 2012 at range gate 3 (altitudes 720 to
$1075\,\mathrm{m}$). Figure 3 shows the corresponding TVAR-MEM Time-Doppler
spectrograms and time series.
(a) Wind Echoes:
Typical steady wind echoes are seen in Figure 3a as the horizontal strip
around the frequency $f_{D}=45\,\mathrm{Hz}$ (i.e.
$U_{r}=5.29\,\mathrm{m\,s^{-1}}$). The instantaneous PSDs
(vertical slices in the Time-Doppler representation) have a Gaussian shape
around this central frequency. The marked horizontal line around the zero
Doppler frequency corresponds to the dominant echo of fixed targets.
(b) Flapping Bird:
Clear-air echoes are contaminated by avian echoes starting at 19:03:37 UTC
(Figure 3b). A bird is flying towards the radar at a radial speed varying from
4.7 to $0\,\mathrm{m\,s^{-1}}$. The wingbeat frequency
can be extracted from the micro-Doppler oscillations and is here close to
$3\,\mathrm{Hz}$.
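One simple way to extract such an oscillation frequency, assuming the per-frame peak Doppler position has already been tracked (a sketch of our own, not the authors' procedure), is to Fourier-analyze the micro-Doppler track itself:

```python
import numpy as np

def oscillation_frequency(track_hz, tau):
    """Dominant oscillation frequency of a micro-Doppler track sampled
    every tau seconds (e.g. the per-frame Doppler peak position)."""
    x = np.asarray(track_hz, dtype=float)
    x = x - x.mean()                       # remove the mean Doppler
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), tau)
    return freqs[1:][np.argmax(spec[1:])]  # skip the DC bin
```

For a track sampled every $\tau=75\,\mathrm{ms}$, oscillations of a few hertz such as a wingbeat are well within the resolvable band.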
(c) Plane and Wake Turbulences:
The strong echo of an airplane is located in the first
$10\,\mathrm{s}$. Note that the airplane speed corresponds to a Doppler shift
exceeding the Nyquist frequency, leading to an aliased echo. Assuming constant
speed and altitude during the record, one can infer a Doppler rate of change of
$66\,\mathrm{Hz\,s^{-1}}$, corresponding to an average
plane radial speed of $75\,\mathrm{m\,s^{-1}}$, which
is consistent with the typical landing speed of commercial aircraft. The echo
is followed by multiple oscillating echoes which we attribute to wake vortex
turbulence.
## 5 Conclusion
We have presented the first application of the TVAR-MEM in the joint context
of oceanic and atmospheric sensing. It has been applied for the first time to an
experimental WP dataset. The resulting Time-Doppler maps unveil details of
rapid atmospheric variations at the scale of one second, such as bird flapping
or turbulence in the wake of a plane. Further work is in progress to confirm
the strong potential of this technique.
Acknowledgments: The first author was supported by the Direction Générale de
l’Armement (DGA). We are grateful to Ocean Networks Canada for providing HFR
data and to Dr Philipp Currier for countless discussions on WP.
## References
* [1] B. Liu _et al._ , “Boundary Layer Heights as Derived from Ground-Based Radar Wind Profiler in Beijing,” _IEEE Trans. Geosci. Remote Sens._ , vol. 57, no. 10, pp. 8095–8104, 2019.
* [2] H. Roarty _et al._ , “The Global High Frequency Radar Network,” _Frontiers in Marine Science_ , vol. 6, p. 164, 2019.
* [3] B. Domps, D. Dumas, C.-A. Guérin, and J. Marmain, “High-Frequency Radar Ocean Current Mapping at Rapid Scale with Autoregressive Modeling,” _IEEE J. Ocean. Eng._ , 2021.
* [4] B. Domps, J. Marmain, and C.-A. Guérin, “A reanalysis of the October 2016 ”Meteotsunami” in British Columbia with help of high-frequency radars and autoregressive modeling,” 2020. [Online]. Available: https://arxiv.org/abs/2011.07237
* [5] V. I. Tatarskii, _Wave Propagation in a Turbulent Medium_. McGraw-Hill, 1961.
* [6] T. M. Elfouhaily and C.-A. Guérin, “A Critical Survey of Approximate Scattering Wave Theories from Random Rough Surfaces,” _Waves in Random Media_ , vol. 14, no. 4, 2004.
* [7] D. E. Barrick, “First-Order Theory and Analysis of MF/HF/VHF Scatter from the Sea,” _IEEE Trans. Antennas Propag._ , vol. 20, no. 1, pp. 2–10, 1972.
* [8] P. Stoica, _Spectral Analysis of Signals_. Prentice Hall, 2005.
* [9] J. P. Burg, “Maximum Entropy Spectral Analysis,” Ph.D. dissertation, Stanford University, 1975.
* [10] D. S. Zrnić, “Simulation of Weatherlike Doppler Spectra and Signals,” _Journal of Applied Meteorology and Climatology_ , vol. 14, no. 4, Jun. 1975.
* [11] C.-A. Guérin _et al._ , “Tsunami Detection by High-Frequency Radar in British Columbia: Performance Assessment of the Time-Correlation Algorithm for Synthetic and Real Events,” _Ocean Dyn._ , no. 68, pp. 423–438, 2018.
|
# Hořava-Lifshitz scalar field cosmology: classical and quantum viewpoints
Fatimah Tavakoli1 , Babak Vakili1 and Hossein Ardehali2
1Department of Physics, Central Tehran Branch, Islamic Azad University,
Tehran, Iran
2Department of Physics, Science and Research Branch, Islamic Azad University,
Tehran, Iran e-mail<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
In this paper, we study a projectable Hořava-Lifshitz cosmology without the
detailed balance condition, minimally coupled to a non-linear self-coupling
scalar field. In the minisuperspace framework, the super-Hamiltonian of the
presented model is constructed, by means of which some classical solutions for
the scale factor and scalar field are obtained. Since these solutions exhibit
various types of singularities, we then quantize the model in the context of
the Wheeler-DeWitt approach to quantum cosmology. The resulting quantum wave
functions are used to investigate the possibility of avoiding the classical
singularities due to quantum effects, which become important near these
singularities.
PACS numbers: 04.50.+h, 98.80.Qc, 04.60.Ds
Keywords: Hořava-Lifshitz cosmology, Quantum cosmology
## 1 Introduction
In recent years, a new gravitational theory based on an anisotropic scaling of
the space $\bf{x}$ and time $t$ was presented by Hořava. Since the methods used
in this gravitational theory are similar to Lifshitz’s work on second-order
phase transitions in solid state physics, it is commonly called the Hořava-Lifshitz
(HL) theory of gravity [1, 2, 3, 4]. In general, HL gravity is a
generalization of general relativity (GR) at high energy ultraviolet (UV)
regime and reduces to standard GR in the low energy infra-red (IR) limit.
However, unlike in other candidates for quantum gravity, the issue of Lorentz
symmetry breaking at high energies is described in a somewhat different way.
Here, the well-known phenomenon of Lorentz symmetry breaking is expressed
by a Lifshitz-like process from solid state physics. This is based on the
anisotropic scaling between space and time as
$t\rightarrow b^{z}t,\hskip 14.22636pt{\bf x}\rightarrow b{\bf x},$ (1)
where $b$ is a scaling parameter and $z$ is the dynamical critical exponent. It is
clear that $z=1$ corresponds to the standard relativistic scale invariance
with Lorentz symmetry. Indeed, with $z=1$, the theory falls within its IR
limit. However, different values of $z$ correspond to different theories; for
instance, what is proposed in [1, 2] as the UV gravitational theory requires
$z=3$. In order to better represent the asymmetry of space and time in HL
theory, we write the space-time metric in its ADM form, that is,
$g_{\mu\nu}(t,{\bf x})=\left(\begin{array}[]{cc}-N^{2}(t,{\bf x})+N_{a}(t,{\bf
x})N^{a}(t,{\bf x})&N_{b}(t,{\bf x})\\\ N_{a}(t,{\bf x})&h_{ab}(t,{\bf x})\\\
\end{array}\right),$ (2)
where $N(t,{\bf x})$ is the lapse function, $N^{a}(t,{\bf x})$ are the
components of the shift vector and $h_{ab}(t,{\bf x})$ is the spatial metric.
There are two classes of HL theories depending on whether the lapse function
is a function only of $t$, for which the theory is called projectable, or of
$(t,{\bf x})$, for which we have a non-projectable theory. Since in
cosmological settings the lapse function is usually chosen only as a function
of time, the corresponding HL cosmologies are projectable [5]-[7]. In more
general cases however, one may consider the lapse function as a function of
both $t$ and ${\bf x}$ to get a non-projectable theory, see [8, 9]. At first
glance, it may seem that requiring the lapse function to be just a function of
time is a serious restriction. However, it should be noted that under this
assumption the classical Hamiltonian constraint is no longer a local
constraint; rather, it must be integrated over all spatial coordinates, which
means that we do not have local energy conservation. In [10], it is
shown that this procedure yields classical solutions to the IR limit of HL
gravity which are equivalent to Friedmann equations with an additional term of
the cold dark matter type. On the other hand, in homogeneous models like the
Robertson-Walker metric, such spatial integrals simply give the spatial volume
of the space, and thus the above-mentioned dark-dust constant must vanish [11].
In summary, it is worth noting that although almost all physically important
solutions of the Einstein equations like Schwarzschild, Reissner-Nordström,
Kerr and Friedmann-Lemaitre-Robertson-Walker space times, can be cast into the
projectable form by suitable choice of coordinates, most of the results, in
principle, may be extended to the non-projectable case through a challenging
but straightforward calculation [12].
Another thing to note about the HL theory is the form of its action, which in
general consists of kinetic and potential terms. Its kinetic term ${\cal
S}_{K}$ is nothing other than what comes from the Einstein-Hilbert action.
The form of the potential term is ${\cal S}_{V}=\int
d^{4}x\sqrt{-g}V[h_{ab}],$ where $V$ is a scalar function depending only on the
spatial metric $h_{ab}$ and its spatial derivatives. Among the very different
possible combinations that can be constructed using the three-metric scalars
[5], Hořava considered a special form in the $z=3$ theory known as the “detailed
balance condition”, in which the potential is a combination of the terms
$\nabla_{a}R_{bc}\nabla^{a}R^{bc}$, $\nabla_{a}R_{bc}\nabla^{b}R^{ac}$,
$\nabla_{a}R\nabla^{a}R$ [2, 3]. Here, we do not go into the details
of this issue. Detailed-balance theories show simpler quantum behavior
because they have simpler renormalization properties. However, as shown in
[11], if one relaxes this condition, the resulting action with the extra allowed
terms is well-behaved enough to recover the models with detailed balance.
Another feature of HL theory is its known consistency issues, such as
instabilities, ghost scalar modes and the strong coupling problem. Indeed, by
perturbing this theory around its IR regime one can show that it suffers
from some instabilities and fine-tunings that may not be removed by the usual
tricks such as analytic continuation. Since our study in this article is done
at the background level, such issues are beyond our discussion. However, a
detailed review of this topic can be found in [13]. On the other hand, there
are some extensions of the initial version of the HL gravity theory that deal
with such problems. Some of these are: [14], in which a projectable $U(1)$
symmetric soft-breaking detailed balance condition model is considered and it
is shown that the resulting theory displays anisotropic scaling at short
distances while almost all features of GR are recovered at long distances. The
non-projectable model without detailed balance condition is studied in [9],
where it is proved that only non-projectable model is free from instabilities
and strong coupling. The $U(1)$ symmetric non-projectable version of the HL
gravity is studied in [15], in which a coupling of the theory with a scalar
field is also considered, and it is shown that all the problems that the
original theory suffers from disappear. Finally, a progress report
on all of the above-mentioned issues can be found in [16].
In this paper we consider a Friedmann-Robertson-Walker (FRW) cosmological
model coupled to a self-interacting scalar field, in the framework of a
projectable HL gravity without the detailed balance condition. The basis of our
work is the representation of the model in terms of minisuperspace
variables. The minisuperspace formulation of classical and quantum HL cosmology is
studied in some works, see for instance [5, 6] and [17]-[19]. Also,
quantization of the HL theory without restriction to a cosmological background
is investigated for instance in [20], in which the quantization of two-
dimensional HL theory without the projectability condition is considered, and
[21], where a $(1+1)$-dimensional projectable HL gravity is quantized.
Here, we first construct a suitable form for the HL action and then add a
self-interacting scalar field to it. For the flat FRW model and in some special
cases, the classical solutions are presented and their singularities are
investigated. We then construct the corresponding quantum cosmology based on
the canonical Wheeler-DeWitt (WDW) approach, to see how the behavior may
change when quantum mechanical considerations enter the model.
## 2 The model outline
To study the FRW cosmology within the framework of HL gravity, let us start
with its geometric structure. In quasi-spherical polar coordinates the
spacetime metric is assumed to be
$ds^{2}=-N^{2}(t)dt^{2}+a^{2}(t)\left[\frac{dr^{2}}{1-kr^{2}}+r^{2}\left(d\vartheta^{2}+\sin^{2}\vartheta\,d\varphi^{2}\right)\right],$ (3)
where $N(t)$ is the lapse function, $a(t)$ the scale factor and $k=1$, $0$ and
$-1$ correspond to the closed, flat and open universe respectively. In terms
of the ADM variables the above metric takes the form
$g_{\mu\nu}(t,{\bf x})=\left(\begin{array}[]{cc}-N^{2}(t)&{\bf 0}\\\ {\bf
0}&h_{ab}\\\ \end{array}\right),$ (4)
where
$h_{ab}=a^{2}(t)\mbox{diag}\left(\frac{1}{1-kr^{2}},r^{2},r^{2}\sin^{2}\vartheta\right),$
is the intrinsic metric induced on the spatial $3$-dimensional hypersurfaces.
The gravitational part of the HL action, without the detailed balance
condition, is given by $S_{HL}=S_{K}+S_{V}$, where $S_{K}$ is its kinetic part
$S_{K}\sim\int dtd^{3}\mathbf{x}\,N\sqrt{h}\;\left(K_{ab}K^{ab}-\lambda
K^{2}\right),$ (5)
where $h$ is the determinant of $h_{ab}$ and $\lambda$ is a dimensionless
running coupling that measures the deviation from GR, whose kinetic term is
recovered for $\lambda=1$. Also, $K_{ab}$ is the extrinsic
curvature (with trace $K$) defined as
$K_{ab}=\frac{1}{2N}\left(N_{a|b}+N_{b|a}-\frac{\partial h_{ab}}{\partial
t}\right),$
where $N_{a|b}$ denotes the covariant derivative with respect to $h_{ab}$.
Since for the FRW metric all components of the shift vector are zero, a simple
calculation based on the above definition results in
$K_{ab}K^{ab}=\frac{3\dot{a}^{2}}{N^{2}a^{2}}$ and $K=-\frac{3\dot{a}}{Na}$,
where a dot represents differentiation with respect to $t$. Going back to the
action, its potential part is in the form
$S_{V}=-\int dtd^{3}\mathbf{x}N\sqrt{h}V[h_{ij}].$ (6)
According to the relation (1) and because of the anisotropic scaling of the
space and time coordinates, their dimensions differ:
$[\mathbf{x}]=[\kappa]^{-1}$ and $[t]=[\kappa]^{-z}$, where $[\kappa]$ denotes
the dimension of momentum. In this sense, the dimensions of the metric,
lapse function and shift vector are $[h_{ij}]=[N]=1$ and
$[N^{i}]=[\kappa]^{z-1}$. Therefore, the potential term in a three-dimensional
space has dimension $\left[V[h_{ij}]\right]=[\kappa]^{z+3}$. So, according
to such a dimensional analysis, one may argue that for the special case $z=3$, the
potential $V[h_{ij}]$ consists of the following terms built from the Ricci tensor,
Ricci scalar and their covariant derivatives, each of dimension $[\kappa]^{6}$ [22]
$\displaystyle V[h_{ij}]$ $\displaystyle=$ $\displaystyle
g_{0}\zeta^{6}+g_{1}\zeta^{4}R+g_{2}\zeta^{2}R^{2}+g_{3}\zeta^{2}R_{ij}R^{ij}$
(7) $\displaystyle+g_{4}R^{3}+g_{5}RR_{ij}R^{ij}+g_{6}R_{ij}R^{jk}R_{k}^{\;i}$
$\displaystyle+g_{7}R\nabla^{2}R+g_{8}\nabla_{i}R_{jk}\nabla^{i}R^{jk},$
where $g_{i}$ ($i=0,...,8$) are dimensionless coupling constants coming from the
HL correction to usual GR, and $\zeta$ with dimension $[\zeta]=[\kappa]$ is
introduced to make the constants $g_{i}$ dimensionless. Under these
conditions, the gravitational part of the HL theory that we shall consider
hereafter has the following form, [11], [22], [23] and [24]
$\displaystyle S_{HL}$ $\displaystyle=$
$\displaystyle\frac{M_{PL}^{2}}{2}\int_{\mathcal{M}}dtd^{3}\mathbf{x}\;N\sqrt{h}\Big{[}K_{ij}K^{ij}-\lambda
K^{2}+R-2\Lambda$ (8)
$\displaystyle\qquad-\frac{g_{2}}{M_{PL}^{2}}R^{2}-\frac{g_{3}}{M_{PL}^{2}}R_{ij}R^{ij}-\frac{g_{4}}{M_{PL}^{4}}R^{3}-\frac{g_{5}}{M_{PL}^{4}}RR_{ij}R^{ij}$
$\displaystyle\qquad-\frac{g_{6}}{M_{PL}^{4}}R_{ij}R^{jk}R^{i}_{\;k}-\frac{g_{7}}{M_{PL}^{4}}R\nabla^{2}R-\frac{g_{8}}{M_{PL}^{4}}\nabla_{i}R_{jk}\nabla^{i}R^{jk}\Big{]},$
in which $M_{PL}=\frac{1}{\sqrt{8\pi G}}$ and we have set $c=1$, $\zeta=1$,
$\Lambda=g_{0}M_{Pl}^{2}/2$ and $g_{1}=-1$.
Now, let us consider a scalar field minimally coupled to gravity but with a
nonlinear self-interaction through a coupling function $F(\phi)$ [25]. The
action of such a scalar field may be written as
$S_{\phi}=\int_{\mathcal{M}}d^{4}\mathbf{x}\sqrt{-g}F(\phi)g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi.$
(9)
The existence of a scalar field in a gravitational theory can address many
issues in cosmology, such as the spatially flat, accelerating universe
at the present time, inflation, dark matter and dark energy. The action of the
scalar field considered here has the same form as that in usual cosmological
models with general covariance. However, in the HL gravity the Lorentz
symmetry is broken in such a way that various higher spatial derivatives will
appear in the gravitational part of the action. Therefore, one expects this
feature to be considered when constructing the action of the scalar field.
This means that we may be able to add higher spatial derivatives of the scalar
field into its action. One of the possible types of such actions for scalar
field is presented in [26] as
$S_{\phi}=\int
d^{4}\mathbf{x}\sqrt{-g}N\left[\frac{1}{N^{2}}\left(\dot{\phi}-N^{i}\partial_{i}\phi\right)^{2}-{\cal
V}(\partial_{i}\phi,\phi)\right],$ (10)
where the potential function ${\cal V}$ can in general contain arbitrary
combinations of $\phi$ and its spatial derivatives. However, as emphasized in
[26], in homogeneous and isotropic cosmological settings like the FRW metric, we
have $N^{i}=0$ and the cosmological ansatz for the scalar field is
$\phi=\phi(t)$. In such cases, since $\partial_{i}\phi=0$, the function
${\cal V}$ in the scalar action reduces effectively to a usual potential that
vanishes for a free scalar field. In this respect, the action (9) that we adopt
for a self-interacting scalar field appears to be well grounded physically.
The total action may now be written by adding the HL and scalar field actions
as $S=S_{HL}+S_{\phi}=\int dtL[a,\phi,\dot{a},\dot{\phi}]$. Having at hand the
actions (8) and (9), by substituting the metric (3) into them, we are led to
the following effective Lagrangian in terms of the minisuperspace variables
$(a(t),\phi(t))$ (the Planck mass can be absorbed into the time variable, so we
rescale the time as $t\rightarrow\frac{t}{t_{PL}}$, where
$t_{PL}=\frac{1}{M_{PL}}$ is the Planck time; in what follows, by $t$ we mean
this dimensionless quantity, and the figures are plotted in terms of it):
$L[a,\phi,\dot{a},\dot{\phi}]=N\left(-\frac{a\dot{a}^{2}}{N^{2}}+g_{c}a-g_{\Lambda}a^{3}-\frac{g_{r}}{a}-\frac{g_{s}}{a^{3}}+\frac{1}{N^{2}}F(\phi)a^{3}\dot{\phi}^{2}\right),$
(11)
in which the new coefficients are defined as [27]
$g_{c}=\frac{6k}{3(3\lambda-1)},\hskip
8.5359ptg_{\Lambda}=\frac{2\Lambda}{3(3\lambda-1)},\hskip
8.5359ptg_{r}=\frac{12k^{2}(3g_{2}+g_{3})}{3(3\lambda-1)M^{2}_{PL}},\hskip
8.5359ptg_{s}=\frac{24k^{3}(9g_{4}+3g_{5}+g_{6})}{3(3\lambda-1)M^{4}_{PL}}.$
(12)
The momenta conjugate to the dynamical variables are obtained from the
definition $p_{q}=\frac{\partial L}{\partial\dot{q}}$, with the results
$p_{a}=-\frac{2a\dot{a}}{N}$ and $p_{\phi}=\frac{2}{N}F(\phi)a^{3}\dot{\phi}$.
In terms of these momenta, the total Hamiltonian reads
$\displaystyle
H=N\mathcal{H}=N\left(-\frac{p_{a}^{2}}{4a}+\frac{p_{\phi}^{2}}{4a^{3}F(\phi)}-g_{c}a+g_{\Lambda}a^{3}+\frac{g_{r}}{a}+\frac{g_{s}}{a^{3}}\right).$
(13)
As it should be, the lapse function appears as a Lagrange multiplier in the
Hamiltonian which means that the Hamiltonian equation for this variable yields
the constraint equation ${\cal H}=0$. At classical level this constraint is
equivalent to the Friedmann equation. As we shall see later, this constraint
also plays a key role in forming the basic equation of quantum cosmology, that
is, the WDW equation.
## 3 Classical cosmology
In this section we intend to study the classical cosmological solutions of the
HL model whose Hamiltonian is given by equation (13). In the Hamiltonian approach,
the classical dynamics of each variable is determined by the Hamilton equation
$\dot{q}=\left\{q,H\right\}$, where $\left\{.,.\right\}$ is the Poisson
bracket. So, we get
$\displaystyle\left\\{\begin{array}[]{cl}\dot{a}&=-\frac{Np_{a}}{2a},\\\
\dot{p}_{a}&=N\left(-\frac{p_{a}^{2}}{4a^{2}}+g_{c}-3g_{\Lambda}a^{2}+\frac{g_{r}}{a^{2}}+\frac{3g_{s}}{a^{4}}+\frac{3p_{\phi}^{2}}{4a^{4}F(\phi)}\right),\\\
\dot{\phi}&=\frac{Np_{\phi}}{2a^{3}F(\phi)},\\\
\dot{p}_{\phi}&=\frac{Np_{\phi}^{2}}{4a^{3}}\frac{F^{\prime}}{F^{2}},\end{array}\right.$
(18)
where $F^{\prime}=\frac{dF(\phi)}{d\phi}$. Before attempting to solve the
above system of equations, we must choose a time gauge by fixing the lapse
function. Without this, we face the problem of under-determinacy, meaning
that there are fewer equations than unknowns. So, let us fix the
lapse function as $N=a^{n}(t)$, where $n$ is a constant. With this time gauge,
by eliminating $p_{\phi}$ from the last two equations of (18), we obtain the
following equation for $\phi$:
$\frac{\ddot{\phi}}{\dot{\phi}}+\frac{1}{2}\frac{F^{\prime}(\phi)}{F(\phi)}\dot{\phi}+(3-n)\frac{\dot{a}}{a}=0,$
(19)
which expresses a conservation law for the scalar field. This equation can
easily be integrated, with the result
$\dot{\phi}^{2}F(\phi)=Ca^{2(n-3)},$ (20)
where $C$ is an integration constant. Now, to obtain a differential equation
for the scale factor, let us eliminate the momenta from the system (18);
using (20) as well, we arrive at
$\dot{a}^{2}+a^{2n}\left(g_{c}-g_{\Lambda}a^{2}-\frac{g_{r}}{a^{2}}-\frac{g_{s}+C}{a^{4}}\right)=0,$
(21)
which is equivalent to the Hamiltonian constraint $H=0$. The last differential
equation we want to derive from the above relations is an equation between $a$
and $\phi$ whose solutions give us the classical trajectories in the plane
$a-\phi$. This may be done by eliminating the time parameter between (20) and
(21), which yields
$F(\phi)\left(\frac{d\phi}{da}\right)^{2}=Ca^{-6}\left(-g_{c}+g_{\Lambda}a^{2}+\frac{g_{r}}{a^{2}}+\frac{g_{s}+C}{a^{4}}\right)^{-1}.$
(22)
The non-dependence of this equation on the parameter $n$ indicates that
although different time gauges lead to different functions for the scale factor
and the scalar field, the classical trajectories are independent of these gauges.
From now on, to keep the model simple and solvable, we take
a polynomial coupling function for the scalar field,
$F(\phi)=\lambda\phi^{m}$.
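As a quick symbolic sanity check (ours, not part of the original derivation), one can verify that (20) is indeed a first integral of (19); the SymPy sketch below uses the polynomial coupling $F(\phi)=\lambda\phi^{m}$ and checks that $\dot{\phi}^{2}F\,a^{2(3-n)}$ is conserved along solutions of (19).

```python
import sympy as sp

# Symbolic check (ours) that (20) is a first integral of the scalar-field
# equation (19) for the polynomial coupling F(phi) = lam*phi**m, i.e. that
# I = phidot^2 * F * a^{2(3-n)} is conserved along solutions of (19).
t, n, lam, m = sp.symbols('t n lam m')
a = sp.Function('a', positive=True)(t)
phi = sp.Function('phi', positive=True)(t)
F = lam*phi**m

# Equation (19): phi''/phi' + (F'/(2F)) phi' + (3 - n) a'/a = 0
eq19 = (sp.diff(phi, t, 2)/sp.diff(phi, t)
        + sp.Rational(1, 2)*sp.diff(F, phi)/F*sp.diff(phi, t)
        + (3 - n)*sp.diff(a, t)/a)
phi_dd = sp.solve(sp.Eq(eq19, 0), sp.diff(phi, t, 2))[0]

I = sp.diff(phi, t)**2*F*a**(2*(3 - n))
dI = sp.diff(I, t).subs(sp.diff(phi, t, 2), phi_dd)
assert sp.simplify(dI) == 0   # dI/dt = 0, so phidot^2 F = C a^{2(n-3)}
print("(20) is a first integral of (19)")
```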
In general, the above equations do not seem to have analytical solutions, so
in what follows we restrict ourselves to some special cases for which we can
obtain analytical closed form solutions for the above field equations.
### 3.1 Flat universe with cosmological constant: $k=0$, $\Lambda\neq 0$
For a flat universe the coefficients $g_{c}$, $g_{r}$ and $g_{s}$ vanish. So,
if we choose the time parameter corresponding to the gauge $N=1$, or
equivalently $n=0$, the Friedmann equation (21) reads
$\dot{a}^{2}=g_{\Lambda}a^{2}+\frac{C}{a^{4}},$ (23)
with solution
$a(t)=\left(\frac{C}{g_{\Lambda}}\right)^{\frac{1}{6}}\sinh^{\frac{1}{3}}(3\sqrt{g_{\Lambda}}t),$
(24)
where the integration constant is chosen in such a way that the singularity
occurs at $t=0$, which means that $a(t=0)=0$. Now, let us find an
expression for the scalar field. As we mentioned before, we consider its self-
coupling function in the form $F(\phi)=\lambda\phi^{m}$. With this choice
for the function $F(\phi)$, equation (20) takes the form
$\phi^{\frac{m}{2}}d\phi=\pm\sqrt{\frac{C}{\lambda}}\frac{dt}{a^{3}(t)},$ (25)
which, with the help of equation (24), can be integrated to find the
time evolution of the scalar field as
$\phi(t)=\left[\phi_{c}^{\frac{m+2}{2}}-\frac{m+2}{6\sqrt{\lambda}}\;\ln\left(\tanh\frac{3\sqrt{g_{\Lambda}}\;t}{2}\right)\right]^{\frac{2}{m+2}},\qquad
m\neq-2,$ (26)
where the integration constant $\phi_{c}$ is chosen such that
$\lim_{t\rightarrow\infty}\phi(t)=\phi_{c}$. In figure 1, we have plotted the
behavior of the scale factor and the scalar field. As this figure shows, the
universe begins its evolution with a big-bang singularity (zero size for
$a(t)$) at $t=0$, where the scalar field blows up. As time goes on, the
universe expands (with positive acceleration) until it finally enters a
de Sitter phase, that is, $a(t)\sim e^{\sqrt{g_{\Lambda}}t}$ as
$t\rightarrow+\infty$, while the scalar field eventually tends to a constant value.
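As a numerical sanity check (ours, not part of the paper), one can confirm by finite differences that the solution (24) satisfies the Friedmann equation (23); the parameter values below are those used for figure 1.

```python
import math

# Numerical check (ours): the solution (24) should satisfy the flat-universe
# Friedmann equation (23), adot^2 = g_L a^2 + C/a^4 (figure-1 values).
g_L, C = 1.0, 5.0

def a(t):
    return (C/g_L)**(1/6)*math.sinh(3*math.sqrt(g_L)*t)**(1/3)

def residual(t, h=1e-6):
    adot = (a(t + h) - a(t - h))/(2*h)   # central difference
    return adot**2 - (g_L*a(t)**2 + C/a(t)**4)

for t in (0.3, 1.0, 2.5):
    assert abs(residual(t)) < 1e-4
print("Eq. (24) solves the flat-universe Friedmann equation (23)")
```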
We can also follow this behavior by studying the classical trajectories in the
plane $a-\phi$. To do this, we need to extract the scalar field in terms of
the scale factor from equation (22) which now can be written as
$\phi^{m}\left(\frac{d\phi}{da}\right)^{2}=\frac{C}{\lambda}\frac{1}{a^{2}(g_{\Lambda}a^{6}+C)},$
(27)
whose integration yields
$\phi(a)=\left[\phi_{c}^{\frac{m+2}{2}}+\frac{m+2}{6\sqrt{\lambda}}\;\ln\frac{\sqrt{g_{\Lambda}a^{6}+C}-\sqrt{C}}{\sqrt{g_{\Lambda}}\;a^{3}}\right]^{\frac{2}{m+2}},\qquad
m\neq-2.$ (28)
In figure 1, we also plotted the above expression for typical numerical values
of the parameters. How the scale factor varies with respect to the scalar
field, or vice versa, can also be seen from this figure. We will return to
this classical trajectory again when looking at the quantum model to
investigate whether the peaks of the wave function correspond to these paths.
Figure 1: Left: the qualitative dynamical behavior of the scale factor (blue
line) and scalar field (red line) in flat universe. Right: classical
trajectory in the plane $a-\phi$. The figures are plotted for numerical
values: $g_{\Lambda}=1$, $C=5$, $\lambda=1$, $m=2$ and $\phi_{c}=\frac{2}{5}$.
### 3.2 Non-flat universe with zero cosmological constant: $k\neq 0$,
$\Lambda=0$
In this subsection, we consider another special case in which while the
curvature index is non-zero but the cosmological constant is equal to zero.
This means that $g_{c},g_{s},g_{r}\neq 0$ and $g_{\Lambda}=0$. Under these
conditions if we take an evolution parameter corresponding to the lapse
function $N(t)=a(t)$, (or $n=1$), the Friedmann equation (21) will be
$\dot{a}^{2}+g_{c}a^{2}-g_{r}-\frac{g_{s}+C}{a^{2}}=0.$ (29)
Before trying to solve this equation, we should note a point about the
selected lapse function. Unlike the case in the previous subsection in which
our time parameter with $N=1$, was indeed the usual cosmic time, here with
$N=a(t)$, $t$ is just a evolution or clock parameter in terms of which the
evolution of all dynamical variables is measured. However, one may translate
the final results in terms of the cosmic time $\tau$, using its relation with
the time parameter $t$, that is, $d\tau=N(t)dt$.
The general solution to equation (29) is
$a(t)=\sqrt{\alpha(1-\cos\omega t)+\beta\sin\omega t},$ (30)
where $\alpha=\frac{g_{r}}{2g_{c}}$, $\beta=\sqrt{\frac{g_{s}+C}{g_{c}}}$ and
$\omega=2\sqrt{g_{c}}$. To express this and the following relations in a
simpler form let us take $g_{r}=0$, which is equivalent to $g_{3}=-3g_{2}$ in
(12). Also, we assume that $\lambda>1/3$ and $9g_{4}+3g_{5}+g_{6}>0$, so that
$\mbox{sign}(g_{c},g_{s})=\mbox{sign}(k)$. In the following, we present the
solutions for the closed universe $k=+1$; the open ($k=-1$) counterparts of
the solutions can be obtained by replacing the trigonometric functions with
their hyperbolic counterparts. Therefore, by applying again the initial
condition $a(t=0)=0$, we have
$a(t)=\left(\frac{g_{s}+C}{g_{c}}\right)^{\frac{1}{4}}\sqrt{\sin(2\sqrt{g_{c}}t)}.$
(31)
The time dependence of the scalar field can also be deduced from equation (20)
which for the present case has the solution
$\phi(t)=\left[\phi_{c}^{\frac{m+2}{2}}+\frac{m+2}{4}\sqrt{\frac{C}{(g_{s}+C)\lambda}}\ln\tan(\sqrt{g_{c}}\,t)\right]^{\frac{2}{m+2}},\qquad
m\neq-2.$ (32)
Finally, what remain are the classical trajectories in the plane $a-\phi$,
which, as before, may be obtained from (22), with the result
$\phi(a)=\left[\frac{m+2}{4}\sqrt{\frac{C}{(g_{s}+C)\lambda}}\;\ln\frac{\sqrt{g_{s}+C}+\sqrt{g_{s}+C-g_{c}a^{4}}}{\sqrt{g_{c}}\;a^{2}}\right]^{\frac{2}{m+2}}.$
(33)
In figure 2, we have shown the time behavior of the scale factor and the
scalar field. As is clear from this figure, the universe begins its
decelerated expansion from a singularity where the size of the universe is
zero and the value of the scalar field is infinite. As the scale factor
expands to its maximum value, the scalar field decreases to zero. Then, as the
scale factor re-collapses to a big-crunch singularity, the scalar field
increases again until it blows up where the scale factor vanishes. The behavior
of the scale factor and the scalar field in the plane $a-\phi$ is also plotted
in this figure, which clearly shows the divergent behavior of the
scalar field where the scale factor is singular.
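Again as our own numerical sanity check, the solution (31) can be verified against the Friedmann equation (29) with $g_{r}=0$; the parameter values are those of figure 2.

```python
import math

# Numerical check (ours): the solution (31) should satisfy (29) with
# g_r = 0, i.e. adot^2 + g_c a^2 - (g_s + C)/a^2 = 0 (figure-2 values).
g_c, g_s, C = 2.0, 1.0, 3.0

def a(t):
    return ((g_s + C)/g_c)**0.25*math.sqrt(math.sin(2*math.sqrt(g_c)*t))

def residual(t, h=1e-6):
    adot = (a(t + h) - a(t - h))/(2*h)   # central difference
    return adot**2 + g_c*a(t)**2 - (g_s + C)/a(t)**2

# one expansion/recollapse cycle lasts 0 < t < pi/(2 sqrt(g_c)) ~ 1.11
for t in (0.2, 0.5, 0.9):
    assert abs(residual(t)) < 1e-4
print("Eq. (31) solves the closed-universe Friedmann equation (29)")
```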
Figure 2: Left: the qualitative behavior of the scale factor (blue line) and
scalar field (red line) when $\Lambda=0$ and $g_{r}=0$. Right: the classical
trajectory in the plane $a-\phi$. The figures are plotted for the numerical
values: $g_{c}=2$, $g_{s}=1$, $C=3$, $\lambda=1$, $m=2$ and $\phi_{c}=0$.
### 3.3 Early universe
In this subsection we consider the dynamics of the universe at early times
of the cosmic evolution, when the scale factor is very small. For such a
situation, the Friedmann equation (21) (again in the gauge $N=a(t)$) takes the
form
$\dot{a}^{2}=g_{r}+\frac{g_{s}+C}{a^{2}},$ (34)
in which we omit the terms containing $a^{2}$ and $a^{4}$. It is easy to
derive the scale factor from this equation as
$a(t)=\left[g_{r}\;(t+\delta)^{2}-\frac{g_{s}+C}{g_{r}}\right]^{\frac{1}{2}},$
(35)
where $\delta=\frac{\sqrt{g_{s}+C}}{g_{r}}$. This equation shows that,
regardless of whether the curvature index is positive or negative, the
universe has a power law expansion in the early times of its evolution coming
from a big-bang singularity. Following the same steps we took in the previous
sections will lead us to the following expressions for the scalar field and
the classical trajectory
$\phi(t)=\left[\phi_{c}^{\frac{m+2}{2}}-\frac{m+2}{4}\sqrt{\frac{C}{(g_{s}+C)\lambda}}\;\ln\frac{g_{r}(t+\delta)-\sqrt{g_{s}+C}}{g_{r}(t+\delta)+\sqrt{g_{s}+C}}\right]^{\frac{2}{m+2}},\qquad
m\neq-2,$ (36)
and
$\phi(a)=\left[\phi_{c}^{\frac{m+2}{2}}+\frac{m+2}{2}\sqrt{\frac{C}{(g_{s}+C)\lambda}}\;\ln\frac{\sqrt{g_{r}a^{2}+g_{s}+C}+\sqrt{g_{s}+C}}{\sqrt{g_{r}}\;a}\right]^{\frac{2}{m+2}}.$
(37)
As before we summarized all the results of this subsection in figure 3, which
illustrates the evolution and singularity of the dynamical variables.
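The early-time solution (35) can likewise be checked numerically (our own sanity check); the sketch below also confirms that $\delta$ is fixed by the big-bang initial condition $a(t=0)=0$. The parameter values are those of figure 3.

```python
import math

# Numerical check (ours): the early-time scale factor (35) should satisfy
# (34), adot^2 = g_r + (g_s + C)/a^2 (figure-3 values).
g_r, g_s, C = 1.0, 2.0, 1.0
delta = math.sqrt(g_s + C)/g_r

def a(t):
    return math.sqrt(g_r*(t + delta)**2 - (g_s + C)/g_r)

def residual(t, h=1e-6):
    adot = (a(t + h) - a(t - h))/(2*h)   # central difference
    return adot**2 - (g_r + (g_s + C)/a(t)**2)

# delta is fixed by the big-bang initial condition a(t=0) = 0:
assert abs(g_r*delta**2 - (g_s + C)/g_r) < 1e-12
for t in (0.5, 1.0, 3.0):
    assert abs(residual(t)) < 1e-4
print("Eq. (35) solves the early-time Friedmann equation (34)")
```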
Figure 3: Left: the behavior of the scale factor (blue line) and scalar field
(red line) in the early times of cosmic evolution. Right: the corresponding
classical trajectory in the plane $a-\phi$. The figures are plotted for
numerical values: $g_{r}=1$, $g_{s}=2$, $C=1$, $\lambda=1$, $m=2$ and
$\phi_{c}=\frac{1}{2}$.
### 3.4 Late time expansion
Another important issue in cosmological dynamics is the late time behavior of
the universe. In this limit the Friedmann equation (21), in the gauge $N=1$,
has the form
$\dot{a}^{2}+g_{c}-g_{\Lambda}a^{2}=0,$ (38)
where we have neglected the terms $1/a^{2}$ and $1/a^{4}$. It is easy to
see that this equation is solved by the following functions
$a(t)=\frac{1}{2g_{\Lambda}}\left(e^{\sqrt{g_{\Lambda}}t}+g_{c}g_{\Lambda}e^{-\sqrt{g_{\Lambda}}t}\right),\hskip
14.22636pta(t)=\frac{1}{2g_{\Lambda}}\left(g_{c}g_{\Lambda}e^{\sqrt{g_{\Lambda}}t}+e^{-\sqrt{g_{\Lambda}}t}\right),$
(39)
both of which enter the de Sitter phase
$a(t)\sim e^{\sqrt{g_{\Lambda}}t},$ (40)
when $t\rightarrow+\infty$. Similar to the calculations of the preceding
sections, we can obtain the following expression for the scalar field
$\phi(t)=\left[\phi_{c}^{\frac{m+2}{2}}+\frac{(m+2)g_{\Lambda}}{4g_{c}}\sqrt{\frac{C}{\lambda\,g_{c}}}\,\left(\frac{\tanh(\sqrt{g_{\Lambda}}\;t)}{\cosh(\sqrt{g_{\Lambda}}\;t)}+2\arctan\left(e^{\sqrt{g_{\Lambda}}\;t}\right)\right)\right]^{\frac{2}{m+2}},$
(41)
which tends to a constant value as $t\rightarrow+\infty$.
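Both branches in (39) can be checked numerically against (38) (our own sanity check; the values of $g_{c}$ and $g_{\Lambda}$ below are illustrative).

```python
import math

# Numerical check (ours): both branches in (39) should solve the late-time
# equation (38), adot^2 + g_c - g_L a^2 = 0 (illustrative values).
g_c, g_L = 2.0, 1.0

def a1(t):
    return (math.exp(math.sqrt(g_L)*t) + g_c*g_L*math.exp(-math.sqrt(g_L)*t))/(2*g_L)

def a2(t):
    return (g_c*g_L*math.exp(math.sqrt(g_L)*t) + math.exp(-math.sqrt(g_L)*t))/(2*g_L)

def residual(a, t, h=1e-6):
    adot = (a(t + h) - a(t - h))/(2*h)   # central difference
    return adot**2 + g_c - g_L*a(t)**2

for t in (0.5, 2.0, 5.0):
    assert abs(residual(a1, t)) < 1e-3
    assert abs(residual(a2, t)) < 1e-3
print("Both branches of (39) satisfy (38)")
```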
## 4 Canonical quantization of the model
As we mentioned before, HL gravity is a generalization of the usual GR in the
UV regime, constructed in such a way that in the low-energy limit standard GR is
recovered. Therefore, from a cosmological point of view, one may obtain some
nonsingular bouncing solutions. From this perspective, this theory may be
considered as an alternative to inflation, since it is expected to solve
the flatness and horizon problems and generate scale-invariant perturbations
for the early universe without the exponential expansion usually used
in inflationary theories [28].
On the other hand, at the background (non-perturbative) level almost all
solutions to the Einstein field equations exhibit some kind of
singularity. Cosmological solutions with conventional
matter fields are no exception to this rule and mainly exhibit big-bang-type
singularities. Any hope of eliminating these singularities lies in the
development of a complete quantum theory of gravity. In the
absence of such a theory, it is useful to
describe the quantum state of the universe in the context of quantum
cosmology, in which, based on the canonical quantization procedure, the
evolution of the universe is described by a wave function on the
minisuperspace. In other words, in this view, the system in question is
reduced to a conventional quantum mechanical system. In what follows,
according to this procedure, we are going to overcome the singularities that
appear in the classical model.
Now let us focus our attention on the quantization of the model described in
the previous section. To do this, we start with the Hamiltonian (13). As we
know, the lapse function in such Hamiltonians appears as a Lagrange
multiplier, so we have the Hamiltonian constraint $H=0$. This means that the
canonical quantization procedure demands that the quantum
states of the system (here the universe) be annihilated by the quantum
(operator) version of $H$, which yields the WDW equation
$\hat{H}\Psi(a,\phi)=0$, where $\Psi(a,\phi)$ is the wave function of the
universe. To obtain the differential form of this equation, using the
usual representation $P_{q}\rightarrow-i\partial_{q}$, we are led to the
following WDW equation
$\displaystyle\frac{1}{4a}\left(\frac{\partial^{2}}{\partial
a^{2}}+\frac{\beta}{a}\frac{\partial}{\partial
a}\right)\Psi(a,\phi)-\frac{1}{4a^{3}F(\phi)}\left(\frac{\partial^{2}}{\partial\phi^{2}}+\frac{\kappa
F^{\prime}(\phi)}{F(\phi)}\frac{\partial}{\partial\phi}\right)\Psi(a,\phi)$
$\displaystyle+\left(-g_{c}a+g_{\Lambda}a^{3}+\frac{g_{r}}{a}+\frac{g_{s}}{a^{3}}\right)\Psi(a,\phi)=0,$
(42)
where the parameters $\beta$ and $\kappa$ represent the ambiguity in the
ordering of the factors $(a,P_{a})$ and $(\phi,P_{\phi})$ respectively. Clearly,
there are many possibilities for choosing these parameters. For
example, with $\beta=\kappa=0$ there is no factor ordering, while with
$\beta=\kappa=1$ the kinetic term of the Hamiltonian takes the form of the
Laplacian $-\frac{1}{2}\nabla^{2}$ of the minisuperspace. In general, as is
clear from the WDW equation, the resulting form of the wave function depends
on the chosen factor ordering [29]. However, it can be shown that the factor-
ordering parameter does not affect semiclassical calculations in quantum
cosmology [30], so for convenience one usually chooses a special value for
it in specific models.
As the first step in solving equation (42), let us separate the variables
as $\Psi(a,\phi)=\psi(a)\Phi(\phi)$, which yields the following
differential equations for the functions $\psi(a)$ and $\Phi(\phi)$:
$\frac{d^{2}\psi(a)}{da^{2}}+\frac{\beta}{a}\frac{d\psi(a)}{da}+4\left(-g_{c}a^{2}+g_{\Lambda}a^{4}+g_{r}+\frac{g_{s}+w^{2}}{a^{2}}\right)\psi(a)=0,$
(43)
and
$\frac{d^{2}\Phi(\phi)}{d\phi^{2}}+\frac{\kappa
F^{\prime}(\phi)}{F(\phi)}\frac{d\Phi(\phi)}{d\phi}+4w^{2}F(\phi)\Phi(\phi)=0,$
(44)
with $w$ being a separation constant. As in the classical case, here we
examine the analytical solutions of the above equations in a few specific
cases. Moreover, we assume that the wave functions obey the
boundary conditions
$\Psi(a=0,\phi)=0,\hskip 14.22636pt\mbox{Dirichlet B.C.},$ (45)
$\qquad\left.\frac{\partial\Psi(a,\phi)}{\partial a}\right|_{a=0}=0,\hskip
14.22636pt\mbox{Neumann B.C.},$ (46)
where the first condition is called the DeWitt boundary condition, imposed to
avoid the singularity in the quantum domain. In what follows we will deal with
the resulting quantum cosmology in the same special cases in which we have
already examined the classical solutions.
### 4.1 Flat quantum universe with cosmological constant: $k=0$, $\Lambda\neq
0$
In this case, by selecting $\beta=-2$, equation (43) reads
$\frac{d^{2}\psi(a)}{da^{2}}-\frac{2}{a}\frac{d\psi(a)}{da}+4\left(g_{\Lambda}a^{4}+\frac{w^{2}}{a^{2}}\right)\psi(a)=0,$
(47)
the solutions of which in terms of Bessel functions $J_{\nu}(z)$ and
$Y_{\nu}(z)$ are as follows
$\psi(a)=C_{1}\;a^{\frac{3}{2}}\;\mathrm{J}_{\frac{i}{6}\sqrt{16w^{2}-9}}\left(\frac{2}{3}\sqrt{g_{\Lambda}}\;a^{3}\right)+C_{2}\;a^{\frac{3}{2}}\;\mathrm{Y}_{\frac{i}{6}\sqrt{16w^{2}-9}}\left(\frac{2}{3}\sqrt{g_{\Lambda}}\;a^{3}\right),$
(48)
where $C_{1}$ and $C_{2}$ are integration constants. In the case where the
order of the Bessel functions is imaginary ($w^{2}>9/16$), both of them satisfy
the DeWitt boundary condition, so both integration constants can be non-zero;
here we take $C_{1}=1$ and $C_{2}=i$.
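As an independent numerical check (ours), one can verify that (48) indeed solves (47). The sketch below uses mpmath, which supports Bessel functions of complex (here purely imaginary) order, with illustrative values $g_{\Lambda}=1$ and $w=1$.

```python
import mpmath as mp

# Check (ours, illustrative values): psi(a) = a^{3/2} J_{i mu}((2/3) sqrt(g_L) a^3)
# from (48) solves psi'' - (2/a) psi' + 4 (g_L a^4 + w^2/a^2) psi = 0, eq. (47).
mp.mp.dps = 30
g_L, w = mp.mpf(1), mp.mpf(1)
order = mp.mpc(0, 1)*mp.sqrt(16*w**2 - 9)/6     # imaginary Bessel order
b = mp.mpf(2)/3*mp.sqrt(g_L)

def psi(a):
    return a**mp.mpf('1.5')*mp.besselj(order, b*a**3)

def residual(a):
    return (mp.diff(psi, a, 2) - 2/a*mp.diff(psi, a)
            + 4*(g_L*a**4 + w**2/a**2)*psi(a))

for a0 in (mp.mpf('0.5'), mp.mpf(1), mp.mpf(2)):
    assert abs(residual(a0)) < mp.mpf('1e-10')
print("The Bessel solution (48) satisfies the radial WDW equation (47)")
```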
On the other hand, with the coupling function $F(\phi)=\lambda\phi^{m}$ and
the ordering parameter $\kappa=0$, equation (44) takes the form
$\frac{d^{2}\Phi(\phi)}{d\phi^{2}}+4w^{2}\lambda\phi^{m}\Phi(\phi)=0,$ (49)
with solutions
$\Phi(\phi)=C_{3}\;\sqrt{\phi}\,\mathrm{J}_{\frac{1}{m+2}}\left(\frac{4\sqrt{\lambda}w}{m+2}\phi^{\frac{m+2}{2}}\right)+C_{4}\;\sqrt{\phi}\,\mathrm{Y}_{\frac{1}{m+2}}\left(\frac{4\sqrt{\lambda}w}{m+2}\phi^{\frac{m+2}{2}}\right).$
(50)
Thus, the eigenfunctions of the WDW equation can be written as
$\displaystyle\Psi_{w}(a,\phi)$ $\displaystyle=$
$\displaystyle\psi_{w}(a)\Phi_{w}(\phi)$ (51) $\displaystyle=$ $\displaystyle
a^{\frac{3}{2}}\sqrt{\phi}\;\mathrm{H}^{(1)}_{\frac{i}{6}\sqrt{16w^{2}-9}}\left(\frac{2}{3}\sqrt{g_{\Lambda}}\;a^{3}\right)\,\mathrm{J}_{\frac{1}{m+2}}\left(\frac{4\sqrt{\lambda}w}{m+2}\phi^{\frac{m+2}{2}}\right),$
where we choose $C_{4}=0$ to have well-defined functions over the whole range
of the variable $\phi$, and
$\mathrm{H}^{(1)}_{\nu}(z)=\mathrm{J}_{\nu}(z)+i\mathrm{Y}_{\nu}(z)$ are the
Hankel functions. We may now write the general solution to the WDW equation
as a superposition of the above eigenfunctions, that is
$\displaystyle\Psi(a,\phi)$ $\displaystyle=$
$\displaystyle\int_{D}dw\;f(w)\;\Psi_{w}(a,\phi)$ (52) $\displaystyle=$
$\displaystyle
a^{\frac{3}{2}}\sqrt{\phi}\int_{D}dw\;f(w)\;\mathrm{H}^{(1)}_{\frac{i}{6}\sqrt{16w^{2}-9}}\left(\frac{2}{3}\sqrt{g_{\Lambda}}\;a^{3}\right)\,\mathrm{J}_{\frac{1}{m+2}}\left(\frac{4\sqrt{\lambda}w}{m+2}\phi^{\frac{m+2}{2}}\right),$
where $f(w)$ is a suitable weight function used to construct the quantum wave
packets and $D=(-\infty,-3/4]\cup[+3/4,+\infty)$ is the domain over which the
integral is taken. This expression is too complicated to yield an
analytical closed form for the wave function, and choosing a
function $f(w)$ that leads to an analytical solution is not an easy task.
However, such weight functions in quantum systems can be chosen as a shifted
Gaussian,
$f(w)=w^{p}e^{-\sigma(w-w_{0})^{2}},$ (53)
which is widely used in quantum mechanics as a way to construct localized
states. This is because weight factors of this type are centered about a
special value of their argument and fall off rapidly away from that
center. Due to this behavior, the wave packet resulting from (52)
after integration also has a Gaussian-like profile, i.e. it is localized about
some special values of its arguments.
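To illustrate how such a wave packet can be evaluated in practice, the sketch below (ours, not the paper's code) computes $\Psi(a,\phi)$ from (52) with the Gaussian weight (53). For speed, the domain $D$ is truncated to $[0.8,4]$ on the positive branch and sampled on a coarse grid; the parameters follow figure 4 ($g_{\Lambda}=\lambda=1$, $m=2$, $p=0$, $\sigma=1/10$, $w_{0}=1/2$), so this is a rough demonstration rather than an accurate quadrature.

```python
import mpmath as mp

# Illustrative sketch (ours) of evaluating the wave packet (52) with the
# Gaussian weight (53); D truncated to [0.8, 4], coarse Riemann sum.
mp.mp.dps = 20
sigma, w0 = mp.mpf('0.1'), mp.mpf('0.5')

def hankel1(nu, z):                       # H^(1)_nu = J_nu + i Y_nu
    return mp.besselj(nu, z) + mp.mpc(0, 1)*mp.bessely(nu, z)

def Psi(a, phi, N=40):
    a, phi = mp.mpf(a), mp.mpf(phi)
    total = mp.mpc(0)
    ws = mp.linspace(mp.mpf('0.8'), mp.mpf(4), N)
    dw = ws[1] - ws[0]
    for w in ws:
        nu = mp.mpc(0, 1)*mp.sqrt(16*w**2 - 9)/6   # imaginary Bessel order
        f = mp.exp(-sigma*(w - w0)**2)             # weight (53) with p = 0
        total += (f*hankel1(nu, mp.mpf(2)/3*a**3)
                  *mp.besselj(mp.mpf('0.25'), w*phi**2)*dw)
    return a**mp.mpf('1.5')*mp.sqrt(phi)*total

val = abs(Psi(1, 1))**2        # |Psi|^2 at one point of the a-phi plane
assert mp.isfinite(val) and val >= 0
print("|Psi(1,1)|^2 evaluated:", val)
```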
To realize the correlation between these quantum patterns and the classical
trajectories, note that in the minisuperspace formulation the cosmic
evolution of the universe is modeled by the motion of a point particle in a
space with minisuperspace coordinates. In this sense, one of the most
important features of quantum cosmology is the recovery of classical solutions
from the corresponding quantum model or, in other words, how the WDW wave
functions can predict a classical universe. In quantum cosmology, one usually
constructs a coherent wave packet with suitable asymptotic behavior in the
minisuperspace, peaking in the vicinity of the classical trajectory. In figure
4, we have plotted the qualitative behavior of the square of the wave function
(52) with the above-mentioned Gaussian weight factor, together with its
contour plot, for typical numerical values of the parameters. As this figure
shows, the wave function has its dominant peaks in the vicinity of the
classical trajectories, and these peaks predict a universe emerging from a
non-zero value of the scale factor. Therefore, by quantizing the model we are
able to eliminate the classical singularity while being led to a quantum
pattern in good agreement with its classical counterpart.
Figure 4: The figures show the square of the wave function (left) and its
corresponding contour plot (right). Also, dashed line denotes the classical
trajectory of the system in phase space. The figures are plotted for the
numerical values: $g_{\Lambda}=1$, $\lambda=1$, $m=2$, $p=0$,
$\sigma=\frac{1}{10}$ and $w_{0}=\frac{1}{2}$.
### 4.2 Non-flat quantum universe with zero cosmological constant: $k\neq 0$,
$\Lambda=0$
In this case equation (43), the WDW equation for the scale factor, takes the
form
$\frac{d^{2}\psi(a)}{da^{2}}+4\left(-g_{c}a^{2}+g_{r}+\frac{g_{s}+w^{2}}{a^{2}}\right)\psi(a)=0,$
(54)
in which we have set the ordering parameter as $\beta=0$. The general
solutions to this differential equation are
$\psi(a)=C_{5}\,\frac{\mathrm{M}_{\mu\nu}\left(2\sqrt{g_{c}}\,a^{2}\right)}{\sqrt{a}}\,+\,C_{6}\,\frac{\mathrm{W}_{\mu\nu}\left(2\sqrt{g_{c}}\,a^{2}\right)}{\sqrt{a}},$
(55)
where $\mathrm{M}_{\mu\nu}(x)$ and $\mathrm{W}_{\mu\nu}(x)$ are Whittaker
functions with $\mu=\frac{g_{r}}{2\sqrt{g_{c}}}$ and
$\nu=\frac{i}{4}\sqrt{16(g_{s}+w^{2})-1}$. If, as in the classical case, we take
$g_{r}=0$ (or equivalently $g_{3}=-3g_{2}$), the Whittaker functions can be
expressed in terms of the modified Bessel functions $K_{i\nu}(z)$ and
$I_{i\nu}(z)$ as follows [31]
$\displaystyle\psi(a)=C_{7}\,\sqrt{a}\,\mathrm{K}_{\frac{i}{4}\sqrt{16(g_{s}+w^{2})-1}}\left(\sqrt{g_{c}}\,a^{2}\right)\,+\,C_{8}\,\sqrt{a}\,\mathrm{I}_{\frac{i}{4}\sqrt{16(g_{s}+w^{2})-1}}\left(\sqrt{g_{c}}\,a^{2}\right).$
(56)
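As a consistency check on (55)–(56), one can verify numerically that $\psi(a)=\sqrt{a}\,\mathrm{K}_{\nu}(\sqrt{g_{c}}\,a^{2})$ solves equation (54) with $g_{r}=0$. The following sketch (pure Python) uses the integral representation $\mathrm{K}_{\nu}(x)=\int_{0}^{\infty}e^{-x\cosh t}\cosh(\nu t)\,dt$ and the degenerate choice $g_{s}+w^{2}=0$, for which the order $\nu=1/4$ is real; the value $g_{c}=2$ is an arbitrary illustrative choice:

```python
from math import cosh, exp, sqrt

def bessel_k(nu, x, dt=0.01, T=14.0):
    # K_nu(x) = int_0^infty e^{-x cosh t} cosh(nu t) dt  (integral representation)
    s = 0.5 * exp(-x)  # t = 0 endpoint, half weight (trapezoid rule)
    for i in range(1, int(T / dt) + 1):
        t = i * dt
        s += exp(-x * cosh(t)) * cosh(nu * t)
    return s * dt

g_c, nu = 2.0, 0.25  # illustrative g_c; g_s + w^2 = 0 makes the order 1/4 real

def psi(a):
    return sqrt(a) * bessel_k(nu, sqrt(g_c) * a * a)

# finite-difference check of eq. (54) with g_r = 0:  psi'' = 4 g_c a^2 psi
h = 1e-4
for a in (0.6, 0.9, 1.2, 1.5):
    d2 = (psi(a + h) - 2 * psi(a) + psi(a - h)) / h**2
    assert abs(d2 - 4 * g_c * a * a * psi(a)) < 1e-3 * abs(d2)
```

The trapezoid rule is very accurate here because the integrand is even in $t$ and decays doubly exponentially.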
Since the wave function must satisfy $\lim_{a\rightarrow\infty}\psi(a)=0$, we
retain only the modified Bessel function $K_{i\nu}(z)$ and set $C_{8}=0$. The
other part of the WDW equation is the
equation of the scalar field which in this case is also the same as equation
(49) and its solutions have already been given in relation (50). Therefore, if
the coefficients are selected as $C_{3}=1$ and $C_{4}=i$, the eigenfunctions
of the WDW equation read
$\Psi_{w}(a,\phi)=\sqrt{a\phi}\,\mathrm{K}_{\frac{i}{4}\sqrt{16(g_{s}+w^{2})-1}}\left(\sqrt{g_{c}}\,a^{2}\right)\,\mathrm{H}^{(1)}_{\frac{1}{m+2}}\left(\frac{4\sqrt{\lambda}w}{m+2}\phi^{\frac{m+2}{2}}\right),$
(57)
whose superposition gives the total wave function as
$\Psi(a,\phi)=\sqrt{a\phi}\int_{D^{\prime}}dw\,f(w)\,\mathrm{K}_{\frac{i}{4}\sqrt{16(g_{s}+w^{2})-1}}\left(\sqrt{g_{c}}\,a^{2}\right)\,\mathrm{H}^{(1)}_{\frac{1}{m+2}}\left(\frac{4\sqrt{\lambda}w}{m+2}\phi^{\frac{m+2}{2}}\right),$
(58)
where $f(w)$ is again a Gaussian-like weight factor in the form (53) and
$D^{\prime}$ is the domain of integration over $w$ as
$\displaystyle
D^{\prime}=\left\\{\begin{array}[]{ll}(-\infty,+\infty)&g_{s}\geq\frac{1}{16},\\\
(-\infty,-\frac{1}{4}\sqrt{1-16g_{s}})\cup(+\frac{1}{4}\sqrt{1-16g_{s}},+\infty)&g_{s}<\frac{1}{16}.\end{array}\right.$
(61)
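Note that the domain (61) records exactly when the order of the Bessel functions is purely imaginary (this appears to be the reason for the restriction): the condition
$16(g_{s}+w^{2})-1\geq 0\quad\Longleftrightarrow\quad w^{2}\geq\frac{1-16g_{s}}{16}$
holds for every $w$ when $g_{s}\geq\frac{1}{16}$, and for $|w|\geq\frac{1}{4}\sqrt{1-16g_{s}}$ when $g_{s}<\frac{1}{16}$.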
The results of the numerical study of this wave function are shown in figure
5. The similarities and differences between quantum and classical solutions,
and the fact that the wave functions’ peaks correspond very well to the
classical trajectories, are similar to those described at the end of the
previous subsection.
Figure 5: The figures show the square of the wave function (left) and its
corresponding contour plot (right). The dashed line denotes the classical
trajectory of the system in the plane $a-\phi$. The figures are plotted for
the numerical values: $g_{c}=2$, $g_{s}=1$, $\lambda=1$, $m=2$, $p=2$,
$\sigma=\frac{1}{2}$ and $w_{0}=3$.
### 4.3 Early quantum universe
In this last part of the article, we study the quantum dynamics of the model
in the early times of evolution of the universe. As is well known, the quantum
behavior of the universe is more important in this period, and it is the
quantum effects in this era that prevent the classical singularities. For
small values of the scale factor, equation (43) with $\beta=0$, takes the form
$\frac{d^{2}\psi(a)}{da^{2}}+4\left(g_{r}+\frac{g_{s}+w^{2}}{a^{2}}\right)\psi(a)=0,$
(62)
with solutions
$\psi(a)=C_{9}\;\sqrt{a}\,\mathrm{J}_{\frac{i}{2}\sqrt{16(g_{s}+w^{2})-1}}(2\sqrt{g_{r}}\,a)+C_{10}\;\sqrt{a}\,\mathrm{Y}_{\frac{i}{2}\sqrt{16(g_{s}+w^{2})-1}}(2\sqrt{g_{r}}\,a).$
(63)
Since both Bessel functions satisfy the DeWitt boundary condition, both can
contribute to the wave function, so we take the coefficients as $C_{9}=1$ and
$C_{10}=i$. The solutions for the scalar field are the same as
the ones we presented in the previous two subsections. So, the final form of
the wave function in this case is
$\displaystyle\Psi(a,\phi)$ $\displaystyle=$
$\displaystyle\int_{D^{\prime}}dw\;f(w)\;\psi_{w}(a)\Phi_{w}(\phi)$ (64)
$\displaystyle=$
$\displaystyle\sqrt{a\phi}\;\int_{D^{\prime}}dw\;f(w)\;\mathrm{H}^{(1)}_{\frac{i}{2}\sqrt{16(g_{s}+w^{2})-1}}(2\sqrt{g_{r}}\,a)\;\mathrm{H}^{(1)}_{\frac{1}{m+2}}\left(\frac{4\sqrt{\lambda}w}{m+2}\phi^{\frac{m+2}{2}}\right),$
where as before $f(w)$ is a Gaussian-like weight function and the domain of
integration $D^{\prime}$ is given by (61). The final results are shown in
figure 6. This figure shows that the universe began its evolution from a
non-zero value of the scale factor, which in turn means that quantum effects
have eliminated the singularity of the classical model. Moreover, as the
figure clearly shows, the wave function has its peaks in the vicinity of the
classical trajectories shown in figure 3, which indicates the compatibility of
the classical and quantum solutions.
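The choice $C_{9}=1$, $C_{10}=i$ combines the two Bessel functions into the Hankel function $\mathrm{H}^{(1)}_{\nu}=\mathrm{J}_{\nu}+i\mathrm{Y}_{\nu}$. As a sanity check on (63), the following sketch (pure Python, illustrative parameter values) takes the degenerate case $g_{s}+w^{2}=0$, for which the order reduces to $\pm 1/2$ and the Bessel functions have closed forms, and verifies equation (62) by finite differences:

```python
from math import sqrt, sin, cos, pi

g_r = 1.0  # illustrative value; assumption: g_s + w^2 = 0, so the order is 1/2

def h1_half(x):
    # H^(1)_{1/2}(x) = J_{1/2}(x) + i Y_{1/2}(x), with the closed forms
    # J_{1/2}(x) = sqrt(2/(pi x)) sin x,  Y_{1/2}(x) = -sqrt(2/(pi x)) cos x
    c = sqrt(2 / (pi * x))
    return complex(c * sin(x), -c * cos(x))

def psi(a):
    return sqrt(a) * h1_half(2 * sqrt(g_r) * a)

# finite-difference check of eq. (62): psi'' + 4 g_r psi = 0 when g_s + w^2 = 0
h = 1e-5
for a in (0.5, 1.0, 1.7):
    d2 = (psi(a + h) - 2 * psi(a) + psi(a - h)) / h**2
    assert abs(d2 + 4 * g_r * psi(a)) < 1e-4 * abs(d2)
```

In this degenerate case $\psi(a)\propto e^{2i\sqrt{g_{r}}\,a}$, so the check reduces to an exact identity up to finite-difference error.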
Figure 6: The figures show the square of the wave function (left) and its
corresponding contour plot (right) for the early times of cosmic evolution.
The dashed line denotes the classical trajectory of the system in the plane
$a-\phi$. The figures are plotted for the numerical values: $g_{r}=1$,
$g_{s}=2$, $\lambda=1$, $m=2$, $p=0$, $\sigma=1$ and $w_{0}=\frac{1}{2}$.
## 5 Summary
In this paper we have studied the classical and quantum FRW cosmology in the
framework of the projectable HL theory of gravity without the detailed balance
condition. The phase space variables turn out to correspond to the scale
factor of the FRW metric and a non-linear self-coupling scalar field with
which the action of the model is augmented. After an introduction to the HL
theory, we presented, based on a dimensional analysis, the terms that are
allowed to be included in the potential part of the action of this theory.
This process enabled us to write the Lagrangian and Hamiltonian of the model
in terms of the minisuperspace variables and some correction parameters coming
from the HL theory.
We then studied the classical cosmology of this model and formulated the
corresponding equations within the framework of the Hamiltonian formalism. Though,
in general, the classical equations did not have exact solutions, we analyzed
their behavior in the special cases of the flat universe with cosmological
constant, the non-flat universe with vanishing cosmological constant, the
early and late times of cosmic evolution and obtained analytical expressions
for the scale factor and the scalar field in each of these cases. Another
point to note about the classical solutions is the choice of the appropriate
lapse function in each case, which actually represents the time gauge in which
that solution is obtained. We have seen that the classical expressions for the
scale factor and scalar field exhibit some kinds of classical singularities.
These singularities are mainly of the big-bang type for the scale factor and
blowup type for the scalar field.
The last part of the paper is devoted to the quantization of the model
described above, in which we saw that the classical singular behavior is
modified. In the quantum models, we separated the WDW equation and showed that
its eigenfunctions can be obtained in terms of analytical functions. By an
appropriate superposition of the eigenfunctions we constructed the integral
form of the wave functions. Although these integral expressions are too
complicated to yield an analytical closed form for the wave functions, we have
plotted, using numerical methods, the approximate behavior of the square of
the wave functions for typical values of the parameters. In each case, the
pattern followed by the wave functions shows non-singular behavior near the
classical singularity. In addition to singularity avoidance, we saw that the
wave functions' peaks lie, to a good approximation, in the vicinity of the
classical trajectories, which indicates that the classical and quantum
solutions agree at late times of cosmic evolution.
## References
* [1] P. Hořava, JHEP 0903 (2009) 020 (arXiv: 0812.4287 [hep-th])
* [2] P. Hořava, Phys. Rev. D 79 (2009) 084008 (arXiv: 0901.3775 [hep-th])
* [3] P. Hořava, Phys. Rev. Lett. 102 (2009) 161301 (arXiv: 0902.3657 [hep-th])
* [4] P. Hořava, Phys. Lett. B 694 (2010) 172 (arXiv: 0811.2217 [hep-th])
* [5] B. Vakili and V. Kord, Gen. Rel. Grav. 45 (2013) 1313 (arXiv: 1301.0809 [gr-qc])
* [6] H. Ardehali, P. Pedram and B. Vakili, Acta Phys. Pol. B 48 (2017) 827 (arXiv: 1705.09618 [gr-qc])
* [7] H. Ardehali and P. Pedram, Phys. Rev. D 93 (2016) 043532 (arXiv: 1601.05965 [gr-qc])
* [8] D. Blas, O. Pujolas and S. Sibiryakov, Phys. Rev. Lett. 104 (2010) 181302 (arXiv: 0909.3525 [hep-th])
* [9] D. Blas, O. Pujolas and S. Sibiryakov, JHEP 04 (2011) 018 (arXiv: 1007.3503 [hep-th])
* [10] S. Mukohyama, Phys. Rev. D 80 (2009) 064005 (arXiv: 0905.3563 [hep-th])
* [11] T. P. Sotiriou, M. Visser and S. Weinfurtner, Phys. Rev. Lett. 102 (2009) 251601 (arXiv: 0904.4464 [hep-th])
T.P. Sotiriou, M. Visser and S. Weinfurtner, J. High Energy Phys. JHEP 10
(2009) 033 (arXiv: 0905.2798 [hep-th])
* [12] S. Weinfurtner, T. P. Sotiriou and M. Visser, J. Phys.: Conf. Ser. 222 (2010) 012054 (arXiv: 1002.0308 [gr-qc])
M. Visser, The Kerr spacetime: A brief introduction (arXiv: 0706.0622 [gr-qc])
* [13] C. Bogdanos and E. N. Saridakis, Class. Quantum Grav. 27 (2010) 075005 (arXiv: 0907.1636 [hep-th])
* [14] P. Hořava and C. M. Melby-Thompson, Phys. Rev. D 82 (2010) 064027 (arXiv: 1007.2410 [hep-th])
* [15] T. Zhu, F.-W. Shu, Q. Wu and A. Wang, Phys. Rev. D 85 (2012) 044053 (arXiv: 1110.5106 [hep-th])
T. Zhu, Q. Wu, A. Wang and F. Shu, Phys. Rev. D 84 (2011) 101502 (arXiv:
1108.1237 [hep-th])
* [16] A. Wang, Int. J. Mod. Phys. D 26 (2017) 1730014 (arXiv: 1701.06087 [gr-qc])
* [17] O. Bertolami and C.A.D. Zarro, Phys. Rev. D 84 (2011) 044042 (arXiv: 1106.0126 [hep-th])
* [18] J. P. M. Pitelli and A. Saa, Phys. Rev. D 86 (2012) 063506 (arXiv: 1204.4924 [gr-qc])
* [19] T. Christodoulakis and N. Dimakis, Classical and Quantum Bianchi Type III vacuum Hořava-Lifshitz Cosmology (arXiv: 1112.0903 [gr-qc])
G. Leon and A. Paliathanasis, Eur. Phys. J. C 79 (2019) 746
* [20] B.-F. Li, V. H. Satheeshkumar and A. Wang, Phys. Rev. D 93 (2016) 064043 (arXiv: 1511.06780 [gr-qc])
* [21] B.-F. Li, A. Wang, Y. Wu and Z. C. Wu, Phys. Rev. D 90 (2014) 124076 (arXiv: 1408.2345 [hep-th])
* [22] M. Visser, Phys. Rev. D 80 (2009) 025011 (arXiv: 0902.0590 [hep-th])
* [23] J. P. M. Pitelli, Phys. Rev. D 92 (2015) 084012 (arXiv: 1509.04983 [gr-qc])
* [24] G. Leon and E. N. Saridakis, JCAP 0911 (2009) 006 (arXiv: 0909.3571 [hep-th])
* [25] B. Vakili, Phys. Lett. B 688 (2010) 129
* [26] E. Kiritsis and G. Kofinas, Nucl. Phys. B 821 (2009) 467 (arXiv: 0904.1334 [hep-th])
* [27] K. Maeda, Y. Misonoh and T. Kobayashi, Phys. Rev. D 82 (2010) 064024 (arXiv: 1006.2739 [hep-th])
* [28] R. Brandenberger, Phys. Rev. D 80 (2009) 043516 (arXiv: 0904.2835 [hep-th])
G. Calcagni, JHEP 0909 (2009) 112 (arXiv: 0904.0829 [hep-th])
S. Mukohyama, JCAP 0906 (2009) 001 (arXiv: 0904.2190 [hep-th])
* [29] R. Steigl and F. Hinterleitner, Class. Quantum Grav. 23 (2006) 3879
* [30] S. W. Hawking and D. N. Page, Nucl. Phys. B 264 (1986) 185
* [31] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions 1972 (New York: Dover)
# Golod and tight 3-manifolds
Kouyemon Iriye Department of Mathematical Sciences, Osaka Prefecture
University, Sakai, 599-8531, Japan<EMAIL_ADDRESS>and Daisuke
Kishimoto Department of Mathematics, Kyoto University, Kyoto, 606-8502, Japan
<EMAIL_ADDRESS>
###### Abstract.
The notions of Golodness and tightness for simplicial complexes come from
algebra and geometry, respectively. We prove that these two notions are
equivalent for 3-manifold triangulations, through a topological
characterization of a
polyhedral product for a tight-neighborly manifold triangulation of dimension
$\geq 3$.
###### Key words and phrases:
Golod complex, tight triangulation, polyhedral product, fat-wedge filtration
###### 2010 Mathematics Subject Classification:
Primary 57Q15, Secondary 13F55, 55U10
## 1\. Introduction
Let $\mathbb{F}$ be a field, and let $S=\mathbb{F}[x_{1},\ldots,x_{m}]$, where
we assume each $x_{i}$ is of degree 2. Serre [26] proved that for $R=S/I$
where $I$ is a homogeneous ideal of $S$, there is a coefficientwise inequality
$P(\operatorname{Tor}^{R}(\mathbb{F},\mathbb{F});t)\leq\frac{(1+t^{2})^{m}}{1-t(P(\operatorname{Tor}^{S}(R,\mathbb{F});t)-1)}$
where $P(V;t)$ denotes the Poincaré series of a graded vector space $V$. In
the extreme case that the equality holds, $R$ is called _Golod_. It was Golod
who proved that $R$ is Golod if and only if all products and (higher) Massey
products in the Koszul homology of $R$ vanish, where the Koszul homology of
$R$ is isomorphic with $\operatorname{Tor}^{S}(R,\mathbb{F})$ as a vector
space.
Let $K$ be a simplicial complex with vertex set $[m]=\\{1,2,\ldots,m\\}$. Let
$\mathbb{F}[K]$ denote the Stanley-Reisner ring of $K$ over $\mathbb{F}$,
where we assume generators of $\mathbb{F}[K]$ are of degree 2. Then
$\mathbb{F}[K]$ expresses combinatorial properties of $K$, and conversely, it
is of particular interest to translate a given algebraic property of the
Stanley-Reisner ring $\mathbb{F}[K]$ into a combinatorial property of $K$. We
say that $K$ is _$\mathbb{F}$-Golod_ if $\mathbb{F}[K]$ is Golod. We aim to
characterize Golod complexes combinatorially.
Recently, a new approach to a combinatorial characterization of Golod
complexes has been taken. We can construct a space $Z_{K}$, called the _moment-angle
complex_ for $K$, in accordance with the combinatorial information of $K$.
Then combinatorial properties are encoded on the topology of $Z_{K}$, and in
particular, Golodness can be read from a homotopical property of $Z_{K}$ as
follows. Baskakov, Buchstaber and Panov [6] proved that the cohomology of
$Z_{K}$ with coefficients in $\mathbb{F}$ is isomorphic with the Koszul
homology of $\mathbb{F}[K]$, where the isomorphism respects products and
(higher) Massey products. Then it follows that $K$ is Golod over any field
whenever $Z_{K}$ is a suspension, and so Golod complexes have been studied
also in connection with desuspension of $Z_{K}$ and a more general _polyhedral
product_ [10, 11, 12, 14, 15, 16, 17, 19, 18]. See a survey [4] for more
information about moment-angle complexes and polyhedral products. Here we
remark that there is a Golod complex $K$ such that $Z_{K}$ is not a suspension
as shown by Yano and the first author [20].
In [15, 17, 19], the authors characterized Golod complexes of dimension one
and two in terms of both combinatorial properties of $K$ and desuspension of
$Z_{K}$. Here we recall a characterization of Golodness of a closed connected
surface triangulation, proved in [15]. Recall that a simplicial complex is
called _neighborly_ if every pair of vertices forms an edge.
###### Theorem 1.1.
Let $S$ be a triangulation of a closed connected $\mathbb{F}$-orientable
surface. Then the following statements are equivalent:
1. (1)
$S$ is $\mathbb{F}$-Golod;
2. (2)
$S$ is neighborly;
3. (3)
$Z_{S}$ is a suspension.
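As a concrete instance of condition (2), the 7-vertex Möbius torus, a vertex-minimal triangulation of the torus, is neighborly. A minimal check (assuming the standard facet list $\{i,i+1,i+3\}$ and $\{i,i+2,i+3\}$ mod 7):

```python
from itertools import combinations

# Facets of the 7-vertex Moebius torus (vertices 0..6); this facet list,
# {i, i+1, i+3} and {i, i+2, i+3} mod 7, is one standard presentation.
facets = [frozenset({i, (i + 1) % 7, (i + 3) % 7}) for i in range(7)] \
       + [frozenset({i, (i + 2) % 7, (i + 3) % 7}) for i in range(7)]

edges = {frozenset(e) for f in facets for e in combinations(sorted(f), 2)}

# neighborly: every pair of the 7 vertices spans an edge (21 pairs in total)
assert all(frozenset(p) in edges for p in combinations(range(7), 2))

# f-vector (7, 21, 14) and Euler characteristic 0, as expected for the torus
assert (7, len(edges), len(set(facets))) == (7, 21, 14)
```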
We introduce another notion of a simplicial complex coming from geometry.
S.-S. Chern and R.K. Lashof proved that the total absolute curvature of an
immersion $f\colon M\to\mathbb{R}^{n}$ of a compact manifold $M$ is bounded
below by the Morse number of some Morse function on $M$. On the other hand,
the Morse number is bounded below by the Betti number. An immersion $f$ is
called _tight_ if the total absolute curvature of $f$ equals the Betti number
of $M$; this is the case in which the total absolute curvature is minimal and
attained by the Betti number. See [22,
23]. It is known that an immersion $f$ is tight if and only if for almost
every closed half-space $H$, the inclusion $f(M)\cap H\to f(M)$ is injective
in homology.
Tightness of a simplicial complex is defined as a combinatorial analog of
tightness of an immersion. See [22] for details. Let $K$ be a simplicial
complex with vertex set $[m]$. For $\emptyset\neq I\subset[m]$, the full
subcomplex of $K$ over $I$ is defined by
$K_{I}=\\{\sigma\in K\mid\sigma\subset I\\}.$
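The full subcomplex operation is straightforward to transcribe; a minimal sketch (function and variable names are ours), with facets given as vertex tuples:

```python
from itertools import combinations

def faces(facets):
    # all non-empty subsets of the facets, i.e. the simplices of K
    return {frozenset(c) for f in facets
            for r in range(1, len(f) + 1) for c in combinations(f, r)}

def full_subcomplex(facets, I):
    # K_I = { sigma in K : sigma subset of I }
    I = frozenset(I)
    return {s for s in faces(facets) if s <= I}

# example: K = boundary of the triangle on {1,2,3}; K_{{1,2}} is the edge {1,2}
K = [(1, 2), (2, 3), (1, 3)]
assert full_subcomplex(K, {1, 2}) == {frozenset({1}), frozenset({2}),
                                      frozenset({1, 2})}
```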
###### Definition 1.2.
Let $K$ be a connected simplicial complex with vertex set $[m]$. We say that
$K$ is _$\mathbb{F}$-tight_ if the natural map $H_{*}(K_{I};\mathbb{F})\to
H_{*}(K;\mathbb{F})$ is injective for each $\emptyset\neq I\subset[m]$.
Golodness and tightness have origins in different fields of mathematics,
algebra and geometry, respectively. The aim of this paper is to prove that
these two seemingly unrelated notions are equivalent for 3-manifold
triangulations, through the topology of $Z_{K}$ or a more general _polyhedral
product_ (see Section 5). Now we state the main theorem.
###### Theorem 1.3.
Let $M$ be a triangulation of a closed connected $\mathbb{F}$-orientable
3-manifold. Then the following statements are equivalent:
1. (1)
$M$ is $\mathbb{F}$-Golod;
2. (2)
$M$ is $\mathbb{F}$-tight;
3. (3)
$Z_{M}$ is a suspension.
Recall that a $d$-manifold triangulation is called _stacked_ if it is the
boundary of a $(d+1)$-manifold triangulation whose interior simplices are of
dimension $\geq d$. Stacked manifold triangulations have been studied in
several directions, and we will use its connection to tightness (Section 2).
See [3, 9, 22] and references therein for more on stacked manifold
triangulations. Bagchi, Datta and Spreer [3] (cf. Theorem 2.3) proved that a
closed connected $\mathbb{F}$-orientable 3-manifold triangulation is
$\mathbb{F}$-tight if and only if it is neighborly and stacked. Then we get
the following corollary of Theorem 1.3, which enables us to compare with the
2-dimensional case Theorem 1.1.
###### Corollary 1.4.
Let $M$ be a triangulation of a closed connected $\mathbb{F}$-orientable
3-manifold. Then the following statements are equivalent:
1. (1)
$M$ is $\mathbb{F}$-Golod;
2. (2)
$M$ is neighborly and stacked;
3. (3)
$Z_{M}$ is a suspension.
We will investigate a relation between Golodness and tightness of $d$-manifold
triangulations for $d\geq 3$, not only for $d=3$, through tight-
neighborliness. We will prove the following theorem, where Theorem 1.3 is its
special case $d=3$.
###### Theorem 1.5.
Let $M$ be a triangulation of a closed connected $\mathbb{F}$-orientable
$d$-manifold for $d\geq 3$, and consider the following conditions:
1. (1)
$M$ is $\mathbb{F}$-Golod;
2. (2)
$M$ is $\mathbb{F}$-tight;
3. (3)
$M$ is tight-neighborly;
4. (4)
the fat-wedge filtration of $\mathbb{R}Z_{M}$ is trivial.
Then there are implications
$(1)\quad\Longrightarrow\quad(2)\quad\Longleftarrow\quad(3)\quad\Longrightarrow\quad(4)\quad\Longrightarrow\quad(1).$
Moreover, for $d=3$, the implication $(2)\,\Rightarrow\,(3)$ also holds, so
all conditions are equivalent.
Remarks on Theorem 1.5 are in order. Tight-neighborly triangulations of
$d$-manifolds for $d\geq 3$ will be defined in Section 2. The space
$\mathbb{R}Z_{K}$ is the real moment-angle complex, and properties of its fat-
wedge filtration will be given in Section 5. In particular, we will see that
if the fat-wedge filtration of $\mathbb{R}Z_{K}$ is trivial, then $Z_{K}$ is a
suspension. So Theorem 1.3 is the special case of Theorem 1.5 for $d=3$ as
mentioned above. Datta and Murai [9] proved that if $M$ is tight-neighborly
and $d\geq 4$, then it is $\mathbb{F}$-tight and $\beta_{i}(M;\mathbb{F})=0$
for $2\leq i\leq d-2$, where $\beta_{i}(M;\mathbb{F})=\dim
H_{i}(M;\mathbb{F})$ denotes the $i$-th Betti number. So if
$\beta_{i}(M;\mathbb{F})=0$ for $2\leq i\leq d-2$ and $d\geq 4$, then all
conditions in Theorem 1.5 are equivalent, where the triviality of the Betti
numbers is necessary because there is an $\mathbb{F}$-tight 9-vertex
triangulation of $\mathbb{C}P^{2}$ for any field $\mathbb{F}$ as in [2,
Example 3.15] which is not tight-neighborly.
The paper is organized as follows. Section 2 collects properties of tight and
tight-neighborly manifold triangulations that will be needed in later
sections. Section 3 introduces a weak version of Golodness and proves that
weak Golodness implies tightness of orientable manifold triangulations. Section 4
investigates a simplicial complex $F(M)$ constructed from a tight-neighborly
$d$-manifold triangulation $M$ for $d\geq 3$, and Section 5 recalls the fat-
wedge filtration technique for polyhedral products which is the main
ingredient in desuspending $Z_{K}$. Section 6 applies the results in Sections
4 and 5 to prove Theorem 1.5. Finally, Section 7 poses two problems for a
further study of a relationship between Golodness and tightness.
Acknowledgement: The first author was supported by JSPS KAKENHI 26400094, and
the second author was supported by JSPS KAKENHI 17K05248.
## 2\. Tightness
This section collects facts about tight and tight-neighborly manifold
triangulations that we will use. As mentioned in Section 1, tightness of a
simplicial complex is a discrete analog of a tight space studied in
differential geometry with connection to minimality of the total absolute
curvature, and tight complexes have been studied mainly for manifold
triangulations. First, we show:
###### Lemma 2.1.
Every $\mathbb{F}$-tight complex is neighborly.
###### Proof.
Let $K$ be an $\mathbb{F}$-tight complex. Then for two vertices $v,w$ of $K$,
the natural map $H_{0}(K_{\\{v,w\\}};\mathbb{F})\to H_{0}(K;\mathbb{F})$ is
injective. Since $K$ is connected, $H_{0}(K;\mathbb{F})\cong\mathbb{F}$, and
so $H_{0}(K_{\\{v,w\\}};\mathbb{F})\cong\mathbb{F}$. Then $v$ and $w$ must be
joined by an edge. ∎
Next, we explain a conjecture on tight manifold triangulations. Let $|K|$
denote the geometric realization of $K$, and let
$f(K)=(f_{0}(K),f_{1}(K),\ldots,f_{\dim K}(K))$ denote the $f$-vector of $K$.
We say that $K$ is _strongly minimal_ if for any simplicial complex $L$ with
$|K|\cong|L|$, it holds that
$f_{i}(K)\leq f_{i}(L)$
for each $i\geq 0$. Kühnel and Lutz [22] conjectured that every
$\mathbb{F}$-tight triangulation of a closed connected manifold is strongly
minimal. Clearly, the only $\mathbb{F}$-tight closed connected 1-manifold
triangulation is the boundary of a 2-simplex, so the conjecture is true in
dimension 1. Moreover, the 2-dimensional case was verified as mentioned in
[22], and the 3-dimensional case was verified by Bagchi, Datta and Spreer [3].
But the case of dimensions $\geq 4$ is still open.
As for minimality of a manifold triangulation, we have another notion
introduced by Lutz, Sulanke and Swartz [24].
###### Definition 2.2.
A closed connected $d$-manifold triangulation $M$ with vertex set $[m]$ for
$d\geq 3$ is _tight-neighborly_ if
$\binom{m-d-1}{2}=\binom{d+2}{2}\beta_{1}(M;\mathbb{F}).$
Tight-neighborly manifold triangulations are known to be vertex minimal. By
definition, tight-neighborliness seems to depend on the ground field
$\mathbb{F}$, but it is in fact independent of $\mathbb{F}$, since tight-neighborly
manifold triangulations are neighborly and stacked.
Tightness and tight-neighborliness have the following relation. Let
$S^{1}\widetilde{\times}S^{d-1}$ denote a non-trivial $S^{d-1}$-bundle over
$S^{1}$.
###### Theorem 2.3.
Let $M$ be a closed connected $\mathbb{F}$-orientable $d$-manifold
triangulation for $d\geq 3$, and consider the following conditions:
1. (1)
$M$ is $\mathbb{F}$-tight;
2. (2)
$M$ is tight-neighborly;
3. (3)
$M$ is neighborly and stacked;
4. (4)
$M$ has the topological type of one of
$S^{d},\quad(S^{1}\widetilde{\times}S^{d-1})^{\\#k},\quad(S^{1}\times
S^{d-1})^{\\#k}.$
Then there are implications
$(1)\quad\Longleftarrow\quad(2)\quad\Longleftrightarrow\quad(3)\quad\Longrightarrow\quad(4).$
Moreover, the implication $(1)\,\Rightarrow\,(2)$ also holds for $d=3$.
###### Proof.
The implications are shown in [9] for $d\geq 4$ and [3] for $d=3$. ∎
###### Remark 2.4.
The integer $k$ in Theorem 2.3 for $d=3$ is known to be such that $80k+1$ is
a perfect square. For $k=1,30,99,208,357,546$, tight-neighborly triangulations
of $(S^{1}\widetilde{\times}S^{2})^{\\#k}$ are constructed in [8], but no
tight-neighborly triangulation of $(S^{1}\times S^{2})^{\\#k}$ is known.
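The perfect-square condition in Remark 2.4 follows directly from Definition 2.2: with $d=3$ and $\beta_{1}=k$, the equation $\binom{m-4}{2}=10k$ gives $m=\frac{1}{2}(9+\sqrt{80k+1})$, so $m$ is an integer exactly when $80k+1$ is a perfect square. A short check (the function name is ours) that also recovers the vertex counts for the listed $k$:

```python
from math import isqrt

# With d = 3 and beta_1 = k, Definition 2.2 reads binom(m-4, 2) = 10 k,
# i.e. m^2 - 9m + 20 - 20k = 0, so m = (9 + sqrt(80k + 1)) / 2; hence m is
# an integer exactly when 80k + 1 is a perfect square.
def vertex_count(k):
    s = 80 * k + 1
    r = isqrt(s)
    assert r * r == s, "80k + 1 must be a perfect square"
    return (9 + r) // 2

for k in (1, 30, 99, 208, 357, 546):  # the values listed in Remark 2.4
    m = vertex_count(k)
    assert (m - 4) * (m - 5) == 20 * k  # binom(m-4, 2) = 10 k

print([vertex_count(k) for k in (1, 30, 99, 208, 357, 546)])
# -> [9, 29, 49, 69, 89, 109]
```

Note that the square condition is only necessary: other $k$ also satisfy it, but constructions are known only for the listed values.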
## 3\. Weak Golodness
This section introduces weak Golodness and studies it for manifold
triangulations. Let $K$ be a simplicial complex with vertex set $[m]$, and let
$\mathcal{H}_{*}(\mathbb{F}[K])$ denote the Koszul homology of the Stanley-
Reisner ring $\mathbb{F}[K]$. As mentioned in Section 1, $K$ is
$\mathbb{F}$-Golod if and only if all products and (higher) Massey products in
$\mathcal{H}_{*}(\mathbb{F}[K])$ vanish. Now we define weak Golodness.
###### Definition 3.1.
A simplicial complex $K$ is _weakly $\mathbb{F}$-Golod_ if all products in
$\mathcal{H}_{*}(\mathbb{F}[K])$ vanish.
Clearly, $K$ is weakly $\mathbb{F}$-Golod whenever it is $\mathbb{F}$-Golod.
Berglund and Jöllenbeck [7] stated that Golodness and weak Golodness of every
simplicial complex are equivalent, but this was disproved by Katthän [21].
Then defining weak Golodness makes sense.
We recall a combinatorial description of the multiplication of
$\mathcal{H}_{*}(\mathbb{F}[K])$. For disjoint non-empty subsets
$I,J\subset[m]$, there is an inclusion
$\iota_{I,J}\colon K_{I\sqcup J}\to K_{I}*K_{J},\quad\sigma\mapsto(\sigma\cap
I,\sigma\cap J).$
Baskakov, Buchstaber and Panov [6] proved:
###### Lemma 3.2.
There is an isomorphism of vector spaces
$\mathcal{H}_{i}(\mathbb{F}[K])\cong\bigoplus_{\emptyset\neq
I\subset[m]}\widetilde{H}^{i-|I|-1}(K_{I};\mathbb{F})$
for $i>0$ such that for non-empty subsets $I,J\subset[m]$ the multiplication
$\widetilde{H}^{i-|I|-1}(K_{I};\mathbb{F})\otimes\widetilde{H}^{i-|J|-1}(K_{J};\mathbb{F})\to\widetilde{H}^{i+j-|I\cup
J|-1}(K_{I\cup J};\mathbb{F})$
is trivial for $I\cap J\neq\emptyset$ and the induced map of $\iota_{I,J}$ for
$I\cap J=\emptyset$.
Let $M$ be a triangulation of a closed connected $\mathbb{F}$-oriented
$d$-manifold with vertex set $[m]$. We consider a relation between the
inclusion $\iota_{I,J}$ and the Poincaré duality. For any subset
$I\subset[m]$, Poincaré duality [13, Proposition 3.46] holds, in the sense
that the map
$H^{i}(|M_{I}|;\mathbb{F})\to
H_{d-i}(|M|,|M|-|M_{I}|;\mathbb{F}),\quad\alpha\mapsto\alpha\frown[M]$
is an isomorphism, where $[M]$ denotes the fundamental class of $M$. By [25,
Lemma 70.1], $|M|-|M_{I}|\simeq|M_{J}|$ for $J=[m]-I$. Then there is an
isomorphism
$D_{I,J}\colon
H^{i}(M_{I};\mathbb{F})\xrightarrow{\cong}H_{d-i}(M,M_{J};\mathbb{F}).$
Let $\partial\colon H_{*}(M,M_{J};\mathbb{F})\to H_{*-1}(M_{J};\mathbb{F})$
denote the boundary map of the long exact sequence
$\cdots\to H_{*}(M_{J};\mathbb{F})\to H_{*}(M;\mathbb{F})\to
H_{*}(M,M_{J};\mathbb{F})\xrightarrow{\partial}H_{*-1}(M_{J};\mathbb{F})\to\cdots.$
###### Lemma 3.3.
Let $M$ be a triangulation of a closed connected $\mathbb{F}$-oriented
$d$-manifold with vertex set $[m]$. For any partition $[m]=I\sqcup J$ and
$\alpha\in H^{i}(M_{I};\mathbb{F})$,
$(\partial\circ D_{I,J})(\alpha)=(-1)^{i+1}(\alpha\otimes
1)((\iota_{I,J})_{*}([M]))\in H_{d-i-1}(M_{J};\mathbb{F})$
where we regard $(\iota_{I,J})_{*}([M])$ as an element of
$\bigoplus_{i+j=d-1}H_{i}(M_{I};\mathbb{F})\otimes
H_{j}(M_{J};\mathbb{F})\cong H_{d}(M_{I}*M_{J};\mathbb{F})$.
###### Proof.
Let $\varphi\in C^{i}(M_{I};\mathbb{F})$ be a representative of $\alpha$. We
define $\overline{\varphi}\in C^{i}(M;\mathbb{F})$ by
$\overline{\varphi}(\sigma)=\begin{cases}\varphi(\sigma)&\sigma\in M_{I}\\\
0&\text{otherwise.}\end{cases}$
Then $\alpha\frown[M]$ is represented by $\overline{\varphi}\frown\mu$ where
$\mu$ represents $[M]$. Let $[v_{0},\ldots,v_{i}]$ denote an oriented
$i$-simplex with vertices $v_{0},\ldots,v_{i}$. We may set
$\mu=\sum_{k}a_{k}[v_{0}^{k},v_{1}^{k},\ldots,v_{d}^{k}]\in
C_{d}(M;\mathbb{F})$
for $a_{k}\in\mathbb{F}$, where $v_{0}^{k},\ldots,v_{n_{k}}^{k}\in I$ and
$v_{n_{k}+1}^{k},\ldots,v_{d}^{k}\in J$ for some $n_{k}$. Then $(\partial\circ
D_{I,J})(\alpha)$ is represented by
$\partial(\overline{\varphi}\frown\mu)=(\overline{\varphi}\circ\partial)\frown\mu=\sum_{k}a_{k}\overline{\varphi}(\partial[v_{0}^{k},\ldots,v_{i+1}^{k}])[v_{i+1}^{k},\ldots,v_{d}^{k}].$
Since
$(\overline{\varphi}\circ\partial)|_{C_{i+1}(M_{I};\mathbb{F})}=\varphi\circ\partial=0$,
$\overline{\varphi}(\partial[v_{0}^{k},\ldots,v_{i+1}^{k}])\neq 0$ only when
$n_{k}=i$. Then $(\partial\circ D_{I,J})(\alpha)$ is represented by
$\sum_{n_{k}=i}a_{k}\overline{\varphi}(\partial[v_{0}^{k},\ldots,v_{i+1}^{k}])[v_{i+1}^{k},\ldots,v_{d}^{k}]=(-1)^{i+1}\sum_{n_{k}=i}a_{k}\varphi([v_{0}^{k},\ldots,v_{i}^{k},\widehat{v_{i+1}^{k}}])[v_{i+1}^{k},\ldots,v_{d}^{k}].$
On the other hand, since the $C_{i}(M_{I};\mathbb{F})\otimes
C_{d-i-1}(M_{J};\mathbb{F})$ part of $\mu$ is
$\sum_{n_{k}=i}a_{k}[v_{0}^{k},\ldots,v_{d}^{k}]$, $(\iota_{I,J})_{*}([M])$ is
represented by
$\sum_{n_{k}=i}a_{k}[v_{0}^{k},\ldots,v_{i}^{k}]\otimes[v_{i+1}^{k},\ldots,v_{d}^{k}].$
Thus the proof is complete. ∎
Now we are ready to prove:
###### Theorem 3.4.
If a triangulation of a closed connected $\mathbb{F}$-orientable $d$-manifold
is weakly $\mathbb{F}$-Golod, then it is $\mathbb{F}$-tight.
###### Proof.
Let $M$ be a triangulation of a closed connected $\mathbb{F}$-oriented
$d$-manifold with vertex set $[m]$, and let $[m]=I\sqcup J$ be a partition.
Since $M$ is weakly $\mathbb{F}$-Golod, Lemma 3.2 implies that the map
$\iota_{I,J}$ is trivial in cohomology with coefficients in $\mathbb{F}$. Then
by the universal coefficient theorem, $\iota_{I,J}$ is
trivial in homology with coefficients in $\mathbb{F}$ too. Thus by Lemma 3.3
the boundary map $\partial\colon H_{*}(M,M_{J};\mathbb{F})\to
H_{*-1}(M_{J};\mathbb{F})$ is trivial, and so the natural map
$H_{*}(M_{J};\mathbb{F})\to H_{*}(M;\mathbb{F})$ is injective, completing the
proof. ∎
## 4\. The complex $F(M)$
Throughout this section, let $M$ be a closed connected tight-neighborly
$d$-manifold triangulation for $d\geq 3$ with vertex set $[m]$. Let $K$ be a
simplicial complex with vertex set $[m]$. A subset $I\subset[m]$ is a _minimal
non-face_ of $K$ if every proper subset of $I$ is a simplex of $K$ and $I$
itself is not a simplex of $K$. Define a simplicial complex $F(M)$ by filling
all minimal non-faces of cardinality $d+1$ into $M$. This section investigates
the complex $F(M)$.
We set notation. Let $K$ be a simplicial complex with vertex set $[m]$. The
link of a vertex $v$ in a simplicial complex $K$ is defined by
$\operatorname{lk}_{K}(v)=\\{\sigma\in K\mid v\not\in\sigma\text{ and
}\sigma\sqcup v\in K\\}.$
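The link operation can be transcribed directly; a minimal sketch (function names are ours):

```python
from itertools import combinations

def faces(facets):
    # all non-empty subsets of the facets, i.e. the simplices of K
    return {frozenset(c) for f in facets
            for r in range(1, len(f) + 1) for c in combinations(f, r)}

def link(facets, v):
    # lk_K(v) = { sigma in K : v not in sigma and sigma + {v} in K }
    K = faces(facets)
    return {s for s in K if v not in s and (s | {v}) in K}

# example: in the boundary of the tetrahedron on {1,2,3,4}, the link of
# vertex 4 is the boundary of the triangle on {1,2,3}
boundary_tetra = list(combinations((1, 2, 3, 4), 3))
assert link(boundary_tetra, 4) == faces([(1, 2), (2, 3), (1, 3)])
```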
For a finite set $S$, let $\Delta(S)$ denote the simplex with vertex set $S$.
Then $I\subset[m]$ is a minimal non-face of $K$ if and only if
$K_{I}=\partial\Delta(I)$. Let $K_{1},K_{2}$ be simplicial complexes of
dimension $d$ such that $K_{1}\cap K_{2}$ is a single $d$-simplex $\sigma$.
Then we write
$K_{1}\\#K_{2}=K_{1}\cup K_{2}-\sigma\quad\text{and}\quad K_{1}\circ
K_{2}=K_{1}\cup K_{2}.$
###### Lemma 4.1.
For each $v\in[m]$, there are $V(v,1),\ldots,V(v,n_{v})\subset[m]$ such that
$|V(v,k)|=d+1$ for $1\leq k\leq n_{v}$ and
$\operatorname{lk}_{M}(v)=\partial\Delta(V(v,1))\\#\cdots\\#\partial\Delta(V(v,n_{v})).$
###### Proof.
The case $d=3$ is proved in [3, Proof of Theorem 1.2]. For $d\geq 4$, tight-
neighborliness implies local stackedness, that is, every vertex link is a
stacked sphere, as in [9]. Moreover, stacked spheres are characterized by
Bagchi and Datta [1] such that every stacked $(d-1)$-sphere is of the form
$\partial\Delta^{d}\\#\cdots\\#\partial\Delta^{d}$. Then we obtain the result
for $d\geq 4$. ∎
Generalizing neighborliness, we say that a simplicial complex is _$k$
-neighborly_ if every $k+1$ vertices form a simplex. So 1-neighborliness is
precisely neighborliness.
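A direct transcription of $k$-neighborliness (the function name is ours):

```python
from itertools import combinations

def is_k_neighborly(facets, vertices, k):
    # every (k+1)-subset of the vertex set must be a simplex of K,
    # i.e. contained in some facet
    return all(any(set(s) <= set(f) for f in facets)
               for s in combinations(vertices, k + 1))

# the boundary of the 4-simplex on {0,...,4} (facets: all 4-subsets) is
# 3-neighborly, but not 4-neighborly since {0,...,4} itself is not a face
bd = list(combinations(range(5), 4))
assert is_k_neighborly(bd, range(5), 3)
assert not is_k_neighborly(bd, range(5), 4)
```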
###### Lemma 4.2.
For each $v\in[m]$ and $1\leq k\leq n_{v}$, $M_{V(v,k)\sqcup v}$ is
$(d-1)$-neighborly.
###### Proof.
By Lemma 4.1, $\operatorname{lk}_{M}(v)_{V(v,k)}$ is $\partial\Delta^{d}$ with
some $(d-1)$-simplices removed, implying it is $(d-2)$-neighborly. So if $I$
is a subset of $V(v,k)$ with $|I|=d-1$, then $I\sqcup v$ is a simplex of $M$.
It remains to show $M_{V(v,k)}$ is $(d-1)$-neighborly. Let $J$ be any subset
of $V(v,k)$ with $|J|=d$. Then $\partial\Delta(J)$ is a subcomplex of $M$. If
$M_{J}=\partial\Delta(J)$, then $M_{J\sqcup v}=\partial\Delta(J)*v$, which is
contractible. So the inclusion $M_{J}\to M_{J\sqcup v}$ is not injective in
homology with coefficients in $\mathbb{F}$. By Theorem 2.3, $M$ is
$\mathbb{F}$-tight, so we get a contradiction. Thus $J$ must be a simplex of
$M$, completing the proof. ∎
We prove local properties of the complex $F(M)$.
###### Proposition 4.3.
1. (1)
For each $v\in[m]$,
$\operatorname{lk}_{F(M)}(v)=\partial\Delta(V(v,1))\circ\cdots\circ\partial\Delta(V(v,n_{v})).$
2. (2)
For each $v\in[m]$ and $1\leq k\leq n_{v}$, $V(v,k)\sqcup v$ is a minimal non-
face of $F(M)$.
###### Proof.
(1) Let $\sigma$ be the $(d-1)$-simplex
$(\partial\Delta(V(v,1))\\#\cdots\\#\partial\Delta(V(v,k)))\cap\partial\Delta(V(v,k+1))$.
Then by Lemma 4.2, $\partial\Delta(\sigma\sqcup v)$ is a subcomplex of $M$,
implying $\sigma\sqcup v$ is a simplex of $F(M)$. Then by induction, we get
$\partial\Delta(V(v,1))\circ\cdots\circ\partial\Delta(V(v,n_{v}))\subset\operatorname{lk}_{F(M)}(v)$.
The reverse inclusion is obvious by the construction of $F(M)$, completing the
proof.
(2) By Lemma 4.2, $V(v,k)$ is a simplex of $F(M)$, so every proper subset $I$
of $V(v,k)\sqcup v$ is a simplex of $F(M)$. By (1), $V(v,k)\sqcup v$ is not a
simplex of $F(M)$. Then the statement is proved. ∎
To compute the homology of $F(M)$, we first describe it as a union of boundary complexes. Let
$S(M)=\\{V(v,k)\sqcup v\mid v\in[m]\text{ and }1\leq k\leq n_{v}\\}.$
Then $S(M)$ is the set of all subsets $I\subset[m]$ such that $|I|=d+2$ and
$\operatorname{lk}_{M_{I}}(v)$ is $(d-2)$-neighborly for some $v\in I$.
###### Lemma 4.4.
$F(M)=\bigcup_{I\in S(M)}\partial\Delta(I)$.
###### Proof.
Let $K=\bigcup_{I\in S(M)}\partial\Delta(I)$. By Proposition 4.3, $K\subset
F(M)$. For any $k$-simplex $\sigma$ of $F(M)$ with $0\leq k\leq d-1$ and
$v\in\sigma$, $\sigma-v$ is a simplex of $\operatorname{lk}_{M}(v)$ because
$\sigma$ is a simplex of $M$ too. Then $\sigma-v\subset V(v,l)$ for some
$1\leq l\leq n_{v}$, implying $\sigma$ is a simplex of $K$. Thus the
$(d-1)$-skeleton of $F(M)$ is included in $K$. Take any $d$-simplex $\sigma$
of $F(M)$. Then $\sigma$ is either a simplex or a minimal non-face of $M$. In
both cases, $\partial\Delta(\sigma-v)$ is a subcomplex of
$\operatorname{lk}_{M}(v)$ for $v\in\sigma$. Then $\sigma-v\subset V(v,l)$ for
some $1\leq l\leq n_{v}$, implying $\sigma$ is a simplex of $K$. Thus
$F(M)\subset K$, completing the proof. ∎
We compute the homology of $F(M)$. By Lemma 4.4, there is an inclusion
$g_{I}\colon\partial\Delta(I)\to F(M)$ for each $I\in S(M)$. Let $u_{I}\in
H_{d}(F(M);\mathbb{Z})$ be the Hurewicz image of $g_{I}$.
###### Proposition 4.5.
The integral homology of $F(M)$, except for dimension 1, is given by
$\widetilde{H}_{*}(F(M);\mathbb{Z})=\begin{cases}\mathbb{Z}\langle u_{I}\mid
I\in S(M)\rangle&*=d\\\ 0&*\neq 1,d.\end{cases}$
###### Proof.
Since $F(M)$ is obtained from $M$ by attaching $d$-simplices, we only need to
calculate $H_{d-1}$ and $H_{d}$ by Theorem 2.3. By Lemma 4.1, each component
of $\operatorname{lk}_{M_{I}}(v)$ is $(d-2)$-connected, where
$\operatorname{lk}_{M_{I}}(v)=\operatorname{lk}_{M}(v)_{I-v}$. Then there is
an exact sequence
$0\to\widetilde{H}_{d}(F(M)_{I-v};\mathbb{Z})\to
H_{d}(F(M)_{I};\mathbb{Z})\xrightarrow{\partial}H_{d-1}(\operatorname{lk}_{F(M)_{I}}(v);\mathbb{Z})\\\
\to H_{d-1}(F(M)_{I-v};\mathbb{Z})\to H_{d-1}(F(M)_{I};\mathbb{Z})\to 0.$
By Proposition 4.3, there is an inclusion
$\partial\Delta(V(v,k))\to\operatorname{lk}_{F(M)_{I}}(v)$ for $V(v,k)\sqcup
v\subset I$, and we write the Hurewicz image of this inclusion by
$\bar{u}_{V(v,k)}$. By Proposition 4.3,
$H_{d-1}(\operatorname{lk}_{F(M)_{I}}(v);\mathbb{Z})=\mathbb{Z}\langle\bar{u}_{V(v,k)}\mid
V(v,k)\sqcup v\subset I\rangle$
such that $\partial(u_{V(v,k)\sqcup v})=\bar{u}_{V(v,k)}$. Then $\partial$ is
surjective, so we get an isomorphism
$H_{d-1}(F(M)_{I-v};\mathbb{Z})\cong H_{d-1}(F(M)_{I};\mathbb{Z}).$
Thus we obtain $H_{d-1}(F(M)_{I};\mathbb{Z})=0$ for any $I\subset[m]$ by
induction on $|I|$, where $H_{d-1}(F(M)_{I};\mathbb{Z})=0$ for $|I|=1$. We
also get a split exact sequence
$0\to H_{d}(F(M)_{I-v};\mathbb{Z})\to
H_{d}(F(M)_{I};\mathbb{Z})\xrightarrow{\partial}H_{d-1}(\operatorname{lk}_{F(M)_{I}}(v);\mathbb{Z})\to
0.$
Then by induction on $|I|$, we also obtain
$H_{d}(F(M)_{I};\mathbb{Z})=\mathbb{Z}\langle u_{V(v,k)}\mid V(v,k)\sqcup
v\subset I\rangle.$
Thus the proof is complete. ∎
By Theorem 2.3, $\pi_{1}(|M|)$ is a free group. Since $|F(M)|$ is obtained by
attaching $d$-cells to $|M|$, the inclusion $|M|\to|F(M)|$ is an isomorphism
in $\pi_{1}$, so $\pi_{1}(|F(M)|)$ is a free group too. Then there is a map
$f\colon B\to|F(M)|$ which is an isomorphism in $\pi_{1}$, where $B$ is a
wedge of circles. Let $\widehat{F}(M)$ be the cofiber of $f$. Since there is
an exact sequence
$\cdots\to
H_{*}(B;\mathbb{Z})\xrightarrow{f_{*}}H_{*}(F(M);\mathbb{Z})\to\widetilde{H}_{*}(\widehat{F}(M);\mathbb{Z})\to\cdots$
the natural map $H_{*}(F(M);\mathbb{Z})\to H_{*}(\widehat{F}(M);\mathbb{Z})$
is an isomorphism for $*\neq 1$. Let $\hat{g}_{I}$ be the composite
$|\partial\Delta(I)|\xrightarrow{g_{I}}|F(M)|\to\widehat{F}(M)$ for $I\in
S(M)$, and let $\hat{u}_{I}$ be the Hurewicz image of $\hat{g}_{I}$. By
Proposition 4.5, we get:
###### Corollary 4.6.
The integral homology of $\widehat{F}(M)$ is given by
$\widetilde{H}_{*}(\widehat{F}(M);\mathbb{Z})=\begin{cases}\mathbb{Z}\langle\hat{u}_{I}\mid
I\in S(M)\rangle&*=d\\\ 0&*\neq d.\end{cases}$
Since $\widehat{F}(M)$ is path-connected, there is a map
$g\colon\bigvee_{I\in S(M)}|\partial\Delta(I)|\to\widehat{F}(M)$
such that $g|_{|\partial\Delta(I)|}\simeq\hat{g}_{I}$ for each $I\in S(M)$.
Then by Corollary 4.6 and the J.H.C. Whitehead theorem, we obtain the
following.
###### Corollary 4.7.
The map $g\colon\bigvee_{I\in S(M)}|\partial\Delta(I)|\to\widehat{F}(M)$ is a
homotopy equivalence.
## 5\. Polyhedral product
Throughout this section, let $K$ be a simplicial complex with vertex set
$[m]$. Let $(\underline{X},\underline{A})=\\{(X_{i},A_{i})\\}_{i=1}^{m}$ be a
collection of pairs of pointed spaces indexed by vertices of $K$. For
$I\subset[m]$, let
$(\underline{X},\underline{A})^{I}=Y_{1}\times\cdots\times Y_{m}$
where $Y_{i}=X_{i}$ for $i\in I$ and $Y_{i}=A_{i}$ for $i\not\in I$. The
_polyhedral product_ of $(\underline{X},\underline{A})$ over $K$ is defined by
$Z_{K}(\underline{X},\underline{A})=\bigcup_{\sigma\in
K}(\underline{X},\underline{A})^{\sigma}.$
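For example (a standard computation, recorded here only as an illustration): if $K$ consists of two disjoint vertices, then
$Z_{K}(D^{2},S^{1})=D^{2}\times S^{1}\cup S^{1}\times D^{2}=\partial(D^{2}\times D^{2})\cong S^{3},$
and more generally $Z_{\partial\Delta^{m-1}}(D^{2},S^{1})=\partial((D^{2})^{m})\cong S^{2m-1}$, since a point of the polydisc lies in
$(\underline{X},\underline{A})^{\sigma}$ for some $\sigma\in\partial\Delta^{m-1}$
exactly when at least one coordinate lies in $S^{1}$.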
For $\emptyset\neq I\subset[m]$, let
$(\underline{X}_{I},\underline{A}_{I})=\\{(X_{i},A_{i})\\}_{i\in I}$. Then we
can define $Z_{K_{I}}(\underline{X}_{I},\underline{A}_{I})$. The following
lemma is immediate from the definition of a polyhedral product.
###### Lemma 5.1.
For each $\emptyset\neq I\subset[m]$,
$Z_{K_{I}}(\underline{X}_{I},\underline{A}_{I})$ is a retract of
$Z_{K}(\underline{X},\underline{A})$.
For a collection of pointed spaces $\underline{X}=\\{X_{i}\\}_{i=1}^{m}$, let
$(C\underline{X},\underline{X})=\\{(CX_{i},X_{i})\\}_{i=1}^{m}$. For $0\leq
i\leq m$, we define a subspace of $Z_{K}(C\underline{X},\underline{X})$ by
$Z_{K}^{i}(C\underline{X},\underline{X})=\\{(x_{1},\ldots,x_{m})\in
Z_{K}(C\underline{X},\underline{X})\mid\text{at least }m-i\text{ of
}x_{1},\ldots,x_{m}\text{ are basepoints}\\}.$
Using the basepoint of each $X_{i}$, we regard
$Z_{K_{I}}(C\underline{X}_{I},\underline{X}_{I})$ as a subspace of
$Z_{K}(C\underline{X},\underline{X})$ so that we can alternatively write
(5.1)
$Z_{K}^{i}(C\underline{X},\underline{X})=\bigcup_{I\subset[m],\,|I|=i}Z_{K_{I}}(C\underline{X}_{I},\underline{X}_{I}).$
There is a filtration
$*=Z_{K}^{0}(C\underline{X},\underline{X})\subset
Z_{K}^{1}(C\underline{X},\underline{X})\subset\cdots\subset
Z_{K}^{m}(C\underline{X},\underline{X})=Z_{K}(C\underline{X},\underline{X})$
which we call the _fat-wedge filtration_ of
$Z_{K}(C\underline{X},\underline{X})$. By [17, Theorem 4.1], we have
$Z_{K}^{i}(C\underline{X},\underline{X})/Z_{K}^{i-1}(C\underline{X},\underline{X})=\bigvee_{I\subset[m],\,|I|=i}|\Sigma
K_{I}|\wedge\widehat{X}^{I}$
where $\widehat{X}^{I}=\bigwedge_{i\in I}X_{i}$. Moreover, it is shown in [17,
Corollary 4.2] that the fat-wedge filtration of
$Z_{K}(C\underline{X},\underline{X})$ splits after a suspension and the
decomposition of Bahri, Bendersky, Cohen and Gitler [5, Theorem 2.2.1] is
reproduced as:
###### Theorem 5.2 (BBCG decomposition).
There is a homotopy equivalence
$\Sigma Z_{K}(C\underline{X},\underline{X})\simeq\Sigma\bigvee_{\emptyset\neq
I\subset[m]}|\Sigma K_{I}|\wedge\widehat{X}^{I}.$
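As a quick consistency check (not in the original text): for $K$ consisting of
two disjoint vertices, the left-hand side is
$Z_{K}(C\underline{X},\underline{X})=CX_{1}\times X_{2}\cup X_{1}\times CX_{2}\simeq X_{1}*X_{2}\simeq\Sigma(X_{1}\wedge X_{2}),$
while on the right-hand side $|\Sigma K_{\\{i\\}}|$ is contractible and
$|\Sigma K_{[2]}|=|\Sigma S^{0}|=S^{1}$, so the wedge reduces to
$S^{1}\wedge X_{1}\wedge X_{2}=\Sigma(X_{1}\wedge X_{2})$; here the two sides
agree even before suspension.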
In particular, if the BBCG decomposition desuspends, then
$Z_{K}(C\underline{X},\underline{X})$ itself desuspends. Moreover, if each
$X_{i}$ is a connected CW complex, then the BBCG decomposition desuspends
whenever $Z_{K}(C\underline{X},\underline{X})$ desuspends [17]. We therefore aim to
desuspend the BBCG decomposition. Desuspension of the BBCG decomposition was
studied for specific Golod complexes such as shifted complexes [14, 11, 12],
and desuspension for much broader classes of simplicial complexes, including
those specific classes, can be reproved by using the fat-wedge filtration
technique [17].
The moment-angle complex $Z_{K}$ introduced in Section 1 is the polyhedral
product $Z_{K}(D^{2},S^{1})$. The _real moment-angle complex_
$\mathbb{R}Z_{K}$ is defined to be the polyhedral product
$Z_{K}(D^{1},S^{0})$, and we denote its fat-wedge filtration by
$*=\mathbb{R}Z^{0}_{K}\subset\mathbb{R}Z^{1}_{K}\subset\cdots\subset\mathbb{R}Z^{m}_{K}=\mathbb{R}Z_{K}$
where we choose the basepoint of $S^{0}=\\{-1,+1\\}$ to be $-1$. The fat-wedge
filtration of $\mathbb{R}Z_{K}$ is proved to be a cone decomposition [17,
Theorem 3.1]. For $\emptyset\neq I\subset[m]$, let
$j_{K_{I}}\colon\mathbb{R}Z_{K_{I}}^{|I|-1}\to\mathbb{R}Z_{K}^{|I|-1}$ denote
the inclusion.
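As an illustration of these definitions (a standard computation, not used in the
proofs): write $S^{0}$ for the simplicial complex consisting of two disjoint
vertices. The boundary of a square, i.e. the 4-cycle, is the join $S^{0}*S^{0}$,
and since polyhedral products convert joins into products,
$Z_{K_{1}*K_{2}}(\underline{X},\underline{A})=Z_{K_{1}}(\underline{X}_{1},\underline{A}_{1})\times Z_{K_{2}}(\underline{X}_{2},\underline{A}_{2})$, one gets
$\mathbb{R}Z_{S^{0}*S^{0}}=\mathbb{R}Z_{S^{0}}\times\mathbb{R}Z_{S^{0}}=S^{1}\times S^{1},$
the torus, where $\mathbb{R}Z_{S^{0}}=D^{1}\times S^{0}\cup S^{0}\times D^{1}=\partial(D^{1}\times D^{1})\cong S^{1}$; similarly $Z_{S^{0}*S^{0}}=S^{3}\times S^{3}$.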
###### Theorem 5.3.
For each $\emptyset\neq I\subset[m]$, there is a map
$\varphi_{K_{I}}\colon|K_{I}|\to\mathbb{R}Z^{|I|-1}_{K_{I}}$ such that
$\mathbb{R}Z_{K}^{i}=\mathbb{R}Z_{K}^{i-1}\bigcup_{I\subset[m],\,|I|=i}C|K_{I}|$
where the attaching maps are $j_{K_{I}}\circ\varphi_{K_{I}}$.
We say that the fat-wedge filtration of $\mathbb{R}Z_{K}$ is trivial if
$\varphi_{K_{I}}$ is null homotopic for each $\emptyset\neq I\subset[m]$. We
remark that $\varphi_{K_{I}}$ is null homotopic if and only if so is
$j_{K_{I}}\circ\varphi_{K_{I}}$ because $\mathbb{R}Z_{K_{I}}^{|I|-1}$ is a
retract of $\mathbb{R}Z_{K}^{|I|-1}$. The fat-wedge filtration is useful for
desuspending the BBCG decomposition because we have the following criterion
[17, Theorem 1.2].
###### Theorem 5.4.
If the fat-wedge filtration of $\mathbb{R}Z_{K}$ is trivial, then for any
$\underline{X}$,
$Z_{K}(C\underline{X},\underline{X})\simeq\bigvee_{\emptyset\neq
I\subset[m]}|\Sigma K_{I}|\wedge\widehat{X}^{I}.$
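For example (a standard specialization, not stated explicitly here): taking each
$X_{i}=S^{1}$, so that $(CX_{i},X_{i})=(D^{2},S^{1})$ and
$\widehat{X}^{I}=(S^{1})^{\wedge|I|}=S^{|I|}$, triviality of the fat-wedge
filtration of $\mathbb{R}Z_{K}$ yields
$Z_{K}\simeq\bigvee_{\emptyset\neq I\subset[m]}|\Sigma K_{I}|\wedge S^{|I|}=\bigvee_{\emptyset\neq I\subset[m]}|\Sigma^{|I|+1}K_{I}|,$
that is, the moment-angle complex $Z_{K}=Z_{K}(D^{2},S^{1})$ becomes a wedge of
iterated suspensions of the full subcomplexes of $K$.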
For $\emptyset\neq I\subset[m]$, define a map
$\alpha_{I}\colon\mathbb{R}Z_{K_{I}}^{|I|-1}\to\mathbb{R}Z_{K}^{m-1}$ by
$\alpha_{I}(x_{i}\mid i\in I)=(y_{1},\ldots,y_{m})$ such that
$y_{i}=\begin{cases}x_{i}&i\in I\\\ +1&i\not\in I\end{cases}$
for $(x_{i}\mid i\in I)\in\mathbb{R}Z_{K_{I}}^{|I|-1}$. Note that $\alpha_{I}$
is not the natural inclusion because the basepoint of $S^{0}=\\{-1,+1\\}$ is
taken to be $-1$ as mentioned above. For $\emptyset\neq J\subset I\subset[m]$
and $|J|\leq i\leq|I|$, let $\pi$ denote the composite of projections
$\mathbb{R}Z_{K_{I}}^{i}\to\mathbb{R}Z_{K_{J}}\to\mathbb{R}Z_{K_{J}}/\mathbb{R}Z_{K_{J}}^{|J|-1}=|\Sigma
K_{J}|.$
By the construction of $\varphi_{K}$, we have:
###### Lemma 5.5.
For $\emptyset\neq J\subsetneq I\subset[m]$, there is a commutative diagram
$\begin{array}{ccccc}|K_{I}|&\xrightarrow{\,\varphi_{K_{I}}\,}&\mathbb{R}Z_{K_{I}}^{|I|-1}&\xrightarrow{\,\pi\,}&|\Sigma K_{J}|\\\ \downarrow&&{\scriptstyle\alpha_{I}}\downarrow&&{\scriptstyle|\Sigma j|}\downarrow\\\ |K|&\xrightarrow{\,\varphi_{K}\,}&\mathbb{R}Z_{K}^{m-1}&\xrightarrow{\,\pi\,}&|\Sigma K_{J\sqcup([m]-I)}|\end{array}$
where $j\colon K_{J}\to K_{J\sqcup([m]-I)}$ is the inclusion.
The following two lemmas, proved in [17, Proof of Theorem 7.2] and [17, Lemma
10.1] respectively, are quite useful in detecting the triviality of
$\varphi_{K}$.
###### Lemma 5.6.
Let $\overline{K}$ be a simplicial complex obtained by filling all minimal
non-faces into $K$. Then $\varphi_{K}$ factors through the inclusion
$|K|\to|\overline{K}|$.
###### Lemma 5.7.
If $\varphi_{K_{I}}\simeq*$ for each $\emptyset\neq I\subsetneq[m]$, then the
composite
$|K|\xrightarrow{\varphi_{K}}\mathbb{R}Z_{K}^{m-1}\to\mathbb{R}Z_{K_{J}}\xrightarrow{\pi}|\Sigma
K_{J}|$
is null-homotopic for each $\emptyset\neq J\subsetneq[m]$.
Finally, we estimate the connectivity of $\mathbb{R}Z_{K}$.
###### Lemma 5.8.
If $K$ is $k$-neighborly, then $\mathbb{R}Z_{K}$ is $k$-connected.
###### Proof.
The statement follows from the same calculation as [17, Proposition 5.3];
alternatively, it can be proved as follows. By definition, $\pi_{*}(\mathbb{R}Z_{K})$ is
isomorphic to $\pi_{*}(\mathbb{R}Z_{K_{k}})$ for $*\leq k$, where $K_{k}$
denotes the $k$-skeleton of $K$. Since $K$ is $k$-neighborly,
$K_{k}=\Delta^{m-1}_{k}$. Since $\Delta^{m-1}_{k}$ is shifted, it follows from
[14] that there is a homotopy equivalence
$\mathbb{R}Z_{\Delta_{k}^{m-1}}\simeq\bigvee_{\emptyset\neq
I\subset[m]}|\Sigma(\Delta^{m-1}_{k})_{I}|.$
Since each $|\Sigma(\Delta^{m-1}_{k})_{I}|$ is $k$-connected, the proof is
done. ∎
## 6\. Proof of Theorem 1.5
Throughout this section, let $M$ be a tight-neighborly triangulation of a
closed connected $\mathbb{F}$-orientable $d$-manifold with vertex set $[m]$
unless otherwise specified. We aim to prove that the fat-wedge filtration
of $\mathbb{R}Z_{M}$ is trivial. First, we compute the fundamental group of
$|F(M)_{I}|$ for $\emptyset\neq I\subset[m]$.
###### Lemma 6.1.
For each $\emptyset\neq I\subset[m]$, $\pi_{1}(|F(M)_{I}|)$ is a free group.
###### Proof.
Since the fundamental group of a suspension is a free group, we prove
$|F(M)_{I}|$ is a suspension by induction on $|I|$. For $|I|=1$, $|F(M)_{I}|$ is
obviously a suspension. Suppose that $|F(M)_{I-v}|$ is a suspension for $v\in
I$. Note that
(6.1) $F(M)_{I}=F(M)_{I-v}\cup(\operatorname{lk}_{F(M)_{I}}(v)*v)$
where
$F(M)_{I-v}\cap(\operatorname{lk}_{F(M)_{I}}(v)*v)=\operatorname{lk}_{F(M)_{I}}(v)$.
Since $\operatorname{lk}_{F(M)_{I}}(v)=\operatorname{lk}_{F(M)}(v)_{I-v}$, it
follows from Proposition 4.3 that there are inclusions
$\operatorname{lk}_{F(M)_{I}}(v)\subset(\Delta(V(v,1))\circ\cdots\circ\Delta(V(v,n_{v})))_{I-v}\subset
F(M)_{I-v}.$
Since $M$ is neighborly by Theorem 2.3, so is $M_{I-v}$, implying $F(M)_{I-v}$
is connected. On the other hand, each component of
$(\Delta(V(v,1))\circ\cdots\circ\Delta(V(v,n_{v})))_{I-v}$ is contractible.
Then the inclusion
$|(\Delta(V(v,1))\circ\cdots\circ\Delta(V(v,n_{v})))_{I-v}|\to|F(M)_{I-v}|$ is
null-homotopic, and so the inclusion
$|\operatorname{lk}_{F(M)_{I}}(v)|\to|F(M)_{I-v}|$ is null-homotopic too. Thus
by (6.1), we get a homotopy equivalence
$|F(M)_{I}|\simeq|F(M)_{I-v}|\vee|\Sigma\operatorname{lk}_{F(M)_{I}}(v)|.$
Since $|F(M)_{I-v}|$ is a suspension by the induction hypothesis, $|F(M)_{I}|$
turns out to be a suspension, completing the proof. ∎
Let $\emptyset\neq I\subset[m]$. By Lemma 5.6, the map $\varphi_{M_{I}}$
decomposes as
(6.2) $|M_{I}|\to|F(M)_{I}|\to\mathbb{R}Z_{M_{I}}^{|I|-1}.$
By Lemma 6.1, there is a map $f_{I}\colon B_{I}\to|F(M)_{I}|$, where $B_{I}$
is a wedge of circles, such that $f_{I}$ is an isomorphism in $\pi_{1}$. Let
$\widehat{F}(M)_{I}$ denote the cofiber of $f_{I}$, where
$\widehat{F}(M)_{[m]}$ coincides with $\widehat{F}(M)$ in Section 4. On the
other hand, since $M$ is neighborly by Lemma 2.1, so is $M_{J}$ for any
$\emptyset\neq J\subset[m]$. Then by (5.1) and Lemma 5.8, we can see that
$\mathbb{R}Z_{M_{I}}^{|I|-1}$ is simply-connected. In particular, there is a
commutative diagram
(6.3)
$\begin{array}{ccc}|F(M)_{I}|&\longrightarrow&\widehat{F}(M)_{I}\\\ \downarrow&&\downarrow\\\ \mathbb{R}Z_{M_{I}}^{|I|-1}&=&\mathbb{R}Z_{M_{I}}^{|I|-1}.\end{array}$
Then by combining (6.2) and (6.3), we get:
###### Lemma 6.2.
For each $\emptyset\neq I\subset[m]$, the map $\varphi_{M_{I}}$ factors
through the inclusion $|M_{I}|\to\widehat{F}(M)_{I}$.
###### Proposition 6.3.
For each $\emptyset\neq I\subsetneq[m]$, the map $\varphi_{M_{I}}$ is null-
homotopic.
###### Proof.
As is computed in the proof of Proposition 4.5,
$\widetilde{H}_{*}(F(M)_{I};\mathbb{Z})=0$ unless $*=1,d$. Thus, just as for
$\widehat{F}(M)$, we see that $\widehat{F}(M)_{I}$ is $(d-1)$-connected.
Since $I\neq[m]$, $|M_{I}|$ is homotopy equivalent to a CW complex of
dimension $\leq d-1$. Then we obtain that the inclusion
$|M_{I}|\to\widehat{F}(M)_{I}$ is null-homotopic. Thus by Lemma 6.2, the proof
is complete. ∎
It remains to show $\varphi_{M}$ is null-homotopic. By Lemma 5.5, there is a
commutative diagram
$\begin{array}{ccc}\bigvee_{I\in S(M)}|M_{I}|&\longrightarrow&|M|\\\ {\scriptstyle\bigvee_{I\in S(M)}\varphi_{M_{I}}}\downarrow&&\downarrow{\scriptstyle\varphi_{M}}\\\ \bigvee_{I\in S(M)}\mathbb{R}Z_{M_{I}}^{d+1}&\xrightarrow{\,\bigvee_{I\in S(M)}\alpha_{I}\,}&\mathbb{R}Z_{M}^{m-1}.\end{array}$
Then since $F(M)_{I}=\partial\Delta(I)$ for $I\in S(M)$ by Proposition 4.3, we
get a commutative diagram
(6.4) $\begin{array}{ccc}\bigvee_{I\in S(M)}|\partial\Delta(I)|&\xrightarrow{\,\bigvee_{I\in S(M)}g_{I}\,}&|F(M)|\\\ \downarrow&&\downarrow\\\ \bigvee_{I\in S(M)}\mathbb{R}Z_{M_{I}}^{d+1}&\xrightarrow{\,\bigvee_{I\in S(M)}\alpha_{I}\,}&\mathbb{R}Z_{M}^{m-1}.\end{array}$
Juxtaposing the commutative diagrams (6.3) and (6.4), we get a commutative
diagram
$\begin{array}{ccc}\bigvee_{I\in S(M)}|\partial\Delta(I)|&\xrightarrow{\,g\,}&\widehat{F}(M)\\\ \downarrow&&\downarrow\\\ \bigvee_{I\in S(M)}\mathbb{R}Z_{M_{I}}^{d+1}&\xrightarrow{\,\bigvee_{I\in S(M)}\alpha_{I}\,}&\mathbb{R}Z_{M}^{m-1}\end{array}$
and by Corollary 4.7 and Lemma 6.2, we obtain:
###### Lemma 6.4.
The map $\varphi_{M}\colon|M|\to\mathbb{R}Z_{M}^{m-1}$ is homotopic to the
composite
$|M|\to\widehat{F}(M)\xrightarrow{g^{-1}}\bigvee_{I\in
S(M)}|\partial\Delta(I)|\to\bigvee_{I\in
S(M)}\mathbb{R}Z_{M_{I}}^{d+1}\xrightarrow{\bigvee_{I\in
S(M)}\alpha_{I}}\mathbb{R}Z_{M}^{m-1}.$
We will investigate the composite of maps in Lemma 6.4 by identifying a
homotopy set with a homology group.
###### Lemma 6.5.
Let $W$ be a finite wedge of $S^{d}$. Then there is an isomorphism of sets
$[|M|,W]\cong H^{d}(M;\mathbb{Z})\otimes H_{d}(W;\mathbb{Z})$
which is natural with respect to maps among finite wedges of $S^{d}$.
###### Proof.
Since $\dim M=d$, the statement follows from the Hopf degree theorem. ∎
###### Lemma 6.6.
For each $v\in I\in S(M)$, the natural map
$H^{d}(M;\mathbb{Z})\otimes H_{d-1}(M_{I-v};\mathbb{Z})\to
H^{d}(M;\mathbb{Z})\otimes H_{d-1}(M_{[m]-v};\mathbb{Z})$
is injective.
###### Proof.
By Lemma 4.2, $|M_{I-v}|$ is either contractible or $S^{d-1}$; in particular,
$H_{d-1}(M_{I-v};\mathbb{Z})$ is a free abelian group, and so there is a
natural isomorphism
(6.5) $H_{d-1}(M_{I-v};\mathbb{F})\cong
H_{d-1}(M_{I-v};\mathbb{Z})\otimes\mathbb{F}.$
By definition, $|M_{[m]-v}|$ is $|M|$ with the open star of $v$ removed, which is
homotopy equivalent to $|M|$ with the vertex $v$ removed by [25, Lemma 70.1]. Then by
Theorem 2.3, $|M_{[m]-v}|$ is homotopy equivalent to a wedge of finitely many,
possibly zero, copies of $S^{1}$ and $S^{d-1}$. Then
$H_{*}(M_{[m]-v};\mathbb{Z})$ is a free abelian group, and so there is a
natural isomorphism
(6.6) $H_{d-1}(M_{[m]-v};\mathbb{F})\cong
H_{d-1}(M_{[m]-v};\mathbb{Z})\otimes\mathbb{F}.$
Since $M$ is $\mathbb{F}$-tight by Theorem 2.3, the natural map
$H_{d-1}(M_{I-v};\mathbb{F})\to H_{d-1}(M_{[m]-v};\mathbb{F})$ is injective.
Then by (6.5) and (6.6), the natural map
$H_{d-1}(M_{I-v};\mathbb{Z})\otimes\mathbb{F}\to
H_{d-1}(M_{[m]-v};\mathbb{Z})\otimes\mathbb{F}$
is injective too. Since both $H_{d-1}(M_{I-v};\mathbb{Z})$ and
$H_{d-1}(M_{[m]-v};\mathbb{Z})$ are free abelian groups, the case that $M$ is
orientable is proved because $H^{d}(M;\mathbb{Z})\cong\mathbb{Z}$. If $M$ is
non-orientable, then $H^{d}(M;\mathbb{Z})\cong\mathbb{F}_{2}$ and the base field
$\mathbb{F}$ is of characteristic 2, where $\mathbb{F}_{2}$ is the field of
two elements. Thus the case that $M$ is non-orientable is proved too, completing
the proof. ∎
###### Proposition 6.7.
The map $\varphi_{M}\colon|M|\to\mathbb{R}Z_{M}^{m-1}$ is null-homotopic.
###### Proof.
Note that $m\geq d+2$. Let $\emptyset\neq J\subset I\in S(M)$. By Lemma 4.2,
$|M_{J}|$ is contractible for $|J|\leq d$, and $|M_{J}|$ is contractible or
$S^{d-1}$ for $|J|=d+1$. Then by Proposition 6.3, there is a homotopy
equivalence
(6.7) $\mathbb{R}Z_{M_{I}}^{d+1}\simeq\bigvee_{v\in I}|\Sigma M_{I-v}|$
where $|\Sigma M_{I-v}|$ is contractible or $S^{d}$ as mentioned above. Let
$A=\bigvee_{I\in S(M)}\bigvee_{v\in I}|\Sigma M_{I-v}|\quad\text{and}\quad
B=\bigvee_{I\in S(M)}\bigvee_{v\in I}|\Sigma M_{[m]-v}|$
where $A\simeq\bigvee_{I\in S(M)}\mathbb{R}Z_{M_{I}}^{d+1}$ by (6.7). Let
$f\colon|M|\to A$ denote the composite of the first three maps in Lemma 6.4.
Then it suffices to show $f$ is null-homotopic. By Lemma 6.5, $f$ is
identified with some element $\phi$ of $H_{d}(M;\mathbb{Z})\otimes
H_{d}(A;\mathbb{Z})$, so that $f$ is null-homotopic if and only if $\phi=0$.
As in the proof of Lemma 6.6, $|\Sigma M_{[m]-v}|$ is a wedge of finitely many
copies of $S^{2}$ and $S^{d}$ for each vertex $v$ of $M$. Let $C_{v}$ denote
the $S^{d}$-wedge part of $|\Sigma M_{[m]-v}|$. Then there is a projection
$q_{v}\colon B\to C_{v}$. By Lemmas 5.5, 5.7 and 6.4, the composite
(6.8) $|M|\xrightarrow{f}A\to|\Sigma M_{I-v}|\to|\Sigma M_{[m]-v}|$
is null homotopic for each $v\in I\in S(M)$. Then by Lemma 6.5, $\phi$ is
mapped to 0 by
$1\otimes(q_{v}\circ j)_{*}\colon H^{d}(M;\mathbb{Z})\otimes
H_{d}(A;\mathbb{Z})\to H^{d}(M;\mathbb{Z})\otimes H_{d}(C_{v};\mathbb{Z})$
for each $v\in I\in S(M)$, where $j\colon A\to B$ denotes the inclusion. Since
the map
$\bigoplus_{v\in I\in S(M)}(q_{v})_{*}\colon
H_{d}(B;\mathbb{Z})\to\bigoplus_{v\in I\in S(M)}H_{d}(C_{v};\mathbb{Z})$
is an isomorphism, we get $(1\otimes j_{*})(\phi)=0$. Thus we obtain $\phi=0$
by Lemma 6.6, completing the proof. ∎
Now we are ready to prove Theorem 1.5:
###### Proof of Theorem 1.5.
The implications (1) $\Rightarrow$ (2) $\Leftarrow$ (3) are proved by Theorems
3.4 and 2.3. The implication (3) $\Rightarrow$ (4) is proved by Propositions
6.3 and 6.7. If (4) holds, then by Theorem 5.4, $Z_{M}$ is a suspension. So by
the fact that $K$ is $\mathbb{F}$-Golod whenever $Z_{K}$ is a suspension, as
mentioned in Section 1, we obtain the implication (4) $\Rightarrow$ (1),
completing the proof. ∎
## 7\. Further problems
This section proposes possible problems for further study on the relationship
between Golodness and tightness. As proved in Theorem 3.4, weak
$\mathbb{F}$-Golodness implies $\mathbb{F}$-tightness for closed connected
$\mathbb{F}$-orientable manifold triangulations. But not all weakly
$\mathbb{F}$-Golod complexes are $\mathbb{F}$-tight. For example, if $K$ is
the join of a vertex and the boundary of a simplex, then it is
$\mathbb{F}$-Golod for any field $\mathbb{F}$, as the fat-wedge filtration of
$\mathbb{R}Z_{K}$ is trivial, but it is not $\mathbb{F}$-tight, as in the proof
of Lemma 4.2. Then it is natural to ask:
###### Problem 7.1.
For which simplicial complexes does weak $\mathbb{F}$-Golodness imply
$\mathbb{F}$-tightness?
Interestingly, the opposite implication always holds.
###### Proposition 7.2.
Let $K$ be a simplicial complex with vertex set $[m]$. If $K$ is
$\mathbb{F}$-tight, then it is weakly $\mathbb{F}$-Golod.
###### Proof.
Take any disjoint subsets $\emptyset\neq I,J\subset[m]$. Then there is a map
$\iota_{I,J}\colon K_{I\sqcup J}\to K_{I}*K_{J}$
as in Section 3. By Lemma 3.2, $K$ is weakly $\mathbb{F}$-Golod if and only if
the map $\iota_{I,J}$ is trivial in homology with coefficients in
$\mathbb{F}$. Now we suppose $K$ is $\mathbb{F}$-tight. Then $K_{I\sqcup J}$
is $\mathbb{F}$-tight too, and so we only need to consider the case $I\sqcup
J=[m]$. By the Künneth theorem, the map
$(j_{I}*j_{J})_{*}\colon\widetilde{H}_{*}(K_{I}*K_{J};\mathbb{F})\to\widetilde{H}_{*}(K*K;\mathbb{F})$
is injective, where $j_{I}\colon K_{I}\to K$ denotes the inclusion. Then it
suffices to show the composite $(j_{I}*j_{J})\circ\iota_{I,J}$ is null-
homotopic.
Now we may assume $|K|\subset\mathbb{R}^{m}$ by identifying a simplex
$\\{i_{1},\ldots,i_{k}\\}\in K$ with
$\\{t_{1}e_{i_{1}}+\cdots+t_{k}e_{i_{k}}\mid t_{1}+\cdots+t_{k}=1,\,t_{1}\geq
0,\,\ldots,t_{k}\geq 0\\}$
where $e_{1},\ldots,e_{m}$ is the standard basis of $\mathbb{R}^{m}$. We may
assume $|K*K|\subset\mathbb{R}^{2m}$ in the same way. Consider a homotopy
$h_{t}^{i}\colon\mathbb{R}^{2m}\times[0,1]\to\mathbb{R}^{2m}$ defined by
$h_{t}^{i}(x_{1},\ldots,x_{m},y_{1},\ldots,y_{m})=(x_{1},\ldots,(1-t)x_{i}+ty_{i},\ldots,x_{m},y_{1},\ldots,tx_{i}+(1-t)y_{i},\ldots,y_{m})$
for $(x_{1},\ldots,x_{m},y_{1},\ldots,y_{m})\in\mathbb{R}^{2m}$. Then
$h_{t}^{i}$ restricts to a homotopy $h_{t}^{i}\colon|K*K|\times[0,1]\to|K*K|$
such that for $i\in I$,
$(j_{I}*j_{J})\circ\iota_{I,J}=h_{0}^{i}\circ(j_{I}*j_{J})\circ\iota_{I,J}\simeq
h_{1}^{i}\circ(j_{I}*j_{J})\circ\iota_{I,J}=(j_{I-i}*j_{J\cup
i})\circ\iota_{I-i,J\cup i}.$
Thus for $v\in[m]$,
$(j_{I}*j_{J})\circ\iota_{I,J}\simeq(j_{v}*j_{[m]-v})\circ\iota_{v,[m]-v}$.
Since $|v*K_{[m]-v}|$ is contractible, we get
$(j_{I}*j_{J})\circ\iota_{I,J}\simeq*$, completing the proof. ∎
In this paper, we have studied the relationship between Golodness and
tightness through tight-neighborliness, which works perfectly in dimension 3.
However, in dimensions $\geq 4$, tight-neighborliness does not work as well
because it is not equivalent to tightness, as mentioned in Section 1. So we
consider the following problem, the weak version of which is Problem 7.1 for
manifold triangulations of dimensions $\geq 4$.
###### Problem 7.3.
What condition on closed connected $d$-manifold triangulations with $d\geq 4$
guarantees $\mathbb{F}$-Golodness and $\mathbb{F}$-tightness are equivalent?
One approach is to impose a topological condition on the manifolds. For example,
the condition on the Betti number is stated in Section 1. We also have the
following theorem, in which the manifold triangulations are not tight-neighborly.
###### Theorem 7.4.
Let $M$ be a triangulation of a closed $(d-1)$-connected $2d$-manifold for
$d\geq 2$. Then the following are equivalent:
1. (1)
$M$ is $\mathbb{F}$-Golod for any field $\mathbb{F}$;
2. (2)
$M$ is $\mathbb{F}$-tight for any field $\mathbb{F}$;
3. (3)
$M$ is $d$-neighborly;
4. (4)
the fat-wedge filtration of $\mathbb{R}Z_{M}$ is trivial.
###### Proof.
The implication (1) $\Rightarrow$ (2) holds by Theorem 3.4 because $M$ is
orientable. Suppose $M$ has a minimal non-face $I$ with $|I|\leq d+1$. Then
$M_{I}=\partial\Delta(I)$, implying $H_{|I|-2}(M_{I};\mathbb{F})\neq 0$. Since
$M$ is $\mathbb{F}$-tight, the natural map $H_{|I|-2}(M_{I};\mathbb{F})\to
H_{|I|-2}(M;\mathbb{F})$ is injective, and since $M$ is $(d-1)$-connected,
$\widetilde{H}_{*}(M;\mathbb{F})=0$ for $*<d$. Then we get a contradiction, so
we obtain the implication (2) $\Rightarrow$ (3). The implication (3)
$\Rightarrow$ (4) follows from [17, Theorem 1.6]. The implication (4)
$\Rightarrow$ (1) holds by the fact that $K$ is $\mathbb{F}$-Golod over any
field $\mathbb{F}$ whenever $Z_{K}$ is a suspension, as mentioned in Section
1. Therefore, the proof is complete. ∎
## References
* [1] B. Bagchi and B. Datta, On $k$-stellated and $k$-stacked spheres, Discrete Mathematics 313 (2013), 2318-2329.
* [2] B. Bagchi and B. Datta, On stellated spheres and a tightness criterion for combinatorial manifolds, Eur. J. Comb. 36 (2014), 294-313.
* [3] B. Bagchi, B. Datta, and J. Spreer, A characterization of tightly triangulated 3-manifolds, Eur. J. Comb. 61 (2017), 133-137.
* [4] A. Bahri, M. Bendersky, and F.R. Cohen, Polyhedral products and features of their homotopy theory, in Handbook of Homotopy Theory, Chapman and Hall/CRC, 2019.
* [5] A. Bahri, M. Bendersky, F.R. Cohen, and S. Gitler, The polyhedral product functor: A method of decomposition for moment-angle complexes, arrangements and related spaces, Adv. Math. 225 (2010), 1634-1668.
* [6] I.V. Baskakov, V.M. Buchstaber, and T.E. Panov, Cellular cochain algebras and torus actions (in Russian), Uspekhi Mat. Nauk 59, no. 3 (2004), 159-160; English translation in Russian Math. Surveys 59 (2004), 562-563.
* [7] A. Berglund and M. Jöllenbeck, On the Golod property of Stanley–Reisner rings, J. Algebra 315 (2007), no. 1, 249-273.
* [8] B.A. Burton, B. Datta, N. Singh, and J. Spreer, A construction principle for tight and minimal triangulations of manifolds, Exp. Math. 27 (2018), 22-36.
* [9] B. Datta and S. Murai, On stacked triangulated manifolds, Electronic J. Comb. 24 (2017), no. 4, Paper No. 4. 12, 14pp.
* [10] J. Grbić, T. Panov, S. Theriault, and J. Wu, The homotopy types of moment-angle complexes for flag complexes, Trans. of the Amer. Math. Soc. 368 (2016), no.9, 6663-6682.
* [11] J. Grbić and S. Theriault, The homotopy type of the polyhedral product for shifted complexes, Adv. Math. 245 (2013), 690-715.
* [12] V. Grujić and V. Welker, Moment-angle complexes of pairs $(D^{n},S^{n-1})$ and simplicial complexes with vertex-decomposable duals, Monatsh. Math. 176 (2015), 255-273.
* [13] A. Hatcher, Algebraic Topology, Cambridge University Press, Cambridge, 2002.
* [14] K. Iriye and D. Kishimoto, Decompositions of polyhedral products for shifted complexes, Adv. Math. 245 (2013), 716-736.
* [15] K. Iriye and D. Kishimoto, Golodness and polyhedral products for two-dimensional simplicial complexes, Forum Math. 30 (2018), no. 2, 527-532.
* [16] K. Iriye and D. Kishimoto, Golodness and polyhedral products of simplicial complexes with minimal Taylor resolutions, Homology Homotopy Appl. 20 (2018), no. 1, 69-78.
* [17] K. Iriye and D. Kishimoto, Fat-wedge filtration and decomposition of polyhedral products, Kyoto J. Math. 59 (2019), no. 1, 1-51.
* [18] K. Iriye and D. Kishimoto, Whitehead products in moment-angle complexes, J. Math. Soc. Japan 72 (2020), 1239-1257.
* [19] K. Iriye and D. Kishimoto, Two dimensional Golod complexes, accepted by Homology, Homotopy Appl.
* [20] K. Iriye and T. Yano, A Golod complex with non-suspension moment-angle complex, Topology Appl. 225 (2017), 145-163.
* [21] L. Katthän, A non-Golod ring with a trivial product on its Koszul homology, J. Algebra 479 (2017), 244-262.
* [22] W. Kühnel and F.H. Lutz, A census of tight triangulations, in: Discrete Geometry and Rigidity, Budapest, 1999, Period. Math. Hungar. 39 (1999), 161-183.
* [23] N.H. Kuiper, Minimal total absolute curvature for immersions, Invent. Math. 10 (1970), 209-238.
* [24] F.H. Lutz, T. Sulanke, E. Swartz, $f$-vectors of 3-manifolds, Electron. J. Combin. 16 (2) (2009) $\\#$R13.
* [25] J.R. Munkres, Elements of Algebraic Topology, Addison-Wesley 1984.
* [26] J.-P. Serre, Algèbre locale, Multiplicités, Lecture Notes in Mathematics 11 Springer, Berlin, (1965).
# $\eta$ mesons in hot magnetized nuclear matter
Rajesh Kumar<EMAIL_ADDRESS>Arvind Kumar<EMAIL_ADDRESS><EMAIL_ADDRESS>Department of Physics, Dr. B R Ambedkar National Institute
of Technology Jalandhar, Jalandhar – 144011, Punjab, India
###### Abstract
The $\eta N$ interactions are investigated in hot magnetized asymmetric
nuclear matter using the chiral SU(3) model and chiral perturbation theory (ChPT).
In the chiral model, the in-medium properties of the $\eta$ meson are calculated
from the medium-modified scalar densities under the influence of an external
magnetic field. Further, in the combined approach of the chiral model and ChPT,
the off-shell contributions of the $\eta N$ interactions are evaluated from the ChPT
effective $\eta N$ Lagrangian, and the in-medium effects of the scalar densities
are incorporated from the chiral SU(3) model. We observe a significant effect
of the magnetic field on the in-medium mass and optical potential of the $\eta$
meson, and a deeper mass-shift in the combined approach of ChPT and the chiral
model than with the chiral SU(3) model alone. In both approaches, the $\eta$ meson
undergoes no additional mass-shift in the presence of the magnetic field, owing
to its uncharged nature.
## I Introduction
The investigation of in-medium meson-baryon properties under the effect of a
strong external magnetic field is an active area of research motivated by non-central
Heavy-Ion Collisions (HICs) Cho2014 ; Cho2015 ; Gubler2016 ; Reddy2018
; Kharzeev2008 ; Kharzeev2013 ; Fukushima2008 ; Skokov2009 . Besides, the
presence of temperature and isospin asymmetry, i.e., unequal numbers of
neutrons and protons, leads to significant modifications in the in-medium
properties of hadrons Reddy2018 ; Papazoglou1999 ; Mishra2009 . Strong
magnetic fields, of the order of $eB\sim{15{m^{2}_{\pi}}}$ ($5\times 10^{19}$
gauss) at the Large Hadron Collider (LHC), CERN, and $eB\sim{2{m^{2}_{\pi}}}$
($6.5\times 10^{18}$ gauss) at the Relativistic Heavy Ion Collider (RHIC), BNL, may
have been produced Kharzeev2008 ; Fukushima2008 ; Skokov2009 . As the remnants
move away from the collision zone, the magnitude of the magnetic field
decreases rapidly. Meanwhile, the decaying magnetic field interacts with the
residual matter and, as per Lenz's law, induces currents. These induced
currents generate an opposing magnetic field
which interacts with the primary field. This process slows down
the decay rate of the primary magnetic field and gives it enough time to leave
imprints on the hadrons Tuchin2011 ; Tuchin2011a ; Tuchin2013 ;
Marasinghe2011 ; Das2017 ; Reddy2018 ; Cho2015 . The slowing down of the
relaxation time is known as the chiral magnetic effect Kharzeev2013 ;
Fukushima2008 ; Vilenkin1980 ; Burnier2011 . In HICs, the time evolution of
the magnetic field is still unclear; therefore, to obtain a correct estimate of
the medium's electrical conductivity and relaxation time, further study is
required Reddy2018 .
In future experiments, namely compressed baryonic matter (CBM) and
antiproton annihilation at Darmstadt (PANDA) at GSI, Germany, the Japan proton
accelerator research complex (J-PARC) in Japan, and the nuclotron-based ion
collider facility (NICA) at Dubna, Russia, we anticipate significant research
on in-medium meson-baryon interactions Kumar2019 ; Rapp2010 ; Kumar2020a
. On the theoretical side, various effective models have been built to study
meson-baryon interactions, namely the quark-meson coupling (QMC)
model Guichon1988 ; Hong2001 ; Tsushima1999 ; Sibirtsev1999 ; Saito1994 ;
Panda1997 , the Polyakov quark meson (PQM) model Chatterjee2012 ; Schaefer2010 ,
the coupled channel approach Tolos2004 ; Tolos2006 ; Tolos2008 ; Hofmann2005 ,
the chiral SU(3) model Papazoglou1999 ; Mishra2004a ; Mishra2009 ; Kumar2010 ;
Kumar2019 , chiral perturbation theory (ChPT) Zhong2006 ; Jenkins1991 ;
Kumar2020c , QCD sum rules Reinders1981 ; Hayashigaki2000 ; Hilger2009 ;
Reinders1985 ; Klingl1997 ; Klingl1999 , the Nambu-Jona-Lasinio (NJL) model
Nambu1961 , and the Polyakov loop extended NJL (PNJL) model Fukushima2004 ;
Kashiwa2008 ; Ghosh2015 . In the present work, we use two of the above
theoretical approaches, i.e., the chiral SU(3) model and ChPT.
The $K/\pi/\eta-B$ interactions have been studied extensively in the literature using various
theoretical and experimental approaches Jenkins1991 ; Haider1986 ; Liu1986 ;
Kaplan1986 ; Kumar2020c ; Chen2017 ; David2018 ; Inoue2002 . Among these
mesons, the $\eta$-meson is of special interest due to the possibility of
$\eta$-nucleon bound-state formation Jenkins1991 ; Zhong2006 ; Waas1997 . The
properties of $\eta$ mesons Peng1987 ; Berg1994 ; Chiavassa1998 ;
Martinez1999 ; Averbeck2003 ; Agakishiev2013 , for instance the transverse
momentum spectrum near the threshold of free $N$-$N$ production Agakishiev2013
and $\eta$-meson production Peng1987 ;
Martinez1999 ; Agakishiev2013 , have been studied experimentally. On the theoretical side, Haider and Liu were
the first to observe that the $\eta N$ interaction is attractive
and that, therefore, the $\eta$-meson can form a bound state with nucleons Haider1986
; Liu1986 . Chiang $et~al.$ anticipated an optical potential $U_{\eta}$ = -34
MeV at $\rho_{0}$ using the chiral coupled channel approach Chiang1991 . The
authors also predicted that the negative potential can be used to generate
$\eta$-mesic atoms with light/heavy nuclei. At nuclear saturation density
$\rho_{0}$, a mass-shift of -20 MeV was evaluated by incorporating the leading-order
terms in the coupled channel approach Waas1997 . In Ref. Wang2010 ,
an optical potential of -72 MeV was anticipated, while an
optical potential of -60 MeV at $\rho_{N}=\rho_{0}$ was predicted using the QMC model
Tsushima1998 . Using ChPT and a relativistic mean-field model at nuclear
saturation density, and including the $\eta N$ off-shell terms in the equation
of motion, the authors anticipated an optical potential of -83 $\pm$ 5 MeV
Zhong2006 . Furthermore, using the same approach, Song $et$ $al.$ obtained the
negative optical potential as a function of the $\eta N$ scattering length
Song2008 . The $\eta$ production rate and momentum
dependence under the influence of isospin-asymmetric HICs were studied in Ref. Chen2017 ,
and the effect of $\eta N$ interactions was studied using an intranuclear
cascade model under distinct medium attributes David2018 .
Recently, using the combined (chiral SU(3) model $+$ ChPT) and solo (chiral
SU(3) model) approaches, we derived the $\eta N$ equation of motion in non-magnetized
nuclear matter and observed a mass-shift of -116.83 MeV (-54.61 MeV) in the combined (solo) approach at
$\rho_{0}$ with $a^{\eta N}$=1.02 fm Kumar2020c . Evidently, the mass and
optical potential of $\eta$-mesons are model dependent and therefore
still need more exploration.
In this article, we investigate the effect of the magnetic field on the in-medium
mass and optical potential of the $\eta$-meson in hot asymmetric nuclear
matter, extending our previous study of the nuclear medium at
zero magnetic field Kumar2020c . First, we evaluate the in-medium dispersion
relation of the $\eta$-meson using the $\eta N$ Lagrangian with the magnetically
induced scalar densities from the chiral SU(3) model Kumar2020c . In the
second approach, we use the scalar densities of nucleons, calculated in the
chiral SU(3) model, in the dispersion relation of $\eta$-mesons which is
derived from the chiral effective $\eta N$ Lagrangian of chiral perturbation
theory Zhong2006 .
Chiral perturbation theory is widely used to study in-medium baryon-meson
interactions. The theory was first used to investigate the
in-medium properties of kaons Kaplan1986 and was later extended by adding
leading-order terms to the Lagrangian to study the interactions of $\eta$ mesons with
nucleons Jenkins1991 . The theory has also been applied to astrophysical objects
such as neutron stars; in neutron star matter, heavy-baryon ChPT was
applied to study kaon condensation Brown1994 ; Lee1995 ; Kaiser1995 .
Furthermore, to give a correct description of $\eta N$ interactions, next-to-leading
order terms were introduced in the ChPT Lagrangian. Consequently,
the authors anticipated a deeper optical potential of $\eta$-mesons in the
nuclear medium Zhong2006 . The chiral SU(3) model is also widely used to study
hot and dense hadronic matter Kumar2010 ; Zschiesche2004 ; Mishra2004 .
For instance, the methodology was used in strange hadronic matter to study
the in-medium properties of kaons and antikaons Mishra2004 . Recently, the
mass and decay width of the $\phi$ meson were also calculated in strange
hadronic matter by considering the $K\bar{K}$ loop at the one-loop level
Kumar2020b . The chiral SU(3) model was extended to the charm SU(4) and bottom
SU(5) sectors to study the properties of heavy $D$ and $B$ mesons, respectively
Mishra2004a ; Mishra2009 ; Kumar2011 . The chiral model has also been successfully
used to anticipate the in-medium properties of baryons and mesons in the
presence of a strong external magnetic field. For instance, using the combined
approach of the chiral model and QCD sum rules, the in-medium mass and decay
constant of scalar, pseudoscalar, vector, and axial-vector $D$ mesons were
calculated with Kumar2020 ; Kumar2020a and without Kumar2014 ; Chhabra2017 ; Chhabra2017a ;
Chhabra2018 incorporating the effect of the external magnetic field.
Using the same combination, the medium-modified properties of
charmonia and bottomonia were studied in hot magnetized asymmetric nuclear
matter Kumar2019 ; Kumar2019a ; Kumar2010 .
The outline of the present paper is as follows: in the next section, we
give a brief explanation of the magnetic field effects considered in the present work.
In Section II.1.1, we derive the magnetically induced $\eta N$ interactions in
the chiral SU(3) model, whereas in Section II.1.2, the $\eta N$ formalism is
given in the joint approach of the chiral model and chiral perturbation
theory. In Section III, we discuss the in-medium effects of a strong
magnetic field on the mass of the $\eta$-meson, and finally, in Section IV, we
conclude our investigation.
## II Methodology
### II.1 MAGNETIC FIELD INDUCED SCALAR FIELDS IN THE CHIRAL SU(3) MODEL
The hadronic chiral SU(3) model incorporates the trace anomaly and the non-linear
realization of chiral symmetry, both properties of QCD Weinberg1968 ; Coleman1969 ;
Zschiesche1997 ; Bardeen1969 ; Kumar2020 ; Papazoglou1999 ; Kumar2019
. In this methodology, the $\eta$-nucleon interactions are
computed through the exchange of scalar ($\sigma$, $\zeta$, $\delta$ and $\chi$)
and vector ($\omega$ and $\rho$) fields. The glueball field $\chi$ is
introduced in the model to preserve the broken scale invariance property of
QCD Kumar2020 . The isospin asymmetry, $I$, of the nuclear matter is
introduced by the addition of the scalar-isovector field $\delta$ and the vector-isovector
field $\rho$ Kumar2010 . In the present work, the impact of a
strong magnetic field along the $Z$-axis, with vector potential
$A^{\mu}=(0,0,Bx,0)$, is studied by adding the magnetically induced Lagrangian
density to the chiral model's effective Lagrangian density Kumar2019 ;
Reddy2018 . Thus, we write the modified Lagrangian density of the chiral model as
${\cal L}_{chiral}={\cal L}_{kin}+\sum_{M=S,V}{\cal L}_{NM}+{\cal
L}_{vec}+{\cal L}_{0}+{\cal L}_{SB}+{\cal L}_{mag}.$ (1)
Individually,
$\displaystyle{\cal L}_{NM}=-\sum_{i}\bar{\psi_{i}}\left[m_{i}^{*}+g_{\omega
i}\gamma_{0}\omega+g_{\rho i}\gamma_{0}\rho\right]\psi_{i},$ (2)
$\displaystyle{\cal L}_{vec}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left(m_{\omega}^{2}\omega^{2}+m_{\rho}^{2}\rho^{2}\right)\frac{\chi^{2}}{\chi_{0}^{2}}+g_{4}(\omega^{4}+6\omega^{2}\rho^{2}+\rho^{4}),$
(3) $\displaystyle{\cal L}_{0}$ $\displaystyle=$
$\displaystyle-\frac{1}{2}k_{0}\chi^{2}\left(\sigma^{2}+\zeta^{2}+\delta^{2}\right)+k_{1}\left(\sigma^{2}+\zeta^{2}+\delta^{2}\right)^{2}$
(4) $\displaystyle+$ $\displaystyle
k_{2}\left(\frac{\sigma^{4}}{2}+\frac{\delta^{4}}{2}+3\sigma^{2}\delta^{2}+\zeta^{4}\right)+k_{3}\chi\left(\sigma^{2}-\delta^{2}\right)\zeta$
$\displaystyle-$ $\displaystyle
k_{4}\chi^{4}-\frac{1}{4}\chi^{4}{\rm{ln}}\frac{\chi^{4}}{\chi_{0}^{4}}+\frac{d}{3}\chi^{4}{\rm{ln}}\Bigg{(}\bigg{(}\frac{\left(\sigma^{2}-\delta^{2}\right)\zeta}{\sigma_{0}^{2}\zeta_{0}}\bigg{)}\bigg{(}\frac{\chi}{\chi_{0}}\bigg{)}^{3}\Bigg{)},$
$\displaystyle{\cal
L}_{SB}=-\left(\frac{\chi}{\chi_{0}}\right)^{2}\left[m_{\pi}^{2}f_{\pi}\sigma+\big{(}\sqrt{2}m_{K}^{2}f_{K}-\frac{1}{\sqrt{2}}m_{\pi}^{2}f_{\pi}\big{)}\zeta\right],$
(5)
and
${\cal
L}_{mag}=-{\bar{\psi_{i}}}q_{i}\gamma_{\mu}A^{\mu}\psi_{i}-\frac{1}{4}\kappa_{i}\mu_{N}{\bar{\psi_{i}}}\sigma^{\mu\nu}F_{\mu\nu}\psi_{i}-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}.$
(6)
In Eq. (1), the first term ${\cal L}_{kin}$ denotes the kinetic energy, and
the second term ${\cal L}_{NM}$, given by Eq. (2), describes the nucleon-meson
interactions. In this equation, the in-medium mass of nucleons is given as
$m_{i}^{*}=-(g_{\sigma i}\sigma+g_{\zeta i}\zeta+g_{\delta i}\tau_{3}\delta)$,
where $\tau_{3}$ denotes the third component of the isospin quantum number and
$g_{\sigma i}$, $g_{\zeta i}$ and $g_{\delta i}$ denote the coupling strengths
of the scalar-isoscalar field $\sigma$, the scalar-isoscalar field $\zeta$ and the scalar-isovector
field $\delta$ with nucleons ($i$=$p,n$), respectively. The term
${\cal L}_{vec}$ (Eq. (3)) generates the masses of the vector mesons through their
interactions with the scalar mesons and quartic self-interaction terms; ${\cal
L}_{0}$ represents the spontaneous chiral symmetry breaking, where
$\sigma_{0}$, $\zeta_{0}$, $\delta_{0}$ and $\chi_{0}$ symbolize the vacuum
values of the $\sigma$, $\zeta$, $\delta$ and $\chi$ scalar fields,
respectively. To calculate the value of the $d$ parameter in the last term of
Eq. (4), we recall the QCD $\beta$ function at the one-loop level for $N_{c}$
colors and $N_{f}$ flavours, given by Schechter1980 ; Gomm1986
$\beta_{\rm{QCD}}\left(g\right)=-\frac{11N_{c}g^{3}}{48\pi^{2}}+\frac{N_{f}g^{3}}{24\pi^{2}}+O(g^{5}).$
(7)
In the above expression, the first term comes from the gluon self-interaction
(anti-screening) and the second term from the interactions of quark pairs
(screening). For $N_{c}$=3 and $N_{f}$=3, we estimate the value of $d$ to be
6/33, whereas for $N_{c}$=3 and $N_{f}$=2, the parameter $d$ takes the value
4/33 Schechter1980 ; Gomm1986 ; Kumar2010 . In the present model, we use
$d$=0.064 Kumar2010 , which is fitted, along with the other medium parameters
such as $k_{i}(i=1$ to $4)$, to reproduce the vacuum values of the fields
($\sigma_{0}$, $\zeta_{0}$, $\delta_{0}$, $\chi_{0}$, $\omega_{0}$ and
$\rho_{0}$) and the masses of the nucleons and the $\eta$, $\eta^{\prime}$ mesons
Papazoglou1999 ; Kumar2010 ; Kumar2019 . The values of the fitted
parameters are tabulated in Table 1.
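As a quick numerical illustration (a sketch, not the authors' code), evaluating the vacuum nucleon mass expression $m_{i}^{*}=-(g_{\sigma i}\sigma+g_{\zeta i}\zeta+g_{\delta i}\tau_{3}\delta)$ with the fitted values of Table 1, and assuming $\delta_{0}=0$ for the isospin-symmetric vacuum, approximately reproduces the physical nucleon mass:

```python
# Sketch: vacuum nucleon mass from the scalar fields (Table 1 couplings).
# delta0 = 0 is an assumption for the isospin-symmetric vacuum.
g_sigma_N, g_zeta_N, g_delta_N = 10.56, -0.46, 2.48
sigma0, zeta0, delta0 = -93.29, -106.8, 0.0   # MeV

def nucleon_mass(tau3):
    """m_i* = -(g_sigma*sigma + g_zeta*zeta + g_delta*tau3*delta)."""
    return -(g_sigma_N * sigma0 + g_zeta_N * zeta0 + g_delta_N * tau3 * delta0)

print(nucleon_mass(+1), nucleon_mass(-1))   # both ~936 MeV, close to M_N = 939 MeV
```

The small residual difference from $M_{N}=939$ MeV reflects the rounding of the tabulated parameters.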
Furthermore, ${\cal L}_{SB}$ in Eq. (5) denotes the explicit chiral
symmetry breaking, and the term ${\cal L}_{mag}$ accounts for the interaction of
hadrons with the magnetic field. In Eq. (6), the symbol $\psi_{i}$
represents the wave function of the $i$th nucleon and the second term describes the
tensorial interaction with the electromagnetic tensor $F_{\mu\nu}$. Also, the
symbols $\mu_{N}$ and $\kappa_{i}$ represent the nuclear magneton
($\mu_{N}$=$\frac{e}{2m_{N}}$) and the anomalous magnetic moment of the $i$th nucleon,
respectively.
The non-linear coupled equations of motion of the meson fields are obtained by
solving the Euler-Lagrange equations using the total Lagrangian [Eq. (1)]
Kumar2019 ; Kumar2019a and are given as
$\displaystyle
k_{0}\chi^{2}\sigma-4k_{1}\left(\sigma^{2}+\zeta^{2}+\delta^{2}\right)\sigma-2k_{2}\left(\sigma^{3}+3\sigma\delta^{2}\right)-2k_{3}\chi\sigma\zeta$
$\displaystyle-\frac{d}{3}\chi^{4}\bigg{(}\frac{2\sigma}{\sigma^{2}-\delta^{2}}\bigg{)}+\left(\frac{\chi}{\chi_{0}}\right)^{2}m_{\pi}^{2}f_{\pi}=\sum
g_{\sigma i}\rho_{i}^{s},$ (8) $\displaystyle
k_{0}\chi^{2}\zeta-4k_{1}\left(\sigma^{2}+\zeta^{2}+\delta^{2}\right)\zeta-4k_{2}\zeta^{3}-k_{3}\chi\left(\sigma^{2}-\delta^{2}\right)$
$\displaystyle-\frac{d}{3}\frac{\chi^{4}}{\zeta}+\left(\frac{\chi}{\chi_{0}}\right)^{2}\left[\sqrt{2}m_{K}^{2}f_{K}-\frac{1}{\sqrt{2}}m_{\pi}^{2}f_{\pi}\right]=\sum
g_{\zeta i}\rho_{i}^{s},$ (9) $\displaystyle
k_{0}\chi^{2}\delta-4k_{1}\left(\sigma^{2}+\zeta^{2}+\delta^{2}\right)\delta-2k_{2}\left(\delta^{3}+3\sigma^{2}\delta\right)+2k_{3}\chi\delta\zeta$
$\displaystyle+\frac{2}{3}d\chi^{4}\left(\frac{\delta}{\sigma^{2}-\delta^{2}}\right)=\sum
g_{\delta i}\tau_{3}\rho_{i}^{s},$ (10)
$\displaystyle\left(\frac{\chi}{\chi_{0}}\right)^{2}m_{\omega}^{2}\omega+g_{4}\left(4{\omega}^{3}+12{\rho}^{2}{\omega}\right)=\sum
g_{\omega i}\rho_{i}^{v},$ (11)
$\displaystyle\left(\frac{\chi}{\chi_{0}}\right)^{2}m_{\rho}^{2}\rho+g_{4}\left(4{\rho}^{3}+12{\omega}^{2}{\rho}\right)=\sum
g_{\rho i}\tau_{3}\rho_{i}^{v},$ (12)
and
$\displaystyle
k_{0}\chi\left(\sigma^{2}+\zeta^{2}+\delta^{2}\right)-k_{3}\left(\sigma^{2}-\delta^{2}\right)\zeta+\chi^{3}\left[1+{\rm{ln}}\left(\frac{\chi^{4}}{\chi_{0}^{4}}\right)\right]+(4k_{4}-d)\chi^{3}$
$\displaystyle-\frac{4}{3}d\chi^{3}{\rm{ln}}\Bigg{(}\bigg{(}\frac{\left(\sigma^{2}-\delta^{2}\right)\zeta}{\sigma_{0}^{2}\zeta_{0}}\bigg{)}\bigg{(}\frac{\chi}{\chi_{0}}\bigg{)}^{3}\Bigg{)}+\frac{2\chi}{\chi_{0}^{2}}\left[m_{\pi}^{2}f_{\pi}\sigma+\left(\sqrt{2}m_{K}^{2}f_{K}-\frac{1}{\sqrt{2}}m_{\pi}^{2}f_{\pi}\right)\zeta\right]$
$\displaystyle-\frac{\chi}{{{\chi_{0}}^{2}}}(m_{\omega}^{2}\omega^{2}+m_{\rho}^{2}\rho^{2})=0,$
(13)
respectively.
In the above equations, the symbols $m_{\pi}$, $m_{K}$, $f_{\pi}$ and $f_{K}$
represent the masses and decay constants of pions and kaons, respectively. The
isospin asymmetry of the medium is quantified by the parameter
$I=-\frac{\Sigma_{i}\tau_{3i}\rho^{v}_{i}}{2\rho_{N}}$. Furthermore,
$\rho^{s}_{i}$ and $\rho^{v}_{i}$ denote the magnetic field induced scalar
and vector densities of the $i$th nucleon ($i=n,p$) Kumar2019 ; Broderick2000 ;
Broderick2002 . Due to Landau quantization, the magnetic field interacts with
protons and neutrons differently. For the uncharged neutron, the expressions for the
scalar and vector densities are given as
$\displaystyle\rho^{s}_{n}$ $\displaystyle=$
$\displaystyle\frac{1}{2\pi^{2}}\sum_{s=\pm
1}\int^{\infty}_{0}k^{n}_{\bot}dk^{n}_{\bot}\left(1-\frac{s\mu_{N}\kappa_{n}B}{\sqrt{m^{*2}_{n}+\left(k^{n}_{\bot}\right)^{2}}}\right)\int^{\infty}_{0}\,dk^{n}_{\parallel}\frac{m^{*}_{n}}{\tilde{E}^{n}_{s}}\left(f^{n}_{k,s}+\bar{f}^{n}_{k,s}\right),$
(14)
and
$\displaystyle\rho^{v}_{n}$ $\displaystyle=$
$\displaystyle\frac{1}{2\pi^{2}}\sum_{s=\pm
1}\int^{\infty}_{0}k^{n}_{\bot}dk^{n}_{\bot}\int^{\infty}_{0}\,dk^{n}_{\parallel}\left(f^{n}_{k,s}-\bar{f}^{n}_{k,s}\right),$
(15)
respectively. Likewise, the scalar and vector densities for a charged proton
with the effect of Landau quantization are given by Broderick2000 ;
Broderick2002
$\rho^{s}_{p}=\frac{|q_{p}|Bm^{*}_{p}}{2\pi^{2}}\Bigg{[}\sum_{\nu=0}^{\nu_{max}^{(s=1)}}\int^{\infty}_{0}\frac{dk^{p}_{\parallel}}{\sqrt{(k^{p}_{\parallel})^{2}+(\bar{m}_{p})^{2}}}\left(f^{p}_{k,\nu,s}+\bar{f}^{p}_{k,\nu,s}\right)+\sum_{\nu=1}^{\nu_{max}^{(s=-1)}}\int^{\infty}_{0}\frac{dk^{p}_{\parallel}}{\sqrt{(k^{p}_{\parallel})^{2}+(\bar{m}_{p})^{2}}}\left(f^{p}_{k,\nu,s}+\bar{f}^{p}_{k,\nu,s}\right)\Bigg{]},$
(16)
and
$\displaystyle\rho^{v}_{p}=\frac{|q_{p}|B}{2\pi^{2}}\Bigg{[}\sum_{\nu=0}^{\nu_{max}^{(s=1)}}\int^{\infty}_{0}dk^{p}_{\parallel}\left(f^{p}_{k,\nu,s}-\bar{f}^{p}_{k,\nu,s}\right)+\sum_{\nu=1}^{\nu_{max}^{(s=-1)}}\int^{\infty}_{0}dk^{p}_{\parallel}\left(f^{p}_{k,\nu,s}-\bar{f}^{p}_{k,\nu,s}\right)\Bigg{]},$
(17)
respectively.
In the above equations,
$\bar{m}_{p}$=$\sqrt{m^{*2}_{p}+2\nu|q_{p}|B}-s\mu_{N}\kappa_{p}B$ defines the
effective mass of the proton, where the symbol $\nu$ labels the Landau levels. The
effective energies of the neutron and proton are given by
$\tilde{E}^{n}_{s}=\sqrt{\left(k^{n}_{\parallel}\right)^{2}+\left(\sqrt{m^{*2}_{n}+\left(k^{n}_{\bot}\right)^{2}}-s\mu_{N}\kappa_{n}B\right)^{2}},$
(18)
and
$\tilde{E}^{p}_{\nu,s}=\sqrt{\left(k^{p}_{\parallel}\right)^{2}+\left(\sqrt{m^{*2}_{p}+2\nu|q_{p}|B}-s\mu_{N}\kappa_{p}B\right)^{2}},$
(19)
respectively. Also, the symbols ${f}^{n}_{k,s}$, $\bar{f}^{n}_{k,s}$,
${f}^{p}_{k,\nu,s}$ and $\bar{f}^{p}_{k,\nu,s}$ denote the finite-temperature Fermi
distribution functions for the nucleons and their antinucleons, given as
$\displaystyle f^{n}_{k,s}$ $\displaystyle=$
$\displaystyle\frac{1}{1+\exp\left[\beta(\tilde{E}^{n}_{s}-\mu^{*}_{n})\right]},\qquad\bar{f}^{n}_{k,s}=\frac{1}{1+\exp\left[\beta(\tilde{E}^{n}_{s}+\mu^{*}_{n})\right]}.$
(20) $\displaystyle f^{p}_{k,\nu,s}$ $\displaystyle=$
$\displaystyle\frac{1}{1+\exp\left[\beta(\tilde{E}^{p}_{\nu,s}-\mu^{*}_{p})\right]},\qquad\bar{f}^{p}_{k,\nu,s}=\frac{1}{1+\exp\left[\beta(\tilde{E}^{p}_{\nu,s}+\mu^{*}_{p})\right]}.$
(21)
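The structure of Eqs. (18) and (20) can be sketched numerically. In the minimal sketch below, the effective mass, momenta, chemical potential, temperature, and field strength are illustrative assumptions (not the paper's inputs), and $\kappa_{n}\approx-1.913$ is the standard neutron anomalous magnetic moment in units of the nuclear magneton:

```python
import math

# Sketch of Eq. (18) and Eq. (20). All energies in MeV; e = 1 units, B in MeV^2.
M_N = 939.0
mu_N = 1.0 / (2.0 * M_N)      # nuclear magneton e/(2 m_N)
kappa_n = -1.913              # neutron anomalous moment (assumed standard value)

def E_neutron(k_par, k_perp, m_star, s, B):
    """Effective neutron energy, Eq. (18), for spin projection s = +-1."""
    transverse = math.sqrt(m_star**2 + k_perp**2) - s * mu_N * kappa_n * B
    return math.sqrt(k_par**2 + transverse**2)

def fermi(E, mu_star, T):
    """Finite-temperature occupation f = 1/(1 + exp((E - mu*)/T)), Eq. (20)."""
    return 1.0 / (1.0 + math.exp((E - mu_star) / T))

B = 2.0 * 139.0**2            # eB ~ 2 m_pi^2, an RHIC-scale field
E = E_neutron(k_par=100.0, k_perp=50.0, m_star=800.0, s=+1, B=B)
occ = fermi(E, mu_star=900.0, T=100.0)   # occupation probability, between 0 and 1
```

In the $B\to 0$, zero-momentum limit, `E_neutron` reduces to the effective mass $m^{*}_{n}$, as it should.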
Table 1: Values of the parameters used in the present investigation Papazoglou1999 .

Parameter | Value | Parameter | Value | Parameter | Value
---|---|---|---|---|---
$k_{0}$ | 2.53 | $\sigma_{0}$ (MeV) | -93.29 | $g_{\sigma N}$ | 10.56
$k_{1}$ | 1.35 | $\zeta_{0}$ (MeV) | -106.8 | $g_{\zeta N}$ | -0.46
$k_{2}$ | -4.77 | $\chi_{0}$ (MeV) | 409.8 | $g_{\delta N}$ | 2.48
$k_{3}$ | -2.77 | $d$ | 0.064 | $g_{\omega N}$ | 13.35
$k_{4}$ | -0.218 | $g_{4}$ | 79.91 | $g_{\rho N}$ | 5.48
$f_{K}$ (MeV) | 122.14 | $\rho_{0}$ (${\mathrm{fm}}^{-3}$) | 0.15 | $m_{\sigma}$ (MeV) | 466.5
$m_{\pi}$ (MeV) | 139 | $m_{K}$ (MeV) | 498 | $f_{\pi}$ (MeV) | 93.29
$m_{\zeta}$(MeV) | 1024.5 | $m_{\delta}$ (MeV) | 899.5 | $m_{\eta}$ (MeV) | 574.374
$M_{N}$ (MeV) | 939 | | | |
#### II.1.1 $\eta$N INTERACTIONS IN THE MAGNETIZED NUCLEAR MATTER
In this subsection, we evaluate the in-medium mass of $\eta$ mesons via the
dispersion relation in hot magnetized asymmetric nuclear matter. The medium-modified
$\eta$ meson mass is obtained in terms of the scalar and vector fields of
the chiral model, which are solved by considering the interactions of nucleons
with $\eta$ mesons in the presence of an external magnetic field. These scalar
and vector fields modify the scalar and vector densities of the nucleons, which
in turn modifies the self-energy of the $\eta$ mesons.
The $\eta N$ interaction Lagrangian is given as
${\mathcal{L}}_{{\eta N}}={\mathcal{L}}_{{RT}}+{\cal L}_{\eta SB}+{\cal
L}_{d_{1}}^{BM}+{\cal L}_{d_{2}}^{BM}.$ (22)
The individual terms are given in detail as follows.
1. ${\mathcal{L}}_{{RT}}$, the first range term:
The first term in the $\eta N$ Lagrangian comes from the first range term
${\mathcal{L}}_{\mathrm{1st\,range\,term}}=Tr(u_{\mu}Xu^{\mu}X+Xu_{\mu}u^{\mu}X),$
(23)
where
$u_{\mu}=-\frac{i}{2}\left[u^{\dagger}(\partial_{\mu}u)-u(\partial_{\mu}u^{\dagger})\right]$
and $u$=$\text{exp}\left[\frac{i}{\sqrt{2}\sigma_{0}}P\gamma_{5}\right]$. In
the present investigation, we have taken the interactions up to second order.
The matrices $X$ and $P$ represent the scalar and pseudoscalar meson matrices
Zhong2006 , respectively, and are explicitly given as
$X=\frac{1}{\sqrt{2}}\sigma^{a}\lambda_{a}=\left(\begin{array}[]{ccc}(\delta+\sigma)/\sqrt{2}&\delta^{+}&\kappa^{+}\\\
\delta^{-}&(-\delta+\sigma)/\sqrt{2}&\kappa^{0}\\\
\kappa^{-}&\overline{\kappa^{0}}&\zeta\end{array}\right),$ (24)
and
$P=\frac{1}{\sqrt{2}}\pi_{a}\lambda^{a}=\left(\begin{array}[]{ccc}\frac{1}{\sqrt{2}}\left(\pi^{0}+{\frac{\eta}{\sqrt{1+2\,{w}^{2}}}}\right)&\pi^{+}&2\,{\frac{K^{+}}{w+1}}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\pi^{-}&\frac{1}{\sqrt{2}}\left(-\pi^{0}+{\frac{\eta}{\sqrt{1+2\,{w}^{2}}}}\right)&2\,{\frac{K^{0}}{w+1}}\\\
\vskip 6.0pt plus 2.0pt minus 2.0pt\cr
2\,{\frac{K^{-}}{w+1}}&2\,{\frac{\overline{K}^{0}}{w+1}}&-{\frac{\eta\,\sqrt{2}}{\sqrt{1+2\,{w}^{2}}}}\end{array}\right).$
(25)
In Eq. (23), the calculation of the axial currents of pions and kaons results
in the relations
$\sigma_{0}=-f_{\pi}\qquad\zeta_{0}=-\frac{1}{\sqrt{2}}(2f_{K}-f_{\pi}),$ (26)
for the vacuum values of the scalar condensates $\sigma$ and $\zeta$ found in
the linear $\sigma$-model Papazoglou1999 . In Eq. (25), the renormalization
factor $w=\sqrt{2}\zeta_{0}/\sigma_{0}$ is incorporated to
obtain the canonical form of the kinetic energy terms Papazoglou1999 . The
matrix $P$ reduces to the matrix normally used in chiral perturbation
theory Zhong2006 for $w=1$ (i.e., $f_{\pi}$=$f_{K}$). The advantage of $w\neq
1$ is that the SU(3)$_{V}$ breaking effects are accounted for in the $P$ matrix
even at lowest order Zhong2006 .
2. ${\cal L}_{\eta SB}$, the mass term:
The second term in Eq. (22) represents the scale-breaking term of the chiral
model Lagrangian, which is given by
${\cal L}_{SB}=-\frac{1}{2}{\rm
Tr}A_{p}\left(uXu+u^{\dagger}Xu^{\dagger}\right),$ (27)
with $A_{p}$ as a diagonal matrix, given as
$A_{p}=\frac{1}{\sqrt{2}}\left(\begin{array}[]{ccc}m_{\pi}^{2}f_{\pi}&0&0\\\
0&m_{\pi}^{2}f_{\pi}&0\\\
0&0&2m_{K}^{2}f_{K}-m_{\pi}^{2}f_{\pi}\end{array}\right).$ (28)
The $\eta$ meson vacuum mass is extracted from the Lagrangian [Eq. (27)] and is
given as
$m_{\eta}=\frac{1}{f}\sqrt{3m_{\pi}^{2}f_{\pi}^{2}+8f_{K}^{2}m_{K}^{2}-4f_{K}f_{\pi}\left(m_{\pi}^{2}+m_{K}^{2}\right)},$
(29)
where $f=\sqrt{f_{\pi}^{2}+2(2f_{K}-f_{\pi})^{2}}$.
Using the values of the various constants, $m_{\eta}$ turns out to be
574.374 MeV, which is within 4.9 $\%$ of the experimental mass,
547.862 MeV PDG2020 . Moreover, using the Gell-Mann-Okubo mass formula in the octet
approximation, the authors of Ref. Burakovsky1997 calculated the vacuum mass of the $\eta$-meson as 567
MeV, within 3.6 $\%$ of the physical mass. The vacuum mass of the $\eta$-meson
is thus model dependent Burakovsky1997 , but in the current scenario the in-medium mass-shift of
the $\eta$-meson is nearly the same for both masses, so this
dependence can be neglected.
3. ${\cal L}_{d_{1}}^{BM}+{\cal L}_{d_{2}}^{BM}$, the $d$ terms:
The last two terms in Eq. (22) are additional range terms, which arise
from the baryon-meson interaction Lagrangian of the chiral model
Mishra2004a ; Mishra2006 and are given as
${\cal L}_{d_{1}}^{BM}=\frac{d_{1}}{2}Tr(u_{\mu}u^{\mu})Tr(\bar{B}B),$ (30)
and,
${\cal L}_{d_{2}}^{BM}=d_{2}Tr(\bar{B}u_{\mu}u^{\mu}B).$ (31)
In above, $B$ denotes the baryon matrix, given as
$B=\frac{1}{\sqrt{2}}b^{a}\lambda_{a}=\left(\begin{array}[]{ccc}\frac{\Sigma^{0}}{\sqrt{2}}+\frac{\Lambda^{0}}{\sqrt{6}}&\Sigma^{+}&p\\\
\Sigma^{-}&-\frac{\Sigma^{0}}{\sqrt{2}}+\frac{\Lambda^{0}}{\sqrt{6}}&n\\\
\Xi^{-}&\Xi^{0}&-2\frac{\Lambda^{0}}{\sqrt{6}}\end{array}\right).$ (32)
The explicit forms of the above three terms are inserted into Eq. (22), and the
interaction Lagrangian becomes
$\displaystyle\mathcal{L_{\eta}}$ $\displaystyle=$
$\displaystyle\left(\frac{1}{2}-\frac{\sigma^{\prime}+4\zeta^{\prime}(2f_{K}-f_{\pi})}{\sqrt{2}f^{2}}\right)\partial^{\mu}\eta\partial_{\mu}\eta$
(33) $\displaystyle-$
$\displaystyle\frac{1}{2}\left(m_{\eta}^{2}-\frac{(\sqrt{2}\sigma^{\prime}-4\zeta^{\prime})m^{2}_{\pi}f_{\pi}+8\zeta^{\prime}m^{2}_{K}f_{K}}{\sqrt{2}f^{2}}\right)\eta^{2}$
$\displaystyle+\frac{d^{\prime}}{f^{2}}\left(\frac{\rho^{s}_{p}+\rho^{s}_{n}}{4}\right)\partial^{\mu}\eta\partial_{\mu}\eta.$
In the above, the fields $\sigma^{\prime}(=\sigma-\sigma_{0})$,
$\zeta^{\prime}(=\zeta-\zeta_{0})$ and $\delta^{\prime}(=\delta-\delta_{0})$
are the deviations of the expectation values of the scalar fields from their
vacuum expectation values, the constant $f$ equals
$\sqrt{f_{\pi}^{2}+2(2f_{K}-f_{\pi})^{2}}$, and the parameter
$d^{\prime}$=$3d_{1}+d_{2}$.
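With $f$ in hand, the vacuum $\eta$ mass quoted above can be checked numerically. The short sketch below (not the authors' code) uses the equivalent form $m_{\eta}^{2}f^{2}=3m_{\pi}^{2}f_{\pi}^{2}+8f_{K}^{2}m_{K}^{2}-4f_{K}f_{\pi}(m_{\pi}^{2}+m_{K}^{2})$ with the Table 1 inputs:

```python
import math

# Numerical check of the vacuum eta mass (Table 1 inputs, in MeV).
f_pi, f_K, m_pi, m_K = 93.29, 122.14, 139.0, 498.0

f = math.sqrt(f_pi**2 + 2.0 * (2.0 * f_K - f_pi)**2)
m_eta = math.sqrt(3.0 * m_pi**2 * f_pi**2 + 8.0 * f_K**2 * m_K**2
                  - 4.0 * f_K * f_pi * (m_pi**2 + m_K**2)) / f
print(round(m_eta, 1))   # ~574.5 MeV, matching the quoted 574.374 MeV to ~0.02%
```

The tiny residual difference comes from the rounded values of the constants.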
At the mean-field level, the equation of motion for the $\eta$ meson field is
simplified to
$\displaystyle\partial^{\mu}\partial_{\mu}\eta+\left(m_{\eta}^{2}-\frac{(\sqrt{2}\sigma^{\prime}-4\zeta^{\prime})m^{2}_{\pi}f_{\pi}+8\zeta^{\prime}m^{2}_{K}f_{K}}{\sqrt{2}f^{2}}\right)\eta$
$\displaystyle+\frac{2}{f^{2}}\left(d^{\prime}\,\frac{\rho^{s}_{p}+\rho^{s}_{n}}{4}-\frac{\sigma^{\prime}+4\zeta^{\prime}(2f_{K}-f_{\pi})}{\sqrt{2}}\right)\partial^{\mu}\partial_{\mu}\eta=0.$
(34)
Furthermore, the dispersion relation for the $\eta$ meson field is obtained by
a Fourier transformation of the above equation,
$-\omega^{2}+{\vec{k}}^{2}+m_{\eta}^{2}-\Pi^{*}(\omega,|\vec{k}|)=0,$ (35)
where $\Pi^{*}$ symbolizes the in-medium self-energy of the $\eta$ meson, given
explicitly as
$\displaystyle\Pi^{*}(\omega,|\vec{k}|)$ $\displaystyle=$
$\displaystyle\frac{(\sqrt{2}\sigma^{\prime}-4\zeta^{\prime})m^{2}_{\pi}f_{\pi}+8\zeta^{\prime}m^{2}_{K}f_{K}}{\sqrt{2}f^{2}}+\frac{2d^{\prime}}{f^{2}}\left(\frac{\rho^{s}_{p}+\rho^{s}_{n}}{4}\right)(\omega^{2}-{\vec{k}}^{2})$
(36) $\displaystyle-$
$\displaystyle\frac{2}{f^{2}}\left(\frac{\sigma^{\prime}+4\zeta^{\prime}(2f_{K}-f_{\pi})}{\sqrt{2}}\right)(\omega^{2}-{\vec{k}}^{2}).$
In asymmetric nuclear matter, the in-medium mass of the $\eta$ meson is
evaluated by solving Eq. (35) under the condition
$m_{\eta}^{*}=\omega(|\vec{k}|$=0$)$. The parameter $d^{\prime}$ in the
expression for the self-energy is estimated from the empirical value of the
scattering length $a^{\eta N}$ of the $\eta$ meson Zhong2006 , whose expression is
given as
$\displaystyle d^{\prime}$ $\displaystyle=$
$\displaystyle\frac{f^{2}}{2\pi\left(1+\frac{m_{\eta}}{M_{N}}\right)}\frac{a^{\eta
N}}{m^{2}_{\eta}}+\frac{\sqrt{2}g_{\sigma
N}}{m^{2}_{\sigma}}-\frac{4\sqrt{2}(2f_{K}-f_{\pi})g_{\zeta
N}}{m^{2}_{\zeta}}$ (37) $\displaystyle-$
$\displaystyle\left(\frac{\sqrt{2}g_{\sigma
N}}{m^{2}_{\sigma}}-\frac{4g_{\zeta
N}}{m^{2}_{\zeta}}\right)\frac{m^{2}_{\pi}f_{\pi}}{\sqrt{2}m^{2}_{\eta}}-\tau_{3}\frac{4\sqrt{2}g_{\delta
N}m^{2}_{K}f_{K}}{m^{2}_{\delta}m^{2}_{\eta}},$
where $m_{\sigma}$, $m_{\zeta}$, $m_{\delta}$ and $M_{N}$ denote the vacuum
masses of the fields $\sigma$, $\zeta$, $\delta$ and of the nucleons, respectively;
their values are given in Table 1. Using the in-medium mass of the $\eta$
meson, the optical potential of the $\eta$-meson at finite momentum Mishra2008
; Mishra2009 in the nuclear matter is given by
$U^{*}_{\eta}(\omega,\textbf{k})=\omega(\textbf{k})-\sqrt{\textbf{k}^{2}+m^{2}_{\eta}},$
(38)
and for zero momentum, the relation becomes
$U^{*}_{\eta}=\Delta m_{\eta}^{*}={m_{\eta}^{*}}-m_{\eta}.$ (39)
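At $|\vec{k}|=0$, Eqs. (35) and (36) reduce to $\omega^{2}(1+B)=m_{\eta}^{2}-A$, where $A$ and $B$ collect the constant and $\omega^{2}$ pieces of the self-energy. The sketch below solves this relation; the medium inputs (`sigma_p`, `zeta_p`, `rho_s`, `d_prime`) passed in are illustrative placeholders, not the paper's computed medium values:

```python
import math

# Solve  omega^2 (1 + B) = m_eta^2 - A  at |k| = 0, with A and B read off
# the self-energy of Eq. (36). Medium inputs are placeholders.
f_pi, f_K, m_pi, m_K = 93.29, 122.14, 139.0, 498.0   # MeV (Table 1)
m_eta = 574.374                                       # MeV
f2 = f_pi**2 + 2.0 * (2.0 * f_K - f_pi)**2            # f^2

def eta_effective_mass(sigma_p, zeta_p, rho_s, d_prime):
    """In-medium eta mass m_eta* from Eqs. (35)-(36) at zero momentum."""
    A = ((math.sqrt(2.0) * sigma_p - 4.0 * zeta_p) * m_pi**2 * f_pi
         + 8.0 * zeta_p * m_K**2 * f_K) / (math.sqrt(2.0) * f2)
    B = (2.0 * d_prime / f2) * rho_s / 4.0 \
        - (2.0 / f2) * (sigma_p + 4.0 * zeta_p * (2.0 * f_K - f_pi)) / math.sqrt(2.0)
    return math.sqrt((m_eta**2 - A) / (1.0 + B))

m_star = eta_effective_mass(20.0, 5.0, 1.0e6, 1.0e-4)  # placeholder medium inputs
U_eta = m_star - m_eta                                  # optical potential, Eq. (39)
```

In the vacuum limit (all fluctuations and densities set to zero), the function returns $m_{\eta}$ exactly, which serves as a basic consistency check.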
#### II.1.2 FUSION OF CHIRAL PERTURBATION THEORY AND CHIRAL SU(3) MODEL
Chiral perturbation theory (ChPT) is a phenomenological approach to
the low-energy dynamics of QCD, with an effective field theory Lagrangian
based on the underlying chiral symmetry of quantum chromodynamics Zhong2006 .
In this framework, the SU(3)${}_{\mathrm{L}}\times$SU(3)${}_{\mathrm{R}}$ Lagrangian describing the
interactions of pseudoscalar mesons and baryons is given as
$\displaystyle{\mathcal{L}_{\text{ChPT}}}={\mathcal{L}_{P}}+{\mathcal{L}_{PB}},$
(40)
where the pseudoscalar mesonic term, $\mathcal{L}_{P}$ is taken up to second
chiral order Zhong2006 ; Kaplan1986 and is given by Zhong2006 ,
$\displaystyle{\mathcal{L}_{P}}$ $\displaystyle=$
$\displaystyle\frac{1}{4}f_{\pi}^{2}\textrm{Tr}\partial^{\mu}\Sigma\partial_{\mu}\Sigma^{\dagger}+\frac{1}{2}f_{\pi}^{2}B_{0}\left\\{\mbox{Tr}M_{q}(\Sigma-1)+\mathrm{H.c.}\right\\}.$
(41)
In the above equation, $\Sigma=\xi^{2}=\exp{(i\sqrt{2}P/f_{\pi})}$, the symbol
$B_{0}$ represents the connection with the order parameter of spontaneously broken
chiral symmetry, and $M_{q}=\mbox{diag}\\{m_{q},m_{q},m_{s}\\}$ is the
current quark mass matrix. The second term in Eq. (40), $\mathcal{L}_{PB}$,
contains the leading-order and next-to-leading-order baryon-meson interactions
Kaplan1986 . The off-shell terms are developed using heavy-baryon chiral
theory Jenkins1991 , which has additional properties such
as quantum corrections and Lorentz invariance. The properties of the nuclear
system have been described successfully using the off-shell Lagrangian, and
the higher-order terms of this next-to-leading-order Lagrangian have also been
studied Park1993 . In the present article, we limit our calculations to
the small momentum scale $Q^{2}$ without loop contributions (for s-wave
$\eta{N}$ scattering), because the higher-order corrections are suppressed
Zhong2006 .
By using the heavy-baryon approximation and expanding Eq. (40) up to
order $1/f_{\pi}^{2}$, we obtain the $\eta N$ Lagrangian as
$\displaystyle\mathcal{L_{\eta N}}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\partial^{\mu}\eta\partial_{\mu}\eta-\frac{1}{2}\left(m{{}^{\prime}}_{\eta}^{2}-\frac{\Sigma_{\eta\mathrm{N}}}{f_{\pi}^{2}}\bar{\Psi}_{\mathrm{N}}\Psi_{\mathrm{N}}\right)\eta^{2}+\frac{1}{2}\frac{\kappa}{f_{\pi}^{2}}\bar{\Psi}_{\mathrm{N}}\Psi_{\mathrm{N}}\partial^{\mu}\eta\partial_{\mu}\eta.$
(42)
In the above equation, $m^{\prime}_{\eta}$ represents the mass of the
$\eta$-meson calculated in ChPT and is evaluated through the relation
${m^{\prime}_{\eta}}^{2}=\frac{2}{3}B_{0}(m_{q}+2m_{s})$. In this mass
relation, $m_{q(s)}$ denotes the mass of the light (strange) quarks
Burakovsky1997 . We use the same value of the $\eta$-meson vacuum mass,
$m^{\prime}_{\eta}$=$m_{\eta}$= 574.374 MeV, in the ChPT+chiral model
calculations for consistency with the chiral SU(3) model. The $\eta N$ sigma
term $\Sigma_{\eta\mathrm{N}}$ and the $\kappa$ term are determined by the
relations
$\displaystyle\Sigma_{\eta\mathrm{N}}=-\frac{2}{3}[a_{1}m_{q}+4a_{2}m_{s}+2a_{3}(m_{q}+2m_{s})],$
(43)
and
$\displaystyle\kappa=4\pi
f_{\pi}^{2}\left(\frac{1}{m_{\eta}^{2}}+\frac{1}{m^{\prime}_{\eta}M_{\mathrm{N}}}\right)a^{\eta\mathrm{N}}-\frac{\Sigma_{\eta\mathrm{N}}}{m_{\eta}^{2}},$
(44)
respectively. The $a_{i}$ terms in Eq. (43) correspond to chiral symmetry
breaking effects and are fitted from the parameter
$\Sigma_{\mathrm{KN}}=380\pm 100$ MeV, where $\pm 100$ MeV reflects the
uncertainty Lyubovitskij2001 ; Dong1996 ; Hatsuda1994 ; Brown1994 ; Georgi1984
; Politzer1991 ; Lee1995 ; Zhong2006 . The parameter $\kappa$ is estimated
from the $\eta$N scattering length Zhong2006 using the range of
$a^{\eta\mathrm{N}}$ values 0.91-1.14 fm, which is taken from empirical
investigations Green2005 ; Renard2002 ; Arndt2005 ; Green1999 ; Zhong2006 .
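As a quick numeric orientation, Eq. (44) can be evaluated directly. In the sketch below, $f_{\pi}$ and $M_{\mathrm{N}}$ are standard values assumed by us (they are not quoted in this excerpt), while $m_{\eta}$ and $\Sigma_{\eta N}$ take the values used in the text; the scattering length is converted from fm to natural units with $\hbar c$.

```python
# Illustrative evaluation of Eq. (44): the kappa parameter from the eta-N
# scattering length.  f_pi and M_N are standard values assumed here; m_eta
# and Sigma_etaN are the values used in the text.
import math

HBARC = 197.327          # MeV*fm, converts lengths to natural units
F_PI = 93.0              # pion decay constant, MeV (assumed)
M_N = 939.0              # nucleon mass, MeV (assumed)
M_ETA = 574.374          # eta vacuum mass, MeV (value used in the text)
SIGMA_ETA_N = 280.0      # eta-N sigma term, MeV (value used in the text)

def kappa(a_eta_n_fm):
    """kappa (in MeV^-1) from Eq. (44), with a^{eta N} given in fm."""
    a_nat = a_eta_n_fm / HBARC  # scattering length in MeV^-1
    return (4.0 * math.pi * F_PI**2
            * (1.0 / M_ETA**2 + 1.0 / (M_ETA * M_N)) * a_nat
            - SIGMA_ETA_N / M_ETA**2)

for a in (0.91, 1.02, 1.14):
    print(f"a = {a:.2f} fm  ->  kappa = {kappa(a):.6f} MeV^-1")
```

For the empirical range of $a^{\eta N}$, $\kappa$ stays positive and grows with the scattering length, consistent with its attractive role discussed below.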
The equation of motion for the $\eta N$ interaction in the unified approach
of the chiral SU(3) model and ChPT can be written as
$\displaystyle\left(\partial_{\mu}\partial^{\mu}+m_{\eta}^{2}-\frac{\Sigma_{\eta
N}}{2f_{\pi}^{2}}\langle\bar{\Psi}_{\mathrm{N}}\Psi_{\mathrm{N}}\rangle+\frac{\kappa}{2f_{\pi}^{2}}\langle\bar{\Psi}_{\mathrm{N}}\Psi_{\mathrm{N}}\rangle\partial_{\mu}\partial^{\mu}\right)\eta=0,$
(45)
where
$\langle\bar{\Psi}_{\mathrm{N}}\Psi_{\mathrm{N}}\rangle\equiv\rho^{s}_{N}$=$\left(\rho^{s}_{p}+\rho^{s}_{n}\right)$
is the magnetic-field-dependent scalar density of nucleons calculated within
the chiral SU(3) model. The plane-wave decomposition of Eq. (45) gives
$\displaystyle-\omega^{2}+\vec{\textbf{k}}^{2}+m_{\eta}^{2}-\frac{\Sigma_{\eta\mathrm{N}}}{2f_{\pi}^{2}}\rho^{s}_{N}+\frac{\kappa}{2f_{\pi}^{2}}\rho^{s}_{N}\left(-\omega^{2}+\vec{\textbf{k}}^{2}\right)=0.$
(46)
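For clarity, Eq. (46) can be rearranged by grouping the $(-\omega^{2}+\vec{\textbf{k}}^{2})$ terms (a purely algebraic restatement):

```latex
% Eq. (46) with the (-\omega^2 + k^2) terms collected:
\left(1+\frac{\kappa}{2f_{\pi}^{2}}\rho^{s}_{N}\right)
\left(-\omega^{2}+\vec{\textbf{k}}^{2}\right)
+m_{\eta}^{2}-\frac{\Sigma_{\eta\mathrm{N}}}{2f_{\pi}^{2}}\rho^{s}_{N}=0
\quad\Longrightarrow\quad
\omega^{2}=\vec{\textbf{k}}^{2}
+\frac{m_{\eta}^{2}-\frac{\Sigma_{\eta\mathrm{N}}}{2f_{\pi}^{2}}\rho^{s}_{N}}
{1+\frac{\kappa}{2f_{\pi}^{2}}\rho^{s}_{N}}.
```

The ratio multiplying the constant term is exactly the squared effective mass of Eq. (48).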
By solving the above quadratic equation, we get
$\omega=\sqrt{{m_{\eta}^{*}}^{2}+\vec{\textbf{k}}^{2}},$ (47)
and the explicit expression for the magnetic-field-modified mass of the
$\eta$ meson, $m_{\eta}^{*}$, is given by
$\displaystyle
m_{\eta}^{*}=\sqrt{\left(m_{\eta}^{2}-\frac{\Sigma_{\eta\mathrm{N}}}{2f_{\pi}^{2}}\rho^{s}_{N}\right)\Big{/}\left(1+\frac{\kappa}{2f_{\pi}^{2}}\rho^{s}_{N}\right)}.$
(48)
The last two terms of Eq. (46) give the $\eta$-meson self-energy
$\displaystyle\Pi^{*}(\omega,\vec{\textbf{k}})=\Big{(}-\frac{\Sigma_{\eta\mathrm{N}}}{2f_{\pi}^{2}}+\frac{\kappa}{2f_{\pi}^{2}}(-\omega^{2}+\vec{\textbf{k}}^{2})\Big{)}\rho^{s}_{N},$
(49)
where $\omega$ is the $\eta$-meson single-particle energy and
$\vec{\textbf{k}}$ is its momentum.
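Equations (47) and (48) are straightforward to evaluate numerically. The sketch below is illustrative only: $f_{\pi}$, $\kappa$, and the scalar-density value are assumptions of ours (the model's actual scalar densities depend on temperature, asymmetry, and magnetic field, as discussed in the next section), while $m_{\eta}$ and $\Sigma_{\eta N}$ take the values quoted in the text.

```python
# Illustrative evaluation of Eqs. (47)-(48): in-medium eta mass and
# dispersion relation.  f_pi, kappa, and the scalar density are assumed
# inputs, not values computed in the chiral SU(3) model.
import math

HBARC = 197.327            # MeV*fm
F_PI = 93.0                # MeV (assumed)
M_ETA = 574.374            # MeV, vacuum eta mass used in the text
SIGMA_ETA_N = 280.0        # MeV, eta-N sigma term used in the text
KAPPA = 0.0019             # MeV^-1, order of magnitude of Eq. (44) (assumed)

def m_eta_star(rho_s_fm3):
    """Effective eta mass, Eq. (48); scalar density rho_s given in fm^-3."""
    rho_s = rho_s_fm3 * HBARC**3          # convert fm^-3 to MeV^3
    num = M_ETA**2 - SIGMA_ETA_N * rho_s / (2.0 * F_PI**2)
    den = 1.0 + KAPPA * rho_s / (2.0 * F_PI**2)
    return math.sqrt(num / den)

def omega(k_mev, rho_s_fm3):
    """Single-particle energy, Eq. (47), for momentum k in MeV."""
    return math.sqrt(m_eta_star(rho_s_fm3)**2 + k_mev**2)

rho0 = 0.15  # fm^-3, illustrative scalar density near saturation (assumed)
print(f"m_eta* = {m_eta_star(rho0):.1f} MeV, "
      f"mass shift = {m_eta_star(rho0) - M_ETA:.1f} MeV")
```

With these assumed inputs the numerator shrinks and the denominator grows as $\rho^{s}_{N}$ increases, so the effective mass drops below the vacuum mass, as the results below also show.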
## III Results and Discussions
In this section, we discuss the magnetic-field-modified optical potential of
the $\eta$ meson evaluated in two approaches: (i) the chiral SU(3) model in
Section III.1 and (ii) ChPT + chiral SU(3) model in Section III.2. In both
methodologies, we take the scattering length $a^{\eta N}$ in the range
0.91-1.14 fm. We start by discussing the in-medium behavior of the nucleon
scalar densities under the influence of a strong magnetic field for different
values of the nuclear density, isospin asymmetry, and temperature.
In Figure 1, at nuclear saturation density, we illustrate the scalar
densities of the neutron and proton as functions of temperature for zero and
non-zero values of the magnetic field. In the left (right) column of the
figure, we present the scalar densities for symmetric (asymmetric) nuclear
matter. For symmetric nuclear matter and zero magnetic field, the neutron and
proton scalar densities behave identically with temperature: they decrease
slowly and linearly up to $T\approx$150 MeV and then start increasing at
higher temperatures. These modifications reflect the interplay between the
contributions from higher-momentum states and the thermal distribution
functions in the scalar density expressions [see Eqs. (16) and (14)]. With
increasing magnetic field, the proton and neutron scalar densities behave
differently: at a given temperature, the proton scalar density increases
significantly whereas the neutron scalar density decreases slightly. The
additional effects in the proton scalar density arise from its charge; the
positively charged proton couples to the magnetic field and experiences
Landau quantization as well as contributions from its anomalous magnetic
moment [see Eq. (16)], whereas the uncharged neutron experiences only the
anomalous-magnetic-moment contribution [see Eq. (14)]. Moreover, the
transition temperature decreases with increasing magnetic field, which
supports the restoration of chiral symmetry Kumar2020 . On the other hand,
for asymmetric nuclear matter [sub-plots (b) and (d)], we find unequal proton
and neutron scalar densities even at zero magnetic field. In the chiral SU(3)
model, the scalar densities of protons and neutrons are calculated through
Eq. (16) and Eq. (14), respectively, and these equations include the effect
of the in-medium scalar and vector fields Papazoglou1999 . In asymmetric
nuclear matter the isoscalar (isovector) $\delta$ ($\rho$) field gives a
non-zero contribution, which leads to unequal proton and neutron scalar
densities Kumar2020c . The neutron scalar density is modified significantly
in the asymmetric nuclear medium and decreases with increasing temperature,
whereas the proton scalar density remains zero up to $T\approx$ 90 MeV and
then increases rapidly. Naively, the proton scalar density should vanish for
$I$=0.5, but at higher temperatures, despite $\rho^{v}_{p}$=0, the proton
condensate ($\bar{p}p$) is still populated in the nuclear matter. The
inclusion of the magnetic field changes the proton scalar density
significantly, whereas the neutron scalar density shows only a small decrease
with increasing magnetic field.
In Figure 2, we plot the scalar densities for the same values of the medium
parameters but at $\rho_{N}$=4$\rho_{0}$. As before, at $eB$=0 and $I$=0, the
proton and neutron scalar densities behave identically. Moving from zero to
non-zero magnetic field strength, at a given temperature the proton scalar
density increases appreciably whereas the neutron scalar density decreases
slightly; this is again due to the additional Landau energy levels of the
protons induced by the magnetic field. Note that the effect of the magnetic
field is more pronounced in the high-density regime. Furthermore, at the
highest value of the medium asymmetry, we observe that the proton scalar
density remains zero up to $T\approx$50 MeV and then increases non-linearly
with increasing magnetic field. In contrast, the neutron scalar density is
modified appreciably in asymmetric matter: it decreases with increasing
temperature and magnetic field. In symmetric nuclear matter, the scalar
densities observed at zero magnetic field agree with the results of the
relativistic mean-field model Zhong2006 ; Song2008 .
Figure 1: (Color online) The in-medium scalar density of nucleons at
$\rho_{0}$. Figure 2: (Color online) The in-medium scalar density of nucleons
at 4$\rho_{0}$.
### III.1 Impact of Magnetic Field on the $\eta$ Mesons in Chiral Model
In this section, we present the results for the in-medium mass of the
$\eta$-meson calculated in the chiral SU(3) model under the influence of an
external magnetic field. In Figures 3 and 4, we illustrate the in-medium
$\eta$ mass as a function of temperature for different values of the isospin
asymmetry, magnetic field, and scattering length, at $\rho_{N}$=$\rho_{0}$
and 4$\rho_{0}$, respectively. In Figure 3, for any value of $I$, $eB$, and
$a^{\eta N}$, the in-medium mass gradually increases with temperature up to a
certain value and then starts decreasing. This behavior mirrors, in inverted
form, the in-medium scalar densities plotted in Figure 1, since the
$\eta$-meson mass expression [see Eq. (48)] depends inversely on the sum of
the nucleon scalar densities. In symmetric nuclear matter, the magnetic field
leads to a more attractive contribution to the in-medium $\eta$ mass at a
given temperature and scattering length. With increasing magnetic field, the
transition point (i.e., the temperature at which the in-medium mass starts
decreasing as a function of temperature) moves toward lower temperatures. As
discussed earlier, the medium-modified $\eta$-meson mass depends inversely on
the sum of the nucleon scalar densities and therefore behaves oppositely to
them. In asymmetric nuclear matter, the in-medium mass increases slowly at
zero magnetic field, just as in symmetric matter; this is because the
in-medium mass depends only on the sum of the scalar densities, with no
additional parameter dependence. Further, at $I\neq$0, we observe little
change in the $\eta$ mass with the magnetic field in the lower-temperature
region, but a substantial change in the higher-temperature region. This is
because in highly asymmetric matter the proton scalar density contributes
negligibly (substantially) at lower (higher) temperatures. Note that, owing
to its zero charge, the $\eta$-meson does not couple directly to the magnetic
field and therefore shows no additional contributions from Landau energy
levels, in contrast to the charged $D$ mesons Kumar2020 ; Reddy2018 ;
Kumar2020a . From Figure 3, we can also read off the effect of the scattering
length: when $a^{\eta N}$ is changed from 0.91 to 1.14 fm, the effective mass
decreases significantly at a given magnetic field and temperature. This
follows from the direct relationship of the parameter $d^{\prime}$ with the
scattering length in Eq. (37); the $d^{\prime}$ parameter gives an attractive
contribution to the in-medium $\eta$ mass through the equation of motion
[Eq. (35)] and the self-energy [Eq. (36)].
In Figure 4, we plot the in-medium mass of the $\eta$-meson for the same
values of the medium parameters but at $\rho_{N}$=4$\rho_{0}$. In the left
panel, at $eB$=0 and $I$=0, the in-medium mass follows a similar trend with
temperature as at $\rho$=$\rho_{0}$, but at this high nuclear density we
observe a significantly larger drop in the $\eta$-meson mass. The drop in the
effective mass increases with increasing magnetic field and scattering
length. Moving from symmetric to asymmetric nuclear matter, we observe a
cross-over behavior of the in-medium mass as a function of temperature at a
given scattering length. The reason is similar to that discussed for the
nuclear saturation density case; the difference is that in the high-density
regime the proton scalar density is populated slightly (but more than in the
$\rho$=$\rho_{0}$ case) at lower temperatures and substantially at higher
temperatures. Also, the effect of the magnetic field is more pronounced in
the high-density regime.
Furthermore, for a better understanding of the in-medium $\eta$-meson mass,
in Figure 5 we plot the individual terms of the $\eta$-meson self-energy. The
self-energy expression [Eq. (36)] has three interaction terms: (i) the first
range term, (ii) the mass term, and (iii) the $d^{\prime}$ term. In this
figure, at nuclear saturation density, we show the contributions of the
individual terms as functions of temperature and asymmetry for $a^{\eta
N}$=1.02 fm. At zero magnetic field and asymmetry, the first range term gives
a significant repulsive contribution to the in-medium mass, whereas the mass
and $d^{\prime}$ terms give small and significant attractive contributions,
respectively. For non-zero magnetic field (asymmetry), the modification of
the $d^{\prime}$ term becomes larger (smaller). This behavior is due to the
presence of the nucleon scalar density in the second term of the self-energy
expression [Eq. (36)]. The dependence on the $d^{\prime}$ term emphasizes the
importance of the scattering length $a^{\eta N}$ in the $\eta$-nucleon
interactions.
The optical potential at zero and finite momentum can be used to study
$\eta$-mesic nuclei Jenkins1991 ; Kumar2020c ; Zhong2006 and the momentum
dependence of $\eta$-meson observables Berg1994 ; Chen2017 ; David2018 . In
Figure 6, for symmetric nuclear matter, we plot the optical potential as a
function of the momentum $\lvert\textbf{k}\rvert$ for various values of the
magnetic field and density at $a^{\eta N}$=0.91 fm. At
$\rho_{N}$=$\rho_{0}$, the magnitude of the optical potential decreases with
increasing momentum. With increasing magnetic field (temperature), the drop
in the optical potential becomes larger (smaller). The behavior of the
in-medium optical potential reflects the interplay between the in-medium mass
and the momentum, which can be understood from the expression given by
Eq. (38): at higher values of the momentum $\lvert\textbf{k}\rvert$, the
contribution of the effective mass to the optical potential is suppressed by
the increase in momentum states. Furthermore, in the right panel, i.e., the
high-density regime, we find a deeper optical potential, which becomes
shallower as the momentum increases. In Figures 7 and 8, we find a similar
trend of the optical potential with the $\eta$ momentum; in these figures,
the optical potential becomes deeper with increasing scattering length. The
behavior of the optical potential with the scattering length and the other
medium parameters can be understood in terms of the in-medium mass. For a
clearer picture, we list the values of the in-medium optical potential in the
chiral SU(3) model at $\lvert\textbf{k}\rvert$=0 in Table 2.
| $a^{\eta N}$ (fm) | $I$=0, $T$=0, $eB$=0 | $I$=0, $T$=0, $eB$=5$m^{2}_{\pi}$ | $I$=0, $T$=100, $eB$=0 | $I$=0, $T$=100, $eB$=5$m^{2}_{\pi}$ | $I$=0.5, $T$=0, $eB$=0 | $I$=0.5, $T$=0, $eB$=5$m^{2}_{\pi}$ | $I$=0.5, $T$=100, $eB$=0 | $I$=0.5, $T$=100, $eB$=5$m^{2}_{\pi}$ |
|---|---|---|---|---|---|---|---|---|
| 0.91 | -46.60 | -52 | -38.24 | -46.36 | -45.17 | -42.78 | -38.21 | -38.23 |
| 1.02 | -55 | -61 | -45.66 | -54.59 | -53.42 | -50.69 | -45.62 | -45.64 |
| 1.14 | -63.75 | -70.30 | -53.41 | -63.14 | -62 | -59 | -53.36 | -53.38 |
Table 2: In-medium mass-shift $\Delta m^{*}_{\eta}$ (MeV) of the
$\eta$-meson with and without the effect of the magnetic field, at
$\rho=\rho_{0}$ and for different parameters, calculated in the chiral SU(3)
model.
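The momentum dependence described for Figures 6-8 can be sketched numerically. Equation (38) is not reproduced in this excerpt, so the sketch below assumes the common definition $U(k)=\omega(k)-\sqrt{k^{2}+m_{\eta}^{2}}$, which reduces to the mass-shift at $k$=0 (consistent with Table 2); the in-medium mass used is an illustrative value of ours, not a model output.

```python
# Sketch of the eta optical potential's momentum dependence, assuming the
# common definition U(k) = omega(k) - sqrt(k^2 + m_eta^2), which equals the
# mass shift at k = 0 (cf. Table 2).  m_eta is the text's vacuum value; the
# in-medium mass below is an assumed illustrative number, not a model output.
import math

M_ETA = 574.374        # MeV, vacuum eta mass used in the text
M_ETA_STAR = 528.0     # MeV, illustrative in-medium mass (assumed)

def optical_potential(k_mev, m_star=M_ETA_STAR):
    """U(k) in MeV for eta momentum k in MeV."""
    return math.sqrt(m_star**2 + k_mev**2) - math.sqrt(M_ETA**2 + k_mev**2)

for k in (0.0, 250.0, 500.0):
    print(f"|k| = {k:5.0f} MeV  ->  U = {optical_potential(k):7.2f} MeV")
```

With $m^{*}_{\eta}<m_{\eta}$, the magnitude of $U$ shrinks monotonically with $\lvert\textbf{k}\rvert$, matching the trend described for Figures 6-8.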
### III.2 Impact of Magnetic Field on the $\eta$ Mesons in ChPT+Chiral Model
In this section, we evaluate the in-medium mass of the $\eta$-meson using the
joint approach of the chiral SU(3) model and chiral perturbation theory, and
compare it with the results calculated in the chiral SU(3) model alone. As
discussed in Section II.1.2, the $\eta N$ equation of motion is derived from
the ChPT $\eta N$ Lagrangian density. The magnetic-field-dependent scalar
density of nucleons entering the ChPT self-energy [Eq. (49)] is taken from
the chiral SU(3) model discussed in Section II.1. In the present work, we
take the value of the parameter $\Sigma_{\eta N}$ as 280 MeV, neglecting its
uncertainties Kumar2020c . We will see later that the contribution of the
$\Sigma_{\eta N}$ term is much smaller than that of the $\kappa$ term.
In Figures 9 and 10, we plot the mass ratio $m^{*}_{\eta}/m_{\eta}$ as a
function of temperature, scattering length, and isospin asymmetry at
$\rho_{N}$=$\rho_{0}$ and 4$\rho_{0}$, respectively. In these figures, we
also compare the in-medium mass evaluated in the two approaches: (i) ChPT +
chiral model and (ii) the chiral model alone. In Figure 9, at nuclear
saturation density, the joint approach gives a significantly larger
decrement of the in-medium $\eta$-meson mass. The medium-modified mass shows
a similar behavior with respect to the magnetic field, isospin asymmetry, and
scattering length as in the chiral SU(3) model alone. The larger decrement in
the joint approach arises because the ChPT self-energy contains no repulsive
term: the net contribution comes from the $\Sigma_{\eta N}$ and $\kappa$
terms, both attractive in nature. In Figure 10, at the higher nuclear
density, the trend of the mass ratio with temperature remains the same, but
the mass-shift becomes more negative; the ratio shows similar behavior with
respect to the other medium parameters and the scattering length. For a
clearer understanding, in Figure 11, at $\rho_{N}$=$\rho_{0}$ and $a^{\eta
N}$=1.02 fm, we illustrate the in-medium behavior of the individual terms of
the ChPT self-energy in magnetized asymmetric nuclear matter, which
contribute to the in-medium $\eta$-meson mass through Eq. (48). From this
figure, we observe that the contribution of the $\Sigma_{\eta N}$ term is
negative but much smaller than that of the $\kappa$ term. The $\kappa$ term
gives a significant attractive contribution to the in-medium mass because, in
the mass relation of Eq. (48), the nucleon scalar density enters negatively
in the numerator and positively in the denominator; due to this inverse
relationship, the effective mass decreases as the scalar density increases.
The values of the in-medium optical potential at zero momentum calculated in
the ChPT+chiral model are given in Table 3.
To the best of our knowledge, no previous work has studied the effect of a
magnetic field on the in-medium mass of $\eta$-mesons. The current results at
zero magnetic field can be compared with the existing literature Zhong2006 ;
Waas1997 ; Tsushima1998 ; Song2008 . In our previous work at zero magnetic
field, we compared the results in detail and observed that the findings of
different papers agree for varying values of the scattering length
Kumar2020c .
| $a^{\eta N}$ (fm) | $I$=0, $T$=0, $eB$=0 | $I$=0, $T$=0, $eB$=5$m^{2}_{\pi}$ | $I$=0, $T$=100, $eB$=0 | $I$=0, $T$=100, $eB$=5$m^{2}_{\pi}$ | $I$=0.5, $T$=0, $eB$=0 | $I$=0.5, $T$=0, $eB$=5$m^{2}_{\pi}$ | $I$=0.5, $T$=100, $eB$=0 | $I$=0.5, $T$=100, $eB$=5$m^{2}_{\pi}$ |
|---|---|---|---|---|---|---|---|---|
| 0.91 | -107.57 | -115.95 | -93.79 | -105.70 | -105.25 | -102.11 | -93.72 | -93.70 |
| 1.02 | -116.83 | -125.69 | -102.21 | -114.89 | -114.35 | -110.22 | -102.11 | -102.17 |
| 1.14 | -126.36 | -135.64 | -110.96 | -124.32 | -123.75 | -119.42 | -110.86 | -110.93 |
Table 3: In-medium mass-shift $\Delta m^{*}_{\eta}$ (MeV) of the $\eta$-meson with and without the effect of the magnetic field, at $\rho=\rho_{0}$ and for different parameters, calculated in the ChPT+chiral SU(3) model.
Figure 3: (Color online) The in-medium $\eta$ meson mass in the chiral model at $\rho_{0}$.
Figure 4: (Color online) The in-medium $\eta$ meson mass in the chiral model at 4$\rho_{0}$.
Figure 5: (Color online) The different terms of the in-medium $\eta$ meson mass in the chiral model at $\rho_{0}$ and $a^{\eta N}$=1.02 fm.
Figure 6: (Color online) The in-medium $\eta$ meson optical potential in the chiral model at $a^{\eta N}$=0.91 fm and $I$=0.
Figure 7: (Color online) The in-medium $\eta$ meson optical potential in the chiral model at $a^{\eta N}$=1.02 fm and $I$=0.
Figure 8: (Color online) The in-medium $\eta$ meson optical potential in the chiral model at $a^{\eta N}$=1.14 fm and $I$=0.
Figure 9: (Color online) Comparison of the in-medium $\eta$ meson mass at $\rho_{0}$.
Figure 10: (Color online) Comparison of the in-medium $\eta$ meson mass at 4$\rho_{0}$.
Figure 11: (Color online) The different terms of the in-medium $\eta$ meson mass in the ChPT+chiral model at $\rho_{0}$ and $a^{\eta N}$=1.02 fm.
## IV SUMMARY
To summarize, we studied the effect of an external magnetic field on the
in-medium mass of the $\eta$-meson in hot asymmetric nuclear matter, using
two separate methodologies for the in-medium $\eta N$ interactions. In the
first approach, we computed the in-medium mass-shift of the $\eta$-meson in
the chiral SU(3) model and observed a decrease of the effective mass as a
function of the magnetic field and nuclear density, with substantial medium
effects in the regime of high magnetic field and density. In the second
approach, we used the combined method of chiral perturbation theory (ChPT)
and the chiral SU(3) model to compute the in-medium properties of the
$\eta$-meson, introducing the medium effects through the nucleon scalar
density calculated in the chiral SU(3) model. With the joint approach, we
found a substantially larger decrease of the $\eta$-meson mass with the
magnetic field and nuclear density than in the first approach. The effects of
isospin asymmetry and temperature were also incorporated and found to be
slightly repulsive. In both approaches, the negative mass-shift depends
directly on the scattering length $a^{\eta N}$. Furthermore, owing to the
zero charge of the $\eta$ meson, it does not experience Landau quantization,
and therefore no additional energy levels appear. The optical potential at
finite momentum can be used to study experimental properties such as the
momentum dependence David2018 ; Chen2017 ; Berg1994 and the $\eta$-meson
production rate Peng1987 ; Martinez1999 ; Agakishiev2013 in the magnetized
nuclear medium. The observed negative mass-shift can also be used to study
the possibility of $\eta N$ bound-state formation Jenkins1991 ; Zhong2006 ,
and the magnetic-field-dependent optical potential may be used in future
experiments to study the in-medium observables of $\eta$-mesons Rapp2010 ;
Vogt2007 .
## Acknowledgment
One of the authors (R.K.) sincerely acknowledges the support for this work
from the Ministry of Science and Human Resources Development (MHRD),
Government of India, via the National Institute of Technology Jalandhar.
## References
* (1) D. E. Kharzeev, L. D. McLerran and H. J. Warringa, Nucl. Phys. A 803, 227 (2008).
* (2) K. Fukushima, D. E. Kharzeev and H. J. Warringa, Phys. Rev. D 78, 074033 (2008).
* (3) V. Skokov, A. Illarionov and V. Toneev, Int. J. Mod. Phys. A 24, 5925 (2009).
* (4) S. Cho, K. Hattori, S. H. Lee, K. Morita, S. Ozaki, Phys. Rev. D 91, 045025 (2015).
* (5) D. Kharzeev, K. Landsteiner, A. Schmitt and H.-U. Yee, Strongly Interacting Matter in Magnetic Fields (Springer, 2013).
* (6) S. R. Parvathreddy, A. Jahan, N. Dhale, A. Mishra and J. Schaffner-Bielich, Phys. Rev. C 97, 065208 (2018).
* (7) S. Cho, K. Hattori, S, H. Lee, K. Morita and S. Ozaki, Phys. Rev. Lett. 113, 172301 (2014).
* (8) P. Gubler, K. Hattori, S. H. Lee, M. Oka, S. Ozaki, and K. Suzuki, Phys. Rev. D 93, 054026 (2016).
* (9) P. Papazoglou, D. Zschiesche, S. Schramm, J. Schaffner-Bielich, H. Stöcker, and W. Greiner, Phys. Rev. C 59, 411 (1999).
* (10) A. Mishra, A. Kumar, S. Sanyal and S. Schramm, Eur. Phys. J. A 41, 205 (2009).
* (11) K. Tuchin, Phys. Rev. C 83, 017901 (2011).
* (12) K. Tuchin, Phys. Rev. C 82, 034904 (2011).
* (13) K. Tuchin, Phys. Rev. C 88, 024911 (2013).
* (14) K. Marasinghe and K. Tuchin, Phys. Rev. C 84, 044908 (2011).
* (15) A. Das, S. S. Dave, P. S. Saumia, and A. M. Srivastava, Phys. Rev. C 96, 034902 (2017).
* (16) A. Vilenkin, Phys. Rev. D 22, 3080 (1980).
* (17) Y. Burnier, D. E. Kharzeev, J. Liao, and H. Yee, Phys. Rev. Lett. 107, 052303 (2011).
* (18) R. Rapp, D. Blaschke and P. Crochet, Prog. Part. Nucl. Phys. 65, 209 (2010).
* (19) R. Vogt, Ultra-relativistic Heavy-Ion Collisions (Elsevier, Amsterdam, 2007).
* (20) R. Kumar and A. Kumar, Eur. Phys. J C 79, 403 (2019).
* (21) R. Kumar, R. Chhabra and A. Kumar, Eur. Phys. J A 56, 278 (2020).
* (22) P.A.M. Guichon, Phys. Lett. B 200, 235 (1988).
* (23) S. W. Hong and B. K. Jennings, Phys. Rev. C 64, 038203 (2001).
* (24) K. Tsushima, D. H. Lu, A. W. Thomas, K. Saito and R. H. Landau, Phys. Rev. C 59, 2824 (1999).
* (25) A. Sibirtsev, K. Tsushima and A.W. Thomas, Eur. Phys. J. A 6, 351 (1999).
* (26) K. Saito and A.W. Thomas, Phys. Lett. B 327, 9 (1994).
* (27) P. K. Panda, A. Mishra, J. M. Eisenberg and W. Greiner, Phys. Rev. C 56, 3134 (1997).
* (28) S. Chatterjee and K. A. Mohan, Phys. Rev. D 85, 074018 (2012).
* (29) B.-J. Schaefer, M. Wagner and J. Wambach, Phys. Rev. D 81, 074013 (2010).
* (30) L. Tolós, J. Schaffner-Bielich and A. Mishra, Phys. Rev. C 763, 025203 (2004).
* (31) L. Tolós, J. Schaffner-Bielich and H. Stöcker, Phys. Lett. B 635, 85 (2006).
* (32) L. Tolós, A. Ramos and T. Mizutani, Phys. Rev. C 77, 015207 (2008).
* (33) J. Hofmann and M.F.M. Lutz, Nucl. Phys. A 763, 90 (2005).
* (34) A. Kumar and A. Mishra, Phys. Rev. C 82, 045207 (2010).
* (35) A. Mishra, E. L. Bratkovskaya, J. Schaffer-Bielich, S. Schramm and H. Stöcker, Phys. Rev. C 69, 015202 (2004).
* (36) E. Jenkins and A. Manohar, Phys. Lett. B 255, 558 (1991); 259, 353 (1991).
* (37) X. H. Zhong, G. X. Peng, Lei Li, and P. Z. Ning, Phys. Rev. C 73, 015205 (2006).
* (38) R. Kumar and A. Kumar, Phys. Rev. C 102, 065207 (2020).
* (39) A. Hayashigaki, Phys. Lett. B 487, 96 (2000).
* (40) L. J. Reinders, H. R. Rubinstein and S. Yazaki, Nucl. Phys. B 186, 109 (1981).
* (41) T. Hilger, R. Thomas and B. Kämpfer, Phys. Rev. C 79, 025202 (2009).
* (42) L. J. Reinders, H. R. Rubinstein and S. Yazaki, Phys. Rep. 127, 1 (1985).
* (43) F. Klingl, N. Kaiser and W. Weise, Nucl. Phys. A 624, 527 (1997).
* (44) F. Klingl, S. Kim, S. H. Lee, P. Morath and W. Weise, Phys. Rev. Lett. 82, 3396 (1999).
* (45) Y. Nambu and G. Jona-Lasinio, Phys. Rev. 122, 345 (1961).
* (46) K. Fukushima, Phys. Lett. B 591, 277 (2004).
* (47) K. Kashiwa, H. Kouno, M. Matsuzaki and M. Yahiro, Phys. Lett. B 662, 26 (2008).
* (48) S. K. Ghosh, S. Raha, R. Ray, K. Saha and S. Upadhaya, Phys. Rev. D 91, 054005 (2015).
* (49) Q. Haider and L. C. Liu, Phys. Lett. B 172, 257 (1986).
* (50) L. C. Liu and Q. Haider, Phys. Rev. C 34, 1845 (1986).
* (51) D. B. Kaplan and A. E. Nelson, Phys. Lett. B 175, 57 (1986).
* (52) J. Chen, Z. -Q. Feng, P. -H. Chen, F. Niu, Y. -F. Guo and J. -S. Wang, Eur. Phys. J A 53, 128 (2017).
* (53) J. C. David, A. Boudard, J. Cugnon, J. Hirtz, S. Leray, D. Mancusi and J. L. Rodriguez-Sanchez, Eur. Phys. J Plus 133, 253 (2018).
* (54) T. Inoue and E. Oset, Nucl. Phys. A 710, 354 (2002).
* (55) T. Waas and W. Weise, Nucl. Phys. A 625, 287 (1997).
* (56) J. C. Peng et al., Phys. Rev. Lett. 58, 2027 (1987).
* (57) G. Martinez et al., Phys. Rev. Lett. 83, 1538 (1999).
* (58) G. Agakishiev et al., Phys. Rev. C 88, 024904 (2013).
* (59) F.-D. Berg et al., Phys. Rev. Lett. 72, 977 (1994).
* (60) E. Chiavassa, G. Dellacasa, N. De Marco, C. De Oliveira Martins, M. Gallio, P. Guaita, A. Musso, A. Piccotti, E. Scomparin and E. Vercellin, EPL 41, 365 (1998).
* (61) R. Averbeck, R. Holzmann, V. Metag and R. S. Simon, Phys. Rev. C 67, 024903 (2003).
* (62) Wang Teng-Teng, Chin. Phys. C 34, 460 (2010).
* (63) K. Tsushima, D. H. Lu, A. W. Thomas and K. Saito, Phys. Lett. B 443, 26 (1998).
* (64) H. C. Chiang, E. Oset and L. C. Liu, Phys. Rev. C 44, 738 (1991).
* (65) C. Y. Song, X. H. Zhong, L. Li and P. Z. Ning, EPL 81, 4 (2008).
* (66) G. E .Brown, C. -H. Lee, M. Rho and V. Thorsson, Nucl. Phys. A 567, 937 (1994).
* (67) C. -H. Lee, G. E. Brown, D. -P. Min and M. Rho, Nucl. Phys. A 585, 401 (1995).
* (68) N. Kaiser, P. B. Siegel and W. Weise, Nucl. Phys. A 594, 325 (1995).
* (69) D. Zschiesche, A. Mishra, S. Schramm, H. Stöcker and W. Greiner, Phys. Rev. C 70, 045202 (2004).
* (70) A. Mishra, K. Balazs, D. Zschiesche, S. Schramm, H. Stöcker and W. Greiner, Phys. Rev. C 69, 024903 (2004).
* (71) R. Kumar and A. Kumar, Phys. Rev. C 102, 045206 (2020).
* (72) A. Kumar and A. Mishra, Eur. Phys. J. A 47, 164 (2011).
* (73) R. Kumar and A. Kumar, Phys. Rev. C 101, 015202 (2020).
* (74) A. Kumar, Adv. High Energy Phys. 2014, 549726 (2014).
* (75) R. Chhabra and A. Kumar, Eur. Phys. J. A 53, 105 (2017).
* (76) R. Chhabra and A. Kumar, Eur. Phys. J. C 77, 726 (2017).
* (77) R. Chhabra and A. Kumar, Phys. Rev. C 98, 025205 (2018).
* (78) R. Kumar and A. Kumar, Chin. Phys. C 43, 12 (2019).
* (79) S. Weinberg, Phys. Rev. 166, 1568 (1968).
* (80) S. Coleman, J. Wess and B. Zumino, Phys. Rev. 177, 2239 (1969).
* (81) D. Zschiesche, Description of Hot, Dense and Strange Hadronic Matter in a Chiral $SU(3)_{L}\times SU(3)_{R}$ $\sigma$-Model, Diploma Thesis, Goethe University Frankfurt, Germany, (1997).
* (82) W. A. Bardeen and B. W. Lee, Phys. Rev. 177, 2389 (1969).
* (83) J. Schechter, Phys. Rev. D 21, 3393 (1980).
* (84) H. Gomm, Phys. Rev. D 33, 801 (1986).
* (85) A. Broderick, M. Prakash, and J. M. Lattimer, Astrophys. J. 537, 351 (2000).
* (86) A. Broderick, M. Prakash, and J. M. Lattimer, Phys. Lett. B 531, 167 (2002).
* (87) P.A. Zyla [Particle Data Group] et al., Prog. Theor. Exp. Phys. 2020, 083C01 (2020).
* (88) L. Burakovsky, T. Goldman, arXiv:hep-ph/9708498v1 (1997).
* (89) A. Mishra and S. Schramm, Phys. Rev. C 74, 064904 (2006).
* (90) A. Mishra, S. Schramm and W. Greiner, Phys. Rev. C 78, 024901 (2008).
* (91) T-S. Park, D. -P. Min and M. Rho, Phys. Rep. 233, 341 (1993).
* (92) V. E. Lyubovitskij, Th. Gutsche, A. Faessler, and E. G. Drukarev, Phys. Rev. D 63, 054026 (2001).
* (93) S. J. Dong, J. -F. Lagaë and K. F. Liu, Phys. Rev. D 54, 5496 (1996).
* (94) T. Hatsuda and T. Kunihiro Phys. Rep. 247, 221 (1994).
* (95) H. Georgi, Weak Interactions and Modern Particle Theory (Benjamin/Cummings, Menlo Park, CA, 1984).
* (96) H. D. Politzer and M. B. Wise, Phys. Lett. B 273, 156 (1991).
* (97) A. M. Green and S. Wycech, Phys. Rev. C 71, 014001 (2005).
* (98) F. Renard et al., Phys. Lett. B 528, 215 (2002).
* (99) R. A. Arndt, W. J. Briscoe, T. W. Morrison, I. I. Strakovsky, R. L. Workman and A. B. Gridnev, Phys. Rev. C 72, 045202 (2005).
* (100) A. M. Green and S. Wycech, Phys. Rev. C 60, 035208 (1999).
# Renormalized Schwinger-Dyson functional
Enore Guadagnini${}^{\,a}$ and Vittoria Urso${}^{\,b}$
${}^{a\,}$Dipartimento di Fisica E. Fermi dell’Università di Pisa, and INFN
Sezione di Pisa,
Largo B. Pontecorvo 2, 56127 Pisa, Italy.
${}^{b\,}$Dipartimento di Matematica e Fisica Ennio De Giorgi, Università del
Salento,
and IIT di Lecce, Piazza Tancredi 7, 73100 Lecce, Italy
###### Abstract
We consider the perturbative renormalization of the Schwinger-Dyson
functional, which is the generating functional of the expectation values of
the products of the composite operator given by the field derivative of the
action. It is argued that this functional plays an important role in the
topological Chern-Simons and BF quantum field theories. It is shown that, by
means of the renormalized perturbation theory, a canonical renormalization
procedure for the Schwinger-Dyson functional is obtained. The combinatoric
structure of the Feynman diagrams is illustrated in the case of scalar models.
For the Chern-Simons and BF gauge theories, the relationship between the
renormalized Schwinger-Dyson functional and the generating functional of the
correlation functions of the gauge fields is derived.
## 1 Introduction
The Schwinger-Dyson equations [1, 2] of quantum field theory can be derived
[3, 4] from the invariance of the functional integration under field
translations. The structure of the Schwinger-Dyson equations is determined by
the action functional, which is involved in the computation of the vacuum
expectation values of the fields. Let the action $S[\phi]$ be a function of a
set of fields denoted by $\phi(x)$. The basic Schwinger-Dyson equation takes
the form
$\left\langle\frac{\delta
S[\phi]}{\delta\phi(x)}\,\phi(y_{1})\phi(y_{2})\cdots\phi(y_{n})\right\rangle=i\sum_{j=1}^{n}\,\delta(x-y_{j})\left\langle\phi(y_{1})\cdots\phi(y_{j-1})\phi(y_{j+1})\cdots\phi(y_{n})\right\rangle\;,$
(1.1)
where the vacuum expectation value $\langle{\cal P}[\phi]\rangle$ of a field
operator ${\cal P}[\phi]$ is given by
$\left\langle{\cal P}[\phi]\right\rangle=\frac{\int D\phi\;e^{iS[\phi]}\;{\cal
P}[\phi]}{\int D\phi\;e^{iS[\phi]}}\;.$ (1.2)
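Equation (1.1) can be verified directly in a zero-dimensional toy model, where the functional integral (1.2) collapses to an ordinary integral. The following numerical sketch is our illustration, not part of the argument above: it uses a Euclidean weight $e^{-S}$ (so the factor $i$ in (1.1) disappears) and an arbitrary quartic action of our choosing; the delta functions reduce to $1$ and the sum over $j$ gives a factor of $n$.

```python
# Zero-dimensional, Euclidean analogue of the Schwinger-Dyson equation (1.1):
#   <S'(phi) phi^n> = n <phi^(n-1)>   with weight exp(-S).
# The action below is an illustrative choice, not taken from the paper.
import numpy as np
from scipy.integrate import quad

g = 0.3
S = lambda p: 0.5 * p**2 + 0.25 * g * p**4   # Euclidean toy action
dS = lambda p: p + g * p**3                  # dS/dphi

def ev(obs):
    """Normalized expectation value with weight exp(-S)."""
    num, _ = quad(lambda p: obs(p) * np.exp(-S(p)), -np.inf, np.inf)
    den, _ = quad(lambda p: np.exp(-S(p)), -np.inf, np.inf)
    return num / den

n = 3
lhs = ev(lambda p: dS(p) * p**n)    # <S'(phi) phi^n>
rhs = n * ev(lambda p: p**(n - 1))  # n <phi^(n-1)>, delta -> 1 in d=0
assert abs(lhs - rhs) < 1e-6
```

The identity holds for any $n$ and any action for which the boundary term of the integration by parts vanishes.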
Recently, developments of the Schwinger-Dyson equations have been applied in
the study of various subjects like, for instance, the renormalization theory
[5, 6], condensed matter investigations [7, 8], and bound states and strong
interactions [9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. Standard Schwinger-Dyson
equations have also been used in the case of topological quantum field
theories with and without matter [19, 20, 21].
We are interested in a particular generalization of equation (1.1) which
concerns the computation of the expectation values of the products of the
composite operator $\delta S[\phi]/\delta\phi(x)$,
$F(x_{1},x_{2},...,x_{n})=\left\langle\frac{\delta
S[\phi]}{\delta\phi(x_{1})}\,\frac{\delta
S[\phi]}{\delta\phi(x_{2})}\cdots\frac{\delta
S[\phi]}{\delta\phi(x_{n})}\right\rangle\;.$ (1.3)
The generating functional $Z_{SD}[B]$ of the expectation values (1.3) is
called the Schwinger-Dyson functional and is defined by
$Z_{SD}[B]=\left\langle e^{i\int dx\,B(x)\,\delta
S[\phi]/\delta\phi(x)}\right\rangle\;,$ (1.4)
where $B(x)$ denotes a classical source.
The functional (1.4) plays an important role in the low-dimensional gauge
field theories of topological type, like the Chern-Simons and BF quantum field
theories [22, 23, 24]. In these models, the derivative of the action with
respect to the components of the connection is proportional to the curvature
(plus possible additional contributions which are related to the gauge-fixing
lagrangian terms), which, combined with the topology of 3-manifolds, determines
the values of the Wilson line observables [22].
In fact, when the gauge structure group of these topological models is
abelian, the Schwinger-Dyson functional provides the complete solution for the
gauge invariant observables [22, 23].
For instance, when the first homology group [25] $H_{1}(M)$ of the 3-manifold
$M$ is trivial, one can compute [22] the observables of the abelian Chern-
Simons theory defined in $M$ by means of perturbation theory. The action for
the connection $A$ is given by $2\pi k\int A\wedge dA$ and the variation of
the action with respect to the fields is proportional to the curvature
$F_{A}=dA$. By introducing the coupling $\int B\wedge dA$ of the curvature
with an external classical source $B=B_{\mu}(x)dx^{\mu}$, one finds
$\widetilde{Z}_{SD}[B]\equiv\left\langle e^{2\pi i\int B\wedge
dA}\right\rangle=e^{-i\,\pi/2k\int B\wedge dB}\;,$ (1.5)
that specifies the expectation values of the Wilson lines associated with
links in $M$. Quite remarkably, an appropriate generalization [22] of this
procedure furnishes the solution of the abelian Chern-Simons theory in a
generic closed and oriented 3-manifold $M$. Indeed, when the first homology
group $H_{1}(M)$ is not trivial, for each element of the torsion subgroup [25]
of $H_{1}(M)$ one can introduce a corresponding classical background
connection. Then one needs to take the sum of the Schwinger-Dyson functionals
that are computed in the presence of each background connection. Roughly speaking, in
the functional integration, the values of the curvature correspond to the
local degrees of freedom —which do not depend on the topology of the manifold
$M$— whereas the effects of topology are taken into account by the background
connections.
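For the reader's convenience, let us sketch how the Gaussian result (1.5) arises (this is the standard completion of the square, assuming trivial $H_{1}(M)$ as above). Shifting the integration variable, $A=A^{\prime}-B/2k$, one finds
$2\pi k\int A\wedge dA+2\pi\int B\wedge dA=2\pi k\int A^{\prime}\wedge dA^{\prime}-{\pi\over 2k}\int B\wedge dB\;,$
where $\int A^{\prime}\wedge dB=\int B\wedge dA^{\prime}$ has been used. The $B$-independent Gaussian integral over $A^{\prime}$ cancels in the normalized expectation value (1.2), which yields equation (1.5).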
In the case of the non-abelian $SU(N)$ Chern-Simons theory, the structure of
the gauge orbits, which are associated with the $SU(N)$ connections, does not
admit [26] a simple description based on the homology group $H_{1}(M)$. Yet,
in the characterization of the local degrees of freedom which are not related
to the manifold topology, the non-abelian curvature $F_{A}=2dA+i[A,A]$
appears to play a fundamental role. In fact, the value $F_{A}(x)$ of the
curvature at the point $x$ is specified [4, 27] by the value of the gauge
holonomy associated with an infinitesimal loop centered at $x$, and each
infinitesimal loop does not depend on the topology of $M$. Let us present a
rough sketch of a possible argument that can be used to make this statement
more precise.
$SU(N)$ gauge connections can be described by one-forms defined in $M$ with
values in the $SU(N)$ Lie algebra. The local values $\\{A(x)\\}$ of each
configuration $A$ can also be specified by the set $\\{H_{\gamma}[A]\\}$ of the
holonomies,
$H_{\gamma}[A]={\rm P}e^{i\int_{\gamma}A}\;,$
which are associated with all the possible closed oriented paths
$\\{\gamma\\}$ in $M$ with a given base-point $x_{0}\in M$, which represents
the starting/final point of each closed path $\gamma$. This correspondence is
denoted by
$\\{A(x)\\}\;\leftrightarrow\;\\{H_{\gamma}[A]\\}\;.$ (1.6)
In turn, the value of the holonomies $H_{\gamma}[A]$ as a function of the
paths can be determined by combining the local values $\\{F_{A}(x)\\}$ of the
curvature with smooth deformations of the paths. In order to illustrate this
point, let us consider a nontrivial reference path $\gamma_{0}$, with
parametrization $x^{\mu}(\tau)$ in which $0\leq\tau\leq 1$. Let the holonomy
$\widehat{H}_{\gamma_{0}}(s)$ with $0\leq s\leq 1$ be defined by
$\widehat{H}_{\gamma_{0}}(s)={\rm P}e^{i\int_{0}^{s}d\tau\,A(\tau)}\;,$ (1.7)
in which $A(\tau)=A_{\mu}(x(\tau))(dx^{\mu}(\tau)/d\tau)$. Note that
$H_{\gamma_{0}}[A]=\widehat{H}_{\gamma_{0}}(1)$. An infinitesimal deformation
$\gamma_{0}+\delta\gamma$ of the path $\gamma_{0}$ can be described by the
parametrization $x^{\mu}(\tau)+\epsilon^{\mu}(\tau)$ with
$\epsilon^{\mu}(\tau)\ll 1$. At first order in $\epsilon^{\mu}$, one has
$H_{\gamma_{0}+\delta\gamma}\simeq H_{\gamma_{0}}+\Delta H_{\gamma_{0}}$,
where the infinitesimal modification $\Delta H_{\gamma_{0}}$ of the holonomy,
$\Delta
H_{\gamma_{0}}=i\int_{0}^{1}ds\,\widehat{H}_{\gamma_{0}}(s)\,\epsilon^{\mu}(s)\,\dot{x}^{\nu}(s)F_{\mu\nu}(x(s))\,\widehat{H}^{-1}_{\gamma_{0}}(s)\,H_{\gamma_{0}}[A]\,,$
(1.8)
is specified by $\widehat{H}_{\gamma_{0}}(s)$ and by the values
$\\{F_{A}(x)\\}$ of the curvature. If the path $\gamma$ is homotopically
equivalent to $\gamma_{0}$, the value of the associated holonomy
$H_{\gamma}[A]$ is expected to be determined by $H_{\gamma_{0}}[A]$ with the
help of a set of infinitesimal transformations of the path. As shown in
equation (1.8), the corresponding infinitesimal modifications of the holonomy
can be fixed by means of $\widehat{H}_{\gamma_{0}}(s)$ and the local values
$\\{F_{A}(x)\\}$ of the curvature. If the 3-manifold $M$ is simply connected,
any closed path $\gamma$ is homotopic to $\gamma_{0}$. If $M$ is not simply
connected, for each generator $g$ of the fundamental group $\pi_{1}(M)$ one
can choose a representative path $\gamma_{g}$ and, in agreement with equation
(1.7), one can compute the corresponding holonomy
$\widehat{H}_{\gamma_{g}}(s)$. For each set of homotopic paths, the associated
holonomies can be determined by combining $\widehat{H}_{\gamma_{g}}(s)$ with
the local values $\\{F_{A}(x)\\}$ of the curvature. Thus relation (1.6) should
develop into
$\\{A(x)\\}\;\leftrightarrow\;\\{H_{\gamma}[A]\\}\;\leftrightarrow\;\\{\widehat{H}_{\gamma_{1}}(s),\widehat{H}_{\gamma_{2}}(s)\,,\dots,\widehat{H}_{\gamma_{g}}(s),\dots\,;F_{A}(x)\\}\;.$
(1.9)
Consequently, in the computation of the functional integration, the sum over
the configurations $\\{A(x)\\}$ can then be envisaged to be decomposed into a
sum over the values $\\{F_{A}(x)\\}$ of the curvature and a sum over the
values of the holonomies associated with the paths which represent the
generators of the fundamental group $\pi_{1}(M)$. The values of the curvature
should correspond to the “purely local” degrees of freedom which are
independent of the topology of the manifold $M$. Whereas the values of the
holonomies
$\\{\widehat{H}_{\gamma_{1}}(s),\widehat{H}_{\gamma_{2}}(s)\,,\dots,\widehat{H}_{\gamma_{g}}(s)\\}$
should describe the effects of the nontrivial topology of $M$.
How to carry out the precise disentanglement of the “purely local” degrees of
freedom —described by the curvature— and the “topology dependent” degrees of
freedom in the functional integration is an open problem. In order to
investigate this issue in the case of the nonabelian Chern-Simons and BF
theories, in the present article we analyze a preliminary question which is
related to the perturbative computation of the renormalized generating
functional of the vacuum expectation values of the products of the nonabelian
curvature $F_{A}(x)$ in different points of spacetime. Indeed, for the
topological abelian gauge theories the renormalization is trivial, whereas in
the non-abelian case the renormalization task is not trivial. The main
purpose of our article is to show how the renormalization of the
corresponding Schwinger-Dyson functional $Z_{SD}[B]$ is canonically determined
by the standard renormalization procedure [28, 29] for the correlation
functions of the gauge connections.
We demonstrate that, in the Chern-Simons and BF theories, the renormalized
Schwinger-Dyson functional is related to the generating functional $Z[J]$ of
the correlation functions of the gauge connections by some kind of duality
transformation. Therefore the standard perturbative procedure called
“renormalized perturbation theory” [4] provides a canonical renormalization
for $Z_{SD}[B]$. Note that we are not interested in the matrix elements of the
composite operator $\delta S[\phi]/\delta\phi(x)$ between generic states; this
issue can be studied by means of standard techniques [4, 29, 30]. Motivated by
the results of the topological models with an abelian gauge group, we shall
concentrate on the vacuum expectation values of products of operators $\delta
S[\phi]/\delta\phi(x)$. In this case, the relationship that we derive between
$Z_{SD}[B]$ and $Z[J]$ shows that the standard technique [4, 29, 30] for the
study of the renormalization properties of the composite operator $\delta
S[\phi]/\delta\phi(x)$ greatly simplifies.
Let us remember that the renormalization of the lagrangian field theory models
is expected [31, 32] to be independent of the global aspects of the manifold
that do not modify the short-distance behaviour of the theory. Therefore,
since the nonabelian curvature $F_{A}(x)$ describes degrees of freedom which
do not depend on the topology of the manifold, we shall consider the
renormalization properties of $Z_{SD}[B]$ in flat spacetime.
The combinatoric structure of the Feynman diagrams —entering the perturbative
computation of $Z_{SD}[B]$— is illustrated in the simple case of the field
theory models $\phi^{3}$ and $\phi^{4}$ in four dimensions in Section 2. By
means of the Wick contractions [3, 4] of the field operators, we examine the
Feynman diagrams which are associated with the expectation values (1.3). We
demonstrate that the short distance behaviour of the products of the composite
operator $\delta S[\phi]/\delta\phi(x)$ —which in $d$ dimensional spacetime has
dimension $(d-1)$— is in fact determined by the ultraviolet properties of
operators of dimensions $(d-2)$ and/or $(d-3)$. For instance, when the
interacting lagrangian entering the action $S[\phi]$ is a cubic function of a
scalar field $\phi(x)$, $Z_{SD}[B]$ can be related to the ordinary generating
functional $Z[J]$ of correlation functions of the field $\phi(x)$. In general,
it turns out that the connected component of $Z_{SD}[B]$ is the sum of a
local functional of $B(x)$ and a non-local part which is specified by the
expectation values of the field operators $\phi(x)$ and possibly $\phi^{2}(x)$.
Applications and extensions of the results of Section 2 are presented in
Section 3, where low dimensional gauge theories of topological type are
considered. For the nonabelian $SU(N)$ Chern-Simons model and the $ISU(2)$ BF
gauge theory in ${\mathbb{R}}^{3}$, the relationship between the renormalized
Schwinger-Dyson functional and the generating functional of the correlation
functions of the gauge fields is derived. Section 4 contains the conclusions.
## 2 Structure of the Feynman diagrams
The case of a cubic interaction lagrangian is relevant for the topological
gauge theories in low dimensions. So, let us first consider the field theory
model which is defined by the action
$S[\phi]=\int d^{4}x\,\left({\textstyle{\raise 0.8pt\hbox{$\scriptstyle
1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
2$}}}}\partial_{\mu}\phi\,\partial^{\mu}\phi-{\textstyle{\raise
0.8pt\hbox{$\scriptstyle 1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
2$}}}}m^{2}\phi^{2}+{\textstyle{\raise 0.8pt\hbox{$\scriptstyle
1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle 3$}}}}g\phi^{3}\right)\;,$ (2.1)
where $\phi(x)$ is a real scalar field and the real parameter $g$ denotes the
coupling constant. The generating functional $Z[J]$ of the correlation
functions of the field $\phi(x)$ is defined by
$Z[J]=\left\langle e^{i\int d^{4}x\,J(x)\,\phi(x)}\right\rangle\;.$ (2.2)
The renormalization of $Z[J]$ is obtained by means of the standard procedure
denominated “renormalized perturbation theory” [4]. In this scheme, the
lagrangian parameters assume their renormalized values and, in order to
maintain the validity of the normalization conditions at each order of
perturbation theory, local counterterms are introduced, which cancel exactly
all the contributions to these parameters which are obtained in the loop
expansion. The normalization conditions for the model defined by the action
(2.1) concern the values of the mass, of the coupling constant and the wave
function normalization. Finally, in order to complete the list of the
normalization conditions, one needs to require the absence of a proper vertex
which is linear in the field. Let $\Gamma[\varphi]$ be the effective action
which corresponds to the sum of the one-particle-irreducible diagrams with
external legs represented by $\varphi(x)$. In agreement with the structure of
the lagrangian (2.1), the additional normalization condition is given by
$\left(\delta\Gamma/\delta\varphi(x)\right)|_{\varphi=0}=0$. Note that, in the
case of the $\phi^{4}$ model, the vanishing of the proper vertices which are
linear and cubic in powers of the fields is a consequence of the symmetry
$\varphi\rightarrow-\varphi$ which is imposed on the effective action. In the
case of gauge fields, the analogue of the condition
$\left(\delta\Gamma/\delta\varphi(x)\right)|_{\varphi=0}=0$ is automatically
satisfied.
Let us now consider the perturbative computation of the mean values (1.3). The
perturbative expansion [3, 4] of a generic expectation value (1.2) can be
written as
$\left\langle{\cal P}[\phi]\right\rangle=\frac{\left\langle\,{\cal
P}[\phi]\,e^{iS_{I}[\phi]}\,\right\rangle_{0}}{\left\langle\,e^{iS_{I}[\phi]}\,\right\rangle_{0}}\;,$
(2.3)
where $S_{I}[\phi]$ denotes the integral of the interaction lagrangian
$S_{I}[\phi]={\textstyle{\raise 0.8pt\hbox{$\scriptstyle 1$}\over\hbox{\lower
0.8pt\hbox{$\scriptstyle 3$}}}}g\int d^{4}x\,\phi^{3}(x)\;,$ (2.4)
and the vacuum expectation value of the time-ordered product of the fields
$\left\langle\,{\cal
P}[\phi]\,e^{iS_{I}[\phi]}\,\right\rangle_{0}\equiv\langle 0|\,{\rm
T}\left({\cal P}[\phi]\,e^{iS_{I}[\phi]}\right)|0\rangle$ (2.5)
corresponds to the sum of the Feynman diagrams which are obtained by means of
the Wick contractions [32] of the fields. The set of the connected diagrams is
denoted by $\left\langle\,{\cal
P}[\phi]\,e^{iS_{I}[\phi]}\,\right\rangle_{0}^{c}$. The Feynman propagator
reads
$\langle 0|\,{\rm
T}\left(\phi(x)\,\phi(y)\right)|0\rangle=\mathop{\vbox{\halign{#\cr\kern
3.0pt\cr$\,\,\vrule depth=4.0pt,width=0.4pt\leaders\vrule
depth=0.0pt,height=0.4pt\hfill\vrule depth=4.0pt,width=0.4pt\,\,$\crcr\kern
0.8pt\nointerlineskip\cr$\hfil\displaystyle{\phi(x)\,\phi}\hfil$\crcr}}}\limits(y)=i\,\langle
x|\,\frac{1}{-\partial^{2}-m^{2}+i\epsilon}\,|y\rangle\;.$ (2.6)
The composite operator $\delta S[\phi]/\delta\phi(x)$ takes the form
$\frac{\delta S}{\delta\phi(x)}\equiv
E_{\phi}(x)=\left[-\partial^{2}-m^{2}\right]\phi(x)+g\phi^{2}(x)=\nabla\phi(x)+g\phi^{2}(x)\;,$
(2.7)
where we have introduced the simplifying notation
$\nabla\phi(x)\equiv\left[-\partial^{2}-m^{2}\right]\phi(x)$. The Schwinger-
Dyson functional (1.4) can be written as $Z_{SD}[B]=\exp(iW_{SD}[B])$ where
$iW_{SD}[B]$ is given by the sum of the connected diagrams
$iW_{SD}[B]=\sum_{n}\frac{i^{n}}{n!}\int d^{4}x_{1}\cdots
d^{4}x_{n}B(x_{1})\cdots B(x_{n})\,\left\langle E_{\phi}(x_{1})\cdots
E_{\phi}(x_{n})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}\;.$ (2.8)
Note that the overall multiplying factor, which is given by the sum of the
vacuum-to-vacuum diagrams, is not included in the set of the connected
diagrams (2.8) contributing to $Z_{SD}[B]$. Indeed, as a consequence of the
normalization —shown in equations (1.2) and (2.3)— of the generating
functional (1.4), the multiplying factor of the numerator cancels against the
same factor in the denominator.
Let us examine the perturbative evaluation of $\left\langle
E_{\phi}(x_{1})\cdots
E_{\phi}(x_{n})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}$. In agreement with
the Wick Theorem, let us first consider all the possible Wick contractions of
the operator $\nabla\phi(x)$. Since
$\left[-\partial^{2}-m^{2}\right]\mathop{\vbox{\halign{#\cr\kern
3.0pt\cr$\,\,\vrule depth=4.0pt,width=0.4pt\leaders\vrule
depth=0.0pt,height=0.4pt\hfill\vrule depth=4.0pt,width=0.4pt\,\,$\crcr\kern
0.8pt\nointerlineskip\cr$\hfil\displaystyle{\phi(x)\,\phi}\hfil$\crcr}}}\limits(y)=i\delta^{4}(x-y)$,
the Wick contraction of $\nabla\phi(x)$ with the fields contained in
$e^{iS_{I}[\phi]}$ gives
$\left\langle\,\nabla\phi(x)\,e^{iS_{I}[\phi]}\,\right\rangle_{0}=i\left\langle\nabla\mathop{\vbox{\halign{#\cr\kern
3.0pt\cr$\,\,\vrule depth=4.0pt,width=0.4pt\leaders\vrule
depth=0.0pt,height=0.4pt\hfill\vrule depth=4.0pt,width=0.4pt\,\,$\crcr\kern
0.8pt\nointerlineskip\cr$\hfil\displaystyle{\phi(x)S_{I}}\hfil$\crcr}}}\limits[\phi]\,e^{iS_{I}[\phi]}\,\right\rangle_{0}=\left\langle(-g\phi^{2}(x))\,e^{iS_{I}[\phi]}\,\right\rangle_{0}\;,$
(2.9)
and consequently
$\left\langle
E_{\phi}(x)\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}=\left\langle\left(\nabla\phi(x)+g\phi^{2}(x)\right)\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}=0\;.$
(2.10)
Let us now consider $\left\langle
E_{\phi}(x_{1})E_{\phi}(x_{2})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}$.
Because of equation (2.9), the contraction of $\nabla\phi(x_{1})$ with the
fields contained in $e^{iS_{I}[\phi]}$ gives a vanishing result as a
consequence of the sum with the term $g\phi^{2}(x_{1})$, as shown in equation
(2.10) for the case of $\left\langle
E_{\phi}(x)\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}$. So we must consider the
contraction of $\nabla\phi(x_{1})$ with $E_{\phi}(x_{2})$, which produces
${\nabla\mathop{\vbox{\halign{#\cr\kern 3.0pt\cr$\,\,\vrule
depth=4.0pt,width=0.4pt\leaders\vrule depth=0.0pt,height=0.4pt\hfill\vrule
depth=4.0pt,width=0.4pt\,\,$\crcr\kern
0.8pt\nointerlineskip\cr$\hfil\displaystyle{\phi(x_{1})E}\hfil$\crcr}}}\limits}_{\phi}(x_{2})=i\left[-\partial^{2}-m^{2}\right]\delta^{4}(x_{1}-x_{2})+2ig\,\phi(x_{2})\,\delta^{4}(x_{1}-x_{2})\;.$
(2.11)
Thus one finds
$\displaystyle\left\langle
E_{\phi}(x_{1})E_{\phi}(x_{2})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}$
$\displaystyle=$ $\displaystyle
i\left[-\partial^{2}-m^{2}\right]\,\delta^{4}(x_{1}-x_{2})$ (2.12)
$\displaystyle\qquad+2ig\delta^{4}(x_{1}-x_{2})\left\langle\phi(x_{2})\,e^{iS_{I}[\phi]}\right\rangle_{0}^{c}\;.$
The normalization condition
$\left(\delta\Gamma/\delta\varphi(x)\right)|_{\varphi=0}=0$ on the absence of
the tadpole implies
$\left\langle\phi(x_{2})\,e^{iS_{I}[\phi]}\right\rangle_{0}^{c}=0$. Therefore
$\left\langle
E_{\phi}(x_{1})E_{\phi}(x_{2})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}=i\left[-\partial^{2}-m^{2}\right]\,\delta^{4}(x_{1}-x_{2})\;.$
(2.13)
The same arguments illustrated above give
$\left\langle
E_{\phi}(x_{1})E_{\phi}(x_{2})E_{\phi}(x_{3})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}=-4\,g\,\delta^{4}(x_{1}-x_{2})\,\delta^{4}(x_{2}-x_{3})\;.$
(2.14)
The structure of the diagrams associated with $\left\langle
E_{\phi}(x_{1})E_{\phi}(x_{2})\cdots
E_{\phi}(x_{n})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}$, for generic $n\geq
4$, can be obtained by first considering all the Wick contractions of the
field operators of the type $\nabla\phi$. The combinatorics of these
contractions can easily be obtained by taking into account the symmetric role
of the operators $E_{\phi}(x_{1})E_{\phi}(x_{2})\cdots E_{\phi}(x_{n})$ in the
computation of $W_{SD}[B]$, as shown in equation (2.8). For the connected
diagrams, we find
$\left\langle E_{\phi}(x_{1})E_{\phi}(x_{2})\cdots
E_{\phi}(x_{n})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}=0\quad,\quad\hbox{ if
$n$ is odd}\;,$ (2.15)
and
$\displaystyle\left\langle E_{\phi}(x_{1})E_{\phi}(x_{2})\cdots
E_{\phi}(x_{n})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}$ $\displaystyle=$
$\displaystyle\left\langle\phi(x_{2})\,\phi(x_{4})\cdots\phi(x_{n})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}\times$
(2.16)
$\displaystyle{\hskip-170.71652pt}\times(n-1)!!\,(2ig)^{n/2}\,\delta^{4}(x_{1}-x_{2})\cdots\delta^{4}(x_{n-1}-x_{n})\quad,\quad\hbox{
if $n$ is even}\;.$
Equations (2.15) and (2.16) show that the expectation value $\left\langle
E_{\phi}(x_{1})E_{\phi}(x_{2})\cdots
E_{\phi}(x_{n})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}$ is completely
specified by the expectation value
$\left\langle\phi(x_{2})\,\phi(x_{4})\cdots\phi(x_{n})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}$.
Therefore the standard renormalization procedure for the correlation functions
of the field $\phi(x)$ canonically defines the renormalization for
$\left\langle E_{\phi}(x_{1})E_{\phi}(x_{2})\cdots
E_{\phi}(x_{n})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}$. Equation (2.10)
implies
$i\int d^{4}x\,B(x)\,\left\langle
E_{\phi}(x)\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}=0\;.$ (2.17)
The contact terms (2.13) and (2.14) give rise to a local contribution to
$W_{SD}[B]$. In particular, we find
$\frac{i^{2}}{2!}\int d^{4}x_{1}d^{4}x_{2}\,B(x_{1})B(x_{2})\,\left\langle
E_{\phi}(x_{1})E_{\phi}(x_{2})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}=i\int
d^{4}x\,\left\\{-{\textstyle{\raise 0.8pt\hbox{$\scriptstyle
1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
2$}}}}\partial_{\mu}B\partial^{\mu}B+{\textstyle{\raise
0.8pt\hbox{$\scriptstyle 1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
2$}}}}m^{2}B^{2}\right\\}\;,$ (2.18) $\frac{i^{3}}{3!}\int
d^{4}x_{1}d^{4}x_{2}d^{4}x_{3}\,B(x_{1})B(x_{2})B(x_{3})\,\left\langle
E_{\phi}(x_{1})E_{\phi}(x_{2})E_{\phi}(x_{3})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}=i\int
d^{4}x\,\left[{\textstyle{\raise 0.8pt\hbox{$\scriptstyle 2$}\over\hbox{\lower
0.8pt\hbox{$\scriptstyle 3$}}}}gB^{3}\right]\;.$ (2.19)
The expectation values
$\left\langle\phi(x_{2})\,\phi(x_{4})\cdots\phi(x_{n})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}$,
with $n\geq 4$, correspond to non-local amplitudes. By collecting all the
results on $\left\langle E_{\phi}(x_{1})E_{\phi}(x_{2})\cdots
E_{\phi}(x_{n})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}$ with $n\geq 4$, we
get
$\displaystyle\sum_{n=4}^{\infty}\frac{i^{n}}{n!}\int d^{4}x_{1}\cdots
d^{4}x_{n}B(x_{1})\cdots B(x_{n})\,\left\langle E_{\phi}(x_{1})\cdots
E_{\phi}(x_{n})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}=$
$\displaystyle\quad=\sum_{p=2}^{\infty}\frac{i^{(2p)}(2p-1)!!\,(2ig)^{p}}{(2p)!}\int
d^{4}x_{1}\cdots d^{4}x_{2p}B(x_{1})\cdots B(x_{2p})\,\times$
$\displaystyle\qquad\qquad\qquad\times\left\langle\phi(x_{2})\,\phi(x_{4})\cdots\phi(x_{2p})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}\delta^{4}(x_{1}-x_{2})\cdots\delta^{4}(x_{2p-1}-x_{2p})=$
$\displaystyle\qquad=\sum_{p=2}^{\infty}\frac{(-ig)^{p}}{p!}\int
d^{4}x_{1}\cdots d^{4}x_{p}B^{2}(x_{1})\cdots
B^{2}(x_{p})\left\langle\phi(x_{1})\,\phi(x_{2})\cdots\phi(x_{p})\,e^{iS_{I}[\phi]}\,\right\rangle^{c}_{0}=$
$\displaystyle\qquad=\left\langle e^{i\int
d^{4}x(-g)B^{2}(x)\phi(x)}\,e^{iS_{I}[\phi]}\>\right\rangle^{c}_{0}=Z\left[\left(J=-gB^{2}\right)\right]\;.$
(2.20)
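The last equality in (2.20) relies on the coefficient identity $i^{2p}\,(2p-1)!!\,(2ig)^{p}/(2p)!=(-ig)^{p}/p!$, which follows from $(2p-1)!!/(2p)!=1/(2^{p}\,p!)$ together with $i^{3p}=(-i)^{p}$. A quick symbolic confirmation (our check):

```python
# Verify  i^(2p) (2p-1)!! (2ig)^p / (2p)!  ==  (-ig)^p / p!  for several p.
import sympy as sp

g = sp.symbols('g')
for p in range(2, 8):
    lhs = sp.I**(2 * p) * sp.factorial2(2 * p - 1) * (2 * sp.I * g)**p / sp.factorial(2 * p)
    rhs = (-sp.I * g)**p / sp.factorial(p)
    assert sp.simplify(lhs - rhs) == 0
```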
The sum of the contributions (2.17)-(2.20) shows that the renormalized
Schwinger-Dyson functional for the $\phi^{3}$ model satisfies
$Z_{SD}[B]=e^{iR[B]}\,Z[\widetilde{J}[B]]\;,$ (2.21)
where
$R[B]=\int d^{4}x\,\left\\{-{\textstyle{\raise 0.8pt\hbox{$\scriptstyle
1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
2$}}}}\partial_{\mu}B\partial^{\mu}B+{\textstyle{\raise
0.8pt\hbox{$\scriptstyle 1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
2$}}}}m^{2}B^{2}+{\textstyle{\raise 0.8pt\hbox{$\scriptstyle
2$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle 3$}}}}gB^{3}\right\\}\;,$ (2.22)
and
$\widetilde{J}[B](x)=-gB^{2}(x)\;.$ (2.23)
The expectation values (2.13) and (2.14) determine the local functional $R[B]$
of equation (2.22). As shown in equation (2.21), the renormalization of
$Z_{SD}[B]$ is specified by the standard renormalization of $Z[J]$.
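The algebraic mechanism behind (2.21)-(2.23) is a field translation: shifting $\phi\rightarrow\phi-B$ in $S[\phi]+\int B\,\delta S/\delta\phi$ produces $S[\phi]$ plus the source term with $\widetilde{J}=-gB^{2}$ plus the local functional $R[B]$. In zero dimensions this identity is exact and can be checked symbolically (our toy check; the kinetic term of (2.22) is absent because there are no derivatives):

```python
# d=0 check of the translation identity underlying (2.21)-(2.23):
#   [S + B S'](phi - B)  =  S(phi) - g B^2 phi + R(B),
# with R(B) = (1/2) m^2 B^2 + (2/3) g B^3  (cf. (2.22) without derivatives).
import sympy as sp

phi, B, m, g = sp.symbols('phi B m g')
S = -sp.Rational(1, 2) * m**2 * phi**2 + sp.Rational(1, 3) * g * phi**3
dS = sp.diff(S, phi)

shifted = (S + B * dS).subs(phi, phi - B)   # translate phi -> phi - B
R = sp.Rational(1, 2) * m**2 * B**2 + sp.Rational(2, 3) * g * B**3
target = S + (-g * B**2) * phi + R          # source J = -g B^2, cf. (2.23)
assert sp.expand(shifted - target) == 0
```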
The structure of the results for the $\phi^{3}$ model admits appropriate
generalizations which depend on the form of the lagrangian of each field
theory. Let us consider for instance the $\phi^{4}$ model which is specified
by the action
$S[\phi]=\int d^{4}x\,\left({\textstyle{\raise 0.8pt\hbox{$\scriptstyle
1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
2$}}}}\partial_{\mu}\phi\,\partial^{\mu}\phi-{\textstyle{\raise
0.8pt\hbox{$\scriptstyle 1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
2$}}}}m^{2}\phi^{2}-{\textstyle{\raise 0.8pt\hbox{$\scriptstyle
1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle 4$}}}}g\phi^{4}\right)\;.$
(2.24)
One has
$\frac{\delta
S}{\delta\phi(x)}=\left[-\partial^{2}-m^{2}\right]\phi(x)-g\phi^{3}(x)\;.$
(2.25)
The perturbative expansion of the Schwinger-Dyson functional $Z_{SD}[B]$ can
be examined by means of the method described above. We find
$Z_{SD}[B]=e^{iQ[B]}\left\langle\exp\left\\{i\int
d^{4}x\,\left[-2gB^{3}(x)\,\phi(x)+{\textstyle{\raise 0.8pt\hbox{$\scriptstyle
3$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
2$}}}}gB^{2}(x)\,\phi^{2}(x)\right]\right\\}\right\rangle\;,$ (2.26)
in which
$Q[B]=\int d^{4}x\left[-{\textstyle{\raise 0.8pt\hbox{$\scriptstyle
1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
2$}}}}\partial_{\mu}B(x)\partial^{\mu}B(x)+{\textstyle{\raise
0.8pt\hbox{$\scriptstyle 1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
2$}}}}m^{2}B^{2}(x)+{\textstyle{\raise 0.8pt\hbox{$\scriptstyle
3$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle 4$}}}}gB^{4}(x)\right]\;.$
(2.27)
In this case, $Z_{SD}[B]$ is related to the expectation value of a term in
which, in addition to a coupling with the field operator $\phi(x)$, a coupling
with the operator $\phi^{2}(x)$ is also present. Note that the short distance
behaviour of the composite operator $\phi^{2}(x)$ is taken into account by the
standard renormalization of the generating functional $Z[J]$ of the
correlation functions because $\phi^{2}(x)$ has canonical dimension $2$. For
instance, the one-loop correlation $\langle\phi^{2}(x)\phi^{2}(y)\rangle$ is
described by the diagram of Figure 1 (with removed external legs), which
enters the ordinary renormalization of the $\phi^{4}$ theory. Thus, for this
model also, the renormalization of $Z_{SD}[B]$ can be specified by the
standard renormalization procedure.
Figure 1. One loop diagram of the $\phi^{4}$ model contributing to
$\langle\phi^{2}(x)\phi^{2}(y)\rangle$.
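The same zero-dimensional translation check (our illustration, as for the $\phi^{3}$ model) reproduces the couplings $-2gB^{3}\phi$ and ${3\over 2}gB^{2}\phi^{2}$ appearing in (2.26), together with the non-derivative part of $Q[B]$ in (2.27):

```python
# d=0 check for the phi^4 model: [S + B S'](phi - B) equals S(phi) plus the
# couplings -2g B^3 phi + (3/2) g B^2 phi^2 of (2.26) plus
# Q(B) = (1/2) m^2 B^2 + (3/4) g B^4  (cf. (2.27) without derivatives).
import sympy as sp

phi, B, m, g = sp.symbols('phi B m g')
S = -sp.Rational(1, 2) * m**2 * phi**2 - sp.Rational(1, 4) * g * phi**4
dS = sp.diff(S, phi)

shifted = (S + B * dS).subs(phi, phi - B)
Q = sp.Rational(1, 2) * m**2 * B**2 + sp.Rational(3, 4) * g * B**4
target = S - 2 * g * B**3 * phi + sp.Rational(3, 2) * g * B**2 * phi**2 + Q
assert sp.expand(shifted - target) == 0
```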
As these examples have shown, the connected component $W_{SD}[B]$ of the
renormalized Schwinger-Dyson functional contains a local part, expressions
(2.22) and (2.27), which is determined by the form of the lagrangian of each
specific model, and a non-local contribution which is related to the vacuum
expectation values of field operators of dimension 1 or 2, i.e. the fields
$\phi(x)$ and $\phi^{2}(x)$. This universal feature can be used to specify the
renormalized values of the parameters of each model by the introduction of
appropriate normalization conditions written in terms of the operator
$\left[\delta S/\delta\phi(x)\right]$. Equivalently, some of the vacuum
expectation values of the products of $\left[\delta S/\delta\phi(x)\right]$
are uniquely specified and do not receive perturbative changes. For example,
in the $\phi^{3}$ model, relations (2.13) and (2.14) are exact and are not
modified by loop corrections.
## 3 Topological models
In this section we consider gauge theories of topological type in
${\mathbb{R}}^{3}$. The action of the $SU(N)$ quantum Chern-Simons theory [33,
34, 24] in the Landau gauge is given by
$\displaystyle S$ $\displaystyle=$ $\displaystyle{k\over 4\pi}\int
d^{3}x\,\Big{\\{}\epsilon^{\mu\nu\tau}\left[{\textstyle{\raise
0.8pt\hbox{$\scriptstyle 1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
2$}}}}A^{a}_{\mu}\partial_{\nu}A^{a}_{\tau}-{\textstyle{\raise
0.8pt\hbox{$\scriptstyle 1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
6$}}}}\,f^{abc}A^{a}_{\mu}A^{b}_{\nu}A^{c}_{\tau}\right]$ (3.1)
$\displaystyle\qquad\qquad-M^{a}\partial^{\mu}A^{a}_{\mu}+\partial^{\mu}{\overline{c}}^{a}\left(\partial_{\mu}c^{a}-f^{abc}A^{b}_{\mu}c^{c}\right)\Big{\\}}\;,$
and then
$\frac{\delta S}{\delta A^{a}_{\mu}(x)}=\left({k\over
4\pi}\right)\left[\epsilon^{\mu\nu\tau}\left(\partial_{\nu}A_{\tau}^{a}(x)-{\textstyle{\raise
0.8pt\hbox{$\scriptstyle 1$}\over\hbox{\lower 0.8pt\hbox{$\scriptstyle
2$}}}}\,f^{abc}A^{b}_{\nu}(x)A^{c}_{\tau}(x)\right)+\partial^{\mu}M^{a}(x)+\partial^{\mu}{\overline{c}}^{b}(x)f^{abd}c^{d}(x)\right]\;.$
(3.2)
The Schwinger-Dyson functional $Z_{SD}[L^{a}_{\mu}]$ is defined by
$Z_{SD}[L^{a}_{\mu}]=\left\langle\exp\left(i\int
d^{3}x\,L^{a}_{\mu}(x)\left[\delta S/\delta
A_{\mu}^{a}(x)\right]\right)\right\rangle\;,$ (3.3)
and the generating functional $Z[J^{a}_{\mu}]$ of the correlation functions
for the gauge field $A^{a}_{\mu}(x)$ is given by
$Z[J^{a}_{\mu}]=\left\langle\exp\left(i\int
d^{3}x\,J^{a\mu}(x)A_{\mu}^{a}(x)\right)\right\rangle\;.$ (3.4)
In order to examine the diagrams entering the vacuum expectation values of the
product of fields $\delta S/\delta A^{a}_{\mu}(x_{1})\cdots\delta S/\delta
A^{b}_{\nu}(x_{n})$, one needs to use the following relationship between the
propagators of the fields
$\epsilon^{\mu\sigma\lambda}\partial_{\sigma}\mathop{\vbox{\halign{#\cr\kern
3.0pt\cr$\,\,\vrule depth=4.0pt,width=0.4pt\leaders\vrule
depth=0.0pt,height=0.4pt\hfill\vrule depth=4.0pt,width=0.4pt\,\,$\crcr\kern
0.8pt\nointerlineskip\cr$\hfil\displaystyle{A^{a}_{\lambda}(x)A}\hfil$\crcr}}}\limits\\!{}^{b}_{\nu}(y)+\partial^{\mu}\\!\mathop{\vbox{\halign{#\cr\kern
3.0pt\cr$\,\,\vrule depth=4.0pt,width=0.4pt\leaders\vrule
depth=0.0pt,height=0.4pt\hfill\vrule depth=4.0pt,width=0.4pt\,\,$\crcr\kern
0.8pt\nointerlineskip\cr$\hfil\displaystyle{M^{a}(x)A}\hfil$\crcr}}}\limits\\!{}^{b}_{\nu}(y)=i\left(\frac{4\pi}{k}\right)\delta^{ab}\,\delta^{\mu}_{\nu}\,\delta^{3}(x-y)\;,$
(3.5)
which can be derived from the action (3.1), or it can be checked directly by
means of the expressions
$\mathop{\vbox{\halign{#\cr\kern 3.0pt\cr$\,\,\vrule
depth=4.0pt,width=0.4pt\leaders\vrule depth=0.0pt,height=0.4pt\hfill\vrule
depth=4.0pt,width=0.4pt\,\,$\crcr\kern
0.8pt\nointerlineskip\cr$\hfil\displaystyle{A^{a}_{\mu}(x)A}\hfil$\crcr}}}\limits\\!{}^{b}_{\nu}(y)=\delta^{ab}\left(\frac{4\pi}{k}\right)\int{d^{3}p\over(2\pi)^{3}}\,e^{ip(x-y)}\,\epsilon_{\mu\nu\lambda}{p^{\lambda}\over
p^{2}}\;,$ (3.6) $\mathop{\vbox{\halign{#\cr\kern 3.0pt\cr$\,\,\vrule
depth=4.0pt,width=0.4pt\leaders\vrule depth=0.0pt,height=0.4pt\hfill\vrule
depth=4.0pt,width=0.4pt\,\,$\crcr\kern
0.8pt\nointerlineskip\cr$\hfil\displaystyle{A^{a}_{\mu}(x)M}\hfil$\crcr}}}\limits\\!{}^{b}(y)=-\delta^{ab}\left(\frac{4\pi}{k}\right)\int{d^{3}p\over(2\pi)^{3}}\,e^{ip(x-y)}\,{p_{\mu}\over
p^{2}}\;.$ (3.7)
The presence of the operator
$\partial^{\mu}{\overline{c}}^{b}(x)f^{abd}c^{d}(x)$ in equation (3.2)
does not modify the perturbative relations between the expectation values of
the operators $\delta S/\delta A^{a}_{\mu}(x_{1})\cdots\delta S/\delta
A^{b}_{\nu}(x_{n})$ —that have been derived in Section 2 by using the Wick
contractions of the fields— because this additional term has no contractions
with the gauge fields $A^{a}_{\mu}(x)$ and the auxiliary field $M^{a}(x)$.
Thus, by means of the arguments presented in Section 2 , one gets
$Z_{SD}[L^{a}_{\mu}]=e^{iF[L^{a}_{\mu}]}\,Z[\,\widehat{J}[L]^{a}_{\mu}\,]\;,$
(3.8)
where
$F[L^{a}_{\mu}]=-\frac{k}{4\pi}\int d^{3}x\,\epsilon^{\mu\nu\tau}\left\{\frac{1}{2}L^{a}_{\mu}\partial_{\nu}L^{a}_{\tau}+\frac{1}{3}f^{abc}L^{a}_{\mu}L^{b}_{\nu}L^{c}_{\tau}\right\}\;,$ (3.9)
and
$\widehat{J}[L]^{a}_{\mu}(x)={k\over
8\pi}\epsilon^{\mu\nu\tau}\,f^{abc}L_{\nu}^{b}(x)L^{c}_{\tau}(x)\;.$ (3.10)
In addition to the local component, which is described by the function
$F[L^{a}_{\mu}]$ of equation (3.9), the connected component of
$Z_{SD}[L^{a}_{\mu}]$ contains powers of $L^{a}_{\mu}$ greater than or equal
to four. In particular, for each connected diagram $D$ contributing to
$Z[J^{a}_{\mu}]$ there is a corresponding connected diagram $D^{\prime}$
contributing to $Z_{SD}[L^{a}_{\mu}]$. In agreement with equation (3.8),
$D^{\prime}$ can be obtained from $D$ by the introduction of the “effective
classical vertex” (3.10) in each external leg of $D$, as depicted in Figure 2.
Clearly, the renormalization of the amplitude associated with each diagram $D$
defines a canonical renormalization of the amplitude corresponding to
$D^{\prime}$. So, equation (3.8) provides a definition of the renormalized
Schwinger-Dyson functional $Z_{SD}[L^{a}_{\mu}]$.
Figure 2. Diagram $D$ and the associated diagram $D^{\prime}$ contributing to
$Z_{SD}$.
Equation (3.8) also specifies the leading term of the operator product
expansion [4]
$\left[\delta S/\delta A^{a}_{\mu}(0)\right]\,\left[\delta S/\delta
A^{b}_{\nu}(x)\right]\rightarrow\sum_{n}C_{n}(x)\,{\cal O}_{n}\;.$ (3.11)
Indeed the value of the coefficient function $C_{1}(x)$ for the identity
operator ${\cal O}_{1}=1$ is determined by expression (3.9)
$C_{1}(x)=\left(\frac{k}{4\pi}\right)\delta^{ab}\epsilon^{\mu\nu\lambda}\partial_{\lambda}\delta^{3}(x)\;,$
(3.12)
and it does not receive perturbative corrections.
The Schwinger-Dyson functional for the abelian Chern-Simons theory can be
obtained in the $f^{abc}\rightarrow 0$ limit. In this case, equation (3.8)
becomes
$Z_{SD}[L_{\mu}]=\exp\left\{-i\frac{k}{4\pi}\int d^{3}x\,\epsilon^{\mu\nu\tau}\,\frac{1}{2}L_{\mu}\partial_{\nu}L_{\tau}\right\}\;.$ (3.13)
When $L_{\mu}(x)$ coincides with the de Rham-Federer current [35, 36, 37, 38]
associated with a link $\cal L$ (with support on a Seifert surface [25] which
bounds the link $\cal L\,$), expression (3.13) represents precisely the
exponent of the linking matrix corresponding to $\cal L$. For this reason, in
the abelian Chern-Simons theory the Schwinger-Dyson functional provides the
solution [22] for the link observables.
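For concreteness (a standard identity, stated here as an illustration rather than derived in the text; the overall normalization depends on the charge assignments of the link components), the quadratic form in (3.13) evaluated on such currents reproduces the Gauss integral for the entries of the linking matrix:

```latex
% For a two-component link {\cal L} = C_{1} \cup C_{2}, the off-diagonal
% entry of the linking matrix is the Gauss linking number
\ell k(C_{1},C_{2})
  = \frac{1}{4\pi}\oint_{C_{1}}dx^{\mu}\oint_{C_{2}}dy^{\nu}\,
    \epsilon_{\mu\nu\lambda}\,\frac{(x-y)^{\lambda}}{|x-y|^{3}}\;,
% so that, up to normalization conventions, expression (3.13) becomes the
% exponential of (2\pi i/k) times the corresponding quadratic form in the
% linking matrix of {\cal L}.
```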
Finally, let us consider the $ISU(2)$ BF theory [39, 40, 41, 42, 43] in
${\mathbb{R}}^{3}$, with fields $A^{a}_{\mu}(x)$ and $B^{a}_{\mu}(x)$ and
gauge-fixed action in the Landau gauge
$\displaystyle S$ $\displaystyle=$ $\displaystyle\int d^{3}x\,\epsilon^{\mu\nu\lambda}\left\{\frac{1}{2}\,B^{a}_{\mu}F^{a}_{\nu\lambda}(A)+g\left[\frac{1}{2}A^{a}_{\mu}\partial_{\nu}A^{a}_{\lambda}-\frac{1}{6}\epsilon^{abc}\,A^{a}_{\mu}A^{b}_{\nu}A^{c}_{\lambda}\right]\right\}$ (3.14)
$\displaystyle\qquad+\int d^{3}x\,\Bigl\{M^{a}\partial^{\mu}A^{a}_{\mu}+N^{a}\partial^{\mu}B^{a}_{\mu}+\partial^{\mu}{\overline{c}}^{a}(\partial_{\mu}c^{a}-\epsilon^{abc}A_{\mu}^{b}c^{c})+\partial^{\mu}{\overline{\xi}}^{a}(\partial_{\mu}\xi^{a}-\epsilon^{abc}A_{\mu}^{b}\xi^{c}-\epsilon^{abc}B_{\mu}^{b}c^{c})\Bigr\}\;,$
where the real parameter $g$ denotes a coupling constant and
$F^{a}_{\mu\nu}(A)=\partial_{\mu}A_{\nu}^{a}-\partial_{\nu}A_{\mu}^{a}-\epsilon^{abc}A^{b}_{\mu}A^{c}_{\nu}\;.$
(3.15)
The non-vanishing propagators for the components of the connection and the
auxiliary fields are given by [43]
$\displaystyle\langle A^{a}_{\mu}(x)B^{b}_{\nu}(y)\rangle$ $\displaystyle=$ $\displaystyle\delta^{ab}\int\frac{d^{3}k}{(2\pi)^{3}}\,e^{ik(x-y)}\,\epsilon_{\mu\nu\lambda}\frac{k^{\lambda}}{k^{2}}\;,$ (3.18)
$\displaystyle\langle B^{a}_{\mu}(x)B^{b}_{\nu}(y)\rangle$ $\displaystyle=$ $\displaystyle-g\,\delta^{ab}\int\frac{d^{3}k}{(2\pi)^{3}}\,e^{ik(x-y)}\,\epsilon_{\mu\nu\lambda}\frac{k^{\lambda}}{k^{2}}\;,$ (3.21)
and
$\displaystyle\langle A^{a}_{\mu}(x)M^{b}(y)\rangle$ $\displaystyle=$ $\displaystyle\delta^{ab}\int\frac{d^{3}k}{(2\pi)^{3}}\,e^{ik(x-y)}\,\frac{k_{\mu}}{k^{2}}\;,$ (3.24)
$\displaystyle\langle B^{a}_{\mu}(x)N^{b}(y)\rangle$ $\displaystyle=$ $\displaystyle\delta^{ab}\int\frac{d^{3}k}{(2\pi)^{3}}\,e^{ik(x-y)}\,\frac{k_{\mu}}{k^{2}}\;.$ (3.27)
In this model one has
$\frac{\delta S}{\delta A^{a}_{\mu}(x)}=\epsilon^{\mu\nu\lambda}\left[\left(\partial_{\nu}B^{a}_{\lambda}-\epsilon^{abc}A^{b}_{\nu}B^{c}_{\lambda}\right)+\frac{g}{2}F^{a}_{\nu\lambda}(A)\right]-\partial^{\mu}M^{a}-\epsilon^{abc}\left[\partial^{\mu}{\overline{c}}^{b}c^{c}-\partial^{\mu}{\overline{\xi}}^{b}\xi^{c}\right],$
(3.28)
and
$\frac{\delta S}{\delta B^{a}_{\mu}(x)}=\frac{1}{2}\epsilon^{\mu\nu\lambda}F^{a}_{\nu\lambda}(A)-\partial^{\mu}N^{a}-\epsilon^{abc}\partial^{\mu}{\overline{\xi}}^{b}c^{c}\;.$
(3.29)
The Schwinger-Dyson functional $Z_{SD}[L^{a}_{\mu},H^{a}_{\mu}]$ is defined by
$Z_{SD}[L^{a}_{\mu},H^{a}_{\mu}]=\left\langle e^{i\int
d^{3}x\left[L^{a}_{\mu}\left(\delta S/\delta
A_{\mu}^{a}\right)+H^{a}_{\mu}\left(\delta S/\delta
B_{\mu}^{a}\right)\right]}\right\rangle\;,$ (3.30)
where $L^{a}_{\mu}(x)$ and $H^{a}_{\mu}(x)$ are classical sources. The
generating functional $Z[J^{a}_{\mu},K^{a}_{\mu}]$ of the correlation
functions for the gauge field $A^{a}_{\mu}(x)$ and $B_{\mu}^{a}(x)$ is given
by
$Z[J^{a}_{\mu},K^{a}_{\mu}]=\left\langle e^{i\int
d^{3}x\left[J^{a}_{\mu}A_{\mu}^{a}+K^{a}_{\mu}B^{a}_{\mu}\right]}\right\rangle\;.$
(3.31)
With several field components, the construction and summation of the Feynman
diagrams become rather laborious. We get
$Z_{SD}[L^{a}_{\mu},H^{a}_{\mu}]=e^{iG[L^{a}_{\mu},H^{a}_{\mu}]}\,Z[\widetilde{J}^{a}_{\mu},\widetilde{K}^{a}_{\mu}]\;,$
(3.32)
in which
$G[L^{a}_{\mu},H^{a}_{\mu}]=-\int d^{3}x\,\epsilon^{\mu\nu\lambda}\Bigl\{H^{a}_{\mu}\partial_{\nu}L^{a}_{\lambda}+\epsilon^{abc}H^{a}_{\mu}L^{b}_{\nu}L^{c}_{\lambda}+\frac{g}{2}L^{a}_{\mu}\partial_{\nu}L^{a}_{\lambda}+\frac{g}{3}\epsilon^{abc}L^{a}_{\mu}L^{b}_{\nu}L^{c}_{\lambda}\Bigr\},$ (3.33)
and
$\widetilde{J}^{a}_{\mu}=\epsilon^{\mu\nu\lambda}\epsilon^{abc}\left[H^{b}_{\nu}L^{c}_{\lambda}+\frac{g}{2}L^{b}_{\nu}L^{c}_{\lambda}\right]\qquad,\qquad\widetilde{K}^{a}_{\mu}=\frac{1}{2}\epsilon^{\mu\nu\lambda}\epsilon^{abc}L^{b}_{\nu}L^{c}_{\lambda}\;.$
(3.34)
In addition to the contact terms, which are specified by the local functional
$G[L^{a}_{\mu},H^{a}_{\mu}]$, in the BF theory there are additional
expectation values that can be displayed in their exact form. For instance,
since the vacuum polarization vanishes, from equation (3.32) one derives the
following relation for the connected mean value
$\displaystyle\int
d^{3}x\,d^{3}y\,d^{3}z\,d^{3}t\,L^{a}_{\mu}(x)L^{b}_{\nu}(y)L^{c}_{\lambda}(z)H^{d}_{\rho}(t)\left\langle\frac{\delta
S}{\delta A^{a}_{\mu}(x)}\frac{\delta S}{\delta A^{b}_{\nu}(y)}\frac{\delta
S}{\delta A^{c}_{\lambda}(z)}\frac{\delta S}{\delta B^{d}_{\rho}(t)}\right\rangle^{c}=$
$\displaystyle\;\;=-i\frac{3}{4\pi}\int
d^{3}x\,d^{3}y\,\frac{(x-y)^{\lambda}}{|x-y|^{3}}\epsilon_{\lambda\mu\nu}\epsilon^{\mu\rho\tau}\epsilon^{\nu\sigma\xi}\epsilon^{abc}\epsilon^{ade}\,H^{b}_{\rho}(x)L^{c}_{\tau}(x)L^{d}_{\sigma}(y)L^{e}_{\xi}(y)\;.$
(3.35)
Actually, in both the Chern-Simons and the BF theories, the connected
components of the Schwinger-Dyson functional containing up to six powers of
the external classical sources can be exhibited in closed form, because of the
vanishing of the loop corrections to the two-point and three-point
correlation functions of the gauge fields [29, 34, 43].
## 4 Conclusions
For renormalizable quantum field theories we have shown that, in the
perturbative computation of the corresponding Schwinger-Dyson functional
$Z_{SD}$, the use of standard renormalized perturbation theory provides a
canonical renormalization procedure for $Z_{SD}$. In fact, the short-distance
behaviour of products of the composite operator $\delta
S[\phi]/\delta\phi(x)$ turns out to be determined by the ultraviolet
properties of the field operators of dimension 1 (and possibly 2). The
explicit combinatorics of the Wick contractions of the field operators and the
resulting structure of the Feynman diagrams have been illustrated in the
simple cases of the $\phi^{3}$ and $\phi^{4}$ models. We have shown that the
connected component of $Z_{SD}$ is the sum of a local functional of the
classical source and a non-local part which is specified by the expectation
values of field components. The arguments that have been presented in these
scalar models naturally extend to a generic theory.
In order to study possible applications of the Schwinger-Dyson functional in
gauge field theories of topological type, for the non-abelian Chern-Simons and
BF gauge theories, the relationship between the renormalized Schwinger-Dyson
functional and the generating functional of the correlation functions of the
gauge fields has been derived. In these cases, the vanishing of the loop
corrections to the two-point and three-point correlation functions implies
that the connected components of the renormalized Schwinger-Dyson functional
containing up to six powers of the external classical sources are obtained
in closed form. In these topological models, the derivative of the
action with respect to the gauge fields is proportional to the curvature of
the connection (plus gauge-fixing contributions). So relations (3.8) and
(3.32) could possibly be used to introduce appropriate field
variables —similar to the local gauge-invariant variables decomposition [44]—
which simplify the functional integration when the theory is defined on
topologically non-trivial manifolds.
## References
* [1] F.J. Dyson, Phys. Rev. 75 (1949) 1736.
* [2] J. Schwinger, Proc. Nat. Acad. S. 37 (1951) 452.
* [3] C. Itzykson and J-B. Zuber, Quantum Field Theory, McGraw-Hill (Singapore, 1980).
* [4] M.E. Peskin and D.V. Schroeder, An Introduction to Quantum Field Theory, Westview Press (Boulder, 1995).
* [5] D.J. Broadhurst and D. Kreimer, Nucl. Phys. B 600 (2001) 403.
* [6] D. Kreimer and K. Yeats, Nucl. Phys. Proc. Suppl. 160 (2006) 116.
* [7] B. Sbierski, C. Fraßdorf, Phys. Rev. B 99 (2019) 020201.
* [8] X.-Y. Pan, X. Li , G.Z. Liu, Nonperturbative Dyson-Schwinger equation approach to strongly interacting Dirac fermion systems, arXiv:2003.10371.
* [9] M. Chen, L. Chang, Y.-x Liu, Phys. Rev. D 101, (2020) 056002.
* [10] G. Eichmann, H. Sanchis-Alepuz, R. Williams, R. Alkofer, and C.S. Fischer, Prog. Part. Nucl. Phys. 91 (2016) 1.
* [11] I.C. Cloet and C.D. Roberts, Prog. Part. Nucl. Phys. 77 (2014) 1.
* [12] H. Sanchis-Alepuz and R. Williams, Comput. Phys. Commun. 232 (2018) 1.
* [13] P. Isserstedt, M. Buballa, C.S. Fischer, P.J. Gunkel, Phys. Rev. D 100 (2019) 074011.
* [14] J-H. Huang, T-T. Sun , H. Chen, Phys. Rev. D 101 (2020) 054007.
* [15] K. Raya, A. Bashir, P. Roig, Phys. Rev. D 101 (2020) 074021.
* [16] R. Contant, M.Q. Huber, Phys. Rev. D 101 (2020) 014016.
* [17] A. Doff, A.A. Natale, Phys. Rev. D 99 (2019) 055026.
* [18] C. Tang, F. Gao, Y.-x. Liu, Phys. Rev. D 100 (2019) 056001.
* [19] D. Daniel and N. Dorey, Phys. Lett. B 246 (1990) 82.
* [20] S. Giombi, S. Minwalla, S. Prakash, S.P. Trivedi, S.R. Wadia, X. Yin, Eur. Phys. J. C 72 (2012) 2112.
* [21] S. Jain, S.P. Trivedi, S.R. Wadia, S. Yokoyama, JHEP (2012) 194.
* [22] E. Guadagnini and F. Thuillier, Nucl. Phys. B 882 (2014) 450-484.
* [23] Ph. Mathieu and F. Thuillier, J. Math. Phys. 57 (2016), 022306.
* [24] E. Guadagnini, Nucl. Phys. B 912 (2016) 238.
* [25] D. Rolfsen, Knots and Links, AMS Chelsea Publishing, Providence, 2003.
* [26] N. Saveliev, Invariants for Homology 3-Spheres, Springer-Verlag (Berlin, Heidelberg, 2010).
* [27] L.D. Landau and E.M. Lifshitz, Vol. 2 of the Course of Theoretical Physics; The Classical Theory of Fields, Pergamon Press Ltd. (Oxford, 1981).
* [28] A.N. Kapustin and P.I. Pronin, Mod. Phys. Lett. A9 (1994) 1925.
* [29] O. Piguet and S.P. Sorella, Algebraic Renormalization, Springer-Verlag (Berlin, 1995).
* [30] K.G. Wilson and W. Zimmermann, Commun. Math. Phys. 24 (1972) 87.
* [31] S. Coleman, Aspects of Symmetry, Cambridge University Press (Cambridge, 1985).
* [32] N.N. Bogoliubov and D.V. Shirkov, Introduction to the Theory of Quantized Fields, John Wiley & Sons (New York, 1980).
* [33] E. Witten, Commun. Math. Phys. 121 (1989) 351.
* [34] E. Guadagnini, M. Martellini and M. Mintchev, Phys. Lett. B227 (1989) 111.
* [35] G. de Rham, Variétés Différentiables, Formes, Courants, Formes Harmoniques, Hermann (Paris, 1955).
* [36] H. Federer, Geometric Measure Theory, Springer-Verlag (New-York, 1969).
* [37] M. Bauer, G. Girardi, R. Stora and F. Thuillier, JHEP 0508 (2005) 027.
* [38] E. Guadagnini and F. Thuillier, SIGMA 4 (2008) 078.
* [39] G.T. Horowitz, Commun. Math. Phys. 125 (1989) 417.
* [40] A. Karlhede and M. Rocek, Phys. Lett. B 224 (1989) 58.
* [41] R. Myers and V. Periwal, Phys. Lett. B 225 (1989) 352.
* [42] M. Blau and G. Thompson, Phys. Lett. B 228 (1989) 64.
* [43] E. Guadagnini and F. Rottoli, Nucl. Phys. B 954 (2020) 114987.
* [44] O. Ganor and J. Sonnenschein, Int. J. Mod. Phys. A 11 (1996) 5701.
# A heavy molecular weight atmosphere for the super-Earth $\pi$ Men c
A. García Muñoz (Zentrum für Astronomie und Astrophysik, Technische Universität Berlin, Hardenbergstrasse 36, D-10623, Berlin, Germany)
L. Fossati (Space Research Institute, Austrian Academy of Sciences, Schmiedlstrasse 6, A-8042 Graz, Austria)
A. Youngblood (Laboratory for Atmospheric and Space Physics, 1234 Innovation Drive, Boulder, CO 80303, USA)
N. Nettelmann (Deutsches Zentrum für Luft- und Raumfahrt, Institut für Planetenforschung, Rutherfordstrasse 2, D-12489 Berlin, Germany)
D. Gandolfi (Dipartimento di Fisica, Università degli Studi di Torino, via Pietro Giuria 1, I-10125, Torino, Italy)
J. Cabrera (Deutsches Zentrum für Luft- und Raumfahrt, Institut für Planetenforschung, D-12489 Berlin, Germany)
H. Rauer (Deutsches Zentrum für Luft- und Raumfahrt, Institut für Planetenforschung, D-12489 Berlin, Germany; Institute of Geological Sciences, Freie Universität Berlin, Malteserstrasse 74-100, D-12249, Berlin, Germany; Zentrum für Astronomie und Astrophysik, Technische Universität Berlin, Hardenbergstrasse 36, D-10623, Berlin, Germany)
(Accepted January 2021)
###### Abstract
Strongly irradiated exoplanets develop extended atmospheres that can be
utilized to probe the deeper planet layers. This connection is particularly
useful in the study of small exoplanets, whose bulk atmospheres are
challenging to characterize directly. Here we report the 3.4-sigma detection
of C ii ions during a single transit of the super-Earth $\pi$ Men c in front
of its Sun-like host star. The transit depth and Doppler velocities are
consistent with the ions filling the planet’s Roche lobe and moving
preferentially away from the star, an indication that they are escaping the
planet. We argue that $\pi$ Men c possesses a thick atmosphere with abundant
heavy volatiles ($\gtrsim$50% by mass of the atmosphere) but that need not be
carbon-rich. Our reasoning relies upon cumulative evidence from the reported C
ii detection, the non-detection of H i atoms in a past transit, modeling of
the planet’s interior and the assumption that the atmosphere, having survived
the most active phases of its Sun-like host star, will survive another 0.2–2
Gyr. Depending on the current mass of its atmosphere, $\pi$ Men c may still
transition into a bare rocky core. Our findings confirm the hypothesized
compositional diversity of small exoplanets, and represent a milestone towards
understanding the planets’ formation and evolution paths through the
investigation of their extended atmospheres.
…
Journal: ApJL
## 1 Introduction
Small exoplanets of sizes between Earth and Neptune are ubiquitous in the
galaxy (Batalha, 2014), yet intriguingly absent in our Solar System. Even when
their masses and radii are accurately known (Fulton et al., 2017), little can
be confidently stated about their bulk compositions (Seager et al., 2007;
Valencia et al., 2010, 2013; Rogers & Seager, 2010; Nettelmann et al., 2011)
or the processes through which they form and evolve. $\pi$ Men c is a close-in
transiting super-Earth (Gandolfi et al., 2018; Huang et al., 2018) (mass
$M_{\rm{p}}$/$M_{\earth}$=4.52$\pm$0.81; radius
$R_{\rm{p}}$/$R_{\earth}$=2.06$\pm$0.03; orbital distance 0.06702$\pm$0.00109
AU) expected to develop an extended atmosphere under the significant XUV
(=X-ray + Extreme Ultraviolet) stellar radiation that it receives (King et
al., 2019; García Muñoz et al., 2020), and is thus an ideal target to
investigate the composition of small exoplanets. Orbiting a Sun-like star, its
study also potentially conveys insight into our own Solar System.
Whereas planets with radii $R_{\rm{p}}/R_{\earth}$$<$1.6 are likely rocky
(composed of iron and silicates) and those with $R_{\rm{p}}/R_{\earth}$$>$3
are expected to have a non-negligible amount of light volatiles (H2/He) (Otegi
et al., 2020), $\pi$ Men c’s bulk density is consistent with an atmosphere
that contains an admixture of light and heavy (e.g. H2O, CO2, CH4, NH3)
volatiles (Rogers, 2015; Otegi et al., 2020), which raises the interesting
possibility that it may not be H2/He-dominated. Compositional diversity is
indeed predicted by theory (Fortney et al., 2013; Mordasini et al., 2015), and
supported by the scatter in the $M_{\rm{p}}$–$R_{\rm{p}}$ statistics of known
exoplanets (Hatzes & Rauer, 2015; Otegi et al., 2020). Disentangling the
composition of selected small exoplanets is the key next step, and calls for a
multiple line of evidence approach that goes beyond $M_{\rm{p}}$ and
$R_{\rm{p}}$ measurements.
Transmission spectroscopy at visible-infrared wavelengths provides additional
insight when gas absorption bands are revealed (Benneke & Seager, 2012).
However, when the measured spectrum is featureless (Guo et al., 2020), it is
difficult to discriminate between atmospheres enshrouded by high-altitude
clouds and atmospheres with abundant heavy volatiles. Furthermore, the
precision required for visible-infrared spectroscopy approaches the limit of
current and upcoming telescopes when the band-to-continuum contrast drops
below $\sim$20 parts per million (as for $\pi$ Men c). Here, we alternatively
constrain $\pi$ Men c’s bulk composition with far-ultraviolet (FUV)
transmission spectroscopy of selected atoms in its extended atmosphere
complemented with modeling of its interior structure and atmospheric mass
loss.
The manuscript is structured as follows. In Section 2, we present new FUV
transmission spectroscopy measurements of $\pi$ Men c and argue that the
reported dimming originates in the planet’s atmosphere. In Section 3, we
describe our upper atmosphere modeling, with emphasis on the net mass loss
rate, the neutral/ionized state of the escaping hydrogen atoms, and their
impact on atmospheric stability and detectability of hydrogen atoms. In
Section 4, we describe our planetary interior modeling, which we use to
estimate the atmospheric mass for different bulk compositions. Lastly, we
invoke in Section 5 an argument of stability that connects the atmospheric
mass with a time scale for the planet to lose it to escape. By requiring that
this time scale is not much smaller than the system’s age, which would suggest
fine-tuning in the evolution/current state of the planet, we are able to
constrain $\pi$ Men c’s present-day atmospheric composition. Appendices A–D
provide additional technical details.
## 2 HST/COS observations
We observed one FUV transit of $\pi$ Men c with the Cosmic Origins
Spectrograph (COS) aboard the Hubble Space Telescope (HST) over five
consecutive orbits (Program: GO-15699). The first two orbits occurred before
transit, the third orbit covered the ingress, and the last two orbits occurred
in transit with respect to the updated ephemeris. For unknown reasons, the
third observation returned no data. The data were obtained in time-tag mode
with the G130M grating centered at 1291 Å. Each exposure lasted 3025 s, except
the first one that was 2429 s because of target acquisition prior to the
science observation. Each spectrum covers the 1135–1429 Å range, with a gap
between 1274 and 1291 Å. For optimal stability during the observation, we
adopted one single instrumental Fixed Pattern position (FP-POS=3). We
downloaded the data from MAST; the spectra were calibrated and extracted by
calcos version 3.3.7 (release notes at
https://www.stsci.edu/hst/instrumentation/cos/documentation/calcos-release-notes).
Each spectrum covers several stellar lines of abundant elements (i.e.,
hydrogen, carbon, nitrogen, oxygen, silicon). The lines of the O i 1302–1306 Å
triplet, contaminated by geocoronal airglow, were treated separately. The H i
Ly$\alpha$ line was also severely affected by geocoronal airglow contamination
(HST/COS is particularly prone to this problem) and gain sag and was not
analyzed. For each of the other spectral features, we integrated the flux in
wavelength to obtain a transit light curve for each line and ion. We either
considered each line separately or added together the flux from different
lines of the same multiplet to increase the S/N (i.e., for C iii, N v, C ii,
and Si iv). In this way, we constructed a transit light curve for each line
and ion.
Among all light curves, we recorded a significant flux drop during transit
only for the triplet of C ii at 1334–1335 Å. This feature is composed of a
resonance line at 1334.532 Å and a doublet of components at 1335.663 and
1335.708 Å arising from an excited state (0.007863 eV). For this reason, for
nearby stars only the bluest (resonance) line of the triplet is affected by
ISM absorption (furthermore, in the nearby ISM, C ii is the dominant carbon
ion; Frisch & Slavin, 2003). The C ii 1335-Å doublet is unresolved with COS and the line at
1335.663 Å is about 10$\times$ weaker than the line at 1335.708 Å, which is
the strongest of the triplet. The resonance line at 1334.532 Å is
intrinsically about 1.8 times weaker than the 1335.708 Å line. In our analysis
we ignored the weakest component at 1335.663 Å. The light curves obtained from
splitting the two main lines further indicated that the absorption signal was
induced by the C ii 1335 Å line (Figure 1, top). Considering only the C ii
1335 Å line, the transit depth integrated across the whole line (i.e.,
$\pm$146 km/s from the line center) is 3.9$\pm$1.1%.
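The quoted depth is the fractional dimming of the line-integrated flux. The calculation can be sketched as follows; the per-orbit fluxes below are hypothetical placeholders (the actual COS values are in the observations), and only the averaging and error-propagation recipe is the point:

```python
import numpy as np

# Hypothetical line-integrated C II 1335 A fluxes per HST orbit (arbitrary
# units) with 1-sigma errors: two out-of-transit and two in-transit orbits.
flux_out, err_out = np.array([1.02, 0.98]), np.array([0.02, 0.02])
flux_in, err_in = np.array([0.955, 0.967]), np.array([0.02, 0.02])

def weighted_mean(f, e):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    w = 1.0 / e**2
    return (w * f).sum() / w.sum(), np.sqrt(1.0 / w.sum())

f_out, s_out = weighted_mean(flux_out, err_out)
f_in, s_in = weighted_mean(flux_in, err_in)

# Depth = 1 - F_in/F_out; propagate the relative errors in quadrature.
depth = 1.0 - f_in / f_out
sigma = (f_in / f_out) * np.sqrt((s_in / f_in)**2 + (s_out / f_out)**2)
print(f"transit depth = {100*depth:.1f}% +/- {100*sigma:.1f}%")
```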
We averaged the in-transit and out-of-transit data separately to obtain one
master in-transit and one master out-of-transit spectrum and identify the
velocities at which the absorption in the C ii 1335 Å feature occurs. Figure 1
(bottom) compares the in- and out-of-transit spectra. The C ii 1335 Å feature
shows in-transit dimming for velocities between $-$70 and $+$10 km/s. The
corresponding transit depth over this velocity range is 6.8$\pm$2.0% (3.4
sigma detection). The corresponding flux drop at the 1334 Å line could not be
detected because of interstellar medium (ISM) absorption. We also
unsuccessfully looked for absorption in this same velocity range for all the
other stellar features (Table 1).
Appendix A provides further insight into the COS data analysis, and discusses
the masking of the C ii 1334 Å line by the ISM, the unlikeliness that the
reported in-transit dimming is caused by random fluctuations in the stellar
line shape and the search for in-transit dimming at the O i 1302–1306 Å
triplet. Our tests suggest that the stellar C ii line does not exhibit
intrinsic temporal variations and therefore that the C ii 1335 Å line dimming
is caused by the planet transit. Ideally, future HST/COS observations over one
or more transits will confirm the above. The confirmation is obviously
important for $\pi$ Men c but also to set useful precedents in the
investigation of other small exoplanets with FUV transmission spectroscopy.
We attribute the dimming of the C ii 1335 Å line to absorption by C ii ions
escaping $\pi$ Men c along with other gases, even though the signatures of the
other gases are not directly seen in our data. Our detection adds to a growing
list of exoplanets with extended atmospheres (Vidal-Madjar et al., 2003;
Fossati et al., 2010; Ben-Jaffel & Ballester, 2013; Linsky et al., 2010;
Ehrenreich et al., 2015; Bourrier et al., 2018b). Unlike for $\pi$ Men c, there is
strong evidence that these other exoplanets’ atmospheres are H2/He-dominated.
The transit depth and Doppler velocities reported here are consistent with the
C ii ions being swept by the stellar wind into a $\sim$15$R_{\rm{p}}$-wide
(about the extent of the Roche lobe in the substellar direction, and closer to
the planet than the interface between the planetary and stellar winds) tail
and accelerated to high velocities, a scenario suggested by 3D models of $\pi$
Men c and other exoplanets (Shaikhislamov et al., 2020a, b). We found by means
of a simplified phenomenological model of $\pi$ Men c’s tail (Appendix B) that
the C ii measurements can be explained if the planet loses carbon at a rate
$\gtrsim$$10^{8}$ g s$^{-1}$, requiring that the atmosphere contains this atom in at
least solar abundance. A tail-like configuration facilitates the detection,
but saturation of the absorption signal impedes setting tighter constraints on
the C ii abundance when it becomes supersolar. In summary, we cannot yet
conclude whether carbon is a major or minor constituent of $\pi$ Men c’s
atmosphere. Future 3D modeling that incorporates all the relevant physics for
the escaping atmosphere (including the C ii ions) and its interaction with the
star may help discern amongst atmospheric compositions with various carbon
abundances.
## 3 Extended atmosphere
A prior observation of $\pi$ Men c with the HST Space Telescope Imaging
Spectrograph (STIS) revealed no evidence for in-transit absorption of the
stellar Ly$\alpha$ line (García Muñoz et al., 2020). When present, absorption
in the Ly$\alpha$ wings is primarily caused by energetic neutral atoms (ENAs)
(Holmström et al., 2008; Tremblin & Chiang, 2013), which are fast neutral
hydrogen atoms generated when the low-velocity neutral hydrogen escaping the
planet and the high-velocity protons in the stellar wind exchange charge:
$\rm{H_{planet}}\;(\rm{slow})+\rm{H_{stellar\;wind}^{+}}\;(\rm{fast})\rightarrow\rm{H}\;(\rm{fast})\;(\equiv\rm{ENA})+\rm{H^{+}}\;(\rm{slow}).$
3D modeling shows that if $\pi$ Men c’s atmosphere is H2/He-dominated, large
amounts of ENAs are generated that produce measurable Ly$\alpha$ transit
depths (Shaikhislamov et al., 2020b). Conversely, reduced ENA generation
occurs if either the flux of stellar wind protons or the flux of neutral
hydrogen from the planet are weak. The arrangement of C ii ions into a tail
suggests that the stellar wind is not weak, and thus we disfavor the first
possibility. A weak neutral hydrogen flux from the planet (the slow component
of the above reaction) suggests that hydrogen is not the major atmospheric
constituent or that it ionizes before interacting with the stellar wind.
We investigated $\pi$ Men c’s extended atmosphere and mass loss with a
photochemical-hydrodynamic model (García Muñoz et al., 2020). The model takes
as input the volume mixing ratio (vmr) at the 1 $\mu$bar pressure level for
each species in the chemical network. Photo-/thermochemical considerations
(Moses et al., 2013; Hu & Seager, 2014) dictate the most abundant molecules in
the bulk atmosphere given the equilibrium temperature $T_{\rm{eq}}$ and the
fractions of hydrogen, helium, carbon and oxygen nuclei ($Y_{\rm{H}^{*}}$,
$Y_{\rm{He}^{*}}$, $Y_{\rm{C}^{*}}$, $Y_{\rm{O}^{*}}$; defined as the number
of nuclei (symbol ∗) of each element divided by the total number of nuclei).
For $\pi$ Men c’s $T_{\rm{eq}}$$\sim$1150 K, the bulk atmosphere composition
is dominated by H2 provided that $Y_{\rm{H}^{*}}$$\sim$1. For lower
$Y_{\rm{H}^{*}}$ values, other molecules become abundant, such as H2O, CO, CO2
and O2 if $Y_{\rm{C}^{*}}$/$Y_{\rm{O}^{*}}$$\ll$1, or C2H2 and CO if
$Y_{\rm{C}^{*}}$/$Y_{\rm{O}^{*}}$$\gg$1. To identify the dominant gases in the
bulk atmosphere for given sets of nuclei fractions and specify their vmrs at
the 1 $\mu$bar pressure level, we used a published study of super-Earths and
sub-Neptunes (Hu & Seager, 2014) (in particular their Figure 7). We considered
bulk compositions in which hydrogen nuclei dominate, and compositions in which
carbon and oxygen nuclei are also abundant and combine into various molecules.
Table 2 of Appendix C summarizes the implemented vmrs and other derived
information for the battery of 30 atmospheric runs (labelled as cases 01–30)
that we performed. Further details on our extended atmosphere modeling are
provided in Appendix C. For the conditions explored here, the minimum and
maximum loss rates are $\sim$1 and 3% $M_{\rm{p}}$/Gyr. The loss rates depend
weakly on composition even though the partitioning into different escaping
nuclei can be very different.
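To make the scale of these rates concrete, a back-of-the-envelope sketch follows; the 0.1% atmospheric mass fraction is an arbitrary illustrative value, not the paper's interior-model result:

```python
M_EARTH_G = 5.972e27            # Earth mass in grams
GYR_S = 3.156e16                # seconds per Gyr
m_planet = 4.52 * M_EARTH_G     # pi Men c mass (4.52 Earth masses)

def stripping_time_gyr(atm_mass_fraction, loss_frac_per_gyr):
    """Time to lose the whole atmosphere at a constant fractional loss rate."""
    return atm_mass_fraction / loss_frac_per_gyr

for rate in (0.01, 0.03):                 # the ~1-3% Mp/Gyr model range
    mdot_gs = rate * m_planet / GYR_S     # same rate expressed in g/s
    t = stripping_time_gyr(1e-3, rate)    # assumed 0.1% (by mass) atmosphere
    print(f"{rate*100:.0f}% Mp/Gyr (~{mdot_gs:.1e} g/s) -> stripped in {t:.2f} Gyr")
```

A timescale of this order, compared against the system's age, is the stability argument invoked later in Section 5.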
Our models indicate that the neutral hydrogen flux (the slow component for ENA
generation, see above) at a reference location defined by the sonic point is
$\dot{m}_{\rm{H\;I}}$($@$Mach=1)$\sim$5$\times$$10^{9}$ g s$^{-1}$ for H2/He-
dominated atmospheres. From published 3D models for H2/He-dominated
atmospheres incorporating ENAs (Shaikhislamov et al., 2020b; Holmström et al.,
2008), we estimate that a neutral hydrogen flux about 1/4 of that value
will bring the ENA generation in line with the non-detection of Ly$\alpha$
absorption. Thus, we set
$\dot{m}_{\rm{H\;I}}$($@$Mach=1)$\lesssim$1.25$\times$$10^{9}$ g s$^{-1}$ as the
approximate threshold for bulk compositions consistent with insufficient ENA
generation and therefore with the non-detection of Ly$\alpha$ absorption.
Refining this approximate threshold requires the 3D modeling of $\pi$ Men c’s
atmosphere for a diversity of bulk compositions, which should be the focus of
future investigations. Although welcome, such refinements will not modify the
key findings of this work.
The flux $\dot{m}_{\rm{H\;I}}$($@$Mach=1) depends strongly on the mass
fraction of heavy volatiles in the atmosphere $Z$ (=mass of heavy volatiles
relative to the mass of all volatiles), especially for $Z$$\gtrsim$0.4 when
hydrogen becomes preferentially ionized due to high temperatures (Figure 2)
(see also Fig. 4 in García Muñoz et al., 2020). Remarkably,
$\dot{m}_{\rm{H\;I}}$($@$Mach=1) depends weakly on the identity of the gases
contributing to $Z$, and indeed the calculated fluxes are comparable for each
subset of atmospheric runs with different $Y_{\rm{C}^{*}}$/$Y_{\rm{O}^{*}}$
ratios. We exploit the model-predicted $Z$–$\dot{m}_{\rm{H\;I}}$($@$Mach=1)
relation (purple line in Fig. 3, right axis: top panel for cases 01-06; bottom
panel for cases 07-12) to infer that $\pi$ Men c’s atmosphere has a high $Z$
($\gtrsim$0.85 based on our approximate threshold for
$\dot{m}_{\rm{H\;I}}$($@$Mach=1), although the precise $Z$ is subject to the
prescribed threshold), as otherwise the HST/STIS observation would have
revealed Ly$\alpha$ absorption. The trend for
$Z$–$\dot{m}_{\rm{H\;I}}$($@$Mach=1) in Fig. 3 reflects that the partitioning
between neutral and ionized hydrogen varies by a larger factor than the net
mass loss rate (which varies by $\sim$3 for the explored $Z$).
## 4 Bulk atmosphere
We built interior structure models of $\pi$ Men c that are consistent with its
$M_{\rm{p}}$, $R_{\rm{p}}$ and $T_{\rm{eq}}$ using a tested methodology
(Nettelmann et al., 2011; Poser et al., 2019). The models are organized into a
rocky core of iron and silicates in terrestrial proportions, and an atmosphere
on top containing H2/He plus a single heavy volatile (CO2 or H2O, thus
bracketing a broad range of molecular weights). This core composition produces
$M_{\rm{p}}$–$R_{\rm{p}}$ curves for atmosphereless objects consistent with
the known exoplanets that presumably lack a volatile envelope (Otegi et al.,
2020). When considering H2O as the dominant heavy volatile, it is assumed that
a carbon-bearing molecule present in trace amounts carries the carbon
detected in the HST/COS data. We assume that all gases remain well mixed,
which is justifiable for reasonable values of the eddy diffusion coefficient
in the atmosphere and the mass loss rates estimated here (García Muñoz et al.,
2020). The H2/He mass fraction is kept constant to the protosolar value, but
$Z$ is varied to explore atmospheres with different abundances of heavy
volatiles. We can thus pair the interior structure models and the upper
atmosphere models on the basis of their corresponding $Z$ and the dominating
molecules (or more generally the $Y_{\rm{C}^{*}}$/$Y_{\rm{O}^{*}}$ ratio in
the upper atmosphere).
The models consider a present-day intrinsic temperature ($T_{\rm{int}}$, which
specifies the heat flux from the interior through the Stefan–Boltzmann law
$\sigma_{\rm{B}}$$T_{\rm{int}}^{4}$) and an extra opacity as adjustable
parameters. We find that for $T_{\rm{int}}$$>$100 K all plausible atmospheres
are relatively light and would reach the current state within a time much
shorter than the system’s age (5.2$\pm$1.1 Gyr) and then would continue
cooling and contracting (Fig. 8, top left). We consider this temperature to be
a conservative upper bound. Remarkably, all other parameters being the
same, a higher $T_{\rm{int}}$ translates into a less massive atmosphere that
is easier to lose. Indeed, increasing $T_{\rm{int}}$ causes larger scale
heights and in turn larger atmospheric volumes for the same atmospheric mass.
The best match between evolution models that include stellar irradiation as the
sole external energy source and the measured $R_{\rm{p}}$ after cooling for 3
Gyr occurs for $T_{\rm{int}}$=44–52 K (Fig. 8, top left), in which case the
atmosphere has reached equilibrium with the incident irradiation. In what
follows, we focus on the range $T_{\rm{int}}$=44–100 K. Key model outputs are
the atmosphere and core masses ($M_{\rm{atm}}$+$M_{\rm{core}}$=$M_{\rm{p}}$)
and the core radius ($R_{\rm{core}}$$\leq$$R_{\rm{p}}$). These quantities
($M_{\rm{atm}}$, $R_{\rm{core}}$) are determined with no prior assumption on
their values by iteratively solving the interior structure equations so that
upon convergence the model complies with the specified planet mass and radius
constraints. For reference, the core size turns out to be always
$R_{\rm{core}}$/$R_{\earth}$$\sim$1.6 for $Z$$\ll$1, but can be $\sim$1.4 for
$Z$(H2O)=0.9 and $\sim$1 for $Z$(CO2)=0.9. Appendix D provides additional
insight into the interior structure model.
H2/He-dominated atmospheres ($Z$$\ll$1) are more voluminous but overall
contribute little mass (Figure 3, left). For example,
$M_{\rm{atm}}$/$M_{\rm{p}}$$<$2$\times$10$^{-3}$ for $Z$(H2O) or $Z$(CO2)=0.3. In
turn, an atmosphere with abundant heavy volatiles must be massive to
compensate for its reduced scale height. Thus,
$M_{\rm{atm}}$/$M_{\rm{p}}$$\sim$7$\times$10$^{-2}$ for $Z$(H2O)=0.8, and as high
as $\sim$0.2 for $Z$(CO2)=0.8. However, not every atmospheric composition
consistent with the interior models is stable over long timescales. We
estimated the mass of atmosphere that is lost over a range of times
$t_{\rm{XUV}}$ as $\dot{m}\;t_{\rm{XUV}}$/$M_{\rm{p}}$ (dashed lines, Figure
3), where $\dot{m}$ is the loss rate predicted by our photochemical-
hydrodynamic models and that varies as the stellar XUV luminosity and the
planet orbital distance evolve. We incorporate these effects into
$t_{\rm{XUV}}$, which must be viewed as an equivalent time based on the
current stellar luminosity and orbit.
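The bookkeeping behind this stability estimate can be sketched in a few lines. The numbers below are only the representative values quoted in the text (loss rates of $\sim$1–3% $M_{\rm{p}}$/Gyr and the $M_{\rm{atm}}$/$M_{\rm{p}}$ examples above); the full model grid of Figure 3 is not reproduced here.

```python
# Hedged sketch of the stability bookkeeping behind Figure 3: an atmosphere is
# long-lived only if its mass exceeds the mass lost over the survival time,
# i.e. M_atm/M_p > (m_dot/M_p) * t_XUV. Numbers are representative values
# quoted in the text, not model outputs.

def survives(m_atm_frac, mdot_frac_per_gyr, t_xuv_gyr):
    """True if the atmospheric mass exceeds the mass lost over t_xuv_gyr."""
    return m_atm_frac > mdot_frac_per_gyr * t_xuv_gyr

# Loss rates span ~1-3% M_p/Gyr; take 2% and a 2 Gyr survival time.
cases = {
    "Z=0.3 (M_atm/M_p ~ 2e-3)": 2e-3,
    "Z(H2O)=0.8 (M_atm/M_p ~ 7e-2)": 7e-2,
    "Z(CO2)=0.8 (M_atm/M_p ~ 0.2)": 0.2,
}
for label, m_atm_frac in cases.items():
    print(label, "-> stable over 2 Gyr:", survives(m_atm_frac, 0.02, 2.0))
```

Consistent with the text, the low-$Z$ (H2/He-dominated) configuration fails this test while the heavy-volatile-rich ones pass it.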
## 5 Atmospheric stability and composition
Using arguments of atmospheric stability to constrain $\pi$ Men c’s interior
requires an appropriate timescale over which the atmosphere will survive. We
first adopted a survival time of 2 Gyr, which is a moderate fraction of the
time left before the star exits the main sequence ($\sim$5 Gyr) and assumes
that $\pi$ Men c is not near catastrophic mass loss. This choice implicitly
assumes that if the atmosphere survived the $\times$100–1000 enhancement in
XUV luminosity experienced in the early life of its host star, then its end
might not be imminent. Under this hypothesis, we infer (Figure 3) that
$Z$(H2O)$\geq$0.73 and $Z$(CO2)$\geq$0.65. These are conservative bounds based
on the uppermost sets of interior model curves for each heavy volatile, and
correspond to volume mixing ratios vmr(H2O)$\geq$0.26 (molecular weight
$\mu$$\geq$6.4 g mol$^{-1}$) and vmr(CO2)$\geq$0.09 ($\mu$$\geq$6 g mol$^{-1}$). A
ten-fold shorter survival time results in $Z$(H2O)$\geq$0.50 (vmr(H2O)$\geq$0.11;
$\mu$$\geq$4.1 g mol$^{-1}$) and $Z$(CO2)$\geq$0.45 (vmr(CO2)$\geq$0.04;
$\mu$$\geq$4 g mol$^{-1}$).
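The quoted mean molecular weights follow directly from the mixing ratios. A minimal cross-check, assuming the H2/He filler obeys the $Y_{\rm{He}^{*}}$=0.1$Y_{\rm{H}^{*}}$ prescription of Appendix C (so $n_{\rm{He}}$=0.2$n_{\rm{H_{2}}}$):

```python
# Cross-check of the quoted mean molecular weights (mu >= 6.4 g/mol for
# vmr(H2O)=0.26; mu >= 6 g/mol for vmr(CO2)=0.09). Assumes the H2/He
# background follows Y_He* = 0.1 Y_H*, i.e. n_He = 0.2 n_H2.
MU = {"H2": 2.016, "He": 4.003, "H2O": 18.015, "CO2": 44.01}  # g/mol

def mean_mu(vmr_heavy: float, heavy: str) -> float:
    """Mean molecular weight of an H2/He atmosphere plus one heavy volatile."""
    filler = 1.0 - vmr_heavy      # mixing ratio left for H2 + He
    vmr_h2 = filler / 1.2         # with n_He = 0.2 n_H2, H2 is 1/1.2 of filler
    vmr_he = 0.2 * vmr_h2
    return vmr_h2 * MU["H2"] + vmr_he * MU["He"] + vmr_heavy * MU[heavy]

print(round(mean_mu(0.26, "H2O"), 1))  # ~6.4 g/mol
print(round(mean_mu(0.09, "CO2"), 1))  # ~6.1 g/mol
```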
The inferred heavy mass fractions $Z$ are approximately consistent with the
non-detection of Ly$\alpha$ absorption, which renders independent support to
our findings. It ultimately confirms that a thick atmosphere with more than
half its mass in heavy volatiles is a realistic scenario for $\pi$ Men c. For
comparison, maximum values of vmr(H2O)$\sim$0.09–0.15 have been inferred from
infrared spectroscopy for the only other exoplanet with
$R_{\rm{p}}$$<$3$R_{\earth}$ at which H2O has been detected (Benneke et al.,
2019; Tsiaras et al., 2019; Madhusudhan et al., 2020). Thus, $\pi$ Men c
becomes the exoplanet with the highest abundance of heavy volatiles known to
date, and its case suggests that even higher abundances might be expected for
other small exoplanets. It is uncertain how the planet acquired such a heavy
atmosphere, although high-$Z$ atmospheres are natural outcomes of formation
models (Fortney et al., 2013). Assuming that water is the dominant heavy
volatile, it is plausible that $\pi$ Men c might have formed beyond the snow
line and reached its current orbit following high-eccentricity migration and
tidal circularization. The idea is supported by $\pi$ Men c being on a
misaligned orbit with respect to the stellar spin axis (Kunovac Hodžić et al.,
2020) and the fact that the system contains a far-out gas giant on an
eccentric, inclined orbit (Damasso et al., 2020; De Rosa et al., 2020; Xuan &
Wyatt, 2020).
$\pi$ Men c lies near the so-called radius gap (Owen & Wu, 2013; Fulton et
al., 2017) that separates the population of planets that presumably lost their
volatiles through atmospheric escape (peak at
$R_{\rm{p}}$/$R_{\earth}$$\sim$1.5) from those that were able to retain them
(peak at $R_{\rm{p}}$/$R_{\earth}$$\sim$2.5). The planet may still lose up to
10% of its mass over the next 5 Gyr if it remains on its current (and stable)
orbit (De Rosa et al., 2020; Xuan & Wyatt, 2020). This is more than what the
planetary interior models predict for $M_{\rm{atm}}$/$M_{\rm{p}}$ for some
plausible atmospheric configurations. It is thus likely that we are witnessing
$\pi$ Men c while crossing the radius gap. Indeed, Figure 3 suggests that this
will happen unless the actual $Z$(H2O)$\gtrsim$0.85 or $Z$(CO2)$\gtrsim$0.80.
In that event, and because $M_{\rm{atm}}$/$M_{\rm{p}}$$<<$1 for such unstable
configurations, the remnant core will collapse onto the empirical
$M_{\rm{p}}$–$R_{\rm{p}}$ curve for atmosphereless objects.
Table 1: In-transit absorption measured over the velocity range $-$70 to $+$10 km/s for various stellar emission lines in the COS data.

Ion | Wavelength [Å] | In-transit absorption [$\%$] | Statistical significance
---|---|---|---
C ii | 1335.7 | 6.76$\pm$2.00 | 3.39
O i | 1302.168 | $-$3.65$\pm$40.65 | 0.09
O i | 1304.858 | $-$8.33$\pm$6.32 | 1.32
O i | 1306.029 | 5.17$\pm$3.66 | 1.41
Si iii | 1206.5 | 1.49$\pm$2.01 | 0.74
N v | 1238.821 | $-$0.82$\pm$5.37 | 0.15
N v | 1242.804 | $-$17.26$\pm$9.97 | 1.73
Si ii | 1265.002 | 3.38$\pm$4.94 | 0.68
Cl i | 1351.656 | 6.12$\pm$7.65 | 0.80
O i | 1355.598 | 7.36$\pm$5.87 | 1.25
Si iv | 1393.755 | 8.41$\pm$3.09 | 2.72
Si iv | 1402.770 | $-$0.53$\pm$5.07 | 0.11
All ions excluding C ii and O i triplet | | 2.67$\pm$1.27 | 2.10
Figure 1: Top. Light curves obtained from the COS spectra integrating in
wavelength across the entire C ii triplet at 1334-1335 Å (left), the C ii 1334
Å line (middle), and the C ii 1335 Å doublet (right). The horizontal bars
indicate the phase range covered by each observation. Each light curve has
been normalized to the average flux of the two out-of-transit points. The
black line shows the optical transit light curve computed employing published
system parameters (Gandolfi et al., 2018). For reference, the blue vertical
dashed lines mark the phases of first and last contact. Bottom. In-transit
(red) and out-of-transit (black) spectra around the position of the C ii 1335
Å doublet. The spectra are rebinned every seven data points for visualization
purposes and to match the instrument’s spectral resolution, so that each bin
corresponds to one resolution element. The gray dots show the unbinned out-of-
transit spectrum. The spectra have been brought to the rest frame by
accounting for the systemic velocity of the host star. The blue dashed
vertical line indicates the position of the main feature composing the C ii
1335 Å doublet. The green dash-dotted vertical lines show the integration
range considered for measuring the absorption.
Figure 2: Photochemical-hydrodynamic solutions for selected cases from Table
2. These cases cover a broad range of heavy volatile abundances from
$Z$=4.8$\times$10$^{-3}$ to 0.9. a) Velocity profiles. Most of the gas acceleration
occurs below $r$/$R_{\rm{p}}$$\sim$3, at which location the gas reaches
velocities of a few km/s. b) Temperature profiles. Typically, the temperatures
remain well below 4000–5000 K, except for the highest $Z$, for which they reach
up to 10000 K. This trend is seen over the entire set of 30 atmospheric runs. c)
Number density profiles for H and H+. For the larger $Z$, the transition
between these two states occurs notably closer to the planet. d) Number
density profiles for C and C+. The C atom ionizes closer to the planet than
the H and O atoms. The C+ photoionization lifetime is long enough
($t_{\rm{C\;II}}$$\sim$20 hours) that it can form a long tail trailing the
planet. e) Number density profiles for O and O+.
Figure 3: Atmospheric mass normalized to the measured planet mass as
calculated by internal structure models for various choices of $T_{\rm{int}}$
and gas opacity. The modeling assumes H2/He plus a heavy volatile (H2O, top;
CO2, bottom), and explores mass fractions of heavy volatiles $Z$ from 0.2 to
about 1. For reference, the corresponding volume mixing ratio of the heavy
volatile is also shown (brown). The black dashed lines show the mass fraction
lost over the indicated times, from $t_{\rm{XUV}}$=0.005 to 2 Gyr. For the top
(bottom) panel, we used the mass loss rates quoted in Table 2 for cases 01–06
(07–12); the choice between subsets of atmospheric runs with different
$Y_{\rm{C}^{*}}$/$Y_{\rm{O}^{*}}$ ratios is not critical though. The top panel
(purple line, right axis) shows the neutral hydrogen fluxes from the planet at
the sonic point predicted by the photochemical-hydrodynamic model for cases
01–06. Correspondingly, the bottom panel shows the fluxes predicted for cases
07–12. High neutral hydrogen fluxes (which occur for smaller $Z$) will result
in enhanced ENA generation and in turn increased Ly$\alpha$ transit depths.
The dashed purple pattern indicates our approximate threshold ($Z$=0.85)
separating atmospheric conditions that result in detectable and undetectable
Ly$\alpha$ absorption.
Davide Gandolfi gratefully acknowledges financial support from the CRT
foundation under grant No. 2018.2323 "Gaseous or rocky? Unveiling the nature
of small worlds".
## Appendix A Preparation of the observations and data analysis
We improved the published transit ephemeris (Gandolfi et al., 2018) using TESS
data from Sectors 1, 4, 8, 11–13 and the code pyaneti (Barragán et al., 2019),
which allows for parameter estimation from posterior distributions.
Figure 4 shows the spectrum obtained during the second HST observation (top),
marking the stellar features at which we looked for absorption in the $-$70 to
$+$10 km/s velocity range, and the resulting signal-to-noise ratio per
spectral bin (middle). The bottom panel compares the in- and out-of-transit
spectra.
Having detected absorption of the C ii 1335 Å feature, we looked for a similar
signal at the C ii 1334 Å line, without success. Next, we show that the
absorption signal at 1334 Å is hidden by ISM contamination (Figure 4, bottom),
which affects the S/N. We first fitted the C ii 1335 Å stellar feature using a
Gaussian profile and the out-of-transit spectrum (black dashed line) and then
employed a further Gaussian profile to fit the position and strength of the
planetary absorption on the in-transit spectrum (yellow solid line). We
obtained that the Gaussian profile simulating the planetary absorption lies at
a velocity of about $-$48.5 km/s with respect to the position of the main C ii
1335 Å feature, has a normalized amplitude of $\sim$0.18, and a width of
$\sim$0.11 Å. These fits were performed considering the unbinned spectra. We
then derived the strength of the C ii 1334 Å line, prior to ISM absorption by
scaling the Gaussian fit to the C ii 1335 Å feature by the ratio of the
oscillator strength times the statistical weight of the two lines (about 1.8;
blue dashed line). We further simulated the C ii ISM absorption profile at the
position of the C ii 1334 Å resonance line employing a Voigt profile (purple
solid line), in which we set the position of the line equal to that obtained
from the reconstruction of the stellar Ly$\alpha$ line (García Muñoz et al.,
2020) and a C ii ISM column density equal to that of hydrogen scaled to the
expected ISM carbon-to-hydrogen abundance ratio and ISM C ionization fraction
(Frisch & Slavin, 2003) (green solid line). Figure 4 (bottom) indicates that
the simulated profile is a good match to the out-of-transit spectrum,
particularly accounting for the uncertainties involved in inferring the C ii
ISM column density. Finally, we added to this line the planetary absorption
obtained from fitting the C ii 1335 Å feature, rescaled by 1.8 (orange solid
line). Lastly, Figure 4 (bottom) shows that the difference between the
simulated C ii 1334 Å line profiles before and after adding the planetary
absorption is significantly smaller than the observational uncertainties and
hence undetectable in the data. In summary, reduced S/N due to ISM
contamination hides the planetary absorption signal at 1334 Å.
To estimate the likelihood that our C ii signal is due to intrinsic stellar
variability, we did a Monte Carlo simulation where we assumed the population
of intrinsic stellar variability between the mean in-transit and mean out-of-
transit spectrum is represented by the measured in-transit absorption from
each line (Table 1). We excluded from this representative population the C ii
lines because of the putative planetary absorption and the O i triplet because
the noise properties and intrinsic variability of these airglow-contaminated
lines are different. For the remaining eight emission lines, we calculate a
weighted mean of 2.67$\pm$1.27% for the in-transit absorption. We made 10$^{6}$
realizations of nine emission lines (the eight ’unaffected’ lines of the
representative population plus C ii 1335.7 Å, the line with the 3.4-sigma in-
transit absorption) each with in-transit absorption randomly drawn from a
normal distribution with mean 2.67% and standard deviation 1.27%. We find that
the likelihood of 1 or more emission lines having an absorption $\geq$6.76%
due to intrinsic stellar variability as measured by our COS spectra is 2%.
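A literal transcription of the Monte Carlo test just described is sketched below. The exact treatment of per-line measurement noise is not specified above, so this realization is an approximation of the authors' procedure; it yields a percent-level false-alarm probability, of the same order as the quoted 2%.

```python
import numpy as np

# Sketch of the Monte Carlo test described above: draw nine emission-line
# absorptions from N(2.67%, 1.27%) and count how often at least one reaches
# the 6.76% measured for C ii 1335.7 A. Whether per-line measurement
# uncertainties are also resampled is an assumption left open here.
rng = np.random.default_rng(0)
draws = rng.normal(loc=2.67, scale=1.27, size=(1_000_000, 9))
p_false_alarm = np.mean(draws.max(axis=1) >= 6.76)
print(f"chance probability ~ {p_false_alarm:.1%}")
```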
We inspected the O i lines for evidence of planetary absorption. The O i
triplet at 1302–1306 Å comprises three emission lines that share an
upper energy level. O i 1302.168 Å is the resonance line, and is significantly
affected by ISM absorption. O i 1304.858 and 1306.029 Å each have similar
oscillator strengths to the resonance line. COS’s wide aperture leads to
significant contamination of each line of the O i triplet by geocoronal
airglow. We corrected the O i emission lines in each orbit’s pipeline-reduced
x1d spectrum for geocoronal airglow by performing a least-squares fit of the
airglow templates downloaded from MAST (Bourrier et al., 2018a) to the
spectrum (Figure 5, top). We created mean in-transit and out-of-transit
spectra in the same way as the C ii lines, and measured absorptions in each of
the three lines (shortest wavelength to longest wavelength) of
$-$3.65$\pm$40.65 %, $-$8.33$\pm$6.32%, and 5.17$\pm$3.66% over the same
velocity range as C ii (Figure 5, bottom). No significant variation was
observed in the O i triplet. The O i line at 1355 Å similarly does not show
any significant variation between in-transit and out-of-transit. Lastly, we
produced time series for the flux of some stellar lines (Figure 6). The series
reveals no obvious variability that could cause a false transit detection.
## Appendix B Phenomenological model of the ion tail
Our nominal transit depth for C ii of 6.8% translates into a projected area
equivalent to a disk of radius $R_{\rm{C\;II}}$/$R_{\rm{p}}$=15.1. This is
about the extent of $\pi$ Men c’s Roche lobe in the substellar direction
($R_{\rm{L_{1}}}$/$R_{\rm{p}}$=13.3). The spectroscopic velocity of $+$10 km/s
for the C ii ions is consistent with our photochemical-hydrodynamic
predictions, and likely traces absorption in the vicinity of the planet’s
dayside as the gas accelerates toward the star. Comparable velocities are
predicted by 3D models on the planet’s dayside (Shaikhislamov et al., 2020b)
before the escaping gas interacts with the impinging stellar wind. Negative
velocities of $-$70 km/s (and probably faster, as the stellar line becomes
weak and the S/N poor at the corresponding wavelengths) suggest that the C ii
ions are accelerated away from the star by other mechanisms such as tidal
forces and magnetohydrodynamic interactions with the stellar wind. For
reference, the velocity of the solar wind at the distance of $\pi$ Men c is on
the order of 250 km/s (Shaikhislamov et al., 2020b). There is observational
evidence for HD 209458 b, GJ 436 b and GJ 3470 b that their escaping
atmospheres also result in preferential blue absorption (particularly the
latter two) (Vidal-Madjar et al., 2003; Ehrenreich et al., 2015; Bourrier et
al., 2018b). Models considering the 3D geometry of the interacting stellar and
planet winds also favor blue absorption, especially when the stellar wind is
stronger (Shaikhislamov et al., 2020a, b) and arranges the escaping atoms into
a tail trailing the planet. We consider a C ii tail to be a realistic scenario
for $\pi$ Men c.
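The conversion from the nominal transit depth to an equivalent opaque-disk radius, $R_{\rm{eq}}$=$\sqrt{\rm{depth}}\,R_{*}$, can be reproduced with assumed system parameters ($R_{*}$$\approx$1.10 $R_{\odot}$ and $R_{\rm{p}}$$\approx$2.06 $R_{\earth}$, values reported for $\pi$ Men by Gandolfi et al. 2018; assumptions here, since this section does not list them):

```python
import math

# How a transit depth maps onto an equivalent opaque-disk radius, as in the
# R_CII/R_p = 15.1 figure above. The stellar and planetary radii are assumed
# values (R_star ~ 1.10 R_sun, R_p ~ 2.06 R_earth), not quoted in this section.
R_SUN_M, R_EARTH_M = 6.957e8, 6.371e6   # meters

depth = 0.0676                          # C ii 1335.7 A in-transit absorption
r_star = 1.10 * R_SUN_M
r_p = 2.06 * R_EARTH_M
r_eq = math.sqrt(depth) * r_star        # equivalent opaque-disk radius
print(round(r_eq / r_p, 1))             # ~15, close to the quoted 15.1
```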
To gain intuition, we have built the phenomenological model of $\pi$ Men c’s
tail sketched in Figure 7 (top). The C ii ions are injected into a cylindrical
tail of fixed radius 15$R_{\rm{p}}$ and are subject to a prescribed velocity
$U$=$-$($U_{0}$ $-$ $x$/$t_{\rm{acc}}$) km/s that varies in the tail
direction. This geometry admittedly simplifies the true morphology of the escaping
gas, which is likely to resemble an opening cone tilted with respect to the
star-planet line (Shaikhislamov et al., 2020b). This crude description allows
us at the very least to obtain analytical expressions for some of the relevant
diagnostics. Here, $U_{0}$=10 km/s (a typical value from the photochemical-
hydrodynamic simulations; Figure 2) and $t_{\rm{acc}}$ is an acceleration time
scale. (Note: $x$$<$0 in the tail, and the ions are permanently accelerating.)
Related accelerations have been predicted by physically-motivated 3D models
(Ehrenreich et al., 2015; Shaikhislamov et al., 2020a, b), under the combined
effect of gravitational, inertial and radiative forces. Magnetic interactions
with the stellar wind might additionally affect ion acceleration.
The C ii ions photoionize further into C iii with a time scale
$t_{\rm{C\;II}}$$\sim$20 hours (calculated for unattenuated radiation from
$\pi$ Men at wavelengths shorter than the 508-Å threshold and the
corresponding C ii cross section (Verner & Yakovlev, 1995)). The collision of
stellar wind particles with the planetary C ii ions might ionize further the
latter (ionization potential of 24 eV), especially at the mixing layer between
the two flows (Tremblin & Chiang, 2013), but it remains unclear whether
collisional ionization can compete on the full-tail scale with
photoionization. This should be assessed in future work. The continuity
equations that govern the model are:
$\displaystyle\frac{d({[\rm{C}\;{\sc{II}}]}(x)U(x))}{dx}=-\frac{{[\rm{C}\;{\sc{II}}]}(x)}{t_{\rm{C\;II}}}$
$\displaystyle\frac{d({[\rm{C}\;{\sc{III}}]}(x)U(x))}{dx}=+\frac{{[\rm{C}\;{\sc{II}}]}(x)}{t_{\rm{C\;II}}}.$
As the flow accelerates through the tail, the total density decays and the C
ii ions are converted into C iii. The solution for the C ii ion number density
is:
$\frac{{[\rm{C}\;{\sc{II}}]}(x)}{{[\rm{C}\;{\sc{II}}]}_{0}}=\left(\frac{1}{1-\frac{x}{U_{0}t_{\rm{acc}}}}\right)^{1+t_{\rm{acc}}/t_{\rm{C\;II}}}.$
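The closed-form profile can be verified against a direct numerical integration of the continuity equation. Working in the tail coordinate $s$=$-x$$\geq$0, the speed is $v(s)$=$U_{0}$+$s/t_{\rm{acc}}$ and the flux $F$=[C ii]$\,v$ obeys $dF/ds$=$-(F/v)/t_{\rm{C\;II}}$. The parameter values below are illustrative ($U_{0}$=10 km/s and $t_{\rm{acc}}$=$t_{\rm{C\;II}}$=20 h), not fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Verify that the closed-form C II density profile solves the continuity
# equation d([CII] U)/dx = -[CII]/t_CII with U = -(U0 - x/t_acc). In the tail
# coordinate s = -x >= 0 the flux F = [CII] v obeys dF/ds = -(F/v)/t_CII.
U0 = 10.0e5                  # cm/s (10 km/s, typical from the simulations)
t_cii = 20 * 3600.0          # s (photoionization time scale ~20 h)
t_acc = t_cii                # illustrative choice: t_acc / t_CII = 1

v = lambda s: U0 + s / t_acc
span = (0.0, 50 * U0 * t_acc)
sol = solve_ivp(lambda s, F: [-(F[0] / v(s)) / t_cii], span, [U0],
                rtol=1e-10, atol=1e-30, dense_output=True)

s = np.linspace(*span, 200)
numeric = sol.sol(s)[0] / v(s)          # [CII](s)/[CII]_0, since F(0) = U0
analytic = (1.0 / (1.0 + s / (U0 * t_acc))) ** (1.0 + t_acc / t_cii)
assert np.max(np.abs(numeric - analytic)) < 1e-6
print("closed form matches direct integration")
```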
This highly simplified exercise aims to estimate reasonable values for the
free parameters [C ii]$_0$ and $t_{\rm{acc}}$ that reproduce the transit depth
and Doppler velocities from the COS data. To produce the wavelength-dependent
opacity, we integrate [C ii]($x$) along the line of sight keeping track of the
Doppler shifts introduced by the varying $U$($x$). We represent the C ii cross
section at rest for the 1335.7 Å line by a Voigt lineshape with thermal
($T$=6000 K, a reference temperature in the tail; our findings do not depend
sensitively on this temperature, as the absorption over a broad range of
velocities is caused by the bulk velocity of the ions rather than by their
thermal broadening) and natural (Einstein coefficient
$A_{\rm{ul}}$=2.88$\times$10$^{8}$ s$^{-1}$) broadening components, and a fractional
population of the substate from which the line arises based on statistical
weights and equal to 2/3.
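The Voigt lineshape described above can be sketched with the Faddeeva function, using the parameters given in the text ($T$=6000 K for carbon, $A_{\rm{ul}}$=2.88$\times$10$^{8}$ s$^{-1}$); bulk Doppler shifts from $U(x)$ then enter through the dimensionless frequency offset.

```python
import numpy as np
from scipy.special import wofz

# Sketch of the Voigt lineshape used for the C ii 1335.7 A cross section:
# thermal (Doppler) broadening at T = 6000 K for carbon plus natural
# broadening from A_ul = 2.88e8 s^-1. H(a, u) = Re[w(u + i a)] is the
# standard Voigt function and integrates to sqrt(pi) over u.
K_B, M_C, C_LIGHT = 1.380649e-23, 12 * 1.66054e-27, 2.99792458e8  # SI units
LAMBDA0, A_UL, T = 1335.7e-10, 2.88e8, 6000.0

nu0 = C_LIGHT / LAMBDA0
b = np.sqrt(2 * K_B * T / M_C)      # Doppler parameter (~2.9 km/s)
dnu_d = nu0 * b / C_LIGHT           # Doppler width in frequency, Hz
a = A_UL / (4 * np.pi * dnu_d)      # damping parameter (<< 1 here)

def voigt_H(a, u):
    return wofz(u + 1j * a).real

# Line profile in frequency: phi(nu) = H(a, u) / (sqrt(pi) * dnu_d), with
# u = (nu - nu0)/dnu_d; a bulk velocity U shifts u by (U/b) along the tail.
u = np.linspace(-50, 50, 20001)
norm = np.sum(voigt_H(a, u)) * (u[1] - u[0])
print(round(norm / np.sqrt(np.pi), 3))  # ~1.0 (normalization check)
```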
Figure 7 (bottom) shows how [C ii]$_0$ and $t_{\rm{acc}}$/$t_{\rm{C\;II}}$ affect
the absorption of the stellar line. Absorptions consistent with the
measurements are generally found for [C ii]$_0$$\geq$10$^{4}$ cm$^{-3}$ and
$t_{\rm{acc}}$/$t_{\rm{C\;II}}$$\leq$1, and result in C ii tails longer than
$\sim$50$R_{\rm{p}}$ and large amounts of C ii moving at $-$70 km/s. A key
outcome of the model is that for $t_{\rm{acc}}$/$t_{\rm{C\;II}}$$>$1 the C ii
ion photoionizes too quickly to produce significant absorption at the faster
velocities. This could in principle be compensated by increasing amounts of
C ii ions entering the tail, but the photochemical-hydrodynamic model
indicates that it is not possible to go beyond [C ii]$_0$$\sim$10$^{6}$ cm$^{-3}$ without
obtaining unrealistically large escape rates. For [C ii]$_0$$\sim$10$^{4}$–10$^{6}$ cm$^{-3}$,
the mass fluxes of carbon atoms through the tail range from $\sim$10$^{8}$ to
$\sim$10$^{10}$ g s$^{-1}$. These mass fluxes are comparable to the loss rates of carbon
nuclei predicted by our photochemical-hydrodynamic model for atmospheres that
have carbon abundances larger than solar (Table 2). This sets a weak
constraint on the carbon abundance that prevents us, for the time being, from
assessing whether carbon is a major constituent of $\pi$ Men c’s atmosphere.
This insight into $\pi$ Men c’s carbon abundance, even though of limited
diagnostic value, is consistent with the interpretation of in-transit
absorption by C ii at the hot Jupiter HD 209458 b (Vidal-Madjar et al., 2004;
Ben-Jaffel & Sona Hosseini, 2010; Linsky et al., 2010). Both HD 209458 b and
$\pi$ Men c transit Sun-like stars and exhibit similar transit depths. 3D
models of HD 209458 b (Shaikhislamov et al., 2020a) show that its C ii
absorption signal is explained by carbon in solar abundance. Because the
predicted mass loss rate of HD 209458 b is an order of magnitude higher than
for $\pi$ Men c (a result stemming mainly from its lower density), a reasonable guess
is that a $\times$10 solar enrichment for $\pi$ Men c will result in
comparable transit depths. Future 3D modeling of the interaction of the C ii
ions with the stellar wind and the planet’s magnetic field lines should help
refine these conclusions.
## Appendix C Photochemical-hydrodynamic model
We investigated $\pi$ Men c’s extended atmosphere with a 1D photochemical-
hydrodynamic model that solves the gas equations at pressures $\leq$1 $\mu$bar
(García Muñoz et al., 2020). Heating occurs from absorption of stellar photons
by the atmospheric neutrals. Cooling is parameterized as described in our
previous work, and includes emission from H${}_{3}^{+}$ in the infrared,
Ly$\alpha$ in the FUV, atomic oxygen at 63 and 147 $\mu$m, and rotational
cooling from H2O, OH and CO also in the infrared. We adopted a NLTE
formulation of H${}_{3}^{+}$ cooling that captures the departure of the ion’s
population from equilibrium at low H2 densities. We included a
parameterization of CO2 cooling at 15 $\mu$m (Gordiets et al., 1982). The
model considers 26 chemicals (H2, H, He, CO2, CO, C, H2O, OH, O, O2,
H${}_{2}^{+}$, H+, H${}_{3}^{+}$, He+, HeH+, CO${}_{2}^{+}$, CO+, C+,
HCO${}_{2}^{+}$, HCO+, H2O+, H3O+, OH+, O+, O${}_{2}^{+}$ and electrons) that
participate in 154 chemical processes. It does not include hydrocarbon
chemistry, although this omission is not important as hydrocarbons are rapidly
lost at low pressures in favor of other carbon-bearing species (Moses et al.,
2013).
To explore a broad range of compositions, we adopted nuclei fractions
$Y_{\rm{C}^{*}}$ and $Y_{\rm{O}^{*}}$ such that
$Y_{\rm{C}^{*}}$+$Y_{\rm{O}^{*}}$ goes from a few times 10$^{-6}$ to $\sim$0.4, and
$Y_{\rm{C}^{*}}$/$Y_{\rm{O}^{*}}$ ratios from 0.1 to 10. We imposed
$Y_{\rm{He}^{*}}$=0.1$Y_{\rm{H}^{*}}$. By definition
$Y_{\rm{H}^{*}}$+$Y_{\rm{He}^{*}}$+$Y_{\rm{C}^{*}}$+$Y_{\rm{O}^{*}}$=1, and
thus the composition is specified by only two nuclei fractions. With the above
information, we estimated the corresponding vmrs from Figure 7 of Hu & Seager
(2014) and assigned them as boundary conditions of our extended atmosphere
model. For the other gases, we adopted zero vmrs. Hydrocarbons (e.g. C2H2,
CH4) become abundant in the bulk atmosphere for
$Y_{\rm{C}^{*}}$/$Y_{\rm{O}^{*}}$$\geq$2. Because our model does not
currently include hydrocarbons, we transferred all the C nuclei at the base of
the extended atmosphere from C2H2 and CH4 into CO and C.
Table 2 summarizes 30 cases, each for a different bulk atmospheric
composition. For cases 05-06 and 11-12 we first assumed for their vmrs in the
bulk atmosphere:
* (05) H2: 7.66$\times$10$^{-1}$; He: 1.66$\times$10$^{-1}$; CO: 6.90$\times$10$^{-3}$; H2O: 6.21$\times$10$^{-2}$.
* (06) He: 1.46$\times$10$^{-1}$; CO2: 9.76$\times$10$^{-2}$; H2O: 7.32$\times$10$^{-1}$; O2: 2.44$\times$10$^{-2}$.
* (11) H2: 7.88$\times$10$^{-1}$; He: 1.63$\times$10$^{-1}$; CO: 2.48$\times$10$^{-2}$; H2O: 2.48$\times$10$^{-2}$.
* (12) H2: 4.47$\times$10$^{-1}$; He: 1.18$\times$10$^{-1}$; CO2: 1.45$\times$10$^{-1}$; CO: 1.45$\times$10$^{-1}$; H2O: 1.45$\times$10$^{-1}$.
However, our numerical experiments showed that H2O (but also H2 and O2) were
unstable for these cases, and their number densities dropped by orders of
magnitude in a few elements of the spatial grid. This is evidence that these
molecules chemically transform before reaching the $\mu$bar pressure level. To
avoid numerical difficulties, in these four cases we prescribed the vmrs at
the $\mu$bar pressure level by replacing the unstable molecules by their
atomic constituents while preserving the original nuclei fractions. They are
indicated with a $\dagger$ in Table 2, which lists the adopted vmrs.
The mass fractions of, e.g., H2 and He are given by the ratios
vmr(H2)$\;\mu_{\rm{H}_{2}}$/($\sum_{s}$vmr$_{s}\;\mu_{s}$) and
vmr(He)$\;\mu_{\rm{He}}$/($\sum_{s}$vmr$_{s}\;\mu_{s}$), respectively, with the
summation extending over all species. The mass fraction of heavy volatiles
in the bulk atmosphere, $Z$, is calculated as one minus the added mass fractions
of H2 and He.
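A worked example of this bookkeeping, using the case 05 volume mixing ratios listed above. Table 2 itself is not reproduced here, so the derived $Z$ illustrates the formula rather than quoting a tabulated model output.

```python
# Worked example of the Z bookkeeping just described, using the case 05
# volume mixing ratios listed above: Z = 1 - (mass fractions of H2 and He).
MU = {"H2": 2.016, "He": 4.003, "CO": 28.010, "H2O": 18.015}  # g/mol

vmr = {"H2": 7.66e-1, "He": 1.66e-1, "CO": 6.90e-3, "H2O": 6.21e-2}  # case 05

denom = sum(vmr[s] * MU[s] for s in vmr)
x_h2 = vmr["H2"] * MU["H2"] / denom   # mass fraction of H2
x_he = vmr["He"] * MU["He"] / denom   # mass fraction of He
Z = 1.0 - x_h2 - x_he                 # heavy-volatile mass fraction
print(round(Z, 2))                    # ~0.37 for these mixing ratios
```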
The right hand side of Table 2 summarizes the model outputs.
$\dot{m}_{\rm{H}}$($@$Mach=1) quotes the mass flux of neutral H i atoms at the
sonic point. It departs from the mass loss rate of H nuclei
($\dot{m}_{\rm{H}^{*}}$) if at the sonic point hydrogen is ionized.
$\dot{m}_{\rm{He}^{*}}$, $\dot{m}_{\rm{C}^{*}}$, $\dot{m}_{\rm{O}^{*}}$ and
$\dot{m}$ quote the mass loss rates of the specified nuclei and the net mass
loss rate of the atmosphere. All mass loss rates are calculated over a 4$\pi$
solid angle.
Table 2 shows that $\dot{m}_{\rm{H}}$($@$Mach=1) is about 5$\times$10$^{9}$ g s$^{-1}$
for a H2/He atmosphere. 3D simulations of $\pi$ Men c’s extended atmosphere
(Shaikhislamov et al., 2020b) show that the Ly$\alpha$ absorption signal
varies monotonically with the mass flux of protons in the stellar wind.
According to their numerical experiments (their Figure 4), increases in the
stellar wind flux by a factor of a few result in deeper transits by a similar
factor of a few. Because ENA generation depends on both the flux of protons in
the stellar wind and the flux of neutral hydrogen in the planet wind, we
estimate that a factor of a few (we take $\times$1/4) decrease in
$\dot{m}_{\rm{H}}$($@$Mach=1) with respect to the case of an H2/He atmosphere
suffices to bring ENA generation to undetectable levels for HST/STIS. We take
$\dot{m}_{\rm{H}}$($@$Mach=1)=1.25$\times$10$^{9}$ g s$^{-1}$ as our approximate
threshold for non-detection of Ly$\alpha$ absorption. A refined estimate of
this threshold calls for 3D simulations over a variety of bulk atmospheric
compositions.
## Appendix D Interior structure model
As the adopted intrinsic temperature affects the predicted atmospheric mass
(because of its impact on the scale height), we constrain its possible values
through thermal evolution calculations. For an adiabatic planetary envelope
that cools efficiently by convection under the moderating effects of
atmospheric opacity, a pattern emerges: The lower the present-day $T_{\rm
int}$ and the more massive the atmosphere is, the longer it takes to cool down
to that state from an initial hot state ($T_{\rm int}$ much larger than 100 K)
after formation (time $t$=0).
Figure 8 (top left) shows evolution tracks for different present-day $T_{\rm
int}$ and a heavy volatile mass fraction $Z$=0.85. The largest $T_{\rm int}$
that yields a cooling time in agreement with the system’s age is 52 K,
although $T_{\rm int}$=60 K might yield a solution where the radius is matched
within the measurement uncertainties. Assuming $T_{\rm int}$=100 K requires an
extra heating source that maintains such a high heat flux at present times.
For inflated hot Jupiters, extra heating of $\sim$0.1% to a few percent of the
incident stellar irradiation is required to explain their large radius, which
is consistent with Ohmic heating (Thorngren & Fortney, 2018). For $\pi$ Men c,
less extra heating is required ($\sim$0.01$\%$; gray dashed curve). However,
the mechanism that may provide this extra heating to the smaller $\pi$ Men c
is not obvious. While Ohmic heating may occur at sub-Neptunes (Pu & Valencia,
2017), the predicted amount falls short by two orders of magnitude of what is
needed for $\pi$ Men c (Figure 8, top right). Another option is tidal heating
provided that the planet is on an eccentric orbit, although the orbital
eccentricity remains poorly constrained (Gandolfi et al., 2018; Huang et al.,
2018; Damasso et al., 2020). For an eccentricity $e$=$0.001$ and tidal quality
factor $Q_{\rm{p}}\geq10^{3}$ we find that tidal heating negligibly affects the
planetary cooling. Tidal heating, however, becomes effective at extending the
cooling time of an atmosphere with $T_{\rm int}$=100 K up to the current
system’s age if, for example, $e$=0.02 and $Q_{\rm{p}}\sim10^{3}$ (about three
times the Earth’s tidal quality factor), or if $e$=0.1 and
$Q_{\rm{p}}\sim5\times10^{4}$ (about the Saturn/Uranus/Neptune value).
Further, we estimated the circularization time for these two configurations
(Jackson et al., 2008) to be $\tau_{\rm{circ}}$=0.6 and 30 Gyr respectively.
Even the shortest of them is on the order of our prescribed survival time. It
is thus reasonable to expect that the recent history of $\pi$ Men c’s
atmosphere may have occurred while the planet followed an eccentric orbit that
could have sustained $T_{\rm int}$$\sim$100 K. It is also conceivable that the
outer companion in the $\pi$ Men planetary system may induce a non-negligible
eccentricity in the innermost planet’s orbit (De Rosa et al., 2020; Kunovac
Hodžić et al., 2020; Xuan & Wyatt, 2020), as is found in some close-in sub-
Neptune-plus-cold-Jupiter systems. Importantly, with regard to planet mass
loss, higher $T_{\rm int}$ values imply lower $M_{\rm atm}$ and thus
atmospheres that are less likely to survive over timescales of gigayears,
which sets a limit on how high $T_{\rm int}$ can be given that $\pi$ Men c
still hosts an atmosphere.
Finally, we adopt $T_{\rm int}$=100 K as a reasonable upper limit for $\pi$
Men c.
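The viability of tidal heating as the extra energy source can be checked at the order-of-magnitude level with the standard eccentricity-tide dissipation rate, $\dot{E}=(63/4)\,(GM_{*})^{3/2}M_{*}R_{\rm p}^{5}\,e^{2}/(Q'_{\rm p}\,a^{15/2})$ (e.g., Jackson et al., 2008). The sketch below compares it with the internal luminosity implied by $T_{\rm int}$=100 K; the stellar mass and semi-major axis used are illustrative assumptions for $\pi$ Men c, not values quoted in this work:

```python
import math

G, M_SUN, R_EARTH, AU, SIGMA = 6.674e-11, 1.989e30, 6.371e6, 1.496e11, 5.670e-8

# Illustrative (assumed) system parameters for pi Men c -- not quoted from the text.
M_star = 1.1 * M_SUN        # stellar mass [kg]
a = 0.067 * AU              # orbital semi-major axis [m]
R_p = 2.06 * R_EARTH        # planet radius [m]
e, Q_p = 0.02, 1.0e3        # eccentricity and (modified) tidal quality factor

# Orbit-averaged tidal dissipation in the planet:
# E_dot = (63/4) (G M_*)^(3/2) M_* R_p^5 e^2 / (Q'_p a^(15/2))
E_tide = 63.0 / 4.0 * (G * M_star)**1.5 * M_star * R_p**5 * e**2 / (Q_p * a**7.5)

# Internal luminosity implied by T_int = 100 K
L_int = 4.0 * math.pi * R_p**2 * SIGMA * 100.0**4
```

With these assumed parameters the two quantities come out within a factor of order unity of each other ($\sim$10$^{16}$ W), consistent with the statement that $e$=0.02 and $Q_{\rm p}\sim10^{3}$ can sustain $T_{\rm int}\sim100$ K.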
Figure 8 (bottom left) shows evolution tracks for $Z$ from 0.5 to 0.88 at
present-day $T_{\rm int}$=52 K. The lower the adopted $Z$, the less massive
the atmosphere is and the quicker it cools down and contracts. Eventually, the
planet adopts a state of (nearly) equilibrium evolution with the incident
flux, where contraction slows down and further cooling progresses on very long
timescales. In this state, the planet radiates only
$\sim$$(100/1150)^{4}$$\sim$$6\times10^{-5}$, i.e., 0.006%, more than if it
were in true equilibrium with the stellar irradiation. Mass fractions $Z$$<$0.5 may be
possible for present $T_{\rm int}$$<$44 K. For comparison, Saturn has $T_{\rm
int}$$\sim$77 K and Neptune $\sim$53 K, and thus we do not expect that $T_{\rm
int}$ at $\pi$ Men c will be much higher than for them given that its mass is
much lower. Also, a non-zero eccentricity may keep $T_{\rm int}$ above such
values. Mass loss, fixed in the preparation of Figure 8 (bottom left) to
$2\times10^{10}$ g s$^{-1}$, prolongs the cooling time immediately after formation but
speeds up the contraction well into the future as the planet loses its
envelope.
We have simulated the future evolution of $\pi$ Men c for some of the
compositions deemed realistic for present-day $\pi$ Men c with the goal of
exploring whether the planet will ever cross the radius valley. As an example,
Figure 8 (bottom right) shows that for $Z$(CO2)=0.50 and $T_{\rm{int}}$=44 K,
$\pi$ Men c will turn into a bare rocky core in about 0.5 Gyr. The same
conclusion is found for other $Z$–$T_{\rm{int}}$ combinations.
The interior structure model calculates the atmospheric pressure-temperature
($p$–$T$) profile using an analytical formulation (Guillot, 2010) that depends
on the ratio $\gamma$=$\kappa_{\rm vis}$/$\kappa_{\rm IR}$ of constant visible
and IR opacities in the $T$–optical depth relation and on the local opacity.
For the IR opacities, we take the Rosseland mean $\kappa_{\rm R}$ for solar
composition (Freedman et al., 2008) as our baseline. In our opacity 1 setting,
we adopt $\gamma$=0.123 and $\kappa_{\rm IR}$=1$\times\langle\kappa_{\rm R}\rangle$
(angle brackets $\langle\,\rangle$ denote an average over wavelength), which was confirmed to
reproduce well the $p$–$T$ profiles published for the H2/He/H2O atmosphere of
K2-18 (Scheucher et al., 2020). As $\pi$ Men c’s atmosphere may also contain
large abundances of CO2, our opacity 2 setting adopts $\gamma$=0.500 and
$\kappa_{\rm IR}$=2$\times\langle\kappa_{\rm R}\rangle$, which is appropriate for a
CO2-dominated atmosphere with some admixture of H2/He. Using two sets of
opacities for each interior model calculation is a pragmatic way of bracketing
the real opacity of $\pi$ Men c’s atmosphere. In a conservative spirit, our
constraints on the planet’s bulk composition from the atmospheric stability
argument utilize the opacity setting that results in the lowest $Z$.
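For reference, the Guillot (2010) analytical profile in its isotropically averaged form ($\mu_{*}=1/\sqrt{3}$) can be sketched as below; this is a simplified reading of that formulation, with the redistribution factor absorbed into $T_{\rm eq}$, and not the exact code used in this work:

```python
import numpy as np

def guillot_T(tau, T_int, T_eq, gamma):
    """Guillot (2010) analytical T(tau) profile, isotropic-irradiation average.

    gamma = kappa_vis / kappa_IR; tau is the IR optical depth; T_eq absorbs
    the heat-redistribution factor (illustrative simplification).
    """
    g3 = gamma * np.sqrt(3.0)
    t4_int = 0.75 * T_int**4 * (2.0 / 3.0 + tau)
    t4_irr = 0.75 * T_eq**4 * (2.0 / 3.0 + 1.0 / g3
                               + (gamma / np.sqrt(3.0) - 1.0 / g3) * np.exp(-g3 * tau))
    return (t4_int + t4_irr)**0.25
```

For $\gamma<1$ (opacity setting 1, $\gamma$=0.123) the skin temperature sits below $T_{\rm eq}$ and the profile warms monotonically with depth, which is the qualitative behavior expected for a strongly irradiated sub-Neptune without a thermal inversion.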
Caption for Table 2.
Summary of photochemical-hydrodynamic model runs. Columns 2–4 quote the
assumed nuclei fractions for H, C and O in the bulk atmosphere. Columns 6–13
quote the adopted vmrs at the $\mu$bar pressure level for H2, H, He, CO2, CO,
C, H2O and O, and 14 quotes the corresponding heavy mass fraction. Column 15
quotes the mass flux of H i atoms at the location of the sonic point, which
occurs for all cases within the range $r_{{Mach}=1}$/$R_{\rm{p}}$=3.5–4.6.
Columns 16–19 quote the mass loss rates of H, He, C and O nuclei, and column
20 quotes the net mass loss rate. $\dagger$: For these cases, we specified the
vmrs at 1 $\mu$bar considering that molecules such as H2, H2O and O2 become
photochemically unstable before reaching the $\mu$bar pressure level. We also
assume that the 1-$\mu$bar pressure level occurs at the radial distance
$\sim$2.06$R_{\earth}$ for the TESS radius of the planet. This neglects the
vertical extent of the region between a few mbars and 1 $\mu$bar, a reasonable
approximation for moderate atmospheric temperatures. We impose that at the
1-$\mu$bar level the temperature coincides with the planet’s $T_{\rm{eq}}$
(1150 K).
Table 2 (columns 6–13: volume mixing ratios at $R_{\rm{1\mu bar}}$; mass fluxes and mass loss rates in $10^{9}$ g s$^{-1}$): Case | $Y_{\rm{H}^{*}}$ | $Y_{\rm{C}^{*}}$ | $Y_{\rm{O}^{*}}$ | $Y_{\rm{C}^{*}}$/$Y_{\rm{O}^{*}}$ | H2 | H | He | CO2 | CO | C | H2O | O | $Z$ | $\dot{m}_{\rm{H}}$($@$$M$=1) | $\dot{m}_{\rm{H}^{*}}$ | $\dot{m}_{\rm{He}^{*}}$ | $\dot{m}_{\rm{C}^{*}}$ | $\dot{m}_{\rm{O}^{*}}$ | $\dot{m}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
01 | 9.09(-1) | 3.64(-7) | 3.64(-6) | 0.1 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 6.7(-7) | 0.0 | 6.0(-6) | 0.0 | 5.2(-5) | 4.4 | 6.9 | 2.7 | 3.0(-5) | 4.1(-4) | 9.6
02 | 9.09(-1) | 3.64(-6) | 3.64(-5) | 0.1 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 6.7(-6) | 0.0 | 6.0(-5) | 0.0 | 5.2(-4) | 6.7 | 9.7 | 3.8 | 4.3(-4) | 5.9(-3) | 13.5
03 | 9.09(-1) | 3.64(-5) | 3.64(-4) | 0.1 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 6.7(-5) | 0.0 | 6.0(-4) | 0.0 | 5.2(-3) | 11.9 | 15.0 | 5.9 | 6.9(-3) | 9.3(-2) | 21.0
04 | 9.06(-1) | 3.64(-4) | 3.64(-3) | 0.1 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 6.7(-4) | 0.0 | 6.0(-3) | 0.0 | 5.2(-2) | 15.3 | 18.9 | 7.3 | 8.5(-2) | 1.3 | 27.4
05$\dagger$ | 8.73(-1) | 3.64(-3) | 3.64(-2) | 0.1 | 0.0 | 8.8(-1) | 8.8(-2) | 3.7(-3) | 0.0 | 0.0 | 0.0 | 2.9(-2) | 3.7(-1) | 8.4 | 11.5 | 4.5 | 5.9(-1) | 7.5 | 24.1
06$\dagger$ | 5.45(-1) | 3.64(-2) | 3.64(-1) | 0.1 | 0.0 | 5.9(-1) | 5.9(-2) | 3.9(-2) | 0.0 | 0.0 | 0.0 | 3.1(-1) | 9.7(-1) | 0.35 | 1.4 | 4.8(-1) | 8.2(-1) | 11.7 | 14.4
07 | 9.09(-1) | 1.33(-6) | 2.67(-6) | 0.5 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 2.4(-6) | 0.0 | 2.4(-6) | 0.0 | 4.8(-5) | 4.4 | 6.8 | 2.7 | 1.1(-4) | 2.9(-4) | 9.5
08 | 9.09(-1) | 1.33(-5) | 2.67(-5) | 0.5 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 2.4(-5) | 0.0 | 2.4(-5) | 0.0 | 4.8(-4) | 5.6 | 8.3 | 3.3 | 1.4(-3) | 3.6(-3) | 11.6
09 | 9.09(-1) | 1.33(-4) | 2.67(-4) | 0.5 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 2.4(-4) | 0.0 | 2.4(-4) | 0.0 | 4.8(-3) | 11.6 | 14.9 | 5.9 | 2.5(-2) | 6.7(-2) | 20.9
10 | 9.06(-1) | 1.33(-3) | 2.67(-3) | 0.5 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 2.4(-3) | 0.0 | 2.4(-3) | 0.0 | 4.6(-2) | 12.8 | 15.9 | 6.2 | 2.7(-1) | 7.1(-1) | 23.2
11$\dagger$ | 8.73(-1) | 1.33(-2) | 2.67(-2) | 0.5 | 8.1(-1) | 0.0 | 1.6(-1) | 2.5(-2) | 0.0 | 0.0 | 0.0 | 0.0 | 3.4(-1) | 9.3 | 12.5 | 4.5 | 1.9 | 5.4 | 23.3
12$\dagger$ | 5.45(-1) | 1.33(-1) | 2.67(-1) | 0.5 | 0.0 | 7.4(-1) | 7.4(-2) | 1.8(-1) | 0.0 | 0.0 | 0.0 | 0.0 | 9.0(-1) | 0.54 | 1.7 | 5.7(-1) | 3.5 | 9.3 | 14.6
13 | 9.09(-1) | 2.00(-6) | 2.00(-6) | 1 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 3.7(-6) | 0.0 | 0.0 | 0.0 | 4.4(-5) | 4.4 | 6.8 | 2.7 | 1.7(-4) | 2.2(-4) | 9.5
14 | 9.09(-1) | 2.00(-5) | 2.00(-5) | 1 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 3.7(-5) | 0.0 | 0.0 | 0.0 | 4.4(-4) | 4.8 | 7.3 | 2.8 | 1.8(-3) | 2.3(-3) | 10.1
15 | 9.09(-1) | 2.00(-4) | 2.00(-4) | 1 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 3.7(-4) | 0.0 | 0.0 | 0.0 | 4.4(-3) | 9.0 | 12.3 | 4.8 | 3.1(-2) | 4.1(-2) | 17.2
16 | 9.05(-1) | 2.00(-3) | 2.00(-3) | 1 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 3.7(-3) | 0.0 | 0.0 | 0.0 | 4.2(-2) | 10.7 | 13.3 | 5.3 | 3.4(-1) | 4.5(-1) | 19.4
17 | 8.73(-1) | 2.00(-2) | 2.00(-2) | 1 | 8.0(-1) | 0.0 | 1.6(-1) | 0.0 | 3.7(-2) | 0.0 | 0.0 | 0.0 | 3.1(-1) | 4.9 | 6.8 | 2.6 | 1.7 | 2.3 | 13.4
18 | 5.45(-1) | 2.00(-1) | 2.00(-1) | 1 | 5.2(-1) | 0.0 | 1.0(-1) | 0.0 | 3.8(-1) | 0.0 | 0.0 | 0.0 | 8.8(-1) | 0.45 | 1.3 | 0.4 | 3.7 | 4.9 | 10.3
19 | 9.09(-1) | 2.67(-6) | 1.33(-6) | 2 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 2.4(-6) | 2.4(-6) | 0.0 | 0.0 | 4.2(-5) | 4.7 | 7.3 | 2.9 | 2.4(-4) | 1.6(-4) | 10.2
20 | 9.09(-1) | 2.67(-5) | 1.33(-5) | 2 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 2.4(-5) | 2.4(-5) | 0.0 | 0.0 | 4.2(-4) | 4.6 | 7.1 | 2.8 | 2.3(-3) | 1.5(-3) | 9.9
21 | 9.09(-1) | 2.67(-4) | 1.33(-4) | 2 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 2.4(-4) | 2.4(-4) | 0.0 | 0.0 | 4.2(-3) | 4.7 | 7.2 | 2.8 | 2.4(-2) | 1.5(-2) | 10.0
22 | 9.05(-1) | 2.67(-3) | 1.33(-3) | 2 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 2.4(-3) | 2.4(-3) | 0.0 | 0.0 | 4.0(-2) | 5.2 | 7.2 | 2.8 | 2.4(-1) | 1.5(-1) | 10.4
23 | 8.73(-1) | 2.67(-2) | 1.33(-2) | 2 | 7.9(-1) | 0.0 | 1.6(-1) | 0.0 | 2.4(-2) | 2.4(-2) | 0.0 | 0.0 | 3.0(-1) | 4.4 | 5.4 | 2.1 | 1.8 | 1.2 | 10.5
24 | 5.45(-1) | 2.67(-1) | 1.33(-1) | 2 | 4.6(-1) | 0.0 | 9.2(-2) | 0.0 | 2.2(-1) | 2.2(-1) | 0.0 | 0.0 | 8.7(-1) | 0.48 | 1.3 | 0.5 | 6.1 | 3.9 | 11.8
25 | 9.09(-1) | 3.64(-6) | 3.64(-7) | 10 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 6.7(-7) | 6.0(-6) | 0.0 | 0.0 | 3.9(-5) | 4.8 | 7.5 | 2.9 | 3.4(-4) | 4.4(-5) | 10.4
26 | 9.09(-1) | 3.64(-5) | 3.64(-6) | 10 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 6.7(-6) | 6.0(-5) | 0.0 | 0.0 | 3.9(-4) | 4.8 | 7.5 | 3.0 | 3.4(-3) | 4.4(-4) | 10.5
27 | 9.09(-1) | 3.64(-4) | 3.64(-5) | 10 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 6.7(-5) | 6.0(-4) | 0.0 | 0.0 | 3.9(-3) | 5.0 | 7.8 | 3.1 | 3.6(-2) | 4.6(-3) | 10.9
28 | 9.05(-1) | 3.64(-3) | 3.64(-4) | 10 | 8.3(-1) | 0.0 | 1.7(-1) | 0.0 | 6.7(-4) | 6.0(-3) | 0.0 | 0.0 | 3.7(-2) | 5.3 | 7.9 | 3.1 | 3.7(-1) | 4.7(-2) | 11.4
29 | 8.73(-1) | 3.64(-2) | 3.64(-3) | 10 | 7.8(-1) | 0.0 | 1.6(-1) | 0.0 | 6.5(-3) | 5.8(-2) | 0.0 | 0.0 | 2.9(-1) | 5.1 | 6.6 | 2.6 | 3.1 | 4.0(-1) | 12.7
30 | 5.46(-1) | 3.64(-1) | 3.64(-2) | 10 | 3.9(-1) | 0.0 | 7.9(-2) | 0.0 | 5.3(-2) | 4.7(-1) | 0.0 | 0.0 | 8.7(-1) | 0.67 | 1.5 | 5.9(-1) | 11.1 | 1.4 | 14.6
Figure 4: Top. Far-ultraviolet spectrum of $\pi$ Men obtained during the
second HST orbit. Major features are marked. The Earth symbol indicates that
the Ly$\alpha$ line and the O i triplet are affected by geocoronal airglow.
Middle. S/N per pixel of the top panel spectrum. Bottom. Rebinned out-of-
transit (black) and in-transit (red) spectra of $\pi$ Men around the C ii
1334/1335 Å triplet. The black dashed line shows the Gaussian fit to the C ii
1335 Å feature, while the blue dashed line shows the simulated C ii 1334 Å
line, without ISM absorption, obtained by scaling the Gaussian fit to the C ii
1335 Å feature by the ratio of the line oscillator strengths times the
statistical weights of the lower states. The yellow line shows the fitted
planetary absorption (right y-axis). The red dashed line shows the Gaussian
fit to the C ii 1335 Å feature multiplied by the simulated planetary
absorption. The purple solid line shows the simulated ISM absorption (right
y-axis), while the green solid line shows the simulated C ii 1334 Å line
profile multiplied by the ISM absorption profile. The orange solid line shows
the C ii 1334 Å line profile multiplied by the ISM and planetary absorption
profiles. The vertical blue dashed lines indicate the positions of the two
main components of the C ii triplet.
Figure 5: Top. Geocoronal airglow template (Bourrier et al., 2018a) fits to
Orbit 1 of our unbinned COS spectra. The narrower emission lines superimposed
on the broader airglow lines have a stellar origin. These stellar lines were
masked out from the fit. Orbits 2, 4, and 5 exhibit a similarly good fit.
Bottom. For the O i triplet, mean in-transit (red) and mean out-of-transit
(black) spectra binned in the same way as the C ii spectra after correcting
for geocoronal airglow. For the line at 1306 Å (the least affected by
geocoronal airglow), we estimate a transit depth of 5.2$\pm$3.7%
(statistically insignificant) over the velocity range from $-$70 to +10 km/s.
Figure 6: Normalized light curves for a selection of the observed emission
lines. The gray points show 20-s bins and the green points show 300-s bins.
The standard deviation of the 300-s bins is printed in each subplot. The
O i light curve is dominated by airglow variability over the course of each
orbit. No flares or other short-term stellar variability are apparent.
Figure 7: Top. Sketch of the phenomenological model for the C ii tail. The
ions escaping the planet are channeled into a cylindrical tail of radius
15$R_{\rm{p}}$ and accelerated. Further ionization into C iii and the gas
velocity dictate the effective length of the C ii tail. Both $x$ (position
from the tail entrance) and $U$ (velocity) are negative in the tail. The star
is on the right end, and the observer on the left end. Bottom. Example
solutions of the phenomenological model for acceleration-to-ionization
timescales of the C ii ion ranging between 0.01 and 10. Left column: Transit
depths for a range of C ii densities at the tail entry ($x$=0), namely:
[C ii]$_{0}$=10$^{4}$ (red), 10$^{5}$ (blue) and 10$^{6}$ (green) cm$^{-3}$. Middle column: Ion number
densities normalized to the density at $x$=0. Right column: Prescribed
velocity profiles.
Caption for Figure 8.
Top Left. Temporal evolution of planet radius $R_{\rm{p}}$($t$) scaled by the
measured $R_{\rm{p}}$=2.06$R_{\earth}$ for models with $Z$(CO2)=0.85 and
present-day $T_{\rm{int}}$ values as labeled. Both $R_{\rm{p}}$ and
$T_{\rm{int}}$ decrease with time as the planet cools. The gray box indicates
the uncertainty in present radius ($\pm$0.03$R_{\earth}$) and stellar age
(5.2$\pm$1.1 Gyr). For the gray dashed curve of $T_{\rm{int}}=100$ K, the
evolution calculations include an extra energy source that converts 0.01% of
the incident energy flux into heating of the interior. Top Right. Cooling
time, defined as the time to reach the measured planet radius $R_{\rm{p}}$.
Symbols are placed at 5.1 Gyr (13 Gyr) if contraction is halted, i.e. if
$R_{p}$ becomes independent of time and agrees with (remains larger than) the
measured $R_{p}$. All runs are for interior models with $T_{\rm{int}}$=100 K,
$Z$=0.85, and no mass loss. Bottom Left. Same as top left panel but for models
with a lower $T_{\rm int}$=52 K and $Z$ values as labeled. The moderate
$Z$=0.50 model seems to fall short in cooling time. However, evolving that
model further to $T_{\rm int}$=44 K leads to equilibrium evolution and
$R_{\rm{p}}$(t) still in agreement with the measured value. Bottom Right.
Planet size evolution considering a fiducial mass loss of $2\times10^{10}$ g s$^{-1}$
(blue) and omitting mass loss (red) for $Z$(CO2)=0.5 and $T_{\rm{int}}$=44 K.
Equilibrium is reached when mass loss is omitted. When mass loss is taken into
account, the planet continues shrinking as it sheds its atmosphere until it
turns into a bare rocky core, which is predicted to happen within 0.5 Gyr from
now. The squares indicate potential present states of $\pi$ Men c, from where
the evolution is calculated backward and forward in time.
## References
* Batalha (2014) Batalha, N.M. 2014. PNAS, 111, 12647
* Barragán et al. (2019) Barragán, O., Gandolfi, D. & Antoniciello, G. 2019. Monthly Notices of the Royal Astronomical Society, Volume 482, Issue 1, p.1017-1030.
* Benneke & Seager (2012) Benneke, B. & Seager, S. 2012. Astrophys. J., 753:100
* Benneke et al. (2019) Benneke, B., Wong, I., Piaulet, C., Knutson, H.A., Lothringer, J., et al. 2019. Astrophys. J. Lett., 887:L14
* Ben-Jaffel & Ballester (2013) Ben-Jaffel, L. & Ballester, G.E. 2013. Astron. & Astrophys., 553:A52
* Ben-Jaffel & Sona Hosseini (2010) Ben-Jaffel, L. & Sona Hosseini, S. 2010. The Astrophysical Journal, Volume 709, Issue 2, pp. 1284-1296.
* Bourrier et al. (2018b) Bourrier, V., Lecavelier des Etangs, A., Ehrenreich, D., Sanz-Forcada, J., Allart, R., et al. 2018. A&A, 620:A147
* Bourrier et al. (2018a) Bourrier, V., Ehrenreich, D., Lecavelier des Etangs, A., Louden, T., Wheatley, P.J., et al. 2018. A&A, 615:A117
* Damasso et al. (2020) Damasso, M., Sozzetti, A., Lovis, C. Barros, S.C.C., Sousa, S.G. et al. 2020. Astronomy & Astrophysics, Volume 642, id.A31, 14 pp.
* De Rosa et al. (2020) De Rosa, R.J., Dawson, R. & Nielsen, E.L. 2020. Astronomy & Astrophysics, Volume 640, id.A73, 13 pp.
* Ehrenreich et al. (2015) Ehrenreich, D., Bourrier, V., Wheatley, P.J., Lecavelier des Etangs, A., Hébrard, G. 2015. Nature, 522:459
* Fortney et al. (2013) Fortney, J.J., Mordasini, C., Nettelmann, N., Kempton, E. M.-R., Greene, T.P. & Zahnle, K., 2013. Astrophys. J., 775:80
* Fossati et al. (2010) Fossati, L., Haswell, C.A., Froning, C.S., Hebb, L., Holmes, S., et al. 2010. Astrophys. J. Lett., 714:L222
* Fulton et al. (2017) Fulton, B.J., Petigura, E.A., Howard, A.W., Isaacson, H., Marcy, G.W. et al. 2017. Astron. J., 154:109
* Freedman et al. (2008) Freedman, R.S., Marley, M.S. & Lodders, K. 2008. The Astrophysical Journal Supplement Series, Volume 174, Issue 2, pp. 504-513 (2008).
* Frisch & Slavin (2003) Frisch, P.C. & Slavin, J.D. 2003. Astrophys. J., 594:844
* Gandolfi et al. (2018) Gandolfi, D., Barragán, O., Livingston, J. H., Fridlund, M., Justesen, A. B., et al., 2018. A&A Letters, 619, L10
* García Muñoz et al. (2020) García Muñoz, A., Youngblood, A., Fossati, L., Gandolfi, D., Cabrera, J. & Rauer, H. 2020. Astrophys. J. Lett., 888:L21
* Gordiets et al. (1982) Gordiets, B.F., Kulikov, Y.N., Markov, M.N. & Marov, M.Y. 1982. J. Geophys. Res., 87:4504
* Guo et al. (2020) Guo, X., Crossfield, I.J.M., Dragomir, D., Kosiarek, M.R., Lothringer, J. et al. 2020. Astronom. J., 159:239
* Guillot (2010) Guillot, T. 2010. Astronomy & Astrophysics, Volume 520, id.A27, 13 pp.
* Hatzes & Rauer (2015) Hatzes, A.P. & Rauer, H. 2015. The Astrophysical Journal Letters, Volume 810, Issue 2, article id. L25.
* Holmström et al. (2008) Holmström, M., Ekenbäck, A., Selsis, F., Penz, T., Lammer, H. & Wurz, P. 2008. Nature, Volume 451, Issue 7181, pp. 970-972 (2008).
* Huang et al. (2018) Huang, C.X., Burt, J., Vanderburg, A., Günther, M.N., Shporer, A. et al. 2018. Astrophys. J. Lett., 868:L39
* Jackson et al. (2008) Jackson, B., Greenberg, R. & Barnes, R. 2008. Astrophys. J., 678:1396-1406.
* King et al. (2019) King, G.W., Wheatley, P.J., Bourrier, V. & Ehrenreich, D. 2019. MNRAS, 484, L49
* Kunovac Hodžić et al. (2020) Kunovac Hodžić, V., Triaud, A.H.M.J., Cegla, H.M. & Chaplin, W.J. 2020. arXiv:2007.06410v1.
* Linsky et al. (2010) Linsky, J.L., Yang, H., France, K., Froning, C.S., Green, J.C., Stocke, J.T. & Osterman, S.N. 2010. Astrophys. J., 717:1291
* Madhusudhan et al. (2020) Madhusudhan, N., Nixon, M.C., Welbanks, L., Piette, A.A.A. & Booth, R.A. 2020. Astrophys. J. Lett., 891:L7
* Mordasini et al. (2015) Mordasini, C., Mollière, P., Dittkrist, K.-M., Jin, S. & Alibert, Y. 2015. Int. J. Astrob., 14:201
* Moses et al. (2013) Moses, J.I., Line, M.R., Visscher, C., Richardson, M.R., Nettelmann, N., et al. 2013. Astrophys. J., 777:34
* Hu & Seager (2014) Hu, R. & Seager, S. 2014. Astrophys. J., 784:63
* Nettelmann et al. (2011) Nettelmann, N., Fortney, J.J., Kramm, U. & Redmer, R. 2011. Astrophys. J., 733:2
* Otegi et al. (2020) Otegi, J.F., Bouchy, F. & Helled, R. 2020. A&A, 634, A43
* Owen & Wu (2013) Owen, J.E. & Wu, Y. 2013. Astrophys. J., 775:105
* Tremblin & Chiang (2013) Tremblin, P. & Chiang, E. 2013. MNRAS, 428:2565
* Poser et al. (2019) Poser, A.J., Nettelmann, N. & Redmer, R. 2019. Atmosphere, 10:664
* Pu & Valencia (2017) Pu, B. & Valencia, D. 2017. The Astrophysical Journal, Volume 846, Issue 1, article id. 47, 12 pp.
* Rogers & Seager (2010) Rogers, L.A. & Seager, S. 2010. Astrophys. J., 716:1208
* Rogers (2015) Rogers, L.A. 2015. Astrophys. J., 801:41
* Scheucher et al. (2020) Scheucher, M., Wunderlich, F., Grenfell, J.L., Godolt, M., Schreier, F., et al. 2020. The Astrophysical Journal, Volume 898, Issue 1, id.44
* Seager et al. (2007) Seager, S., Kuchner, M., Hier-Majumder, C.A. & Militzer, B. 2007. Astrophys. J., 669:1279-1297
* Shaikhislamov et al. (2020a) Shaikhislamov, I.F., Khodachenko, M.L., Lammer, H., Berezutsky, A.G., Miroshnichenko, I.B. & Rumenskikh, M.S. 2020. MNRAS, 491:3435
* Shaikhislamov et al. (2020b) Shaikhislamov, I.F., Fossati, L., Khodachenko, M.L., Lammer, H., García Muñoz, A., Youngblood, A., Dwivedi, N.K. & Rumenskikh, M.S. 2020. A&A, 639:A109
* Thorngren & Fortney (2018) Thorngren, D.P. & Fortney, J.J. 2018. The Astronomical Journal, Volume 155, Issue 5, article id. 214, 10 pp. (2018).
* Tsiaras et al. (2019) Tsiaras, A., Waldmann, I.P., Tinetti, G., Tennyson, J. & Yurchenko, S.N. 2019. Nature Astronomy, 3:1086
* Valencia et al. (2010) Valencia, D., Ikoma, M., Guillot, T. & Nettelmann, N. 2010. A&A 516:A20
* Valencia et al. (2013) Valencia, D., Guillot, T., Parmentier, V. & Freedman, R.S. 2013. Astrophys. J., 775:10
* Verner & Yakovlev (1995) Verner, D.A. & Yakovlev, D.G. 1995, A&A Suppl. 109:125
* Vidal-Madjar et al. (2003) Vidal-Madjar, A., Lecavelier des Etangs, A., Désert, J.-M., Ballester, G.E., Ferlet, R., Hébrard, G. & Mayor, M. 2003. Nature, 422, 143
* Vidal-Madjar et al. (2004) Vidal-Madjar, A., Désert, J.-M., Lecavelier des Etangs, A., Hébrard, G., Ballester, G. E., et al. 2004. The Astrophysical Journal, Volume 604, Issue 1, pp. L69-L72.
* Xuan & Wyatt (2020) Xuan, J.W. & Wyatt, M.C. 2020. Monthly Notices of the Royal Astronomical Society, Volume 497, Issue 2, pp.2096-2118.
###### Abstract
The possibility to tune, through the application of a control gate voltage,
the superconducting properties of mesoscopic devices based on
Bardeen-Cooper-Schrieffer metals was recently demonstrated. Despite the
extensive experimental evidence obtained on different materials and
geometries, a description of the microscopic mechanism at the basis of such an
unconventional effect has not been provided yet. This work discusses the
technological potential of gate control of superconductivity in metallic
superconductors and reviews the experimental results which provide information
regarding a possible thermal origin of the effect. In the first place, we
review experiments performed on high-critical-temperature elemental
superconductors (niobium and vanadium) and show how devices based on these
materials can be exploited to realize basic electronic tools such as, e.g., a
half-wave rectifier. In a second part, we discuss the origin of the gating
effect by showing the gate-driven suppression of the supercurrent in a
suspended titanium wire and by providing a comparison between thermal and
electric switching-current probability distributions. Furthermore, we discuss
the cold field-emission of electrons from the gate by means of finite-element
simulations and compare the results with experimental data. Finally, the
presented data provide a strong indication that a thermal origin of the gating
effect is unlikely.
###### keywords:
superconductivity, Josephson effect, gate-controlled
Electrostatic control of superconductivity in mesoscopic all-metallic devices
Claudio Puglia$^{1,2,*}$, Giorgio De Simoni$^{2}$ and Francesco Giazotto$^{2}$
Correspondence:<EMAIL_ADDRESS>
## 1 Introduction
In the last two years, the impact of gate voltage on the superconducting
properties of Bardeen-Cooper-Schrieffer (BCS) Bardeen (1962) metallic
superconductors has been investigated De Simoni et al. (2018); Paolucci et al.
(2018, 2019a, 2019b, 2019c); Bours et al. (2020). In these studies the authors
analyze the effect of the electrostatic gating, generating an electrostatic
field of the order of $10^{8}$ V/m and, at the same time, modifying negligibly
the concentration of the surface charge carriers. Particularly, ampbipolar
suppression of supercurrent has been demonstrated in all-metallic
superconductor wires De Simoni et al. (2018), nano-constriction Josephson
junctions (JJs) Paolucci et al. (2018, 2019a), fully-metallic Superconducting
Quantum Interference Devices (SQUID) Paolucci et al. (2019c), and proximity
nanojunctions De Simoni et al. (2019). Such an unconventional gating effect in
BCS superconducting systems is a first step toward easy-to-fabricate, highly
scalable technologies for both classical and quantum electronics. The aim of
this work is to review the most
recent advances on the modification of superconducting properties in
mesoscopic structures via the application of a control gate voltage. For such
an effect, a satisfactory microscopic theory has not been provided yet.
Indeed, the experimental observations cannot be accounted for within the
conventional BCS framework, in which the superconducting properties are
negligibly affected by electric fields Virtanen et al. (2019). Although some
theories have been proposed, including electric-field-driven Rashba orbital
polarization Mercaldo et al. (2020) and gate-driven Schwinger excitation of
quasiparticles from the BCS vacuum Solinas et al. (2020), they have not been
experimentally verified yet. The injection of high-energy field-
emitted cold-electrons into the weak-link was also hypothesized to be at the
origin of the gating effect Alegria et al. (2021); Ritter et al. (2020).
Nevertheless, even in the presence of the latter mechanism, several
experimental results are not compatible with a mere power injection resulting
in an overheating of the superconductor De Simoni et al. (2018); Paolucci et
al. (2018); Puglia et al. (2020); Rocci et al. (2020).
The article is organized as follows: Section 2 displays evidence of a gate-
driven supercurrent suppression in vanadium and niobium Dayem bridges (DBs).
Moreover, different technological implementations based on these materials are
presented. In the same section, two further topics are addressed. The former is
the modification of the switching current probability distribution as a
function of the electric field. The latter is the influence of the substrate
on the gating effect in titanium weak-links. Section 3 analyses the evidence
against a thermal origin of the supercurrent suppression. Finally, Section 4
provides a summary of the results presented in this review, recapping the main
achievements and proposing new experiments to deepen the understanding of the
gating effect.
## 2 Gate-driven supercurrent suppression in Nb and V nanojunctions
In this section we present a series of experiments, performed on niobium and
vanadium superconducting weak-link devices, aimed at extending the range of
materials suitable for gated-superconductor applications to elemental
superconductors with critical temperatures higher than the liquid-helium
temperature ($\sim 4.2$ K). The presented results demonstrate the possibility to
implement gate-controlled all-metallic superconducting electronics Likharev
(2012) compatible with industrial standards.
### 2.1 Niobium gate-controlled transistor
All-metallic supercurrent transistors consist of a superconducting mesoscopic
channel realized with BCS metals, equipped with gate electrodes
lithographically fabricated at a distance of a few nanometers from the channel.
The gate electrode is polarized through the application of an either positive
or negative control gate voltage. Niobium gate-controlled transistors
typically consist of an 8-$\mu$m-long, 2.5-$\mu$m-wide wire interrupted by a
50-nm-wide, 120-nm-long constriction. A co-planar, 80-nm-wide metallic gate
was realized 60 nm away from, and aligned with, the DB weak-link. The thin
film was deposited on a sapphire (Al2O3) substrate via DC magnetron sputtering
of a 10/40-nm-thick Ti/Nb bilayer. The former metal was necessary to increase the
adhesion and the mechanical strength of the metallic film. A pseudo-color
scanning electron micrograph (SEM) is shown in Figure 1(a).
Figure 1: (a) Pseudo-color SEM of a representative niobium gate-tunable
transistor with the four-probes bias scheme. The junction and the wire are in
false-colored orange, the gate in blue. (b) $I$ vs $V$ curves for different
gate voltages $V_{G}$ at a bath temperature of 20 mK. Characteristics are
horizontally shifted for clarity. A clear symmetric suppression of $I_{S}$ is
visible as the modulus of the gate voltage $|V_{G}|$ is increased. (c) $I_{S}$
vs $V_{G}$ for several bath temperatures $T$ in a range from $20$ mK to $3$ K.
The switching current $I_{S}$ values were obtained from $50$ repetitions of
the $I$ vs $V$ characteristics. Error bars are smaller than the
data symbol.
The device shows a Dayem bridge normal state resistance $R_{DB}\simeq 30\
\Omega$ and a critical temperature $T_{DB}\simeq 3$ K De Simoni et al. (2020).
On the other hand, the niobium banks inherit the critical temperature of the
pristine thin film $T_{Nb}\simeq 7.9$ K De Simoni et al. (2020). The smaller
critical temperature of the Dayem bridge is due to its lateral size which is
comparable with the niobium coherence length Stromberg and Swenson (1962);
Finnemore et al. (1966). $T_{Nb}$, instead, is about $80$% of the conventional
critical temperature for Nb because of the proximity effect of the adhesion
titanium layer. The conventional dissipationless transport is displayed by
plotting the current $I$ vs voltage $V$ curves acquired at a bath temperature
$T=20$ mK and a gate voltage $V_{G}=0$ V, as shown in Figure 1(b). The Dayem
bridge switching current is $I_{S}\simeq 30\ \mu$A. The forward and backward
$I$ vs $V$ characteristics show the typical hysteretic behavior due to
the retrapping current $I_{R}$ Giazotto et al. (2006); Tinkham (2004).
The suppression of the switching current via the application of a gate voltage
was demonstrated by measuring the current $I$ vs voltage $V$ characteristics
with a gate voltage in the range from $-40$ to $40$ V at a bath temperature of
$T=20$ mK. Figure 1(b) displays $V(I)$ curves at several gate voltages. A
shaded grey area underlines the suppression region. The quenching
of the supercurrent is symmetric for $V_{G}\longrightarrow-V_{G}$ for bath
temperatures between 20 mK and 3 K as shown in Figure 1(c). As reported in
conventional experiments De Simoni et al. (2018, 2019); Paolucci et al. (2018,
2019a), the widening of the plateau in which the gate voltage is not effective
is clearly visible as the temperature rises. The suppression of the
supercurrent can be observed up to a temperature of $3$ K with complete
suppression at $|V_{G}|=40$ V for $T>2$ K.
#### 2.1.1 Rectification properties
Based on the peculiar shape of the $R$ vs $V_{G}$ characteristic De Simoni et
al. (2020), it is possible to implement a superconducting diode scheme. In
particular, an alternating gate voltage $V_{AC}$ can be rectified by exploiting
the sharpness of the super-to-normal transition when the junction is current
biased. In this configuration, a sinusoidal gate voltage is transformed into a
square-wave voltage drop across the junction. Such a peculiar system response
is shown in Figure 2(a). The gate voltage signal is the sum of $V_{AC}$ and of
a DC pre-bias voltage used to define a switching-current working range
$I_{S}(V_{G}^{max})<I_{B}<I_{S}(V_{G}^{min})$. The oscillation of $I_{S}$
above and below $I_{B}$ results in time-dependent normal-to-super transitions
and vice versa, which generate a time-dependent voltage drop $V(t)$ across
the junction, as shown in Figure 2(b). The output signal maintains the
periodicity of $V_{AC}$, and the duty cycle is defined by the time fraction for
which the condition $I_{S}<I_{B}$ is satisfied.
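This rectification condition can be illustrated with a short numerical sketch. Everything below (the piecewise-linear $I_{S}(V_{G})$ suppression model and all parameter values) is an illustrative assumption, not a fit to the measured device:

```python
import numpy as np

# Phenomenological switching current vs gate voltage (illustrative):
# flat plateau below a threshold, then linear suppression to zero.
def I_S(V_G, I_S0=30e-6, V_th=10.0, V_off=40.0):
    s = np.clip((np.abs(V_G) - V_th) / (V_off - V_th), 0.0, 1.0)
    return I_S0 * (1.0 - s)

# Gate signal: DC pre-bias plus a sinusoidal AC component.
t = np.linspace(0.0, 2.0, 2000)            # two periods (arbitrary units)
V_G = 25.0 + 10.0 * np.sin(2 * np.pi * t)  # V_DC + V_AC sin(wt)

# Bias current placed inside the working range
# I_S(V_G^max) < I_B < I_S(V_G^min).
I_B = 0.5 * (I_S(35.0) + I_S(15.0))

# Junction voltage: ~0 in the superconducting state,
# R_N * I_B when I_S(V_G(t)) < I_B (normal state).
R_N = 30.0                                  # normal-state resistance (ohm)
V = np.where(I_S(V_G) < I_B, R_N * I_B, 0.0)

duty_cycle = np.mean(V > 0)                 # fraction of time in the high state
```

With the bias current inside the working range, the output takes only the two values $0$ and $R_{N}I_{B}$: a square wave with the period of the gate drive, as described above.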
Figure 2: (a) Sketch of the operation principle exploited by the niobium-based
rectifier. The system is DC current biased with $I_{B}$ (horizontal red dashed
line in (a)) and the side gate carries a signal consisting of an alternating
part $V_{AC}$ (blue/green line) and a DC part $V_{DC}$. The resulting
characteristic $I_{S}(t)$ (purple/yellow curve) depends on the amplitude of
$V_{AC}$ and on the value of $V_{DC}$, and generates recurring super-to-normal
and normal-to-super state transitions. In this way, the voltage signal across
the junction $V$ oscillates between a low and a high state with the same
period as $V_{AC}$. (b) Voltage $V$ acquired across the constriction in a
lock-in four-wire scheme. The alternating voltage $V_{AC}$ is the reference
signal of the lock-in amplifier. The system is current biased with
$I_{B}=2.5\ \mu$A. $V$ is almost zero until $I_{S}(V_{G})<I_{B}$. The peaks
represent the rectification effect of the AC gate voltage signal.
Notably, the output voltage depends directly on the amplitude of the AC input
signal thanks to the shape of the $R$ vs $V_{G}$ characteristic. In the
configuration shown in Figure 2, our system acts as a half-wave rectifier,
which could be used in a vast range of devices such as diodes and detectors.
In the next paragraph, further evidence of the rectification properties of
such systems is provided, with emphasis on the versatility of the technology.
### 2.2 Vanadium gate-controlled transistor
The vanadium gate-controlled transistors typically consist of a planar 60-nm-
thick, 160-nm-long, 90-nm-wide Dayem bridge Josephson junction flanked by a
70-nm-far, 120-nm-wide side gate. The exploited bridge geometry is similar to
that already discussed for Nb-based devices. The system was realized by means
of one-step electron-beam lithography performed on a silicon dioxide (SiO2)
substrate, followed by the electron-beam evaporation of high-purity vanadium
at a rate of 0.36 nm/s in an ultra-high-vacuum chamber. Figure 3 shows the
pseudo-color SEM of a typical vanadium gate-controlled transistor.
Figure 3: (a) Pseudo-color SEM of a representative vanadium gate-tuned
transistor. The junction and the wire are colored in orange, the gate in blue.
(b) Current $I$ vs voltage $V$ curves of a typical device acquired at several
bath temperatures from $2.0$ to $3.8$ K. The characteristics are horizontally
shifted for clarity. (c) Switching current $I_{S}$ vs gate voltage $V_{G}$
curves for several bath temperatures in the range between $2.0$ and $3.3$ K.
$I_{S}$ values were obtained by averaging $25$ acquisitions.
The device shows a normal state resistance $R_{N}\simeq 106\ \Omega$, a
switching current at 2 K of $I_{S}=1.42$ mA and a critical temperature
$T_{C}\simeq 3.6$ K Puglia et al. (2020) as displayed in Figure 3(b).
The suppression of the supercurrent as a function of the gate voltage was
demonstrated by measuring the $I_{S}$ vs $V_{G}$ characteristics. Figure 3(c)
shows the bilateral suppression of the supercurrent down to total quenching
for $|V_{G}|\simeq 10$ V in a range of bath temperature from $2$ to $3.2$ K.
Notably, the sharper suppression of $I_{S}$ observed for positive values of
the gate voltage is in contrast with a possible cold field-emission origin of
the quenching effect. Indeed, the device geometry could facilitate electron
extraction from the gate, which occurs at negative gate bias values
Alegria et al. (2021); Ritter et al. (2020). This consideration is discussed
in depth in Section 3.
#### 2.2.1 Half-wave rectifier
A time-resolved characterization of the device was carried out using both
sinusoidal and square-wave gate voltages. Figure 4(a) shows the bias scheme
of the measurement setup, consisting of a DC bias current, a DC voltage
generator and an ADC/DAC digital board providing the AC gate voltage signal.
The latter pair of voltage generators provides a $V_{G}(t)=V_{DC}+V_{AC}(t)$
gate signal, setting the correct operation point in the parameter space (see
Figure 4(a)).
Figure 4: (a) Bias scheme for AC measurements. The gate voltage is generated
by adding up a DC voltage $V_{DC}$ and an AC arbitrary-waveform voltage
$V_{AC}$. The ADC/DAC board that provides the AC signal performs a real-time
measurement of the voltage drop $V$ across the junction. (b) Voltage $V$ vs
current $I$ curves for chosen values of $V_{G}$. The two dot pairs display the
operation points of the system for two bias currents $I_{B}=18$, $71\ \mu$A.
The time-dependent response of the system $V(t)$ is represented with the same
colors as the dot pairs. The gate voltage signal $V_{G}(t)$ consists of a DC
voltage $V_{DC}=10$ V and an AC square-wave voltage with amplitude $V_{AC}=5$
V. The bath temperature for these measurements was $T=3$ K.
The time dependent voltage drop $V(t)$ across the junction was acquired as a
function of the time-dependent gate voltage $V_{G}(t)$ obtaining a zero-signal
(low-state) when $I_{S}\left[V_{G}(t)\right]>I_{B}$ (superconducting state).
By contrast, when $I_{S}\left[V_{G}(t)\right]<I_{B}$ the junction switches to
the dissipative state and a voltage drop different from zero is obtained
across the constriction (high-state). We measured the system response to a
transistor–transistor-logic-like (TTL) square-wave gate signal composed of a
$V_{DC}=10$ V DC voltage bias plus a $V_{AC}=5$ V square wave with a frequency
up to $\sim 50$ Hz, as shown in Figure 4(b).
The low and high states are highlighted on top of the $I$ vs $V$
characteristics (obtained for $V_{G}$ ranging from $10$ to $15$ V) with
pairs of dots of the same colors in Figure 4(b). The $V(t)$ signal resulting
from the $V_{G}(t)$ excitation is shown in Figure 4(d) for two bias currents
($I_{B}=18,\ 71\ \mu$A). We note that the output voltage is proportional to
the bias current. It is worth emphasizing again that $V(t)$ maintains the
shape of the input voltage signal with frequencies, in principle, limited only
by $f_{\Delta}$ Barone and Paternò (1982); Tinkham (2004).
Finally, we show the response of the system to a sinusoidal gate voltage
signal. The measurement setup is the same as for the square-wave
characterization shown in Figure 4(a). The excitation consists of a $V_{AC}$
sine wave with amplitude in the range between $1.0$ and $3.5$ V summed with a
$V_{DC}=11$ V DC voltage bias, as shown in Figure 5(b). The bias current for
this experiment was chosen to be $I_{B}=72\ \mu$A in order to have a sharp
super-to-normal transition and a linear dependence between $R$ and $V_{G}$.
The continuous variation of the gate voltage provides a continuous variation
of the junction resistance, in accordance with the $R(V_{G})$ curves Puglia
et al. (2020). As a consequence, the system lies in the zero-voltage state for
$I_{S}\left[V_{G}(t)\right]>I_{B}$, and a voltage drop across the junction
appears when the condition $I_{S}\left[V_{G}(t)\right]\simeq I_{B}$ is
satisfied. When $I_{S}\left[V_{G}(t)\right]<I_{B}$, the voltage drop across
the nanojunction increases because of the gate-driven evolution of the
constriction resistance and, eventually, reaches saturation at the asymptotic
value of the normal-state resistance. Figure 5(a) displays the voltage drop
across the junction as a function of both the bias current and the gate
voltage. The transition edge is highlighted by a dashed red line. The working
point set by $I_{B}$ and $V_{DC}$ is shown by white dashed lines. The
time-resolved $V(t)$ traces for $V_{AC}$ equal to $1.0$ and $3.5$ V are
reported in Figure 5(c), (d). Notably, the rectification threshold and the
portion of the input signal that is rectified can be selected by setting both
$I_{B}$ and $V_{AC}$.
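The selectable threshold can be sketched numerically. In the snippet below, the sigmoid-shaped $R(V_{G})$ model and every parameter value are illustrative assumptions; only the $106\ \Omega$ normal-state resistance is taken from the text:

```python
import numpy as np

R_N = 106.0                  # normal-state resistance (ohm), from the text
V_half, width = 11.0, 1.5    # transition midpoint and width (V); assumed
I_B = 72e-6                  # bias current (A)

# Assumed smooth super-to-normal evolution of the junction resistance.
def R(V_G):
    return R_N / (1.0 + np.exp(-(np.abs(V_G) - V_half) / width))

t = np.linspace(0.0, 1.0, 1000, endpoint=False)   # one drive period

def rectified_fraction(V_AC, V_DC=11.0, level=0.6):
    """Fraction of the period for which the output I_B * R(V_G(t))
    exceeds `level` times its saturation value I_B * R_N."""
    V_G = V_DC + V_AC * np.sin(2 * np.pi * t)
    V_out = I_B * R(V_G)
    return float(np.mean(V_out > level * I_B * R_N))

# Larger AC amplitudes push a larger portion of the sine wave above threshold.
fractions = [rectified_fraction(a) for a in (1.0, 2.0, 3.5)]
```

The monotone growth of `fractions` with $V_{AC}$ mirrors the selectable rectified portion discussed in the text.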
Figure 5: (a) Color-plot of the voltage drop $V(t)$ across the Dayem bridge
constriction versus the gate voltage $V_{G}$ (X-axis) and the bias current
$I_{B}$ (Y-axis). The three dots represent the zero-signal state (yellow), the
transition to the normal state (green) and the maximum $V(t)$ signal (blue),
respectively. The vertical dashed red line shows the operating curve defined
by the bias current. The yellow-to-blue sinusoid represents the AC + DC gate
voltage excitation $V_{G}(t)$. (c) and (d) Time-dependent voltage drop across
the constriction for two amplitudes of the alternating component of the gate
voltage, $V_{AC}=\ 3.5,\ 1.0$ V. The color map is the same in all the panels.
The measurements were performed at a bath temperature of $T=3$ K.
These characteristics, typical of a half-wave rectifier, are here realized
for the first time by exploiting an all-metallic gated superconducting
Dayem bridge. We speculate that the described rectifying behavior could be
exploited to rectify incoming radiation coupled to the gate through an
antenna, realizing a gate-controlled version of a transition-edge sensor
Ullom and Bennett (2015); Paolucci et al. (2021); Gol’tsman et al. (2001).
This device could operate in an extremely wide frequency range, spanning from
below 1 GHz to about 1 THz. This interval is particularly relevant both for
technological applications and for fundamental research (e.g., in
astrophysics for cosmic microwave background detection).
#### 2.2.2 Amplification properties
The vanadium Dayem bridge, thanks to the peculiar $R(V_{G})$ characteristics,
is suitable for the realization of an amplifier. The gain parameter of a gated
DB device is defined as the ratio between peak-to-peak amplitudes of gate
voltage input and output voltage drop across the junction
$g=\frac{V_{out}}{V_{in}}$. For our system $V_{out}=R(V_{G})I_{B}$ is the
voltage drop across the junction and is directly proportional to the
resistance and the current bias. The $V_{in}$ is obtained by the ratio of the
width of SCPD Puglia et al. (2020) and the transcoductance
$\tau=\frac{dI_{S}}{dV_{G}}$. For the devices taken into account in this
section $g\sim 7$ with a typical power consumption of $\sim 40$ nW. It is
worth to highlight, on the one hand, that $g$ is of the same order of
magnitude of the conventional semiconductor cold amplifier Ivanov et al.
(2011); Oukhanski et al. (2003). Such a result, on the other hand, was
obtained with a power consumption of about three order of magnitude smaller
than the typical semiconducting counterpart. Furthermore, a chain of $N$
rectifiers can be obtained by deliver the gate electrode of the n-th device
with the output voltage of the ($n-1$)-th DB, obtaining a total gain equal to
$g^{N}$.
The possibility of tuning the critical supercurrent via conventional gating
paves the way to a wide range of applications. Indeed, gate-controlled devices
could be exploited to realize tunable magnetometers Clarke and Braginski
(2004); Giazotto et al. (2010) and heat control systems Giazotto and Martínez-
Pérez (2012); Fornieri and Giazotto (2017). Furthermore, by exploiting the
gating effect, a voltage-controlled version of the nanocryotron McCaughan
(2018); McCaughan and Berggren (2014) can be implemented. The latter is a
three-terminal superconducting device in which a localized switching-current
suppression (triggered by injecting a control current which generates a
localized high-temperature region, a hotspot, by Joule heating) destroys the
superconducting properties of the nearby system. The gated version of the
nanocryotron, the so-called EF-Tron Paolucci et al. (2019c), is implemented
through the parallel combination of a resistor and a gated superconductor.
Unlike current-driven devices McCaughan and Berggren (2014); Morpurgo et al.
(1998), the EF-Tron is expected not to be limited by the characteristic time
scale of thermal effects, which prevents the use of signals with frequencies
larger than about hundreds of MHz at cryogenic temperatures Giazotto et al.
(2006). In this view, it is worth discussing the role of a possible direct
power injection into the gated device (driven, e.g., by a gate-superconductor
leakage current) that could produce an increase of the electronic temperature,
detrimental for device performance.
## 3 Non-thermal origin of supercurrent suppression in gated all-metallic
superconducting devices
### 3.1 SCPDs in a titanium gate-controlled transistor
In a current-biased JJ, the transition between the superconducting and the
normal state for fixed values of external parameters, e.g. temperature,
electric and magnetic field, is triggered by phase-slip events, i.e., local
random $2\pi$ jumps of the macroscopic quantum phase $\phi$ Bezryadin (2012).
The stochastic nature of such events leads to a non-univocal definition of the
switching current, whose value is distributed according to the switching
current probability distribution (SCPD). The investigation of the SCPD of a JJ
is therefore exploited to probe the dynamics of the phase slips. Here, we
discuss an experiment in which a well-established technique is adopted to
probe the impact of the gate voltage on the frequency of phase-slip events in
gate-controlled metallic titanium (Ti) Josephson weak links, by comparison
with the conventional theory Kurkijärvi (1972); Bezryadin and Goldbart (2010);
Fulton et al. (1972).
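The standard KFD construction links a phase-slip rate $\Gamma(I)$ to the SCPD: for a current ramped at rate $v_{I}=dI/dt$, the switching probability density is $P(I)=\frac{\Gamma(I)}{v_{I}}\exp\left[-\int_{0}^{I}\frac{\Gamma(I')}{v_{I}}\,dI'\right]$. The sketch below evaluates this numerically for a thermally activated rate; the Arrhenius model and every parameter are illustrative assumptions, not values from the experiment:

```python
import numpy as np

# Illustrative thermally activated phase-slip rate with a washboard-like
# barrier dU(I) = dU0 * (1 - I/I_c)**(3/2).
Omega = 1e9          # attempt frequency (Hz); assumed
dU0_over_kT = 50.0   # zero-bias barrier height over k_B T; assumed
I_c = 6.0e-6         # critical current (A); assumed
v_I = 1e-3           # current ramp rate dI/dt (A/s); assumed

I = np.linspace(0.0, I_c, 20000)
dI = I[1] - I[0]
Gamma = Omega * np.exp(-dU0_over_kT * (1.0 - I / I_c) ** 1.5)

# KFD switching probability density.
cumulative = np.cumsum(Gamma / v_I) * dI
P = (Gamma / v_I) * np.exp(-cumulative)

norm = np.sum(P) * dI          # close to 1: switching eventually occurs
mean_I_S = np.sum(P * I) * dI  # mean switching current, below I_c
sigma = np.sqrt(np.sum(P * (I - mean_I_S) ** 2) * dI)  # SCPD width
```

The width $\sigma$ of the resulting distribution is set by how steeply $\Gamma(I)$ rises, which is the quantity the SCPD measurements probe.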
The device chosen to study the evolution of the SCPD under gating action
consists of a titanium Dayem bridge. Such JJs consist of a $30$-nm-thick,
$10$-$\mu$m-long, $2.5$-$\mu$m-wide wire interrupted by a constriction. This
$30$-nm-thick, $150$-nm-long, $120$-nm-wide narrow structure was aligned with
a planar, $80$-nm-far, $140$-nm-wide metallic gate. The sample was fabricated
by means of single-step electron-beam lithography on a single-crystal sapphire
(Al2O3) substrate with a typical resistivity larger than $10^{10}\
\Omega\cdot$cm. The metal was deposited by means of electron-beam evaporation
at a rate of $1.2$ nm/s. Figure 6(a) shows a pseudo-color scanning electron
micrograph.
Figure 6: (a) Pseudo-color scanning electron micrograph and bias scheme of a
representative Ti gate-controlled transistor. The superconducting wire and the
Dayem bridge constriction are colored in orange and the gate electrode in
blue. (b) Evolution of the switching current $I_{S}$ as a function of the gate
voltage. The switching current was acquired 50 times in order to obtain the
average values shown in the picture.
The device shows a normal state resistance $R_{N}\simeq 550\ \Omega$, a
switching current at 20 mK of $I_{S}=6.0\ \mu$A and a critical temperature
$T_{C}\simeq 310$ mK Puglia et al. (2020).
The dependence of $I_{S}$ Paolucci et al. (2018, 2019a, 2019c, 2019b); De
Simoni et al. (2018, 2019, 2020); Puglia et al. (2020, 2020) on the gate
voltage was investigated by acquiring the $I_{S}$ vs $V_{G}$ characteristics
as a function of the bath temperature. Figure 6(b) shows that the supercurrent
vanishes for $\left|V_{G}\right|\simeq 34$ V, and that this value appears to
be independent of the bath temperature. As the temperature increases,
$I_{S}^{0}=I_{S}(V_{G}=0)$ lowers and a wider range over which the gate
voltage has no effect on $I_{S}$ is observed. This latter behavior is the same
as that obtained on Ti and Al superconducting gate-controlled devices Paolucci
et al. (2018); De Simoni et al. (2018).
To characterize the effect of the temperature on a superconducting Dayem-
bridge JJ, SCPDs were measured at different values of the thermal bath
temperature. The distributions were reconstructed by drawing a histogram based
on $10^{4}$ switching current acquisitions for each value of the bath
temperature $T$.
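Reconstructing an SCPD and its width from repeated switching-current acquisitions is straightforward; below is a minimal sketch operating on synthetic data (the Gaussian switching-current model and its parameters are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic stand-in for 10^4 switching-current acquisitions (A).
I_S_samples = rng.normal(loc=6.0e-6, scale=75e-9, size=10_000)

# Empirical SCPD: a normalized histogram of the switching currents.
density, edges = np.histogram(I_S_samples, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Distribution width sigma, used below to discriminate phase-slip regimes.
sigma = np.std(I_S_samples, ddof=1)
```

Repeating this procedure at each bath temperature yields the $\sigma$ vs $T$ curve used in the analysis.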
Figure 7(a), (b), (c) shows the evolution of these so-called thermal-SCPDs in
a bath temperature range between $20$ and $300$ mK. The dependence of the
shape of the thermal-SCPDs is analyzed through the conventional Kurkijärvi-
Fulton-Dunkleberger (KFD) theory Kurkijärvi (1972); Fulton and Dynes (1971)
with a fit procedure Puglia et al. (2020). First, the different phase-slip
regimes were identified through the evolution of the distribution standard
deviation $\sigma$ as a function of $T$. The $\sigma$ vs $T$ curve Bezryadin
(2012); Puglia et al. (2020) is flat at low temperature in the Quantum Phase
Slip (QPS) regime, linear in $T$ in the Thermally Activated Phase Slip (TAPS)
regime, and decreasing in the Multiple Phase Slip (MPS) regime Bezryadin et
al. (2000); Bezryadin and Goldbart (2010); Bezryadin (2012); Giordano (1988);
Giordano and Schuler (1989); Kurkijärvi (1972); Fulton and Dunkleberger
(1974); Fulton and Dynes (1971). The temperature $T_{Q}$ separating the QPS
and TAPS regimes occurs at about $T\simeq 110$ mK, while the crossover between
the TAPS and MPS regimes appears at $T_{M}\simeq 160$ mK.
Figure 7: (a) SCPDs vs current $I$ in the QPS regime for temperatures between
$20$ and $90$ mK. The inset shows the evolution of the $\sigma$ vs $T$
characteristic in the QPS regime. (b) SCPDs vs current $I$ in the TAPS regime
for temperatures between $120$ and $150$ mK. The inset shows the evolution of
the $\sigma$ vs $T$ characteristic in the TAPS regime. (c) SCPDs vs current
$I$ in the MPS regime for temperatures between $160$ and $300$ mK. The inset
shows the evolution of the $\sigma$ vs $T$ characteristic in the MPS regime.
For each SCPD the total sampling number of $I_{S}$ is $10^{4}$. Dotted lines
represent the best-fit curves obtained with the KFD model.
Although these devices show the conventional evolution of the phase-slip
dynamics as a function of the temperature, the gate voltage drives the
junction into a different regime. Figure 8(a) shows vertically shifted SCPDs
collected for several values of the gate voltage at $T=20$ mK. In particular,
the SCPDs overlap for $V_{G}<8$ V, and a low-current tail appears for
$8<V_{G}<14$ V. In addition, the distributions widen considerably for
$14<V_{G}<24$ V, while for high gate voltage values, i.e. $V_{G}>24$ V, the
SCPDs turn out to narrow. In this electrostatically driven scenario, the
$\sigma$ vs $V_{G}$ curve takes the place of the conventional $\sigma$ vs $T$
characteristic. As shown in Figure 8(b), the $\sigma(V_{G})$ curve exhibits a
region where $\sigma$ is constant, thereby showing a marginal effect of the
gate voltage on the phase slips for small $V_{G}$ values. This behavior is
similar to the QPS regime. Therefore, we identify a crossover gate voltage
$V_{Q}\simeq 8$ V between the former and the Electric Activated Phase Slip
(EAPS) regime, where $\sigma$ grows up to $\sim 200$ nA as the gate voltage
increases. Notably, $\sigma$ starts to increase at the same voltage at which
the switching current begins to be affected by the electric field. Such
evidence seems to suggest that, regardless of the origin of the $I_{S}$
suppression, the latter is associated with a corresponding rise in the number
of phase-slip events. Finally, for greater values of the gate voltage (i.e.,
$V_{G}>V_{E}\simeq 14$ V), the standard deviation $\sigma$ decreases and
saturates to $\sim 75$ nA.
Figure 8: (a) SCPDs vs current $I$ in the EAPS regime with gate voltages
ranging from $0$ V to $21$ V. The inset shows the evolution of the $\sigma$ vs
$V_{G}$ characteristic in the EAPS regime. (b) SCPDs vs current $I$ in the MPS
regime with gate voltages ranging from $24$ V to $30$ V. The inset shows the
evolution of the $\sigma$ vs $V_{G}$ characteristic in the MPS regime. For
each SCPD the total sampling number of $I_{S}$ is $10^{4}$. The curves are
horizontally and vertically offset for clarity.
The starkly different behavior of thermal- and electric-SCPDs is clearly
displayed in Figure 9(a), where three $I_{S}$-matched pairs of thermal and
electric distributions are plotted in the same graph for comparison. The
$I_{S}$-paired SCPDs have remarkably different widths and shapes. Such
behavior seems to stem from a gate-driven strong non-equilibrium state induced
in the junction. Concerning the standard deviation of the distributions, on
the one hand, the comparison between the $\sigma$ vs $I_{S}$ curves extracted
from the thermal- and electric-SCPD series, shown in Figure 9(b), displays a
qualitatively similar behavior. On the other hand, the electrically driven
SCPDs present a $\sigma$ that is, on average, around one order of magnitude
larger.
Figure 9: (a) $I_{S}$-matched SCPDs. Red/orange SCPDs were acquired with a
negligible electric field ($V_{G}=0$ V) at chosen temperatures, and blue/green
SCPDs were obtained at a bath temperature of $T=20$ mK for different gate
voltages. The values of $I_{S}$ are, from left to right, $2.2,\ 2.8,\ 4.0\
\mu$A. (b) $\sigma$ vs $I_{S}$ characteristics obtained for thermal (lower
curve) and electric (upper curve) distributions.
Indeed, if we assume that the voltage-driven broadening of the SCPD is due to
an increase of the electronic temperature, e.g. trivial Joule heating due to a
gate-DB current, we arrive at the absurd conclusion of an electronic
temperature higher than the critical temperature of the superconductor
Giazotto et al. (2006); Bezryadin (2012). This observation is reflected in the
impossibility of fitting the gate-activated distributions by means of a
conventional KFD transform, since the obtained parameters would be
nonphysical. Therefore, these data demonstrate, on the one hand, a strong link
between phase-slip events and the electric field. On the other hand, they
suggest a non-thermal origin of the switching current suppression: the action
of the gate voltage drives the DB into a state whose description is
incompatible with that of a superconductor heated through a voltage-driven
power injection at a thermal steady state with an electronic temperature
higher than that of the thermal bath.
### 3.2 Suspended titanium gate-controlled transistor
We have shown that it is not possible to describe the modification of the
SCPDs as a consequence of trivial overheating. In other words, the effect of
the gate is unlikely to be that of driving the superconductor into a higher
electronic-temperature steady state by mere power injection (driven by a
current injection). To further investigate the role of a possible injection of
current between the gate and the superconducting channel, fully suspended
gated superconducting nanobridges were tested. In conventional gated devices,
two gate-channel charge transport mechanisms might be present: diffusive
current injection through the substrate and ballistic cold-electron field
emission (CFE) through the vacuum. The suspended geometry permits the
exclusion of the first, leaving CFE as the only possible charge transport
mechanism.
This experiment was performed on titanium gated suspended wires. The devices
consist of a $70$-nm-thick, 1.7-$\mu$m-long nanobridge deposited on top of an
undoped, $130$-nm-thick suspended crystalline InAs Li et al. (2014); Iorio et
al. (2019) nanowire set on two Ti/Au-coated ($5/15$ nm) pillars of cross-
linked insulating PMMA. The wires were gated through two $\sim$350-nm-wide,
$40$-nm-far symmetric Ti side electrodes. Titanium was evaporated at a rate of
about 1.2 nm/s. Figure 10(a),(b) shows a 3D sketch and a pseudo-color scanning
electron micrograph of a typical device.
Such a suspended nanojunction shows four different superconducting transitions
Rocci et al. (2020), which can be interpreted as the switching of the
superconducting banks at $I_{B}\simeq 1.8\ \mu$A and of a series of three
junctions with switching currents $I_{S_{1}}\simeq 25$ nA, $I_{S_{2}}\simeq
150$ nA and $I_{S_{3}}\simeq 180$ nA, respectively. The geometry of the
junction and the multi-step fabrication process lead to the existence of this
series of three junctions in the nanobridge, owing to inhomogeneities in the
thickness of the titanium layer covering the InAs nanowire. Moreover, the
difference in switching currents resides in the variation of the cross-section
of the Ti film coating the wire and in the inhomogeneous anti-proximization of
the superconducting film due to the bottom gold layer.
Shifted $I$ vs $V$ characteristics of the bridge at selected gate voltages
from $-20$ to $20$ V are shown in Figure 10(c) at a temperature of 20 mK.
Figure 10(d),(e),(f) shows the evolution of the switching currents $I_{S_{i}}$
of the three junctions as a function of the gate voltage, extracted from the
$I$ vs $V$ curves measured at several bath temperatures.
Figure 10: (a) SEM image of a representative device. (b) The wire (blue) is
connected in a typical four-wire configuration. The switching current $I_{S}$
was decreased via the application of the same gate voltage
$V_{G}=V_{S_{1}}=V_{S_{2}}$ to the symmetric gate electrodes (green). (c)
Back-and-forth current $I$ vs voltage $V$ curves for selected gate voltage
$V_{G}$ values, acquired at a bath temperature of $T=20$ mK. The curves are
horizontally shifted for clarity. Grey regions highlight the decrease of the
switching currents $I_{S_{1}}$, $I_{S_{2}}$ and $I_{S_{3}}$. (d) $I_{S_{1}}$
vs gate voltage $V_{G}$ acquired at different bath temperatures $T$. (e) and
(f) show the gate voltage $V_{G}$ dependence of $I_{S_{2}}$ and $I_{S_{3}}$,
respectively. Error bars represent the standard deviation of $I_{S_{i}}$, with
$i=1,\ 2,\ 3$, calculated over $25$ repetitions.
Notably, as the temperature increases, the ineffectiveness plateau shrinks.
This observation is in stark contrast with previous works De Simoni et al.
(2018); Paolucci et al. (2018, 2019a), in which the plateaux broadened and the
quenching gate voltages remained constant as the temperature increased. We
attribute this discrepancy to a smaller substrate-to-bridge thermal coupling
compared with devices located on a substrate. Certainly, independently of the
microscopic origin of the gate effect, the quenching of the switching current
could be connected to a sizable increase in the quasiparticle number in the
superconducting nanowire Puglia et al. (2020); Alegria et al. (2021). Such an
increase is expected to be more efficient in a suspended geometry, in which
the relaxation of quasiparticle excitations via electron-phonon interaction is
reduced compared to conventional devices. These results demonstrate that the
presence of an interface between the substrate and the superconducting
junction is not necessary for the gate effect to occur. This is unequivocal
evidence against the hypothesis of a Joule-heating origin of the supercurrent
suppression due to a diffusive current injected into the substrate.
### 3.3 Leakage current finite element method simulations
The suspended geometry experiment allows the elimination of any Joule
overheating conveyed to the nanowire through phonon coupling due to a leakage
current injected via the substrate. In this framework, a current flowing from
the bridge to the gate (and vice versa) would be possible only via cold-
electron field emission (CFE) into the vacuum. Such emission is expected to
arise because of the application of a strong electrostatic field between the
nanowire and the gate electrodes Simmons (1963); Fowler and Nordheim (1928).
To investigate the role of a possible field-emitted current in the $I_{S}$
suppression, the cold-emitted current ($I_{FE}$) can be quantified with a
three-dimensional finite-element-method simulation implemented on the same
geometry as the suspended titanium gate-controlled transistor, and then
compared with the measured leakage current $I_{L}$ Rocci et al. (2020).
$I_{FE}$ is calculated via the integration, over the surface of the cathode,
i.e., the gate (wire) for negative (positive) $V_{G}$, of the Fowler-Nordheim
tunnel current density, which at the cathode is written as Fowler and Nordheim
(1928); Simmons (1963)
$\vec{J}_{FE}(\vec{F},h_{0})=\frac{2.2e^{3}}{8\pi
hh_{0}}\vec{F}^{2}\exp\left[-\frac{8\pi}{2.96he|\vec{F}|}(2m_{e})^{1/2}h_{0}^{3/2}\right],$
where $\vec{F}(x,y,z)$ is the electrostatic field at the surface of the
cathode, $m_{e}$ is the electron mass, $e$ is the electron charge, $h$ is
Planck’s constant, and $h_{0}=4.3$ eV is the work function of titanium Wilson
(1966). The electrostatic field vector $\vec{F}(x,y,z;V_{G})$ was computed in
the 3D vacuum region around the bridge and the side gates via
$\vec{F}=-\nabla V(x,y,z;V_{G})$. The electric potential $V(x,y,z;V_{G})$ was
calculated through the numerical integration of the Laplace equation
$\nabla^{2}V(x,y,z;V_{G})=0$ (see Figure 11(a)). As boundary conditions, the
bridge and the gate electrode surfaces were treated as perfect equipotential
conductors at $V=0$ and $V=V_{G}$, respectively.
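The quoted current density can be evaluated directly in SI units. The sketch below implements the Fowler-Nordheim expression above verbatim; the field values at which it is evaluated are merely representative of the simulated field scale, not outputs of the actual simulation:

```python
import math

# Physical constants (SI units).
e = 1.602176634e-19      # elementary charge (C)
h = 6.62607015e-34       # Planck constant (J s)
m_e = 9.1093837015e-31   # electron mass (kg)

def J_FE(F, phi0_eV=4.3):
    """Fowler-Nordheim current density (A/m^2) at the cathode for a field
    modulus F (V/m) and work function phi0 (eV; 4.3 eV for titanium)."""
    phi0 = phi0_eV * e   # work function in joules
    prefactor = 2.2 * e ** 3 / (8 * math.pi * h * phi0) * F ** 2
    exponent = -(8 * math.pi) / (2.96 * h * e * F) * math.sqrt(2 * m_e) * phi0 ** 1.5
    return prefactor * math.exp(exponent)

# At the ~0.2 GV/m scale of the simulated fields the density is vanishingly
# small; at the 1-10 GV/m scale of true cold field emission it is sizable.
J_weak = J_FE(2e8)     # 0.2 GV/m
J_strong = J_FE(5e9)   # 5 GV/m
```

The exponential sensitivity to $|\vec{F}|$ is what makes the integrated $I_{FE}$ utterly negligible at sub-GV/m field values.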
The electric field modulus $|\vec{F}|$ obtained over the entire 3D domain of
the simulation is color-plotted in Figure 11(b) from a top view (X-Y) for
$V_{G}=-15$ V. The electrostatic field is confined to the region between the
titanium bridge and the lateral gate surfaces and quickly fades elsewhere,
thus not influencing the superconducting banks.
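The potential-and-field computation can be mimicked in two dimensions with a coarse finite-difference relaxation. This is a toy sketch, not the 3-D FEM of the actual work: the geometry, grid and iteration count are arbitrary, and only the qualitative field localization is meaningful:

```python
import numpy as np

n = 80            # grid points per side (arbitrary)
V_G = -15.0       # gate voltage (V)
V = np.zeros((n, n))

# Toy electrode geometry: a "wire" strip held at V = 0 spanning the full
# height, and a shorter lateral "gate" strip at V = V_G facing its center.
wire = np.zeros((n, n), dtype=bool)
gate = np.zeros((n, n), dtype=bool)
wire[:, 38:42] = True
gate[30:50, 50:54] = True
V[gate] = V_G

# Jacobi relaxation of the Laplace equation on the vacuum nodes; electrode
# and outer-boundary Dirichlet conditions are re-imposed at every step.
for _ in range(5000):
    V = 0.25 * (np.roll(V, 1, 0) + np.roll(V, -1, 0)
                + np.roll(V, 1, 1) + np.roll(V, -1, 1))
    V[wire] = 0.0
    V[gate] = V_G
    V[0, :] = V[-1, :] = V[:, 0] = V[:, -1] = 0.0

# Field magnitude |F| = |grad V|, largest in the wire-gate gap.
Fy, Fx = np.gradient(-V)
F_mag = np.hypot(Fx, Fy)
```

Even in this crude sketch, the field concentrates in the gap between the facing electrode edges and decays quickly elsewhere, which is the qualitative behavior reported for the full simulation.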
Figure 11: (a) Electric potential $V$ in an X-Y cut-plane which intersects the
suspended nanowire at half height. Applied gate voltage $V_{G}=15$ V. (b)
Color-plot showing the electric field amplitude $|F|$ distribution derived
from the solution of the electric potential. Values obtained for $V_{G}=-15$
V. The simulation shows that the electric field is extremely localized.
The electric field reaches a maximum intensity of $0.2$ GV/m at the center of
the gate and is localized near the side-gate surfaces. Moreover,
$|\vec{F}(x,y,z)|$ drops by more than one order of magnitude within $500$ nm
of the lateral edge of the gate electrode. Such a field geometry lets us
conclude that the banks are unlikely to be affected by the gate voltage.
By solving the ballistic trajectories of the electrons emitted by the
electrode, it is possible to compute the current density $|\vec{J}_{FE}|$ in
the region between the gate and the wire. Figure 12(a), (b) shows the induced
charge density $\rho$ and the current density $|\vec{J}_{FE}|$ evaluated on an
X-Y plane. It is worth noticing the extreme localization of the electrons in a
region of about $500$ nm centered on the electrodes, which affects only a
small portion of the nanobridge. Such evidence proves that, for CFE, all the
electrons emitted (absorbed) by the wire are absorbed (emitted) by the gate.
Figure 12: (a) Color-plot of the induced charge density $\rho$. (b) Modulus of
the propagating current density $|\vec{J}_{FE}|$ obtained by solving the
ballistic trajectories of the emitted electrons. Values obtained for
$\phi_{0}=4.3$ eV and $V_{G}=-15$ V. The distribution of the current shows
that the injection mechanism influences only a $400$ nm section of the
nano-bridge.
Finally, the integration of the current density $\vec{J}_{FE}$ over the
surface of the electrodes returns the current $I_{FE}$, a quantity that can be
directly compared with the gate-wire current measured in the experiment (shown
in Figure 13(a)) Rocci et al. (2020). Figure 13(b) shows the results of the
simulation for $I_{FE}$.
Figure 13: Comparison between the measured leakage current $I_{L}$ (orange
points) and the field emission current $\left(I_{FE}\right)$ (violet points)
as a function of the gate voltage. The latter quantity was obtained by
numerically integrating the Fowler-Nordheim current density $\vec{J}_{FE}$
with the titanium work function $\phi_{0}=4.3$ eV.
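The order of magnitude of $I_{FE}$ can be checked with the elementary Fowler-Nordheim expression. The formula below uses the standard FN constants without image-charge corrections, and the $(100\,\mathrm{nm})^{2}$ emitting area is an illustrative assumption:

```python
import math

def fn_current_density(F, phi=4.3):
    """Elementary Fowler-Nordheim current density [A/m^2] for a local
    field F [V/m] and work function phi [eV].  A rough sketch: no
    image-charge correction or field-enhancement factors."""
    if F <= 0:
        return 0.0
    a = 1.541434e-6   # first FN constant [A eV V^-2]
    b = 6.830890e9    # second FN constant [eV^-3/2 V m^-1]
    return (a / phi) * F**2 * math.exp(-b * phi**1.5 / F)

# Peak field from the electrostatic simulation: ~0.2 GV/m.
J = fn_current_density(0.2e9)
# Even over a generous (100 nm)^2 emitting area the resulting current
# is negligibly small, consistent with I_FE << I_L.
I = J * (100e-9)**2
```

The exponential sensitivity to $F$ is why $I_{FE}$ collapses by tens of orders of magnitude once the surface field sits below the GV/m range.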
Notably, $I_{FE}$ is more than $20$ orders of magnitude smaller than the
greatest value of the gate-bridge current $I_{L}$ measured in the experiment.
Furthermore, a current of about $10^{-40}$ A corresponds to the emission, on
average, of a single electron every $10^{28}$ years. Such an emission rate was
computed consistently with an electrostatic field at the surface of the gate
that is not intense enough to start a true CFE current. As a matter of fact,
cold-emitted electrons typically need an electrostatic field of the order of
$1$-$10$ GV/m Bhushan (2012), but in our situation the greatest electrostatic
field is at most one order of magnitude smaller. Moreover, the simulation
shows an intrinsic asymmetry of $I_{FE}$, of several orders of magnitude, when
$V_{G}\longrightarrow-V_{G}$, due to the geometric difference between gate and
wire. This suggests that, if the field-emitted current were the leading
mechanism in determining the $I_{S}$ suppression, a strongly asymmetric
behavior should be observed for positive and negative gate voltages. Such a
feature was never reported in experiments on gated metallic superconductors De
Simoni et al. (2018); Paolucci et al. (2018, 2019a).
### 3.4 Heating through single cold-electron field emission or absorption
If we admit that a certain number of electrons are emitted or absorbed by the
gate and absorbed/emitted by the wire, the expected experimental phenomenology
should be totally different from the one observed: an electron with an energy
of the order of $10$ eV, ballistically hitting the junction after traveling
through the vacuum, is expected to release its entire energy, abruptly
increasing the electronic temperature of the system. A straightforward
expression for the electronic contribution $C_{e}$ to the heat capacity of a
weak-link in the normal state is:
$C_{e}=\Omega\gamma T_{e}$
where $\Omega$ is the volume occupied by the junction, $\gamma$ is the
Sommerfeld constant for titanium, and $T_{e}$ is the electronic temperature of
the weak-link.
The energy $E(V)$ released in the junction is directly proportional to the
accelerating gate voltage $V$ between the weak-link and the gate electrode:
$E\left(V\right)=qV,\ \ \ \ \ \ P\left(t\right)=E\ \delta\left(t\right)$
where $q$ is the electron charge, $\delta$ is Dirac’s delta distribution and
$P(t)$ is the power as a function of time $t$. According to the conventional
theory of heat transport, the electronic temperature of the system is
conveniently described by the following differential equation Giazotto et al.
(2006), where $T_{B}$ is the lattice temperature:
$C_{e}\frac{\partial T_{e}}{\partial t}=P\left(t\right)\longrightarrow
T_{e}=\sqrt{\frac{2E}{\Omega\gamma}+T_{B}^{2}}$
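The intermediate step, obtained by substituting $C_{e}=\Omega\gamma T_{e}$ and integrating across the delta function, reads:

```latex
\Omega\gamma\,T_{e}\,\frac{\partial T_{e}}{\partial t}=E\,\delta(t)
\;\Longrightarrow\;
\frac{\Omega\gamma}{2}\left(T_{e}^{2}-T_{B}^{2}\right)=E
\;\Longrightarrow\;
T_{e}=\sqrt{\frac{2E}{\Omega\gamma}+T_{B}^{2}}.
```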
We wish to stress that this approach yields a final electronic temperature
$T_{e}$ that underestimates its real value in the weak-link, since we took
$C_{e}$ to be the normal-state heat capacity, which is generally
exponentially larger than in the superconducting state Giazotto et al.
(2006). This calculation demonstrates that one electron accelerated by a $30$
V gate voltage and injected into a typical DB constriction at a $10$ mK bath
temperature would starkly increase the electronic temperature $T_{e}$, which
would reach a value more than $20$ times greater than the critical temperature
($T_{C}\simeq 500$ mK). Such a conclusion permits us to state that the heat
generated by the absorption of a field-emitted electron in the junction is
incompatible with an equilibrium state with a definite gate-tunable $T_{e}$
and $I_{S}$.
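This order-of-magnitude estimate is easy to reproduce. The weak-link volume and the volumetric Sommerfeld constant of titanium used below are ballpark assumptions, not the exact device parameters:

```python
# Single-electron heating estimate: T_e = sqrt(2E/(Omega*gamma) + T_B^2).
q = 1.602e-19                    # elementary charge [C]
V_gate = 30.0                    # accelerating gate voltage [V]
E = q * V_gate                   # energy released by one electron [J]

gamma = 315.0                    # Sommerfeld constant of Ti [J m^-3 K^-2] (assumed)
Omega = 120e-9 * 80e-9 * 20e-9   # weak-link volume [m^3] (assumed geometry)
T_B = 0.010                      # bath temperature [K]
T_C = 0.5                        # critical temperature of Ti [K]

T_e = (2 * E / (Omega * gamma) + T_B**2) ** 0.5
```

With these numbers a single 30 eV electron pushes $T_{e}$ past $20\,T_{C}$, which is the incompatibility with the superconducting state invoked in the text.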
### 3.5 Continuous power injection
In principle, we can assume that the repeated absorption of high-energy
electrons leads the system to bounce continuously between the normal and
superconducting states, as shown in Figure 14.
Figure 14: Electronic temperature $T$ vs time $t$ of a mesoscopic
superconducting weak-link that periodically absorbs electrons with energy of
the order of 10 eV. The red horizontal line represents the critical
temperature of the superconductor. Each electron starkly increases the
electronic temperature of the system, driving it into the normal state.
$\tau$ is the measurement time.
If this were the case, the bridge would be suddenly driven into the
dissipative state by every absorbed electron, and the system would then relax
back to the dissipationless state. The recurrence of such events is set by the
emission rate of the electrons, while the super-to-normal relaxation time is
typically given by the electron-phonon coupling time, of the order of
$\tau\simeq 1$ ns Giazotto et al. (2006). Such a time scale is much smaller
than the typical integration time of our voltage measurement setup ($\simeq
20$ ms). In this scheme, during a voltage $V$ vs current $I$ measurement,
every time an electron is absorbed by the junction with the bias current $I$
below $I_{R}$, the transition that leads to a finite resistance value is
likely too fast to be detected by our setup. On the contrary, if $I>I_{R}$,
the absorption of an electron by the constriction drives the system to the
normal state, and it should persist in such a state until the flowing current
$I$ is lowered to $0$. Such considerations suggest that, when electrons are
field-emitted, $I_{R}$ and $I_{S}$ should always coincide. Since this does not
occur, we conclude that any hot-electron injection mechanism related to field
emission must be excluded as the main cause of our experimental results.
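A back-of-the-envelope comparison of the time scales makes the argument concrete. The absorption rate below is purely hypothetical; the point is that any plausible rate leaves the bridge dissipative for a vanishing fraction of the integration window:

```python
# Why sub-I_R transitions are invisible: each absorbed electron drives
# the bridge normal for only ~tau, while the voltage is averaged over
# a much longer integration window.
tau = 1e-9          # super-to-normal relaxation time [s]
t_int = 20e-3       # voltmeter integration time [s]
rate = 1e3          # hypothetical electron absorption rate [1/s]

# Fraction of the integration window spent in the dissipative state.
duty = min(rate * tau, 1.0)
```

A duty cycle of order $10^{-6}$ suppresses the time-averaged voltage signal far below any realistic detection threshold.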
### 3.6 Unconventional sum rule
Another strong piece of evidence against a trivial heating or direct
power-injection origin of the supercurrent suppression comes from the
evolution of the switching current of a superconducting wire under the action
of a pair of lateral side gates. By means of a titanium Dayem bridge
consisting of a double-gate-flanked nanoconstriction interrupting a Ti strip
Paolucci et al. (2019c), it was possible to assess the mutual influence and
the spatial extension of the effect of two opposite gate electrodes in the
suppression of the switching current. Figures 15(a),(b) show two color-plots
of the normalized switching current $I_{S}/I_{S}^{0}$ vs $V_{G_{1}}$ (X axis)
and $V_{G_{2}}$ (Y axis) for two representative devices $A$ and $B$. The
difference between the two systems in the complete-quenching gate voltage
$V_{G}^{C}$ was ascribed to the difference in the gate-DB distance ($\simeq
80$ nm for sample $A$ and $\simeq 120$ nm for sample $B$). The observed
square-like shape indicates the existence of a voltage threshold $V_{th}$:
when one of the two gates is biased above $V_{th}$, the critical supercurrent
is suppressed by a fraction which does not depend on the voltage applied to
the other gate. In other words, the effects of the two gates on $I_{S}$ are
independent, and no obvious sum rule exists between the action of the two
voltages.
Such evidence suggests that the gate-driven suppression of the supercurrent is
likely related to a surface effect which affects superconductivity
non-locally, i.e., once the electric field established at one of the surfaces
of the superconductor overcomes a critical value, its effect propagates inside
the superconducting body over a distance at least comparable with the device
width. In accordance with previous works De Simoni et al. (2018) and
theoretical calculations Ummarino et al. (2017), the surface perturbation
could affect the superconductor over a thin depth of the order of the
superconducting coherence length $\xi$. Furthermore, the aforementioned
behavior is hardly compatible with the picture of a direct heat/power
injection due to charge transport from/to the gate. Indeed, in the latter case
a sum rule for the total power $P_{sum}$ would hold, expressed as
$P_{sum}=V^{2}_{G_{1}}/R_{1}+V^{2}_{G_{2}}/R_{2}$, where $R_{1}$ and $R_{2}$
are the gate-superconductor leakage resistances for gates 1 and 2,
respectively.
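The contrast between the two pictures can be sketched numerically: a Joule-heating sum rule predicts elliptical constant-suppression contours in the $(V_{G_{1}},V_{G_{2}})$ plane, while independent per-gate thresholds reproduce the observed square-like shape. The resistances, threshold, and critical power below are hypothetical round numbers:

```python
import numpy as np

# Gate-voltage plane, -40..40 V in 1 V steps (illustrative).
V1, V2 = np.meshgrid(np.linspace(-40, 40, 81), np.linspace(-40, 40, 81))
R1 = R2 = 1e12      # hypothetical gate leakage resistances [ohm]
V_th = 30.0         # hypothetical per-gate threshold [V]

# Joule-heating picture: quench wherever the summed power exceeds the
# power that a single gate delivers at V_th -> elliptical region.
P_sum = V1**2 / R1 + V2**2 / R2
heating_quench = P_sum > V_th**2 / R1

# Observed phenomenology: each gate acts independently above V_th
# -> square-like region, independent of the other gate's bias.
threshold_quench = (np.abs(V1) > V_th) | (np.abs(V2) > V_th)
```

At a point such as $(V_{G_{1}},V_{G_{2}})=(25,25)$ V the two models disagree: the power sum rule already predicts quenching while the independent-threshold rule does not, which is exactly the discriminating feature of the square-shaped maps in Figure 15.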
Figure 15: (a) and (b) Effect of the electric field generated in a double-
gate-flanked Dayem bridge with symmetric side-gate electrodes. Contour-plots
show the normalized supercurrent as a function of the two gate voltages
$V_{G_{1}}$ (x axis) and $V_{G_{2}}$ (y axis) for two representative devices
(A and B).
## 4 Summary and further research
In this review we have shown the electrostatic control of the superconducting
properties of several all-metallic Josephson weak-links: niobium and vanadium
Dayem bridges and titanium suspended wires (Section 2). On the one hand, we
focused on the technological application of the effect, demonstrating
supercurrent suppression in niobium, the material that represents the
industrial standard for superconducting electronics. Moreover, the vanadium
Dayem bridge experiments showed the potential of such geometry for
electric-signal rectification. On the other hand, we investigated the dynamics
of the phase slips in a titanium Dayem bridge JJ under the effect of an
electrostatic field. The results demonstrated that it is impossible to ascribe
the modifications of the shape of the SCPDs to a conventional heating effect.
In particular, in the framework of the established KFD theory, it was not
possible to interpret the width of the distribution with the usual parameters.
Furthermore, the experiment carried out with a suspended titanium wire
demonstrated that the presence of the substrate is not critical to the
occurrence of the effect. Such evidence refutes any possible contribution to
the superconductivity quenching from an injection current flowing in the
substrate.
In the second part of this review, we addressed the hypothesis of a thermal
origin of the electrostatic effect on BCS superconductors. Thanks to
finite-element simulations performed on a system with the same geometry as the
suspended titanium transistor, we demonstrated that cold field emission cannot
be a satisfactory explanation for the suppression of the supercurrent. In
fact, even assuming that single electrons are emitted from the gate and
absorbed by the junction, the local increase of the electronic temperature of
the system is incompatible with the superconducting state. Additionally, the
non-trivial summation rule of two side gates in the quenching of the
supercurrent is further evidence that a trivial thermal effect is not able to
explain our unconventional gating effect.
To shed light on these experiments and to progress towards an understanding of
the origin of the effect, a set of complementary experiments is required. For
example, SQUID microscopy could provide useful information on the distribution
of the supercurrent in the field-affected region of the superconductor. In
addition, scanning tunnelling spectroscopy and scanning gate experiments are
critical to investigate the spatial variation of the superconducting gap.
Moreover, radio-frequency-based experiments are crucial to acquire information
on the characteristic time scale of the effect and on the role of
quasiparticle excitations in the quenching of the supercurrent. Besides, a
theoretical model able to explain the observed phenomenology has not been
provided yet. From a technological point of view, the unconventional
field-effect promises to be suitable for a wide range of applications. We have
already demonstrated the rectification properties of a Dayem bridge system,
and more complex devices such as gate-controlled radiation detectors
Zmuidzinas and Richards (2004); Gousev et al. (1994); Lösch et al. (2019),
signal routers and computational systems Likharev (2012); Mukhanov (2011) are
within reach.
The authors acknowledge the European Union’s Horizon 2020 research and
innovation program under Grant Agreement No. 800923 (SUPERTED) for partial
financial support.
The authors declare no conflict of interest.
## References
* Bardeen (1962) Bardeen, J. Critical Fields and Currents in Superconductors. Rev. Mod. Phys. 1962, 34, 667–681. doi:10.1103/RevModPhys.34.667.
* De Simoni et al. (2018) De Simoni, G.; Paolucci, F.; Solinas, P.; Strambini, E.; Giazotto, F. Metallic supercurrent field-effect transistor. Nat. Nanotechnol. 2018, 13, 802–805. doi:10.1038/s41565-018-0190-3.
* Paolucci et al. (2018) Paolucci, F.; De Simoni, G.; Strambini, E.; Solinas, P.; Giazotto, F. Ultra-Efficient Superconducting Dayem Bridge Field-Effect Transistor. Nano Lett. 2018, 18, 4195–4199. doi:10.1021/acs.nanolett.8b01010.
* Paolucci et al. (2019a) Paolucci, F.; De Simoni, G.; Solinas, P.; Strambini, E.; Ligato, N.; Virtanen, P.; Braggio, A.; Giazotto, F. Magnetotransport Experiments on Fully Metallic Superconducting Dayem-Bridge Field-Effect Transistors. Phys. Rev. Appl. 2019, 11, 024061. doi:10.1103/PhysRevApplied.11.024061.
* Paolucci et al. (2019b) Paolucci, F.; Vischi, F.; De Simoni, G.; Guarcello, C.; Solinas, P.; Giazotto, F. Field-Effect Controllable Metallic Josephson Interferometer. Nano Lett. 2019, 19, 6263–6269, [1904.08349]. doi:10.1021/acs.nanolett.9b02369.
* Paolucci et al. (2019c) Paolucci, F.; De Simoni, G.; Solinas, P.; Strambini, E.; Puglia, C.; Ligato, N.; Giazotto, F. Field-effect control of metallic superconducting systems. AVS Quantum Sci. 2019, 1, 016501. doi:10.1116/1.5129364.
* Bours et al. (2020) Bours, L.; Mercaldo, M.T.; Cuoco, M.; Strambini, E.; Giazotto, F. Unveiling mechanisms of electric field effects on superconductors by a magnetic field response. Phys. Rev. Res. 2020, 2, 033353, [2003.01655]. doi:10.1103/PhysRevResearch.2.033353.
* De Simoni et al. (2019) De Simoni, G.; Paolucci, F.; Puglia, C.; Giazotto, F. Josephson Field-Effect Transistors Based on All-Metallic Al/Cu/Al Proximity Nanojunctions. ACS Nano 2019, 13, 7871–7876. doi:10.1021/acsnano.9b02209.
* Virtanen et al. (2019) Virtanen, P.; Braggio, A.; Giazotto, F. Superconducting size effect in thin films under electric field: Mean-field self-consistent model. Phys. Rev. B 2019, 100, 224506. doi:10.1103/PhysRevB.100.224506.
* Mercaldo et al. (2020) Mercaldo, M.T.; Solinas, P.; Giazotto, F.; Cuoco, M. Electrically Tunable Superconductivity Through Surface Orbital Polarization. Phys. Rev. Appl. 2020, 14, 034041. doi:10.1103/PhysRevApplied.14.034041.
* Solinas et al. (2020) Solinas, P.; Amoretti, A.; Giazotto, F. Schwinger effect in a Bardeen-Cooper-Schrieffer superconductor. arXiv 2020, [2007.08323].
* Alegria et al. (2021) Alegria, L.D.; Bøttcher, C.G.L.; Saydjari, A.K.; Pierce, A.T.; Lee, S.H.; Harvey, S.P.; Vool, U.; Yacoby, A. High-energy quasiparticle injection into mesoscopic superconductors. Nat. Nanotechnol. 2021, [2005.00584]. doi:10.1038/s41565-020-00834-8.
* Ritter et al. (2020) Ritter, M.F.; Fuhrer, A.; Haxell, D.Z.; Hart, S.; Gumann, P.; Riel, H.; Nichele, F. A superconducting switch actuated by injection of high energy electrons. arXiv 2020, [2005.00462].
* Puglia et al. (2020) Puglia, C.; De Simoni, G.; Giazotto, F. Electrostatic Control of Phase Slips in Ti Josephson Nanotransistors. Phys. Rev. Appl. 2020, 13, 054026. doi:10.1103/PhysRevApplied.13.054026.
* Rocci et al. (2020) Rocci, M.; De Simoni, G.; Puglia, C.; Degli Esposti, D.; Strambini, E.; Zannier, V.; Sorba, L.; Giazotto, F. Gate-Controlled Suspended Titanium Nanobridge Supercurrent Transistor. ACS Nano 2020, 14, 12621–12628. doi:10.1021/acsnano.0c05355.
* Likharev (2012) Likharev, K.K. Superconductor digital electronics. Phys. C Supercond. its Appl. 2012, 482, 6–18. doi:10.1016/j.physc.2012.05.016.
* De Simoni et al. (2020) De Simoni, G.; Puglia, C.; Giazotto, F. Niobium Dayem nano-bridge Josephson gate-controlled transistors. Appl. Phys. Lett. 2020, 116, 242601. doi:10.1063/5.0011304.
* Stromberg and Swenson (1962) Stromberg, T.F.; Swenson, C.A. Negative Surface Free-Energy Effects in Superconducting Niobium. Phys. Rev. Lett. 1962, 9, 370–374. doi:10.1103/PhysRevLett.9.370.
* Finnemore et al. (1966) Finnemore, D.K.; Stromberg, T.F.; Swenson, C.A. Superconducting Properties of High-Purity Niobium. Phys. Rev. 1966, 149, 231–243. doi:10.1103/PhysRev.149.231.
* Giazotto et al. (2006) Giazotto, F.; Heikkilä, T.T.; Luukanen, A.; Savin, A.M.; Pekola, J.P. Opportunities for mesoscopics in thermometry and refrigeration: Physics and applications. Rev. Mod. Phys. 2006, 78, 217–274. doi:10.1103/RevModPhys.78.217.
* Tinkham (2004) Tinkham, M. Introduction to Superconductivity; Dover Publications: Mineola, 2004; p. 454.
* Puglia et al. (2020) Puglia, C.; De Simoni, G.; Ligato, N.; Giazotto, F. Vanadium gate-controlled Josephson half-wave nanorectifier. Appl. Phys. Lett. 2020, 116, 252601. doi:10.1063/5.0013512.
* Barone and Paternò (1982) Barone, A.; Paternò, G. Physics and Applications of the Josephson Effect; Wiley, 1982. doi:10.1002/352760278X.
* Ullom and Bennett (2015) Ullom, J.N.; Bennett, D.A. Review of superconducting transition-edge sensors for x-ray and gamma-ray spectroscopy. Supercond. Sci. Technol. 2015, 28, 084003. doi:10.1088/0953-2048/28/8/084003.
* Paolucci et al. (2021) Paolucci, F.; Ligato, N.; Germanese, G.; Buccheri, V.; Giazotto, F. Fully Superconducting Josephson Bolometers for Gigahertz Astronomy. Appl. Sci. 2021, 11, 746. doi:10.3390/app11020746.
* Gol’tsman et al. (2001) Gol’tsman, G.N.; Okunev, O.; Chulkova, G.; Lipatov, A.; Semenov, A.; Smirnov, K.; Voronov, B.; Dzardanov, A.; Williams, C.; Sobolewski, R. Picosecond superconducting single-photon optical detector. Appl. Phys. Lett. 2001, 79, 705–707. doi:10.1063/1.1388868.
* Ivanov et al. (2011) Ivanov, B.I.; Trgala, M.; Grajcar, M.; Il’ichev, E.; Meyer, H.G. Cryogenic ultra-low-noise SiGe transistor amplifier. Rev. Sci. Instrum. 2011, 82, 104705. doi:10.1063/1.3655448.
* Oukhanski et al. (2003) Oukhanski, N.; Grajcar, M.; Il’ichev, E.; Meyer, H.G. Low noise, low power consumption high electron mobility transistors amplifier, for temperatures below 1 K. Rev. Sci. Instrum. 2003, 74, 1145–1146. doi:10.1063/1.1532539.
* Clarke and Bragisnki (2004) Clarke, J.; Bragisnki, A.I. The SQUID Handbook; Wiley: Weinheim, 2004.
* Giazotto et al. (2010) Giazotto, F.; Peltonen, J.T.; Meschke, M.; Pekola, J.P. Superconducting quantum interference proximity transistor. Nat. Phys. 2010, 6, 254–259. doi:10.1038/nphys1537.
* Giazotto and Martínez-Pérez (2012) Giazotto, F.; Martínez-Pérez, M.J. The Josephson heat interferometer. Nature 2012, 492, 401–405, [1205.3353]. doi:10.1038/nature11702.
* Fornieri and Giazotto (2017) Fornieri, A.; Giazotto, F. Towards phase-coherent caloritronics in superconducting circuits. Nat. Nanotechnol. 2017, 12, 944–952. doi:10.1038/nnano.2017.204.
* McCaughan (2018) McCaughan, A.N. Readout architectures for superconducting nanowire single photon detectors. Supercond. Sci. Technol. 2018, 31, 040501. doi:10.1088/1361-6668/aaa1b3.
* McCaughan and Berggren (2014) McCaughan, A.N.; Berggren, K.K. A Superconducting-Nanowire Three-Terminal Electrothermal Device. Nano Lett. 2014, 14, 5748–5753. doi:10.1021/nl502629x.
* Morpurgo et al. (1998) Morpurgo, A.F.; Klapwijk, T.M.; van Wees, B.J. Hot electron tunable supercurrent. Appl. Phys. Lett. 1998, 72, 966–968. doi:10.1063/1.120612.
* Bezryadin (2012) Bezryadin, A. Superconductivity in Nanowires: Fabrication and Quantum Transport; Wiley-VCH, 2012; p. 282.
* Kurkijärvi (1972) Kurkijärvi, J. Intrinsic Fluctuations in a Superconducting Ring Closed with a Josephson Junction. Phys. Rev. B 1972, 6, 832–835. doi:10.1103/PhysRevB.6.832.
* Bezryadin and Goldbart (2010) Bezryadin, A.; Goldbart, P.M. Superconducting Nanowires Fabricated Using Molecular Templates. Adv. Mater. 2010, 22, 1111–1121. doi:10.1002/adma.200904353.
* Fulton et al. (1972) Fulton, T.A.; Dunkleberger, L.N.; Dynes, R.C. Quantum Interference Properties of Double Josephson Junctions. Phys. Rev. B 1972, 6, 855–875. doi:10.1103/PhysRevB.6.855.
* Fulton and Dynes (1971) Fulton, T.; Dynes, R. Switching to zero voltage in Josephson tunnel junctions. Solid State Commun. 1971, 9, 1069–1073. doi:10.1016/0038-1098(71)90465-0.
* Bezryadin et al. (2000) Bezryadin, A.; Lau, C.N.; Tinkham, M. Quantum suppression of superconductivity in ultrathin nanowires. Nature 2000, 404, 971–974. doi:10.1038/35010060.
* Giordano (1988) Giordano, N. Evidence for Macroscopic Quantum Tunneling in One-Dimensional Superconductors. Phys. Rev. Lett. 1988, 61, 2137–2140. doi:10.1103/PhysRevLett.61.2137.
* Giordano and Schuler (1989) Giordano, N.; Schuler, E. Macroscopic quantum tunneling and related effects in a one-dimensional superconductor. Phys. Rev. Lett. 1989, 63, 2417–2420. doi:10.1103/PhysRevLett.63.2417.
* Fulton and Dunkleberger (1974) Fulton, T.A.; Dunkleberger, L.N. Lifetime of the zero-voltage state in Josephson tunnel junctions. Phys. Rev. B 1974, 9, 4760–4768. doi:10.1103/PhysRevB.9.4760.
* Li et al. (2014) Li, Q.; Huang, S.; Pan, D.; Wang, J.; Zhao, J.; Xu, H.Q. Suspended InAs nanowire gate-all-around field-effect transistors. Appl. Phys. Lett. 2014, 105, 113106. doi:10.1063/1.4896105.
* Iorio et al. (2019) Iorio, A.; Rocci, M.; Bours, L.; Carrega, M.; Zannier, V.; Sorba, L.; Roddaro, S.; Giazotto, F.; Strambini, E. Vectorial Control of the Spin–Orbit Interaction in Suspended InAs Nanowires. Nano Lett. 2019, 19, 652–657, [1807.04344]. doi:10.1021/acs.nanolett.8b02828.
* Simmons (1963) Simmons, J.G. Generalized Formula for the Electric Tunnel Effect between Similar Electrodes Separated by a Thin Insulating Film. J. Appl. Phys. 1963, 34, 1793–1803. doi:10.1063/1.1702682.
* Fowler and Nordheim (1928) Fowler, R.H.; Nordheim, L. Electron emission in intense electric fields. Proc. R. Soc. London. Ser. A, Contain. Pap. a Math. Phys. Character 1928, 119, 173–181. doi:10.1098/rspa.1928.0091.
* Wilson (1966) Wilson, R.G. Vacuum Thermionic Work Functions of Polycrystalline Be, Ti, Cr, Fe, Ni, Cu, Pt, and Type 304 Stainless Steel. J. Appl. Phys. 1966, 37, 2261–2267. doi:10.1063/1.1708797.
* Bhushan (2012) Bhushan, B. Encyclopedia of Nanotechnology; Springer Netherlands: Dordrecht, 2012; pp. 824–837.
* Ummarino et al. (2017) Ummarino, G.A.; Piatti, E.; Daghero, D.; Gonnelli, R.S.; Sklyadneva, I.Y.; Chulkov, E.V.; Heid, R. Proximity Eliashberg theory of electrostatic field-effect doping in superconducting films. Phys. Rev. B 2017, 96, 064509. doi:10.1103/PhysRevB.96.064509.
* Zmuidzinas and Richards (2004) Zmuidzinas, J.; Richards, P. Superconducting detectors and mixers for millimeter and submillimeter astrophysics. Proc. IEEE 2004, 92, 1597–1616. doi:10.1109/JPROC.2004.833670.
* Gousev et al. (1994) Gousev, Y.P.; Gol’tsman, G.N.; Semenov, A.D.; Gershenzon, E.M.; Nebosis, R.S.; Heusinger, M.A.; Renk, K.F. Broadband ultrafast superconducting NbN detector for electromagnetic radiation. J. Appl. Phys. 1994, 75, 3695–3697. doi:10.1063/1.356060.
* Lösch et al. (2019) Lösch, S.; Alfonsov, A.; Dobrovolskiy, O.V.; Keil, R.; Engemaier, V.; Baunack, S.; Li, G.; Schmidt, O.G.; Bürger, D. Microwave Radiation Detection with an Ultrathin Free-Standing Superconducting Niobium Nanohelix. ACS Nano 2019, 13, 2948–2955. doi:10.1021/acsnano.8b07280.
* Mukhanov (2011) Mukhanov, O.A. Energy-Efficient Single Flux Quantum Technology. IEEE Trans. Appl. Supercond. 2011, 21, 760–769. doi:10.1109/TASC.2010.2096792.
1 The corresponding author.
# Gradient Estimates And Liouville Theorems For A Class of Nonlinear Elliptic
Equations
Pingliang Huang, Department of Mathematics, Shanghai University, Shanghai,
200444, China, email:<EMAIL_ADDRESS>and Youde Wang1, 1. School of
Mathematics and Information Sciences, Guangzhou University; 2. Hua Loo-Keng
Key Laboratory of Mathematics, Institute of Mathematics, Academy of
Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190,
China; 3. School of Mathematical Sciences, University of Chinese Academy of
Sciences, Beijing 100049, China,<EMAIL_ADDRESS>
###### Abstract.
In this paper, we first study carefully the positive solutions to $\Delta
u+\lambda_{1}u\ln u+\lambda_{2}u^{b+1}=0$ defined on a complete noncompact
Riemannian manifold $(M,g)$ with $Ric(g)\geq-Kg$, which can be regarded as
Lichnerowicz-type equations, and obtain gradient estimates for positive
solutions to these equations which do not depend on the bounds of the
solutions or on the Laplacian of the distance function on $(M,g)$. Then, we
extend our techniques to a class of more general semilinear elliptic equations
$\Delta u(x)+uh(\ln u)=0$ and obtain some similar results under suitable
analytic conditions on these equations. Moreover, we also obtain some
Liouville-type theorems for these equations when $Ric(g)\geq 0$ and establish
some Harnack inequalities as consequences.
Keywords Gradient estimate, Ricci curvature, Liouville theorem, Harnack
inequality, Nonlinear elliptic equations
## 1. Introduction
In this paper, we consider a class of equations which read as
$\Delta u(x)=u\tilde{h}(x,\ln u(x)),$
where $\tilde{h}:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$ is a
continuous function. Obviously, the following equations
(1.1) $\Delta u+\lambda_{1}(x)u\ln
u+\lambda_{2}(x)u^{b+1}+\lambda_{3}(x)u^{p}=0,$
defined on a Riemannian manifold $(M,g)$ of dimension $n$, are special cases
of this class. Indeed, we only need to pick
$\tilde{h}(x,s)=\lambda_{1}(x)se^{s}+\lambda_{2}(x)e^{(b+1)s}+\lambda_{3}(x)e^{ps},$
where $\lambda_{1},\lambda_{2}$ and $\lambda_{3}$ are smooth functions on $M$
and $b,p\in\mathbb{R}$ are two real constants.
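Indeed, substituting $s=\ln u$ and using $e^{\ln u}=u$ gives

```latex
\tilde{h}(x,\ln u)
  =\lambda_{1}(x)\,(\ln u)\,e^{\ln u}
   +\lambda_{2}(x)\,e^{(b+1)\ln u}
   +\lambda_{3}(x)\,e^{p\ln u}
  =\lambda_{1}(x)\,u\ln u+\lambda_{2}(x)\,u^{b+1}+\lambda_{3}(x)\,u^{p},
```

which is exactly the nonlinearity appearing in (1.1).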
In the past four decades, the latter equations with $\lambda_{1}\equiv 0$ have
included many important and well-known equations stemming from differential
geometry, physics, etc., and have been deeply and extensively studied by many
mathematicians. For instance, the works of Schoen and Yau ([14, 15, 16]) on
conformally flat manifolds and the Yamabe problem highlighted the importance
of studying the distribution solutions of
(1.2) $\Delta u(x)+u^{(n+2)/(n-2)}(x)=0.$
From the viewpoint of analysis, Caffarelli, Gidas and Spruck [1] studied
non-negative smooth solutions of the conformally invariant equation (1.2), and
discussed a special form of (1.1), written as $\Delta u+g(u)=0$, with an
isolated singularity at the origin.
An analogous but more general form of the Yamabe equation is the so-called
Einstein-scalar field Lichnerowicz equation. This equation arises from the
Hamiltonian constraint equation for the Einstein-scalar field system in
general relativity [7, 20]. In the case where the underlying manifold $M$ has
dimension $n\geq 3$, the Einstein-scalar field Lichnerowicz equation takes the
following form
$\Delta u+\mu(x)u+A(x)u^{p}+B(x)u^{-q}=0,$
where $\mu(x),A(x)$ and $B(x)$ are smooth functions on $M$, $p=(n+2)/(n-2)$
and $q=(3n-2)/(n-2)$; on 2-manifolds, the Einstein-scalar field Lichnerowicz
equation is given as follows
$\Delta u+A(x)e^{2u}+B(x)e^{-2u}+D(x)=0.$
Unless otherwise stated, solutions are always required to be smooth and
positive. For more details we refer to [2, 10, 11] and references therein.
Recently, Peng, Wang and Wei [12] used a unified method to consider the
gradient estimates of the positive solution to the following nonlinear
elliptic equation
$\Delta u+\lambda u^{p}=0$
defined on a complete noncompact Riemannian manifold $(M,g)$, where
$\lambda>0$ and $p<1+\frac{4}{n}$, or $\lambda<0$ and $p>1$, are constants.
For the case $\lambda>0$, their results improve considerably on the previously
known results and supplement the results for the case $\dim(M)\leq 2$. For the
case $\lambda<0$ and $p>1$, they also considerably improved the previous
related results. When the Ricci curvature of $(M,g)$ is nonnegative, a
Liouville-type theorem for the above equation was established. For more
details we refer to [8] and references therein.
On the other hand, if we pick $\tilde{h}(x,s)=2\lambda se^{s}$ with
$\lambda>0$, F. Chung and S.-T. Yau [3] showed that if the function $u$
attains the log-Sobolev constant $\lambda_{M}$ of a closed Riemannian manifold
$(M,g)$ with $\dim(M)=n$, then it must satisfy
$\Delta u+2\lambda_{M}u\ln u=0,$
and
$\sup u\leq e^{n/2},\quad\quad\quad|\nabla u|^{2}+2\lambda_{M}u^{2}\ln
u\leq\lambda_{M}nu^{2}.$
Later, F. Wang [17] extended the results to the case $Ric\geq-K$ and gave a
Harnack inequality.
In [9], Ma investigated the following equation
$\Delta u+\lambda u\ln u+\mu u=0$
on complete non-compact Riemannian manifolds, where $\lambda$ and $\mu$ are
constants with $\lambda<0$; this corresponds to $\tilde{h}(x,s)=\lambda
se^{s}+\mu e^{s}$. His local gradient estimates for positive solutions of this
equation are almost optimal if one considers Ricci solitons.
Recently, M. Ghergu, S. Kim and H. Shahgholian [5] studied the semilinear
elliptic equation
$\Delta u+u^{\beta}|\ln u|^{\alpha}=0,$
which corresponds to $\tilde{h}(x,s)=|s|^{\alpha}e^{\beta s}$, and established
that a nonnegative solution $u\in C^{2}(B_{1}\backslash\{O\})$ of the above
equation may have a removable singularity at the origin $O$ or behave like
some class of functions as $x\rightarrow 0$.
Very recently, Peng, Wang and Wei [13] considered the gradient estimates of
the positive solutions to the following equation defined on a complete
Riemannian manifold $(M,g)$
$\Delta u+\lambda u(\ln u)^{p}+\mu u=0,$
where $\lambda,\mu\in\mathbb{R}$ and $p=\frac{k_{1}}{2k_{2}+1}\geq 2$ is a
rational number, with $k_{1}$ and $k_{2}$ positive integers. They obtained a
gradient bound for positive solutions of the equation which does not depend on
the bounds of the solution or on the Laplacian of the distance function on
$(M,g)$. Their results can be viewed as a natural extension of Yau’s estimates
on positive harmonic functions.
The parabolic counterpart of equation (1.1) was considered by Dung, Khan and
Ngô [4]. More concretely, let $(M,g,e^{-f}dv)$ be a complete smooth metric
measure space with Bakry-Émery Ricci curvature bounded from below; Dung et al.
studied the following general $f$-heat equation
$u_{t}=\Delta_{f}u+\lambda u\ln u+\mu u+Au^{p}+Bu^{-q}.$
Suppose that $\lambda$, $\mu$, $A$, $B$, $p$ and $q$ are constants with $A\leq
0$, $B\geq 0$, $p\geq 1$, and $q\geq 0$. If $u\in(0,1]$ is a smooth solution
to the above general $f$-heat equation, they obtained various gradient
estimates for bounded positive solutions, which depend on the bounds of the
positive solution and the Laplacian of the distance function on the domain
manifold.
Moreover, they also considered the gradient estimate of a bounded positive
solution $u\in[1,C)$ to the following equation on a Riemann surface
$u_{t}=\Delta_{f}u+Ae^{2u}+Be^{-2u}+D,$
where $A$, $B$ and $D$ are constants. Besides, some mathematicians (see [17,
19]) have also paid attention to a similar nonlinear parabolic equation
defined on certain smooth metric measure spaces.
In fact, it is of independent interest to study various properties of
solutions to the following equation
$\Delta u+u\tilde{h}(x,\ln u)=0$
defined on a complete Riemannian manifold. In this paper, in order to focus on
the core of the problem and not to lengthen this article with technicalities,
we restrict ourselves to the case $\tilde{h}(x,s)\equiv h(s)$, where $h$ is
independent of $x$ and is a $C^{2}$ function of $s$. Now the above equation
can be written as
$\Delta u+uh(\ln u)=0.$
More concretely, first we consider the case $\lambda_{i}(x)\equiv\mathrm{constant}$
($i=1,2$), $\lambda_{3}\equiv 0$ and $b\leq 0$. That is, we focus on studying
the gradient estimate and the bounds of the positive solution to the following
nonlinear elliptic equation defined on an $n$-dimensional complete noncompact
Riemannian manifold $(M,g)$
(1.3) $\Delta u+\lambda_{1}u\ln u+\lambda_{2}u^{b+1}=0,$
where $b\leq 0$. Then, we turn to studying
(1.4) $\Delta u+uh(\ln u)=0$
where $h$ satisfies some analytic and technical conditions. We try to improve
the classical methods to obtain gradient bounds of positive solutions to
(1.3) and (1.4) which do not depend on the bounds of the solution or on the
Laplacian or Hessian of the distance function on $(M,g)$.
For (1.3) we can show the following results:
###### Theorem 1.1.
(Local gradient estimate) Let $(M,g)$ be an $n$-dimensional complete
noncompact Riemannian manifold. Suppose there exists a nonnegative constant
$K:=K(2R)$ such that the Ricci curvature of $M$ is bounded below by $-K$,
i.e., $Ric(g)\geq-Kg$ in the geodesic ball $B_{2R}(O)\subset M$ where $O$ is a
fixed point on $M$. Suppose that $u(x)$ is a smooth positive solution to
equation (1.3) on $B_{2R}(O)$ with $b\leq 0$.
Case 1: If $\lambda_{1}>0$ and $\lambda_{2}>0$, then, for any constant $1<p<2$
there holds true on $B_{R}(O)$
(1.5) $\frac{|\nabla u|^{2}}{u^{2}}+p\lambda_{1}\ln
u+\lambda_{2}u^{b}\leq\tilde{C_{1}}(n,K,R,\lambda_{1},\lambda_{2},b,p);$
where
(1.6)
$\tilde{C_{1}}(n,K,R,\lambda_{1},\lambda_{2},b,p)=\max\left\\{\begin{aligned}
&\min_{C_{3}\in(0,\frac{2(2-p)}{np})}\left\\{\frac{((A+2K+2\lambda_{1})R^{2}C_{3}+C_{1}^{2})np}{(2(2-p)-C_{3}np)R^{2}C_{3}}\right\\};\\\
&nA+\frac{n^{2}C_{1}^{2}}{R^{2}}+2Kn+n(p-2)\lambda_{1}+np\lambda_{1};\\\
&\frac{n}{2(p-1)}\left(\frac{2}{n}(p-1)^{2}L+p\lambda_{1}+2pK\right)\end{aligned}\right\\}$
where $C_{1}$ and $C_{2}$ are absolute constants independent of the geometry
of $M$. Here
$A=\frac{((n-1)(1+\sqrt{K}R)+2)C_{1}^{2}+C_{2}}{R^{2}}\quad\quad\mbox{and}\quad\quad
L=\frac{n(p\lambda_{1}+2pK)}{2(p-1)^{2}}.$
Case 2: If $\lambda_{1}\leq 0$ and $\lambda_{2}>0$, then, for any constant
$1<p<2$ there holds true on $B_{R}(O)$
(1.7) $\frac{|\nabla u|^{2}}{u^{2}}+p\lambda_{1}\ln
u+\lambda_{2}u^{b}\leq\tilde{C_{2}}(n,K,R,\lambda_{1},\lambda_{2},b,p);$
where
(1.8)
$\tilde{C_{2}}(n,K,R,\lambda_{1},\lambda_{2},b,p)=\max\left\\{\begin{aligned}
\frac{nKp}{2(p-1)};\min_{C_{3}\in(0,\frac{2(2-p)}{np})}\left\\{\frac{((A+2K)R^{2}C_{3}+C_{1}^{2})np}{(2(2-p)-C_{3}np)R^{2}C_{3}}\right\\}\end{aligned}\right\\}$
with the same $C_{1}$, ${C_{2}}$ and $A$ as in Case 1.
For equation (1.3) with $b<0$, the estimates in Theorem 1.1 easily yield the
following direct corollary.
###### Corollary 1.1.
Let $(M,g)$ be an $n$-dimensional complete noncompact Riemannian manifold.
Suppose there exists a nonnegative constant $K$ such that the Ricci curvature
of $M$ is bounded below by $-K$, i.e., $Ric(g)\geq-Kg$ in $M$. Suppose that
the equation (1.3) defined on $M$ with $b<0$ admits a smooth positive solution
$u(x)$. Then the positive solution $u$ is bounded from above and from below
by positive constants if $\lambda_{1}>0$ and $\lambda_{2}>0$, and has a
positive lower bound if $\lambda_{1}\leq 0$ and $\lambda_{2}>0$.
###### Remark 1.
In comparison with the related results stated in Theorem 1.1 of [4], we do
not need to assume that the positive solutions are bounded in order to derive
the gradient estimates. Moreover, in some cases we can also establish
estimates on the bounds of positive solutions, which do not depend on the
Laplacian or Hessian of the distance function on $(M,g)$.
###### Remark 2.
For equation (1.1) in the case where $\lambda_{1}(x),\lambda_{2}(x)>0$ and
$\lambda_{3}(x)<0$ are real constants and $p>0$, we can also employ the same
method to establish some similar estimates; we will present them in
forthcoming papers.
For the more general equation (1.4) we can also obtain some results by a
delicate analysis. We state them as follows:
###### Theorem 1.2.
(Local gradient estimate) Let $(M,g)$ be an $n$-dimensional complete
noncompact Riemannian manifold. Suppose there exists a nonnegative constant
$K:=K(2R)$ such that the Ricci curvature of $M$ is bounded below by $-K$,
i.e., $Ric(g)\geq-Kg$ in the geodesic ball $B_{2R}(O)\subset M$, where $O$ is
a fixed point on $M$. Suppose $h\in C^{2}(\mathbb{R})$ and there exists a
constant $\lambda$ such that
(1.9) $\left\\{\begin{aligned}
&-\frac{4}{n}(\lambda-1)h+(\lambda-2)h^{\prime}+\lambda h^{\prime\prime}\geq
0;\\\ &h(2K\lambda-\frac{2}{n}(\lambda^{2}-1)h-\lambda h^{\prime})\geq 0;\\\
&\lambda h\geq 0.\end{aligned}\right.$
If $u(x)$ is a smooth positive solution to equation (1.4) on $B_{2R}(O)$, then
we have
(1.10) $\frac{|\nabla u|^{2}}{u^{2}}+\lambda h(\ln u)\leq C(n,K,R,h)$
and
(1.11) $\frac{|\nabla u|}{u}\leq\sqrt{C(n,K,R,h)},$
where
(1.12)
$C(n,K,R,h)=\min_{C_{5}\in(0,2/n)}\left\\{\frac{((n-1)(1+\sqrt{K}R)+2+\frac{1}{C_{5}})C_{1}^{2}+2KR^{2}}{R^{2}(\frac{2}{n}-C_{5})}\right\\}.$
Here $C_{1}$ is an absolute constant independent of the geometry of $M$.
It is well-known that, for any two points $x,\,y\in{B_{R/2}(O)}$, there holds
true
$\ln u(x)-\ln u(y)\leq\int_{\gamma}{\frac{|\nabla u|}{u}},$
where $\gamma$ is a curve connecting $x$ and $y$ in $M$. It follows
###### Corollary 1.2.
(Harnack inequality) Suppose the same conditions as in Theorem 1.2 hold. Then
$\sup_{B_{R/2}(O)}u\leq e^{R\sqrt{C(n,K,R,h)}}\inf_{B_{R/2}(O)}u.$
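For completeness, here is a sketch of how Corollary 1.2 follows from (1.11), under the standard assumption that a minimizing geodesic $\gamma$ between points of $B_{R/2}(O)$ stays in the region where (1.11) applies (which can be arranged by applying Theorem 1.2 on a slightly larger ball). For $x,y\in B_{R/2}(O)$ the length of $\gamma$ is at most $d(x,O)+d(O,y)\leq R$, so
$\ln\frac{u(x)}{u(y)}\leq\int_{\gamma}\frac{|\nabla u|}{u}\leq R\sqrt{C(n,K,R,h)},$
that is, $u(x)\leq e^{R\sqrt{C(n,K,R,h)}}u(y)$. Taking the supremum over $x$ and the infimum over $y$ yields the stated inequality.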
When $K=0$, letting $R\rightarrow+\infty$ in Theorem 1.2, we have
###### Corollary 1.3.
(Liouville-type result) Let $(M,g)$ be an $n$-dimensional complete noncompact
Riemannian manifold with nonnegative Ricci curvature. Suppose the same
conditions as in Theorem 1.2 hold. Then any positive solution $u$ of
(1.4) must be constant. Moreover, if $\lambda h>0$, then
(1.4) admits no positive solutions.
It is convenient to find some sufficient conditions on the function $h$ that
ensure the effectiveness of the method adopted here and guarantee the
hypotheses of Theorem 1.2. By a direct calculation one sees easily that if
$h(\ln u)\geq 0$, $h^{\prime}(\ln u)\leq 0$, $h^{\prime\prime}(\ln u)\geq 0$,
and $0\leq\lambda\leq 1$, then (1.9) holds true. Hence
###### Corollary 1.4.
If $h(\ln u)\geq 0$, $h^{\prime}(\ln u)\leq 0$, $h^{\prime\prime}(\ln u)\geq
0$, and $0\leq\lambda\leq 1$, then (1.9) holds true, therefore, the
conclusions of Theorem 1.2 hold true.
In other words, any nonincreasing, convex, smooth and nonnegative function
$h$ satisfies the assumptions in Corollary 1.4.
###### Example 1.
(i) It is easy to see that equation
$\Delta u+cu^{d+1}=0$
satisfies the above sufficient conditions. Here $h(\ln u)=cu^{d}$ with
constants $c\geq 0$ and $d\leq 0$. Direct calculation shows that $h=cu^{d}\geq
0$, $h^{\prime}=cdu^{d}\leq 0$ and $h^{\prime\prime}=cd^{2}u^{d}\geq 0$.
(ii) Obviously,
$\Delta u+\sum_{i=1}^{n}c_{i}u^{d_{i}+1}=0$
satisfies the sufficient conditions with $c_{i}\geq 0$ and $d_{i}\leq 0$.
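As a quick numerical sanity check of part (i) (a hypothetical verification script, not part of the proof; the function names and the sample values of $c$ and $d$ are ours), the sign conditions $h\geq 0$, $h^{\prime}\leq 0$, $h^{\prime\prime}\geq 0$ for $h(s)=c\,e^{ds}$ with $c\geq 0$ and $d\leq 0$ can be checked directly:

```python
import math

def h(s, c, d):   # h(ln u) = c * u**d, with s = ln u
    return c * math.exp(d * s)

def hp(s, c, d):  # h'(s) = c d e^{ds}
    return c * d * math.exp(d * s)

def hpp(s, c, d): # h''(s) = c d^2 e^{ds}
    return c * d**2 * math.exp(d * s)

# c >= 0, d <= 0 should give h >= 0, h' <= 0, h'' >= 0 (Corollary 1.4)
for c, d in [(2.0, -3.0), (0.5, 0.0), (1.0, -0.1)]:
    for s in [-4.0, -1.0, 0.0, 2.5, 10.0]:
        assert h(s, c, d) >= 0
        assert hp(s, c, d) <= 0
        assert hpp(s, c, d) >= 0
```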
###### Example 2.
For the equation $\Delta u-u^{3}=0$, the corresponding function is $h(\ln
u)=-u^{2}=-e^{2\ln u}<0$. Choosing $\lambda=0$, a direct calculation shows
that
$-\frac{4}{n}(\lambda-1)h+(\lambda-2)h^{\prime}+\lambda
h^{\prime\prime}=4\left(1-\frac{1}{n}\right)e^{2\ln u}\geq 0,$
and
$h(2K\lambda-\frac{2}{n}(\lambda^{2}-1)h-\lambda
h^{\prime})=\frac{2}{n}e^{4\ln u}\geq 0.$
Moreover, $\lambda h=0$. So this example satisfies the conditions (1.9)
assumed in Theorem 1.2, but does not satisfy the above sufficient conditions.
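The computations in Example 2 can also be spot-checked numerically (a hypothetical script; the sample values of $n$, $K$ and $s$ are ours):

```python
import math

def check_conditions(s, n, K, lam=0.0):
    """Evaluate the three quantities in (1.9) for h(s) = -exp(2s)."""
    h   = -math.exp(2 * s)
    hp  = -2 * math.exp(2 * s)
    hpp = -4 * math.exp(2 * s)
    q1 = -(4 / n) * (lam - 1) * h + (lam - 2) * hp + lam * hpp
    q2 = h * (2 * K * lam - (2 / n) * (lam**2 - 1) * h - lam * hp)
    q3 = lam * h
    return q1, q2, q3

# With lam = 0 all three quantities should be nonnegative (n >= 1, K >= 0)
for n in [2, 3, 10]:
    for K in [0.0, 1.0, 5.0]:
        for s in [-2.0, 0.0, 1.5]:
            q1, q2, q3 = check_conditions(s, n, K)
            assert q1 >= 0 and q2 >= 0 and q3 >= 0
            # q1 should equal 4(1 - 1/n) e^{2s}, q2 should equal (2/n) e^{4s}
            assert abs(q1 - 4 * (1 - 1 / n) * math.exp(2 * s)) < 1e-9
            assert abs(q2 - (2 / n) * math.exp(4 * s)) < 1e-9
```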
For the case $\lambda=1$, the same argument as in Theorem 1.2 yields the
following:
###### Corollary 1.5.
Let $(M,g)$ be an $n$-dimensional complete noncompact Riemannian manifold.
Suppose there exists a nonnegative constant $K:=K(2R)$ such that the Ricci
curvature of $M$ is bounded below by $-K$, i.e., $Ric(g)\geq-Kg$ in the
geodesic ball $B_{2R}(O)\subset M$, where $O$ is a fixed point on $M$. Suppose
that $u(x)$ is a smooth positive solution to equation (1.4). If $h$ satisfies
$h^{\prime}(\ln u)\leq\min\\{h^{\prime\prime}(\ln u),2K\\}$ and $h(\ln u)\geq
0$ on $B_{R}(O)$, then
(1.13) $\frac{|\nabla u|^{2}}{u^{2}}+h\leq C(n,K,R,h),$
and
(1.14) $\frac{|\nabla u|}{u}\leq\sqrt{C(n,K,R,h)}.$
Here, $C(n,K,R,h)$ is the same as in Theorem 1.2.
On the other hand, by the same discussion as in the proof of Theorem 1.2 for
the case $\lambda=0$, we can also conclude the following
###### Corollary 1.6.
Let $(M,g)$ be an $n$-dimensional complete noncompact Riemannian manifold.
Suppose there exists a nonnegative constant $K:=K(2R)$ such that the Ricci
curvature of $M$ is bounded below by $-K$, i.e., $Ric(g)\geq-Kg$ in the
geodesic ball $B_{2R}(O)\subset M$, where $O$ is a fixed point on $M$. Suppose
that $u(x)$ is a smooth positive solution to equation (1.4). If the function
$h$ satisfies $h^{\prime}(\ln u)\leq\frac{2}{n}h(\ln u)$ on $B_{R}(O)$, then
(1.15) $\frac{|\nabla u|}{u}\leq\sqrt{C(n,K,R,h)}.$
Here, $C(n,K,R,h)$ is the same as in Theorem 1.2.
It is worth pointing out that any positive function $h(\ln u)$ which is
decreasing in $\ln u$ satisfies the assumptions of Corollary 1.6, so there
are many such choices.
###### Example 3.
For instance, consider the equation
$\Delta u+au\left(\frac{\pi}{2}-\arctan(\ln u)\right)=0$
with constant $a>0$. Then we can easily verify that
$h(\ln u)=\frac{a\pi}{2}-a\arctan(\ln u)\geq 0\quad\quad\mbox{and}\quad\quad
h^{\prime}(\ln u)=-\frac{a}{1+(\ln u)^{2}}<0.$
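This can likewise be spot-checked numerically (a hypothetical script; the sample values of $a$ and $n$ are ours), confirming $h\geq 0$ and $h^{\prime}\leq\frac{2}{n}h$ as required in Corollary 1.6:

```python
import math

def h(s, a):
    return a * math.pi / 2 - a * math.atan(s)

def hp(s, a):
    return -a / (1 + s**2)

# For a > 0: h >= 0 and h' < 0, hence h'(ln u) <= (2/n) h(ln u) for any n >= 1
a, n = 3.0, 4
for s in [-50.0, -1.0, 0.0, 1.0, 50.0]:
    assert h(s, a) >= 0
    assert hp(s, a) < 0
    assert hp(s, a) <= (2 / n) * h(s, a)
```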
In a forthcoming paper we will discuss equation (1.4), which includes the
Lichnerowicz equation as a special case, as well as some equations more
general than those in [4].
Finally, we would like to mention that the strategy of our proofs follows
basically those in [6, 12, 13]. More precisely, we use an appropriate cut-off
function and the maximum principle to obtain the desired results. These
methods are, loosely speaking, well-known and used in many works; for
instance, see [4, 6, 8, 12, 18] and the references therein. However, we also
would like to emphasize that to obtain gradient estimates of these equations
discussed here, our approach is slightly different from those used before.
Besides applying the Bochner–Weitzenböck formula to a suitable auxiliary
function $G$ related to $\ln u$ (see Section 2), we need to analyze carefully
the equation satisfied by $G$ and estimate delicately all the terms that
appear so that the required terms match very well. Then we make use of the
maximum principle to prove our results.
The paper is organized as follows. In Section 2, we recall some notations and
fundamental lemmas. In Section 3 we provide the proof of Theorem 1.1. In
Section 4 we study gradient estimates of the general equation (1.4)
and prove Theorem 1.2. Harnack-type inequalities and Liouville-type theorems
for (1.4) are also established in this section.
## 2\. Preliminaries
In this section, we denote $(M,g)$ an $n$-dimensional complete Riemannian
manifold with $Ric(g)\geq-Kg$ in the geodesic ball $B_{2R}(O)$, where
$K=K(2R)$ is a nonnegative constant depending on $R$ and $O$ is a fixed point
on $M$.
The equation (1.3) can easily be recast in the form
$\Delta u+uf(\ln u)+ug(\ln u)=0,$
where $f,g\in C^{2}(\mathbb{R},\mathbb{R})$.
For (1.3), Bo Peng, Youde Wang and Guodong Wei proved an important
inequality in [12, 13]:
###### Proposition 2.1.
Suppose that $u(x)$ is a smooth positive solution to equation (1.3)
on $B_{2R}(O)$. Let
$\omega=\ln u\quad\quad\mbox{and}\quad\quad
G=|\nabla{\omega}|^{2}+\beta_{1}f(\omega)+\beta_{2}g(\omega),$
where $\beta_{1}$ and $\beta_{2}$ are constants to be determined later. Then we
have
(2.1) $\displaystyle\Delta G\geq$
$\displaystyle\frac{2}{n}(G-(\beta_{1}-\lambda_{1})f-(\beta_{2}-\lambda_{2})g)^{2}-2\langle\nabla\omega,\nabla
G\rangle$
$\displaystyle+((\beta_{1}-2\lambda_{1})f^{\prime}+\beta_{1}f^{\prime\prime}+(\beta_{2}-2\lambda_{2})g^{\prime}+\beta_{2}g^{\prime\prime}-2K)(G-\beta_{1}f-\beta_{2}g)$
$\displaystyle-(\beta_{1}f^{\prime}+\beta_{2}g^{\prime})(G-(\beta_{1}-\lambda_{1})f-(\beta_{2}-\lambda_{2})g).$
###### Proof.
First, there holds
(2.2) $\Delta\omega+G-(\beta_{1}-\lambda_{1})f-(\beta_{2}-\lambda_{2})g=0$
and
(2.3) $|\nabla\omega|^{2}=G-\beta_{1}f-\beta_{2}g.$
By the Bochner–Weitzenböck formula and $Ric(g)\geq-Kg$ on $(M,g)$,
we obtain
(2.4) $\Delta|\nabla\omega|^{2}\geq
2|\nabla^{2}\omega|^{2}+2\langle\nabla\omega,\nabla(\Delta\omega)\rangle-2K|\nabla\omega|^{2}.$
Combining (2.2), (2.3) and (2.4), we obtain
(2.5) $\displaystyle\Delta G=$
$\displaystyle\Delta|\nabla\omega|^{2}+\Delta(\beta_{1}f+\beta_{2}g)$
$\displaystyle\geq$ $\displaystyle
2|\nabla^{2}\omega|^{2}+2\langle\nabla\omega,\nabla(\Delta\omega)\rangle-2K|\nabla\omega|^{2}+\Delta(\beta_{1}f+\beta_{2}g)$
$\displaystyle\geq$
$\displaystyle\frac{2}{n}(\Delta\omega)^{2}+2\langle\nabla\omega,\nabla(\Delta\omega)\rangle-2K|\nabla\omega|^{2}+\beta_{1}(f^{\prime\prime}|\nabla\omega|^{2}+f^{\prime}\Delta\omega)+\beta_{2}(g^{\prime\prime}|\nabla\omega|^{2}+g^{\prime}\Delta\omega).$
Here we have used the relation
$|\nabla^{2}\omega|^{2}\geq\frac{1}{n}(\Delta\omega)^{2},$
which follows from the Cauchy–Schwarz inequality. Substituting (2.2) and (2.3)
into (2.5), we obtain
(2.6) $\displaystyle\Delta G\geq$
$\displaystyle\frac{2}{n}(G-(\beta_{1}-\lambda_{1})f-(\beta_{2}-\lambda_{2})g)^{2}-2\langle\nabla\omega,\nabla(G-(\beta_{1}-\lambda_{1})f-(\beta_{2}-\lambda_{2})g)\rangle$
$\displaystyle-2K|\nabla\omega|^{2}+\beta_{1}(f^{\prime\prime}|\nabla\omega|^{2}+f^{\prime}\Delta\omega)+\beta_{2}(g^{\prime\prime}|\nabla\omega|^{2}+g^{\prime}\Delta\omega)$
$\displaystyle=$
$\displaystyle\frac{2}{n}(G-(\beta_{1}-\lambda_{1})f-(\beta_{2}-\lambda_{2})g)^{2}-2\langle\nabla\omega,\nabla
G\rangle$
$\displaystyle+((\beta_{1}-2\lambda_{1})f^{\prime}+\beta_{1}f^{\prime\prime}+(\beta_{2}-2\lambda_{2})g^{\prime}+\beta_{2}g^{\prime\prime}-2K)|\nabla\omega|^{2}$
$\displaystyle-(\beta_{1}f^{\prime}+\beta_{2}g^{\prime})(G-(\beta_{1}-\lambda_{1})f-(\beta_{2}-\lambda_{2})g)$
$\displaystyle=$
$\displaystyle\frac{2}{n}(G-(\beta_{1}-\lambda_{1})f-(\beta_{2}-\lambda_{2})g)^{2}-2\langle\nabla\omega,\nabla
G\rangle$
$\displaystyle+((\beta_{1}-2\lambda_{1})f^{\prime}+\beta_{1}f^{\prime\prime}+(\beta_{2}-2\lambda_{2})g^{\prime}+\beta_{2}g^{\prime\prime}-2K)(G-\beta_{1}f-\beta_{2}g)$
$\displaystyle-(\beta_{1}f^{\prime}+\beta_{2}g^{\prime})(G-(\beta_{1}-\lambda_{1})f-(\beta_{2}-\lambda_{2})g).$
Thus we complete the proof. ∎
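The pointwise trace inequality $|\nabla^{2}\omega|^{2}\geq\frac{1}{n}(\Delta\omega)^{2}$ used in the proof is the Cauchy–Schwarz inequality for the eigenvalues of the Hessian. As an illustration (a hypothetical numerical check, identifying the Hessian at a point with a symmetric matrix, $|\nabla^{2}\omega|^{2}$ with the squared Frobenius norm, and $\Delta\omega$ with the trace):

```python
import random

def frob_sq(H):
    # squared Frobenius norm: sum of squared entries
    return sum(x * x for row in H for x in row)

def trace(H):
    return sum(H[i][i] for i in range(len(H)))

random.seed(0)
n = 5
for _ in range(1000):
    # random symmetric n x n matrix, standing in for a Hessian
    A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    H = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]
    assert frob_sq(H) >= trace(H) ** 2 / n - 1e-12
```

Equality holds exactly when the Hessian is a multiple of the identity, which is why the constant $1/n$ cannot be improved.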
Let $\phi(x)$ be a $C^{2}$ cut-off function with $0\leq\phi(x)\leq 1$,
$\phi(x)|_{B_{R}(O)}=1$ and $\phi(x)|_{M\backslash B_{2R}(O)}=0$. Using the
Laplacian comparison theorem (see [6]), there holds true
(2.7)
$\frac{|\nabla\phi|^{2}}{\phi}\leq\frac{C_{1}^{2}}{R^{2}},\quad\quad\mbox{and}\quad\quad\Delta\phi\geq-\frac{(n-1)(1+\sqrt{K}R)C_{1}^{2}+C_{2}}{R^{2}},$
where $C_{1}$ and $C_{2}$ are absolute constants.
Take $x_{0}\in B_{2R}(O)$ such that
$\phi G(x_{0})=\sup_{B_{2R}(O)}(\phi G),$
and assume that $\sup_{B_{2R}(O)}(\phi G)\geq 0$; otherwise the conclusion is
trivial.
Since $x_{0}$ is a maximum point of $\phi G$ on $B_{2R}(O)$, at $x_{0}$ we
have
$\nabla(\phi G)=0\quad\quad\mbox{and}\quad\quad\Delta(\phi G)\leq 0.$
That is
(2.8) $\phi\nabla G=-G\nabla\phi\quad\quad\mbox{and}\quad\quad\phi\Delta
G\leq-G\Delta\phi+2G\frac{|\nabla\phi|^{2}}{\phi}.$
In the sequel, for the sake of convenience we suppress the point $x_{0}$ in
the notation. Setting
$AG:=\frac{((n-1)(1+\sqrt{K}R)+2)C_{1}^{2}+C_{2}}{R^{2}}G,$
we can see easily that
$AG\geq-G\Delta\phi+2G\frac{|\nabla\phi|^{2}}{\phi}\geq\phi\Delta G.$
Now, from (2.6) we obtain
$\displaystyle AG\geq\phi\Delta G\geq$
$\displaystyle\frac{2}{n}(G-(\beta_{1}-\lambda_{1})f-(\beta_{2}-\lambda_{2})g)^{2}\phi-2\langle\nabla\omega,\nabla
G\rangle\phi$
$\displaystyle+((\beta_{1}-2\lambda_{1})f^{\prime}+\beta_{1}f^{\prime\prime}+(\beta_{2}-2\lambda_{2})g^{\prime}+\beta_{2}g^{\prime\prime}-2K)(G-\beta_{1}f-\beta_{2}g)\phi$
$\displaystyle-(\beta_{1}f^{\prime}+\beta_{2}g^{\prime})(G-(\beta_{1}-\lambda_{1})f-(\beta_{2}-\lambda_{2})g)\phi.$
Noticing
$-2\langle\nabla\omega,\nabla
G\rangle\phi=2\langle\nabla\omega,\nabla\phi\rangle
G\geq-2|\nabla\omega||\nabla\phi|G=-2|\nabla\phi|G(G-\beta_{1}f-\beta_{2}g)^{\frac{1}{2}},$
we obtain
(2.9) $\displaystyle AG\geq$
$\displaystyle\frac{2}{n}(G-(\beta_{1}-\lambda_{1})f-(\beta_{2}-\lambda_{2})g)^{2}\phi-2|\nabla\phi|G(G-\beta_{1}f-\beta_{2}g)^{\frac{1}{2}}$
$\displaystyle+((\beta_{1}-2\lambda_{1})f^{\prime}+\beta_{1}f^{\prime\prime}+(\beta_{2}-2\lambda_{2})g^{\prime}+\beta_{2}g^{\prime\prime}-2K)(G-\beta_{1}f-\beta_{2}g)\phi$
$\displaystyle-(\beta_{1}f^{\prime}+\beta_{2}g^{\prime})(G-(\beta_{1}-\lambda_{1})f-(\beta_{2}-\lambda_{2})g)\phi.$
Now we are ready to provide the proofs of the theorems.
## 3\. The proof of Theorem 1.1
In this section, we consider the gradient estimates of (1.3), i.e.
$\Delta u+\lambda_{1}u\ln u+\lambda_{2}u^{b+1}=0,$
where $b\leq 0$. Now, we present the proof of Theorem 1.1.
###### Proof.
Letting $\beta_{1}=p\lambda_{1}$, $\beta_{2}=q\lambda_{2}$, $f(\omega)=\omega$
and $g(\omega)=e^{b\omega}$ in (2.9), we know that at $x_{0}$ there
holds true
(3.1) $\displaystyle AG\geq$
$\displaystyle\frac{2}{n}(G-(p-1)\lambda_{1}f-(q-1)\lambda_{2}g)^{2}\phi-2|\nabla\phi|G(G-p\lambda_{1}f-q\lambda_{2}g)^{\frac{1}{2}}$
$\displaystyle+((p-2)\lambda_{1}f^{\prime}+p\lambda_{1}f^{\prime\prime}+(q-2)\lambda_{2}g^{\prime}+q\lambda_{2}g^{\prime\prime}-2K)(G-p\lambda_{1}f-q\lambda_{2}g)\phi$
$\displaystyle-(p\lambda_{1}f^{\prime}+q\lambda_{2}g^{\prime})(G-(p-1)\lambda_{1}f-(q-1)\lambda_{2}g)\phi$
$\displaystyle=$
$\displaystyle\frac{2}{n}(G-(p-1)\lambda_{1}\omega-(q-1)\lambda_{2}e^{b\omega})^{2}\phi-2|\nabla\phi|G(G-p\lambda_{1}\omega-q\lambda_{2}e^{b\omega})^{\frac{1}{2}}$
$\displaystyle+((p-2)\lambda_{1}+(q-2)\lambda_{2}be^{b\omega}+q\lambda_{2}b^{2}e^{b\omega}-2K)(G-p\lambda_{1}\omega-q\lambda_{2}e^{b\omega})\phi$
$\displaystyle-(p\lambda_{1}+q\lambda_{2}be^{b\omega})(G-(p-1)\lambda_{1}\omega-(q-1)\lambda_{2}e^{b\omega})\phi.$
Letting $1<p<2$ and $q=1$ in (3.1), we have
(3.2) $\displaystyle AG\geq$
$\displaystyle\frac{2}{n}(G-(p-1)\lambda_{1}\omega)^{2}\phi-2|\nabla\phi|G(G-p\lambda_{1}\omega-\lambda_{2}e^{b\omega})^{\frac{1}{2}}$
$\displaystyle+((p-2)\lambda_{1}-\lambda_{2}be^{b\omega}+\lambda_{2}b^{2}e^{b\omega}-2K)(G-p\lambda_{1}\omega-\lambda_{2}e^{b\omega})\phi$
$\displaystyle-(p\lambda_{1}+\lambda_{2}be^{b\omega})(G-(p-1)\lambda_{1}\omega)\phi.$
Case 1: $\lambda_{1}>0$ and $\lambda_{2}>0$. In order to obtain the required
estimates we need to treat each term appearing in the above inequality in a
delicate way. To this end we set the positive number
$L=\frac{n(p\lambda_{1}+2Kp)}{2(p-1)^{2}}$
and divide the range of $\omega$ into three intervals: (1). $\omega\geq
0$; (2). $-L<\omega<0$; (3). $\omega\leq-L$. Then we deal with the inequality
(3.2) carefully on each interval in turn.
(1). $\omega\geq 0$.
Using Young’s inequality, we can deduce that there holds
(3.3) $2G|\nabla\phi|(G-p\lambda_{1}\omega-\lambda_{2}e^{b\omega})^{1/2}\leq
C_{3}\phi
G(G-p\lambda_{1}\omega-\lambda_{2}e^{b\omega})+\frac{|\nabla\phi|^{2}}{\phi}\frac{G}{C_{3}},$
where $C_{3}$ is a positive constant to be determined later.
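For completeness, the inequality (3.3) is an instance of the elementary estimate $2ab\leq C_{3}a^{2}+C_{3}^{-1}b^{2}$: taking
$a=\left(\phi G(G-p\lambda_{1}\omega-\lambda_{2}e^{b\omega})\right)^{1/2}\quad\quad\mbox{and}\quad\quad b=\frac{|\nabla\phi|}{\phi^{1/2}}G^{1/2},$
one checks that $2ab$, $C_{3}a^{2}$ and $C_{3}^{-1}b^{2}$ are exactly the three terms appearing in (3.3).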
Noticing $b\leq 0$ and using (2.7) and (3.2) we have
(3.4) $\displaystyle AG\geq$
$\displaystyle\frac{2}{n}(G-(p-1)\lambda_{1}\omega)^{2}\phi-\frac{C_{1}^{2}}{R^{2}}\frac{G}{C_{3}}-C_{3}\phi
G^{2}+C_{3}\phi G(p\lambda_{1}\omega+\lambda_{2}e^{b\omega})$
$\displaystyle+((p-2)\lambda_{1}-\lambda_{2}be^{b\omega}+\lambda_{2}b^{2}e^{b\omega}-2K)(G-p\lambda_{1}\omega-\lambda_{2}e^{b\omega})\phi$
$\displaystyle-(p\lambda_{1}+\lambda_{2}be^{b\omega})(G-(p-1)\lambda_{1}\omega)\phi$
$\displaystyle\geq$ $\displaystyle\frac{2}{n}\phi G^{2}-\frac{4}{n}\phi
G(p-1)\lambda_{1}\omega-\frac{C_{1}^{2}}{R^{2}}\frac{G}{C_{3}}-C_{3}\phi
G^{2}+C_{3}\phi G(p\lambda_{1}\omega+\lambda_{2}e^{b\omega})$
$\displaystyle+\phi\frac{2}{n}(p-1)^{2}\lambda_{1}^{2}\omega^{2}+((p-2)\lambda_{1}-2K)(G-p\lambda_{1}\omega-\lambda_{2}e^{b\omega})\phi$
$\displaystyle-(p\lambda_{1}+\lambda_{2}be^{b\omega})(G-(p-1)\lambda_{1}\omega)\phi.$
After rearranging the right side of the above inequality, we have
(3.5) $\displaystyle AG\geq$ $\displaystyle\frac{2}{n}\phi G^{2}-C_{3}\phi
G^{2}-\frac{4}{n}\phi
G(p-1)\lambda_{1}\omega-\frac{C_{1}^{2}}{R^{2}}\frac{G}{C_{3}}-2KG\phi$
$\displaystyle+(p-2)\lambda_{1}G\phi-p\lambda_{1}G\phi+\lambda_{2}e^{b\omega}\phi(C_{3}G-bG+b(p-1)\lambda_{1}\omega)$
$\displaystyle+\lambda_{1}^{2}\phi(\frac{2}{n}(p-1)^{2}\omega^{2}+p(2-p)\omega+p(p-1)\omega)+2Kp\lambda_{1}\omega\phi.$
Noticing that
$K\geq 0,\quad\quad 0\leq\phi\leq 1,\quad\quad 0\leq p\lambda_{1}\omega\leq
G\quad\quad\mbox{and}\quad\quad 0\leq\lambda_{2}e^{b\omega}\leq G,$
we have
(3.6) $\displaystyle AG\geq$
$\displaystyle\left(\frac{2}{n}-C_{3}-\frac{4(p-1)}{np}\right)G^{2}\phi-\frac{C_{1}^{2}}{R^{2}}\frac{G}{C_{3}}-2K\phi
G-2\lambda_{1}G\phi$ $\displaystyle+\lambda_{2}e^{b\omega}\phi
G\left(C_{3}-b+\frac{b(p-1)}{p}\right)$ $\displaystyle\geq$
$\displaystyle\left(\frac{2(2-p)}{np}-C_{3}\right)G^{2}\phi-\frac{C_{1}^{2}}{R^{2}}\frac{G}{C_{3}}-(2K+2\lambda_{1})\phi
G.$
If
$\frac{2(2-p)}{np}-C_{3}>0,$
then, dividing the both sides of the above inequality by $G$ we obtain
(3.7) $\displaystyle A\geq$
$\displaystyle(\frac{2(2-p)}{np}-C_{3})G\phi-\frac{C_{1}^{2}}{R^{2}}\frac{1}{C_{3}}-(2K+2\lambda_{1})\phi.$
Thus, we know that for all $C_{3}\in(0,\frac{2(2-p)}{np})$ there holds true
(3.8) $\displaystyle\sup_{B_{R}(O)}G\leq G\phi\leq$
$\displaystyle\frac{((A+2K+2\lambda_{1})R^{2}C_{3}+C_{1}^{2})np}{(2(2-p)-C_{3}np)R^{2}C_{3}},$
where
$A=\frac{((n-1)(1+\sqrt{K}R)+2)C_{1}^{2}+C_{2}}{R^{2}}.$
On the other hand, we note that the right-hand side of the above inequality
tends to $+\infty$ as $C_{3}\rightarrow 0^{+}$ or
$C_{3}\rightarrow(\frac{2(2-p)}{np})^{-}$. Since the right-hand side of (3.8)
is a continuous function of $C_{3}$, it attains its minimum in
$(0,\frac{2(2-p)}{np})$.
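This minimization is one-dimensional and, for concrete data, can be located numerically. The following sketch (in which all sample values of $n$, $K$, $R$, $\lambda_{1}$, $p$, $C_{1}$, $C_{2}$ are hypothetical, chosen only for illustration) confirms that the right-hand side of (3.8) blows up at both endpoints and attains an interior minimum:

```python
def rhs(C3, n, K, R, lam1, p, C1, A):
    # right-hand side of (3.8) as a function of C3 in (0, 2(2-p)/(n p))
    num = ((A + 2 * K + 2 * lam1) * R**2 * C3 + C1**2) * n * p
    den = (2 * (2 - p) - C3 * n * p) * R**2 * C3
    return num / den

# hypothetical sample data (C1, C2 play the role of the absolute constants in (2.7))
n, K, R, lam1, p, C1, C2 = 3, 1.0, 2.0, 0.5, 1.5, 1.0, 1.0
A = (((n - 1) * (1 + K**0.5 * R) + 2) * C1**2 + C2) / R**2
upper = 2 * (2 - p) / (n * p)

# sample the interval, skipping the endpoints where the expression blows up
grid = [upper * k / 1000 for k in range(1, 1000)]
vals = [rhs(c, n, K, R, lam1, p, C1, A) for c in grid]
best = min(vals)
assert vals[0] > best and vals[-1] > best  # interior minimum
```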
(2). $-L<\omega<0$.
In the present situation, by using Young’s inequality we can verify that there
holds
(3.9) $2G|\nabla\phi|(G-p\lambda_{1}\omega-\lambda_{2}e^{b\omega})^{1/2}\leq
C_{4}\phi^{1/2}G^{1/2}(G-p\lambda_{1}\omega-\lambda_{2}e^{b\omega})+\frac{|\nabla\phi|^{2}}{C_{4}\phi}\phi^{1/2}G^{3/2},$
where $C_{4}$ is a positive constant to be determined later.
From (3.2) we have
(3.10) $\displaystyle AG\geq$
$\displaystyle\frac{2}{n}(G-(p-1)\lambda_{1}\omega)^{2}\phi-
C_{4}\phi^{1/2}G^{1/2}(G-p\lambda_{1}\omega-\lambda_{2}e^{b\omega})-\frac{|\nabla\phi|^{2}}{C_{4}\phi}\phi^{1/2}G^{3/2}$
$\displaystyle+((p-2)\lambda_{1}-\lambda_{2}be^{b\omega}+\lambda_{2}b^{2}e^{b\omega}-2K)(G-p\lambda_{1}\omega-\lambda_{2}e^{b\omega})\phi$
$\displaystyle-(p\lambda_{1}+\lambda_{2}be^{b\omega})(G-(p-1)\lambda_{1}\omega)\phi$
$\displaystyle\geq$ $\displaystyle\frac{2}{n}\phi G^{2}-\frac{4}{n}\phi
G(p-1)\lambda_{1}\omega-
C_{4}\phi^{1/2}G^{1/2}(G-p\lambda_{1}\omega-\lambda_{2}e^{b\omega})-\frac{|\nabla\phi|^{2}}{C_{4}\phi}\phi^{1/2}G^{3/2}$
$\displaystyle+\phi\frac{2}{n}(p-1)^{2}\lambda_{1}^{2}\omega^{2}+((p-2)\lambda_{1}-2K)(G-p\lambda_{1}\omega-\lambda_{2}e^{b\omega})\phi$
$\displaystyle-(p\lambda_{1}+\lambda_{2}be^{b\omega})(G-(p-1)\lambda_{1}\omega)\phi.$
After rearranging the terms on the right-hand side of the above inequality,
we have
(3.11) $\displaystyle AG\geq$ $\displaystyle\frac{2}{n}\phi
G^{2}-C_{4}\phi^{1/2}G^{3/2}-\frac{C_{1}^{2}}{C_{4}R^{2}}\phi^{1/2}G^{3/2}$
$\displaystyle-2KG\phi+(p-2)\lambda_{1}G\phi-p\lambda_{1}G\phi$
$\displaystyle+\lambda_{2}e^{b\omega}\phi(C_{4}\phi^{1/2}G^{1/2}-bG+b(p-1)\lambda_{1}\omega)$
$\displaystyle+\lambda_{1}\omega(-\frac{4}{n}\phi
G(p-1)+C_{4}p\phi^{1/2}G^{1/2}+\frac{2}{n}(p-1)^{2}\omega\phi+p\lambda_{1}\phi+2Kp\phi).$
If
$0>-\frac{4}{n}\phi
G(p-1)+C_{4}p\phi^{1/2}G^{1/2}+\frac{2}{n}(p-1)^{2}\omega\phi+p\lambda_{1}\phi+2Kp\phi,$
then, from (3.11) we infer
(3.12) $\displaystyle AG\geq$ $\displaystyle\frac{2}{n}\phi
G^{2}-C_{4}\phi^{1/2}G^{3/2}-\frac{C_{1}^{2}}{C_{4}R^{2}}\phi^{1/2}G^{3/2}$
$\displaystyle-2KG\phi-2\lambda_{1}G\phi.$
Dividing the both sides of the above inequality by $G$, we obtain
(3.13) $\displaystyle A\geq$ $\displaystyle\frac{2}{n}\phi
G-C_{4}\phi^{1/2}G^{1/2}-\frac{C_{1}^{2}}{C_{4}R^{2}}\phi^{1/2}G^{1/2}-2K\phi-2\lambda_{1}\phi$
$\displaystyle\geq$ $\displaystyle\frac{1}{n}\phi
G-\frac{n}{4}(C_{4}+\frac{C_{1}^{2}}{C_{4}R^{2}})^{2}-2K\phi-2\lambda_{1}\phi.$
Thus, at $x_{0}$ we have
(3.14) $\displaystyle\phi G\leq$
$\displaystyle\inf_{C_{4}>0}\\{nA+\frac{n^{2}}{4}(C_{4}+\frac{C_{1}^{2}}{C_{4}R^{2}})^{2}+2Kn+n(p-2)\lambda_{1}+np\lambda_{1}\\}$
$\displaystyle=$ $\displaystyle
nA+\frac{n^{2}C_{1}^{2}}{R^{2}}+2Kn+n(p-2)\lambda_{1}+np\lambda_{1},$
where
$A=\frac{((n-1)(1+\sqrt{K}R)+2)C_{1}^{2}+C_{2}}{R^{2}}.$
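The infimum over $C_{4}>0$ above is evaluated by the AM–GM inequality: for $C_{4}>0$,
$C_{4}+\frac{C_{1}^{2}}{C_{4}R^{2}}\geq\frac{2C_{1}}{R},$
with equality at $C_{4}=\frac{C_{1}}{R}$, so that
$\inf_{C_{4}>0}\frac{n^{2}}{4}\left(C_{4}+\frac{C_{1}^{2}}{C_{4}R^{2}}\right)^{2}=\frac{n^{2}C_{1}^{2}}{R^{2}},$
which gives the equality in (3.14).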
Otherwise, we have
$0\leq-\frac{4}{n}\phi
G(p-1)+C_{4}p\phi^{1/2}G^{1/2}+\frac{2}{n}(p-1)^{2}\omega\phi+p\lambda_{1}\phi+2Kp\phi,$
then, it follows that
$0\leq-\frac{2}{n}\phi
G(p-1)+\frac{n}{8(p-1)}C_{4}^{2}p^{2}+\frac{2}{n}(p-1)^{2}L\phi+p\lambda_{1}\phi+2Kp\phi.$
This leads to
(3.15) $\displaystyle\phi G\leq$
$\displaystyle\inf_{C_{4}}\left\\{\frac{n}{2(p-1)}(\frac{n}{8(p-1)}C_{4}^{2}p^{2}+\frac{2}{n}(p-1)^{2}L+p\lambda_{1}+2Kp)\right\\}$
$\displaystyle=$
$\displaystyle\frac{n}{2(p-1)}\left(\frac{2}{n}(p-1)^{2}L+p\lambda_{1}+2Kp\right).$
(3). $\omega\leq-L$.
For this case, we have
$\frac{2}{n}(p-1)^{2}\omega\phi+p\lambda_{1}\phi+2Kp\phi\leq 0.$
From (3.11), we have
(3.16) $\displaystyle AG\geq$ $\displaystyle\frac{2}{n}\phi
G^{2}-C_{4}\phi^{1/2}G^{3/2}-\frac{C_{1}^{2}}{C_{4}R^{2}}\phi^{1/2}G^{3/2}$
$\displaystyle-2KG\phi+(p-2)\lambda_{1}G\phi-p\lambda_{1}G\phi$
$\displaystyle+\lambda_{2}e^{b\omega}\phi(C_{4}\phi^{1/2}G^{1/2}-bG+b(p-1)\lambda_{1}\omega)$
$\displaystyle+\lambda_{1}\omega(-\frac{4}{n}\phi
G(p-1)+C_{4}p\phi^{1/2}G^{1/2}).$
If
$0>-\frac{4}{n}\phi G(p-1)+C_{4}p\phi^{1/2}G^{1/2},$
then, from (3.16) we have
(3.17) $\displaystyle AG\geq$ $\displaystyle\frac{2}{n}\phi
G^{2}-C_{4}\phi^{1/2}G^{3/2}-\frac{C_{1}^{2}}{C_{4}R^{2}}\phi^{1/2}G^{3/2}$
$\displaystyle-2KG\phi-2\lambda_{1}G\phi.$
Dividing the both sides of the above inequality by $G$, we obtain
(3.18) $\displaystyle A\geq$ $\displaystyle\frac{2}{n}\phi
G-C_{4}\phi^{1/2}G^{1/2}-\frac{C_{1}^{2}}{C_{4}R^{2}}\phi^{1/2}G^{1/2}-2K\phi-2\lambda_{1}\phi$
$\displaystyle\geq$ $\displaystyle\frac{1}{n}\phi
G-\frac{n}{4}(C_{4}+\frac{C_{1}^{2}}{C_{4}R^{2}})^{2}-2K\phi-2\lambda_{1}\phi.$
Hence, we know that at $x_{0}$ there holds true
(3.19) $\displaystyle\phi G\leq$
$\displaystyle\inf_{C_{4}>0}\\{nA+\frac{n^{2}}{4}(C_{4}+\frac{C_{1}^{2}}{C_{4}R^{2}})^{2}+2Kn+n(p-2)\lambda_{1}+np\lambda_{1}\\}$
$\displaystyle=$ $\displaystyle
nA+\frac{n^{2}C_{1}^{2}}{R^{2}}+2Kn+n(p-2)\lambda_{1}+np\lambda_{1},$
where
$A=\frac{((n-1)(1+\sqrt{K}R)+2)C_{1}^{2}+C_{2}}{R^{2}}.$
Otherwise, we have
$0\leq-\frac{4}{n}\phi G(p-1)+C_{4}p\phi^{1/2}G^{1/2},$
it follows that
$0\leq-\frac{2}{n}\phi G(p-1)+\frac{n}{8(p-1)}C_{4}^{2}p^{2},$
hence, we obtain
(3.20) $\displaystyle\phi G\leq$
$\displaystyle\inf_{C_{4}>0}\\{\frac{n}{2(p-1)}\frac{n}{8(p-1)}C_{4}^{2}p^{2}\\}=0.$
This is a trivial conclusion.
Combining (3.8), (3.14), (3.15), (3.19) and (3.20), we have
(3.21) $\sup_{B_{R}(O)}G\leq G\phi\leq\max\left\\{\begin{aligned}
&\min_{C_{3}\in(0,\frac{2(2-p)}{np})}\left\\{\frac{((A+2K+2\lambda_{1})R^{2}C_{3}+C_{1}^{2})np}{(2(2-p)-C_{3}np)R^{2}C_{3}}\right\\};\\\
&nA+\frac{n^{2}C_{1}^{2}}{R^{2}}+2Kn+n(p-2)\lambda_{1}+np\lambda_{1};\\\
&\frac{n}{2(p-1)}(\frac{2}{n}(p-1)^{2}L+p\lambda_{1}+2Kp)\end{aligned}\right\\}=\tilde{C_{1}}.$
Thus, we complete the proof of Case 1.
Case 2: $\lambda_{1}\leq 0$ and $\lambda_{2}>0$. For the present situation, we
need to consider the following two cases on $\omega$: (1). $\omega\geq 0$;
(2). $\omega<0$. We will discuss them one by one.
(1). $\omega\geq 0$.
From (3.5) we have
(3.22) $\displaystyle AG\geq$ $\displaystyle\frac{2}{n}\phi G^{2}-C_{3}\phi
G^{2}-\frac{4}{n}\phi
G(p-1)\lambda_{1}\omega-\frac{C_{1}^{2}}{R^{2}}\frac{G}{C_{3}}-2KG\phi$
$\displaystyle-2\lambda_{1}G\phi+\lambda_{2}e^{b\omega}\phi(C_{3}G-bG+b(p-1)\lambda_{1}\omega)$
$\displaystyle+\lambda_{1}^{2}\phi(\frac{2}{n}(p-1)^{2}\omega^{2}+p\omega)+2Kp\lambda_{1}\omega\phi.$
Noticing that
$K\geq 0,\quad\quad 0\leq\phi\leq 1,\quad\quad
1<p<2\quad\quad\mbox{and}\quad\quad b\leq 0,$
we have the following:
(3.23)
$-2\lambda_{1}G\phi+\lambda_{2}e^{b\omega}\phi(C_{3}G-bG+b(p-1)\lambda_{1}\omega)\geq
0,$
and
(3.24) $\lambda_{1}^{2}\phi(\frac{2}{n}(p-1)^{2}\omega^{2}+p\omega)\geq 0.$
By substituting (3.23) and (3.24) into (3.22), we derive
(3.25) $\displaystyle AG\geq$ $\displaystyle\frac{2}{n}\phi G^{2}-C_{3}\phi
G^{2}-\frac{4}{n}\phi
G(p-1)\lambda_{1}\omega-\frac{C_{1}^{2}}{R^{2}}\frac{G}{C_{3}}-2KG\phi+2Kp\lambda_{1}\omega\phi$
$\displaystyle=$
$\displaystyle(\frac{2}{n}-C_{3})G^{2}\phi-\frac{C_{1}^{2}}{R^{2}}\frac{G}{C_{3}}-2KG\phi+\lambda_{1}\omega\phi(-\frac{4}{n}G(p-1)+2Kp).$
If
$-\frac{4}{n}G(p-1)+2Kp<0,$
then we have
$\lambda_{1}\omega\phi(-\frac{4}{n}G(p-1)+2Kp)\geq 0.$
Thus, from (3.25) it follows
(3.26)
$AG\geq(\frac{2}{n}-C_{3})G^{2}\phi-\frac{C_{1}^{2}}{R^{2}}\frac{G}{C_{3}}-2KG\phi.$
Taking $C_{3}<\frac{2}{n}$ and dividing the both sides of (3.26) by
$(\frac{2}{n}-C_{3})G$, we obtain
(3.27)
$G\phi\leq\frac{A+\frac{C_{1}^{2}}{R^{2}}\frac{1}{C_{3}}+2K}{\frac{2}{n}-C_{3}}.$
Otherwise, we have
$-\frac{4}{n}G(p-1)+2Kp\geq 0,$
it follows that
(3.28) $G\phi\leq\frac{nKp}{2(p-1)}.$
(2). $\omega<0$.
From (3.5) we have
(3.29) $\displaystyle AG\geq$ $\displaystyle\frac{2}{n}\phi
G^{2}-\frac{4}{n}\phi G(p-1)\lambda_{1}\omega-C_{3}\phi
G^{2}-\frac{C_{1}^{2}}{R^{2}}\frac{G}{C_{3}}-2KG\phi$
$\displaystyle-2\lambda_{1}G\phi+\lambda_{1}^{2}\phi
p\omega-b\lambda_{2}e^{b\omega}\phi(G-(p-1)\lambda_{1}\omega)$
$\displaystyle+\lambda_{1}^{2}\phi\frac{2}{n}(p-1)^{2}\omega^{2}+2Kp\lambda_{1}\omega\phi+\lambda_{2}e^{b\omega}\phi
C_{3}G.$
Since
$G=|\nabla{\omega}|^{2}+p\lambda_{1}\omega+\lambda_{2}e^{b\omega},$
we have $G\geq p\lambda_{1}\omega$. Hence,
(3.30) $\frac{2}{n}\phi G^{2}-\frac{4}{n}\phi
G(p-1)\lambda_{1}\omega\geq(\frac{2}{n}-\frac{4(p-1)}{np})\phi
G^{2}=\frac{2(2-p)}{np}\phi G^{2},$ (3.31)
$-2\lambda_{1}G\phi+\lambda_{1}^{2}\phi
p\omega=-\lambda_{1}\phi(G+G-p\lambda_{1}\omega)\geq 0,$
and
(3.32)
$-b\lambda_{2}e^{b\omega}\phi(G-(p-1)\lambda_{1}\omega)=-b\lambda_{2}e^{b\omega}\phi(G-p\lambda_{1}\omega+\lambda_{1}\omega)\geq
0.$
Thus, by substituting (3.30), (3.31) and (3.32) into (3.29), and
noticing that
$\lambda_{1}^{2}\phi\frac{2}{n}(p-1)^{2}\omega^{2}+2Kp\lambda_{1}\omega\phi+\lambda_{2}e^{b\omega}\phi
C_{3}G\geq 0,$
we obtain
(3.33) $AG\geq(\frac{2(2-p)}{np}-C_{3})\phi
G^{2}-\frac{C_{1}^{2}}{R^{2}}\frac{G}{C_{3}}-2KG\phi.$
Taking $C_{3}<\frac{2(2-p)}{np}$ and dividing the both sides of (3.33) by
$(\frac{2(2-p)}{np}-C_{3})G$, we obtain the following inequality
(3.34)
$G\phi\leq\frac{A+\frac{C_{1}^{2}}{R^{2}}\frac{1}{C_{3}}+2K}{\frac{2(2-p)}{np}-C_{3}}.$
Note that the right-hand side of the above inequality tends to $+\infty$ as
$C_{3}\rightarrow 0^{+}$ or
$C_{3}\rightarrow\left(\frac{2(2-p)}{np}\right)^{-}$. Since the right-hand
side is a continuous function of $C_{3}$, it attains its minimum in the
interval $(0,\frac{2(2-p)}{np})$.
Since $0<\frac{2(2-p)}{np}<\frac{2}{n}$, combining (3.27), (3.28) and (3.34)
we get
(3.35) $\sup_{B_{R}(O)}G\leq G\phi\leq\max\left\\{\begin{aligned}
&\frac{nKp}{2(p-1)};\\\
&\min_{C_{3}\in(0,\frac{2(2-p)}{np})}\left\\{\frac{((A+2K)R^{2}C_{3}+C_{1}^{2})np}{(2(2-p)-C_{3}np)R^{2}C_{3}}\right\\}\end{aligned}\right\\}=\tilde{C_{2}},$
where
$A=\frac{((n-1)(1+\sqrt{K}R)+2)C_{1}^{2}+C_{2}}{R^{2}}.$
Thus, we complete the proof of Case 2 and the proof of Theorem 1.1. ∎
## 4\. The proof of Theorem 1.2
In this section, we denote by $(M,g)$ an $n$-dimensional complete Riemannian
manifold with $Ric(g)\geq-Kg$ in the geodesic ball $B_{2R}(O)$, where $O$ is a
fixed point on $M$ and $K=K(2R)$ is a nonnegative constant depending on $R$.
Let
(4.1) $G_{1}=|\nabla(\ln u)|^{2}+\lambda h(\ln u).$
Taking $x_{1}\in B_{2R}(O)$ such that
$\phi G_{1}(x_{1})=\sup_{B_{2R}(O)}(\phi G_{1})\geq 0$
and replacing $G$ by $G_{1}$ in (2.9), we obtain
(4.2) $\displaystyle AG_{1}\geq$ $\displaystyle\phi\Delta G_{1}$
$\displaystyle\geq$
$\displaystyle(-\frac{4}{n}(\lambda-1)G_{1}+\frac{2}{n}(\lambda-1)^{2}h)h\phi$
$\displaystyle+((\lambda-2)G_{1}-(\lambda-1)\lambda h)h^{\prime}\phi$
$\displaystyle+(\lambda G_{1}-\lambda^{2}h)h^{\prime\prime}\phi$
$\displaystyle+\frac{2}{n}G_{1}^{2}\phi-2KG_{1}\phi+2K\lambda
h\phi-2G_{1}|\nabla\phi|(G_{1}-\lambda h)^{1/2}$ $\displaystyle=$
$\displaystyle((-\frac{4}{n}(\lambda-1)h+(\lambda-2)h^{\prime}+\lambda
h^{\prime\prime})G_{1}+\frac{2}{n}(\lambda-1)^{2}h^{2}-(\lambda-1)\lambda
hh^{\prime}-\lambda^{2}hh^{\prime\prime})\phi$ $\displaystyle+2K\lambda
h\phi+\frac{2}{n}G_{1}^{2}\phi-2KG_{1}\phi$
$\displaystyle-2G_{1}|\nabla\phi|(G_{1}-\lambda h)^{1/2}.$
Now, we are in a position to give the proof of Theorem 1.2.
###### Proof.
Using Young’s inequality, we have
(4.3) $2G_{1}|\nabla\phi|(G_{1}-\lambda h)^{1/2}\leq C_{5}\phi
G_{1}(G_{1}-\lambda h)+\frac{|\nabla\phi|^{2}}{\phi}\frac{G_{1}}{C_{5}},$
where $C_{5}$ is a positive constant to be determined later. Then, from (4.2)
we have
(4.4) $\displaystyle AG_{1}\geq$
$\displaystyle((-\frac{4}{n}(\lambda-1)h+(\lambda-2)h^{\prime}+\lambda
h^{\prime\prime})G_{1}+\frac{2}{n}(\lambda-1)^{2}h^{2}-(\lambda-1)\lambda
hh^{\prime}-\lambda^{2}hh^{\prime\prime})\phi$ $\displaystyle+2K\lambda
h\phi+\frac{2}{n}G_{1}^{2}\phi-2KG_{1}\phi-C_{5}\phi G_{1}(G_{1}-\lambda
h)-\frac{|\nabla\phi|^{2}}{\phi}\frac{G_{1}}{C_{5}}.$
From (1.9),
$-\frac{4}{n}(\lambda-1)h+(\lambda-2)h^{\prime}+\lambda h^{\prime\prime}\geq
0,\quad\quad\lambda h\geq 0.$
Noticing $G_{1}\geq\lambda h$, we have
(4.5) $\displaystyle AG_{1}\geq$
$\displaystyle((-\frac{4}{n}(\lambda-1)h+(\lambda-2)h^{\prime}+\lambda
h^{\prime\prime})\lambda h+\frac{2}{n}(\lambda-1)^{2}h^{2}-(\lambda-1)\lambda
hh^{\prime}-\lambda^{2}hh^{\prime\prime})\phi$ $\displaystyle+2K\lambda
h\phi+\frac{2}{n}G_{1}^{2}\phi-2KG_{1}\phi-C_{5}\phi
G_{1}^{2}-\frac{|\nabla\phi|^{2}}{\phi}\frac{G_{1}}{C_{5}}$ $\displaystyle=$
$\displaystyle h(2K\lambda-\frac{2}{n}(\lambda^{2}-1)h-\lambda
h^{\prime})\phi$ $\displaystyle+\frac{2}{n}G_{1}^{2}\phi-2KG_{1}\phi-C_{5}\phi
G_{1}^{2}-\frac{|\nabla\phi|^{2}}{\phi}\frac{G_{1}}{C_{5}}$
From (1.9),
$h(2K\lambda-\frac{2}{n}(\lambda^{2}-1)h-\lambda h^{\prime})\geq 0.$
Noticing $0\leq\phi\leq 1$ and $K\geq 0$, (4.5) turns into the following
(4.6) $\displaystyle AG_{1}=$
$\displaystyle\frac{((n-1)(1+\sqrt{K}R)+2)C_{1}^{2}+C_{2}}{R^{2}}G_{1}$
$\displaystyle\geq$
$\displaystyle(\frac{2}{n}-C_{5})G_{1}^{2}\phi-2KG_{1}\phi-\frac{C_{1}^{2}}{R^{2}}\frac{G_{1}}{C_{5}}.$
When $\frac{2}{n}-C_{5}>0$, dividing both sides of (4.6) by
$(\frac{2}{n}-C_{5})G_{1}$, we obtain
(4.7)
$\frac{((n-1)(1+\sqrt{K}R)+2)C_{1}^{2}+C_{2}}{(\frac{2}{n}-C_{5})R^{2}}\geq
G_{1}\phi-\frac{2K\phi}{(\frac{2}{n}-C_{5})}-\frac{C_{1}^{2}\frac{1}{C_{5}}}{(\frac{2}{n}-C_{5})R^{2}}.\\\
$
Thus, at $x_{1}$, there holds true
(4.8) $\displaystyle\sup_{B_{R}(O)}G_{1}\leq$ $\displaystyle G_{1}\phi$
$\displaystyle\leq$
$\displaystyle\frac{((n-1)(1+\sqrt{K}R)+2)C_{1}^{2}+C_{2}}{(\frac{2}{n}-C_{5})R^{2}}+\frac{2K\phi}{(\frac{2}{n}-C_{5})}+\frac{C_{1}^{2}\frac{1}{C_{5}}}{(\frac{2}{n}-C_{5})R^{2}}$
$\displaystyle\leq$
$\displaystyle\frac{((n-1)(1+\sqrt{K}R)+2+\frac{1}{C_{5}})C_{1}^{2}+C_{2}+2KR^{2}}{(\frac{2}{n}-C_{5})R^{2}},$
for all $C_{5}\in(0,2/n)$. Notice that as $C_{5}\rightarrow 0^{+}$ or
$C_{5}\rightarrow(2/n)^{-}$, the right hand side of the inequality tends to
$+\infty$. Since the right hand side is a continuous function of $C_{5}$, it
attains its minimum in $(0,2/n)$. This completes the proof of Theorem 1.2.
∎
Here we also give a brief proof of the corollaries:
###### Proof.
For any two points $x,\,y\in{B_{R/2}(O)}$, we have
(4.9) $\ln u(x)-\ln u(y)\leq\int_{\gamma}{\frac{|\nabla u|}{u}},$
where $\gamma$ is a curve connecting $x$ and $y$ in $M$. Noticing
$\frac{|\nabla u|}{u}\leq\sqrt{C(n,K,R,h)}$, it follows
$\sup_{B_{R/2}(O)}u\leq
e^{\int_{\gamma}{\sqrt{C(n,K,R,h)}}}\inf_{B_{R/2}(O)}u\leq
e^{R\sqrt{C(n,K,R,h)}}\inf_{B_{R/2}(O)}u.$
This is Corollary 1.2.
When $K=0$, letting $R\rightarrow+\infty$, we have
(4.10) $\frac{|\nabla u|}{u}\leq\sqrt{C(n,K,R,h)}\rightarrow 0.$
Then any positive solution $u$ of (1.4) must be constant, since
$|\nabla u|\equiv 0$. Moreover, if $\lambda h>0$, the equation
$\Delta u+uh(\ln u)=0$
admits no positive solutions. This is Corollary 1.3.
By a direct calculation we can see easily that, if $h(\ln u)\geq 0$,
$h^{\prime}(\ln u)\leq 0$, $h^{\prime\prime}(\ln u)\geq 0$, and
$0\leq\lambda\leq 1$, we have
$\left\\{\begin{aligned}
&-\frac{4}{n}(\lambda-1)h+(\lambda-2)h^{\prime}+\lambda h^{\prime\prime}\geq
0;\\\ &h(2K\lambda-\frac{2}{n}(\lambda^{2}-1)h-\lambda h^{\prime})\geq 0;\\\
&\lambda h\geq 0.\end{aligned}\right.$
Then (1.9) holds true, therefore, the conclusions of Theorem 1.2 hold true.
This is Corollary 1.4.
For the case $\lambda=1$, (1.9) turns into
$\left\\{\begin{aligned} &-h^{\prime}+h^{\prime\prime}\geq 0;\\\
&h(2K-h^{\prime})\geq 0;\\\ &h\geq 0.\end{aligned}\right.$
Thus $h^{\prime}(\ln u)\leq\min\\{h^{\prime\prime}(\ln u),2K\\}$ and $h(\ln
u)\geq 0$ on $B_{R}(O)$. This is Corollary 1.5.
On the other hand, for the case $\lambda=0$, (1.9) turns into
$\left\\{\begin{aligned} &\frac{4}{n}h-2h^{\prime}\geq 0;\\\
&\frac{2}{n}h^{2}\geq 0.\end{aligned}\right.$
Thus $\frac{2}{n}h(\ln u)-h^{\prime}(\ln u)\geq 0$ on $B_{R}(O)$. This is
Corollary 1.6, which completes the proof of the corollaries. ∎
Acknowledgements: The authors are supported partially by NSFC grant
(No.11731001). The author Y. Wang is supported partially by NSFC grant
(No.11971400) and Guangdong Basic and Applied Basic Research Foundation Grant
(No. 2020A1515011019).
# Estimating the Unique Information
of Continuous Variables
Ari Pakman (Columbia University), Amin Nejatbakhsh (Columbia University),
Dar Gilboa (Harvard University), Abdullah Makkeh (Georg August University),
Luca Mazzucato (University of Oregon), Michael Wibral (Georg August
University), Elad Schneidman (Weizmann Institute)
###### Abstract
The integration and transfer of information from multiple sources to multiple
targets is a core motif of neural systems. The emerging field of partial
information decomposition (PID) provides a novel information-theoretic lens
into these mechanisms by identifying synergistic, redundant, and unique
contributions to the mutual information between one and several variables.
While many works have studied aspects of PID for Gaussian and discrete
distributions, the case of general continuous distributions is still uncharted
territory. In this work we present a method for estimating the unique
information in continuous distributions, for the case of one versus two
variables. Our method solves the associated optimization problem over the
space of distributions with fixed bivariate marginals by combining copula
decompositions and techniques developed to optimize variational autoencoders.
We obtain excellent agreement with known analytic results for Gaussians, and
illustrate the power of our new approach in several brain-inspired neural
models. Our method is capable of recovering the effective connectivity of a
chaotic network of rate neurons, and uncovers a complex trade-off between
redundancy, synergy and unique information in recurrent networks trained to
solve a generalized XOR task.
## 1 Introduction and background
In neural systems, often multiple neurons are driven by one external event or
stimulus; conversely multiple neural inputs can converge onto a single neuron.
A natural question in both cases is how multiple variables hold information
about the singleton variable. In their seminal work [1], Williams and Beer
proposed an axiomatic extension of classic information theory to decompose the
mutual information between multiple source variables and a single target
variable in a meaningful way. For the case of two sources $X_{1},X_{2}$, their
partial information decomposition (PID) amounts to expressing the mutual
information of $X_{1},X_{2}$ with a target $Y$ as a sum of four non-negative
terms,
$\displaystyle I(Y\mathbin{:}(X_{1},X_{2}))=U(Y\mathbin{:}X_{1}\backslash
X_{2})+U(Y\mathbin{:}X_{2}\backslash
X_{1})+R(Y\mathbin{:}(X_{1},X_{2}))+S(Y\mathbin{:}(X_{1},X_{2}))\,,$ (1.1)
corresponding to unique ($U_{1}$, $U_{2}$), redundant ($R$) and synergistic
($S$) contributions, respectively. These terms should also obey the
consistency equations
$\displaystyle
I(Y\mathbin{:}X_{1})=R(Y\mathbin{:}(X_{1},X_{2}))+U(Y\mathbin{:}X_{1}\backslash
X_{2})\,,$ (1.2) $\displaystyle
I(Y\mathbin{:}X_{2})=R(Y\mathbin{:}(X_{1},X_{2}))+U(Y\mathbin{:}X_{2}\backslash
X_{1})\,.$ (1.3)
The PID has proved useful in understanding information processing by
distributed systems in a diverse array of fields including machine learning
[2, 3], earth science [4] and cellular automata [5], and particularly in
neuroscience [6, 7, 8, 9, 10], where notions of synergy and redundancy,
traditionally considered mutually exclusive and distinguished by the sign of
$\displaystyle\Delta$ $\displaystyle=$ $\displaystyle
I(Y\mathbin{:}(X_{1},X_{2}))-I(Y\mathbin{:}X_{1})-I(Y\mathbin{:}X_{2})\,,$
(1.4) $\displaystyle=$ $\displaystyle
S(Y\mathbin{:}(X_{1},X_{2}))-R(Y\mathbin{:}(X_{1},X_{2}))\,,$
have long played a central role in the quest to understand how neural circuits
integrate information from multiple sources [11, 12, 13, 14]. The novelty of
the PID framework here is in separating the measures of synergy and redundancy
in (1.4).
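As a concrete discrete illustration (ours, not from the text): for the binary XOR gate $Y=X_{1}\oplus X_{2}$ with independent uniform inputs, the three mutual informations already pin down every PID term through (1.1)-(1.3) and nonnegativity:

```python
import numpy as np

def mutual_info_bits(pxy):
    """I(X;Y) in bits from a joint probability table p(x, y)."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px * py)[mask])))

# Y = X1 XOR X2 with uniform independent binary inputs.
p_y_x12 = np.zeros((2, 4))   # joint of Y and the pair (X1, X2)
p_y_x1 = np.zeros((2, 2))
p_y_x2 = np.zeros((2, 2))
for k, (x1, x2) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    y = x1 ^ x2
    p_y_x12[y, k] += 0.25
    p_y_x1[y, x1] += 0.25
    p_y_x2[y, x2] += 0.25
```

Here $I(Y\mathbin{:}(X_{1},X_{2}))=1$ bit while $I(Y\mathbin{:}X_{1})=I(Y\mathbin{:}X_{2})=0$, so (1.2)-(1.3) with nonnegativity force $R=U_{1}=U_{2}=0$, and (1.1) gives $S=1$ bit: a purely synergistic system.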
The above abstract formulation of PID provides three equations for four
unknowns, and only becomes operational once one of $U_{1}$, $U_{2}$, $R$, or
$S$ is defined. This has been done in [15] via a definition of the unique
information:
###### Definition 1 (BROJA [15]).
Given three random variables $(Y,X_{1},X_{2})$ with joint probability density
$p(y,x_{1},x_{2})$, the unique information $U_{1}$ of $X_{1}$ with respect to
$Y$ is
$\displaystyle U(Y\mathbin{:}X_{1}\backslash X_{2})$ $\displaystyle=$
$\displaystyle\min_{q\in Q}I_{q}(Y\mathbin{:}X_{1}|X_{2})\,,$ (1.5)
$\displaystyle=$ $\displaystyle\min_{q\in Q}\int
dydx_{1}dx_{2}\,q(y,x_{1},x_{2})\log\left(\frac{q(y,x_{1}|x_{2})}{q(y|x_{2})q(x_{1}|x_{2})}\right)\,,$
(1.6)
where
$\displaystyle Q=\\{q(y,x_{1},x_{2})\,|\,q(y,x_{i})=p(y,x_{i}),i=1,2\\}\,.$
(1.7)
In words, we minimize the conditional mutual information
$I(Y\mathbin{:}X_{1}|X_{2})$ over the space of density functions that preserve
the marginal densities $p(y,x_{1})$ and $p(y,x_{2})$. The above definition
implies, along with (1.2)-(1.3), that the unique and redundant information
only depend on the marginals $p(y,x_{1}),p(y,x_{2})$, and that the synergy can
only be estimated from the full $p(y,x_{1},x_{2})$.
The original definition in [15] was limited to discrete random variables.
Here, we show that the extension to continuous variables is well-defined and
can be practically estimated.
Motivation from decision theory [15]. Consider for simplicity discrete
variables. A decision maker DM1 can choose an action $a$ from a finite set
${\cal A}$, and receives a reward $u(a,y)$ based on the selected action and
the state $y$, which occurs with probability $p(y)$. Notably, DM1 has no
knowledge of $y$, but observes instead a random signal $x_{1}$ sampled from
$p(x_{1}|y)$. Choosing the action maximizing the expected reward for each
$x_{1}$, his maximal expected reward is
$\displaystyle
R_{1}=\sum_{x_{1}}p(x_{1})\max_{a|x_{1}}\sum_{y}p(y|x_{1})u(a,y)\,.$ (1.8)
DM1 is said to have no unique information about $y$ w.r.t. another decision
maker DM2 that observes $x_{2}\sim p(x_{2}|y)$ if $R_{2}\geq R_{1}$ for any
set ${\cal A}$, any distribution $p(y)$, and any reward function $u(a,y)$. A
celebrated theorem by Blackwell [16, 17] states that such a generic advantage
by DM2 occurs iff there exists a stochastic matrix $q(x_{1}|x_{2})$ which
satisfies
satisfies
$\displaystyle p(x_{1}|y)=\sum_{x_{2}}p(x_{2}|y)q(x_{1}|x_{2})\,.$ (1.9)
But this occurs precisely when the unique information (1.5) vanishes, since
then there exists a joint distribution $q(y,x_{1},x_{2})$ in $Q$ for which
$y\perp x_{1}|x_{2}$, which implies $q(x_{1}|x_{2},y)=q(x_{1}|x_{2})$, and
thus (1.9) holds. Similar results exist for continuous variables [18, 19].
Thus the unique information from Definition 1 quantifies a departure from
Blackwell’s relation (1.9).
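A tiny numerical illustration of Blackwell's relation (1.9), with made-up row-stochastic channels (any such matrices would do): when DM1's channel is constructed as an explicit garbling of DM2's, (1.9) holds by matrix multiplication, and the unique information (1.5) vanishes.

```python
import numpy as np

# Channel p(x2|y): rows indexed by y, columns by x2.
p_x2_given_y = np.array([[0.9, 0.1],
                         [0.2, 0.8]])

# Garbling q(x1|x2): a stochastic matrix that adds noise to x2.
q_x1_given_x2 = np.array([[0.7, 0.3],
                          [0.4, 0.6]])

# Blackwell's relation (1.9): p(x1|y) = sum_{x2} p(x2|y) q(x1|x2),
# i.e. a matrix product of the two channels.
p_x1_given_y = p_x2_given_y @ q_x1_given_x2
```

Since `p_x1_given_y` is itself row-stochastic and satisfies (1.9) by construction, a DM1 observing it has no unique information w.r.t. DM2.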
In this work we present a definition and a method to estimate the BROJA unique
information for generic continuous probability densities. Our approach is
based on the observation that the constraints (1.7) can be satisfied with an
appropriate copula parametrization, and makes use of techniques developed to
optimize variational autoencoders. We only consider one-dimensional
$Y,X_{1},X_{2}$ for simplicity, but the method can be naturally extended to
higher dimensional cases. In Section 2 we review related works, in Section 3
we present our method and Section 4 contains several illustrative examples.
## 2 Related works
Partial information decomposition offers an answer to a recurring question
that ‘classical’ information theory does not address: how is the information
that two sources carry about a target distributed among them [1]? From a
mathematical perspective, a functional definition has to be made, and such a
definition should align with our intuitive notions. Yet, as shown in [20], not
all intuitively desirable properties of a PID can be realized simultaneously.
Different desirable properties are therefore chosen for distinct application
scenarios, and the various proposed decomposition measures are seen not as
conflicting but as having different operational interpretations. For example,
the BROJA approach used here builds on desiderata from decision theory, while
other approaches appeal to game theory [21], the framework of Kelly gambling
[22], or arguments from information geometry [23]. Still other approaches
assume agents receiving potentially conflicting or incomplete information
about the source variables for the purpose of inference or decryption (see
e.g. [24, 25]). In [26] the authors separate the specific operational
interpretations of PID measures from the general structure of information
decomposition.
The actual computation of the BROJA unique information is non-trivial, even
for discrete variables. Optimization methods exist for the latter case [27,
28, 29], and analytic solutions are only known when all the variables are
univariate binary [30]. For continuous probability densities, an earlier
definition aligned with the BROJA measure was given by Barrett [31], but it
only applies to Gaussian variables. For Barrett’s measure, an analytic
solution is known when $p(y,x_{1},x_{2})$ is a three-dimensional Gaussian
density [31], but it does not generalize to higher dimensional Gaussians
[32].
## 3 Bounding and estimating the unique information
We proceed in two steps. We first introduce a parametrization of the
optimization space $Q$ in (1.7) and then introduce and optimize an upper bound
on the unique information.
### 3.1 Parametrizing the optimization space with copulas
To characterize the optimization space $Q$ in (1.5)-(1.7), it is convenient to
recall that according to Sklar’s theorem [33], any $n$-variate probability
density can be expressed as
$\displaystyle p(x_{1}\ldots x_{n})=p(x_{1})\ldots p(x_{n})c(u_{1}\ldots
u_{n})\,,$ (3.1)
where $p(x_{i})$ are the univariate marginals and $u_{i}=F_{i}(x_{i})$, with
$F_{i}$ the CDF of the $i$-th variable. The dependency structure among the
variables is encoded in the function $c:[0,1]^{n}\rightarrow[0,+\infty)$.
This is a copula density, a
probability density on the unit hypercube with uniform marginals [34],
$\displaystyle\underset{[0,1]^{n-1}}{\int}\prod_{j=1,j\neq
i}^{n}du_{j}\,c(u_{1}\ldots u_{n})=1\quad\forall i\,.$ (3.2)
Note that under monotone increasing univariate reparametrizations
$x_{i}^{\prime}=g_{i}(x_{i})$, the $u_{i}$’s and the copula $c$ remain
invariant. For an overview of copulas in machine learning, see [35].
###### Proposition 1.
Under the BROJA Definition 1 of unique information, all the terms of the
partial information decomposition in (1.1)-(1.3) are independent of the
univariate marginals $p(x_{1}),p(x_{2}),p(y)$, and only depend on the copula
$c(u_{y},u_{1},u_{2})$.
###### Proof.
Expressing $q(y,x_{1},x_{2}),q(x_{1},x_{2}),q(y,x_{2})$ via copula
decompositions (3.1), and changing variables as $du_{y}=q(y)dy$, etc., the
objective function in (1.6) becomes
$\displaystyle I_{q}(Y\mathbin{:}X_{1}|X_{2})$ $\displaystyle=$
$\displaystyle\underset{[0,1]^{3}}{\int}du_{y}du_{1}du_{2}\,c(u_{y},u_{1},u_{2})\log\left(\frac{c(u_{y},u_{1},u_{2})}{c(u_{y},u_{2})c(u_{1},u_{2})}\right)\,.$
(3.3)
Note that the copula of any marginal distribution is the marginal of the
copula:
$\displaystyle
c(u_{y},u_{2})=\underset{[0,1]}{\int}du_{1}\,c(u_{y},u_{1},u_{2})\,,\qquad\qquad
c(u_{1},u_{2})=\underset{[0,1]}{\int}du_{y}\,c(u_{y},u_{1},u_{2})\,.$ (3.4)
Thus the optimization objective and the unique information are independent of
the univariate marginals. A similar result holds for the mutual information
terms in the l.h.s. of (1.1)-(1.3). (The connection between mutual
information and copulas was discussed in [36, 37].) It follows that none of the
PID terms in (1.1)-(1.3) depend on the univariate marginals, and therefore all
the PID terms are invariant under univariate reparametrizations of
$(y,x_{1},x_{2})$. ∎
In order to parametrize the optimization space $Q$ in (1.7) using copulas,
consider the factorization
$\displaystyle p(y,x_{1},x_{2})=p(x_{1})p(y|x_{1})p(x_{2}|y,x_{1})\,.$ (3.5)
Using the copula decomposition (3.1) for $n=2$, the last two factors in (3.5)
can be expressed as
$\displaystyle p(y|x_{1})$ $\displaystyle=$
$\displaystyle\frac{p(y,x_{1})}{p(x_{1})}=\frac{p(y)p(x_{1})c(y,x_{1})}{p(x_{1})}=c(u_{y},u_{1})p(y)\,,$
(3.6)
and similarly
$\displaystyle p(x_{2}|y,x_{1})$ $\displaystyle=$
$\displaystyle\frac{p(x_{1},x_{2}|y)}{p(x_{1}|y)}\,,$ (3.7) $\displaystyle=$
$\displaystyle c_{1,2|y}(u_{1|y},u_{2|y})p(x_{2}|y)\,,$ (3.8) $\displaystyle=$
$\displaystyle c_{1,2|y}(u_{1|y},u_{2|y})c(u_{y},u_{2})p(x_{2})\,,$ (3.9)
where we defined the conditional CDFs,
$\displaystyle u_{i|y}=F(u_{i}|u_{y})=\frac{\partial C(u_{y},u_{i})}{\partial
u_{y}}\qquad i=1,2$ (3.10)
and $C(u_{y},u_{i})$ is the CDF of $c(u_{y},u_{i})$. Note that the function
$c_{1,2|y}(u_{1|y},u_{2|y})$ in (3.8) is not the conditional copula
$c(u_{1},u_{2}|u_{y})$, but rather the copula of the conditional
$p(x_{1},x_{2}|y)$. Using expressions (3.6) and (3.9), the full density (3.5)
becomes
$\displaystyle p(y,x_{1},x_{2})$ $\displaystyle=$ $\displaystyle
p(y)p(x_{1})p(x_{2})c(u_{y},u_{1},u_{2})\,,$ (3.11)
where
$\displaystyle\begin{aligned}
c(u_{y},u_{1},u_{2})&=c(u_{y},u_{1})\,c(u_{y},u_{2})c_{1,2|y}(u_{1|y},u_{2|y})\,.\end{aligned}$
(3.12)
This is a simple case of the pair-copula construction of multivariate
distributions [38, 39, 40], which allows one to expand any $n$-variate copula
as a product of (conditional) bivariate copulas.
###### Proposition 2.
The copula of the conditional, $c_{1,2|y}(\cdot,\cdot)$, parametrizes the
space $Q$ in (1.7).
###### Proof.
Since $q(y,x_{i})=p(y,x_{i})$ $(i=1,2)$, the copula factors in
$\displaystyle p(y,x_{i})=p(y)p(x_{i})\,c(u_{y},u_{i})\,,\qquad i=1,2$ (3.13)
are fixed in $Q$. Therefore, in the copula decomposition (3.12) for
$q(y,x_{1},x_{2})\in Q$, only the last factor can vary in $Q$. Let us denote
by $\theta$ the parameters of a generic parametrization for the copula
$c_{1,2|y}(u_{1|y},u_{2|y})$. Since the latter is conditioned on $u_{y}$, the
parameters can be taken as a function $\theta(u_{y})$. It follows that the
copula of $q$ necessarily has the form
$\displaystyle\begin{aligned}
c_{\theta}(u_{y},u_{1},u_{2})&=c(u_{y},u_{1})\,c(u_{y},u_{2})\,c_{1,2|\theta(u_{y})}(u_{1|y},u_{2|y})\,,\end{aligned}$
(3.14)
and the parameters of the function $\theta(u_{y})$ are the optimization
variables. (We note that in multivariate pair-copula expansions it is common
to assume constant conditioning parameters $\theta$ [41], but we do not make
such a simplifying assumption.) ∎
### 3.2 Optimizing an upper bound
Inserting now the expression (3.14) into the objective function (3.3) we get
$\displaystyle
I_{q}[\theta]=\mathbb{E}_{c_{\theta}(u_{y},u_{1},u_{2})}\log\left[c(u_{y},u_{1})c_{1,2|\theta(u_{y})}(u_{1|y},u_{2|y})\right]-\mathbb{E}_{c_{\theta}(u_{1},u_{2})}\log
c_{\theta}(u_{1},u_{2})\,,$ (3.15)
which is our objective function and satisfies the marginal constraints (1.7).
Note that apart from the optimization parameters $\theta$, it depends on the
bivariate copulas $c(u_{y},u_{1})$ and $c(u_{y},u_{2})$ which should be
estimated from the observed data. Given $D$ observations
$\\{(y^{(i)},x_{1}^{(i)},x_{2}^{(i)})\\}_{i=1}^{D}$, we map each value to
$[0,1]$ via the empirical CDFs of each coordinate $(y,x_{1},x_{2})$. Computing
the latter has an $O(D\log D)$ cost from sorting each coordinate and yields a
data set $\\{(u_{y}^{(i)},u_{1}^{(i)},u_{2}^{(i)})\\}_{i=1}^{D}$. This set is
used to estimate the copula densities $c(u_{y},u_{1})$ and $c(u_{y},u_{2})$
by fitting several parametric and non-parametric copula models [42], and
choosing the best pair of models using the AIC criterion. (For this
fitting/model selection step, we used the pyvinecopulib Python package [43].)
From the learned copulas we also get the conditional CDF functions
$u_{i|y}=F(u_{i}|u_{y})$ that appear in the arguments of the first term in
(3.15).
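The mapping to $[0,1]$ via empirical CDFs is a rank transform; a minimal sketch (ours; the $D+1$ denominator, which keeps the pseudo-observations strictly inside the open interval, is a common convention not specified in the text):

```python
import numpy as np

def empirical_cdf_transform(x):
    """Map samples of one coordinate to (0, 1) via the empirical CDF.

    Sorting is the O(D log D) cost mentioned in the text; dividing by
    D + 1 keeps the pseudo-observations strictly inside (0, 1).
    """
    D = len(x)
    ranks = np.argsort(np.argsort(x)) + 1  # rank of each sample, 1..D
    return ranks / (D + 1)
```

Applying this to each of the coordinates $y,x_{1},x_{2}$ separately yields the pseudo-observations $(u_{y}^{(i)},u_{1}^{(i)},u_{2}^{(i)})$.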
A variational upper bound. Minimizing (3.15) directly w.r.t. $\theta$ is
challenging because the second term depends on the copula marginal
$c_{\theta}(u_{1},u_{2})$ which has no closed form, as it requires integrating
(3.14) w.r.t. $u_{y}$. We introduce instead an inference distribution
$r_{\phi}(u_{y}|u_{1},u_{2})$, with parameters $\phi$, that approximates the
conditional copula $c_{\theta}(u_{y}|u_{1},u_{2})$, and consider the bound
$\displaystyle\log c_{\theta}(u_{1},u_{2})=\log\int
du_{y}^{\prime}\,c_{\theta}(u_{y}^{\prime},u_{1},u_{2})\geq\int
du_{y}^{\prime}\,r_{\phi}(u_{y}^{\prime}|u_{1},u_{2})\,\log\frac{c_{\theta}(u_{y}^{\prime},u_{1},u_{2})}{r_{\phi}(u_{y}^{\prime}|u_{1},u_{2})}\,,$
(3.16)
which follows from Jensen’s inequality and is tight when
$r_{\phi}(u_{y}^{\prime}|u_{1},u_{2})=c_{\theta}(u_{y}^{\prime}|u_{1},u_{2})$.
This expression gives an upper bound on $I_{q}[\theta]$, which can be
minimized jointly w.r.t. $(\theta,\phi)$.
A disadvantage of the bound (3.16) is that its tightness depends strongly on
the expressiveness of the inference distribution
$r_{\phi}(u_{y}^{\prime}|u_{1},u_{2})$. This situation can be improved by
considering a multiple-sample generalization proposed by [44],
$\displaystyle\log c_{\theta}(u_{1},u_{2})\geq
D_{A,\theta,\phi}(u_{1},u_{2})\equiv\mathbb{E}_{p(u_{y}^{(1)}\ldots
u_{y}^{(A)})}\log\left[\frac{1}{A}\sum_{a=1}^{A}\frac{c_{\theta}(u_{y}^{(a)},u_{1},u_{2})}{r_{\phi}(u_{y}^{(a)}|u_{1},u_{2})}\right]\,,$
(3.17)
where the expectation is w.r.t. $A$ independent samples of
$r_{\phi}(u_{y}^{\prime}|u_{1},u_{2})$. $D_{A,\theta,\phi}(u_{1},u_{2})$
coincides with the lower bound in (3.16) for $A=1$ and satisfies [44]
$\displaystyle D_{A+1,\theta,\phi}(u_{1},u_{2})$ $\displaystyle\geq$
$\displaystyle D_{A,\theta,\phi}(u_{1},u_{2}),$ (3.18)
$\displaystyle\lim_{A\rightarrow\infty}D_{A,\theta,\phi}(u_{1},u_{2})$
$\displaystyle=$ $\displaystyle\log c_{\theta}(u_{1},u_{2})\,.$ (3.19)
Thus, even when $r_{\phi}(u_{y}^{\prime}|u_{1},u_{2})\neq
c_{\theta}(u_{y}^{\prime}|u_{1},u_{2})$, the bound can be made arbitrarily
tight for large enough $A$. Inserting (3.17) in (3.15), we get finally
$\displaystyle I_{q}[\theta]\leq B_{1}[\theta]+B_{2}[\theta,\phi]\,,$ (3.20)
where
$\displaystyle B_{1}[\theta]$ $\displaystyle=$
$\displaystyle\mathbb{E}_{c_{\theta}(u_{y},u_{1},u_{2})}\log\left[c(u_{y},u_{1})c_{1,2|\theta(u_{y})}(u_{1|y},u_{2|y})\right]\,,$
(3.21) $\displaystyle B_{2}[\theta,\phi]$ $\displaystyle=$
$\displaystyle-\mathbb{E}_{c_{\theta}(u_{1},u_{2})}D_{A,\theta,\phi}(u_{1},u_{2})\,,$
(3.22)
and we minimize the r.h.s. of (3.20) w.r.t. $(\theta,\phi)$. Low-variance
estimates of the gradients to perform the minimization can be obtained with
the reparametrization trick [45, 46], as discussed in detail in Appendix A. In
our examples below we use for $c_{1,2|\theta(u_{y})}$ a bivariate Gaussian
copula (reviewed in Appendix B). Such a copula has just one parameter
$\theta\in[-1,+1]$, and thus the optimization is done over the space of
functions $\theta(u_{y}):[0,1]\rightarrow[-1,+1]$, which we parametrize with a
two-layer neural network. Similarly, we parametrize
$r_{\phi}(u_{y}|u_{1},u_{2})$ with a two-layer neural network. Details of
these networks are in Appendix D.
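For reference, the bivariate Gaussian copula used for $c_{1,2|\theta(u_{y})}$ has a standard closed-form log-density and conditional CDF (the $h$-function realizing (3.10) for this family); the sketch below states these textbook formulas and is not the authors' code:

```python
import numpy as np
from scipy.stats import norm

def gauss_copula_logdensity(u, v, rho):
    """Log-density of the bivariate Gaussian copula with parameter rho."""
    z1, z2 = norm.ppf(u), norm.ppf(v)
    s = 1.0 - rho**2
    return -0.5 * np.log(s) - (rho**2 * (z1**2 + z2**2)
                               - 2.0 * rho * z1 * z2) / (2.0 * s)

def h_function(v, u, rho):
    """Conditional CDF F(v | u) of the Gaussian copula, cf. eq. (3.10)."""
    return norm.cdf((norm.ppf(v) - rho * norm.ppf(u)) / np.sqrt(1.0 - rho**2))
```

At $\rho=0$ the copula is uniform ($\log c=0$) and the $h$-function reduces to the identity $F(v|u)=v$, as expected for independent variables.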
While the term $B_{2}$ in our bound is similar to the negative of the ELBO
bound in importance weighted autoencoders (IWAEs) [44], there are some
differences between the two settings, the most important being that we are
interested in the precise value of the bound at the minimum, rather than the
learned functions $c_{\theta},r_{\phi}$. Note also that our latent variables
$u_{y}^{(k)}$ are one-dimensional, as opposed to the usual higher dimensional
latent distributions of variational autoencoders, and that the empirical
expectation over data observations in IWAEs is replaced in $B_{2}$ by the
expectation over $c_{\theta}(u_{1},u_{2})$, whose parameters are also
optimized.
Figure 1: Estimated vs. exact values of unique information for Gaussians. For
a three-dimensional Gaussian, we show estimates of
$U(Y\mathbin{:}X_{1}\backslash X_{2})$ as a function of the correlations
$\rho_{y,x_{i}}(i=1,2)$, compared with the exact results from [31]. Only for
Gaussian distributions are exact results known for continuous variables.
Estimating the other PID terms. In the following we adopt the minimal value
taken by the upper bound (3.20) as our estimate of $U_{1}$. The other terms in
the partial information decomposition are obtained from the consistency
relations (1.1)-(1.3), after estimating the mutual informations
$I(Y\mathbin{:}(X_{1},X_{2})),I(Y\mathbin{:}X_{1}),I(Y\mathbin{:}X_{2})$.
There are several methods for the latter. In our examples, we use the observed
data to fit additional copulas $c(u_{1},u_{2})$ and $c_{12|\theta(u_{y})}$ and
estimate $I(Y\mathbin{:}X_{1})\simeq\frac{1}{D}\sum_{i=1}^{D}\log
c(u_{y}^{(i)},u_{1}^{(i)})$ and similarly for the other terms. Note that all
our estimates have potential sources of bias. First, the estimation of the
parametric copulas is subject to model or parameter misspecification, which
can be mitigated by more refined model selection strategies. Second, the
optimized bound might not saturate, biasing the estimate upwards; this can be
reduced by using larger $A$ values and a better gradient-based optimizer.
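As a sanity check of this plug-in estimator, for a bivariate Gaussian copula $\log c$ is available in closed form and the sample average recovers the known mutual information $-\frac{1}{2}\log(1-\rho^{2})$; a self-contained sketch (ours):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, D = 0.6, 200_000

# Sample (z1, z2) from a bivariate normal with correlation rho.
z1 = rng.standard_normal(D)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(D)

# log c(u1, u2) at the copula samples u_i = Phi(z_i), in closed form.
log_c = (-0.5 * np.log(1.0 - rho**2)
         - (rho**2 * (z1**2 + z2**2) - 2.0 * rho * z1 * z2)
           / (2.0 * (1.0 - rho**2)))

mi_estimate = log_c.mean()               # (1/D) sum_i log c(u^(i)), in nats
mi_exact = -0.5 * np.log(1.0 - rho**2)   # Gaussian-copula mutual information
```

With $D=2\times 10^{5}$ samples the Monte Carlo estimate agrees with the exact value to a few thousandths of a nat.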
## 4 Examples
Comparison with exact results for Gaussians. Consider a three-dimensional
Gaussian with correlations $\rho_{y,x_{i}}$ between $y,x_{i}$ for $i=1,2$. The
exact solution to (1.5) in this case is [31]
$\displaystyle U(Y\mathbin{:}X_{1}\backslash
X_{2})=\frac{1}{2}\log\left(\frac{1-\rho_{y,x_{2}}^{2}}{1-\rho_{y,x_{1}}^{2}}\right)\mathbbm{1}\left[\rho_{y,x_{2}}<\rho_{y,x_{1}}\right].$
(4.1)
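The closed form (4.1) is straightforward to evaluate (in nats; a sketch of the formula as written, taking the indicator literally as $\rho_{y,x_{2}}<\rho_{y,x_{1}}$):

```python
import numpy as np

def unique_info_gaussian(rho_y1, rho_y2):
    """Closed-form U(Y : X1 \\ X2) for a trivariate Gaussian, eq. (4.1)."""
    if rho_y2 < rho_y1:  # the indicator in eq. (4.1)
        return 0.5 * np.log((1.0 - rho_y2**2) / (1.0 - rho_y1**2))
    return 0.0
```

For example, $\rho_{y,x_{1}}=0.8$, $\rho_{y,x_{2}}=0.5$ gives $\frac{1}{2}\log(0.75/0.36)$ nats, while the unique information vanishes whenever $X_{2}$ is at least as correlated with $Y$ as $X_{1}$ is.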
Fig. 1 compares the above expression with estimates from our method. Here we
know that $c_{y,1}$ and $c_{y,2}$ are Gaussian copulas, with parameters
$\rho_{y,x_{1}},\rho_{y,x_{2}}$, and we assumed a Gaussian copula for
$c_{1,2|y,\theta}(u_{1|y},u_{2|y})$ as well, and estimated the bound
separately for each pair of values $\rho_{y,x_{1}},\rho_{y,x_{2}}$. In this
and the rest of the experiments, we optimized the parameters $(\theta,\phi)$
using the ADAM algorithm [47] with a fixed learning rate of $10^{-2}$ for 1200
iterations, using $A=50$. The
results reported correspond to the mean of the bound in the last 100
iterations. The comparison in Fig. 1 shows excellent agreement.
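The closed-form benchmark (4.1) is straightforward to evaluate; a minimal sketch, following the expression above literally (note that [31] states the result in terms of the correlations as written here):

```python
import numpy as np

def unique_info_gaussian(rho_y1, rho_y2):
    """Closed-form unique information of X1 about Y for a trivariate
    Gaussian, Eq. (4.1) from [31]."""
    if rho_y2 >= rho_y1:   # indicator term: zero unless rho_y2 < rho_y1
        return 0.0
    return 0.5 * np.log((1.0 - rho_y2**2) / (1.0 - rho_y1**2))

u_strong = unique_info_gaussian(0.8, 0.3)   # X1 more informative: U > 0
u_weak = unique_info_gaussian(0.3, 0.8)     # indicator kills the term: U = 0
```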
Model systems of three neurons. The nature of information processing of neural
systems is a prominent area of application of the PID framework, since synergy
has been proposed as a natural measure of information modification [48, 7]. We
consider two models:
$\displaystyle\begin{array}[]{c}{\bf M1}\\\
(X_{1},X_{2})\sim\mathcal{N}(0,\rho_{12}^{2}),\\\
Y=\tanh(w_{1}X_{1}+w_{2}X_{2}).\end{array}\,\begin{array}[]{c}{\bf M2}\\\
(X_{1},X_{2})\sim\mathcal{N}(0,\rho_{12}^{2}),\\\
Y=X_{1}^{2}/\left(0.1+w_{1}X_{1}^{2}+w_{2}X_{2}^{2}\right).\end{array}$ (4.8)
Both models are parameterized by the correlation $\rho_{12}$ and weights
$w_{1},w_{2}$. Model 1 is a particularly simple neural network. The $\tanh$
activation does not affect its copula, and even for a linear activation
function the variables are not jointly Gaussian, since $Y$ is a deterministic
function of $(X_{1},X_{2})$. Model 2 is inspired by a normalization operation
widely believed to be canonical in neural systems [49], which also plays a
role in common learned image compression methods [50]. The results, presented
in Figure 2, are obtained from $3000$ samples from each model.
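Sampling from the two models in (4.8) can be sketched as follows; unit variances for $X_{1},X_{2}$ are assumed, since the excerpt specifies only the correlation $\rho_{12}$:

```python
import numpy as np

def sample_models(n, rho12=0.3, w1=0.5, w2=0.5, seed=0):
    """Draw n samples from models M1 and M2 of Eq. (4.8).
    (X1, X2) is bivariate Gaussian with unit variances and
    correlation rho12 (an assumed normalization)."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho12], [rho12, 1.0]])
    x1, x2 = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    y_m1 = np.tanh(w1 * x1 + w2 * x2)                   # model M1
    y_m2 = x1**2 / (0.1 + w1 * x1**2 + w2 * x2**2)      # model M2
    return x1, x2, y_m1, y_m2

x1, x2, y1, y2 = sample_models(3000)
```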
Figure 2: Partial information decomposition for two neural network models. In
both models (4.8) we fixed $w_{1}=0.5,\rho_{12}=0.3$, and show the PID terms
as a function of the synaptic strength $w_{2}$, normalized by
$I(Y\mathbin{:}(X_{1},X_{2}))$. We show mean (lines) and standard deviations
(shaded area around each line) from 3 runs. Left: Model 1: The input of
greatest weight conveys all the unique information, and synergy and redundancy
both peak when $w_{2}=w_{1}$. Right: Model 2: The second input $X_{2}$ has
negligible unique information contribution, but its synaptic strength $w_{2}$
modulates the synergistic term, associated with the modification of
information that the neuron performs [48].
Computational aspects of connectivity in recurrent neural circuits. We apply
our continuous variable PID to understand computational aspects of the
information processing between recurrently coupled neurons (Fig. 3). A large
amount of work has been devoted to applying information theoretic measures for
quantifying directed pairwise information transfer between nodes in dynamic
networks and neural circuits [51]. However, classical information theory only
allows for the quantification of information transfer, whereas the framework
of PID enables further decomposition of information processing into transfer,
storage, and modification, providing further insights into the computation
within a recurrent system [52]. Transfer entropy (TE) [53] is a popular
measure to estimate the directed transfer of information between pairs of
neurons [54, 55], and is sometimes approximated by linear Granger causality.
Intuitively, TE between a process $X$ and a process $Y$ measures how much the
past of $X$, $X^{-}$, can help to predict the future of $Y$, $Y^{+}$,
accounting for its past $Y^{-}$. Although TE quantifies how much information
is transferred between neurons, it does not shed light on the computation
emerging from the interaction of $X^{-}$ and $Y^{-}$. Simply put, the
information transferred from $X^{-}$ could enter $Y^{+}$, independently of the
past state $Y^{-}$, or it could be fused in a non-trivial way with the
information in the state $Y^{-}$ [56, 52]. PID decomposes the TE into
modified transfer (quantified by $S(Y^{+}:X^{-},Y^{-})$) and unique transfer
(quantified by $U(Y^{+}:X^{-}\setminus Y^{-})$) terms (see the Appendix for a
proof):
$TE(X\rightarrow Y)=I(Y^{+}:X^{-}|Y^{-})=U(Y^{+}:X^{-}\setminus
Y^{-})+S(Y^{+}:X^{-},Y^{-})\,.$
Furthermore, the information kept by the system through time can be quantified
by the unique storage (given by $U(Y^{+}:Y^{-}\setminus X^{-})$) and redundant
storage (given by $R(Y^{+}:X^{-},Y^{-})$) in PID [48]. This perspective is a
new step towards understanding how the information is processed in recurrent
systems beyond merely detecting the direction of the functional interactions
estimated by traditional TE methods (see Appendix G for details). To explore
these ideas, we simulated chaotic networks of rate neurons with an a-priori
causal structure consisting of two sub-networks $\mathbf{X}$ and $\mathbf{Y}$
(Fig. 3a, see [57] for more details on causal analyses of this network model).
The sub-network $\mathbf{X}$ is a Rössler attractor of three neurons obeying
the dynamical equations:
$\displaystyle\begin{cases}\dot{X}_{1}=-X_{2}-X_{3}\\\
\dot{X}_{2}=X_{1}+\alpha X_{2}\\\
\dot{X}_{3}=\beta+X_{3}(X_{1}-\gamma)\end{cases}$ (4.9)
where $\\{\alpha,\beta,\gamma\\}=\\{0.2,0.2,5.7\\}$. There are 100 neurons in
the sub-network $\mathbf{Y}$ from which we chose the first three, $Y_{1:3}$,
to simulate the effect of unobserved nodes. Neurons within the sub-network $Y$
obey the dynamical equations
$\displaystyle\dot{Y}=-\lambda Y+10\tanh(J_{YX}X+J_{YY}Y)$ (4.10)
where $J_{YX}\in\mathbb{R}^{100\times 3}$ has all its entries equal to $0.1$,
and $J_{YY}$ is the recurrent weight matrix of the $Y$ sub-network, sampled as
zero-mean, independent Gaussian variables with standard deviation $g=4$. No
projections exist from the downstream sub-network $\mathbf{Y}$ to the upstream
sub-network $\mathbf{X}$. We simulated time series from this network
(exhibiting chaotic dynamics, see Fig. 3a) and estimated the PID as unique,
redundant, and synergistic contribution of neuron $i$ and neuron $j$ at time
$t$ in shaping the future of neuron $j$ at time $t+1$. For each pair of
neurons $Z_{i},Z_{j}\in\\{X_{1:3},Y_{1:3}\\}$ we treated
$(Z_{i}^{t},Z_{j}^{t},Z_{j}^{t+1})_{t=1}^{T}$ as iid samples (note that the
estimation of the PID from many samples of the triplets
$(Z_{i}^{t},Z_{j}^{t},Z_{j}^{t+1})$ is operationally the same whether such
triplets are iid or, as in our case, temporally correlated; this is similar to
estimating expectations w.r.t. the equilibrium distribution of a Markov chain
by using temporally correlated successive values of the chain, and in both
cases the temporal correlations do not introduce bias in the estimator but can
increase the variance) and ran PID on these triplets ($i,j$ represent rows and
columns in Fig. 3b-d). The PID uncovered the functional architecture of the
network and further revealed non-trivial interactions between neurons
belonging to the different sub-networks, encoded in four matrices: modified
transfer $S$, unique transfer $U_{1}$, redundant storage $R$, and unique
storage $U_{2}$ (details in Fig. 3d). The sum of the modified and unique
transfer terms was found to be consistent with the TE (Fig. 3c, TE equal to
$S+U_{1}$, up to estimation bias). The TE itself captured the network
effective connectivity, consistent with previous results [55, 57].
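A minimal simulation of the two coupled sub-networks (4.9)-(4.10), together with the construction of the PID triplets, might look as follows. The integrator (Euler), step size, leak $\lambda=1$, and initial conditions are assumptions not fixed in this excerpt, and the stated standard deviation $g=4$ for the entries of $J_{YY}$ is taken literally (some conventions scale it by $1/\sqrt{N}$):

```python
import numpy as np

def simulate_network(T=20000, dt=0.01, lam=1.0, g=4.0, seed=0):
    """Euler simulation of the Rossler sub-network X (Eq. 4.9) driving
    the rate sub-network Y (Eq. 4.10).  lam, dt, and the initial
    conditions are assumptions."""
    rng = np.random.default_rng(seed)
    NY = 100
    J_yx = np.full((NY, 3), 0.1)               # feedforward X -> Y coupling
    J_yy = rng.normal(0.0, g, size=(NY, NY))   # recurrent weights of Y
    a, b, c = 0.2, 0.2, 5.7                    # Rossler parameters
    X = np.zeros((T, 3)); X[0] = [1.0, 1.0, 1.0]
    Y = np.zeros((T, NY))
    for t in range(T - 1):
        x1, x2, x3 = X[t]
        dX = np.array([-x2 - x3, x1 + a * x2, b + x3 * (x1 - c)])
        X[t + 1] = X[t] + dt * dX
        dY = -lam * Y[t] + 10 * np.tanh(J_yx @ X[t] + J_yy @ Y[t])
        Y[t + 1] = Y[t] + dt * dY
    return X, Y[:, :3]        # only the first three Y neurons are observed

X, Y3 = simulate_network()
# PID triplet samples (Z_i^t, Z_j^t, Z_j^{t+1}) for the pair (X_1, Y_1)
triplets = np.stack([X[:-1, 0], Y3[:-1, 0], Y3[1:, 0]], axis=1)
```

Each directed pair $(Z_{i},Z_{j})$ then yields one such triplet array, on which the PID estimator of Section 3 is run.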
Figure 3: PID uncovers the effective connectivity and allows for the
quantification of storage, modification, and transfer of information in a
chaotic network of rate neurons. a: Schematics of recurrent network
architecture (left) and representative activity (right). b: Schematic of the
PID triplets for each $3\times 3$ block of the matrices in c, d. c: PID
decomposition into modified transfer $S$, unique transfer $U_{1}$, redundant
storage $R$, and unique storage $U_{2}$ for the rate network. The future of
$X$ neurons only depends on unique information in the past of $X$ neurons and
their synergistic interactions. The interactions between the $X$ and $Y$ sub-
networks only contain synergistic information regarding the future of $Y$ but
no redundant information; the latter is only present in the interactions
confined within each sub-network. d: The transfer entropy (TE), estimated via
IDTxl [58], recovers the sum of modified and unique transfer terms $S+U_{1}$.
Figure 4: PID of RNNs trained to solve generalized XOR problem. a: Input data
drawn from a 2D Gaussian Mixture Model with $K$ mixture components
$X\sim\sum_{k=1}^{K}\frac{1}{K}\mathcal{N}(X|\mu_{k},\sigma I)$ with means
lying on the unit circle (grey and black dots represent the two class labels).
b: Two layer network with 2D input layer, $5$ recurrently connected hidden
neurons $X$ and one readout neuron $Y$; RNN activity unfolds in time
(horizontal axis). The input is presented at time $t=0$, then withdrawn, and
the RNN is trained with BPTT to report the decision at $t=10$. In this
representation, layers correspond to time-steps and weights $W_{XX}$ are
shared between layers. c: PID between output $Y(t)$ and pairs of hidden
neurons $X_{i}(t-1),X_{j}(t-1)$ for $t=10$ yielding $S,R,U_{1},U_{2}$
(distribution over 1000 input samples for each task $K$; 20 networks per
task). Harder tasks led to an increase in PID measures. d: Example receptive
fields for a network with $U>S$ show the emergence of grandmother cells in the
hidden layer (red and blue colors represent hidden neurons outputs;
grandmother cell, second from left). e: Example receptive fields for a network
with $S>U$, relying on higher synergy between neurons to solve the task.
Uncovering a plurality of computational strategies in RNNs trained to solve
complex tasks. A fundamental goal in neuroscience is to understand the
computational mechanisms emerging from the collective interactions of
recurrent neural circuits leading to cognitive function and behavior. Here, we
show that PID opens a new window for assessing how specific computations arise
from recurrent neural interactions. Unlike MI or TE, the PID quantifies the
alternative ways in which a neuron determines the information in its output
from its inputs, and thus can be a sensitive marker of different computational
strategies. We here trained RNNs as models of cortical circuits [59] and used
the PID to elucidate how the computations emerging from recurrent neural
interactions contribute to task performance. We trained RNNs to solve a
generalized version of the classic XOR classification problem with target
labels corresponding to odd vs. even mixture components (Fig. 4a). Stimuli
were presented for one time step ($t=0$) and the network was trained to report
the decision at $t=10$. By tracking the temporal trajectories of the hidden
layer activity we found that the network recurrent dynamics (represented as
unfolded in time in Fig. 4b) progressively pulls the two input classes in
opposite directions along the output weights (see Appendix). We used PID to
dissect how a plurality of different strategies emerge from recurrent neural
interactions in RNNs trained for solving a classification task. The
computation emerged from the recurrent interaction between hidden neurons at
different time steps. Do all successfully trained networks have a similar
profile in terms of the PID terms? If so, this hints at a single computational
strategy across these networks. If not, it is safe to assume that task
performance is reached via different mechanisms, despite identical network
architecture and training algorithm.
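The input distribution of Fig. 4a can be sketched as follows; the component standard deviation $\sigma$ is an assumption, as the excerpt does not state it:

```python
import numpy as np

def make_generalized_xor(n, K=4, sigma=0.1, seed=0):
    """Generalized XOR inputs: 2D GMM with K equal-weight components whose
    means lie on the unit circle; labels are the parity (odd vs. even) of
    the mixture component, as in Fig. 4a.  sigma is an assumed scale."""
    rng = np.random.default_rng(seed)
    k = rng.integers(K, size=n)                  # component index per sample
    angles = 2.0 * np.pi * np.arange(K) / K      # means on the unit circle
    means = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    X = means[k] + sigma * rng.normal(size=(n, 2))
    y = k % 2                                    # odd vs. even class label
    return X, y

X, y = make_generalized_xor(1000, K=4)
```

Larger $K$ gives harder tasks, since more alternating class regions must be separated.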
We found that on average across multiple networks S, R, and U rose with task
difficulty (Fig. 4c), yet at all difficulties, individual networks differed
strongly with respect to the ratio $S/U$, i.e. there were networks with larger
average synergy across neuron pairs compared to the average unique
information, and vice versa. For simple networks like the ones used here, one
can inspect receptive fields to understand the reason for this differential
behaviour (Fig. 4d-e). Indeed, networks with high average unique information
displayed ’grandmother-cell’-like neurons that would alone classify a large
part of the sample space, while in networks with higher average synergy such
cells were absent (Fig. 4d). The emergence of these ’grandmother-cell’-like
receptive fields is due to the recurrent dynamics. While in a feedforward
architecture ($W_{XX}=0$) hidden layer receptive fields are captured by
hyperplanes in input space, in the RNN the receptive fields are time
dependent, where later times are interpreted as deeper layers (Fig. 4b) and
thus can capture highly non-linear features in input space. The advantage of
PID versus a manual inspection of receptive fields is twofold: First, the PID
framework abstracts and generalizes descriptions of receptive fields as being
e.g. ’grandmother-cell’-like; thus the concept of unique information stays
relevant even in scenarios where the concept of a receptive field becomes
meaningless, or inaccessible. Second, the quantitative outcomes of a PID rest
only on information theory, not specific assumptions about neural coding or
computational strategies, and can be obtained for large numbers of neurons.
Comparison of our PID-based approach with the concept of neuronal selectivity
used in neuroscience highlights interesting similarities and differences.
Several kinds of selectivity (pure, mixed linear, and mixed non-linear) can be
identified by performing regression analysis of neural responses vs. task
variables [60]. In this framework, our grandmother cells correspond to
neurons with pure selectivity to the input class labels (a.k.a. "choice-
selective" neurons). In the XOR task, [60] showed that non-linear mixed
selectivity of neurons to the class labels is beneficial when solving the XOR
task, by leading to a high-dimensional representation of the task variables.
While selectivity profiles are a property of single neuron responses to task
variables, our PID measures are a property of the combined activity of
triplets of neurons and thus reveal emerging functional interactions between
units and their computational algorithms (see also [7] and [52]). This allowed
us to characterize a functional property of neural systems less studied than
task variable selectivity: the computations that require functional mixing of
the information from multiple units (measured by the average synergistic
information) vs. the computations that rely on the output of individual
neurons (measured by the unique information and described as grandmother
cells). Concretely, by comparing PID and receptive fields we found that
in networks with high unique information, neurons typically have receptive
fields with pure selectivity (grandmother cells, with large unique information
to the class labels). In networks with high synergy, neurons show complex
mixed selectivity to class labels.
## 5 Conclusions
We presented a partial information decomposition measure for continuous
variables with arbitrary probability densities, thereby extending the popular
BROJA PID measure for discrete variables. Extending PID measures to continuous
variables drastically broadens the possible applications of the PID framework.
This is important as the latter provides key insights into the way a complex
system represents and modifies information in a computation – via asking which
variables carry information about a target uniquely (such that it can only be
obtained from that variable), redundantly, or only synergistically with other
variables. Answering these questions is pivotal to understanding distributed
computation in complex systems in general, and neural coding in particular. We
believe that the methods presented here will allow PIDs to be applied
efficiently in neuroscience to multiple continuous sources with potentially
complex dependency structures, as would be common in cellular imaging data or
activation properties of brain modules or areas in functional imaging. More
generally, the approach we presented here would be relevant for other
application domains such as machine learning, biomedical science, finance, and
the physical sciences.
## Acknowledgments
We thank Thibault Vatter and Praveen Venkatesh for conversations. The work of
AP is supported by the Simons Foundation, the DARPA NESD program, NSF NeuroNex
Award DBI1707398 and The Gatsby Charitable Foundation. DG is supported by a
Swartz Fellowship. AM and MW are supported by Volkswagenstiftung under the
program ‘Big Data in den Lebenswissenschaften’ and by the Ministry for Science
and Education of Lower Saxony and the Volkswagen Foundation through the
‘Niedersächsisches Vorab’. LM is supported by NINDS Grant NS118461 (BRAIN
Initiative). ES is supported by the Simons Collaboration on the Global Brain
(542997) as well as research support from Martin Kushner Schnur and Mr. and
Mrs. Lawrence Feis, and is the Joseph and Bessie Feinberg Professorial Chair.
## References
* [1] Paul L Williams and Randall D Beer. Nonnegative decomposition of multivariate information. arXiv preprint arXiv:1004.2515, 2010.
* [2] Tycho Tax, Pedro AM Mediano, and Murray Shanahan. The partial information decomposition of generative neural network models. Entropy, 19(9):474, 2017.
* [3] Patricia Wollstadt, Sebastian Schmitt, and Michael Wibral. A rigorous information-theoretic definition of redundancy and relevancy in feature selection based on (partial) information decomposition. arXiv preprint arXiv:2105.04187, 2021.
* [4] Allison E Goodwell et al. Debates—does information theory provide a new paradigm for earth science? Water Resources Research, 56(2):e2019WR024940, 2020.
* [5] Benjamin Flecker, Wesley Alford, John M Beggs, Paul L Williams, and Randall D Beer. Partial information decomposition as a spatiotemporal filter. Chaos, 21(3):037104, 2011.
* [6] Michael Wibral, Joseph T Lizier, and Viola Priesemann. Bits from brains for biologically inspired computing. Frontiers in Robotics and AI, 2:5, 2015.
* [7] Nicholas M Timme, Shinya Ito, Maxym Myroshnychenko, Sunny Nigam, Masanori Shimono, Fang-Chin Yeh, Pawel Hottowy, Alan M Litke, and John M Beggs. High-degree neurons feed cortical computations. PLoS computational biology, 12(5):e1004858, 2016.
* [8] Michael Wibral, Viola Priesemann, Jim W Kay, Joseph T Lizier, and William A Phillips. Partial information decomposition as a unified approach to the specification of neural goal functions. Brain and cognition, 112:25–38, 2017.
* [9] Giuseppe Pica, Eugenio Piasini, Houman Safaai, Caroline Runyan, Christopher D Harvey, Mathew E Diamond, Christoph Kayser, Tommaso Fellin, and Stefano Panzeri. Quantifying how much sensory information in a neural code is relevant for behavior. In NIPS, 2017.
* [10] Jim W Kay, WA Phillips, Jaan Aru, Bruce P Graham, and Matthew E Larkum. A Bayesian decomposition of BAC firing as a mechanism for apical amplification in neocortical pyramidal neurons. bioRxiv, page 604066, 2019.
* [11] Itay Gat and Naftali Tishby. Synergy and redundancy among brain cells of behaving monkeys. In Advances in neural information processing systems, pages 111–117, 1999.
* [12] Naama Brenner, Steven P Strong, Roland Koberle, William Bialek, and Rob R de Ruyter van Steveninck. Synergy in a neural code. Neural computation, 12(7):1531–1552, 2000.
* [13] Elad Schneidman, William Bialek, and Michael J Berry. Synergy, redundancy, and independence in population codes. Journal of Neuroscience, 23(37), 2003.
* [14] Rodrigo Quian Quiroga and Stefano Panzeri. Extracting information from neuronal populations: information theory and decoding approaches. Nature Reviews Neuroscience, 10(3):173–185, 2009.
* [15] Nils Bertschinger, Johannes Rauh, Eckehard Olbrich, Jürgen Jost, and Nihat Ay. Quantifying unique information. Entropy, 16(4):2161–2183, 2014.
* [16] D Blackwell. Comparison of experiments. Proc. 2nd Berkeley Symp. Math. Stats. and Probability, 1951.
* [17] Moshe Leshno and Yishay Spector. An elementary proof of Blackwell’s theorem. Mathematical Social Sciences, 25(1):95–98, 1992.
* [18] Erik Torgersen. Comparison of statistical experiments. Cambridge University Press, 1991.
* [19] Lucien Le Cam. Comparison of experiments: A short review. Lecture Notes-Monograph Series, pages 127–138, 1996.
* [20] Nils Bertschinger, Johannes Rauh, Eckehard Olbrich, and Jürgen Jost. Shared information—new insights and problems in decomposing information in complex systems. In Thomas Gilbert, Markus Kirkilionis, and Gregoire Nicolis, editors, Proceedings of the European Conference on Complex Systems 2012, pages 251–269, Cham, 2013. Springer International Publishing.
* [21] Robin AA Ince. Measuring multivariate redundant information with pointwise common change in surprisal. Entropy, 19(7):318, 2017.
* [22] Conor Finn and Joseph T Lizier. Pointwise partial information decomposition using the specificity and ambiguity lattices. Entropy, 20(4):297, 2018.
* [23] Malte Harder, Christoph Salge, and Daniel Polani. Bivariate measure of redundant information. Physical Review E, 87(1):012130, 2013.
* [24] Johannes Rauh. Secret sharing and shared information. Entropy, 19(11):601, 2017.
* [25] Abdullah Makkeh, Aaron J Gutknecht, and Michael Wibral. Introducing a differentiable measure of pointwise shared information. Physical Review E, 103(3):032149, 2021.
* [26] Aaron J Gutknecht, Michael Wibral, and Abdullah Makkeh. Bits and pieces: Understanding information decomposition from part-whole relationships and formal logic. Proceedings of the Royal Society A, 477(2251):20210110, 2021.
* [27] Pradeep Kr Banerjee, Johannes Rauh, and Guido Montúfar. Computing the unique information. In ISIT, 2018.
* [28] Abdullah Makkeh, Dirk Oliver Theis, and Raul Vicente. Bivariate partial information decomposition: The optimization perspective. Entropy, 19(10):530, 2017.
* [29] Abdullah Makkeh, Dirk Oliver Theis, and Raul Vicente. Broja-2pid: A robust estimator for bivariate partial information decomposition. Entropy, 20(4):271, 2018.
* [30] Johannes Rauh, Maik Schünemann, and Jüergen Jost. Properties of unique information. arXiv preprint arXiv:1912.12505, 2019.
* [31] Adam B Barrett. Exploration of synergistic and redundant information sharing in static and dynamical Gaussian systems. Physical Review E, 91(5), 2015.
* [32] Gabriel Schamberg and Praveen Venkatesh. Partial Information Decomposition via Deficiency for Multivariate Gaussians. arXiv preprint arXiv:2105.00769, 2021.
* [33] M Sklar. Fonctions de repartition a n dimensions et leurs marges. Publ. inst. statist. univ. Paris, 8:229–231, 1959.
* [34] Harry Joe. Multivariate Models and Multivariate Dependence Concepts. CRC Press, May 1997.
* [35] Gal Elidan. Copulas in machine learning. In Copulae in mathematical and quantitative finance, pages 39–60. Springer, 2013.
* [36] Rafael S Calsaverini and Renato Vicente. An information-theoretic approach to statistical dependence: Copula information. EPL (Europhysics Letters), 88(6):68003, 2009.
* [37] Jian Ma and Zengqi Sun. Mutual information is copula entropy. Tsinghua Science & Technology, 16(1):51–54, 2011.
* [38] Tim Bedford and Roger M Cooke. Probability density decomposition for conditionally dependent random variables modeled by vines. Annals of Mathematics and Artificial intelligence, 32(1-4):245–268, 2001.
* [39] Kjersti Aas, Claudia Czado, Arnoldo Frigessi, and Henrik Bakken. Pair-copula constructions of multiple dependence. Insurance: Mathematics and economics, 44(2):182–198, 2009.
* [40] Claudia Czado. Pair-copula constructions of multivariate copulas. In Copula theory and its applications, pages 93–109. Springer, 2010.
* [41] Thomas Nagler and Claudia Czado. Evading the curse of dimensionality in nonparametric density estimation with simplified vine copulas. Journal of Multivariate Analysis, 151:69–89, 2016.
* [42] Roger B Nelsen. An Introduction to Copulas. Springer Science & Business Media, June 2007.
* [43] Thomas Nagler and Thibault Vatter. pyvinecopulib, November 2020.
* [44] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. ICLR 2016, 2016.
* [45] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. ICLR, 2014.
* [46] George Tucker, Dieterich Lawson, Shixiang Gu, and Chris J Maddison. Doubly reparameterized gradient estimators for Monte Carlo objectives. ICLR, 2019.
* [47] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR 2015.
* [48] Joseph T Lizier, Benjamin Flecker, and Paul L Williams. Towards a synergy-based approach to measuring information modification. In 2013 IEEE Symposium on Artificial Life (ALIFE), pages 43–51. IEEE, 2013.
* [49] Matteo Carandini and David J Heeger. Normalization as a canonical neural computation. Nat. Rev. Neurosci., 13(1):51–62, November 2011.
* [50] Johannes Ballé, Valero Laparra, and Eero P Simoncelli. End-to-end optimized image compression. ICLR, 2017.
* [51] Andrew T Reid, Drew B Headley, Ravi D Mill, Ruben Sanchez-Romero, Lucina Q Uddin, Daniele Marinazzo, Daniel J Lurie, Pedro A Valdés-Sosa, Stephen José Hanson, Bharat B Biswal, et al. Advancing functional connectivity research from association to causation. Nature neuroscience, 22(11):1751–1760, 2019.
* [52] Michael Wibral, Conor Finn, Patricia Wollstadt, Joseph T Lizier, and Viola Priesemann. Quantifying information modification in developing neural networks via partial information decomposition. Entropy, 19(9):494, 2017.
* [53] Thomas Schreiber. Measuring information transfer. Physical review letters, 85(2):461, 2000.
* [54] Raul Vicente, Michael Wibral, Michael Lindner, and Gordon Pipa. Transfer entropy—a model-free measure of effective connectivity for the neurosciences. Journal of computational neuroscience, 30(1):45–67, 2011.
* [55] Leonardo Novelli and Joseph T Lizier. Inferring network properties from time series using transfer entropy and mutual information: validation of multivariate versus bivariate approaches. Network Neuroscience, 5(2):373–404, 2021.
* [56] Paul L Williams and Randall D Beer. Generalized measures of information transfer. arXiv preprint arXiv:1102.1507, 2011.
* [57] Amin Nejatbakhsh, Francesco Fumarola, Saleh Esteki, Taro Toyoizumi, Roozbeh Kiani, and Luca Mazzucato. Predicting perturbation effects from resting state activity using functional causal flow. bioRxiv, 2020.
* [58] Patricia Wollstadt, Joseph T Lizier, Raul Vicente, Conor Finn, Mario Martinez-Zarzuela, Pedro Mediano, Leonardo Novelli, and Michael Wibral. Idtxl: The information dynamics toolkit xl: a python package for the efficient analysis of multivariate information dynamics in networks. Journal of Open Source Software, 4(34):1081, 2019.
* [59] Valerio Mante, David Sussillo, Krishna V Shenoy, and William T Newsome. Context-dependent computation by recurrent dynamics in prefrontal cortex. nature, 503(7474):78–84, 2013.
* [60] Mattia Rigotti, Omri Barak, Melissa R Warden, Xiao-Jing Wang, Nathaniel D Daw, Earl K Miller, and Stefano Fusi. The importance of mixed selectivity in complex cognitive tasks. Nature, 497(7451):585–590, 2013.
* [61] Tom Rainforth, Adam Kosiorek, Tuan Anh Le, Chris Maddison, Maximilian Igl, Frank Wood, and Yee Whye Teh. Tighter variational bounds are not necessarily better. In ICML, 2018.
* [62] Murray Rosenblatt. Remarks on a multivariate transformation. The annals of mathematical statistics, 23(3):470–472, 1952.
## Supplementary Material
## Appendix A Estimating the gradients
As shown in Section 3, the unique information $U(Y\mathbin{:}X_{1}\backslash
X_{2})$ is upper bounded as
$\displaystyle I_{q}[\theta]\leq B_{1}[\theta]+B_{2}[\theta,\phi]\,,$ (A.1)
where
$\displaystyle B_{1}[\theta]$ $\displaystyle=$
$\displaystyle\mathbb{E}_{c_{\theta}(u_{y},u_{1},u_{2})}\log\left[c(u_{y},u_{1})c_{1,2|\theta(u_{y})}(u_{1|y},u_{2|y})\right]\,,$
(A.2) $\displaystyle B_{2}[\theta,\phi]$ $\displaystyle=$
$\displaystyle-\mathbb{E}_{c_{\theta}(u_{1},u_{2})}D_{A,\theta,\phi}(u_{1},u_{2})\,,$
(A.3)
and
$\displaystyle
D_{A,\theta,\phi}(u_{1},u_{2})=\mathbb{E}_{r_{\phi}(u_{y}^{(1)}\ldots
u_{y}^{(A)}|u_{1},u_{2})}\log\left[\frac{1}{A}\sum_{a=1}^{A}\frac{c_{\theta}(u_{y}^{(a)},u_{1},u_{2})}{r_{\phi}(u_{y}^{(a)}|u_{1},u_{2})}\right]\,,$
(A.4)
and the above expectation is w.r.t.
$\displaystyle r_{\phi}(u_{y}^{(1)}\ldots
u_{y}^{(A)}|u_{1},u_{2})\equiv\prod_{a=1}^{A}r_{\phi}(u_{y}^{(a)}|u_{1},u_{2})\,.$
(A.5)
The parametrization we use for the inference distribution
$r_{\phi}(u_{y}|u_{1},u_{2})$ is detailed below in Appendix D. We are
interested in minimizing the r.h.s. of (A.1) w.r.t. $(\theta,\phi)$. To obtain
low-variance gradients, it is convenient to eliminate the $\theta,\phi$
dependence in the measures of (A.2)-(A.4) using the ‘reparametrization trick’
[45].
The idea is to obtain samples from $c_{\theta}(u_{y},u_{1},u_{2})$ by a
$\theta$-dependent transformation of three $\textrm{Unif}[0,1]$ samples ${\bf
v}=(v_{y},v_{1},v_{2})$, and samples from $r_{\phi}(u_{y}|u_{1},u_{2})$ by a
$(u_{1},u_{2},\phi)$-dependent transformation of
$\epsilon\sim\textrm{Unif}[0,1]$. We present the details of these
transformations in Appendices C and D, respectively.
Taking $M$ samples of $c_{\theta}(u_{y},u_{1},u_{2})$ and denoting them as
$\bar{u}_{y}^{(m)},\bar{u}_{1}^{(m)},\bar{u}_{2}^{(m)}$, we can estimate (A.2)
as
$\displaystyle
B_{1}[\theta]\simeq\frac{1}{M}\sum_{m=1}^{M}\log\left[c(\bar{u}^{(m)}_{y},\bar{u}^{(m)}_{1})\,c_{1,2|\theta\left(\bar{u}_{y}^{(m)}\right)}(\bar{u}^{(m)}_{1|y},\bar{u}^{(m)}_{2|y})\right]$
(A.6)
where we denoted $\bar{u}_{i|y}=F(u_{i}=\bar{u}_{i}|u_{y}=\bar{u}_{y})$ for
$i=1,2$. An estimate of the gradient $\nabla_{\theta}B_{1}$ is obtained by
acting on this expression with $\nabla_{\theta}$, which also acts on the
$\theta$-dependent samples.
Denoting $A$ samples from $r_{\phi}(u_{y}|u_{1},u_{2})$ as
$\hat{u}_{y}^{(a)}$, we can also estimate (A.3) as
$\displaystyle
B_{2}[\theta,\phi]\simeq-\frac{1}{M}\sum_{m=1}^{M}\log\left(\frac{1}{A}\sum_{a=1}^{A}w_{a,m}\right)\,,$
(A.7)
where we defined
$\displaystyle
w_{a,m}=\frac{c_{\theta}(\hat{u}_{y}^{(a)},\bar{u}^{(m)}_{1},\bar{u}^{(m)}_{2})}{r_{\phi}(\hat{u}_{y}^{(a)}|\bar{u}^{(m)}_{1},\bar{u}^{(m)}_{2})}\,.$
(A.8)
Acting on this expression with $\nabla_{\theta}$ yields an estimate of
$\nabla_{\theta}B_{2}$. On the other hand, as noted in [61], the estimate of
$\nabla_{\phi}B_{2}$ resulting from acting with $\nabla_{\phi}$ on (A.7) has a
signal-to-noise ratio which decreases with $A$. A solution to this problem was
found in [46], which showed that a stable gradient estimate can be obtained
instead as
$\displaystyle\nabla_{\phi}B_{2}\simeq\frac{-1}{M}\sum_{m=1}^{M}\sum_{a=1}^{A}\left(\frac{w_{a,m}}{\sum_{s=1}^{A}w_{s,m}}\right)^{2}\frac{\partial\log
w_{a,m}}{\partial\hat{u}_{y}^{(a)}}\nabla_{\phi}\hat{u}_{y}^{(a)}\,,$ (A.9)
and this is the estimate we use in our experiments.
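The quantity $D_{A,\theta,\phi}$ in (A.4) is an importance-weighted lower bound on a log-marginal, in the style of IWAE [44]. The following toy illustration (a simple Gaussian latent model, not the copula model of the paper) shows the two properties used in the text: the bound lies below the exact log-marginal and tightens as $A$ grows. The model and proposal here are assumptions chosen so the exact answer is known in closed form.

```python
import numpy as np
from scipy.stats import norm

def iw_log_marginal(x, A, rng):
    """One importance-weighted estimate log (1/A) sum_a p(x, z_a)/r(z_a|x),
    the same construction as D_A in Eq. (A.4), for the toy model
    z ~ N(0,1), x|z ~ N(z,1), with proposal r(z|x) = N(x/2, 1)."""
    z = rng.normal(x / 2.0, 1.0, size=A)
    log_w = (norm.logpdf(z) + norm.logpdf(x, z, 1.0)
             - norm.logpdf(z, x / 2.0, 1.0))
    return np.logaddexp.reduce(log_w) - np.log(A)

rng = np.random.default_rng(0)
x = 1.3
exact = norm.logpdf(x, 0.0, np.sqrt(2.0))          # true log-marginal of x
est_small = np.mean([iw_log_marginal(x, 2, rng) for _ in range(2000)])
est_large = np.mean([iw_log_marginal(x, 100, rng) for _ in range(2000)])
# est_small <= est_large <= exact (in expectation): the bound tightens with A
```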
## Appendix B The bivariate Gaussian copula
A bivariate Gaussian copula is parametrized by $\theta\in[-1,1]$ and given by
$\displaystyle
c(u_{1},u_{2})=\frac{1}{\sqrt{1-\theta^{2}}}\exp\left\\{-\frac{\theta^{2}(x_{1}^{2}+x_{2}^{2})-2\theta
x_{1}x_{2}}{2(1-\theta^{2})}\right\\}$ (B.1)
where $x_{i}=\Phi^{-1}(u_{i})$ and $\Phi$ is the standard univariate Gaussian
CDF. For explicit expressions of other popular bivariate copulas, see [39].
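Eq. (B.1) translates directly into code; a quick sketch with a sanity check at $\theta=0$, where the copula density is identically one:

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_density(u1, u2, theta):
    """Bivariate Gaussian copula density of Eq. (B.1)."""
    x1, x2 = norm.ppf(u1), norm.ppf(u2)    # x_i = Phi^{-1}(u_i)
    num = theta**2 * (x1**2 + x2**2) - 2.0 * theta * x1 * x2
    return np.exp(-num / (2.0 * (1.0 - theta**2))) / np.sqrt(1.0 - theta**2)

# theta = 0 recovers the independence copula, c = 1 everywhere
c0 = gaussian_copula_density(0.3, 0.7, 0.0)
```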
## Appendix C Sampling from the copula
In this section we show how to obtain samples from the three-dimensional
copula
$\displaystyle\begin{aligned}
c_{\theta}(u_{y},u_{1},u_{2})&=c(u_{y},u_{1})\,c(u_{y},u_{2})\,c_{1,2|y,\theta}(u_{1|y},u_{2|y})\end{aligned}$
(C.1)
by applying a $\theta$-dependent transformation to samples from Unif$[0,1]$.
We use the Rosenblatt transform [62], which consists in using the inverse CDF
method to sample from each factor in
$\displaystyle
c(u_{y},u_{1},u_{2})=c(u_{1})c(u_{y}|u_{1})c(u_{2}|u_{y},u_{1}).$ (C.2)
We denote by $F(\cdot|\cdot)$ the CDF of $c(\cdot|\cdot)$. Adopting the
notation of [39], we define
$\displaystyle h_{ij}(u_{i},u_{j})=F(u_{i}|u_{j})=\frac{\partial
C(u_{i},u_{j})}{\partial u_{j}}\,,\quad i,j=1,2.$ (C.3)
For several popular parametric families of bivariate copulas, such as those we
consider in this paper, explicit expressions are known for
$h_{ij}(u_{i},u_{j})$ along with its inverse $h^{-1}_{ij}(\cdot,u_{j})$ w.r.t.
the first argument (see e.g. [39]). Note that using this notation, the
arguments in the last factor of (C.1) are $u_{i|y}=h_{iy}(u_{i},u_{y})$
($i=1,2$).
We first sample $(v_{1},v_{y},v_{2})$ from Unif$[0,1]$ and successively obtain
$u_{1},u_{y},u_{2}$ by inverting the functions on the r.h.s. of
$\displaystyle\begin{aligned} v_{1}&=F(u_{1})=u_{1}\,,\\\ v_{y}&=F(u_{y}|u_{1})=h_{y1}(u_{y},u_{1})\,,\\\ v_{2}&=F(u_{2}|u_{1},u_{y})=h_{21|\theta(u_{y})}(F(u_{2}|u_{y}),F(u_{1}|u_{y}))=h_{21|\theta(u_{y})}(h_{2y}(u_{2},u_{y}),h_{1y}(u_{1},u_{y}))\,.\end{aligned}$
Explicitly, we get
$\displaystyle\begin{aligned} u_{1}&=v_{1}\,,\\\ u_{y}&=h_{y1}^{-1}(v_{y},u_{1})\,,\\\ u_{2}&=h_{2y}^{-1}\left(h_{21|\theta(u_{y})}^{-1}(v_{2},h_{1y}(u_{1},u_{y})),u_{y}\right)\,.\end{aligned}$
Note that only $u_{2}$ actually depends on $\theta$.
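The whole sampling scheme can be sketched as follows, under the simplifying assumption that every pair copula is Gaussian, for which $h(u_{1},u_{2})=\Phi\big((\Phi^{-1}(u_{1})-\theta\,\Phi^{-1}(u_{2}))/\sqrt{1-\theta^{2}}\big)$ and its inverse in the first argument are explicit; the parameter names are ours:

```python
from math import sqrt
from statistics import NormalDist

_N = NormalDist()
Phi, Phi_inv = _N.cdf, _N.inv_cdf

def h(u, v, theta):
    """Gaussian-copula h-function F(u|v)."""
    return Phi((Phi_inv(u) - theta * Phi_inv(v)) / sqrt(1.0 - theta * theta))

def h_inv(w, v, theta):
    """Inverse of h in its first argument."""
    return Phi(Phi_inv(w) * sqrt(1.0 - theta * theta) + theta * Phi_inv(v))

def sample_copula(v1, vy, v2, theta_y1, theta_y2, theta_fn):
    """Map (v1, vy, v2) ~ Unif[0,1]^3 to a sample (u1, uy, u2) of (C.1).
    theta_fn(uy) parametrizes the conditional copula c_{1,2|y,theta}."""
    u1 = v1
    uy = h_inv(vy, u1, theta_y1)            # invert h_{y1}(uy, u1)
    u1_y = h(u1, uy, theta_y1)              # F(u1|uy)
    u2_y = h_inv(v2, u1_y, theta_fn(uy))    # invert the conditional copula
    u2 = h_inv(u2_y, uy, theta_y2)          # invert h_{2y}(u2, uy)
    return u1, uy, u2
```

When all parameters are zero the copula reduces to independence and the sampler returns $(v_{1},v_{y},v_{2})$ unchanged, which is a useful check.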
## Appendix D Parametrization of the learned models
### D.1 Parametrizing the learned conditional copula
The conditional Gaussian copula $c_{1,2|\theta(u_{y})}(u_{1|y},u_{2|y})$ is
parametrized by the function $\theta(u_{y}):[0,1]\rightarrow[-1,+1]$. For its
functional form we used
$\displaystyle\theta(u_{y})=\tanh\left(\sum_{i=1}^{16}w_{2,i}\tanh(w_{1,i}u_{y}+b_{1})+b_{2}\right)$
(D.1)
where $w_{1,i},w_{2,i},b_{1},b_{2}\in\mathbb{R}$.
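A direct transcription of (D.1), with random example weights standing in for the learned parameters:

```python
from math import tanh
import random

random.seed(0)
H = 16  # number of hidden tanh units, as in (D.1)
w1 = [random.gauss(0, 1) for _ in range(H)]    # example weights; learned in practice
w2 = [random.gauss(0, 0.1) for _ in range(H)]
b1, b2 = 0.0, 0.0                              # shared biases, as written in (D.1)

def theta(uy):
    """One-hidden-layer tanh network mapping [0,1] into (-1, 1), eq. (D.1)."""
    return tanh(sum(w2i * tanh(w1i * uy + b1) for w1i, w2i in zip(w1, w2)) + b2)
```

The outer tanh guarantees the output stays in $(-1,+1)$, the valid parameter range of the Gaussian copula, regardless of the weight values.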
### D.2 Parametrizing and sampling from the inference distribution
In our experiments we parametrize the inference distribution
$r_{\phi}(u_{y}|u_{1},u_{2})$ via its CDF, as
$\displaystyle\begin{aligned} R_{\phi}(u_{y}|u_{1},u_{2})&=\int_{0}^{u_{y}}du\,r_{\phi}(u|u_{1},u_{2})\\\ &=\frac{1}{1+e^{-z(u_{y})a_{\phi}(u_{1},u_{2})-b_{\phi}(u_{1},u_{2})}}\,,\end{aligned}$
where $z(u_{y})=\log\left(\frac{u_{y}}{1-u_{y}}\right)$. Differentiating with respect to
$u_{y}$ gives
$\displaystyle
r_{\phi}(u_{y}|u_{1},u_{2})=R_{\phi}(1-R_{\phi})a_{\phi}\left(u_{y}^{-1}+(1-u_{y})^{-1}\right)\,.$
(D.3)
The functions $a_{\phi}(u_{1},u_{2})$ and $b_{\phi}(u_{1},u_{2})$ take values
in $\mathbb{R}$ and are parametrized with a neural network with two hidden
layers, and we impose $a_{\phi}(u_{1},u_{2})>0$ in order to make $R_{\phi}$
monotonically increasing in $u_{y}$. In order to sample from $r_{\phi}$, we draw
$\epsilon\sim\textrm{Unif}[0,1]$, and use the inverse CDF method to obtain
$\displaystyle\begin{aligned} u_{y}(\epsilon,u_{1},u_{2})&=R^{-1}_{\phi}(\epsilon|u_{1},u_{2})\\\ &=\frac{1}{1+e^{-(z(\epsilon)-b_{\phi}(u_{1},u_{2}))/a_{\phi}(u_{1},u_{2})}}\,.\end{aligned}$
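The inverse-CDF sampler can be written compactly; here `a` and `b` stand for the network outputs $a_{\phi}(u_{1},u_{2})$ and $b_{\phi}(u_{1},u_{2})$, which we treat as given scalars:

```python
from math import exp, log

def logit(u):
    return log(u / (1.0 - u))

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def R(uy, a, b):
    """CDF R_phi(u_y | u_1, u_2), with a = a_phi(u_1,u_2) > 0, b = b_phi(u_1,u_2)."""
    return sigmoid(logit(uy) * a + b)

def sample_uy(eps, a, b):
    """Inverse-CDF sample: u_y = R_phi^{-1}(eps | u_1, u_2)."""
    return sigmoid((logit(eps) - b) / a)
```

Applying $R$ to a sample should recover the uniform draw exactly, since $z(\sigma(x))=x$.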
## Appendix E Comparison with a discrete estimator
In this section we estimate the PID of the two models of three neurons from
eq. (4.8) using the discrete estimator BROJA-2PID [29]. In particular, we
present a quantization scheme of the continuous models that leads to a
qualitative agreement between the discrete and continuous estimators, thus
further validating the results of the latter.
Let us denote the discretized versions of $X_{1},X_{2},Y$ as
$\hat{x}_{1},\hat{x}_{2},\hat{y}$. The discrete PID estimators require as
input a distribution $p(\hat{x}_{1},\hat{x}_{2},\hat{y})$ [29]. To create the
latter from our continuous models in eq. (4.8), we start by dividing the
continuous range of each $X_{i}(i=1,2)$ into $N_{x}$ segments, and associate
each segment with a discrete value $\hat{x}_{i}$ equal to the value of $X_{i}$
in the middle of each segment. To each square in the resulting 2D $N_{x}\times
N_{x}$ grid we associate a discrete probability $p(\hat{x}_{1},\hat{x}_{2})$
equal to the integral of the joint Gaussian density of $(X_{1},X_{2})$ in the
square. Finally, in each of the two models, we split the $Y$ range into
$N_{y}$ segments $\\{s_{i}\\}_{i=1}^{N_{y}}$. The boundaries of the segments
are chosen such that the same fraction $1/N_{y}$ of values of
$Y=Y(X_{1},X_{2})$ falls into each segment using eq. (4.8), a procedure called
‘maximum entropy binning’. Let $\hat{y}\in\\{1\ldots N_{y}\\}$. Using this
quantization, the three-dimensional discrete distribution is defined as
$\displaystyle p(\hat{x}_{1},\hat{x}_{2},\hat{y})$
$\displaystyle=\begin{cases}p(\hat{x}_{1},\hat{x}_{2})\quad&\textrm{if}\,\,\,Y(\hat{x}_{1},\hat{x}_{2})\in
s_{\hat{y}}\,,\\\ 0\quad&\textrm{if}\,\,\,Y(\hat{x}_{1},\hat{x}_{2})\notin
s_{\hat{y}}\,,\end{cases}$ (E.1)
where in each model $Y(\hat{x}_{1},\hat{x}_{2})$ is obtained from eq. (4.8).
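The binning step can be sketched as follows, assuming the grid probabilities $p(\hat{x}_{1},\hat{x}_{2})$ have already been obtained by integrating the joint Gaussian density over each square; the toy interface and a simple additive model are ours:

```python
# Sketch of the quantization scheme of Appendix E (array names are ours).
# p_x[i][j] = p(x1_hat = i-th grid value, x2_hat = j-th grid value).

def max_entropy_bins(y_values, n_y):
    """Boundaries such that a fraction 1/n_y of values falls in each bin."""
    ys = sorted(y_values)
    return [ys[(k * len(ys)) // n_y] for k in range(1, n_y)]

def bin_index(y, bounds):
    return sum(y >= b for b in bounds)  # y_hat in {0, ..., n_y - 1}

def discrete_joint(p_x, x_grid, Y, n_y):
    """Build p(x1_hat, x2_hat, y_hat) of (E.1): the mass p(x1_hat, x2_hat)
    is placed entirely in the bin containing Y(x1_hat, x2_hat)."""
    y_vals = [Y(a, b) for a in x_grid for b in x_grid]
    bounds = max_entropy_bins(y_vals, n_y)
    p = {}
    for i, a in enumerate(x_grid):
        for j, b in enumerate(x_grid):
            p[(i, j, bin_index(Y(a, b), bounds))] = p_x[i][j]
    return p
```

Because $Y$ is deterministic given $(\hat{x}_{1},\hat{x}_{2})$, each grid square contributes to exactly one $\hat{y}$ bin, matching the zero entries in (E.1).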
Fig. 5 shows the results of the discrete PID obtained using this quantization
for the two models considered, assuming $X_{i}\in[-8,8]$ and $N_{x}=16$
equally-sized segments. For the $Y$ quantization we used $N_{y}=3$. Note the
qualitative agreement with the continuous results in Fig. 2.
Figure 5: Qualitative agreement of discrete and continuous PID estimations.
The two models are defined in (4.8) (left: Model 1, right: Model 2), and we
used the same model parameters indicated in Fig. 2. We show the normalized
discrete PID terms as a function of the synaptic strength $w_{2}$. See the
text for details on the discrete quantization used. Note that for both models
the discrete results agree qualitatively with the continuous results in Fig.
2.
## Appendix F Consistency
Using Model 2 from Eq.(4.8) as an example, we compared estimates of
$U(Y\mathbin{:}X_{2}\backslash X_{1})$ with indirect estimates obtained from
applying the consistency conditions to estimates of
$U(Y\mathbin{:}X_{1}\backslash X_{2})$. The results in Figure 6 show good
agreement, thus further validating the method.
Figure 6: Comparison of direct vs. indirect estimates of
$U(Y\mathbin{:}X_{2}\backslash X_{1})$, illustrating the consistency of the
method.
## Appendix G More on the experiments
In this section we provide more details on the last two experiments presented
in Section 4.
Computational aspects of connectivity in recurrent neural circuits.
We start by deriving the relation $TE=S+U_{1}$ verified in this experiment
(Fig. 3d). Transfer entropy [53] $TE(X\to Y)$ is defined as $I(Y^{+}:X^{-}\mid
Y^{-})$, where $Y^{+}$ is the future state of $Y$, and $X^{-}$ and $Y^{-}$ are
the past states of $X$ and $Y$, respectively. Consider the chain rule for
mutual information,
$\displaystyle
I(Y^{+}\mathbin{:}(X^{-},Y^{-}))=I(Y^{+}\mathbin{:}Y^{-})+I(Y^{+}\mathbin{:}X^{-}|Y^{-})\,.$
(G.1)
Replacing $I(Y^{+}\mathbin{:}(X^{-},Y^{-}))$ and $I(Y^{+}\mathbin{:}Y^{-})$ by
the r.h.s. of (1.1) and (1.3), we get
$\displaystyle
I(Y^{+}\mathbin{:}X^{-}|Y^{-})=U(Y^{+}\mathbin{:}X^{-}\backslash
Y^{-})+S(Y^{+}\mathbin{:}(X^{-},Y^{-}))\,,$ (G.2)
which is the equation we verified by estimating separately the left and right
sides. The two terms in the r.h.s. are called _state-independent transfer
entropy_ and _state-dependent transfer entropy_ respectively in [56],
reflecting their intuitive meaning.
In Fig. 7, we analyze the state space of the network in Fig. 3 of the main
text. The activities of the upstream sub-network X and downstream sub-network
Y are shown, projected onto their first two principal components (PCs). The
causal structure and algorithmic details of the effective connectivity between
the two sub-networks cannot be identified solely by the observation of their
geometrical properties.
Uncovering a plurality of computational strategies in RNNs trained to solve
complex tasks.
Each RNN has a fully connected architecture with a tanh non-linearity. Data
were generated by sampling from the GMM with K components ($K\in\\{4,6,8,10\\}$)
in batches of 128 data points, for a total of $3000$ batches. The RNNs were
trained with standard backpropagation through time using the Adam optimizer in
PyTorch with a learning rate of $0.01$. For each trained RNN we considered all
triplets $(Y,X_{i},X_{j})$ with $i,j\in\\{1,\dots,5\\}$, i.e. the target
variable is the output of the network $Y$ and the source variables iterate
over all pairs of hidden nodes in the RNN. Once the RNN is trained, we collect
a test sample of $1000$ data points from the same GMM used for training, feed
the test data to the RNN, and run it forward for $t=10$ time steps. This gives
us $1000$ samples of each variable $X_{1:5},Y$, which we then use for PID
analysis on the triplets mentioned above. For each level of task difficulty
$K\in\\{4,6,8,10\\}$ we trained 5 RNNs and performed PID ($A=100$) on the
resulting trained networks.
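A minimal sketch of the sample-collection step, with toy weights standing in for a trained network (the actual RNNs were trained in PyTorch as described above; all names below are ours):

```python
import math
import random

random.seed(0)
H, T = 5, 10  # 5 hidden units, 10 recurrent time steps, as in the experiment

# Toy weights standing in for a trained network, for illustration only.
W_in = [random.gauss(0, 0.5) for _ in range(H)]                     # input -> hidden
W_rec = [[random.gauss(0, 1 / math.sqrt(H)) for _ in range(H)]      # hidden -> hidden
         for _ in range(H)]
W_out = [random.gauss(0, 0.5) for _ in range(H)]                    # hidden -> output

def run_rnn(x):
    """Fully connected tanh RNN, run forward for T steps on a scalar input."""
    h = [0.0] * H
    for _ in range(T):
        h = [math.tanh(W_in[i] * x + sum(W_rec[i][j] * h[j] for j in range(H)))
             for i in range(H)]
    y = sum(W_out[i] * h[i] for i in range(H))
    return h, y  # hidden activations X_{1:5} and output Y for one test point

# 1000 test points, giving 1000 joint samples of (X_{1:5}, Y) for PID analysis.
samples = [run_rnn(random.gauss(0, 1)) for _ in range(1000)]
```

Each sample then supplies one joint observation of every triplet $(Y, X_i, X_j)$ used by the PID estimator.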
Fig. 8 provides more details on the trained RNNs of Fig. 4 in the main text,
giving more insight into the computational strategies employed by each trained
instance as the task complexity grows. The first row shows the time evolution
of the recurrent layer of hidden units projected onto their first 3 PCs. Among
these RNN instances, the ones with $K=6,10$ have grandmother-like cells (as
confirmed by the receptive field plots in Fig. 8c), with large unique
information compared to the other cells. These grandmother-like cells cannot
be inferred by just inspecting the geometry of the hidden units in state
space, but can be identified with the PID. The PID thus reveals more details
about the computation and the differences between strategies across instances
of trained RNNs. Details of the PID for individual hidden nodes, including the
average unique and synergistic information of each node, its mutual
information with the output node, and the decoding weight connecting the
hidden node to the output unit, are included in Table 1.
Figure 7: State space of the chaotic network of rate neurons: Projection of
the state space of the recurrent units for upstream network $X$ (a) and
downstream network $Y$ (b) onto their respective first two principal
components.
Figure 8: Algorithmic investigation of RNNs trained on the generalized XOR task:
(a) Evolution of the hidden unit activations in time (recurrent time steps).
Darker colors correspond to earlier time points; red and cyan correspond to
even and odd trials. The blue arrow corresponds to the decoding direction, i.e.
the predicted label is given by the sign of the projection of the last time
point of each trajectory onto this direction. (b) Mutual information between
individual hidden units and the output of the network as a function of
recurrent time steps for the different tasks. (c) Receptive fields of
individual neurons; in certain cases ($K=6$, unit 1 and $K=10$, unit 2)
grandmother-like cells can be observed, yielding greater unique than
synergistic information and hinting at the algorithmic strategy employed by
that instance of the trained RNN.
 | K=4 | | | | K=6 | | | | K=8 | | | | K=10 | | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
N | UI | SI | MI | W | UI | SI | MI | W | UI | SI | MI | W | UI | SI | MI | W
1 | 0.28 | 0.27 | 1.22 | -0.05 | 0.72 | 0.38 | 1.1 | -0.11 | 0 | 0.22 | 1.59 | 0.09 | 0.05 | 0.19 | 2.7 | 0.27
2 | 0.39 | 0.59 | 0.9 | -0.57 | 0.01 | 0.39 | 1.24 | -0.15 | 1.58 | 0.42 | 2.93 | -0.57 | 1.74 | 0.13 | 3.47 | -0.55
3 | 0.04 | 0.39 | 0.84 | 0.18 | 0.01 | 0.39 | 0.93 | -0.14 | 0.04 | 0.05 | 1.18 | 0.07 | 0.38 | 0.46 | 2.21 | 0.04
4 | 0.01 | 0.22 | 1.2 | -0.38 | 0.02 | 0.32 | 0.93 | 0.02 | 0.34 | 0.33 | 2.16 | 0.03 | 0 | 0.37 | 1.24 | 0.02
5 | 0.4 | 0.29 | 1.38 | -0.34 | 0.56 | 0.69 | 1.8 | -0.43 | 0.01 | 0.41 | 3.39 | -0.62 | 0.02 | 0.2 | 2.32 | -0.21
Table 1: Node-specific details for generalized XOR task: Average node-specific
unique, synergistic, and mutual information (UI, SI, MI) and the decoding
weight for different nodes in the hidden layer ($N\in\\{1,\dots,5\\}$) and for
different task difficulty levels ($K\in\\{4,6,8,10\\}$).
|
# Learning From How Humans Correct
Tong Guo
###### Abstract
In industrial NLP applications, our manually labeled data contains a certain
amount of noisy data. We present a simple method to find the noisy data and
re-label them manually, while collecting the correction information. We then
present a novel method to incorporate the human correction information into a
deep learning model. Humans know how to correct noisy data, so this correction
information can be injected into the deep learning model. We experiment on our
own text classification dataset, which is manually labeled, because we
re-labeled the noisy data in our dataset for our industrial application. The
experiment result shows that our method improves the classification accuracy
from 91.7% to 92.5%. The 91.7% accuracy is obtained by training on the
corrected dataset, which already improves the baseline from 83.3% to 91.7%.
###### Keywords:
Deep Learning · Text Classification · NLP
## 1 Introduction
In recent years, deep learning [2] and BERT-based [1] models have shown
significant improvement on almost all NLP tasks. However, past methods did not
inject human correction information into the deep learning model. Humans
interact with the environment and learn from its feedback to correct their own
errors and mistakes. Our method tries to make a deep learning model imitate
how humans correct.
To solve this problem, we present a learning framework. The framework mainly
targets our industry application dataset, because the problem originates from
our industry application. To solve the text classification problem in our
industry application, we first label a dataset. We then find the noisy data in
the dataset and re-label them. The re-labeling step collects the human
correction information.
To the best of our knowledge, this is the first study exploring the
improvement obtained by injecting human correction information into a deep
model for natural language understanding. Our key contributions are fourfold:
1\. Based on our dataset, we present the simple steps to find the noisy data
and re-label them.
2\. We present the method to inject the human correction information into BERT
for text classification.
3\. The experiment result shows our framework yields a gain of 0.8% accuracy
over a strong baseline.
4\. Our learning framework can apply to a broad set of deep learning industry
applications whose datasets are manually labeled.
Figure 1: Our learning framework
## 2 Related work
BERT [1] is constructed by the multi-layer transformer encoder [10], which
produces contextual token representations that have been pre-trained from
unlabeled text and fine-tuned for the supervised downstream tasks. BERT
achieved state-of-the-art results on many sentence-level tasks from the GLUE
benchmark [3]. There are two steps in BERT’s framework: pre-training and fine-
tuning. During pre-training, the model is trained on unlabeled data by using
masked language model task and next sentence prediction task. Apart from
output layers, the same architectures are used in both pre-training and fine-
tuning. The same pre-trained model parameters are used to initialize models
for different down-stream tasks.
Our method is different from semi-supervised learning. Semi-supervised
learning addresses the problem of making the best use of a large amount of
unlabeled data. Such works include UDA [6], MixMatch [7], FixMatch [8], and
ReMixMatch [9]. These works do not have the human correction information. Our
work has a clear goal: to learn how humans correct their mistakes. [11]
studies the PN learning problem of binary classification, but our idea can
apply to a broad set of deep learning industry applications.
The work of OpenAI [12] also uses model predictions as references for human
labelers. That work uses the new human-labeled data derived from model
predictions to train a reward model. Our work does not have the reward model
of [12], and we argue that reinforcement learning from human feedback (RLHF)
is analogous to the re-labeling here. Moreover, the new human-labeled data
derived from model predictions do not conflict with our method, because the
new data can simply be merged into the whole dataset (of [12]'s policy model)
and then re-labeled / re-corrected. Correcting [12]'s policy with a reward
model amounts to correcting all the related data and labels in the training
dataset. A detailed pattern-based/substring-based human-correction method is
illustrated in [13].
## 3 Our method
In this section, we describe our method in detail. Our learning framework is
shown in Fig 1. The framework includes 5 steps:
Step 1: to solve the industry text classification problem, we label 2,040,000
data points and split them into 2,000,000 training data and 40,000 test data.
The 2,040,000 data points are sampled from our application database, whose
size is 500 million.
Step 2: we train / fine-tune the BERT model on the 2,000,000 training data. We
name the resulting model Model-A. Note that Model-A should not overfit the
training dataset.
Step 3: we use Model-A to predict labels for all 2,040,000 data points. We
find 300,000 data points whose predicted label and human label disagree, and
consider these the noisy data. In detail, there are 294,120 noisy data points
in the training dataset and 5,880 in the test dataset.
Step 4: we manually re-label the 300,000 data points and merge them back into
the 2,040,000 data. During the re-labeling, the previous human label and
Model-A's predicted label are listed as references, but the labelers also make
their own decisions.
Step 5: we add Model-A's predicted one-hot label as an additional input for
training / fine-tuning a new model. The detailed encoding method for the
predicted label is described in the next section. We name the resulting model
Model-C. Model-A's predicted one-hot labels represent the before-correction
information. The training ground truth for Model-C contains the 294,120
corrected human labels, which represent the after-correction information. So
Model-C learns how to correct and learns the text classification task at the
same time.
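Steps 3 and 4 above can be sketched as follows; `predict`, `relabel`, and the dataset layout are placeholders for illustration, not the production pipeline:

```python
# Minimal sketch of steps 3-4: flag data whose model prediction disagrees
# with the human label, send them for re-labeling, and merge back.

def find_noisy(dataset, predict):
    """dataset: list of (text, human_label); predict(text) -> label."""
    return [i for i, (text, label) in enumerate(dataset) if predict(text) != label]

def merge_relabel(dataset, noisy_ids, relabel):
    """Replace flagged labels with the human's corrected label."""
    fixed = list(dataset)
    for i in noisy_ids:
        text, _ = fixed[i]
        fixed[i] = (text, relabel(text))
    return fixed
```

In the paper's setting `predict` is Model-A and `relabel` is the human annotator, who sees both the previous label and the model prediction as references.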
Figure 2: The encoding detail for Model-A, which is corresponding to Fig 1.
Figure 3: The encoding detail for Model-C, which is corresponding to Fig 1.
## 4 The Model
We use BERT as our model. The training steps in our method belong to the
fine-tuning step of BERT. We follow the BERT convention to encode the input
text. The encoding detail for Model-A is shown in Fig 2. The encoding detail
for Model-C is shown in Fig 3.
## 5 Experiments
In this section we describe detail of experiment parameters and show the
experiment result. The detail result is shown in Table 2. The data size in our
experiment is shown in Table 1.
In fine-tuning, we use Adam [4] with learning rate of 1e-5 and use a dropout
[5] probability of 0.1 on all layers. We use BERT-Base (12 layer, 768 hidden
size) as our pre-trained model.
Table 1: The data size for text classification Data Size | Description
---|---
500,000,000 | All the data in our application database.
2,040,000 | The data we label in step 1 of Fig 1.
2,000,000 | The training dataset we split from the 2,040,000 data.
300,000 | All the noisy data we select from the 2,040,000 data to re-label.
294,120 | The noisy data in the training dataset.
40,000 | The test dataset we split from the 2,040,000 data.
5,880 | The noisy data in the testing dataset.
Table 2: The experiment result of text classification. The test dataset is the 40,000 data. The accuracy reported by human is 5000 data that sampled from the 500 million data. Model-A, Model-B and Model-C are corresponding to Fig 1. Model In Fig 1 | Test Dataset Accuracy | Human Evaluate Accuracy
---|---|---
Model-A | 83.3% | 88.0%
Model-B | 91.7% | 97.2%
Model-C | 92.5% | 97.7%
## 6 Analysis
In step 4 of Fig 1, the manual re-labeling can correct the noisy 300,000 data
points. Because the selected 300,000 data points do not fit Model-A, the worst
outcome of re-labeling is simply to keep the previous human label.
Step 5 in Fig 1 is the core contribution of our work. In step 5, the predicted
label of Model-A contains the before-correction information. The ground truth
for Model-C contains the after-correction information. So the model is
learning the human correction.
We could use the before-correction human label (i.e., the ground truth in step
1 of Fig 1) as the input for Model-C. But this approach cannot be applied to
real industry inference, because we cannot obtain a before-correction human
label as input for Model-C in a real industry application. In a real industry
application, we use Model-A to predict the one-hot label as the input for
Model-C.
Human evaluation accuracy is higher than the test dataset accuracy because we
randomly sampled 5000 data points from the 500 million data. The sampled 5000
data points represent the great majority of the 500 million data.
Why does the re-labeling method work? Because deep learning is
statistics-based. Take classification as an example. (In a broad sense, all
machine learning tasks can be viewed as classification.) Suppose there are
three very similar data points (data-1/data-2/data-3) whose labels are
class-A/class-A/class-B. Then the trained model will probably predict class-A
for data-3. We assume that data-3 was wrongly labeled as class-B by a human,
because more people label its similar data as class-A.
## 7 Conclusion
Humans interact with the environment and learn from its feedback to correct
their own errors. Based on this correction idea, we design a learning
framework to inject the information of human correction into a deep model. The
experiment result shows our idea works. For further improvement, we will try
to re-label the data whose top-1 and top-2 predicted scores are very close,
which means the model can hardly classify these data and the dataset contains
noisy data there. Our idea can apply to a broad set of deep learning industry
applications.
## References
* [1] Devlin J, Chang M W, Lee K, et al. Bert: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
* [2] Krizhevsky A, Sutskever I, Hinton G E. Imagenet classification with deep convolutional neural networks[J]. Advances in neural information processing systems, 2012, 25: 1097-1105.
* [3] Wang A, Singh A, Michael J, et al. GLUE: A multi-task benchmark and analysis platform for natural language understanding[J]. arXiv preprint arXiv:1804.07461, 2018.
* [4] Kingma D P, Ba J. Adam: A method for stochastic optimization[J]. arXiv preprint arXiv:1412.6980, 2014.
* [5] Srivastava N, Hinton G, Krizhevsky A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. The journal of machine learning research, 2014, 15(1): 1929-1958.
* [6] Xie Q, Dai Z, Hovy E, et al. Unsupervised data augmentation for consistency training[J]. arXiv preprint arXiv:1904.12848, 2019.
* [7] Berthelot D, Carlini N, Goodfellow I, et al. Mixmatch: A holistic approach to semi-supervised learning[J]. arXiv preprint arXiv:1905.02249, 2019.
* [8] Sohn K, Berthelot D, Li C L, et al. Fixmatch: Simplifying semi-supervised learning with consistency and confidence[J]. arXiv preprint arXiv:2001.07685, 2020.
* [9] Berthelot D, Carlini N, Cubuk E D, et al. Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring[J]. arXiv preprint arXiv:1911.09785, 2019.
* [10] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]//Advances in neural information processing systems. 2017: 5998-6008.
* [11] Northcutt C G, Wu T, Chuang I L. Learning with confident examples: Rank pruning for robust classification with noisy labels[J]. arXiv preprint arXiv:1705.01936, 2017.
* [12] Ouyang L, Wu J, Jiang X, et al. Training language models to follow instructions with human feedback[J]. arXiv preprint arXiv:2203.02155, 2022.
* [13] Guo, Tong (2022): Re-Label By Data Pattern Is All You Need For Knowledge Driven Deep Learning. TechRxiv. Preprint. https://doi.org/10.36227/techrxiv.20485917.v7
|
Teaser figure: Spontaneous body motion acquisition during a robot imitation
task using different viewpoints. The acquired data can be used to define
personalized Body-Machine Interfaces. (top) VR-disabled conditions. (bottom)
VR-enabled conditions.
# The Impact of Virtual Reality and Viewpoints
in Body Motion Based Drone Teleoperation
Matteo Macchini Student Member, IEEE Manana Lortkipanidze Fabrizio Schiano
Member, IEEE
and Dario Floreano Senior Member, IEEE The authors are with the Laboratory of
Intelligent Systems, École Polytechnique Fédérale de Lausanne, CH-1015
Lausanne (EPFL), Switzerland.
###### Abstract
The operation of telerobotic systems can be a challenging task, requiring
intuitive and efficient interfaces to enable inexperienced users to attain a
high level of proficiency. Body-Machine Interfaces (BoMI) represent a
promising alternative to standard control devices, such as joysticks, because
they leverage intuitive body motion and gestures. It has been shown that the
use of Virtual Reality (VR) and first-person view perspectives can increase
the user’s sense of presence in avatars. However, it is unclear if these
beneficial effects occur also in the teleoperation of non-anthropomorphic
robots that display motion patterns different from those of humans. Here we
describe experimental results on teleoperation of a non-anthropomorphic drone
showing that VR correlates with a higher sense of spatial presence, whereas
viewpoints moving coherently with the robot are associated with a higher sense
of embodiment. Furthermore, the experimental results show that spontaneous
body motion patterns are affected by VR and viewpoint conditions in terms of
variability, amplitude, and robot correlates, suggesting that the design of
BoMIs for drone teleoperation must take into account the use of Virtual
Reality and the choice of the viewpoint.
Keywords: Virtual Reality. Presence. Human-Robot Interfaces. Human Body
Motion.
CCS concepts: Human-centered computing → Human computer interaction (HCI) →
Interaction paradigms → Virtual reality; Human-centered computing → Human
computer interaction (HCI) → HCI design and evaluation methods → User studies
## 1 Introduction
Telerobotic systems are needed in many fields in which human cognition and
decision-making capacities are still crucial to accomplish a mission [14].
Such fields include but are not limited to navigation in challenging and
unstructured environments, search and rescue missions, and minimally invasive
surgery [12, 16, 25, 5]. To provide fine control of the telerobotic system,
the implementation of an efficient Human-Robot Interface (HRI) is crucial.
Most telerobotic applications are currently restricted to a small set of
experts who need to undergo long training processes to gain experience and
expertise in the task [9, 7]. With the fast advancements in the field of
robotics, new systems require control interfaces that are sufficiently
powerful and intuitive also for inexperienced users [26].
Body-Machine Interfaces (BoMIs) are the subdomain of HRIs that consist of the
acquisition and processing of body signals for the generation of control
inputs for the telerobotic system [6]. BoMIs are showing great potential in
improving user’s comfort and performance during the operation of mobile robots
[24], representing a more intuitive alternative to standard interfaces. The
acquisition of body motion is particularly suitable to be applied to BoMIs due
to the natural control capabilities that humans exert and train on it by daily
activities. The scientific literature offers several examples of motion-based
interfaces, proposing a heterogeneous set of robots to be controlled and a
multitude of methods to track the operator’s motion.
Among other mobile robots, drones are showing disruptive potential in both
industrial and research applications [13, 32, 11]. In the implementation of
motion-based BoMIs, one of the most challenging aspects is the definition of a
mapping function, which translates the user’s body motion into robot commands.
For non-anthropomorphic systems, mapping functions can be designed based on
the observation of the user’s spontaneous behavior while they imitate the
motion of the robot performing a set of predefined actions (Fig. The Impact of
Virtual Reality and Viewpoints in Body Motion Based Drone Teleoperation). This
procedure, known as calibration or imitation phase, has been used in prior
works to identify the spontaneous motion patterns of users for the control of
different robots [22, 24, 28]. Some key metrics used to discriminate the
relevance of body motion features are their motion amplitude and their
correlation with the robot’s motion. Some studies have been dedicated solely
to the identification of common human motion patterns following this paradigm,
in the case of both anthropomorphic and non-anthropomorphic robotic systems
[28, 8]. Several interfaces have been proposed for the motion-based control of
drones, based on the use of different body parts such as hands, torso, or the
user’s full body [21, 29, 31]. These works show that BoMIs can outperform
standard interfaces, such as remote controllers, both in terms of performance
and in user’s personal preference.
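The two metrics mentioned above, motion amplitude and correlation with the robot's motion, can be computed as follows for a recorded body-motion feature and one robot command trace (a minimal sketch; the function names and interface are ours):

```python
import math

def pearson(a, b):
    """Correlation between a body-motion feature and one robot motion axis,
    sampled at the same time steps during the imitation phase."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def amplitude(a):
    """Peak-to-peak amplitude of a motion feature over the imitation phase."""
    return max(a) - min(a)
```

Features with both a large amplitude and a strong correlation with the robot trajectory are natural candidates for the mapping function of a personalized BoMI.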
Among the metrics used to qualify an HRI, presence is a subjective measure of
the feeling of ”being there” in the virtual or distal scene [36]. Different
examples in the literature support the hypothesis that increasing the sense of
presence of an operator can improve their performance in controlling the robot
[40, 20, 37]. The concept of presence can be split into three dimensions:
spatial presence, self-presence, and social presence[19]. In our study, we
focus on the first two as social presence requires the interaction with
different agents, which is not the case in most telerobotic missions. While
spatial presence is relative to the feeling of being surrounded by the virtual
environment, self-presence defines the shift of the user’s perception of self
from their own body into the virtual or distal one [17]. A strong sense of
self-presence can improve the operator’s sense of embodiment and give them the
sensation of being the robot, instead of merely controlling it. The sense of
self-presence and the concept of embodiment are thus closely related. Several
factors concur to enhancing the sense of presence in the user during
teleoperation, including the insulation from the real environment, and the
amount and type of provided multi-modal feedback [30].
The change of viewpoint can strongly alter the perception of a virtual
environment. Some works correlate the first-person view (1V) to a higher sense
of embodiment [35, 27]. Studies in the field of video games conclude that 1V
is also associated with higher performance in manipulation tasks, and in
general when an interaction with static objects is needed [39]. The same
research states that a third-person view (3V) can increase the user’s sense of
spatial presence in the environment, which translates into a higher capacity
of navigation and perception of the surrounding space. Nonetheless, this
advantage comes at the cost of a lower sense of embodiment. Gorisse et al.
studied the effects of different viewpoints (1V and 3V) on the sense of
presence and embodiment during the control of an anthropomorphic avatar [15].
Through a survey, they concluded that the viewpoint impact on spatial presence
is very limited. However, the first-person view positively and significantly
affects embodiment.
VR applications are supported by a Head-Mounted Display (HMD), a device that a
user wears on their head, which provides stereoscopic vision and can track the
user’s head and eyes to provide them with the ability to explore the virtual
environment (VE). Prior literature investigated the effects of viewpoint in
VEs as a comparison with standard flat displays, confirming the higher degree
of immersion in 1V [10]. Also, some studies show that large freedom of motion
positively influences the sense of presence [30, 34].
Despite the extensive work conducted on the link between VR, viewpoint, and
presence for the control of anthropomorphic robots and virtual avatars, few
studies have been conducted in the control of non-anthropomorphic robots,
which are the most common type of robotic systems. Research shows that humans
can identify themselves with agents presenting visual aspects and kinematics
different from the human body, provided that they present human-like motion
and move synchronously with the user [3, 2]. The same concept has been
extended to virtual supernumerary robotic arms [38].
In this paper, we study the effects of (a) different viewpoints and (b) the
use of VR on the teleoperation of a fixed-wing drone, as an example of a non-
anthropomorphic robot with non-human motion behaviors. Specifically, we
conducted a set of experiments to assess:
* •
The effects of viewpoint and VR on the user’s sense of spatial presence and
embodiment, when they perceive the environment and move as a fixed-wing drone.
* •
The effects of viewpoint and VR on the user’s spontaneous body motion when
they are asked to mimic the drone’s behavior with their body. We focus our
analysis on three dimensions of body motion: variability, correlation with the
robot’s movements, and gesture amplitude.
Figure 1: Overview of the experimental protocol. (A) Data acquisition scenario
during the imitation task. The participant is free to move spontaneously to
mimic the robot’s behavior during a set of predefined maneuvers. During this
phase, their body is tracked by a motion capture system, and synchronous data
are acquired for the drone’s trajectory and the user’s motion. (B)
Experimental conditions. A total of 30 participants took part in the
experiment. We assigned each participant to one group, corresponding to one of
the viewpoint conditions: first-person view (1V), third-person view (3V), or
ground-view (GV). Each subject took part in the imitation task with and
without the use of VR.
## 2 Methods
### 2.1 Simulation
The simulation environment used in this work is based on the Unity3D engine.
We used a robot model reproducing the dynamics of the commercial drone eBee by
SenseFly (https://www.sensefly.com). The drone’s attitude is stabilized
through a PID controller and the speed is regulated to a constant value of
$8m/s$. The simulation displays a sequence of 4 maneuvers performed by the
fixed-wing drone: two roll maneuvers (right, left), and two pitch maneuvers
(up, down), after a horizontal flight section. Each maneuver’s duration was
set to $T=8s$ to give the participant enough time to recognize the robot’s
behavior, for a total duration of $32s$ per experiment. We display a red path
in front of the drone to inform the participants of the drone’s future
trajectory (Fig. 1B). We considered 3 different conditions on the viewpoints:
a standard first-person view (1V) from the drone’s front camera, a third-
person view from behind the drone (3V), and a third-person view from a ground
observer (GV). While most teleoperation tasks are carried out in 1V and GV
condition, the inclusion of 3V allows us to decouple the effects of immersive
viewpoints (1V vs. 3V, GV) and the effects of the camera moving together with
the robot (1V, 3V vs. GV). The 3V behind the drone follows at a constant
distance and rotates with the drone. The three viewpoints are depicted in Fig.
1B. We performed the experiments with two types of visual displays. In the VR
condition, subjects used an Oculus Rift S HMD. In the non-VR condition,
subjects saw the scene on a computer monitor positioned at a distance of
$1.5m$. In total, there are 6 experimental conditions, hereafter referred to
by the labels listed in Table 1.
Table 1: Experimental conditions

| | non-VR | VR |
|---|---|---|
| First-person view (1V) | 1V-N | 1V-V |
| Third-person view (3V) | 3V-N | 3V-V |
| Ground view (GV) | GV-N | GV-V |
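The attitude stabilization described in this section can be sketched as a textbook PID loop. The gains, the class interface, and the scalar roll-error example below are illustrative assumptions; they do not reproduce the actual eBee controller used in the simulation.

```python
# Minimal PID controller sketch for attitude stabilization.
# Gains and structure are illustrative, not the actual eBee controller.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        # accumulate the error integral and differentiate numerically
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# e.g., a roll stabilizer driving the roll angle toward a setpoint at 100 Hz
roll_pid = PID(kp=2.0, ki=0.1, kd=0.05)
command = roll_pid.step(error=0.3, dt=0.01)
```

A fixed-wing drone would run one such loop per stabilized axis, with the speed held constant by a separate regulator.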
### 2.2 Apparatus
We ran the experimental sessions in a room equipped with an OptiTrack Motion
Capture (MoCap) System to track the participants’ body. Body motion tracking
was performed through a set of 25 reflective markers strapped on a velcro vest
worn by each subject. The subjects’ upper body was modeled as the
concatenation of 13 different rigid bodies interconnected by sphere joints, as
depicted in Fig. 1A. We consider only 9 rigid bodies for our study: torso,
shoulders, arms, forearms, and hands. This representation has already been
adopted in relevant previous studies and demonstrated to be sufficiently
powerful to derive personalized BoMIs for drone teleoperation [24, 22].
Moreover, since the representation is decoupled by construction, it avoids
redundancy that might affect the subsequent data analysis. The orientation of
each rigid body, expressed as a quaternion, was recorded at a frequency of
$100$Hz. During the experiments, we acquired synchronous data from both the
drone simulator and the MoCap. Encoded body pose and drone actions were
streamed through a UDP protocol and concatenated into a dataset for subsequent
analysis.
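The synchronous streaming step can be sketched as a fixed-layout packet that is encoded on the sender side and decoded into the dataset on the receiver side. The packet layout (`FMT`), field order, and helper names below are our assumptions, since the paper does not specify the wire format.

```python
import struct

# Hypothetical packet layout for one synchronized sample: a timestamp,
# one rigid-body orientation quaternion (w, x, y, z), and the drone's
# roll and pitch commands. The actual wire format is not specified in
# the paper; this only sketches the streaming idea.
FMT = "<d4f2f"  # little-endian: 1 double + 6 single-precision floats

def encode_sample(t, quat, roll, pitch):
    return struct.pack(FMT, t, *quat, roll, pitch)

def decode_sample(payload):
    t, qw, qx, qy, qz, roll, pitch = struct.unpack(FMT, payload)
    return t, (qw, qx, qy, qz), roll, pitch

# Samples arriving over UDP would be decoded and appended to the dataset:
dataset = [decode_sample(encode_sample(0.01, (1.0, 0.0, 0.0, 0.0), 0.1, -0.2))]
```

In practice the MoCap pose would carry one quaternion per tracked rigid body; a single quaternion is used here to keep the sketch short.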
### 2.3 Presence
The sense of presence was measured through the post-experimental questionnaire
shown in Table 2. The questionnaire consisted of seven questions, where each
item was given on a semantic scale from one to five (one corresponds to not at
all, five corresponds to completely). The questions were designed to
investigate two dimensions of presence:
* •
Embodiment, composed of three items representing two different embodiment
dimensions: self-location and ownership of the virtual body [17]. The sense of
agency was purposefully neglected since no actual teleoperation happens during
our experiment. Questions are adapted from previous literature [15, 10, 1].
* •
Spatial presence, composed of four items, refers to the sense of environmental
location and it is originally adapted from the MEC-SPQ test [41].
Table 2: Presence questionnaire composed of two blocks: Embodiment (E1-E3) and Spatial Presence (SP1-SP4).

| ID | Question |
|---|---|
| E1 | To what extent did you feel that you were located inside the virtual body? |
| E2 | To what extent did you feel that the virtual body was your own body? |
| E3 | To what extent did you forget your actual body in favor of the virtual body? |
| SP1 | To what extent did you feel that you were actually there in the virtual environment? |
| SP2 | To what extent did you feel that the objects in the virtual environment surrounded you? |
| SP3 | To what extent did it seem to you that your true location had shifted into the virtual environment? |
| SP4 | To what extent did you feel that you were physically present in the virtual environment? |
### 2.4 Participants
30 volunteers participated in our user study. All subjects reported no known
history of motion sickness or discomfort when using VR headsets, and had
normal or corrected-to-normal vision. The age of the participants varied from
20 to 31 years ($24.29\pm 2.78$) and 83% of participants were male. Informed
consent was obtained from everyone before the experiment, and the study was
conducted in adherence to standard ethical principles. The experiments were
approved by the Human Research Ethics Committee of the École Polytechnique
Fédérale de Lausanne.
### 2.5 Procedure
Every subject filled in a questionnaire about personal information before
participating in the experiment. Later, they were asked to sit on a stool in
front of the computer and were shown the simulation through the screen or the
VR headset depending on the group to which they were pseudo-randomly assigned.
During this procedure, they were asked to move spontaneously, as if they were
controlling the drone’s flight. Since the maneuvers were predefined, no
control was exerted on the simulator by the user.
Each participant took part in the calibration phase under two different
conditions, with and without VR, and was pseudo-randomly assigned a viewpoint
among the options above (1V, 3V, GV). The simulation was shown twice per
condition to each participant: a first one to get familiar with the scenario,
and a second one for the actual data acquisition. The order of the two
conditions (with/without VR) was determined pseudo-randomly to compensate for
possible carry-over bias from the previous run. At the end of the experiment,
each subject filled in the presence questionnaire shown in Table 2.
### 2.6 Motion data preprocessing
As mentioned in Section 2.2, the raw data correspond to a set of signals consisting of
time series of orientation information expressed in quaternions for the body
motion, and two additional signals representing the drone roll and pitch
angles during the experiment. The user’s upper body was modeled as a kinematic
chain consisting of 9 rigid bodies, interconnected through sphere joints (Fig.
1A). We first computed the relative orientation of each rigid body with
respect to its parent limb in the human kinematic chain. For example, the
shoulder rotation is expressed as the relative rotation with respect to the
torso, and the arm rotation with respect to the shoulder. Subsequently, the
initial rotation of each body segment was reset to zero to compensate for its
initial bias, and the orientations were converted from quaternions to Euler
angles. The order convention was chosen to minimize the risk of gimbal lock.
We filtered the Euler angles with a moving average (N = 100) low-pass filter
to mitigate the effects of quantization noise. Finally, 27 angle time series
(9 limbs × 3 angles) and 2 robot command time series (roll and pitch) were
used for our analysis.
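The preprocessing steps above can be sketched as follows. The w-first quaternion convention, the intrinsic ZYX Euler order, and the function names are our assumptions: the paper only states that the order was chosen to avoid gimbal lock.

```python
import math
from collections import deque

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def q_mul(a, b):
    # Hamilton product of two (w, x, y, z) quaternions
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def relative_rotation(parent, child):
    # orientation of a limb expressed in its parent's frame
    return q_mul(q_conj(parent), child)

def quat_to_euler(q):
    # intrinsic ZYX (yaw-pitch-roll): one choice that keeps the
    # gimbal-lock singularity (pitch = +-90 deg) away from the data
    w, x, y, z = q
    roll = math.atan2(2*(w*x + y*z), 1 - 2*(x*x + y*y))
    pitch = math.asin(max(-1.0, min(1.0, 2*(w*y - z*x))))
    yaw = math.atan2(2*(w*z + x*y), 1 - 2*(y*y + z*z))
    return roll, pitch, yaw

def moving_average(signal, n=100):
    # N = 100 low-pass filter, matching the window size in the paper
    window, total, out = deque(), 0.0, []
    for v in signal:
        window.append(v)
        total += v
        if len(window) > n:
            total -= window.popleft()
        out.append(total / len(window))
    return out
```

Subtracting the first sample of each Euler-angle series then zeroes the initial bias before filtering.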
Table 3: Mean and standard deviation of the responses to the presence survey, on a scale from 1 to 5.

| Condition | 1V-N | 3V-N | GV-N | 1V-V | 3V-V | GV-V |
|---|---|---|---|---|---|---|
| ID | 1N | 3N | GN | 1V | 3V | GV |
| E1 | $3.00\pm 0.94$ | $2.92\pm 1.19$ | $2.08\pm 1.12$ | $3.60\pm 0.84$ | $2.73\pm 1.27$ | $1.82\pm 1.08$ |
| E2 | $2.80\pm 0.92$ | $2.92\pm 1.04$ | $2.38\pm 1.19$ | $3.50\pm 1.08$ | $2.82\pm 0.87$ | $1.82\pm 0.98$ |
| E3 | $2.60\pm 1.07$ | $2.62\pm 1.12$ | $2.31\pm 1.18$ | $3.40\pm 1.07$ | $3.45\pm 1.37$ | $2.09\pm 1.04$ |
| SP1 | $2.40\pm 0.84$ | $3.00\pm 1.29$ | $2.85\pm 1.14$ | $3.10\pm 0.99$ | $4.00\pm 1.26$ | $3.45\pm 1.04$ |
| SP2 | $2.20\pm 0.79$ | $2.31\pm 1.03$ | $2.31\pm 1.11$ | $3.30\pm 0.67$ | $3.64\pm 1.12$ | $3.27\pm 1.01$ |
| SP3 | $2.20\pm 0.79$ | $2.85\pm 1.07$ | $2.62\pm 1.12$ | $3.30\pm 0.95$ | $3.91\pm 1.22$ | $3.00\pm 1.00$ |
| SP4 | $2.00\pm 0.82$ | $2.69\pm 1.18$ | $2.23\pm 1.17$ | $3.10\pm 0.74$ | $3.73\pm 1.35$ | $3.09\pm 1.38$ |
Table 4: Statistical significance of the responses to the presence survey.
$p_{1N,1V}$, $p_{3N,3V}$, and $p_{GN,GV}$ refer to the VR condition and are
evaluated with the Mann–Whitney U test. $p_{1N,3N}$, $p_{1N,GN}$, $p_{3N,GN}$,
$p_{1V,3V}$, $p_{1V,GV}$, and $p_{3V,GV}$ refer to the viewpoint condition,
for which we used the Wilcoxon Signed-Ranks test. Significant p-values
($p<0.05$) are shown in bold. The yellow and blue highlights mark the sections
relative to the most relevant results.

| VR Effect | $p_{1N,1V}$ | $p_{3N,3V}$ | $p_{GN,GV}$ |
|---|---|---|---|
| E1 | $0.084$ | $0.382$ | $0.297$ |
| E2 | $0.089$ | $0.463$ | $0.106$ |
| E3 | $0.052$ | $0.052$ | $0.357$ |
| SP1 | $0.064$ | **0.028** | $0.133$ |
| SP2 | **0.004** | **0.003** | **0.026** |
| SP3 | **0.010** | **0.011** | $0.199$ |
| SP4 | **0.005** | **0.028** | $0.065$ |

| Viewpoint Effect | non-VR: $p_{1N,3N}$ | $p_{1N,GN}$ | $p_{3N,GN}$ | VR: $p_{1V,3V}$ | $p_{1V,GV}$ | $p_{3V,GV}$ |
|---|---|---|---|---|---|---|
| E1 | $0.411$ | **0.028** | **0.043** | $0.064$ | **0.001** | **0.049** |
| E2 | $0.447$ | $0.129$ | $0.087$ | $0.085$ | **0.002** | **0.011** |
| E3 | $0.487$ | $0.272$ | $0.244$ | $0.341$ | **0.008** | **0.015** |
| SP1 | $0.117$ | $0.151$ | $0.374$ | **0.029** | $0.220$ | $0.092$ |
| SP2 | $0.383$ | $0.436$ | $0.500$ | $0.128$ | $0.485$ | $0.176$ |
| SP3 | $0.056$ | $0.209$ | $0.297$ | $0.067$ | $0.315$ | **0.022** |
| SP4 | $0.087$ | $0.397$ | $0.159$ | $0.072$ | $0.485$ | $0.133$ |
## 3 Results
In this section, we report on the experimental results regarding the presence
survey, and the users’ body motion analysis.
### 3.1 Presence Survey
As a preliminary validation, Cronbach’s alpha coefficient was calculated to
measure the reliability of the questionnaire across both dimensions for all of
the experimental conditions [4]. The coefficient was $>0.73$ for all cases,
thus indicating acceptable reliability of the questionnaire. The Shapiro–Wilk
test was carried out to check the normality of the distributions of the
answers to the post-experiment questionnaire [33]. As not all variables
followed a normal distribution, we used non-parametric tests: the Wilcoxon
Signed-Ranks test to compare paired groups, keeping the VR variable constant
(1V vs 3V vs GV) [42], and the Mann–Whitney U test for the independent groups,
keeping the viewpoint constant (VR vs non-VR) [23].
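The Cronbach reliability check mentioned above can be computed directly from the item responses. The sketch below assumes the responses are arranged as one list per questionnaire item, each holding one score per respondent; the function name is ours.

```python
from statistics import variance

def cronbach_alpha(items):
    # items: one list of respondent scores per questionnaire item,
    # all of equal length (one entry per respondent).
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    k = len(items)
    n = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))
```

Perfectly consistent items yield an alpha of 1; values above roughly 0.7, as reported here, are conventionally read as acceptable reliability.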
Table 3 shows the survey results across all conditions and Table 4 the
statistical significance of our findings. Each column refers to a pair of
conditions, with $p_{a,b}$ representing the p-value of the corresponding test
applied between condition $a$ and condition $b$.
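For the independent-group comparisons in Table 4, the Mann–Whitney U statistic reduces to a count over pairs. This is a minimal sketch: in practice a library routine such as `scipy.stats.mannwhitneyu` would be used, and the p-value step (exact tables or a normal approximation) is omitted here.

```python
def mann_whitney_u(x, y):
    # U statistic for sample x: the number of (xi, yj) pairs where
    # xi wins, with ties counted as half a win
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

# The smaller of U and len(x) * len(y) - U is then compared against
# critical values (or a normal approximation) to obtain the p-value.
```

With two identical samples the statistic sits at its null expectation, len(x) * len(y) / 2.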
VR affects Spatial Presence. In Table 4 (left) we show the survey results
across the VR condition. The p-values are calculated for each viewpoint
varying the VR condition using the Mann–Whitney U test. From E1-E3 in Table 4
(left), we can observe that no significant difference was present between the
VR groups and the non-VR groups ($p_{1N,1V},p_{3N,3V},p_{GN,GV}>0.05$). We
cannot thus conclude that the use of VR affects the sense of embodiment.
Instead, SP1-SP4 show significant differences for almost all cases for both 1V
and 3V. The first-person view group reported higher values of spatial presence
when using VR ($SP_{1V-N}=2.20\pm 0.78$, $SP_{1V-V}=3.20\pm 0.81$). Also for
third-person view the use of VR correlated with a higher sense of spatial
presence ($SP_{3V-N}=2.71\pm 1.13$, $SP_{3V-V}=3.82\pm 1.19$). This result
suggests a higher sense of spatial presence for 1V-V compared to 1V-N and of
3V-V compared to 3V-N. On average, the sense of spatial presence increased by
$45\%$ for 1V and $40\%$ for 3V when using VR. No significant effects were
observed for the GV case. These results are highlighted in light blue in Table
4 (left) and summarized in Fig. 2.
Figure 2: Survey results relative to the Spatial Presence dimension and
average responses (Table 4, blue highlight). (${}^{**}p<0.01$, ${}^{*}p<0.05$)
Figure 3: Survey results relative to the Embodiment dimension and average
responses (Table 4, yellow highlight). (${}^{**}p<0.01$, ${}^{*}p<0.05$)
Figure 4: Observed motion variability during the imitation task. (A)
Concatenated motion data timeseries. In black, the median value of the
corresponding angle. (B) Scatter plot of the results of a two-dimensional PCA
compression on the whole motion timeseries. Group GV exhibits a higher intra-
group variability, while data belonging to groups 1V and 3V are more densely
aggregated. (C) Motion variability metric expressed as MSE from the median
value. Both GV-N and GV-V values are significantly higher compared to the
other viewpoint conditions. (${}^{**}p<0.01$)
Viewpoint affects Embodiment. In Table 4 (right) we show the survey results
across the viewpoint conditions. The p-values are calculated between each pair
of viewpoints using the Wilcoxon Signed-Ranks Test. As a first observation, we
can see that no significant effects are observable for any of the questions
SP1-SP4, meaning that we cannot observe a significant correlation between the
viewpoint and spatial presence. Secondly, no significant effects are
observable in the non-VR case. On the other hand, there is a clear trend in
the embodiment perception when changing the viewpoint while using VR: both
$p_{1V,GV}$ and $p_{3V,GV}$ are significant in E1-E3. On average, the GV group
reported a lower sense of embodiment ($E_{GV-V}=1.91\pm 1.00$) than the 1V
group ($E_{1V-V}=3.50\pm 0.96$, $p<0.01$) and the 3V group
($E_{3V-V}=3.00\pm 1.18$, $p<0.01$). The sense of embodiment was higher by
$83\%$ for 1V and $57\%$ for 3V with respect to GV, but only when using VR.
These results are highlighted in
light yellow in Table 4 (right) and summarized in Fig. 3.
### 3.2 Body Motion Analysis
The second part of the data analysis was dedicated to the body motion data.
These data were analyzed to extract relevant analogies and differences between
the spontaneous body motion patterns during the calibration phase. Due to the
limited number of subjects per condition (N=10), we chose non-parametric
methods to assess the significance of our results. For motion data, we used
the Kruskal-Wallis test to assess the equality of the medians of different
groups [18]. We focus on three aspects of motion, which are relevant for
motion-based teleoperation: motion variability, human-robot motion
correlation, and gesture amplitude of different body segments.
Spontaneous motion displays higher variability in Ground View. First, we
analyzed the intra-group motion variability. We concatenated motion data for
each subject to a single timeseries containing the motion of all body segments
(Fig. 4A). To quantify the motion difference between subjects, the datasets
were compressed to two-dimensional data using Principal Component Analysis
(PCA) decomposition (Fig. 4B). PCA emphasizes the data covariance across
principal axes, revealing a main cluster of motion behaviors containing most
of the 1V and 3V participants. In contrast, the GV participants’ motion is
scattered further from the cluster center. Considering the centroid of the
cluster formed by the 1V and 3V participants, the average Euclidean distance of
the distribution is lower for 1V ($d_{1V}=0.83\pm 0.60$) and 3V
($d_{3V}=1.17\pm 1.16$) than for GV ($d_{GV}=7.00\pm 6.93$,
$p_{1V,GV},p_{3V,GV}<0.01$).
We then computed the median value of the concatenated features (Fig. 4A). We
computed the average MSE across all subjects in each group as a measure of
intra-group motion variability (Fig. 4C). Our results confirm the
aforementioned observation: participants in GV condition moved more
differently from each other, while groups 1V and 3V show a significantly lower
motion variability, and thus a higher agreement with each other. Specifically,
the variability of group GV-N ($variab_{GV-N}=1.01\pm 0.46$) was higher than
the variability of groups 1V-N ($variab_{1V-N}=0.30\pm 0.11$, $p<0.01$) and
3V-N ($variab_{3V-N}=0.33\pm 0.13$, $p<0.01$). Similar results were observed
comparing GV-V ($variab_{GV-V}=0.78\pm 0.52$) with 1V-V
($variab_{1V-V}=0.25\pm 0.12$, $p<0.01$) and 3V-V ($variab_{3V-V}=0.33\pm
0.14$, $p<0.01$).
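The intra-group variability metric described above, the average MSE from the per-timestep group median, can be sketched as follows; the function and variable names are ours.

```python
from statistics import median

def group_variability(trajectories):
    # trajectories: one concatenated angle time series per subject,
    # all of equal length. Returns the group's average MSE from the
    # per-timestep median trajectory.
    T = len(trajectories[0])
    med = [median(tr[t] for tr in trajectories) for t in range(T)]
    mses = [sum((tr[t] - med[t]) ** 2 for t in range(T)) / T
            for tr in trajectories]
    return sum(mses) / len(mses)
```

Identical trajectories give zero variability; the more subjects deviate from the group median, the larger the score, matching the GV-versus-1V/3V comparison in Fig. 4C.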
Human-robot motion correlation is higher in First-Person View and Third-Person
View. Secondly, we considered the correlation between the user’s movements and
the drone’s inputs, as a measure of similarity of the two datasets (Fig. 5).
We measured the correlation of a body segment with a robot command through the
Pearson’s correlation coefficient on the sample distribution. We define the
correlation of the Euler Angle related to a body segment with the robot motion
as the sum of its absolute correlations with the drone roll and pitch.
Additionally, we define the correlation of a body segment with the robot
motion as the maximum correlation value among all the associated Euler angles.
Finally, we compute the mean across all body segments to evaluate the
correlation score of the whole dataset. We observed that group GV-N showed a
lower correlation ($corr_{GV-N}=0.54\pm 0.15$) than 1V-N ($corr_{1V-N}=0.66\pm
0.07$, $p=0.034$) and 3V-N ($corr_{3V-N}=0.68\pm 0.08$, $p=0.033$). Similarly,
group GV-V showed a lower correlation ($corr_{GV-V}=0.52\pm 0.13$) than 1V-V
($corr_{1V-V}=0.70\pm 0.10$, $p<0.01$) and 3V-V ($corr_{3V-V}=0.65\pm 0.04$,
$p=0.047$).
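The per-segment correlation score defined above can be sketched as follows; the function names are ours, and a library routine such as `scipy.stats.pearsonr` would normally replace the hand-rolled coefficient.

```python
def pearson(a, b):
    # sample Pearson correlation coefficient of two equal-length series
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def angle_correlation(angle, roll, pitch):
    # correlation of one Euler angle with the robot motion:
    # sum of its absolute correlations with the drone roll and pitch
    return abs(pearson(angle, roll)) + abs(pearson(angle, pitch))

def segment_correlation(angles, roll, pitch):
    # a body segment's score is the best among its three Euler angles
    return max(angle_correlation(a, roll, pitch) for a in angles)
```

The whole-dataset score is then the mean of `segment_correlation` over all nine body segments.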
Figure 5: Correlation of human body motion and drone motion.
(A) Drone roll superimposed on the torso roll of different subjects.
(B) Correlation score between human and robot motion. Both GV-N and GV-V
conditions correspond to a lower correlation score. (${}^{**}p<0.01$,
${}^{*}p<0.05$)
VR and viewpoint affect motion amplitude in different ways. Finally, we
evaluated the differences in the gesture amplitude for the various body
segments. We define Amount of Motion (AoM) of a body segment as the mean value
of the norm of its angular velocity vector. Our data indicate that the AoM
significantly varies across conditions (Fig. 6). We observed two different
effects of the viewpoint between VR and non-VR groups. A first observation is
that no significant VR effects are observable, except for group 1V-V. Group
1V-V employed smaller body gestures ($AoM_{1V-V}=0.46\pm 0.15$) than group
1V-N ($AoM_{1V-N}=0.68\pm 0.16$, $p<0.01$). Additionally, Group 1V-V moved
significantly less than groups 3V-V ($AoM_{3V-V}=0.77\pm 0.26$, $p<0.01$) and
GV-V ($AoM_{GV-V}=0.85\pm 0.62$, $p<0.01$), suggesting that, in immersive
conditions, users tend to move more when the robot is in their field of view.
Such an effect is not observable in VR-disabled experiments.
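The Amount of Motion metric can be sketched by approximating the angular velocity with finite differences of the Euler-angle series (an approximation we assume; the paper does not state how the velocity was obtained). The 100 Hz sampling rate gives dt = 0.01 s.

```python
def amount_of_motion(euler_series, dt=0.01):
    # euler_series: list of (rx, ry, rz) samples at 100 Hz. The angular
    # velocity is approximated by finite differences of the Euler-angle
    # rates, and the AoM is the mean norm of that velocity vector.
    norms = []
    for a, b in zip(euler_series, euler_series[1:]):
        w = [(b[i] - a[i]) / dt for i in range(3)]
        norms.append(sum(v * v for v in w) ** 0.5)
    return sum(norms) / len(norms)
```

A motionless segment scores zero, and larger, faster gestures raise the score, so comparing AoM across groups captures the gesture-amplitude differences in Fig. 6.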
Figure 6: Amount of Motion score during the imitation task. 1V-V condition
corresponds to smaller body gestures compared to both 1V-N and the other VR
viewpoints, 3V-V and GV-V. (${}^{**}p<0.01$, ${}^{*}p<0.05$)
## 4 Discussion
In this study, we investigated how the use of VR and the viewpoint change
affect two central aspects of motion-based teleoperation: the sense of
presence and the user’s spontaneous body motion. We considered a fixed-wing
drone as an example of a non-anthropomorphic robot with non-human motion
patterns. We ran a user study (N=30) on an imitation task for motion pattern
identification for motion-based telerobotics systems (Fig. 1), considering two
experimental variables: the use of VR and the viewpoint (first-person: 1V;
third-person: 3V; ground: GV), for a total of six conditions. Here, we discuss
our main findings.
Our experiments produced two main sets of results. First, we observed that,
for a given viewpoint, the use of VR can increase the sense of spatial
presence. VR-enabled experiments increased the user’s sense of spatial
presence for both 1V and 3V viewpoints up to $45\%$ (Table 3, Table 4, Fig.
2). In comparison, the viewpoint perspective does not play such an important
role on the sense of spatial presence. However, VR significantly increases
spatial presence only in 1V and 3V perspectives.
Furthermore, our data suggest that the viewpoint affects the user’s sense of
embodiment in VR-enabled experiments. 1V and 3V conditions provided a higher
sense of embodiment (higher by $83\%$ and $57\%$, respectively) compared with
GV (Table 3, Table 4, Fig. 3). The same effect was not observed for VR-disabled
experimental conditions. These results are in agreement with prior literature
[15], and show that they hold also for non-anthropomorphic robots with non-
human motion patterns.
In summary, our study shows that the sense of presence in virtual environments
is not exclusively limited to anthropomorphic characters with human-like
motion, but can be elicited during the operation of other types of robots,
such as the fixed-wing drone used here. VR and viewpoint appear to affect
different dimensions of teleoperation: while the first correlates with the
sense of spatial presence, the second mainly impacts the sense of embodiment.
Moreover, the embodiment dimension seems to be related to a camera motion
coherent with the robot’s motion (1V, 3V) more than to an immersive point of
view (only 1V). These effects do not apply to all conditions of VR and
viewpoint: no change in the sense of spatial presence was observed in ground
view, and no change in the sense of embodiment was observed without the use of
VR.
Our second result concerns the human motion during a robot imitation task. We
observed that human motion is mostly affected by the viewpoint perspective:
while 1V and 3V groups presented similar intra-group motion patterns, subjects
in group GV displayed much higher variability (Fig. 4) both in VR-enabled
($+223\%$) and VR-disabled conditions ($+169\%$). Since very different motion
patterns cannot be recognized by a predefined BoMI, this result implies a
higher need for personalization in applications where the teleoperation of the
robot must take place with a ground view.
We also found a significant effect of the viewpoint on the correlation between
the participants’ and the robot’s motion. Specifically, condition GV reduces
motion correlation by up to $26\%$ in VR-enabled and by $35\%$ in VR-disabled
experiments (Fig. 5). This result can be explained by the greater difficulty
of understanding the robot’s behavior in GV, since the drone is further away from
the user and the perspective is not aligned with their view. As human-robot
motion correlation is a desirable feature for the definition of linear mapping
functions, this result suggests that nonlinear mappings could be more
effective for the definition of a BoMI for third-person view teleoperation.
Finally, gesture amplitude was affected in different ways by the viewpoint
depending on the VR condition. In VR-disabled experiments, between-groups
motion amplitude did not change significantly. Comparing with the VR-enabled
condition, we observed that group 1V-V moved $32\%$ less than group 1V-N (Fig.
6). As it has been demonstrated that body motion is one of the main
contributors to subjective presence in virtual environments [34], it would be
reasonable to expect that immersive experiences are associated with body
movements with higher amplitude. Our results could be explained by the nature
of our task: while previous literature focuses on active motion, in our study
the subjects were passively following a trajectory, being able to rotate the
camera but not to change the robot’s motion. Consequently, group 1V saw only a
moving camera, without the possibility of controlling its motion; this may have
reduced the user’s involvement in the task, and thus the amplitude of
the body movements. Also, group 1V-V moved their body less than groups 3V-V
and GV-V by $40\%$ and $46\%$, respectively. This result suggests that having
the robot in the user’s field of view encourages users to move more than using
an immersive viewpoint, possibly due to the aforementioned reasons.
In summary, we observed that the viewpoint, and particularly the GV
perspective, translates into both a higher motion variability between subjects
and a lower user-robot motion correlation, suggesting that in this condition
the implementation of a motion-based BoMI for the control of the drone would
require both a high level of personalization and nonlinear mapping methods.
These results correlate with our findings regarding the sense of presence and,
particularly, the dimension of embodiment: according to our findings, the use
of a GV viewpoint translates into both a lower sense of embodiment (in VR) and
a more varied set of motion patterns across participants. Moreover, GV also
corresponds to a lower correlation between human and robot motion. Further, we
observed a lower motion amplitude for 1V-V, compared to 1V-N, 3V-V, and GV-V.
This effect might be related to the nature of our task.
Although our tests showed clear effects of VR and viewpoint over the sense of
presence and the user’s spontaneous body motion in teleoperation, several
aspects should be investigated in future research. First, the study of
different robotic systems with different types of motion patterns would be
needed to assess whether our results extend to other platforms. This extension
to different and more complex robots is left for future work and would assess
the transferability of the method. Second, the study of active teleoperation
tasks, in addition to
effect on the sense of presence and thus on body motion. However, this step
will require a significant advancement in state-of-the-art methods for the
automatic definition of motion-based HRIs, as implementing HRIs from
arbitrary body motion can be challenging on such variable motion datasets.
Finally, our study took into account a limited set of motion variables
consisting of body joint angles. Although this choice is backed by relevant
literature [24, 29], the observation of different kinematic variables (e.g.,
the position of the center of mass of the body segments) could unveil further
results.
## 5 Conclusions
In this paper, we show new results explaining the effects of VR and viewpoint
on the user’s sense of presence and their spontaneous motion in teleoperation
tasks. Our findings provide new insights on presence for teleoperation in the
case of a non-anthropomorphic robot with non-human motion, such as fixed-wing
drones. Additionally, we show that the users’ motion patterns when mimicking
the robot’s motion are affected by VR and viewpoint conditions. Our work
suggests preferred experimental conditions for the definition of personalized
body-machine interfaces for such machines. Since personalized interfaces have
been demonstrated to be more effective than generic ones for the control of
fixed-wing drones, the application of these results could facilitate the
design and the implementation of new motion-based telerobotic systems.
## 6 Acknowledgements
This work was partially funded by the European Union’s Horizon 2020 research
and innovation programme under grant agreement ID: 871479 AERIAL-CORE, the
Swiss National Science Foundation (SNSF) with grant number 200021-155907, and
the National Centre of Competence in Research (NCCR) Robotics.
## References
* [1] F. Argelaguet, L. Hoyet, M. Trico, and A. Lecuyer. The role of interaction in virtual embodiment: Effects of the virtual hand representation. In 2016 IEEE Virtual Reality (VR), pp. 3–10, 2016. doi: 10.1109/VR.2016.7504682
* [2] L. Aymerich-Franch. Can we identify with a block? Identification with non-anthropomorphic avatars in virtual reality games. In Proceedings of the International Society for Presence Research (ISPR), 2012.
* [3] L. Aymerich-Franch, D. Petit, G. Ganesh, and A. Kheddar. Non-human looking robot arms induce illusion of embodiment. International Journal of Social Robotics, 9(4):479–490, 2017. doi: 10.1007/s12369-017-0397-8
* [4] J. M. Bland and D. G. Altman. Statistics notes: Cronbach's alpha. BMJ, p. 572, 1997.
* [5] J. Bodner, H. Wykypiel, G. Wetscher, and T. Schmid. First experiences with the da Vinci™ operating robot in thoracic surgery. European Journal of Cardio-Thoracic Surgery, 25:844–851, 2004. doi: 10.1016/j.ejcts.2004.02.001
* [6] M. Casadio, R. Ranganathan, and F. A. Mussa-Ivaldi. The body-machine interface: A new perspective on an old theme. Journal of Motor Behavior, 44:419–433, 2012. doi: 10.1080/00222895.2012.700968
* [7] J. Casper and R. Murphy. Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center. IEEE Transactions on Systems, Man, and Cybernetics, 33:367–385, 2003. doi: 10.1109/TSMCB.2003.811794
* [8] J. R. Cauchard, J. L. E, K. Y. Zhai, and J. A. Landay. Drone & me: An exploration into natural human-drone interaction. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing - UbiComp '15, pp. 361–365, 2015. doi: 10.1145/2750858.2805823
* [9] J. Y. C. Chen, E. C. Haas, and M. J. Barnes. Human performance issues and user interface design for teleoperated robots. IEEE Transactions on Systems, Man, and Cybernetics, 37:1231–1245, 2007. doi: 10.1109/TSMCC.2007.905819
* [10] H. G. Debarba, E. Molla, B. Herbelin, and R. Boulic. Characterizing embodied interaction in first and third person perspective viewpoints. In 2015 IEEE Symposium on 3D User Interfaces (3DUI), pp. 67–72, 2015. doi: 10.1109/3DUI.2015.7131728
* [11] V. Delafontaine, F. Schiano, G. Cocco, A. Rusu, and D. Floreano. Drone-aided localization in LoRa IoT networks. In 2020 IEEE International Conference on Robotics and Automation, pp. 286–292, 2020. doi: 10.1109/ICRA40945.2020.9196869
* [12] M. A. Diftler, J. S. Mehling, M. E. Abdallah, N. A. Radford, L. B. Bridgwater, A. M. Sanders, R. S. Askew, D. M. Linn, J. D. Yamokoski, F. A. Permenter, B. K. Hargrave, R. Platt, R. T. Savely, and R. O. Ambrose. Robonaut 2 - the first humanoid robot in space. In 2011 IEEE International Conference on Robotics and Automation, pp. 2178–2183, 2011. doi: 10.1109/ICRA.2011.5979830
* [13] D. Floreano and R. J. Wood. Science, technology and the future of small autonomous drones. Nature, 521:460–466, 2015. doi: 10.1038/nature14542
* [14] T. Gibo. The shared control committee [society news]. IEEE Systems, Man, and Cybernetics Magazine, 2:51–55, 2016. doi: 10.1109/MSMC.2016.2557494
* [15] G. Gorisse, O. Christmann, E. A. Amato, and S. Richir. First- and third-person perspectives in immersive virtual environments: Presence and performance analysis of embodied users. Frontiers in Robotics and AI, 4, 2017. doi: 10.3389/frobt.2017.00033
* [16] O. Khatib, X. Yeh, G. Brantner, B. Soe, B. Kim, S. Ganguly, H. Stuart, S. Wang, M. Cutkosky, A. Edsinger, P. Mullins, M. Barham, C. R. Voolstra, K. N. Salama, M. L'Hour, and V. Creuze. Ocean One: A robotic avatar for oceanic discovery. IEEE Robotics & Automation Magazine, 23:20–29, 2016. doi: 10.1109/MRA.2016.2613281
* [17] K. Kilteni, R. Groten, and M. Slater. The sense of embodiment in virtual reality. Presence: Teleoperators and Virtual Environments, 21:373–387, 2012. doi: 10.1162/PRES_a_00124
* [18] W. H. Kruskal and W. A. Wallis. Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association, 47:583–621, 1952. doi: 10.2307/2280779
* [19] K. M. Lee. Presence, explicated. Communication Theory, 14(1):27–50, 2004. doi: 10.1111/j.1468-2885.2004.tb00302.x
* [20] R. Ma and D. B. Kaber. Presence, workload and performance effects of synthetic environment design factors. International Journal of Human-Computer Studies, 64:541–552, 2006. doi: 10.1016/j.ijhcs.2005.12.003
* [21] M. Macchini, T. Havy, A. Weber, F. Schiano, and D. Floreano. Hand-worn haptic interface for drone teleoperation. In 2020 IEEE International Conference on Robotics and Automation, 2020. doi: 10.1109/ICRA40945.2020.9196664
* [22] M. Macchini, F. Schiano, and D. Floreano. Personalized telerobotics by fast machine learning of body-machine interfaces. IEEE Robotics and Automation Letters, 5:179–186, 2020. doi: 10.1109/LRA.2019.2950816
* [23] H. B. Mann and D. R. Whitney. On a test of whether one of two random variables is stochastically larger than the other. Annals of Mathematical Statistics, 18:50–60, 1947. doi: 10 . 1214/aoms/1177730491
* [24] J. Miehlbradt, A. Cherpillod, S. Mintchev, M. Coscia, F. Artoni, D. Floreano, and S. Micera. Data-driven body–machine interface for the accurate control of drones. Proceedings of the National Academy of Sciences of the United States of America, 115:7913–7918, 2018. doi: 10 . 1073/pnas . 1718648115
* [25] R. R. Murphy, S. Tadokoro, D. Nardi, A. Jacoff, P. Fiorini, H. Choset, and A. M. Erkmen. Search and rescue robotics. In Springer Handbook of Robotics, p. 23. 2008.
* [26] J. M. Peschel and R. R. Murphy. On the human–machine interaction of unmanned aerial system mission specialists. IEEE Transactions on Human-Machine Systems, 43:53–62, 2013. doi: 10 . 1109/TSMCC . 2012 . 2220133
* [27] V. I. Petkova, M. Khoshnevis, and H. H. Ehrsson. The Perspective Matters! Multisensory Integration in Ego-Centric Reference Frames Determines Full-Body Ownership. Frontiers in Psychology, 2, 2011. doi: 10 . 3389/fpsyg . 2011 . 00035
* [28] R. M. Pierce and K. J. Kuchenbecker. A data-driven method for determining natural human-robot motion mappings in teleoperation. In 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), pp. 169–176, 2012. doi: 10 . 1109/BioRob . 2012 . 6290927
* [29] C. Rognon, S. Mintchev, F. DellAgnola, A. Cherpillod, D. Atienza, and D. Floreano. FlyJacket: An upper body soft exoskeleton for immersive drone control. IEEE Robotics and Automation Letters, 3:2362–2369, 2018. doi: 10 . 1109/LRA . 2018 . 2810955
* [30] Sanchez-Vives, Maria and Slater, Mel. From presence to consciousness through virtual reality. Nature Reviews Neuroscience, 6:332–339, 2005. doi: 10 . 1038/nrn1651
* [31] A. Sanna, F. Lamberti, G. Paravati, and F. Manuri. A kinect-based natural interface for quadrotor control. Entertainment Computing, 4:179–186, 2013. doi: 10 . 1016/j . entcom . 2013 . 01 . 001
* [32] SESAR. European drones outlook study, 2016.
* [33] S. S. Shapiro and M. B. Wilk. An analysis of variance test for normality (complete samples). p. 22, 2020.
* [34] M. Slater, J. McCarthy, and F. Maringelli. The influence of body movement on subjective presence in virtual environments. Human Factors, 40:469–477, 1998. doi: 10 . 1518/001872098779591368
* [35] M. Slater, B. Spanlang, M. V. Sanchez-Vives, and O. Blanke. First person experience of body transfer in virtual reality. PLoS ONE, 5:e10564, 2010. doi: 10 . 1371/journal . pone . 0010564
* [36] M. Slater, M. Usoh, and A. Steed. Depth of presence in virtual environments. Presence: Teleoperators and Virtual Environments, 3:130–144, 1994\. doi: 10 . 1162/pres . 1994 . 3 . 2 . 130
* [37] M. S. Song. Flight of mind: Sensorimotor and multisensory embodiment with aviation robotics, flight simulator, and virtual reality. PHD Thesis, EPFL, Lausanne, 2020.
* [38] R. Takizawa, A. Verhulst, K. Seaborn, M. Fukuoka, A. Hiyama, M. Kitazaki, M. Inami, and M. Sugimoto. Exploring perspective dependency in a shared body with virtual supernumerary robotic arms. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), pp. 25–257. IEEE. doi: 10 . 1109/AIVR46125 . 2019 . 00014
* [39] L. N. Taylor. Video games: perspective, point-of-view, and immersion. Unpublished master’s thesis, University of Florida, Gainesville, FL., p. 44.
* [40] A. Toet, I. A. Kuling, B. N. Krom, and J. B. F. van Erp. Toward enhanced teleoperation through embodiment. Frontiers in Robotics and AI, 7:14, 2020. doi: 10 . 3389/frobt . 2020 . 00014
* [41] P. Vorderer, W. Wirth, F. R. Gouveia, F. Biocca, T. Saari, L. Jäncke, S. Böcking, H. Schramm, A. Gysbers, T. Hartmann, C. Klimmt, N. Ravaja, A. Sacau, T. Baumgartner, and P. Jäncke. MEC spatial presence questionnaire (MEC-SPQ). 2004\.
* [42] F. Wilcoxon. Individual comparisons by ranking methods. In Breakthroughs in Statistics, Springer Series in Statistics (Perspectives in Statistics). 1992.
|
# SA-Net: Shuffle Attention for Deep Convolutional Neural Networks
###### Abstract
Attention mechanisms, which enable a neural network to accurately focus on all
the relevant elements of the input, have become an essential component to
improve the performance of deep neural networks. There are mainly two
attention mechanisms widely used in computer vision studies, spatial attention
and channel attention, which aim to capture the pixel-level pairwise
relationship and channel dependency, respectively. Although fusing them
together may achieve better performance than their individual implementations,
it will inevitably increase the computational overhead. In this paper, we
propose an efficient Shuffle Attention (SA) module to address this issue,
which adopts Shuffle Units to combine two types of attention mechanisms
effectively. Specifically, SA first groups channel dimensions into multiple
sub-features before processing them in parallel. Then, for each sub-feature,
SA utilizes a Shuffle Unit to depict feature dependencies in both spatial and
channel dimensions. After that, all sub-features are aggregated and a “channel
shuffle” operator is adopted to enable information communication between
different sub-features. The proposed SA module is efficient yet effective,
e.g., the parameters and computations of SA against the backbone ResNet50 are
300 vs. 25.56M and 2.76e-3 GFLOPs vs. 4.12 GFLOPs, respectively, and the
performance boost is more than 1.34% in terms of Top-1 accuracy. Extensive
experimental results on commonly used benchmarks, including ImageNet-1k for
classification and MS COCO for object detection and instance segmentation,
demonstrate that the proposed SA outperforms the current SOTA methods
significantly by achieving higher accuracy while having lower model
complexity. The code and models are available at
https://github.com/wofmanaf/SA-Net.
Index Terms— spatial attention, channel attention, channel shuffle, grouped
features
## 1 Introduction
Fig. 1: Comparisons of recent SOTA attention models on ImageNet-1k,
including SENet, CBAM, ECA-Net, SGE-Net, and SA-Net, using ResNets as
backbones, in terms of accuracy, network parameters, and GFLOPs. The size of
each circle indicates its GFLOPs. Clearly, the proposed SA-Net achieves higher
accuracy with lower model complexity.
Attention mechanisms have attracted increasing interest in the research
community, since they improve representations of interest by focusing on
essential features while suppressing unnecessary ones [1, 2, 3, 4]. Recent
studies show that correctly incorporating attention mechanisms into
convolution blocks can significantly improve the performance of a broad range
of computer vision tasks, e.g., image classification, object detection, and
instance segmentation.
There are mainly two types of attention mechanisms most commonly used in
computer vision: channel attention and spatial attention, both of which
strengthen the original features by aggregating the same feature from all the
positions with different aggregation strategies, transformations, and
strengthening functions [5, 6, 7, 8, 9]. Based on these observations, some
studies, including GCNet [1] and CBAM [10], integrated spatial and channel
attention into one module, achieving significant improvements [10, 4].
However, such designs generally suffer from either convergence difficulties or
heavy computational burdens. Other works simplified the structure of channel
or spatial attention [1, 11]. For example, ECA-Net [11] simplifies the
computation of channel weights in the SE block by using a 1-D convolution, and
SGE [12] groups the channel dimension into multiple sub-features representing
different semantics and applies a spatial attention mechanism to each feature
group by scaling the feature vectors over all locations with an attention
mask. However, these methods do not take full advantage of the correlation
between spatial and channel attention, making them less efficient. This raises
the question: can one fuse different attention modules in a lighter but more
efficient way?
To answer this question, we first revisit the unit of ShuffleNet v2 [13],
which efficiently constructs a multi-branch structure and processes the
branches in parallel. Specifically, at the beginning of each unit, the input
with $c$ feature channels is split into two branches with $c-c^{\prime}$ and
$c^{\prime}$ channels. Several convolution layers are then applied to capture
a higher-level representation of the input. After these convolutions, the two
branches are concatenated to restore the original number of channels, and
finally the “channel shuffle” operator (defined in [14]) is applied to enable
information communication between the two branches. In addition, to increase
calculation speed, SGE [12] introduces a grouping strategy, which divides the
input feature map into groups along the channel dimension so that all
sub-features can be enhanced in parallel.
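The “channel shuffle” operator referenced above is mechanically simple. The following NumPy sketch (illustrative only, not the official implementation) interleaves channels by reshaping the channel axis into (groups, channels per group) and transposing:

```python
import numpy as np

def channel_shuffle(x: np.ndarray, groups: int) -> np.ndarray:
    """Interleave channels across groups via reshape -> transpose -> flatten.

    x: feature map of shape (C, H, W); C must be divisible by `groups`.
    """
    c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(groups, c // groups, h, w)
    x = x.transpose(1, 0, 2, 3)  # swap the group and per-group channel axes
    return x.reshape(c, h, w)
```

With $C=4$ channels ordered [0, 1, 2, 3] and two groups, the output channel order is [0, 2, 1, 3], so each output group now mixes channels from both input groups.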
Based on the above observations, this paper proposes a lighter but more
efficient Shuffle Attention (SA) module for deep Convolutional Neural
Networks (CNNs), which groups the channel dimension into sub-features. For
each sub-feature, SA adopts a Shuffle Unit to construct channel attention and
spatial attention simultaneously. For each attention module, we design an
attention mask over all positions to suppress possible noise and highlight the
correct semantic feature regions. Experimental results on ImageNet-1k (shown
in Figure 1) demonstrate that this simple but effective module achieves higher
accuracy than current state-of-the-art methods while containing fewer
parameters.
The key contributions of this paper are summarized as follows: 1) We introduce
a lightweight yet effective attention module, SA, for deep CNNs, which groups
the channel dimension into multiple sub-features and then utilizes a Shuffle
Unit to integrate complementary channel and spatial attention for each
sub-feature. 2) Extensive experimental results on ImageNet-1k and MS COCO
demonstrate that the proposed SA has lower model complexity than
state-of-the-art attention approaches while achieving outstanding performance.
## 2 Related Work
Multi-branch architectures. Multi-branch architectures of CNNs have evolved
for years and are becoming more accurate and faster. The principle behind
multi-branch architectures is “split-transform-merge”, which eases the
difficulty of training networks with hundreds of layers. The InceptionNet
series [15, 16] are successful multi-branch architectures of which each branch
is carefully configured with customized kernel filters, in order to aggregate
more informative and multifarious features. ResNets [17] can also be viewed as
two-branch networks, in which one branch is the identity mapping. SKNets [2]
and ShuffleNet families [13] both followed the idea of InceptionNets with
various filters for multiple branches while differing in at least two
important aspects. SKNets utilized an adaptive selection mechanism to realize
adaptive receptive field size of neurons. ShuffleNets further merged “channel
split” and “channel shuffle” operators into a single element-wise operation to
make a trade-off between speed and accuracy.
Grouped Features. Learning features in groups dates back to AlexNet [18],
whose motivation was to distribute the model over more GPU resources. Deep Roots
examined AlexNet and pointed out that convolution groups can learn better
feature representations. The MobileNets [19, 20] and ShuffleNets [13] treated
each channel as a group, and modeled the spatial relationships within these
groups. CapsuleNets [21, 22] modeled each grouped neuron as a capsule, in
which the neuron activity in an active capsule represents various attributes
of a particular entity in the image. SGE [12] extended this idea and divided
the channel dimension into multiple sub-features to learn different semantics.
Attention mechanisms. The significance of attention has been studied
extensively in the previous literature. It biases the allocation of the most
informative feature expressions while suppressing the less useful ones. The
self-attention method calculates the context in one position as a weighted sum
of all the positions in an image. SE [23] modeled channel-wise relationships
using two FC layers. ECA-Net [11] adopted a 1-D convolution filter to generate
channel weights and significantly reduced the model complexity of SE. Wang et
al. [24] proposed the non-local (NL) module, which generates a large attention
map by calculating the correlation matrix between each pair of spatial points
in the feature map. CBAM [10], GCNet [1], and SGE [12] combined spatial
attention and channel attention serially, while DANet [3] adaptively
integrated local features with their global dependencies by summing the two
attention modules from different branches.
## 3 Shuffle Attention
In this section, we first introduce the process of constructing the SA module,
which divides the input feature map into groups and uses a Shuffle Unit to
integrate channel attention and spatial attention into one block for each
group. All sub-features are then aggregated, and a “channel shuffle” operator
is utilized to enable information communication between different
sub-features. We then show how to adopt SA in deep CNNs. Finally, we visualize
the effect and validate the reliability of the proposed SA. The overall
architecture of the SA module is illustrated in Figure 2.
Fig. 2: An overview of the proposed SA module. It adopts “channel split” to
process the sub-features of each group in parallel. The channel attention
branch uses GAP to generate channel-wise statistics, then scales and shifts
the channel vector with a pair of parameters. The spatial attention branch
adopts group norm to generate spatial-wise statistics, then creates a compact
feature similarly to the channel branch. The two branches are then
concatenated. Finally, all sub-features are aggregated and a “channel shuffle”
operator enables information communication between different sub-features.
Feature Grouping. For a given feature map $X\in\mathbb{R}^{C\times H\times
W}$, where $C$, $H$, $W$ indicate the channel number, spatial height, and
width, respectively, SA first divides $X$ into $G$ groups along the channel
dimension, i.e., $X=[X_{1},\cdots,X_{G}],X_{k}\in\mathbb{R}^{C/G\times H\times
W}$, in which each sub-feature $X_{k}$ gradually captures a specific semantic
response in the training process. Then, we generate the corresponding
importance coefficient for each sub-feature through an attention module.
Specifically, at the beginning of each attention unit, the input $X_{k}$ is
split into two branches along the channel dimension, i.e., $X_{k1}$,
$X_{k2}\in\mathbb{R}^{C/2G\times H\times W}$. As illustrated in Figure 2, one
branch is adopted to produce a channel attention map by exploiting the inter-
relationship of channels, while the other branch is used to generate a spatial
attention map by utilizing the inter-spatial relationship of features, so that
the model can focus on “what” and “where” is meaningful.
Channel Attention. One option to fully capture channel-wise dependencies is
the SE block proposed in [23]. However, it introduces too many parameters for
a lightweight attention module aiming at a good speed-accuracy trade-off. It
is also unsuitable to generate channel weights with a fast 1-D convolution of
size $k$ as in ECA [11], because $k$ tends to be large. Instead, we provide a
lighter alternative, which first embeds the global information by using global
average pooling (GAP) to generate the channel-wise statistics
$s\in\mathbb{R}^{C/2G\times 1\times 1}$, computed by shrinking $X_{k1}$
through the spatial dimensions $H\times W$:
$s=\mathcal{F}_{gp}(X_{k1})=\frac{1}{H\times
W}\sum_{i=1}^{H}\sum_{j=1}^{W}X_{k1}(i,j)$ (1)
Furthermore, a compact feature is created to enable guidance for precise and
adaptive selection. This is achieved by a simple gating mechanism with sigmoid
activation. Then, the final output of channel attention can be obtained by
$X^{\prime}_{k1}=\sigma(\mathcal{F}_{c}(s))\cdot
X_{k1}=\sigma(W_{1}s+b_{1})\cdot X_{k1}$ (2)
where $W_{1}\in\mathbb{R}^{C/2G\times 1\times 1}$ and
$b_{1}\in\mathbb{R}^{C/2G\times 1\times 1}$ are parameters used to scale and
shift $s$.
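As a concrete sketch of Eqs. (1)–(2), the channel branch can be written in a few lines of NumPy (illustrative only; $W_{1}$ and $b_{1}$ are passed in as per-channel arrays, and this is not the released code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, b1):
    """Channel attention branch of SA (Eqs. 1-2).

    x:      sub-feature of shape (C', H, W)
    w1, b1: learned per-channel scale and shift, shape (C', 1, 1)
    """
    s = x.mean(axis=(1, 2), keepdims=True)  # Eq. (1): GAP channel statistics
    return sigmoid(w1 * s + b1) * x         # Eq. (2): sigmoid-gated rescaling
```

With the initialization used later in the experiments ($W_{1}=0$, $b_{1}=1$), every gate starts at $\sigma(1)\approx 0.73$, so training begins close to a uniform rescaling of the input.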
Fig. 3: PyTorch code of the proposed SA module.
Fig. 4: Validation of the effectiveness of SA.
Spatial Attention. Different from channel attention, spatial attention focuses
on “where” the informative part is, which is complementary to channel
attention. First, we use Group Norm (GN) [25] over $X_{k2}$ to obtain the
spatial-wise statistics $\hat{X}_{k2}$. Then, $\mathcal{F}_{c}(\cdot)$ is
adopted to enhance the representation of $\hat{X}_{k2}$. The final output of
spatial attention is obtained by
$X^{\prime}_{k2}=\sigma(W_{2}\cdot GN(X_{k2})+b_{2})\cdot X_{k2}$ (3)
where $W_{2}$ and $b_{2}$ are parameters with shape $\mathbb{R}^{C/2G\times
1\times 1}$.
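A matching NumPy sketch of Eq. (3) follows. Note one assumption: we normalize each channel over its spatial positions, which is one possible GN grouping; the exact grouping is an implementation choice not fixed by the equation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(x, w2, b2, eps=1e-5):
    """Spatial attention branch of SA (Eq. 3).

    x:      sub-feature of shape (C', H, W)
    w2, b2: learned per-channel scale and shift, shape (C', 1, 1)
    Assumption: each channel is normalized over its spatial positions.
    """
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)   # GN(X_k2): spatial statistics
    return sigmoid(w2 * x_hat + b2) * x     # Eq. (3): sigmoid-gated rescaling
```

As in the channel branch, the gate acts multiplicatively on the raw input $X_{k2}$, so the branch can only re-weight positions, never invent new ones.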
Then the two branches are concatenated so that the number of channels matches
that of the input, i.e.,
$X^{\prime}_{k}=[X^{\prime}_{k1},X^{\prime}_{k2}]\in\mathbb{R}^{C/G\times
H\times W}$.
Aggregation. After that, all the sub-features are aggregated, and finally,
similar to ShuffleNet v2 [13], we adopt a “channel shuffle” operator to enable
cross-group information flow along the channel dimension. The final output of
the SA module has the same size as $X$, making SA easy to integrate into
modern architectures.
Note that $W_{1},b_{1},W_{2},b_{2}$ and the Group Norm hyper-parameters are
the only parameters introduced by the proposed SA. In a single SA module, the
number of channels in each branch is $C/2G$, so the total number of parameters
is $3C/G$ (typically $G$ is 32 or 64), which is trivial compared with the
millions of parameters in the entire network, making SA quite lightweight.
Implementation. SA can be implemented with a few lines of code in PyTorch or
TensorFlow, where automatic differentiation is supported. Figure 3 shows the
PyTorch code.
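The forward pass can also be sketched end to end. The following NumPy version (a simplified sketch of the steps above, not the released PyTorch code; the per-channel GN grouping is an assumption of this sketch) strings together feature grouping, channel split, the two attention branches, concatenation, and channel shuffle for a single $(C, H, W)$ input:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sa_forward(x, params, groups=64, eps=1e-5):
    """Forward pass of one SA module on a (C, H, W) feature map.

    params[k] = (w1, b1, w2, b2), each of shape (C / (2 * groups), 1, 1),
    holding the learned scale/shift pairs of Eqs. (2)-(3) for group k.
    """
    c, h, w = x.shape
    cg = c // groups                              # channels per group
    out = np.empty_like(x)
    for k in range(groups):                       # feature grouping
        xk = x[k * cg:(k + 1) * cg]
        xk1, xk2 = xk[:cg // 2], xk[cg // 2:]     # channel split
        w1, b1, w2, b2 = params[k]
        # channel attention branch (Eqs. 1-2)
        s = xk1.mean(axis=(1, 2), keepdims=True)
        xk1 = sigmoid(w1 * s + b1) * xk1
        # spatial attention branch (Eq. 3, per-channel normalization assumed)
        mu = xk2.mean(axis=(1, 2), keepdims=True)
        var = xk2.var(axis=(1, 2), keepdims=True)
        xk2 = sigmoid(w2 * (xk2 - mu) / np.sqrt(var + eps) + b2) * xk2
        out[k * cg:(k + 1) * cg] = np.concatenate([xk1, xk2])  # concat
    # channel shuffle: cross-group information flow
    return out.reshape(groups, cg, h, w).transpose(1, 0, 2, 3).reshape(c, h, w)
```

The output has the same shape as the input, so the module can be dropped in after any convolution block, which is exactly the plug-and-play property exploited by SA-Net.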
SA-Net for Deep CNNs. To adopt SA in deep CNNs, we use exactly the same
configuration as SENet [23] and simply replace the SE block with the SA
module. The generated networks are named SA-Net.
Fig. 5: Sample visualizations on the ImageNet-1k val split generated by
GradCAM. The target layer is “layer4.2” in all cases.
### 3.1 Visualization and Interpretation
To verify whether SA improves semantic feature representation through feature
grouping and channel shuffle, we first train SA-Net50B (without “channel
shuffle”) and SA-Net50 (with “channel shuffle”) on the ImageNet-1k training
set. Letting $I$ denote the original input, we calculate the Top-1 accuracy of
$I\times X_{k}$ for each group at SA_5_3 (i.e., the last bottleneck, following
the naming scheme SA_stageID_blockID) before and after applying the SA module.
We use the accuracy scores as an indicator and plot their distribution for
three classes (“pug”, “goldfish”, and “plane”) from the ImageNet-1k validation
set in Figure 4(a, b). For comparison, we also plot the distribution of
classification scores across all 1000 classes.
As shown in Figure 4(a, b), the Top-1 accuracy statistically increases after
SA, which means feature grouping can significantly enhance the semantic
representation of feature maps. In addition, the average score in each group
improves by $\approx 0.4\%$ with “channel shuffle”, demonstrating its
effectiveness.
To further validate the effectiveness of SA, we plot the distribution of
average activations (the mean value of the channel-wise feature maps in each
group, similar to SE) across three classes (“pug”, “goldfish”, and “plane”) at
different depths of SA-Net50 (with shuffle). The results are shown in Figure
4(c). We make three observations about the role of the SA module: (1) the
distributions across different classes are very similar to each other at
earlier layers (e.g., SA_2_3 and SA_3_4), which suggests that the importance
of feature groups is likely shared by different classes in the early stages;
(2) at greater depths, the activation of each group becomes much more
class-specific, as different classes respond differently to the discriminative
value of features (e.g., SA_4_6 and SA_5_3); (3) SA_5_2 exhibits a similar
pattern over different classes, which means SA_5_2 is less important than the
other blocks in providing recalibration to the network.
To validate the effectiveness of SA more intuitively, we sample 9 images from
the ImageNet-1k val split and use GradCAM [26] to visualize their heatmaps at
SA_5_3 of SA-Net50. For comparison, we also draw the heatmaps of ResNet50 at
“layer4.2”. As shown in Figure 5, the proposed SA module allows the
classification model to focus on more relevant regions with more object
details, which helps to improve classification accuracy. These results confirm
that the proposed SA module indeed enhances the representation power of
networks.
## 4 Experiments
Experiment Setup. All experiments are conducted with exactly the same data
augmentation and hyper-parameter settings as [17] and [23]. Specifically, the
input images are randomly cropped to $224\times 224$ with random horizontal
flipping. The number of groups $G$ in SA-Net and SGE-Net is set to 64 in both.
The parameters of $\mathcal{F}_{c}(\cdot)$ are initialized to 0 for the
weights ($W_{1}$ and $W_{2}$) and 1 for the biases ($b_{1}$ and $b_{2}$) to
obtain better results. All architectures are trained from scratch by SGD with
weight decay 1e-4, momentum 0.9, and mini-batch size 256 (using 8 GPUs with 32
images per GPU) for 100 epochs, starting from an initial learning rate of 0.1
(with a linear warm-up [27] of 5 epochs) and decreasing it by a factor of 10
every 30 epochs. For testing on the validation set, the shorter side of an
input image is first resized to 256, and a center crop of $224\times 224$ is
used for evaluation.
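The evaluation-time preprocessing just described can be sketched as follows (an illustrative NumPy version using nearest-neighbor resizing to stay dependency-free; real pipelines typically use bilinear interpolation):

```python
import numpy as np

def eval_preprocess(img, resize_short=256, crop=224):
    """Resize the shorter side to `resize_short`, then take a center crop.

    img: (H, W, 3) uint8 image array.  Nearest-neighbor resizing keeps the
    sketch self-contained; production code would use bilinear interpolation.
    """
    h, w = img.shape[:2]
    scale = resize_short / min(h, w)
    nh, nw = round(h * scale), round(w * scale)
    rows = np.clip((np.arange(nh) / scale).astype(int), 0, h - 1)
    cols = np.clip((np.arange(nw) / scale).astype(int), 0, w - 1)
    resized = img[rows][:, cols]                  # nearest-neighbor resize
    top, left = (nh - crop) // 2, (nw - crop) // 2
    return resized[top:top + crop, left:left + crop]
```

For any input whose shorter side is at least 224 after resizing, the result is a $224\times 224\times 3$ array ready for normalization and batching.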
Table 1: Comparisons of different attention methods on ImageNet-1k in terms of network parameters (Param.), giga floating-point operations (GFLOPs), and Top-1/Top-5 accuracy (in %). The best records and the improvements are marked in bold and $\uparrow$, respectively. Attention Methods | Backbones | Param. | GFLOPs | Top-1 Acc (%) | Top-5 Acc (%)
---|---|---|---|---|---
ResNet [17] | ResNet-50 | 25.557M | 4.122 | 76.384 | 92.908
SENet [23] | 28.088M | 4.130 | 77.462 | 93.696
CBAM [10] | 28.090M | 4.139 | 77.626 | 93.660
SGE-Net [12] | 25.559M | 4.127 | 77.584 | 93.664
ECA-Net [11] | 25.557M | 4.127 | 77.480 | 93.680
SA-Net (Ours) | 25.557M | 4.125 | 77.724 ($\uparrow 1.34$) | 93.798 ($\uparrow 0.89$)
ResNet [17] | ResNet-101 | 44.549M | 7.849 | 78.200 | 93.906
SENet [23] | 49.327M | 7.863 | 78.468 | 94.102
CBAM [10] | 49.330M | 7.879 | 78.354 | 94.064
SGE-Net [12] | 44.553M | 7.858 | 78.798 | 94.368
ECA-Net [11] | 44.549M | 7.858 | 78.650 | 94.340
SA-Net (Ours) | 44.551M | 7.854 | 78.960 ($\uparrow 0.76$) | 94.492 ($\uparrow 0.59$)
Classification on ImageNet-1k. We compare SA-Net with current SOTA attention
methods. Evaluation metrics include both efficiency (i.e., network parameters
and GFLOPs) and effectiveness (i.e., Top-1/Top-5 accuracy). As shown in Table
1, SA-Net shares almost the same model complexity (i.e., network parameters
and FLOPs) with the original ResNet [17], but achieves a 1.34% gain in Top-1
accuracy and a 0.89% gain in Top-5 accuracy (on ResNet-50). When using
ResNet-101 as the backbone, the performance gains are 0.76% and 0.59%,
respectively. Compared with SOTA counterparts, SA obtains higher accuracy at
lower or similar model complexity. Specifically, when using ResNet-101 as the
backbone, the SE [23] module adds 4.778M parameters and 14.34 MFLOPs for a
0.268% Top-1 accuracy gain, whereas our SA adds only 0.002M parameters and
5.12 MFLOPs for a 0.76% Top-1 accuracy gain, demonstrating that SA is lighter
and more efficient.
Table 2: Performance comparisons of SA-Net (using ResNet-50 as the backbone) with four options (i.e., eliminating Group Norm, eliminating Channel Shuffle, eliminating $\mathcal{F}_{c}(\cdot)$, and replacing $\mathcal{F}_{c}(\cdot)$ with a $1\times 1$ Conv) on ImageNet-1k in terms of GFLOPs and Top-1/Top-5 accuracy (in %). The best records are marked in bold. Methods | GFLOPs | Top-1 Acc (%) | Top-5 Acc (%)
---|---|---|---
origin | 4.125 | 77.724 | 93.798
w/o_gn | 4.125 | 77.372 | 93.804
w/o_shuffle | 4.125 | 77.598 | 93.758
w/o_$\mathcal{F}_{c}(\cdot)$ | 4.125 | 77.608 | 93.886
$1\times 1$ Conv | 4.140 | 77.684 | 93.840
Ablation Study. We report ablation studies of SA-Net50 on ImageNet-1k to
thoroughly investigate the components of SA. As shown in Table 2, the
performance drops significantly when Group Norm is eliminated, which indicates
that the distribution of features generated by different samples from the same
semantic group is inconsistent, making it difficult to learn robust importance
coefficients without normalization. When channel shuffle is eliminated, the
performance drops slightly, which demonstrates that information communication
among different groups enhances the feature representation. Unsurprisingly,
the performance also drops if $\mathcal{F}_{c}(\cdot)$ is eliminated, since
$\mathcal{F}_{c}(\cdot)$ is adopted to enhance the representation of features.
However, the performance does not improve if we replace
$\mathcal{F}_{c}(\cdot)$ with a $1\times 1$ Conv. This may be because the
number of channels in each sub-feature is too small, so exchanging information
among different channels is unnecessary.
Object Detection on MS COCO. We further train current SOTA detectors on the
COCO train2017 split, and evaluate the bounding box Average Precision (AP) for
object detection and the mask AP for instance segmentation. We implement all
detectors using the MMDetection toolkit with the default settings and train
them for 12 epochs (namely, the ‘1× schedule’). For a fair comparison, we only
replace the backbones pre-trained on ImageNet-1k and transfer them to MS COCO
by fine-tuning, keeping the other components of the entire detector intact. As
shown in Table 3, integrating either the SE block or the proposed SA module
improves the performance of object detection with both one-stage and two-stage
detectors by a clear margin. Meanwhile, our SA outperforms the SE block at
lower model complexity. Specifically, with Faster R-CNN [28] as the base
detector, SA outperforms SE by 1.0% and 1.4% in AP using ResNet-50 and
ResNet-101, respectively. With RetinaNet as the base detector, the gains are
1.5% with both backbones.
Table 3: Object detection results of different attention methods on COCO val2017. The best records and the improvements are marked in bold and $\uparrow$, respectively. Methods | Detectors | Param. | GFLOPs | AP50:95 | AP50 | AP75 | APS | APM | APL
---|---|---|---|---|---|---|---|---|---
ResNet-50 | Faster R-CNN | 41.53M | 207.07 | 36.4 | 58.4 | 39.1 | 21.5 | 40.0 | 46.6
\+ SE | 44.02M | 207.18 | 37.7 | 60.1 | 40.9 | 22.9 | 41.9 | 48.2
\+ SA (Ours) | 41.53M | 207.35 | 38.7 ($\uparrow 2.3$) | 61.2 | 41.4 | 22.3 | 42.5 | 49.8
ResNet-101 | 60.52M | 283.14 | 38.5 | 60.3 | 41.6 | 22.3 | 43.0 | 49.8
\+ SE | 65.24M | 283.33 | 39.6 | 62.0 | 43.1 | 23.7 | 44.0 | 51.4
\+ SA (Ours) | 60.53M | 283.60 | 41.0($\uparrow 2.5$) | 62.7 | 44.8 | 24.4 | 45.1 | 52.5
ResNet-50 | Mask R-CNN | 44.18M | 275.58 | 37.3 | 59.0 | 40.2 | 21.9 | 40.9 | 48.1
\+ SE | 46.67M | 275.69 | 38.7 | 60.9 | 42.1 | 23.4 | 42.7 | 50.0
\+ SA (Ours) | 44.18M | 275.86 | 39.4($\uparrow 2.1$) | 61.5 | 42.6 | 23.4 | 42.8 | 51.1
ResNet-101 | 63.17M | 351.65 | 39.4 | 60.9 | 43.3 | 23.0 | 43.7 | 51.4
\+ SE | 67.89M | 351.84 | 40.7 | 62.5 | 44.3 | 23.9 | 45.2 | 52.8
\+ SA (Ours) | 63.17M | 352.10 | 41.6($\uparrow 2.2$) | 63.0 | 45.5 | 24.9 | 45.5 | 54.2
ResNet-50 | RetinaNet | 37.74M | 239.32 | 35.6 | 55.5 | 38.3 | 20.0 | 39.6 | 46.8
\+ SE | 40.25M | 239.43 | 36.0 | 56.7 | 38.3 | 20.5 | 39.7 | 47.7
\+ SA (Ours) | 37.74M | 239.60 | 37.5($\uparrow 1.9$) | 58.5 | 39.7 | 21.3 | 41.2 | 45.9
ResNet-101 | 56.74M | 315.39 | 37.7 | 57.5 | 40.4 | 21.1 | 42.2 | 49.5
\+ SE | 61.49M | 315.58 | 38.8 | 59.3 | 41.7 | 22.1 | 43.2 | 51.5
\+ SA (Ours) | 56.64M | 315.85 | 40.3($\uparrow 2.6$) | 61.2 | 43.2 | 23.2 | 44.4 | 53.5
Table 4: Instance segmentation results of various state-of-the-art attention modules using Mask R-CNN on COCO val2017. Methods | AP50:95 | AP50 | AP75 | APS | APM | APL
---|---|---|---|---|---|---
ResNet-50 | 34.2 | 55.9 | 36.2 | 18.2 | 37.5 | 46.3
\+ SE | 35.4 | 57.4 | 37.8 | 17.1 | 38.6 | 51.8
\+ ECA | 35.6 | 58.1 | 37.7 | 17.6 | 39.0 | 51.8
\+ SGE | 34.9 | 56.9 | 37.0 | 19.1 | 38.4 | 47.3
\+ SA (Ours) | 36.1($\uparrow 1.9$) | 58.7 | 38.2 | 19.4 | 39.4 | 49.0
ResNet-101 | 35.9 | 57.7 | 38.4 | 19.2 | 39.7 | 49.7
\+ SE | 36.8 | 59.3 | 39.2 | 17.2 | 40.3 | 53.6
\+ ECA | 37.4 | 59.9 | 39.8 | 18.1 | 41.1 | 54.1
\+ SGE | 36.9 | 59.3 | 39.4 | 20.0 | 40.8 | 50.1
\+ SA (Ours) | 38.0($\uparrow 2.1$) | 60.0 | 40.3 | 20.8 | 41.2 | 51.7
Instance Segmentation on MS COCO. Instance segmentation results using Mask
R-CNN [29] on MS COCO are shown in Table 4. The SA module achieves a clear
improvement over the original ResNet and performs better than the other
state-of-the-art attention modules (i.e., the SE block, ECA module, and SGE
unit), with less model complexity. In particular, the SA module achieves
larger gains for small objects, which are usually more difficult to detect and
segment correctly. These results verify that our SA module generalizes well
across various computer vision tasks.
## 5 Conclusion
In this paper, we propose SA, a novel and efficient attention module that
enhances the representation power of CNNs. SA first groups the channel
dimension into multiple sub-features and processes them in parallel. For each
sub-feature, SA utilizes a Shuffle Unit to capture feature dependencies in
both the spatial and channel dimensions. Afterward, all sub-features are
aggregated, and finally a “channel shuffle” operator enables information
communication between the different sub-features. Experimental results
demonstrate that SA is an extremely lightweight plug-and-play block that can
significantly improve the performance of various deep CNN architectures.
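The grouping and “channel shuffle” steps described above are simple tensor permutations. The following NumPy sketch is our own illustrative reimplementation of the shuffle operator in the ShuffleNet style [13],[14], not the authors' released code:

```python
import numpy as np

def channel_shuffle(x: np.ndarray, groups: int) -> np.ndarray:
    """Shuffle channels of an (N, C, H, W) tensor across `groups` sub-features.

    Reshape channels into (groups, C // groups), swap the two axes, and
    flatten back, so each output group mixes channels from every input group.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # swap the group and per-group channel axes
    return x.reshape(n, c, h, w)

# With 4 channels and 2 groups, channel order [0, 1, 2, 3] becomes [0, 2, 1, 3]
x = np.arange(4, dtype=float).reshape(1, 4, 1, 1)
shuffled = channel_shuffle(x, 2)
```

Applied after the per-group attention branches, this permutation is what lets information flow between sub-features at negligible computational cost.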
In the future, we will further explore the spatial and channel attention
components of SA and apply them to more CNN architectures, including the
ShuffleNet family, SKNet [2], and MobileNetV3 [20].
## References
* [1] Yue Cao, Jiarui Xu, Stephen Lin, Fangyun Wei, and Han Hu, “Gcnet: Non-local networks meet squeeze-excitation networks and beyond,” CoRR, vol. abs/1904.11492, 2019.
* [2] Xiang Li, Wenhai Wang, Xiaolin Hu, and Jian Yang, “Selective kernel networks,” in IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, 2019, pp. 510–519.
* [3] Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu, “Dual attention network for scene segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, 2019, pp. 3146–3154.
* [4] Yuhui Yuan and Jingdong Wang, “Ocnet: Object context network for scene parsing,” CoRR, vol. abs/1809.00916, 2018.
* [5] HyunJae Lee, Hyo-Eun Kim, and Hyeonseob Nam, “SRM : A style-based recalibration module for convolutional neural networks,” CoRR, vol. abs/1903.10829, 2019.
* [6] Hengshuang Zhao, Yi Zhang, Shu Liu, Jianping Shi, Chen Change Loy, Dahua Lin, and Jiaya Jia, “Psanet: Point-wise spatial attention network for scene parsing,” in Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part IX, 2018, pp. 270–286.
* [7] Xia Li, Zhisheng Zhong, Jianlong Wu, Yibo Yang, Zhouchen Lin, and Hong Liu, “Expectation-maximization attention networks for semantic segmentation,” CoRR, vol. abs/1907.13426, 2019.
* [8] Zhen Zhu, Mengde Xu, Song Bai, Tengteng Huang, and Xiang Bai, “Asymmetric non-local neural networks for semantic segmentation,” CoRR, vol. abs/1908.07678, 2019.
* [9] Zilong Huang, Xinggang Wang, Lichao Huang, Chang Huang, Yunchao Wei, and Wenyu Liu, “Ccnet: Criss-cross attention for semantic segmentation,” CoRR, vol. abs/1811.11721, 2018.
* [10] Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon, “CBAM: convolutional block attention module,” in Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part VII, 2018, pp. 3–19.
* [11] Qilong Wang, Banggu Wu, Pengfei Zhu, Peihua Li, Wangmeng Zuo, and Qinghua Hu, “Eca-net: Efficient channel attention for deep convolutional neural networks,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020. 2020, pp. 11531–11539, IEEE.
* [12] Xiang Li, Xiaolin Hu, and Jian Yang, “Spatial group-wise enhance: Improving semantic feature learning in convolutional networks,” CoRR, vol. abs/1905.09646, 2019.
* [13] Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun, “Shufflenet V2: practical guidelines for efficient CNN architecture design,” in Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XIV, 2018, pp. 122–138.
* [14] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun, “Shufflenet: An extremely efficient convolutional neural network for mobile devices,” in 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. 2018, pp. 6848–6856, IEEE Computer Society.
* [15] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna, “Rethinking the inception architecture for computer vision,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, 2016, pp. 2818–2826.
* [16] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A. Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” in Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, 2017, pp. 4278–4284.
* [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, 2016, pp. 770–778.
* [18] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a meeting held December 3-6, 2012, Lake Tahoe, Nevada, United States, 2012, pp. 1106–1114.
* [19] Mark Sandler, Andrew G. Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen, “Mobilenetv2: Inverted residuals and linear bottlenecks,” in 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, 2018, pp. 4510–4520.
* [20] Andrew Howard, Ruoming Pang, Hartwig Adam, Quoc V. Le, Mark Sandler, Bo Chen, Weijun Wang, Liang-Chieh Chen, Mingxing Tan, Grace Chu, Vijay Vasudevan, and Yukun Zhu, “Searching for mobilenetv3,” in 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019. 2019, pp. 1314–1324, IEEE.
* [21] Sara Sabour, Nicholas Frosst, and Geoffrey E. Hinton, “Dynamic routing between capsules,” in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, 2017, pp. 3856–3866.
* [22] Geoffrey E. Hinton, Sara Sabour, and Nicholas Frosst, “Matrix capsules with EM routing,” in 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. 2018, OpenReview.net.
* [23] Jie Hu, Li Shen, and Gang Sun, “Squeeze-and-excitation networks,” in 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, 2018, pp. 7132–7141.
* [24] Xiaolong Wang, Ross B. Girshick, Abhinav Gupta, and Kaiming He, “Non-local neural networks,” in 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, 2018, pp. 7794–7803.
* [25] Yuxin Wu and Kaiming He, “Group normalization,” in Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XIII, 2018, pp. 3–19.
* [26] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra, “Grad-cam: Visual explanations from deep networks via gradient-based localization,” in IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017. 2017, pp. 618–626, IEEE Computer Society.
* [27] Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He, “Accurate, large minibatch SGD: training imagenet in 1 hour,” CoRR, vol. abs/1706.02677, 2017.
* [28] Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” in NIPS 2015, December 7-12, 2015, Montreal, Quebec, Canada, Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, Eds., 2015, pp. 91–99.
* [29] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick, “Mask R-CNN,” in IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, 2017, pp. 2980–2988.
# A functional approach to the next to eikonal approximation of high energy
gravitational scattering
A.R. Fazio Departamento de Física, Universidad Nacional de Colombia, Ciudad
Universitaria, Bogotá D.C., Colombia E.A. Reyes R Universidad de Pamplona
(UDP), Pamplona - Norte de Santander, Colombia
###### Abstract
The Fradkin-Schwinger functional methods for representing a Green function in
an external gravitational field are used to study the eikonal and
next-to-eikonal limits, including the nonlinear gravitational interactions, of
the scattering amplitude of an ultra-relativistic scalar particle on a static
super-massive scalar target in the nearly forward limit. The functional
approach confirms the exponentiation of the leading eikonal, which also
applies at the first non-leading power in the energy of the light particle,
and moreover includes the interaction at impact parameters much larger than
the Schwarzschild radius associated with the center-of-mass energy in the
ultra-relativistic limit.
## 1 Introduction
The high-energy behavior of scattering processes in perturbative quantum
gravity of a very light scalar by a heavy scalar [1],[2],[3] is considered in
the eikonal and next-to-eikonal approximations [4], in the limit of large
impact parameter and consequently small scattering angle. These approximation
schemes go notably beyond finite-order perturbation theory [5],[6], allowing
one to sum the logarithmically divergent amplitudes in the soft limit, which
have a ladder-like structure [7],[8],[9]. The general idea of the eikonal
limit is that at very high energies the de Broglie wavelength of the incident
particle in a scattering process is small compared with the spatial variations
in the target, so that the “geometrical optics” limit might be a reasonable
first approximation, in which it is possible to recover the classical result
of the exponentiation of the leading phase [10]. That phase depends upon the
mass dimension of the coupling constant, which for gravitational interactions
is the inverse of the squared energy [11], and it is a fundamental
infrared-dominated physical quantity [12]. By using and developing
scattering-amplitude techniques [13], more kinematic regimes and observables
have been explored, also in generalized theories with gravity, and compared
with the classical gravitational scattering of massive objects up to two-loop
order [14].
However, for the approach of scattering amplitudes in perturbative quantum
gravity to be compared with classical dynamics in the post-Minkowskian
expansion, an expansion in the small scattering angle is needed [15], in which
the amplitude can be resummed by exponentiating a suitably defined eikonal
phase. As we will see in our analysis, there are subtleties about which
perturbative diagrams exponentiate, and possibly, by the resummation approach
used in this paper, the extension to two loops could be carried out in order
to compare with the results from scattering amplitudes. The extraction of
classical results has also been studied in [16],[17], and eikonalization has
been applied in that context [18]. In reference [19] a detailed analysis of
the expansion around the eikonal limit, directly in perturbation theory for
the above-mentioned scattering, is provided, and the diagrammatic
self-consistency of this expansion is verified with the derivation of the
next-to-leading corrections in arbitrary dimensions. The first derivation of
the eikonal gravitational phase for the process in question can be found in
[20]. The next-to-leading correction has also been studied intensively
[21],[22],[23],[24],[25]. In this article we provide, by the functional
methods of quantum field theory, the summation of the diagrams corresponding
to the leading and next-to-leading corrections of the high-energy scattering
amplitude for the considered process. In the framework of the functional
integral, the representation of the Green function in an external field
proposed by Fradkin in [26],[27] allows calculations to be performed in a
compact form, with easy combinatorial counting based on Wick contractions in
the summation of ladder diagrams. Two-nucleon scattering in quantum gravity
was also analyzed by the so-called Fradkin modified perturbation theory [28];
our approach, however, is based on the Feynman path integral and takes into
account, in the sum of the considered diagrams, the full Feynman denominators
$(p+\sum k_{i})^{2}-m^{2}+i\epsilon$, by expanding in the correlation terms
$k_{i}\cdot k_{j}$ of the internal momenta. The quantum-gravity regime is the
appropriate one for discussing ultrahigh-energy gravitational scattering in
the eikonal approximation, because it is characterized by large impact
parameters, where the massless fields dominate. These cannot be string modes,
because it was proved in [29] that, when the momentum transfer reaches the
string scale, string effects are subdominant to higher-loop gravitational
processes approximated via the eikonal. The high-energy scattering of massless
closed strings off a stack of D-branes in Minkowski space is likewise
characterized by the dominance of gravity when the impact parameter is large
with respect to the Schwarzschild radius associated with the center-of-mass
energy [21]. The eikonalization procedure of [21] is based on the use of the
Schwinger proper time, which in that context is expressed in terms of the
string and D-brane parameters in perturbation theory. In our approach the use
of the Schwinger proper time will allow an efficient eikonalization procedure
with a suitable extension to the next-to-eikonal approximation.
The paper is organized as follows. In section 2 we fix the kinematics of the
high-energy gravitational scattering process of the spinless projectile in the
static limit of the spinless target. In section 3 the path-integral
description of the amplitude by semiclassical gravity is provided. In section
4 the Fradkin functional representation of the two-point connected Green
function in a linearized background is derived in detail. That representation
allows one to take into account all gravitons emitted along the propagating
line of the radiating particle, with the truncation to linear interaction in
the graviton field. In section 5 the eikonal limit of the two-point Green
function in the given external gravitational field is computed. The eikonal
limit of the scattering amplitude of our process is then computed, showing
crucial cancellations at the next-to-leading power in the transferred momentum
and the dominance of large impact parameters for the process in the eikonal
regime. In section 6 the deviation from the straight line is considered in our
functional approach, obtaining zero at the next-to-leading power in four
dimensions. In section 7 the nonlinear gravitational interactions are included
for the contribution at the next-to-leading power. Section 8 contains
conclusions and research perspectives on applying the functional methods to
problems in the next-to-eikonal approach proposed in the current literature.
## 2 Kinematics
We will investigate the small-angle gravitational scattering of an
ultra-relativistic light (massless) scalar particle of energy $E_{\phi}$ off a
very heavy particle of mass $M_{\sigma}$, also chosen to be scalar. The
specific kinematics is
$p+q\rightarrow
p'+q'\,\,\,\,\,\,\,p^{2}=p^{\prime 2}=0\,\,\,\,\,\,\,q^{2}=q^{\prime 2}=M_{\sigma}^{2}$
(1)
and we define $\Delta\equiv\sqrt{-(p-p')^{2}}$, with
$\Delta\ll E_{\phi}=p^{0}\ll M_{\sigma}$. The momenta of the incoming and
outgoing scalar particles, $p$ and $p'$, are much larger than the transferred
momentum $\Delta^{\mu}=p^{\prime\mu}-p^{\mu}$. We take $q$ and $q'$ to be the
incoming and outgoing momenta of the heavy scalar of mass $M_{\sigma}$. The
gravitational force is mediated by massless gravitons of helicity two, in a
Lorentz frame where $\Delta^{0}=\Delta_{z}=0$. We will always work to leading
power in $M_{\sigma}$ and seek the first power corrections in $E_{\phi}$. Our
approximation will be expressed in power corrections of the form
$\frac{\Delta}{E_{\phi}}=\frac{2\sqrt{-t}M_{\sigma}}{s-M_{\sigma}^{2}}$, where
$s$, $t$ are the Mandelstam invariants. In our frame
$p^{0}=p^{\prime 0}=E_{\phi}=|\vec{p}|$ and $\vec{p}$ is taken nearly along the
$z$-axis, amounting to
$p^{\mu}=\left(E_{\phi},p^{z},-\frac{\vec{\Delta}}{2}\right)\,\,\,\,\,\,\,\,p^{\prime\mu}=\left(E_{\phi},p^{z},\frac{\vec{\Delta}}{2}\right),$
(2)
with $p^{z}=\sqrt{E_{\phi}^{2}-\frac{\Delta^{2}}{4}}=p^{\prime z}$, so that
$p^{z}=E_{\phi}$ up to corrections of order $\frac{\Delta^{2}}{E_{\phi}^{2}}$,
which may be neglected at the leading power of our approximation.
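The power-counting ratio quoted above can be checked directly; the following short derivation is ours, working to leading power in $M_{\sigma}$ in the frame where the heavy scalar is initially at rest:

```latex
% Static-target frame: q = (M_\sigma, \vec{0}), with p^2 = 0
s - M_\sigma^2 = (p+q)^2 - M_\sigma^2 = 2\,p\cdot q = 2\,E_\phi M_\sigma,
\qquad
t = (p'-p)^2 = -\Delta^2,
\qquad\Longrightarrow\qquad
\frac{2\sqrt{-t}\,M_\sigma}{s-M_\sigma^2}
  = \frac{2\,\Delta\,M_\sigma}{2\,E_\phi M_\sigma}
  = \frac{\Delta}{E_\phi}.
```

Since $\Delta\ll M_{\sigma}$, the frame with $\Delta^{0}=\Delta_{z}=0$ coincides with the static-target frame up to corrections beyond the order considered here.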
## 3 Path integral description by semiclassical gravity
The full four-point connected Green function of four scalars in quantum
gravity is given by
$\displaystyle
G(x_{1},x’_{1},x_{2},x’_{2})=\int\limits_{\begin{subarray}{c}\text{Connected}\\\
\text{diagrams}\end{subarray}}[Dh][D\phi_{1}][D\phi_{2}]\phi_{1}(x_{1})\phi_{1}(x’_{1})\phi_{2}(x_{2})\phi_{2}(x’_{2})$
$\displaystyle\exp\left\\{i\int d^{4}x\sqrt{-g}\left[\frac{1}{16\pi
G_{N}}\left(R(h)-\frac{1}{2}g_{\mu\nu}C^{\mu}C^{\nu}\right)\right.\right.$
$\displaystyle\left.\left.+\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi_{1}\partial_{\nu}\phi_{1}+\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi_{2}\partial_{\nu}\phi_{2}-\frac{1}{2}M_{\sigma}^{2}\phi_{2}^{2}\right]\right\\},$
(3)
where $R$ is the curvature scalar and $g_{\mu\nu}$ is defined as the sum of a
flat Minkowski component $\eta_{\mu\nu}$ and a perturbation $\kappa
h_{\mu\nu}$, with $\kappa=\sqrt{32\pi G_{N}}$ in terms of the Newton constant.
It is coupled to the scalar matter represented by the fields $\phi_{1}$ and
$\phi_{2}$, the first massless and the second heavy. $C_{\mu}$ fixes the gauge
$C_{\mu}=\kappa\left(\partial_{\nu}h^{\nu}_{\mu}-\frac{1}{2}\partial_{\mu}h^{\nu}_{\nu}\right),$
(4)
amounting to a covariant generalization of the de Donder gauge. The ghosts for
this gauge are not included because they do not contribute in the forward
limit $\Delta\ll E_{\phi}$: the ghost-ghost-graviton vertex on flat space,
which can be found for instance in [30], is easily seen to be subdominant at
every step of our analysis. The path integral (3) can be rewritten as
$\displaystyle
G(x_{1},x’_{1},x_{2},x’_{2})=\int\limits_{\begin{subarray}{c}\text{Connected}\\\
\text{diagrams}\end{subarray}}[Dh]G^{c}(x_{1},x’_{1}|h_{\mu\nu})G^{c}_{2}(x_{2},x’_{2}|h_{\mu\nu})\times$
$\displaystyle\exp\left\\{i\int d^{4}x\sqrt{-g}\left[\frac{1}{16\pi
G_{N}}\left(R(h)-\frac{1}{2}g_{\mu\nu}C^{\mu}C^{\nu}\right)\right]\right\\},$
(5)
where $G^{c}(x_{1},x'_{1}|h_{\mu\nu})$ and
$G^{c}_{2}(x_{2},x'_{2}|h_{\mu\nu})$ are the two-point connected Green
functions for the propagation of the free scalar fields $\phi_{1}$ and
$\phi_{2}$, of masses $0$ and $M_{\sigma}$ respectively, in the presence of
$h_{\mu\nu}$ taken as a gravitational background field. In the eikonal
approximation we will consider a gravitational interaction linearized in
$h_{\mu\nu}$; for the next-to-eikonal approximation, the contributions
quadratic in the background field, the so-called seagull terms, together with
the trilinear graviton vertex, will also be taken into account. The sum of the
ladder diagrams, those not involving the self-energy corrections, is based on
the contributions to the scattering amplitude, with the given kinematics (1),
of the on-shell two-point functions. For the light scalar
$\displaystyle<p'|G^{c}(x,y|h_{\mu\nu})|p>=\lim_{p^{2},p^{\prime 2}\rightarrow
0}\int d^{4}x\int d^{4}y\,$ $\displaystyle e^{-ip\cdot
x}\overrightarrow{(\partial_{x}^{2})}(G(x,y|h)-G_{0}(x,y))\overleftarrow{(\partial_{y}^{2})}e^{ip'\cdot
y},$ (6)
where $G_{0}$ refers to the free propagation of the scalar field without any
external background. Analogously, for the heavy scalar of mass $M_{\sigma}$,
$\displaystyle<q'|G^{c}(x,y|h_{\mu\nu})|q>=\lim_{q^{2},q^{\prime 2}\rightarrow
M_{\sigma}^{2}}\int d^{4}x\int d^{4}y\,$ $\displaystyle e^{-iq\cdot
x}\overrightarrow{(\partial_{x}^{2})}(G(x,y|h)-G_{0}(x,y))\overleftarrow{(\partial_{y}^{2})}e^{iq'\cdot
y}.$ (7)
The requested scattering amplitude leads to the $\mathcal{T}$ matrix element
$\displaystyle i(2\pi)^{4}\delta^{4}(p+q-p’-q’)\mathcal{T}(p,p’;q,q’)=$
$\displaystyle\exp\left[\int d^{4}xd^{4}y\left(\frac{\delta}{\delta
h^{\alpha\beta}(x)}D^{\alpha\beta,\gamma\delta}(x-y)\frac{\delta}{\delta
h^{\prime\gamma\delta}(y)}\right)\right]$
$\displaystyle\left.<p'|G^{c}(x_{1},x'_{1}|h)|p><q'|G^{c}(x_{2},x'_{2}|h')|q>\right|_{h,h'=0}.$
(8)
Here $D^{\mu\nu,\alpha\beta}$ is the graviton propagator so that
$\int
D_{\mu\nu,\alpha\beta}(x-z)D^{\alpha\beta,\gamma\delta}(z-y)d^{4}z=\frac{i}{2}\delta^{4}(x-y)(\eta_{\mu}\,^{\gamma}\eta_{\nu}\,^{\delta}+\eta_{\mu}\,^{\delta}\eta_{\nu}\,^{\gamma})$
(9)
which in the de Donder gauge specified by the gauge fixing terms in (3) and
(4), takes the form
$D_{\mu\nu,\alpha\beta}(x-y)=\int\frac{d^{4}k}{(2\pi)^{4}}\frac{i}{2}\frac{\eta_{\mu\alpha}\eta_{\nu\beta}+\eta_{\mu\beta}\eta_{\nu\alpha}-\eta_{\mu\nu}\eta_{\alpha\beta}}{k^{2}}e^{-ik(x-y)}.$
(10)
We will get $G^{c}(x,x’|h_{\mu\nu})$ in the next section by following the
functional methods of Schwinger-Fradkin [26],[27].
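As a quick consistency check of (9) and (10) (ours, not part of the paper), the de Donder tensor numerator is its own inverse on symmetric index pairs in four dimensions; the metric signature $(+,-,-,-)$ is an assumption of the sketch:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

# de Donder numerator P_{mu nu, alpha beta} appearing in the propagator (10)
P = 0.5 * (np.einsum('ma,nb->mnab', eta, eta)
           + np.einsum('mb,na->mnab', eta, eta)
           - np.einsum('mn,ab->mnab', eta, eta))

# Contract P with itself, raising the inner index pair with eta^{-1} (= eta)
PP = np.einsum('mnab,ac,bd,cdrs->mnrs', P, eta, eta, P)

# In D = 4 the contraction reproduces the symmetric identity of (9)
I_sym = 0.5 * (np.einsum('mr,ns->mnrs', eta, eta)
               + np.einsum('ms,nr->mnrs', eta, eta))
```

The trace term $-\eta_{\mu\nu}\eta_{\alpha\beta}/2$ is exactly what makes the numerator self-inverse in four dimensions; in $D\neq 4$ its coefficient would have to be $1/(D-2)$.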
## 4 Two point scalar connected Green function
We describe how to calculate the relativistic connected amputated Green
function $G^{c}(x,y|h_{\mu\nu})$ in a linearized gravitational background for
a general scalar field of mass $m$. The spin zero matter action is
$S=\int
d^{4}x\sqrt{-g}\left(\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}m^{2}\phi^{2}\right),$
(11)
expanded up to trilinear couplings [30]
$S=-\frac{1}{2}\int d^{4}x\phi(x)\left[\partial^{2}+m^{2}-\kappa
h^{\mu\nu}\partial_{\mu}\partial_{\nu}+\frac{\kappa}{2}\eta_{\mu\nu}h^{\mu\nu}\partial^{2}+\frac{\kappa}{2}\eta^{\mu\nu}h_{\mu\nu}m^{2}\right]\phi(x).$
(12)
The requested Green function is defined as
$\left(\partial^{2}+m^{2}-\kappa
h^{\mu\nu}\partial_{\mu}\partial_{\nu}+\frac{\kappa}{2}\eta_{\mu\nu}h^{\mu\nu}\partial^{2}+m^{2}\frac{\kappa}{2}\eta_{\mu\nu}h^{\mu\nu}\right)G(x,y|h)=-i\delta^{4}(x-y).$
(13)
It is convenient to go over to the momentum representation with respect to $x-y$
$G(x,y|h)=\frac{1}{(2\pi)^{4}}\int d^{4}\ell\,G(x,\ell)\,e^{-i\ell(x-y)},$
(14)
therefore
$\displaystyle\left[-\ell^{2}+m^{2}+\partial^{2}-2i\ell_{\alpha}\partial^{\alpha}+\kappa\ell_{\mu}\ell_{\nu}h^{\mu\nu}-\kappa
h^{\mu\nu}\partial_{\mu}\partial_{\nu}+2i\kappa\ell_{\mu}h^{\mu\nu}\partial_{\nu}\right.$
$\displaystyle\left.-\frac{\kappa}{2}\ell^{2}\eta_{\mu\nu}h^{\mu\nu}+\frac{\kappa}{2}\eta_{\mu\nu}h^{\mu\nu}\partial^{2}-i\kappa\eta_{\mu\nu}h^{\mu\nu}\ell_{\alpha}\partial^{\alpha}+\frac{\kappa}{2}\eta_{\mu\nu}h^{\mu\nu}m^{2}\right]G(x,\ell)=-i.$
In terms of the Schwinger proper time $\nu$, and with the Feynman
$i\epsilon$ prescription for the propagator, we write
$G(x,\ell)=\int_{0}^{+\infty}e^{i\nu(\ell^{2}-m^{2}+i\epsilon)}Y(x,\nu)d\nu.$
(16)
This yields the following Cauchy problem
$\begin{cases}i\frac{\partial
Y}{\partial\nu}(x,\nu)=\left[\partial^{2}-2i\ell_{\mu}\partial^{\mu}+\kappa\ell_{\mu}\ell_{\beta}h^{\mu\beta}-\kappa
h^{\mu\beta}\partial_{\mu}\partial_{\beta}+2i\kappa\ell_{\mu}h^{\mu\beta}\partial_{\beta}\right.\\\
\left.-\frac{\kappa}{2}\ell^{2}\eta_{\mu\beta}h^{\mu\beta}+\frac{\kappa}{2}\eta_{\mu\beta}h^{\mu\beta}\partial^{2}-i\kappa\eta_{\mu\beta}h^{\mu\beta}\ell_{\alpha}\partial^{\alpha}+\frac{\kappa}{2}\eta_{\mu\beta}h^{\mu\beta}m^{2}\right]Y(x,\nu)\vspace*{0.2cm}\\\
Y(x,0)=1,\end{cases}$ (17)
where the Feynman prescription for the propagator has been included. Take the
external gravitational field
$h_{\alpha\beta}(x)=\hat{h}_{\alpha\beta}(k,\frac{\partial}{\partial
k})e^{ikx}$, where $\hat{h}_{\alpha\beta}(k,\frac{\partial}{\partial k})$ is
an arbitrary operator in $k$ and in derivatives with respect to $k$, with the
only restrictions that $k^{2}=0$ and
$k^{\alpha}\hat{h}_{\alpha\beta}=\frac{1}{2}k_{\beta}\hat{h}^{\rho}\,_{\rho}(k,\frac{\partial}{\partial
k})$. We then take into account that $\partial^{2}\approx k^{2}=0$ and also
that $h^{\alpha}\,_{\beta}\partial_{\alpha}\approx
ik^{\alpha}\hat{h}_{\alpha\beta}e^{ikx}=i\frac{1}{2}k_{\beta}\hat{h}^{\rho}\,_{\rho}(k,\frac{\partial}{\partial
k})e^{ikx}$. These approximations amount to stating that the spatial
variation of $Y(x,\nu)$ is approximately driven by the gravitational
plane-wave background, as in a forced harmonic oscillator [31]. The Cauchy
problem becomes
$\begin{cases}i\frac{\partial
Y}{\partial\nu}(x,\nu)=\left[\partial^{2}-2i\ell_{\alpha}\partial^{\alpha}+\kappa\ell_{\mu}\ell_{\beta}h^{\mu\beta}(x)\right]Y(x,\nu)\vspace*{0.2cm}\\\
Y(x,0)=1,\end{cases}$ (18)
where the on-shell limit $\ell^{2}=m^{2}$ has been used, since we will need it
to compute the $\mathcal{T}$-matrix element in (8). Let us introduce an
additional interaction with the external generators $t_{\mu}(\nu)$ of the
operators $\partial_{\mu}$:
$i\frac{\partial
Y}{\partial\nu}(x,\nu,t)=\left[\partial^{2}-2i\ell_{\alpha}\partial^{\alpha}+\kappa\ell_{\mu}\ell_{\beta}h^{\mu\beta}(x)+it_{\mu}(\nu)\partial^{\mu}\right]Y(x,\nu,t)$
(19)
and
$\left.\frac{\delta}{\delta t^{\mu}(\nu)}Y\right|_{t=0}=\partial_{\mu}Y(\nu).$
(20)
Treating the Schrödinger-type equation (19) by functional methods [32],[33], we obtain
$\left.Y(x,\nu)=\exp\left[-i\int_{0}^{\nu}\frac{\delta^{2}}{\delta
t^{\mu}(\xi)\delta t_{\mu}(\xi)}d\xi\right]Y_{1}(x,\nu,t)\right|_{t=0}$ (21)
where
$\frac{\partial
Y_{1}}{\partial\nu}(x,\nu,t)=(-2\ell_{\alpha}\partial^{\alpha}-i\kappa\ell_{\mu}\ell_{\beta}h^{\mu\beta}(x)+t_{\mu}(\nu)\partial^{\mu})Y_{1}(x,\nu,t).$
(22)
By the method of variation of arbitrary constants
$Y_{1}(x,\nu,t)=\exp\left(-\int_{0}^{\nu}(2\ell_{\alpha}\partial^{\alpha}-t_{\mu}(\xi)\partial^{\mu})d\xi\right)Y_{2}(x,\nu,t),$
(23)
with
$\frac{\partial
Y_{2}}{\partial\nu}=-i\kappa\ell_{\mu}\ell_{\beta}h^{\mu\beta}\left(x+2\ell\nu-\int_{0}^{\nu}t(\xi)d\xi\right)Y_{2},$
(24)
giving therefore
$Y_{2}=\exp\left[-i\kappa\ell_{\mu}\ell_{\beta}\int_{0}^{\nu}h^{\mu\beta}\left(x+2\ell\nu’-\int_{0}^{\nu’}t(\xi)d\xi\right)d\nu’\right].$
(25)
By taking into account the translation operator of (23)
$\displaystyle Y(x,\nu)=\exp\left[-i\int_{0}^{\nu}\frac{\delta^{2}}{\delta
t^{\mu}(\xi)\delta t_{\mu}(\xi)}d\xi\right]$
$\displaystyle\left.\exp\left[-i\kappa\ell_{\mu}\ell_{\beta}\int_{0}^{\nu}h^{\mu\beta}\left(x+2\ell(\nu’-\nu)+\int_{\nu’}^{\nu}t(\xi)d\xi\right)d\nu’\right]\right|_{t=0},$
(26)
amounting to
$\displaystyle Y(x,\nu)=\exp\left[-i\int_{0}^{\nu}\frac{\delta^{2}}{\delta
t^{\mu}(\xi)\delta t_{\mu}(\xi)}d\xi\right]\times\qquad\qquad$
$\displaystyle\qquad\qquad\,\,\,\exp\left[-i\kappa\ell_{\mu}\ell_{\beta}\int_{0}^{\nu}h^{\mu\beta}\left(x+2\ell(\nu’-\nu)\right.\right.$
$\displaystyle\left.\left.\left.\qquad\qquad\qquad\quad\qquad+\quad\int_{0}^{\nu}t(\xi)\theta(\xi-\nu’)d\xi\right)d\nu’\right]\right|_{t=0}.$
(27)
In terms of the Fourier modes of the gravitational field
$\displaystyle Y(x,\nu)=$
$\displaystyle\sum_{n=0}^{+\infty}\frac{(-i\kappa)^{n}}{n!}\prod_{i,j=1}^{n}\int_{0}^{\nu}d\xi_{i}\int\frac{d^{4}k_{j}}{(2\pi)^{4}}\ell_{\mu_{i}}\ell_{\beta_{i}}\hat{h}^{\mu_{j}\beta_{j}}(k_{j})\times$
$\displaystyle\,\exp\left(-ik_{j}\cdot x-2i\ell\cdot
k_{j}(\xi_{i}-\nu)\right)\times$ (28)
$\displaystyle\left.\exp\left[-i\int_{0}^{\nu}\frac{\delta^{2}}{\delta
t^{\mu}(\xi)\delta
t_{\mu}(\xi)}d\xi\right]\exp\left[-i\int_{0}^{\nu}t(\xi)\cdot
k_{j}\theta(\xi-\nu')d\xi\right]\right|_{t=0}$
The following property holds
$\displaystyle\exp\left[i\int_{0}^{\nu}g^{\rho}(\xi)g_{\rho}(\xi)d\xi\right]=$
$\displaystyle\left.\exp\left[-i\int_{0}^{\nu}\frac{\delta^{2}}{\delta
t^{\mu}(\xi)\delta
t_{\mu}(\xi)}d\xi\right]\exp\left[i\int_{0}^{\nu}t^{\mu}(\xi)g_{\mu}(\xi)d\xi\right]\right|_{t=0}$
(29)
which is proved in [6] and [34]. Finally
$\displaystyle Y(x,\nu)=$ $\displaystyle
1+\sum_{n=1}^{+\infty}\frac{(-i)^{n}\kappa^{n}}{n!}\left(\prod\limits_{m=1}^{n}\int_{0}^{\nu}d\xi_{m}\int\frac{d^{4}k_{m}}{(2\pi)^{4}}\ell^{\mu}\ell^{\beta}\hat{h}_{\mu\beta}(k_{m})e^{-ik_{m}\cdot
x}\right)\times$ $\displaystyle\exp\left[i\sum_{m,m_{1}}k_{m}\cdot
k_{m_{1}}\left(\frac{\xi_{m}+\xi_{m_{1}}}{2}+\frac{1}{2}|\xi_{m}-\xi_{m_{1}}|-\nu\right)\right]\times$
$\displaystyle\exp\left[2i\sum_{m=1}^{n}\ell\cdot k_{m}(\nu-\xi_{m})\right],$
(30)
which is readily obtained by using (29) together with
$\int_{0}^{\nu}\theta(\xi-\xi_{m})\theta(\xi-\xi_{m_{1}})d\xi=\nu-\frac{\xi_{m}+\xi_{m_{1}}}{2}-\frac{|\xi_{m}-\xi_{m_{1}}|}{2}.$
(31)
## 5 Eikonal approximation
In the forward limit, $p\sim p'$, the amplitude (6) takes the following form
$\displaystyle<p'|G^{c}(x,y|h)|p>$
$\displaystyle=\lim\limits_{\begin{subarray}{c}p^{2}\rightarrow 0\\\
p^{\prime 2}\rightarrow 0\end{subarray}}\int d^{4}xd^{4}ye^{-ip\cdot
x}\overrightarrow{\partial_{x}^{2}}\int\frac{d^{4}\ell}{(2\pi)^{4}}e^{-i\ell\cdot(x-y)}\times$
$\displaystyle\qquad\qquad\int_{0}^{+\infty}d\nu
e^{i\nu(\ell^{2}+i\epsilon)}(Y(x,\nu)-1)\overleftarrow{\partial_{y}^{2}}e^{ip'\cdot
y}.$ (32)
In the eikonal limit, where terms of the $k^{2}$ and $k_{i}\cdot k_{j}$ type
are neglected, more precisely $|k_{i}\cdot k_{j}|\ll|p\cdot k_{i}|$ or
$|p\cdot k_{j}|$, we have
$\displaystyle<p'|G^{c}(x,y|h)|p>=$
$\displaystyle\lim\limits_{\begin{subarray}{c}p^{2}\rightarrow 0\\\
p^{\prime 2}\rightarrow 0\end{subarray}}\int d^{4}xd^{4}ye^{-ip\cdot
x}\overrightarrow{\partial_{x}^{2}}\int\frac{d^{4}\ell}{(2\pi)^{4}}e^{-i\ell\cdot(x-y)}\int_{0}^{+\infty}d\nu
e^{i\nu(\ell^{2}+i\epsilon)}\times$
$\displaystyle\qquad\sum_{n=1}^{+\infty}\frac{(-i)^{n}\kappa^{n}}{n!}\prod\limits_{m=1}^{n}\int_{0}^{\nu}d\xi_{m}\int\frac{d^{4}k_{m}}{(2\pi)^{4}}\ell^{\mu_{m}}\ell^{\beta_{m}}\hat{h}_{\mu_{m}\beta_{m}}(k_{m})\times$
$\displaystyle\qquad e^{-ik_{m}\cdot x}e^{2i\sum\limits_{m=1}^{n}\ell\cdot
k_{m}(\nu-\xi_{m})}\overleftarrow{\partial_{y}^{2}}e^{ip'\cdot
y}\quad=$ $\displaystyle\lim\limits_{\begin{subarray}{c}p^{2}\rightarrow 0\\\
p^{\prime 2}\rightarrow
0\end{subarray}}p^{2}p^{\prime 2}(2\pi)^{4}\sum_{n=1}^{+\infty}\frac{(-i)^{n}\kappa^{n}}{n!}\int_{0}^{+\infty}d\nu
e^{i\nu(p^{\prime 2}+i\epsilon)}\times$
$\displaystyle\qquad\int\frac{d^{4}k_{1}\dots
d^{4}k_{n}}{(2\pi)^{4n}}\int_{0}^{\nu}d\xi_{1}\dots
d\xi_{n}\delta^{4}(p'-p+k_{1}+\dots k_{n})\times$
$\displaystyle\qquad e^{-\sum\limits_{m=1}^{n}2ip^{\prime}\cdot
k_{m}(\nu-\xi_{m})}p^{\prime\mu_{1}}p^{\prime\beta_{1}}\hat{h}_{\mu_{1}\beta_{1}}(k_{1})\dots
p^{\prime\mu_{n}}p^{\prime\beta_{n}}\hat{h}_{\mu_{n}\beta_{n}}(k_{n}).$
(33)
For the heavy line we have
$\displaystyle\lim\limits_{\begin{subarray}{c}q^{2}\rightarrow
M_{\sigma}^{2}\\\ q^{\prime 2}\rightarrow
M_{\sigma}^{2}\end{subarray}}(q^{2}-M_{\sigma}^{2})(q^{\prime 2}-M_{\sigma}^{2})(2\pi)^{4}\sum_{r=1}^{+\infty}\frac{(-i)^{r}\kappa^{r}}{r!}\int_{0}^{+\infty}d\nu_{1}e^{i\nu_{1}(q^{\prime 2}-M_{\sigma}^{2}+i\epsilon)}$
$\displaystyle\int\frac{d^{4}\tilde{k}_{1}\dots
d^{4}\tilde{k}_{r}}{(2\pi)^{4r}}\int_{0}^{\nu_{1}}d\xi_{1}\dots\int_{0}^{\nu_{1}}d\xi_{r}\delta^{4}(q'-q+\tilde{k}_{1}+\dots\tilde{k}_{r})\times$
$\displaystyle
e^{-\sum\limits_{m=1}^{r}2iq'\cdot\tilde{k}_{m}(\nu_{1}-\xi_{m})}q^{\prime\mu_{1}}q^{\prime\beta_{1}}\hat{h}_{\mu_{1}\beta_{1}}(\tilde{k}_{1})\dots
q^{\prime\mu_{r}}q^{\prime\beta_{r}}\hat{h}_{\mu_{r}\beta_{r}}(\tilde{k}_{r}).$ (34)
The integral over a $\xi_{m}$ variable is
$\int_{0}^{\nu}d\xi_{m}\exp[-2ip'\cdot k_{m}(\nu-\xi_{m})]=\frac{1}{2ip'\cdot
k_{m}}\left[1-\exp(-2i\nu p'\cdot k_{m})\right].$ (35)
The eikonal $i\mathcal{T}$ matrix element for the process (1) in the specified
kinematics is obtained from (8)
$\displaystyle
i\mathcal{T}(p,p\text{\textquoteright},q,q\text{\textquoteright})=\lim\limits_{p_{i}^{2}\rightarrow
m_{i}^{2}}p^{2}p\text{\textquoteright}^{2}(q^{2}-M_{\sigma}^{2})(q\text{\textquoteright}^{2}-M_{\sigma}^{2})(2\pi)^{4}\times$
$\displaystyle\sum_{n=1}^{+\infty}\frac{(-1)^{n}\kappa^{2n}}{n!}\int_{0}^{+\infty}d\nu\,e^{i\nu(p\text{\textquoteright}^{2}+i\epsilon)}\int_{0}^{+\infty}d\nu_{1}\,e^{i\nu_{1}[(q\text{\textquoteright}^{2}-M_{\sigma}^{2})+i\epsilon]}$
$\displaystyle\prod_{m=1}^{n}\frac{i}{2}\int\frac{d^{4}k_{m}}{(2\pi)^{4}}p^{\prime\mu_{m}}p^{\prime\beta_{m}}\frac{2\eta_{\mu_{m}0}\eta_{\beta_{m}0}-\eta_{\mu_{m}\beta_{m}}\eta_{00}}{k_{m}^{2}}\frac{1}{2ip\text{\textquoteright}\cdot
k_{m}}M_{\sigma}^{2}\frac{1}{-2iq\text{\textquoteright}\cdot k_{m}}$
$\displaystyle\left[1-e^{(-2i\nu p\text{\textquoteright}\cdot
k_{m})}\right]\left[1-e^{(2i\nu_{1}q\text{\textquoteright}\cdot
k_{m})}\right]\delta^{4}(q\text{\textquoteright}-q-k_{1}-k_{2}-\dots k_{n}),$
(36)
where $\lim\limits_{p_{i}^{2}\rightarrow m_{i}^{2}}$ compactly denotes
all the on-shell limits involved in (36). The momentum-space graviton
propagator is given in (10). We now use the so-called eikonal identity
[10],[35],
$\displaystyle\sum_{\pi}\frac{1}{x+i\epsilon}\frac{1}{x+a_{\pi(1)}+i\epsilon}\dots\frac{1}{x+a_{\pi(1)}+\dots
a_{\pi(n)}+i\epsilon}=$ $\displaystyle(-i)\int_{0}^{+\infty}d\nu e^{i\nu
x}\prod_{m}\frac{1-e^{i\nu a_{m}}}{a_{m}+i\epsilon},$ (37)
where $\pi$ runs over the set of permutations of $n$ indices. The sum over
permutations is performed by using the identity proved in [19],
$\displaystyle\sum_{perms\,of\,\omega_{i}}\delta(\omega_{1}+\dots\omega_{n})\frac{1}{\omega_{1}+i\epsilon}\dots\frac{1}{\omega_{1}+\dots+\omega_{n-1}+i\epsilon}$
$\displaystyle=\delta(\omega_{1}+\dots\omega_{n})\sum_{\pi}\prod_{j=1}^{n-1}\frac{1}{\omega_{\pi(1)}+\dots+\omega_{\pi(j)}+i\epsilon}=(-2\pi
i)^{n-1}\delta(\omega_{1})\dots\delta(\omega_{n}).$ (38)
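The eikonal identity (37) can be spot-checked numerically for $n=2$: a positive imaginary part of $x$ plays the role of the $+i\epsilon$ prescription and makes the $\nu$-integral convergent. A Python sketch with arbitrary test values:

```python
import cmath
from itertools import permutations

def eikonal_lhs(x, a):
    """Sum over permutations of the nested-pole product 1/x * 1/(x+a_p1) * ... as in (37)."""
    total = 0j
    for perm in permutations(a):
        term = 1.0 / x
        partial = x
        for a_m in perm:
            partial = partial + a_m
            term = term / partial
        total += term
    return total

def eikonal_rhs(x, a, L=60.0, steps=120000):
    """-i times the integral over nu in [0, inf) of exp(i nu x) * prod_m (1-exp(i nu a_m))/a_m,
    evaluated by the midpoint rule; Im(x) > 0 replaces the +i*epsilon and ensures convergence."""
    h = L / steps
    total = 0j
    for i in range(steps):
        nu = (i + 0.5) * h
        f = cmath.exp(1j * nu * x)
        for a_m in a:
            f *= (1.0 - cmath.exp(1j * nu * a_m)) / a_m
        total += f * h
    return -1j * total

x = 0.7 + 0.4j          # test value; the positive imaginary part mimics +i*epsilon
a = [1.3, -0.6]
assert abs(eikonal_lhs(x, a) - eikonal_rhs(x, a)) < 1e-4
```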
To see how the previous identities work together with the on-shell limit,
consider the heavy scalar line and take
$\displaystyle-i\int_{0}^{+\infty}d\nu_{1}e^{i\nu_{1}[q^{\prime
2}-M_{\sigma}^{2}+i\epsilon]}\prod_{m=1}^{n}\frac{1-\exp(2i\nu_{1}q’\cdot
k_{m})}{2q’\cdot k_{m}}=\sum_{\pi}\frac{1}{q^{\prime
2}-M_{\sigma}^{2}+i\epsilon}$ $\displaystyle\frac{1}{q^{\prime
2}-M_{\sigma}^{2}+2q’\cdot k_{\pi(1)}+i\epsilon}\dots\frac{1}{q^{\prime
2}-M_{\sigma}^{2}+2q’\cdot(k_{\pi(1)}+\dots k_{\pi(n)})+i\epsilon}.$ (39)
Using the momentum-conservation delta function
$\delta^{4}(q’-q-k_{1}-k_{2}-\dots k_{n})$, in the static
limit, $q\sim q’=(M_{\sigma},0)$, the last fraction amounts to
$\frac{1}{q^{\prime
2}-M_{\sigma}^{2}+2q’\cdot(q’-q)}\sim\frac{1}{q^{2}-M_{\sigma}^{2}},$ (40)
consequently
$\displaystyle\lim\limits_{\begin{subarray}{c}q^{2}\rightarrow
M_{\sigma}^{2}\\\ q’^{2}\rightarrow
M_{\sigma}^{2}\end{subarray}}\delta(k^{0}_{1}+\dots
k^{0}_{n})(q^{2}-M_{\sigma}^{2})(q^{\prime 2}-M_{\sigma}^{2})$
$\displaystyle\sum_{\pi}\frac{1}{q^{\prime
2}-M_{\sigma}^{2}+i\epsilon}\dots\frac{1}{q^{\prime
2}-M_{\sigma}^{2}+2q’\cdot(k_{\pi(1)}+\dots k_{\pi(n)})}=$
$\displaystyle\frac{(-2\pi
i)^{n-1}}{(2M_{\sigma})^{n-1}}\delta(k^{0}_{1})\dots\delta(k^{0}_{n}).$ (41)
By the use of (37), (38), (39), (40) and (41) the amplitude (36) becomes
$\displaystyle
i\mathcal{T}(p,p\text{\textquoteright};q,q\text{\textquoteright})=$
$\displaystyle\lim\limits_{p_{i}^{2}\rightarrow
m_{i}^{2}}p^{2}p\text{\textquoteright}^{2}(2\pi)^{4}\sum_{n=1}^{+\infty}\frac{(-1)^{n}\kappa^{2n}}{n!}\int_{0}^{+\infty}d\nu\,e^{i\nu(p\text{\textquoteright}^{2}+i\epsilon)}\delta^{3}(\vec{\Delta}+\vec{k}_{1}+\vec{k}_{2}+\dots\vec{k}_{n})$
$\displaystyle\left(\prod_{m=1}^{n}\int\frac{d^{3}k_{m}}{(2\pi)^{4}}\frac{i}{\vec{k}_{m}^{2}}E_{\phi}^{2}M_{\sigma}^{2}\frac{1-\exp(2i\nu\vec{p}^{\prime}\cdot\vec{k}_{m})}{2i\vec{p}^{\prime}\cdot\vec{k}_{m}}\right)\frac{(-2\pi
i)^{n-1}}{(2M_{\sigma})^{n-1}}.$ (42)
Writing the components of the three-vector as
$\vec{k}_{m}=(k^{z}_{m},\vec{k}^{\perp}_{m})$ and noting that in our
reference frame $\Delta^{z}=0$, we can consider the expansion at first order
in $\frac{\vec{\Delta}\cdot\vec{k}_{m}^{\perp}}{2E_{\phi}k_{m}^{z}}$ of the product
$\prod_{m=1}^{n}\frac{1-\exp(2i\nu\vec{p}\,\,’\cdot\vec{k}_{m})}{2i\vec{p}\,’\cdot\vec{k}_{m}}.$
(43)
From (2), for our desired approximation we need
$2i\vec{p}\,\,’\cdot\vec{k}_{m}=2iE_{\phi}k_{m}^{z}+i\vec{\Delta}\cdot\vec{k}_{m}^{\perp}.$
(44)
Therefore, up to linear order in $\vec{\Delta}$,
$\displaystyle\prod_{m=1}^{n}\frac{1-\exp(2i\nu\vec{p}\,\,’\cdot\vec{k}_{m})}{2i\vec{p}\,’\cdot\vec{k}_{m}}=\prod_{m=1}^{n}\frac{1-e^{2i\nu
E_{\phi}k_{m}^{z}}}{2iE_{\phi}k_{m}^{z}}+$
$\displaystyle\sum_{m=1}^{n}i\nu\vec{\Delta}\cdot\vec{k}_{m}^{\perp}\prod_{l=1}^{n}\frac{1-e^{2i\nu
E_{\phi}k_{l}^{z}}}{2iE_{\phi}k_{l}^{z}}-\nu\sum_{m=1}^{n}\frac{\vec{\Delta}\cdot\vec{k}_{m}^{\perp}}{2E_{\phi}k_{m}^{z}}\prod_{l=1,l\neq
m}^{n}\frac{1-e^{2i\nu E_{\phi}k_{l}^{z}}}{2iE_{\phi}k_{l}^{z}}$
$\displaystyle-\sum_{m=1}^{n}\frac{\vec{\Delta}\cdot\vec{k}_{m}^{\perp}}{2k_{m}^{z}E_{\phi}}\prod_{l=1}^{n}\frac{1-e^{2i\nu
E_{\phi}k_{l}^{z}}}{2iE_{\phi}k_{l}^{z}}.$ (45)
In (45) the term proportional to
$\sum\limits_{m=1}^{n}\vec{\Delta}\cdot\vec{k}_{m}^{\perp}$ is of order
${\Delta}^{2}$ because of the
$\delta^{2}(\vec{\Delta}+\vec{k}_{1}+\vec{k}_{2}+\dots\vec{k}_{n})$ and is
therefore disregarded in our approximation. The last two terms on the
r.h.s. of (45) give a vanishing contribution to the scattering amplitude
since they cancel each other: both are dominated by the region
$k_{m}^{z}\rightarrow 0$, where
$\lim_{k_{m}^{z}\rightarrow 0}\frac{1-e^{2i\nu
E_{\phi}k_{m}^{z}}}{2iE_{\phi}k_{m}^{z}}=-\nu,$ (46)
consequently (45) is approximated as
$\prod_{m=1}^{n}\frac{1-\exp(2i\nu\vec{p}\,\,’\cdot\vec{k}_{m})}{2i\vec{p}\,’\cdot\vec{k}_{m}}\sim\prod_{m=1}^{n}\frac{1-e^{2i\nu
E_{\phi}k_{m}^{z}}}{2iE_{\phi}k_{m}^{z}}$ (47)
from which the amplitude in the eikonal approximation is obtained by applying
the eikonal identities (37), (38) together with the condition of vanishing
total momentum along the $z$-axis, $\delta(k_{1}^{z}+\dots+k_{n}^{z})$:
$\displaystyle
i\mathcal{T}(p,p\text{\textquoteright};q,q\text{\textquoteright})\equiv
i\mathcal{T}(\vec{\Delta})=$ (48)
$\displaystyle-4(2\pi)^{2}iE_{\phi}M_{\sigma}\sum_{n=1}^{+\infty}\frac{1}{n!}\left(\frac{i\kappa^{2}E_{\phi}M_{\sigma}}{16\pi^{2}}\right)^{n}\prod_{m=1}^{n}\int\frac{d^{2}k_{m}^{\perp}}{\vec{k}_{m}^{\perp
2}}(2\pi)^{2}\delta^{2}(\vec{k}_{1}^{\perp}+\dots\vec{k}_{n}^{\perp}+\vec{\Delta}).$
The expression found in (48) is dimensionless in natural units, as it must be
for a $2\rightarrow 2$ process. Now let us Fourier transform to the
transverse impact parameter space,
$\displaystyle
i\widetilde{\mathcal{T}}(\vec{b}^{\perp})\equiv\int\frac{d^{2}\Delta}{(2\pi)^{2}}e^{i\vec{b}^{\perp}\cdot\vec{\Delta}}\,i\mathcal{T}(\Delta)=$
$\displaystyle-4iE_{\phi}M_{\sigma}\sum_{n=1}^{+\infty}\frac{1}{n!}\left(\frac{i\kappa^{2}E_{\phi}M_{\sigma}}{16\pi^{2}}\right)^{n}\int
d^{2}k_{1}^{\perp}\dots
d^{2}k_{n}^{\perp}\prod_{i=1}^{n}\left[\frac{e^{-i\vec{b}^{\perp}\cdot\vec{k}_{i}^{\perp}}}{\vec{k}_{i}^{\perp
2}}\right],$ (49)
which exponentiates as
$i\widetilde{\mathcal{T}}(\vec{b}^{\perp})=-4iE_{\phi}M_{\sigma}(e^{i\chi_{0}}-1)$
(50)
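The series in (50) is just the Taylor expansion of $e^{i\chi_{0}}$ with the $n=0$ term removed; a one-line numerical sketch (plain Python, $\chi$ is an arbitrary test value):

```python
import cmath
import math

def ladder_sum(chi, n_max=40):
    """Partial sum over n = 1..n_max of (i chi)^n / n!, the resummed ladder series of (50)."""
    return sum((1j * chi) ** n / math.factorial(n) for n in range(1, n_max + 1))

chi = 0.9 - 0.2j        # stands in for the eikonal phase chi_0 (test value)
assert abs(ladder_sum(chi) - (cmath.exp(1j * chi) - 1.0)) < 1e-12
```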
where the eikonal phase is
$\chi_{0}({\vec{b}}^{\perp})=\frac{\kappa^{2}M_{\sigma}E_{\phi}}{16\pi^{2}}\int\frac{d^{2}k^{\perp}}{\vec{k}^{\perp
2}}e^{-i\vec{b}^{\perp}\cdot\vec{k}^{\perp}}.$ (51)
In dimensional regularization, continuing to $d>2$ dimensions [36],
$\int
d^{d}k\frac{e^{-i\vec{b}\cdot\vec{k}}}{\vec{k}^{2}}=\pi^{\frac{d}{2}}2^{d-2}\Gamma\left(-1+\frac{d}{2}\right)|\vec{b}|^{2-d}.$
(52)
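At $d=3$ this transform is elementary, $\int d^{3}k\,e^{-i\vec{b}\cdot\vec{k}}/\vec{k}^{2}=2\pi^{2}/|\vec{b}|$, which fixes the normalization; the quick check below (plain Python) evaluates the coefficient of $|\vec{b}|^{2-d}$, with the power of two taken as $2^{d-2}$.

```python
import math

def ft_coulomb_coeff(d):
    """Coefficient of |b|^(2-d) in the d-dimensional Fourier transform of 1/k^2:
    pi^(d/2) * 2^(d-2) * Gamma(d/2 - 1)."""
    return math.pi ** (d / 2) * 2 ** (d - 2) * math.gamma(d / 2 - 1)

# d = 3 reproduces the known closed form 2 pi^2 / |b|.
assert abs(ft_coulomb_coeff(3) - 2 * math.pi ** 2) < 1e-12
```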
The eikonal phase is indeed an infrared-dominated quantity, and a finite
result for the scattering amplitude is obtained after the resummation [12].
In impact parameter space, (49) is dominated by the solution of the
saddle-point equation for the integral over $\vec{b}^{\perp}$
$|\vec{\Delta}|\cos\theta-2GM_{\sigma}E_{\phi}|\vec{b}^{\perp}|^{1-d}=0$ (53)
and at $d=2$ the order of magnitude of the saddle point is
$|\vec{b}^{\perp}|\sim\frac{GM_{\sigma}E_{\phi}}{|\vec{\Delta}|}.$ (54)
In the ultrarelativistic limit at small momentum transfer the impact
parameter is large with respect to the Schwarzschild radius of the target
particle, $R_{s}=GM_{\sigma}$, so the contributions around $\vec{b}=0$
will be disregarded. We are now going to explore this regime of large
impact parameter scattering by going beyond the eikonal approximation.
## 6 Next-to-eikonal approximation
From (30), the next-to-eikonal corrections to the scattering amplitude, in
which the leading quadratic dependence on the virtual graviton momenta is
retained, receive the following contributions
$\displaystyle i\mathcal{T}(p,p’,q,q’)_{NE}\equiv\lim_{p_{i}^{2}\rightarrow
m_{i}^{2}}p^{2}p’^{2}(q^{2}-M_{\sigma}^{2})(q’^{2}-M_{\sigma}^{2})\sum_{n=1}^{+\infty}\frac{(-1)^{n}\kappa^{2n}}{n!}$
$\displaystyle\int_{0}^{+\infty}d\nu
e^{i\nu(p’^{2}+i\epsilon)}\int_{0}^{+\infty}d\nu_{1}e^{i\nu_{1}[(q’^{2}-M_{\sigma}^{2})+i\epsilon]}\prod_{m=1}^{n}\int\frac{d^{4}k_{m}}{(2\pi)^{4}}E_{\phi}^{2}M_{\sigma}^{2}\frac{i}{k_{m}^{2}}$
$\displaystyle\int_{0}^{\nu}d\xi_{1}\dots\int_{0}^{\nu}d\xi_{n}\int_{0}^{\nu_{1}}d\tilde{\xi}_{1}\dots\int_{0}^{\nu_{1}}d\tilde{\xi}_{n}(2\pi)^{4}\delta^{4}(q-q’-k_{1}-\dots
k_{n})$ $\displaystyle(-i)\exp\left[\sum_{m=1}^{n}(-2ip’)\cdot
k_{m}(\nu-\xi_{m})\right]\exp\left[\sum_{\tilde{m}=1}^{n}(2iq’)\cdot
k_{\tilde{m}}(\nu_{1}-\tilde{\xi}_{\tilde{m}})\right]$
$\displaystyle\left[\sum_{m,m_{1}=1}^{n}k_{m}\cdot
k_{m_{1}}\left(\nu-\frac{\xi_{m}+\xi_{m_{1}}}{2}-\frac{|\xi_{m}-\xi_{m_{1}}|}{2}\right)\right.$
$\displaystyle\left.+\sum_{m_{2},m_{3}=1}^{n}k_{m_{2}}\cdot
k_{m_{3}}\left(\nu_{1}-\frac{\tilde{\xi}_{m_{2}}+\tilde{\xi}_{m_{3}}}{2}-\frac{|\tilde{\xi}_{m_{2}}-\tilde{\xi}_{m_{3}}|}{2}\right)\right].$
(55)
Observe that
$\displaystyle\sum_{m,m_{1}=1}^{n}k_{m}\cdot
k_{m_{1}}\left(\nu-\frac{\xi_{m}+\xi_{m_{1}}}{2}-\frac{|\xi_{m}-\xi_{m_{1}}|}{2}\right)$
$\displaystyle+\sum_{m_{2},m_{3}=1}^{n}k_{m_{2}}\cdot
k_{m_{3}}\left(\nu_{1}-\frac{\tilde{\xi}_{m_{2}}+\tilde{\xi}_{m_{3}}}{2}-\frac{|\tilde{\xi}_{m_{2}}-\tilde{\xi}_{m_{3}}|}{2}\right)=$
$\displaystyle\sum_{m=1}^{n}k_{m}^{2}(\nu-\xi_{m})+\sum_{m_{1}=1}^{n}k_{m_{1}}^{2}(\nu_{1}-\tilde{\xi}_{m_{1}})$
$\displaystyle+\sum_{\begin{subarray}{c}m_{2},m_{3}=1\\\ m_{2}\neq
m_{3}\end{subarray}}^{n}k_{m_{2}}\cdot
k_{m_{3}}\left(\nu-\frac{\xi_{m_{2}}+\xi_{m_{3}}}{2}-\frac{|\xi_{m_{2}}-\xi_{m_{3}}|}{2}\right)$
$\displaystyle+\sum_{\begin{subarray}{c}m_{4},m_{5}=1\\\ m_{4}\neq
m_{5}\end{subarray}}^{n}k_{m_{4}}\cdot
k_{m_{5}}\left(\nu_{1}-\frac{\tilde{\xi}_{m_{4}}+\tilde{\xi}_{m_{5}}}{2}-\frac{|\tilde{\xi}_{m_{4}}-\tilde{\xi}_{m_{5}}|}{2}\right).$
(56)
Firstly consider the following contribution in dimensional regularization
$\displaystyle
i\mathcal{T}(p,p’,q,q’)^{1}_{NE}\equiv\lim_{p_{i}^{2}\rightarrow
m_{i}^{2}}p^{2}p’^{2}(q^{2}-M_{\sigma}^{2})(q’^{2}-M_{\sigma}^{2})\sum_{n=1}^{+\infty}\frac{(-1)^{n}\kappa^{2n}}{n!}$
$\displaystyle\int_{0}^{+\infty}d\nu
e^{i\nu(p’^{2}+i\epsilon)}\int_{0}^{+\infty}d\nu_{1}e^{i\nu_{1}[(q’^{2}-M_{\sigma}^{2})+i\epsilon]}\prod_{m=1}^{n}\int\frac{d^{4-2\epsilon}k_{m}}{(2\pi)^{4}}$
$\displaystyle
p^{\prime\mu_{m}}p^{\prime\beta_{m}}\frac{i}{2}\frac{2\eta_{\mu_{m}0}\eta_{\beta_{m}0}-\eta_{\mu_{m}\beta_{m}}\eta_{00}}{k_{m}^{2}}M_{\sigma}^{2}\int_{0}^{\nu}d\xi_{1}\dots\int_{0}^{\nu}d\xi_{n}\int_{0}^{\nu_{1}}d\tilde{\xi}_{1}\dots$
$\displaystyle\int_{0}^{\nu_{1}}d\tilde{\xi}_{n}(-i)\exp\left[\sum_{m=1}^{n}(-2ip’)\cdot
k_{m}(\nu-\xi_{m})\right]\exp\left[\sum_{\tilde{m}=1}^{n}(2iq’)\cdot
k_{\tilde{m}}(\nu_{1}-\tilde{\xi}_{\tilde{m}})\right]$
$\displaystyle\sum_{r=1}^{n}k_{r}^{2}(\nu-\xi_{r})(2\pi)^{4}\delta^{4}(q-q’-k_{1}-\dots
k_{n}).$ (57)
By using formulas (38), (39) and (40) we arrive at
$\displaystyle\lim_{p_{i}^{2}\rightarrow
m_{i}^{2}}p^{2}p’^{2}\sum_{n=1}^{+\infty}\frac{(-1)^{n}\kappa^{2n}}{n!}\int_{0}^{+\infty}d\nu
e^{i\nu(p’^{2}+i\epsilon)}\prod_{m=1}^{n}\int\frac{d^{3-2\epsilon}k_{m}}{(2\pi)^{4}}(p^{\prime
0}M_{\sigma})^{2}\frac{-i}{\vec{k}_{m}^{2}}$
$\displaystyle\int_{0}^{\nu}d\xi_{1}\dots\int_{0}^{\nu}d\xi_{n}(+i)\exp\left[\sum_{s=1}^{n}2i\vec{p^{\prime}}\cdot\vec{k}_{s}(\nu-\xi_{s})\right]\sum_{r=1}^{n}\vec{k}_{r}^{2}(\nu-\xi_{r})\frac{(-2\pi
i)^{n-1}}{(2M_{\sigma})^{n-1}}$
$\displaystyle(2\pi)^{4}\delta^{3}(\vec{\Delta}+\vec{k}_{1}+\dots\vec{k}_{n}).$
(58)
The following identity holds [10],[32],[35],
$\displaystyle\int_{0}^{\nu}d\xi_{1}\dots\int_{0}^{\nu}d\xi_{n}\sum_{r=1}^{n}\vec{k}_{r}^{2}(\nu-\xi_{r})\exp\left[\sum_{s=1}^{n}2i\vec{p^{\prime}}\cdot\vec{k}_{s}(\nu-\xi_{s})\right]=$
$\displaystyle\sum_{r=1}^{n}\vec{k}_{r}^{2}\frac{\partial}{\partial(2i\vec{p^{\prime}}\cdot\vec{k}_{r})}\prod_{s=1}^{n}\frac{-1}{2i\vec{p^{\prime}}\cdot\vec{k}_{s}}[1-e^{2i\nu\vec{p^{\prime}}\cdot\vec{k}_{s}}].$
(59)
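The $n=1$ case of (59) can be checked numerically by writing $a=2i\vec{p}^{\,\prime}\cdot\vec{k}$ and comparing a direct quadrature of the $\xi$-integral against the derivative of $(-1/a)(1-e^{a\nu})$; a Python sketch with arbitrary test values:

```python
import cmath

def lhs_integral(a, nu, steps=40000):
    """Midpoint rule for the integral over xi in [0, nu] of (nu - xi) exp[a (nu - xi)],
    the n = 1 case of (59) with a = 2i p'.k."""
    h = nu / steps
    total = 0j
    for i in range(steps):
        u = nu - (i + 0.5) * h    # u = nu - xi
        total += u * cmath.exp(a * u) * h
    return total

def rhs_derivative(a, nu, da=1e-5):
    """Central difference in a of (-1/a)(1 - exp(a nu)), matching the a-derivative in (59)."""
    g = lambda z: (-1.0 / z) * (1.0 - cmath.exp(z * nu))
    return (g(a + da) - g(a - da)) / (2 * da)

a, nu = 0.9j, 1.4       # arbitrary test values, a purely imaginary like 2i p'.k
assert abs(lhs_integral(a, nu) - rhs_derivative(a, nu)) < 1e-6
```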
As a consequence,
$\displaystyle\lim_{p_{i}^{2}\rightarrow
m_{i}^{2}}p^{2}p’^{2}\int_{0}^{+\infty}d\nu
e^{i\nu(p’^{2}+i\epsilon)}\prod_{s=1}^{n}\frac{-1}{2i\vec{p^{\prime}}\cdot\vec{k}_{s}}[1-e^{2i\nu\vec{p^{\prime}}\cdot\vec{k}_{s}}]=$
$\displaystyle
i^{n}\frac{1}{\vec{p^{\prime}}\cdot\vec{k}_{1}}\frac{1}{\vec{p^{\prime}}\cdot\vec{k}_{2}}\dots\frac{1}{\vec{p^{\prime}}\cdot\vec{k}_{n-1}}$
(60)
where the eikonal identities (37) and (38) have been used. The contribution to
the amplitude is therefore
$\displaystyle\sum_{n=1}^{+\infty}\frac{(-1)^{n}\kappa^{2n}}{n!}\frac{i(-2\pi
i)^{n-1}}{(2M_{\sigma})^{n-1}}\prod_{m=1}^{n}\int\frac{d^{3-2\epsilon}k_{m}}{(2\pi)^{4}}(p^{\prime
0}M_{\sigma})^{2}\frac{1}{\vec{k}_{m}^{2}}$ (61)
$\displaystyle\sum_{r=1}^{n-1}\vec{k}_{r}^{2}\frac{\partial}{\partial(2i\vec{p^{\prime}}\cdot\vec{k}_{r})}\left[\frac{1}{\vec{p^{\prime}}\cdot\vec{k}_{1}}\frac{1}{\vec{p^{\prime}}\cdot\vec{k}_{2}}\dots\frac{1}{\vec{p^{\prime}}\cdot\vec{k}_{n-1}}\right](2\pi)^{4}\delta^{3-2\epsilon}(\vec{\Delta}+\vec{k}_{1}+\dots\vec{k}_{n}).$
Due to (49), the contribution to the amplitude from terms proportional to
$k^{2}$ is concentrated around $\vec{b}=0$ and is therefore disregarded for
our purposes due to (54). In (55), start by considering the sums with
distinct indices
$\displaystyle\sum\limits_{\begin{subarray}{c}m,m_{1}=1\\\ m\neq
m_{1}\end{subarray}}^{n}k_{m}\cdot
k_{m_{1}}\left(\theta(\xi_{m}-\xi_{m_{1}})(\nu-\xi_{m})+\theta(\xi_{m_{1}}-\xi_{m})(\nu-\xi_{m_{1}})\right)+$
$\displaystyle\sum\limits_{\begin{subarray}{c}m_{2},m_{3}=1\\\ m_{2}\neq
m_{3}\end{subarray}}^{n}k_{m_{2}}\cdot
k_{m_{3}}(\theta(\tilde{\xi}_{m_{2}}-\tilde{\xi}_{m_{3}})(\nu_{1}-\tilde{\xi}_{m_{2}})+\theta(\tilde{\xi}_{m_{3}}-\tilde{\xi}_{m_{2}})(\nu_{1}-\tilde{\xi}_{m_{3}})).$
We are going to prove that, since $M_{\sigma}\gg E_{\phi}$, the only sum needed is
$\displaystyle\sum_{m,m_{1}=1,m\neq m_{1}}^{n}k_{m}\cdot
k_{m_{1}}\left(\theta(\xi_{m}-\xi_{m_{1}})(\nu-\xi_{m})+\theta(\xi_{m_{1}}-\xi_{m})(\nu-\xi_{m_{1}})\right).$
(63)
Let $m,m_{1}\in\\{1,\dots n\\}$ with $m\neq m_{1}$, and consider the integral
$\displaystyle\int_{0}^{\nu}d\xi_{m}d\xi_{m_{1}}\exp[-2ip’\cdot
k_{m}(\nu-\xi_{m})-2ip’\cdot k_{m_{1}}(\nu-\xi_{m_{1}})]$
$\displaystyle\quad\times[\theta(\xi_{m}-\xi_{m_{1}})(\nu-\xi_{m})+\theta(\xi_{m_{1}}-\xi_{m})(\nu-\xi_{m_{1}})]=$
$\displaystyle-\frac{1}{(2ip’\cdot k_{m})(2ip’\cdot
k_{m_{1}})(2ip’\cdot(k_{m}+k_{m_{1}}))}\left[1-e^{(-2ip’\cdot(k_{m}+k_{m_{1}})\nu)}\right]$
(64) $\displaystyle+\quad\frac{\nu}{(2ip’\cdot k_{m})(2ip’\cdot
k_{m_{1}})}e^{(-2ip’\cdot(k_{m}+k_{m_{1}})\nu)}\quad+$ (65)
$\displaystyle\frac{2ip’\cdot(k_{m}+k_{m_{1}})}{(2ip’\cdot
k_{m})^{2}(2ip’\cdot k_{m_{1}})^{2}}\left(1-e^{(-2ip’\cdot
k_{m}\nu)}\right)\left(1-e^{(-2ip’\cdot k_{m_{1}}\nu)}\right)$ (66)
The contribution of the term (64) to the $i\mathcal{T}$ matrix amounts to
$\displaystyle
i\mathcal{T}_{NE}^{2a}\equiv\lim\limits_{\begin{subarray}{c}p^{2}\rightarrow
0\\\ p’^{2}\rightarrow 0\end{subarray}}p^{2}p^{\prime
2}(2\pi)^{4}\sum_{n=2}^{\infty}\frac{i^{n}\kappa^{2n}}{n!}\int_{0}^{+\infty}d\nu
e^{i\nu(p’^{2}+i\epsilon)}\frac{(E_{\phi}M_{\sigma})^{2n}}{(2\pi)^{4n}}\sum\limits_{\begin{subarray}{c}\tilde{m},m_{1}=1\\\
\tilde{m}\neq m_{1}\end{subarray}}^{n}$
$\displaystyle\int\frac{d^{3-2\epsilon}k_{1}}{\vec{k}_{1}^{2}}\dots\frac{d^{3-2\epsilon}k_{n}}{\vec{k}_{n}^{2}}\delta^{3-2\epsilon}(\vec{k}_{1}+\dots+\vec{k}_{n}+\vec{\Delta})\vec{k}_{\tilde{m}}\cdot\vec{k}_{m_{1}}\frac{i^{n}(-2\pi
i)^{n-1}}{(2M_{\sigma})^{n-1}}$
$\displaystyle\prod\limits_{\begin{subarray}{c}m=1\\\
m\neq\tilde{m},m_{1}\end{subarray}}^{n}\frac{1-\exp(2i\nu\vec{p}’\cdot\vec{k}_{m})}{-2i\vec{p}’\cdot\vec{k}_{m}}\frac{[1-\exp(2i\vec{p}’\cdot(\vec{k}_{\tilde{m}}+\vec{k}_{m_{1}})\nu)]}{(-2i\vec{p}’\cdot\vec{k}_{\tilde{m}})(-2i\vec{p}’\cdot\vec{k}_{m_{1}})(-2i\vec{p}’\cdot(\vec{k}_{\tilde{m}}+\vec{k}_{m_{1}}))}$
(67)
Once again by the use of (38), (39), (40) we obtain
$\displaystyle\lim\limits_{\begin{subarray}{c}p^{2}\rightarrow 0\\\
p’^{2}\rightarrow 0\end{subarray}}p^{2}p^{\prime
2}\delta(k^{z}_{1}+\dots+k^{z}_{n})\prod\limits_{\begin{subarray}{c}m=1\\\
m\neq\tilde{m},m_{1}\end{subarray}}^{n}\frac{1-\exp(2i\nu\vec{p}’\cdot\vec{k}_{m})}{-2i\vec{p}’\cdot\vec{k}_{m}}\times$
$\displaystyle\frac{[1-\exp(2i\vec{p}’\cdot(\vec{k}_{\tilde{m}}+\vec{k}_{m_{1}})\nu)]}{(-2i\vec{p}’\cdot\vec{k}_{\tilde{m}})(-2i\vec{p}’\cdot\vec{k}_{m_{1}})(-2i\vec{p}’\cdot(k_{\tilde{m}}+k_{m_{1}}))}=$
$\displaystyle\frac{(-2\pi
i)^{n-2}}{(2E_{\phi})^{n-2}}\delta(k_{1}^{z})\dots\delta(k_{n-2}^{z})\delta(k^{z}_{\tilde{m}}+k^{z}_{m_{1}})$
(68)
and replacing into (67) we obtain
$\displaystyle
i\mathcal{T}_{NE}^{2a}=i\sum_{n=2}^{\infty}\left(\frac{i\kappa^{2}E_{\phi}M_{\sigma}}{16\pi^{2}}\right)^{n}\frac{1}{(n-2)!}\frac{M_{\sigma}}{2\pi}\left(\int\frac{d^{2-\epsilon}k_{m}}{\vec{k}_{m}^{\perp
2}}\right)^{n-2}$
$\displaystyle\int\frac{d^{3-\epsilon}k_{\tilde{m}}}{\vec{k}_{\tilde{m}}^{2}}\frac{d^{3-2\epsilon}k_{m_{1}}}{\vec{k}_{m_{1}}^{2}}\frac{\vec{k}_{\tilde{m}}\cdot\vec{k}_{m_{1}}}{k_{\tilde{m}}^{z}k_{m_{1}}^{z}}\delta(k_{\tilde{m}}^{z}+k_{m_{1}}^{z})(2\pi)^{2}\delta^{2}(\vec{k}^{\perp}_{1}+\dots+\vec{k}^{\perp}_{n}+\vec{\Delta})$
(69)
where $\frac{1}{2(n-2)!}=\frac{1}{n!}\times{{n}\choose{2}}$ comes from the sum
over the equivalent choices of two elements out of a set of $n$ elements.
The contribution from (65) to the next-to-eikonal approximation of the
scattering amplitude (55) is the same as (69), as can be understood from the
replacement in (65)
$-\nu\delta(k^{z}_{m}+k^{z}_{m_{1}})=\delta(k^{z}_{m}+k^{z}_{m_{1}})\frac{1-e^{2i\nu
E_{\phi}(k^{z}_{m}+k^{z}_{m_{1}})}}{2iE_{\phi}(k^{z}_{m}+k^{z}_{m_{1}})},$
(70)
which holds as a consequence of the eikonal identities. Due to the
$\delta(k^{z}_{m}+k^{z}_{m_{1}})$, the exponential factor in (65) amounts to
one. As for the contribution of (66) to the Fourier transform of the
amplitude (55), it vanishes because each ladder diagram is proportional to
$\int_{-\infty}^{+\infty}\frac{dk_{1}^{z}}{k_{1}^{z2}+k_{1}^{\perp
2}}\frac{\delta(k_{1}^{z})}{k_{1}^{z}}\int_{-\infty}^{+\infty}\frac{dk_{2}^{z}}{k_{2}^{z2}+k_{2}^{\perp
2}}\delta(k_{2}^{z})=0.$ (71)
By taking the Fourier transform to the impact parameter space we obtain
$\displaystyle
i\widetilde{\mathcal{T}}(\vec{b}^{\perp})_{NE}=\int\frac{d^{2-\epsilon}\vec{\Delta}}{(2\pi)^{2}}e^{i\vec{\Delta}\cdot\vec{b}^{\perp}}i\mathcal{T}_{NE}^{2a}=$
$\displaystyle
i\int\frac{d^{2-\epsilon}\vec{\Delta}}{(2\pi)^{2}}e^{i\vec{\Delta}\cdot\vec{b}^{\perp}}\sum_{n=2}^{\infty}\left(\frac{i\kappa^{2}E_{\phi}M_{\sigma}}{16\pi^{2}}\right)^{n}\frac{2}{(n-2)!}\frac{M_{\sigma}}{2\pi}\left(\int\frac{d^{2-2\epsilon}k_{m}}{\vec{k}_{m}^{\perp
2}}\right)^{n-2}$
$\displaystyle\int\frac{d^{3-2\epsilon}k_{\tilde{m}}}{\vec{k}_{\tilde{m}}^{2}}\frac{d^{3-2\epsilon}k_{m_{1}}}{\vec{k}_{m_{1}}^{2}}\frac{\vec{k}_{\tilde{m}}\cdot\vec{k}_{m_{1}}}{k_{\tilde{m}}^{z}k_{m_{1}}^{z}}\delta(k_{\tilde{m}}^{z}+k_{m_{1}}^{z})(2\pi)^{2}\delta^{2-2\epsilon}(\vec{k}^{\perp}_{1}+\dots+\vec{k}^{\perp}_{n}+\vec{\Delta})$
$\displaystyle=\quad
2i(s-M_{\sigma}^{2})\sum_{n=2}^{\infty}\frac{(i\chi_{0}(\vec{b}^{\perp}))^{n-2}}{(n-2)!}\tilde{\chi}(\vec{b}).$
(72)
Here $(s-M_{\sigma}^{2})=2E_{\phi}M_{\sigma}$, and the exponentiated eikonal
phase $\chi_{0}(\vec{b}^{\perp})$, defined in (51), is recovered. Moreover
$\tilde{\chi}(\vec{b})=\frac{\kappa^{4}E_{\phi}M_{\sigma}^{2}}{1024\pi^{5}}\int
dk_{1}^{z}d^{2-2\epsilon}k_{1}^{\perp}d^{2-2\epsilon}k_{2}^{\perp}\frac{e^{-i\vec{b}^{\perp}\cdot\vec{k}_{1}^{\perp}}}{k_{1}^{\perp
2}+k_{1}^{z2}}\frac{e^{-i\vec{b}^{\perp}\cdot\vec{k}_{2}^{\perp}}}{k_{2}^{\perp
2}+k_{1}^{z2}}\frac{\vec{k}_{1}^{\perp}\cdot\vec{k}_{2}^{\perp}-k_{1}^{z2}}{k_{1}^{z2}}.$
(73)
By following the same derivation as in [19] we obtain for (73)
$\tilde{\chi}(\vec{b}^{\perp})=\frac{\kappa^{4}E_{\phi}M_{\sigma}^{2}}{1024\pi^{5}}\left[2\pi
b^{6\epsilon-1}\frac{\Gamma\left(\frac{1}{2}-2\epsilon\right)\Gamma\left(\frac{1}{2}-\epsilon\right)}{\Gamma\left(-\epsilon\right)}\right],$
(74)
with $b=|\vec{b}^{\perp}|$; for $\epsilon\rightarrow 0$, meaning in four
spacetime dimensions, we get
$\tilde{\chi}(\vec{b}^{\perp})=0,$ (75)
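The vanishing in (75) is driven by the $1/\Gamma(-\epsilon)$ factor in (74), which goes to zero linearly as $\epsilon\rightarrow 0$; a quick numerical check (plain Python):

```python
import math

# Near epsilon = 0, Gamma(-epsilon) ~ -1/epsilon - gamma_E + O(epsilon),
# so 1/Gamma(-epsilon) ~ -epsilon -> 0: the bracket in (74) vanishes
# in four spacetime dimensions.
eps = 1e-3
assert abs(1.0 / math.gamma(-eps) + eps) < 1e-4
```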
in agreement with [21], [42] as well as [19], [43]. This means that in four
dimensions the next-to-eikonal correction of (72) starts at higher orders.
However, whether in this approximation $e^{i\chi_{0}}$ is modified by a phase
also in this functional context is not clear to us.
## 7 Multi-graviton exchange beyond the ladder
We consider the next-to-eikonal correction to the amplitude coming from the
insertion of a seagull interaction and a trilinear graviton vertex.
Consider first of all the insertion of one seagull interaction vertex inside
the sum of the ladder diagrams. This is achieved by expanding the spin-zero
matter action (11) until the quartic interaction, the so-called seagull terms
[30],
$\displaystyle S$ $\displaystyle=-\frac{1}{2}\int
d^{4}x\phi(x)\left[\partial^{2}+m^{2}-\kappa
h^{\mu\nu}\partial_{\mu}\partial_{\nu}+\frac{\kappa}{2}\eta_{\mu\nu}h^{\mu\nu}\partial^{2}+\frac{\kappa}{2}\eta^{\mu\nu}h_{\mu\nu}m^{2}\right.$
$\displaystyle\quad-\frac{1}{4}\kappa^{2}\partial_{\alpha}(h^{\mu\nu}h_{\mu\nu})\partial^{\alpha}+\frac{\kappa^{2}}{8}\partial_{\alpha}(h_{\mu}^{\mu})^{2}\partial^{\alpha}+\frac{\kappa^{2}}{8}(h_{\mu}^{\mu})^{2}\partial^{2}$
$\displaystyle\quad+\frac{\kappa^{2}}{8}m^{2}(h_{\mu}^{\mu})^{2}-\frac{\kappa^{2}}{2}\partial_{\alpha}(h_{\mu}^{\mu}h^{\alpha\beta})\partial_{\beta}-\frac{\kappa^{2}}{2}\partial_{\alpha}(h_{\mu}^{\mu}h^{\alpha\beta})\partial_{\beta}$
$\displaystyle\quad\left.-\frac{\kappa^{2}}{2}h_{\mu}^{\mu}h^{\alpha\beta}\partial_{\alpha}\partial_{\beta}+\>\kappa^{2}h^{\mu\lambda}h_{\lambda}^{\nu}\partial_{\mu}\partial_{\nu}+\kappa^{2}\partial_{\mu}(h^{\mu\lambda}h_{\lambda}^{\nu})\partial_{\nu}\right]\phi(x).$
(76)
By following the same steps as in section 3 we arrive at the following
Cauchy problem
$\begin{cases}i\frac{\partial Y}{\partial\nu}(x,\nu)=\\\
\left[\partial^{2}-2i\ell_{\alpha}\partial^{\alpha}+\kappa\ell_{\mu}\ell_{\beta}h^{\mu\beta}(x)+\frac{\kappa^{2}}{2}h^{\mu}\,_{\mu}h^{\alpha\beta}\ell_{\alpha}\ell_{\beta}-\kappa^{2}h^{\alpha\lambda}h_{\lambda}\,^{\beta}\ell_{\alpha}\ell_{\beta}\right]Y(x,\nu)\\\
Y(x,0)=1.\end{cases}$ (77)
By following the same steps as in (18) the corresponding solution is
$\displaystyle
Y(x,\nu)=\exp\left[-i\kappa\ell_{\mu}\ell_{\beta}\int_{0}^{\nu}h^{\mu\beta}\left(x+2\ell(\nu’-\nu)+\int_{\nu’}^{\nu}\frac{\delta}{\delta
t(\xi)}d\xi\right)d\nu’\right.$
$\displaystyle\left.-i\frac{\kappa^{2}}{2}\ell_{\alpha}\ell_{\beta}\int_{0}^{\nu}h^{\mu}\,_{\mu}h^{\alpha\beta}\left(x+2\ell(\nu’-\nu)+\int_{\nu’}^{\nu}\frac{\delta}{\delta
t(\xi)}d\xi\right)d\nu’\right.$
$\displaystyle\left.+i\kappa^{2}\ell_{\alpha}\ell_{\beta}\int_{0}^{\nu}h^{\alpha\lambda}h_{\lambda}\,^{\beta}\left(x+2\ell(\nu’-\nu)+\int_{\nu’}^{\nu}\frac{\delta}{\delta
t(\xi)}d\xi\right)d\nu’\right]$
$\displaystyle\left.\exp\left(i\int_{0}^{\nu}t^{2}(\xi)d\xi\right)\right|_{t=0}$
(78)
The contribution to $Y(x,\nu)$ corresponding to the sum of $n$
graviton-exchange diagrams with a single seagull in the amplitude (32), for
the propagation of the light scalar emitting gravitons in the straight-line
approximation, is
$\displaystyle<p’|G^{c}(x,y|h)|p>=\left(-i\frac{\kappa^{2}}{2}\right)\lim\limits_{\begin{subarray}{c}p^{2}\rightarrow
0\\\ p^{\prime 2}\rightarrow
0\end{subarray}}p^{2}p^{\prime 2}(2\pi)^{4}\left(p^{\prime\alpha}p^{\prime\beta}\eta^{\gamma\delta}-2p^{\prime\gamma}p^{\prime\beta}\eta^{\delta\alpha}\right)$
$\displaystyle\sum_{n=0}^{+\infty}\frac{(-i)^{n}\kappa^{n}}{n!}\int_{0}^{+\infty}d\nu\,e^{i\nu(p^{’2}+i\epsilon)}\int\frac{d^{4}\bar{k}d^{4}\underline{k}}{(2\pi)^{8}}\int\frac{d^{4}k_{1}\dots
d^{4}k_{n}}{(2\pi)^{4n}}\int_{0}^{\nu}d\nu’\int_{0}^{\nu}d\xi_{1}$
$\displaystyle\dots d\xi_{n}\delta^{4}(p’-p-k_{1}-\dots-
k_{n}-\bar{k}-\underline{k})e^{-\sum\limits_{m=1}^{n}2ip’\cdot
k_{m}(\nu-\xi_{m})}$ $\displaystyle
e^{-2ip’\cdot(\bar{k}+\underline{k})(\nu-\nu’)}p^{\prime\mu_{1}}p^{\prime\beta_{1}}\hat{h}_{\mu_{1}\beta_{1}}(k_{1})\dots
p^{\prime\mu_{n}}p^{\prime\beta_{n}}\hat{h}_{\mu_{n}\beta_{n}}(k_{n})\hat{h}_{\gamma\delta}(\bar{k})\hat{h}_{\alpha\beta}(\underline{k}).$
For the scattering amplitude of our high-energy process with the insertion of
one seagull interaction, we need the expansion of the heavy-line propagator
with at least two emitted gravitons, amounting to
$\displaystyle\lim\limits_{\begin{subarray}{c}q^{2}\rightarrow
M_{\sigma}^{2}\\\ q’^{2}\rightarrow
M_{\sigma}^{2}\end{subarray}}(q^{2}-M_{\sigma}^{2})(q’^{2}-M_{\sigma}^{2})(2\pi)^{4}\sum_{r=2}^{+\infty}\frac{(-i)^{r}\kappa^{r}}{r!}\int_{0}^{+\infty}d\nu_{1}e^{i\nu_{1}(q’^{2}-M_{\sigma}^{2}+i\epsilon)}$
$\displaystyle\int\frac{d^{4}\tilde{k}_{1}\dots
d^{4}\tilde{k}_{r}}{(2\pi)^{4r}}\int_{0}^{\nu_{1}}d\xi_{1}\dots\int_{0}^{\nu_{1}}d\xi_{n}\delta^{4}(q’-q+\tilde{k}_{1}+\dots\tilde{k}_{r})$
$\displaystyle
e^{-\sum\limits_{m=1}^{r}2iq’\cdot\tilde{k}_{m}(\nu_{1}-\xi_{m})}q’^{\mu_{1}}q’^{\beta_{1}}\hat{h}_{\mu_{1}\beta_{1}}(\tilde{k}_{1})\dots
q’^{\mu_{r}}q’^{\beta_{r}}\hat{h}_{\mu_{r}\beta_{r}}(\tilde{k}_{r}).$ (80)
The Wick contractions performed following formula (8) require the following
matrix element
$\displaystyle\sum\limits_{n=0}^{+\infty}\sum\limits_{r=2}^{+\infty}(-i)^{n+2}(-i)^{r}\frac{\kappa^{n+2}\kappa^{r}}{2\,n!r!}\int\frac{d^{4}k_{1}\dots
d^{4}k_{n}d^{4}\bar{k}d^{4}\underline{k}}{(2\pi)^{4(n+2)}}\int\frac{d^{4}\tilde{k}_{1}\dots
d^{4}\tilde{k}_{r}}{(2\pi)^{4r}}$ $\displaystyle\delta^{4}(p’-p-k_{1}-\dots-
k_{n}-\underline{k}-\bar{k})\delta^{4}(q’-q-\tilde{k}_{1}-\dots-\tilde{k}_{r})\times$
$\displaystyle<p^{\prime\mu_{1}}p^{\prime\beta_{1}}\hat{h}_{\mu_{1}\beta_{1}}(k_{1})\dots
p^{\prime\mu_{n}}p^{\prime\beta_{n}}\hat{h}_{\mu_{n}\beta_{n}}(k_{n})(p^{\prime\alpha}p^{\prime\beta}\eta^{\rho\sigma}-2p^{\prime\rho}p^{\prime\beta}\eta^{\sigma\alpha})$
$\displaystyle\hat{h}_{\rho\sigma}(\bar{k})\hat{h}^{\alpha\beta}(\underline{k})q^{\prime\tilde{\mu}_{1}}q^{\prime\tilde{\beta}_{1}}\hat{h}_{\tilde{\mu}_{1}\tilde{\beta}_{1}}(\tilde{k}_{1})\dots
q^{\prime\tilde{\mu}_{r}}q^{\prime\tilde{\beta}_{r}}\hat{h}_{\tilde{\mu}_{r}\tilde{\beta}_{r}}(\tilde{k}_{r})>,$
(81)
where, by the prescription (8), the self-energy contributions are not
included. For this matrix element we get
$\displaystyle\sum\limits_{r=2}^{+\infty}\frac{(-i)^{2r}\kappa^{2r}}{2(r-2)!}\int\frac{d\tilde{k}_{1}\dots
d^{4}\tilde{k}_{r}}{(2\pi)^{4r}}(2\pi)^{4}\delta^{4}(q’-q-\tilde{k}_{1}-\dots-\tilde{k}_{r})\frac{M_{\sigma}^{4}E_{\phi}^{2}}{\tilde{k}_{1}^{2}\tilde{k}_{2}^{2}}$
$\displaystyle\prod\limits_{m=3}^{r}p^{\prime\mu_{m}}p^{\prime\beta_{m}}q^{\prime\tilde{\mu}_{m}}q^{\prime\tilde{\beta}_{m}}\frac{i}{2\tilde{k}_{m}^{2}}(\eta_{\mu_{m}\tilde{\mu}_{m}}\eta_{\beta_{m}\tilde{\beta}_{m}}+\eta_{\mu_{m}\tilde{\beta}_{m}}\eta_{\beta_{m}\tilde{\mu}_{m}}-\eta_{\mu_{m}\beta_{m}}\eta_{\tilde{\beta}_{m}\tilde{\mu}_{m}}).$
The leading contribution to the amplitude including the non-linear seagull
interaction amounts to
$\displaystyle i\mathcal{T}^{NL}(p,p’,q,q’)_{SG}=$
$\displaystyle\frac{i}{2}\lim_{p_{i}^{2}\rightarrow
m_{i}^{2}}p^{2}p’^{2}(q^{2}-M_{\sigma}^{2})(q’^{2}-M_{\sigma}^{2})\sum_{r=2}^{+\infty}\frac{(-i)^{2r}\kappa^{2r}}{(r-2)!}\int_{0}^{+\infty}d\nu
e^{i\nu(p’^{2}+i\epsilon)}$
$\displaystyle\int_{0}^{+\infty}d\nu_{1}e^{i\nu_{1}(q’^{2}-M_{\sigma}^{2}+i\epsilon)}\frac{1}{(2\pi)^{4r}}\int
d^{4}\bar{k}d^{4}\underline{k}\frac{M_{\sigma}^{4}E_{\phi}^{2}}{\bar{k}^{2}\underline{k}^{2}}\left(\prod_{m=3}^{r}\int
d^{4}k_{m}\frac{i}{k_{m}^{2}}E_{\phi}^{2}M_{\sigma}^{2}\right)$
$\displaystyle(2\pi)^{4}\delta^{4}(p’-p-k_{1}-\dots-
k_{n}-\bar{k}-\underline{k})\int_{0}^{\nu}\exp\left[-\sum_{m=1}^{r-2}2ip’\cdot
k_{m}(\nu-\xi_{m})\right]$ $\displaystyle d\xi_{1}\dots
d\xi_{r-2}\int_{0}^{\nu_{1}}\exp\left[\sum_{\tilde{m}=1}^{r}2iq’\cdot
k_{\tilde{m}}(\nu_{1}-\tilde{\xi}_{\tilde{m}})\right]d\tilde{\xi}_{1}\dots
d\tilde{\xi}_{r}\times$
$\displaystyle\int_{0}^{\nu}d\nu’e^{-2ip’\cdot(\bar{k}+\underline{k})(\nu-\nu’)}$
(83)
By applying the eikonal identities of (38), (39), (41) we obtain
$\displaystyle i\mathcal{T}^{NL}(p,p’,q,q’)_{SG}=$
$\displaystyle-\frac{1}{2}\lim_{p_{i}^{2}\rightarrow
m_{i}^{2}}p^{2}p’^{2}\sum_{r=2}^{+\infty}\frac{(-i)^{2r}\kappa^{2r}}{(r-2)!}\int_{0}^{+\infty}d\nu\,\frac{e^{i\nu(p’^{2}+i\epsilon)}}{(2\pi)^{4r}}\int
d^{4}\bar{k}d^{4}\underline{k}\frac{M_{\sigma}^{4}E_{\phi}^{2}}{\bar{k}^{2}\underline{k}^{2}}$
$\displaystyle\left(\prod_{m=3}^{r}\int
d^{4}k_{m}\frac{i}{k_{m}^{2}}E_{\phi}^{2}M_{\sigma}^{2}\right)\int_{0}^{\nu}\exp\left[-\sum_{m=1}^{r-2}2ip’\cdot
k_{m}(\nu-\xi_{m})\right]d\xi_{1}\dots d\xi_{r-2}$
$\displaystyle\int_{0}^{\nu}d\nu’e^{-2ip’\cdot(\bar{k}+\underline{k})(\nu-\nu’)}i^{r}\frac{(-2\pi
i)^{r-1}}{(2M_{\sigma})^{r-1}}\delta(k^{0}_{1})\dots\delta(k^{0}_{r})\times$
$\displaystyle(2\pi)^{4}\delta^{3}(\vec{k}_{1}+\dots+\vec{k}_{n}+\vec{\bar{k}}+\vec{\underline{k}}+\vec{\Delta})$
(84)
To apply the eikonal identities along the $z$-axis consider
$\displaystyle\lim_{p_{i}^{2}\rightarrow
m_{i}^{2}}p^{2}p\text{\textquoteright}^{2}\delta(k_{1}^{z}+\dots
k_{r-2}^{z}+\bar{k}^{z}+\underline{k}^{z})\int_{0}^{+\infty}d\nu
e^{i\nu(p\text{\textquoteright}^{2}+i\epsilon)}\times$
$\displaystyle\frac{1-e^{2i\nu
E_{\phi}(\bar{k}^{z}+\underline{k}_{z})}}{-2iE_{\phi}(\bar{k}^{z}+\underline{k}_{z})}\prod_{m=3}^{r}\frac{1-e^{2i\nu
E_{\phi}k_{m}^{z}}}{-2iE_{\phi}k_{m}^{z}}\>=$ $\displaystyle i^{r}\frac{(-2\pi
i)^{r-2}}{(2E_{\phi})^{r-2}}\delta(k_{1}^{z})\dots\delta(k_{r-2}^{z})\delta(\bar{k}^{z}+\underline{k}^{z}).$
(85)
The contribution to the amplitude becomes
$\displaystyle
i\mathcal{T}^{NL}(p,p\text{\textquoteright},q,q\text{\textquoteright})_{SG}=$
$\displaystyle\sum\limits_{r=2}^{+\infty}\frac{i}{(r-2)!}\left(\frac{i\kappa^{2}M_{\sigma}E_{\phi}}{4(2\pi)^{2}}\right)^{r-2}\left[\frac{\kappa^{4}M_{\sigma}^{3}E_{\phi}^{2}}{32\pi^{3}}\int\frac{d^{3}\bar{k}d^{3}\underline{k}}{\vec{\bar{k}}^{2}\vec{\underline{k}}^{2}}\delta(\bar{k}^{z}+\underline{k}^{z})\right]$
$\displaystyle\int\prod\limits_{m=3}^{r}d^{2}\tilde{k}_{m}^{\perp}\frac{1}{(\tilde{k}_{m}^{\perp})^{2}}\delta^{2}(\vec{\Delta}+\vec{k}_{1}+\dots+\vec{k}_{n}+\vec{\bar{k}}+\vec{\underline{k}}).$
(86)
The contribution beyond the ladders coming from the trilinear graviton vertex
follows from (3) and (8) with the insertion of the trilinear graviton vertex.
For that calculation the functional methods are fully equivalent to the
diagrammatic ones already described in [19], so we just quote the result for
the contribution to the scattering amplitude
$\displaystyle
i\mathcal{T}^{NL}(p,p’,q,q’)_{3g}\equiv-\sum\limits_{r=2}^{+\infty}\frac{i}{(r-2)!}\left(\frac{i\kappa^{2}M_{\sigma}E_{\phi}}{4(2\pi)^{2}}\right)^{r-2}\left(\frac{\kappa^{4}M_{\sigma}^{3}E_{\phi}^{2}}{64\pi^{3}}\right)\times$
$\displaystyle\int\frac{d^{3}\bar{k}d^{3}\underline{k}}{\vec{\bar{k}}^{2}\vec{\underline{k}}^{2}}\frac{\bar{k}_{z}^{2}}{\vec{\Delta}^{2}}\delta(\bar{k}^{z}+\underline{k}^{z})\int\prod\limits_{m=3}^{r}d^{2}\tilde{k}^{\perp}_{m}\frac{1}{(\tilde{k}^{\perp}_{m})^{2}}\delta^{2}(\vec{\Delta}+\vec{k}_{1}+\dots+\vec{k}_{n}+\vec{\bar{k}}+\vec{\underline{k}}).$
(87)
The total next-to-eikonal correction from the inclusion of the nonlinear
interactions is obtained by summing (86) and (87), which amounts to
$\displaystyle i\widetilde{\mathcal{T}}^{NL}(\vec{b}^{\perp})$
$\displaystyle\equiv
i\widetilde{\mathcal{T}}^{NL}(\vec{b}^{\perp})_{SG}+i\widetilde{\mathcal{T}}^{NL}(\vec{b}^{\perp})_{3g}$
$\displaystyle=2i(s-M_{\sigma}^{2})\sum_{n=2}^{\infty}\frac{(i\chi_{0}(\vec{b}^{\perp}))^{n-2}}{(n-2)!}\tilde{\chi}_{1}(\vec{b}).$
(88)
with
$\displaystyle\tilde{\chi}_{1}(\vec{b})$
$\displaystyle=\frac{\kappa^{4}M_{\sigma}^{2}E_{\phi}}{128\pi^{3}}\int\frac{d^{2}\Delta}{(2\pi)^{2}}e^{i\vec{b}^{\perp}\cdot\vec{\Delta}}\int\frac{d^{3}\bar{k}d^{3}\underline{k}}{\vec{\bar{k}}^{2}\vec{\underline{k}}^{2}}\left(1+\frac{\bar{k}_{z}^{2}}{2\vec{\Delta}^{2}}\right)\delta(\bar{k}^{z}+\underline{k}^{z})$
$\displaystyle=\frac{15\kappa^{4}M_{\sigma}^{2}E_{\phi}}{4096\pi\left|\vec{b}^{\perp}\right|}$
(89)
where the standard Feynman parametrization technique has been used to obtain
the above result, which agrees with [21], [42], as well as with [19], [43].
## 8 Conclusions and research perspectives
In this paper we have analyzed the eikonal limit for the gravitational
scattering of a high-energy or massless scalar particle off a very heavy
scalar. The eikonal expansion has been performed using Fradkin's
representation of the scattering amplitudes, and we have shown that these
functional methods yield a fully consistent eikonal expansion, with a
resummation of diagrams equivalent to the one performed in [19]. Moreover, we
have shown that the exponentiation of the leading eikonal applies as well to
the first non-leading power in the energy of the light particle. Working at
leading power in the heavy particle mass, we expanded the light particle
propagator to next-to-eikonal power and included gravitational interactions of
comparable size, finding power corrections suppressed by a single power of
$\frac{\Delta}{E_{\phi}}$, or $\frac{1}{bE_{\phi}}$ in impact parameter space,
with respect to the leading eikonal term. The comparable terms are based on
one-loop diagrams of order $\kappa^{4}$. In our analysis these corrections
appear as multiplicative factors, leaving the leading-power exponentiation
unaffected. The consistency with an exponentiation of the power corrections
themselves in our functional formalism has not yet been explored;
a possible way to exponentiate them in our representation is a topic for
future research. Our calculational methods could also be useful for the
forward scattering of a gravitational wave off a black hole. As a further
research direction along the lines of our study, we remark that in a recent article
[37] the conservative two-body dynamics of spinless compact objects was
studied via a four-point amplitude truncated to classical order, with the
matter poles expanded about the momentum components along $z$. This kind of
expansion is automatically performed in the Fradkin representation of the
scalar propagator, i.e. in formulas like (42) after the application of the
eikonal identities (37), (38), (39), (40) and (41). From the eikonalization of
the scattering amplitude at small momentum transfer an amplitude-action
relation arises; the determination of the radial action using the Fradkin
representation could be efficiently achieved, even though a
Lagrangian-independent treatment would have to be introduced, and this is left
for future studies. The methods adopted in our paper could
also illuminate the contributions to the two-body Hamiltonian from an infinite
family of tidal operators, a study started in [38]. The functional approach to
scattering amplitudes could indeed be optimized to include insertions of
powers of the position-space magnetic and electric components of the
linearized Weyl tensor contracted with a point-particle stress tensor. Another
direction could be to consider, as in reference [39], effective theories of
gravity where, in addition to the Einstein-Hilbert term, non-minimal couplings
of the type $R^{3}$, $R^{4}$ and $FFR$ are included, $F$ being the photon
field strength. By using the Fradkin representation for the scattering of
gravitons and photons of frequency $\omega$ off heavy scalars of mass $m$ in
the limit $m\gg\omega\gg|\vec{q}|$, where $\vec{q}$ is the momentum transfer,
it would be interesting to study in this context the eikonal and
next-to-eikonal limits of this amplitude by functional techniques, and to
compare with the analogous exponentiation in [40], [41]. In [18] it is proved
that the quantum state set up by a particle is a coherent state fully
determined by an eikonal-type exponentiation; it would also be interesting to
discuss, using the tools introduced in this paper, how the coherent state
changes upon including non-linear interactions and next-to-eikonal
approximations.
## References
* [1] G. 't Hooft, Phys. Lett. B 198, 61 (1987).
* [2] G. 't Hooft, Nucl. Phys. B 304, 867 (1988).
* [3] E. Verlinde, H. Verlinde, Nucl. Phys. B 371 246 (1992).
* [4] S.J. Wallace, Annals Phys. 78 190 (1973).
* [5] R. J. Glauber, Lectures in Theoretical Physics, ed. by W. E. Brittin and L. G. Dunham, Interscience Publishers, Inc., New York, Volume I, page 315, (1959).
* [6] H. M. Fried, Green’s Functions and Ordered Exponentials, Cambridge University Press, (2002).
* [7] S.Weinberg, Phys. Rev. 140 B516 (1965).
* [8] R. Akhoury, R. Saotome, G. Sterman, Phys. Rev. D 84, 104040 (2011).
* [9] M. Beneke, G. Kirilin, JHEP 09, 066 (2012).
* [10] M. Lévy, J. Sucher, Phys. Rev. 186, 1656 (1969).
* [11] K. Dietz, Acta Physica Austriaca Suppl. XIII 711 (1974).
* [12] S. B. Giddings, Subnucl. Ser. 48, 93 (2013).
* [13] H. Elvang, Y.-T. Huang, Scattering amplitudes in gauge theory and gravity, Cambridge University Press, (2015).
* [14] Z. Bern, C. Cheung, R.Roiban, C.-H. Shen, M.P. Solon, M. Zeng, JHEP 10 206 (2019).
* [15] T. Damour, Phys.Rev. D 102 2, 024060 (2020).
* [16] D. A. Kosower, B. Maybee, D. O’Connell, JHEP 02, 137 (2019).
* [17] B. Maybee, D. O’Connell, J. Vines, JHEP 12 156 (2019).
* [18] R. Monteiro, D. O’Connell, D.Peinador, M. Sergola, e-Print: 2012.11190 [hep-th].
* [19] R. Akhoury, R. Saotome, G. Sterman, e-Print: 1308.5204 [hep-th].
* [20] D. Kabat, M. Ortiz, Nucl. Phys. B 388 570 (1992).
* [21] G. D’Appollonio, P. Di Vecchia, R. Russo, G.Veneziano, JHEP 11, 100 (2010).
* [22] C.D.White, Phys. Lett. B 737, 216-222 (2014).
* [23] A. Luna, S. Melville, S. G. Naculich, C. D. White, JHEP 01, 052 (2017).
* [24] A. K. Collado, P. Di Vecchia, R. Russo, S. Thomas, JHEP 10, 038 (2018).
* [25] P. Di Vecchia, A. Luna, S. G. Naculich, R. Russo, G.Veneziano, C. D. White, Phys. Lett. B 798, 134927 (2019).
* [26] E. S. Fradkin, Nucl. Phys. 76 588 (1966).
* [27] E. S. Fradkin,V. Esposito and S. Termini, Rivista del Nuovo Cimento, Serie I, Vol.2, 498 (1970).
* [28] Nguyen Suan Han, Do Thu Ha, Nguyen Nhu Xuan, Eur.Phys.J.C 79 10, 835 (2019).
* [29] S.B.Giddings, D.J.Gross, A. Maharana, Phys.Rev. D 77 046001(2008).
* [30] J.F. Donoghue, M.M. Ivanov, A. Shkerin, arXiv:1702.00319 [hep-th].
* [31] H. Georgi, “The Physics of Waves”, Benjamin Cummings (1992).
* [32] M.E. Peskin, D.V. Schroeder, “An Introduction to Quantum Field Theory”, Addison Wesley, (1995).
* [33] S. Albeverio, R. Høegh-Krohn, S. Mazzucchi, “Mathematical theory of Feynman path integrals. An Introduction.” 2nd and enlarged edition. Lecture Notes in Mathematics 523. Springer-Verlag (2008).
* [34] C. Sommerfeld, Ann. Phys. 26 1 (1963).
* [35] J.L. Cardy, Nucl. Phys. B 28 477 (1970).
* [36] R. Strichartz, “ A guide to distribution theory and Fourier transforms”, CRC press, Inc. (1994).
* [37] Z. Bern, J. Parra-Martinez, R. Roiban, M. S. Ruf, C.-H. Shen, M.P. Solon, M. Zeng, e-Print: 2101.07254 [hep-th].
* [38] Z. Bern, J. Parra-Martinez, R. Roiban, E. Sawyer, C.-H. Shen e-Print: 2010.08559 [hep-th].
* [39] M. Accettulli Huber, A. Brandhuber, S. De Angelis, G. Travaglini, Phys.Rev. D 102 4, 046014 (2020).
* [40] D. Amati, M. Ciafaloni and G. Veneziano, Phys. Lett. B 197 81 (1987).
* [41] D. Amati, M. Ciafaloni and G. Veneziano, Nucl. Phys. B 347 550 (1990).
* [42] P. Di Vecchia, talk delivered at the conference Mass 2019.
* [43] N.E.J. Bjerrum-Bohr, J. F. Donoghue, B. R. Holstein, L. Plante, P. Vanhove, JHEP 11 117 (2016).
# How to observe the QCD instanton/sphaleron processes at hadron colliders?
Edward Shuryak and Ismail Zahed Center of Nuclear Theory, Department of
Physics and Astronomy,
Stony Brook University, Stony Brook, NY 11794, USA
###### Abstract
The instanton/sphaleron processes involve gauge fields with changing topology,
including a nonzero variation of the Chern-Simons number $\Delta N_{CS}=\pm
1$. In QCD such processes lead to the production of $2N_{f}\Delta N_{CS}$
units of the axial charge, proportional to the number of light quark flavors
$N_{f}=3$, for $u,d,s$. In the electroweak theory such processes lead to the
production of 12 fermions, with $\Delta B=\Delta L=\pm 3$ units of baryon and
lepton number. While all of this has been known for a long time, and is one of
the pillars of the nonperturbative theory of the QCD vacuum, in this paper we
discuss what we call “reclined tunneling”, in which external forces are added
to the tunneling process and create certain gluonic objects with positive
invariant mass. The idea of observing these objects experimentally at hadron
colliders has been proposed before, but so far without success. Motivated by
the recent CERN workshop on the topic, we review these ideas. We put forward
our own suggestions, in particular to utilize double-diffractive
(Pomeron-Pomeron) collisions for this purpose, which we believe maximizes the
entrance factor and minimizes the backgrounds. We consider clusters of small
($M=3-10\,{\rm GeV}$), medium ($M=10-30\,{\rm GeV}$) and high ($M\sim 100\,{\rm
GeV}$) invariant mass in turn. Among the proposed signals are specific flavor
combinations of channels, originating from well-defined 6-, 8- and
10-quark-antiquark operators, as well as correlations of quark chiralities, to
be potentially detected via $\Lambda$ hyperon decays.
Journal: Physics Reports
## 1 Introduction
### 1.1 The instanton/sphaleron processes
In going to the mountains, one needs a reliable map, preferably compiled from a
variety of sources. Building a tunnel is expensive, so one needs to think a
lot. Should it be horizontal or reclined? Where should the entrance be, and
where should the exit be? As we will see shortly, all such questions also
appear when we think of optimal paths producing topologically nontrivial final
states in a collision.
Some of these questions have already been answered in theory decades ago, but
some still require further analyses and calculations. There is a vast number
of applications of instanton-induced effects in vacuum and hadronic structure.
The instanton-sphaleron processes, producing certain topologically nontrivial
gluonic clusters, have so far only been seen in the low-mass domain, e.g. in
$\eta_{c}$ decays. This paper is about possible ways to see such objects, with
variable masses, at hadron colliders.
One can split the expected cross section into four distinct factors: (i) the
tunneling action; (ii) the semiclassical prefactor; (iii) the entrance factor;
and, last but not least, (iv) the branching ratios into particular exclusive
channels; or, sequentially,
$\sigma\sim\bigg{[}{{\rm entrance\over factor}}\bigg{]}\bigg{[}{{\rm
semiclassical\over
prefactor}}\bigg{]}\bigg{[}e^{-S_{cl}}\bigg{]}\bigg{[}B({\rm
final\,state})\bigg{]}$ (1)
As we show below, the classical part is well under control. The semiclassical
prefactor has not yet been calculated for gauge theory beyond one loop, but
this was done in relevant toy models, so it can be evaluated if needed. The
entrance factor is really the difficult part, still under deliberation.
The produced unstable magnetic objects, generically known as “sphalerons”,
explode into outgoing gluon and quark waves, which eventually “hadronize” into
mesons and baryons going to the detectors. The initial part of this process is
described by a solution of the classical YM equation, see section 4. Quark
pair production is described by the solution of the Dirac equation in the
exploding background. For all light quark flavors one has the same solution,
which results in an effective 6-quark operator of flavor composition
$(\bar{u}u)(\bar{d}d)(\bar{s}s)$ and with specific chiral properties. Its
projection onto the distribution amplitudes of the pertinent mesons provides
the decay probabilities to exclusive final states; e.g., for three mesons
these are $KK\pi,\pi\pi\eta,\pi\pi\eta^{\prime}$.
A further discussion of heavy ($c,b$) quark production is given in section 7.3.
We will estimate the sphaleron scales at which such production becomes
possible, and then discuss 8- and 10-fermion operators and their final states
such as $KKDD$ and $BBD_{s}DK$.
### 1.2 Double diffraction and Pomerons, glueballs and clusters
In multiple papers the instanton/sphaleron process was calculated under the
assumption that it is initiated at the partonic level, in gluon-gluon or
quark-antiquark collisions. In this case the secondaries originating from
sphalerons are embedded in complicated $pp$ collisions, together with the many
other jets and secondaries produced. The separation of the signal from the
backgrounds is in this case very difficult.
Perhaps the main idea, first expressed in [1], is that one may avoid the large
backgrounds associated with minimum-bias $pp$ collisions by using double
diffractive events (also known as Pomeron-Pomeron, abbreviated PP, distinct
from $pp$). We start with a brief summary of the experimental situation,
before elaborating on various theoretical models.
The two early experiments of interest are WA102 [2] and WA8 [3], both of which
were carried out decades ago at CERN.
WA102 was driven by the idea that PP processes provide a “glueball filter”,
allowing one to separate glueball-type hadrons from other mesons. It provided
important data on the distribution over the azimuthal angle $\phi$ between the
transverse momentum transfers to the two surviving protons. The collaboration
identified a number of scalar, pseudoscalar and tensor hadronic resonances in
the invariant mass distribution. As the subsequent discussion has shown, the
collaboration had indeed seen not only a scalar glueball candidate, but also a
tensor $2^{+}$ glueball at a mass of $2.2\,{\rm GeV}$. Both have angular
distributions in $\phi$ completely different from those of ordinary mesons.
WA8 was focused on larger invariant masses. Unfortunately, its central
detector was just a calorimeter, so no exclusive channels were studied. This
collaboration reported the observation of some isotropically decaying
“clusters”, with a mass peaked at $M\approx 3\,{\rm GeV}$, which we will
discuss further in sections 6.2 and 11. They also reported non-isotropic
background events with masses $M=5-20\,{\rm GeV}$.
The revival of the diffractive field at the LHC began relatively recently. The
ATLAS Forward Proton (AFP) detector was installed and ran, but it is focused
on produced systems with very large masses, in the range of hundreds of GeV.
In our understanding, the ATLAS ALFA forward detectors were designed to
measure forward elastic scattering. On the other hand, the CMS and TOTEM
collaborations, working together, have addressed soft PP reactions, and
recently reported soft exclusive $\pi^{+}\pi^{-}$ data [4]. They focused on
the small invariant mass region $M(\pi^{+}\pi^{-})<2\,{\rm GeV}$ and have
confirmed the production of (at least) four scalar and tensor mesons, all seen
previously by WA102. From a wider perspective, apart from focusing on glueball
spectroscopy, these experiments have provided important insights into the
structure of the Pomeron-Pomeron-hadron vertices, and thus into the structure
of the Pomeron itself.
From the theory side, during the last decade there were basically two major
developments:
(i) The Pomeron is defined as a virtual state, described by the continuation
of a glueball Regge trajectory from positive $t$ to the small negative $t$
domain, where it is observed in scattering. Note that the nearest physical
state to it is the tensor $2^{++}$ glueball. In holographic models of QCD this
glueball is identified with an excitation of the bulk graviton [5, 6]. These
and other considerations relate Pomeron exchanges to Reggeized graviton
exchanges. These ideas have explained well the WA102 data on the production of
pseudoscalar [7] and tensor [8] glueballs. In the latter paper the PP-$2^{+}$
vertex used was a 3-tensor coupling deduced from the Einstein-Hilbert action
of general relativity. Earlier suggestions that Pomeron phenomenology needs
some polarization tensor were made by the Heidelberg group, see the review in
[9].
(ii) From the 1970’s the perturbative theory of the Pomeron was based first on
two-gluon exchanges, and was then developed to include ladder diagrams with
gluonic rungs. This theory culminated in the famed papers [10] and is known as
the BFKL Pomeron. It is successful in describing diffraction with large
$|t|$ of a few ${\rm GeV}^{2}$. Yet at small $|t|$ (or large impact parameter
$b\sim 1/\sqrt{|t|}$) there is no reason to use perturbative QCD. An
alternative theory for this case, based on a “stringy instanton” solution for
the Nambu-Goto string exchange, was developed in [11] and is known as the BKYZ
Pomeron. Although the two theories start from very different descriptions,
with different interpretations of the parameters, the scattering amplitude is
nearly the same. The BKYZ version, unlike BFKL, predicts when the transition
in impact parameter should happen, as the strings undergo a Hagedorn
transition.
## 2 The topological landscape
The space of all possible gauge potentials $A^{\mu}_{a}(x)$ is the manifold
over which we need to integrate the Feynman path integral. Of course, we will
not discuss infinite-dimensional maps; instead we focus on two main variables,
defined for all static (3-dimensional and purely magnetic) configurations of
the lowest energy consistent with the values of those coordinates. One of the
coordinates is the topological Chern-Simons number
$N_{CS}\equiv{\epsilon^{\alpha\beta\gamma}\over 16\pi^{2}}\int
d^{3}x\left(A^{a}_{\alpha}\partial_{\beta}A^{a}_{\gamma}+{1\over
3}\epsilon^{abc}A^{a}_{\alpha}A^{b}_{\beta}A^{c}_{\gamma}\right)$ (2)
which is an integer for pure gauge configurations, at which the energy is
zero. Those points will be referred to as “valley minima”. The other
coordinate is the mean square radius of the magnetic field strength squared
$\rho^{2}\equiv{\int d^{3}x\vec{x}^{2}\vec{B}^{2}\over\int d^{3}x\vec{B}^{2}}$
(3)
Without fixing $\rho$, there is no minimal energy, as a simple rescaling can
change it.
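To make the rescaling statement explicit (a standard dimensional argument, spelled out here for convenience rather than taken from the text): under $A_{i}(x)\rightarrow\lambda A_{i}(\lambda x)$ one has

```latex
\vec{B}(x)\rightarrow\lambda^{2}\vec{B}(\lambda x),\qquad
E=\int d^{3}x\,\vec{B}^{2}\rightarrow\lambda\,E,\qquad
\rho^{2}\rightarrow\rho^{2}/\lambda^{2},
```

so the product $E\rho$ is scale-invariant and $E$ alone can be rescaled to any value, while $N_{CS}$ is unchanged. Only after fixing $\rho$ does the minimal energy become well defined, necessarily of the form $U_{\rm min}\propto 1/\rho$, as used below.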
By the “topological landscape” we mean the 2-dimensional profile of the
minimal energy $U_{\rm min}(N_{CS},\rho)$ of gauge field configurations with
these two coordinates fixed. For pure gauge theory, such minimal energy
configurations are known as the “sphaleron path”: as one changes $N_{CS}$ from
one integer to the next, the path leads from one topological valley to
another, keeping the energy minimal at every point. These configurations were
derived by Carter, Ostrovsky and one of us [12] by two different methods. The
one related to the instanton/sphaleron process will be discussed in the next
section. The second method is minimization with two Lagrange multipliers times
the two conditions (2) and (3). The minimal energy along the path was obtained
in the parametric form
$\displaystyle U_{\rm min}(\kappa,\rho)$ $\displaystyle=$
$\displaystyle(1-\kappa^{2})^{2}{3\pi^{2}\over g^{2}\rho}$ (4)
$\displaystyle{N}_{CS}(\kappa)$ $\displaystyle=$ $\displaystyle\frac{1}{4}{\rm
sign}(\kappa)(1-|\kappa|)^{2}(2+|\kappa|)$
The result shows a profile of the “topological mountain” of the gauge theory,
see Fig. 1, also known as “the sphaleron path”. Its maximum, at $\kappa=0$,
has $N_{CS}=\frac{1}{2}$ and its energy is known as the sphaleron mass
$\displaystyle M_{sph}=U_{\rm
min}\bigg{(}\frac{1}{2},\rho\bigg{)}={3\pi^{2}\over g^{2}\rho}$ (5)
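The parametric profile (4)-(5) is straightforward to tabulate numerically. Below is a minimal sketch of our own (not from the paper), with units chosen so that $g=\rho=1$ by default; note that with the sign convention of Eq. (4) the endpoints $\kappa=\pm 1$ carry $N_{CS}=0$ and the barrier top carries $|N_{CS}|=1/2$, the integer offset labeling the valleys being immaterial:

```python
import numpy as np

def sphaleron_path(kappa, g=1.0, rho=1.0):
    """Minimal energy U_min and Chern-Simons number N_CS along the
    sphaleron path, Eq. (4), parametrized by kappa in [-1, 1]."""
    u_min = (1.0 - kappa**2)**2 * 3.0 * np.pi**2 / (g**2 * rho)
    # np.sign(0) = 0, so the barrier top is approached as kappa -> 0+-
    n_cs = 0.25 * np.sign(kappa) * (1.0 - abs(kappa))**2 * (2.0 + abs(kappa))
    return u_min, n_cs

def sphaleron_mass(g=1.0, rho=1.0):
    """Eq. (5): the energy of the barrier top, reached at kappa = 0."""
    return 3.0 * np.pi**2 / (g**2 * rho)
```

As a check, the energy at $\kappa=0$ equals the sphaleron mass $3\pi^{2}/g^{2}\rho$, and $N_{CS}\rightarrow\pm 1/2$ as $\kappa\rightarrow 0^{\pm}$.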
Figure 1: The potential energy $U_{\rm min}\bigg{(}N_{CS},\rho\bigg{)}$ (in
units of $1/g^{2}\rho$) versus the Chern-Simons number ${N}_{CS}$ for the
“sphaleron path” between $N_{CS}=0$ and $N_{CS}=1$.
When the momentum scale $1/\rho$ is high, the gauge fields are very strong and
one can neglect the “vacuum structure” effects and keep only the classical
Yang-Mills equation. Of course, this equation is scale-invariant, and
therefore $U_{\rm min}\sim 1/\rho$. However, if $1/\rho$ is small enough, one
can no longer neglect vacuum effects.
In the electroweak theory these “vacuum effects” are described by the Higgs
field $\phi$. The scale is set by its VEV $v=\langle\phi\rangle\approx
246\,{\rm GeV}$. Ignoring first the Weinberg angle and using variational
methods, Klinkhamer and Manton [13] have included the Higgs field and solved
the YM+scalar equations. They observed that the Higgs field must vanish at the
origin $r=0$ to keep the topology of the gauge field intact, and therefore the
sphaleron is a “semi-empty bag” inside the Higgs vacuum. This adds some
energy: e.g., the leading term at large $\rho$ is proportional to the bag
volume times the highest power of the Higgs VEV
$U_{\rm Higgs}\sim\rho^{3}v^{4}$
The $1/\rho$ gauge term plus the Higgs terms with positive powers of the size
lead to a minimum at a certain $\rho^{*}$, which fixes the configuration
completely. The mass of the electroweak sphaleron is found to be about 9 TeV.
Thus, to go from one topological valley of the electroweak theory to another,
one needs at least that much energy. All of this has been clear since 1984.
Proceeding to our main object of interest, the sphaleron path in QCD, we note
that the QCD vacuum also has a rather complicated structure, with nontrivial
VEVs of various operators (known as “vacuum condensates”). There are lattice
data and models of the QCD vacuum structure that provide some understanding of
the topological landscape at large $\rho$. This will be discussed below; here
we just state that these effects also force the minimal energy $U_{\rm
min}(1/2,\rho)$ to start growing at large $\rho$, again producing some optimal
size $\rho^{*}$.
## 3 The tunneling paths
The first “tunneling path” was the original BPST instanton discovered in 1975.
It is the solution of the Euclidean YM equation. In terms of the landscape
illustrated in Fig. 1, it is a horizontal path, at energy zero, leading
directly from the bottom of one valley to the bottom of the next one. The
tunneling amplitude along it is $\sim{\rm exp}(-8\pi^{2}/g^{2})$, given by the
action of the instanton.
Many discussions of how to improve the tunneling path, including the
final-state gluon radiation, were the subject of complicated technical papers
in the late 1980’s. We will not describe them here, but just state that Khoze
and Ringwald [14], as well as Verbaarschot and Shuryak [15], suggested that
the 4-dimensional configurations describing the $optimal$
instanton-antiinstanton pair $I\bar{I}$ also provide the optimal path for
sphaleron production. Since then, most calculations follow this idea.
Before going into the details, let us explain why this is the case, following
important insights from [12, 16, 17]. The $I\bar{I}$ configuration is shown
schematically in the upper plot of Fig. 2. The shaded circles indicate regions
where the instanton and antiinstanton fields are strong. In one of them the
fields are (close to) self-dual, $\vec{E}=\vec{B}$, and in the other
anti-self-dual, $\vec{E}=-\vec{B}$. If both have the same sizes and color
orientations, the symmetry of the plot suggests that the electric field
$\vec{E}$ changes sign under $t\rightarrow-t$, and is therefore zero at $t=0$.
The 3-dimensional plane in the middle of the plot must contain a purely
magnetic configuration, corresponding to a semiclassical “turning point” at
which the momentum vanishes. At such points the path goes over from a
tunneling process in Euclidean time to a “real life” process in Minkowski
space.
So, the $t=0$ plane is nothing else but the “unitarity cut”, familiar from
perturbative Feynman diagrams. The object on it $is$ the object produced. The
two halves of the $I\bar{I}$ 4-dimensional configuration are the amplitude and
the conjugated amplitude; together they describe the probability to produce
this object. Any discussion of multi-gluon production, with the complicated
interferences between the gluons, is not needed at all: what is produced is
this classical magnetic object.
Now, why is this tunneling path better than the original instanton (the
$R\rightarrow\infty$ limit)? It may be confusing at first glance, since the
$I\bar{I}$ configurations are $not$ solutions of the YM equations. They are
tunnels $inclined$ $upward$, as indicated by the arrows in the lower plot of
Fig. 2. Indeed, to go uphill an external force is needed: but on these paths
the action is reduced! (As we will see, roughly by a factor of 2, i.e. by many
decades in the rates.) Instead of ending at the bottom of a valley, these
paths end at points on the slopes (indicated by the red and green balls).
After the corresponding magnetic objects are born, they roll down (explode)
classically. Their action is real, $|e^{iS}|=1$, and therefore their decay has
probability one. Whatever decays they undergo, these do not affect the cross
section.
As is clear from the lower plot of Fig. 2, for small $R$ the instanton and
antiinstanton nearly annihilate each other, and their combined action can be
reduced all the way to zero. If the product has $N_{CS}<1/2$, it will not roll
into the second valley, but return to the original one. The anomaly relation
then indicates no fermionic transitions. In summary: there exist topologically
trivial and topologically nontrivial tunneling paths! One cannot economize
more than (roughly) $half$ of the instanton+antiinstanton action.
The issue of instanton-antiinstanton ($I\bar{I}$) interactions has its own
long history. When the instanton and antiinstanton are well separated, the
simple “sum ansatz” gives twice the action, but what if the distance between
them is comparable to their size? One can invent many arbitrary
interpolations. The idea in [18] was to “follow the force” using $\partial
S/\partial x(\tau)$. In this way one gets a set of configurations which are
action minima for infinitely many perturbations, except along one direction,
the set itself. One of us independently generated this set numerically [19]
for the double well. (We did not know then that in mathematics our
“streamlines” were known, in complex analysis, as “Lefschetz thimbles”:
special lines connecting saddles in the complex plane.) Yung [20] proposed the
“Yung ansatz”, solving the streamline equation at large $R$.
For gauge theory instantons the problem looked more complicated. First of all,
even the sum ansatz could not be used, as the special cancellations near the
centers (in the singular gauge) were spoiled and the field strength became
singular. A “ratio ansatz” was developed to cure this. Furthermore, it
appeared that the interaction should depend on at least 3 variables,
$\rho_{I},\rho_{A},R$, even for identical color orientations. Verbaarschot
[21], however, noticed that since the classical YM theory has conformal
symmetry, the answer should depend only on their single conformal-invariant
combination (many years later it was realized that this is the geodesic
distance between two points in $AdS_{5}$ space, if $\rho$ is the extra
coordinate)
${R^{2}+(\rho_{1}-\rho_{2})^{2}\over\rho_{1}\rho_{2}}$ (6)
Using an appropriate conformal map, the antiinstanton was set inside the
instanton, and the problem was mapped onto the double-well potential. As a
general surprise, Verbaarschot’s configurations happen to be described rather
accurately by the Yung ansatz, not only at large $R$ as originally claimed,
but in fact all the way down to $R\rightarrow 0$. (The formula itself remained
complicated, and nobody, Yung included, suspected it to be a pure gauge, with
zero field strength!)
Figure 2: The upper plot shows a schematic picture of the instanton-
antiinstanton configuration. The horizontal axis is (Euclidean) time $t$, and
$R$ is the distance between their centers. The blue triangle indicates $t=0$,
a 3-dimensional hyper-surface in which the produced magnetic object resides.
The lower plot with the red arrow refers to the “reclined tunnel”
corresponding to large $R$. The green arrow on the left indicates tunneling
for small $R$, with a Chern-Simons number of the produced object $N_{CS}<1/2$.
For this last case, the classical explosion returns the system to the original
valley.
To complete the story, let us mention that in [12] it was shown that the
“streamline” $I\bar{I}$ configurations, well approximated by the Yung ansatz,
do indeed describe $U_{\rm min}(N_{CS},\rho)$, in complete agreement with the
constrained minimization already shown in Fig. 1. The role of the parameter
$\kappa$ is played by the relative separation $R$.
Figure 3: The inversion of the 3-sphere $S_{\rho}^{3}$ of radius $\rho$ onto
the upper part of Minkowski space with $t\geq 0$ or $R_{+}^{4}$, through the
3-sphere $S^{3}_{2\rho}$ of radius $2\rho$.
## 4 Classically exploding sphalerons
Both the static and the time-dependent exploding solutions for the pure-gauge
sphaleron were originally discussed by Carter, Ostrovsky and Shuryak (COS)
[12]. A simpler derivation, to be used below, was subsequently given in [22].
The technique relies on an off-center conformal transformation of the
$O(4)$-symmetric Euclidean instanton solution, which is analytically continued
to Minkowski space-time. However, the chief focus of that work [22] was the
description of fermion production.
The original $O(4)$-symmetric solution is given by the following ansatz
$gA_{\mu}^{a}=\eta_{a\mu\nu}\partial_{\nu}F(y),\,\,\,\,F(y)=2\int_{0}^{\xi(y)}d\xi^{\prime}f(\xi^{\prime})$
(7)
with $\xi={\rm log}(y^{2}/\rho^{2})$ and $\eta$ the ’t Hooft symbol. Upon
substitution of the gauge fields in the gauge Lagrangian one finds the
effective action for $f(\xi)$
$S_{\rm eff}=\int d\xi\left[{\dot{f}^{2}\over 2}+2f^{2}(1-f)^{2}\right]$ (8)
corresponding to the motion of a particle in a double-well potential. In the
Euclidean formulation, as written, the effective potential is inverted
$V_{E}=-2f^{2}(1-f)^{2}$ (9)
and the corresponding solution is the well-known BPST instanton, a path
connecting the two maxima of $V_{E}$, at $f=0$ and $f=1$. Any other solution
of the equation of motion following from $S_{\rm eff}$ obviously generalizes
to a solution of the Yang-Mills equations for $A_{\mu}^{a}(x)$ as well. The
sphaleron itself is the static solution at the top of the potential between
the minima, $f=1/2$.
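The statements above are easy to verify numerically for the Minkowski-signature potential $V_{M}(f)=2f^{2}(1-f)^{2}$ (a small sketch of our own, not part of the original derivation):

```python
import numpy as np

def V_M(f):
    """Minkowski double-well potential for the reduced variable f:
    minima at f = 0 and f = 1, barrier top (the sphaleron) at f = 1/2."""
    return 2.0 * f**2 * (1.0 - f)**2

# locate the barrier top between the two minima on a fine grid
f_grid = np.linspace(0.0, 1.0, 100001)
f_top = f_grid[np.argmax(V_M(f_grid))]
E_s = V_M(0.5)  # sphaleron energy in these units: 1/8
```

The value $E_{s}=V(1/2)=1/8$ is exactly the normalization used below Eq. (13).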
The next step is to perform an off-center conformal transformation as
illustrated in Fig. 3
$(x+a)_{\mu}={2\rho^{2}\over(y+a)^{2}}(y+a)_{\mu}$ (10)
with $a_{\mu}=(0,0,0,\rho)$. It changes the original spherically symmetric
solution into a solution of the Yang-Mills equations depending on the new
coordinates $x_{\mu}$, with separate dependences on the time $x_{4}$ and the
3-dimensional radius $r=\sqrt{x_{1}^{2}+x_{2}^{2}+x_{3}^{2}}$.
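The defining property of the map (10) can be checked numerically: it is an inversion centered at $-a$, so $|x+a|\,|y+a|=2\rho^{2}$ for any $y\neq -a$, and applying the map twice returns $y$. A quick sketch of our own (Euclidean 4-vectors, as appropriate before the analytic continuation; the test point `y` is arbitrary):

```python
import numpy as np

def conformal_map(y, rho=1.0):
    """Off-center inversion of Eq. (10): (x+a) = 2 rho^2 (y+a)/(y+a)^2,
    with a = (0, 0, 0, rho); returns x for a Euclidean 4-vector y."""
    a = np.array([0.0, 0.0, 0.0, rho])
    ya = y + a
    return 2.0 * rho**2 * ya / ya.dot(ya) - a

y = np.array([0.3, -1.2, 0.7, 0.4])   # an arbitrary test point
x = conformal_map(y)
a = np.array([0.0, 0.0, 0.0, 1.0])
radius_product = np.linalg.norm(x + a) * np.linalg.norm(y + a)  # = 2 rho^2
```

The involution property (`conformal_map(conformal_map(y)) == y`) is what allows the same map to be used in both directions between the sphere and the half-space of Fig. 3.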
The last step is the analytic continuation to Minkowski time $t$, via
$x_{4}\rightarrow it$. The original parameter $\xi$ in terms of these
Minkowskian coordinates, which we still call $x_{\mu}$, has the form
$\xi={1\over 2}{\rm log}\bigg{(}{y^{2}\over\rho^{2}}\bigg{)}={1\over 2}{\rm
log}\left({(t+i\rho)^{2}-r^{2}\over(t-i\rho)^{2}-r^{2}}\right)$ (11)
which is purely imaginary. To avoid carrying the extra $i$, we use the real
combination
$\xi_{E}\rightarrow-i\xi_{M}={\rm arctan}\left({2\rho t\over
t^{2}-r^{2}-\rho^{2}}\right)$ (12)
and in what follows we will drop the suffix. Switching from imaginary to real
$\xi$ corresponds to switching from the Euclidean to the Minkowski spacetime
solution. It changes the sign of the acceleration, i.e. the sign of the
effective potential, $V_{M}=-V_{E}$, to that of the normal double-well
problem.
The solution of the equation of motion is given in [22] (footnote: there was a
misprint in the index of this expression in the original paper):
$f(\xi)={1\over 2}\left[1-\sqrt{1+\sqrt{2\epsilon}}\,{\rm
dn}\left(\sqrt{1+\sqrt{2\epsilon}}(\xi-K),{1\over\sqrt{m}}\right)\right]$ (13)
where ${\rm dn}(z,k)$ is one of the elliptic Jacobi functions,
$2\epsilon=E/E_{s},2m=1+1/\sqrt{2\epsilon}$, $E=V(f_{\rm in})$ is the
conserved energy of the mechanical system normalized to that of the sphaleron
$E_{s}=V(f=1/2)=1/8$. Eq. (13) reduces to the SO(4) solution derived by Luscher
[23], and to the hypertorus solution obtained by Schechter [24]. Since starting
from exactly the maximum takes a divergent rolling time, we will start from
the nearby turning point with
$f(0)=f_{\rm in}={1\over 2}-\tau,\,\,\,\,\,\,f^{\prime}(0)=0$ (14)
where a small displacement $\tau$ ensures that the “rolling downhill” from the
maximum takes a finite time and that the half-period $K$ – given by an
elliptic integral – in the expression is not divergent. In the plots below we
will use $\tau=0.01$, but the results depend on its value only very weakly.
The solution above describes a particle tumbling periodically between two
turning points, and so the expression above defines a periodic function for
all $\xi$. However, as is clear from (12), for our particular application
the only relevant domain is $\xi\in[-\pi/2,\pi/2]$. Using the first few nonzero
terms of its Taylor expansion
$f\approx 0.4929-0.00706\,\xi^{2}-0.0011\,\xi^{4}-0.000078\,\xi^{6}$ (15)
we find a parametrization with an accuracy of $10^{-5}$, obviously invisible
in the plot and more than enough for our considerations.
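A minimal numerical sketch of this parametrization (the coefficients are those of (15); the $10^{-5}$ accuracy claim is the paper's, not re-derived here):

```python
import math

# Polynomial parametrization of f(xi) on [-pi/2, pi/2], Eq. (15)
def f_approx(xi):
    return 0.4929 - 0.00706 * xi**2 - 0.0011 * xi**4 - 0.000078 * xi**6

print(f_approx(0.0))           # 0.4929, close to the turning point f_in = 1/2 - tau
print(f_approx(math.pi / 2))   # value at the edge of the relevant domain
```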
The gauge potential has the form [22]
$gA_{4}^{a}=-f(\xi){8t\rho x_{a}\over[(t-i\rho)^{2}-r^{2}][(t+i\rho)^{2}-r^{2}]},\qquad gA^{a}_{i}=4\rho f(\xi){\delta_{ai}(t^{2}-r^{2}+\rho^{2})+2\rho\epsilon_{aij}x_{j}+2x_{i}x_{a}\over[(t-i\rho)^{2}-r^{2}][(t+i\rho)^{2}-r^{2}]}$ (16)
which is manifestly real. From those potentials we generate rather lengthy
expressions for the electric and magnetic fields, and eventually for CP-
violating operators, using Mathematica.
Note that the sphaleron solution corresponds to $t=0$, i.e. it is static, and is
a pure magnetic solution with $gA_{4}^{a}=0$. The magnetic field squared is
spherically symmetric and simple
$\vec{B}^{2}={96\rho^{4}\over(\rho^{2}+r^{2})^{4}}$ (17)
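As a consistency sketch (our own numerical check, in the units of (17) where the coupling is absorbed into the field), the volume integral of this profile can be done in closed form, $\int_{0}^{\infty}4\pi r^{2}\,\vec{B}^{2}\,dr=12\pi^{2}/\rho$, and a simple quadrature reproduces it:

```python
import math

def B2(r, rho=1.0):
    # Squared magnetic field of the static sphaleron, Eq. (17)
    return 96.0 * rho**4 / (rho**2 + r**2) ** 4

def volume_integral(rho=1.0, R=200.0, n=200_000):
    # Composite Simpson quadrature of int_0^R 4 pi r^2 B^2 dr
    # (the tail beyond R falls off as r^-6 and is negligible)
    h = R / n
    s = 0.0
    for i in range(n + 1):
        r = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * 4.0 * math.pi * r**2 * B2(r, rho)
    return s * h / 3.0

print(volume_integral())   # ~ 12 pi^2 / rho = 118.44 for rho = 1
```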
Figure 4: Upper plot: kinematics for two emerging quarks with momenta
$\vec{k}_{1,2}$ from an exploding sphaleron, fusing into a meson of momentum
$\vec{p}$; Lower plot: squared projection of outgoing quark waves into a
mesonic distribution amplitude with momentum $p\,({\rm GeV})$.
We describe the ensuing explosion, starting from the sphaleron at $t=0$, using
the Yang-Mills equations. This is justified only if the sphaleron size is sufficiently
small $\rho\ll\rho_{\rm max}$ compared to the maximal size defined by the
vacuum effects. If $\rho v\sim 1$ (where $v$ is the Higgs VEV in the
electroweak theory, or the dual Higgs VEV in QCD), the equations get modified
by additional terms with scalar fields.
It has long been known that the Adler-Bell-Jackiw chiral anomaly relates
the divergence of the axial current to the topological charge $Q$. The
divergence of the Chern-Simons topological current is also proportional to it. As a
result, their combination is conserved, and changes in the axial charge and
Chern-Simons number are locked
$\Delta Q_{5}=(2N_{f})\Delta N_{CS}$ (18)
where $N_{f}=3$ is the number of light quark flavors.
In agreement with these general arguments, we have constructed in [22] the
analytical solutions of the Dirac equation in the field of exploding
sphalerons. As a function of time, one can see how quarks are accelerated from
the lower Dirac sea to the physical continuum of positive energies. The
spectrum of outgoing quarks has a simple form
${dN\over dk}=4\pi k^{2}|\psi(k)|^{2}=\rho(2k\rho)^{2}e^{-2k\rho}$ (19)
which is similar to a Planckian spectrum with an effective temperature $T_{\rm
eff}=1/(2\rho)$. For the maximum value set by the vacuum instanton size
distribution $\rho_{\rm max}\approx 0.3\,{\rm fm}\approx 1/(600\,{\rm MeV})$,
the effective temperature is $300\,{\rm MeV}$. The mean momentum is
$\langle k\rangle={3\over 2\rho}\sim 900\,{\rm MeV}$.
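Both numbers can be checked directly from the spectrum (19); the following is our own sketch, whose only inputs are $dN/dk$ and $\rho=1/(0.6\,{\rm GeV})$:

```python
import math

def dNdk(k, rho):
    # Quark spectrum from the exploding sphaleron, Eq. (19)
    return rho * (2.0 * k * rho) ** 2 * math.exp(-2.0 * k * rho)

def mean_k(rho, kmax=50.0, n=100_000):
    # Numerical mean momentum <k>; the exact answer is 3/(2 rho)
    h = kmax / n
    num = sum(h * (i * h) * dNdk(i * h, rho) for i in range(1, n))
    den = sum(h * dNdk(i * h, rho) for i in range(1, n))
    return num / den

rho = 1.0 / 0.6          # GeV^-1, i.e. rho ~ 0.3 fm
print(mean_k(rho))       # ~ 3/(2 rho) = 0.9 GeV
print(1.0 / (2.0 * rho)) # effective temperature T_eff = 0.3 GeV
```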
A simple quantum estimate of the production amplitude follows from the
projection of the emerging quarks onto the distribution amplitudes (DAs) of
the outgoing mesons at any momenta as illustrated in Fig.4 (upper). More
specifically, we have
$\langle\vec{k}_{1},\vec{k}_{2}|\vec{p}\rangle\sim\int
dk_{1\perp}^{2}dk_{1l}\,dk_{2\perp}^{2}dk_{2l}\,\psi(\vec{k}_{1})\psi(\vec{k}_{2})f(p_{\perp}^{2})\varphi(x=k_{1l}/P)\delta(k_{1\perp}^{2}-p_{\perp}^{2})\delta(k_{2\perp}^{2}-p_{\perp}^{2})\delta(k_{1l}+k_{2l}-P)$
(20)
where $\psi(k)$ is the outgoing quark wave (19), and the functions
$\varphi(x)f(p_{\perp}^{2})$ are the DAs of the corresponding mesons with
longitudinal fraction ($x$) and transverse ($p_{\perp}$) momenta. For
simplicity, we take a Gaussian $f(p_{\perp})\sim{\rm exp}(-{\rm
const}\cdot p_{\perp}^{2})$ and a flat $\varphi(x)=1$, which approximate well, say, a
pion. The squared projection for a sphaleron of size $\rho=0.3\,{\rm
fm}=1/(0.6\,{\rm GeV})$ is shown in the lower plot, as a function of the
outgoing meson momentum $p_{\perp}$.
(Note that in this estimate we ignored all the other particles produced. In
reality the total mass of the cluster puts its own kinematical restrictions.
For example, for three-meson decay modes to be discussed, the tail at large
momenta is cut off above $M/3$.)
## 5 The instanton size distribution in the QCD vacuum
By now, the subject of instantons in the QCD vacuum is well established and
broad, and clearly goes beyond the scope of this review. For us, the only
relevant issue is the instanton size distribution $dn/d\rho$ in the vacuum. It
has been evaluated in various models and on the lattice. For definiteness we
use the lattice results from [25]. The average size was found to be
$\langle\rho\rangle\approx 0.30\pm 0.01\,{\rm fm}$, a bit smaller than in the
ILM. The mean distance was found instead to be $0.61\pm 0.02\,{\rm fm}$. The data on
the instanton size distribution are shown in Fig.5. (The figure is taken from
[26] and the lattice data from Hasenfratz et al [25]). The left plot shows the
size distribution itself. Recall that the semiclassical theory predicts it to
be $dn/d\rho\sim\rho^{b-5}$ at small sizes, with $b=11N_{c}/3=11$ for pure
gauge $N_{c}=3$ theory. The right plot – in which this power is taken out – is
constant at small $\rho$, which agrees with the semiclassical prediction.
The other feature is a peak at $\rho\approx 0.3\,{\rm fm}$ – the value first
proposed phenomenologically in [27], decades before the lattice data. The peak
is due to a suppression at large sizes. Trying to understand its origin, one
may factor out all known effects. The right plot shows that after this is
done, a rather simple suppression factor $\sim{\rm exp}(-{\rm const}\cdot\rho^{2})$
describes it well, for about 3 decades. What is the physical
origin of this suppression?
There are two answers to that question, which are perhaps “Poisson dual” to
each other [28]. The first is that it is due to the mutual repulsion between
an instanton and the rest of the instanton-antiinstanton ensemble. (It is
described in the mean field approximation and numerically, see the review
[29]).
Another one, proposed in [26], is that the coefficient is proportional to the
dual magnetic condensate, that of Bose-condensed monopoles. It has been
further argued there that it can be related to the string tension $\sigma$, so
that the suppression factor should be
${dn\over d\rho}=\bigg{(}{dn\over d\rho}\bigg{)}_{\rm
semiclassical}\,{\rm exp}[-2\pi\sigma\rho^{2}]$ (21)
where the Higgs VEV is traded for the string tension $\sigma$ via the dual
Higgs model of confinement. If this idea is correct, this suppression factor
should be missing at $T>T_{c}$, in which the dual magnetic condensate is
absent. However, in this regime, quantum/thermal fluctuations generate at high
$T$ a similar factor [30]
${dn\over d\rho}=\bigg{(}{dn\over d\rho}\bigg{)}_{T=0}\,{\rm exp}[-(2N_{c}+{N_{f}\over
3})(\pi\rho T)^{2}]$ (22)
related to the scattering of the quarks and gluons of the quark-gluon-plasma
(QGP) on the instanton [31]. Empirically, the suppression factor at all
temperatures looks Gaussian in $\rho$, interpolating between those limiting
regimes.
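A back-of-the-envelope sketch (ours, not from the text) shows how the peak emerges: combining the semiclassical small-size behavior $dn/d\rho\sim\rho^{b-5}$ with the suppression (21) gives a maximum at $\rho^{*}=\sqrt{(b-5)/(4\pi\sigma)}$. With $b=11$ and a commonly quoted string tension $\sigma\approx 0.19\,{\rm GeV}^{2}$ (an assumed input, not given in the text), this lands right at the observed $\approx 0.3\,{\rm fm}$:

```python
import math

b = 11                  # one-loop coefficient, b = 11 Nc / 3 for Nc = 3 pure gauge
sigma = 0.19            # GeV^2, assumed typical string tension (not from the text)
GEV_INV_TO_FM = 0.1973  # hbar*c in GeV*fm

# dn/drho ~ rho^(b-5) * exp(-2 pi sigma rho^2) peaks where the log-derivative
# vanishes: (b-5)/rho = 4 pi sigma rho  =>  rho* = sqrt((b-5)/(4 pi sigma))
rho_star = math.sqrt((b - 5) / (4.0 * math.pi * sigma))
print(rho_star * GEV_INV_TO_FM)   # ~ 0.31 fm, near the observed peak at 0.3 fm
```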
Figure 5: (left) The instanton size $\rho$ [fm] distribution $dn/(d\rho
d^{4}z)$. (right) The combination $\rho^{-6}dn/d\rho d^{4}z$, in which the
main one-loop behavior drops out for $N_{c}=3,N_{f}=0$. The points are from
the lattice work for this theory, with $\beta$=5.85 (diamonds), 6.0 (squares)
and 6.1 (circles). Their comparison demonstrates that the results are
independent of the lattice spacing. The line corresponds to the proposed expression, see
text.
## 6 Quark flavor signatures of the instanton/sphaleron process at small
$M=3-10\,{\rm GeV}$
### 6.1 The instanton-induced glueball and $\eta_{c}$ decays
As emphasized earlier, the instantons are virtual paths at zero energy,
playing a significant role in the theory of the QCD vacuum, for a review see
e.g. [29]. Their existence has been unambiguously revealed by the cooled
lattice simulations carried out by many groups, e.g. Leinweber [32].
The subject of this work, the instanton/sphaleron process, requires external
forces able to provide the amount of energy needed for the sphaleron
production. Furthermore, one should look for reactions involving gluons in
combinations to which instantons are naturally coupled. Those are in scalar
combinations or $G_{\mu\nu}^{2}$ and pseudoscalar combinations or
$G_{\alpha\beta}G_{\gamma\delta}\epsilon^{\alpha\beta\gamma\delta}$. In order
of increasing mass, they are:
(i) decays of a scalar glueball ($M\approx 1.6\,{\rm GeV}$);
(ii) decays of a pseudoscalar glueball ($M\approx 2.5\,{\rm GeV}$);
(iii) decays of pseudoscalar quarkonia ($M_{\eta_{c}}\approx 3\,{\rm GeV}$)
(footnote: we are not aware of any study of $\eta_{b}$ decays in this respect).
The main idea of this work is that gluon-gluon or (especially) Pomeron-Pomeron
collisions also couple to such operators by fusion, and through them one can
possibly investigate instanton/sphaleron processes of larger masses $M_{\rm
sph}$ at hadron colliders.
Historically, the issue of instanton-induced decays was first noticed by
Bjorken [33]. He pointed out that $\eta_{c}$ decays have 3 large 3-body modes,
several percents each of the total width:
$\eta_{c}\rightarrow KK\pi;\,\,\,\pi\pi\eta;\,\,\,\pi\pi\eta^{\prime}$
Note that there is no $\pi\pi\pi$ decay mode, or other decay modes we may
think of without strangeness: the ’t Hooft vertex must have all light quark
flavors in it, including $\bar{s}s$.
More generally, charmonium decays are especially convenient: one can use the
well known $J/\psi$ decays as a “control group”, since its three-gluon
annihilation mode does not proceed via operators that couple naturally to
instantons. Indeed, the branchings of the 2- and 3-body decays of $J/\psi$ are
much smaller, and the average multiplicity of its decays is much larger than
3. That is why the few-percent branching of these 3-meson $\eta_{c}$ decays is
remarkable by itself.
The actual calculations of $\pi\pi,KK,\eta\eta$ decays of scalar and
$KK\pi,\eta\pi\pi,\eta^{\prime}\pi\pi$ decays of $\eta_{c}$ were made by
Zetocha and Schafer [34]. Their results contain rather high powers of the
instanton radius and therefore strongly depend on its value. So the authors
used the inverted logic, evaluating from each data point the corresponding
value of the mean instanton size $\bar{\rho}$. The results reproduced the
decay widths reasonably well. Furthermore, these calculations provided about
the most accurate evaluation of the average instanton size available, in the
range of $\bar{\rho}=0.29-0.30\,fm$, common to all decay channels.
Let us start with the flavor composition. The specific form of the effective
operator between light quarks induced by instantons has been derived by ’t
Hooft
$(\bar{u}_{R}u_{L})(\bar{d}_{R}d_{L})(\bar{s}_{R}s_{L})+(L\leftrightarrow R)$
with the coefficients following from the LSZ reduction of the zero modes. For
the lightest quark clusters, of mass $M\sim 3\,{\rm GeV}$, we expect the
production of 6 quarks in combination $uuddss$. Before discussing the decay
modes, we need to look closer at $N_{f}=3$.
### 6.2 The simplified forms of the effective Lagrangian
One issue is that in [34] the emitted mesons are treated in the “local
approximation”, with the vertex Lagrangian directly projected to meson “decay
constants” (which are the values of the wave functions at the origin). In
reality, mesons fly with a momentum of about $1\,{\rm GeV}$, for which the
projections from the initial quasi-local state to the final state (the form
factors) are not exactly one. This issue of “projection” (which we have
already addressed above) gets more important as one considers sphalerons with
larger masses.
Another (more technical) issue is the correct inclusion of all diagrams.
Whatever form chosen for the 6-q Lagrangian, there are $3!=6$ channels
(various connections of quarks and antiquarks), and they all need to be
included. While it is possible to go from one form to another with a Fierz
transformation of operators, we were not sure all 6 connections were included
on equal footing, and decided to repeat the calculation.
Another reason for redoing it is that the 3-flavor version of the Lagrangian
used in [34] (their (26)) is very complicated, containing expressions with
three color matrices and structure constants of the $SU(3)$ group
$f^{abc}\lambda^{a}\lambda^{b}\lambda^{c},\,\,\,\,d^{abc}\lambda^{a}\lambda^{b}\lambda^{c}$
As we will show, these can be eliminated in favor of a simpler form, which we
use. (In B.1 we will also show one more generic form of this Lagrangian based
on the so called Weingarten coefficients of the product of unitary matrices
averaged over the groups.)
We start by explaining the two main technical complications in this problem. A
single instanton (or sphaleron) is constructed in a color $SU(2)$ setting,
with the gauge fields $A^{a}_{\mu}T^{a}_{ij}$ with color indices $i,j=1,2$.
The quark zero modes $\psi_{\alpha}^{j}$ in the chiral representation
($\gamma_{5}$ is diagonal) carry a matching spin index $\alpha=1,2$ and color
index $j=1,2$, coupled to each other in a “hedgehog” way.
However, the gauge group in QCD is $SU(3)$ and in it, the instanton appears
rotated from its standard form (just described) to some arbitrary 2d plane
$\in SU(3)$ by a unitary $3\times 3$ matrix, e.g.
$\psi^{a}_{\alpha}\rightarrow U^{a}_{i}\psi^{i}_{\alpha}$. The ’t Hooft
effective Lagrangian contains $2N_{f}=6$ (or 8 or 10) quark zero modes, and
therefore the 6-th (or 8-th or 10-th) power of this matrix. Although the
explicit expressions for “isotropic” averaging of these powers of $U$ have
been known for some time (see e.g. appendix of [35]) these formulae are rather
complicated, and contain convolutions with the structure constants
$f^{abc},d^{abc}$ of $SU(3)$. Schafer and Zetocha [34] used such a form, see
eqn. (26) of their paper.
Another technical issue is as follows. Multifermion operators can be
identically represented in many different forms, since one can color-couple
different $\bar{q}q$ pairs. (Or even $qq$ pairs as is convenient for color
superconductivity and baryon production.) Those “Fierzing” transformations are
just identities, but with flavor, color and Dirac indices involved, they can
create a multitude of operators. Yet, whatever form of the vertex operator is
used, one still needs to include it in all possible channels. For example, for
the operator structure $\bar{q}\bar{q}\bar{q}qqq$ projected onto 3 mesons,
there are obviously $3!=6$ ways to relate it to three mesonic $\bar{q}q$ wave
functions. For 8-q operators there are $4!$, etc.
Here and in Appendix B we show how one can simplify the operator structure, by
keeping its flavor determinantal form intact, which has no structure constants
as quoted in [36], and used for heavy-light multi-quark systems [37]. We then
explicitly do all possible convolutions with all mesonic wave functions in
Mathematica, avoiding Fierzing altogether. More specifically (see eq. 2.56 of
[36]):
$\displaystyle{\cal V}^{L+R}_{qqq}=$
$\displaystyle\frac{\kappa}{N_{c}(N_{c}^{2}-1)}\bigg{(}\frac{2N_{c}+1}{2(N_{c}+2)}\,{\rm
det}(UDS)$ $\displaystyle+\frac{1}{8(N_{c}+1)}\,\bigg{(}{\rm
det}(U_{\mu\nu}D_{\mu\nu}S)+{\rm det}(U_{\mu\nu}DS_{\mu\nu})+{\rm
det}(UD_{\mu\nu}S_{\mu\nu})\bigg{)}\bigg{)}+(L\leftrightarrow R)$ (23)
with a strength
$\kappa=\frac{{n_{I+\bar{I}}}}{2}\bigg{(}\frac{4\pi^{2}\rho^{3}}{M\rho}\bigg{)}^{3}$
(24)
and the short hand notations ($Q\equiv U,D,S$)
$Q=\overline{q}_{R}q_{L}\qquad
Q_{\mu\nu}=\overline{q}_{R}\sigma_{\mu\nu}q_{L}\qquad
Q^{a}=\overline{q}_{R}\sigma^{a}q_{L}$ (25)
with $\sigma_{\mu\nu}=\frac{1}{2}[\gamma_{\mu},\gamma_{\nu}]$. Note that the
spin contribution is sizably suppressed by $1/8N_{c}$ in the large $N_{c}$
limit, when compared to the scalar contribution. It is clear from our Fierzing
arguments in Appendix B and symmetry, that only two determinantal invariants
will survive after Fierzing as in (23), with only the structures $UDS$ and
$U_{\mu\nu}D_{\mu\nu}S$ and their permutations allowed. The only non-trivial
results are their weight coefficients. This observation holds for 4, 8 and 10
quark vertices as well, assuming they allow for zero modes. For instance, for
4-quark vertices the general structure is
$\displaystyle{\cal V}^{L+R}_{qq}=\,{\kappa}_{2}\,A_{2N}\,\bigg{(}{\rm
det}(UD)+B_{2N}\,{\rm det}(U_{\mu\nu}D_{\mu\nu})\bigg{)}+(L\leftrightarrow R)$
(26)
which is readily checked to hold with
$\displaystyle A_{2N}=\frac{(2N_{c}-1)}{2N_{c}(N_{c}^{2}-1)}\qquad
B_{2N}=\frac{1}{4(2N_{c}-1)}$ (27)
Note that at large $N_{c}$, the suppression of the spin contribution is still
exactly $1/8N_{c}$, with $A_{\#q}\approx 1/N_{c}^{\#}$. Remarkably, both the
value of the $A,B$ coefficients and their determinantal structures are fixed
uniquely in this limit by symmetry and scaling. For completeness, the 8-quark
vertices are of the form
$\displaystyle{\cal V}^{L+R}_{qqqq}={\kappa}_{4}\,A_{4N}$
$\displaystyle\bigg{(}{\rm det}(UDSC)+B_{4N}\,\bigg{(}{\rm
det}(U_{\mu\nu}D_{\mu\nu}SC)+\,{\rm perm.}\bigg{)}$ (28)
$\displaystyle+C_{4N}\bigg{(}{\rm
det}(U_{\mu\nu}D_{\mu\nu}S_{\alpha\beta}C_{\alpha\beta})+\,{\rm
perm.}\bigg{)}\bigg{)}+(L\leftrightarrow R)$
although for the heavy charm the use of the $L,R$ zero modes may not be
justified.
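The $N_{c}$ scaling quoted above is easy to check numerically; this is our own sketch using only the coefficients of (27):

```python
# Coefficients of the 4-quark vertex, Eq. (27)
def A2N(Nc):
    return (2 * Nc - 1) / (2 * Nc * (Nc**2 - 1))

def B2N(Nc):
    return 1 / (4 * (2 * Nc - 1))

print(A2N(3), B2N(3))        # QCD values: 5/48 and 1/20
# At large Nc the spin term is suppressed by 1/(8 Nc): B2N * 8 Nc -> 1
print(B2N(1000) * 8 * 1000)  # ~ 1
```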
In the Weyl basis, $\sigma_{\mu\nu}\rightarrow i\eta^{a}_{\mu\nu}\sigma^{a}$
with the ’t Hooft symbol satisfying
$\eta^{a}_{\mu\nu}\eta^{b}_{\mu\nu}=4\delta^{ab}$, so (23) can be simplified to
$\displaystyle{\cal V}^{L+R}_{qqq}=$
$\displaystyle\frac{\kappa}{N_{c}(N_{c}^{2}-1)}\bigg{(}\frac{2N_{c}+1}{2(N_{c}+2)}\,{\rm
det}(UDS)$ $\displaystyle-\frac{1}{2(N_{c}+1)}\,\bigg{(}{\rm
det}(U^{a}D^{a}S)+{\rm det}(U^{a}DS^{a})+{\rm
det}(UD^{a}S^{a})\bigg{)}\bigg{)}+(L\leftrightarrow R)$ (29)
${\cal V}_{qqq}$ is only active in flavor-singlet 6-quark states. The flavor
determinantal interactions can be made more explicit by using the permutation
operators in flavor space: the symmetric group $S_{3}$ is composed of only $3!$
terms, 3 cyclic permutations with positive signature and 3 pair permutations
with negative signature. Clearly, the 3-body instanton-induced interaction (23)
is nonvanishing only in flavor-singlet $uds$ states, where it is repulsive. Its
2-body reduction is attractive in states with an antisymmetric flavor diquark
$ud$, $us$ or $ds$. A more explicit writing of (29), suitable for numerical
analysis in terms of explicit $3\times 3$ flavor determinants, is
$\displaystyle{\cal
V}^{L+R}_{qqq}=\frac{\kappa}{N_{c}(N_{c}^{2}-1)}\bigg{[}\bigg{(}\frac{2N_{c}+1}{2(N_{c}+2)}\bigg{)}\begin{Vmatrix}\overline{u}_{R}u_{L}&\overline{u}_{R}d_{L}&\overline{u}_{R}s_{L}\\\
\overline{d}_{R}u_{L}&\overline{d}_{R}d_{L}&\overline{d}_{R}s_{L}\\\
\overline{s}_{R}u_{L}&\overline{s}_{R}d_{L}&\overline{s}_{R}s_{L}\\\
\end{Vmatrix}$
$\displaystyle-\frac{1}{2(N_{c}+1)}\sum_{a=1}^{3}\bigg{(}\begin{Vmatrix}\overline{u}_{R}\sigma^{a}u_{L}&\overline{u}_{R}\sigma^{a}d_{L}&\overline{u}_{R}s_{L}\\\
\overline{d}_{R}\sigma^{a}u_{L}&\overline{d}_{R}\sigma^{a}d_{L}&\overline{d}_{R}s_{L}\\\
\overline{s}_{R}u_{L}&\overline{s}_{R}d_{L}&\overline{s}_{R}s_{L}\\\
\end{Vmatrix}+\begin{Vmatrix}\overline{u}_{R}\sigma^{a}u_{L}&\overline{u}_{R}d_{L}&\overline{u}_{R}\sigma^{a}s_{L}\\\
\overline{d}_{R}u_{L}&\overline{d}_{R}d_{L}&\overline{d}_{R}s_{L}\\\
\overline{s}_{R}\sigma^{a}u_{L}&\overline{s}_{R}d_{L}&\overline{s}_{R}\sigma^{a}s_{L}\\\
\end{Vmatrix}+\begin{Vmatrix}\overline{u}_{R}u_{L}&\overline{u}_{R}d_{L}&\overline{u}_{R}s_{L}\\\
\overline{d}_{R}u_{L}&\overline{d}_{R}\sigma^{a}d_{L}&\overline{d}_{R}\sigma^{a}s_{L}\\\
\overline{s}_{R}u_{L}&\overline{s}_{R}\sigma^{a}d_{L}&\overline{s}_{R}\sigma^{a}s_{L}\\\
\end{Vmatrix}\bigg{)}\bigg{]}$ $\displaystyle+(L\leftrightarrow R)$ (30)
### 6.3 Mesonic decays of sphalerons
Convoluting the vertex function in the form (23) with various meson wave
functions in all possible $3!=6$ ways, we obtain the matrix elements for a
number of 3-meson decay channels, listed in the Table. The meson definitions
and couplings are defined in Appendix A.
Channel | Mode | PDG2020 | input [34] | $|M|$
---|---|---|---|---
$K\bar{K}\pi$ | $K^{+}K^{-}\pi^{0}$ | $7.3\pm 0.4$ (all 4) | $5.5\pm 1.7$ (all 4) | 5.07 $K_{K}^{2}K_{\pi}$
 | $K^{+}\bar{K}^{0}\pi^{-}$ | | | 7.27 $K_{K}^{2}K_{\pi}$
 | $K^{0}\bar{K}^{0}\pi^{0}$ | | | 5.07 $K_{K}^{2}K_{\pi}$
 | $K^{-}K^{0}\pi^{+}$ | | | 7.27 $K_{K}^{2}K_{\pi}$
$\pi\pi\eta$ | $\pi^{+}\pi^{-}\eta$ | $1.7\pm 0.5$ | $4.9\pm 1.8$ (both) | 4.92 $K_{\pi}^{2}K_{\eta}^{s}$
 | $\pi^{0}\pi^{0}\eta$ | | | 2.46 $K_{\pi}^{2}K_{\eta}^{s}$
$\pi\pi\eta^{\prime}$ | $\pi^{+}\pi^{-}\eta^{\prime}$ | $4.1\pm 1.7$ (both) | $4.1\pm 1.7$ (both) | 5.20 $K_{\pi}^{2}K_{\eta^{\prime}}^{s}$
 | $\pi^{0}\pi^{0}\eta^{\prime}$ | | | 2.60 $K_{\pi}^{2}K_{\eta^{\prime}}^{s}$
$\bar{K}K\eta$ | $K^{+}K^{-}\eta$ | $1.36\pm 0.15$ (both) | | 3.68 $K_{K}^{2}F_{\eta}^{q}$
 | $K^{0}\bar{K}^{0}\eta$ | | | 3.68 $K_{K}^{2}F_{\eta}^{q}$
$\bar{K}K\eta^{\prime}$ | $K^{+}K^{-}\eta^{\prime}$ | | | 3.53 $K_{K}^{2}F_{\eta^{\prime}}^{q}$
 | $K^{0}\bar{K}^{0}\eta^{\prime}$ | | | 3.53 $K_{K}^{2}F_{\eta^{\prime}}^{q}$
$\eta\eta\eta$ | | | | 1.32 $(K_{\eta}^{q})^{2}K_{\eta}^{s}$
Table 1: The first column gives the generic names of the decay channels of
$\eta_{c}$, while the second column records the specific channels. The third
column contains the corresponding branching ratio (percents) according to the
Particle Data Table 2020. For comparison, we show in the fourth column the
corresponding numbers used in [34]. The last column gives the decay matrix
elements. The meson-specific constants (wave function at the origin) are
defined in Appendix A.
Our first comment on the table is that in the two decades since the work [34],
some of the experimental branching ratios have improved in accuracy, while
others have been substantially modified. This needs to be kept in mind when
comparing the predictions to experiment.
Of course, we can construct many ratios out of the Table. Here, we will
mention two in particular
$\displaystyle{\Gamma(K\bar{K}\pi)\over\Gamma(\eta\pi\pi)}={\sum|M_{K\bar{K}\pi}|^{2}\over\sum|M_{\eta\pi\pi}|^{2}}\,{0.111\over 0.135}\approx
10\,\,\,\,\,\,\,\bigg{(}{\rm exp}:{7.3\pm 0.4\over 2.55\pm 0.75}\bigg{)}$ (31)
$\displaystyle{\Gamma(\eta\pi\pi)\over\Gamma(\eta^{\prime}\pi\pi)}={\sum|M_{\eta\pi\pi}|^{2}\over\sum|M_{\eta^{\prime}\pi\pi}|^{2}}\,{0.135\over
0.0893}\approx 0.9\,\,\,\bigg{(}{\rm exp}:{2.55\pm 0.75\over 4.1\pm
1.7}\bigg{)}$
where the numerical factors correspond to the three-body phase space. The ratios are
only in qualitative agreement with the reported measurements. Clearly more
studies are needed.
### 6.4 Baryonic decays of sphalerons
Another fascinating issue, not so far discussed in the literature, is whether
the 6-fermion effective Lagrangian can be used to produce a baryon-antibaryon
pair, rather than 3 mesons.
At first sight one finds a problem which looks quite severe. The operator
involves the quark set $u,d,s$, and the natural baryons to look at are
$\Lambda$ or $\Sigma^{0}$ baryons. As for any baryon, their color wave
function is antisymmetric in color, $\epsilon_{c_{1}c_{2}c_{3}}$. The flavor
determinant is also antisymmetric in flavor $\epsilon_{f_{1}f_{2}f_{3}}$.
Fermi statistics then require the remaining part of the wave function to be
antisymmetric. In the lowest shell of the quark model, this remaining part is
made of three quark spins. It is not possible to create
$\epsilon_{s_{1}s_{2}s_{3}}$ spin wave functions, since the spin indices are
$1,2$ only. Indeed, it is well known that the lowest baryonic octet does not
have a ninth state, a singlet, unlike the mesons.
This notwithstanding, let us also approach the issue phenomenologically. We
assume, following Bjorken, that $\eta_{c}$ decays possess topological paths,
while $J/\psi$ decays do not. So, what do we see in the baryonic sector?
Both $\eta_{c}$ and $J/\psi$ have observed decays into $\bar{\Lambda}\Lambda$,
with similar branching $\sim 10^{-3}$. Yet their absolute widths differ by
roughly a factor of 200, in favor of the former case. What speaks against the
topological mechanism is the fact that in both cases the channels $\bar{p}p$
and $\Sigma^{+}\Sigma^{-}$ have similar yields to $\bar{\Lambda}\Lambda$,
although they cannot follow from our 6-fermion effective vertex.
In summary, topological production of $\bar{\Lambda}\Lambda$ does not appear
to take place in $\eta_{c}$ decays.
Yet, if the sphaleron mass is larger than 3 GeV, one may think of production
of two higher mass baryon resonances. Among those there are known $SU(3)$
singlets, which are anti-symmetric in flavor. The lowest of them are
$\Lambda(1405)\,\,:\,\,J^{P}=1/2^{-},\qquad\Lambda(1520)\,\,:\,\,J^{P}=3/2^{-}$
Their negative parity is explained in the quark model by a unit of orbital
momentum $L=1$. Unfortunately, this negative parity also protects them from any
mixing with the usual $\Lambda$. As they are well known resonances, their
back-to-back production should be quite noticeable. Their widths are
$\Gamma=50$ and $157\,{\rm MeV}$, respectively. They both have very
characteristic decays,
$\Lambda(1405)\rightarrow\Lambda\gamma,\,\,\,\Lambda(1520)\rightarrow\Sigma\pi$
with branching ratios of the order of 50% for both.
### 6.5 Chirality correlation in baryonic decays
The $\Lambda\bar{\Lambda}$ channel is the most interesting for two reasons:
(i) Zero isospin means that $(ud)$ diquark has spin zero, and therefore the
whole spin of $\Lambda$ is carried by its strange quark;
(ii) weak decays of the $\Lambda$ hyperon allow one to observe its polarization.
Indeed, in the decay $\Lambda\rightarrow p\pi^{-}$ of polarized hyperon, the
direction of the proton is mostly along the initial polarization direction.
As a measure of $s$ quark chirality one can use
$\xi_{\Lambda}\equiv{\rm
cos}\big{(}\theta(\vec{p}_{\Lambda}\vec{p}_{p})\big{)}$ (32)
and calculate the distribution $P(\chi_{\Lambda\bar{\Lambda}})$ over the
relative chirality, defined as the product
$\chi_{\Lambda\bar{\Lambda}}\equiv\xi_{\Lambda}\times\xi_{\bar{\Lambda}}$ (33)
Ordinary perturbative diagrams with one or two gluons (or photons) leading to
the production of a strange quark pair are “vector-like”, meaning that the
chiralities are the same,
$(\bar{s}_{L}s_{L})+(\bar{s}_{R}s_{R})$
This means either both $\xi$ are positive or both are negative, so that
$\chi_{\Lambda\bar{\Lambda}}$ is positive.
On the other hand, the instanton/sphaleron-induced vertex is non-diagonal
in chirality
$(\bar{s}_{L}s_{R})+(\bar{s}_{R}s_{L})$
The produced $\Lambda$ and $\bar{\Lambda}$ should therefore have opposite
chiralities, and $\chi_{\Lambda\bar{\Lambda}}$ is negative. We are not aware of
such a study, even in inclusive reactions in which a pair of $\Lambda$ hyperon
decays is identified with reasonable statistics.
As we discussed above, the exclusive production of $\bar{\Lambda}\Lambda$ from
the ’t Hooft-like Lagrangian is not possible. Yet strong chirality
correlations in question would perhaps persist in channels with other
associate hadrons. For example, in the production of
$\bar{\Lambda}(1405)\Lambda(1405)$ with their subsequent radiative decays into
$\Lambda+\gamma$, there should remain rather significant correlation of
polarizations.
If observed, it would be an excellent indication of the topological origin of
the vertex, pointing to an explicit violation of the $U_{A}(1)$ symmetry.
## 7 Sphaleron decays at medium masses:
$M=10-20\,{\rm GeV}$ and 10-fermion operators
### 7.1 Charm pairs and decays with 8-fermion operators
The field magnitude at the center of the instantons is comparable to
$m_{c}^{2}$. Although charm is not usually treated as a light flavor, it must
have a certain coupling to instantons. The large instanton-induced decay rates
of $\eta_{c}$ confirm this idea.
The 8-fermion operators also have a flavor-asymmetric structure
$V\sim(\bar{u}u)(\bar{d}d)(\bar{s}s)(\bar{c}c)$
with a typical determinantal structure. There are $4!=24$ 4-meson channels
one should think of: for example those can be the same 3-meson channels
discussed above, $\pi\pi\eta,\pi\pi\eta^{\prime},KK\pi$ with an added
$\eta_{c}$. The other quite distinct channels are e.g. $KKDD$ without any
pions. Note also that 4 pseudoscalar mesons without orbital momentum
correspond to operator $G^{2}$, not the pseudoscalar one.
### 7.2 Multi-glueball production
We already mentioned several times, that the total sphaleron mass is ${\cal
O}(3\pi^{2}/g^{2}\rho)$, while the energy of the produced gluons is only
$O(1/\rho)$. The gauge quanta multiplicity is therefore $\sim
1/g^{2}\sim{\cal O}(100)$ in the electroweak theory and ${\cal O}(10)$ in QCD.
Therefore, in the “medium sphaleron mass” range indicated, the energy per
gluon is in the range $E_{g}=1-2\,{\rm GeV}$. We know, from lattice studies
and models, that under such conditions gluons are not yet “partons” with
independent life. Instead, they are paired into the lowest glueball states
with masses
$M_{0^{+}}\approx 1.6\,{\rm GeV}\,\qquad M_{2^{+}}\approx 2.2\,{\rm GeV}$ (34)
So, we propose that in this mass range a significant part of the energy will go
into several lowest glueballs. All of them are now reasonably well identified
with scalar and tensor resonances.
### 7.3 Semiclassical production in constant electric field and in sphaleron
explosion
Another new interesting option is the production of $\bar{b}b$ pair by
sphaleron decays.
Before we provide some estimates, let us recall how the related classic
problem of the pair production of $e^{+}e^{-}$ in a constant electric field
can be described semiclassically. It is widely known as the Schwinger process,
as he solved it in detail in 1950. However, we will discuss neither the
Schwinger paper, nor even the earlier Heisenberg-Euler paper, but the much
earlier semiclassical work [38] from 1931, which predates both.
The EOM of a charge moving relativistically in a constant electric field is a
classic problem which everybody has encountered in electromagnetism classes.
Writing it in
the usual Minkowski form
${dp\over dt}={d\over dt}\bigg{(}{v\over\sqrt{1-v^{2}}}\bigg{)}={eE\over
m}\equiv a$ (35)
yields the hyperbolic solution
$v_{M}(t)={at\over\sqrt{1+a^{2}t^{2}}},\,\,\,\,x_{M}(t)={1\over
a}\big{(}\sqrt{1+a^{2}t^{2}}-1\big{)}$ (36)
for a particle that starts at rest with a nonrelativistic acceleration,
$x_{M}\approx at^{2}/2$. At large times, the motion turns ultrarelativistic with
$v_{M}\rightarrow 1$.
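A short numerical sketch (ours) of this standard hyperbolic trajectory, checking that $x(t)=(\sqrt{1+a^{2}t^{2}}-1)/a$ integrates the velocity and reduces to $at^{2}/2$ at small times:

```python
import math

a = 1.0  # eE/m in natural units (illustrative value, not from the text)

def v(t):
    # Velocity of hyperbolic motion, Eq. (36)
    return a * t / math.sqrt(1.0 + a**2 * t**2)

def x(t):
    # Position of hyperbolic motion
    return (math.sqrt(1.0 + a**2 * t**2) - 1.0) / a

# dx/dt should equal v(t): central finite-difference check at t = 2
t, h = 2.0, 1e-6
print((x(t + h) - x(t - h)) / (2 * h), v(t))   # both ~ 0.894

# Nonrelativistic limit: x(t) ~ a t^2 / 2 for small t
print(x(1e-3), a * 1e-6 / 2)
```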
The analytical continuation of the trajectory to Euclidean time $\tau=it$
yields
$v_{E}={a\tau\over\sqrt{1-a^{2}\tau^{2}}},\,\,\,x_{E}(\tau)={1\over
a}\big{(}1-\sqrt{1-a^{2}\tau^{2}}\big{)}$ (37)
At time $\tau=0$ the particle is at the origin with zero velocity,
$x_{E}=0,v_{E}=0$. For $-1/a<\tau<1/a$ the solution describes a Euclidean path
in the shape of a semicircle. In Euclidean space, the world-time is no
different from the other coordinates, and the electric field $G_{01}$ is no
different from the magnetic field, so in the 0-1 plane the paths are circles,
as they are in all other planes.
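A quick way to see the circular shape (a standard identity for hyperbolic motion, added here as a cross-check): the Minkowski trajectory obeys a hyperbola equation whose continuation $t\rightarrow -i\tau$ is manifestly a circle of radius $1/a$ centered at $x=-1/a$:

```latex
\Big(x_M+\frac{1}{a}\Big)^2 - t^2 \;=\; \frac{1}{a^2}
\quad\xrightarrow{\;t\,=\,-i\tau\;}\quad
\Big(x_E+\frac{1}{a}\Big)^2 + \tau^2 \;=\; \frac{1}{a^2}
```

The tunneling path is the half of this circle traversed for $-1/a<\tau<1/a$.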
To understand the physical meaning of the semicircle (blue dashed line) in the
Dirac sea interpretation, one needs to split it into two halves. The path
describes tunneling through the “mass gap”, between energies $-M$ and $M$ in
the spectrum of states. In Minkowski space there are no states between
$E=-\sqrt{p^{2}+m^{2}}$ and $E=\sqrt{p^{2}+m^{2}}$ with real momentum $p$. In
the Euclidean world, however, the momentum is imaginary!
Let us note that the rapidity $y_{M}={\rm arctanh}(v_{M})$ in Minkowski space
becomes, in Euclidean space, a rotation angle
$y_{E}={\rm arctan}(v_{E})={\rm
arctan}\bigg{(}{a\tau\over\sqrt{1-a^{2}\tau^{2}}}\bigg{)}$ (38)
and at $\tau=\pm 1/a$ it is $\pm\pi/2$. The action $S=\int(-mds-eExdt)$ when
evaluated with the Euclidean path gives $S_{E}=\pi m^{2}/2eE$. The
semiclassical probability (square of the amplitude) of the pair production is
then
$P\sim e^{-2S_{E}}\sim e^{-{\pi m^{2}}/{eE}}$ (39)
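As a minimal numerical illustration of (39) (the helper name and the sample values in natural units are ours, not from the text):

```python
import math

# The pair-production probability is controlled by the exponent
# 2 S_E = pi m^2 / (eE), cf. (39).
def suppression(m, eE):
    """exp(-pi m^2 / eE), the semiclassical suppression factor."""
    return math.exp(-math.pi * m * m / eE)

print(suppression(1.0, 1.0))   # e^{-pi}   ~ 4.3e-2
print(suppression(2.0, 1.0))   # e^{-4 pi} ~ 3.5e-6: doubling m is very costly
```

The quadratic dependence on the mass in the exponent is what makes heavier pairs so strongly suppressed.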
When discussing the heavy quark production we will assume that the back
reaction of the produced quark on the gauge field can be neglected. This
implies that the total sphaleron mass is much larger than that of the produced
pair
${3\pi^{2}\over g^{2}(\rho)\rho}\gg 2M$ (40)
For $\rho\sim 1/3\,{\rm fm}$, corresponding to the maximum in the instanton
size distribution, the l.h.s. is about $M_{\rm sph}\approx 3\,{\rm GeV}$.
Therefore, the condition is satisfied for the strange quark pair $\bar{s}s$
but not for the charm quark pair $\bar{c}c$.
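The arithmetic behind this back-reaction condition can be checked directly, using the values quoted in the text:

```python
# Check of condition (40): M_sph = 3 pi^2 / (g^2 rho) >> 2M (all values in GeV).
M_sph = 3.0              # quoted l.h.s. value at rho ~ 1/3 fm
M_s, M_c = 0.14, 1.5     # strange and charm quark masses used in the text

print(M_sph / (2 * M_s))   # ~ 10.7: 2M_s << M_sph, condition satisfied
print(M_sph / (2 * M_c))   # = 1.0 : 2M_c ~ M_sph, condition fails for charm
```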
The electric field during the sphaleron explosion $E^{m}_{a}(\vec{x},t)$
follows from the expressions for the field potential (16). The component with
$a=3$ corresponds to the diagonal Pauli generator $\tau^{3}$, so that quark
color remains unchanged. The maximal magnitude of the field corresponds to the
$x^{3}$ or $z$ direction, $E^{3}_{3}$. The formula is a bit long to give here,
but its behavior is shown in Fig. 6.
Figure 6: Snapshots of the electric field component $E^{3}_{3}(r,t)$ in
units of $1/g\rho^{2}$, as a function of $x^{3}/\rho$, for times $t/\rho=0.5$
(dashed), $t/\rho=1$ (solid) and $t/\rho=2$ (dotted).
From our discussion of the semiclassical quark pair production in a
$homogeneous$ electric field it follows that the field must deliver a momentum
impulse
$\Delta p=\int Edt\sim M$
to reach, in Euclidean notation, the rotation angle $y_{E}=\pi/2$ needed to
make it real. For an estimate, one can take $E\approx 1.5/\rho^{2}$ and
$\delta t\approx\rho$ (following Fig. 6), from which the estimated impulse
is
$\Delta p=E\delta t\approx{1.5/\rho}$
This gives an estimated limit for the mass $M$ of a quark which is $likely$ to
be produced
$M<1.5/\rho$ (41)
Using $1/\rho=0.6\,GeV$, one finds the r.h.s. to be $\approx 1\,GeV$. This
implies that strange quarks, with $M_{s}\sim 0.14\,GeV$, can be produced, but
not the charmed ones, with $M_{c}\sim 1.5\,GeV$. To produce charm quarks, one
would need to decrease $\rho$ by about a factor 2; to produce a $b$ quark,
with $M_{b}\approx 5\,GeV$, one would need to decrease $\rho$ by a factor 6 or
so.
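A minimal sketch of the arithmetic behind (41), using the quark masses quoted in the text (variable names are ours):

```python
# Threshold (41): a quark pair is "likely" produced if M < 1.5/rho (in GeV).
inv_rho = 0.6                              # 1/rho, as in the text
threshold = 1.5 * inv_rho                  # = 0.9 GeV, the quoted ~1 GeV limit
masses = {"s": 0.14, "c": 1.5, "b": 5.0}   # quark masses used in the text

for q, M in masses.items():
    factor = M / threshold                 # factor by which rho must shrink
    print(q, M < threshold, round(factor, 1))
# s is easily produced; c needs rho smaller by ~1.7 (the "factor 2" above),
# b by ~5.6 (the "factor 6 or so").
```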
In order to get a quantitative semiclassical description one has to do the
following: (i) convert the expressions for the field to Euclidean time; and
(ii) solve the relativistic classical EOM
$M{du^{\mu}\over ds}=F^{\mu\nu}u_{\nu}$
where $u^{\mu}=dx^{\mu}/ds$ and $s$ is the proper time,
$ds^{2}=dt^{2}-d\vec{x}^{2}$. Comparable and rather complex electric and
magnetic fields make the paths quite complicated. On top of that, the result
depends on the starting location of the particle. So, at this time, we have no
results on such a calculation to report.
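For the solvable constant-field case, step (ii) can be sketched numerically. The following is an illustrative RK4 integration of $M\,du^{\mu}/ds=F^{\mu\nu}u_{\nu}$ with only $F^{01}\neq 0$, reproducing hyperbolic motion (the solver and conventions are ours, a sanity check rather than the full sphaleron-field calculation):

```python
import math

# With a constant electric field the EOM reduces to the coupled pair
#   du^0/ds = a u^1,  du^1/ds = a u^0,   a = eE/M,
# with exact solution u^0 = cosh(as), u^1 = sinh(as).

def rk4_step(f, y, h):
    """One classical Runge-Kutta step for y' = f(y)."""
    k1 = f(y)
    k2 = f([yi + 0.5*h*ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5*h*ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6.0*(a1 + 2*a2 + 2*a3 + a4)
            for yi, a1, a2, a3, a4 in zip(y, k1, k2, k3, k4)]

a = 1.0                       # eE/M in natural units (illustrative value)
rhs = lambda y: [a*y[1], a*y[0]]

y = [1.0, 0.0]                # particle at rest: u^0 = 1, u^1 = 0
h, S = 0.01, 2.0
for _ in range(int(S/h)):
    y = rk4_step(rhs, y, h)

u0, u1 = y
print(u0, math.cosh(a*S))     # agree to better than 1e-6
print(u0*u0 - u1*u1)          # normalization u.u = 1 is preserved
```

The same integrator could in principle be fed the Euclidean-continued sphaleron fields, with the complication (noted above) that the result depends on the starting location of the particle.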
Figure 7: Schematic picture illustrating the configuration of QCD strings in
four-jet events. The left corresponds to the usual case, when jets originate
from collisions at the origin in the transverse plane. The right corresponds
to an exploding sphaleron in which the strings are not connected to the
origin, but are close to the expanding shell (dotted circle).
## 8 Sphaleron decays at large masses, $M\sim 100\,GeV$
These masses fall in the range discussed in the theoretical literature, and
mostly searched for at HERA and the LHC, as they promise ${\cal O}(10)$ well
recognized gluonic and quark jets. Background evaluations for such events have
been attempted, using current event generators tuned to multi-jet events.
Two comments are in order:
(i) In this mass range, it is possible to use double diffractive events at
LHC, which are expected to reduce backgrounds substantially in comparison to
ambient $pp$ collisions.
(ii) In the usual parton-parton collisions, the fragmentation function of
${\cal O}(10\,GeV)$ gluons is essentially the product of string breaking. One
end of these strings is on the leading gluon, the other ends on throughgoing
partons, or at the origin in the transverse plane, see Fig. 7 left. But sphaleron
decay leaves the interior of the exploding shell “empty” (more precisely,
filled with a pure gauge configuration). As these gluons become physical and
separated from each other, the strings would go $between$ them, close to the
shell, rather than extending to the origin, see Fig. 7 right. This suggests the
appearance of some unusual gluon jet fragmentation functions, with a
significant depletion at small $p_{t}$.
## 9 Magnetic field and Chiral Magnetic Effect
In the presence of a magnetic field $\vec{b}_{EM}$ and a chiral
disbalance of fermions, there exists the so-called Chiral Magnetic Effect
(CME): an electric current along $\vec{b}_{EM}$ is created; for a review see
[39]. Experiments aimed at observing it in heavy ion collisions at RHIC have
been made, lately utilizing beams of nuclei with the same atomic weight but
different charges, but the analysis is not conclusive. The CME was observed in
semimetals [39].
An interesting manifestation of the phenomenon must also be present in the
instanton-sphaleron production processes, which we propose in this paper.
The first part of it, related to instantons, has already been studied. While
calculating nucleon form factors using instantons [40], Faccioli observed that
the instanton zero mode acquires an electric dipole along the magnetic field.
(Of course, adding the antiinstanton contribution cancels it, unless the
topological $\theta$ angle is nonzero.) Lattice simulations in a magnetic
field carried out in [41] have confirmed the reality of this effect.
The extensive analytic study of the electric dipole moment of a zero mode is
due to Basar, Dunne and Kharzeev [42]. Let us just recall a qualitative
explanation of the mechanism from their work. The Dirac equation in the
instanton background conserves total
$\vec{J}=\vec{L}+\vec{S}+\vec{T}$
the sum of orbital momentum, spin and (SU(2)) isospin. As noticed by ’t Hooft,
the isospin-orbit $(\vec{T}\cdot\vec{L})$ term forces orbital momentum $not$
to be a good quantum number. Therefore, it is possible to mix opposite parity
states (this discussion is in the instanton background, which is CP
non-invariant; the QCD vacuum and other states are CP conserving). The external
electromagnetic field $\vec{b}$ couples spin to angular momentum, and induces
such mixing. The zero fermion and antifermion modes are deformed in different
directions, creating a nonzero electric dipole along the $\vec{b}$ field.
(Antiinstantons of course would do the opposite: we discuss local, not global
CP violation.) No observable consequences of this effect have so far (to our
knowledge) been proposed.
Now, consider e.g. very peripheral heavy ion collisions, in which Pomeron-
Pomeron (or $\gamma$-Pomeron or $\gamma\gamma$) collisions initiate the
instanton-sphaleron production process. As usual, the large electric charges
of the ions
generate a strong magnetic field at the production point. The exploding
instanton and sphaleron gauge field remains spherical, but, since the quark
zero mode is deformed, the exploding electric field will carry the quarks
differently in the direction of the magnetic field, as compared to the two
other directions, producing an anisotropy of the quark momenta.
The effect of a magnetic field can be seen by comparing $pp$ and $PbPb$
peripheral collisions. The orientation of the collision plane (to which
$\vec{b}$ is normal) can be deduced from the direction of two forward-moving
protons (or ion remnants).
The sign of the effect for a quark depends on its electric charge, and the
light quarks in question, $u,d,s$ have charges $+2/3,-1/3,-1/3$. So, for the
mesons $\pi^{+}$ versus $\pi^{-}$ (kaons, etc) the effects are added,
resulting in a charge-dependent deformation of the distribution. The instanton
and antiinstanton events still produce opposite signs of the effect. So one
may construct a meson $correlation$ observable inside an event ($all$
$\pi^{+}$ have one preferred direction, opposite to $\pi^{-}$).
Another (statistically challenging) option is to select events with $\Lambda$
(or $\bar{\Lambda}$) decays, which tells us about the strange quark chirality.
Statistically selecting “more-likely-instanton” or “more-likely-antiinstanton”
events, one can perhaps see an electric dipole in the event directly.
Concluding this discussion, let us again emphasize the following. The
instanton-sphaleron process is the “hydrogen atom” of topological effects. The
chiral correlations produced in it are maximal, as all quarks and
antiquarks have fixed chiralities. (For comparison, in heavy ion collisions we
think the axial charge of the fireball is between $\pm 20$ or so, on top of
thousands of unpolarized ones.) While the proposed measurements may appear
quite exotic, the cross sections are relatively large, allowing the LHC with
the right trigger to produce billions of diffractive events.
## 10 The “entrance factor”
So far our discussion has been mostly semiclassical: we considered (Euclidean)
tunneling paths and (Minkowski) sphaleron explosions. We then projected the
outgoing quanta onto some final hadronic states.
Now we focus on the very initial stage of the collision. Here we first have to
make hard choices, selecting theoretical tools for its description. The
colliding protons can be seen as colliding (i) partons (gluons or quarks);
(ii) color dipoles; (iii) glueballs; or (iv) Pomerons. Let us briefly discuss
each in turn.
At a high mass scale, e.g. the sphaleron mass squared
$s^{\prime}=M^{2}_{\rm sph}\gg 100\,{\rm GeV}^{2}$
the parton-parton collision is the natural perturbative description, developed
since the 1970’s in the context of jets or heavy quanta production. It was
applied by Ringwald and Schrempp for DIS at HERA [43], and now by Khoze et al.
[44] to LHC setting, incorporating Mueller perturbative correction [45]. For
comparison of these predictions with HERA data see [46], and with the LHC [47].
In the high-mass regime the theoretical predictions have reasonable
uncertainties, such as in the semiclassical prefactor, but the main difficulty
is the large background induced by ambient events. The main observable is
spherical multi-jet events.
To illustrate the situation, we show the gluon-gluon cross section
$\sigma(\sqrt{s^{\prime}})$ of the instanton-sphaleron process in Fig.8. The
points are from Table 1 by Khoze et al. [44]. The instanton size in the range
of the plot changes from $1/\rho=1\,{\rm GeV}$ to $75\,{\rm GeV}$, and the
number of outgoing gluons at the high end reaches about a dozen.
The line, shown for comparison, is $1/M^{9}$. Recall that the one-loop
coefficient of the QCD beta function is $b=(11/3)N_{c}-(2/3)N_{f}$, or 9 if
the number of light flavors $N_{f}=3$. The effective action is twice the
instanton action, $2S_{inst}$, minus its depletion $\Delta S$ for sphaleron
production. The cross section should be, by dimension,
$\sigma\sim{1\over M^{2}}\bigg{(}{\Lambda_{QCD}\over M}\bigg{)}^{b(2-\Delta
S/S_{0})}$
One can see that the original estimates
${\Delta S\over S_{0}}\approx 1$ (42)
are supported by actual multidimensional integration.
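The one-loop coefficient and the resulting exponent can be checked in a trivial arithmetic sketch (variable names are ours):

```python
# One-loop beta coefficient for Nc = Nf = 3, kept as an exact ratio.
Nc, Nf = 3, 3
b = (11*Nc - 2*Nf) / 3       # (11/3)Nc - (2/3)Nf
print(b)                     # -> 9.0

# Exponent of (Lambda/M)^{b(2 - dS/S0)} for the estimate (42), dS/S0 ~ 1:
dS_over_S0 = 1.0
print(b * (2 - dS_over_S0))  # -> 9.0, the power compared with the 1/M^9 line
```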
The calculation along a path including both the Euclidean and Minkowskian
times has been performed in [48], in the electroweak setting. Their result
(solid line in Fig.6 of that paper) shows that the action is reduced from
$\approx 12.5$ at the zero sphaleron mass to about $\approx 7$ at the large
sphaleron mass. It also supports the estimate (42).
The backgrounds come from multiple QCD reactions, which have cross sections
$\sigma_{background}\sim{\alpha_{s}^{2}\over M^{2}}$
It is therefore clear that the task of separating the signal from the
background becomes much harder as the cluster mass $M$ grows.
Figure 8: Cross section $\sigma(M)$ in pb of the gluon-gluon-
instanton/sphaleron process, versus the sphaleron mass $M\,({\rm GeV})$. The
points are explained in the text; the line is the $1/M^{9}$ background
dependence shown for comparison.
As the momentum transfer scale decreases, one may naturally think of the
coherent contribution of two (or more) partons. Color neutrality can be
implemented starting from color dipoles. Unlike partons, the colliding color
dipoles have
a natural scale given by their sizes $a_{1},a_{2}$. Correlating dipoles with
instantons via Wilson loops has been done by us [16]. The cross sections
obtained can be directly tested in double-inelastic electron-positron (or
$\gamma^{*}\gamma^{*}$) collisions. Unfortunately, a description of a proton
in terms of color dipoles is not yet (to our knowledge) developed.
At small momentum transfer scale – in the diffractive processes we will focus
on – the colliding objects are described in terms of (reggeized) hadron
exchanges, especially by tensor glueballs or (their extension to the whole
Regge trajectory) the Pomerons. In a number of relatively recent works it has
been shown that Pomeron exchanges require an effective description in terms of
a symmetric tensor. These facts, together with the overall development of
holographic QCD models, have strengthened the Pomeron-graviton connection.
## 11 Sphaleron production in Pomeron-Pomeron collisions
We start with some general remarks related to experimental setting.
Double-Pomeron production processes at the LHC are only at their initial
stage. A general but brief discussion of the existing detectors was made in
section 1.2. Let us only add that apparently in the standard high-luminosity
LHC runs one cannot access masses less than, say, hundreds of ${\rm GeV}$, but
one can reach any masses provided dedicated low-luminosity runs are performed.
As we will detail below, Pomeron-Pomeron vertices are coupled to two
operators, the scalar $G^{2}$ and the pseudoscalar $G\tilde{G}$. The former is
maximal when the electric field directions are parallel, the latter when they
are orthogonal. The momentum kicks to the two protons are directed along the
electric field. Therefore, by locating Roman pots at different azimuths, one
can in principle “tag” and separate these two contributions.
Returning now to current experiments at the LHC, we note that the exclusive
channel $\pi^{+}\pi^{-}$ has already been studied by CMS [49], for invariant
masses up to $M<2\,{\rm GeV}$. It shows a $\rho$-meson peak, and contains
hints of some other resonances.
Heavier clusters have unfortunately only been studied by the old UA8
collaboration [3], with $p\bar{p}$ collisions at the SPS. The production cross
section in Pomeron-Pomeron collisions at the peak $M\sim 3\,{\rm GeV}$ is
rather large, $\sigma_{PP}\sim 4\,{\rm mb}$.
But, as usual, what really matters is the Signal/Background (S/B) ratio. In
this respect our only normalization process is the $\eta_{c}$ decay. Assuming
the standard $\bar{c}c\rightarrow gg$ annihilation, into the
$G_{\mu\nu}\tilde{G}_{\mu\nu}$ pseudoscalar operator, one realizes that the
gluons can either enter the instanton/sphaleron tunneling, or produce quark
pairs perturbatively. Most likely, at fixed $M_{gg}$ it does not matter
whether $gg$ came from $\bar{c}c$ annihilation or from Pomeron-Pomeron
collisions.
In total, the three prominent 3-meson channels ascribed to the former have a
certain branching ratio, from which one can conclude that
$\bigg{(}{Signal\over Background}\bigg{)}(M\approx 3\,{\rm GeV})>0.15$
One can think of many other modes, e.g. with glueballs, scalar or
pseudoscalar, but at a mass of only $M_{\eta_{c}}\approx 3\,{\rm GeV}$ those
would be suppressed by phase space (for a more accurate estimate, one needs to
figure out which other modes come from the signal; unfortunately there are
many modes, with branching ratios below a percent, so it would be hard if not
impossible to get an accurate number). Perhaps this increases the signal by a
factor of 2 or so. Alternatively, PP collisions can be approximated as
$gg+gg$ collisions, or operators with $two$ stress tensors. This perhaps
increases the background, say also by a factor of 2, so that these effects
cancel out in the S/B ratio.
Even more importantly, if one looks in the “instanton-induced” channels, there
would hardly be any background. This follows from comparing the $\eta_{c}$
decay yields to $KK\pi,\pi\pi\eta$ with those to pure pion final states.
Now, we address the sphaleron mass distribution in PP collisions. We have
already discussed in section 5 the instanton size distribution
${dn\over d\rho}\sim\bigg{(}{1\over\rho^{5}}\bigg{)}(\rho\Lambda)^{b}{\rm
exp}\big{(}-2\pi\sigma\rho^{2}\big{)}$ (43)
where $b=(11/3)N_{c}-(2/3)N_{f}$ is the first coefficient of the beta
function, and $\sigma\approx(0.42\,{\rm GeV})^{2}$ is the string tension. When
combined
with the (pure gauge) expression of the sphaleron mass
$M_{sph}=3\pi^{2}/g^{2}\rho$, it yields the distribution over this mass. In
Fig. 9 we compare it with the data points from the UA8 experiment [3] (left
plot), and with the background (right plot) expected to scale as
$d\sigma/dM\sim 1/M^{3}$ and normalized to the high-mass UA8 data.
Figure 9: Left plot: Semiclassical distribution over the cluster mass
$M\,({\rm GeV})$, compared to the data points from the UA8 experiment; Right
plot is the logarithmic representation of the same curve (solid), now compared
with the dashed line representing the perturbative background.
As one can see, the vacuum distribution of instanton sizes and the
experimental cluster mass distribution are in qualitative agreement in shape,
and in the predicted position of the peak. This supports our suggestion that
these clusters may in fact be QCD sphalerons.
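The peak of (43) can also be located analytically: setting the $\rho$-derivative of $\rho^{b-5}e^{-2\pi\sigma\rho^{2}}$ to zero gives $\rho_{*}=\sqrt{(b-5)/4\pi\sigma}$. A quick numerical check, with the values given above (variable names are ours):

```python
import math

# Maximum of (43): d/drho [rho^(b-5) exp(-2 pi sigma rho^2)] = 0
# gives rho* = sqrt((b-5)/(4 pi sigma)).
b = 9.0                  # (11/3)Nc - (2/3)Nf for Nc = Nf = 3
sigma = 0.42**2          # string tension in GeV^2, as in the text
hbar_c = 0.19733         # GeV*fm conversion factor

rho_star = math.sqrt((b - 5.0) / (4.0*math.pi*sigma))   # in GeV^-1
print(round(rho_star*hbar_c, 2))  # -> 0.27 fm, near the rho ~ 1/3 fm used earlier
```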
Let us make a few other points:
(i) the instanton-sphaleron production seems to be dominant at $M\sim$ few
${\rm GeV}$;
(ii) the cross section at $M\sim 20\,{\rm GeV}$ decreases by about three
orders of magnitude. With the efficiency of the forward trigger of UA8, about
1300 events were detected at the $Sp\bar{p}S$. The luminosity of the LHC is
more than enough to cover these extra three orders of magnitude;
(iii) at $M\sim 20\,{\rm GeV}$ the signal-to-background ratio is about $1:20$.
The background – mostly two-jet events – is very anisotropic, so the
identification of an isotropic signal may still be possible, even at
$M\sim 20\,{\rm GeV}$;
(iv) we recall that the clusters with $M\sim 20\,{\rm GeV}$ are the ones in
which 10-quark operators (with pairs of $\bar{b}b$ quarks) can be produced.
## 12 Sphaleron-gluon-quark vertex
To make explicit the sphaleron-quark vertex, we note that the instanton
density in (24) follows from the averaging of the gluon field in the instanton
vacuum
$n_{I+\bar{I}}=n_{I}+n_{\bar{I}}=\frac{\alpha_{s}}{8\pi}\langle{G^{2}}\rangle\qquad
n_{I-\bar{I}}=n_{I}-n_{\bar{I}}=\frac{\alpha_{s}}{8\pi}\langle{G\tilde{G}}\rangle\rightarrow
0$ (44)
where the second averaging is zero for $\theta=0$. The normalizations in (44)
are fixed by the QCD scale and axial anomalies in the instanton vacuum,
respectively. Note that in (44) $G^{2}$ and $G\tilde{G}$ count the number of
instantons and anti-instantons.
The single instanton-six-quark vertex and anti-instanton-six-quark vertex
follow by omitting the vacuum averaging in (44), and recalling that the left
(L) vertex is induced by an instanton, and the right (R) vertex by an
anti-instanton. More specifically, from (6.2) and (44) we obtain
$\displaystyle{\cal
L}_{Gqqq}=\bigg{[}\frac{\alpha_{s}}{16\pi}\bigg{(}G^{2}+G\tilde{G}\bigg{)}\bigg{]}\bigg{(}\frac{4\pi^{2}\rho^{3}}{M\rho}\bigg{)}^{3}\bigg{[}\frac{{\cal
V}^{L}_{qqq}}{\kappa}\bigg{]}+\bigg{[}\frac{\alpha_{s}}{16\pi}\bigg{(}G^{2}-G\tilde{G}\bigg{)}\bigg{]}\bigg{(}\frac{4\pi^{2}\rho^{3}}{M\rho}\bigg{)}^{3}\bigg{[}\frac{{\cal
V}^{R}_{qqq}}{\kappa}\bigg{]}$ (45)
The sphaleron produced in the diffractive process is half a tunneling process,
with half the topological charge, and not self-dual. At the turning point, the
sphaleron drags six quark zero modes out of the QCD vacuum; at $t=0$ the
vertex is mediated by $G_{S}^{2}=2B^{2}$, since $G\tilde{G}_{S}=4E\cdot B=0$:
$\displaystyle{\cal L}_{Sqqq}(t=0)=$
$\displaystyle\bigg{(}\bigg{[}\frac{\alpha_{s}}{16\pi}\,G_{S}^{2}\bigg{]}\bigg{(}\frac{4\pi^{2}\rho^{3}}{M\rho}\bigg{)}^{3}\bigg{[}\frac{{\cal
V}^{L}_{qqq}+{\cal V}^{R}_{qqq}}{\kappa}\bigg{]}\bigg{)}_{t=0}$
$\displaystyle+$
$\displaystyle\bigg{(}\bigg{[}\frac{\alpha_{s}}{16\pi}\,\dot{K}_{0S}\bigg{]}\bigg{(}\frac{4\pi^{2}\rho^{3}}{M\rho}\bigg{)}^{3}\bigg{[}\frac{{\cal
V}^{L}_{qqq}-{\cal V}^{R}_{qqq}}{\kappa}\bigg{]}\bigg{)}_{t=0}$ (46)
and by the rate of the Chern-Simons charge density $\dot{K}_{0S}$ following
from $\partial^{\mu}K_{\mu}=G\tilde{G}$.
## 13 Witten amplitude for diffractive production $pp\rightarrow ppX$
QCD is difficult to track in the infrared since its fundamental quark and
gluon constituents repackage in confined hadrons. The ensuing hadronic
dynamics is strongly coupled. Holographic QCD is a proposal guided by the
AdS/CFT or gauge/gravity duality discovered in string theory. The holographic
duality or principle states that boundary operators in the gauge theory can be
mapped to a higher dimensional string theory in bulk in a curved anti-deSitter
space. The original correspondence holds for type IIB superstring theory in
$AdS_{5}\times S_{5}$, but is commonly assumed to hold for a string theory in
a general background. The string theory is in general difficult to solve, but
in the double limit of a large number of colors and strong gauge coupling
$\lambda=g^{2}N_{c}$, it reduces to a weakly coupled supergravity in the
classical limit, with a weak string coupling $g_{s}=g^{2}/4\pi$. The gauge
invariant operators at the boundary are mapped onto supergravity fields in
bulk with pertinent anomalous dimensions.
The n-point functions at the boundary of $AdS_{5}$ follow from variation of
the on-shell supergravity action in bulk with respect to the boundary values.
The results are tree-level Feynman graphs with fixed end-points on the
boundary also known as Witten diagrams. The Witten diagram for the diffractive
process of interest in this work $pp\rightarrow ppX$ is illustrated in Fig.
10. The insertions on the $AdS_{5}$ boundary refer to 4 nucleon operator
insertions ${\cal O}_{pi=1,...,4}$ and a glueball insertion ${\cal
O}_{X=G^{2},G\tilde{G}}$. ${\cal O}_{p}$ sources a Dirac fermion in bulk with
anomalous dimension $M=\tau-3/2$ ($\tau$ refers to the twist, with typically
$\tau=3$ to reproduce the hard scattering rules), and the ${\cal O}_{X}$
sources a dilaton or axion in bulk with anomalous dimension $\Delta_{X}=4$. In
the Regge kinematics, the exchange in bulk is mediated by two closed strings
producing a dilaton $h$ or an axion $a$, which is equivalent to Pomeron-
Pomeron fusion into $h,a$ (${\bf P}{\bf P}\rightarrow h,a$).
The holographic construction provides a first-principle description of the
Pomeron (${\bf P}$) as the dual to a closed string exchange [50, 11], or a
Reggeized graviton in bulk [5, 6]. In this section we will use this framework
to extract the differential cross section for the reaction $pp\rightarrow ppX$
through ${\bf PP}$ fusion, as discussed in [7] for pseudoscalar emission, and
in [51] for Higgs production. After a brief description of the pertinent
kinematics for this process in the Reggeized limit, we will summarize the main
formulas for the production of $G^{2}$ and $G\tilde{G}$ glueballs, with most
of the details given in Appendices D-E-F.
### 13.1 Kinematics for the Reggeized limit
For the process $p(p_{1})+p(p_{2})\rightarrow p(p_{3})+p(p_{4})+X(p_{5})$ we
will set the incoming protons back-to-back
$p_{1}=(E,0,0,p)\qquad p_{2}=(E,0,0,-p)\qquad p_{i=3,4,5}=(E_{i},q_{\perp
i},p_{iz})$ (47)
with longitudinal fractions $p_{3z}=x_{1}p$ and $p_{4z}=-x_{2}p$ for the
outgoing protons. The transverse momenta in the azimuthal plane are 2-vectors
$q_{\perp 3}=(q_{3}\,{\rm cos}\,\theta_{3},q_{3}\,{\rm sin}\,\theta_{3})\qquad
q_{\perp 4}=(q_{4}\,{\rm cos}\,\theta_{4},q_{4}\,{\rm sin}\,\theta_{4})$ (48)
The Mandelstam kinematics suggests five invariants
$\displaystyle s=(p_{1}+p_{2})^{2}\qquad t_{1}=(p_{1}-p_{3})^{2}\qquad
t_{2}=(p_{2}-p_{4})^{2}\qquad s_{1}=(p_{3}+p_{5})^{2}\qquad
s_{2}=(p_{4}+p_{5})^{2}$ (49)
which are reduced to four by azimuthal symmetry. We will choose the four
invariants as $t_{1},t_{2}$, the relative azimuthal angle
$\theta_{34}=\theta_{4}-\theta_{3}$, and the relative momentum fraction
$x_{F}=x_{1}-x_{2}$. Following [7], the Reggeized limit is characterized by
large $\sqrt{s}$ and small scattering or azimuthal angles,
$s\gg s_{1},s_{2}\gg-t_{1},-t_{2},m_{N}^{2}\qquad{\rm
and\,\,fixed\,}\qquad\mu=\frac{s_{1}s_{2}}{s}$ (50)
with the simplified kinematics
$s_{1,2}\approx\sqrt{\mu s}\qquad{t_{1,2}}\approx-q^{2}_{\perp
3,4}\qquad\mu\approx p_{5}^{2}+t_{1}+t_{2}+2\sqrt{t_{1}t_{2}}\,{\rm
cos}\,\theta_{34}$ (51)
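A numerical sanity check of the kinematics (47)-(51) at a hypothetical sample point (the beam energy, longitudinal fractions and transverse kicks below are illustrative choices of ours, not values from the text):

```python
import math

# Sample point in the Reggeized regime (all values in GeV).
mN = 0.938
E  = 3500.0
p  = math.sqrt(E*E - mN*mN)

x1 = x2 = 0.995                 # longitudinal fractions of outgoing protons
q3, th3 = 0.3, 0.0              # transverse kicks and azimuths
q4, th4 = 0.4, 2.0

def dot(a, b):                  # Minkowski product, signature (+,-,-,-)
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

add = lambda a, b: tuple(ai + bi for ai, bi in zip(a, b))
sub = lambda a, b: tuple(ai - bi for ai, bi in zip(a, b))

p1 = (E, 0.0, 0.0,  p)
p2 = (E, 0.0, 0.0, -p)
p3 = (math.sqrt(mN*mN + q3*q3 + (x1*p)**2),
      q3*math.cos(th3), q3*math.sin(th3),  x1*p)
p4 = (math.sqrt(mN*mN + q4*q4 + (x2*p)**2),
      q4*math.cos(th4), q4*math.sin(th4), -x2*p)
p5 = sub(sub(add(p1, p2), p3), p4)      # momentum conservation

s  = dot(add(p1, p2), add(p1, p2))
t1 = dot(sub(p1, p3), sub(p1, p3))
t2 = dot(sub(p2, p4), sub(p2, p4))
s1 = dot(add(p3, p5), add(p3, p5))
s2 = dot(add(p4, p5), add(p4, p5))
mu = s1*s2/s

print(round(t1, 3), round(-q3*q3, 3))   # t_1 ~ -q_{perp3}^2, cf. (51)
print(round(t2, 3), round(-q4*q4, 3))   # t_2 ~ -q_{perp4}^2
print(round(s1/math.sqrt(mu*s), 3))     # s_1 ~ sqrt(mu s) -> ratio ~ 1
```

At this point $\mu$ also agrees with $p_5^2$ to better than a percent, the small $t_{1,2}$-dependent corrections in (51) being numerically tiny here.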
In this limit and using (51), the differential cross section for
$pp\rightarrow pp0\mp$ is dominated by $x_{F}\approx 0$ [7], and reads
$\frac{d\sigma_{\mp}}{d\theta_{34}dt_{1}dt_{2}}\approx\frac{1}{16\pi^{4}s^{2}}\,{\rm
ln}\bigg{(}\frac{s}{\mu}\bigg{)}\,\frac{1}{4}\sum_{\rm spin}\bigg{|}{\cal
A}_{pp\rightarrow pp0\mp}\bigg{|}^{2}$ (52)
for the production amplitudes (G.4) and (G.4) respectively. We now make
explicit the production amplitude for scalar and pseudoscalar production.
Figure 10: Witten diagram for diffractive $pp\rightarrow ppX$ with
$X=(a,f)\equiv(G\tilde{G},G^{2})$ for the sphaleron at the boundary
### 13.2 Differential cross section: $G\tilde{G}$
The main elements entering the Witten amplitude in Fig. 10 for $pp\rightarrow
pp0-$ through ${\bf PP}$ fusion are detailed in Appendices D-E-F, and
specialized to the soft-wall model in a slab of $AdS_{5}$. Other models in
$AdS_{5}$ yield similar results. The ensuing squared spin-averaged amplitude
entering (52) is
$\displaystyle\frac{1}{4}\sum_{\rm spins}\bigg{|}{\cal A}_{pp\rightarrow
pp0-}\bigg{|}^{2}=\frac{16}{4}\,A^{2}(t_{1})\,A^{2}(t_{2})\,$
$\displaystyle\times\bigg{[}\bigg{(}\frac{\lambda}{\pi^{2}}+1\bigg{)}^{2}\bigg{(}\frac{\sqrt{\lambda}}{2\pi}\bigg{(}\gamma+\frac{\pi}{2}\bigg{)}\bigg{)}^{4}\frac{e^{-\sqrt{\lambda}(\gamma+\pi/2)^{2}(1/{\rm
ln}(s_{1}/\tilde{\kappa}_{N}^{2})+1/{\rm
ln}(s_{2}/\tilde{\kappa}_{N}^{2}))}}{({\rm
ln}(s_{1}/\tilde{\kappa}_{N}^{2}){\rm
ln}(s_{2}/\tilde{\kappa}_{N}^{2}))^{3}}\,\bigg{(}\frac{s_{1}}{{\tilde{\kappa}_{N}}^{2}}\bigg{)}^{2(\alpha_{\mathbb{P}}(t_{1})-2)}\bigg{(}\frac{s_{2}}{{\tilde{\kappa}_{N}}^{2}}\bigg{)}^{2(\alpha_{\mathbb{P}}(t_{2})-2)}\bigg{]}$
$\displaystyle\times\bigg{[}\bigg{(}B_{1}^{-}(t_{1},t_{2})\eta_{\beta\bar{\beta}}+B_{2}^{-}(t_{1},t_{2})k_{2\beta}k_{1\bar{\beta}}\bigg{)}\,\epsilon_{\alpha\bar{\alpha}\gamma\delta}k_{1}^{\gamma}k_{2}^{\delta}\bigg{]}$
$\displaystyle\times\bigg{[}\bigg{(}B_{1}^{-}(t_{1},t_{2})\eta_{\underline{\beta}\bar{\underline{\beta}}}+B_{2}^{-}(t_{1},t_{2})k_{2\underline{\beta}}k_{1\bar{\underline{\beta}}}\bigg{)}\,\epsilon_{\underline{\alpha}\bar{\underline{\alpha}}\underline{\gamma}\underline{\delta}}k_{1}^{\underline{\gamma}}k_{2}^{\underline{\delta}}\bigg{]}$
$\displaystyle\times\bigg{[}p^{\alpha}p^{\beta}p^{\underline{\alpha}}p^{\underline{\beta}}+\frac{1}{16}\bigg{(}t_{1}\eta_{\alpha\underline{\alpha}}p_{\beta}p_{\underline{\beta}}+t_{1}\eta_{\alpha\underline{\beta}}p_{\beta}p_{\underline{\alpha}}+t_{1}\eta_{\beta\underline{\alpha}}p_{\alpha}p_{\underline{\beta}}+(t_{1}\eta_{\beta\underline{\beta}}-k_{1\beta}k_{1\underline{\beta}})p_{\alpha}p_{\underline{\alpha}}\bigg{)}\bigg{]}$
$\displaystyle\times\bigg{[}\underline{p}^{\bar{\alpha}}\underline{p}^{\bar{\beta}}\underline{p}^{\underline{\bar{\alpha}}}\underline{p}^{\bar{\underline{\beta}}}+\frac{1}{16}\bigg{(}t_{2}\eta_{\bar{\alpha}\bar{\underline{\alpha}}}p_{\bar{\beta}}p_{\bar{\underline{\beta}}}+t_{2}\eta_{\bar{\alpha}\bar{\underline{\beta}}}p_{\bar{\beta}}p_{\bar{\underline{\alpha}}}+t_{2}\eta_{\bar{\beta}\bar{\underline{\alpha}}}p_{\bar{\alpha}}p_{\bar{\underline{\beta}}}+(t_{2}\eta_{\bar{\beta}\bar{\underline{\beta}}}-k_{1\bar{\beta}}k_{1\bar{\underline{\beta}}})p_{\bar{\alpha}}p_{\bar{\underline{\alpha}}}\bigg{)}\bigg{]}$
(53)
with the vertex
$\displaystyle 2B_{1}^{-}(t_{1},t_{2})-\mu
B_{2}^{-}(t_{1},t_{2})=4\bigg{(}{\bf C}^{-}_{1}-{\bf
C}^{-}_{2}\sqrt{t_{1}t_{2}}\,{\rm cos}\,\theta_{34}\bigg{)}$ (54)
and ${\bf C}^{-}_{1,2}$ given in (F.2-135) for the soft-wall model. Here, the
Pomeron trajectory is the Reggeized graviton trajectory
$\displaystyle\alpha_{\mathbb{P}}(t)=\alpha_{\mathbb{P}}(0)+\frac{\alpha^{\prime}}{2}t\equiv
2-\frac{2}{\sqrt{\lambda}}+\frac{\alpha^{\prime}}{2}t$ (55)
after restoring the t-dependence, with $\alpha^{\prime}=l_{s}^{2}$ (the
squared string length). For simplicity we have set $\lambda\rightarrow\infty$
in the A-form factors in (13.2), i.e. $A(j_{0}\rightarrow 2,k)\equiv A(t)$,
although this can be relaxed. A similar observation was made originally in [1]
(instantons) and more recently in [7] (holography) for the double diffractive
production of $\eta,\eta^{\prime}$ .
We recall that for the nucleon as a Dirac fermion, the energy momentum tensor
is characterized by three invariant form factors
$\left<p_{2}|T^{\mu\nu}(0)|p_{1}\right>=\overline{u}(p_{2})\left(A(k)\gamma^{(\mu}p^{\nu)}+B(k)\frac{ip^{(\mu}\sigma^{\nu)\alpha}k_{\alpha}}{2m_{N}}+C(k)\frac{k^{\mu}k^{\nu}-\eta^{\mu\nu}k^{2}}{m_{N}}\right)u(p_{1})\,,$
(56)
with $p=(p_{1}+p_{2})/2$ and $k=p_{2}-p_{1}$, and the normalization
$\left<p|T^{\mu}_{\mu}|p\right>=2A(0)m_{N}^{2}$. In Appendix D, $A(t)$ is
given explicitly for the soft wall model, which is found to be well
parametrized by the dipole form [52]
$A(k)=\frac{A(0)}{\bigg{(}1+\frac{k^{2}}{m_{A}^{2}}\bigg{)}^{2}}=\frac{A(0)}{\bigg{(}1-\frac{t}{m_{A}^{2}}\bigg{)}^{2}}\equiv
A(t)$ (57)
with $m_{A}=1.124$ GeV. It compares well with the reported lattice value
$m_{A,\,\rm lattice}=1.13$ GeV by the MIT group [53], for $A(0)=0.58$. The
gravitational form factor $A(k)$ is saturated by the $2^{++}$ glueball
trajectory without any quark mixing, essentially a quenched result. The
holographic C-term $C(k)=-4A(k)$ is also in good agreement with the lattice
results. The holographic B-term is found to vanish which is consistent with
the lattice findings. Note that (57) corresponds to a squared gravitational
radius of the nucleon as a bulk Dirac fermion
$\displaystyle\left<r^{2}_{G}\right>=6\bigg{(}\frac{d\,{\rm
log}\,A(t)}{dt}\bigg{)}_{t=0}=(0.61\,{\rm fm})^{2}$ (58)
which is smaller than the nucleon squared charge radius
$\left<r^{2}_{C}\right>=(0.831\,{\rm fm})^{2}$ [54].
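A one-line check of (58): for the dipole form (57), $d\,{\rm log}\,A(t)/dt|_{t=0}=2/m_{A}^{2}$, so $\left<r^{2}_{G}\right>=12/m_{A}^{2}$ (variable names are ours):

```python
import math

# Gravitational radius from the dipole fit (57): <r_G^2> = 12/m_A^2.
m_A = 1.124              # GeV
hbar_c = 0.19733         # GeV*fm conversion factor

r_G = math.sqrt(12.0) / m_A * hbar_c
print(round(r_G, 2))     # -> 0.61 fm, as quoted in (58)
```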
In the Reggeized limit, the dominant contributions in the last two brackets
stem from the first two terms, with $p^{2},\underline{p}^{2}\sim s$ as the
leading gravitational coupling to the energy-momentum tensor, with the result
$\displaystyle\frac{1}{4}\sum_{\rm spins}\bigg{|}{\cal A}_{pp\rightarrow
pp0-}\bigg{|}^{2}\approx\frac{1}{4}\,A^{2}(t_{1})\,A^{2}(t_{2})\,$
$\displaystyle\times\bigg{[}\bigg{(}\frac{\lambda}{\pi^{2}}+1\bigg{)}^{2}\bigg{(}\frac{\sqrt{\lambda}}{2\pi}\bigg{(}\gamma+\frac{\pi}{2}\bigg{)}\bigg{)}^{4}\frac{e^{-\sqrt{\lambda}(\gamma+\pi/2)^{2}(1/{\rm
ln}(s_{1}/\tilde{\kappa}_{N}^{2})+1/{\rm
ln}(s_{2}/\tilde{\kappa}_{N}^{2}))}}{({\rm
ln}(s_{1}/\tilde{\kappa}_{N}^{2}){\rm
ln}(s_{2}/\tilde{\kappa}_{N}^{2}))^{3}}\,\bigg{(}\frac{s_{1}}{{\tilde{\kappa}_{N}}^{2}}\bigg{)}^{2(\alpha_{\mathbb{P}}(t_{1})-2)}\bigg{(}\frac{s_{2}}{{\tilde{\kappa}_{N}}^{2}}\bigg{)}^{2(\alpha_{\mathbb{P}}(t_{2})-2)}\bigg{]}$
$\displaystyle\times s^{4}t_{1}t_{2}\bigg{(}2B_{1}^{-}(t_{1},t_{2})-\mu
B_{2}^{-}(t_{1},t_{2})\bigg{)}^{2}\,{\rm sin}^{2}\,\theta_{34}$ (59)
The corresponding differential cross section is
$\displaystyle\frac{d\sigma_{-}}{d\theta_{34}dt_{1}dt_{2}}\approx\frac{1}{(8\pi)^{4}}\,A^{2}(t_{1})\,A^{2}(t_{2})\,t_{1}t_{2}\bigg{(}2B_{1}^{-}(t_{1},t_{2})-\mu
B_{2}^{-}(t_{1},t_{2})\bigg{)}^{2}\,{\rm sin}^{2}\,\theta_{34}$
$\displaystyle\times\bigg{[}\bigg{(}\frac{\lambda}{\pi^{2}}+1\bigg{)}^{2}\bigg{(}\frac{\sqrt{\lambda}}{2\pi}\bigg{(}\gamma+\frac{\pi}{2}\bigg{)}\bigg{)}^{4}\frac{e^{-\sqrt{\lambda}(\gamma+\pi/2)^{2}(1/{\rm
ln}(s_{1}/\tilde{\kappa}_{N}^{2})+1/{\rm
ln}(s_{2}/\tilde{\kappa}_{N}^{2}))}}{({\rm
ln}(s_{1}/\tilde{\kappa}_{N}^{2}){\rm ln}(s_{2}/\tilde{\kappa}_{N}^{2}))^{3}}$
$\displaystyle\times\bigg{[}\,\bigg{(}\frac{s_{1}}{{\tilde{\kappa}_{N}}^{2}}\bigg{)}^{2(\alpha_{\mathbb{P}}(t_{1})-2)}\bigg{(}\frac{s_{2}}{{\tilde{\kappa}_{N}}^{2}}\bigg{)}^{2(\alpha_{\mathbb{P}}(t_{2})-2)}\bigg{]}\bigg{[}s^{2}{\rm
ln}\bigg{(}\frac{s}{\mu}\bigg{)}\bigg{]}$ (60)
For $s_{1,2}\sim\sqrt{\mu s}$ and fixed $\mu$ in the Reggeized limit, (60) asymptotes to
${\rm ln}\,s\,\bigg{[}\bigg{(}\frac{e^{-\\#\sqrt{\lambda}/{\rm ln}\,s}}{({\rm
ln}\,s)^{\frac{3}{2}}}\bigg{)}^{2}\,s^{1+\frac{1}{2}(\alpha_{\mathbb{P}}(t_{1})+\alpha_{\mathbb{P}}(t_{2})-4)}\bigg{]}^{2}$
(61)
with the first logarithm accounting for the $x_{F}\sim 0$ phase space
enhancement [7], and the remainder for the 2-Pomeron fusion exchange with its
holographic diffusive signature in $5-2=3$ transverse directions. The rise in
$s$ can be tamed through the Eikonalization of the exchange.
Finally, we note the transverse plane dependence on ${\rm sin}^{2}\theta_{34}$
of the differential cross section (also through $\mu$ in (51)) as inherited
from the abnormal parity nature of the Chern-Simons vertex (129) at the origin
of the pseudo-scalar emission in bulk. The diffractive emission of the pseudo-
scalar glueball is strongly anisotropic in the transverse plane.
### 13.3 Differential cross section: $G^{2}$
A similar reasoning applies for the spin-averaged squared amplitude for
$pp\rightarrow pp0+$ which reads
$\displaystyle\frac{1}{4}\sum_{\rm spins}\bigg{|}{\cal A}_{pp\rightarrow
pp0+}\bigg{|}^{2}=\frac{16}{4}\,A^{2}(t_{1})\,A^{2}(t_{2})\,$
$\displaystyle\times\bigg{[}\bigg{(}\frac{\lambda}{\pi^{2}}+1\bigg{)}^{2}\bigg{(}\frac{\sqrt{\lambda}}{2\pi}\bigg{(}\gamma+\frac{\pi}{2}\bigg{)}\bigg{)}^{4}\frac{e^{-\sqrt{\lambda}(\gamma+\pi/2)^{2}(1/{\rm
ln}(s_{1}/\tilde{\kappa}_{N}^{2})+1/{\rm
ln}(s_{2}/\tilde{\kappa}_{N}^{2}))}}{({\rm
ln}(s_{1}/\tilde{\kappa}_{N}^{2}){\rm
ln}(s_{2}/\tilde{\kappa}_{N}^{2}))^{3}}\,\bigg{(}\frac{s_{1}}{{\tilde{\kappa}_{N}}^{2}}\bigg{)}^{2(\alpha_{\mathbb{P}}(t_{1})-2)}\bigg{(}\frac{s_{2}}{{\tilde{\kappa}_{N}}^{2}}\bigg{)}^{2(\alpha_{\mathbb{P}}(t_{2})-2)}\bigg{]}$
$\displaystyle\times\bigg{[}B^{+}(t_{1},t_{2},p_{5}^{2})\eta_{\alpha\bar{\alpha}}\eta_{\beta\bar{\beta}}\bigg{]}\times\bigg{[}B^{+}(t_{1},t_{2},p_{5}^{2})\eta_{\underline{\alpha}\underline{\bar{\alpha}}}\eta_{\underline{\beta}\bar{\underline{\beta}}}\bigg{]}$
$\displaystyle\times\bigg{[}p^{\alpha}p^{\beta}p^{\underline{\alpha}}p^{\underline{\beta}}+\frac{1}{16}\bigg{(}t_{1}\eta_{\alpha\underline{\alpha}}p_{\beta}p_{\underline{\beta}}+t_{1}\eta_{\alpha\underline{\beta}}p_{\beta}p_{\underline{\alpha}}+t_{1}\eta_{\beta\underline{\alpha}}p_{\alpha}p_{\underline{\beta}}+(t_{1}\eta_{\beta\underline{\beta}}-k_{1\beta}k_{1\underline{\beta}})p_{\alpha}p_{\underline{\alpha}}\bigg{)}\bigg{]}$
$\displaystyle\times\bigg{[}\underline{p}^{\bar{\alpha}}\underline{p}^{\bar{\beta}}\underline{p}^{\underline{\bar{\alpha}}}\underline{p}^{\bar{\underline{\beta}}}+\frac{1}{16}\bigg{(}t_{2}\eta_{\bar{\alpha}\bar{\underline{\alpha}}}p_{\bar{\beta}}p_{\bar{\underline{\beta}}}+t_{2}\eta_{\bar{\alpha}\bar{\underline{\beta}}}p_{\bar{\beta}}p_{\bar{\underline{\alpha}}}+t_{2}\eta_{\bar{\beta}\bar{\underline{\alpha}}}p_{\bar{\alpha}}p_{\bar{\underline{\beta}}}+(t_{2}\eta_{\bar{\beta}\bar{\underline{\beta}}}-k_{1\bar{\beta}}k_{1\bar{\underline{\beta}}})p_{\bar{\alpha}}p_{\bar{\underline{\alpha}}}\bigg{)}\bigg{]}$
(62)
Again, in the Reggeized limit, the dominant contributions in the last two
brackets stem from the first two terms with $p^{2},\underline{p}^{2}\sim s$,
with the result
$\displaystyle\frac{d\sigma_{+}}{d\theta_{34}dt_{1}dt_{2}}\approx\frac{1}{(8\pi)^{4}}\,A^{2}(t_{1})\,A^{2}(t_{2})\,B^{+2}(t_{1},t_{2},p_{5}^{2})$
$\displaystyle\times\bigg{[}\bigg{(}\frac{\lambda}{\pi^{2}}+1\bigg{)}^{2}\bigg{(}\frac{\sqrt{\lambda}}{2\pi}\bigg{(}\gamma+\frac{\pi}{2}\bigg{)}\bigg{)}^{4}\frac{e^{-\sqrt{\lambda}(\gamma+\pi/2)^{2}(1/{\rm
ln}(s_{1}/\tilde{\kappa}_{N}^{2})+1/{\rm
ln}(s_{2}/\tilde{\kappa}_{N}^{2}))}}{({\rm
ln}(s_{1}/\tilde{\kappa}_{N}^{2}){\rm
ln}(s_{2}/\tilde{\kappa}_{N}^{2}))^{3}}\bigg{]}$
$\displaystyle\times\bigg{[}\bigg{(}\frac{s_{1}}{{\tilde{\kappa}_{N}}^{2}}\bigg{)}^{2(\alpha_{\mathbb{P}}(t_{1})-2)}\bigg{(}\frac{s_{2}}{{\tilde{\kappa}_{N}}^{2}}\bigg{)}^{2(\alpha_{\mathbb{P}}(t_{2})-2)}\bigg{]}\bigg{[}s^{4}{\rm
ln}\bigg{(}\frac{s}{\mu}\bigg{)}\bigg{]}$ (63)
with the scalar vertex $B^{+}(t_{1},t_{2},p_{5}^{2})$
$B^{+}(t_{1},t_{2},p_{5}^{2})=\bigg{(}{\bf
C}_{1}^{+}(p_{5}^{2}+(t_{1}+t_{2}))-2{\bf C}_{2}^{+}-{\bf
C}_{3}^{+}p_{5}^{2}\bigg{)}$ (64)
and $p_{5}^{2}=m^{2}_{0^{+}}$ on mass-shell. ${\bf C}_{1,2,3}^{+}$ are given
in (F.1-128) for the soft-wall model. Modulo $\mu$, the emission of the scalar
glueball is isotropic in the transverse plane, in contrast to the pseudo-
scalar glueball emission.
## 14 Summary
The subject of this paper – instanton-sphaleron processes, both in the QCD and electroweak settings – was at the forefront of theoretical discussions in the late 1980s. The optimal tunneling path was suggested by Yung [20] and Verbaarschot [21]. The idea to use it to evaluate cross sections was put forward by Ringwald and Khoze [14], as well as by Verbaarschot and Shuryak [15]. A more detailed description of the topological landscape, and the realization that this path leads to the production of sphalerons, came in 2002 from Ref. [12]. The analytic form of a (pure gauge) sphaleron classical explosion was also obtained in that work, with a complementary derivation by us in [22] clarifying the production of fermions. So, from the theoretical perspective, one might think the issue was completely clarified many years ago.
Ringwald and Schrempp [43] put a great deal of effort into identifying instanton-induced reactions in deep-inelastic scattering at HERA. The large $Q^{2}$ scale involved pushes the instantons to be very small, causing the corresponding cross sections to be small as well. However, the real problem turned out to be the separation of the “instanton events” from the multi-jet background. The more recent version for the LHC, by Khoze et al. [44], relates the instanton size to the fraction of the collision energy, again with a cross section that decreases strongly with the invariant mass $M=\sqrt{s^{\prime}}$ of the produced cluster. This mechanism was recently revisited using single-diffractive medium cluster production and a more favorable choice of kinematics, but with stringent cuts [55].
While we do not disagree with these approaches, we still think one needs a different experimental strategy. It is better to start with smaller clusters, with invariant masses of several ${\rm GeV}$, which offer clearly identifiable mesonic and perhaps baryonic final states that can be related to existing theoretical predictions. The background issue would be helped if, instead of minimum-bias $pp$ collisions, one used double diffraction, or Pomeron-Pomeron collisions. Last but not least, using gluon-rich but colorless fusing Pomerons, rather than fusing gluonic partons, eliminates the need for color flux tubes and cleans up the stage.
Can it all be done in practice, and if so, why now? It appears that the LHC has reached a stage in its many searches at which all that remains is to increase statistics. The search for instanton/sphaleron clusters may not be “beyond the standard model”, but it is still worth trying.
The double diffractive direction we advocate is in an especially favorable position. The first Roman pot (RP) detectors are installed and running. ATLAS AFP is tuned to a huge mass $M\sim 1\,{\rm TeV}$, while CMS-TOTEM has carried out studies of exclusive reactions at $M\sim 1\,{\rm GeV}$. The intermediate region we suggest exploring is still basically “terra incognita”. It has remained untouched for decades, since the pioneering $UA8$ experiment.
Last but not least, the QCD sphalerons and their explosions are close
relatives of the electroweak sphalerons. We do not know how one may produce
those in the lab, but we do expect them to be very important at the
cosmological electroweak transition [56].
Acknowledgements This work was triggered by the CERN workshop “Topological Effects in the Standard Model: Instantons, Sphalerons and Beyond at LHC”, Dec-2020. We thank its organizers, especially Matthias Schott, for organizing it and for reviving this field in general. This work is supported by the Office of Science, U.S. Department of Energy under Contract No. DE-FG-88ER40388.
## Appendix A Some meson couplings used
For completeness, we give here some standard PCAC formulae and their numerical
values,
$K_{\pi}\equiv\langle
0|\bar{d}\gamma^{5}u|\pi^{+}\rangle=i{m_{\pi}^{2}f_{\pi}\over
m_{u}+m_{d}}\approx i(500\,MeV)^{2}$ (65) $K_{K}\equiv\langle
0|\bar{s}\gamma^{5}u|K^{+}\rangle=i{m_{K}^{2}f_{K}\over m_{u}+m_{s}}\approx
i(523\,MeV)^{2}$ (66)
The $\eta,\eta^{\prime}$ couplings depend on the mixing angle. We will use the
same values adopted in [34]
$\displaystyle K^{q}_{\eta}$ $\displaystyle\equiv$ $\displaystyle\langle
0|\bar{u}\gamma^{5}u|\eta\rangle=\langle
0|\bar{d}\gamma^{5}d|\eta\rangle=-i(358\,MeV)^{2}$ (67) $\displaystyle
K^{q}_{\eta^{\prime}}$ $\displaystyle\equiv$ $\displaystyle\langle
0|\bar{u}\gamma^{5}u|\eta^{\prime}\rangle=\langle
0|\bar{d}\gamma^{5}d|\eta^{\prime}\rangle=-i(320\,MeV)^{2}$ $\displaystyle
K^{s}_{\eta}$ $\displaystyle\equiv$ $\displaystyle\langle
0|\bar{s}\gamma^{5}s|\eta\rangle=i(435\,MeV)^{2}$ $\displaystyle
K^{s}_{\eta^{\prime}}$ $\displaystyle\equiv$ $\displaystyle\langle
0|\bar{s}\gamma^{5}s|\eta^{\prime}\rangle=i(481\,MeV)^{2}$
The other issue is to define the $\bar{s}s$ part of $\eta,\eta^{\prime}$
responsible for the observed $\eta\pi\pi$ and $\eta^{\prime}\pi\pi$ decay
rates. The $\eta-\eta^{\prime}$ mixing has been well researched. In terms of
singlet and octet states
$\displaystyle\eta_{8}=$
$\displaystyle{1\over\sqrt{6}}(\bar{u}u+\bar{d}d-2\bar{s}s)$ (68)
$\displaystyle\eta_{1}=$
$\displaystyle{1\over\sqrt{3}}(\bar{u}u+\bar{d}d+\bar{s}s)$
the standard definition of the mixing is
$\displaystyle\eta^{\prime}$ $\displaystyle=$ $\displaystyle{\rm
sin}(\theta_{p})\eta_{8}+{\rm cos}(\theta_{p})\eta_{1}$ (69)
$\displaystyle\eta$ $\displaystyle=$ $\displaystyle{\rm
cos}(\theta_{p})\eta_{8}-{\rm sin}(\theta_{p})\eta_{1}$
Using the mixing angle $\theta_{p}\approx-14.6^{\circ}$, one finds that the ratio
of the $\bar{s}s$ probabilities (squares of amplitudes) is
${ss_{\eta^{\prime}}\over{ss}_{\eta}}=\bigg{(}{0.764\over
0.644}\bigg{)}^{2}=1.40$ (70)
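As a quick cross-check (ours, not part of the text), the $\bar{s}s$ amplitudes and the ratio in (70) follow from the octet/singlet content (68) and the mixing (69); small rounding differences relative to the quoted 0.764, 0.644 and 1.40 are expected:

```python
import math

# ssbar content of eta and eta' from the decomposition (68)
# and the mixing (69), with theta_p = -14.6 degrees as in the text.
theta = math.radians(-14.6)
c8 = -2.0 / math.sqrt(6.0)   # ssbar amplitude in eta_8
c1 = 1.0 / math.sqrt(3.0)    # ssbar amplitude in eta_1

ss_etap = math.sin(theta) * c8 + math.cos(theta) * c1   # eta'
ss_eta = math.cos(theta) * c8 - math.sin(theta) * c1    # eta

ratio = (ss_etap / ss_eta) ** 2
print(abs(ss_etap), abs(ss_eta))  # amplitudes, close to 0.764 and 0.644
print(ratio)                      # probability ratio, close to 1.40 as in (70)
```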
## Appendix B Instanton-induced $uuddss$ interactions
In this Appendix we outline the general structure of the induced 't Hooft vertex for multi-quark states. In particular, we show the explicit steps that lead to the 6-quark vertex in both its colored and color-bleached forms. Its generalization to 8- and 10-quark states follows. An alternative but tedious formula involving averaging over the color moduli will also be discussed, which can be generalized to even higher multi-quark states.
### B.1 Details of the color-spin reduction
The momentum-space kernel for the LSZ reduced zero-mode propagator in the
instanton background reads
$\displaystyle
L\equiv\bigg{[}\frac{(p^{2}\varphi^{\prime}(p))_{0}^{2}}{m\rho}\bigg{]}\bigg{[}\frac{1}{8}\,U(\delta_{\mu\nu}+i{\overline{\eta}}_{\mu\nu}^{a}\tau^{a})U^{\dagger}\times\gamma_{\mu}\gamma_{\nu}\bigg{]}\frac{1-\gamma_{5}}{2}$
(71)
in the zero momentum limit, with $(p^{2}\varphi^{\prime}(p))_{0}\equiv\sqrt{C}$.
The 6-quark $uuddss$ vertex stemming from (71) follows from averaging over the
color matrices $U$ the product
$\displaystyle\int dU\,[L]^{3}$ $\displaystyle=$
$\displaystyle\bigg{[}\frac{1}{8}\,U(\delta_{\alpha\beta}+i{\overline{\eta}}_{\alpha\beta}^{a}\tau^{a})U^{\dagger}\times\gamma_{\alpha}\gamma_{\beta}\bigg{]}$
(72) $\displaystyle\times$
$\displaystyle\bigg{[}\frac{1}{8}\,U(\delta_{\mu\nu}+i{\overline{\eta}}_{\mu\nu}^{b}\tau^{b})U^{\dagger}\times\gamma_{\mu}\gamma_{\nu}\bigg{]}\times\bigg{[}\frac{1}{8}\,U(\delta_{\lambda\tau}+i{\overline{\eta}}_{\lambda\tau}^{c}\tau^{c})U^{\dagger}\times\gamma_{\lambda}\gamma_{\tau}\bigg{]}$
using the identity [37]
$\displaystyle\int dU\,[U^{a}_{i}U^{\dagger
j}_{b}]^{3}=\bigg{[}\frac{1}{N_{c}}\delta^{a_{1}}_{b_{1}}\delta_{i_{1}}^{j_{1}}\,{\bf
1}_{1}\bigg{]}\bigg{[}\frac{1}{N_{c}}\delta^{a_{2}}_{b_{2}}\delta_{i_{2}}^{j_{2}}\,{\bf
1}_{2}\bigg{]}\bigg{[}\frac{1}{N_{c}}\delta^{a_{3}}_{b_{3}}\delta_{i_{3}}^{j_{3}}\,{\bf
1}_{3}\bigg{]}$
$\displaystyle+\bigg{(}\bigg{[}\frac{1}{N_{c}}\delta^{a_{1}}_{b_{1}}\delta^{j_{1}}_{i_{1}}\,{\bf
1}_{1}\bigg{]}\bigg{[}\frac{1}{4(N_{c}^{2}-1)}[\lambda_{2}^{A}]^{a_{2}}_{b_{2}}[\lambda_{3}^{A}]^{a_{3}}_{b_{3}}[\lambda_{2}^{B}]^{j_{2}}_{i_{2}}[\lambda_{3}^{B}]^{j_{3}}_{i_{3}}\bigg{]}+2\,{\rm
perm.}\bigg{)}$
$\displaystyle+\bigg{(}\frac{N_{c}}{8(N_{c}^{2}-1)(N_{c}^{2}-4)}\bigg{[}d^{ABC}([\lambda_{1}^{A}]^{a_{1}}_{b_{1}}[\lambda_{2}^{B}]^{a_{2}}_{b_{2}})[\lambda_{3}^{C}]^{a_{3}}_{b_{3}}\bigg{]}\bigg{[}d^{IJK}([\lambda_{1}^{I}]^{j_{1}}_{i_{1}}[\lambda_{2}^{J}]^{j_{2}}_{i_{2}})[\lambda_{3}^{K}]^{j_{3}}_{i_{3}}\bigg{)}\bigg{]}$
$\displaystyle+\bigg{(}\frac{1}{8N_{c}(N_{c}^{2}-1)}\bigg{[}f^{ABC}([\lambda_{1}^{A}]^{a_{1}}_{b_{1}}[\lambda_{2}^{B}]^{a_{2}}_{b_{2}})[\lambda_{3}^{C}]^{a_{3}}_{b_{3}}\bigg{]}\bigg{[}f^{IJK}([\lambda_{1}^{I}]^{j_{1}}_{i_{1}}[\lambda_{2}^{J}]^{j_{2}}_{i_{2}})[\lambda_{3}^{K}]^{j_{3}}_{i_{3}}\bigg{]}\bigg{)}$
(73)
which follows from the projection onto the color singlet channel. The result
after some algebra is
$\displaystyle\bigg{[}\frac{C}{m\rho N_{c}}\bigg{]}^{3}$
$\displaystyle\bigg{(}$ $\displaystyle
UDS+\frac{N_{c}(N_{c}-2)}{4(N_{c}^{2}-1)}\bigg{[}U^{A}D^{A}S+U^{A}DS^{A}+UD^{A}S^{A}\bigg{]}$
(74)
$\displaystyle-\frac{N_{c}^{2}}{16(N_{c}^{2}-1)}\bigg{[}U_{\mu\nu}^{A}D_{\mu\nu}^{A}S+U_{\mu\nu}^{A}DS_{\mu\nu}^{A}+UD_{\mu\nu}^{A}S_{\mu\nu}^{A}\bigg{]}\bigg{)}$
$\displaystyle-\frac{N_{c}^{3}(N_{c}-2)}{96(N_{c}^{2}-1)(N_{c}^{2}-4)}d^{ABC}\bigg{[}U_{\mu\nu}^{A}D_{\mu\nu}^{B}S^{C}+U_{\mu\nu}^{A}D^{B}S_{\mu\nu}^{C}+U^{A}D_{\mu\nu}^{B}S_{\mu\nu}^{C}\bigg{]}$
$\displaystyle+\frac{N_{c}^{2}(N_{c}-2)}{8(N_{c}^{2}-1)(N_{c}+2)}d^{ABC}U^{A}D^{B}S^{C}-\frac{N_{c}^{2}}{32(N_{c}^{2}-1)}if^{ABC}U_{\mu\nu}^{A}D_{\nu\rho}^{B}S_{\rho\mu}^{C}\bigg{)}$
where we have defined
$\displaystyle U=\overline{u}_{R}u_{L}\qquad
U^{A}=\overline{u}_{R}\lambda^{A}u_{L}\qquad
U_{\mu\nu}^{A}=\overline{u}_{R}\lambda^{A}\gamma_{\mu}\gamma_{\nu}u_{L}$
$\displaystyle D=\overline{d}_{R}d_{L}\qquad
D^{A}=\overline{d}_{R}\lambda^{A}d_{L}\qquad
D_{\mu\nu}^{A}=\overline{d}_{R}\lambda^{A}\gamma_{\mu}\gamma_{\nu}d_{L}$
$\displaystyle S=\overline{s}_{R}s_{L}\qquad
S^{A}=\overline{s}_{R}\lambda^{A}s_{L}\qquad
S_{\mu\nu}^{A}=\overline{s}_{R}\lambda^{A}\gamma_{\mu}\gamma_{\nu}s_{L}$ (75)
Throughout, in the notation $\gamma_{\mu}\gamma_{\nu}$ the condition $\mu<\nu$ is subsumed. (B.1) was originally derived in [57] for $N_{c}=3$.
(74) can be considerably reduced by successive Fierzing to bleach the color and bring it to a determinantal form, typical of instanton-induced interactions that are manifestly $SU(N_{f})$ symmetric but $U(1)_{A}$ violating. Since the procedure is quite lengthy, we will show how the Fierzing works for the typical blocks in (74).
Consider the typical colored scalar block in (74)
$\displaystyle U^{A}D^{A}\equiv$
$\displaystyle\overline{u}_{R}\lambda^{A}u_{L}\overline{d}_{R}\lambda^{A}d_{L}$
$\displaystyle=$
$\displaystyle\overline{u}_{Ra}u_{Lb}\overline{d}_{Rc}d_{Ld}\,[\lambda^{A}]^{ab}[\lambda^{A}]^{cd}=2\overline{u}_{Ra}u_{Lb}\overline{d}_{Rb}d_{La}-\frac{2}{N_{c}}\overline{u}_{R}u_{L}\overline{d}_{R}d_{L}$
$\displaystyle=$
$\displaystyle-\frac{2}{N_{c}}\overline{u}_{R}u_{L}\overline{d}_{R}d_{L}-\overline{u}_{R}d_{L}\overline{d}_{R}u_{L}+\frac{1}{4}\overline{u}_{R}\gamma_{\mu}\gamma_{\nu}d_{L}\overline{d}_{R}\gamma_{\mu}\gamma_{\nu}u_{L}$
(76)
where we made use of the color identity in the second line
$\displaystyle[\lambda^{A}]^{ab}[\lambda^{A}]^{cd}=2\delta^{ad}\delta^{bc}-\frac{2}{N_{c}}\delta^{ab}\delta^{cd}$
(77)
and the Fierz re-arrangement in the third line
$\displaystyle\overline{u}_{Ra}u_{Lb}\overline{d}_{Rb}d_{La}=-\frac{1}{2}\overline{u}_{R}d_{L}\overline{d}_{R}u_{L}+\frac{1}{8}\overline{u}_{R}\gamma_{\mu}\gamma_{\nu}d_{L}\overline{d}_{R}\gamma_{\mu}\gamma_{\nu}u_{L}$
(78)
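The color identity (77) is the $SU(3)$ completeness relation for the Gell-Mann matrices; as a sanity check (ours, not part of the derivation), it can be verified numerically component by component:

```python
import numpy as np

# The eight Gell-Mann matrices lambda^A
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1; l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)

Nc = 3
d = np.eye(3)
# lhs[a,b,c,e] = sum_A (lambda^A)_{ab} (lambda^A)_{ce}, Eq. (77)
lhs = np.einsum('Aab,Ace->abce', l, l)
rhs = 2 * np.einsum('ae,bc->abce', d, d) - (2 / Nc) * np.einsum('ab,ce->abce', d, d)
print(np.max(np.abs(lhs - rhs)) < 1e-12)   # → True
```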
The same procedure carries over to the colored tensor block with the result
$\displaystyle U_{\mu\nu}^{A}D_{\mu\nu}^{A}\equiv$
$\displaystyle\overline{u}_{R}\lambda^{A}\gamma_{\mu}\gamma_{\nu}u_{L}\overline{d}_{R}\lambda^{A}\gamma_{\mu}\gamma_{\nu}d_{L}$
$\displaystyle=$
$\displaystyle-\frac{2}{N_{c}}\overline{u}_{R}\gamma_{\mu}\gamma_{\nu}u_{L}\overline{d}_{R}\gamma_{\mu}\gamma_{\nu}d_{L}+12\overline{u}_{R}d_{L}\overline{d}_{R}u_{L}+\overline{u}_{R}\gamma_{\mu}\gamma_{\nu}d_{L}\overline{d}_{R}\gamma_{\mu}\gamma_{\nu}u_{L}$
(79)
using the Fierz re-arrangement
$\displaystyle\overline{u}_{Ra}\gamma_{\mu}\gamma_{\nu}u_{Lb}\overline{d}_{Rb}\gamma_{\mu}\gamma_{\nu}d_{La}=6\overline{u}_{R}d_{L}\overline{d}_{R}u_{L}+\frac{1}{2}\overline{u}_{R}\gamma_{\mu}\gamma_{\nu}d_{L}\overline{d}_{R}\gamma_{\mu}\gamma_{\nu}u_{L}$
(80)
The bleaching of color from the contributions involving the structure constants $f^{ABC}$ and $d^{ABC}$ can be simplified by noting the respective identities
$\displaystyle
if^{ABC}[\lambda^{A}]^{ab}[\lambda^{B}]^{cd}[\lambda^{C}]^{ef}=2\bigg{(}\delta^{ad}\delta^{cf}\delta^{eb}-\delta^{af}\delta^{cb}\delta^{ed}\bigg{)}$
(81)
and
$\displaystyle d^{ABC}[\lambda^{A}]^{ab}[\lambda^{B}]^{cd}[\lambda^{C}]^{ef}=$
$\displaystyle
2\bigg{(}\delta^{ad}\delta^{cf}\delta^{eb}+\delta^{af}\delta^{cb}\delta^{ed}\bigg{)}-\frac{4}{N_{c}^{2}}\delta^{ab}\delta^{cd}\delta^{ef}$
$\displaystyle-$
$\displaystyle\frac{2}{N_{c}}\bigg{(}\delta^{ab}[\lambda^{A}]^{cd}[\lambda^{A}]^{ef}+\delta^{cd}[\lambda^{A}]^{ab}[\lambda^{A}]^{ef}+\delta^{ef}[\lambda^{A}]^{ab}[\lambda^{A}]^{cd}\bigg{)}$
(82)
(81)-(82) can be established by Fierzing in color and using the identities
$\displaystyle\frac{1}{8}d^{ABC}[\lambda^{A}]^{a}_{b}[\lambda^{B}]^{b}_{c}[\lambda^{C}]^{c}_{d}=\frac{N_{c}^{2}-4}{2N_{c}}C_{F}\,\delta^{a}_{d}$
$\displaystyle\frac{1}{8}f^{ABC}[\lambda^{A}]^{a}_{b}[\lambda^{B}]^{b}_{c}[\lambda^{C}]^{c}_{d}=\frac{i}{2}N_{c}C_{F}\,\delta^{a}_{d}$
(83)
with $C_{F}=(N_{c}^{2}-1)/2N_{c}$ the Casimir in the fundamental representation. (82) can be fully color reduced using (77). Inserting (76), (79), (81) and (82) in (74) yields (6.2) after lengthy algebraic re-arrangements. Note that in (6.2) we are still using the notation $Q_{\mu\nu}$, but with $\sigma_{\mu\nu}$ instead of $\gamma_{\mu}\gamma_{\nu}$, with $\mu<\nu$ subsumed.
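The trace identities (83) used in the color bleaching can likewise be checked numerically (a sketch of ours, not part of the text), building $f^{ABC}$ and $d^{ABC}$ from commutator and anticommutator traces of the Gell-Mann matrices:

```python
import numpy as np

# Gell-Mann matrices lambda^A
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1; l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)

# Structure constants from Tr([lA,lB]lC) = 4i f^{ABC},
# and Tr({lA,lB}lC) = 4 d^{ABC}
f = np.zeros((8, 8, 8)); dsym = np.zeros((8, 8, 8))
for A in range(8):
    for B in range(8):
        for C in range(8):
            f[A, B, C] = (np.trace((l[A] @ l[B] - l[B] @ l[A]) @ l[C]) / 4j).real
            dsym[A, B, C] = np.trace((l[A] @ l[B] + l[B] @ l[A]) @ l[C]).real / 4

triple = np.einsum('Aij,Bjk,Ckm->ABCim', l, l, l)   # lA lB lC
Sf = np.einsum('ABC,ABCim->im', f, triple) / 8
Sd = np.einsum('ABC,ABCim->im', dsym, triple) / 8

Nc = 3; CF = (Nc**2 - 1) / (2 * Nc)
print(np.allclose(Sf, 0.5j * Nc * CF * np.eye(3)))            # (i/2) Nc CF delta
print(np.allclose(Sd, (Nc**2 - 4) / (2 * Nc) * CF * np.eye(3)))  # (Nc^2-4)/(2Nc) CF delta
```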
### B.2 Alternative form
An alternative form for the 6-quark 't Hooft vertex can be reached by reconsidering the full LSZ reduced amplitude in Weyl notation
$\displaystyle\bigg{[}\frac{n_{I}}{2}\bigg{]}_{\rho}\,$
$\displaystyle\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\frac{d^{4}k_{2}}{(2\pi)^{4}}\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4}p_{2}}{(2\pi)^{4}}\frac{d^{4}q_{1}}{(2\pi)^{4}}\frac{d^{4}q_{2}}{(2\pi)^{4}}\bigg{[}(2\pi)^{4}\delta^{4}(k_{1}+p_{1}+q_{1}-k_{2}-p_{2}-q_{2})\bigg{]}$
(84)
$\displaystyle\times\bigg{[}\bigg{<}u_{R}^{\dagger}(k_{2})k_{2}\bigg{[}\sqrt{2}\varphi^{\prime}(k_{2})\hat{k}_{2}\epsilon
U\bigg{]}\frac{1}{m}\bigg{[}\sqrt{2}\varphi^{\prime}(k_{1})U^{\dagger}\epsilon\hat{k}_{1}\bigg{]}k_{1}u_{L}(k_{1})$
$\displaystyle\qquad\times
d_{R}^{\dagger}(p_{2})p_{2}\bigg{[}\sqrt{2}\varphi^{\prime}(p_{2})\hat{p}_{2}\epsilon
U\bigg{]}\frac{1}{m}\bigg{[}\sqrt{2}\varphi^{\prime}(p_{1})U^{\dagger}\epsilon\hat{p}_{1}\bigg{]}p_{1}d_{L}(p_{1})$
$\displaystyle\qquad\times
s_{R}^{\dagger}(q_{2})q_{2}\bigg{[}\sqrt{2}\varphi^{\prime}(q_{2})\hat{q}_{2}\epsilon
U\bigg{]}\frac{1}{m}\bigg{[}\sqrt{2}\varphi^{\prime}(q_{1})U^{\dagger}\epsilon\hat{q}_{1}\bigg{]}q_{1}s_{L}(q_{1})\bigg{>}_{U}\bigg{]}+{L\leftrightarrow
R}$
after averaging over the instanton Z-position in 4-Euclidean space, without
taking the zero momentum limit as in (71). The left zero mode in the instanton
background in momentum space is
$\displaystyle\psi^{\alpha}_{iI}(p)=\sqrt{2}\varphi^{\prime}(p)(\hat{p}\epsilon
U)^{\alpha}_{i}\qquad\qquad\varphi^{\prime}(p)=\pi\rho^{2}\bigg{(}I_{0}K_{0}-I_{1}K_{1}\bigg{)}^{\prime}(z=\rho
p/2)$ (85)
in terms of which (84) reads
$\displaystyle\bigg{[}\frac{n_{I}}{2}\bigg{]}_{\rho}\,$
$\displaystyle\int\frac{d^{4}k_{1}}{(2\pi)^{4}}\frac{d^{4}k_{2}}{(2\pi)^{4}}\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4}p_{2}}{(2\pi)^{4}}\frac{d^{4}q_{1}}{(2\pi)^{4}}\frac{d^{4}q_{2}}{(2\pi)^{4}}\bigg{[}(2\pi)^{4}\delta^{4}(k_{1}+p_{1}+q_{1}-k_{2}-p_{2}-q_{2})\bigg{]}$
$\displaystyle\times\bigg{[}\frac{1}{m^{3}}(2k_{1}k_{2}\varphi^{\prime}(k_{1})\varphi^{\prime}(k_{2}))(2p_{1}p_{2}\varphi^{\prime}(p_{1})\varphi^{\prime}(p_{2}))(2q_{1}q_{2}\varphi^{\prime}(q_{1})\varphi^{\prime}(q_{2}))\bigg{]}$
$\displaystyle\times\bigg{<}\bigg{[}u_{R}^{\dagger}(k_{2})\epsilon
U\bigg{]}\bigg{[}U^{\dagger}\epsilon
u_{L}(k_{1})\bigg{]}\,\bigg{[}d_{R}^{\dagger}(p_{2})\epsilon
U\bigg{]}\bigg{[}U^{\dagger}\epsilon
d_{L}(p_{1})\bigg{]}\,\bigg{[}s_{R}^{\dagger}(q_{2})\epsilon
U\bigg{]}\bigg{[}U^{\dagger}\epsilon
s_{L}(q_{1})\bigg{>}_{U}\bigg{]}+{L\leftrightarrow R}$ (86)
The color bracket is explicitly
$\displaystyle\bigg{<}U^{a_{1}}_{c_{1}}U^{\dagger
b_{1}}_{d_{1}}U^{a_{2}}_{c_{2}}U^{\dagger
b_{2}}_{d_{2}}U^{a_{3}}_{c_{3}}U^{\dagger
b_{3}}_{d_{3}}\bigg{>}_{U}\bigg{[}\epsilon^{i_{1}c_{1}}\epsilon_{j_{1}b_{1}}\,\epsilon^{i_{2}c_{2}}\epsilon_{j_{2}b_{2}}\,\epsilon^{i_{3}c_{3}}\epsilon_{j_{3}b_{3}}\bigg{]}$
$\displaystyle\times$
$\displaystyle\bigg{[}u^{\dagger}_{Ra_{1}i_{1}}(k_{2})u_{L}^{d_{1}j_{1}}(k_{1})\,d^{\dagger}_{Ra_{2}i_{2}}(p_{2})d_{L}^{d_{2}j_{2}}(p_{1})\,s^{\dagger}_{Ra_{3}i_{3}}(q_{2})s_{L}^{d_{3}j_{3}}(q_{1})\bigg{]}$
(87)
The color averaging over the unitary matrices can be written in terms of the
Weingarten coefficients
$\displaystyle\bigg{<}U^{a_{1}}_{c_{1}}U^{\dagger
b_{1}}_{d_{1}}U^{a_{2}}_{c_{2}}U^{\dagger
b_{2}}_{d_{2}}U^{a_{3}}_{c_{3}}U^{\dagger b_{3}}_{d_{3}}\bigg{>}_{U}=$
$\displaystyle\frac{(N_{c}^{2}-2)}{N_{c}(N_{c}^{2}-1)(N_{c}^{2}-4)}\sum_{n=1}^{3!}\delta^{a_{1}a_{2}a_{3}}_{(d_{1}d_{2}d_{3})_{n}}\delta^{c_{1}c_{2}c_{3}}_{(b_{1}b_{2}b_{3})_{n}}$
$\displaystyle-$
$\displaystyle\frac{1}{(N_{c}^{2}-1)(N_{c}^{2}-4)}\sum_{n=1}^{3!}\delta^{a_{1}a_{2}a_{3}}_{(d_{1}d_{2}d_{3})_{n}}\bigg{(}\delta^{c_{1}c_{2}c_{3}}_{(b_{2}b_{1}b_{3})_{n}}+\delta^{c_{1}c_{2}c_{3}}_{(b_{1}b_{3}b_{2})_{n}}+\delta^{c_{1}c_{2}c_{3}}_{(b_{3}b_{2}b_{1})_{n}}\bigg{)}$
$\displaystyle+$
$\displaystyle\frac{2}{N_{c}(N_{c}^{2}-1)(N_{c}^{2}-4)}\sum_{n=1}^{3!}\delta^{a_{1}a_{2}a_{3}}_{(d_{1}d_{2}d_{3})_{n}}\bigg{(}\delta^{c_{1}c_{2}c_{3}}_{(b_{3}b_{1}b_{2})_{n}}+\delta^{c_{1}c_{2}c_{3}}_{(b_{2}b_{3}b_{1})_{n}}\bigg{)}$
(88)
The short-hand notation
$\displaystyle\delta^{a_{1}a_{2}a_{3}}_{(d_{1}d_{2}d_{3})_{n}}\equiv\sum^{3!}_{n=1}\bigg{(}\delta^{a_{1}}_{d_{1}}\delta^{a_{2}}_{d_{2}}\delta^{a_{3}}_{d_{3}}+{\rm
perm.}\bigg{)}$ (89)
refers to the product of three Kronecker deltas, summed over the $3!$ permutations of the permutation group $S_{3}$. For completeness, we note the analogous relations for lower color averagings for two flavors
$\displaystyle\left<U^{a_{1}}_{c_{1}}U^{\dagger
b_{1}}_{d_{1}}U^{a_{2}}_{c_{2}}U^{\dagger b_{2}}_{d_{2}}\right>_{U}=$
$\displaystyle\frac{1}{N_{c}^{2}-1}\sum_{n=1}^{2!}\delta^{a_{1}a_{2}}_{(d_{1}d_{2})_{n}}\delta^{c_{1}c_{2}}_{(b_{1}b_{2})_{n}}-\frac{1}{N_{c}(N_{c}^{2}-1)}\sum_{n=1}^{2!}\delta^{a_{1}a_{2}}_{(d_{1}d_{2})_{n}}\delta^{c_{1}c_{2}}_{(b_{2}b_{1})_{n}}$
(90)
and the well known one for one flavor
$\displaystyle\left<U^{a}_{c}U^{\dagger
b}_{d}\right>_{U}=\frac{1}{N_{c}}\delta^{a}_{d}\delta_{b}^{c}$ (91)
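The one-flavor average (91) can be checked by Monte Carlo (a sketch of ours, not from the text), sampling Haar-random unitaries via the phase-corrected QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
Nc, nsamp = 3, 20000

def haar_unitary(n):
    """Haar-random U(n) matrix via QR of a complex Gaussian (Mezzadri trick)."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

# Accumulate <U_{ac} (U^dagger)_{db}> = <U_{ac} conj(U_{bd})>
acc = np.zeros((Nc, Nc, Nc, Nc), dtype=complex)
for _ in range(nsamp):
    u = haar_unitary(Nc)
    acc += np.einsum('ac,bd->acbd', u, u.conj())
acc /= nsamp

# Eq. (91): (1/Nc) delta^a_d delta_b^c, i.e. <U_{ac} conj(U_{bd})> = delta_ab delta_cd / Nc
expected = np.einsum('ab,cd->acbd', np.eye(Nc), np.eye(Nc)) / Nc
print(np.max(np.abs(acc - expected)) < 0.02)   # statistical tolerance
```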
## Appendix C Exploding quark modes in Sphaleron background
So far, we have extracted the sphaleron-six-quark vertex using the $O(4)$ exact zero modes in a self-dual instanton (anti-instanton) background. While at $t=0$ this is the case, for $t>0$ the zero modes become real and turn into $O(3)$ exploding quark modes, as we discussed in [22]. We now briefly review how these modes are constructed, and then substitute them in (12) to generate ${\cal L}(t>0)$, describing the final state explosive vertex in the diffractive $pp\rightarrow ppX$ process.
### C.1 O(4) and O(3) fermionic zero mode
A Dirac fermion in a general gauge field solves the equation
$\left(\partial_{\mu}-iA_{\mu}\right)\gamma_{\mu}\,{\psi}=0$ (92)
We will use the chiral basis with spin matrices
$\bar{\sigma}_{Ms}=(1,-i\vec{\sigma}_{s})$ and
$\displaystyle\gamma_{5}=\left(\begin{array}[]{cc}1&0\\\
0&-1\end{array}\right)\qquad\gamma_{\mu}=\left(\begin{array}[]{cc}0&\sigma_{Ms}\\\
\bar{\sigma}_{Ms}&0\end{array}\right)$ (97)
If we recall that the 't Hooft symbol satisfies the color identity $\bar{\sigma}_{Mc}\sigma_{Nc}=\sigma_{ac}\overline{\eta}_{aMN}$ with $\sigma_{Mc}=(1,-i\vec{\sigma}_{c})$, then the positive chirality mode $\psi_{+}$ solves (92)
$\left(\bar{\sigma}_{Ms}\partial_{M}+\frac{1}{2}\bar{\sigma}_{Ns}\bar{\sigma}_{Nc}\,\sigma_{Mc}\partial_{M}\,F\right)\,{\psi}_{+}=0$
(98)
with the spin and color matrices commuting. $F(\xi(y))$ for an $O(4)$ symmetric gauge field is given in (7). Note that while writing (98) we have added a U(1) part to the gauge field for notational simplicity, and will remove it in the final answer by inspection. A formal solution to (98) is $(\psi_{+})^{a}_{\mu}=\varphi\epsilon_{\mu}^{a}$, which is a singlet (hedgehog) in color-spin space
$\sigma_{Ms}\chi_{Q}=\bar{\sigma}_{Mc}\epsilon\qquad\bar{\sigma}_{Ms}\epsilon=\sigma_{Mc}\epsilon$
(99)
Inserting (99) in (98) yields
$\left(\bar{\sigma}_{Ms}\partial_{M}+\frac{1}{2}\bar{\sigma}_{Ns}\sigma_{Ns}\,\bar{\sigma}_{Ms}\partial_{M}\,F\right)\,\varphi\epsilon=0$
(100)
To remove the redundant U(1) contribution noted above we use
$\bar{\sigma}_{Ns}\sigma_{Ns}\,\epsilon=\left(1+(\vec{\sigma}_{s})^{2}\right)\epsilon\rightarrow(\vec{\sigma}_{s})^{2}\epsilon=3\epsilon$
(101)
after which $\varphi$ is seen to solve
$\varphi^{\prime}+\frac{3}{2}F^{\prime}\varphi=0$, hence
$({\psi}_{+})^{a}_{\mu}(y)={\bf
C}\,e^{-\frac{3}{2}F(\xi(y))}\epsilon^{a}_{\mu}$ (102)
with
$\xi(y)=\frac{1}{2}{\rm
ln}\left(\frac{(t-\rho)^{2}+r^{2}}{(t+\rho)^{2}+r^{2}}\right)$ (103)
The overall normalization ${\bf C}$ is fixed by
${\bf
C}=\left|\int_{T_{\frac{1}{2}}}d^{4}y\,e^{-3F(\xi(y))}\right|^{-\frac{1}{2}}$
(104)
with $T_{\frac{1}{2}}$ the sphaleron tunneling time. (104) is in agreement with the result in [22] (see Eq. 22 there, with the exponent $2\rightarrow\frac{3}{2}$ when (101) is enforced).
The chiral O(3) symmetric zero mode follows by applying the off-center inversion (10) in Section 4 onto the O(4) symmetric zero mode in (102). The corresponding transform is
${\tilde{\psi}}_{+}(x)=\frac{\sigma^{\dagger}_{\mu}\,(y+a)_{\mu}}{1/(y+a)^{2}}\,{\psi}_{+}(y)$
(105)
or more explicitly ($r=|\vec{x}|$)
$\displaystyle({\tilde{\psi}}_{+})^{a}_{\mu}(t,r)=\frac{8{\bf C}\,\rho^{6}}{((t+\rho)^{2}+r^{2})^{2}}\,e^{-\frac{3}{2}F(\xi(y))}\,\big{[}\left((t+\rho)+i\vec{\sigma}\cdot\vec{x}\right)\,\epsilon\big{]}^{a}_{\mu}$
(106)
This result is in agreement with the one derived in [22] prior to the
analytical continuation to Minkowski space.
## Appendix D Graviton-nucleon coupling
In bulk, the graviton couples to the Dirac fermion through its energy momentum
tensor [58, 52]
$-\frac{\sqrt{2\kappa^{2}}}{2}\int
d^{5}x\,\sqrt{g}\,h_{\mu\nu}T_{F}^{\mu\nu}=-\frac{\sqrt{2\kappa^{2}}}{2}\int
d^{5}x\,\sqrt{g}\,\bigg{(}\frac{e^{-\phi}}{2g_{5}^{2}}\frac{i}{2}\,z\,\overline{\Psi}\gamma^{\mu}\overset{\leftrightarrow}{\partial^{\nu}}\Psi-\eta^{\mu\nu}\mathcal{L}_{F}\bigg{)}$
(107)
with typically for the Dirac fermions in the soft-wall model
$\displaystyle\mathcal{L}_{F}=$
$\displaystyle\frac{e^{-\phi(z)}}{2g_{5}^{2}}\,\bigg{(}\frac{i}{2}\bar{\Psi}e^{N}_{A}\Gamma^{A}\big{(}\overrightarrow{D}_{N}-\overleftarrow{D}_{N}\big{)}\Psi-(M+V(z))\bar{\Psi}\Psi\bigg{)}\,,$
(108)
with $M=\tau-3$ the anomalous dimension. The potential
$V(z)=\tilde{\kappa}_{N}^{2}z^{2}$ is added to make the Dirac fermions massive
in bulk. Here $e^{N}_{A}=z\delta^{N}_{A}$ denotes the inverse vielbein, and
the covariant derivatives are defined as
$\displaystyle\overrightarrow{D}_{N}=$
$\displaystyle\overrightarrow{\partial}_{N}+\frac{1}{8}\omega_{NAB}[\Gamma^{A},\Gamma^{B}]$
$\displaystyle\overleftarrow{D}_{N}=$
$\displaystyle\overleftarrow{\partial}_{N}+\frac{1}{8}\omega_{NAB}[\Gamma^{A},\Gamma^{B}]$
(109)
The components of the spin connection are $\omega_{\mu z\nu}=-\omega_{\mu\nu
z}=\frac{1}{z}\eta_{\mu\nu}$, the Dirac gamma matrices satisfy anti-
commutation relation $\\{\Gamma^{A},\Gamma^{B}\\}=2\eta^{AB}$ with the
explicit choice $\Gamma^{A}=(\gamma^{\mu},-i\gamma^{5})$.
In the nucleon as a bulk Dirac fermion, the energy momentum tensor is
characterized by the three invariant form factors in (56). For the soft wall
model, we have explicitly [58, 52]
$A(k)=\frac{1}{2}\int
dz\sqrt{g}\,e^{-\phi}z\,\big{(}\psi_{R}^{2}(z)+\psi_{L}^{2}(z)\big{)}\,\mathcal{H}(k,z)=-\frac{C(k)}{(\alpha
z_{0}m_{N}/2)^{2}}$ (110)
with ${\cal H}(K,z)$ the non-normalizable bulk-to-boundary graviton
propagator [59, 60, 61]
$\displaystyle\mathcal{H}(K,z)=$ $\displaystyle(2\tilde{\kappa}_{N}^{2}z^{2})^{2}\,\Gamma(a_{K}+2)U\Big{(}a_{K}+2,3;2\tilde{\kappa}_{N}^{2}z^{2}\Big{)}=\Gamma(a_{K}+2)U\Big{(}a_{K},-1;2\tilde{\kappa}_{N}^{2}z^{2}\Big{)}$
$\displaystyle=$
$\displaystyle\frac{\Gamma(a_{K}+2)}{\Gamma(a_{K})}\int_{0}^{1}dx\,x^{a_{K}-1}(1-x){\rm
exp}\Big{(}-\frac{x}{1-x}(2\tilde{\kappa}_{N}^{2}z^{2})\Big{)}$ (111)
and $a_{K}={K^{2}}/{8\tilde{\kappa}_{N}^{2}}\geq 0$. We have used the Kummer transformation $U(m,n;y)=y^{1-n}U(1+m-n,2-n;y)$. (D) satisfies the normalization condition ${\cal H}(0,z)={\cal H}(K,0)=1$. The bulk Dirac fermions $\Psi(x,z)=\psi_{L,R}(z)e^{-ip\cdot x}u_{L,R}(p)$ are
$\displaystyle\psi_{R}(z)=\frac{\tilde{n}_{R}}{\tilde{\kappa}_{N}^{\tau-2}}z^{\frac{5}{2}}\xi^{\frac{\tau-2}{2}}L_{0}^{(\tau-2)}(\xi)\qquad\psi_{L}(z)=\frac{\tilde{n}_{L}}{\tilde{\kappa}_{N}^{\tau-1}}z^{\frac{5}{2}}\xi^{\frac{\tau-1}{2}}L_{0}^{(\tau-1)}(\xi)\,$
(112)
Here $\xi=\tilde{\kappa}_{N}^{2}z^{2}$, $\tau=7/2-1/2=3$ is the twist
parameter, and $L_{n}^{(\alpha)}(\xi)$ are the generalized Laguerre
polynomials, with
$\tilde{n}_{R}=\tilde{n}_{L}\,\tilde{\kappa}_{N}^{-1}\sqrt{\tau-1}\qquad\tilde{n}_{L}=\tilde{\kappa}_{N}^{\tau}\sqrt{{2}/{\Gamma(\tau)}}$
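As a cross-check (ours, not part of the text), the integral representation of ${\cal H}$ in (111) can be verified against the Tricomi function, along with the boundary normalization ${\cal H}(K,0)=1$, using SciPy's `hyperu`:

```python
import numpy as np
from scipy.special import hyperu, gamma
from scipy.integrate import quad

def H_hyperu(a, y):
    # H = Gamma(a+2) U(a, -1; y), with y = 2*kappa^2*z^2, Eq. (111)
    return gamma(a + 2) * hyperu(a, -1.0, y)

def H_integral(a, y):
    # Gamma(a+2)/Gamma(a) * int_0^1 dx x^{a-1} (1-x) exp(-x y/(1-x))
    val, _ = quad(lambda x: x**(a - 1) * (1 - x) * np.exp(-x * y / (1 - x)),
                  0, 1, limit=200)
    return gamma(a + 2) / gamma(a) * val

a, y = 0.7, 1.3   # illustrative values of a_K and 2*kappa^2*z^2 (our choice)
print(abs(H_hyperu(a, y) - H_integral(a, y)) < 1e-6)   # the two forms agree
print(abs(H_integral(a, 0.0) - 1.0) < 1e-7)            # H(K, 0) = 1
```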
More specifically [58, 52] ($k^{2}=-K^{2}=t\leq 0$)
$\displaystyle A(k)$ $\displaystyle=$ $\displaystyle
A(0)\,(a_{k}+1)\bigg{(}-\left(1+a_{k}+2a_{k}^{2}\right)+2\left({a_{k}}+2{a^{3}_{k}}\right)\Phi(-1,1,a_{k})\bigg{)}$
(113) $\displaystyle=$ $\displaystyle
A(0)\bigg{(}(1-2a_{k})(1+a_{k}^{2})+a_{k}(1+a_{k})(1+2a_{k}^{2})\bigg{(}\psi\bigg{(}\frac{1+a_{k}}{2}\bigg{)}-\psi\bigg{(}\frac{a_{k}}{2}\bigg{)}\bigg{)}\bigg{)}$
with $a_{k}={k^{2}}/{8\tilde{\kappa}_{N}^{2}}$, and $B(0)=0$. $\Phi(-1,1,a_{k})$ refers to the Lerch transcendent, and $\psi(x)$ refers to the digamma function or harmonic number $H_{x}=\psi(x)+\gamma$. The
gravitational form factor $C(k)$ is proportional to $A(k)$ modulo a negative
overall constant $-(\alpha z_{0}m_{N}/2)^{2}<0$ which is left undetermined
since $\alpha$ is arbitrary in the tensor decomposition of the graviton [52].
Note that ${\cal H}(k,0)={\cal H}(0,z)$ is arbitrary (1-point function), so
that $A(0)$ is not fixed in holography.
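As a quick numerical cross-check (not part of the original derivation), the Lerch representation of $A(k)$ in (113) can be evaluated directly. Below is a short Python sketch; the function names are illustrative, and $\Phi(-1,1,a)$ is summed in pairs for stable convergence:

```python
import math

def lerch_phi_neg1(a, terms=200000):
    # Lerch transcendent Phi(-1, 1, a) = sum_{n>=0} (-1)^n / (n + a),
    # summed in pairs 1/((2m+a)(2m+1+a)) so the series converges like 1/m^2
    return sum(1.0 / ((2 * m + a) * (2 * m + 1 + a)) for m in range(terms))

def A_ratio(a_k):
    # A(k)/A(0) from the LerchPhi representation in (113),
    # with a_k = k^2 / (8 kappa_N^2)
    phi = lerch_phi_neg1(a_k)
    return (a_k + 1) * (-(1 + a_k + 2 * a_k**2)
                        + 2 * (a_k + 2 * a_k**3) * phi)

print(A_ratio(1e-6))  # close to 1: the A(0) normalization
print(A_ratio(1.0))   # smaller: the form factor falls with k^2
```

The ratio is normalized to $A(0)$ at $a_{k}=0$ and falls monotonically with $k^{2}$, as expected of a gravitational form factor.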
## Appendix E Spin-j-nucleon coupling
For diffractive scattering at large $\sqrt{s}$, the spin-2 graviton coupling
reggeizes to a spin-j coupling. The resummed spin-j couplings transmute into the
Pomeron coupling. The generic form of the spin-j coupling is [52]
$\displaystyle\mathcal{V}^{\alpha\beta(TT)}_{h\bar{\Psi}\Psi}(j,p_{1},p_{2},k_{z})=-\frac{\sqrt{2\kappa^{2}}}{2}\int
dz\,\sqrt{g}\,e^{-\phi}\,\mathcal{H}(j,K,z)\,\bigg{[}z^{1+2(j-2)}\frac{1}{2}\bar{\Psi}(p_{2},z)\gamma^{[\alpha}p^{\beta]}\,\Psi(p_{1},z)z^{-(j-2)}\bigg{]}$ (114)
where ${\cal H}(j,K,z)$ is the spin-j bulk to boundary space-like propagator.
For the pure spin-2 graviton, $h_{\mu}^{\mu}=0$ is traceless, so the coupling
to ${\cal L}_{F}$ in (107) drops out. The z-prefactors in the inner bracket
in (E) reflect the spin-j generalization of the energy-momentum tensor
(higher-twist vertices). For the soft-wall model, (E) takes the explicit form
$\mathcal{V}^{\alpha\beta(TT)}_{h\bar{\Psi}\Psi}(j,p_{1},p_{2},k_{z})=-\sqrt{2\kappa^{2}}\times
g_{5}^{2}A(j,K)\times\frac{1}{2}\bar{u}(p_{2})\gamma^{[\alpha}p^{\beta]}u(p_{1})$
(115)
with
$\displaystyle A(j,K)=$ $\displaystyle\frac{1}{2g_{5}^{2}}\int
dz\,\sqrt{g}\,e^{-\phi}\,\mathcal{H}(j,K,z)\bigg{[}z^{1+2(j-2)}\,\big{(}\psi_{R}^{2}(z)+\psi_{L}^{2}(z)\big{)}\,z^{-(j-2)}\bigg{]}$
$\displaystyle=$
$\displaystyle\frac{1}{2g_{5}^{2}}\frac{2^{2-\Delta(j)}\tilde{\kappa}_{N}^{j-2-\Delta(j)}}{\Gamma(\tilde{a}(j))}\int_{0}^{1}dx\,{x^{\tilde{a}(j)-1}}{(1-x)^{-\tilde{b}(j)}}\big{(}I_{z}^{R}(x)+I_{z}^{L}(x)\big{)}\,,$
(116)
where the fermionic integrals are defined as
($\xi=\tilde{\kappa}_{N}^{2}z^{2}$)
$I_{z}^{R/L}(x)=\int
dz\,\sqrt{g}\,e^{-\phi}\,z^{1+2(j-2)}\,\psi_{R/L}^{2}(z)\,\xi^{\frac{-(j-2)}{2}}\xi^{2-\frac{\Delta(j)}{2}}{\rm
exp}\Big{(}-\frac{2x\xi}{1-x}\Big{)}\,,$ (117)
and j-dependent parameters are
$\displaystyle\tilde{a}(j)=a_{k}+2-\frac{1}{2}\Delta(j)\qquad$
$\displaystyle{\rm and}$ $\displaystyle\qquad\tilde{b}(j)=3-\Delta(j)$
$\displaystyle\Delta(j)=2+\sqrt{2\sqrt{\lambda}(j-j_{0})}\qquad$
$\displaystyle{\rm and}$ $\displaystyle\qquad
j_{0}=2-\frac{2}{\sqrt{\lambda}}$ (118)
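The j-dependent parameters in (118) are simple enough to tabulate numerically. A minimal Python sketch (illustrative helper names), which also exhibits two λ-independent facts: $\Delta(j_{0})=2$ at the intercept and $\Delta(2)=4$ at the graviton point:

```python
import math

def j0(lam):
    # Pomeron intercept j0 = 2 - 2/sqrt(lambda), eq. (118)
    return 2 - 2 / math.sqrt(lam)

def Delta(j, lam):
    # Delta(j) = 2 + sqrt(2 sqrt(lambda) (j - j0)), eq. (118)
    return 2 + math.sqrt(2 * math.sqrt(lam) * (j - j0(lam)))

def a_tilde(a_k, j, lam):
    # tilde-a(j) = a_k + 2 - Delta(j)/2, eq. (118)
    return a_k + 2 - Delta(j, lam) / 2

def b_tilde(j, lam):
    # tilde-b(j) = 3 - Delta(j), eq. (118)
    return 3 - Delta(j, lam)

for lam in (1.0, 10.0, 100.0):
    print(lam, j0(lam), Delta(j0(lam), lam), Delta(2.0, lam))
```

In particular $\tilde b(2)=-1$, so the $(1-x)^{-\tilde b}$ weight in (E) reduces to a single power of $(1-x)$ at the graviton point.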
Using the bulk Dirac wavefunctions (112) for the soft-wall model, we can
further reduce the fermionic integrals (117) to have [52]
$\displaystyle
I_{z}^{R}(x)=\frac{1}{2}\times\tilde{\kappa}_{N}^{-2(j-2)}\times\bigg{(}\frac{\tilde{n}_{R}}{\tilde{\kappa}_{N}^{\tau-1}}\bigg{)}^{2}\times\Gamma\bigg{(}\frac{j-2}{2}+\tau-\frac{\Delta(j)}{2}+1\bigg{)}\times\bigg{(}\frac{1+x}{1-x}\bigg{)}^{-\frac{j-2}{2}-\tau+\frac{\Delta(j)}{2}-1}\,,$
$\displaystyle
I_{z}^{L}(x)=\frac{1}{2}\times\tilde{\kappa}_{N}^{-2(j-2)}\times\bigg{(}\frac{\tilde{n}_{L}}{\tilde{\kappa}_{N}^{\tau}}\bigg{)}^{2}\times\Gamma\bigg{(}\frac{j-2}{2}+\tau-\frac{\Delta(j)}{2}+2\bigg{)}\times\bigg{(}\frac{1+x}{1-x}\bigg{)}^{-\frac{j-2}{2}-\tau+\frac{\Delta(j)}{2}-2}\,.$
Inserting these into (116), the spin-j form factor of the Dirac fermion reduces to
$\displaystyle A(j,K)=$
$\displaystyle\frac{1}{4g_{5}^{2}}\frac{\tilde{\kappa}_{N}^{-(j-2)-\Delta(j)}}{\Gamma(\tilde{a}(j))}\int_{0}^{1}dx\,x^{\tilde{a}(j)-1}(1-x)^{-\tilde{b}(j)}$
$\displaystyle\times\bigg{(}\bigg{(}\frac{\tilde{n}_{R}}{\tilde{\kappa}_{N}^{\tau-1}}\bigg{)}^{2}\times\Gamma(c(j))\bigg{(}\frac{1+x}{1-x}\bigg{)}^{-c(j)}+\bigg{(}\frac{\tilde{n}_{L}}{\tilde{\kappa}_{N}^{\tau}}\bigg{)}^{2}\times\Gamma(c(j)+1)\bigg{(}\frac{1+x}{1-x}\bigg{)}^{-(c(j)+1)}\bigg{)}\,,$
with $\Delta(j)$, $\tilde{a}(j),\tilde{b}(j)$ given in (E) and
$\displaystyle c(j)=(\tau+1)+\frac{j-2}{2}-\frac{\Delta(j)}{2}$ (121)
$A(j,K)$ is the spin-j generalization of the spin-2 graviton form factor
(113). Its x-integral form can be interpreted as the holographic partonic
content of the Dirac fermion at strong coupling for spin-j, with $x$ playing
the role of the Bjorken momentum fraction.
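To make this concrete at the graviton point $j=2$ (where $\Delta(2)=4$, $\tilde a=a_{k}$, $\tilde b=-1$, and $c(2)=\tau-1=2$ for twist $\tau=3$), the normalized x-integral can be sampled numerically. A hedged Python sketch: the overall $1/(4g_{5}^{2})$ and $\tilde\kappa_{N}$ prefactors are dropped, the normalizations $(\tilde n_{R}/\tilde\kappa_{N}^{\tau-1})^{2}=2(\tau-1)/\Gamma(\tau)=2$ and $(\tilde n_{L}/\tilde\kappa_{N}^{\tau})^{2}=2/\Gamma(\tau)=1$ are used, and the substitution $x=t^{1/a}$ regulates the $x\to 0$ endpoint:

```python
import math

TAU = 3  # twist parameter

def f(x):
    # bracketed x-integrand of the spin-j form factor at j = 2:
    # c(2) = TAU - 1 = 2 and r = (1+x)/(1-x); the (1-x)^(-btilde)
    # prefactor is a single power of (1-x) since btilde(2) = -1
    r = (1 + x) / (1 - x)
    c = TAU - 1
    return (1 - x) * (2 * math.gamma(c) * r ** (-c)
                      + 1 * math.gamma(c + 1) * r ** (-(c + 1)))

def I(a, n=100000):
    # (1/Gamma(a)) * int_0^1 x^(a-1) f(x) dx, computed via x = t^(1/a)
    # and a midpoint rule, so the x -> 0 endpoint singularity is absorbed
    s = sum(f(((k + 0.5) / n) ** (1.0 / a)) for k in range(n))
    return s / (n * math.gamma(a + 1))
```

The shape $I(a)$, with $a=a_{k}\propto k^{2}$, is maximal at zero momentum transfer and falls monotonically as $k^{2}$ grows, consistent with a partonic density interpretation.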
## Appendix F Pomeron-Pomeron-glueball bulk coupling
The sphaleron at the boundary of the Witten vertex sources either $G^{2}$ or
$G\tilde{G}$. $G^{2}$ is dual to the dilaton in bulk, while $G\tilde{G}$ is
dual to the axion, both of which are described by massive glueballs of even
and odd parity respectively. $G^{2}$ yields a direct scalar Pomeron-Pomeron-
dilaton coupling times the pertinent bulk-to-boundary propagator. $G\tilde{G}$
yields an indirect pseudoscalar Pomeron-Pomeron-pseudoscalar coupling by
mixing with the bulk singlet and Chern-Simons term, times the pertinent
bulk-to-bulk and bulk-to-boundary propagators.
### F.1 Bulk dilaton-graviton-graviton coupling
The graviton in bulk is dual to a scalar glueball or $G^{2}$ on the
boundary. The effective action for the graviton is given by the standard
Einstein-Hilbert action
$\frac{1}{2\tilde{g}_{5}^{2}}\int d^{5}x\sqrt{g}e^{-2\phi}\,R$ (122)
with $R$ the scalar curvature and $2\tilde{g}_{5}^{2}=2\kappa^{2}=16\pi
G_{N}={8\pi^{2}}/{N_{c}^{2}}$. To extract the bulk gravitational coupling, we
linearize the metric around the flat metric
$\eta_{\mu\nu}\rightarrow\eta_{\mu\nu}+\sqrt{2\kappa^{2}}h_{\mu\nu}$. In de
Donder gauge with
$\partial_{\alpha}h^{\alpha}_{\mu}=\frac{1}{2}\partial_{\mu}f$ and
$f=h_{\mu}^{\mu}$, the expansion of (122) yields the quadratic contributions
$(\sqrt{2\kappa^{2}})^{2}\bigg{(}\frac{1}{8\tilde{g}_{5}^{2}}\,g^{MN}\eta^{\alpha\bar{\alpha}}\eta^{\beta\bar{\beta}}\partial_{M}h_{\alpha\bar{\alpha}}\,\partial_{N}h_{\beta\bar{\beta}}-\frac{1}{4\tilde{g}_{5}^{2}}\,g^{MN}\eta^{\alpha\bar{\alpha}}\eta^{\beta\bar{\beta}}\partial_{M}h_{\alpha\beta}\partial_{N}h_{\bar{\alpha}\bar{\beta}}\bigg{)}$
(123)
and the cubic contributions
$(\sqrt{2\kappa^{2}})^{3}\bigg{(}-\frac{1}{8\tilde{g}_{5}^{2}}\,g^{MN}\eta^{\alpha\bar{\alpha}}\eta^{\beta\bar{\beta}}f\,\partial_{M}h_{\alpha\bar{\beta}}\,\partial_{N}h_{\bar{\alpha}\beta}-\frac{1}{4\tilde{g}_{5}^{2}}\,g^{MN}\eta^{\alpha\bar{\alpha}}\eta^{\beta\bar{\beta}}\,\partial_{M}f\,h_{\alpha\bar{\beta}}\,\partial_{N}h_{\bar{\alpha}\beta}\bigg{)}$
(124)
where only the couplings through the trace part $f$ and the traceless
$h_{\alpha\beta}$ were retained. Note that $f(z=0,x)$ couples to
$T^{\mu}_{\mu}(x)$ of the gauge theory on the boundary, hence the
identification with the dual of $G^{2}$.
The induced interaction between the scalar and the graviton on the boundary
follows from the bulk decomposition $h_{\mu\nu}(x,z)=h(z)h_{\mu\nu}(x)$ and
$f(x,z)=s(z)f(x)$, in the form
$\displaystyle\int
d^{4}x\,\eta^{\alpha\bar{\alpha}}\eta^{\beta\bar{\beta}}\bigg{(}$
$\displaystyle{\bf
C}^{+}_{1}\,f(x)\,\eta^{\mu\nu}\partial_{\mu}h_{\alpha\bar{\beta}}(x)\partial_{\nu}h_{\bar{\alpha}\beta}(x)$
$\displaystyle+$ $\displaystyle{\bf
C}^{+}_{2}\,f(x)\,h_{\alpha\bar{\beta}}(x)h_{\bar{\alpha}\beta}(x)+{\bf
C}_{3}^{+}\,\eta^{\mu\nu}\partial_{\mu}f(x)h_{\alpha\bar{\beta}}(x)\partial_{\nu}h_{\bar{\alpha}\beta}(x)\bigg{)}$
(125)
with the induced bulk coefficients
$\displaystyle{\bf C}_{1}^{+}=$
$\displaystyle-\frac{(\sqrt{2\kappa^{2}})^{3}}{4\tilde{g}_{5}^{2}}\int
dz\sqrt{g}e^{-\phi}\bigg{(}\frac{z^{2}}{2}s(z)h^{2}(z)\bigg{)}$
$\displaystyle{\bf C}_{2}^{+}=$
$\displaystyle+\frac{(\sqrt{2\kappa^{2}})^{3}}{4\tilde{g}_{5}^{2}}\int
dz\sqrt{g}e^{-\phi}\bigg{(}\frac{z^{2}}{2}s(z)h^{\prime
2}(z)+\frac{z^{2}}{2}s^{\prime}(z)(h^{2})^{\prime}(z)\bigg{)}$
$\displaystyle{\bf C}_{3}^{+}=$
$\displaystyle-\frac{(\sqrt{2\kappa^{2}})^{3}}{4\tilde{g}_{5}^{2}}\int
dz\sqrt{g}e^{-\phi}\bigg{(}z^{2}s(z)h^{2}(z)\bigg{)}$ (126)
with $2\tilde{g}^{2}_{5}=2\kappa^{2}=8\pi^{2}/N_{c}^{2}$. For the Witten
diagram in Fig. 10, we note that for a massive $0^{+}$ production, the vertex
is close to the boundary. If we denote by $z_{5}=z\approx 0$ the position of
the vertex in bulk, each of the two bulk-to-bulk graviton propagators factors
out
$\displaystyle G(z\approx 0,z^{\prime})\approx\frac{z^{4}}{4}{\cal
H}(K,z^{\prime})\approx h(z){\cal H}(K,z^{\prime})$ (127)
with ${\cal H}(K,z^{\prime})$ the non-normalizable bulk-to-boundary graviton
propagator in (D). Finally, we identify $s(z)$ with the normalizable scalar
$0^{+}$ glueball state in bulk. So the fields in (F.1) read
$\displaystyle h(z)\approx\frac{z^{4}}{4}\qquad s(z)\approx{\cal
S}(M_{0^{+}},z)$ (128)
### F.2 Bulk axion-graviton-graviton coupling
The boundary pseudoscalar glueball or $G\tilde{G}$ is dual to the axion field
$a$ in bulk. We now note that the axion mixes with the flavor singlet U(1)
gauge field $A_{M}$ in bulk and the latter interacts with the bulk gravitons
through the chiral-gravitational anomaly or Chern-Simons term [7]
$\frac{N_{c}}{1536\pi^{2}}\int
d^{5}x\sqrt{g}e^{-\phi}\,\epsilon^{MNPQR}g^{S\bar{S}}g^{T\bar{T}}\,{\rm
Tr}A_{M}\,R_{NPST}R_{QR\bar{T}\bar{S}}$ (129)
with $R_{NPST}$ the Riemann tensor. The mixing between the axion $a$ and the
scalar part of the singlet U(1) gauge field $A_{M=z}$ that is consistent with
the QCD $U_{A}(1)$ axial anomaly can be captured in (129) through the minimal gauge
shift $A_{z}\rightarrow A_{z}+\partial_{z}a$,
$\frac{N_{f}N_{c}}{1536\pi^{2}}\int
d^{5}x\sqrt{g}e^{-\phi}\,\epsilon^{zNPQR}g^{S\bar{S}}g^{T\bar{T}}\,\partial_{z}a\,R_{NPST}R_{QR\bar{T}\bar{S}}$
(130)
The coupling of the axion to the graviton follows by expanding the metric
$g_{MN}=\eta_{MN}+\delta_{\mu M}\delta_{\nu N}\sqrt{2\kappa^{2}}h_{\mu\nu}$,
with the result
$\displaystyle\frac{N_{f}N_{c}\kappa^{2}}{764\pi^{2}}\int
d^{5}x\sqrt{g}e^{-\phi}\,\epsilon^{\mu\nu\rho\sigma}\,\partial_{z}a\,$
$\displaystyle\bigg{(}\frac{9}{2}\eta^{\alpha\beta}\eta^{\gamma\delta}\partial_{\mu}h_{\nu\alpha}\partial_{\sigma}h_{\rho\beta}-6\eta^{\alpha\beta}\partial_{z}\partial_{\mu}h_{\nu\alpha}\partial_{\sigma}h_{\rho\beta}$
$\displaystyle\,\,\,\,+2\eta^{\alpha\beta}\partial_{z}\partial_{\mu}h_{\nu\alpha}\partial_{z}\partial_{\sigma}h_{\rho\beta}+2\eta^{\alpha\beta}\eta^{\gamma\delta}\partial_{\gamma}\partial_{\mu}h_{\nu\alpha}\partial_{\sigma}(\partial_{\delta}h_{\rho\beta}-\partial_{\beta}h_{\rho\delta})\bigg{)}$ (131)
Note that $a(z=0,x)$ couples to the topological charge density or
$G\tilde{G}(x)$ of the gauge theory on the boundary.
Using the bulk decomposition for the axion and the graviton
$a_{z}(x,z)=\theta(z)a(x)\qquad h_{\mu\nu}(x,z)=h(z)h_{\mu\nu}(x)$ (132)
(F.2) yields the boundary interaction
$\displaystyle\int d^{4}x\,a(x)\,\bigg{(}{\bf
C}^{-}_{1}\,\eta^{\alpha\beta}\eta^{\gamma\delta}\partial_{\mu}h_{\nu\alpha}(x)\partial_{\sigma}h_{\rho\beta}(x)+{\bf
C}^{-}_{2}\,\eta^{\alpha\beta}\eta^{\gamma\delta}\partial_{\gamma}\partial_{\mu}h_{\nu\alpha}(x)\partial_{\sigma}(\partial_{\delta}h_{\rho\beta}(x)-\partial_{\beta}h_{\rho\delta}(x))\bigg{)}$ (133)
with the induced coefficients
$\displaystyle{\bf C}^{-}_{1}=$
$\displaystyle\frac{N_{f}N_{c}\kappa^{2}}{764\pi^{2}}\int
dz\sqrt{g}e^{-\phi}\,\theta^{\prime}(z)\,\bigg{(}\frac{9}{2}h^{2}(z)-6hh^{\prime}(z)+2h^{\prime
2}(z)\bigg{)}$ $\displaystyle{\bf C}^{-}_{2}=$
$\displaystyle\frac{N_{f}N_{c}\kappa^{2}}{764\pi^{2}}\int
dz\sqrt{g}e^{-\phi}\,\theta^{\prime}(z)\,h^{2}(z)$ (134)
Again, for the Witten diagram in Fig. 10, we note that for a massive $0^{-}$
production, the vertex is also close to the boundary. The same reasoning as in
(127-128) shows that the bulk fields in (F.2) read
$\displaystyle h(z)\approx\frac{z^{4}}{4}\qquad\theta(z)\approx{\cal
P}(M_{0^{-}},z)$ (135)
where the latter refers to the normalized pseudoscalar glueball state in bulk.
## Appendix G Diffractive amplitudes
The use of Witten diagrams for DIS scattering in holographic QCD was pioneered
by [5, 6]. In particular, the Pomeron was identified in bulk with a Reggeized
graviton which transmutes to a string exchange. Numerous studies along these
lines followed, which we cannot fully and fairly cover here. We only mention
the original suggestion made in [50], based on a holographic string exchange,
which triggered all these studies and which ultimately identifies the
holographic Pomeron exchange with a stringy instanton exchange [62]. In this
Appendix, we use the Witten diagram approach as it details how the Pomeron
interacts in bulk, and we follow the conventions detailed in [52].
### G.1 Amplitude for emitting a pseudo-scalar
For the Witten diagram with outgoing pseudo-scalar glueball
$X=G\tilde{G}=0^{-}$, the contribution is
$\displaystyle{\cal A}_{pp\rightarrow pp0^{-}}(j_{1},j_{2},s,t)$
$\displaystyle=[{\cal
V}^{\alpha\beta(TT)}_{h\Psi\Psi}(j_{1},p_{1},p_{3},k_{1})]\times[{\mathbb{B}}^{-}_{\alpha\beta\bar{\alpha}\bar{\beta}}(t_{1},t_{2},k_{1},k_{2})]\times[{\cal
V}^{\bar{\alpha}\bar{\beta}(TT)}_{h\Psi\Psi}(j_{2},p_{2},p_{4},k_{2})]$ (136)
with $k_{1}=p_{1}-p_{3}$ and $k_{2}=p_{2}-p_{4}$ and $t_{1,2}=-k_{1,2}^{2}$,
for spin $j_{1,2}$ exchanges in the t-channel. The $\mathbb{P}\mathbb{P}0^{-}$
vertex is
$\displaystyle{\mathbb{B}}^{-}_{\alpha\beta\bar{\alpha}\bar{\beta}}(t_{1},t_{2},k_{1},k_{2})=(B_{1}^{-}(t_{1},t_{2})\eta_{\beta\bar{\beta}}+B_{2}^{-}(t_{1},t_{2})k_{2\beta}k_{1\bar{\beta}})\epsilon_{\alpha\bar{\alpha}\gamma\delta}k_{1}^{\gamma}k_{2}^{\delta}$
(137)
(137) is consistent with charge conjugation and Lorentz symmetries, and
follows from the Chern-Simons vertex (F.2) after inserting the plane wave
decomposition
$a(x)=e^{-ip_{5}x}\qquad h_{\mu\nu}(x)=\epsilon_{\mu\nu}(k)e^{ik\cdot x}$
(138)
with the explicit vertex factors
$\displaystyle B_{1}^{-}(t_{1},t_{2})=2({\bf C}^{-}_{1}-{\bf
C}^{-}_{2}k_{1}\cdot k_{2})\qquad B_{2}^{-}(t_{1},t_{2})=2{\bf C}^{-}_{2}$
(139)
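The antisymmetric tensor structure $\epsilon_{\alpha\bar{\alpha}\gamma\delta}k_{1}^{\gamma}k_{2}^{\delta}$ in (137) can be verified componentwise. A small Python sketch (illustrative names; the sample momenta are arbitrary) checks that the structure is antisymmetric in the Pomeron indices and vanishes when contracted with either momentum, which is what makes (137) consistent with the quoted symmetries:

```python
from itertools import permutations

def levi_civita():
    # 4D Levi-Civita symbol eps[a][b][c][d] from permutation parity
    eps = [[[[0] * 4 for _ in range(4)] for _ in range(4)] for _ in range(4)]
    for p in permutations(range(4)):
        # parity via the number of inversions of the permutation
        inv = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
        eps[p[0]][p[1]][p[2]][p[3]] = -1 if inv % 2 else 1
    return eps

def eps_factor(k1, k2):
    # T[a][b] = eps_{a b g d} k1^g k2^d, the tensor structure of (137)
    eps = levi_civita()
    return [[sum(eps[a][b][g][d] * k1[g] * k2[d]
                 for g in range(4) for d in range(4))
             for b in range(4)] for a in range(4)]

k1 = [1.0, 0.3, -0.2, 0.5]   # arbitrary sample momenta
k2 = [0.9, -0.1, 0.4, 0.2]
T = eps_factor(k1, k2)
```

Both identities follow from the total antisymmetry of the Levi-Civita symbol alone, independently of the metric signature, since only component sums are involved.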
### G.2 Amplitude for emitting a scalar
For the Witten diagram with outgoing scalar glueball $X=G^{2}=0^{+}$, the
contribution is
$\displaystyle{\cal A}_{pp\rightarrow pp0^{+}}(j_{1},j_{2},s,t)=[{\cal
V}^{\alpha\beta(TT)}_{h\Psi\Psi}(j_{1},p_{1},p_{3},k_{1})]\times[{\mathbb{B}}^{+}_{\alpha\beta\bar{\alpha}\bar{\beta}}(t_{1},t_{2},k_{1},k_{2})]\times[{\cal
V}^{\bar{\alpha}\bar{\beta}(TT)}_{h\Psi\Psi}(j_{2},p_{2},p_{4},k_{2})]$ (140)
with the general scalar $\mathbb{P}\mathbb{P}0^{+}$ vertex for the process
$h(k_{1})+h(k_{2})\rightarrow f(p_{5})$
$\displaystyle{\mathbb{B}}^{+}_{\alpha\beta\bar{\alpha}\bar{\beta}}(t_{1},t_{2},k_{1},k_{2})=B^{+}(t_{1},t_{2},p_{5}^{2})\,\eta_{\alpha\bar{\alpha}}\eta_{\beta\bar{\beta}}$
(141)
as quoted in (64). Note that (141) is consistent with parity, charge
conjugation and Lorentz symmetries. More explicitly, for the fusion process
$h(k_{1})+h(k_{2})\rightarrow f(p_{5})$ with
$h_{\mu\nu}(x)=\epsilon_{\mu\nu}^{TT}(k)\,e^{+ik_{1,2}\cdot x}\qquad
f(x)=e^{-ip_{5}\cdot x}$ (142)
and the transverse and traceless polarizations
$k^{\mu}\epsilon_{\mu\nu}^{TT}(k_{1,2})=0$ and
$\eta^{\mu\nu}\epsilon_{\mu\nu}^{TT}(k)=0$, (141) follows from the cubic
graviton coupling (124).
### G.3 Reggeization
The Reggeized form of the amplitude (G.1) follows from the double Mellin
transforms
$\displaystyle{\cal A}_{pp\rightarrow pp0^{-}}(s_{1},s_{2},s,t)$
$\displaystyle=\int_{\mathbb{C}_{1}}\frac{dj_{1}}{2\pi
i}\left(\frac{s_{1}^{j_{1}-2}+(-s_{1})^{j_{1}-2}}{{\rm sin}\,\pi
j_{1}}\right)\int_{\mathbb{C}_{2}}\frac{dj_{2}}{2\pi
i}\left(\frac{s_{2}^{j_{2}-2}+(-s_{2})^{j_{2}-2}}{{\rm sin}\,\pi
j_{2}}\right){\cal A}_{pp\rightarrow pp0^{-}}(j_{1},j_{2},s,t)$ (143)
The contours ${\mathbb{C}_{1,2}}$ lie to the right of the branch points of
$A(j_{1,2},k)$ as defined in (E). Each of the Mellin transforms in (G.3)
factorizes generically as
$\displaystyle\int_{\mathbb{C}}\frac{dj}{2\pi
i}\left(\frac{s^{j-2}+(-s)^{j-2}}{{\rm sin}\,\pi j}\right)\,A(j,k)$ (144)
The Pomeron amplitude follows by closing the contour to the left. The
imaginary part follows from the discontinuity of the $\Gamma$-function in
$A(j,k)$ with the result
$\displaystyle-\tilde{s}^{j_{0}}\int_{-\infty}^{j_{0}}\frac{dj}{\pi}\left(\frac{1+e^{-i\pi j}}{{\rm
sin}\,\pi
j}\right)\tilde{s}^{j-j_{0}}\,\sin\left[\tilde{\xi}\sqrt{2\sqrt{\lambda}(j_{0}-j)}\right]\frac{\kappa_{N}^{2j}}{s^{2}}\Gamma(\Delta(j)-2)A(j,k)\bigg{|}_{j\rightarrow
j_{0},\,\Delta(j)\rightarrow 2}$ (145)
with $j_{0}=2-2/\sqrt{\lambda}$, and the t-dependence set to zero in the
exponent. It will be restored in the final result by inspection. We have set
$\tilde{s}\equiv{s}/{\tilde{\kappa}_{N}^{2}}$, and defined
$\tilde{\xi}\equiv\gamma+{\pi}/{2}$ with the Euler constant $\gamma\simeq 0.57722$. In
the high energy limit $\sqrt{\lambda}/\tilde{\tau}\rightarrow 0$ with
$\tilde{\tau}\equiv\log\tilde{s}$, the j-integration yields
$\displaystyle
e^{j_{0}\tilde{\tau}}\left[(\sqrt{\lambda}/\pi)+i\right](\sqrt{\lambda}/2\pi)^{1/2}\;\tilde{\xi}\;\frac{e^{-\sqrt{\lambda}\tilde{\xi}^{2}/2\tilde{\tau}}}{\tilde{\tau}^{3/2}}\left(1+{\cal
O}\bigg{(}\frac{\sqrt{\lambda}}{\tilde{\tau}}\bigg{)}\right)\times\frac{\kappa_{N}^{2j}}{s^{2}}\Gamma(\Delta(j)-2)A(j,k)\bigg{|}_{j\rightarrow
j_{0},\,\Delta(j)\rightarrow 2}$ (146)
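Dropping the overall constants and the form-factor piece $A(j_{0},k)$, the leading $\tilde\tau$-dependence of (146) can be tabulated. A minimal Python sketch (illustrative function name) showing that the factor grows with $\tilde\tau=\log\tilde s$, since the intercept $j_{0}=2-2/\sqrt{\lambda}$ exceeds 1:

```python
import math

def pomeron_factor(tau, lam):
    # leading tau-dependence of (146): exp(j0*tau) growth times the
    # diffusive Gaussian in tilde-xi; overall constants and the
    # form-factor piece A(j0, k) are dropped
    xi = 0.57722 + math.pi / 2           # tilde-xi = gamma_E + pi/2
    j0 = 2 - 2 / math.sqrt(lam)
    return (math.exp(j0 * tau)
            * math.sqrt(math.sqrt(lam) / (2 * math.pi)) * xi
            * math.exp(-math.sqrt(lam) * xi**2 / (2 * tau))
            / tau ** 1.5)
```

A larger 't Hooft coupling raises the intercept toward 2, so the growth with $\tilde\tau$ is steeper at strong coupling.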
### G.4 Final amplitudes
Combining the above results in (G.1) gives for the pseudo-scalar glueball
emission amplitude
$\displaystyle\left[\left[(\sqrt{\lambda}/\pi)+i\right](\sqrt{\lambda}/2\pi)^{1/2}\;\tilde{\xi}\;\frac{e^{-\sqrt{\lambda}\tilde{\xi}^{2}/2\tilde{\tau}_{1}}}{\tilde{\tau}_{1}^{3/2}}\frac{A(j_{0},k_{1})}{\tilde{s}_{1}^{2-j_{0}}}\left[\frac{1}{2}\overline{u}(p_{3})\gamma^{[\alpha}p^{\beta]}u(p_{1})\right]\right]$
$\displaystyle\times$
$\displaystyle\bigg{[}{\mathbb{B}}^{-}_{\alpha\beta\bar{\alpha}\bar{\beta}}(t_{1},t_{2},k_{1},k_{2})\bigg{]}$
$\displaystyle\times$
$\displaystyle\left[\left[(\sqrt{\lambda}/\pi)+i\right](\sqrt{\lambda}/2\pi)^{1/2}\;\tilde{\xi}\;\frac{e^{-\sqrt{\lambda}\tilde{\xi}^{2}/2\tilde{\tau}_{2}}}{\tilde{\tau}_{2}^{3/2}}\frac{A(j_{0},k_{2})}{\tilde{s}_{2}^{2-j_{0}}}\left[\frac{1}{2}\overline{u}(p_{4})\gamma^{[\bar{\alpha}}{\underline{p}}^{\bar{\beta}]}u(p_{2})\right]\right]$
(147)
with $p=(p_{1}+p_{3})/2$ and $\underline{p}=(p_{2}+p_{4})/2$. Similarly,
combining the above results in (G.2) yields the scalar glueball emission
amplitude
$\displaystyle\left[\left[(\sqrt{\lambda}/\pi)+i\right](\sqrt{\lambda}/2\pi)^{1/2}\;\tilde{\xi}\;\frac{e^{-\sqrt{\lambda}\tilde{\xi}^{2}/2\tilde{\tau}_{1}}}{\tilde{\tau}_{1}^{3/2}}\frac{A(j_{0},k_{1})}{\tilde{s}_{1}^{2-j_{0}}}\left[\frac{1}{2}\overline{u}(p_{3})\gamma^{[\alpha}p^{\beta]}u(p_{1})\right]\right]$
$\displaystyle\times$
$\displaystyle\bigg{[}{\mathbb{B}}^{+}_{\alpha\beta\bar{\alpha}\bar{\beta}}(t_{1},t_{2},k_{1},k_{2})\bigg{]}$
$\displaystyle\times$
$\displaystyle\left[\left[(\sqrt{\lambda}/\pi)+i\right](\sqrt{\lambda}/2\pi)^{1/2}\;\tilde{\xi}\;\frac{e^{-\sqrt{\lambda}\tilde{\xi}^{2}/2\tilde{\tau}_{2}}}{\tilde{\tau}_{2}^{3/2}}\frac{A(j_{0},k_{2})}{\tilde{s}_{2}^{2-j_{0}}}\left[\frac{1}{2}\overline{u}(p_{4})\gamma^{[\bar{\alpha}}{\underline{p}}^{\bar{\beta}]}u(p_{2})\right]\right]$
(148)
## References
* [1] E. Shuryak, I. Zahed, Semiclassical double pomeron production of glueballs and eta-prime, Phys. Rev. D 68 (2003) 034001. arXiv:hep-ph/0302231, doi:10.1103/PhysRevD.68.034001.
* [2] A. Kirk, et al., New effects observed in central production by the WA102 experiment at the CERN Omega spectrometer, in: 28th International Symposium on Multiparticle Dynamics (ISMD 98), 1998. arXiv:hep-ph/9810221.
* [3] A. Brandt, S. Erhan, A. Kuzucu, M. Medinnis, N. Ozdes, P. E. Schlein, M. T. Zeyrek, J. G. Zweizig, J. B. Cheze, J. Zsembery, A Study of inclusive double pomeron exchange in $p\bar{p}\to pX\bar{p}$ at $\sqrt{s}$ = 630-GeV, Eur. Phys. J. C 25 (2002) 361–377. arXiv:hep-ex/0205037, doi:10.1007/s10052-002-1031-x.
* [4] O. Suranyi, Recent CMS and CMS-TOTEM results on diffraction and exclusive production, in: 18th Conference on Elastic and Diffractive Scattering, 2020. arXiv:2002.06959.
* [5] J. Polchinski, M. J. Strassler, Deep inelastic scattering and gauge / string duality, JHEP 05 (2003) 012. arXiv:hep-th/0209211, doi:10.1088/1126-6708/2003/05/012.
* [6] R. C. Brower, J. Polchinski, M. J. Strassler, C.-I. Tan, The Pomeron and gauge/string duality, JHEP 12 (2007) 005. arXiv:hep-th/0603115, doi:10.1088/1126-6708/2007/12/005.
* [7] N. Anderson, S. K. Domokos, J. A. Harvey, N. Mann, Central production of $\eta$ and $\eta^{\prime}$ via double Pomeron exchange in the Sakai-Sugimoto model, Phys. Rev. D 90 (8) (2014) 086010. arXiv:1406.7010, doi:10.1103/PhysRevD.90.086010.
* [8] I. Iatrakis, A. Ramamurti, E. Shuryak, Pomeron Interactions from the Einstein-Hilbert Action, Phys. Rev. D 94 (4) (2016) 045005. arXiv:1602.05014, doi:10.1103/PhysRevD.94.045005.
* [9] C. Ewerz, P. Lebiedowicz, O. Nachtmann, A. Szczurek, Helicity in proton–proton elastic scattering and the spin structure of the pomeron, Phys. Lett. B 763 (2016) 382–387. arXiv:1606.08067, doi:10.1016/j.physletb.2016.10.064.
* [10] E. A. Kuraev, L. N. Lipatov, V. S. Fadin, Multi - Reggeon Processes in the Yang-Mills Theory, Sov. Phys. JETP 44 (1976) 443–450.
* [11] G. Basar, D. E. Kharzeev, H.-U. Yee, I. Zahed, Holographic Pomeron and the Schwinger mechanism, PoS QNP2012 (2012) 116. doi:10.22323/1.157.0116.
* [12] D. M. Ostrovsky, G. W. Carter, E. V. Shuryak, Forced tunneling and turning state explosion in pure Yang-Mills theory, Phys. Rev. D 66 (2002) 036004. arXiv:hep-ph/0204224, doi:10.1103/PhysRevD.66.036004.
* [13] F. R. Klinkhamer, N. S. Manton, A Saddle Point Solution in the Weinberg-Salam Theory, Phys. Rev. D 30 (1984) 2212. doi:10.1103/PhysRevD.30.2212.
* [14] V. V. Khoze, A. Ringwald, Valley trajectories in gauge theories.
* [15] E. V. Shuryak, J. J. M. Verbaarschot, On baryon number violation and nonperturbative weak processes at SSC energies, Phys. Rev. Lett. 68 (1992) 2576–2579. doi:10.1103/PhysRevLett.68.2576.
* [16] E. V. Shuryak, I. Zahed, Instanton induced effects in QCD high-energy scattering, Phys. Rev. D 62 (2000) 085014. arXiv:hep-ph/0005152, doi:10.1103/PhysRevD.62.085014.
* [17] M. A. Nowak, E. V. Shuryak, I. Zahed, Instanton induced inelastic collisions in QCD, Phys. Rev. D 64 (2001) 034008. arXiv:hep-ph/0012232, doi:10.1103/PhysRevD.64.034008.
* [18] I. I. Balitsky, A. V. Yung, Collective - Coordinate Method for Quasizero Modes, Phys. Lett. B 168 (1986) 113–119. doi:10.1016/0370-2693(86)91471-1.
* [19] E. V. Shuryak, Toward the Quantitative Theory of the ’Instanton Liquid’ 4. Tunneling in the Double Well Potential, Nucl. Phys. B 302 (1988) 621–644. doi:10.1016/0550-3213(88)90191-5.
* [20] A. V. Yung, Instanton Vacuum in Supersymmetric QCD, Nucl. Phys. B 297 (1988) 47. doi:10.1016/0550-3213(88)90199-X.
* [21] J. J. M. Verbaarschot, Streamlines and conformal invariance in Yang-Mills theories, Nucl. Phys. B 362 (1991) 33–53, [Erratum: Nucl.Phys.B 386, 236–236 (1992)]. doi:10.1016/0550-3213(91)90554-B.
* [22] E. Shuryak, I. Zahed, Prompt quark production by exploding sphalerons, Phys. Rev. D 67 (2003) 014006. arXiv:hep-ph/0206022, doi:10.1103/PhysRevD.67.014006.
* [23] M. Luscher, SO(4) Symmetric Solutions of Minkowskian Yang-Mills Field Equations, Phys. Lett. B 70 (1977) 321–324. doi:10.1016/0370-2693(77)90668-2.
* [24] B. M. Schechter, Yang-Mills Theory on the Hypertorus, Phys. Rev. D 16 (1977) 3015. doi:10.1103/PhysRevD.16.3015.
* [25] A. Hasenfratz, Spatial correlation of the topological charge in pure SU(3) gauge theory and in QCD, Phys. Lett. B 476 (2000) 188–192. arXiv:hep-lat/9912053, doi:10.1016/S0370-2693(00)00105-2.
* [26] E. V. Shuryak, Probing the boundary of the nonperturbative QCD by small size instantons, arXiv:hep-ph/9909458.
* [27] E. V. Shuryak, The Role of Instantons in Quantum Chromodynamics. 1. Physical Vacuum, Nucl. Phys. B 203 (1982) 93. doi:10.1016/0550-3213(82)90478-3.
* [28] A. Ramamurti, E. Shuryak, I. Zahed, Are there monopoles in the quark-gluon plasma?, Phys. Rev. D 97 (11) (2018) 114028. arXiv:1802.10509, doi:10.1103/PhysRevD.97.114028.
* [29] T. Schäfer, E. V. Shuryak, Instantons in QCD, Rev. Mod. Phys. 70 (1998) 323–426. arXiv:hep-ph/9610451, doi:10.1103/RevModPhys.70.323.
* [30] R. D. Pisarski, L. G. Yaffe, The density of instantons at finite temperature, Phys. Lett. B 97 (1980) 110–112. doi:10.1016/0370-2693(80)90559-6.
* [31] E. V. Shuryak, M. Velkovsky, The Instanton density at finite temperatures, Phys. Rev. D 50 (1994) 3323–3327. arXiv:hep-ph/9403381, doi:10.1103/PhysRevD.50.3323.
* [32] D. B. Leinweber, Visualizations of the QCD vacuum, in: Workshop on Light-Cone QCD and Nonperturbative Hadron Physics, 1999, pp. 138–143. arXiv:hep-lat/0004025.
* [33] J. D. Bjorken, Intersections 2000: What’s new in hadron physics, AIP Conf. Proc. 549 (1) (2000) 211–229. arXiv:hep-ph/0008048, doi:10.1063/1.1345244.
* [34] V. Zetocha, T. Schäfer, Instanton contribution to scalar charmonium and glueball decays, Phys. Rev. D 67 (2003) 114003. arXiv:hep-ph/0212125, doi:10.1103/PhysRevD.67.114003.
* [35] S. Chernyshev, M. A. Nowak, I. Zahed, Heavy hadrons and QCD instantons, Phys. Rev. D 53 (1996) 5176–5184. arXiv:hep-ph/9510326, doi:10.1103/PhysRevD.53.5176.
* [36] M. A. Nowak, M. Rho, I. Zahed, Chiral nuclear dynamics, 1996.
* [37] S. Chernyshev, M. A. Nowak, I. Zahed, Heavy hadrons and QCD instantons, Phys. Rev. D 53 (1996) 5176–5184. arXiv:hep-ph/9510326, doi:10.1103/PhysRevD.53.5176.
* [38] F. Sauter, Uber das Verhalten eines Elektrons im homogenen elektrischen Feld nach der relativistischen Theorie Diracs, Z. Phys. 69 (1931) 742–764. doi:10.1007/BF01339461.
* [39] D. E. Kharzeev, The Chiral Magnetic Effect and Anomaly-Induced Transport, Prog. Part. Nucl. Phys. 75 (2014) 133–151. arXiv:1312.3348, doi:10.1016/j.ppnp.2014.01.002.
* [40] P. Faccioli, Instanton contribution to the electromagnetic form-factors of the nucleon, Phys. Rev. C 69 (2004) 065211. arXiv:hep-ph/0312019, doi:10.1103/PhysRevC.69.065211.
* [41] P. V. Buividovich, M. N. Chernodub, E. V. Luschevskaya, M. I. Polikarpov, Numerical study of chiral symmetry breaking in non-Abelian gauge theory with background magnetic field, Phys. Lett. B 682 (2010) 484–489. arXiv:0812.1740, doi:10.1016/j.physletb.2009.11.017.
* [42] G. Basar, G. V. Dunne, D. E. Kharzeev, Electric dipole moment induced by a QCD instanton in an external magnetic field, Phys. Rev. D 85 (2012) 045026. arXiv:1112.0532, doi:10.1103/PhysRevD.85.045026.
* [43] A. Ringwald, F. Schrempp, Instanton induced cross-sections in deep inelastic scattering, Phys. Lett. B 438 (1998) 217–228. arXiv:hep-ph/9806528, doi:10.1016/S0370-2693(98)00953-8.
* [44] V. V. Khoze, F. Krauss, M. Schott, Large Effects from Small QCD Instantons: Making Soft Bombs at Hadron Colliders, JHEP 04 (2020) 201. arXiv:1911.09726, doi:10.1007/JHEP04(2020)201.
* [45] A. H. Mueller, On higher order semiclassical corrections to high-energy cross-sections in the one instanton sector, Nucl. Phys. B 364 (1991) 109–126. doi:10.1016/0550-3213(91)90580-Q.
* [46] V. Andreev, et al., Search for QCD instanton-induced processes at HERA in the high- $\boldsymbol{Q^{2}}$ domain, Eur. Phys. J. C 76 (7) (2016) 381. arXiv:1603.05567, doi:10.1140/epjc/s10052-016-4194-6.
* [47] A. M. Sirunyan, et al., Search for black holes and sphalerons in high-multiplicity final states in proton-proton collisions at $\sqrt{s}=13$ TeV, JHEP 11 (2018) 042. arXiv:1805.06013, doi:10.1007/JHEP11(2018)042.
* [48] F. L. Bezrukov, D. Levkov, C. Rebbi, V. A. Rubakov, P. Tinyakov, Suppression of baryon number violation in electroweak collisions: Numerical results, Phys. Lett. B 574 (2003) 75–81. arXiv:hep-ph/0305300, doi:10.1016/j.physletb.2003.09.015.
* [49] V. Khachatryan, et al., Exclusive and semi-exclusive pi+pi- production in proton-proton collisions at sqrt(s) = 7 TeV, arXiv:1706.08310.
* [50] M. Rho, S.-J. Sin, I. Zahed, Elastic parton-parton scattering from AdS / CFT, Phys. Lett. B 466 (1999) 199–205. arXiv:hep-th/9907126, doi:10.1016/S0370-2693(99)01118-1.
* [51] R. Brower, M. Djuric, C.-I. Tan, Holographic double diffraction of Higgs and the AdS/CFT graviton/Pomeron, Int. J. Mod. Phys. A 29 (28) (2014) 1446013. arXiv:1412.3060, doi:10.1142/S0217751X14460130.
* [52] K. A. Mamo, I. Zahed, Diffractive photoproduction of $J/\psi$ and $\Upsilon$ using holographic QCD: gravitational form factors and GPD of gluons in the proton, Phys. Rev. D 101 (8) (2020) 086003. arXiv:1910.04707, doi:10.1103/PhysRevD.101.086003.
* [53] P. E. Shanahan, W. Detmold, Gluon gravitational form factors of the nucleon and the pion from lattice QCD, Phys. Rev. D 99 (1) (2019) 014511. arXiv:1810.04626, doi:10.1103/PhysRevD.99.014511.
* [54] W. Xiong, et al., A small proton charge radius from an electron–proton scattering experiment, Nature 575 (7781) (2019) 147–150. doi:10.1038/s41586-019-1721-2.
* [55] V. A. Khoze, V. V. Khoze, D. L. Milne, M. G. Ryskin, Hunting for QCD Instantons at the LHC in Events with Large Rapidity Gaps, arXiv:2104.01861.
* [56] D. Kharzeev, E. Shuryak, I. Zahed, Sphalerons, baryogenesis, and helical magnetogenesis in the electroweak transition of the minimal standard model, Phys. Rev. D 102 (7) (2020) 073003. arXiv:1906.04080, doi:10.1103/PhysRevD.102.073003.
* [57] M. A. Nowak, J. J. M. Verbaarschot, I. Zahed, Flavor Mixing in the Instanton Vacuum, Nucl. Phys. B 324 (1989) 1–33. doi:10.1016/0550-3213(89)90178-8.
* [58] Z. Abidin, C. E. Carlson, Nucleon electromagnetic and gravitational form factors from holography, Phys. Rev. D 79 (2009) 115003. arXiv:0903.4818, doi:10.1103/PhysRevD.79.115003.
* [59] S. Hong, S. Yoon, M. J. Strassler, On the couplings of vector mesons in AdS / QCD, JHEP 04 (2006) 003. arXiv:hep-th/0409118, doi:10.1088/1126-6708/2006/04/003.
* [60] Z. Abidin, C. E. Carlson, Gravitational form factors of vector mesons in an AdS/QCD model, Phys. Rev. D 77 (2008) 095007. arXiv:0801.3839, doi:10.1103/PhysRevD.77.095007.
* [61] C. A. Ballon Bayona, H. Boschi-Filho, N. R. F. Braga, Deep inelastic scattering from gauge string duality in the soft wall model, JHEP 03 (2008) 064. arXiv:0711.0221, doi:10.1088/1126-6708/2008/03/064.
* [62] G. Basar, D. E. Kharzeev, H.-U. Yee, I. Zahed, Holographic Pomeron and the Schwinger Mechanism, Phys. Rev. D 85 (2012) 105005. arXiv:1202.0831, doi:10.1103/PhysRevD.85.105005.
# Electrotactile Feedback For Enhancing
Contact Information in Virtual Reality
S. Vizcay1 [0000-0002-1837-5607], P. Kourtesis1 [0000-0002-2914-1064], F.
Argelaguet1 [0000-0002-6160-8015], C. Pacchierotti2 [0000-0002-8006-9168] and
M. Marchal3 [0000-0002-6080-7178]
1Inria, Univ Rennes, IRISA, CNRS – 35042 Rennes, France. 2CNRS, Univ Rennes,
IRISA, Inria – 35042 Rennes, France. 3Univ Rennes, INSA, IRISA, Inria, CNRS –
35042 Rennes, France and Institut Universitaire de France.
###### Abstract
This paper presents a wearable electrotactile feedback system to enhance
contact information for mid-air interactions with virtual objects. In
particular, we propose the use of electrotactile feedback to render the
interpenetration distance between the user’s finger and the virtual content
being touched. Our approach consists of modulating the perceived intensity
(frequency and pulse width modulation) of the electrotactile stimuli according
to the registered interpenetration distance. In a user study (N=21), we
assessed the performance of four different interpenetration feedback
approaches: electrotactile-only, visual-only, electrotactile and visual, and
no interpenetration feedback. First, the results showed that contact precision
and accuracy were significantly improved when using interpenetration feedback.
Second, and more interestingly, there were no significant differences between
visual and electrotactile feedback when the calibration was optimized and the
user was familiarized with electrotactile feedback. Taken together, these
results suggest that electrotactile feedback could be an efficient replacement
of visual feedback for enhancing contact information in virtual reality
avoiding the need of active visual focus and the rendering of additional
visual artefacts.
[500]Human-centered computing Haptic devices [500]Human-centered computing
Virtual reality [500]Human-centered computing User studies
###### keywords:
electrotactile feedback; virtual reality; human computer interaction; contact
precision; contact accuracy; contact rendering; surface; interpenetration;
haptic interface
## 1 Introduction
Immersive virtual reality (VR) is becoming a common platform for education and
training, where precision and accuracy are important aspects [Car17, JK18].
Visual feedback is important for contact accuracy and precision [TMGL20].
However, when interacting in VR, if no other measures are adopted, the user’s avatar might interpenetrate the virtual content, which significantly reduces the accuracy, precision, and realism of the interaction. To avoid such undesired behavior and facilitate mid-air interactions in VR, a popular technique is the God-object method, where the virtual hand is constrained by the collision with the virtual object(s) [ZS94, SH16]. Using the God-object method, there is a decoupling of the visual
representation of the user’s avatar (e.g., the virtual hand) from its actual
position by, e.g., constraining the avatar on the virtual object’s surface
even when the actual tracked position would be inside the object [PB16].
However, with this popular technique, while the virtual hand is rendered on the collided object’s surface, the user’s physical hand can still move unconstrained in the real environment (i.e., penetrating the virtual object’s coordinates), leaving the user unable to perceive the inaccuracy and imprecision of the contact [ZS94, SH16]. Considering the dominance of visual information, visual interpenetration feedback could provide the required information to facilitate contact precision and accuracy. Nevertheless, providing additional visual cues would occlude other virtual surfaces and render visual artefacts in the virtual environment. Visual artefacts negatively affect contact precision and accuracy [QBY∗15]. To cope with this
limitation, haptic feedback proportional to the interpenetration may provide the user with information regarding contact precision and accuracy. This haptic information indicates the user’s interpenetration into the virtual content [MPS∗18], which would partially mimic the behavior of real contacts, where interpenetrating deeper into an object elicits stronger tactile sensations.
Haptic VR interfaces provide vibrotactile, pressure, or skin stretch feedback
sensations [PSS∗17]. However, these displays need to embed one or more
mechanical actuators that move an end-effector in contact with the user’s
skin, limiting their wearability, portability, and comfort [PSS∗17]. In this
respect, electrotactile haptic stimulation has been recently recognized as a
promising technique to provide rich tactile sensations across the hand or arm
through a single wearable actuator [CMH08]. Applying electrical stimulation to
the skin elicits tactile sensations by directly activating the sensory nerve
endings, de facto bypassing the skin mechanoreceptors. The electrical current
is produced by a local polarization near the skin, generated by two or more
electrotactile interfaces (electrodes). With respect to other tactile
solutions, electrotactile stimulation only requires very thin electrodes
placed at the sensation point (e.g., the fingertip), providing a wearable and
comfortable solution to tactile haptics.
In a VR pilot study (N=3), electrotactile feedback provided haptic information
pertinent to heat, texture, and jolt [PW16]. In a small user
study (N=10), electrotactile feedback was able to provide the VR user with
tactile sensations such as roughness, friction, and hardness [YK17]. In a VR
user study (N=19), electrotactile feedback was successful in improving
grasping [HDC∗16]. Despite its promising features, the benefits of
electrotactile feedback in VR interaction are still under-investigated. This
paper presents the design and evaluation of a wearable electrotactile feedback
system to enhance contact information for mid-air interactions with virtual
objects.
The main contributions of our work can be summarized as follows:
* •
Design of an electrotactile feedback technique for enhancing contact information in VR, in which the intensity of the electrotactile feedback is proportional to the interpenetration (i.e., the distance between the real and virtual hand).
* •
A user study (N=21) evaluating the performance of electrotactile interpenetration feedback against no interpenetration feedback, visual interpenetration feedback, and combined visuo-electrotactile interpenetration feedback.
## 2 Related work
_Wearable haptics in VR._ Popular techniques to provide rich wearable haptic feedback include moving platforms that can orient and/or translate on the skin, pin-arrays, shearing belts and tactors, pneumatic jets, and balloon-based systems [PSS∗17]. Frisoli et al. [FSSB08] were among the first to
present a fingertip haptic display for improving curvature discrimination
using a moving platform. It was composed of a parallel platform and a serial
wrist. Gabardi et al. [GSLF16] improved this device and applied it in VR
interaction. The pose of the user’s finger was tracked through external
optical sensors and collisions between the finger avatar and the virtual
surface were rendered by the device. Minamizawa et al. [MKKT07] presented a
wearable fingertip device for weight rendering in VR. It consisted of two DC
motors moving a belt in contact with the user’s fingertip. A similar device
was also used in [PSH∗16] for multi-finger manipulation of objects in VR and
in [SPdT∗20, DTPML18] for augmenting tangible objects in VR. More recently,
Girard et al. [GCWG17] developed a wearable fingertip device capable of
rendering 2-DoF skin stretch stimuli in VR. The considered use cases comprised
tapping on a virtual bottle and feeling the texture and weight of a virtual
object.
_Electrotactile haptics._ Due to the difficulty in designing effective
interaction paradigms, electrotactile interfaces have rarely been used in VR
scenarios. Jorgovanovic et al. [JDD∗14] evaluated the ability of human subjects to control grasping force in closed loop using electrotactile feedback, testing it in a virtual robotic grasping experiment. More
recently, a pilot study by Pamungkas and Ward [PW16] used electrotactile
stimulation on the hand to convey the sensation of contacts in VR. Use cases
comprised playing with a virtual bouncing ball, setting up a camp fire,
feeling different surface textures, and shooting a gun. Hummel et al. [HDC∗16]
developed an electrotactile delivery system composed of a small tactor with
eight electrodes for each finger. They tested it in three VR assembly tasks
for remote satellite servicing missions, which included pressing a button,
switching a lever switch, and pulling a module from its slot. Finally, Yem et
al. [YVKK18] employed electrotactile stimulation to elicit flexion/extension
illusory movements to the user’s fingertip. Doing so, the system was able to
induce haptic feedback sensations, providing information about virtual
objects’ stiffness and stickiness properties.
_Contact information in VR._ In recent years, researchers have focused on the sensation of making and breaking contact [dTHP∗20, CPMP17, CMPP19, YGKP20], as it has proven to be an important factor for immersion in VR. Kuchenbecker et
al. [KFK∗08] endowed a desktop kinesthetic interface with a passive tactile
end-effector. The kinesthetic feedback provided by the desktop device bends
the internal springs of the tactile end-effector and brings a shell in contact
with the user’s finger, providing the sensation of making/breaking contact
with the virtual object. In a similar fashion, Gonzalez et al. [GBG15] attached a ring of optical sensors to the end-effector, making it possible to measure the distance of the fingertip from the ring and adjust the transition between free space and contact. Finally, Prachyabrued and Borst [PB16] evaluated the impact of visual feedback in pinch-grasping a virtual object, where the best performance was obtained when showing both hand models (tracked and constrained) and when coloring the fingers in contact with the grasped object.
To the best of our knowledge, the effects of using electrotactile feedback for enhancing contact information in VR mid-air interactions have scarcely been studied.
## 3 Electrotactile Interpenetration Feedback
The tactile sensation elicited by electrotactile stimulation can be controlled by both the design parameters of the electrodes and the parameters of the delivered electrical signal [KTOH17, FGV08, KKMT99, LKS09]. Individual pads in an electrode can be disabled or enabled (as either cathode or anode), while the signal’s intensity, pulse width, and frequency can be increased or decreased to modulate the stimulus. Pad selection determines the location where the stimulus is applied.
Changing the current intensity is the most common way to increase or decrease the strength of the sensation, but the resulting feeling is highly variable across subjects, and relatively small increases can quickly elicit uncomfortable sensations. Changes in the pulse width and frequency can also alter the perceived strength of the sensation, although less significantly than changes in current intensity [Aie98]. Frequency changes are also employed to control how fast the tingles are delivered.
Given the disadvantages of intensity modulation, we fixed the intensity value
for each participant, chosen after a brief calibration so as to avoid any
undesired uncomfortable/painful sensation. We chose pulse width modulation to
control the strength of the sensation and frequency modulation to control its
speed. We chose to place the electrodes at the fingertip, as it is the area of
the body most often used for pointing, grasping, and manipulating objects
[PSS∗17]. To produce the stimulation at the center of the finger pad, we
select an anode-cathode configuration with the widest possible dynamic range
for the given electrode. As mentioned before, we modulate the electrotactile
interpenetration feedback proportionally to the distance between the
constrained user’s fingertip avatar (i.e., the object surface) and its actual
position as tracked by the system. In other words, both the perceived
intensity and speed of the electrotactile sensation increase as the user’s
physical hand interpenetrates the virtual object.
To achieve this effect, the pulse width is modulated linearly from 200 $\mu$s
to 500 $\mu$s. On the other hand, we considered the perception of frequency to
be logarithmic [VS13] and we apply a gamma correction so as to linearize how
fast the tingles are perceived. The proposed frequency modulation allows us to
explore a range of frequencies from 30 Hz to 200 Hz, which is the maximum
frequency of the device. Fig. 1 shows the two inputs with respect to the
normalized interpenetration (1: maximum, 0: zero, see subsection 3.2 for the
details). Choices regarding the range of stimulation and the active pads were made after a short preliminary study, so as to provide the most comfortable tactile sensations to the finger pad.
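The modulation described above can be sketched as follows. The mapping assumes a normalized interpenetration $\hat{d}\in[0,1]$ (see subsection 3.2); the gamma value is an illustrative assumption, as the exact correction is not reported.

```python
def electrotactile_modulation(d_hat, gamma=0.5):
    """Map normalized interpenetration d_hat in [0, 1] to stimulation
    parameters: pulse width grows linearly from 200 to 500 microseconds,
    while frequency (30-200 Hz) is gamma-corrected so that the roughly
    logarithmic perception of the tingle rate feels linear.
    gamma=0.5 is an illustrative assumption, not the authors' value."""
    d_hat = max(0.0, min(1.0, d_hat))                # clamp to valid range
    pulse_width_us = 200 + (500 - 200) * d_hat       # linear modulation
    frequency_hz = 30 + (200 - 30) * d_hat ** gamma  # gamma-corrected
    return pulse_width_us, frequency_hz
```

At zero interpenetration this yields the weakest stimulus (200 µs, 30 Hz) and at full interpenetration the strongest (500 µs, 200 Hz), matching the ranges in Fig. 1.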
Figure 1: Electrotactile interpenetration feedback modulation via pulse width (left) and frequency (right).
### 3.1 Electrical Stimulator and Electrodes
Our custom electrotactile stimulator has 32 channels, to which up to 5
electrodes can be connected simultaneously. Each channel may be defined either
as a cathode or as an anode. The stimulator produces a biphasic cathodic
stimulation through square waves. Electrical intensity and pulse width can be
set per each channel in the ranges of [0.1, 9] mA and [30, 500] $\mu$s,
respectively. The stimulus frequency can be set in the range [1, 200] Hz.
We used electrodes whose cathodes are distributed in a matrix array of 2 rows
by 3 columns, surrounded by two larger lateral anodes that are interconnected
by a segment going through both rows of cathodes (see the insets in the top-
left corner of Figure 2). This particular layout has been designed to widen the dynamic range of sensations.
Figure 2: Experimental apparatus. The electrical stimulator is attached to the
forearm and the electrode is placed in contact with the finger pad. The two
insets represent detailed views of the electrode. The user’s hand is tracked
by a HTC Vive Tracker.
### 3.2 System Design
The interpenetration of the finger avatar into the virtual content drives our
pulse width and frequency modulation. After a series of pilot user studies, we
determined that, for a cube of edge of 15 cm, 3 cm represents the maximum
interpenetration achieved in most interactions. For this reason, we normalized
the interpenetration $d$ by this value, i.e., a normalized interpenetration
$\hat{d}$ of 1 means a true interpenetration into the virtual object of 3 cm.
The position of the participant’s hand is tracked using an HTC Vive Tracker
attached to the back of the hand. The _physics-based_ hand avatar (physics-
based as its interactions are handled by the physics engine) collides with the
other objects in the virtual environment while trying to follow the position
indicated by the tracker. As described at the beginning of section 3, this
avatar resolves all virtual collisions and avoids any undesired virtual
interpenetration. Whenever it comes into contact with a virtual object, its
motion is constrained to stay on the object’s surface, regardless of how much
the user’s hand moves into the object. As detailed above, this discrepancy
between the user’s real hand and its avatar drives our electrotactile
interpenetration feedback.
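A minimal sketch of this computation, assuming the tracked and constrained fingertip positions are available as 3-D points in metres (the function and argument names are hypothetical, not taken from the authors' Unity implementation):

```python
import math

def normalized_interpenetration(tracked_tip, constrained_tip, d_max=0.03):
    """Normalized interpenetration d_hat in [0, 1].
    tracked_tip:     fingertip position reported by the tracker (x, y, z), m
    constrained_tip: avatar fingertip constrained on the object surface, m
    d_max:           3 cm, the maximum interpenetration seen in the pilots"""
    d = math.dist(tracked_tip, constrained_tip)  # real vs. avatar distance
    return min(d / d_max, 1.0)                   # d_hat = 1 means 3 cm
```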
## 4 Evaluation
This section describes a user study evaluating the proposed electrotactile
interpenetration feedback system. Our main objective was to assess the
potential advantages and disadvantages of using electrotactile
interpenetration feedback while performing a task requiring a high level of
accuracy and precision. In particular, we focused on a task in which the user
is asked to contact a virtual surface with his/her index finger while
minimizing the amount of interpenetration between the virtual surface and the
real index finger. In such a task, when the virtual hand avatar cannot
interpenetrate virtual surfaces, it is difficult for the user to precisely
assess the amount of interpenetration between his/her finger and the virtual
surface without any additional feedback.
Our main hypothesis is that additional interpenetration feedback, either visual and/or electrotactile, would increase the user’s awareness, effectively decreasing the amount of interpenetration; we further expect electrotactile interpenetration feedback to provide a more coherent experience. In addition to the electrotactile interpenetration feedback method presented in the previous section, for comparison purposes we also modulated the availability of visual cues. First, we considered a visual interpenetration feedback technique. Second, we considered the interaction with two different virtual objects, a fully-opaque cube and a wireframe-only cube; the latter limits the visual cues that could anticipate the contact with the virtual object.
Hence, the independent variable (IV) $Interpenetration$ $Feedback$ has four
levels: 1) No Interpenetration Feedback; 2) Visual Interpenetration Feedback;
3) Electrotactile Interpenetration Feedback, 4) Visuo-electrotactile
Interpenetration Feedback (i.e., interpenetration information is provided both
visually and haptically). The four levels of the IV are examined in two object
rendering methods: 1) fully-opaque and 2) wireframe-only. In the following, we
detail how these techniques are implemented in our experimental study.
### 4.1 Experimental Task
The task required participants to touch a virtual surface (a cube of size
$15\times 15\times 15$ cm) which was located on a virtual table (see Figure
3). Participants started the experience in a seated position in front of said
virtual table. The cube was placed on the user’s left side, while a resting
area was located on the user’s right side. For each trial, participants were
asked to start with their right hand in the resting area, to precisely touch the top side of the virtual cube with their index finger while keeping a steady contact for three seconds, and then, once notified, to return their hand to the resting area. Participants were
instructed to minimize the interpenetration between the virtual surface and
their real index finger while ensuring a continuous contact with the cube. To
ensure that all participants kept a consistent timing, auditory feedback (a
short beep sound) was used to notify the users when they had to start the
trial and when the three seconds of contact were achieved. During the whole
interaction, participants were asked to keep their hand in a closed fist with the index finger fully extended, facilitating a similar posture and an approximate collocation between the physical and the virtual hand. The experimenter prompted each participant to adopt this posture at the start of each block; during the breaks between blocks, participants could rest and move their hand and fingers freely, which helped avoid postural fatigue.
### 4.2 Electrotactile Interpenetration Feedback Calibration
The calibration of the electrotactile interpenetration feedback is a key step
to ensure that all participants perceived a consistent tactile feedback. As
the calibration cannot be done automatically, participants were asked to
calibrate the intensity of the electrotactile interpenetration feedback
multiple times throughout the experiment. If the calibration is not done for each user, variations in parameters such as skin conductivity and/or tolerance could result in stimuli that are not perceived (i.e., intensity too low) or uncomfortable or even painful (i.e., intensity too high).
The appropriate range of current intensity was determined using the method of limits. Starting from a low intensity value, the intensity was increased in steps of 0.1 mA. This incremental procedure continued, step by
step, until the participant confirmed that the electrotactile feedback was
perceived, defining his/her sensation detection threshold. Then, the
incremental approach continued until the participant confirmed that the
electrotactile feedback caused discomfort, defining his/her pain/discomfort
threshold. Similarly, starting from a high intensity value, the intensity was
decreased considering steps of 0.1 mA. This starting high intensity value was
defined as the 3rd quartile of the range between the detection and
pain/discomfort thresholds. This decremental procedure continued, step by
step, until the participant confirmed that the electrotactile feedback was not
perceived anymore.
During the entire calibration process, the signal frequency was set to 200 Hz
and the pulse width to 500 $\mu$s. These values were chosen as they represent
the strongest stimulation that could be achieved through the interpenetration
feedback, as detailed in the previous Section. The final intensity value to be
used throughout the experiment was calculated from a linear interpolation
between the detection and discomfort thresholds. We chose an intermediate
value between these thresholds, as it would result in a perceivable yet
comfortable stimulus
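The ascending series of this procedure, together with the final interpolation, can be sketched as below. The descending series (used to refine the detection threshold) is omitted for brevity, the interpolation weight of 0.5 is an assumption (the paper only states an intermediate value was chosen), and the callback names are hypothetical.

```python
def calibrate_intensity(is_perceived, is_uncomfortable,
                        start=0.1, step=0.1, weight=0.5):
    """Method-of-limits sketch. is_perceived / is_uncomfortable stand in
    for the participant's yes/no answers at a given intensity (mA)."""
    i = start
    while not is_perceived(i):       # ascend until detection threshold
        i += step
    detection = i
    while not is_uncomfortable(i):   # keep ascending until discomfort
        i += step
    discomfort = i
    # Final intensity: linear interpolation between the two thresholds.
    return detection + weight * (discomfort - detection)
```

For a participant who starts perceiving the stimulus at 1.0 mA and reports discomfort at 3.0 mA, this returns roughly 2.0 mA.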
### 4.3 Participants and Apparatus
A total of 21 individuals [age: $mean(SD)=25.67(2.74)$; $range=23-33$ years old; gender: 14 males, 6 females, and 1 non-binary] were recruited through an on-campus advertisement and a call on institutional email lists. All 21 participants were familiar with VR head-mounted displays (VR-HMDs), 10 had previous experience with tactile feedback in VR (predominantly vibrotactile), and only 2 had experienced electrotactile feedback in the past.
An HTC Vive Pro HMD was used for immersion, driven by a PC with an NVIDIA RTX 2070. An HTC Vive Tracker attached to the back of the palm (see Figure 2) and 2 Lighthouse base stations were used for tracking. Figure 2 also shows the placement of the electrotactile stimulator, which weighed 300 g. A Velcro strap held the electrode in place; its tightness was adjusted to achieve a comfortable yet firm fit.
The VR application was built using Unity in conjunction with the SteamVR Unity
plugin (see Figure 3). The collisions were directly handled by the SteamVR
plugin and the Unity physics simulation. A blackboard was used for providing
any critical information/instructions within the participant’s field of view.
### 4.4 User Representation and Visual Interpenetration feedback
The user could only see a visual representation of their right hand
(proxy/avatar). As stated earlier, the hand avatar cannot go through the
virtual object so as to better mimic a real interaction. The hand is always
kept in a closed fist, with the index finger fully extended, which is the same
pose we asked participants to keep while interacting with the cube (see Figure
4).
An outline effect was used as visual interpenetration feedback (see Figure 4). Outline effects are commonly used to denote interactivity or selection [ZCH∗19, SCZ∗20]. Here, however, the outline was dynamically rendered in proportion to the interpenetration depth. The outline was parameterized using a scale factor and a border size: the scale factor went from $\times 1$ (edge of $15$ cm) to $\times 1.2$ (edge of $18$ cm), and the border varied between 1 and 5 pixels.
When both electrotactile and visual interpenetration feedback were provided, a mismatch between the two modalities could produce a haptic uncanny-valley effect, which compromises performance [BGFOH18]. For this reason, the modification of the outline’s scale and border size was determined through a pilot study. This parameter range was chosen to ensure that, comparably to the electrotactile interpenetration feedback, the perceived intensity of the visual interpenetration feedback is proportional to the size of the interpenetration.
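The outline parameterization can be sketched as a simple mapping over the normalized interpenetration; linearity is an assumption here, as the paper only reports the parameter ranges.

```python
def outline_feedback(d_hat):
    """Visual interpenetration feedback sketch: scale the outline from
    x1.0 (15 cm edge) to x1.2 (18 cm edge) and widen its border from
    1 to 5 pixels as d_hat goes from 0 to 1."""
    d_hat = max(0.0, min(1.0, d_hat))
    scale = 1.0 + 0.2 * d_hat          # x1.0 -> x1.2
    border_px = 1 + round(4 * d_hat)   # 1 -> 5 pixels
    return scale, border_px
```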
Figure 3: Depiction of the subjective view of the user during the experiment. The user could touch the virtual cube placed on top of the table on the left. The semitransparent cube on the right side defines the resting area. The blackboard in front of the user provided general information during the experiment.
### 4.5 Experimental Protocol
This study received the approval of the authors’ institutional ethics committee. As it was carried out during the COVID-19 pandemic, a preventive hygiene and disinfection protocol was implemented. Participants signed an informed consent form prior to their participation. The setup consisted of a VR area, a table, and a chair. The table was co-located with the virtual table. A pre-exposure questionnaire was filled in to acquire demographic data. Participants then went through a tutorial for performing the experimental task.
The experiment followed a Latin square design: 4 types of interpenetration feedback $\times$ 2 types of cube shading $\times$ 2 parts of the experiment (i.e., $4\times 2\times 2$). Each part of the experiment had 4 blocks, and each block consisted of 12 repetitions. In total, each participant performed 96 repetitions (i.e., 48 per part). Three calibrations were performed: a) before Part 1; b) after Part 1; c) after Part 2. Finally, a post-exposure questionnaire was filled in to acquire the participants’ subjective ratings on the types of feedback, with a focus on electrotactile interpenetration feedback.
Figure 4: Visual shading conditions (left & right) and visual interpenetration
feedback (right).
### 4.6 Dependent Variables and Hypotheses
In order to test our hypotheses, and considering the experimental task, we measured the following metrics: the average ($avg\_d$), standard deviation ($std\_d$), and maximum ($max\_d$) interpenetration during the three seconds in which the user was asked to remain in contact with the virtual cube surface. As we tracked the back of the hand rather than the user’s finger, the interpenetration was computed by measuring the distance between the position of the _real_ fingertip in the virtual environment and the position of the avatar fingertip constrained on the surface of the cube. The interpenetration measures provide information regarding the users’ awareness of the real interpenetration (accuracy) and their ability to keep a constant, non-oscillatory contact with the virtual surface (precision). In this respect, our hypotheses were:
* •
[H1] The addition of interpenetration feedback (either visual or
electrotactile) will increase the precision and the accuracy of the task.
* •
[H2] The combination of the visual and electrotactile interpenetration
feedback will result on the best precision and accuracy.
* •
[H3] Precision and accuracy will be higher for the fully shaded condition.
* •
[H4] Precision and accuracy will improve along the experiment due to learning
effects.
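The three interpenetration metrics above can be computed per trial as in the following sketch; the names mirror the variables in the text, while the sampling itself is assumed.

```python
import statistics

def contact_metrics(samples):
    """Per-trial metrics over the 3 s contact window.
    samples: interpenetration distances (cm) recorded while the finger
    stayed in contact with the top face of the cube."""
    return {
        "avg_d": statistics.mean(samples),   # accuracy
        "std_d": statistics.stdev(samples),  # precision (oscillation)
        "max_d": max(samples),               # worst-case accuracy
    }
```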
The sensation and discomfort thresholds (in mA) were obtained during the three calibration processes. We expected the thresholds to vary due to changes in skin conductivity or due to habituation. In this respect, our hypothesis was:
* •
[H5] Sensation and discomfort thresholds will monotonically increase during
the experiment.
Finally, the participants’ subjective reports were collected through a post-exposure questionnaire, which is included as supplementary material. The questionnaire encompassed questions pertaining to the perceived usefulness, coherence, and resemblance to a real interaction of the electrotactile interpenetration feedback, as well as some questions comparing the interpenetration feedback modalities. In this respect, our hypotheses were:
* •
[H6] Electrotactile interpenetration feedback will be reported as useful and
coherent.
* •
[H7] There will be no significant differences between visual and
electrotactile interpenetration feedback in terms of usefulness, coherence,
and/or resemblance.
## 5 Results
The homogeneity and normality assumptions were examined. The performance variables violated the normality assumption; since their distributions were substantially positively skewed, the values were log-transformed, after which no violation was detected. The same transformation was applied to the intensity values, which also violated the normality assumption. Finally, the shading condition showed no significant effects, which suggests the rejection of H3. For the sake of clarity, the shading condition will not be considered further in the results. Thus, for the following analyses the conditions were the type of interpenetration feedback and the part of the experiment.
A two-way repeated measures ANOVA was performed for each performance variable, followed by post-hoc pairwise comparison tests. When the sphericity assumption was violated, the Greenhouse-Geisser correction was applied. Furthermore, a pairwise comparison was conducted to explore potential differences in the intensity values amongst the three calibrations (i.e., initial, middle, and final). Also, a Pearson correlational analysis was performed to investigate relationships between the intensity values and the performance variables. Finally, the post-exposure self-reports (Likert scale) of the participants were explored; a non-parametric pairwise comparison was used to examine differences in the questionnaire’s items. The Bonferroni correction was applied according to the number of comparison pairs in each analysis to counter p-value inflation due to multiple comparisons. The descriptive statistics for all performance variables and the intensities, as well as the results for the maximum interpenetration, are included in the supplementary material for completeness.
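The two data-preparation steps described above can be sketched as follows; this is a minimal illustration, not the authors' analysis code.

```python
import math

def log_transform(values):
    """Log-transform positively skewed (strictly positive) performance
    values so that parametric tests such as a repeated-measures ANOVA
    can be applied."""
    return [math.log(v) for v in values]

def bonferroni(p_values):
    """Bonferroni correction: multiply each p-value by the number of
    comparisons, capping at 1.0, to counter multiple-comparison
    inflation of the familywise error rate."""
    m = len(p_values)
    return [min(p * m, 1.0) for p in p_values]
```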
### 5.1 Index Finger Interpenetration
_Average Index Interpenetration (Accuracy)._ The ANOVA’s outcomes indicated that all effects were statistically significant. The type of feedback produced a main effect of $F(1.97,39.31)=25.89$, $p<.001$, postulating a substantial difference amongst the feedback types. In line with H1, $\mathcal{\omega}^{2}=0.54$, $95\%$ $CI[0.35,0.66]$ suggests a very large effect of the type of feedback on average interpenetration (i.e., accuracy). In support of H4, the part of the experiment yielded an effect of $F(1,20)=31.74$, $p<.001$, suggesting a significant difference between the two parts of the experiment with a very large effect size of $\mathcal{\omega}^{2}=0.58$, $95\%$ $CI[0.26,0.75]$. The interaction between the parts of the experiment and the types of feedback produced an effect of $F(2.11,42.21)=5.72$, $p=.006$, indicating significant differences among the pairs between the types of feedback and the parts of the experiment. An $\mathcal{\omega}^{2}=0.18$, $95\%$ $CI[-0.01,0.34]$ was yielded for this interaction, indicating a large effect on accuracy.
The post-hoc pairwise comparisons are displayed in Figure 5. The results
support both H1 and H2. In the 1st part, the visual $g=0.90,p<.001$ and
combined feedback $g=1.10,p<.001$ were substantially improved compared to the
no interpenetration feedback condition, with a large effect in both
comparisons. Also, the visual $g=0.52,p=.026$ and combined feedback
$g=0.67,p=.002$ were substantially better than the electrotactile condition,
with a moderate effect size in both comparisons. In the 2nd part, the visual
$g=1.10,p<.001$, combined $g=1.69,p<.001$, and electrotactile interpenetration
feedback $g=1.10,p<.001$, were significantly better compared to the no
interpenetration feedback condition, with a very large effect in every
comparison. No further significant differences were detected in either part.
Furthermore, the electrotactile $g=0.86,p<.001$ and combined feedback
$g=0.69,p<.001$ were significantly improved in the 2nd part compared to the
1st part, with a large and a moderate effect correspondingly. In contrast, the
equivalent comparisons for the no interpenetration feedback and visual
interpenetration feedback were not statistically significant. Notably, only
the size of change for electrotactile interpenetration feedback was
significantly greater than the size of change for the no interpenetration
feedback $g=0.64,p=.005$ and visual interpenetration feedback $g=0.60,p=.010$,
with a moderate effect size for both.
Figure 5: Average Interpenetration in cm. Comparisons per Type of Feedback and
Part of the Experiment. IF = Interpenetration Feedback
Standard Deviation Index Interpenetration - Precision
The results of the ANOVA suggested statistically significant effects at the
.05 significance level. Supporting H1, the main effect of the type of feedback
on index oscillations (i.e., precision) yielded an effect of
$F(1.88,37.69)=28.92,p<.001$, and of $\omega^{2}=0.57,95\%$
$CI[0.39,0.68]$, postulating significant differences between the diverse types
of feedback with a very large effect size. Aligned with H4, the main effect
for the part of the experiment on precision produced an $F(1,20)=9.32,p=.006$,
and of $\omega^{2}=0.27,95\%$ $CI[-0.01,0.54]$, suggesting a
significant difference between the two parts of the experiment with a large
effect. The interaction between the parts of the experiment and the types of
feedback yielded an interaction effect of
$F(2.38,47.66)=4.30,p=.014$, and of $\omega^{2}=0.13,95\%$
$CI[-0.03,0.29]$, postulating a significant difference between the diverse
pairs with a large effect size.
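The reported $\omega^{2}$ values can be cross-checked against the $F$ statistics. As a hedged sketch (the paper does not state which estimator it used), the common $F$-based partial omega-squared approximation below reproduces the three reported values to within rounding:

```python
def omega_squared(f, df1, df2):
    """Partial omega-squared estimated from an F statistic and its
    degrees of freedom: w2 = df1*(F - 1) / (df1*F + df2 + 1).
    A common approximation; the paper's exact estimator is unstated."""
    return df1 * (f - 1) / (df1 * f + df2 + 1)

# cross-check against the reported values (0.57, 0.27, 0.13)
print(round(omega_squared(28.92, 1.88, 37.69), 2))  # type of feedback
print(round(omega_squared(9.32, 1.0, 20.0), 2))     # part of experiment
print(round(omega_squared(4.30, 2.38, 47.66), 2))   # interaction
```

Note that the fractional degrees of freedom (e.g. $F(1.88,37.69)$) indicate a Greenhouse-Geisser-style sphericity correction was applied to the repeated-measures ANOVA.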
The post-hoc pairwise comparisons are illustrated in Figure 6. The results
appear to support both H1 and H2. In the 1st part of the experiment, the
visual $g=1.06,p<.001$ and combined feedback $g=1.109,p<.001$ were found to be
substantially greater than the no interpenetration feedback condition, with a
very large effect in both comparisons. Also, the visual $g=0.79,p<.001$ and
combined feedback $g=0.90,p<.001$ were found to be substantially better than
the electrotactile interpenetration feedback, with a large effect size in both
comparisons. In the 2nd part of experiment, the visual $g=1.32,p<.001$,
combined $g=1.71,p<.001$, and electrotactile interpenetration feedback
$g=1.07,p<.001$, were found to be significantly better than the no
interpenetration feedback condition, with a very large effect in every
comparison. No further significant differences were detected in either part.
Furthermore, only the electrotactile $g=0.74,p<.001$ was found to be
substantially improved in the 2nd part compared to the 1st part, with a large
effect. The equivalent comparisons for the no interpenetration feedback,
visual interpenetration, and combined interpenetration feedback were not
statistically significant. Importantly, the size of change for electrotactile
interpenetration feedback from the 1st to the 2nd part was significantly
larger than the size of change of the no interpenetration feedback
$g=0.54,p=.027$ and visual interpenetration feedback $g=0.59,p=.012$, with a
moderate effect size in both comparisons.
Figure 6: Index Oscillation in cm. Comparisons per Type of Feedback and Part
of the Experiment. IF = Interpenetration Feedback
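The effect sizes reported throughout are Hedges' $g$, i.e. Cohen's $d$ with a small-sample bias correction. A minimal sketch for two independent samples follows; since the paper's comparisons are within-subject, a paired-samples variant was likely used, and the sample data in the test are illustrative, not from the study:

```python
import math

def hedges_g(a, b):
    """Hedges' g: Cohen's d (mean difference over the pooled SD)
    multiplied by the small-sample bias-correction factor J.
    Independent-samples form; a sketch only, since the study's
    paired design would normally use a paired variant."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    d = (ma - mb) / pooled_sd
    j = 1 - 3 / (4 * (na + nb) - 9)  # small-sample bias correction
    return d * j
```

By the usual conventions, $|g|$ around 0.5 is a moderate effect, around 0.8 a large effect, and above roughly 1.2 a very large effect, which matches the qualitative labels used in the text.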
### 5.2 Calibration of Electrotactile Feedback
The pairwise comparisons are displayed in Figure 7. The sensation threshold
(i.e., the intensity that the participant commences perceiving the
electrotactile feedback), the discomfort threshold (i.e., the intensity that the
participant commences feeling discomfort due to electrotactile feedback), and
the actual value (i.e., the weighted intensity that was implemented to provide
electrotactile interpenetration feedback) of the middle (i.e., after the 1st
and before the 2nd part of experiment) and final calibration (i.e., after the
2nd part of experiment) were statistically greater than the equivalent
intensities of initial calibration (i.e., before the 1st part of experiment).
Specifically, the sensation threshold $g=1.45,p<.001$, the discomfort
threshold $g=0.91,p<.001$, and the actual value $g=1.08,p<.001$ of middle
calibration were substantially larger than the equivalent intensities of
initial calibration, while the effect size was large to very large in each
comparison. Similarly, the sensation threshold $g=0.70,p=.005$, the discomfort
threshold $g=0.83,p<.001$, and the actual value $g=0.88,p<.001$ of final
calibration were significantly larger than the equivalent intensities of
initial calibration, while the effect size was large in every comparison.
However, no significant differences were detected between final and middle
calibration, which postulates the rejection of H5. Finally, none of the
intensity values was significantly correlated either positively or negatively
with any of the performance variables.
Figure 7: Intensity values in mA. Comparisons for each calibration.
### 5.3 Subjective Results
The descriptive statistics of the post-exposure self-reports are displayed in
Table 1, which support H6. In support of H7, the Wilcoxon signed rank test
yielded no significant differences in the comparisons between the visual and
electrotactile interpenetration feedback pertaining to usefulness (i.e., how
useful the feedback was for the task’s completion), coherence (i.e., how
coherent the feedback was with touching the surface), and resemblance (i.e.,
how much the feedback resembles touching a real surface). The Wilcoxon rank
sum test, for the question pertaining to the combined feedback condition,
indicated a significant difference $p=.024$ between electrotactile and
combined feedback, postulating that the participants reported that they relied
significantly more on the electrotactile interpenetration feedback rather than
both interpenetration feedbacks (i.e., visual and electrotactile). However, in
the same question, there was no significant difference between visual and
electrotactile interpenetration feedback suggesting a balanced preference.
Table 1: Self-reports on Electrotactile Interpenetration Feedback
Criterion | Median | Mode | Range
---|---|---|---
Usefulness | 6 | 5 | 2 - 7
Coherence | 5 | 5 | 4 - 7
Resemblance | 2.5 | 1 | 1 - 6
Pleasantness | 4 | 4 | 3 - 6
Perceptual Update | 5.5 | 6 | 2 - 7
Combined Synchronicity | 6 | 7 | 2 - 7
N = 21; Max = 7; $\geq 4$ indicate positive response;
Median = middle number of the provided responses;
Mode = most frequent response
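The Wilcoxon signed-rank test used on these paired Likert ratings can be sketched in pure Python with a normal approximation for the p-value (the ratings in the test are illustrative, not the study's data, and for samples this small an exact p-value would normally be preferred):

```python
import math

def wilcoxon_signed_rank(a, b):
    """Wilcoxon signed-rank test for paired samples, with tied ranks
    averaged, zero differences dropped, and a normal approximation
    for the p-value. A sketch; assumes at least one nonzero difference."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    # rank the absolute differences, averaging the ranks of ties
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p
```

With 21 participants rating on a 7-point scale, ties and zero differences are common, which is why the averaged-ranks and zero-dropping conventions matter here.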
Moreover, the participants reported that the visual interpenetration feedback
was useful and coherent, albeit it provided a feeling different from
touching a real surface. Similarly, it was reported that the electrotactile
interpenetration feedback was useful to very useful and coherent, albeit it
provided a feeling different to very different from touching a real
surface. Also, the self-reports suggested that the electrotactile
interpenetration feedback provided neither a pleasant nor an unpleasant
sensation, and that it provided a frequent to very frequent perceptual
update of pressing down on the surface. Finally, the self-reports indicated
that the combination of electrotactile and visual interpenetration feedback
had a very coherent to completely coherent synchronization, and it was
perceived as moderately similar to touching a real surface.
## 6 Discussion
Our findings agree with previous studies, where bimodal feedback increased the
accuracy and precision in contact tasks [TMGL20]. Also, our results are in
accordance with a meta-analysis of user interaction studies, where the bimodal
(i.e., visuohaptic) feedback was found to improve performance [BPG∗06].
However, a haptic uncanny effect may substantially compromise the benefits of
bimodal feedback on performance [BGFOH18]. In our study, the dynamic visual
interpenetration feedback was matched to the varying intensities of
electrotactile interpenetration feedback, which aligns with the suggestions to
mitigate or evade a haptic uncanny effect [BGFOH18]. Hence, our results
further support the importance of having matched feedback modalities, when
implementing bimodal feedback.
Furthermore, our quantitative and qualitative outcomes align with the
qualitative results of Pamungkas and Ward [PW13], where the participants
reported that the electrotactile interpenetration feedback assisted them with
performing a VR robot-teleoperation task faster and more accurately. However,
in our quantitative user study the benefits of providing electrotactile
interpenetration feedback became apparent only in the 2nd part of experiment,
suggesting the presence of a learning effect. Also, the intensity values for
providing electrotactile interpenetration feedback were substantially
increased in the middle calibration, which were used in the 2nd part of
experiment. Taken together, our findings postulate that the advantages of
electrotactile interpenetration feedback may surface when the user has been
familiarized with it, and/or when the intensity of feedback is well adjusted
to changes in the skin’s conductivity and/or the user’s increased tolerance to
the electrotactile interpenetration feedback. However, the intensity values
were correlated with neither accuracy nor precision. Thus, the increased
familiarity with the electrotactile interpenetration feedback, inferred from
the observed learning effect, appears to better explain why the advantageous
effects of electrotactile interpenetration feedback on performance surfaced
only during the 2nd part of the experiment.
It should be noted that the learning effect was not generally present. The
learning effect was identified only for the bimodal visuotactile feedback
regarding accuracy, as well as for the electrotactile interpenetration
feedback regarding both accuracy and precision. These findings indicate that
the learning effect was important only in conditions where the electrotactile
interpenetration feedback was used. Also, the size of the improvement in the
accuracy and precision from the 1st to the 2nd part of experiment was
significantly greater for the electrotactile interpenetration feedback
condition compared to the size of improvement for no interpenetration feedback
and visual interpenetration feedback conditions respectively. This finding
also postulates that the learning effect was observed due to the novelty of
the electrotactile interpenetration feedback, and not due to a practice effect
related to the task. Nevertheless, the effects of the familiarity with
electrotactile interpenetration feedback and the refined calibration on user’s
performance should be further scrutinized in prospective studies.
Finally, the results postulate the effectiveness of electrotactile and/or
visual interpenetration feedback. However, the implemented visual
interpenetration feedback requires constant visual focus, would occlude other
visual information, and would render visual artefacts in a fully interactive
virtual environment, which may substantially reduce contact precision and accuracy
[QBY∗15]. In contrast, the suggested electrotactile interpenetration feedback
appears to facilitate multiple contact interactions, although this should be
further examined. Thus, the presented electrotactile feedback system seems to
be the most appropriate and effective choice for improving precision and
accuracy of mid-air interactions in immersive VR.
## 7 Conclusion
In this paper, we proposed an electrotactile rendering algorithm to enhance
contact information in a virtual environment based on the modulation of the
pulse width and frequency of the electrical signal. In order to explore the
benefits and limitations of the proposed method, we conducted a user study
evaluating the electrotactile interpenetration feedback by examining the
performance in accurately and precisely touching a virtual rigid object.
The results showed that participants achieved significantly better performance
in terms of accuracy and precision when visual and/or electrotactile
interpenetration feedback was provided. More importantly, there were no
significant differences between visual and electrotactile interpenetration
feedback when the calibration was optimized and the user was familiarized with
electrotactile interpenetration feedback. Considering that visual
discrimination is dominant in such scenarios and a visual interpenetration
feedback renders visual artefacts, the findings indicate a promising future
for electrotactile-based systems for improving contact precision and accuracy
in interactive VR systems.
Further works could explore other alternatives for the modulation and location
of the electrotactile feedback or consider other tactile actuators. Also,
future work should attempt to replicate the experimental findings with finger
tracking capabilities. Further experiments could explore interactions
requiring more fingers (e.g. grasp operation) or the interaction with dynamic
objects (e.g. pushing a virtual object). Finally, further iterations of the
system should strive to improve its wearability.
## Acknowledgments
This work was supported by the European Union’s Horizon 2020 research and
innovation program under grant agreement No. 856718 (TACTILITY).
## References
* [Aie98] Aiello G. L.: Multidimensional electrocutaneous stimulation. _IEEE Trans. Rehabilitation Engineering 6_ , 1 (mar 1998), 95–101.
* [BGFOH18] Berger C. C., Gonzalez-Franco M., Ofek E., Hinckley K.: The uncanny valley of haptics. _Science Robotics 3_ , 17 (2018).
* [BPG∗06] Burke J. L., Prewett M. S., Gray A. A., Yang L., Stilson F. R., Coovert M. D., Elliot L. R., Redden E.: Comparing the effects of visual-auditory and visual-tactile feedback on user performance: a meta-analysis. In _Proc. 8th international Conf. Multimodal interfaces_ (2006), pp. 108–117.
* [Car17] Carruth D. W.: Virtual reality for education and workforce training. In _2017 15th International Conf. Emerging eLearning Technologies and Applications (ICETA)_ (2017), pp. 1–6.
* [CMH08] Chouvardas V., Miliou A., Hatalis M.: Tactile displays: Overview and recent advances. _Displays 29_ , 3 (2008), 185–194. doi:https://doi.org/10.1016/j.displa.2007.07.003.
* [CMPP19] Chinello F., Malvezzi M., Prattichizzo D., Pacchierotti C.: A modular wearable finger interface for cutaneous and kinesthetic interaction: control and evaluation. _IEEE Trans. Industrial Electronics 67_ (2019), 706–716.
* [CPMP17] Chinello F., Pacchierotti C., Malvezzi M., Prattichizzo D.: A three revolute-revolute-spherical wearable fingertip cutaneous device for stiffness rendering. _IEEE Trans. Haptics 11_ , 1 (2017), 39–50.
* [dTHP∗20] de Tinguy X., Howard T., Pacchierotti C., Marchal M., Lécuyer A.: Weatavix: Wearable actuated tangibles for virtual reality experiences. In _Haptics: Science, Technology, Applications: 12th International Conference, EuroHaptics 2020, Leiden, The Netherlands, September 6–9, 2020, Proceedings_ (2020), vol. 12272, Springer Nature, p. 262.
* [DTPML18] De Tinguy X., Pacchierotti C., Marchal M., Lécuyer A.: Enhancing the stiffness perception of tangible objects in mixed reality using wearable haptics. In _Proc. of IEEE Conf. Virtual Reality and 3D User Interfaces_ (2018), pp. 81–90.
* [FGV08] Folgheraiter M., Gini G., Vercesi D.: A multi-modal haptic interface for virtual reality and robotics. _Journal of Intelligent and Robotic Systems 52_ , 3-4 (2008), 465–488.
* [FSSB08] Frisoli A., Solazzi M., Salsedo F., Bergamasco M.: A fingertip haptic display for improving curvature discrimination. _Presence: Teleoperators and Virtual Environments 17_ , 6 (2008), 550–561.
* [GBG15] Gonzalez F., Bachta W., Gosselin F.: Smooth transition-based control of encounter-type haptic devices. In _Proc. of IEEE International Conf. Robotics and Automation_ (2015), pp. 291–297.
* [GCWG17] Gosselin F., Chabrier A., Weistroffer V., Girard A.: Design and development of miniature wearable haptic interfaces for the fingertips. In _Proc. of IEEE World Haptics Conference_ (2017), pp. 611–616.
* [GSLF16] Gabardi M., Solazzi M., Leonardis D., Frisoli A.: A new wearable fingertip haptic interface for the rendering of virtual shapes and surface features. In _Proc. IEEE Haptics Symposium_ (2016), pp. 140–146.
* [HDC∗16] Hummel J., Dodiya J., Center G. A., Eckardt L., Wolff R., Gerndt A., Kuhlen T. W.: A lightweight electrotactile feedback device for grasp improvement in immersive virtual environments. In _Proc. of IEEE Conf. Virtual Reality_ (2016), pp. 39–48.
* [JDD∗14] Jorgovanovic N., Dosen S., Djozic D. J., Krajoski G., Farina D.: Virtual grasping: closed-loop force control using electrotactile feedback. _Computational and mathematical methods in medicine 2014_ (2014).
* [JK18] Jensen L., Konradsen F.: A review of the use of virtual reality head-mounted displays in education and training. _Education and Information Technologies 23_ , 4 (2018), 1515–1529.
* [KFK∗08] Kuchenbecker K., Ferguson D., Kutzer M., Moses M., Okamura A.: The touch thimble: Providing fingertip contact feedback during point-force haptic interaction. In _Proc. Symposium on Haptic interfaces for virtual environment and teleoperator systems_ (2008), pp. 239–246.
* [KKMT99] Kajimoto H., Kawakami N., Maeda T., Tachi S.: Tactile feeling display using functional electrical stimulation. In _Proc. of ICAT_ (1999), p. 133.
* [KTOH17] Kaczmarek K. A., Tyler M. E., Okpara U. O., Haase S. J.: Interaction of Perceived Frequency and Intensity in Fingertip Electrotactile Stimulation: Dissimilarity Ratings and Multidimensional Scaling. _IEEE Transactions on Neural Systems and Rehabilitation Engineering 25_ , 11 (nov 2017), 2067–2074. doi:10.1109/TNSRE.2017.2702628.
* [LKS09] Lozano C. A., Kaczmarek K. A., Santello M.: Electrotactile stimulation on the tongue: Intensity perception, discrimination, and cross-modality estimation. _Somatosensory & motor research 26_, 2-3 (2009), 50–63.
* [MKKT07] Minamizawa K., Kajimoto H., Kawakami N., Tachi S.: A wearable haptic display to present the gravity sensation-preliminary observations and device design. In _Proc. World Haptics_ (2007), pp. 133–138.
* [MPS∗18] Meli L., Pacchierotti C., Salvietti G., Chinello F., Maisto M., De Luca A., Prattichizzo D.: Combining wearable finger haptics and augmented reality: User evaluation using an external camera and the microsoft hololens. _IEEE Robotics and Automation Letters 3_ , 4 (2018), 4297–4304.
* [PB16] Prachyabrued M., Borst C. W.: Design and Evaluation of Visual Interpenetration Cues in Virtual Grasping. _IEEE Trans. Visualization and Computer Graphics 22_ , 6 (jun 2016), 1718–1731.
* [PSH∗16] Pacchierotti C., Salvietti G., Hussain I., Meli L., Prattichizzo D.: The hRing: a wearable haptic device to avoid occlusions in hand tracking. In _Proc. IEEE Haptics Symposium_ (2016).
* [PSS∗17] Pacchierotti C., Sinclair S., Solazzi M., Frisoli A., Hayward V., Prattichizzo D.: Wearable haptic systems for the fingertip and the hand: Taxonomy, review, and perspectives. _IEEE Trans. Haptics 10_ , 4 (2017), 580–600.
* [PW13] Pamungkas D. S., Ward K.: Tele-operation of a robot arm with electro tactile feedback. In _IEEE/ASME International Conf. Advanced Intelligent Mechatronics: Mechatronics for Human Wellbeing_ (2013), pp. 704–709. doi:10.1109/AIM.2013.6584175.
* [PW16] Pamungkas D. S., Ward K.: Electro-tactile feedback system to enhance virtual reality experience. _International Journal of Computer Theory and Engineering 8_ , 6 (2016), 465–470.
* [QBY∗15] Qian K., Bai J., Yang X., Pan J., Zhang J.: Virtual reality based laparoscopic surgery simulation. In _Proc. 21st ACM Symposium on Virtual Reality Software and Technology_ (2015), pp. 69–78.
* [SCZ∗20] Sidenmark L., Clarke C., Zhang X., Phu J., Gellersen H.: Outline Pursuits: Gaze-assisted Selection of Occluded Objects in Virtual Reality. In _Proc. of ACM CHI Conf. Human Factors in Computing Systems_ (apr 2020).
* [SH16] Sagardia M., Hulin T.: A fast and robust six-dof god object heuristic for haptic rendering of complex models with friction. In _Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology_ (New York, NY, USA, 2016), VRST ’16, Association for Computing Machinery, p. 163–172. doi:10.1145/2993369.2993374.
* [SPdT∗20] Salazar S. V., Pacchierotti C., de Tinguy X., Maciel A., Marchal M.: Altering the stiffness, friction, and shape perception of tangible objects in virtual reality using wearable haptics. _IEEE Trans. Haptics 13_ , 1 (2020), 167–174.
* [TMGL20] Triantafyllidis E., McGreavy C., Gu J., Li Z.: Study of multimodal interfaces and the improvements on teleoperation. _IEEE Access 8_ (2020), 78213–78227. doi:10.1109/ACCESS.2020.2990080.
* [VS13] Varshney L. R., Sun J. Z.: Why do we perceive logarithmically? _Significance 10_ , 1 (feb 2013), 28–31.
* [YGKP20] Young E. M., Gueorguiev D., Kuchenbecker K. J., Pacchierotti C.: Compensating for fingertip size to render tactile cues more accurately. _IEEE Trans. Haptics 13_ , 1 (2020), 144–151.
* [YK17] Yem V., Kajimoto H.: Wearable tactile device using mechanical and electrical stimulation for fingertip interaction with virtual world. In _IEEE Virtual Reality (VR)_ (2017), pp. 99–104. doi:10.1109/VR.2017.7892236.
* [YVKK18] Yem V., Vu K., Kon Y., Kajimoto H.: Effect of Electrical Stimulation Haptic Feedback on Perceptions of Softness-Hardness and Stickiness while Touching a Virtual Object. In _Proc. of IEEE Conf. Virtual Reality and 3D User Interfaces_ (2018), pp. 89–96.
* [ZCH∗19] Zhao Y., Cutrell E., Holz C., Morris M. R., Ofek E., Wilson A. D.: SeeingVR: A set of tools to make virtual reality more accessible to people with low vision. In _Proc. of ACM CHI Conf. Human Factors in Computing Systems_ (2019), vol. 14, pp. 1–14.
* [ZS94] Zilles C., Salisbury J. K.: A constraint-based god-object method for haptic display. In _ASME Haptic Interfaces for Virtual Environment and Teleoperator Systems_ (1994), vol. 1, pp. 149–150.
# On boundaries of geodesically complete CAT(0) spaces
Conrad Plaut
Department of Mathematics
The University of Tennessee
Knoxville TN 37996
<EMAIL_ADDRESS>
###### Abstract
We give concrete, “infinitesimal” conditions for a proper geodesically
complete CAT(0) space to have semistable fundamental group at infinity.
## 1 Introduction
For a CAT(0) space $X$ there is a notion of boundary $\partial X$; details are
discussed later in this paper, but see also [4]. If a group $G$ acts properly
and cocompactly by isometries on $X$ then $G$ is called a CAT(0) group. If one
strengthens the CAT(0) assumption to Gromov hyperbolicity, it is well-known
that any two such spaces $X$ on which $G$ acts must have homeomorphic
boundaries, so there is a topologically well-defined
notion of the boundary of a hyperbolic group. It was shown by Swarup ([17]) in
1996 that connected boundaries of Gromov hyperbolic groups must be Peano
continua (see also [2], [3]). In contrast, Croke-Kleiner ([6]) showed in 2000
that the same group may act properly and cocompactly on homeomorphic CAT(0)
spaces with non-homeomorphic boundaries, and a definitive statement on the
topological structure of boundaries of one-ended (i.e. with connected
boundary) CAT(0) spaces remains elusive. On the one hand, arbitrary metric
compacta can be realized as boundaries of CAT(0) spaces (attributed to Gromov
with a proof sketched in [11], Proposition 2). But if $X$ is a cocompact,
proper CAT(0) space then there are the following known constraints: According
to Swenson ([18]), $\partial X$ must be finite dimensional. According to
Geoghegan-Ontaneda ([11]), if the dimension of $\partial X$ is $d$ then the
$d$-dimensional Čech cohomology with integer coefficients is non-trivial. In
the same paper, the authors show that cocompact proper CAT(0) spaces must be
“almost geodesically complete” in a sense attributed to Michael Mihalik that
extends the following notion of geodesically complete (also known as the
geodesic extension property): Every geodesic extends to a geodesic defined on
all of $\mathbb{R}$.
A natural candidate for a definitive general topological statement about
boundaries of proper cocompact CAT(0) spaces is that they are always “pointed
$1$-moveable”, a concept from classical shape theory. The reason for this is
that Geoghegan-Swenson ([9], Theorem 3.1) showed that a one ended proper
CAT(0) space has semistable fundamental group at infinity if and only if the
boundary is pointed $1$-movable, and it is a long-standing open question
whether proper, cocompact CAT(0) spaces all have semistable fundamental group
at infinity (or simply are “semistable at infinity”). In the compact metric
case, pointed $1$-movable is equivalent to the notion of “weakly chained”
introduced in [16]. The later is very simple to define, but we do not need
definitions of any of these concepts here; rather we use Theorem 1 from [16],
stated as Theorem 1 below, which only involves the following new definition
from [16]. Let $(x,y)$ be a pair of distinct points in a metric space $X$.
Then $(x,y)$ is called a (distance) sink ([16]) if $(x,y)$ is a local minimum
of the distance function. That is, $(x,y)$ is not a sink if and only if there
are points $x^{\prime},y^{\prime}$ arbitrarily close to $x,y$ with
$d(x^{\prime},y^{\prime})<d(x,y)$. In a metric space $X$, for any $x_{0}\in X$
and $r>0$, $\Sigma_{x_{0}}(r)$ denotes the metric sphere
$\{y:d(x_{0},y)=r\}$.
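The sink definition can be made concrete with a discrete check over a finite sample of points. The sample, the metric, and the neighborhood radius `eps` below are illustrative choices only; the actual definition quantifies over all points arbitrarily close to $x$ and $y$:

```python
def is_sink(points, dist, x, y, eps):
    """Discrete check that the pair (x, y) is a 'sink': no pair
    (x', y') with x' within eps of x and y' within eps of y has
    strictly smaller distance.  The finite sample `points`, the
    metric `dist`, and the radius `eps` are illustrative choices,
    not part of the definition."""
    d0 = dist(x, y)
    for xp in points:
        if dist(x, xp) >= eps:
            continue
        for yp in points:
            if dist(y, yp) < eps and dist(xp, yp) < d0:
                return False
    return True
```

For example, on the real line the pair $(0,1)$ is a sink relative to the sample $\{0,1\}$, but ceases to be one once the nearby points $0.1$ and $0.9$ are added, since $d(0.1,0.9)<d(0,1)$.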
###### Theorem 1
Let $X$ be a proper, geodesically complete CAT(0) space with connected
boundary and $x_{0}\in X$. Suppose there exist some $K>0$ and a positive real
function $\iota$, called the refining increment, such that for all
sufficiently large $t$,
1. 1.
$\underset{s\rightarrow t^{+}}{\lim}\iota(s)>0$ (in particular if $\iota$ is
lower semicontinuous from the right) and
2. 2.
if $d(x,y)<\iota(t)$ and $(x,y)$ is a sink in $\Sigma_{x_{0}}(t)$ then $x,y$
may be joined by a curve in $X_{t}\cap B(x,K)\cap B(y,K)$.
Then $\partial X\ $is weakly chained (hence $\partial X$ is pointed 1-movable
and $X$ is semistable at infinity).
We need the following notations to state our main application–we give more
details later in the paper. The $\pi$-truncated metric of any metric space is
the minimum of $\pi$ and the original metric. For any $x$ in a metric space
$X$, let $S_{x}$ denote the space of directions at $x$ with the angle $\alpha$
as metric, the completion of which is known to be a CAT(1) space when $X$ is
CAT($k$). A local cone in a CAT($k$) space $X$ is a closed metric ball
$C=\overline{B(o,\rho_{o})}$ for some $o\in X$ called the apex, such that
there is an isometry from $C$ into the $k$-cone $C_{k}(S_{o})$ that takes $o$
to the apex $0$ of the $k$-cone. The number $\rho_{o}>0$ is called the cone
radius at $o$. We say that a geodesic space $X$ is (resp. uniformly) locally
conical if every $x\in X$ is the apex of a local cone
$\overline{B(x,\rho_{x})}$ (resp. and the cover $\mathcal{C}$ by the interiors
of the local cones has a Lebesgue number $\rho>0$). Such a cover $\mathcal{C}$
is called a uniform cone cover of $X$. It is easy to check that if $X$ is a
locally conical geodesic space then $X$ is uniformly locally conical if either
there is a positive lower bound on the cone radii at all points or $X$ is
cocompact.
###### Theorem 2
If $X$ is a proper, geodesically complete CAT(0) space with connected boundary
and a uniform cone covering $\mathcal{C}$ such that for all $o\in
A_{\mathcal{C}}$, the complement of every $\frac{\pi}{2}$-ball in $S_{o}$ is
connected, then $\partial X$ is weakly chained. This condition on $S_{o}$ is
in particular true when
1. 1.
$S_{o}$ (with the angle metric) is a geodesic space or
2. 2.
$S_{o}$ has no cut points and is itself locally conical with all cone radii at
least $\frac{\pi}{2}$.
With a minor caveat, the geodesic completeness of $X$ is equivalent to $S_{o}$
being geodesically complete for all $o$ (Lemma 13). In other words, once $X$
is known to be proper, uniformly locally conical and having connected
boundary, the hypotheses of Theorem 2 reduce entirely to “infinitesimal”
questions about the space of directions at each apex.
If $K$ is an $M_{k}$-polyhedral complex, we always assume it has a specific
geometric type or “shape” assigned to each cell, and the set Shapes($K$) of
these shapes is finite. Bridson showed in his thesis (exposition in [4]) that
when Shapes($K$) is finite, $K$ has a natural geodesic metric induced by this
choice of shapes, and we will always take this metric on $K$. The space of
directions at any point is a spherical ($k=1$) polyhedral complex called the
link $Lk(x,K)$. In fact, the angle metric on $Lk(x,K)$ is precisely the
$\pi$-truncated metric of the induced geodesic metric when $Lk(x,K)$ is
considered as a spherical polyhedral complex. Each vertex $v$ of $K$ is the
apex of a local cone $\overline{B(v,\rho)}$, where $\rho$ is at least the
infimum of distances to any face not containing $v$ of a cell that does
contain $v$. Every $x\in K$ is contained in a ball $B(x,\varepsilon)$ that is
isometric to a ball of the same radius in some $\overline{B(v,\rho)}$ with $v$
a vertex such that $\overline{B(x,\varepsilon)}$ is a local cone, and this
$\varepsilon$ has a positive uniform lower bound (Theorem I.7.39, Lemma
I.7.54, [4]). That is, $K$ is uniformly locally conical. Now if $x\in
B(v,\rho)$ where $v\neq x$, then $Lk(x,K)$ is isometric to the spherical
suspension of $Lk(u,Lk(v,K))$, where $u$ is the direction of the geodesic from
$v$ to $x$ (Lemma 9). Now suppose that $Lk(v,K)$ has at least two points and
no free faces. By Lemma 13, $Lk(v,K)$ is geodesically complete, and since it
is locally conical, $Lk(u,Lk(v,K))$ is also geodesically complete by Lemma 14.
Moreover, by Lemma 14 and Lemma 9, the conditions of Theorem 2 are always
satisfied for any non-vertex $x$ if they are satisfied for every vertex.
Putting all of this together we obtain:
###### Theorem 3
Let $K$ be a CAT(0) Euclidean polyhedral complex with Shapes($K$) finite and
connected boundary. If for each vertex $v$ in $K$, $Lk(v,K)$ has at least two
points and no free faces, and the complements of all $\frac{\pi}{2}$-balls in
$Lk(v,K)$ are connected, then $\partial K$ is weakly chained. In particular
this is true when $Lk(v,K)$ satisfies condition (1) or (2) in Theorem 2 at
each vertex $v$.
###### Example 4
The well-known torus complexes of Croke-Kleiner ([6]) are locally isometric to
two or four Euclidean half-spaces glued along a line, so the link at every
vertex consists geometrically of semicircles (length $\pi$) attached “in
parallel” to one another at their endpoints. In particular, the links are
geodesic spaces with the angle metric and have no free faces. Theorem 3 now
provides an easy proof that the fundamental groups of torus complexes are
semistable at infinity. This is a known result–we do not have a reference but
Michael Mihalik explained that this can be read off from a presentation of the
group using some of his earlier results on the subject. Indeed many of the
results of this sort are based on assumptions about how the groups are
presented, for example for Coxeter and Artin groups ([13]). In contrast,
Theorem 3 requires no knowledge of a group acting on $K$ and in fact $X$ is
not even required to be cocompact.
We say that a metrized CAT(0) polyhedral complex $K$ with Shapes($K$) finite
satisfies Moussong’s condition if for every vertex $v$, every edge in
$Lk(v,K)$ has length at least $\frac{\pi}{2}$. Spherical geometry then implies
that $Lk(v,K)$ is locally conical at every vertex with cone radius at least
$\frac{\pi}{2}$. In this case, Theorem 2 gives a purely combinatorial
sufficient condition on the link at each vertex:
###### Corollary 5
Let $K$ be a CAT(0) Euclidean polyhedral complex with Shapes($K$) finite and
connected boundary that satisfies Moussong’s condition. If the link at each
vertex has at least two points and no free faces or cut points, then $\partial
K$ is weakly chained.
###### Example 6
Moussong showed ([14]) that the Davis complex ([7]) of any Coxeter group has a
non-positively curved metric satisfying Moussong’s condition. See also [8]
for background on Coxeter groups. Mihalik showed in 1996 ([13]) that all
Coxeter groups are semistable at infinity, but Corollary 5 gives an
alternative proof when the link has no free faces or cut points. However,
since any polyhedron may be the link of the Davis complex of a Coxeter group
(see Lemma 7.2.2, [8]), Corollary 5 does not apply in this way to all Coxeter
groups.
###### Example 7 (Geoghegan)
Take a unit square. Attach 16 unit squares around the boundary wrapping around
twice (topologically attaching a Moebius band via its median circle to the
boundary of the square). Add 32 unit squares wrapping twice around the new
boundary. Continue this process to create an infinite square complex. Up to
isometry there are only two kinds of vertices: “corner vertices”, which lie in
seven squares and “side vertices”, which lie in six squares. Therefore this
complex is uniformly locally conical. The link at any side vertex is isometric
to three semicircles glued at their endpoints with the induced geodesic
metric, which has no free faces and the complements of $\frac{\pi}{2}$-balls
are connected. The link at the corner vertices is topologically the same, but
geometrically consists of two segments of length $\frac{3\pi}{2}$ and one
segment of length $\frac{\pi}{2}$. If $u$ is the point at the center of the
latter segment then the complement of $B(u,\frac{\pi}{2})$ has two components.
Note also that the induced geodesic metric and the angle metric on the link do
not coincide at corner vertices; the angle metric is not geodesic. Moreover,
Moussong’s condition is not satisfied, although none of the links has a cut
point. The boundary of this space is the 2-adic solenoid, which is known to
topologists to not be pointed 1-connected, and was alternatively shown in [16]
to not be weakly chained. This shows that the condition about
$\frac{\pi}{2}$-balls in Theorem 2 cannot simply be removed. Also note that
the failure of Theorem 2 in this case is strictly for geometric reasons, not
combinatorial or topological ones. Finally, note that although there are only
finitely many isometry types of local cones in this example, it is not
cocompact. For example, only the first square has four “corner vertices”.
## 2 Cones and suspensions
We recall a couple of special cases of Berestovskii’s construction of metric
cones and suspensions and establish some basic results for which we have no
references. Let $S$ be a metric space with distance $\rho$ and $\pi$-truncated
metric denoted by $\alpha$. The Euclidean cone $C_{0}(S)$ consists of
$[0,\infty)\times S$ with all points of the form $(0,v)$ identified to a
single point called the apex. We will denote the equivalence class of the
point $(t,v)$ by $tv$, and the apex will be denoted by $0$ or $0v$ depending
on the situation. Note that with this notation $S$ is naturally identified
with the set of all $1v$ in $X$, which we will denote by $1S$. However
this identification is not generally an isometry and therefore we will
distinguish notationally between elements $1v$ in $X$ and $v$ in $S$. If
$u(s)$ is a curve in $S$ then the curve $tu(s)$ in $X$ will be denoted simply
by $tu$. $X$ is metrized analogously to how $\mathbb{R}^{2}$ is metrized as
the cone of the unit circle with angle as metric. That is, for $s,t>0$ and
$v,w\in S$, $d(tv,sw)^{2}=s^{2}+t^{2}-2st\cos\alpha(v,w)$ (cones $C_{k}X$ for
other curvatures $k$ use the corresponding cosine laws).
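As a check, taking $S$ to be the unit circle with the angle metric and writing points as angular coordinates $\phi_{v},\phi_{w}$ with $\left|\phi_{v}-\phi_{w}\right|\leq\pi$, the cone metric is exactly the planar distance in polar coordinates:

```latex
d(tv,sw)^{2}
  = s^{2}+t^{2}-2st\cos(\phi_{v}-\phi_{w})
  = \left|t e^{i\phi_{v}}-s e^{i\phi_{w}}\right|^{2},
```

which is the law of cosines for the triangle with vertices $0$, $te^{i\phi_{v}}$, and $se^{i\phi_{w}}$.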
Let $M_{k}^{2}$ denote the 2-dimensional space form of constant curvature $k$.
We are primarily interested in $k=0,1$, so $M_{0}^{2}$ is the plane and
$M_{1}^{2}$ is the sphere of curvature $1$. The former has diameter $\infty$
and the latter has (intrinsic!) diameter $\pi$. Recall that a CAT($k$) space
$X$ is a metric space such that if $d(x,y)$ is less than the diameter of
$M_{k}^{2}$ then $x,y$ are joined by a geodesic and if a geodesic triangle has
a comparison triangle in $M_{k}^{2}$ (no restriction for $k\leq 0$) then
Alexandrov’s comparisons for curvature $\leq k$ hold (see [4] for more
details). Berestovskii ([1]) showed in 1983 that $S$ is a CAT(1) space if and
only if $C_{0}S$ is a CAT(0) space (and more strongly the same is true for
$k$-cones for any $k\in\mathbb{R}$). We assume now that $S$ is a CAT(1) space
and review a few facts about geodesics in $C_{0}S$. Suppose that
$u:[0,K]\rightarrow S$ is an arclength parameterized geodesic from $v$ to $w$
in $S$ of length $K<\pi$. For simplicity we consider the constant map to be a
geodesic from $v$ to $v$. By definition of the metric, the function
$f_{\alpha}(s,t)=su(t)$ is an isometry from a Euclidean sector $E(v,w)$ of
angle $K$ in the plane parameterized with polar coordinates, to the set
$Z(v,w)=\\{su(t):0\leq s\leq\infty,0\leq t\leq K\\}$. Therefore the curves in
$Z(v,w)$ corresponding to line segments in the Euclidean sector are geodesics
in $Z$.
A geodesic in the case $v=w$ is called a radial geodesic, i.e. of the form
$\gamma_{w}(t)=tw$, $0\leq t<\infty$. Now suppose that $\gamma$ is a geodesic
from $tv$ to $sw$ that does not meet the apex. Since geodesics are unique, the
concatenation of the radial geodesics between the apex and $tv$ and $sw$
cannot be a geodesic and we conclude that $\alpha(v,w)<\pi$. This means that
$\gamma$ corresponds to a line in $E(v,w)$.
When $\alpha(v,w)=\pi$, for any $0<r_{1},r_{2}$, by definition
$d(r_{1}v,r_{2}w)=r_{1}+r_{2}$ and therefore the concatenation of
the radial geodesic from $r_{1}v$ to $0$ with the radial geodesic from $0$ to
$r_{2}w$ is a geodesic from $r_{1}v$ to $r_{2}w$. All of these geodesics in
$X$ are unique since $X$ is a CAT(0) space.
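The case $\alpha(v,w)=\pi$ can be read off directly from the cone cosine law:

```latex
d(r_{1}v,r_{2}w)^{2}
  = r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}\cos\pi
  = r_{1}^{2}+r_{2}^{2}+2r_{1}r_{2}
  = (r_{1}+r_{2})^{2},
```

so $d(r_{1}v,r_{2}w)=r_{1}+r_{2}$ is exactly the length of the concatenation of the two radial segments through the apex.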
###### Example 8
The Euclidean cone of an arbitrary metric space $S$ is “sink-free” in the
sense that it has no sinks ([16]). In fact one can move any pair of points
towards the apex, strictly decreasing the distance between them (and any non-
apex point may similarly move towards the apex). Such cones need not be
locally path connected (e.g. when $S$ is a Cantor set).
The spherical suspension $\Sigma S$ of a metric space $S$ is defined
analogously using the $\pi$-truncated metric, taking the product of the space
with $[0,\pi]$, identifying each $0\times S$ and $\pi\times S$ with points
$\overline{0}$ and $\overline{\pi}$, respectively. The space is metrized using
the spherical cosine law, i.e. as $S^{2}$ is metrized as the suspension of a
circle of length $2\pi$. For $u\in S$ and $\theta\in(0,\pi)$ we denote the
point corresponding to the ordered pair $(u,\theta)$ by $u_{\theta}$.
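Explicitly, writing $\alpha$ for the $\pi$-truncated metric on $S$, the suspension distance is given by the spherical law of cosines:

```latex
\cos d\left(u_{\theta},w_{\mu}\right)
  = \cos\theta\cos\mu+\sin\theta\sin\mu\,\cos\alpha(u,w),
  \qquad \theta,\mu\in[0,\pi],
```

which for $S$ a circle of length $2\pi$ recovers the round metric on $S^{2}$ in spherical coordinates.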
###### Lemma 9
If $S$ is a CAT(1) space then the space of directions at a point $t_{0}v\in
C_{0}S$ is isometric to $S$ when $t_{0}=0$. If $t_{0}>0$ then $S_{t_{0}v}$ is
isometric to the spherical suspension of $S_{v}$, and if $B(v,\rho)$ is a
local cone in $S$ then $B(t_{0}v,\rho)$ is a local cone in $C_{0}S$.
Sketch of Proof. The case when $t_{0}=0$ simply follows from the definition of
the cone metric. Suppose $t_{0}>0$ and let $\Gamma_{0}$ denote the segment of
the radial geodesic of $v$ outward from $t_{0}v$. Suppose that $\gamma$ is a
geodesic in $S$ starting at $v$. By definition of the cone metric, every
geodesic in $C_{0}S$ starting at $t_{0}v$ in the same sector as $\gamma$ is
uniquely determined by the angle $\theta$ between it and $\Gamma_{0}$; we will
denote any such geodesic by $\gamma_{\theta}$. This identifies $S_{t_{0}v}$
(as a set) with the spherical suspension of $S_{v}$, with $\overline{0}$
corresponding to the direction of $\Gamma_{0}$.
Suppose that $\gamma_{\theta_{1}}^{1}$ and $\gamma_{\theta_{2}}^{2}$ are
geodesics starting at $t_{0}v$, with directions $v_{1},v_{2}$, respectively at
$t_{0}v$. The cases when $\theta_{i}$ equal $0$ or $\pi$ are trivial; suppose
$0<\theta_{1},\theta_{2}<\pi$. Let $\overline{w_{i}}$ be unit vectors in the
$(x,y)$ plane with
$\left\langle\overline{w_{1}},\overline{w_{2}}\right\rangle=\cos\angle(\overline{w_{1}},\overline{w_{2}})=\cos\angle\left(\gamma^{1},\gamma^{2}\right)$.
Let $\overline{v_{i}}$ be unit vectors whose orthogonal projections onto the
$(x,y)$-plane are parallel to $\overline{w_{i}}$ and
$\angle(\overline{v_{i}},\overline{w_{i}})=\theta_{i}$. Writing
$\overline{v_{i}}=\overline{w_{i}}+(\overline{v_{i}}-\overline{w_{i}})$ and
cancelling orthogonal terms we have
$\left\langle\overline{v_{1}},\overline{v_{2}}\right\rangle=\left\langle\overline{w_{1}},\overline{w_{2}}\right\rangle+\left\langle\overline{v_{1}}-\overline{w_{1}},\overline{v_{2}}-\overline{w_{2}}\right\rangle=\left\langle\overline{w_{1}},\overline{w_{2}}\right\rangle\pm\left\|\overline{v_{1}}-\overline{w_{1}}\right\|\left\|\overline{v_{2}}-\overline{w_{2}}\right\|$
(1)
with the sign of the last term depending on whether $\theta_{i}$ are on the
same side of $\frac{\pi}{2}$ (in which case it is $+$). This shows that
$\cos\angle\left(\overline{v_{1}},\overline{v_{2}}\right)$, which is what we
need to show is equal to $\cos\alpha(\gamma_{\theta_{1}},\gamma_{\theta_{2}})$
may be calculated using only
$\left\langle\overline{w_{1}},\overline{w_{2}}\right\rangle$ and lengths of
vectors in the $z$-direction. Now
$\cos\angle(\gamma_{\theta_{1}}^{1},\gamma_{\theta_{2}}^{2})=\underset{t\rightarrow
0}{\lim}1-\frac{d(\gamma_{\theta_{1}}^{1}(t),\gamma_{\theta_{2}}^{2}(t))^{2}}{2t^{2}}\text{.}$
By definition of the cone metric, the right term may be computed from
$d(\gamma^{1}(t),\gamma^{2}(t))$ using the Euclidean Formula 1, and
$\underset{t\rightarrow
0}{\lim}1-\frac{d(\gamma^{1}(t),\gamma^{2}(t))^{2}}{2t^{2}}=\cos\angle(\gamma^{1},\gamma^{2})=\left\langle\overline{w_{1}},\overline{w_{2}}\right\rangle\text{.}$
This shows the first part of the lemma. For the second part note that if
$B(v,\rho)$ is a local cone in $S$ then for all $0<t\leq\rho$,
$d(\gamma^{1}(t),\gamma^{2}(t))^{2}=2t^{2}(1-\cos\angle(\gamma^{1},\gamma^{2}))\text{.}$
Therefore the above limit is constant and we see that
$d(\gamma_{\theta_{1}}^{1}(t),\gamma_{\theta_{2}}^{2}(t))^{2}=2t^{2}(1-\cos\angle(\gamma_{\theta_{1}}^{1},\gamma_{\theta_{2}}^{2}))\text{,}$
i.e. $B(t_{0}v,\rho)$ is a local cone in $C_{0}S$.
###### Lemma 10
(Radial Geodesics Don’t Bifurcate) Let $S$ be a CAT(1) space. If $\gamma$ is a
geodesic in $C_{0}S$ that intersects a radial geodesic $\beta$ in more than
one point then $\gamma$ is a (possibly infinite) segment of $\beta$.
Proof. Let $\beta(t)=tu$ for some $u\in S$. By assumption, $\gamma$ contains
two points $t_{1}u,t_{2}u$ with $0\leq t_{1}<t_{2}$. By uniqueness,
$\gamma=\beta$ between those two points. Now suppose that another point $sw$
lies on $\gamma$ with $w\neq u$ and $s>t_{2}$. Then the segment of $\gamma$
from $t_{1}u$ to $sw$ lies in $Z(u,w)$ and the segment of $\gamma$ from
$t_{1}u$ to $t_{2}u$ also lies in $Z(u,w)$. But this means that one segment of
$\gamma$ in $Z(u,w)$ is radial and another segment is not, which is impossible
in Euclidean geometry. The proof for $s<t_{1}$ is similar, showing that
$\gamma(t)=tu$ wherever it is defined.
###### Remark 11
One known consequence of the above lemma is that there is the continuous
“radial retraction” from any $B(0,\rho)\backslash\\{0\\}$ onto the sphere
$\Sigma_{0}(\rho)$, which just takes every $tv$ with $0<t\leq\rho$ to
$\rho\cdot v$.
###### Definition 12
Let $X$ be a metric space. A curve $c$ in $X$ is called a local geodesic if
the restriction of $c$ to any sufficiently small closed interval is a geodesic
(so in a Riemannian manifold this would simply be what is normally referred to
as a “geodesic”). $X$ is called geodesically complete if for every non-
constant geodesic $\gamma:[a,b]\rightarrow X$ there is some $\varepsilon>0$
such that $\gamma$ extends to a curve $\gamma^{e}:[a,b+\varepsilon]\rightarrow
X$ such that the restriction of $\gamma^{e}$ to
$[b-\varepsilon,b+\varepsilon]$ is a geodesic.
Note that if $X$ is a complete metric space then $X$ is geodesically complete
if and only if every local geodesic extends to a local geodesic defined on all
of $\mathbb{R}$.
###### Lemma 13
If $S$ is a complete CAT(1) space with at least two points then the following
are equivalent:
1. 1.
$X=C_{0}(S)$ is geodesically complete.
2. 2.
$S$ is geodesically complete.
3. 3.
Every non-trivial geodesic in $S$ extends to a geodesic of length at least
$\pi$.
Proof. Since there is a 1-1 correspondence between non-radial geodesics in $X$
and geodesics in $S$, if $X$ is geodesically complete then so is $S$. Suppose
$S$ is geodesically complete, $u\in S$ and $\gamma$ is a non-trivial geodesic
in $S$ starting at $u$. Then $\gamma$ extends as a local geodesic of length
$\pi$ to a point $v$. We claim that $d(u,v)=\pi$, which means that this
extension is in fact a geodesic, completing the proof of the third part. We
assume that $\gamma:[0,\pi]\rightarrow X$ is parameterized by arclength.
Consider the following statement $S(t):d(u,\gamma(t))=t$. Since $\gamma$ is a
local geodesic, $S(t)$ is true for small positive $t$. By continuity of the
distance function, if $S(s)$ is true for all $s\leq t$ then $S(t)$ is true.
Therefore we need only show that if $S(t)$ is true for some $t<\pi$ then
$S(t+\varepsilon)$ is true for some $\varepsilon>0$. Since $\gamma$ is a local
geodesic, there is some $\varepsilon>0$ such that the restrictions of $\gamma$
to $[t,t+\varepsilon]$ and $[t-\varepsilon,t+\varepsilon]$ are geodesics. The
first of these statements implies that the angle between the reversal of
$\gamma$ starting at $\gamma(t)$ and the restriction of $\gamma$ starting at
$\gamma(t)$ is $\pi$. Since both segments are bona fide geodesics we may apply
the CAT(1) condition to conclude that
$d(u,\gamma(t+\varepsilon))=t+\varepsilon$, completing the proof of
$2\Rightarrow 3$.
If the third part is true then by the 1-1 correspondence mentioned above,
every non-radial geodesic in $X$ extends to a geodesic defined on
$\mathbb{R}$, and all radial geodesics by definition extend outwards from the
apex. The only remaining question is whether the reversal of any radial geodesic
$\gamma_{u}$ extends through the apex. Since $S$ has at least two points there
is some $v\neq u$ in $S$. If $\alpha(u,v)=\pi$ then
$\angle(\gamma_{u},\gamma_{v})=\pi$ and $\gamma_{v}$ extends
$\overline{\gamma_{u}}$ as a geodesic beyond the apex. If $\alpha(u,v)<\pi$
then $u$ and $v$ are joined by a geodesic in $S$, which extends to a geodesic
from $u$ to some point $v^{\prime}$ with $\alpha(u,v^{\prime})=\pi$, reducing to the previous case and completing the proof.
###### Corollary 14
If $S$ is a geodesically complete CAT(1) space then the spherical suspension
$\Sigma S$ of $S$ is geodesically complete and the complement of every
$\frac{\pi}{2}$-ball in $\Sigma S$ is (path) connected.
Proof. That $\Sigma S$ is geodesically complete follows from Lemma 13 and the
definition of the spherical suspension metric. Let $v_{\theta}\in\Sigma S$;
without loss of generality suppose $\theta\leq\frac{\pi}{2}$. We will show
that if $w_{\mu}\neq\overline{\pi}$ with
$d(w_{\mu},v_{\theta})\geq\frac{\pi}{2}$ then there is a path from $w_{\mu}$
to $\overline{\pi}$ that stays outside $B(v_{\theta},\frac{\pi}{2})$. If $v=w$
then by definition of the spherical suspension metric there is a uniquely
determined isometric circle determined by $v=w$ and we may move in either
direction from $w_{\mu}$ (depending on whether $\theta>\mu$) to
$\overline{\pi}$, staying outside $B(v_{\theta},\frac{\pi}{2})$. If $w\neq v$
then the geodesic from $v$ to $w$ extends to length $\pi$. Therefore we may
move from $w_{\mu}$ away from $v_{\theta}$ along the corresponding geodesic to
the “antipodal point” of $v_{\theta}$ and proceed as in the first step.
## 3 Mk-polyhedral complexes
Recall that a free face in an $M_{k}$-polyhedral complex is a face that lies
in exactly one cell of higher dimension. The proof of the next lemma involves
Proposition II.5.10, [4] (and uses some similar arguments), which states that
an $M_{k}$-polyhedral complex with curvature bounded above and finite shapes
is geodesically complete if and only if it has no free faces. This statement
is not quite correct according to the traditional definitions because discrete
complexes are geodesically complete (there are no non-trivial local
geodesics), and even if one considers the empty set as a face of dimension
$-1$, strictly speaking it is free if and only if the complex consists of
exactly one vertex. And for example discrete spherical complexes of curvature
$\leq 1$ occur as the space of directions in 1-dimensional Euclidean
complexes. The proof of Proposition II.5.10 is by induction on dimension
starting with $n=0$, which is precisely when the statement is not true. But
this minor issue is easily solved by starting with $n=1$ and handling the
discrete case as a special case.
###### Lemma 15
If $K$ is an $M_{k}$-polyhedral complex with non-positive curvature and
Shapes($K$) finite then $K$ is geodesically complete if and only if the link
at each vertex has at least two vertices and is either discrete or has no free
faces.
Proof. Since $K$ has non-positive curvature, the space of directions at each
point, hence the link at each vertex $v$ is a CAT(1) space. Suppose $K$ is not
geodesically complete. Since $K$ has non-positive curvature it is connected
and hence this is equivalent to having a free face $F$. If $F$ is a vertex
then at that vertex the link is a single point. If $F$ is higher dimensional
then let $v$ be any vertex of $F$ and $E_{1}$ be an edge containing $v$, not
contained in $F$ but contained in the unique higher dimensional cell containing
$F$. Let $E_{2}$ be an edge in $F$ containing $v$. Then the geodesic in
$Lk(v,K)$ from the directions $u_{1}$, $u_{2}$ corresponding to $E_{1}$,
$E_{2}$, respectively, cannot be extended beyond $u_{2}$. That is, $Lk(v,K)$
is not geodesically complete, and since it is not discrete it has a free face.
Conversely, if $Lk(v,K)$ is a single vertex for some vertex $v$ in $K$ then
there is an edge in that direction. Moreover, there is no edge having angle
$\pi$ with that edge, so the edge cannot be extended as a geodesic past $v$.
That is, $K$ is not geodesically complete. Finally, suppose there is some
vertex $v$ such that $Lk(v,K)$ is not discrete and has a free face. Since
$Lk(v,K)$ is not discrete, it is not geodesically complete. But some
$B(v,\varepsilon)$ is isometric to $B(0,\varepsilon)$ in $C_{0}Lk(v,K)$ and
the proof is finished by Lemma 13.
###### Example 16
Let $S$ be a complete $\pi$-geodesic space of diameter $\pi$ (e.g. a CAT(1)
space with $\pi$-truncated metric $\alpha$). Then $S$ is a length space if and
only if $S$ is sink-free. Necessity follows from Example 28 in [16]. For the
converse, we need only consider the case $d(x,y)=\pi$. If $(x,y)$ is not a
sink, there exist points $x^{\prime},y^{\prime}$ arbitrarily close to $x,y$
such that $d(x^{\prime},y^{\prime})<\pi$ and since $\alpha$ is a
$\pi$-geodesic metric, $x^{\prime},y^{\prime}$ are joined by a geodesic. Then
a midpoint between $x^{\prime},y^{\prime}$ is an “almost midpoint” for $x,y$,
which is classically known to be sufficient to show that $S$ is a length space
(cf. [15], Proposition 7 for an exposition). In particular, if $S$ is compact
then $S$ is geodesic if and only if it is sink-free.
###### Proposition 17
Suppose $X$ is a geodesically complete CAT(0) space, $x_{0}\in X$, $r>0$, and
$(x,y)$ is a sink in $\Sigma_{x_{0}}(r)$. In addition, suppose that $x,y$ lie
in a local cone with apex $o$ and cone radius $\rho>0$. Then
1. 1.
the geodesics $\gamma_{x},\gamma_{y}$ coincide up to $o$,
2. 2.
$d(x,o)=d(y,o)=\frac{d(x,y)}{2}$ and
3. 3.
$\rho\geq\frac{d(x,y)}{2}$.
Proof. Let $\delta:=d(x,y)$, $\beta=\gamma_{xy}$ and $\gamma^{x},\gamma^{y}$
be geodesics starting at $x,y$, respectively, that extend
$\gamma_{x},\gamma_{y}$ to geodesic rays. Suppose first that
$\angle(\gamma^{x},\beta)<\pi$. Since $S_{x}$ is $\pi$-geodesic, there is a
geodesic $\xi$ starting at $x$ such that
$\angle(\gamma^{x},\xi),\angle(\xi,\beta)<\frac{\pi}{2}$. By the triangle
inequality in $S_{x}$, $\angle(\gamma_{x},\xi)>\frac{\pi}{2}$ and so by the
CAT(0) condition all points on $\xi$ lie strictly outside $\Sigma_{x_{0}}(r)$.
Since $\angle\left(\xi,\beta\right)<\frac{\pi}{2}$, by the “single-sided
limit” method to measure angles (c.f. Proposition 3.5 in [4]), any point
$x^{\prime}$ sufficiently close to $x$ on $\xi$ satisfies
$d(x^{\prime},y)<d(x,y)$. In the plane, consider the comparison triangle with
corners $X_{0},X^{\prime},Y$ corresponding to the one determined by
$x_{0},x^{\prime},y$. Since
$d(X_{0},X^{\prime})=d(x_{0},x^{\prime})>r=d(x_{0},y)=d(X_{0},Y)\text{,}$
by elementary geometry, if $Z$ is the point on the segment $X_{0}X^{\prime}$
with $d(X_{0},Z)=r$, $d(Z,Y)<d(X^{\prime},Y)$. By the CAT(0) condition, if $z$
is the projection of $x^{\prime}$ onto $\Sigma_{x_{0}}(r)$ then $d(z,y)\leq
d(Z,Y)<d(X^{\prime},Y)=d(x^{\prime},y)$. That is, $z$ is arbitrarily close to
$x$ but $d(z,y)<d(x,y)$. By definition, $(x,y)$ is not a sink in
$\Sigma_{x_{0}}(r)$, a contradiction.
Therefore we may assume that
$\angle(\gamma^{x},\beta)=\angle(\gamma^{y},\overline{\beta})=\pi$ (2)
and by Equation 2 $\gamma:=\overline{\gamma^{x}}\ast\beta\ast\gamma^{y}$ is a
geodesic. Assume first that $o$ does not lie on $\gamma$ and consider the
Euclidean sector $E$ determined by $\gamma$ in the local cone. At $x$ and $y$
let $\beta_{x}$ and $\beta_{y}$ be geodesics corresponding to lines in $E$
that are perpendicular to the line corresponding to $\gamma$ with the same
orientation. That is, moving an arbitrarily small but equal amount along
$\beta_{x}$ and $\beta_{y}$ to points $x^{\prime},y^{\prime}$ we have
$d(x^{\prime},y^{\prime})=d(x,y)$. Since
$\angle\left(\gamma^{x},\overline{\gamma_{x}}\right)=\pi$ and
$\angle\left(\gamma^{x},\beta_{x}\right)=\frac{\pi}{2}$, by the triangle
inequality
$\angle\left(\beta_{x},\overline{\gamma_{x}}\right)\geq\frac{\pi}{2}$, and
similarly,
$\angle\left(\beta_{y},\overline{\gamma_{y}}\right)\geq\frac{\pi}{2}$. By the
CAT(0) condition, $d(x_{0},x^{\prime}),d(x_{0},y^{\prime})>r$. Projecting
$x^{\prime},y^{\prime}$ onto $\Sigma_{x_{0}}(r)$ strictly reduces
$d(x^{\prime},y^{\prime})=d(x,y)$, showing $(x,y)$ is not a sink in
$\Sigma_{x_{0}}(r)$, a contradiction.
Therefore $o$ must lie on $\gamma$, hence on $\gamma^{x}$, $\beta$, or
$\gamma^{y}$. Suppose $o$ lies on $\gamma^{x}$ and $o\neq x$. Then
$\gamma_{ox}\ast\beta$ is radial and $\gamma_{ox}\ast\overline{\gamma_{x}}$ is
radial as long as it lies in the local cone at $o$. Since $y$ is also in the
local cone, $\gamma_{ox}\ast\overline{\gamma_{x}}$ lies in the local cone for
at least length $d(x,y)$ beyond $x$. By the triangle inequality, the midpoint
$m$ of $\beta$ satisfies $d(x_{0},m)\geq r-\frac{d(x,y)}{2}$. Since
$\overline{\gamma_{x}}$ is a geodesic to $x_{0}$ and $d(x_{0},y)=r$,
$\overline{\gamma_{x}}$ and $\beta$ cannot coincide for the entire length of
$\beta$. By definition this means that the radial geodesic
$\gamma_{ox}\ast\beta$ bifurcates inside the local cone, a contradiction to
Lemma 10.
Suppose that $o=x$. Then $\beta\ast\gamma^{y}$ is a radial geodesic that
coincides with $\gamma_{y}\ast\gamma^{y}$ on $\gamma^{y}$. But the former does
not pass through $x$ and hence the radial geodesic $\gamma_{y}\ast\gamma^{y}$
bifurcates somewhere along $\beta$, which is in the local cone, a
contradiction. Similarly $o$ does not lie on $\gamma^{y}$.
Next suppose that $o$ lies on $\beta$, so $\gamma_{ox}$ is radial. By a
similar argument to what we have used above, the fact that radial geodesics do
not bifurcate implies that $\gamma_{x}$ must coincide with $\gamma_{ox}$ from
$o$ to $x$. By symmetry, $\gamma_{y}$ must coincide with $\gamma_{oy}$ from
$o$ to $y$ and $o=m$. Since $\beta$ is a geodesic,
$d(x,o)=d(y,o)=\frac{d(x,y)}{2}$. Since $x$ and $y$ lie in the local cone and
$o$ is the apex, $\rho\geq\frac{d(x,y)}{2}$.
The next lemma is probably known but we do not have a reference. It is useful
because it allows us to replace a given cover by metric balls with a cover by
metric balls of uniformly large size.
###### Lemma 18
Let $\mathcal{C}=\\{B(x_{\alpha},r_{\alpha})\\}_{\alpha\in\Lambda}$ be a
covering of a geodesic space $X$ by metric balls. If $\mathcal{C}$ has a
Lebesgue number $\lambda>0$ then there is a subcovering
$\mathcal{C}^{\prime}=\\{B(x_{\alpha},r_{\alpha})\\}_{\alpha\in\Lambda^{\prime}}$
of $X$ with the following properties:
1. 1.
For all $\alpha\in\Lambda^{\prime}$, $r_{\alpha}\geq\frac{\lambda}{4}$.
2. 2.
$\mathcal{C}^{\prime}$ has Lebesgue number $\frac{\lambda}{2}$.
Proof. Without loss of generality, if $X$ has finite diameter then we can
assume that $\lambda$ is less than the diameter $D$ of $X$; if $X$ is
unbounded let $D>\lambda$ be arbitrary. In either case, since $X$ is a
geodesic space, there exist $a,b$ such that $d(a,b)=D$. Then for any $x\in X$
$d(x,a)\geq\frac{D}{2}$ or $d(x,b)\geq\frac{D}{2}$. Moving along the geodesic
$\gamma_{xa}$ or $\gamma_{xb}$ there is some point $y\in X$ such that
$d(x,y)=\frac{\lambda}{2}<\lambda$. Therefore there is some
$B(x_{\alpha},r_{\alpha})$ containing both $x$ and $y$. But since
$d(x,y)=\frac{\lambda}{2}$, by the triangle inequality, $\frac{\lambda}{2}\leq
2r_{\alpha}$, showing that the balls of radius $\geq\frac{\lambda}{4}$ in
$\mathcal{C}$ cover $X$. Now suppose that $d(x,y)<\frac{\lambda}{2}$. If $z$
is the midpoint of a geodesic from $x$ to $y$, by what we have just shown
there is some $B(x_{\alpha},r_{\alpha})$ containing $z$ with
$r_{\alpha}\geq\frac{\lambda}{4}$, and by the triangle inequality this ball must contain both $x$ and
$y$.
Proof of Theorem 2. Let $\rho$ be a Lebesgue number for a uniform cone cover
of $X$. By Lemma 18 we can assume that the cone radii for the local cones are
all at least $\frac{\rho}{2}$. For every $t>0$ let
$\iota(t):=2\left(t-\sqrt{t^{2}-\frac{\rho^{2}}{4}}\right)>0$
and assume that $t$ is large enough that $\iota(t)<\rho$. Since $\iota$ is
positive and continuous we need only verify the condition of Theorem 2 for such $t$. Suppose
that $d(x,y)<\iota(t)$ and $(x,y)$ is a sink in $\Sigma_{x_{0}}(t)$. Since
$d(x,y)<\rho$, $x,y$ lie in a local cone with vertex $o$.
Since $(x,y)$ is a sink, by Proposition 17 $\gamma_{x},\gamma_{y}$ coincide up
to $o$ and $d(o,x)=d(o,y)=\frac{d(x,y)}{2}<t-\sqrt{t^{2}-\frac{\rho^{2}}{4}}$.
We also have
$d(x_{0},o)=t-\frac{d(x,y)}{2}>t-\frac{\iota(t)}{2}=\sqrt{t^{2}-\frac{\rho^{2}}{4}}\text{.}$ (3)
Moreover, in $S_{o}$,
$\alpha(\gamma_{ox}^{\prime},\overline{\gamma_{o}}^{\prime})=\alpha(\gamma_{oy}^{\prime},\overline{\gamma_{o}}^{\prime})=\pi\geq\frac{\pi}{2}$.
By assumption there is a curve $c$ in $S_{o}$ from $\gamma_{ox}^{\prime}$ to
$\gamma_{oy}^{\prime}$ such that for all $q$,
$\alpha(\overline{\gamma_{o}}^{\prime},c(q))\geq\frac{\pi}{2}$. Since radial
geodesics do not bifurcate, and the cone radius is at least $\frac{\rho}{2}$, there is
a point $x^{\prime}$ on the unique (inside the cone) extension of
$\gamma_{ox}$ of distance $\frac{\rho}{2}$ from $o$, and an analogous point
$y^{\prime}$. Now the curve $\frac{\rho}{2}\cdot c$ is defined from
$x^{\prime}$ to $y^{\prime}$. By the CAT(0) inequality, for every $s$, by
Equation (3)
$d\left(\frac{\rho}{2}\cdot c(s),x_{0}\right)^{2}\geq
d(x_{0},o)^{2}+\frac{\rho^{2}}{4}\geq t^{2}\text{.}$
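Here the definition of $\iota$ enters through the elementary identity

```latex
\left(t-\frac{\iota(t)}{2}\right)^{2}+\frac{\rho^{2}}{4}
  = \left(t^{2}-\frac{\rho^{2}}{4}\right)+\frac{\rho^{2}}{4}
  = t^{2},
```

combined with the estimate of $d(x_{0},o)$ in Equation (3).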
Therefore the projection $\widetilde{c}$ of $\frac{\rho}{2}\cdot c$ onto
$\Sigma_{x_{0}}(t)$ is defined and joins $x$ and $y$. Moreover, since
$\widetilde{c}$ remains inside $B(o,\frac{\rho}{2})$ by the triangle
inequality it remains inside $B(x,\rho)\cap B(y,\rho)$ and we may take
$K=\rho$ to finish the proof of the first part of the theorem.
Suppose that $S_{o}$ with the angle metric is a geodesic space. Suppose that
$a,b,c\in S_{o}$ with $a,b\notin B(c,\frac{\pi}{2})$. Since $X$ is
geodesically complete and locally conical, Lemma 13 implies that $S_{o}$ is
also geodesically complete. By the same lemma, the geodesics $\gamma_{ca}$ and
$\gamma_{cb}$ extend to geodesics of length $\pi$ to points
$a^{\prime},b^{\prime}$, respectively. Since
$d(a^{\prime},b^{\prime})\leq\pi$, any geodesic from $a^{\prime}$ to
$b^{\prime}$ must remain outside $B(c,\frac{\pi}{2})$ by the triangle
inequality. The extensions of $\gamma_{ca}$ and $\gamma_{cb}$ also remain
outside $B(c,\frac{\pi}{2})$ and therefore there is a path from $a$ to
$a^{\prime}$ then $a^{\prime}$ to $b^{\prime}$ then $b^{\prime}$ to $b$ that
stays outside $B(c,\frac{\pi}{2})$.
Now suppose $S_{o}$ has no cut points and is locally conical with all cone
radii at least $\frac{\pi}{2}$. Suppose $a,b,c$ are as in the previous
paragraph. Since $c$ is not a cut point, the complement of $c$ is connected,
hence path connected. That is, there is a curve from $a$ to $b$ that misses
$c$. Now any segment of this curve that enters $B(c,\frac{\pi}{2})$ can be homotoped
onto $\Sigma_{c}(\frac{\pi}{2})$ using the radial retraction (Remark 11),
resulting in a curve from $a$ to $b$ that stays outside $B(c,\frac{\pi}{2})$.
###### Acknowledgement 19
In connection with this paper I had useful conversations with Ross Geoghegan,
Mike Mihalik, and Kim Ruane.
## References
* [1] Berestovskiĭ, V.; Borsuk’s problem on metrization of a polyhedron, Dokl. Akad. Nauk. SSSR 27 (1983), no. 1, 56-59.
* [2] Bestvina, Mladen and Mess, Geoffrey; The boundary of negatively curved groups. J. Amer. Math. Soc. 4 (1991), no. 3, 469–481.
* [3] Bowditch, B. H. Connectedness properties of limit sets. Trans. Amer. Math. Soc. 351 (1999), no. 9, 3673–3686.
* [4] Bridson, Martin and Haefliger, André, Metric Spaces of Non-positive curvature, Grundlehren der Mathematischen Wissenschaften, 319, Springer-Verlag, Berlin, 1999.
* [5] Conner, G.; Mihalik, M.; Tschantz, S. Homotopy of ends and boundaries of CAT(0) groups. Geom. Dedicata 120 (2006), 1–17.
* [6] Croke, C; Kleiner, B. Spaces with nonpositive curvature and their ideal boundaries. Topology 39 (2000), no. 3, 549–556.
* [7] Davis, M. Buildings are CAT(0), Geometry and Cohomology in Group Theory, LMS Lecture Note Series 252, Cambridge U. Press, Cambridge 1988.
* [8] Davis, M. The Geometry and Topology of Coxeter Groups, Princeton University Press, 2007.
* [9] Geoghegan, R.; Swenson, E., On semistability of CAT(0) groups. Groups Geom. Dyn. 13 (2019), no. 2, 695–705.
* [10] Geoghegan, R., Topological methods in group theory, Graduate Texts in Mathematics, vol. 243, Springer, New York, 2008.
* [11] Geoghegan, R.; Ontaneda, P. Boundaries of cocompact proper CAT(0) spaces. Topology 46 (2007), no. 2, 129–137.
* [12] Mihalik, M. Semistability at the end of a group extension. Trans. Amer. Math. Soc. 277 (1983), no. 1, 307–321.
* [13] Mihalik, M. Semistability of Artin and Coxeter groups. J. Pure Appl. Algebra 111 (1996), no. 1-3, 205–211.
* [14] Moussong, G. PhD thesis, Ohio State University 1988.
* [15] Plaut, C. Metric spaces of curvature $\geq k$. Handbook of geometric topology, 819–898, North-Holland, Amsterdam, 2002.
* [16] Plaut, C. Weakly chained spaces, preprint.
* [17] Swarup, G. A. On the cut point conjecture. Electron. Res. Announc. Amer. Math. Soc. 2 (1996), no. 2, 98–100.
* [18] Swenson, E. A cut point theorem for CAT(0) groups. J. Differential Geom. 53 (1999), no. 2, 327–358.
# Exploding paraxial beams, vortex beams, and cylindrical beams of light with
finite power in linear media, and their enhanced longitudinal field
Miguel A. Porras Grupo de Sistemas Complejos, ETSIME, Universidad Politécnica
de Madrid, Rios Rosas 21, 28003 Madrid, Spain
###### Abstract
We present a set of paraxial light beams with cylindrical symmetry, smooth and
localized transversal profile carrying finite power, that develop intensity
singularities when they are focused in a linear medium, such as vacuum. They
include beams with orbital angular momentum and with radial polarization, in
which case they develop point-like phase and polarization singularities
surrounded by infinitely bright rings, along with singular longitudinal
fields. In practice, these effects are manifested in focal intensities and
spot sizes, vortex bright ring intensities and radii, and strengths of the
longitudinal field, that strongly change with the lens aperture radius.
Continuous control of these focal properties is thus exercised without
changing the light incident on the lens, with substantially the same collected
power, and while maintaining paraxial focusing conditions. As solutions of the
Schrödinger equation, these exploding beams have analogues in other areas of
physics where this equation is the fundamental dynamical model.
## I Introduction
Inspired by what happens to some wave functions in quantum mechanics PERES ,
Aiello has recently introduced a class of paraxial light beams with a localized
transversal profile and finite power that develop a singularity when they are
ideally focused AIELLO . Real, apertured versions of these beams have
subsequently been demonstrated in experiments AIELLO2 . These beams, even if
apertured, outperform standard Gaussian beams of similar intensity and power
in terms of focal intensity and resolution. In a sense, these beams are able
to reproduce with a finite amount of power the focusing behavior of plane
waves, though their finite-aperture realizations do not exceed the diffraction
limit AIELLO . The electric field of these beams is factorized in the
coordinates $x$ and $y$ in the transversal plane, i.e., it has a rectangular
geometry, which greatly facilitates its analytical treatment.
In many experimental settings, however, cylindrical symmetry is advisable, if
not mandatory; in particular, lenses, whose aperture plays a crucial role in
the behavior of these beams, are most often circular. In this paper we
describe cylindrically symmetric beams with finite power and a similar exploding
behavior in their intensity when they are focused. Considering beams with
cylindrical geometry, we can also examine other types of beams that are of
great interest today. We describe exploding beams with orbital angular
momentum, and exploding radially polarized beams, both with finite power. They
form a punctual dark vortex surrounded by an infinitely bright ring,
accompanied by an infinitely strong longitudinal component.
Their apertured versions approach the above ideal behavior as the lens
aperture is increased, which offers a practical way to control these
properties without changing the illuminating beam, including the collected
power, e. g., to enhance the amplitude of the axial component up to $66$
percent of that of the transverse component under paraxial conditions. These
exploding beams, vortex beams and radially polarized beams with large but
paraxial apertures greatly outclass the performance of standard illuminating
fields such as Gaussian or Laguerre-Gaussian beams of similar power and
intensity. Like their Cartesian counterparts, exploding beams of finite aperture
do not beat the diffraction limit for plane wave illumination, but approach it
closely. The practical advantages of exploding versus uniform illumination for
controlling the properties of the focused light are discussed at the end of
this paper.
Of course, the exploding behavior of these light beams has nothing to do with
blow-up light beams in self-focusing nonlinear media KIVSHAR ; FIBICH . Yet,
they are somewhat similar in that they both appear in paraxial fields, are
dissolved when more precise models are considered, and their mathematical
existence has physical repercussions. The results presented here can be
straightforwardly extended to waves in other areas of physics such as
probability waves in quantum mechanics, or mechanical waves in acoustics, and
to matter waves and electron beams.
## II Ideal exploding beams and vortex beams
In the Fresnel regime, the focused electric field $\psi(x,y,z)e^{i(kz-\omega
t)}$ of frequency $\omega$ and propagation constant $k$ (e. g., $k=\omega/c$
in vacuum), and the field $\psi(x,y)e^{-i\omega t}$ at the input plane of a
thin lens of focal length $f$, are related by
$\psi(x,y,z)=\frac{k}{2\pi iz}\int\!\!dx^{\prime}dy^{\prime}\,\psi(x^{\prime},y^{\prime},0)\,e^{\frac{ik}{2z}[(x-x^{\prime})^{2}+(y-y^{\prime})^{2}]}\,,$ (1)
where $\psi(x,y,0)=\psi(x,y)e^{-ik(x^{2}+y^{2})/2f}$, and the integral extends
over the aperture of the lens. We are interested in an illuminating field of
the form $\psi(x,y)=\psi(r)e^{is\varphi}$, where $(r,\varphi)$ are polar
coordinates in the transversal plane, and $s=0,\pm 1,\pm 2,\dots$, i. e., in
light beams with revolution symmetry about the propagation axis, without and
with orbital angular momentum associated with a vortex of topological charge
$s$. For these fields, the Fresnel integral in (1) can more conveniently be
written as
$\psi(r,z)e^{is\varphi}=\frac{ke^{is\varphi}}{i^{|s|+1}z}e^{\frac{ikr^{2}}{2z}}\int_{0}^{R}dr^{\prime}\,r^{\prime}\,\psi(r^{\prime})\,e^{\frac{-ikr^{\prime 2}}{2}\left(\frac{1}{f}-\frac{1}{z}\right)}J_{|s|}\left(\frac{k}{z}rr^{\prime}\right)\,,$ (2)
where $J_{n}(\cdot)$ is the Bessel function of the first kind and order $n$
GRADS , and $R$ is the radius of the lens aperture.
We consider the illuminating, collimated field with transversal distribution
of amplitude
$\psi(r)=\sqrt{\frac{P}{A}}\frac{(r/\sigma)^{|s|}}{(1+r^{2}/\sigma^{2})^{\mu+1}}\,,$
(3)
where the length $\sigma$ scales the field transversally, and $P$ and $A$ are
constants to be conveniently fixed. With the real number $\mu$ satisfying
$\mu>(|s|-1)/2$, the power of the illuminating field in the entire transversal
plane, $2\pi\int_{0}^{\infty}drr|\psi(r)|^{2}$, is finite, and therefore this
field is in principle physically realizable. With the choice
$A=\pi\sigma^{2}\Gamma(|s|+1)\Gamma(2\mu+1-|s|)/\Gamma(2\mu+2)$, where
$\Gamma(\cdot)$ is the Gamma function, the beam power is just the constant $P$
appearing in (3). Examples of (3) with $s=0$ and $|s|=1$ are depicted as solid
curves in Figs. 1(a) and 2(a).
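As an illustrative numerical check (ours, not part of the paper; it assumes NumPy and SciPy), the normalization constant $A$ can be verified to make the total power of the illuminating field (3) equal to $P$, here for the $s=0$, $\mu=-0.25$ beam of Fig. 1:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

P, sigma = 10.0, 1.0   # beam power (W) and transversal scale (mm), as in Fig. 1
s, mu = 0, -0.25       # vortex-less exploding beam; note mu > (|s| - 1)/2 = -1/2

# Normalization constant A quoted in the text, chosen so the beam power equals P
A = np.pi * sigma**2 * gamma(abs(s) + 1) * gamma(2*mu + 1 - abs(s)) / gamma(2*mu + 2)

def psi(r):
    """Illuminating amplitude of Eq. (3)."""
    return np.sqrt(P / A) * (r / sigma)**abs(s) / (1 + r**2 / sigma**2)**(mu + 1)

# Power over the whole transversal plane: 2*pi * int_0^inf r |psi(r)|^2 dr
power, _ = quad(lambda r: 2 * np.pi * r * psi(r)**2, 0, np.inf)
assert abs(power - P) < 1e-5 * P
```

For these particular parameters $A$ reduces to $2\pi\sigma^{2}$, so the same check can also be done by hand.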
At the focal plane, $z=f$, the integral in (2) with (3) can be carried out
analytically (see 6.565.4 in Ref. GRADS ) when neglecting the finiteness of
the lens aperture, i. e., with $R=\infty$:
$\psi(r,f)e^{is\varphi}=\sqrt{\frac{P}{A}}\frac{k}{i^{|s|+1}f}\frac{\sigma^{2}}{2^{\mu}\Gamma(\mu+1)}\,e^{\frac{ikr^{2}}{2f}}\left(\frac{k}{f}\sigma r\right)^{\mu}K_{|s|-\mu}\left(\frac{k}{f}\sigma r\right)e^{is\varphi}\,,$ (4)
for $r>0$, where $K_{\nu}(\cdot)$ is the modified Bessel function of the
second kind and order $\nu$ GRADS . Using that $K_{-\nu}(\alpha)=K_{\nu}(\alpha)$, and
the asymptotic behavior
$K_{\nu}(\alpha)\simeq(1/2)\Gamma(\nu)(\alpha/2)^{-\nu}$ for $\nu>0$ for small
argument GRADS , (4) is seen to approach infinity for $r\rightarrow 0$ when
$\mu<|s|/2$. For these values of $\mu$, the vortex-less beam ($s=0$) reaches
an infinitely large amplitude at the beam center $r=0$, while the vortex beam
($s\neq 0$) vanishes at $r=0$, since $J_{|s|}(0)=0$ for $s\neq 0$, i. e., it has
a punctual vortex surrounded by an infinitely bright ring. In short, the
unapertured focused field of illumination in (3) carries finite power and
produces an infinitely intense field approaching the focal point if
$\frac{|s|-1}{2}<\mu<\frac{|s|}{2}\,,$ (5)
and this unbounded field surrounds an infinitely narrow, dark vortex when
$s\neq 0$. The singularity originates from the small but persistent tails of
(3) at large radius that are superposed with uniform phases at the focal
point, whose contribution to the beam power is nevertheless finite and small.
We point out that the family of light beams with amplitude
$\psi(r)e^{is\varphi}e^{-ir^{2}/2f}$ in (3) at $z=0$ and the singular field
$\psi(r,f)e^{is\varphi}$ in (4) at $z=f$ are solutions of the Schrödinger
equation
$\frac{\partial\psi}{\partial z}=\frac{i}{2k}\Delta_{\perp}\psi\,,$ (6)
where $\Delta_{\perp}=\partial^{2}/\partial_{x}^{2}+\partial^{2}/\partial
y^{2}$, and as such these beams can be directly translated to other areas of
physics where the Schrödinger equation is the dynamical model, e. g., in the
dynamics of free electron wave packets in quantum mechanics BLIOKH .
Figure 1: (a) Intensity $|\psi(r)|^{2}$ of the exploding illumination in (3)
with power $P=10$ W, $\sigma=1$ mm, $s=0$, and $\mu=-0.25$. (b) Intensity of
the unapertured exploded field at the focal plane as given by (4) when
$\omega=2.5$ rad/fs, $k=\omega/c=8.33$ $\mu$m$^{-1}$, and $f=200$ mm (solid). (c)
Peak intensity at $r=0$, numerically evaluated from (2) with (3), versus $z$
about the focus. (d) and (e) Radial profiles at planes close to the focal
plane, evaluated from (2) with (3). For comparison, in all plots the dashed
curves represent the same quantities for Gaussian illumination of the same
power and peak intensity. The vertical lines in (c) delimit the focal region
of the focused Gaussian beam.
Figure 2: (a) Intensity $|\psi(r)|^{2}$ of the exploding illumination in (3)
with power $P=10$ W, $\sigma=1$ mm, $s=1$, and $\mu=0.25$. (b) Intensity of
the unapertured exploded field at the focal plane as given by (4) when
$\omega=2.5$ rad/fs, $k=\omega/c=8.33$ $\mu$m$^{-1}$ (vacuum), and $f=200$ mm. (c)
Gray curve: Peak intensity at the maximum of the bright ring surrounding the
vortex, numerically evaluated from (2) with (3), versus $z$ about the focal
point. Black curve: radius of the maximum of the bright ring versus $z$. (d)
and (e) Radial profiles at planes close to the focal plane evaluated from (2)
with (3). For comparison, in all plots the dashed curves represent the same
quantities for Laguerre-Gauss illumination with $s=1$ and zero radial order,
$\psi(r)\propto r\exp(-r^{2}/a^{2})e^{i\varphi}$, having the same power and
peak intensity.
Figures 1(a) and (b) compare the exploding illuminating field in (3) at $z=0$
and the exploded field in (4) at $z=f$ for $s=0$ and $\mu=-0.25$ (solid curves)
with an illuminating Gaussian beam of the same power and peak intensity and its
focused field (dashed curves). Figures 1(c), (d) and (e) show peak intensities
at $r=0$ versus $z$ and radial profiles at values of $z$ close to $f$
illustrating how the beam profile approaches the singular profile in (4),
compared to the same quantities for the Gaussian illumination (dashed curves).
The singularity is formed only at the focal plane because, out of focus, the
fast oscillations of $\exp\left[\frac{-ikr^{\prime
2}}{2}\left(\frac{1}{f}-\frac{1}{z}\right)\right]$ in (2) make the integral
converge to finite values. Compared to the Gaussian illumination, the
singularity develops explosively in a tiny axial region within the standard
depth of focus [vertical lines in (c)].
Thus, as already pointed out in AIELLO and AIELLO2 , the exploding profile in
(3) reproduces, to a certain extent, what happens when a plane wave of
infinite lateral extent and power is ideally focused, namely, both focused
fields result in an infinitely bright point at the focus, with the substantial
difference that the exploding profile carries finite power, and is therefore
physically realizable, in the same sense that a standard Gaussian beam is.
This is the cylindrically symmetric counterpart of the concentrating beam
factorized in $x$ and $y$ described and realized in AIELLO and AIELLO2 .
Also with cylindrically symmetric intensity, Figs. 2(a) and (b) show the
illuminating exploding and exploded transversal profiles of vortex beams with
orbital angular momentum, given by (3) and (4) with $s=1$ and $\mu=0.25$ (solid
curves), compared to an illuminating Laguerre-Gauss beam with $s=1$ (and zero
radial order) having the same power and peak intensity, and its focused field
(dashed curves). The peak intensity at the radius of the bright ring
surrounding the vortex grows explosively with $z$ to infinity at $f$, as
seen in Fig. 2(c) (solid gray curve), at the same time that the bright ring
shrinks at an accelerating rate down to zero radius to form a punctual vortex
at $f$, as also seen in Fig. 2(c) (solid black curve). For comparison, Fig. 2(c) also shows
the peak intensity and radius of the bright ring for the Laguerre-Gauss
illumination (dashed curves). The ripples in the explosively growing and
shrinking radial profiles at planes close to the focal plane, observed in Fig.
2(d) and (e), disappear in the smoother, singular profile at the focal plane.
## III Observable effects of the exploding behavior
Figure 3: Physical properties of finite-aperture exploding beams. (a) Focal
intensity when exploding profiles (3) with $s=0$, $\sigma=1$ mm, power $P=10$
W, frequency $\omega=2.5$ rad/fs and the indicated values of $\mu$ illuminate a
lens of focal length $f=200$ mm and increasing radius $R$. (b) The same in
vertical logarithmic scale to visualize the much smaller focal intensity
produced by Gaussian illumination of the same power and peak intensity on the
lens. (c) FWHM diameter of the focal spot produced by the exploding profile
(3), by the Gaussian profile, and by a plane wave, as functions of the radius
of the lens. (d) Radial profiles of intensity at the focal plane due to
exploding (solid), Gaussian of the same peak intensity and power (dashed), and
plane wave (dotted) illuminations for lens radii $R=51.65$ mm (0.25 NA) and
$R=115.47$ mm (0.5 NA). For the Gaussian illumination there is no change with
$R$.
The above description is an ideal behavior that originates from ignoring the
finite aperture of focusing systems, and that disappears when finite
transversal extents are considered. Still, the existence of these singular
beams in a linear medium has observable effects in experiments that are absent
with standard illuminating beams. If a Gaussian or any other standard light
beam carrying finite power is focused, the width of the focal spot and its
intensity do not change once the lens aperture is appreciably larger than the
beam spot size; by contrast, focusing the exploding field in (3) produces, in
principle, a brighter and narrower focal spot as the aperture radius $R$ of
the focusing system is opened. This behavior also mimics what happens when a
plane wave illuminates the lens of increasing radius $R$, with the important
difference that the focused power does not change significantly when opening
the aperture (once $R\gg\sigma$), since the power in the beam tails is
increasingly negligible. In addition, focusing the exploding vortex fields in
(3) produces increasingly brighter and narrower ring surrounding the vortex at
the focal plane when opening the aperture without significantly changing the
beam power. These effects are described below under conditions of paraxial
focusing, conditions under which they are supposed to be physically valid.
For beams without orbital angular momentum ($s=0$, hence $-1/2<\mu<0$), the
intensity at the focal point $(r,z)=(0,f)$ with a lens of aperture radius $R$
can easily be evaluated from (2) with (3) to be
$|\psi(0,f)|^{2}=\frac{P}{A}\left(\frac{k\sigma^{2}}{2f|\mu|}\right)^{2}\left[1-\left(1+\rho^{2}\right)^{|\mu|}\right]^{2}\,,$
(7)
where $\rho=R/\sigma$, which is seen in Fig. 3(a) to grow without bound with
$R$ for any allowed value of $\mu$ even when the aperture radius $R$ is much
larger than the beam spot size of (3) on the lens, measured by $\sigma$. The
curves end when the angle of a marginal ray as seen from the focus is
$30^{\circ}$, taken, according to Siegman SIEGMAN , as a limit of validity of
the paraxial approximation, i. e., for a numerical aperture of NA $=0.5$.
Evaluation of the derivative of (7) with respect to $\mu$ reveals more
pronounced intensity enhancements (at large values of $R/\sigma$) when using
values of $\mu$ about $-0.42$, as can also be appreciated in Fig. 3(a). By
contrast, the intensity at the focal point of Gaussian illumination approaches
a constant value as soon as the aperture radius $R$ is slightly larger than
its spot size, as seen in Fig. 3(b). Note the logarithmic vertical scale to
visualize the tiny focal intensity of the focused Gaussian illumination of the
same power and peak intensity. Intensities two orders of magnitude higher than
those attainable with the Gaussian illumination are obtained while keeping
paraxial focusing conditions. Also, the diameter of the focal spot decreases
as the aperture radius is increased up to the limit imposed by the paraxial
approximation, in contrast with the constant and much larger focal diameter
with the Gaussian beam of the same power and peak intensity, as seen in Fig.
3(c). Bright spots of area two orders of magnitude smaller than with the
Gaussian illumination are obtained. Two examples of focused transversal
profiles with numerical apertures $0.5$ and $0.25$ are compared to the low-
intensity and much broader focused Gaussian beam in Fig. 3(d).
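The closed form (7) can be cross-checked against the on-axis Fresnel integral, and its unbounded growth made explicit, with a short numerical sketch (ours; SciPy assumed, parameters as in Fig. 3 with the near-optimal $\mu=-0.42$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

P, sigma, f, k = 10.0, 1.0, 200.0, 8.33e3   # W, mm, mm, mm^-1
mu = -0.42                                   # near-optimal value quoted in the text (s = 0)
A = np.pi * sigma**2 * gamma(2*mu + 1) / gamma(2*mu + 2)

def I_focus(R):
    """Focal intensity |psi(0,f)|^2 of Eq. (7) for aperture radius R."""
    rho = R / sigma
    return ((P / A) * (k * sigma**2 / (2 * f * abs(mu)))**2
            * (1 - (1 + rho**2)**abs(mu))**2)

# Cross-check of Eq. (7) against the on-axis integral (2), where J_0(0) = 1
R = 50.0
amp, _ = quad(lambda r: r * np.sqrt(P / A) * (1 + r**2 / sigma**2)**(-(mu + 1)), 0, R)
assert abs((k / f * amp)**2 - I_focus(R)) / I_focus(R) < 1e-6

# No saturation: the focal intensity keeps growing as the aperture opens
assert I_focus(100.0) > 2 * I_focus(50.0)
```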
Figures 3(c) and (d) also include diffraction-limited focal diameters and
transversal profiles (dotted curves) corresponding to plane wave illumination
on the lens of radius $R$. It is clear that focusing the exploding profile
with finite power imitates focusing of a plane wave, in the sense that
increasing the numerical aperture results in tighter focal spots. The peak
intensities of the focused transversal profiles with illumination (3) and a
plane wave are equated in Fig. 3(d) to visualize that the width of the focal
spot is, for any aperture radius, only slightly above the diffraction limit,
in contrast to what happens with the Gaussian illumination.
Figure 4: Physical properties of finite-aperture exploding vortex beams. (a)
Focal intensity when exploding profiles (3) with $|s|=1$, $\sigma=1$ mm, power
$P=10$ W, frequency $\omega=2.5$ rad/fs and $\mu=0.08$ illuminate a lens of
focal length $f=200$ mm and increasing radius $R$ (solid curve) compared to
the same quantity for Laguerre-Gauss (LG) illumination of the same power and
peak intensity on the lens. (b) Radius of the maximum of the bright ring at
the focal plane produced by the exploding profile (3) (solid), by the
Laguerre-Gauss profile (dashed), and by a uniform plane wave with a punctual
vortex (dotted), as functions of the radius of the lens. The gray curves of
the same type represent the inner radius and outer radius (half width at half
maximum of the maximum intensity) for the corresponding illuminations. (c)
Radial profiles of intensity at the focal plane due to exploding (solid),
Laguerre-Gauss of the same peak intensity and power (dashed), and vortex plane
wave (dotted) illuminations for lens radii $R=51.65$ mm (0.25 NA) and
$R=115.47$ mm (0.5 NA). For the Laguerre-Gauss illumination there is no change with
$R$. (d) Intensity of the bright ring and its radius as a function of
propagation distance about the focus for the exploding beam with radius
$R=115.47$ mm (solid curves) compared to the ideal exploding beam. The axial
region is so short that the same quantities for Laguerre-Gauss beams do not
appreciably change (dashed lines).
For exploding beams with orbital angular momentum ($|s|\neq 0$, hence
$(|s|-1)/2<\mu<|s|/2$) the intensity of the bright ring at the focal plane
grows similarly when opening the aperture, in contrast to what happens to
Laguerre-Gauss beams, as seen in Fig. 4(a). As with $s=0$, this enhancement is
further magnified by using the value $\mu\simeq(|s|-1)/2+0.08$, e. g., the
value $\mu=0.08$ for $|s|=1$ chosen in Fig. 4. Again, focal intensities about
two orders of magnitude higher than with Laguerre-Gaussian illumination of the
same power, peak intensity and vorticity on the lens are easily attained.
Figure 4(b) shows the shrinking radius of the bright ring at the focal plane
when increasing the aperture radius $R$, in contrast to the constant radius
with the Laguerre-Gauss illumination, and mimicking the shrinking diffraction-
limited radius for plane wave illumination with a punctual vortex (solid,
dashed and dotted curves, respectively). The gray curves of the same type
locate the inner and outer radius of the corresponding bright rings, in order
to visualize the constant thickness of the bright ring at the focal plane for
the Laguerre-Gauss illumination, and the shrinking thicknesses for exploding
and plane wave illuminations. Two examples of transversal profiles at the
focal plane with numerical apertures $0.5$ and $0.25$ are seen in Fig. 4(c),
where they are compared to the much broader and less intense focused Laguerre-
Gauss, and are also seen to be slightly broader than the diffraction-limited
profiles of the same numerical apertures. Interestingly, out of the focal
plane, the exploding beam focused by a lens of finite radius behaves very
approximately as the ideal exploding beam from an infinite aperture, as
illustrated in Fig. 4(d), which plots the intensity of the maximum of the
bright ring and its radius about the focus. Only in a tiny fraction (about one
wavelength) of the focal region (of about $0.5$ mm) do the bright ring
intensity and its radius depart from the ideal behavior, reaching finite and
non-zero values, respectively, at the focus.
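The shrinking of the bright ring with aperture radius can be reproduced directly from (2); the sketch below (our illustration, assuming SciPy, with the Fig. 4 parameters) locates the ring maximum by a brute-force search on a radial grid:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, gamma

P, sigma, f, k = 10.0, 1.0, 200.0, 8.33e3   # W, mm, mm, mm^-1
s, mu = 1, 0.08                              # exploding vortex beam of Fig. 4
A = np.pi * sigma**2 * gamma(abs(s) + 1) * gamma(2*mu + 1 - abs(s)) / gamma(2*mu + 2)

def focal_intensity(r, R):
    """|psi(r,f)|^2 at the focal plane from Eq. (2) with aperture radius R."""
    integrand = lambda rp: (rp * np.sqrt(P / A) * (rp / sigma)**abs(s)
                            / (1 + rp**2 / sigma**2)**(mu + 1)
                            * jv(abs(s), k * r * rp / f))
    val, _ = quad(integrand, 0.0, R, limit=2000)
    return (k / f * val)**2

def ring_radius(R):
    """Radius of the bright ring surrounding the vortex at the focal plane."""
    rs = np.linspace(1e-4, 1e-2, 200)   # mm, grid bracketing the expected ring
    return rs[np.argmax([focal_intensity(r, R) for r in rs])]

# Opening the aperture shrinks the bright ring, as in Fig. 4(b)
assert ring_radius(115.47) < ring_radius(51.65)
```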
## IV Enhanced longitudinal component
The ideally infinite gradients of these singular beams and vortex beams at the
focal plane suggest the presence of a strong longitudinal or axial component
of the electric field, even under paraxial conditions, since the axial
component is directly related to the gradient of the transversal part.
Enhancement of the axial component commonly relies on nonparaxial focusing and
the use of specific fields such as radially polarized beams QUABIS ; COLLIN .
While paraxial, standard, radially polarized fields present quite small
longitudinal components, exploding radially polarized fields have longitudinal
components comparable in amplitude to that of the transversal component.
For a paraxial beam of transversal components
$\bm{\psi}_{\perp}=\psi_{x}\bm{u}_{x}+\psi_{y}\bm{u}_{y}$, the axial component
can be evaluated from LAX
$\psi_{z}=\frac{i}{k}\nabla_{\perp}\cdot\bm{\psi}_{\perp}=\frac{i}{k}\left(\frac{\partial\psi_{x}}{\partial
x}+\frac{\partial\psi_{y}}{\partial y}\right)\,.$ (8)
Strictly speaking, and according to Gauss's law for the electric field, the term
$(i/k)\partial\psi_{z}/\partial z$ should be included in the right hand side
of (8), but according to the perturbative method in LAX , the $x$ and $y$
derivatives are leading-order terms determining the axial component under
paraxial conditions, and the $z$ derivative is a small correction. In
addition, the axial component is evaluated below only at the focal plane,
where it takes maximum values and therefore $\partial\psi_{z}/\partial z=0$.
In order to maintain the cylindrical symmetry of the intensity of the
vectorial field, we consider left or right handed circular polarizations,
$\bm{\psi}_{\perp}=\psi(r,z)e^{is\varphi}\bm{u}_{l,r}$, where
$\bm{u}_{l,r}=(\bm{u}_{x}\pm i\bm{u}_{y})/\sqrt{2}$, and the upper and lower
signs stand for left and right handed polarizations, respectively, and where
$\psi(r,z)$ is given by the Fresnel integral in (2) with (3). A short
calculation of the derivatives leads to
$\psi_{z}(r,z)e^{i(s\pm
1)\varphi}=\frac{1}{\sqrt{2}}\frac{i}{k}\left[\frac{\partial\psi(r,z)}{\partial
r}\mp\frac{s}{r}\psi(r,z)\right]e^{i(s\pm 1)\varphi}\,.$ (9)
With $s=-1$ (hence $0<\mu<1/2$) and left handed circular polarization, and
with $s=+1$ (also $0<\mu<1/2$) and right handed polarization, the axial
components do not carry any vorticity, and can then have maxima at the beam
center (where the transversal components vanish), which are equal in both
cases and given by $\psi_{z}(r,z)=(i/\sqrt{2}k)\left[\partial\psi(r,z)/\partial
r+\psi(r,z)/r\right]$.
We can now easily consider the radially or azimuthally polarized beams
$\bm{\psi}_{\perp}=\frac{\psi(r,z)}{\sqrt{2}}\left[e^{-i\varphi}\bm{u}_{l}\pm
e^{i\varphi}\bm{u}_{r}\right]\,,$ (10)
having the same amplitude associated with the transversal components,
$|\bm{\psi}_{\perp}|=|\psi(r,z)|$, as that of the individual vortex beams with
circular polarizations. The axial component vanishes for azimuthal
polarization [for the minus sign in (10)], and is given by
$\psi_{z}=\frac{i}{k}\left[\frac{\partial\psi(r,z)}{\partial
r}+\frac{1}{r}\psi(r,z)\right]=\frac{i}{k}\frac{1}{r}\frac{\partial[r\psi(r,z)]}{\partial
r}\,,$ (11)
for radial polarization [for the plus sign in (10)]. Writing the ideal focal
profile in (4) for $|s|=1$ in the compact form
$\psi(r,f)=C\alpha^{\mu}K_{1-\mu}(\alpha)$, where $\alpha=k\sigma r/f$ and $C$
is the constant prefactor in (4), and where we have dropped the negligible
curvature factor $e^{ikr^{2}/2f}$ in the Debye approximation
(symmetric focused field about the focal plane), one can readily evaluate the
axial component in (11) as
$\psi_{z}=\frac{i\sigma}{f}C\alpha^{\mu}\left[K_{2-\mu}(\alpha)-2\frac{K_{1-\mu}(\alpha)}{\alpha}\right]\,,$
(12)
which is singular at $\alpha=0$, i. e., at $r=0$. Indeed, the ratio of the
amplitudes of the axial and transversal components close to the origin behaves
as $|\psi_{z}|/|\psi|=2\sigma\mu/(f\alpha)=2\mu/(kr)$, implying that the
singularity in the axial field is stronger than that of the transversal field.
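The quoted small-argument ratio $2\mu/(kr)$ can be checked numerically from (4) and (12) with SciPy's modified Bessel function (an illustrative verification of the asymptotics, not part of the derivation):

```python
import numpy as np
from scipy.special import kv

mu, k, sigma, f = 0.25, 8.33e3, 1.0, 200.0   # with k in mm^-1 and lengths in mm
r = 1e-5                                      # mm, very close to the axis
a = k * sigma * r / f                         # alpha = k*sigma*r/f

# Transversal profile ~ Eq. (4) and axial component ~ Eq. (12), both up to
# the common constant C
trans = a**mu * kv(1 - mu, a)
axial = (sigma / f) * a**mu * abs(kv(2 - mu, a) - 2 * kv(1 - mu, a) / a)

expected = 2 * mu / (k * r)
assert abs(axial / trans - expected) / expected < 1e-3
```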
Of course, these are mathematical equations for ideal exploding radially
polarized beams, but they have physical manifestations in their apertured
versions. From (11), (2) with finite radius $R$ and illumination (3) with
$|s|=1$, the axial component of the radially polarized beam at the focal plane
can be evaluated to be
$\psi_{z}(0,f)=\frac{ik}{f^{2}}\sqrt{\frac{P}{A}}\frac{\sigma^{3}}{2\mu}\left[\frac{\rho^{2}}{(1+\rho^{2})^{\mu}}+\frac{1-(1+\rho^{2})^{1-\mu}}{1-\mu}\right]\,,$
(13)
where again $\rho=R/\sigma$. The amplitude of the axial component grows as
$(R/\sigma)^{2-2\mu}$ at large $R/\sigma$, while the amplitude of the
transversal component at the bright ring surrounding the polarization
singularity grows with the slower power law $(R/\sigma)^{1-2\mu}$ (also at
large $R/\sigma$), as can be seen in Fig. 5(a). This difference
implies a linear increase with $R/\sigma$ of the ratio of the axial and
transversal amplitudes, as can be appreciated in Fig. 5(b), that reaches a
maximum value of $0.665$ in the limit of validity of the paraxial
approximation (solid curve). As expected, for Laguerre-Gauss radially
polarized beams, this ratio reaches a constant maximum value that does not
exceed $0.049$ (dashed curve), and, again, the behavior of the axial component
of the exploding radially polarized beam mimics the behavior of radially
polarized uniform plane wave with a punctual polarization singularity in its
center, whose maximum ratio is about $0.757$ (dotted curve). For illustration
Figs. 5(c) and (d) depict transversal profiles of amplitude of the axial and
transversal components at the focal plane for NA $=0.25$ and $0.5$. The above
analysis indicates that the individual, circularly polarized exploding vortex
beams also have enhanced axial components, but we have ignored them because
they are simply a factor $\sqrt{2}$ smaller than that of the radially polarized beam.
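The power laws just quoted can be illustrated from (13): at large $R/\sigma$, doubling the aperture radius multiplies the axial amplitude by about $2^{2-2\mu}$. A minimal sketch (ours, SciPy assumed, with the Fig. 5 parameters):

```python
import numpy as np
from scipy.special import gamma

P, sigma, f, k = 10.0, 1.0, 200.0, 8.33e3   # W, mm, mm, mm^-1
s, mu = 1, 0.08                              # radially polarized beam of Fig. 5
A = np.pi * sigma**2 * gamma(abs(s) + 1) * gamma(2*mu + 1 - abs(s)) / gamma(2*mu + 2)

def psi_z0(R):
    """|psi_z(0,f)| of the radially polarized beam, Eq. (13)."""
    rho = R / sigma
    bracket = rho**2 / (1 + rho**2)**mu + (1 - (1 + rho**2)**(1 - mu)) / (1 - mu)
    return abs(k / f**2 * np.sqrt(P / A) * sigma**3 / (2 * mu) * bracket)

# Asymptotic growth law |psi_z| ~ (R/sigma)^(2 - 2*mu)
ratio = psi_z0(400.0) / psi_z0(200.0)
assert abs(ratio - 2**(2 - 2 * mu)) < 0.02
```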
Figure 5: Exploding radially polarized beams. (a) Amplitude at the focal
point of the axial component $|\psi_{z}(0,f)|$ (black curve), as given by
(13), and of the transversal component
$|\bm{\psi}_{\perp}(r_{m},f)|=|\psi(r_{m},f)|$ at its radial maximum $r_{m}$
(gray curve), when the radially polarized beam in (10), with the exploding
profile $\psi(r)$ in (3) with $\sigma=1$ mm, $P=10$ W, frequency $\omega=2.5$
rad/fs, $\mu=0.08$ and $|s|=1$, is focused in vacuum with a lens of focal length
$f=200$ mm, both as functions of the radius $R$ of the lens aperture. (b)
Quotient of the axial and transversal amplitudes as a function of $R$ (solid
curve) for the above conditions, for radially polarized Laguerre-Gauss
illumination of the same power and peak intensity on the lens (dashed curve),
and for radially polarized plane wave illumination with a punctual
polarization singularity in its center (dotted curve). (c) and (d) Radial
profiles of the amplitude of the axial (black) and transversal (gray)
components for $R=51.65$ mm (NA= $0.25$) and $R=115.47$ mm (NA $=0.5$).
## V Exploding versus uniform illumination
Figure 6: Solid curves: For the focusing conditions and exploding
illumination as in Fig. 3, (a) peak intensity at focus and (b) FWHM of the
focal spot as functions of the lens aperture $R$. Dotted curves: The same but
for top-hat illumination of the same power of radius $R$, i. e., always
filling the aperture. Dashed curves: The same for top-hat illumination of the
same power and fixed radius $R_{0}=30$ mm.
The results presented so far raise the practical question of the possible
advantages of using exploding illumination over uniform illumination (of
finite transversal extent). Of course, the answer depends on the particular
application. Smooth profiles are more convenient in nonlinear optics
applications to avoid instabilities and collapse. Reshaping Gaussian-like beam
profiles into uniform flat-top profiles involves large power losses if based
on truncation; otherwise it requires using sophisticated diffractive and
refractive optics DICKEY , or more recently developed techniques FRIES , but
uniform illumination provides better resolution MAZNEV in applications such
as microscopy.
Suppose that we have at our disposal a fixed amount of power $P$ supplied by a
laser system, which can be shaped as the exploding profile or as a flat-top
profile, and that we wish to increase the intensity and/or diminish the size of
the focal spot by opening the aperture, or simply to control these properties. As seen in
Fig. 6 (a) and (b) the intensity is considerably higher and the spot size
slightly smaller (the diffraction limit) with a flat-top profile that always
fills the aperture (dotted curves) than with exploding illumination (solid
curves), but this arrangement requires reshaping the flat-top profile each
time the aperture radius is increased. The situation is different in a setting
where reshaping of the illumination is difficult or not possible (dashed
curves). The peak intensity of the flat-top illumination of power $P$ and
fixed radius $R_{0}$ is at first lower than with the exploding illumination
because of truncation losses, then higher when more power is collected, and
finally constant once $R_{0}$ is smaller than the aperture radius $R$,
becoming eventually smaller than the focal intensity with the exploding
illumination. Similarly, the focal spot is tighter but then constant and wider
than with the exploding illumination of increasing aperture radius. Thus, the
exploding profile offers the possibility of a smoother control of the
intensity and width of the focused spot over wider ranges without the
necessity of reshaping the input beam. A similar conclusion holds for the
intensity and thickness of the bright ring of exploding vortex beams, and for
the strength of the axial component of exploding radially polarized beams.
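The three regimes described for the dashed curves of Fig. 6 can be sketched with the focal-intensity formulas; note that the elementary top-hat expression below is our own derivation (on-axis Fresnel integral of a uniform disk of power $P$ and radius $R_{0}$, possibly truncated by the aperture), and the choice $\mu=-0.42$ is an assumption, since the exploding-beam parameter of Fig. 6 is not fully specified here:

```python
import numpy as np
from scipy.special import gamma

P, sigma, f, k = 10.0, 1.0, 200.0, 8.33e3   # W, mm, mm, mm^-1
mu = -0.42                                   # assumed exploding-beam parameter (s = 0)
A = np.pi * sigma**2 * gamma(2*mu + 1) / gamma(2*mu + 2)

def I_exploding(R):
    """Focal intensity for exploding illumination, Eq. (7)."""
    rho = R / sigma
    return ((P / A) * (k * sigma**2 / (2 * f * abs(mu)))**2
            * (1 - (1 + rho**2)**abs(mu))**2)

def I_tophat(R, R0):
    """Focal intensity for a top-hat of power P and radius R0 truncated at R."""
    Reff = min(R, R0)
    return (k / (2 * f))**2 * P * Reff**4 / (np.pi * R0**2)

R0 = 30.0  # mm, fixed top-hat radius as for the dashed curves of Fig. 6
# Truncation losses, then full collection, then the eventual crossover
assert I_exploding(10.0) > I_tophat(10.0, R0)
assert I_exploding(40.0) < I_tophat(40.0, R0)
assert I_exploding(115.47) > I_tophat(115.47, R0)
```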
## VI Conclusions
In conclusion, we have reported the existence of cylindrically symmetric,
paraxial beams, vortex beams, and radially polarized beams of light that
produce singular (infinite) intensities when they are ideally focused. They
mimic the behavior of ideally focused plane waves but with a localized
transversal profile that carries a finite amount of power.
With real focusing systems of finite lateral extent, this peculiar behavior
manifests as focused beams whose intensity and spot size, vortex intensity and
radius, and strength of the longitudinal component strongly change with the
aperture size, even if the focused power is not significantly altered. With
the same exploding illumination, focal intensities two orders of magnitude
higher, focal spots and vortex radii two orders of magnitude smaller, and a
longitudinal component one order of magnitude stronger than the corresponding
properties of standard illuminating beams of similar power and intensity can
be achieved by increasing the aperture radius while keeping paraxial focusing.
These effects are absent with standard beams, and can find application in
linear or nonlinear optics experiments where a precise control of the width
and intensity of the focal spot, of the radius and intensity of the vortex
bright ring, or the strength of the axial component of the electric field, are
crucial, e. g., in second harmonic generation, also with radially polarized
light YEW and at surfaces BISS , acceleration of electrons GUPTA ; VARIN ,
particle trapping ZHAN , laser material processing DREVINSKAS , etc. These
exploding beams can also be used as alternative beams to standard Laguerre-
Gauss beams for quantum entanglement of states of orbital angular momentum
FICKLER . Outside the field of optics, given the generality of the Schrödinger
equation, exploding free electron wave packets BLIOKH and vortex electron
beams MCMORRAN do exist and could find application in electron microscopy, as
well as exploding acoustic vortices for Mie particle trapping MARZO .
The author acknowledges support from Projects of the Spanish Ministerio de
Economía y Competitividad No. MTM2015-63914-P, and No. FIS2017-87360-P.
## References
* (1) A. Peres, Quantum theory: Concepts and methods (Kluwer Academic Publishers, San Diego, CA, 1994), 5th ed., pp. 81–82.
* (2) A. Aiello, “Spontaneous generation of singularities in paraxial optical fields,” Opt. Lett. 41, 1668–1671 (2016).
* (3) A. Aiello, M. Paúr, B. Stoklasa, Z. Hradil, J. Rehacek, and L. L. Sánchez-Soto, “Observation of concentrating paraxial beams,” OSA Continuum 3, 2387–2394 (2020).
* (4) Y. S. Kivshar and D. E. Pelinovsky, “Self-Focusing and Transverse Instabilities of Solitary Waves,” Phys. Rep. 331, 117–195 (2000).
* (5) G. Fibich, The nonlinear Schrödinger Equation: Singular solutions and optical collapse (Springer: Heidelberg, 2015).
* (6) I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, Fifth Edition (Academic, 1994).
* (7) K.Y. Bliokh, Y.P. Bliokh, S. Savel’ev, and F. Nori, “Semiclassical Dynamics of Electron Wave Packet States with Phase Vortices,” Phys. Rev. Lett. 99, 190404 (2007).
* (8) A. E. Siegman, Lasers, (Mill Valley, 1986). See pp. 628-630.
* (9) S. Quabis, R. Dorn, M. Eberler, O. Glöckl, and G. Leuchs, “Focusing light to a tighter spot,” Opt. Commun. 179 1–7 (2000).
* (10) H Wang, L. Shi, B. Lukyanchuk, and C. Sheppard, “Creation of a needle of longitudinally polarized light in vacuum using binary optics,” Nature Photonics 2, 501–505 (2008).
* (11) M. Lax, W. H. Louisell, and W. B. McKnight, “From Maxwell to paraxial wave optics,” Phys. Rev. A 11, 1365–1370 (1975).
* (12) F. M. Dickey, L. S. Weichman, and R. N. Shagam, “Laser beam shaping techniques,” Proc. SPIE 4065, 338–348 (2000).
* (13) F. Fries, M Fröbel, P. Y. Ang, S. Lenk, and S. Reineke, “Real-time beam shaping without additional optical elements,” Light Sci. Appl. 7, 18 (2018).
* (14) A. A. Maznev and O. B. Wright, “Upholding the diffraction limit in the focusing of light and sound,” Wave Motion 68, 182–189 (2017).
* (15) E.Y.S. Yew and C. J. R. Sheppard, “Second harmonic generation polarization microscopy with tightly focused linearly and radially polarized beams,” Opt. Commun. 275, 453–457 (2007).
* (16) D. P. Biss and T. G. Brown, “Polarization-vortex-driven second-harmonic generation,” Opt. Lett. 28, 923–925 (2003).
* (17) D. N. Gupta, N. Kant, D. E. Kim, and H. Suk, “Electron acceleration to GeV energy by a radially polarized laser,” Phys. Lett. A 368, 402–407 (2007).
* (18) C. Varin, M. Piché, M. A. Porras, “Acceleration of electrons from rest to GeV energies by ultrashort transverse magnetic laser pulses in free space,” Phys. Rev. E 71, 026603 (2005).
* (19) Q. Zhan, “Trapping metallic Rayleigh particles with radial polarization,” Opt. Express 12, 3377–3382 (2004).
* (20) R. Drevinskas, J. Zhang, M. Beresna, M. Gecevicius, A. G. Kazanskii, Y. P. Svirko, and P. G. Kazansky, “Laser material processing with tightly focused cylindrical vector beams,” Appl. Phys. Lett. 108, 221107 (2016).
* (21) R. Fickler, R. Lapkiewicz, W. N. Plick, M. Krenn, C. Schaeff, S. Ramelov, and A. Zeilinger, Science 338, 640 (2012).
* (22) B. J. McMorran, A. Agrawal, I. M. Anderson, A. A. Herzing, H. J. Lezec, J. J. McClelland, and J. Unguris, “Electron vortex beams with high quanta of orbital angular momentum, ” Science 331, 192–195 (2011).
* (23) A. Marzo, M. Caleap, and B. W. Drinkwater, “Acoustic Virtual Vortices with Tunable Orbital Angular Momentum for Trapping of Mie Particles,” Phys. Rev. Lett. 120, 044301 (2018).
# Inertial Proximal Deep Learning Alternating Minimization for Efficient
Neural Network Training
###### Abstract
In recent years, the Deep Learning Alternating Minimization (DLAM) method,
which is the alternating minimization algorithm applied to the penalty form of
deep neural network training, has been developed as an alternative to overcome
several drawbacks of Stochastic Gradient Descent (SGD) algorithms. This work
develops an improved DLAM via the well-known inertial technique, namely
iPDLAM, which predicts a point by linear extrapolation of the current and last
iterates. To further improve training speed, we apply a warm-up technique to
the penalty parameter: we start with a small initial value and increase it
over the iterations. Numerical results on real-world datasets are reported to
demonstrate the efficiency of the proposed algorithm.
Index Terms— Nonconvex alternating minimization, Penalty, Inertial method,
Network training.
## 1 Introduction
Deep neural networks have achieved great success in computer vision and
machine learning. Mathematically, training an $L$-layer neural network can be
formulated as:
$\small\min_{{\bf W}_{1},{\bf W}_{2},\ldots,{\bf W}_{L}}\\{\mathcal{L}({\bf
y},\sigma_{L}({\bf W}_{L}...\sigma_{1}({\bf W}_{1}{\bf
a}_{0})))+\sum_{l=1}^{L}R_{l}({\bf W}_{l})\\},$
where ${\bf a}_{0}$ denotes the training sample and ${\bf y}$ denotes the
labelled set, $R_{l}$ is the regularization in the $l$-th layer, and
$\sigma_{l}$ is the $l$-th layer’s activation. The absence of $R_{l}$ yields
the fully connected networks. The main workhorse of deep neural network
training is SGD [1] and its variants. The core part of SGD for
training neural networks is to compute the gradient with respect to ${\bf
W}_{l}$, i.e., the backpropagation [2]. For simplicity of presentation, we
assume $\sigma_{l}$ is a $\mathbb{R}\mapsto\mathbb{R}$ mapping. According to
the chain rule, it follows: $\frac{\partial\mathcal{L}}{\partial{\bf
W}_{1}}=\frac{\partial\mathcal{L}}{\partial\sigma_{L}}\frac{\partial\sigma_{L}}{\partial\sigma_{L-1}}\ldots\frac{\partial\sigma_{2}}{\partial\sigma_{1}}\frac{\partial\sigma_{1}}{\partial{\bf
W}_{1}}.$ If $|\frac{\partial\sigma_{l}}{\partial\sigma_{l-1}}|<1$, the
gradient decays fast or even vanishes when $L$ is large. This phenomenon,
mentioned in [3], hurts the speed and performance of SGD in the deep-layer
case. On the other hand, the convergence of stochastic training methods relies
on the assumption that the gradient is Lipschitz continuous, which fails to
hold in various applications. To overcome these drawbacks, papers
[4, 5, 6, 7] propose gradient free methods by the Alternating Direction
Methods of Multipliers (ADMM) or Alternating Minimization. The core idea of
this method is the decomposition of the training task into a sequence of
substeps, each involving only a single layer's activations. Because each
substep admits a global minimizer, gradient-free methods can achieve notable
speedups [4, 5, 6]. Another advantage of gradient-free methods is parallelism,
since ADMM and AM lend themselves naturally to concurrent implementation.
Besides acceleration and parallelism, ADMM and AM also enjoy milder
theoretical requirements for convergence compared with SGD: the theory of SGD
depends heavily on the global smoothness of $\mathcal{L}({\bf
y},\sigma_{L}({\bf W}_{L}...\sigma_{1}({\bf W}_{1}{\bf a}_{0})))$ with respect
to ${\bf W}_{1},{\bf W}_{2},\ldots,{\bf W}_{L}$, which usually fails to hold.
This paper studies the alternating minimization based method, namely Deep
Learning Alternating Minimization (DLAM), and aims to develop an improved
version of it.
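The vanishing-gradient phenomenon above can be illustrated with a few lines of code (a sketch with an assumed per-layer derivative bound of 0.25, the maximal derivative of a sigmoid; this is an illustration, not the paper's experiment):

```python
# Sketch of the vanishing-gradient effect: when every chain-rule factor
# |d(sigma_l)/d(sigma_{l-1})| < 1, the gradient magnitude with respect to
# W_1 shrinks geometrically with the depth L. The factor 0.25 (a sigmoid's
# maximal derivative) is an illustrative assumption.
def chain_rule_magnitude(layer_derivs):
    """Product of per-layer derivative magnitudes from the chain rule."""
    g = 1.0
    for d in layer_derivs:
        g *= abs(d)
    return g

shallow = chain_rule_magnitude([0.25] * 3)   # 3-layer network
deep = chain_rule_magnitude([0.25] * 20)     # 20-layer network: vanishingly small
```

Already at 20 layers the product is below $10^{-12}$, which is why deep networks are hard to train with plain backpropagated gradients.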
The training task can be reformulated as a nonlinear constrained optimization
problem. DLAM is in fact the AM method applied to its penalty formulation. In
[8], the authors propose a new framework based on using convex functions to
approximate a non-decreasing activation; the AM method is the
state-of-the-art solver for the resulting convex optimization problem. In [9],
the authors extend DLAM to online learning with a co-activation memory
technique. By representing the activation function in proximal operator form,
[10] proposed a new penalty framework, which can also be minimized by AM. By
rewriting activation functions as equivalent biconvex constraints, [11]
proposes AM for Fenchel lifted networks. Using a higher-dimensional space to
lift the ReLU function, paper [6] develops a smooth multi-convex formulation
with AM. In [12], the authors develop the AM methodology for nested neural
networks.
The contributions of this paper can be summarized as follows: 1) A novel
algorithm. We develop a new algorithm for deep neural network training. In
each iteration, the inertial technique is employed to predict an auxiliary
point from the current and last iterates. For acceleration, we use a small
penalty parameter in the beginning iterations and then increase it to a larger
one. DLAM can be regarded as a special case of this scheme. 2) Sound results.
Various proximal operators are widely used in the statistical learning
community [13], so many application problems could potentially be solved by
the proposed algorithm. We present numerical experiments to demonstrate the
performance of our algorithm. The convergence is verified, and comparisons
with classical solvers are presented.
## 2 Problem Formulation and Algorithms
Notation: Given an integer $N>0$, $[N]:=\\{1,2,\ldots,N\\}$. Let
$(a^{k})_{k\geq 1}$ and $(b^{k})_{k\geq 1}$ be positive, $a^{k}=o(b^{k})$
means $\lim_{k}\frac{a^{k}}{b^{k}}=0$. We denote that ${\bf z}^{k}:=({\bf
z}_{1}^{k},\ldots,{\bf z}_{l}^{k},\ldots,{\bf z}_{L}^{k})$ and $[{\bf
z}^{k};{\bf z}_{l}]:=({\bf z}_{1}^{k+1},\ldots,{\bf z}_{l-1}^{k+1},{\bf
z}_{l},{\bf z}_{l+1}^{k},\ldots,{\bf z}_{L}^{k})$. Similar notations are
defined for ${\bf W}^{k}$ and ${\bf a}^{k}$. $\mathcal{L}(\cdot)$ represents
the loss function and $\|\cdot\|$ is the L2 norm. For a map
$\sigma(\cdot):\mathbb{R}^{d}\mapsto\mathbb{R}^{d}$ and vectors ${\bf w},{\bf
v}\in\mathbb{R}^{d}$, we denote $\mathcal{S}^{\sigma}({\bf w},{\bf
v},r):=\mathop{\mathrm{argmin}}_{{\bf x}\in\mathbb{R}^{d}}\\{\|\sigma({\bf
x})-{\bf w}\|^{2}+r\|{\bf x}-{\bf v}\|^{2}\\},$ where $r>0$ is a proximal
parameter. If $\sigma({\bf x})={\bf x}$, then $\mathcal{S}^{\sigma}({\bf
w},{\bf v},r)=\frac{{\bf w}+r{\bf v}}{1+r}$; If $\sigma(\cdot)$ is set to be
the ReLU function [14], $[\mathcal{S}^{\sigma}({\bf w},{\bf v},r)]_{i}$ is one
of these three items $\\{[\frac{{\bf w}+r{\bf v}}{1+r}]_{i},[{\bf
v}]_{i},\min\\{{\bf v}_{i},0\\}\\}$ minimizing $|\sigma(\cdot)-{\bf
w}_{i}|^{2}+r|\cdot-{\bf v}_{i}|^{2}$ [15]. For loss function with
regularization term, we denote an operator as $\mathcal{P}^{\mathcal{L}}({\bf
w},{\bf v},r):=\mathop{\mathrm{argmin}}_{{\bf
x}\in\mathbb{R}^{d}}\\{\mathcal{L}({\bf w},{\bf x})+\frac{r}{2}\|{\bf x}-{\bf
v}\|^{2}\\}.$ If $\mathcal{L}({\bf w},{\bf x})$ is set to be
$\frac{1}{2}\|{\bf x}-{\bf w}\|^{2}$, then $\mathcal{P}^{\mathcal{L}}({\bf
w},{\bf v},r)=\frac{{\bf w}+r{\bf v}}{1+r}$; If $\mathcal{L}({\bf w},{\bf x})$
is set to be hinge loss $\max(0,1-w\cdot x)$, then
$\mathcal{P}^{\mathcal{L}}({\bf w},{\bf v},r)$ is one of these three items
$\\{v_{i},w_{i}^{-1},v_{i}+w_{i}^{-1}\\}$. More proximal operators are widely
used in statistical learning community [13].
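The two operators above can be sketched numerically. The following scalar implementation (an illustration, not the paper's code) evaluates $\mathcal{S}^{\sigma}$ for the ReLU activation by testing the three candidate points listed in the text, and implements the identity-activation closed form:

```python
# Scalar sketch of the proximal-type operators defined above. For ReLU,
# the entrywise minimizer is one of the three candidates quoted in the
# text; we evaluate the objective at each candidate and keep the best.
def relu(x):
    return max(x, 0.0)

def prox_relu(w, v, r):
    """argmin_x (relu(x) - w)^2 + r*(x - v)^2, one scalar entry."""
    candidates = [(w + r * v) / (1 + r), v, min(v, 0.0)]
    objective = lambda x: (relu(x) - w) ** 2 + r * (x - v) ** 2
    return min(candidates, key=objective)

def prox_identity(w, v, r):
    """If sigma(x) = x, the minimizer has the closed form (w + r*v)/(1+r)."""
    return (w + r * v) / (1 + r)
```

Vectorized versions simply apply `prox_relu` entrywise, which is what makes the ${\bf z}_{l}$-substeps of the algorithm cheap.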
Penalty formulation: We reformulate the $L$-layer training task as linearly
constrained optimization problem by introducing $({\bf a}_{l})_{l\in[L-1]}$
and $({\bf W}_{l})_{l\in[L]}$:
$\displaystyle\min_{{\bf z},{\bf W},{\bf a}}\\{\mathcal{L}({\bf y},{\bf
z}_{L})+\sum_{l=1}^{L}R_{l}({\bf W}_{l})\\},$ (1) s.t. $\displaystyle{\bf
z}_{l}={\bf W}_{l}{\bf a}_{l-1},~{}l\in[L];~{}{\bf a}_{l}=\sigma_{l}({\bf
z}_{l}),~{}l\in[L-1].$
If $L=1$, we need to minimize a function with linear constraints, which can
be efficiently solved by ADMM. When $L>1$, Problem (1) is a nonlinearly
constrained problem, which is difficult to solve directly; one therefore
considers its penalty formulation. Given a very large penalty parameter
$\bar{\rho}>0$, we aim to solve the reformulated problem:
$\displaystyle\min_{{\bf z},{\bf W},{\bf a}}\\{\Phi_{\bar{\rho}}({\bf z},{\bf
W},{\bf a}):=\mathcal{L}({\bf y},{\bf z}_{L})+\sum_{l=1}^{L}R_{l}({\bf
W}_{l})$ (2) $\displaystyle\quad+\frac{\bar{\rho}}{2}(\sum_{l=1}^{L}\|{\bf
z}_{l}-{\bf W}_{l}{\bf a}_{l-1}\|^{2}+\sum_{l=1}^{L-1}\|{\bf
a}_{l}-\sigma_{l}({\bf z}_{l})\|^{2})\\}.$
An extreme case is setting $\bar{\rho}=+\infty$, in which (2) is identical to
(1). In fact, even for linearly constrained nonconvex minimization, the
penalty method is a good choice because it enjoys much milder convergence
assumptions and more easily set parameters than nonconvex ADMM [16]. In the
numerical experiments, we update $\bar{\rho}$ from iteration to iteration,
setting $\rho_{k+1}=1.1\rho_{k}$ at the end of each iteration.
It should be noted that the formulation in this work is different from Zeng’s
work [15]: there is no activation ${\bf a}_{L}=\sigma_{L}({\bf z}_{L})$ at the
last layer, and the further analysis is based on this formulation.
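To make the penalty objective $\Phi_{\bar{\rho}}$ in (2) concrete, here is a minimal scalar sketch for a two-layer network with identity activations, squared loss, and no regularizers $R_{l}$ (all sizes and values are illustrative assumptions); at a point satisfying the constraints of (1), the penalty terms vanish:

```python
# Scalar sketch of Phi_rho from (2) for identity activations and squared
# loss. Lists are 0-indexed: z[l] and W[l] correspond to z_{l+1} and
# W_{l+1} in the text; a holds a_1, ..., a_{L-1}.
def phi(rho, y, z, W, a, a0):
    L = len(W)
    loss = 0.5 * (z[-1] - y) ** 2
    a_full = [a0] + a  # a_0, a_1, ..., a_{L-1}
    penalty = sum((z[l] - W[l] * a_full[l]) ** 2 for l in range(L))
    penalty += sum((a[l] - z[l]) ** 2 for l in range(L - 1))  # sigma = id
    return loss + 0.5 * rho * penalty

# Feasible point of (1): z1 = 3*2 = 6, a1 = 6, z2 = 0.5*6 = 3 = y.
value = phi(rho=10.0, y=3.0, z=[6.0, 3.0], W=[3.0, 0.5], a=[6.0], a0=2.0)
```

Increasing $\rho$ pushes iterates of any minimization scheme toward the feasible set of (1), which is the mechanism behind the warm-up strategy.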
Inertial methods: DLAM is simply the alternating minimization applied to (2).
In the nonconvex community, the inertial technique [17] (also called
heavy-ball or momentum) is widely used and proven to be algorithmically
efficient [18, 19, 20, 21]. Besides acceleration and good practical
performance for nonconvex problems, the advantage of the inertial technique is
illustrated by weaker conditions for avoiding saddle points [22]. The
procedure of the inertial method is quite simple: it uses a linear combination
of the current and last points for the next iteration. For example, gradient
descent minimizing a smooth function $f$ with step size $\eta>0$ employs the
inertial term ${\bf\hat{x}}^{k}={\bf x}^{k}+\alpha({\bf x}^{k}-{\bf x}^{k-1})$
($\alpha\geq 0$) as ${\bf x}^{k+1}={\bf x}^{k}-\eta\nabla
f({\bf\hat{x}}^{k})$.
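A minimal sketch of this inertial update on a one-dimensional quadratic (the step size `eta` and all numerical values below are illustrative assumptions not fixed by the text):

```python
# Inertial (heavy-ball-style) iteration sketched above: extrapolate with
# the inertial term, then take a gradient step using the gradient at the
# extrapolated point. Applied to f(x) = 0.5*(x - 3)^2, grad f(x) = x - 3.
def inertial_gd(grad, x0, alpha=0.5, eta=0.1, iters=300):
    x_prev, x = x0, x0
    for _ in range(iters):
        x_hat = x + alpha * (x - x_prev)       # inertial extrapolation
        x_prev, x = x, x - eta * grad(x_hat)   # gradient step at x_hat
    return x

x_star = inertial_gd(lambda x: x - 3.0, x0=0.0)
```

On this quadratic the iterates converge to the minimizer $x^{*}=3$; for $\alpha=0$ the scheme reduces to plain gradient descent.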
Algorithm: We employ the alternating minimization method, using the inertial
technique for ${\bf z}^{k}$, ${\bf W}^{k}$ and ${\bf a}^{k}$:
$\small\left\\{\begin{array}[]{lll}\hat{{\bf z}}^{k}&=&{\bf z}^{k}+\alpha({\bf
z}^{k}-{\bf z}^{k-1})\\\ \hat{{\bf W}}^{k}&=&{\bf W}^{k}+\beta({\bf
W}^{k}-{\bf W}^{k-1})\\\ \hat{{\bf a}}^{k}&=&{\bf a}^{k}+\gamma({\bf
a}^{k}-{\bf a}^{k-1})\\\ {\bf
z}_{l}^{k+1}&\in&\displaystyle\mathop{\mathrm{argmin}}_{{\bf
z}_{l}}\\{\Phi_{{\rho}_{k}}([{\bf z}^{k};{\bf z}_{l}],{\bf W}^{k},{\bf
a}^{k})+\frac{\delta{\rho}_{k}}{2}\|{\bf
z}_{l}-{\bf\hat{z}}^{k}_{l}\|^{2}\\}\\\ {\bf
W}_{l}^{k+1}&\in&\displaystyle\mathop{\mathrm{argmin}}_{{\bf
W}_{l}}\\{\Phi_{{\rho}_{k}}({\bf z}^{k+1},[{\bf W}^{k};{\bf W}_{l}],{\bf
a}^{k})+\frac{\delta{\rho}_{k}}{2}\|{\bf W}_{l}-\hat{{\bf
W}}^{k}_{l}\|^{2}\\}\\\ {\bf
a}_{l}^{k+1}&\in&\displaystyle\mathop{\mathrm{argmin}}_{{\bf
a}_{l}}\\{\Phi_{{\rho}_{k}}({\bf z}^{k+1},{\bf W}^{k+1},[{\bf a}^{k};{\bf
a}_{l}])+\frac{\delta{\rho}_{k}}{2}\|{\bf a}_{l}-\hat{{\bf
a}}^{k}_{l}\|^{2}\\}\\\ \end{array}\right..$
We can see that if $\alpha=\beta=\gamma=\delta=0$, the algorithm above
degenerates to DLAM. We first use linear combinations to predict the new
points $\hat{{\bf z}}^{k},\hat{{\bf W}}^{k},\hat{{\bf a}}^{k}$ (inertial
step). In the substeps updating ${\bf z}^{k+1}_{l}$, ${\bf W}^{k+1}_{l}$ and
${\bf a}^{k+1}_{l}$, we proceed in a proximal-point manner, i.e., we add the
regularization terms $\frac{\delta{\rho}_{k}}{2}\|{\bf z}_{l}-\hat{{\bf
z}}^{k}_{l}\|^{2}$, $\frac{\delta{\rho}_{k}}{2}\|{\bf W}_{l}-\hat{{\bf
W}}^{k}_{l}\|^{2}$ and $\frac{\delta{\rho}_{k}}{2}\|{\bf a}_{l}-\hat{{\bf
a}}^{k}_{l}\|^{2}$ to the minimizations to obtain sufficient descent. Let
$d_{l}$ be the width of the $l$-th layer; then ${\bf
W}_{l}\in\mathbb{R}^{d_{l-1}\times d_{l}}$.
As mentioned at the beginning of this section, $\bar{\rho}$ is the penalty
parameter, which must be large for (2) to closely approximate (1). A natural
problem is that a large $\bar{\rho}$ leads to small changes in $({\bf
z}_{L}^{k})_{k\geq 1}$ and $({\bf W}^{k})_{k\geq 1}$, which slows the
algorithm down. To overcome this drawback, we set a small $\rho_{0}>0$ as the
initialization and then increase it over the iterations as
$\rho_{k+1}=\min\\{\theta\rho_{k},\bar{\rho}\\}$, where $\theta>1$. Such
techniques have been used in image processing and nonconvex linearly
constrained problems [23, 24]. With this increasing penalty parameter
strategy, our algorithm is displayed as Algorithm 1.
Algorithm 1 Inertial Proximal Deep Learning Alternating Minimization for
Neural Network Training (iPDLAM)
Require: parameters $\alpha\geq 0$, $\beta\geq 0$, $\gamma>0$, $\delta>0$,
$\theta>1$, $\bar{\rho}\geq\rho_{0}>0$, $K>0$
Initialization: $({\bf z}_{l}^{0})_{l\in[L]}$, $({\bf
a}_{l}^{0})_{l\in[L-1]}$, $({\bf W}_{l}^{0})_{l\in[L]}$
for $k=1,2,\ldots,K$:
1. $\hat{{\bf z}}^{k}={\bf z}^{k}+\alpha({\bf z}^{k}-{\bf z}^{k-1})$;
2. $\hat{{\bf W}}^{k}={\bf W}^{k}+\beta({\bf W}^{k}-{\bf W}^{k-1})$;
3. $\hat{{\bf a}}^{k}={\bf a}^{k}+\gamma({\bf a}^{k}-{\bf a}^{k-1})$;
4. ${\bf z}_{l}^{k+1}=\mathcal{S}^{\sigma_{l}}({\bf a}_{l}^{k},\frac{{\bf W}_{l}^{k}{\bf a}_{l-1}^{k}+\delta{\bf\hat{z}}^{k}_{l}}{1+\delta},1+\delta),~{}l\in[L-1]$;
5. ${\bf z}_{L}^{k+1}=\mathcal{P}^{\mathcal{L}}({\bf y},\frac{{\bf W}_{L}^{k}{\bf a}_{L-1}^{k}+\delta\hat{{\bf z}}^{k}_{L}}{1+\delta},\rho_{k}(1+\delta))$;
6. Update ${\bf W}_{l}^{k+1}$;
7. Update ${\bf a}_{l}^{k+1}$;
8. Update $\rho_{k+1}=\min\\{\theta\rho_{k},\bar{\rho}\\}$
end for
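To see how the pieces of Algorithm 1 fit together, here is a toy end-to-end sketch for the simplest case $L=1$ with scalar data: identity activation, squared loss $0.5(z-y)^{2}$, no bias, no regularizer. All parameter values are illustrative assumptions, not the paper's settings; the substeps below use the closed-form minimizers of the corresponding proximal subproblems.

```python
# Toy iPDLAM for L = 1 and scalar data. The exact minimizer satisfies
# z = W*a0 = y, so W should converge to y/a0. Parameter values are
# illustrative assumptions.
def ipdlam_toy(y, a0, alpha=0.2, delta=1.0, rho=1.0, theta=1.1,
               rho_bar=10.0, iters=500):
    z = z_prev = 0.0
    W = W_prev = 0.0
    for _ in range(iters):
        z_hat = z + alpha * (z - z_prev)   # inertial step for z
        W_hat = W + alpha * (W - W_prev)   # inertial step for W
        z_prev, W_prev = z, W
        # z-substep: argmin 0.5(z-y)^2 + rho/2*(z - W*a0)^2 + delta*rho/2*(z - z_hat)^2
        z = (y + rho * (W * a0 + delta * z_hat)) / (1 + rho * (1 + delta))
        # W-substep: argmin rho/2*(z - W*a0)^2 + delta*rho/2*(W - W_hat)^2
        W = (z * a0 + delta * W_hat) / (a0 ** 2 + delta)
        rho = min(theta * rho, rho_bar)    # penalty warm-up
    return z, W

z_final, W_final = ipdlam_toy(y=6.0, a0=2.0)   # expect z -> 6, W -> 3
```

The warm-up line is exactly step 8 of Algorithm 1: starting from a small $\rho_{0}$ keeps the early substeps loosely coupled (fast progress), while the growing penalty eventually enforces the constraint $z=Wa_{0}$.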
## 3 Numerical Experiments
In this section, we present the numerical results of our algorithm. We follow
the experimental setup introduced in [7]. Specifically, we consider the DNN
training model (1) with ReLU activation, the squared loss, and a multilayer
perceptron (MLP) architecture with hidden layers, on two datasets, MNIST [25]
and Fashion MNIST [26]. The specific settings are summarized as follows:
* •
For the MNIST dataset, we implemented a 784-(1500$\times$2)-10 MLP (i.e., the
input dimension $d_{0}$ = 28 $\times$ 28 = 784, the output dimension $d_{3}$ =
10, and the numbers of hidden units are all 1500), and set
$\alpha=\beta=\gamma,\delta$ to different values to test the proposed
algorithm iPDLAM. The sizes of the training and test sets are 60000 and 10000,
respectively.
* •
The parameters of DLAM are adopted as the default parameters given in [15].
The learning rate of SGD and its variants (i.e., RMSprop, Adam, Adadelta,
AMSGrad, Adamax) is 0.001 (a very conservative learning rate, chosen to see
whether SGD can train the DNNs). More aggressive learning rates such as 0.01
and 0.05 have also been used, and similar training results are observed.
* •
For each experiment, we used the same mini-batch sizes (512) and
initializations for all algorithms. Specifically, all the weights ${\bf
W}_{l}^{0}$ are randomly initialized from a Gaussian distribution with a
standard deviation of 0.01 and the bias vectors are initialized as vectors of
all 0.1, while the auxiliary variables ${\bf z}_{l}$ and state variables ${\bf
a}_{l}$ are initialized by a single forward pass.
The proposed algorithm is implemented in PyTorch 1.5.0. The experiments are
conducted on a desktop computer with an Intel® Core™ i7-8700k CPU @ 4.70 GHz
(12 physical cores), 16 GB of memory, running Ubuntu 18.04 Server OS.
### 3.1 Closed-form solutions of the substeps in iPDLAM
Given that the activation $\sigma_{l}(\cdot)$ is the ReLU function, the loss
function is the $\ell_{2}$-norm, $R_{l}(\cdot)$ is 0, and bias parameters are
added to the network, the closed-form solutions of the substeps in iPDLAM are
presented in this subsection. The parameters are discussed in the following
subsections. Besides the proximal operators for ${\bf z}_{l}^{k+1}$,
$l\in[L]$, the closed-form solutions for ${\bf W}_{l}^{k+1}$ and ${\bf
b}_{l}^{k+1}$, $l\in[L]$, are:
$\small{\bf W}_{l}^{k+1}=(({\bf z}_{l}^{k+1}-{\bf b}_{l}^{k})({\bf
a}_{l-1}^{k})^{T}+\delta{\bf\hat{w}}_{l}^{k})(({\bf a}_{l-1}^{k})({\bf
a}_{l-1}^{k})^{T}+\delta{\bf I})^{-1},$ ${\bf b}_{l}^{k+1}=({\bf
z}_{l}^{k+1}-{\bf W}_{l}^{k+1}{\bf a}_{l-1}^{k}+\delta*\hat{{\bf
b}}^{k}_{l})/(1+\delta),$ (3)
in which, ${\bf a}_{0}$ is the input data.
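The updates (3) can be checked numerically. Below is a numpy sketch for one layer and a single training sample, so that ${\bf a}_{l-1}$ is a column vector (all shapes and data are illustrative assumptions): the ${\bf W}$-update solves a regularized normal equation, and the ${\bf b}$-update is a convex combination of the residual and the inertial point.

```python
import numpy as np

# Sketch of the closed-form updates in (3) for one layer and one sample:
# W minimizes ||z - W a - b||^2 + delta*||W - W_hat||_F^2, and b minimizes
# ||z - W a - b||^2 + delta*||b - b_hat||^2 (both up to a common factor).
def update_W_b(z, a_prev, b, W_hat, b_hat, delta):
    rhs = (z - b) @ a_prev.T + delta * W_hat
    gram = a_prev @ a_prev.T + delta * np.eye(a_prev.shape[0])
    W_new = rhs @ np.linalg.inv(gram)
    b_new = (z - W_new @ a_prev + delta * b_hat) / (1 + delta)
    return W_new, b_new

rng = np.random.default_rng(0)
d_out, d_in, delta = 3, 4, 0.5
z = rng.normal(size=(d_out, 1))
b = rng.normal(size=(d_out, 1))
b_hat = rng.normal(size=(d_out, 1))
a_prev = rng.normal(size=(d_in, 1))
W_hat = rng.normal(size=(d_out, d_in))
W_new, b_new = update_W_b(z, a_prev, b, W_hat, b_hat, delta)
```

Since the regularizer $\delta{\bf I}$ makes the Gram matrix invertible even for a single sample, each substep remains well posed; in practice one would solve the linear system rather than form the inverse explicitly.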
In the following, we present numerical experiments to verify the theoretical
analysis and the superior performance of iPDLAM over baselines under different
$\alpha,\beta,\gamma$ and $\delta$. Furthermore, we discuss the impact of the
variable update order on the final performance.
### 3.2 Comparisons with DLAM
Fig. 1: Comparisons with DLAM on function values, training and test accuracy
versus the iteration when $\alpha=\beta=\gamma=0.2,0.5,0.7$ on MNIST (left)
and Fashion MNIST (right).
First, we present the objective function values, training accuracy and test
accuracy when $\alpha=\beta=\gamma=0.2,0.5,0.7$ for the two datasets; $\delta$
is set to $1$. The maximum number of iterations is set to $200$. The results
of test 1 are given in Figure 1. We can see that iPDLAM outperforms DLAM:
iPDLAM achieves smaller objective function values and higher training and
test accuracy. It is worth mentioning that the curves of the function values
are not always decreasing. This fact does not contradict our theory, since we
prove that $(\xi_{k})_{k}$, rather than the function values, is decreasing
after sufficiently many iterations (Lemma 1).
From Table 1, we can see that the inertial term contributes when $0<\alpha<1$.
When $\alpha>1$, iPDLAM performs poorly. This phenomenon coincides with our
theoretical findings.
| MNIST | Fashion MNIST
---|---|---
$\alpha$ | FV | TrA | TeA | FV | TrA | TeA
0.1 | 0.14 | 95.4% | 88.2% | 0.14 | 94.6% | 93.2%
0.2 | 0.12 | 95.7% | 88.9% | 0.12 | 95.0% | 93.4%
0.3 | 0.13 | 95.8% | 89.2% | 0.13 | 95.5% | 93.7%
0.4 | 0.14 | 96.2% | 89.5% | 0.14 | 95.7% | 93.9%
0.5 | 0.14 | 96.4% | 89.3% | 0.14 | 96.2% | 94.1%
0.6 | 0.10 | 96.6% | 89.2% | 0.10 | 96.6% | 94.2%
0.7 | 0.03 | 96.8% | 89.6% | 0.09 | 96.3% | 94.2%
0.8 | 0.28 | 92.0% | 85.8% | 0.29 | 91.6% | 91.8%
0.9 | 0.61 | 95.1% | 87.6% | 0.64 | 94.6% | 93.0%
1.0 | 5510 | 89.0% | 78.1% | 5600 | 89.1% | 86.2%
1.1 | nan | nan | nan | nan | 10.4% | 10.4%
1.2 | nan | nan | nan | nan | 10.4% | 10.4%
Table 1: Function values, training accuracy and test accuracy for different
$\alpha$ ($=\beta=\gamma$) on MNIST and Fashion MNIST after 200 iterations.
_Function Values_ (FV), _Training Accuracy_ (TrA) and _Test Accuracy_ (TeA).
### 3.3 Robust performance for different values of $\delta$
In the second test, we use $\alpha=\beta=\gamma=0.7$, and the parameter
$\delta$ is set to $0.1,0.2,0.3,0.4,1$. The dataset is MNIST. The results of
the second test are given in Figure 2. In all five cases, the training and
test accuracy versus the iterations behave very similarly. The results show
that the algorithm is insensitive to $\delta$.
### 3.4 Results against classical Deep Learning optimizers
We compare iPDLAM with SGD, AdaGrad [27], Adadelta [28], Adam and Adamax
[29]. The training and test accuracy versus the iteration (epoch) for the
different algorithms are reported in Figure 3. Although our algorithm cannot
beat the classical algorithms on training accuracy, it performs better than
most of them on test accuracy. Our algorithm can learn quite good parameters
in very few iterations (fewer than 10). Moreover, the proposed iPDLAM is
always better than the classical optimizers in the first 40 iterations. It is
also the fastest one to stabilize, even when SGD fails to converge (the blue
curve at the bottom of Figure 3).
Based on the experimental results, we recommend adopting iPDLAM to train
neural networks in the first stage as a warm-up strategy, which is expected to
greatly reduce the total training time and speed up the training process.
Fig. 2: Training and test accuracy versus the iteration for different
$\delta$.
Fig. 3: Training accuracy versus the iteration and time for different
algorithms on MNIST.
### 3.5 The update order of variables
We compare the alternating minimization methods using the following cyclic
orders to update the variables:
* •
Reverse: ${\bf z}_{L}\rightarrow{\bf z}_{L-1}\rightarrow\ldots\rightarrow{\bf
z}_{1}\rightarrow{\bf W}_{L}\rightarrow{\bf
W}_{L-1}\rightarrow\ldots\rightarrow{\bf W}_{1}\rightarrow{\bf
a}_{L}\rightarrow{\bf a}_{L-1}\rightarrow\ldots\rightarrow{\bf
a}_{1}\rightarrow{\bf z}_{L}\ldots$
* •
Increase: ${\bf z}_{1}\rightarrow{\bf z}_{2}\rightarrow\ldots\rightarrow{\bf
z}_{L}\rightarrow{\bf W}_{1}\rightarrow{\bf
W}_{2}\rightarrow\ldots\rightarrow{\bf W}_{L}\rightarrow{\bf
a}_{1}\rightarrow{\bf a}_{2}\rightarrow\ldots\rightarrow{\bf
a}_{L-1}\rightarrow{\bf z}_{1}\ldots$
* •
Nested Reverse (NR): ${\bf z}_{L}\rightarrow{\bf W}_{L}\rightarrow{\bf
a}_{L-1}\rightarrow{\bf z}_{L-1}\rightarrow{\bf
W}_{L-1}\rightarrow\ldots\rightarrow{\bf a}_{2}\rightarrow{\bf
z}_{2}\rightarrow{\bf W}_{2}\rightarrow{\bf a}_{1}\rightarrow{\bf
z}_{1}\rightarrow{\bf W}_{1}\rightarrow{\bf z}_{L}\rightarrow\ldots$
The numerical results are reported in Figure 4. The variable update order has
little impact on the optimization process, which suggests that iPDLAM could be
further applied in parallel and distributed settings. Further experimental
results are provided in the supplementary material, and the implementation of
this work is publicly available.
Fig. 4: The variable update order has little impact on the optimization
process.
## 4 Conclusion
In this paper, we propose an improved alternating minimization method, named
iPDLAM, for neural network training. The development of the algorithm is
based on inertial techniques applied to the penalty formulation of the
training task. Unlike stochastic training methods, our algorithm enjoys solid
convergence guarantees, and the numerical results show that the proposed
algorithm requires fewer iterations to reach the same training and test
accuracy compared with various classical training algorithms.
## References
* [1] H. Robbins and S. Monro, “A stochastic approximation method,” The annals of mathematical statistics, pp. 400–407, 1951.
* [2] D. E. Rumelhart, G. E. Hinton, R. J. Williams, et al., “Learning representations by back-propagating errors,” Cognitive modeling, vol. 5, no. 3, p. 1, 1988.
* [3] S. Hochreiter, Y. Bengio, P. Frasconi, J. Schmidhuber, et al., “Gradient flow in recurrent nets: the difficulty of learning long-term dependencies,” 2001\.
* [4] G. Taylor, R. Burmeister, Z. Xu, B. Singh, A. Patel, and T. Goldstein, “Training neural networks without gradients: A scalable admm approach,” in International conference on machine learning, pp. 2722–2731, 2016.
* [5] Z. Zhang, Y. Chen, and V. Saligrama, “Efficient training of very deep neural networks for supervised hashing,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1487–1495, 2016.
* [6] Z. Zhang and M. Brand, “Convergent block coordinate descent for training tikhonov regularized deep neural networks,” in Advances in Neural Information Processing Systems, pp. 1721–1730, 2017.
* [7] J. Wang, F. Yu, X. Chen, and L. Zhao, “Admm for efficient deep learning with global convergence,” arXiv preprint arXiv:1905.13611, 2019.
* [8] A. Askari, G. Negiar, R. Sambharya, and L. E. Ghaoui, “Lifted neural networks,” arXiv preprint arXiv:1805.01532, 2018.
* [9] A. Choromanska, E. Tandon, S. Kumaravel, R. Luss, I. Rish, B. Kingsbury, R. Tejwani, and D. Bouneffouf, “Beyond backprop: Alternating minimization with co-activation memory,” stat, vol. 1050, p. 24, 2018.
* [10] J. Li, C. Fang, and Z. Lin, “Lifted proximal operator machines,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 4181–4188, 2019.
* [11] F. Gu, A. Askari, and L. E. Ghaoui, “Fenchel lifted networks: A lagrange relaxation of neural network training,” arXiv preprint arXiv:1811.08039, 2018.
* [12] M. Carreira-Perpinan and W. Wang, “Distributed optimization of deeply nested systems,” in Artificial Intelligence and Statistics, pp. 10–19, 2014.
* [13] P. L. Combettes and J.-C. Pesquet, “Proximal splitting methods in signal processing,” in Fixed-point algorithms for inverse problems in science and engineering, pp. 185–212, Springer, 2011.
* [14] V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in Proceedings of the 27th international conference on machine learning (ICML-10), pp. 807–814, 2010.
* [15] J. Zeng, T. T.-K. Lau, S. Lin, and Y. Yao, “Global convergence of block coordinate descent in deep learning,” vol. 97 of Proceedings of Machine Learning Research, (Long Beach, California, USA), pp. 7313–7323, PMLR, 09–15 Jun 2019.
* [16] T. Sun, D. Li, H. Jiang, and Z. Quan, “Iteratively reweighted penalty alternating minimization methods with continuation for image deblurring,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3757–3761, IEEE, 2019.
* [17] B. T. Polyak, “Some methods of speeding up the convergence of iteration methods,” USSR Computational Mathematics and Mathematical Physics, vol. 4, no. 5, pp. 1–17, 1964.
* [18] P. Ochs, Y. Chen, T. Brox, and T. Pock, “ipiano: Inertial proximal algorithm for nonconvex optimization,” SIAM Journal on Imaging Sciences, vol. 7, no. 2, pp. 1388–1419, 2014.
* [19] T. Pock and S. Sabach, “Inertial proximal alternating linearized minimization (ipalm) for nonconvex and nonsmooth problems,” SIAM Journal on Imaging Sciences, vol. 9, no. 4, pp. 1756–1787, 2016.
* [20] N. Loizou and P. Richtárik, “Momentum and stochastic momentum for stochastic gradient, newton, proximal point and subspace descent methods,” arXiv preprint arXiv:1712.09677, 2017.
* [21] N. Loizou and P. Richtárik, “Accelerated gossip via stochastic heavy ball method,” in 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 927–934, IEEE, 2018.
* [22] T. Sun, D. Li, Z. Quan, H. Jiang, S. Li, and Y. Dou, “Heavy-ball algorithms always escape saddle points,” IJCAI, 2019.
* [23] Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
* [24] T. Sun, L. Qiao, and D. Li, “Bregman reweighted alternating minimization and its application to image deblurring,” Information Sciences, 2019.
* [25] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, et al., “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
* [26] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms,” arXiv preprint arXiv:1708.07747, 2017.
* [27] J. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” Journal of Machine Learning Research, vol. 12, no. Jul, pp. 2121–2159, 2011.
* [28] M. D. Zeiler, “Adadelta: an adaptive learning rate method,” arXiv preprint arXiv:1212.5701, 2012.
* [29] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” ICLR, 2014.
# Edge states for second order elliptic operators
David Gontier CEREMADE, University of Paris-Dauphine, PSL University, 75016
Paris, France<EMAIL_ADDRESS>
###### Abstract.
We present a general framework to study edge states for second order elliptic
operators. We associate an integer valued index to some bulk materials, and we
prove that for any junction between two such materials, localised states must
appear at the boundary whenever the indices differ.
© 2020 by the author. This paper may be reproduced, in its entirety, for non-
commercial purposes.
## 1\. Introduction and statement of the main results
The bulk-edge correspondence states that one can associate an integer valued
index ${\mathcal{I}}\in{\mathbb{Z}}$ to some bulk materials (represented here
by Schrödinger (PDE) or Hill’s (ODE) operators). When the material is cut,
edge states appear at the boundary whenever ${\mathcal{I}}\neq 0$. In
addition, it is believed that any junction between a left and a right
material having indices ${\mathcal{I}}_{L}$ and ${\mathcal{I}}_{R}$ must also
have edge states near the junction whenever
${\mathcal{I}}_{L}\neq{\mathcal{I}}_{R}$. We prove this fact in this paper.
Since the original works of Hatsugai [Hat93a, Hat93b], most studies on bulk-
edge correspondence focused on tight-binding models (e.g. [GP13, ASBVB13]),
set on half-spaces. In these tight-binding models, boundary conditions at the
cut are quite simple to describe, and it turns out that the index is
independent of these boundary conditions.
In the context of continuous models, it is unclear whether one can define an
index which is indeed independent of the chosen boundary conditions. In
[Gon20], we proved that it was the case in a simple one-dimensional model for
dislocations. We extend this work here, and give a general framework to define
the edge index for different self-adjoint extensions of Schrödinger operators.
We consider two types of continuous models. In the first part, we study
families of Hill’s operators (ODE) set on ${\mathbb{C}}^{n}$, of the form
$h_{t}:=-\partial_{xx}^{2}+V_{t},\quad\text{acting on}\quad
L^{2}({\mathbb{R}},{\mathbb{C}}^{n}),$
where $t\mapsto V_{t}$ is a continuous periodic family of bounded potentials,
with values in the set of $n\times n$ Hermitian matrices. In the second part
of the article, we study its PDE version, that is, families of Schrödinger
operators of the form
$H_{t}:=-\Delta+V_{t},\quad\text{acting on}\quad
L^{2}\left({\mathbb{R}}\times(0,1)^{d-1},{\mathbb{C}}\right).$
Here, ${\mathbb{R}}\times(0,1)^{d-1}$ is a tube in ${\mathbb{R}}^{d}$.
Projecting $H_{t}$ onto finite dimensional subspaces, we can think of the PDE
case as the $n=\infty$ version of the ODE one.
In these models, we interpret the bulk-edge index as the intersection of
Lagrangian planes on a boundary space ${\mathcal{H}}_{b}$. Roughly speaking,
this space contains the values $(\psi(0),\psi^{\prime}(0))$ of the admissible
wave-functions $\psi$. In the context of Hill’s operators, we will have
${\mathcal{H}}_{b}={\mathbb{C}}^{n}\times{\mathbb{C}}^{n}$, while for
Schrödinger operators, ${\mathcal{H}}_{b}=H^{3/2}(\Gamma)\times
H^{1/2}(\Gamma)$, where $\Gamma=\\{0\\}\times(0,1)^{d-1}$ is the cut.
The link between edge states and Lagrangian planes was already mentioned e.g.
in [ASBVB13] and [GP13] for discrete models (tight-binding approximation).
Based on the recent developments on Lagrangian planes and second order
elliptic operators by Howard, Latushkin and Sukhtayev in a series of papers
[HS16, HLS17, HLS18], we extend the picture to the continuous cases of Hill’s
and Schrödinger operators, and show that the presence of edge modes comes from
the crossings of these planes. This framework also allows us to treat both the
ODE and PDE settings, as noticed by some of these authors in [LS18],
based on the seminal work of Booß-Bavnbek and Furutani on infinite dimensional
Lagrangian planes [BBF98, BBZ13, Fur04].
Our main result can be stated as follows (here for the Hill’s case). It
extends the results of the previous works [Gon20] and [Dro18]. Let
$n\in{\mathbb{N}}\setminus\\{0\\}$ be fixed, and let
$V_{t}(x):=V(t,x):{\mathbb{T}}^{1}\times{\mathbb{R}}\to{\mathcal{S}}_{n},$ (1)
be a periodic family of matrix-valued bounded potentials (which are not
necessarily periodic in $x$). Here, ${\mathbb{T}}^{1}\approx[0,1]$ is the one-
dimensional torus, and ${\mathcal{S}}_{n}$ denotes the set of $n\times n$
hermitian matrices. We assume that $t\mapsto V_{t}$ is continuous from
${\mathbb{T}}^{1}$ to $L^{\infty}({\mathbb{R}},{\mathcal{S}}_{n})$. We
consider the family of (bulk) Hill’s operators
$h_{t}:=-\partial_{xx}^{2}+V_{t}\quad\text{acting on}\quad
L^{2}({\mathbb{R}},{\mathbb{C}}^{n}).$
For $E\in{\mathbb{R}}$, we say that $E$ is in the gap of the family $(h_{t})$
if $E\notin\sigma(h_{t})$ for all $t\in{\mathbb{T}}^{1}$. We also consider the
family of (edge) Hill’s operators
$h^{\sharp}_{D,t}:=-\partial_{xx}^{2}+V_{t}\quad\text{acting on}\quad
L^{2}({\mathbb{R}}^{+},{\mathbb{C}}^{n}),$
with Dirichlet boundary conditions at $x=0$. While $E$ does not belong to the
bulk spectrum $\sigma(h_{t})$, it may belong to the spectrum of the edge
operator $h^{\sharp}_{D,t}$. In this case, the corresponding eigenstate
is called an edge mode.
As $t$ runs through ${\mathbb{T}}^{1}\approx[0,1]$, a spectral flow may appear
for the family $h^{\sharp}_{D,t}$. We denote by ${\rm
Sf}(h^{\sharp}_{D,t},E,{\mathbb{T}}^{1})$ the net number of eigenvalues of
$h^{\sharp}_{D,t}$ going downwards in the gap where $E$ lives. We define the
bulk/edge index of $(h_{t})$ as this spectral flow:
${\mathcal{I}}(h_{t},E):={\rm
Sf}\left(h^{\sharp}_{D,t},E,{\mathbb{T}}^{1}\right).$
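Although no numerics are used in this paper, the definition above can be illustrated informally: discretising the edge operator $h^{\sharp}_{D,t}$ by finite differences on a truncated interval, one can follow its eigenvalues in a spectral gap as $t$ runs over the torus. The sketch below does this for a toy scalar ($n=1$) potential family; the potential, interval length and grid size are illustrative assumptions, not taken from the text.

```python
import numpy as np

def edge_hill_eigs(t, L=20.0, N=400):
    """Eigenvalues of a finite-difference Dirichlet discretisation of
    h_t = -d^2/dx^2 + V_t on [0, L], a truncated stand-in for the half-line.
    The scalar potential family V_t(x) = 2*cos(2*pi*t - x) is a toy example."""
    x = np.linspace(0.0, L, N + 2)[1:-1]      # interior grid (Dirichlet at 0 and L)
    dx = x[1] - x[0]
    V = 2.0 * np.cos(2.0 * np.pi * t - x)
    h = (np.diag(2.0 / dx**2 + V)
         + np.diag(-np.ones(N - 1) / dx**2, 1)
         + np.diag(-np.ones(N - 1) / dx**2, -1))
    return np.linalg.eigvalsh(h)              # real, sorted eigenvalues

# Follow the low-lying spectrum as t runs over the torus T^1 = [0, 1]:
ts = np.linspace(0.0, 1.0, 21)
flows = np.array([edge_hill_eigs(t)[:5] for t in ts])
```

Counting the net number of these eigenvalue curves crossing a fixed energy $E$ in a gap downwards would yield the spectral flow defining ${\mathcal{I}}(h_{t},E)$.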
Our main theorem is the following (see Theorem 30 for the proof in the Hill’s
case, and Theorem 40 for the one in the Schrödinger case).
###### Theorem 1 (Bulk-edge correspondence for junctions).
Let $t\mapsto V_{R,t}$ and $t\mapsto V_{L,t}$ be two continuous periodic
families of bounded potentials on ${\mathbb{R}}$. Let $E\in{\mathbb{R}}$ be in
the gap of both corresponding (bulk) Hill’s operators $(h_{L,t})$ and
$(h_{R,t})$. Let $\chi:{\mathbb{R}}\to[0,1]$ be any switch function,
satisfying $\chi(x)=1$ for $x<-X$ and $\chi(x)=0$ for $x>X$ for some $X>0$
large enough, and let
$h^{\chi}_{t}:=-\partial_{xx}^{2}+V_{L,t}(x)\chi(x)+V_{R,t}(x)(1-\chi(x)).$
Then
${\rm
Sf}\left(h^{\chi}_{t},E,{\mathbb{T}}^{1}\right)={\mathcal{I}}(h_{R,t},E)-{\mathcal{I}}(h_{L,t},E).$
The operator $h^{\chi}_{t}$ is a domain wall operator. On the far left, we see
the potential $V_{L,t}$, while on the far right, we see $V_{R,t}$, so this
operator models a junction between a left potential and a right one.
The proofs in the Hill’s and Schrödinger cases are quite different. In the
Hill’s case, the boundary space is finite dimensional, and we are able to
describe the spectral flow precisely. We prove in particular that the bulk/edge index is independent of the
chosen boundary conditions. We interpret this spectral flow as a Maslov index
of some pair of Lagrangian planes, and as the winding number of some
unitaries. This somehow generalises some results in [GP13] or [ASBVB13] for
tight-binding models. The arguments we use are model independent, and can be
generalised in many ways.
### 1.1. Plan of the paper
In Section 2, we recall some basic facts on symplectic spaces and self-adjoint
extensions of operators. We then prove our results concerning Hill’s operators
in Section 3, and explain how to adapt the proofs for Schrödinger operators in
Section 4. We discuss the role of the boundary space
${\mathcal{H}}_{b}=H^{3/2}(\Gamma)\times H^{1/2}(\Gamma)$ in Appendix A.
### 1.2. Notation of the paper
We denote by ${\mathbb{N}}:=\\{1,2,3,\cdots\\}$ and by
${\mathbb{N}}_{0}:=\\{0,1,2,3,\cdots\\}$. For $\Omega\subset{\mathbb{R}}^{d}$
an open set, we denote by $L^{p}(\Omega,{\mathbb{C}})$ the usual Lebesgue
space, and by $H^{s}(\Omega,{\mathbb{C}})$ the Sobolev ones. The set
$H^{s}_{0}(\Omega,{\mathbb{C}})$ is the completion of
$C^{\infty}_{0}(\Omega,{\mathbb{C}})$ for the $H^{s}$ norm.
Let ${\mathcal{H}}_{1}$ and ${\mathcal{H}}_{2}$ be two Hilbert spaces. For a
bounded operator $A:{\mathcal{H}}_{1}\to{\mathcal{H}}_{2}$, its dual $A^{*}$
is the map from ${\mathcal{H}}_{2}$ to ${\mathcal{H}}_{1}$ such that
$\forall x_{1}\in{\mathcal{H}}_{1},\ \forall
x_{2}\in{\mathcal{H}}_{2},\quad\langle
x_{1},A^{*}x_{2}\rangle_{{\mathcal{H}}_{1}}=\langle
Ax_{1},x_{2}\rangle_{{\mathcal{H}}_{2}}.$
The operator $A$ is unitary if $A^{*}A={\mathbb{I}}_{{\mathcal{H}}_{1}}$ and
$AA^{*}={\mathbb{I}}_{{\mathcal{H}}_{2}}$.
For $E$ a Banach space, we say that a map $t\mapsto v(t)\in E$ is continuously
differentiable if $v^{\prime}(t)$ is well-defined in $E$ for all $t$ (that is
$\|v^{\prime}(t)\|_{E}<\infty$), and if $t\mapsto v^{\prime}(t)$ is
continuous.
### 1.3. Acknowledgements
This work was funded by the CNRS international cooperation tool PICS.
## 2\. First facts and notations
### 2.1. Lagrangian planes in complex Hilbert spaces
Let us first recall some basic facts on symplectic Banach/Hilbert spaces. In
the original work of Maslov [MBA72], popularised by Arnol’d [Arn67], the
authors consider real Banach spaces $E$. Following the recent developments, we
present the theory for complex Banach spaces.
#### 2.1.1. Basics in symplectic spaces
Let $E$ be a complex Banach space. A symplectic form on $E$ is a non
degenerate continuous sesquilinear form $\omega:E\times E\to{\mathbb{C}}$ such
that
$\forall x,y\in E,\quad\omega(x,y)=-\overline{\omega(y,x)}.$
For $\ell$ a vectorial subspace of $E$, we denote by
$\ell^{\circ}:=\left\\{x\in E,\quad\forall
y\in\ell,\quad\omega(x,y)=0\right\\}.$
The space $\ell^{\circ}$ is always closed. Such a subspace is called isotropic
if $\ell\subset\ell^{\circ}$, co-isotropic if $\ell^{\circ}\subset\ell$, and
Lagrangian if $\ell=\ell^{\circ}$. We also say that $\ell$ is a Lagrangian
plane in the latter case. The set of all Lagrangian planes of $E$, sometimes
called the Lagrangian Grassmannian, is denoted by $\Lambda(E)$.
###### Example 2 (In ${\mathbb{R}}^{2n}$).
In the real Hilbert space $E={\mathbb{R}}^{n}\times{\mathbb{R}}^{n}$, the
canonical symplectic form is given by (we write ${\mathbf{x}}=(x,x^{\prime})$,
${\mathbf{y}}=(y,y^{\prime})$, etc. the elements in
${\mathbb{R}}^{n}\times{\mathbb{R}}^{n}$)
$\forall{\mathbf{x}},{\mathbf{y}}\in{\mathbb{R}}^{n}\times{\mathbb{R}}^{n},\quad\omega({\mathbf{x}},{\mathbf{y}}):=\langle
x,y^{\prime}\rangle_{{\mathbb{R}}^{n}}-\langle
x^{\prime},y\rangle_{{\mathbb{R}}^{n}}.$
When $n=1$, the Lagrangian planes are all the one-dimensional vectorial
subspaces of ${\mathbb{R}}^{2}$. Conversely, if $({\mathbb{R}}^{N},\omega)$ is
a symplectic space, then $N=2n$ is even, and all Lagrangian planes are of
dimension $n$.
###### Example 3 (In ${\mathbb{C}}^{2n}$).
Similarly, in the complex Hilbert space ${\mathbb{C}}^{2n}$, the canonical
symplectic form is given by
$\forall{\mathbf{z}}_{1},{\mathbf{z}}_{2}\in{\mathbb{C}}^{n}\times{\mathbb{C}}^{n},\quad\omega({\mathbf{z}}_{1},{\mathbf{z}}_{2}):=\langle
z_{1},z_{2}^{\prime}\rangle_{{\mathbb{C}}^{n}}-\langle
z_{1}^{\prime},z_{2}\rangle_{{\mathbb{C}}^{n}}.$
When $n=1$ for instance, the Lagrangian planes are the one-dimensional
vectorial spaces $L={\rm Vect}_{\mathbb{C}}({\mathbf{z}})$ with
${\mathbf{z}}=(z,z^{\prime})$ satisfying the extra condition
$\overline{z}z^{\prime}\in{\mathbb{R}}$. Up to a phase, we may always assume
$z\in{\mathbb{R}}$, in which case $z^{\prime}\in{\mathbb{R}}$ as well. So the
Lagrangian planes are the one-dimensional subspaces of ${\mathbb{C}}^{2}$ of
the form ${\rm Vect}_{\mathbb{C}}({\mathbf{z}})$ with
${\mathbf{z}}\in{\mathbb{R}}^{2}$.
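As an informal numerical check of Example 3 (the vectors below are arbitrary), the line spanned by ${\mathbf{z}}=(z,z^{\prime})$ is isotropic for the canonical form precisely when $\overline{z}z^{\prime}\in{\mathbb{R}}$:

```python
import numpy as np

def omega(u, v):
    """Canonical symplectic form on C^1 x C^1, conjugate-linear in u."""
    return np.conj(u[0]) * v[1] - np.conj(u[1]) * v[0]

z_lag = np.array([1.0 + 1.0j, 2.0 + 2.0j])   # conj(z)*z' = 4 is real
z_not = np.array([1.0 + 0.0j, 1.0j])         # conj(z)*z' = i is not real

assert abs(omega(z_lag, z_lag)) < 1e-12      # the line spanned by z_lag is Lagrangian
assert abs(omega(z_not, z_not)) > 1.0        # the line spanned by z_not is not isotropic
```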
###### Example 4 (In ${\mathbb{C}}^{N}$).
Another example is given by the symplectic form
$\forall{\mathbf{z}}_{1},{\mathbf{z}}_{2}\in{\mathbb{C}}^{N},\quad\widetilde{\omega}({\mathbf{z}}_{1},{\mathbf{z}}_{2})={\mathrm{i}}\langle{\mathbf{z}}_{1},{\mathbf{z}}_{2}\rangle_{{\mathbb{C}}^{N}}.$
With this symplectic form, a vector ${\mathbf{z}}\in{\mathbb{C}}^{N}$ is never
isotropic, since
$\widetilde{\omega}({\mathbf{z}},{\mathbf{z}})={\mathrm{i}}\|{\mathbf{z}}\|^{2}\neq
0$ for $z\neq 0$. In particular, $({\mathbb{C}}^{N},\widetilde{\omega})$ does
not have Lagrangian subspaces.
We record the following result.
###### Lemma 5.
If $\ell_{1}\subset\ell_{1}^{\circ}$ and $\ell_{2}\subset\ell_{2}^{\circ}$ are
two isotropic subspaces with $\ell_{1}+\ell_{2}=E$, then
$\ell_{1}=\ell_{1}^{\circ},\quad\ell_{2}=\ell_{2}^{\circ},\quad\text{are
Lagrangians, and}\quad\ell_{1}\oplus\ell_{2}=E.$
###### Proof.
Since $\ell_{1}+\ell_{2}=E$, we have
$\\{0\\}=\ell_{1}^{\circ}\cap\ell_{2}^{\circ}$. In particular,
$\ell_{1}\cap\ell_{2}\subset\ell_{1}^{\circ}\cap\ell_{2}^{\circ}=\\{0\\}$ as
well, so $\ell_{1}\oplus\ell_{2}=E$. Let $x\in\ell_{1}^{\circ}\subset E$, and
write $x=x_{1}+x_{2}$ with $x_{1}\in\ell_{1}$ and $x_{2}\in\ell_{2}$. Then, we
have $x_{2}\in\ell_{1}^{\circ}$, so
$x_{2}\in\ell_{1}^{\circ}\cap\ell_{2}\subset\ell_{1}^{\circ}\cap\ell_{2}^{\circ}=\\{0\\}$.
This proves that $x=x_{1}\in\ell_{1}$, hence $\ell_{1}^{\circ}=\ell_{1}$. The
proof for $\ell_{2}$ is similar. ∎
#### 2.1.2. Lagrangian planes of Hilbert spaces and unitaries.
In the case where $E={\mathcal{H}}_{b}$ is a Hilbert space, with inner
product $\langle\cdot,\cdot\rangle_{{\mathcal{H}}_{b}}$, for all
$x\in{\mathcal{H}}_{b}$, the map
$T_{x}:y\mapsto\omega(x,y)$
is linear and bounded, so, by Riesz’ representation theorem, there exists
$v\in{\mathcal{H}}_{b}$ so that $T_{x}(y)=\langle
v,y\rangle_{{\mathcal{H}}_{b}}$. We denote this element by $Jx:=v$. This
defines an operator $J:{\mathcal{H}}_{b}\to{\mathcal{H}}_{b}$, satisfying
$\forall x,y\in{\mathcal{H}}_{b},\quad\omega(x,y)=\langle
x,Jy\rangle_{{\mathcal{H}}_{b}}.$
In particular, since $\omega$ is bounded, we have
$\|Jy\|_{{\mathcal{H}}_{b}}^{2}=\langle
Jy,Jy\rangle_{{\mathcal{H}}_{b}}=\omega(Jy,y)\leq C_{\omega}\|Jy\|\cdot\|y\|,$
so $J$ is bounded. In addition, from the relation
$\omega(x,y)=-\overline{\omega(y,x)}$, we get that
$\langle x,Jy\rangle_{{\mathcal{H}}_{b}}=-\langle
Jx,y\rangle_{{\mathcal{H}}_{b}}.$
In other words, $J=-J^{*}$. Finally, since $\omega$ is not degenerate, we have
${\rm Ker}(J)=\\{0\\}$.
###### Example 6.
On ${\mathbb{C}}^{2n}$ with the canonical symplectic form $\omega(x,y)=\langle
x_{1},y_{2}\rangle_{{\mathbb{C}}^{n}}-\langle
x_{2},y_{1}\rangle_{{\mathbb{C}}^{n}}$, we have
$J=\begin{pmatrix}0_{n}&{\mathbb{I}}_{n}\\\
-{\mathbb{I}}_{n}&0_{n}\end{pmatrix}.$
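One can verify Example 6 numerically (an informal sketch; the dimension and the random vectors are arbitrary): the matrix $J$ represents $\omega$, is skew self-adjoint, and squares to $-{\mathbb{I}}$:

```python
import numpy as np

n = 3
I, Z = np.eye(n), np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])              # the matrix of Example 6

rng = np.random.default_rng(0)
x = rng.normal(size=2 * n) + 1j * rng.normal(size=2 * n)
y = rng.normal(size=2 * n) + 1j * rng.normal(size=2 * n)

# omega(x, y) = <x_1, y_2> - <x_2, y_1>  (np.vdot conjugates its first argument)
omega_xy = np.vdot(x[:n], y[n:]) - np.vdot(x[n:], y[:n])
assert np.allclose(omega_xy, np.vdot(x, J @ y))   # omega(x, y) = <x, Jy>
assert np.allclose(J.T, -J)                       # J = -J*, skew self-adjoint
assert np.allclose(J @ J, -np.eye(2 * n))         # J^2 = -I
```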
Later in the article, we will make the following Assumption A:
$\textbf{Assumption A:}\quad J^{2}=-{\mathbb{I}}_{{\mathcal{H}}_{b}}.$ (2)
In this case, since $J$ is bounded skew self-adjoint with
$J^{2}=-{\mathbb{I}}$, we can write
${\mathcal{H}}_{b}={\rm Ker}\left(J-{\mathrm{i}}\right)\oplus{\rm
Ker}\left(J+{\mathrm{i}}\right).$ (3)
The hermitian form $-{\mathrm{i}}\omega$ is positive definite on ${\rm
Ker}\left(J-{\mathrm{i}}\right)$, negative definite on ${\rm
Ker}\left(J+{\mathrm{i}}\right)$. We have
$\forall x\in{\rm
Ker}(J-{\mathrm{i}}),\quad\omega(x,x)={\mathrm{i}}\|x\|_{{\mathcal{H}}_{b}}^{2},\quad\forall
y\in{\rm
Ker}(J+{\mathrm{i}}),\quad\omega(y,y)=-{\mathrm{i}}\|y\|_{{\mathcal{H}}_{b}}^{2},$
and finally
$\forall x\in{\rm Ker}\left(J-{\mathrm{i}}\right),\ \forall y\in{\rm
Ker}\left(J+{\mathrm{i}}\right),\quad\omega(x,y)=0.$ (4)
The following result goes back to Leray in his seminar [Ler78] (see also
[BBF98] and [BBZ13, Lemma 2 and 3]). We skip its proof for the sake of
brevity.
###### Lemma 7.
If Assumption A holds, then there is a one-to-one correspondence between the
Lagrangian planes $\ell$ of ${\mathcal{H}}_{b}$ and the unitaries $U$ from
${\rm Ker}\left(J-{\mathrm{i}}\right)$ to ${\rm
Ker}\left(J+{\mathrm{i}}\right)$, with
$\ell=\left\\{x+Ux,\quad x\in{\rm Ker}\left(J-{\mathrm{i}}\right)\right\\}.$
###### Corollary 8.
If $\dim{\rm Ker}\left(J-{\mathrm{i}}\right)\neq\dim{\rm
Ker}\left(J+{\mathrm{i}}\right)$, then there are no Lagrangian planes. This
happens for instance for the symplectic space
$({\mathbb{C}}^{n},\widetilde{\omega})$, with
$\widetilde{\omega}(z,z^{\prime})={\mathrm{i}}\langle
z,z^{\prime}\rangle_{{\mathbb{C}}^{n}}$ (see Example 4), for which we have
$Jz={\mathrm{i}}z$, so ${\rm Ker}(J-{\mathrm{i}})={\mathbb{C}}^{n}$ while
${\rm Ker}(J+{\mathrm{i}})=\\{0\\}$.
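A quick numerical confirmation of the last point (illustrative only): for $\widetilde{\omega}$ we have $J={\mathrm{i}}{\mathbb{I}}$, whose spectrum consists of ${\mathrm{i}}$ alone, so the two kernels have different dimensions:

```python
import numpy as np

n = 3
J = 1j * np.eye(n)          # omega~(z1, z2) = i<z1, z2> = <z1, J z2>
ev = np.linalg.eigvals(J)

assert np.allclose(J.conj().T, -J)               # J is skew self-adjoint
assert np.sum(np.abs(ev - 1j) < 1e-12) == n      # Ker(J - i) = C^n
assert np.sum(np.abs(ev + 1j) < 1e-12) == 0      # Ker(J + i) = {0}: no Lagrangian planes
```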
The next Lemma shows that the crossing of two Lagrangian planes can be read
from their respective unitaries. It can be found e.g. in [BBZ13, Lemma 2]. Our
definition slightly differs from the one in [BBZ13], since we put the adjoint
on the left operator (this seems more natural for what follows).
###### Lemma 9.
Let $\ell_{1}$ and $\ell_{2}$ be two Lagrangian planes of
$\Lambda({\mathcal{H}}_{b})$, with corresponding unitaries $U_{1}$ and $U_{2}$
from ${\rm Ker}(J-{\mathrm{i}})$ to ${\rm Ker}(J+{\mathrm{i}})$. Then there is
a natural isomorphism
${\rm Ker}\left(U_{2}^{*}U_{1}-{\mathbb{I}}_{{\rm
Ker}(J-{\mathrm{i}})}\right)\approx\ell_{1}\cap\ell_{2}.$
###### Proof.
If $x^{-}\in{\rm Ker}\left(J-{\mathrm{i}}\right)$ is such that
$U_{2}^{*}U_{1}x^{-}=x^{-}$, then we have $U_{1}x^{-}=U_{2}x^{-}$ in ${\rm
Ker}(J+{\mathrm{i}})$, so $x:=x^{-}+U_{1}x^{-}=x^{-}+U_{2}x^{-}$ is in
$\ell_{1}\cap\ell_{2}$. Conversely, if $x\in\ell_{1}\cap\ell_{2}$, then,
writing $x=x^{-}+x^{+}$ shows that $U_{1}x^{-}=U_{2}x^{-}$, so
$U_{2}^{*}U_{1}x^{-}=x^{-}$. ∎
#### 2.1.3. Another unitary
In Section 3.1 below, we will consider periodic paths of Lagrangians
$\ell_{1}(t)$ and $\ell_{2}(t)$, and define the Maslov index of the pair
$(\ell_{1},\ell_{2})$. When ${\mathcal{H}}_{b}$ is finite dimensional, we will
prove that it equals the winding number of the determinant of
$U_{2}^{*}(t)U_{1}(t)$. Unfortunately, since $U_{1}$ and $U_{2}$ are not
endomorphisms, we cannot split $\det(U_{2}^{*}U_{1})$ into
$\det(U_{1})/\det(U_{2})$. In this section, we present another one-to-one
correspondence between Lagrangian planes and some other unitaries (which will
be endomorphisms). The results of this section are new to the best of our
knowledge.
We now make the stronger assumption that ${\mathcal{H}}_{b}$ is of the form
${\mathcal{H}}_{b}={\mathcal{H}}_{1}\times{\mathcal{H}}_{2}$, where
${\mathcal{H}}_{1}$ and ${\mathcal{H}}_{2}$ are two complex Hilbert spaces,
and that, relative to this decomposition, $J$ is of the form
$\textbf{Assumption B:}\quad J=\begin{pmatrix}0&V^{*}\\\
-V&0\end{pmatrix},\quad\text{for some (fixed) unitary
$V:{\mathcal{H}}_{1}\to{\mathcal{H}}_{2}$}.$ (5)
It implies $J^{2}=-{\mathbb{I}}$ as before. Furthermore, we can identify
${\rm Ker}(J-{\mathrm{i}})=\begin{pmatrix}1\\\
{\mathrm{i}}V\end{pmatrix}{\mathcal{H}}_{1},\quad\text{and}\quad{\rm
Ker}(J+{\mathrm{i}})=\begin{pmatrix}1\\\
-{\mathrm{i}}V\end{pmatrix}{\mathcal{H}}_{1}.$
Defining the maps $Q^{\pm}:{\mathcal{H}}_{1}\to{\rm Ker}(J\pm{\mathrm{i}})$ by
$\forall x\in{\mathcal{H}}_{1},\quad
Q_{\pm}(x):=\frac{1}{\sqrt{2}}\begin{pmatrix}x\\\
\mp{\mathrm{i}}Vx\end{pmatrix},\quad\text{with dual}\quad
Q_{\pm}^{*}\begin{pmatrix}x\\\ \mp{\mathrm{i}}Vx\end{pmatrix}=\sqrt{2}x,$
we see that, for all $x\in{\mathcal{H}}_{1}$,
$\left\|Q^{\pm}x\right\|^{2}_{{\mathcal{H}}_{b}}=\frac{1}{2}\left\|\begin{pmatrix}x\\\
\mp{\mathrm{i}}Vx\end{pmatrix}\right\|^{2}_{{\mathcal{H}}_{b}}=\frac{1}{2}\left\|x\right\|^{2}_{{\mathcal{H}}_{1}}+\frac{1}{2}\left\|Vx\right\|^{2}_{{\mathcal{H}}_{2}}=\|x\|^{2}_{{\mathcal{H}}_{1}},$
so $Q^{\pm}$ are unitaries. In particular, if $U$ is a unitary from ${\rm
Ker}(J-{\mathrm{i}})$ to ${\rm Ker}(J+{\mathrm{i}})$, then
${\mathcal{U}}:=Q_{+}^{*}UQ_{-}$
is a unitary from ${\mathcal{H}}_{1}$ to itself, so an endomorphism. In what
follows, we use straight letters $U$ for unitaries from ${\rm
Ker}(J-{\mathrm{i}})\to{\rm Ker}(J+{\mathrm{i}})$, and curly letters
${\mathcal{U}}$ for unitaries of ${\mathcal{H}}_{1}$. We therefore proved the
following.
###### Lemma 10.
If Assumption B holds, then there is a one-to-one correspondence between the
Lagrangian planes $\ell$ of
$({\mathcal{H}}_{1}\times{\mathcal{H}}_{2},\omega)$ and the unitaries
${\mathcal{U}}$ of ${\mathcal{H}}_{1}$, with
$\ell=\left\\{\begin{pmatrix}1\\\
{\mathrm{i}}V\end{pmatrix}x+\begin{pmatrix}1\\\
-{\mathrm{i}}V\end{pmatrix}{\mathcal{U}}x,\quad
x\in{\mathcal{H}}_{1}\right\\}.$
In addition, if $\ell_{1}$ and $\ell_{2}$ are two Lagrangian planes with
corresponding unitaries ${\mathcal{U}}_{1}$ and ${\mathcal{U}}_{2}$, then
there is a natural isomorphism
${\rm
Ker}\left({\mathcal{U}}_{2}^{*}{\mathcal{U}}_{1}-{\mathbb{I}}_{{\mathcal{H}}_{1}}\right)\approx\ell_{1}\cap\ell_{2}.$
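The correspondence of Lemma 10 can be tested numerically (an illustrative sketch with $V={\mathbb{I}}$ and arbitrary unitaries, not part of the proof): the dimension of ${\rm Ker}({\mathcal{U}}_{2}^{*}{\mathcal{U}}_{1}-{\mathbb{I}})$ matches that of $\ell_{1}\cap\ell_{2}$, computed from explicit bases of the planes (the normalising factors $1/\sqrt{2}$ are dropped since they do not change the span):

```python
import numpy as np

def lagrangian_basis(U, V):
    """Column basis of the plane of Lemma 10: vectors (x + Ux, iV(x - Ux))."""
    n = U.shape[0]
    I = np.eye(n)
    return np.vstack([I + U, 1j * V @ (I - U)])

n = 2
V = np.eye(n)                                    # Assumption B with V = identity
U1 = np.diag([1.0, np.exp(0.7j)])                # two unitaries of C^2
U2 = np.eye(n)

# dim Ker(U2* U1 - I): multiplicity of the eigenvalue 1 of U2* U1
k = int(np.sum(np.abs(np.linalg.eigvals(U2.conj().T @ U1) - 1.0) < 1e-10))

# dim(l1 cap l2) = dim l1 + dim l2 - dim(l1 + l2)
B = np.hstack([lagrangian_basis(U1, V), lagrangian_basis(U2, V)])
inter = 2 * n - np.linalg.matrix_rank(B)
assert k == inter == 1
```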
### 2.2. Self-adjoint extensions of Hill’s operators
We now review some basic facts on self-adjoint operators (see e.g. [RS75,
Chapter X.1] for a complete introduction). We first recall some general
definitions, and then focus on second order elliptic operators. We show the
connection with symplectic spaces using the second Green’s identity (see Lemma
11 below).
#### 2.2.1. Self-adjoint operators
Let ${\mathcal{H}}$ be a separable Hilbert space, and let $A$ with dense
domain ${\mathcal{D}}_{A}$ be any operator on ${\mathcal{H}}$. In the sequel,
we sometimes write $(A,{\mathcal{D}}_{A})$. The adjoint of
$(A,{\mathcal{D}}_{A})$ is denoted by $(A^{*},{\mathcal{D}}_{A^{*}})$.
For $A$ a symmetric, hence closable, operator on ${\mathcal{H}}$, we denote by
$(A_{\rm min},{\mathcal{D}}_{\rm min})$ its closure. The adjoint of $(A_{\rm
min},{\mathcal{D}}_{\rm min})$ is denoted by $(A_{\rm max},{\mathcal{D}}_{\rm
max})$. Since $A$ is symmetric, we have $A_{\rm min}\subset A_{\rm max}$
($A_{\rm max}$ is an extension of $A_{\rm min}$). The operator $A_{\rm min}$
is self-adjoint iff ${\mathcal{D}}_{\rm min}={\mathcal{D}}_{\rm max}$.
Otherwise, any self-adjoint extension of $A$, if it exists, is of the form
$(\widetilde{A},\widetilde{{\mathcal{D}}})$ with
$A_{\rm min}\subset\widetilde{A}\subset A_{\rm max},\quad\text{in the
sense}\quad{\mathcal{D}}_{\rm
min}\subset\widetilde{{\mathcal{D}}}\subset{\mathcal{D}}_{\rm max}.$
In particular, once ${\mathcal{D}}_{\rm min}$ and ${\mathcal{D}}_{\rm max}$
have been identified, the self-adjoint extensions are simply given by domains
$\widetilde{{\mathcal{D}}}$ with ${\mathcal{D}}_{\rm
min}\subset\widetilde{{\mathcal{D}}}\subset{\mathcal{D}}_{\rm max}$, and the
operator $\widetilde{A}$ acts on this domain via
$\forall x\in\widetilde{{\mathcal{D}}},\quad\widetilde{A}x:=A_{\rm max}x.$
In what follows, we sometimes write $(A_{\rm max},\widetilde{{\mathcal{D}}})$
instead of $(\widetilde{A},\widetilde{{\mathcal{D}}})$ to insist that only the
domain matters.
There are several ways to identify the self-adjoint extensions of $A$. The
original proof by von Neumann [vN30] uses the Cayley transform. As noticed in
[RS75, Chapter X.1] following [DS65], the connection with boundary values is
not so clear with this approach. Another approach can be found e.g. in
[BBF98, Section 3.1], where the authors give a correspondence between the
self-adjoint extensions of $A$ and the Lagrangian planes of the abstract space
${\mathcal{D}}_{\rm max}/{\mathcal{D}}_{\rm min}$, with the symplectic form
$\forall[x],[y]\in{\mathcal{D}}_{\rm max}/{\mathcal{D}}_{\rm
min},\quad\omega\left([x],[y]\right):=\langle x,A_{\rm
max}y\rangle_{{\mathcal{H}}}-\langle A_{\rm max}x,y\rangle_{\mathcal{H}}.$
Again, the connection with boundary conditions is not so clear in this
setting.
In what follows, we present the approach in [LS18] (see also [CJM15]), which
is specific to second order elliptic operators. It uses the second Green’s
identity.
#### 2.2.2. Self-adjoint extensions of Hill’s operators on the semi line
We first present the theory in the case where $A=h$ is a second order ODE
(Hill’s operator). We postpone the analysis of general second order elliptic
operators to Section 4 below.
Let $n\in{\mathbb{N}}$ and let $V:{\mathbb{R}}\to{\mathcal{S}}_{n}$ be a
bounded potential with values in ${\mathcal{S}}_{n}$, the set of $n\times n$
hermitian matrices. We consider the Hill’s operator
$h:=-\partial_{xx}^{2}+V(x)\quad\text{acting
on}\quad{\mathcal{H}}:=L^{2}({\mathbb{R}},{\mathbb{C}}^{n}).$
The bulk operator $h$ with core domain
$C^{\infty}_{0}({\mathbb{R}},{\mathbb{C}}^{n})$ is symmetric. Since the
potential $V$ is bounded, the operator $h$ is essentially self-adjoint, with
domain (see [Kat13, Chapter 4])
${\mathcal{D}}:={\mathcal{D}}_{\rm min}={\mathcal{D}}_{\rm
max}=H^{2}({\mathbb{R}},{\mathbb{C}}^{n}).$
When restricting this operator to the half line, we obtain the edge operator
$h^{\sharp}:=-\partial_{xx}^{2}+V(x)\quad\text{acting
on}\quad{\mathcal{H}}^{\sharp}:=L^{2}({\mathbb{R}}^{+},{\mathbb{C}}^{n}).$
On the core $C^{\infty}_{0}({\mathbb{R}}^{+},{\mathbb{C}}^{n})$, it is
symmetric, and its closure has domain
${\mathcal{D}}_{\rm
min}^{\sharp}:=H^{2}_{0}({\mathbb{R}}^{+},{\mathbb{C}}^{n}).$
The adjoint of $(h^{\sharp}_{\rm min},{\mathcal{D}}_{\rm min}^{\sharp})$ is
the operator $(h^{\sharp}_{\rm max},{\mathcal{D}}_{\rm max}^{\sharp})$ where
$h_{\rm max}^{\sharp}:=-\partial_{xx}^{2}+V(x)$ has domain
${\mathcal{D}}_{\rm max}^{\sharp}:=H^{2}({\mathbb{R}}^{+},{\mathbb{C}}^{n}).$
We have ${\mathcal{D}}_{\rm min}^{\sharp}\subsetneq{\mathcal{D}}_{\rm
max}^{\sharp}$, so $h^{\sharp}$ is not essentially self-adjoint. This reflects
the fact that some boundary conditions must be chosen at $x=0$. The
particularity of second order elliptic operators comes from the second Green’s
identity.
###### Lemma 11 (Green’s formula).
For all $\phi,\psi\in{\mathcal{D}}_{\rm max}^{\sharp}$,
$\langle\phi,h_{\rm max}^{\sharp}\psi\rangle_{{\mathcal{H}}^{\sharp}}-\langle
h_{\rm
max}^{\sharp}\phi,\psi\rangle_{{\mathcal{H}}^{\sharp}}=\langle\phi(0),\psi^{\prime}(0)\rangle_{{\mathbb{C}}^{n}}-\langle\phi^{\prime}(0),\psi(0)\rangle_{{\mathbb{C}}^{n}}.$
This suggests to introduce the boundary space
${\mathcal{H}}_{b}:={\mathbb{C}}^{n}\times{\mathbb{C}}^{n}$
with its canonical symplectic form $\omega$ defined in Example 3. We also
introduce the map ${\rm Tr}:{\mathcal{D}}_{\rm
max}^{\sharp}\to{\mathcal{H}}_{b}$ defined by
$\forall\phi\in{\mathcal{D}}_{\rm max}^{\sharp},\quad{\rm
Tr}(\phi):=(\phi(0),\phi^{\prime}(0))\in{\mathcal{H}}_{b}.$ (6)
With these notations, the second Green’s identity reads
$\forall\phi,\psi\in{\mathcal{D}}^{\sharp}_{\rm max},\quad\langle\phi,h_{\rm
max}^{\sharp}\psi\rangle_{{\mathcal{H}}^{\sharp}}-\langle h_{\rm
max}^{\sharp}\phi,\psi\rangle_{{\mathcal{H}}^{\sharp}}=\omega\left({\rm
Tr}(\phi),{\rm Tr}(\psi)\right).$
We denote by $\|\cdot\|_{\sharp}$ the graph norm of $h^{\sharp}$, that is
$\forall\phi\in{\mathcal{D}}_{\rm
max}^{\sharp},\quad\|\phi\|_{\sharp}^{2}:=\|\phi\|_{{\mathcal{H}}^{\sharp}}^{2}+\|h^{\sharp}_{\rm
max}\phi\|_{{\mathcal{H}}^{\sharp}}^{2}.$
Recall that, by definition, a closed extension of $h^{\sharp}$ has a domain
which is closed for this norm.
###### Lemma 12.
The map ${\rm Tr}:({\mathcal{D}}_{\rm
max}^{\sharp},\|\cdot\|_{\sharp})\to{\mathcal{H}}_{b}$ is well-defined,
continuous and onto.
###### Proof.
Since $V$ is bounded, the graph norm $\|\cdot\|_{\sharp}$ is equivalent to the
usual $H^{2}({\mathbb{R}}^{+},{\mathbb{C}}^{n})$ norm on
${\mathcal{D}}^{\sharp}_{\rm max}=H^{2}({\mathbb{R}}^{+},{\mathbb{C}}^{n})$.
The Sobolev embedding shows that
$H^{2}({\mathbb{R}}^{+},{\mathbb{C}}^{n})\hookrightarrow
C^{1}([0,\infty),{\mathbb{C}}^{n})$ with continuous embedding. This implies
that ${\rm Tr}$ is a bounded linear operator. Let $C,S\in
C^{\infty}({\mathbb{R}}^{+},{\mathbb{R}})$ be two compactly supported smooth
functions with $C(0)=S^{\prime}(0)=1$ and $C^{\prime}(0)=S(0)=0$. Given an
element $(u,u^{\prime})\in{\mathcal{H}}_{b}$, we have $(u,u^{\prime})={\rm
Tr}(\psi)$ for $\psi(x):=uC(x)+u^{\prime}S(x)\in{\mathcal{D}}_{\rm
max}^{\sharp}$, so ${\rm Tr}$ is onto. ∎
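The surjectivity argument can be checked numerically (informal; we use exponentially decaying functions instead of compactly supported ones, which is enough for illustration): with $C(x)=(1+x)e^{-x}$ and $S(x)=xe^{-x}$, the function $\psi=uC+u^{\prime}S$ has the prescribed trace:

```python
import numpy as np

# Decaying substitutes for the compactly supported C, S of the proof:
C = lambda x: (1.0 + x) * np.exp(-x)   # C(0) = 1, C'(0) = 0
S = lambda x: x * np.exp(-x)           # S(0) = 0, S'(0) = 1

u, up = 2.0 - 1.0j, 0.5 + 3.0j         # arbitrary target boundary data (u, u')
psi = lambda x: u * C(x) + up * S(x)   # a preimage of (u, u') under Tr

h = 1e-6                               # one-sided finite difference at x = 0
assert abs(psi(0.0) - u) < 1e-12                 # psi(0)  = u
assert abs((psi(h) - psi(0.0)) / h - up) < 1e-4  # psi'(0) = u'
```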
The next result shows that self-adjoint extensions of $h^{\sharp}$ can be seen
as Lagrangian planes of ${\mathcal{H}}_{b}$.
###### Theorem 13.
Let ${\mathcal{D}}^{\sharp}$ be a domain satisfying ${\mathcal{D}}_{\rm
min}^{\sharp}\subset{\mathcal{D}}^{\sharp}\subset{\mathcal{D}}_{\rm
max}^{\sharp}$, and let $\ell:={\rm Tr}({\mathcal{D}}^{\sharp})$. The adjoint
of $(h^{\sharp}_{\rm max},{\mathcal{D}}^{\sharp})$ is the operator
$(h^{\sharp}_{\rm max},({\mathcal{D}}^{\sharp})^{*})$ with domain
$({\mathcal{D}}^{\sharp})^{*}:={\rm Tr}^{-1}\left(\ell^{\circ}\right).$
In particular, ${\mathcal{D}}^{\sharp}\subset{\mathcal{D}}_{\rm max}^{\sharp}$
defines a self-adjoint extension of $h^{\sharp}$ iff it is of the form
${\mathcal{D}}^{\sharp}:={\rm Tr}^{-1}(\ell),\quad\text{for some Lagrangian
plane}\quad\ell\in\Lambda({\mathcal{H}}_{b}).$
###### Remark 14.
Since ${\rm Tr}$ is a continuous map, and since any subspace of the form
$\ell^{\circ}$ is closed in ${\mathcal{H}}_{b}$, we deduce that
$({\mathcal{D}}^{\sharp})^{*}={\rm Tr}^{-1}(\ell^{\circ})$ is closed for the
graph norm $\|\cdot\|_{\sharp}$. Note that ${\mathcal{D}}^{\sharp}$ is
not required to be closed.
###### Proof.
The following proof only uses that ${\rm Tr}$ is onto. It will therefore be
valid also for the Schrödinger PDE case (with the corresponding boundary space
and ${\rm Tr}$ operator, see Section 4 below).
Since ${\mathcal{D}}_{\rm
min}^{\sharp}\subset{\mathcal{D}}^{\sharp}\subset{\mathcal{D}}_{\rm
max}^{\sharp}$ and $({\mathcal{D}}_{\rm max}^{\sharp})^{*}={\mathcal{D}}_{\rm
min}^{\sharp}$, we have ${\mathcal{D}}_{\rm
min}^{\sharp}\subset\left({\mathcal{D}}^{\sharp}\right)^{*}\subset{\mathcal{D}}_{\rm
max}^{\sharp}$ as well. Let
$\phi\in({\mathcal{D}}^{\sharp})^{*}\subset{\mathcal{D}}_{\rm max}^{\sharp}$.
By definition of the adjoint, and the second Green’s identity, we have
$\forall\psi\in{\mathcal{D}}^{\sharp},\quad 0=\langle\phi,h^{\sharp}_{\rm
max}\psi\rangle_{{\mathcal{H}}^{\sharp}}-\langle h^{\sharp}_{\rm
max}\phi,\psi\rangle_{{\mathcal{H}}^{\sharp}}=\omega({\rm Tr}(\phi),{\rm
Tr}(\psi)).$
We deduce that ${\rm Tr}(\phi)\in\ell^{\circ}$.
Conversely, let $(u,u^{\prime})\in\ell^{\circ}\subset{\mathcal{H}}_{b}$ and
let $\psi_{0}\in{\mathcal{D}}_{\rm max}^{\sharp}$ be such that ${\rm
Tr}(\psi_{0})=(u,u^{\prime})$. By definition of $\ell^{\circ}$ and the second
Green’s identity, we get
$\forall\phi\in{\mathcal{D}}^{\sharp},\quad 0=\omega\left({\rm
Tr}(\psi_{0}),{\rm Tr}(\phi)\right)=\langle\psi_{0},h_{\rm
max}^{\sharp}\phi\rangle_{{\mathcal{H}}^{\sharp}}-\langle h_{\rm
max}^{\sharp}\psi_{0},\phi\rangle_{{\mathcal{H}}^{\sharp}}.$
In particular, the map $T_{\psi_{0}}:{\mathcal{D}}^{\sharp}\to{\mathbb{C}}$
defined by
$T_{\psi_{0}}:\phi\mapsto\langle\psi_{0},h_{\rm
max}^{\sharp}\phi\rangle_{{\mathcal{H}}^{\sharp}}=\langle h_{\rm
max}^{\sharp}\psi_{0},\phi\rangle_{{\mathcal{H}}^{\sharp}}$
is bounded on ${\mathcal{D}}^{\sharp}$ with
$|T_{\psi_{0}}(\phi)|\leq\|h^{\sharp}_{\rm
max}\psi_{0}\|_{{\mathcal{H}}^{\sharp}}\|\phi\|_{{\mathcal{H}}^{\sharp}}$. So
$\psi_{0}$ is in the adjoint domain $({\mathcal{D}}^{\sharp})^{*}$. This
proves that $(u,u^{\prime})={\rm Tr}(\psi_{0})\in{\rm
Tr}(({\mathcal{D}}^{\sharp})^{*})$. Altogether, we proved that
${\rm
Tr}\left(({\mathcal{D}}^{\sharp})^{*}\right)=\ell^{\circ},\quad\text{hence}\quad({\mathcal{D}}^{\sharp})^{*}={\rm
Tr}^{-1}(\ell^{\circ}),$
where we used in the last equality that $({\mathcal{D}}^{\sharp})^{*}$ is
closed for the graph norm $\|\cdot\|_{\sharp}$. ∎
In what follows, we denote by $\left(h^{\sharp},\ell^{\sharp}\right)$ the
self-adjoint extensions of $h^{\sharp}$ with domain ${\rm
Tr}^{-1}(\ell^{\sharp})$.
Before we go on, let us give some examples of Lagrangian planes and their
corresponding unitaries ${\mathcal{U}}$ for some usual self-adjoint
extensions. Note that we have
${\mathcal{H}}_{b}={\mathcal{H}}_{1}\times{\mathcal{H}}_{2}$ with
${\mathcal{H}}_{1}={\mathcal{H}}_{2}={\mathbb{C}}^{n}$, so the unitary
${\mathcal{U}}:{\mathbb{C}}^{n}\to{\mathbb{C}}^{n}$ is an element in ${\rm
U}(n)$.
###### Example 15 (Dirichlet and Neumann boundary conditions).
The Dirichlet extension of $h^{\sharp}$ corresponds to the Lagrangian plane
$\ell_{D}:=\\{0\\}\times{\mathbb{C}}^{n}$, and the Neumann one corresponds to
$\ell_{N}:={\mathbb{C}}^{n}\times\\{0\\}$. To identify the corresponding
unitary, we note that $(0,u^{\prime})\in\ell_{D}$ can be written as
$\begin{pmatrix}0\\\ u^{\prime}\end{pmatrix}=\begin{pmatrix}1\\\
{\mathrm{i}}\end{pmatrix}(-\tfrac{{\mathrm{i}}}{2}u^{\prime})+\begin{pmatrix}1\\\
-{\mathrm{i}}\end{pmatrix}(\tfrac{{\mathrm{i}}}{2}u^{\prime}).$
Comparing with Lemma 10, this gives the unitary
${\mathcal{U}}_{D}:=-{\mathbb{I}}_{n}\in{\rm U}(n)$. The proof for Neumann
boundary conditions is similar, and we find
${\mathcal{U}}_{N}:={\mathbb{I}}_{n}\in{\rm U}(n)$.
###### Example 16 (Robin boundary conditions).
Consider $\Theta$ and $\Pi$ two hermitian $n\times n$ matrices so that
$\Theta^{*}=\Theta,\ \Pi^{*}=\Pi,\ \Theta\Pi=\Pi\Theta,\quad\Theta\ \text{or}\
\Pi\quad\text{is invertible}.$
Let $\ell_{\Theta,\Pi}$ be the subspace
$\ell_{\Theta,\Pi}:=\left\\{(\Theta x,\Pi x),\
x\in{\mathbb{C}}^{n}\right\\}\subset{\mathcal{H}}_{b}.$
We claim that $\ell_{\Theta,\Pi}$ is Lagrangian. First, we have
$\omega((\Theta x,\Pi x),(\Theta y,\Pi y))=\langle\Theta x,\Pi
y\rangle_{{\mathbb{C}}^{n}}-\langle\Pi x,\Theta
y\rangle_{{\mathbb{C}}^{n}}=0,$
so $\ell_{\Theta,\Pi}\subset\ell_{\Theta,\Pi}^{\circ}$. On the other hand, let
$(z,z^{\prime})\in\ell_{\Theta,\Pi}^{\circ}$. Assume that $\Theta$ is
invertible (the proof when $\Pi$ is invertible is similar). We have,
$\forall x\in{\mathbb{C}}^{n},\quad\langle z,\Pi
x\rangle_{{\mathbb{C}}^{n}}=\langle z^{\prime},\Theta
x\rangle_{{\mathbb{C}}^{n}},\quad\text{so}\quad\langle\Pi z-\Theta
z^{\prime},x\rangle_{{\mathbb{C}}^{n}}=0.$
We deduce that $\Pi z=\Theta z^{\prime}$. In particular, setting
$z_{0}=\Theta^{-1}z$, we have $z=\Theta z_{0}$ and $z^{\prime}=\Pi z_{0}$, so
$(z,z^{\prime})=(\Theta z_{0},\Pi z_{0})\in\ell_{\Theta,\Pi}$. This proves
that $\ell_{\Theta,\Pi}$ is Lagrangian. We say that the corresponding self-
adjoint extension has the $(\Theta,\Pi)$-Robin boundary condition. To identify
the corresponding unitary, we write
$\begin{pmatrix}\Theta x\\\ \Pi x\end{pmatrix}=\begin{pmatrix}1\\\
{\mathrm{i}}\end{pmatrix}\tfrac{1}{2}\left(\Theta-{\mathrm{i}}\Pi\right)x+\begin{pmatrix}1\\\
-{\mathrm{i}}\end{pmatrix}\tfrac{1}{2}\left(\Theta+{\mathrm{i}}\Pi\right)x.$
Identifying with Lemma 10, we recognise the unitary
${\mathcal{U}}_{\Theta,\Pi}:=(\Theta+{\mathrm{i}}\Pi)(\Theta-{\mathrm{i}}\Pi)^{-1}\quad\in{\rm
U}(n).$
Note that the Dirichlet boundary condition corresponds to the pair
$(0,{\mathbb{I}}_{n})$ and the Neumann boundary condition corresponds to the
pair $({\mathbb{I}}_{n},0)$.
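The formula for ${\mathcal{U}}_{\Theta,\Pi}$ is easy to check numerically. The following sketch is purely illustrative (the helper `robin_unitary`, the matrix size and the random seed are our own choices, not from the text): it verifies unitarity for a commuting hermitian pair, and recovers ${\mathcal{U}}_{D}=-{\mathbb{I}}_{n}$ and ${\mathcal{U}}_{N}={\mathbb{I}}_{n}$ for the pairs $(0,{\mathbb{I}}_{n})$ and $({\mathbb{I}}_{n},0)$.

```python
import numpy as np

def robin_unitary(Theta, Pi):
    # U_{Theta,Pi} = (Theta + i Pi)(Theta - i Pi)^{-1}, as in Example 16
    return (Theta + 1j * Pi) @ np.linalg.inv(Theta - 1j * Pi)

rng = np.random.default_rng(0)
n = 3

# A commuting hermitian pair: two real diagonal matrices.
Theta = np.diag(rng.standard_normal(n))
Pi = np.diag(rng.standard_normal(n))

U = robin_unitary(Theta, Pi)
is_unitary = np.allclose(U.conj().T @ U, np.eye(n))

U_dirichlet = robin_unitary(np.zeros((n, n)), np.eye(n))  # pair (0, I)
U_neumann = robin_unitary(np.eye(n), np.zeros((n, n)))    # pair (I, 0)
```

One finds that `U_dirichlet` is $-{\mathbb{I}}_{n}$ and `U_neumann` is ${\mathbb{I}}_{n}$, matching Example 15.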
###### Example 17 (Dirac operator on the half-line).
Let $\not{D}^{\sharp}$ be a Dirac operator of the form
$\not{D}^{\sharp}:=-{\mathrm{i}}\partial_{x}+V$ acting on
$L^{2}({\mathbb{R}}^{+},{\mathbb{C}}^{n})$. The operator $\not{D}^{\sharp}$ is
symmetric on the core $C^{\infty}_{0}({\mathbb{R}}^{+})$. The first Green’s
identity shows that
$\forall\phi,\psi\in
H^{1}({\mathbb{R}}^{+},{\mathbb{C}}^{n}),\quad\langle\phi,\not{D}^{\sharp}\psi\rangle-\langle\not{D}^{\sharp}\phi,\psi\rangle=-{\mathrm{i}}\langle\phi(0),\psi(0)\rangle_{{\mathbb{C}}^{n}}.$
In this case, the boundary space is ${\mathcal{H}}_{b}={\mathbb{C}}^{n}$, with
the symplectic form $\widetilde{\omega}(z_{1},z_{2})={\mathrm{i}}\langle
z_{1},z_{2}\rangle$. A theorem similar to Theorem 13 holds. However, as
noticed in Example 4, the symplectic space
$({\mathcal{H}}_{b},\widetilde{\omega})$ has no Lagrangian planes. We recover
the classical result that $\not{D}^{\sharp}$ has no self-adjoint extensions on
the half line.
### 2.3. The Lagrangian planes $\ell^{\pm}(E)$
In the previous section, we linked the boundary conditions at $x=0$ with the
Lagrangian planes of the boundary space ${\mathcal{H}}_{b}$. We now focus on
the Cauchy solutions of $H\psi=E\psi$. Since we are also interested in the
behaviour at $-\infty$, we introduce
${\mathcal{H}}^{\sharp,\pm}:=L^{2}({\mathbb{R}}^{\pm},{\mathbb{C}}^{n})$ and
the maximal domains
${\mathcal{D}}^{\sharp,\pm}_{\rm
max}:=H^{2}({\mathbb{R}}^{\pm},{\mathbb{C}}^{n}).$
The space ${\mathcal{D}}^{\sharp}_{\rm max}$ considered previously corresponds
to ${\mathcal{D}}^{\sharp,+}_{\rm max}$. We also denote by ${\rm
Tr}^{\pm}:{\mathcal{D}}^{\sharp,\pm}_{\rm max}\to{\mathcal{H}}_{b}$ the
corresponding boundary trace operator
$\forall\psi\in{\mathcal{D}}^{\sharp,\pm}_{\rm max},\quad{\rm
Tr}^{\pm}(\psi)=(\psi(0),\psi^{\prime}(0)).$
Note that, due to the orientation of the line ${\mathbb{R}}$, the Green’s
identity on the left-side reads
$\forall\phi,\psi\in{\mathcal{D}}^{\sharp,-}_{\rm max},\quad\langle\phi,h_{\rm
max}^{\sharp,-}\psi\rangle_{{\mathcal{H}}^{\sharp,-}}-\langle h_{\rm
max}^{\sharp,-}\phi,\psi\rangle_{{\mathcal{H}}^{\sharp,-}}=-\omega\left({\rm
Tr}^{-}(\phi),{\rm Tr}^{-}(\psi)\right).$
The following result, which is straightforward in the finite dimensional
Hill’s case, is an essential tool for the proof of the next theorem.
###### Lemma 18.
Let ${\mathds{1}}_{{\mathbb{R}}^{\pm}}$ be the restriction operators from
${\mathcal{H}}$ to ${\mathcal{H}}^{\sharp,\pm}$. Then
${\mathcal{D}}^{\sharp,\pm}_{\rm
max}={\mathds{1}}_{{\mathbb{R}}^{\pm}}\left({\mathcal{D}}\right).$
In addition, for all $\psi\in{\mathcal{D}}$, we have ${\rm
Tr}^{-}\left({\mathds{1}}_{{\mathbb{R}}^{-}}\psi\right)={\rm
Tr}^{+}\left({\mathds{1}}_{{\mathbb{R}}^{+}}\psi\right)$.
This lemma states that all elements in ${\mathcal{D}}^{\sharp,\pm}_{\rm max}$
can be extended to the whole line ${\mathbb{R}}$ while keeping smoothness at
the cut. It is linked to the existence of an extension operator. Conversely,
we do not lose regularity when restricting a function to some domain. We omit
the proof, as it is similar to that of Lemma 12.
We now set
${\mathcal{S}}^{\pm}(E):={\rm Ker}\left(h_{\rm
max}^{\sharp,\pm}-E\right)=\left\\{\psi\in{\mathcal{D}}^{\sharp,\pm}_{\rm
max},\quad-\psi^{\prime\prime}+V\psi=E\psi\right\\}.$
The solutions in ${\mathcal{S}}^{\pm}(E)$ can be seen as Cauchy solutions. In
particular, elements $\psi^{\pm}$ of ${\mathcal{S}}^{\pm}(E)$ can be
reconstructed from their boundary values ${\rm
Tr}^{\pm}\left(\psi^{\pm}\right)\in{\mathcal{H}}_{b}$. We set
$\ell^{\pm}(E):=\left\\{{\rm
Tr}^{\pm}(\psi),\quad\psi\in{\mathcal{S}}^{\pm}(E)\right\\}\quad\subset{\mathcal{H}}_{b}.$
(7)
It is unclear at this point whether every Cauchy solution of
$-\psi^{\prime\prime}+V\psi=E\psi$ is either in ${\mathcal{S}}^{+}(E)$ or
${\mathcal{S}}^{-}(E)$. For instance, one can have solutions which are not
integrable at $\pm\infty$. This happens in particular when $V$ is periodic and
$E$ is in the essential spectrum of the corresponding Hill’s operator $h$. In
this case, we expect all Cauchy solutions to be quasi-periodic. These
solutions are neither in ${\mathcal{S}}^{+}(E)$ nor in ${\mathcal{S}}^{-}(E)$.
On the other hand, these two spaces might cross. Actually, we have
###### Lemma 19.
For the bulk operator $h$, we have that for all $E\in{\mathbb{R}}$,
$\dim{\rm Ker}\left(h-E\right)=\dim\left(\ell^{+}(E)\cap\ell^{-}(E)\right).$
In particular, $E$ is an eigenvalue of $h$ iff
$\ell^{+}(E)\cap\ell^{-}(E)\neq\\{0\\}$.
###### Proof.
A simple proof uses Cauchy’s theorem. Let us give an alternative proof, which
will also work in the PDE setting below.
If $\psi\in{\mathcal{D}}$ satisfies $(h-E)\psi=0$, then, by Lemma 18, its
restrictions $\psi^{\pm}:={\mathds{1}}_{{\mathbb{R}}^{\pm}}\psi$ are in
${\mathcal{D}}_{\rm max}^{\sharp,\pm}$. In addition, they satisfy $(h_{\rm
max}^{\sharp,\pm}-E)\psi^{\pm}=0$, so $\psi^{\pm}\in{\mathcal{S}}^{\pm}(E)$.
Taking traces and using again Lemma 18 shows that ${\rm Tr}^{+}(\psi^{+})={\rm
Tr}^{-}(\psi^{-})\in\ell^{+}(E)\cap\ell^{-}(E)$.
Conversely, let $\psi^{\pm}\in{\mathcal{S}}^{\pm}(E)$ be such that ${\rm
Tr}^{+}(\psi^{+})={\rm Tr}^{-}(\psi^{-})$, and consider the function
$\psi\in{\mathcal{H}}$ defined by
$\psi(x):=\begin{cases}\psi^{+}(x)\quad\text{for}\quad x>0,\\\
\psi^{-}(x)\quad\text{for}\quad x<0.\end{cases}$
It is not yet clear that $\psi$ is regular enough (i.e. belongs to
${\mathcal{D}}$). For $f\in{\mathcal{D}}$, we have
$\displaystyle\langle\psi,(h-E)f\rangle_{{\mathcal{H}}}$
$\displaystyle=\langle\psi^{+},{\mathds{1}}_{{\mathbb{R}}^{+}}(h-E)f\rangle_{{\mathcal{H}}^{+}}+\langle\psi^{-},{\mathds{1}}_{{\mathbb{R}}^{-}}(h-E)f\rangle_{{\mathcal{H}}^{-}}$
$\displaystyle=\langle\psi^{+},(h_{\rm
max}^{\sharp,+}-E)f^{+}\rangle_{{\mathcal{H}}^{+}}+\langle\psi^{-},(h_{\rm
max}^{\sharp,-}-E)f^{-}\rangle_{{\mathcal{H}}^{-}}$
$\displaystyle=\omega\left({\rm Tr}^{+}(\psi^{+}),{\rm
Tr}^{+}(f^{+})\right)-\omega\left({\rm Tr}^{-}(\psi^{-}),{\rm
Tr}^{-}(f^{-})\right)=0.$
So $T_{\psi}:f\mapsto\langle\psi,(h-E)f\rangle_{{\mathcal{H}}}$ is bounded on
${\mathcal{D}}$. We deduce first that $\psi$ is in the domain
${\mathcal{D}}^{*}={\mathcal{D}}$, and that $(h-E)\psi=0$. So $\psi$ is an
eigenvector for the eigenvalue $E$. ∎
The previous result shows that one can detect eigenvalues as the crossings of
$\ell^{+}(E)$ and $\ell^{-}(E)$. We now prove that, when $E$ is in the
resolvent set, we have instead
$\ell^{+}(E)\oplus\ell^{-}(E)={\mathcal{H}}_{b}$. Our proof only uses the fact
that the bulk operator $h$ is essentially self-adjoint.
###### Theorem 20.
For all $E\in{\mathbb{R}}\setminus\sigma(h)$, the sets $\ell^{\pm}(E)$ are
Lagrangian planes of ${\mathcal{H}}_{b}$, and
${\mathcal{H}}_{b}=\ell^{+}(E)\oplus\ell^{-}(E).$
This shows for instance that there are as many Cauchy solutions which decay
at $+\infty$ as solutions which decay at $-\infty$ (here, they both form
subspaces of dimension $n$). This is somewhat reminiscent of Weyl’s
criterion [Wey10] (see also [RS75, Theorem X.7]).
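For the free operator ($n=1$, $V=0$) these planes can be written down explicitly, which gives a concrete check of Theorem 20. Below is a minimal numerical sketch (the variable names are ours); it only uses that, for $E=-\mu^{2}<0$, the decaying solutions at $+\infty$ and $-\infty$ are $e^{-\mu x}$ and $e^{\mu x}$, with boundary traces $(1,-\mu)$ and $(1,\mu)$.

```python
import numpy as np

mu = 1.5                          # E = -mu^2 < 0 lies in the resolvent set
ell_plus = np.array([1.0, -mu])   # spans ell^+(E): trace of e^{-mu x}
ell_minus = np.array([1.0, mu])   # spans ell^-(E): trace of e^{mu x}

def omega(x, y):
    # canonical symplectic form on C x C: <x_1, y_2> - <x_2, y_1>
    return np.conj(x[0]) * y[1] - np.conj(x[1]) * y[0]

# Each line is isotropic (here one-dimensional, hence Lagrangian) ...
iso_plus = np.isclose(omega(ell_plus, ell_plus), 0)
iso_minus = np.isclose(omega(ell_minus, ell_minus), 0)
# ... and together they span H_b = C^2 (nonzero determinant).
transversal = abs(np.linalg.det(np.column_stack([ell_plus, ell_minus]))) > 1e-12
```

The determinant equals $2\mu\neq 0$, so $\ell^{+}(E)\oplus\ell^{-}(E)={\mathbb{C}}^{2}$ in this toy case, as Theorem 20 predicts.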
###### Proof.
We first claim that for any $E\in{\mathbb{R}}$, $\ell^{\pm}(E)$ are isotropic
spaces. Let $\phi,\psi\in{\mathcal{S}}^{+}(E)$. By Green’s identity, we have
$\omega({\rm Tr}^{+}(\phi),{\rm Tr}^{+}(\psi))=\langle\phi,h_{\rm
max}^{\sharp}\psi\rangle_{{\mathcal{H}}^{\sharp}}-\langle h_{\rm
max}^{\sharp}\phi,\psi\rangle_{{\mathcal{H}}^{\sharp}}=\langle\phi,E\psi\rangle_{{\mathcal{H}}^{\sharp}}-\langle
E\phi,\psi\rangle_{{\mathcal{H}}^{\sharp}}=0.$
In the last equality, we used that $E$ is real-valued. This proves that
$\ell^{+}(E)\subset\ell^{+}(E)^{\circ}$. The proof for $\ell^{-}(E)$ is
similar.
Let $E\in{\mathbb{R}}\setminus\sigma(h)$, so that the bulk operator $(h-E)$ is
invertible. We have (recall that
${\mathcal{H}}=L^{2}({\mathbb{R}},{\mathbb{C}}^{n})$)
${\mathcal{H}}={\mathcal{H}}^{+}\oplus{\mathcal{H}}^{-},\quad\text{where}\quad{\mathcal{H}}^{\pm}:=\left\\{\psi\in{\mathcal{H}},\
\psi=0\ \text{on}\ \overline{{\mathbb{R}}^{\mp}}\right\\}.$
Since $(h-E)$ is invertible with ${\mathcal{D}}=(h-E)^{-1}{\mathcal{H}}$, this
gives a decomposition
${\mathcal{D}}={\mathcal{D}}^{+}\oplus{\mathcal{D}}^{-},\quad{\mathcal{D}}^{\pm}:=(h-E)^{-1}{\mathcal{H}}^{\pm},$
and
${\mathcal{H}}_{b}={\rm Tr}\left({\mathcal{D}}^{+}\right)+{\rm
Tr}\left({\mathcal{D}}^{-}\right).$
The elements $\psi\in{\mathcal{D}}^{+}$ are such that
$(-\partial_{xx}^{2}+V-E)\psi=f$, for some $f\in{\mathcal{H}}$ with support
contained in ${\mathbb{R}}^{+}$. In particular, its restriction
$\psi^{-}:={\mathds{1}}_{{\mathbb{R}}^{-}}\psi$ is in ${\mathcal{D}}_{\rm
max}^{\sharp,-}$ and satisfies $(h_{\rm max}^{\sharp,-}-E)\psi^{-}=0$ on
${\mathbb{R}}^{-}$. So $\psi^{-}\in{\mathcal{S}}^{-}(E)$. Taking boundary
traces shows that
${\rm Tr}\left({\mathcal{D}}^{+}\right)\subset\ell^{-}(E),\quad\text{and,
similarly, }\quad{\rm Tr}\left({\mathcal{D}}^{-}\right)\subset\ell^{+}(E).$
In particular, we have ${\mathcal{H}}_{b}=\ell^{+}(E)+\ell^{-}(E)$. We
conclude with Lemma 5.
∎
###### Remark 21.
In the proof, we use that $h=-\partial_{xx}^{2}+V$ is self-adjoint on the
whole line, and deduce that $\ell^{+}(E)$ and $\ell^{-}(E)$ are both
Lagrangian planes. Note however that $\ell^{+}(E)$ is independent of $V$ on
${\mathbb{R}}^{-}$. So $\ell^{+}(E)$ is a Lagrangian plane whenever there
exists an extension of $V$ on ${\mathbb{R}}^{-}$ for which the corresponding
bulk operator has $E$ in its resolvent set.
At this point, we have defined two types of Lagrangian planes for a given operator
$h$. First, we defined the planes $\ell^{\sharp}$ representing the boundary
conditions of a self-adjoint extension of the edge Hamiltonian $h^{\sharp}$.
Then, we defined the planes $\ell^{+}(E)$ as the set of traces of ${\rm
Ker}(h_{\rm max}^{\sharp}-E)$. If ${\rm
Tr}(\psi)\in\ell^{+}(E)\cap\ell^{\sharp}$, then $\psi$ is in the domain of
$h^{\sharp}$, and satisfies $(h^{\sharp}-E)\psi=0$. So $\psi$ is an
eigenvector for the eigenvalue $E$. This proves the following result (compare
with Lemma 19).
###### Lemma 22.
Let $E\in{\mathbb{R}}\setminus\sigma(h)$, and consider a self-adjoint
extension $(h^{\sharp},\ell^{\sharp})$ of the edge operator. Then
$\dim{\rm
Ker}\left(h^{\sharp}-E\right)=\dim\left(\ell^{+}(E)\cap\ell^{\sharp}\right).$
If Assumption B holds, we can introduce ${\mathcal{U}}^{+}(E)$ and
${\mathcal{U}}^{\sharp}$ the unitaries corresponding to the Lagrangian planes
$\ell^{+}(E)$ and $\ell^{\sharp}$ respectively, and we have
$\dim{\rm
Ker}\left(h^{\sharp}-E\right)=\dim\left(\ell^{+}(E)\cap\ell^{\sharp}\right)=\dim{\rm Ker}\left(({\mathcal{U}}^{\sharp})^{*}{\mathcal{U}}^{+}(E)-1\right).$
For instance, the self-adjoint extension $(h^{\sharp},\ell^{+}(E))$ has the
energy $E$ as an eigenvalue of multiplicity $n$.
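In the free case $n=1$, $V=0$ on ${\mathbb{R}}^{+}$, the criterion of Lemma 22 can be checked by hand. A small numerical sketch (illustrative; `unitary`, `mu` and the Robin parameter $\kappa$ are our own notation): the Robin plane ${\rm Vect}(1,\kappa)$, i.e. $\Theta=1$, $\Pi=\kappa$ in Example 16, detects the bound state $e^{-\mu x}$ at $E=-\mu^{2}$ exactly when $\kappa=-\mu$, since $\ell^{+}(E)={\rm Vect}(1,-\mu)$.

```python
import numpy as np

def unitary(theta, pi):
    # scalar case of U_{Theta,Pi} = (Theta + i Pi)(Theta - i Pi)^{-1}
    return (theta + 1j * pi) / (theta - 1j * pi)

mu = 0.7
U_plus = unitary(1.0, -mu)    # unitary of ell^+(E), E = -mu^2

U_bound = unitary(1.0, -mu)   # Robin kappa = -mu: matches the bound state
U_other = unitary(1.0, +mu)   # Robin kappa = +mu: no eigenvalue at E

# Lemma 22: an eigenvalue at E iff 1 is in the spectrum of (U^sharp)^* U^+(E)
has_eigenvalue = np.isclose(np.conj(U_bound) * U_plus, 1.0)
no_eigenvalue = not np.isclose(np.conj(U_other) * U_plus, 1.0)
```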
## 3\. Families of Hill’s operators
In the previous section, we exhibited the relationships between self-adjoint
extensions, Lagrangian planes, and unitaries. We now consider periodic
families of these objects, namely $h^{\sharp}_{t}$, $\ell_{t}^{\sharp}$ and
${\mathcal{U}}_{t}^{\sharp}$. For each such family, we define an index, namely
a spectral flow across $E$ for the family $h^{\sharp}_{t}$, a Maslov index for
the bifamily $(\ell_{t}^{+}(E),\ell_{t}^{\sharp})$ and a spectral flow across
$1$ for the family $({\mathcal{U}}_{t}^{\sharp})^{*}{\mathcal{U}}_{t}^{+}(E)$.
All these objects are defined in the following sections, and we prove that
they all coincide.
All these indices can be defined for continuous families. However, since the
proofs are simpler in the continuously differentiable case, we restrict
ourselves to this case. All these indices depend only on the homotopy class
of the corresponding loops, so similar results hold in the continuous case.
### 3.1. Families of Lagrangians, and Maslov index
We first define the Maslov index of two families of Lagrangian spaces. This
index originates from the work of Maslov in [MBA72, Arn67]. In these works,
the index was defined for finite dimensional real symplectic spaces (namely
${\mathbb{R}}^{2n}$ in Example 2). A modern approach can be found in [Fur04],
where the infinite dimensional case is studied as well. Here, we present a
simple version of the theory, which is enough for our purpose.
Let $({\mathcal{H}}_{b},\omega)$ be a symplectic Hilbert space (not
necessarily finite dimensional). We define a topology on
$\Lambda({\mathcal{H}}_{b})$ by setting
$\forall\ell_{1},\ell_{2}\in\Lambda({\mathcal{H}}_{b}),\quad{\rm
dist}(\ell_{1},\ell_{2}):=\|P_{1}-P_{2}\|_{\rm op},$
where $P_{1}$ and $P_{2}$ are the orthogonal projectors on $\ell_{1}$ and
$\ell_{2}$ respectively. A family $\ell(t)$ in $\Lambda({\mathcal{H}}_{b})$ is
said to be continuous, continuously differentiable, etc. if the corresponding
family of projectors $P(t)$ is so in
${\mathcal{B}}\left({\mathcal{H}}_{b}\right)$.
#### 3.1.1. Definition with quadratic crossing forms
Consider two continuously differentiable families
${\mathbb{T}}^{1}\mapsto\ell_{1}(t)$ and ${\mathbb{T}}^{1}\mapsto\ell_{2}(t)$.
Let $t^{*}\in{\mathbb{T}}^{1}$ be such that
$\ell_{1}(t^{*})\cap\ell_{2}(t^{*})\neq\\{{\mathbf{0}}\\}$. We define the
sesquilinear form $b$ on $\ell_{1}(t^{*})\cap\ell_{2}(t^{*})$ by
$\forall x,y\in\ell_{1}(t^{*})\cap\ell_{2}(t^{*}),\quad
b(x,y):=\omega(x,P_{1}^{\prime}(t^{*})y)-\omega(x,P_{2}^{\prime}(t^{*})y).$
(8)
###### Lemma 23.
The sesquilinear form $b$ is hermitian, that is $b(x,y)=\overline{b(y,x)}$.
###### Proof.
Let $P_{t}:=P_{1}(t)$. First, since ${\rm Ran}\,P_{1}(t)=\ell_{1}(t)$ is
isotropic for all $t$, we have
$\forall x,y\in{\mathcal{H}}_{b},\quad\forall
t\in{\mathbb{T}}^{1},\quad\omega(P_{t}(x),P_{t}(y))=0.$
Differentiating gives
$\omega(P_{t}(x),P^{\prime}_{t}(y))=-\omega(P^{\prime}_{t}(x),P_{t}(y))=\overline{\omega(P_{t}(y),P^{\prime}_{t}(x))}.$
Taking $t=t^{*}$ and $x,y\in\ell_{1}(t^{*})\cap\ell_{2}(t^{*})$, so that
$P_{t^{*}}(x)=x$ and $P_{t^{*}}(y)=y$ gives
$\forall
x,y\in\ell_{1}(t^{*})\cap\ell_{2}(t^{*}),\quad\omega(x,P_{t}^{\prime}(y))=\overline{\omega(y,P_{t}^{\prime}(x))}.$
A similar equality holds for $P_{t}=P_{2}(t)$, which proves that $b$ is
hermitian. ∎
In particular, all eigenvalues of $b$ are real. We say that $t^{*}$ is
a regular crossing if $\ell_{1}(t^{*})\cap\ell_{2}(t^{*})$ is finite
dimensional (say of dimension $k\in{\mathbb{N}}$), and if all the eigenvalues
$(\mu_{1},\cdots,\mu_{k})$ of $b$ are nonzero (so the corresponding quadratic
form is nondegenerate). For such crossings, we set
${\rm deg}(t^{*})=\sum_{j=1}^{k}{\rm sgn}\left(\mu_{j}\right).$
The pair $(\ell_{1},\ell_{2})$ is regular if all crossings are regular. For
such a pair, the Maslov index is defined by
$\boxed{{\rm Mas}\left(\ell_{1},\ell_{2},{\mathbb{T}}^{1}\right):=\sum_{t^{*}\
\text{regular crossing}}{\rm deg}(t^{*})\quad\in{\mathbb{Z}}.}$
It is clear from the definition that ${\rm
Mas}(\ell_{1},\ell_{2},{\mathbb{T}}^{1})=-{\rm
Mas}(\ell_{2},\ell_{1},{\mathbb{T}}^{1})$. This definition requires neither
Assumption A nor Assumption B.
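The crossing form (8) can be evaluated by finite differences in a toy example. Below (an illustrative sketch of our own, not from the text), $\ell_{1}(t)$ is the rotating line ${\rm Vect}(\sin\pi t,\cos\pi t)$ in ${\mathbb{C}}\times{\mathbb{C}}$ and $\ell_{2}=\\{0\\}\times{\mathbb{C}}$ is fixed (so $P_{2}^{\prime}=0$); the only crossing on ${\mathbb{T}}^{1}$ is $t^{*}=0$, where the intersection is spanned by $x=(0,1)$.

```python
import numpy as np

def P1(t):
    # orthogonal projector on ell_1(t) = span{(sin(pi t), cos(pi t))}
    v = np.array([np.sin(np.pi * t), np.cos(np.pi * t)])
    return np.outer(v, v)

def omega(x, y):
    # canonical symplectic form on C x C
    return np.conj(x[0]) * y[1] - np.conj(x[1]) * y[0]

h = 1e-6
dP1 = (P1(h) - P1(-h)) / (2 * h)   # central difference for P_1'(t*) at t* = 0
x = np.array([0.0, 1.0])           # spans ell_1(0) ∩ ell_2

# b(x, x) = omega(x, P_1' x) - omega(x, P_2' x); the second term vanishes here
b_xx = omega(x, dP1 @ x)
deg = int(np.sign(b_xx.real))      # sign of the single eigenvalue of b
```

One finds $b(x,x)=-\pi$, so this crossing is regular with degree $-1$.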
#### 3.1.2. Definition with the unitaries $U$
In the case where Assumption A holds, we can relate the Maslov index to a
spectral flow. Consider two continuously differentiable loops of Lagrangians
$\ell_{1}(t)$ and $\ell_{2}(t)$ from $t\in{\mathbb{T}}^{1}$ to
$\Lambda({\mathcal{H}}_{b})$. Let $U_{1}(t)$ and $U_{2}(t)$ be the
corresponding unitaries from ${\rm Ker}(J-{\mathrm{i}})$ to ${\rm
Ker}(J+{\mathrm{i}})$. Then $U_{1}$ and $U_{2}$ are continuously
differentiable for the operator norm topology. From Lemma 9, we have that for
all $t\in{\mathbb{T}}^{1}$,
$\dim{\rm
Ker}\left(U_{2}^{*}U_{1}-{\mathbb{I}}_{\ker(J-{\mathrm{i}})}\right)=\dim\left(\ell_{1}(t)\cap\ell_{2}(t)\right).$
In particular, if all crossings are regular, then
$\dim(\ell_{1}\cap\ell_{2})=\dim{\rm Ker}(U_{2}^{*}U_{1}-1)$ is always finite
dimensional. Let $t^{*}\in{\mathbb{T}}^{1}$ be such that this kernel is non
trivial, of dimension $k\in{\mathbb{N}}$. By usual perturbation theory for
operators [Kat13], we deduce that there are $k$ continuously differentiable
branches of eigenvalues of the unitary $U_{2}^{*}U_{1}$ crossing $1$ around
$t^{*}$. More specifically, we have the following.
###### Lemma 24.
Let $U(t)$ be a periodic continuously differentiable family of unitaries, and
let $t^{*}\in{\mathbb{T}}^{1}$ and $\varepsilon>0$ be such that, for all
$t\in(t^{*}-\varepsilon,t^{*}+\varepsilon)$,
$t\neq t^{*}\implies
1\notin\sigma\left(U(t)\right),\quad\text{while}\quad\dim{\rm
Ker}\left(U(t^{*})-1\right)=k\in{\mathbb{N}}.$
Then, there is $\eta>0$ and $k$ continuously differentiable functions
$\left(\theta_{j}(t)\right)$ from $t\in(t^{*}-\varepsilon,t^{*}+\varepsilon)$
to $\SS^{1}:=\\{z\in{\mathbb{C}},\ |z|=1\\}$, so that
$\sigma\left(U(t)\right)\cap
B(1,\eta)=\\{\theta_{1}(t),\cdots,\theta_{k}(t)\\}\cap B(1,\eta).$
The functions $\theta_{j}$ are the branches of eigenvalues of $U$. We say that
$t^{*}$ is a regular crossing if $k:=\dim{\rm Ker}(U(t^{*})-1)<\infty$, and if
$\theta_{j}^{\prime}(t^{*})\neq 0$ for all $1\leq j\leq k$. Note that since
$\theta_{j}$ has values in $\SS^{1}$, we have
$\theta_{j}^{\prime}(t^{*})\in{\mathrm{i}}{\mathbb{R}}$. The degree of $t^{*}$
is
${\rm deg}(t^{*}):=\sum_{j=1}^{k}{\rm
sgn}\left(-{\mathrm{i}}\theta^{\prime}_{j}(t^{*})\right).$
This is the net number of eigenvalues crossing $1$ in $\SS^{1}$ in the
positive (counter-clockwise) direction. Finally, if all crossings are regular,
the spectral flow of $U$ across $1$ is
$\boxed{{\rm Sf}\left(U,1,{\mathbb{T}}^{1}\right):=\sum_{t^{*}\ \text{regular
crossing}}{\rm deg(t^{*})}\quad\in{\mathbb{Z}}.}$
###### Lemma 25.
Let $\ell_{1}(t)$ and $\ell_{2}(t)$ be two continuously differentiable
families of Lagrangians in $\Lambda({\mathcal{H}}_{b})$, and let $U_{1}(t)$
and $U_{2}(t)$ be the corresponding unitaries. Then $t^{*}\in{\mathbb{T}}^{1}$
is a regular crossing of $(\ell_{1},\ell_{2})$ iff it is a regular crossing of
$U_{2}^{*}U_{1}$. If all crossings are regular, then,
${\rm Mas}\left(\ell_{1},\ell_{2},{\mathbb{T}}^{1}\right)={\rm
Sf}\left(U_{2}^{*}U_{1},1,{\mathbb{T}}^{1}\right).$
###### Proof.
For the sake of simplicity, we assume that only $\ell_{1}$ depends on $t$. The
proof is similar in the general case.
Let $t^{*}$ be a regular crossing point, and let
$k:=\dim\left(\ell_{1}(t^{*})\cap\ell_{2}\right)=\dim{\rm
Ker}(U_{2}^{*}(t^{*})U_{1}(t^{*})-1).$
Let $\theta_{1},\theta_{2},\cdots\theta_{k}$ be the branches of eigenvalues
crossing $1$ at $t=t^{*}$, as in Lemma 24, and let
$x_{1}^{-}(t),\cdots,x_{k}^{-}(t)$ be a corresponding continuously
differentiable set of orthonormal eigenfunctions in ${\rm
Ker}(J-{\mathrm{i}})$. First, we have, for all $1\leq i,j\leq k$, and all
$t\in(t^{*}-\varepsilon,t^{*}+\varepsilon)$,
$\langle
x_{i}^{-},\left[U_{2}^{*}U_{1}-\theta_{j}\right]x_{j}^{-}\rangle_{{\mathcal{H}}_{b}}=0.$
Differentiating and evaluating at $t=t^{*}$ shows that
$\langle
x_{i}^{-},\partial_{t}\left[U_{2}^{*}U_{1}-\theta_{j}\right]x_{j}^{-}\rangle_{{\mathcal{H}}_{b}}+\langle\left[U_{2}^{*}(t^{*})U_{1}(t^{*})-1\right]^{*}x_{i}^{-},(\partial_{t}x_{j}^{-})\rangle_{{\mathcal{H}}_{b}}=0.$
At $t=t^{*}$, we have $U_{2}^{*}U_{1}x_{i}^{-}=x_{i}^{-}$, so
$U_{1}^{*}U_{2}x_{i}^{-}=x_{i}^{-}$ as well, and the last term vanishes. We
get the Hellmann-Feynman equation
$\delta_{ij}\theta_{j}^{\prime}(t^{*})=\langle
U_{2}x_{i}^{-},(\partial_{t}U_{1})x_{j}^{-}\rangle_{{\mathcal{H}}_{b}}\big{|}_{t=t^{*}}=\langle
U_{1}x_{i}^{-},(\partial_{t}U_{1})x_{j}^{-}\rangle_{{\mathcal{H}}_{b}}\big{|}_{t=t^{*}}.$
On the other hand, we set $x_{j}(t):=x_{j}^{-}(t)+U_{1}(t)x_{j}^{-}(t)$ in
$\ell_{1}$. Since $x_{j}\in\ell_{1}$ for all $t$, we have $P_{1}x_{j}=x_{j}$
for all $t$. Differentiating gives
$\left(\partial_{t}P_{1}\right)x_{j}+P_{1}\left(\partial_{t}x_{j}\right)=(\partial_{t}x_{j}).$
Since $P_{1}\left(\partial_{t}x_{j}\right)\in\ell_{1}$, which is Lagrangian,
we obtain
$\omega\left(x_{i},\left(\partial_{t}P_{1}\right)x_{j}\right)=\omega(x_{i},\partial_{t}x_{j}).$
(9)
In addition, we have from $x_{j}(t):=x_{j}^{-}(t)+U_{1}(t)x_{j}^{-}(t)$ that
$\partial_{t}x_{j}=\left[1+U_{1}\right](\partial_{t}x_{j}^{-})+\left(\partial_{t}U_{1}\right)x_{j}^{-}.$
Since $x_{j}^{-}\in{\rm Ker}(J-{\mathrm{i}})$ for all $t$, we have
$(\partial_{t}x_{j}^{-})\in{\rm Ker}(J-{\mathrm{i}})$ as well, and the first
term is in $\ell_{1}$. On the other hand, $(\partial_{t}U_{1})x_{j}^{-}$ is in
${\rm Ker}(J+{\mathrm{i}})$. Together with (4), this gives
$\omega\left(x_{i},\left(\partial_{t}P_{1}\right)x_{j}\right)=\omega(x_{i},\partial_{t}x_{j})=\omega(x_{i},\left(\partial_{t}U_{1}\right)x_{j}^{-})=\omega(U_{1}x_{i}^{-},\left(\partial_{t}U_{1}\right)x_{j}^{-}).$
Using that $\omega(x,y)=\langle x,Jy\rangle_{{\mathcal{H}}_{b}}$ and that
$(\partial_{t}U_{1})x_{j}^{-}\in{\rm Ker}(J+{\mathrm{i}})$, we obtain, at
$t=t^{*}$, and recalling the definition of $b$ in (8),
$b\left(x_{i},x_{j}\right)=-{\mathrm{i}}\langle
U_{1}x_{i}^{-},(\partial_{t}U_{1})x_{j}^{-}\rangle_{{\mathcal{H}}_{b}}=\delta_{ij}(-{\mathrm{i}})\theta^{\prime}_{j}(t^{*}).$
The sesquilinear form $b$ is therefore diagonal in the $(x_{1},\cdots,x_{k})$
basis, with corresponding eigenvalues $(-{\mathrm{i}}\theta_{j}^{\prime})$.
The proof follows. ∎
#### 3.1.3. Definition with the unitary ${\mathcal{U}}$.
In the case where the stronger Assumption B in (5) is also satisfied, one has
a similar result with the unitaries ${\mathcal{U}}$ instead of $U$. We state
it without proof, as it is similar to the previous one.
###### Lemma 26.
Assume $({\mathcal{H}}_{b}={\mathcal{H}}_{1}\times{\mathcal{H}}_{2},\omega)$
satisfies Assumption B. Let $\ell_{1}(t)$ and $\ell_{2}(t)$ be two
continuously differentiable families of Lagrangians in
$\Lambda({\mathcal{H}}_{b})$, and let ${\mathcal{U}}_{1}(t)$ and
${\mathcal{U}}_{2}(t)$ be the corresponding unitaries of ${\mathcal{H}}_{1}$.
Then $t^{*}\in{\mathbb{T}}^{1}$ is a regular crossing of $(\ell_{1},\ell_{2})$
iff it is a regular crossing of ${\mathcal{U}}_{2}^{*}{\mathcal{U}}_{1}$. If
all crossings are regular, then,
${\rm Mas}\left(\ell_{1},\ell_{2},{\mathbb{T}}^{1}\right)={\rm
Sf}\left({\mathcal{U}}_{2}^{*}{\mathcal{U}}_{1},1,{\mathbb{T}}^{1}\right).$
The importance of this lemma comes from the fact that, in the finite
dimensional case (${\mathcal{H}}_{1}\approx{\mathbb{C}}^{n}$), the spectral
flow of a periodic family ${\mathcal{U}}(t)\in{\rm U}(n)$ across $1$ (or any
other point in $\SS^{1}$) equals the winding number of $\det{\mathcal{U}}(t)$:
${\rm Sf}\left({\mathcal{U}},z\in\SS^{1},{\mathbb{T}}^{1}\right)={\rm
Winding}\left(\det({\mathcal{U}}),{\mathbb{T}}^{1}\right).$
In our case where ${\mathcal{U}}={\mathcal{U}}_{2}^{*}{\mathcal{U}}_{1}$, we
have
$\det({\mathcal{U}}_{2}^{*}{\mathcal{U}}_{1})=\det({\mathcal{U}}_{1})/\det({\mathcal{U}}_{2})$,
hence
${\rm
Winding}\left(\det({\mathcal{U}}_{2}^{*}{\mathcal{U}}_{1}),{\mathbb{T}}^{1}\right)={\rm
Winding}\left(\det({\mathcal{U}}_{1}),{\mathbb{T}}^{1}\right)-{\rm
Winding}\left(\det({\mathcal{U}}_{2}),{\mathbb{T}}^{1}\right).$
We obtain the following result.
###### Lemma 27.
In the case where ${\mathcal{H}}_{b}={\mathbb{C}}^{n}\times{\mathbb{C}}^{n}$
with its canonical symplectic form, we have
${\rm
Mas}\left(\ell_{1},\ell_{2},{\mathbb{T}}^{1}\right)={\mathcal{I}}(\ell_{1},{\mathbb{T}}^{1})-{\mathcal{I}}(\ell_{2},{\mathbb{T}}^{1}),$
where, for a periodic continuous family $\ell(t)$ in
$\Lambda({\mathcal{H}}_{b})$, with corresponding unitary
${\mathcal{U}}(t)\in{\rm U}(n)$, we set
$\boxed{{\mathcal{I}}(\ell,{\mathbb{T}}^{1}):={\rm
Winding}\left(\det{\mathcal{U}},{\mathbb{T}}^{1}\right)\quad\in{\mathbb{Z}}.}$
In the infinite dimensional case, such a splitting is not valid. This gives
some complications (see Section 4). In order to have a unified definition with
the next section, we recall that, in Example 15, we proved that Dirichlet
boundary conditions correspond to the unitary
${\mathcal{U}}_{D}=-{\mathbb{I}}_{n}$. So, setting
$\ell_{D}:=\\{0\\}\times{\mathbb{C}}^{n}$, we have the alternative definition
${\mathcal{I}}(\ell,{\mathbb{T}}^{1})={\rm
Mas}(\ell,\ell_{D},{\mathbb{T}}^{1})={\rm
Sf}\left({\mathcal{U}},-1,{\mathbb{T}}^{1}\right).$
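In the finite dimensional case, the winding number in the definition of ${\mathcal{I}}$ can be computed by sampling $\det{\mathcal{U}}(t)$ along the loop and accumulating the phase increments. A short sketch (the sampled family is our own toy example, not from the text): for ${\mathcal{U}}(t)={\rm diag}(e^{2{\mathrm{i}}\pi t},e^{-4{\mathrm{i}}\pi t})$, the determinant $e^{-2{\mathrm{i}}\pi t}$ winds once negatively.

```python
import numpy as np

def winding(ts, f):
    # accumulate phase increments of f along the sampled loop ts
    vals = np.array([f(t) for t in ts])
    dphase = np.angle(vals[1:] / vals[:-1])  # each increment in (-pi, pi]
    return int(round(dphase.sum() / (2 * np.pi)))

ts = np.linspace(0.0, 1.0, 2001)             # one loop around T^1
detU = lambda t: np.exp(2j * np.pi * t) * np.exp(-4j * np.pi * t)
w = winding(ts, detU)                        # expected: -1
```

The sampling must be fine enough that consecutive phase increments stay in $(-\pi,\pi]$; otherwise the accumulated phase is aliased.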
### 3.2. Families of Hill’s operators, spectral flow
We now focus on a periodic family of Hill’s operators $h_{t}$. Let
${\mathbb{T}}^{1}\mapsto V_{t}$ be a periodic family of potentials satisfying
(1), and set
$h_{t}:=-\partial_{xx}^{2}+V_{t}(x).$
We assume that $t\mapsto V_{t}$ is continuously differentiable as a map from
${\mathbb{T}}^{1}$ to the Banach space
$L^{\infty}({\mathbb{R}},{\mathcal{S}}_{n})$. Since ${\mathbb{T}}^{1}$ is
compact, $V(t,x)$ is uniformly bounded. In particular, as in Section 2.2.2,
the operator $h_{t}$ is essentially self-adjoint with fixed domain
${\mathcal{D}}=H^{2}({\mathbb{R}},{\mathbb{C}}^{n})$ for all
$t\in{\mathbb{T}}^{1}$.
The spectrum of the family $(h_{t})_{t\in{\mathbb{T}}^{1}}$ is the set
$\sigma\left(h_{t},{\mathbb{T}}^{1}\right):=\bigcup_{t\in{\mathbb{T}}^{1}}\sigma(h_{t}).$
It is the union, over the compact set ${\mathbb{T}}^{1}$, of the spectra of
$(h_{t})$. Since $t\mapsto\sigma(h_{t})$ is continuous,
$\sigma\left(h_{t},{\mathbb{T}}^{1}\right)$ is a closed set in ${\mathbb{R}}$.
The complement of $\sigma\left(h_{t},{\mathbb{T}}^{1}\right)$ is the resolvent
set of the family $(h_{t})_{t\in{\mathbb{T}}^{1}}$.
We now consider a corresponding family of edge self-adjoint operators, of the
form $(h_{t}^{\sharp},\ell_{t}^{\sharp})$. We say that this family is
continuous, continuously differentiable, etc. if the corresponding family of
Lagrangian planes $\ell_{t}^{\sharp}$ is (in $\Lambda({\mathcal{H}}_{b})$).
Fix $E\in{\mathbb{R}}$ in the resolvent set of
$(h_{t})_{t\in{\mathbb{T}}^{1}}$. As $t$ varies in ${\mathbb{T}}^{1}$, the
spectrum of the bulk operator $h_{t}$ stays away from $E$. However, for the
edge operators $(h^{\sharp}_{t},\ell_{t}^{\sharp})$, some eigenvalues may
touch the energy $E$. If $t^{*}\in{\mathbb{T}}^{1}$ is such that $\dim{\rm
Ker}\left(h^{\sharp}_{t^{*}}-E\right)=k\in{\mathbb{N}}$, then, as in Lemma 24,
we can find $\varepsilon>0$, $\eta>0$ and $k$ continuously differentiable
branches of eigenvalues $\lambda_{j}(t)\in{\mathbb{R}}$ so that, for
$t\in(t^{*}-\varepsilon,t^{*}+\varepsilon)$,
$\sigma\left(h_{t}^{\sharp}\right)\cap
B(E,\eta)=\\{\lambda_{1}(t),\cdots,\lambda_{k}(t)\\}\cap B(E,\eta).$
At $t=t^{*}$, we have $\lambda_{1}(t^{*})=\cdots=\lambda_{k}(t^{*})=E$. The
crossing $t^{*}$ is regular if $\lambda_{j}^{\prime}(t^{*})\neq 0$ for all
$1\leq j\leq k$. For such a crossing, we set
${\rm deg}(t^{*})=\sum_{j=1}^{k}{\rm
sgn}\left(\lambda_{j}^{\prime}(t^{*})\right).$
We say that the energy $E$ is a regular energy if all crossings at $E$ are
regular. For such an energy, we define the spectral flow of $(h_{t}^{\sharp})$
across $E$ as the net number of eigenvalues crossing $E$ downwards:
$\boxed{{\rm Sf}(h^{\sharp}_{t},E,{\mathbb{T}}^{1}):=-\sum_{t^{*}\text{regular
crossing}}{\rm deg}(t^{*})\quad\in{\mathbb{Z}}.}$
The main result of this section is the following.
###### Theorem 28.
Let $(a,b)\subset{\mathbb{R}}$ be any interval in
${\mathbb{R}}\setminus\sigma\left(h_{t},{\mathbb{T}}^{1}\right)$. Then
* •
almost every $E$ in $(a,b)$ is a regular energy for
$(h_{t}^{\sharp},\ell_{t}^{\sharp})$;
* •
for any regular energy $E$ in $(a,b)$, we have
${\rm Sf}(h^{\sharp}_{t},E,{\mathbb{T}}^{1})={\rm
Mas}\left(\ell_{t}^{+}(E),\ell_{t}^{\sharp},{\mathbb{T}}^{1}\right)={\mathcal{I}}(\ell_{t}^{+}(E),{\mathbb{T}}^{1})-{\mathcal{I}}(\ell_{t}^{\sharp},{\mathbb{T}}^{1}).$
###### Proof.
The first part comes from Sard’s lemma, and can be proved as in [Gon20, Lemma
III.18].
Fix a regular energy $E$, let $t^{*}$ be a crossing point so that $\dim{\rm
Ker}\left(h_{t^{*}}^{\sharp}-E\right)=k\in{\mathbb{N}}$, and let
$\lambda_{1},\cdots,\lambda_{k}$ be the corresponding branches of eigenvalues.
The idea of the proof is to follow the two families of branches
$(t,\lambda_{j}(t))$ and $(t,E)$, describing respectively $\ell^{\sharp}_{t}$
and $\ell^{+}_{t}(E)$.
For the first branch, let $\psi_{1}(t),\cdots,\psi_{k}(t)$ be a continuously
differentiable family of ${\mathcal{H}}^{\sharp}$-orthonormal eigenvectors in
${\mathcal{D}}^{\sharp}_{t}:={\rm Tr}^{-1}\left(\ell_{t}^{\sharp}\right)$, so
that $h_{t}^{\sharp}\psi_{j}(t)=\lambda_{j}(t)\psi_{j}(t)$, and let
$u_{j}:={\rm Tr}\left(\psi_{j}\right),\quad\text{so
that}\quad\ell^{+}_{t^{*}}(E)\cap\ell^{\sharp}_{t^{*}}={\rm
Vect}\left(u_{1}(t^{*}),\cdots,u_{k}(t^{*})\right).$
We have, for all $t\in(t^{*}-\varepsilon,t^{*}+\varepsilon)$, that (recall
that $h_{t}^{\sharp}\subset h_{t,\rm max}^{\sharp}$)
$\langle\psi_{i}(t),(h^{\sharp}_{t,{\rm
max}}-\lambda_{j}(t))\psi_{j}(t)\rangle_{{\mathcal{H}}^{\sharp}}=0.$
Differentiating and evaluating at $t=t^{*}$ gives
$\langle\psi_{i},\partial_{t}(h^{\sharp}_{t,{\rm
max}}-\lambda_{j})\psi_{j}\rangle_{{\mathcal{H}}^{\sharp}}\big{|}_{t=t^{*}}+\langle\psi_{i},(h^{\sharp}_{t^{*},{\rm
max}}-E)\left(\partial_{t}\psi_{j}\right)\rangle_{{\mathcal{H}}^{\sharp}}\big{|}_{t=t^{*}}=0.$
For the last term, we put the operator $(h_{t^{*},{\rm max}}^{\sharp}-E)$ on
the other side using the second Green’s identity. Since
$(h^{\sharp}_{t^{*},{\rm max}}-E)\psi_{i}=0$, we get
$\langle\psi_{i},(h^{\sharp}_{t^{*},{\rm
max}}-E)\left(\partial_{t}\psi_{j}\right)\rangle_{{\mathcal{H}}^{\sharp}}\big{|}_{t=t^{*}}=\omega\left(u_{i},\partial_{t}u_{j}\right)\big{|}_{t=t^{*}}=\omega\left(u_{i},\left(\partial_{t}P^{\sharp}_{t^{*}}\right)u_{j}\right)\big{|}_{t=t^{*}}.$
For the last equality, we introduced $P^{\sharp}_{t}$ the projection on
$\ell_{t}^{\sharp}$, and used an equality similar to (9). This gives our first
identity
$\delta_{ij}\lambda_{j}^{\prime}(t^{*})=\langle\psi_{i},\partial_{t}(h^{\sharp}_{t,{\rm
max}})\psi_{j}\rangle_{{\mathcal{H}}^{\sharp}}\big{|}_{t=t^{*}}+\omega\left(u_{i},\partial_{t}\left(P^{\sharp}_{t}\right)u_{j}\right)\big{|}_{t=t^{*}}.$
###### Remark 29.
In the case where the domain $\ell^{\sharp}_{t}=\ell^{\sharp}$ is independent
of $t$, we recover the Hellmann-Feynman identity
$\langle\psi_{i},\partial_{t}(h^{\sharp}_{t}-\lambda_{j})\psi_{j}\rangle_{{\mathcal{H}}^{\sharp}}\big{|}_{t=t^{*}}=0$.
For the second branch, let $(\phi_{1}(t),\cdots,\phi_{k}(t))$ be a smooth
family of linearly independent functions in ${\mathcal{S}}^{+}(t,E)$, chosen
so that, at $t=t^{*}$,
$\phi_{j}(t^{*})={\mathds{1}}_{{\mathbb{R}}^{+}}\psi_{j}(t^{*})$. We set
$v_{j}={\rm Tr}(\phi_{j}),\quad\text{so, at $t=t^{*}$},\quad
v_{j}(t^{*})=u_{j}(t^{*}).$
This time, we have, for all $t\in(t^{*}-\varepsilon,t^{*}+\varepsilon)$,
$\left\langle\phi_{i},\left(h^{\sharp}_{t,\rm
max}-E\right)\phi_{j}\right\rangle=0.$
Differentiating and evaluating at $t=t^{*}$ gives as before
$\left\langle\phi_{i},\partial_{t}\left(h^{\sharp}_{t^{*},\rm
max}\right)\phi_{j}\right\rangle=-\omega(v_{i},\partial_{t}v_{j})=-\omega\left(v_{i},\left(\partial_{t}P_{t}^{+}\right)v_{j}\right)=-\omega\left(u_{i},\left(\partial_{t}P_{t}^{+}\right)u_{j}\right).$
Gathering the two identities shows that
$\delta_{ij}\lambda_{j}^{\prime}(t^{*})=-\omega\left(u_{i},\left(\partial_{t}P_{t}^{+}\right)u_{j}\right)\big{|}_{t=t^{*}}+\omega\left(u_{i},\partial_{t}\left(P^{\sharp}_{t}\right)u_{j}\right)\big{|}_{t=t^{*}}.$
We recognise the sesquilinear form $b$ defined in (8). This form is therefore
diagonal in the $(u_{1},\cdots,u_{k})$ basis, and its eigenvalues are the
$-\lambda_{j}^{\prime}(t^{*})$. The result follows. ∎
While our proofs are valid only for continuously differentiable families, they
can be extended to the continuous case. Indeed, it is well-known that the
spectral flow, the Maslov index, and the winding number depend only on
the homotopy class of the corresponding families.
For instance, the spectral flow is independent of the energy $E$ in the gap
$(a,b)$. So all indices are independent of $E$ in the gap, and Theorem 28 also
holds for non-regular $E\in(a,b)$ (with the corresponding continuous versions
for the definitions of the indices).
### 3.3. Applications
Let us give two applications of the previous theory. The first one shows that
a spectral flow must appear when modifying Robin boundary conditions. The
second one concerns the important case of junctions between two materials.
#### 3.3.1. Robin boundary conditions
In the case $n=1$, consider a fixed (independent of $t$) bounded potential
$V_{t}(x)=V(x)$. We consider the self-adjoint Robin operator
$h^{\sharp}_{t}=-\partial_{xx}^{2}+V$ on $L^{2}({\mathbb{R}}^{+})$, with the
$t$-dependent domain
${\mathcal{D}}_{t}:=\left\\{\psi\in H^{2}({\mathbb{R}}^{+}),\quad\cos(\pi
t)\psi(0)-\sin(\pi t)\psi^{\prime}(0)=0\right\\}.$
Note that ${\mathcal{D}}_{t+1}={\mathcal{D}}_{t}$, so $h_{t}^{\sharp}$ is
indeed periodic in $t$. For $t=0$, we recover Dirichlet boundary conditions,
and for $t=\frac{1}{2}$, we recover Neumann boundary conditions, so Robin
boundary conditions somehow interpolate between these two cases. The
Lagrangian plane of ${\mathcal{H}}_{b}={\mathbb{C}}\times{\mathbb{C}}$
corresponding to the extension ${\mathcal{D}}_{t}$ is
$\ell_{t}^{\sharp}={\rm Vect}_{\mathbb{C}}\begin{pmatrix}\sin(\pi t)\\\
\cos(\pi t)\end{pmatrix}\subset{\mathbb{C}}\times{\mathbb{C}}.$
It is of the form $\ell_{t}^{\sharp}=\\{(\Theta x,\Pi x),\ x\in{\mathbb{C}}\\}$
for $\Theta=\sin(\pi t)$ and $\Pi=\cos(\pi t)$. So, by the results of Example
16, the corresponding unitary ${\mathcal{U}}(t)\in{\rm U}(1)\approx\SS^{1}$,
is
${\mathcal{U}}(t)=\dfrac{\sin(\pi t)+{\mathrm{i}}\cos(\pi t)}{\sin(\pi
t)-{\mathrm{i}}\cos(\pi t)}={\mathrm{e}}^{2{\mathrm{i}}\pi t}.$
We see that ${\mathcal{U}}(t)$ winds once positively around $\SS^{1}$ as $t$
runs through ${\mathbb{T}}^{1}$, that is
${\mathcal{I}}(\ell_{t}^{\sharp},{\mathbb{T}}^{1})=1.$
Using Theorem 28, and the fact that $\ell^{+}_{t}(E)$ is independent of $t$,
we obtain
${\rm Sf}\left(h_{t}^{\sharp},E,{\mathbb{T}}^{1}\right)=-1.$
We deduce that there is a spectral flow of exactly $1$ eigenvalue going
upwards in all spectral gaps of $h$. This includes the lower gap
$(-\infty,\inf\sigma(h))$.
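This flow can be made explicit in the free case $V=0$ (a numerical illustration, not part of the argument): the Robin condition admits the bound state $\psi(x)={\mathrm{e}}^{-\kappa x}$ with $\kappa=-\cot(\pi t)$, which is admissible ($\kappa>0$) exactly for $t\in(\frac{1}{2},1)$, and its energy $-\kappa^{2}$ sweeps the whole lower gap during each period.

```python
import numpy as np

# Numerical illustration (not part of the argument): for V = 0, the Robin
# condition cos(pi t) psi(0) - sin(pi t) psi'(0) = 0 is solved by
# psi(x) = exp(-kappa x) with kappa = -cot(pi t), admissible (kappa > 0)
# exactly for t in (1/2, 1); the eigenvalue is E(t) = -kappa**2.
def bound_state_energy(t):
    kappa = -np.cos(np.pi * t) / np.sin(np.pi * t)
    assert kappa > 0, "no bound state for this t"
    return -kappa**2

ts = np.linspace(0.5 + 1e-3, 1.0 - 1e-3, 2000)
energies = np.array([bound_state_energy(t) for t in ts])
E_ref = -1.0  # any fixed energy in the lower gap (-inf, 0)
crossings = int(np.sum(np.diff((energies < E_ref).astype(int)) != 0))
print(crossings)  # 1: the branch sweeps the whole lower gap once per period
```

The branch emerges from the essential spectrum $[0,+\infty)$ at $t=\frac{1}{2}$ and dives to $-\infty$ as $t\to 1$, so every energy in the lower gap is crossed exactly once per period, in accordance with a spectral flow of a single eigenvalue.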
#### 3.3.2. Junction between two materials.
We now consider a junction between left and right potentials $V_{L,t}$ and
$V_{R,t}$, where $t\mapsto V_{L,t}$ and $t\mapsto V_{R,t}$ are two periodic
continuously differentiable families of potentials in
$L^{\infty}({\mathbb{R}},{\mathcal{S}}_{n})$. Take $\chi$ a bounded switch
function, satisfying, for some $X>0$ large enough,
$\forall x\leq-X,\ \chi(x)=1,\quad\text{and}\quad\forall x\geq
X,\quad\chi(x)=0.$
We consider the domain wall Hill’s operator
$h^{\chi}_{t}:=-\partial_{xx}^{2}+V_{L,t}(x)\chi(x)+V_{R,t}(x)(1-\chi(x)).$
Let $E\in{\mathbb{R}}$ be in the resolvent set of the bulk operators $h_{R,t}$
and $h_{L,t}$ for all $t\in{\mathbb{T}}^{1}$. Again, some eigenvalues of
$h^{\chi}_{t}$ might cross $E$ as $t$ goes from $0$ to $1$, and we can define
a corresponding spectral flow ${\rm Sf}(h_{t}^{\chi},E,{\mathbb{T}}^{1})$.
###### Theorem 30 (Bulk-edge correspondence for junctions).
With the previous notation, let
$(a,b)\subset{\mathbb{R}}\setminus\\{\sigma\left(h_{R,t},{\mathbb{T}}^{1}\right)\cup\sigma\left(h_{L,t},{\mathbb{T}}^{1}\right)\\}$.
Then,
* •
almost every $E\in(a,b)$ is a regular energy for $h_{t}^{\chi}$;
* •
for any such regular energy, we have
${\rm Sf}(h_{t}^{\chi},E,{\mathbb{T}}^{1})={\rm
Mas}\left(\ell_{R,t}^{+}(E),\ell_{L,t}^{-}(E),{\mathbb{T}}^{1}\right)={\mathcal{I}}(\ell_{R,t}^{+}(E),{\mathbb{T}}^{1})-{\mathcal{I}}(\ell_{L,t}^{-}(E),{\mathbb{T}}^{1}).$
The right-hand side is independent of the switch $\chi$.
###### Remark 31.
Applying this theorem with $V_{L}=V_{R}$ shows that, for $E$ in a gap of all
operators $h_{t}$ (so that there is no spectral flow), we have
$0={\rm
Mas}\left(\ell_{t}^{+}(E),\ell_{t}^{-}(E),{\mathbb{T}}^{1}\right)={\mathcal{I}}(\ell_{t}^{+}(E),{\mathbb{T}}^{1})-{\mathcal{I}}(\ell_{t}^{-}(E),{\mathbb{T}}^{1}).$
So
${\mathcal{I}}(\ell_{t}^{+}(E),{\mathbb{T}}^{1})={\mathcal{I}}(\ell_{t}^{-}(E),{\mathbb{T}}^{1})$.
This means that we can drop the superscript $\pm$ in Theorem 30. The common
integer ${\mathcal{I}}(\ell_{t}^{\pm}(E),{\mathbb{T}}^{1})$ is the usual
bulk/edge index.
###### Proof.
The following proof uses the theory developed in the previous sections. We
give a different proof in the Schrödinger case (see Theorem 40 below).
Let us denote by $\ell^{\pm}_{\chi,t}(x_{0},E)$ the Lagrangian planes obtained
with the potential
$V^{\chi}_{t}(x):=V_{L,t}(x)\chi(x)+V_{R,t}(x)(1-\chi(x)),$
and when the real line ${\mathbb{R}}$ is cut at the location
$x_{0}\in{\mathbb{R}}$. By Lemma 19, we have
$\forall x_{0}\in{\mathbb{R}},\quad\dim{\rm
Ker}\left(h_{t}^{\chi}-E\right)=\dim\left(\ell_{\chi,t}^{+}(x_{0},E)\cap\ell_{\chi,t}^{-}(x_{0},E)\right).$
Adapting the proof of Theorem 28 shows that
$\displaystyle{\rm Sf}\left(h_{t}^{\chi},E,{\mathbb{T}}^{1}\right)$
$\displaystyle={\rm
Mas}\left(\ell^{+}_{\chi,t}(x_{0},E),\ell^{-}_{\chi,t}(x_{0},E),{\mathbb{T}}^{1}\right)$
$\displaystyle={\mathcal{I}}\left(\ell^{+}_{\chi,t}(x_{0},E),{\mathbb{T}}^{1}\right)-{\mathcal{I}}\left(\ell^{-}_{\chi,t}(x_{0},E),{\mathbb{T}}^{1}\right).$
Since $V$ is uniformly (hence locally) bounded, all Cauchy solutions to
$-\psi^{\prime\prime}+V\psi=E\psi$ are well defined and continuously
differentiable on the whole line ${\mathbb{R}}$. This implies that the maps
$x_{0}\mapsto\ell^{\pm}_{\chi,t}(x_{0},E)$ are also continuous. In particular,
since the index ${\mathcal{I}}$ depends only on the homotopy class of the loops, we have
${\mathcal{I}}\left(\ell^{+}_{\chi,t}(x_{0},E),{\mathbb{T}}^{1}\right)={\mathcal{I}}\left(\ell^{+}_{\chi,t}(x_{0}=X,E),{\mathbb{T}}^{1}\right).$
The last quantity only involves the half space $\\{x\geq X\\}$. On this half
space, we have $V^{\chi}_{t}(x)=V_{R,t}(x)$, so
${\mathcal{I}}\left(\ell^{+}_{\chi,t}(x_{0}=X,E),{\mathbb{T}}^{1}\right)={\mathcal{I}}\left(\ell^{+}_{R,t}(X,E),{\mathbb{T}}^{1}\right)={\mathcal{I}}\left(\ell^{+}_{R,t}(E),{\mathbb{T}}^{1}\right).$
The proof that
${\mathcal{I}}\left(\ell^{-}_{\chi,t}(x_{0},E),{\mathbb{T}}^{1}\right)={\mathcal{I}}\left(\ell^{-}_{L,t}(E),{\mathbb{T}}^{1}\right)$
is similar, and the result follows. ∎
## 4\. The infinite dimensional case
In this section, we explain how to modify the previous proofs to handle the
infinite dimensional setting, by which we mean the PDE Schrödinger case. We
present this section separately, since it introduces some technical details,
and since the results are slightly different.
### 4.1. Schrödinger operators on a tube
We now consider systems defined on a $d$-dimensional cylinder of the form
$\Omega:={\mathbb{R}}\times\Gamma\subset{\mathbb{R}}^{d}.$
Here and in what follows, $\Gamma=(0,1)^{d-1}$ is the $(d-1)$-dimensional unit
open square. A point in $\Omega$ is denoted by ${\mathbf{x}}=(x,{\mathbf{y}})$
with $x\in{\mathbb{R}}$ and ${\mathbf{y}}\in\Gamma$.
Let $V:\Omega\to{\mathbb{R}}$ be a real-valued potential, which we assume to
be bounded on $\Omega$. We consider the bulk Schrödinger operator $H$ of the form
$H:=-\Delta+V,\quad\text{acting
on}\quad{\mathcal{H}}:=L^{2}(\Omega,{\mathbb{C}}).$
The case of general second order elliptic operators, and with values in
${\mathbb{C}}^{n}$, can be treated similarly. Again, we do not assume here
that $V$ is periodic, but only that $V$ is bounded.
The operator $H$ with core domain $C^{\infty}_{0}(\Omega)$ is symmetric, and
we have
${\mathcal{D}}_{\rm
min}=H^{2}_{0}(\Omega,{\mathbb{C}}),\quad\text{and}\quad{\mathcal{D}}_{\rm
max}=H^{2}(\Omega,{\mathbb{C}}).$
This time, the bulk operator is not self-adjoint, and indeed, boundary
conditions must be chosen on the boundary
$\partial\Omega={\mathbb{R}}\times\partial\Gamma$.
#### 4.1.1. The bulk Schrödinger operators
Let us explain in detail how we choose the boundary conditions on
$\partial\Omega={\mathbb{R}}\times\partial\Gamma$ for the bulk operator.
First, we see $V$ as a perturbation of $-\Delta$, so, once the domain of
$-\Delta$ has been chosen, the Schrödinger operator $-\Delta+V$ will have the
same domain (see for instance [Kat13, Chapter 5.4.3]).
We have $\Omega={\mathbb{R}}\times\Gamma$, so
${\mathcal{H}}:=L^{2}(\Omega)=L^{2}({\mathbb{R}})\otimes L^{2}(\Gamma),$
and, with respect to this decomposition,
$-\Delta=\left(-\partial_{xx}^{2}\right)\otimes{\mathbb{I}}_{L^{2}(\Gamma)}+{\mathbb{I}}_{L^{2}({\mathbb{R}})}\otimes\left(-\Delta_{\Gamma}\right).$
The operator $-\partial_{xx}^{2}$ is essentially self-adjoint on
$L^{2}({\mathbb{R}})$, so $-\Delta$ is self-adjoint on $L^{2}(\Omega)$
whenever $-\Delta_{\Gamma}$ is so on $L^{2}(\Gamma)$. In what follows, we fix
one such extension. We consider here the periodic realisation of
$-\Delta_{\Gamma}$, but the proofs are similar for other choices, such as the
Dirichlet extension, the Neumann one, or the quasi-periodic one. So we see
$\Gamma=(0,1)^{d-1}\approx{\mathbb{T}}^{d-1}$ as the $(d-1)$-torus, and we
consider the domain
${\mathcal{D}}_{\Gamma,{\rm per}}:=H^{2}({\mathbb{T}}^{d-1}).$
The operator $\left(-\Delta_{\Gamma,{\rm per}},{\mathcal{D}}_{\Gamma,{\rm
per}}\right)$ is self-adjoint. It has compact resolvent, is diagonal in the
Fourier basis, and admits a spectral decomposition of the form
$-\Delta_{\Gamma,{\rm
per}}:=\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}4\pi^{2}|{\mathbf{k}}|^{2}|e_{\mathbf{k}}\rangle\langle
e_{\mathbf{k}}|,\quad\text{with}\quad
e_{\mathbf{k}}({\mathbf{y}}):={\mathrm{e}}^{{\mathrm{i}}2\pi{\mathbf{k}}\cdot{\mathbf{y}}}.$
The corresponding operator $-\Delta_{\rm
per}:=(-\partial_{xx}^{2})\otimes{\mathbb{I}}+{\mathbb{I}}\otimes(-\Delta_{\Gamma,{\rm per}})$ is self-adjoint
on $L^{2}(\Omega)$, with domain
${\mathcal{D}}_{\rm
per}:=\overline{H^{2}({\mathbb{R}}){\otimes}{\mathcal{D}}_{\Gamma,{\rm
per}}}^{{\mathcal{D}}_{\rm max}}.$
The subscript "${\rm per}$" only refers to periodicity in the last $(d-1)$
dimensions. The elements in ${\mathcal{H}}$ can be written in the partial
Fourier form
$f(x,{\mathbf{y}})=\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}f_{\mathbf{k}}(x)e_{\mathbf{k}}({\mathbf{y}}),\quad\text{with}\quad\|f\|_{{\mathcal{H}}}^{2}:=\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}\|f_{\mathbf{k}}\|_{L^{2}({\mathbb{R}})}^{2}<\infty.$
(10)
We have $f\in{\mathcal{D}}_{\rm per}$ if, in addition, $\|(-\Delta_{{\rm
per}})f\|_{{\mathcal{H}}}<\infty$, where
$\|(-\Delta_{\rm
per})f\|_{{\mathcal{H}}}^{2}=\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}\left\|-f_{\mathbf{k}}^{\prime\prime}+4\pi^{2}|{\mathbf{k}}|^{2}f_{\mathbf{k}}\right\|_{L^{2}({\mathbb{R}})}^{2}<\infty.$
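This Fourier-side description is easy to probe numerically (an illustration with $d=2$, so that $\Gamma\approx{\mathbb{T}}^{1}$; the grid size is an arbitrary choice): applying the periodic Laplacian to a mode $e_{\mathbf{k}}$ on a grid returns the eigenvalue $4\pi^{2}|{\mathbf{k}}|^{2}$.

```python
import numpy as np

# Check that the periodic Laplacian is diagonal in the Fourier basis
# e_k(y) = exp(2 i pi k y), with eigenvalue 4 pi^2 |k|^2 (illustration for
# d = 2, so Gamma ~ the 1-torus; N is an arbitrary grid size).
N, k = 2048, 5
y = np.arange(N) / N
e_k = np.exp(2j * np.pi * k * y)
freqs = np.fft.fftfreq(N, d=1.0 / N)              # integer Fourier frequencies
lap_e_k = np.fft.ifft((2 * np.pi * freqs) ** 2 * np.fft.fft(e_k))
eigenvalue = (lap_e_k / e_k).real.mean()
print(eigenvalue)  # ~ 4 * pi^2 * 25 = 986.96...
```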
#### 4.1.2. Edge Schrödinger operators on a tube
We now define the edge Schrödinger operator
$H^{\sharp}:=-\Delta+V\quad\text{acting on}\quad
L^{2}(\Omega^{+},{\mathbb{C}}),\quad\text{where}\quad\Omega^{+}:={\mathbb{R}}^{+}\times\Gamma.$
This operator acts on the right half tube. We sometimes write $H^{\sharp,+}$
for $H^{\sharp}$ and define $H^{\sharp,-}$ as the corresponding operator on
the left half tube $\Omega^{-}:={\mathbb{R}}^{-}\times\Gamma$. The operator
$H^{\sharp}$ with core domain $C^{\infty}_{0}(\Omega^{+})$ is symmetric, and
we have,
${\mathcal{D}}_{\rm
min}^{\sharp}=H^{2}_{0}(\Omega^{+},{\mathbb{C}}),\quad\text{and}\quad{\mathcal{D}}_{\rm
max}^{\sharp}=H^{2}(\Omega^{+},{\mathbb{C}}).$
Again, we need to specify the boundary conditions. On the boundary
${\mathbb{R}}^{+}\times\partial\Gamma$, we keep the same boundary conditions
as for the bulk one. In other words, writing
${\mathcal{H}}^{\sharp}:=L^{2}(\Omega^{+})=L^{2}({\mathbb{R}}^{+})\otimes
L^{2}(\Gamma),$
and setting
$-\Delta^{\sharp}_{{\rm per},\rm min}:=(-\partial_{xx}^{2})^{\sharp}_{\rm
min}\otimes{\mathbb{I}}+{\mathbb{I}}\otimes\left(-\Delta_{\Gamma,{\rm
per}}\right)\quad\text{and}\quad-\Delta^{\sharp}_{{\rm per},\rm
max}:=(-\partial_{xx}^{2})^{\sharp}_{\rm
max}\otimes{\mathbb{I}}+{\mathbb{I}}\otimes\left(-\Delta_{\Gamma,{\rm
per}}\right),$
we only look for self-adjoint extensions $(-\Delta^{\sharp})$ of the Laplacian
satisfying
$-\Delta^{\sharp}_{{\rm per},\rm
min}\subset-\Delta^{\sharp}\subset-\Delta^{\sharp}_{{\rm per},\rm max}.$
We are only looking at the self-adjoint domains contained in
${\mathcal{D}}_{{\rm per},{\rm max}}^{\sharp}$. We identify this last domain
in the next Lemma (compare with Lemma 18).
###### Lemma 32.
A function is in ${\mathcal{D}}_{{\rm per},{\rm max}}^{\sharp}$ iff it is the
restriction to $\Omega^{+}$ of an element in the bulk domain
${\mathcal{D}}_{\rm per}$.
###### Proof.
This follows from the fact that there is an extension operator
$H^{2}(\Omega^{+})\to H^{2}(\Omega)$ which can be constructed with reflection
operators, see e.g. [GT15, Theorem 7.25] or [LM12, Theorem 8.1]. These
reflection operators keep the periodic properties at the boundary
${\mathbb{R}}\times\partial\Gamma$. ∎
In our case of periodic boundary conditions, we obtain
${\mathcal{D}}^{\sharp}_{{\rm per},{\rm
min}}=H^{2}_{0}({\mathbb{R}}^{+}\times{\mathbb{T}}^{d-1},{\mathbb{C}}),\quad\text{and}\quad{\mathcal{D}}^{\sharp}_{{\rm
per},{\rm max}}=H^{2}({\mathbb{R}}^{+}\times{\mathbb{T}}^{d-1},{\mathbb{C}}).$
### 4.2. Trace maps, and the boundary space ${\mathcal{H}}_{b}^{\prime}$
In order to express the second Green’s identity in this setting, we recall
some basic facts on the Dirichlet and Neumann trace operators.
#### 4.2.1. Boundary Sobolev-like spaces
We will need the boundary Hilbert spaces $H^{s}_{\rm per}(\Gamma)$. To define
them, we note that $(-\Delta_{\Gamma,{\rm per}})$ is positive. For $0\leq
s\leq 2$, and for functions $f,g\in{\mathcal{D}}_{\Gamma,{\rm per}}$, we set
$\langle f,g\rangle_{H^{s}_{\rm per}(\Gamma)}=\frac{1}{2}\langle
f,g\rangle_{L^{2}(\Gamma)}+\frac{1}{2}\left\langle(-\Delta_{\Gamma,{\rm
per}})^{s/2}f,(-\Delta_{\Gamma,{\rm
per}})^{s/2}g\right\rangle_{L^{2}(\Gamma)}.$
The Hilbert space $H^{s}_{\rm per}(\Gamma)$ is the completion of
${\mathcal{D}}_{\Gamma,{\rm per}}$ for the corresponding norm. The
$\frac{1}{2}$ factor is here so that $H^{0}_{\rm per}(\Gamma)=L^{2}(\Gamma)$ with equal norms.
In terms of Fourier coefficients, we have
$\langle
f,g\rangle_{H^{s}_{\rm per}(\Gamma)}=\frac{1}{2}\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}\overline{f_{\mathbf{k}}}g_{\mathbf{k}}\left(1+(4\pi^{2}|{\mathbf{k}}|^{2})^{s}\right).$
Finally, for $0\leq s\leq 2$, we define $H^{-s}_{\rm per}(\Gamma)$ as the dual
of $H^{s}_{\rm per}(\Gamma)$ for the $L^{2}(\Gamma)$-inner product. It is also
a Hilbert space, with corresponding inner product
$\langle f,g\rangle_{H^{-s}_{\rm
per}(\Gamma)}=\frac{1}{2}\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}\overline{f_{\mathbf{k}}}g_{\mathbf{k}}\left(1+(4\pi^{2}|{\mathbf{k}}|^{2})^{s}\right)^{-1}.$
It is classical that, for $-2\leq s^{\prime}\leq s\leq 2$, we have
$H^{s}_{{\rm per}}(\Gamma)\hookrightarrow H^{s^{\prime}}_{\rm per}(\Gamma)$
with compact embedding, and that $H^{s}_{{\rm per}}(\Gamma)$ is dense in
$H^{s^{\prime}}_{\rm per}(\Gamma)$. For $s<0$, the subscript "${\rm per}$" can
be dropped, since the space equals the usual $H^{s}(\Gamma)$ Sobolev space.
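These weighted-$\ell^{2}$ descriptions make the duality quantitative: with Fourier-side weights $w_{\mathbf{k}}$ for the $H^{s}_{\rm per}$ norm, the weighted Cauchy-Schwarz inequality gives $|\langle f,g\rangle_{L^{2}(\Gamma)}|\leq\|f\|_{H^{s}_{\rm per}}\|g\|_{H^{-s}_{\rm per}}$. A sketch on random truncated Fourier coefficients (the truncation, the exponent $s$, and the exact-duality normalisation $w_{\mathbf{k}}^{-1}$ for the dual weights are illustrative choices):

```python
import numpy as np

# Duality between H^s_per and H^{-s}_per on truncated Fourier series
# (illustration; K and s are arbitrary, and the dual weights are taken as
# 1/w_k, the normalisation making weighted Cauchy-Schwarz exact):
# |<f, g>_{L^2}| <= ||f||_{H^s} * ||g||_{H^{-s}}.
rng = np.random.default_rng(0)
K, s = 50, 1.5
k = np.arange(-K, K + 1)
w = 0.5 * (1.0 + (4 * np.pi**2 * k**2) ** s)      # H^s weights
f = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
g = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
pairing = abs(np.sum(np.conj(f) * g))             # |<f, g>_{L^2}|
norm_f_Hs = np.sqrt(np.sum(w * np.abs(f) ** 2))
norm_g_Hms = np.sqrt(np.sum(np.abs(g) ** 2 / w))
print(pairing <= norm_f_Hs * norm_g_Hms)  # True
```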
###### Remark 33.
Should we choose the Dirichlet Laplacian $-\Delta_{\Gamma,D}$ instead of the
periodic one, we would obtain the usual $H^{s}_{0}(\Gamma)$ Sobolev spaces
for $s>0$, and the subscript $0$ can be dropped for $s\leq 0$.
#### 4.2.2. Dirichlet and Neumann trace operators
For $\psi\in C^{\infty}(\Omega^{+})$, we introduce the functions
$\gamma^{D}\psi$ and $\gamma^{N}\psi$ defined on $\Gamma$ by
$\forall{\mathbf{y}}\in\Gamma,\quad(\gamma^{D}\psi)({\mathbf{y}}):=\psi(x=0,{\mathbf{y}}),\quad\left(\gamma^{N}\psi\right)({\mathbf{y}})=\partial_{x}\psi(x=0,{\mathbf{y}}).$
Our definition differs from the usual one
$\gamma^{N}\psi=-\partial_{x}\psi(0,\cdot)$, where the minus sign comes from
the outward normal direction seen from the $\Omega^{+}$ perspective. Our
definition without the minus sign matches the one of the previous section.
It is classical that $\gamma^{D}$ extends to a continuous map from
$H^{2}(\Omega^{+})$ onto $H^{3/2}(\Gamma)$, and that $\gamma^{N}$ extends to a
continuous map from $H^{2}(\Omega^{+})$ onto $H^{1/2}(\Gamma)$ (see for
instance [LM12, Theorem 8.3]). This suggests introducing the
boundary space
${\mathcal{H}}_{b}:=H^{3/2}_{\rm per}(\Gamma)\times H^{1/2}_{\rm
per}(\Gamma),$
and the boundary trace operator
${\rm Tr}:{\mathcal{D}}_{{\rm per},{\rm
max}}^{\sharp}\to{\mathcal{H}}_{b},\quad{\rm
Tr}(\psi):=\left(\gamma^{D}\psi,\gamma^{N}\psi\right).$
The counterpart of Lemma 12 is the following.
###### Lemma 34.
The map ${\rm Tr}$ from $\left({\mathcal{D}}_{{\rm per},{\rm
max}}^{\sharp},\|\cdot\|_{\sharp}\right)$ to ${\mathcal{H}}_{b}$ is well-
defined, continuous and onto.
In other terms, we do not lose the surjectivity of ${\rm Tr}$ by restricting
all domains to be periodic in the last $(d-1)$ directions.
###### Proof.
The fact that ${\rm Tr}$ is well-defined and continuous follows from the fact
that $\|\cdot\|_{\sharp}$ is equivalent to the usual $H^{2}(\Omega^{+})$ norm,
together with the continuity of the trace maps. To prove that ${\rm Tr}$ is
onto, one can adapt the proof of [LM12, Theorem 8.3]. We provide here an
alternative proof.
An element $f\in H^{s}_{\rm per}(\Gamma)$ is of the form
$f({\mathbf{y}})=\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}f_{\mathbf{k}}e_{\mathbf{k}}({\mathbf{y}}),\quad\text{with}\quad\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}\left(1+(4\pi^{2}|{\mathbf{k}}|^{2})^{s}\right)|f_{\mathbf{k}}|^{2}<\infty.$
Let $f\in H^{3/2}_{\rm per}(\Gamma)$ and $f^{\prime}\in H^{1/2}_{\rm
per}(\Gamma)$ with respective coefficients $(f_{\mathbf{k}})$ and
$(f_{\mathbf{k}}^{\prime})$. Consider also a smooth cut-off function $\chi(x)$
with $\chi(x)=1$ for $0\leq x<1/2$, $\chi(x)=0$ for $x>2$ and
$\int_{{\mathbb{R}}^{+}}\chi^{2}=1$. We set $\chi_{\mathbf{0}}=\chi$, and for
${\mathbf{k}}\in{\mathbb{Z}}^{d-1}\setminus\\{0\\}$
$\chi_{\mathbf{k}}(x):=\chi\left(x|{\mathbf{k}}|\right).$
For all ${\mathbf{k}}$, the function $\chi_{\mathbf{k}}$ is smooth, compactly
supported, with $\chi_{\mathbf{k}}(x)=1$ for all $x<\frac{1}{2|{\mathbf{k}}|}$. In
addition, we have the scalings
$\int_{{\mathbb{R}}^{+}}|\chi_{\mathbf{k}}|^{2}=\dfrac{1}{|{\mathbf{k}}|},\quad\int_{{\mathbb{R}}^{+}}|\chi_{\mathbf{k}}^{\prime}|^{2}\lesssim|{\mathbf{k}}|,\quad\int_{{\mathbb{R}}^{+}}|\chi_{\mathbf{k}}^{\prime\prime}|^{2}\lesssim|{\mathbf{k}}|^{3}.$
Finally, we consider the function defined on $\Omega^{+}$ by
$\Psi(x,{\mathbf{y}}):=\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}\left(f_{\mathbf{k}}+xf_{\mathbf{k}}^{\prime}\right)\chi_{\mathbf{k}}(x)e_{\mathbf{k}}({\mathbf{y}}).$
The function $\Psi$ is compactly supported, and we check that ${\rm
Tr}(\Psi)=(f,f^{\prime})$. It remains to check that $\Psi$ is in
$H^{2}(\Omega^{+})$. We have
$\|\Delta\Psi\|_{L^{2}(\Omega^{+})}^{2}\lesssim\int_{{\mathbb{R}}^{+}}\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}\left(|f_{\mathbf{k}}|^{2}|{\mathbf{k}}|^{4}|\chi_{\mathbf{k}}|^{2}+|f_{\mathbf{k}}|^{2}|\chi_{\mathbf{k}}^{\prime\prime}|^{2}+|f_{\mathbf{k}}^{\prime}|^{2}|\chi_{\mathbf{k}}^{\prime}|^{2}\right).$
Thanks to our scalings, we deduce that
$\|\Delta\Psi\|_{L^{2}(\Omega^{+})}^{2}\lesssim\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}|f_{\mathbf{k}}|^{2}|{\mathbf{k}}|^{3}+\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}|f_{\mathbf{k}}^{\prime}|^{2}|{\mathbf{k}}|\lesssim\|f\|_{H^{3/2}_{\rm
per}}^{2}+\|f^{\prime}\|_{H^{1/2}_{\rm per}}^{2}.$
The result follows. ∎
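The scalings of $\chi_{\mathbf{k}}$ in the proof follow from the change of variables $x\mapsto|{\mathbf{k}}|x$; they can be checked by quadrature with a concrete $C^{2}$ profile (a quintic smoothstep here, a hypothetical choice; any smooth cut-off gives the same powers of $|{\mathbf{k}}|$, up to the constants $\int|\chi|^{2}$, $\int|\chi^{\prime}|^{2}$, $\int|\chi^{\prime\prime}|^{2}$):

```python
import numpy as np

# Quadrature check of the cut-off scalings: with chi_k(x) = chi(|k| x),
#   int |chi_k|^2   = |k|**(-1) * int |chi|^2,
#   int |chi_k'|^2  = |k|       * int |chi'|^2,
#   int |chi_k''|^2 = |k|**3    * int |chi''|^2.
# The C^2 quintic-smoothstep profile below is a hypothetical choice.
def chi_and_derivatives(x):
    u = np.clip((x - 0.5) / 1.5, 0.0, 1.0)        # chi = 1 on [0, 1/2], 0 past 2
    s = 10 * u**3 - 15 * u**4 + 6 * u**5
    ds = (30 * u**2 - 60 * u**3 + 30 * u**4) / 1.5
    d2s = (60 * u - 180 * u**2 + 120 * u**3) / 1.5**2
    return 1.0 - s, -ds, -d2s

def integral(f, x):                                # trapezoid rule
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

x = np.linspace(0.0, 2.5, 200001)
chi, dchi, d2chi = chi_and_derivatives(x)
I = [integral(chi**2, x), integral(dchi**2, x), integral(d2chi**2, x)]
k = 7.0
c, dc, d2c = chi_and_derivatives(k * x)            # chi_k profile on the grid
J = [integral(c**2, x), integral((k * dc)**2, x), integral((k**2 * d2c)**2, x)]
print(np.allclose(J, [I[0] / k, k * I[1], k**3 * I[2]], rtol=1e-4))  # True
```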
We can now state the second Green’s identity in the PDE case.
###### Lemma 35 (Second Green’s formula).
For all $\phi,\psi\in{\mathcal{D}}^{\sharp}_{{\rm per},{\rm max}}$,
$\langle\phi,H^{\sharp}_{{\rm per},{\rm
max}}\psi\rangle_{{\mathcal{H}}^{\sharp}}-\langle H^{\sharp}_{{\rm per},{\rm
max}}\phi,\psi\rangle_{{\mathcal{H}}^{\sharp}}=\langle\gamma^{D}\phi,\gamma^{N}\psi\rangle_{L^{2}(\Gamma)}-\langle\gamma^{N}\phi,\gamma^{D}\psi\rangle_{L^{2}(\Gamma)}.$
Introducing the symplectic form $\omega$ on ${\mathcal{H}}_{b}$, defined by
$\forall(f,f^{\prime}),(g,g^{\prime})\in{\mathcal{H}}_{b},\quad\omega((f,f^{\prime}),(g,g^{\prime})):=\langle
f,g^{\prime}\rangle_{L^{2}(\Gamma)}-\langle
f^{\prime},g\rangle_{L^{2}(\Gamma)},$
the second Green’s identity simply reads
$\langle\phi,H^{\sharp}_{{\rm per},{\rm
max}}\psi\rangle_{{\mathcal{H}}^{\sharp}}-\langle H^{\sharp}_{{\rm per},{\rm
max}}\phi,\psi\rangle_{{\mathcal{H}}^{\sharp}}=\omega\left({\rm Tr}(\phi),{\rm
Tr}(\psi)\right).$
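The identity can be probed on a discrete half line, where the summation-by-parts algebra is exact (a finite-difference sketch with no transverse variable; the second-difference operator and the one-sided traces are stand-ins for $H^{\sharp}_{{\rm per},{\rm max}}$ and $(\gamma^{D},\gamma^{N})$):

```python
import numpy as np

# Discrete second Green's identity on a half-line grid (a finite-difference
# sketch; the second-difference operator and the one-sided traces below are
# stand-ins for the maximal operator and (gamma^D, gamma^N); grid step h = 1).
rng = np.random.default_rng(1)
N = 60
phi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
phi[-6:] = 0.0
psi[-6:] = 0.0  # compact support away from the cut at n = 0

def minus_laplacian(u):  # -u'' on the interior nodes 1, ..., N-2
    return -(u[2:] - 2.0 * u[1:-1] + u[:-2])

lhs = (np.vdot(phi[1:-1], minus_laplacian(psi))
       - np.vdot(minus_laplacian(phi), psi[1:-1]))
gamma_D = lambda u: u[0]         # Dirichlet trace at the cut
gamma_N = lambda u: u[1] - u[0]  # one-sided Neumann trace
rhs = (np.conj(gamma_D(phi)) * gamma_N(psi)
       - np.conj(gamma_N(phi)) * gamma_D(psi))
print(np.isclose(lhs, rhs))  # True: summation by parts is exact
```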
###### Remark 36.
The symplectic Hilbert space $({\mathcal{H}}_{b},\omega)$ does not satisfy
Assumption A in (2). Introducing the map $A:H^{1/2}_{\rm per}(\Gamma)\to
H^{3/2}_{\rm per}(\Gamma)$ so that
$\forall f\in H^{3/2}_{\rm per}(\Gamma),\ \forall g\in H^{1/2}_{\rm
per}(\Gamma),\quad\langle f,g\rangle_{L^{2}(\Gamma)}=\langle
f,Ag\rangle_{H^{3/2}_{\rm per}}=\langle A^{*}f,g\rangle_{H^{1/2}_{\rm per}},$
we have $J=\begin{pmatrix}0&A^{*}\\\ -A&0\end{pmatrix}$, but the operators $A$
and $A^{*}$ are compact (hence $J$ as well, and
$J^{2}\neq-{\mathbb{I}}_{{\mathcal{H}}_{b}}$). In particular, we cannot
consider the unitaries $U$ nor ${\mathcal{U}}$. Such a situation was considered
in [BBZ13]. We present below in Appendix A a new approach which allows the use
of unitaries as in the previous section. The "more natural" symplectic space
$H^{1/2}_{\rm per}(\Gamma)\times H^{-1/2}(\Gamma)$ appears at this point.
Now that our functional spaces and operators have been identified, all our
previous results for Hill's operators (Section 2) hold in the Schrödinger
case. Indeed, their proofs never used finite dimensional arguments. More
specifically, we have the following.
###### Theorem 37.
* •
For ${\mathcal{D}}^{\sharp}\subset{\mathcal{D}}^{\sharp}_{{\rm per},{\rm
max}}$, the operator $(H^{\sharp},{\mathcal{D}}^{\sharp})$ is self-adjoint iff
${\mathcal{D}}^{\sharp}:={\rm Tr}^{-1}\left(\ell\right),\quad\text{for a
Lagrangian plane}\ \ell\in\Lambda\left({\mathcal{H}}_{b}\right).$
* •
If $E\in{\mathbb{R}}\setminus\sigma(H_{\rm per})$, then the sets
$\ell^{\pm}(E):={\rm Tr}\left[{\rm Ker}\left(H^{\sharp,\pm}_{{\rm per},{\rm
max}}-E\right)\right]$
are Lagrangian planes of $\Lambda({\mathcal{H}}_{b})$, and
$\ell^{+}(E)\oplus\ell^{-}(E)={\mathcal{H}}_{b}$.
* •
For all $E\in{\mathbb{R}}$ (not necessarily in the resolvent set), we have
$\dim{\rm Ker}\left(H_{\rm
per}-E\right)=\dim\left(\ell^{+}(E)\cap\ell^{-}(E)\right).$
* •
If $E\in{\mathbb{R}}\setminus\sigma(H_{\rm per})$, then,
$\dim{\rm
Ker}\left(H^{\sharp}-E\right)=\dim\left(\ell^{+}(E)\cap\ell^{\sharp}\right).$
Again, for $\ell^{\sharp}\in\Lambda({\mathcal{H}}_{b})$, we denote by
$(H^{\sharp},\ell^{\sharp})$ the self-adjoint extension of $H^{\sharp}$
corresponding to this Lagrangian plane.
In the finite dimensional Hill's case, for any pair of extensions
$(h^{\sharp,+},\ell^{\sharp})$ and $(h^{\sharp,-},\ell^{\sharp})$ with the
same Lagrangian plane $\ell^{\sharp}$, we have
$\sigma_{\rm ess}(h)=\sigma_{\rm ess}(h^{\sharp,+})\cup\sigma_{\rm
ess}(h^{\sharp,-}).$
In some sense, boundary conditions are finite dimensional (hence compact)
perturbations. In the Schrödinger case, we always have
$\sigma_{\rm ess}(H)\subset\sigma_{\rm ess}(H^{\sharp,+})\cup\sigma_{\rm
ess}(H^{\sharp,-}),$
which comes from the fact that Weyl sequences for $H$ can only escape to
$\pm\infty$. However, the inclusion may be strict. For instance, for
$E\in{\mathbb{R}}\setminus\sigma(H)$, the extension
$(H^{\sharp,+},\ell^{+}(E))$ has the energy $E$ as an eigenvalue of infinite
multiplicity. So $E\in\sigma_{\rm ess}(H^{\sharp,+})$. The corresponding Weyl
sequences localise near the cut.
This makes bulk-boundary correspondence more subtle in the Schrödinger case:
different self-adjoint extensions may give different results. For the usual
extensions however, we prove that the results are independent of the choice
(see the proof of Theorem 40 below).
#### 4.2.3. Families of Schrödinger operators
We now consider a family of Schrödinger operators of the form
$H_{t}:=-\Delta_{\rm per}+V_{t},\quad\text{acting on}\quad{\mathcal{H}}.$
Again, we assume that $t\mapsto V_{t}$ is continuously differentiable from
${\mathbb{T}}^{1}$ to $L^{\infty}(\Omega,{\mathbb{R}})$. We also consider a
family of (self-adjoint extensions of) edge operators
$\left(H_{t}^{\sharp},\ell_{t}^{\sharp}\right)$.
Let $E\in{\mathbb{R}}\setminus\sigma\left(H_{t}\right)$. We say that $E$ is a
regular energy if, for all $t\in{\mathbb{T}}^{1}$, the energy $E$ is not in
the essential spectrum of $H^{\sharp}_{t}$. In particular, this implies
$\dim{\rm
Ker}\left(H^{\sharp}_{t}-E\right)=\dim\left(\ell_{t}^{+}(E)\cap\ell^{\sharp}_{t}\right)<\infty$.
In addition, we require all corresponding crossings to be regular.
Noticing that the definition of the Maslov index in Section 3.1 does not
require Assumption A, we can apply the first part of the proof of Theorem 28
to the Schrödinger case, and we obtain the following.
###### Theorem 38.
Let $(a,b)\subset{\mathbb{R}}$ be such that, for all $t\in{\mathbb{T}}^{1}$,
$(a,b)\cap\sigma\left(H_{t}\right)=\emptyset\quad\text{and}\quad(a,b)\cap\sigma_{\rm
ess}\left(H_{t}^{\sharp}\right)=\emptyset.$
Then,
* •
almost every $E\in(a,b)$ is a regular energy;
* •
for such a regular energy, we have
$\displaystyle{\rm Sf}\left(H_{t}^{\sharp},E,{\mathbb{T}}^{1}\right)$
$\displaystyle={\rm
Mas}\left(\ell^{+}_{t}(E),\ell^{\sharp}_{t},{\mathbb{T}}^{1}\right).$
Unfortunately, since the symplectic space $({\mathcal{H}}_{b},\omega)$ does
not satisfy Assumption A, we cannot yet express the last index as a spectral
flow of unitaries. We prove in Appendix A the following.
###### Lemma 39.
The Lagrangian planes of ${\mathcal{H}}_{b}$ are in one-to-one correspondence
with the unitaries of $L^{2}(\Gamma)$. In addition, if
$(\ell_{1}(t),\ell_{2}(t))$ is regular with corresponding unitaries
${\mathcal{U}}_{1}$ and ${\mathcal{U}}_{2}$, then
${\rm Mas}\left(\ell_{1},\ell_{2},{\mathbb{T}}^{1}\right)={\rm
Sf}\left({\mathcal{U}}_{2}^{*}{\mathcal{U}}_{1},1,{\mathbb{T}}^{1}\right).$
However, since ${\mathcal{U}}_{1}$ and ${\mathcal{U}}_{2}$ are now unitaries
in the infinite dimensional space $L^{2}(\Gamma)$, we can no longer use the
fact that the spectral flow is the winding number of the determinant. It is
unclear whether we can split this index as a contribution of $\ell_{1}$ minus a
contribution of $\ell_{2}$. This will however be the case for junctions, as we
prove in the next section.
### 4.3. Junctions for Schrödinger operators
Let us consider two families of potentials $V_{L,t}$ and $V_{R,t}$,
continuously differentiable from ${\mathbb{T}}^{1}$ to $L^{\infty}(\Omega)$.
For $\chi:\Omega\to[0,1]$ a bounded switch function with
$\chi(x,{\mathbf{y}})=1$ for $x<-X$ and $\chi(x,{\mathbf{y}})=0$ for $x>X$, we
set
$H_{t}^{\chi}:=-\Delta_{\rm
per}+V^{\chi}_{t}({\mathbf{x}}),\quad\text{with}\quad
V^{\chi}_{t}:=V_{L,t}\chi+V_{R,t}(1-\chi).$
As in Section 3.3.2, $H_{t}^{\chi}$ models a junction between a left and right
potential.
We denote by $H_{L,t}$ and $H_{R,t}$ the corresponding Hamiltonians. Let
$E\in{\mathbb{R}}$ be in the resolvent set of both $H_{L,t}$ and $H_{R,t}$ for
all $t\in{\mathbb{T}}^{1}$, so that the Lagrangian planes
$\ell^{\pm}_{L,t}(E)$ and $\ell^{\pm}_{R,t}(E)$ are all well-defined. We also
set $\ell_{D}^{\sharp}:=\\{0\\}\times H^{1/2}_{\rm
per}(\Gamma)\in\Lambda({\mathcal{H}}_{b})$, the Lagrangian plane corresponding to
Dirichlet boundary conditions, and we set
${\mathcal{I}}\left(\ell_{L/R,t}^{\pm}(E),{\mathbb{T}}^{1}\right):={\rm
Mas}\left(\ell_{L/R,t}^{\pm}(E),\ell^{\sharp}_{D},{\mathbb{T}}^{1}\right)={\rm
Sf}\left(H^{\sharp,\pm}_{L/R,t},E,{\mathbb{T}}^{1}\right).$
This defines an integer-valued index for the Lagrangian planes of the form
$\ell_{t}^{\pm}(E)$ (which is our bulk/edge index).
###### Theorem 40 (Junctions in the Schrödinger case).
Let $(a,b)\subset{\mathbb{R}}$ be such that, for all $t\in{\mathbb{T}}^{1}$,
$(a,b)\cap\left(\sigma(H_{t,R})\cup\sigma(H_{t,L})\right)=\emptyset.$
Then, for all $E\in(a,b)$, we have
${\rm Sf}\left(H_{t}^{\chi},E,{\mathbb{T}}^{1}\right)={\rm
Mas}\left(\ell_{R}^{+}(E),\ell_{L}^{-}(E),{\mathbb{T}}^{1}\right)={\mathcal{I}}\left(\ell_{R}^{+}(E),{\mathbb{T}}^{1}\right)-{\mathcal{I}}\left(\ell_{L}^{-}(E),{\mathbb{T}}^{1}\right).$
This number is independent of $\chi$ and of $E$ in the gap.
Reasoning as in Remark 31, we see that the superscripts $\pm$ may be dropped
in the last line.
###### Proof.
Let us prove first the result for $\chi_{0}(x)={\mathds{1}}(x<0)$. We set for
simplicity $H_{t}:=-\Delta_{\rm per}+V_{t}^{\chi_{0}}$ acting on
$L^{2}(\Omega)$, and $H_{D,t}$ the corresponding operator acting on
$L^{2}(\Omega)\approx L^{2}(\Omega^{-}\cup\Omega^{+})$, and with Dirichlet
boundary conditions at $\\{0\\}\times\Gamma$. Since $V_{t}$ is uniformly
bounded, the operators $H_{t}$ and $H_{D,t}$ are uniformly bounded from below.
Consider $\Sigma\in{\mathbb{R}}$ such that
$\Sigma<\inf_{t\in{\mathbb{T}}^{1}}\inf\sigma(H_{t})\quad\text{and}\quad\Sigma<\inf_{t\in{\mathbb{T}}^{1}}\inf\sigma(H_{D,t}).$
We set $R_{t}:=(H_{t}-\Sigma)^{-1}$ and $R_{D,t}:=(H_{D,t}-\Sigma)^{-1}$,
which are both bounded operators. It is a standard result (see for instance
[RS80, Theorem XI.79]) that, for some $m\in{\mathbb{N}}$,
$R_{t}^{m}-R_{D,t}^{m}$ is a compact (even trace-class) operator. In
particular, for all $t\in{\mathbb{T}}^{1}$, we have
$\sigma_{\rm ess}(H_{t})=\sigma_{\rm ess}(H_{D,t}).$
Let $(a,b)$ denote an essential gap of these operators, and let $E\in(a,b)$ be
a regular energy for both operators. Using spectral theory, we have
${\rm Sf}\left(H_{t},E,{\mathbb{T}}^{1}\right)=-{\rm
Sf}\left(R_{t}^{m},(\Sigma-E)^{-m},{\mathbb{T}}^{1}\right)$
and similarly for $H_{D,t}$. Introducing
$R^{m}_{t}(s):=R^{m}_{t}+s(R_{D,t}^{m}-R_{t}^{m}),$
we see that $s\mapsto R^{m}_{t}(s)$ is a continuous family of operators
connecting $R_{t}^{m}$ and $R_{D,t}^{m}$. Since for all $s\in[0,1]$, $R^{m}_{t}(s)$ is
a compact perturbation of $R_{t}^{m}$, the essential gap does not close as $s$
varies. We deduce that these two spectral flows are equal (see for instance
[Wat16, Theorem 4.2.3]). So
${\rm Sf}\left(R_{t}^{m},(\Sigma-E)^{-m},{\mathbb{T}}^{1}\right)={\rm
Sf}\left(R_{D,t}^{m},(\Sigma-E)^{-m},{\mathbb{T}}^{1}\right),$
which gives
${\rm Sf}\left(H_{t},E,{\mathbb{T}}^{1}\right)={\rm
Sf}\left(H_{D,t},E,{\mathbb{T}}^{1}\right).$
Now, since $H_{D,t}=H^{\sharp,+}_{t}\oplus H^{\sharp,-}_{t}$, we see that $E$
is an eigenvalue of $H_{D,t}$ iff it is an eigenvalue of either
$H^{\sharp,+}_{t}$ or $H^{\sharp,-}_{t}$. Actually, we have
${\rm Sf}\left(H_{D,t},E,{\mathbb{T}}^{1}\right)={\rm
Sf}\left(H_{t}^{\sharp,+},E,{\mathbb{T}}^{1}\right)+{\rm
Sf}\left(H_{t}^{\sharp,-},E,{\mathbb{T}}^{1}\right).$
so, recalling that $\chi_{0}(x)={\mathds{1}}(x<0)$, so that
$\ell^{\pm}_{L/R,\chi_{0},t}=\ell^{\pm}_{L/R,t}$, we get as wanted
${\rm
Sf}\left(H_{t}^{\chi_{0}},E,{\mathbb{T}}^{1}\right)={\mathcal{I}}\left(\ell^{+}_{R,t}(E),{\mathbb{T}}^{1}\right)-{\mathcal{I}}\left(\ell^{-}_{L,t}(E),{\mathbb{T}}^{1}\right).$
The minus sign comes from the Green’s identity on the $\Omega^{-}$ region.
For a general switch function $\chi$, the function $\chi-\chi_{0}$ is
compactly supported. In particular,
$H_{t}^{\chi}-H_{t}^{\chi_{0}}=(V_{L,t}-V_{R,t})(\chi-\chi_{0}),$
is a compact perturbation of $H_{t}^{\chi_{0}}$ for all
$t\in{\mathbb{T}}^{1}$. Again, by robustness of the spectral flow with respect
to compact perturbations, we obtain that ${\rm
Sf}(H_{t}^{\chi},E,{\mathbb{T}}^{1})={\rm
Sf}(H_{t}^{\chi_{0}},E,{\mathbb{T}}^{1})$. ∎
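The additivity of the spectral flow under direct sums, used above for $H_{D,t}=H^{\sharp,+}_{t}\oplus H^{\sharp,-}_{t}$, can be illustrated on a finite-dimensional toy path (hypothetical eigenvalue branches; for a path with free endpoints, the flow at level $E$ is the net signed number of crossings, i.e. the drop in the count of eigenvalues below $E$):

```python
import numpy as np

# Toy illustration of Sf(A (+) B) = Sf(A) + Sf(B) for paths of Hermitian
# operators (hypothetical eigenvalue branches; the flow at level E is taken
# as the drop in the number of eigenvalues below E along the path).
def sf(path_eigs, E):
    below = (path_eigs < E).sum(axis=1)
    return int(below[0] - below[-1])   # net upward crossings of E

t = np.linspace(0.0, 1.0, 101)[:, None]
eigs_A = np.hstack([t - 0.5, t + 2.0])       # one branch crosses E = 0 upward
eigs_B = np.hstack([-t - 1.0, 0.3 + 0 * t])  # no branch crosses E = 0
eigs_AB = np.sort(np.hstack([eigs_A, eigs_B]), axis=1)  # the direct sum
E = 0.0
print(sf(eigs_A, E), sf(eigs_B, E), sf(eigs_AB, E))  # 1 0 1
```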
As we see from our proof, the main ingredient is the fact that the difference
of resolvents $(\Sigma-H)^{-1}-(\Sigma-H_{D})^{-1}$ is compact. Since this
holds also for Neumann boundary conditions instead of Dirichlet, the result is
also valid with Neumann boundary conditions. In particular, we must have
${\rm Mas}(\ell^{+}(E),\ell^{\sharp}_{N},{\mathbb{T}}^{1})={\rm
Mas}(\ell^{+}(E),\ell^{\sharp}_{D},{\mathbb{T}}^{1})+k,$
for some fixed constant $k\in{\mathbb{Z}}$, and for all families of potentials
$(V_{t})$. Taking $V_{t}$ independent of $t$ shows that $k=0$. So we have:
###### Corollary 41.
Let $H_{t}$ be a family of Schrödinger operators, and let $H^{\sharp}_{D,t}$
and $H^{\sharp}_{N,t}$ be the corresponding Dirichlet and Neumann extensions
on the semi-tube $\Omega^{+}$. Then, for all $E$ in a common spectral gap of the family $(H_{t})$, we
have
${\rm Sf}\left(H_{D,t}^{\sharp},E,{\mathbb{T}}^{1}\right)={\rm
Sf}\left(H_{N,t}^{\sharp},E,{\mathbb{T}}^{1}\right).$
As noticed in the discussion following 37, such an equality cannot hold for
all self-adjoint extensions of $H^{\sharp}_{t}$, since the essential spectrum
may not be conserved.
## Appendix A Embedding of symplectic spaces, and unitaries.
As noticed in Remark 36, the symplectic space $({\mathcal{H}}_{b},\omega)$ does
not satisfy Assumption A. In order to use the results of Section 2.1.3, we
embed this space in a larger one. In this appendix, we denote by
${\mathcal{H}}_{b}^{\prime}:=H^{3/2}_{\rm per}(\Gamma)\times H^{1/2}_{\rm
per}(\Gamma)$ the previous space, and reserve the notation ${\mathcal{H}}_{b}$
for the larger space.
### A.1. The boundary space ${\mathcal{H}}_{b}$.
There are several possibilities: one can choose
${\mathcal{H}}_{b}:=H^{1/2}_{\rm per}(\Gamma)\times
H^{-1/2}(\Gamma),\quad\text{or}\quad{\mathcal{H}}_{b}=L^{2}(\Gamma)\times
L^{2}(\Gamma).$
While the first choice seems more natural in the context of boundary
conditions (in particular, Robin boundary conditions are best described in
this space), we emphasise that the two choices are equivalent. For ease of
notation, we work with the second choice ${\mathcal{H}}_{b}=L^{2}(\Gamma)\times
L^{2}(\Gamma)$. We explain at the end of the section what happens with the
first choice, and why the two choices are indeed equivalent.
We therefore set ${\mathcal{H}}_{b}:=L^{2}(\Gamma)\times L^{2}(\Gamma)$,
together with the symplectic form $\omega$ defined by
$\forall(f,f^{\prime}),(g,g^{\prime})\in{\mathcal{H}}_{b},\quad\omega((f,f^{\prime}),(g,g^{\prime}))=\langle
f,g^{\prime}\rangle_{L^{2}(\Gamma)}-\langle
f^{\prime},g\rangle_{L^{2}(\Gamma)}.$
We have ${\mathcal{H}}_{b}^{\prime}\hookrightarrow{\mathcal{H}}_{b}$ with
continuous (compact) embedding. In addition, ${\mathcal{H}}_{b}^{\prime}$ is
dense in ${\mathcal{H}}_{b}$, and
$\forall
x,y\in{\mathcal{H}}_{b}^{\prime}\subset{\mathcal{H}}_{b},\quad\omega^{\prime}(x,y)=\omega(x,y).$
###### Theorem 42.
Let $({\mathcal{H}}^{\prime},\omega^{\prime})$ and $({\mathcal{H}},\omega)$ be
two symplectic Hilbert spaces, with
${\mathcal{H}}^{\prime}\subset{\mathcal{H}}$ with continuous embedding,
${\mathcal{H}}^{\prime}$ dense in ${\mathcal{H}}$ and $\omega^{\prime}=\omega$
on ${\mathcal{H}}^{\prime}\times{\mathcal{H}}^{\prime}$. Then there is a one-
to-one correspondence between the Lagrangian planes of
${\mathcal{H}}^{\prime}$ and the ones of ${\mathcal{H}}$. More specifically,
$\displaystyle\ell^{\prime}\in\Lambda({\mathcal{H}}^{\prime})\quad\text{iff}\quad\ell^{\prime}=\ell\cap{\mathcal{H}}^{\prime}\
\text{for some}\ \ell\in\Lambda({\mathcal{H}}),$
$\displaystyle\ell\in\Lambda({\mathcal{H}}),\quad\text{iff}\quad\ell=\overline{\ell^{\prime}}^{{\mathcal{H}}}\
\text{for some}\ \ell^{\prime}\in\Lambda({\mathcal{H}}^{\prime}).$
In addition, if $(\ell_{1}^{\prime}(t),\ell_{2}^{\prime}(t))$ is a periodic
path of Lagrangian planes in ${\mathcal{H}}^{\prime}$, with corresponding path
$(\ell_{1}(t),\ell_{2}(t))$ in ${\mathcal{H}}$, we have
${\rm Mas}^{\prime}\left(\ell_{1}^{\prime},\ell_{2}^{\prime},{\mathbb{T}}^{1}\right)={\rm Mas}\left(\ell_{1},\ell_{2},{\mathbb{T}}^{1}\right).$
###### Proof.
Let $\ell^{\prime}$ be any linear subspace of ${\mathcal{H}}^{\prime}$,
and let $\ell:=\overline{\ell^{\prime}}^{{\mathcal{H}}}$, so that
$\ell^{\prime}=\ell\cap{\mathcal{H}}^{\prime}$. First, for all
$x^{\prime}\in(\ell^{\prime})^{\circ}$, we have
$\forall y^{\prime}\in\ell^{\prime},\
\omega(x^{\prime},y^{\prime})=\omega^{\prime}(x^{\prime},y^{\prime})=0,\quad\text{hence,
by density},\quad\forall y\in\ell,\ \omega(x^{\prime},y)=0.$
We deduce that $(\ell^{\prime})^{\circ}\subset\ell^{\circ}$, hence
$\overline{(\ell^{\prime})^{\circ}}^{{\mathcal{H}}}\subset\ell^{\circ}$ and
$(\ell^{\prime})^{\circ}\subset\ell^{\circ}\cap{\mathcal{H}}^{\prime}$. On the
other hand, if $x\in\ell^{\circ}\cap{\mathcal{H}}^{\prime}$, then we have
$\forall y^{\prime}\in\ell^{\prime}\subset\ell,\quad
0=\omega(x,y^{\prime})=\omega^{\prime}(x,y^{\prime}),$
so $x\in(\ell^{\prime})^{\circ}$. We have therefore proved
$\ell=\overline{\ell^{\prime}}^{\mathcal{H}}\Longleftrightarrow\ell^{\circ}=\overline{(\ell^{\prime})^{\circ}}^{\mathcal{H}}.$
In particular, $\ell$ is a Lagrangian plane of ${\mathcal{H}}$ iff
$\ell^{\prime}$ is a Lagrangian plane of ${\mathcal{H}}^{\prime}$.
If $\ell_{1}^{\prime}$ and $\ell_{2}^{\prime}$ are Lagrangian planes of
${\mathcal{H}}^{\prime}$, with corresponding planes $\ell_{1}$ and $\ell_{2}$
in ${\mathcal{H}}$, then by density of ${\mathcal{H}}^{\prime}$ in
${\mathcal{H}}$, we have
$\dim\left(\ell_{1}^{\prime}\cap\ell_{2}^{\prime}\right)=\dim\left(\ell_{1}\cap\ell_{2}\right).$
In the case where this dimension is finite, we even have
$\ell_{1}^{\prime}\cap\ell_{2}^{\prime}=\ell_{1}\cap\ell_{2}$, since finite
dimensional subspaces are always closed. As the Maslov index only involves
these finite dimensional spaces, the equality of the Maslov index follows. ∎
The advantage of ${\mathcal{H}}_{b}$ over ${\mathcal{H}}_{b}^{\prime}$ is
that, in the space ${\mathcal{H}}_{b}$, we have
$J=\begin{pmatrix}0&{\mathbb{I}}_{L^{2}(\Gamma)}\\\
-{\mathbb{I}}_{L^{2}(\Gamma)}&0\end{pmatrix},$
which satisfies Assumptions A and B. We obtain for instance the following.
###### Theorem 43.
The self-adjoint extensions of $H^{\sharp}$ are in one-to-one correspondence
with the Lagrangian planes of $({\mathcal{H}}_{b},\omega)$ which are in one-
to-one correspondence with the unitaries ${\mathcal{U}}$ of $L^{2}(\Gamma)$.
The correspondence is given by
${\mathcal{D}}^{\sharp}:=\left\\{\psi\in{\mathcal{D}}_{{\rm per},{\rm
max}}^{\sharp},\quad{\rm Tr}(\psi)=\begin{pmatrix}1\\\
{\mathrm{i}}\end{pmatrix}f+\begin{pmatrix}1\\\
-{\mathrm{i}}\end{pmatrix}{\mathcal{U}}f,\quad\text{for some $f\in
L^{2}(\Gamma)$}\right\\}$
We emphasise that the trace map ${\rm Tr}$ is not onto on ${\mathcal{H}}_{b}$,
so some $f\in L^{2}(\Gamma)$ may not be attained.
### A.2. The other choice of boundary space.
Let us explain what happens in the case where we choose
$\widetilde{{\mathcal{H}}_{b}}:=H^{1/2}_{\rm per}(\Gamma)\times
H^{-1/2}(\Gamma)$, with symplectic form
$\forall(f,f^{\prime}),(g,g^{\prime})\in\widetilde{{\mathcal{H}}_{b}},\quad\widetilde{\omega}\left((f,f^{\prime}),(g,g^{\prime})\right):=\langle
f,g^{\prime}\rangle_{H^{1/2}_{\rm per},H^{-1/2}}-\langle
f^{\prime},g\rangle_{H^{-1/2},H^{1/2}_{\rm per}}.$
One can again apply Theorem 42, and deduce that there is a one-to-one
correspondence between $\Lambda({\mathcal{H}}_{b}^{\prime})$ and
$\Lambda(\widetilde{{\mathcal{H}}_{b}})$. Let us prove that this last space
also satisfies Assumption B.
Let $V:H^{1/2}_{\rm per}\to H^{-1/2}$ be the map such that for all
$f\in H^{1/2}_{\rm per}(\Gamma)$ and all $g\in H^{-1/2}(\Gamma)$, we have
$\langle f,g\rangle_{H^{1/2}_{\rm per},H^{-1/2}}=\langle
f,V^{*}g\rangle_{H^{1/2}_{\rm per}}=\langle Vf,g\rangle_{H^{-1/2}}.$ (11)
We can check that $V$ is well-defined and unitary. This time, we have
$J=\begin{pmatrix}0&V^{*}\\\ -V&0\end{pmatrix}$, so
$(\widetilde{{\mathcal{H}}_{b}},\widetilde{\omega})$ also satisfies Assumption
B. The previous theorem in this framework states that the self-adjoint
extensions of $H^{\sharp}$ are in one-to-one correspondence with the unitaries
$\widetilde{{\mathcal{U}}}$ of $H^{1/2}_{\rm per}(\Gamma)$, with
${\mathcal{D}}^{\sharp}:=\left\\{\psi\in{\mathcal{D}}_{{\rm per},{\rm
max}}^{\sharp},\quad{\rm Tr}(\psi)=\begin{pmatrix}1\\\
{\mathrm{i}}V\end{pmatrix}f+\begin{pmatrix}1\\\
-{\mathrm{i}}V\end{pmatrix}\widetilde{{\mathcal{U}}}f,\quad\text{for some
$f\in H^{1/2}_{\rm per}(\Gamma)$}\right\\}.$
In order to see the connection between the two approaches using
$({\mathcal{H}}_{b},\omega)$ and
$(\widetilde{{\mathcal{H}}_{b}},\widetilde{\omega})$, we introduce the maps
$R_{1}:H^{1/2}_{\rm per}(\Gamma)\to L^{2}(\Gamma)$, and
$R_{2}:L^{2}(\Gamma)\to H^{-1/2}(\Gamma)$, defined by
$\text{for}\quad
f({\mathbf{y}})=\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}f_{\mathbf{k}}e_{\mathbf{k}}({\mathbf{y}}),\quad\left(R_{1,2}f\right)({\mathbf{y}}):=\sum_{{\mathbf{k}}\in{\mathbb{Z}}^{d-1}}f_{\mathbf{k}}(1+2\pi|{\mathbf{k}}|)^{1/2}e_{\mathbf{k}}({\mathbf{y}}).$
Then $R_{1}$ is a unitary from $H^{1/2}_{\rm per}(\Gamma)$ to
$L^{2}(\Gamma)$, and $R_{2}$ is a unitary from $L^{2}(\Gamma)$ to
$H^{-1/2}(\Gamma)$. Note that the map $V$ defined in (11) is $V=R_{2}R_{1}$.
So the map $R:{\mathcal{H}}_{b}\to\widetilde{{\mathcal{H}}_{b}}$ defined by
$\forall(f,f^{\prime})\in{\mathcal{H}}_{b},\quad
R(f,f^{\prime}):=\left(R_{1}^{*}f,R_{2}f^{\prime}\right)\quad\in\widetilde{{\mathcal{H}}_{b}}$
is a unitary. In addition, we have
$\forall(f,f^{\prime}),(g,g^{\prime})\in{\mathcal{H}}_{b},\quad\omega((f,f^{\prime}),(g,g^{\prime}))=\widetilde{\omega}\left(R(f,f^{\prime}),R(g,g^{\prime})\right).$
We deduce that $\ell$ is a Lagrangian plane of ${\mathcal{H}}_{b}$ iff
$\widetilde{\ell}:=R\ell$ is a Lagrangian plane of
$\widetilde{{\mathcal{H}}_{b}}$. In addition, ${\mathcal{U}}$ is a unitary of
${\mathcal{H}}_{1}=L^{2}(\Gamma)$ iff
$\widetilde{{\mathcal{U}}}:=R_{1}^{*}{\mathcal{U}}R_{1}$ is a unitary of
$\widetilde{{\mathcal{H}}_{1}}=H^{1/2}_{\rm per}(\Gamma)$. This proves that
the two approaches are equivalent.
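As a numerical sanity check of the Fourier-multiplier construction above (an illustrative sketch, not part of the original text; it works with truncated Fourier coefficients and takes $d=2$, so that ${\mathbf{k}}\in{\mathbb{Z}}$), one can verify that $R_{1}$ and $R_{2}$ are isometries between the weighted spaces and that $\langle Vf,g\rangle_{H^{-1/2}}$ reproduces the duality pairing of (11):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 50                                   # Fourier truncation: modes k = -K..K
k = np.arange(-K, K + 1)
w = 1.0 + 2.0 * np.pi * np.abs(k)        # Sobolev weight (1 + 2*pi*|k|)

# Random Fourier coefficients of f and g
f = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
g = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)

def norm_Hs(c, s):
    """Sobolev H^s norm of the coefficient vector c."""
    return np.sqrt(np.sum(w**(2 * s) * np.abs(c)**2))

R1 = lambda c: np.sqrt(w) * c            # R1 : H^{1/2} -> L^2
R2 = lambda c: np.sqrt(w) * c            # R2 : L^2 -> H^{-1/2} (same multiplier)
V = lambda c: w * c                      # V = R2 R1 : H^{1/2} -> H^{-1/2}

# Isometry checks
assert np.isclose(np.linalg.norm(R1(f)), norm_Hs(f, 0.5))
assert np.isclose(norm_Hs(R2(f), -0.5), np.linalg.norm(f))
assert np.isclose(norm_Hs(V(f), -0.5), norm_Hs(f, 0.5))

# Duality pairing <f,g> = sum conj(f_k) g_k equals <Vf, g>_{H^{-1/2}}
pairing = np.sum(np.conj(f) * g)
inner_Hm = np.sum(w**(-1) * np.conj(V(f)) * g)
assert np.isclose(pairing, inner_Hm)
print("all identities verified")
```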
###### Example 44 (Robin boundary conditions).
Let $\Theta:H^{1/2}_{\rm per}\to H^{-1/2}$ be an invertible operator,
symmetric in the sense
$\forall f\in H^{1/2}_{\rm per}(\Gamma),\ \forall g\in
H^{-1/2}(\Gamma),\quad\langle
f,\Theta^{*}g\rangle_{H^{1/2}_{\rm per}(\Gamma)}=\langle\Theta f,g\rangle_{H^{-1/2}(\Gamma)}.$
Then we can check that
$\ell_{\Theta}:=\left\\{\begin{pmatrix}f\\\ \Theta f\end{pmatrix},\quad f\in
H^{1/2}_{\rm per}\right\\}$
is a Lagrangian plane. The corresponding self-adjoint extension has the
$\Theta$-Robin boundary conditions. To identify the corresponding unitary, we write
that
$\begin{pmatrix}f\\\ \Theta f\end{pmatrix}=\begin{pmatrix}1\\\
{\mathrm{i}}V\end{pmatrix}\frac{1}{2}\left(f-{\mathrm{i}}V^{*}\Theta
f\right)+\begin{pmatrix}1\\\
-{\mathrm{i}}V\end{pmatrix}\frac{1}{2}\left(f+{\mathrm{i}}V^{*}\Theta
f\right).$
We recognise the unitary $\widetilde{{\mathcal{U}}}$ of $H^{1/2}_{\rm
per}(\Gamma)$ defined by
$\widetilde{{\mathcal{U}}}:=\left({\mathbb{I}}-{\mathrm{i}}V^{*}\Theta\right)^{-1}({\mathbb{I}}+{\mathrm{i}}V^{*}\Theta)\quad\text{(the Cayley transform of $V^{*}\Theta$)}.$
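For completeness, the decomposition above can be checked directly (a short verification, not in the original text, in the notation of the example): setting

```latex
\begin{aligned}
g&:=\tfrac12\left(f-{\mathrm{i}}V^{*}\Theta f\right),&
\widetilde{{\mathcal{U}}}g&=\tfrac12\left(f+{\mathrm{i}}V^{*}\Theta f\right),\\
g+\widetilde{{\mathcal{U}}}g&=f,&
{\mathrm{i}}Vg-{\mathrm{i}}V\widetilde{{\mathcal{U}}}g
&={\mathrm{i}}V\left(-{\mathrm{i}}V^{*}\Theta f\right)=VV^{*}\Theta f=\Theta f,
\end{aligned}
```

using $VV^{*}={\mathbb{I}}$ ($V$ is unitary). Note also that $({\mathbb{I}}-{\mathrm{i}}V^{*}\Theta)^{-1}$ and $({\mathbb{I}}+{\mathrm{i}}V^{*}\Theta)$ commute, both being functions of $V^{*}\Theta$, so the two possible orderings of the Cayley transform coincide.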
January 2021
Keywords: Vortex electron, orbital angular momentum, Smith-Purcell radiation,
non-paraxial effects, grating, quadrupole moment
# Smith-Purcell radiation of a vortex electron
A. Pupasov-Maksimov (a) and D. Karlovets (b)
(a) Universidade Federal de Juiz de Fora, Brasil
(b) Tomsk State University, Russia
tretiykon<EMAIL_ADDRESS>
###### Abstract
A wide variety of emission processes by electron wave packets with an orbital
angular momentum $\ell\hbar$, called the vortex electrons, can be influenced
by a nonparaxial contribution due to their intrinsic electric quadrupole
moment. We study Smith-Purcell radiation from a conducting grating generated
by a vortex electron, described as a generalized Laguerre-Gaussian packet,
which has an intrinsic magnetic dipole moment and an electric quadrupole
moment. By using a multipole expansion of the electromagnetic field of such an
electron, we employ a generalized surface-current method, applicable for a
wide range of parameters. The radiated energy contains contributions from the
charge, from the magnetic moment, and from the electric quadrupole moment, as
well as from their interference. The quadrupole contribution grows as the
packet spreads while propagating, and it is enhanced for large $\ell$. In
contrast to the linear growth of the radiation intensity from the charge with
a number of strips $N$, the quadrupole contribution reveals an $N^{3}$
dependence, which puts a limit on the maximal grating length for which the
radiation losses stay small. We study spectral-angular distributions of the
Smith-Purcell radiation both analytically and numerically and demonstrate that
the electron’s vorticity can give rise to detectable effects for non-
relativistic and moderately relativistic electrons. On a practical side,
preparing the incoming electron’s state in the form of a non-Gaussian packet
with a quadrupole moment – such as the vortex electron, an Airy beam, a
Schrödinger cat state, and so on – one can achieve quantum enhancement of the
radiation power compared to the classical linear regime. Such an enhancement
would be a hallmark of a previously unexplored quantum regime of radiation, in
which non-Gaussianity of the packet influences the radiation properties much
stronger than the quantum recoil.
## 1 Introduction
It was argued that different radiation processes with vortex electrons, which
carry an orbital angular momentum (OAM) $\ell\hbar$ with respect to a
propagation axis, can be investigated using the beams of electron microscopes
[1, 2]. For instance, Vavilov-Cherenkov radiation and transition radiation are
affected by the vortex structure of the electron wave packet [3, 2], and an
azimuthal asymmetry of the transition radiation, if detected, would manifest
the magnetic-moment contribution to the radiation. Another radiation process, which we study in
the present paper, is the Smith-Purcell (S-P) radiation [4] of the vortex
electrons. Specifically, we investigate how the OAM and the spatial structure
of the vortex wave packet influence the radiation characteristics, such as the
spectral-angular distributions of the radiated energy.
The Smith-Purcell radiation mechanism represents a relatively simple way to
generate quasi-monochromatic radiation from charged (electron) bunches passing
near a conducting diffraction grating, and it has proved useful in
developing compact free electron lasers [5, 6, 7, 8] and high-resolution
sensors for particle-beam diagnostics [9, 10, 11]. Besides the fundamental
interest in the properties of radiation generated by non-Gaussian packets (in
particular, by vortex electrons), there are possible applications in
electron microscopy [12] and in acceleration of the vortex electrons via
inverse S-P effect [13]. We consider the simplest possible geometry in which
the electron wave packet moves above an ideally conducting diffraction grating
(see Fig. 1), which is made of $N$ rectangular strips of a width $a$ and with
a period $d$. The radiation spectrum of a classical charged particle moving
with the velocity $\langle u\rangle$ consists of diffraction lines according
to the following dispersion relation:
$\lambda_{g}=\frac{d}{g}\left(\frac{1}{\beta}-\cos\Theta\right),\qquad
g=1,2,3,\ldots,$ (1)
where $g$ is the diffraction order, $\beta$ is the ratio $\beta=\langle
u\rangle/c$. The width of the diffraction line is $\Gamma\approx 1/N$. Only
$g=1$ is considered below; the radiation wavelength is denoted simply by
$\lambda$, and the frequency is $\omega=2\pi/\lambda$.
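The dispersion relation (1) is easy to evaluate numerically. The following sketch (the grating period $d=1\,\mu$m and $\beta=0.5$ are illustrative values, not taken from the paper) shows how the emitted wavelength varies with the observation angle:

```python
import math

def sp_wavelength(d, beta, theta_deg, g=1):
    """Smith-Purcell dispersion, Eq. (1): wavelength of order g
    emitted at polar angle Theta by a charge moving with velocity beta*c."""
    return (d / g) * (1.0 / beta - math.cos(math.radians(theta_deg)))

d = 1.0e-6        # grating period, 1 micrometer (illustrative value)
beta = 0.5        # v/c, roughly an 80 keV electron
N = 100           # number of strips

lam = sp_wavelength(d, beta, 90.0)           # emission perpendicular to the grating
print(f"lambda(90 deg) = {lam * 1e6:.2f} um")  # equals d/beta = 2 um here
print(f"relative linewidth Gamma ~ 1/N = {1.0 / N:.2f}")

# The line shifts to shorter wavelengths in the forward direction:
assert sp_wavelength(d, beta, 30.0) < lam < sp_wavelength(d, beta, 150.0)
```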
As we demonstrate hereafter, such a radiation mechanism is more sensitive to
the shape of the electron packet than Vavilov-Cherenkov radiation or
transition radiation, studied in [3, 2]. A quantum packet always spreads while
propagating; however, this does not affect the radiation properties if the
radiation formation length is shorter than the packet’s Rayleigh length, i.e.
the distance where the packet doubles its size. A non-relativistic or
moderately relativistic electron packet can become significantly wider while
moving above a grating with a micrometer or millimeter period, so the Smith-
Purcell radiation represents a good tool for studying these effects as the
grating can be longer than the Rayleigh length. As we have recently argued,
the spreading influences the radiation properties only for wave packets with
intrinsic multipole moments [14] (this is the case for vortex electrons,
whereas spherically symmetric Gaussian packets do not possess intrinsic
multipole moments).
Figure 1: Generation of Smith-Purcell radiation by a Laguerre-Gaussian (LG)
packet characterized with a charge $e$, the magnetic moment ${\bm{\mu}}$, and
with the electric quadrupole moment $Q_{\alpha,\beta}$. The latter two quantities
are non-vanishing due to the intrinsic angular momentum of the vortex electron.
The number of the grating strips $N$ cannot be larger than $N_{max}$ due to
the spreading.
First, in section 2, after recalling basic properties of Laguerre-Gaussian wave
packets, we perform a qualitative analysis to emphasize the physics of possible
differences from the standard S-P radiation of an ordinary electron. We also
establish the values of the parameters – the size of the wave packet, the
orbital angular momentum, the velocity, etc. – that are compatible with our
calculation scheme based on a multipole expansion.
In section 3 we calculate the spectral-angular distribution of the radiation
applying a method of the generalized surface currents [15], which represents a
generalization of the known models put forward by Brownell et al. [16] and by
Potylitsyn et al. [17]. Explicit relativistic expressions for the
electromagnetic fields produced by a Laguerre-Gaussian wave packet [18] are
presented in section 3.2. Calculations of the radiation fields in the wave
zone in section 3.3 involve standard planar Fourier integrals with respect to
$x-z$ coordinates and time. Integration along $z$ direction is tricky because
of the wave packet spreading and of the increasing quadrupole moment. To
guarantee validity of the multipole expansion when calculating the fields, it
is necessary to limit the maximal grating length.
With analytical expressions at hand, we analyze in section 4 corrections to
the Smith-Purcell radiation of the point charge. In section 4.1 we analyze the
shape and the position of the spectral line. Our analytical results suggest
that the spreading of the quantum wave packet does not lead to a broadening of
the spectral line (in contrast to the case of a classical spreading beam).
Numerical studies of the spectral lines reveal not only an absence of the
broadening, but even a slight narrowing of the lines due to the charge-
quadrupole interference.
The angular distribution is considered in section 4.2. The contribution from
the magnetic moment results in the azimuthal asymmetry similar to diffraction
radiation [2]. In section 4.3 we demonstrate that the quadrupole contribution
is dynamically enhanced along the grating. Such a coherent effect can be seen
in the nonlinear growth of the radiation intensity with the grating length. At
the same time, the maximum of the radiation intensity with respect to the
polar angle is shifted towards smaller angles.
For the currently achieved OAM values of $\ell\sim 1000$ [19] (see also [20,
21, 22]), contributions from both the magnetic moment and the electric
quadrupole moment can be, in principle, detected as discussed in the
Conclusion.
Throughout the paper we use the units with $\hbar=c=|e|=1$.
## 2 Vortex electrons and multipole moments
### 2.1 Laguerre-Gaussian packets and non-paraxial regime
Analogously to optics, there are two main models of the vortex particles – the
Bessel beams and the paraxial Laguerre-Gaussian (LG) packets [1]. There is
also a non-paraxial generalization of the latter, called the generalized LG
packet [18],
$\displaystyle\displaystyle\psi_{\ell,n}({\bm{r}},t)=\sqrt{\frac{n!}{(n+|\ell|)!}}\frac{i^{2n+\ell}}{\pi^{3/4}}\frac{\rho^{|\ell|}\ell^{\frac{2|\ell|+3}{4}}}{(\bar{\rho}(t))^{|\ell|+3/2}}\
L_{n}^{|\ell|}\left(\frac{\ell\rho^{2}}{(\bar{\rho}(t))^{2}}\right)\exp\Big{\\{}-it\frac{\langle
p\rangle^{2}}{2m}+i\langle p\rangle z+i\ell\phi_{r}-$ (2)
$\displaystyle\displaystyle-i\left(2n+|\ell|+\frac{3}{2}\right)\arctan\left(\frac{t}{t_{d}}\right)-\frac{\ell(t_{d}-it)}{2t_{d}(\bar{\rho}(t))^{2}}\,\left(\rho^{2}+(z-\langle
u\rangle t)^{2}\right)\Big{\\}},$ (3) $\displaystyle\displaystyle\int
d^{3}r\,|\psi_{\ell,n}({\bm{r}},t)|^{2}=1,\qquad\rho=\sqrt{x^{2}+y^{2}}\,,$
(4)
which represents an exact non-stationary solution to the Schrödinger equation
and whose centroid propagates along the $z$ axis. The paraxial LG packets and
the Bessel beams represent two limiting cases of this model [23]. Below we
consider this packet with $n=0$ only. Note that the factor $3/2$ in the Gouy
phase in (4) comes about because the packet is localized in a 3-dimensional
space (cf. Eq.(61) in [23]).
In what follows, we employ the mean radius of the wave packet $\bar{\rho}_{0}$
instead of the beam waist $\sigma_{\perp}$,
$\sigma_{\perp}=\bar{\rho}_{0}/\sqrt{|\ell|},\quad\bar{\rho}(t)=\bar{\rho}_{0}\sqrt{1+t^{2}/t_{d}^{2}}.$
As explained in [24], depending on the experimental conditions, one could
either fix the beam waist, so that the mean radius $\bar{\rho}_{0}$ scales as
$\sqrt{|\ell|}$, or fix the radius itself. In this paper, we follow the latter
approach and treat $\bar{\rho}_{0}$ as an OAM-independent value. An approach
with the fixed beam waist $\sigma_{\perp}$ can easily be recovered when
substituting $\bar{\rho}_{0}\to\sigma_{\perp}\,\sqrt{|\ell|}$.
Although non-paraxial effects are nearly always too weak to play any
noticeable role, the difference between this non-paraxial LG packet and the
paraxial one becomes crucial for a moderately relativistic particle with
$\beta\lesssim 0.9$. In this regime, which is the most important one for the
current study, it is only Eq.(4) that yields correct predictions for
observables and is compatible with the general CPT-symmetry [18]. As we argue
below, the non-paraxial effects are additionally enhanced when the packet’s
spreading becomes noticeable and the OAM is large.
The packet spreads and its transverse area is doubled during a diffraction
time $t_{d}$,
$\displaystyle
t_{d}=\frac{m\bar{\rho}_{0}^{2}}{|\ell|}=\frac{t_{c}}{|\ell|}\,\left(\frac{\bar{\rho}_{0}}{\lambda_{c}}\right)^{2}\gg
t_{c},$ (5)
which is large compared to the Compton time scale $t_{c}=\lambda_{c}/c\approx
1.3\times 10^{-21}\,\text{sec.}$, $\lambda_{c}\approx 3.9\times
10^{-11}\,\text{cm}$. When the LG packet moves near the grating, the finite
spreading time (5) puts an upper limit on the possible impact parameter $h$,
on the initial mean radius of the packet, and on the grating length. Indeed,
both the corresponding solution of the Schrödinger equation and the multipole
expansion make sense only as long as $\bar{\rho}(t)<h$ or, equivalently,
$\frac{t}{t_{d}}<\sqrt{\frac{h^{2}}{\bar{\rho}_{0}^{2}}-1}.$ (6)
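To get a feel for these scales, here is a small numerical sketch (the parameters are illustrative; only $\bar{\rho}_{0}\sim 1$ nm matches the estimate used later in the text). It evaluates the diffraction time of Eq. (5) and the time at which the spreading packet reaches the grating:

```python
import math

t_c = 1.3e-21          # Compton time, s
lambda_c = 3.9e-11     # Compton wavelength, cm
rho0 = 1.0e-7          # initial mean radius, cm (1 nm)
h = 10.0e-7            # impact parameter, cm (10 nm, illustrative)

for ell in (1, 10, 100):
    t_d = (t_c / ell) * (rho0 / lambda_c) ** 2        # diffraction time, Eq. (5)
    t_max = t_d * math.sqrt((h / rho0) ** 2 - 1.0)    # packet touches the grating
    print(f"l = {ell:3d}: t_d = {t_d:.2e} s, t_max = {t_max:.2e} s")

# t_d greatly exceeds the Compton time and shrinks as 1/|l|
assert (t_c / 100) * (rho0 / lambda_c) ** 2 > 1e3 * t_c
```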
Let $t_{max}=t_{d}\sqrt{\frac{h^{2}}{\bar{\rho}_{0}^{2}}-1}$ be the time
during which the packet spreads to the extent that it touches the
grating (this results in so-called grating transition radiation, which we
do not study in this paper, although the problem in which part of the
electron packet touches the grating while another part does not is certainly
interesting to explore); then the corresponding number of strips $N_{max}$ is
$\displaystyle\displaystyle N_{max}=\beta
t_{max}/d=\frac{\beta}{|\ell|}\,\frac{\bar{\rho}_{0}}{\lambda_{c}}\,\frac{h}{d}\,\sqrt{1-\frac{\bar{\rho}_{0}^{2}}{h^{2}}}\,.$
(7)
The geometry implies that $\bar{\rho}_{0}<h=\bar{\rho}(t_{max})$ or
$\bar{\rho}_{0}\ll h$ for a long grating. In practice, only small
diffraction orders of the radiation can be considered, so that
$d\sim\beta\lambda$ for the emission angles $\Theta\sim 90^{\circ}$. So an
upper limit for the number of strips in this case is
$\displaystyle\displaystyle
N_{max}\lesssim\frac{1}{|\ell|}\,\frac{\bar{\rho}_{0}}{\lambda_{c}}\,\frac{h}{\lambda}.$
(8)
If this condition is violated, the multipole expansion is no longer
applicable. A rough estimate of the maximal number of strips for $h\approx
h_{eff}\sim 0.1\lambda,\beta\approx 0.5,\bar{\rho}_{0}\sim 1$ nm yields
$\displaystyle\displaystyle N_{max}\lesssim\frac{10^{3}}{|\ell|}.$ (9)
So if $N_{max}\gg 1$ is required, then $|\ell|\ll 10^{3}$.
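The estimate (7)-(9) can be sketched numerically (the parameters $h\sim 0.1\lambda$, $\beta\approx 0.5$, $\bar{\rho}_{0}\sim 1$ nm are those quoted in the text; the radiation wavelength $\lambda=10\,\mu$m and $d=\beta\lambda$ are extra illustrative assumptions):

```python
import math

def n_max(beta, ell, rho0, lambda_c, h, d):
    """Maximal strip number of Eq. (7) before the spreading packet
    touches the grating; all lengths in the same units."""
    return (beta / abs(ell)) * (rho0 / lambda_c) * (h / d) \
        * math.sqrt(1.0 - (rho0 / h) ** 2)

lambda_c = 3.9e-4      # Compton wavelength, nm
rho0, beta = 1.0, 0.5  # mean radius (nm) and v/c from the text
lam = 1.0e4            # radiation wavelength, nm (10 um, illustrative)
h, d = 0.1 * lam, beta * lam

for ell in (1, 10, 100):
    print(f"l = {ell:3d}: N_max ~ {n_max(beta, ell, rho0, lambda_c, h, d):.0f}")

# Order-of-magnitude agreement with Eq. (9), N_max <~ 10^3/|l|,
# and the characteristic 1/|l| scaling:
assert 1e2 < n_max(beta, 1, rho0, lambda_c, h, d) < 1e4
assert n_max(beta, 10, rho0, lambda_c, h, d) < n_max(beta, 1, rho0, lambda_c, h, d)
```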
The Laguerre-Gaussian electron packet carries, in addition to the charge,
higher multipole moments [25, 26]. In particular, at distances larger than
the mean radius of the vortex packet,
$\displaystyle\displaystyle r\gtrsim\bar{\rho}(t),$ (10)
it is sufficient to keep a magnetic dipole moment and an electric quadrupole
moment [23, 18],
$\displaystyle\displaystyle{\bm{\mu}}={\hat{\bm{z}}}\,\frac{\ell}{2m}\left(1-\frac{1}{2}\,\ell^{2}\,\frac{\lambda_{c}^{2}}{\bar{\rho}_{0}^{2}}\right),\qquad\,Q_{\alpha\beta}(t)=\left(\bar{\rho}(t)\right)^{2}\,\text{diag}\\{1/2,1/2,-1\\}.$
(11)
The magnetic moment includes a non-paraxial correction according to Eq.(45) in
[23], which is written for the case $|\ell|\gg 1$. The field of the electron’s
quadrupole moment originates from a non-point source, as the quadrupole has a
finite width, which is just equal to the rms radius of the packet.
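As an illustration of the size of the non-paraxial correction $\tfrac12\ell^{2}\lambda_{c}^{2}/\bar{\rho}_{0}^{2}$ in Eq. (11) (a sketch, not from the paper; $\bar{\rho}_{0}=1$ nm is the value used in the estimates above):

```python
lambda_c = 3.9e-4    # Compton wavelength, nm
rho0 = 1.0           # mean radius, nm

for ell in (10, 100, 1000):
    # Relative non-paraxial correction to mu_z in Eq. (11)
    correction = 0.5 * ell**2 * (lambda_c / rho0) ** 2
    print(f"l = {ell:4d}: relative correction to mu_z = {correction:.1e}")

# The correction is tiny for l ~ 10 and only approaches O(1)
# for l of a few thousand (at fixed rho0 = 1 nm)
assert 0.5 * 10**2 * (lambda_c / rho0) ** 2 < 1e-2
assert 0.5 * 1000**2 * (lambda_c / rho0) ** 2 < 1.0
```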
Note that although an LG packet with $\ell=0$ has a vanishing quadrupole
moment, an OAM-less packet with a non-vanishing quadrupole moment can be
easily constructed by making this packet highly asymmetric in shape. Thus our
conclusions below can also be applied to an arbitrary non-symmetric wave
packet with a non-vanishing quadrupole moment.
When the spreading is essential – at $t\gtrsim t_{d}$ – the inequality (10)
can be violated and the multipole expansion cannot be used at all. Thus, the
conventional (paraxial) regime of emission takes place only when the spreading
is moderate, $t\lesssim t_{d}$. Remarkably, the non-paraxial regime of
emission favors moderately large values of the OAM, in contrast to the
enhancement of the magnetic moment contribution for which the OAM should be as
large as possible. This is because the quadrupole moment has a finite radius
and so the radiation is generated as if the charge were continuously
distributed along the whole coherence length and not confined to a point within
this length [14].
### 2.2 Qualitative analysis and multipole expansion
The single-electron regime with a freely propagating packet is realized for low
electron currents – below the so-called start current, which is typically
about 1 mA [27]. The charge, the magnetic dipole and the electric quadrupole
moments (11) induce surface currents on the grating. These currents, in their
turn, generate electric and magnetic fields ${\bm{E}}_{e}$, ${\bm{E}}_{\mu}$,
${\bm{E}}_{Q}$, etc. The total radiation intensity $dW$ includes the multipole
radiation intensities as well as their mutual interference, which serve as small
corrections to the classical radiation from the point charge $dW_{ee}$:
$\frac{dW}{d\omega d\Omega}\equiv
dW=dW_{ee}+dW_{e\mu}+(dW_{eQ}+dW_{\mu\mu})+(dW_{\mu
Q}+dW_{eO})+(dW_{QQ}+dW_{e16p}+dW_{\mu O})+\ldots$ (12)
In this paper, we adhere to such a perturbative regime and formally order
perturbative corrections following the order of the multipole expansion. The
leading order (LO) correction $dW_{e\mu}$ is given by the charge-magnetic-
moment radiation. The next-to-leading (NLO) order corrections include the
charge-electric-quadrupole radiation $dW_{eQ}$ and the radiation of the
magnetic moment $dW_{\mu\mu}$. The next-to-next-to-leading (NNLO) order
corrections already include the interference term between the magnetic
momentum and electric-quadrupole radiation $dW_{\mu Q}$ and the higher
multipole term with the charge-octupole radiation $dW_{eO}$. The quadrupole-
quadrupole radiation $dW_{QQ}$ appears with interference terms from higher
multipoles (octupole $dW_{\mu O}$ and 16-pole $dW_{e16p}$).
Moreover, in the overwhelming majority of practical cases it is sufficient to
calculate the charge contribution and the following interference terms
$dW=dW_{ee}+dW_{e\mu}+dW_{eQ}$ (13)
only, while $dW_{\mu\mu}$, $dW_{\mu Q}$, and the higher-order corrections can
be safely neglected. We emphasize that it does not mean that there are simple
inequalities like $dW_{ee}\gg dW_{e\mu}\gg dW_{eQ}\gg dW_{\mu Q}...$ for all
the angles and frequencies. For instance, in the plane perpendicular to the
grating, $\Phi=\pi/2$, the term $dW_{e\mu}$ vanishes while $dW_{eQ}$ does not
(see below). This makes the region of angles $\Phi\approx\pi/2$ preferable for
detection of the non-paraxial quadrupole effects.
An approach in which the particle trajectory is given holds valid when the
quantum recoil $\eta_{q}$ (the ratio between the energy of the emitted photon and
the electron’s kinetic energy $\varepsilon$) is small compared to the
interference corrections $dW_{e\mu}$ and $dW_{eQ}$,
$\displaystyle\displaystyle\eta_{q}:=\frac{\omega}{\varepsilon}\ll\frac{dW_{e\mu}}{dW_{ee}},\quad\frac{\omega}{\varepsilon}\ll\frac{dW_{eQ}}{dW_{ee}},$
(14)
and the energy losses stay small compared to the electron’s energy. We
emphasize that the multipole contributions in Eq.(12) also have a quantum
origin as they are due to non-Gaussianity of the wave packet or, in other
words, due to its non-constant phase. So the series (12) is not quasi-
classical. On a more fundamental level, there are two types of quantum
corrections to the classical radiation of charge [28, 29, 30, 31, 32]:
* •
Those due to recoil,
* •
Those due to finite coherence length of the emitting particle.
While quasi-classical methods like the operator method [32] and the eikonal
method [31] neglect the latter effects from the very beginning and take into
account the recoil only, here we demonstrate that there is an opposite non-
paraxial regime of emission.
Let us study the dimensionless parameters that define the multipole corrections to the
classical emission of a charge. The magnetic moment contribution (see Eqs.
(80),(82)) is proportional to the following ratio:
$\displaystyle\displaystyle\frac{dW_{e\mu}}{dW_{ee}}\sim\eta_{\mu}:=\ell\frac{\lambda_{c}}{\lambda},$
(15)
which is of the order of $\ell\cdot 10^{-7}$ for $\lambda\sim 1\,\mu$m, and
$\ell\cdot 10^{-10}$ for $\lambda\sim 1$ mm. It is well known that a spin-
induced magnetic moment contribution – the so-called "spin light" [33] – and
the recoil effects are of the same order of magnitude; thus we neglect both of
them in our approach. However, for the vortex electron the magnetic moment
contribution (15) is $\ell$ times enhanced, which justifies the calculation
of $dW_{e\mu}$ via the multipole expansion [2]. Indeed, for electrons with
$\beta\approx 0.4-0.8$ and the kinetic energy of $\varepsilon_{c}\sim 50-300$
keV, the small parameter governing the quantum recoil is
$\displaystyle\displaystyle\eta_{q}=\frac{\omega}{\varepsilon}\sim\frac{1}{m\lambda}\equiv\frac{\lambda_{c}}{\lambda},$
(16)
and $\eta_{q}\ll\eta_{\mu}$ yields
$|\ell|\gg 1,$
while the condition $\eta_{\mu}^{2}\ll\eta_{q}$ puts an upper limit on the OAM
value,
$\displaystyle\displaystyle|\ell|\lesssim\ell_{\text{max}}:=\sqrt{\frac{\lambda}{\lambda_{c}}}\sim\eta_{q}^{-1/2}\sim
10^{3}-10^{5},\ \lambda\sim 1\,\mu\text{m}-1\,\text{mm},$ (17)
and so the contribution $dW_{\mu\mu}$ stays small.
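Eq. (17) is easy to check numerically. A minimal sketch (assuming $\lambda_{c}$ is the reduced Compton wavelength $\hbar/mc\approx 0.386$ pm; the text's convention may differ by an $\mathcal{O}(1)$ factor):

```python
import math

# Assumption: lambda_c = reduced Compton wavelength hbar/(m*c) of the electron, in meters
LAMBDA_C = 3.8616e-13

def l_max(wavelength):
    """Upper bound on the OAM from Eq. (17): ell_max = sqrt(lambda / lambda_c)."""
    return math.sqrt(wavelength / LAMBDA_C)

# The quoted range 10^3 - 10^5 for lambda = 1 um - 1 mm:
print(f"ell_max(1 um) ~ {l_max(1e-6):.0f}")
print(f"ell_max(1 mm) ~ {l_max(1e-3):.0f}")
```

Both values fall inside the $10^{3}-10^{5}$ window quoted in Eq. (17).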
As shown in section 3.1, one can distinguish three different corrections from
the charge-quadrupole interference: $dW_{eQ_{0}}$, $dW_{eQ_{1}}$, and
$dW_{eQ_{2}}$. Their relative contributions are:
$\displaystyle\displaystyle\frac{dW_{eQ_{0}}}{dW_{ee}}\sim\eta_{Q_{0}}:=\frac{\bar{\rho}_{0}^{2}}{h_{\text{eff}}^{2}}\quad\text{(quasi-classical quadrupole contribution)},$ (18)
$\displaystyle\displaystyle\frac{dW_{eQ_{1}}}{dW_{ee}}\sim\eta_{Q_{1}}:=\ell^{2}\frac{\lambda_{c}^{2}}{\bar{\rho}_{0}^{2}}\quad\text{(ordinary non-paraxial contribution \cite{karlovets2018relativistic})},$ (19)
$\displaystyle\displaystyle\frac{dW_{eQ_{2}}}{dW_{ee}}\sim\eta_{Q_{2}}:=N^{2}\,\ell^{2}\frac{\lambda_{c}^{2}}{\bar{\rho}_{0}^{2}}\quad\text{(dynamically enhanced non-paraxial contribution \cite{karlovets2019dynamical})},$ (20)
where an effective impact parameter of Smith-Purcell radiation naturally
appears
$h_{\text{eff}}=\frac{\beta\gamma\lambda}{2\pi}=\frac{\beta\gamma}{\omega}\sim
0.1\,\lambda\ \text{for}\ \beta\approx 0.4-0.8.$ (21)
The non-paraxial correction to the magnetic moment in (11) yields a correction
to $\eta_{\mu}$ of the order of $\eta_{\mu}\,\eta_{Q_{1}}$, which can be
safely neglected for our purposes.
As seen from (19), the non-paraxial regime does not necessarily imply a tight
focusing, $\bar{\rho}_{0}\gtrsim\lambda_{c}$, but it can also be realized when
the OAM is large, $\ell\gg 1$, whereas the focusing stays moderate,
$\bar{\rho}_{0}\gg\lambda_{c}$. As we fix $\bar{\rho}_{0}$ and not the beam
waist, the parameter (19) scales as $\ell^{2}$, which for the electrons with
$\bar{\rho}_{0}\sim 10$ nm and $\ell\sim 10^{3}$ yields
$\ell^{2}\left(\frac{\lambda_{c}}{\bar{\rho}_{0}}\right)^{2}\sim 10^{-3},$
while it is $10^{-5}$ for $\bar{\rho}_{0}\sim 100$ nm.
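The estimate above can be reproduced in a few lines (a sketch; $\lambda_{c}$ is assumed to be the reduced Compton wavelength, $\approx 0.386$ pm, so the numbers agree only up to $\mathcal{O}(1)$ factors):

```python
import math

LAMBDA_C = 3.8616e-13  # m; assumption: reduced Compton wavelength of the electron

def eta_Q1(ell, rho0):
    """Ordinary non-paraxial parameter of Eq. (19): ell^2 * (lambda_c / rho0)^2."""
    return (ell * LAMBDA_C / rho0) ** 2

# ell ~ 10^3 and rho0 ~ 10 nm vs. 100 nm:
print(f"{eta_Q1(1e3, 10e-9):.1e}")   # ~10^-3, as in the text
print(f"{eta_Q1(1e3, 100e-9):.1e}")  # ~10^-5
```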
Importantly, these non-paraxial (quadrupole) effects are dynamically enhanced
when the packet spreading is significant over the radiation formation length,
which for Smith-Purcell radiation is defined by the whole length of the
grating. Spreading of the packet with time and distance $\langle
z\rangle=t\beta$ leads to growth of the quadrupole moment, and the corresponding
small dimensionless parameter $\eta_{Q_{2}}$ is
$t^{2}/t_{d}^{2}\to N^{2}$
times larger than (19):
$\displaystyle\displaystyle\eta_{Q_{2}}:=N^{2}\,\eta_{Q_{1}}=N^{2}\,\ell^{2}\left(\frac{\lambda_{c}}{\bar{\rho}_{0}}\right)^{2}.$
(22)
Thus the large number of strips $N\gg 1$ can lead to the non-paraxial regime
of emission with
$\displaystyle\displaystyle\eta_{Q_{1}}\ll 1,\ \eta_{Q_{2}}\lesssim 1,$ (23)
when the quadrupole contribution becomes noticeable. Somewhat contrary to
intuition, these non-paraxial effects get stronger when the packet itself gets
wider, see (94).
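The dynamical enhancement of Eq. (22) and the regime (23) can be illustrated directly (a sketch with the parameters later used in Table 1; $\lambda_{c}$ is taken as the reduced Compton wavelength, so the numbers agree with the text only up to $\mathcal{O}(1)$ factors):

```python
import math

LAMBDA_C = 3.8616e-13  # m; assumed reduced Compton wavelength

def eta_Q1(ell, rho0):
    return (ell * LAMBDA_C / rho0) ** 2          # Eq. (19)

def eta_Q2(N, ell, rho0):
    return N ** 2 * eta_Q1(ell, rho0)            # Eq. (22): N^2 * eta_Q1

# rho0 = 300 nm, ell = 1000, N = 50
q1 = eta_Q1(1000, 300e-9)
q2 = eta_Q2(50, 1000, 300e-9)
print(f"eta_Q1 = {q1:.2e}, eta_Q2 = {q2:.2e}")
# Regime of Eq. (23): eta_Q1 << 1 while eta_Q2 is enhanced by the factor N^2.
```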
In the OAM-less case, $n=0$, $\ell=0$, the packet (4) turns into the ordinary,
spherically symmetric Gaussian packet, which has a vanishing quadrupole and
higher moments. Therefore, its spreading does not lead to such a non-linear
enhancement and the Smith-Purcell radiation from this packet in the wave zone
coincides with that from a point charge (see also [14]). Note that if we fix
the beam waist instead and, therefore, $\bar{\rho}_{0}\propto\sqrt{|\ell|}$,
then
$\eta_{Q_{0}}=\mathcal{O}(\ell),\,\eta_{Q_{1}}=\mathcal{O}(\ell),\,\eta_{Q_{2}}=\mathcal{O}(\ell
N^{2}).$ (24)
The dimensionless parameters of the NNLO corrections $dW_{\mu\mu}$,
$dW_{\mu Q_{j}}$, $dW_{Q_{j}Q_{j}}$, $dW_{eO}$ are simply products of the
LO and NLO parameters,
$\displaystyle\displaystyle\eta_{\mu\mu}=\eta_{\mu}^{2}\,,\quad\eta_{\mu
Q_{j}}=\eta_{\mu}\eta_{Q_{j}}\,,\qquad\eta_{Q_{i}Q_{j}}=\eta_{Q_{i}}\eta_{Q_{j}},\quad
i=0,1,2,\ j=0,1,2.$ (25)
The following inequalities hold:
$\displaystyle\displaystyle\eta_{q}\ll\eta_{\mu}\,,\quad\eta_{q}\ll\eta_{Q_{j}}\,,\quad\eta_{\mu\mu}\lesssim\eta_{q}\,,\quad\eta_{\mu
Q_{j}}\lesssim\eta_{q}\,,\quad\eta_{Q_{i}Q_{j}}\lesssim\eta_{q},$ (26)
$\displaystyle\displaystyle i=0,1,2,\ j=0,1,2.$ (27)
For the same beam energies, we have the following estimate for the first
quadrupole parameter:
$\displaystyle\displaystyle h_{\text{eff}}\sim
0.1\,\lambda,\quad\eta_{Q_{0}}\sim
10^{2}\,\frac{\bar{\rho}_{0}^{2}}{\lambda^{2}}.$ (28)
According to (26) the inequalities
$\eta_{Q_{0}}^{2}\lesssim\eta_{q}\ll\eta_{Q_{0}}$ restrict the initial rms-radius of the packet as follows:
$\displaystyle\displaystyle\ell_{\text{max}}^{-1}\ll\frac{\bar{\rho}_{0}}{h_{\text{eff}}}\lesssim\ell_{\text{max}}^{-1/2},$
(29)
which yields
$\displaystyle\displaystyle\lambda\sim 1\,\mu\text{m}:\
0.1\,\text{nm}\ll\bar{\rho}_{0}\lesssim 3\,\text{nm},$ (30)
$\displaystyle\displaystyle\lambda\sim 1\,\text{mm}:\
1\,\text{nm}\ll\bar{\rho}_{0}\lesssim 300\,\text{nm}.$ (31)
The packet radius should be smaller than the wavelength of the emitted
radiation, which is just a condition of the multipole expansion in the wave
zone.
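The window (29)-(31) follows from $h_{\text{eff}}$ and $\ell_{\text{max}}$ alone, so it can be checked in a few lines (an order-of-magnitude sketch; $\lambda_{c}$ is assumed to be the reduced Compton wavelength, $\approx 0.386$ pm):

```python
import math

LAMBDA_C = 3.8616e-13  # m; assumed reduced Compton wavelength

def rho0_window(beta, wavelength):
    """Bounds of Eq. (29): h_eff/ell_max << rho0 <~ h_eff/sqrt(ell_max)."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    h_eff = beta * gamma * wavelength / (2 * math.pi)   # Eq. (21)
    ell_max = math.sqrt(wavelength / LAMBDA_C)          # Eq. (17)
    return h_eff / ell_max, h_eff / math.sqrt(ell_max)

lo, hi = rho0_window(0.5, 1e-6)
print(f"lambda = 1 um: {lo * 1e9:.2f} nm << rho0 <~ {hi * 1e9:.1f} nm")
lo, hi = rho0_window(0.5, 1e-3)
print(f"lambda = 1 mm: {lo * 1e9:.1f} nm << rho0 <~ {hi * 1e9:.0f} nm")
```

The output reproduces the windows (30) and (31) up to $\mathcal{O}(1)$ factors.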
The inequality $\eta_{Q_{1}}^{2}\lesssim\eta_{q}\ll\eta_{Q_{1}}$ defines
either a lower bound on $\ell$ or an upper bound for the rms-radius:
$\displaystyle|\ell|\,\lambda_{c}\,\ell_{\text{max}}^{1/2}\lesssim\bar{\rho}_{0}\ll|\ell|\,\lambda_{c}\,\ell_{\text{max}},$
(32)
which yields
$\displaystyle\displaystyle 0.1-1\,\text{nm}\lesssim\bar{\rho}_{0}\ll
3\,\text{nm}-30\,\text{nm},\ \ell\sim 10-100<\ell_{\text{max}}\sim 10^{3},\
\lambda\sim 1\,\mu\text{m},$ (33) $\displaystyle\displaystyle
1-100\,\text{nm}\lesssim\bar{\rho}_{0}\ll 0.3-300\,\mu\text{m},\ \ell\sim
10-10^{4}<\ell_{\text{max}}\sim 10^{5},\ \lambda\sim 1\,\text{mm}.$ (34)
This is compatible with (31) provided that the OAM is at least $\ell\sim 100$.
Finally, the restrictions for the number of strips $N$ can be derived from the
inequality $\eta_{Q_{2}}^{2}\lesssim\eta_{q}\ll\eta_{Q_{2}}$:
$\displaystyle\displaystyle\frac{\bar{\rho}_{0}}{|\ell|\ell_{\text{max}}\lambda_{c}}\ll
N\lesssim
N_{\text{max}}:=\frac{\bar{\rho}_{0}}{|\ell|\ell_{\text{max}}^{1/2}\lambda_{c}},$
(35)
where the ratio $\bar{\rho}_{0}/(|\ell|\ell_{\text{max}}\lambda_{c})$ itself
must be less than unity according to (32). So, the smallest value of $N$ can
well be $1$.
Let us estimate the largest possible number of strips for which our conditions
hold. For an optical or infrared photon, $\lambda\sim 1\,\mu$m, $\ell\sim
10^{2}$, and $\bar{\rho}_{0}\sim 1\,\text{nm}-3\,\text{nm}$ (according to
above findings), we get
$N_{\text{max}}\sim 3.$
So the grating must be very short. For a THz photon with $\lambda\sim 1$ mm,
$\ell\sim 10^{2}$, and $\bar{\rho}_{0}\sim 100\,\text{nm}-300\,\text{nm}$ we
have
$N_{\text{max}}\sim 30,$
or again $N_{\text{max}}\sim 3$ for $\ell\sim 10^{3}$. These
inequalities specify the rough estimate (9). For Smith-Purcell radiation, the
large number of strips provides a narrow emission line, so the optimal OAM
value is therefore
$\displaystyle\displaystyle\ell\sim 10^{2}-10^{3},$ (36)
and the optimal grating period, which defines the radiation wavelength as
$d\sim\lambda$, is
$\displaystyle\displaystyle d\sim 10-1000\,\mu\text{m}.$ (37)
For the largest wavelength, the maximal grating length for which the higher-
multipole corrections can be neglected and the radiation losses stay small is
of the order of 3 cm.
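Eq. (35) and the estimates above are easy to reproduce (a sketch; $\lambda_{c}$ is assumed to be the reduced Compton wavelength, so agreement with the text is up to $\mathcal{O}(1)$ factors):

```python
import math

LAMBDA_C = 3.8616e-13  # m; assumed reduced Compton wavelength

def n_max(rho0, ell, wavelength):
    """Eq. (35): N_max = rho0 / (|ell| * sqrt(ell_max) * lambda_c)."""
    ell_max = math.sqrt(wavelength / LAMBDA_C)
    return rho0 / (abs(ell) * math.sqrt(ell_max) * LAMBDA_C)

print(f"IR  (rho0 = 3 nm,   ell = 100): N_max ~ {n_max(3e-9, 100, 1e-6):.1f}")
print(f"THz (rho0 = 300 nm, ell = 100): N_max ~ {n_max(300e-9, 100, 1e-3):.1f}")
# Maximal grating length for the THz case, with d ~ lambda = 1 mm:
print(f"N_max * d ~ {n_max(300e-9, 100, 1e-3) * 1e-3 * 100:.1f} cm")
```

The results match the quoted $N_{\text{max}}\sim 3$ and $\sim 30$ and the $\sim 3$ cm maximal grating length to within order unity.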
Importantly, the maximal grating length $N_{\text{max}}d$ is much larger than the
Rayleigh length $z_{R}$ of the packet,
$\displaystyle\displaystyle z_{R}=\beta
t_{d}=\beta\frac{\lambda_{c}}{|\ell|}\,\left(\frac{\bar{\rho}_{0}}{\lambda_{c}}\right)^{2},$
(38)
which is of the order of $0.1\,\mu\text{m}$ for $\lambda\sim 1\,\mu\text{m}$
and the same parameters as above, or $z_{R}\sim 1$ mm for $\lambda\sim 1$ mm
and $\ell\sim 10^{2}$.
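Eq. (38) gives the Rayleigh length in closed form, so the quoted numbers can be verified directly (a sketch; $\lambda_{c}$ is assumed to be the reduced Compton wavelength):

```python
import math

LAMBDA_C = 3.8616e-13  # m; assumed reduced Compton wavelength

def rayleigh_length(beta, ell, rho0):
    """Eq. (38): z_R = beta*(lambda_c/|ell|)*(rho0/lambda_c)^2 = beta*rho0^2/(|ell|*lambda_c)."""
    return beta * rho0 ** 2 / (abs(ell) * LAMBDA_C)

print(f"rho0 = 2 nm,   ell = 100: z_R ~ {rayleigh_length(0.5, 100, 2e-9) * 1e6:.2f} um")
print(f"rho0 = 300 nm, ell = 100: z_R ~ {rayleigh_length(0.5, 100, 300e-9) * 1e3:.2f} mm")
```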
Summarizing, one can choose two baseline parameter sets:
* •
(IR): $\lambda\sim 1\,\mu\text{m}$, $\bar{\rho}_{0}=0.5-3\,\text{nm}$,
$\ell\sim 100$, $N\lesssim 10$,
* •
(THz): $\lambda\sim 1\,\text{mm}$, $\bar{\rho}_{0}=10-300\,\text{nm}$,
$\ell\sim 10^{2}-10^{3}$, $N\lesssim 100$.
As has been already noted, in practice the corresponding inequalities and the
subsequent requirements can often be relaxed, as the ratios like
$dW_{e\mu}/dW_{ee}$ are generally functions of angles and frequency. For
instance, the requirement $\eta_{q}\ll\eta_{\mu}$ is meaningless in the
vicinity of $\Phi=\pi/2$, as $dW_{e\mu}$ vanishes at this angle. Finally, note
that typical widths of the electron packets after the emission at a cathode
vary from several Angstrom to a few nm, depending on the cathode [34, 35],
which meets our requirements.
## 3 Smith-Purcell radiation via generalized surface currents
### 3.1 Surface currents and radiation field
Following the generalized surface current model developed in Ref.[15] we
express the current density induced by the incident electromagnetic field of
the electron on the surface of an ideally conducting grating as a vector
product of ${\bm{E}}$, the normal to the surface ${\bm{n}}$ and the unit
vector to a distant point
${\bm{e}}_{0}=\frac{{\bm{r}}_{0}}{|{\bm{r}}_{0}|}=\left(\sin\Theta\cos\Phi,\sin\Theta\sin\Phi,\cos\Theta\right),$
${\bm{j}}(w)=\frac{1}{2\pi}\,{\bm{e}}_{0}\times\left[\,{\bm{n}}\times{\bm{E}}(w)\right].$
(39)
This expression is suitable for calculating the radiated energy in the far-
field only, as one should generally have a curl instead of ${\bm{e}}_{0}$ and
the induced current should not depend on the observer's position.
Unlike the surface current density used in the theory of diffraction of plane
waves, this one has all three components, including the component
perpendicular to the grating surface. This normal component arises because
the incident electric field itself has all three components, unlike a plane
wave. For ultrarelativistic energies, the normal
component of the surface current can be safely neglected, and in this case the
generalized surface current model completely coincides [36] with the well-known
approach of Brownell et al. [16]. The latter model was successfully tested,
for instance, in experiment [37] conducted with a $28.5$ GeV electron beam.
However, for the moderate electron energies needed for observation of the
effects we discuss in this work, the normal component of the surface current
is crucially important, which is why we employ the more general model of
Ref.[15].
To calculate the radiation fields at large distances we use Eq.(28) from [15]
${\bm{E}}^{R}\approx\frac{i\omega{\rm e}^{ikr_{0}}}{2\pi
r_{0}}\int{\bm{e}}_{0}\times\left[\,{\bm{n}}\times{\bm{E}}(k_{x},y,z,\omega)\right]{\rm
e}^{-ik_{z}z}dz\,,$ (40)
where the integration is performed along the periodic grating.
### 3.2 Electromagnetic field of a vortex electron
In the Appendix we derive explicit expressions for the electromagnetic fields
produced by a vortex electron [18] in Cartesian coordinates. One can
separate the field into the contributions of the charge, of the magnetic
moment, and of the quadrupole moment as follows:
$\displaystyle{\bm{E}}({\bm{r}},t)={\bm{E}}_{e}({\bm{r}},t)+{\bm{E}}_{\mu}({\bm{r}},t)+{\bm{E}}_{Q}({\bm{r}},t),$
(41)
$\displaystyle{\bm{E}}_{e}({\bm{r}},t)=\frac{1}{R^{3}}\left\{\gamma{\bm{\rho}},R_{z}\right\},$
(42)
$\displaystyle{\bm{E}}_{\mu}({\bm{r}},t)=\frac{\ell}{2m}\frac{3\beta\gamma}{R^{5}}R_{z}\left\{y,-x,0\right\},$
(43)
$\displaystyle{\bm{E}}_{Q}({\bm{r}},t)=\frac{\gamma}{4R^{3}}{\bm{\rho}}\Bigg{(}3\frac{\bar{\rho}_{0}^{2}}{R^{2}}\left(1-5\frac{R_{z}^{2}}{R^{2}}\right)+\ell^{2}\left(\frac{\lambda_{c}}{\bar{\rho}_{0}}\right)^{2}\Big{[}3\frac{T_{z}^{2}}{R^{2}}\left(1-5\frac{R_{z}^{2}}{R^{2}}\right)+3\frac{R_{z}^{2}}{R^{2}}-6\beta\frac{R_{z}T_{z}}{R^{2}}-1\Big{]}\Bigg{)}+$
(44)
$\displaystyle\frac{\gamma}{4R^{3}}({\bm{z}}-{\bm{\beta}}t)\Bigg{(}3\frac{\bar{\rho}_{0}^{2}}{R^{2}}\left(3-5\frac{R_{z}^{2}}{R^{2}}\right)+\ell^{2}\left(\frac{\lambda_{c}}{\bar{\rho}_{0}}\right)^{2}\Big{[}\frac{T_{z}^{2}}{R^{2}}\left(3-5\frac{R_{z}^{2}}{R^{2}}\right)+3\frac{R_{z}^{2}}{R^{2}}-1\Big{]}\Bigg{)}.$
(45)
where the following notations are used:
${\bm{\beta}}=\left(0,0,\beta\right),\,{\bm{z}}=\left(0,0,z\right)$,
$\displaystyle{\bm{\rho}}=\left\{x,y\right\},\quad R_{z}:=\gamma(z-\beta t),\quad
T_{z}:=\gamma(t-\beta z),$ (46)
$\displaystyle{\bm{R}}=\left\{{\bm{\rho}},\gamma(z-\beta
t)\right\},\quad\gamma=(1-\beta^{2})^{-1/2}.$ (47)
We omit the magnetic fields, as to calculate the surface current below we need
the electric field only. In the problem of Smith-Purcell radiation, the
grating is supposed to be very long in the transverse direction, so we need
the Fourier transform of these fields.
### 3.3 Fourier transform of the fields
When calculating the Fourier transform of the electric fields produced by the
wave-packet
${\bm{E}}(q_{x},y,z,\omega)=\int\limits dxdt\,{\bm{E}}({\bm{r}},t)e^{i\omega
t-iq_{x}x}$
the following integral and its derivatives appear
$I_{\nu}(q_{x},y,z,\omega)=\int\limits_{-\infty}^{\infty}dt\int\limits_{-\infty}^{\infty}dx\frac{{\rm
e}^{i(\omega t-q_{x}x)}}{\left(x^{2}+y^{2}+\gamma^{2}(z-\beta
t)^{2}\right)^{(\nu/2)}}\,,\qquad\nu=3,5,7\,.$ (48)
We consider three master integrals to reduce the number of derivatives with respect
to parameters, although $I_{3}$ alone would suffice to calculate the Fourier
transform of all the terms. The master integrals read
$\displaystyle\displaystyle
I_{3}(q_{x},y,z,\omega)=\frac{2\pi}{\gamma\beta}\exp\left(\frac{i\omega
z}{\beta}-\mu|y|\right)\frac{1}{|y|},$ (49) $\displaystyle\displaystyle
I_{5}(q_{x},y,z,\omega)=\frac{2\pi}{\gamma\beta}\exp\left(\frac{i\omega
z}{\beta}-\mu|y|\right)\frac{(1+\mu|y|)}{3|y|^{3}},$ (50)
$\displaystyle\displaystyle
I_{7}(q_{x},y,z,\omega)=\frac{2\pi}{\gamma\beta}\exp\left(\frac{i\omega
z}{\beta}-\mu|y|\right)\frac{(3+3\mu|y|+\mu^{2}|y|^{2})}{15|y|^{5}}\,,$ (51)
where $\mu=\sqrt{\frac{\omega^{2}}{\gamma^{2}\beta^{2}}+q_{x}^{2}}$,
$\nu=3,5,7$.
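Since $\partial_{y}\left(x^{2}+y^{2}+\gamma^{2}(z-\beta t)^{2}\right)^{-\nu/2}=-\nu y\left(\cdot\right)^{-(\nu+2)/2}$, the master integrals form a ladder, $I_{\nu+2}=-\frac{1}{\nu y}\partial_{y}I_{\nu}$, which can be verified symbolically (a sketch for $y>0$, dropping the common prefactor $\frac{2\pi}{\gamma\beta}\,{\rm e}^{i\omega z/\beta}$):

```python
import sympy as sp

y, mu = sp.symbols('y mu', positive=True)

# Master integrals (49)-(51) for y > 0, without the common prefactor
I3 = sp.exp(-mu * y) / y
I5 = sp.exp(-mu * y) * (1 + mu * y) / (3 * y ** 3)
I7 = sp.exp(-mu * y) * (3 + 3 * mu * y + mu ** 2 * y ** 2) / (15 * y ** 5)

# Ladder relations: I5 = -(1/(3y)) dI3/dy and I7 = -(1/(5y)) dI5/dy
assert sp.simplify(-sp.diff(I3, y) / (3 * y) - I5) == 0
assert sp.simplify(-sp.diff(I5, y) / (5 * y) - I7) == 0
print("master-integral ladder relations hold")
```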
All the rest can be obtained by taking derivatives of the corresponding master
integral either over $t$ or $x$
$I_{\nu,t^{s},x^{p}}=\int\limits_{-\infty}^{\infty}dt\int\limits_{-\infty}^{\infty}dx\frac{t^{s}x^{p}{\rm
e}^{i(\omega t-q_{x}x)}}{\left(x^{2}+y^{2}+\gamma^{2}(z-\beta
t)^{2}\right)^{(\nu/2)}}=i^{p-s}\partial_{\omega}^{s}\partial_{q_{x}}^{p}I_{\nu}(q_{x},y,z,\omega)\,.$
(52)
Note that only $p=0$ ($y$ and $z$ components of electric field) and $p=1$ ($x$
component) cases are required. In particular, the electric fields of the charge
and the magnetic moment read
$\displaystyle{\bm{E}}_{e}(q_{x},y,z,\omega)=\gamma\left(i\partial_{q_{x}},y,\left(z+i\beta\partial_{\omega}\right)\right)\,I_{3}(q_{x},y,z,\omega)\,,$
(53)
$\displaystyle{\bm{E}}_{\mu}(q_{x},y,z,\omega)=\frac{3\ell\beta\gamma^{2}}{2m}\left(z+i\beta\partial_{\omega}\right)\left(y,-i\partial_{q_{x}},0\right)\,I_{5}(q_{x},y,z,\omega)\,,$
(54)
which after the differentiation reads
$\displaystyle{\bm{E}}_{e}(q_{x},y,z,\omega)=\frac{2\pi}{\beta}\left(-iq_{x},\text{sgn}(y)\sqrt{\left(\frac{\omega}{\beta\gamma}\right)^{2}+q_{x}^{2}},-i\frac{\omega}{\beta\gamma^{2}}\right)\,\frac{\exp\left\{iz\frac{\omega}{\beta}-|y|\sqrt{\left(\frac{\omega}{\beta\gamma}\right)^{2}+q_{x}^{2}}\right\}}{\sqrt{\left(\frac{\omega}{\beta\gamma}\right)^{2}+q_{x}^{2}}},$
(55)
$\displaystyle{\bm{E}}_{\mu}(q_{x},y,z,\omega)=-\frac{\ell}{2m}\frac{i2\pi\omega}{\beta\gamma}\left(\text{sgn}(y)\sqrt{\left(\frac{\omega}{\beta\gamma}\right)^{2}+q_{x}^{2}},iq_{x},0\right)\frac{\exp\left\{iz\frac{\omega}{\beta}-|y|\sqrt{\left(\frac{\omega}{\beta\gamma}\right)^{2}+q_{x}^{2}}\right\}}{\sqrt{\left(\frac{\omega}{\beta\gamma}\right)^{2}+q_{x}^{2}}},$
(56)
where $q^{2}=q_{0}^{2}-{\bm{q}}^{2}\neq 0$ and for the electron packet whose
center is at the distance $h$ from the grating one needs to substitute
$|y|\rightarrow|y-h|$.
Technically, the Fourier transform of the quadrupole fields follows the same
line. Starting from the formula (45), one should substitute $x$ and $t$
variables in the numerator by the differential operators $x\to
i\partial_{q_{x}}$, $t\to-i\partial_{\omega}$, acting on the master integrals
defined by the denominators, $R^{-\nu/2}\to I_{\nu}$. Resulting expressions
are calculated with the aid of computer algebra and can be found in the public
repository [38]. Here we only discuss the general structure of the
corresponding expressions. Consider the Fourier transform of a term where
$R_{z}=\gamma(z-\beta t)$ enters the numerator
$\int\limits dxdt\,f(x,y)\frac{R_{z}^{n}}{R^{\frac{\nu}{2}}}e^{i\omega
t-iq_{x}x}=\gamma^{n}f(i\partial_{q_{x}},y)(z+i\beta\partial_{\omega})^{n-1}(z+i\beta\partial_{\omega}){\rm
e}^{i\omega z/\beta}I_{f}(q_{x},y,\omega)\,.$
The commutator $[i\beta\partial_{\omega},{\rm e}^{i\omega z/\beta}]=-z{\rm
e}^{i\omega z/\beta}$ implies that $(z+i\beta\partial_{\omega}){\rm
e}^{i\omega z/\beta}I_{f}(q_{x},y,\omega)={\rm e}^{i\omega
z/\beta}(i\beta\partial_{\omega})I_{f}(q_{x},y,\omega)$ and we get the
integral where $z$-variable enters only the exponential factor
$\int\limits dxdt\,f(x,y)\frac{R_{z}^{n}}{R^{\frac{\nu}{2}}}e^{i\omega
t-iq_{x}x}=\gamma^{n}{\rm e}^{i\omega
z/\beta}f(i\partial_{q_{x}},y)(i\beta\partial_{\omega})^{n}I_{f}(q_{x},y,\omega)\,.$
(57)
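The commutator step can be verified symbolically (a sketch; $f$ stands for an arbitrary smooth function of $\omega$):

```python
import sympy as sp

z, w, b = sp.symbols('z omega beta', positive=True)
f = sp.Function('f')(w)

plane = sp.exp(sp.I * w * z / b)
# (z + i*beta*d/domega) acting on exp(i*omega*z/beta) * f(omega) ...
lhs = z * plane * f + sp.I * b * sp.diff(plane * f, w)
# ... equals exp(i*omega*z/beta) * (i*beta*d/domega) f(omega)
rhs = plane * sp.I * b * sp.diff(f, w)
assert sp.simplify(lhs - rhs) == 0
print("commutator identity verified")
```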
Therefore we express $t$ using $z$ and $R_{z}$,
$t-\beta z=-\frac{1}{\beta}(z-\beta
t)+\frac{z}{\beta\gamma^{2}}=-\frac{R_{z}}{\beta\gamma}+\frac{z}{\beta\gamma^{2}}$
and rewrite (45) as a sum of (57)-like terms. As a result, the Fourier
transform of the quadrupole fields has the following factorized structure:
$\displaystyle\displaystyle{\bm{E}}_{Q}(q_{x},y,z,\omega)=\exp\left(iz\frac{\omega}{\beta}\right)\left({\bm{E}}_{Q_{0}}(q_{x},y,\omega)+{\bm{E}}_{Q_{1}}(q_{x},y,\omega)z+{\bm{E}}_{Q_{2}}(q_{x},y,\omega)z^{2}\right),$
(58)
where the $z$-dependent plane-wave factor multiplies a second-order polynomial
in $z$ whose coefficients are functions of $q_{x}$, $y$, and $\omega$. Note that the constant
term of the polynomial has a leading part proportional to the charge
contribution
$\exp\left(iz\frac{\omega}{\beta}\right){\bm{E}}_{Q_{0}}(q_{x},y,\omega)=\frac{\ell^{2}\lambda_{c}^{2}}{\bar{\rho}_{0}^{2}}{\bm{E}}_{e}(q_{x},y,z,\omega)+\ldots$
The charge and the magnetic dipole contributions depend on $z$ due to the
$z$-dependent plane-wave factor only. That is, the Fourier transform of the
total electric field has the same structure as in (58),
$\displaystyle\displaystyle{\bm{E}}(q_{x},y,z,\omega)=\exp\left(iz\frac{\omega}{\beta}\right)\left({\bm{E}}_{0}(q_{x},y,\omega)+{\bm{E}}_{Q_{1}}(q_{x},y,\omega)z+{\bm{E}}_{Q_{2}}(q_{x},y,\omega)z^{2}\right),$
(59)
where
${\bm{E}}_{0}(q_{x},y,\omega)={\bm{E}}_{e}(q_{x},y,\omega)+{\bm{E}}_{\mu}(q_{x},y,\omega)+{\bm{E}}_{Q_{0}}(q_{x},y,\omega)$.
The terms linear and quadratic in z contain the quadrupole contribution only
and represent the non-paraxial contributions mentioned earlier. We will use
this structure in the next section to perform integration along the grating.
The surface current
${\bm{j}}=\frac{1}{2\pi}\,\left(-E_{x}e_{0y},\,E_{x}e_{0x}+E_{z}e_{0z},\,-E_{z}e_{0y}\right)\,,$
inherits the structure of Eq.(59)
${\bm{j}}(k_{x},y,z,\omega)=\exp\left(iz\frac{\omega}{\beta}\right)\left({\bm{j}}_{0}(k_{x},y,\omega)+{\bm{j}}_{Q_{1}}(k_{x},y,\omega)\,z+{\bm{j}}_{Q_{2}}(k_{x},y,\omega)\,z^{2}\right),$
(60)
where the first term contains all types of the contributions,
${\bm{j}}_{0}(k_{x},y,\omega)={\bm{j}}_{e}(k_{x},y,\omega)+{\bm{j}}_{\mu}(k_{x},y,\omega)+{\bm{j}}_{Q_{0}}(k_{x},y,\omega)$,
while the next terms are related to the quadrupole contribution only. Note
that here ${\bm{k}}=\omega\,{\bm{r}}_{0}/r_{0}$ is an on-mass-shell wave
vector, $k^{2}=\omega^{2}-{\bm{k}}^{2}=0$.
Integrating with respect to the $z$-coordinate along the periodic grating, we get
$\int\limits_{0}^{Nd}dz\left({\bm{j}}_{0}+{\bm{j}}_{Q_{1}}\,z+{\bm{j}}_{Q_{2}}\,z^{2}\right)\exp\left(iz\left(\frac{\omega}{\beta}-k_{z}\right)\right)=$
(61)
$=\left({\bm{j}}_{0}+{\bm{j}}_{Q_{1}}(i\partial_{k_{z}})+{\bm{j}}_{Q_{2}}(i\partial_{k_{z}})^{2}\right)F(\Theta_{1})\,,$
where
$\displaystyle
F(\Theta_{1})=\sum_{j=0}^{N}\int\limits_{jd}^{jd+a}dz\exp\left(iz\left(\frac{\omega}{\beta}-k_{z}\right)\right)=\frac{2\sin(\frac{a\Theta_{1}}{2})}{\Theta_{1}}\frac{\sin\left(\frac{Nd\Theta_{1}}{2}\right)}{\sin\left(\frac{d\Theta_{1}}{2}\right)}\exp\left(\frac{i\Theta_{1}}{2}(a+(N-1)d)\right),$
(62)
and we denote
$\Theta_{1}=\frac{\omega}{\beta}-k_{z}.$
Here $a$ is the strip width (see Fig. 1). Note that
$\partial_{k_{z}}=-\partial_{\Theta_{1}}$, and to write the resulting formulas
in a compact form we will use the following notations:
$\partial_{\Theta_{1}}F(\Theta_{1})=F^{\prime}(\Theta_{1}),\
\partial^{j}_{\Theta_{1}}F(\Theta_{1})=F^{(j)}(\Theta_{1}).$
The standard interference factor due to diffraction on a grating is $|F|^{2}$.
As can be seen, the radiation from the charge and the magnetic moment is
modulated by this standard factor, while the
interference of the charge with the quadrupole involves $F$ and its
derivatives. As a result, the non-symmetric shape of the electron packet
results in a small modification of the Smith-Purcell dispersion relation,
Eq.(1).
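The role of the spectral factor (62) is easy to see numerically: $|F|^{2}$ peaks where $d\Theta_{1}/2=\pi m$, i.e. at $\omega/\beta-k_{z}=2\pi m/d$, which is the Smith-Purcell condition (1). A minimal sketch in units where $d=1$ (the overall phase is dropped, and the resonance is sampled slightly off the exact zero of the denominator):

```python
import math

def F(theta1, N=50, d=1.0, a=0.5):
    """Modulus of the spectral factor of Eq. (62), phase factor dropped."""
    return (2 * math.sin(a * theta1 / 2) / theta1
            * math.sin(N * d * theta1 / 2) / math.sin(d * theta1 / 2))

N, d = 50, 1.0
# m = 1 diffraction order: Theta_1 = 2*pi/d (evaluated just off the exact point)
peak = abs(F(2 * math.pi / d * (1 + 1e-9), N=N, d=d)) ** 2
print(f"|F|^2 on the m = 1 line: {peak:.1f} "
      f"(expected N^2 d^2/pi^2 = {(N * d / math.pi) ** 2:.1f} for a = d/2)")
# Between the orders |F|^2 is orders of magnitude smaller:
print(f"|F|^2 between orders:   {abs(F(2 * math.pi / d * 1.37, N=N, d=d)) ** 2:.3f}")
```

The peak height scales as $N^{2}$ and the line width shrinks as $1/N$, which is why a large number of strips yields a narrow emission line.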
## 4 Multipole corrections to the spectral-angular distribution of the Smith-
Purcell radiation from the LG-wave packet
### 4.1 Spectral distribution of the Smith-Purcell radiation from the LG-wave
packet
The distribution of the radiated energy over the frequencies and angles,
$\frac{d^{2}W}{d\omega d\Omega}=r_{0}^{2}|{\bm{E}}^{R}|^{2}\,,$ (63)
represents a sum of the following terms (cf. Eq.(13)):
$\displaystyle
dW_{ee}=\frac{\omega^{2}}{4\pi^{2}}|{\bm{j}}_{e}|^{2}|F(\Theta_{1})|^{2}\,,$
(64) $\displaystyle
dW_{e\mu}=\frac{\omega^{2}}{4\pi^{2}}\left[{\bm{j}}_{e}{\bm{j}}_{\mu}^{*}+{\bm{j}}_{e}^{*}{\bm{j}}_{\mu}\right]|F(\Theta_{1})|^{2}\,,$
(65) $\displaystyle
dW_{eQ_{0}}=\frac{\omega^{2}}{4\pi^{2}}\left[{\bm{j}}_{e}{\bm{j}}_{Q_{0}}^{*}+{\bm{j}}_{e}^{*}{\bm{j}}_{Q_{0}}\right]|F(\Theta_{1})|^{2}\,,$
(66) $\displaystyle
dW_{eQ_{1}}=\frac{i\omega^{2}}{4\pi^{2}}\left[{\bm{j}}_{e}{\bm{j}}_{Q_{1}}^{*}F(\Theta_{1})F^{\prime}(\Theta_{1})^{*}-{\bm{j}}_{e}^{*}{\bm{j}}_{Q_{1}}F(\Theta_{1})^{*}F^{\prime}(\Theta_{1})\right]\,,$
(67) $\displaystyle
dW_{eQ_{2}}=-\frac{\omega^{2}}{4\pi^{2}}\left[{\bm{j}}_{e}{\bm{j}}_{Q_{2}}^{*}F(\Theta_{1})F^{\prime\prime}(\Theta_{1})^{*}+{\bm{j}}_{e}^{*}{\bm{j}}_{Q_{2}}F(\Theta_{1})^{*}F^{\prime\prime}(\Theta_{1})\right]\,,$
(68) $\displaystyle dW_{\mu
Q_{j}}=-\frac{(-i)^{j}\omega^{2}}{4\pi^{2}}\left[{\bm{j}}_{\mu}{\bm{j}}_{Q_{j}}^{*}F(\Theta_{1})F^{(j)}(\Theta_{1})^{*}-{\bm{j}}_{\mu}^{*}{\bm{j}}_{Q_{j}}F(\Theta_{1})^{*}F^{(j)}(\Theta_{1})\right]\,,$
(69) $\displaystyle
dW_{\mu\mu}=\frac{\omega^{2}}{4\pi^{2}}|{\bm{j}}_{\mu}|^{2}|F(\Theta_{1})|^{2}\,,$
(70) $\displaystyle
dW_{Q_{j}Q_{k}}=\frac{(-i)^{j+k}\omega^{2}}{4\pi^{2}}\left[(-1)^{k}{\bm{j}}_{Q_{j}}{\bm{j}}_{Q_{k}}^{*}F^{(j)}(\Theta_{1})F^{(k)}(\Theta_{1})^{*}+(-1)^{j}{\bm{j}}_{Q_{j}}^{*}{\bm{j}}_{Q_{k}}F^{(j)}(\Theta_{1})^{*}F^{(k)}(\Theta_{1})\right].$
(71)
From here, the charge radiation and the charge-dipole interference can be
explicitly calculated
$\displaystyle\displaystyle\frac{d^{2}W_{ee}}{d\omega
d\Omega}=\exp\left(-\frac{2y}{h_{\text{eff}}}\sqrt{1+\beta^{2}\gamma^{2}\cos^{2}\Phi\sin^{2}\Theta}\right)$
(72)
$\displaystyle\displaystyle\times\frac{\cos^{2}\Theta+2\beta\gamma^{2}\cos^{2}\Phi\cos\Theta\sin^{2}\Theta+\sin^{2}\Phi\sin^{2}\Theta+\beta^{2}\gamma^{4}\cos^{2}\Phi\sin^{4}\Theta}{\gamma^{2}(1-\beta\cos\Theta)^{2}(1+\beta^{2}\gamma^{2}\cos^{2}\Phi\sin^{2}\Theta)}\,|F|^{2}\,,$
(73) $\displaystyle\displaystyle\frac{d^{2}W_{e\mu}}{d\omega
d\Omega}=\frac{\ell}{m}\exp\left(-\frac{2y}{h_{\text{eff}}}\sqrt{1+\beta^{2}\gamma^{2}\cos^{2}\Phi\sin^{2}\Theta}\right)$
(74)
$\displaystyle\displaystyle\times\frac{\omega\cos\Phi\sin\Theta\left(\beta\gamma^{2}\sin^{2}\Theta+\cos\Theta\right)}{\gamma^{2}(1-\beta\cos\Theta)^{2}\sqrt{1+\beta^{2}\gamma^{2}\cos^{2}\Phi\sin^{2}\Theta}}\,|F|^{2}\,.$
(75)
The charge contribution reproduces Eq. (56) of [36]; in our coordinate system
$\Phi=0$ corresponds to the $x$ axis in the grating plane (see Fig. 1), so our
Eq. (72) turns into Eq. (56) of [36] after the substitution $\Phi\to\phi+\pi/2$.
The charge-dipole interference results in an azimuthal asymmetry arising from
$\cos\Phi$, analogously to other types of polarization radiation [39]. As noted
earlier, the $e\mu$ contribution vanishes at $\Phi=\pi/2$.
The quadrupole contributions from ${\bm{j}}_{Q_{1}}$ and ${\bm{j}}_{Q_{2}}$
are defined by the real parts of the products of the currents with the form
factor $F$ and its derivatives. Nevertheless, explicit calculations (see the
code in the public repository [38]) show that these terms also have a
factorized structure
$dW_{eQ_{j}}=\exp\left(-\frac{2y}{h_{\text{eff}}}\sqrt{1+\beta^{2}\gamma^{2}\cos^{2}\Phi\sin^{2}\Theta}\right)P_{eQ_{j}}(k_{x},y,\omega)F_{eQ_{j}}(\omega,k_{z}).$
(76)
Here, the functions $P_{eQ_{j}}(k_{x},y,\omega)$ define the angular
distributions and $F_{eQ_{j}}(\omega,k_{z})$ determine the positions of the
spectral lines and their widths (therefore we call
$F_{eQ_{j}}(\omega,k_{z})$ a spectral factor). In figure 2 we compare the
radiation intensities normalized per 1 strip from gratings with $N=25$ and
$N=50$ strips. The spectral curves of $dW_{ee}$, $dW_{eQ_{0}}$ and
$dW_{eQ_{2}}$ have a similar shape and position; up to the sign this is also
the case for $dW_{e\mu}$, which is zero at $\Phi=\pi/2$ by the symmetry
considerations. The contribution $dW_{eQ_{1}}$ leads to a shift of the
spectral line, but its amplitude is rather small (a factor of $10^{2}$ is used
in Fig.2) and this shift is almost unobservable. A nonlinear amplification of
the quadrupole contribution to the radiation intensity is clearly seen in
figure 2. This effect becomes stronger for the radiation in the forward
direction (see figure 3).
Figure 2: Comparison of the radiation spectrum of an ordinary electron and of
a vortex electron packet ($\bar{\rho}_{0}=300\,\text{nm}$, $\ell=1000$) for two
gratings with $N=25$ and $N=50$. Quadrupole corrections $dW_{eQ_{j}}$ are
shown for $N=50$ only. Radiation intensities are normalized per strip; the
zenith direction perpendicular to the grating plane,
$\Theta=\Phi=\frac{\pi}{2}$, is considered. The difference between the full
radiation intensity and the charge contribution is shown by the filled region
between the corresponding curves. The grating period is $d=1\,$mm, $\beta=0.5$,
$a=d/2$. Figure 3:
Radiation intensity at different polar angles (black solid line) and
contributions from the charge (green solid line), and the electric quadrupole
moment (black, red and blue dashed lines) with the following parameters:
$\beta=0.5$, $d=1\,\text{mm}$, $a=d/2$, $\bar{\rho}_{0}=300\,\text{nm}$,
$\ell=1000$, $N=50$, $\Phi=\frac{\pi}{2}$.
Figures 2 and 3 correspond to the case when both the number of strips, $N=50$,
and the OAM, $\ell=1000$, are close to the maximal values estimated in section 2.2. Table 1
gives the corresponding dimensionless parameters. Note that two higher-order
terms, $\eta_{Q_{12}}$ and $\eta_{Q_{22}}$, also surpass the quantum recoil in this
case. Moreover, $\eta_{Q_{22}}$, being two orders of magnitude smaller than
the leading correction $\eta_{Q_{2}}$, becomes more important than the
$\eta_{Q_{0}}$ and $\eta_{Q_{1}}$ corrections. This means that within our
perturbative method, only the charge and the charge-quadrupole $\eta_{Q_{2}}$
contributions need to be computed, while all the remaining corrections can be
considered next-to-leading-order corrections that are at least two orders
of magnitude smaller (we plot the corresponding curves in figures 2 and 3 just
to demonstrate their shapes).
$\eta_{q}=\omega/\varepsilon$ | $\eta_{\mu}=\ell\lambda_{c}/\lambda$ | $\eta_{\text{Q0}}=\bar{\rho}_{0}^{2}/h_{\text{eff}}^{2}$ | $\eta_{\text{Q1}}=\ell^{2}\lambda_{c}^{2}/\bar{\rho}_{0}^{2}$ | $\eta_{\text{Q2}}=N^{2}\,\ell^{2}\lambda_{c}^{2}/\bar{\rho}_{0}^{2}$ | $N$
---|---|---|---|---|---
$1.95\times 10^{-10}$ | $1.95\times 10^{-7}$ | $2.67\times 10^{-6}$ | $1.69\times 10^{-6}$ | ${\bf 4.22\times 10^{-3}}$ | $50$

$\eta_{\mu\mu}=\eta_{\mu}^{2}$ | $\eta_{\mu Q_{0}}=\eta_{\mu}\eta_{Q_{0}}$ | $\eta_{\mu Q_{1}}=\eta_{\mu}\eta_{Q_{1}}$ | $\eta_{\mu Q_{2}}=\eta_{\mu}\eta_{Q_{2}}$ | $\ell$ | $\bar{\rho}_{0}\,,\mu\text{m}$
---|---|---|---|---|---
$3.8\times 10^{-14}$ | $5.2\times 10^{-13}$ | $3.29\times 10^{-13}$ | $8.23\times 10^{-10}$ | $1000$ | $0.3$

$\eta_{Q_{00}}=\eta_{Q_{0}}\eta_{Q_{0}}$ | $\eta_{Q_{01}}=\eta_{Q_{0}}\eta_{Q_{1}}$ | $\eta_{Q_{02}}=\eta_{Q_{0}}\eta_{Q_{2}}$ | $\eta_{Q_{11}}=\eta_{Q_{1}}\eta_{Q_{1}}$ | $\eta_{Q_{12}}=\eta_{Q_{1}}\eta_{Q_{2}}$ | $\eta_{Q_{22}}=\eta_{Q_{2}}\eta_{Q_{2}}$
---|---|---|---|---|---
$7.11\times 10^{-12}$ | $4.5\times 10^{-12}$ | $1.13\times 10^{-8}$ | $2.85\times 10^{-12}$ | $7.13\times 10^{-9}$ | $1.78\times 10^{-5}$
Table 1: Dimensionless parameters of the model corresponding to Figure 2. The
dynamical non-paraxial contribution $\eta_{\text{Q2}}$ is the largest one.
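As a cross-check, the entries of Table 1 are internally consistent with Eqs. (22) and (25). The sketch below uses only the tabulated values themselves, so it is independent of the $\lambda_{c}$ convention:

```python
import math

# First-order parameters copied from Table 1
eta = {"mu": 1.95e-7, "Q0": 2.67e-6, "Q1": 1.69e-6, "Q2": 4.22e-3}
# Second-order parameters copied from Table 1
second_order = {
    ("mu", "mu"): 3.8e-14,  ("mu", "Q0"): 5.2e-13,  ("mu", "Q1"): 3.29e-13,
    ("mu", "Q2"): 8.23e-10, ("Q0", "Q0"): 7.11e-12, ("Q0", "Q1"): 4.5e-12,
    ("Q0", "Q2"): 1.13e-8,  ("Q1", "Q1"): 2.85e-12, ("Q1", "Q2"): 7.13e-9,
    ("Q2", "Q2"): 1.78e-5,
}

# Eq. (25): every second-order parameter is the product of the corresponding first-order ones
for (i, j), value in second_order.items():
    assert math.isclose(value, eta[i] * eta[j], rel_tol=0.03), (i, j)

# Eq. (22): the dynamical enhancement is eta_Q2 / eta_Q1 = N^2 with N = 50
assert math.isclose(eta["Q2"] / eta["Q1"], 50 ** 2, rel_tol=0.01)
print("Table 1 is internally consistent")
```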
Studies of the radiation from classical beams show that horizontal and
vertical beam spreading lead to some modifications of the spectral line [40].
The horizontal spreading of the beam shifts the spectral line towards lower
frequencies while the vertical spreading results in the opposite shift. A
combination of both spreading types results in a broadening of the spectral
line. Here we show that quantum coherence of the wave packet may lead to a
different behavior. Namely, despite the vertical-horizontal spreading of the
wave-packet, the resulting spectral line does not demonstrate a broadening
until the quadrupole-quadrupole corrections come into play, which is the case
for long gratings with $N\gg N_{\text{max}}$ only.
Such a stabilization of the line width can be explained using (67), (68) and
the properties of the function $F(\omega)$. First of all, instead of the full
width at half maximum (FWHM) one can consider the full width between zeros of
the spectral curves. Zeros of the interference contributions are defined by
zeros of the kernel $F$ itself: in the vicinity of a zero $\omega_{0}$,
$F(\omega_{0})=0$, the kernel can be factorized as
$F(\omega)=(\omega-\omega_{0})F_{\text{res}}(\omega)$, and since
$(\omega-\omega_{0})$ is real, the same factorization applies to the radiation
intensities in (67), (68). Therefore, all contributions (64)-(70) have the same
full width between zeros of the spectral curves (Fig. 2 presents an example of
these coinciding zeros near the spectral maximum). This strongly restricts the
possible broadening of the spectral line, and at large $N$ all contributions
(64)-(70) tend to have the same width.
The quadrupole-quadrupole corrections $dW_{Q_{1}Q_{2}}$ and $dW_{Q_{2}Q_{2}}$ contain only derivatives of the kernel $F$. As a result, the zeros of their spectral curves (in the vicinity of the maximum) disappear, and the corresponding lines broaden (see figure 4). Importantly, if one takes these contributions into account, then the next corrections of the same order should also be included, such as the interference of the charge with the octupole magnetic moment, with the 16-pole electric moment, and so forth (see (12)). However, the octupole magnetic moment contribution has the same symmetry as the magnetic moment contribution and thus vanishes in the zenith direction. As for the $dW_{e16p}$ contribution, since it is an interference term, the zeros of the kernel $F$ likewise prevent a broadening of the corresponding spectral line. As a result, the only further corrections that may lead to a broadening are the quadrupole-quadrupole ones. Therefore figure 4 and table 2 contain all the necessary terms.
Figure 4: Broadening of the spectral line due to the quadrupole-quadrupole corrections to the S-P radiation. The grating period $d=1\,$mm, $\beta=0.5$, $a=d/2$, $\bar{\rho}_{0}=300\,$nm, $\ell=1000$, $\Phi=\Theta=\frac{\pi}{2}$. Number of strips $N=150$.
Numerical studies of the spectral lines reveal not only an absence of broadening but even a slight narrowing of the lines due to the charge-quadrupole interference. The FWHM for various grating lengths is presented in Table 2, where a narrowing of the line can be seen (Charge+LO corrections column). When $N>100$ the quadrupole-quadrupole contributions become important (Charge+LO+NLO corrections column of Table 2), and when $N>150$ the broadening due to the horizontal-vertical spreading surpasses the narrowing. Thus, for $N<100$ we can safely compute $dW=dW_{e}+dW_{eQ_{2}}$ for the parameters of Table 1.
| Charge | Charge+LO corrections | Charge+LO+NLO corrections
---|---|---|---
$\Delta\omega$, THz, N=25 | 0.033408 | 0.033381 | 0.033383
narrowing, $\%$, N=25 | | $-0.081$ | $-0.077$
$\Delta\omega$, THz, N=50 | 0.016710 | 0.016658 | 0.016668
narrowing, $\%$, N=50 | | $-0.31$ | $-0.25$
$\Delta\omega$, THz, N=100 | 0.0083557 | 0.0082679 | 0.0083364
narrowing, $\%$, N=100 | | $-1.05$ | $-0.23$
$\Delta\omega$, THz, N=150 | 0.0055705 | 0.0054660 | 0.0056454
narrowing, $\%$, N=150 | | $-1.8$ | $+1.3$ (broadening)
Table 2: Comparison of the FWHM for the charge radiation alone, with the interference terms included (line narrowing), and with the quadratic terms included (line broadening). The grating period $d=1\,$mm, $\beta=0.5$, $\bar{\rho}_{0}=0.3\,\mu\text{m}$, $\ell=1000$, $h=0.39\,$mm, $\Phi=\frac{\pi}{2}$, $\Theta=\frac{\pi}{2}$.
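The charge column of Table 2 exhibits the expected $1/N$ width scaling: the product $N\,\Delta\omega$ is constant to better than one percent. A minimal Python sketch of this check (FWHM values transcribed from Table 2):

```python
# FWHM of the pure charge radiation line, transcribed from Table 2 (in THz).
fwhm_charge = {25: 0.033408, 50: 0.016710, 100: 0.0083557, 150: 0.0055705}

# If the line width scales as 1/N, the product N * FWHM should be constant.
products = [n * w for n, w in fwhm_charge.items()]
spread = (max(products) - min(products)) / min(products)
assert spread < 0.01  # constant to better than one percent
```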
A physical reason for the line narrowing is again the spreading of the wave packet. Indeed, the natural width of the spectral line $\Delta\omega$ is related to the time scale of the radiation process $\Delta t$ by the following uncertainty relation:
$\Delta\omega\sim\frac{1}{\Delta t}\,.$ (77)
Due to the packet spreading, its interaction with the strips lasts longer, especially at the end of a long grating with $dN_{\text{max}}\gg z_{R}$, and so $\Delta t(z)$ grows. In other words, the spreading slightly increases the radiation formation length.
### 4.2 Angular distributions at the Smith-Purcell wavelength
Let us denote
$\omega_{g}=\frac{2\pi}{\lambda_{g}}=\frac{2\pi\,g}{d\left(\beta^{-1}-\cos\Theta\right)},\
g=1,2,3,...$
then $|F|^{2}$ contains a Fejér kernel
$\displaystyle
F_{N}(\omega)=\frac{\sin^{2}\left(\frac{Nd\Theta_{1}}{2}\right)}{N\sin^{2}\left(\frac{d\Theta_{1}}{2}\right)}\xrightarrow[N\to\infty]{}2\pi\sum_{g}\delta\left(d\omega(1/\beta-\cos\Theta)-d\omega_{g}(1/\beta-\cos\Theta)\right)=$
(78) $\displaystyle=\sum_{g}\frac{2\pi
g}{gd(\beta^{-1}-\cos\Theta)}\delta\left(\omega-\omega_{g}\right)=\sum_{g}\frac{\omega_{g}}{g}\delta\left(\omega-\omega_{g}\right)$
(79)
which can be used to integrate over frequencies in the vicinity of the resonant one. For the charge, the charge-dipole, and the charge-$Q_{0}$ contributions, the Fejér kernel can be replaced by a delta function when $N$ is large. For a grating of finite length with $N$ strips, the spectral line has a width proportional to $1/N$. The angular distributions of the charge radiation and
of the charge-dipole radiation for the main diffraction order $g=1$ read
$\displaystyle\displaystyle\frac{dW_{ee}}{d\Omega}=N\frac{d^{2}\omega_{1}^{3}}{\pi^{2}}\sin^{2}\left(\frac{a\pi}{d}\right)\exp\left(-\frac{2\omega_{1}y}{\beta\gamma}\sqrt{1+\beta^{2}\gamma^{2}\cos^{2}\Phi\sin^{2}\Theta}\right)$
(80)
$\displaystyle\displaystyle\times\frac{\cos^{2}\Theta+2\beta\gamma^{2}\cos^{2}\Phi\cos\Theta\sin^{2}\Theta+\sin^{2}\Phi\sin^{2}\Theta+\beta^{2}\gamma^{4}\cos^{2}\Phi\sin^{4}\Theta}{\beta^{2}\gamma^{2}(1+\beta^{2}\gamma^{2}\cos^{2}\Phi\sin^{2}\Theta)},$
(81) $\displaystyle\displaystyle\frac{dW_{e\mu}}{d\Omega}=N\frac{\ell
d^{2}\omega_{1}^{4}}{m\pi^{2}}\,\frac{1}{\beta^{2}\gamma^{2}}\,\sin^{2}\left(\frac{\pi
a}{d}\right)\exp\left(-\frac{2\omega_{1}y}{\beta\gamma}\sqrt{1+\beta^{2}\gamma^{2}\cos^{2}\Phi\sin^{2}\Theta}\right)$
(82)
$\displaystyle\displaystyle\times\frac{\cos\Phi\sin\Theta\left(\beta\gamma^{2}\sin^{2}\Theta+\cos\Theta\right)}{\sqrt{1+\beta^{2}\gamma^{2}\cos^{2}\Phi\sin^{2}\Theta}}.$
(83)
Both intensities increase linearly with $N$.
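The $1/N$ width of the spectral line can also be verified numerically from the Fejér kernel itself. A minimal Python sketch, treating $F_{N}$ as a function of the phase $x=d\Theta_{1}$:

```python
import numpy as np

def fejer(x, n):
    # F_N as a function of the phase x = d * Theta_1; the peak value at x = 0 is n.
    return np.sin(n * x / 2.0) ** 2 / (n * np.sin(x / 2.0) ** 2)

def fwhm(n):
    # Sample the main lobe only (it lies well inside |x| < 3*pi/n);
    # an even number of points avoids the removable singularity at x = 0.
    x = np.linspace(-3.0 * np.pi / n, 3.0 * np.pi / n, 200000)
    above = x[fejer(x, n) >= n / 2.0]
    return above.max() - above.min()

# The full width at half maximum shrinks as 1/N: n * FWHM is nearly constant.
w50, w200 = fwhm(50), fwhm(200)
assert abs(50 * w50 - 200 * w200) / (50 * w50) < 0.01
```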
Integration of the non-paraxial terms, $dW_{eQ_{1}}$ and $dW_{eQ_{2}}$, is trickier. First we note that at large $N$ the spectral factors $F_{eQ_{j}}$ are concentrated near the Smith-Purcell frequency and have a width $\sim 1/N$. $F_{eQ_{1}}$ is approximately an odd function and $F_{eQ_{2}}$ an even function of $\omega-\omega_{1}$ (see Fig. 2). Therefore $F_{eQ_{1}}$ produces a shift of the spectral maximum, whereas $F_{eQ_{2}}$ amplifies the intensity. At large $N$ these spectral factors are related to the Fejér kernel and its derivatives.
For instance, the charge-$Q_{2}$ intensity has the following factor at large
$N$:
$\displaystyle\displaystyle
F_{eQ_{2}}=2d^{2}N^{3}\Theta_{1}^{2}\sin^{2}\left[\frac{a\Theta_{1}}{2}\right]F_{N}(\omega)+{\rm
O}(N)$ (84)
which is proportional to the Fejér kernel and can be replaced by a delta function. As a result, the dynamically enhanced charge-quadrupole interference term $dW_{eQ_{2}}/d\Omega$ reads
$\displaystyle\displaystyle\frac{dW_{eQ_{2}}}{d\Omega}=\ell^{2}\left(\frac{\lambda_{c}}{\bar{\rho}_{0}}\right)^{2}\,\frac{1}{3\beta^{4}\gamma^{4}}\,\frac{f_{2}(N)}{\lambda_{1}^{2}}\,\frac{dW_{ee}}{d\Omega}.$
(85)
As expected, this dynamical contribution is suppressed in the relativistic case, $\gamma\gg 1$, when the spreading is negligible. Here
$\displaystyle\displaystyle f_{2}(N)=3\pi
ad\,\cot\left(\frac{a\pi}{d}\right)+3\pi^{2}\,a^{2}+3\pi\,ad(N-1)+d^{2}\,(\pi^{2}(2N^{2}-3N+1)-3)\approx$
(86) $\displaystyle\displaystyle\approx 2\pi^{2}d^{2}N^{2}\ \text{when}\ N\gg
1.$ (87)
Note that
$\displaystyle\displaystyle\frac{f_{2}(N)}{\lambda_{1}^{2}}\approx\frac{2\pi^{2}d^{2}N^{2}}{\lambda_{1}^{2}}\sim
2\pi^{2}N^{2}\ \text{at}\ N\gg 1,$ (88) $\displaystyle\displaystyle\text{as}\
\lambda_{1}\sim d\ \text{everywhere except}\ \Theta\to 0,$ (89)
which leads to a cubic growth of the charge-quadrupole contribution $dW_{eQ_{2}}/d\Omega$ with the number of strips ($dW_{eQ_{2}}/d\Omega\sim N^{2}\,dW_{ee}/d\Omega$, $dW_{ee}/d\Omega\sim N$). The
ratio of this correction to the radiation of the charge also has a non-linear
(quadratic) $N$-dependence
$\frac{dW_{eQ_{2}}}{d\Omega}/\frac{dW_{ee}}{d\Omega}\sim\ell^{2}\,N^{2}\,\left(\frac{\lambda_{c}}{\bar{\rho}_{0}}\right)^{2},\
N\gg 1.$
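The asymptotics (86), (87), and hence the cubic growth, can be checked numerically. A minimal Python sketch with $a=d/2$ (lengths in units of $d$):

```python
import math

def f2(n, d=1.0):
    # Eq. (86) with a = d/2 (the cotangent term then vanishes).
    a = d / 2.0
    return (3.0 * math.pi * a * d / math.tan(a * math.pi / d)
            + 3.0 * math.pi**2 * a**2
            + 3.0 * math.pi * a * d * (n - 1)
            + d**2 * (math.pi**2 * (2 * n**2 - 3 * n + 1) - 3.0))

# Large-N limit (87): f2(N) -> 2 pi^2 d^2 N^2, which together with
# dW_ee ~ N yields the cubic growth dW_eQ2 ~ N^3.
ratio = f2(10000) / (2.0 * math.pi**2 * 10000**2)
assert abs(ratio - 1.0) < 1e-3
```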
Importantly, $1/\lambda_{1}^{2}(\Theta)$ is the only additional angle-dependent factor in $dW_{eQ_{2}}$ compared to $dW_{ee}$, so the dynamical contribution is enhanced at smaller wavelengths, that is, at smaller emission angles, $\Theta\to 0$. Namely, at $\beta\approx 0.5$ we have
$\displaystyle\displaystyle\frac{dW_{eQ_{2}}(\Theta=0)}{dW_{eQ_{2}}(\Theta=\pi/2)}\approx
4.$ (90)
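Estimate (90) follows directly from the first-order Smith-Purcell dispersion $\lambda_{1}=d(\beta^{-1}-\cos\Theta)$: at $\beta=0.5$ the factor $1/\lambda_{1}^{2}$ is exactly four times larger at $\Theta=0$ than at $\Theta=\pi/2$. A minimal Python sketch:

```python
import math

def lambda_1(theta, beta=0.5, d=1.0):
    # First-order Smith-Purcell dispersion: lambda_1 = d (1/beta - cos(theta)).
    return d * (1.0 / beta - math.cos(theta))

# Ratio of the extra 1/lambda_1^2 factor at Theta = 0 versus Theta = pi/2.
ratio = lambda_1(math.pi / 2.0) ** 2 / lambda_1(0.0) ** 2
assert abs(ratio - 4.0) < 1e-12  # Eq. (90)
```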
The large-$N$ asymptotic of the charge-$Q_{1}$ spectral factor reads
$F_{eQ_{1}}=-N\omega\sin^{2}\left[\frac{a\Theta_{1}}{2}\right]F^{\prime}_{N}(\omega)+{\rm
O}(1)\,,$ (91)
where a derivative of the Fejér kernel appears
$F^{\prime}_{N}(\omega)=\frac{d\Theta_{1}\sin\left(\frac{dN\Theta_{1}}{2}\right)\left[\cot\left(\frac{d\Theta_{1}}{2}\right)\sin\left(\frac{dN\Theta_{1}}{2}\right)-N\cos\left(\frac{dN\Theta_{1}}{2}\right)\right]}{N\omega\sin^{2}\left(\frac{d\Theta_{1}}{2}\right)}\xrightarrow[N\to\infty]{}\sum\frac{\omega_{g}}{g}\delta^{\prime}\left(\omega-\omega_{g}\right).$
Integration of (76) for $g=1$ can be done by replacing $F^{\prime}_{N}(\omega)$ with the derivative of a delta function
$\frac{dW_{eQ_{1}}}{d\Omega}=-N\omega_{1}\int\exp\left(\frac{-2y\omega}{\beta\gamma}\sqrt{1+\beta^{2}\gamma^{2}\cos^{2}\Phi\sin^{2}\Theta}\right)P_{eQ_{j}}(k_{x},y,\omega)\omega\sin^{2}\left[\frac{a\Theta_{1}}{2}\right]\delta^{\prime}\left(\omega-\omega_{g}\right)d\omega=$
$N\omega_{1}\partial_{\omega}\left.\left[\exp\left(\frac{-2y\omega}{\beta\gamma}\sqrt{1+\beta^{2}\gamma^{2}\cos^{2}\Phi\sin^{2}\Theta}\right)P_{eQ_{j}}(k_{x},y,\omega)\omega\sin^{2}\left(\frac{a\Theta_{1}}{2}\right)\right]\right|_{\omega\to\omega_{1}}\,.$
Using the explicit expressions for the radiation intensities, one can isolate the dimensionless parameters related to the quadrupole contribution. From Eq. (85) one finds $\eta_{Q_{2}}$, while $\eta_{Q_{0}}$ and $\eta_{Q_{1}}$ can be extracted from the ratios $dW_{eQ_{j}}/dW_{ee}$:
$\displaystyle\frac{dW_{eQ_{0}}}{dW_{ee}}=\frac{\bar{\rho}_{0}^{2}}{h_{\text{eff}}^{2}}P_{1}(\beta,\Theta,\Phi)+\ell^{2}\frac{\lambda_{c}^{2}}{\bar{\rho}_{0}^{2}}P_{2}\left(\beta,\frac{h}{d},\Theta,\Phi\right)\,,$
(92)
$\displaystyle\frac{dW_{eQ_{1}}}{dW_{ee}}=\ell^{2}\frac{\lambda_{c}^{2}}{\bar{\rho}_{0}^{2}}P_{3}\left(\beta,\frac{h}{d},\Theta,\Phi\right)\,,$
(93)
where $P_{1,2,3}$ are smooth functions. One can thus identify the three parameters (18), (19). Note that $\frac{dW_{eQ_{0}}}{dW_{ee}}$ contains a linear combination of the independent parameters $\eta_{Q_{0}}$ and $\eta_{Q_{1}}$, which we split for convenience. For ultra-relativistic energies, all corrections from the quadrupole radiation are suppressed.
In Figure 5 we plot the azimuthal distribution of the radiation intensity at $\lambda=2\,\text{mm}$ and compare the different contributions to the total radiation intensity. We fix the impact parameter and the initial radius of the wave packet and consider two cases: $\ell=1000$, $N=50$ and $\ell=100$, $N=500$. In both cases the maximal number of strips for the given impact parameter, OAM and initial radius $\bar{\rho}_{0}$ is used; the larger grating length corresponds to the smaller angular momentum, $\ell=100$. A scaling invariance $\ell\to\epsilon\ell$, $N\to\epsilon^{-1}N$ of the quadrupole correction $dW_{eQ_{2}}/dW_{ee}\sim\eta_{Q_{2}}$ can be observed in Figure 5. In other words, a large OAM leads to quick spreading and requires short gratings, while a small OAM results in relatively slow spreading and allows one to use longer gratings. At the same time, other corrections, in particular from the magnetic moment, depend on $\ell$ only. Their observation requires the largest possible OAM ($\ell\sim 10^{3}$ and higher) and $\Phi\neq\pi/2$.
Note that for $\beta=0.5$ the charge and the charge-quadrupole contributions have almost the same azimuthal dependence (defined mostly by the exponential factor). The charge-magnetic moment contribution yields a small azimuthal asymmetry. The analysis of Ref. [2] appears to hold for the Smith-Purcell radiation as well. In the case of THz radiation ($\lambda\sim 1\,\text{mm}$) this effect is almost unobservable (see figure 5 and Table 1). An asymmetry of the order of 0.1% can be seen for infrared S-P radiation, $\lambda\sim 1\,\mu\text{m}$, which could in principle be measured.
(a) $\ell=1000$, $N=50$
(b) $\ell=100$, $N=500$
Figure 5: Azimuthal distribution of the radiation intensity (black solid line)
and contributions from the charge (green solid line), the magnetic moment
(cyan dashed line), and the electric quadrupole moment (black, red and blue
dashed lines) with the following parameters: $\beta=0.5$,
$\lambda=2\text{mm}$, $d=1\text{mm}$, $a=d/2$. The number of strips $N$ is
maximal in each subfigure for the given OAM, the impact parameter $h=0.13$mm
and $\bar{\rho}_{0}=0.3\,\mu\text{m}$.
### 4.3 Dynamical enhancement of the quadrupole contribution
Equations (85), (86) show that the $dW_{eQ_{2}}$ contribution grows cubically with the number of strips, compared to the linear growth of $dW_{ee}$. This is due to constructive interference of the quadrupole radiation from each strip, taking into account that the quadrupole moment increases (quadratically) because of the spreading. Recall that the maximal grating length (interaction length) and the number of strips are limited by (7) to guarantee that the mean wave-packet radius $\bar{\rho}(t)$ stays smaller than the impact parameter $h$.
A large impact parameter $h$ should be chosen to obtain a large $N_{\text{max}}$; we therefore consider $\bar{\rho}_{0}/h\ll 1$, so that approximately $N_{\text{max}}^{3}\propto h^{3}$. At the same time, the radiation intensity decreases exponentially at large impact parameters.
Taking the maximal number of strips, the dependence of the charge-quadrupole
interference term $dW_{eQ_{2}}$ on the impact parameter reads
$dW_{eQ_{2}}\left[\rho,\ell,h,N_{\text{max}}(\rho,\ell,h)\right]\propto\frac{\bar{\rho}_{0}}{\ell}\left(\frac{d}{\lambda_{c}h_{\text{eff}}^{5}}\right)h^{3}\,{\rm
e}^{-\frac{2h}{h_{\text{eff}}}}\,.$ (94)
The maximum of this contribution defines the optimal impact parameter
$h_{\text{opt}}\approx\frac{3}{2}\,h_{\text{eff}}\sim h_{\text{eff}}.$ (95)
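The optimal impact parameter (95) follows from maximizing the $h$-dependent factors $h^{3}\,{\rm e}^{-2h/h_{\text{eff}}}$ in (94). A minimal Python sketch locating the maximum numerically (in units of $h_{\text{eff}}$):

```python
import math

def intensity(h, h_eff=1.0):
    # h-dependent factors of dW_eQ2 from Eq. (94) at fixed angles.
    return h**3 * math.exp(-2.0 * h / h_eff)

# Scan h on a fine grid and locate the maximum.
grid = [i * 1e-5 for i in range(1, 400001)]  # h from 0 to 4 h_eff
h_opt = max(grid, key=intensity)
assert abs(h_opt - 1.5) < 1e-4  # Eq. (95): h_opt = (3/2) h_eff
```

The same result follows analytically: setting $d/dh\,[h^{3}e^{-2h/h_{\text{eff}}}]=0$ gives $3h^{2}=2h^{3}/h_{\text{eff}}$, i.e. $h=\frac{3}{2}h_{\text{eff}}$.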
Note that $dW_{eQ_{2}}$ is proportional to $\bar{\rho}_{0}/\ell$ when $N$
takes its maximal value (7). That is, a wide packet with a small OAM (recall
the corresponding lower bound (32)) can be chosen to simplify experimental
studies of the S-P radiation from the vortex electrons.
In figure 6 the behavior of the radiation intensity is shown for the optimal value of $h$ and $N_{\text{max}}(h)=700$. Two OAM values, $\ell=150$ and $\ell=100$, were considered, with the maximal number of strips calculated for the larger OAM. Since the diffraction time is inversely proportional to $\ell$, a grating optimal for a wave packet with $\ell=150$ can also be used for $\ell<150$; in that case a factor $\ell^{2}/150^{2}$ reduces the charge-quadrupole radiation intensity $dW_{eQ_{2}}$.
The non-linear dependence of the S-P radiation on the number of strips, or on the length of the grating, due to the increasing quadrupole moment is clearly seen in figure 6. If observed experimentally, such a non-linear dependence would serve as a hallmark of a non-paraxial regime of electromagnetic radiation, in which an electron packet emits photons as if its charge were smeared over the whole coherence length, somewhat similar to a multi-particle beam but with a total charge $e$. Another possibility to detect this effect is to change the OAM for the same diffraction grating and the same scattering geometry and to study the corresponding modifications of the radiation intensity.
Figure 6: Radiation intensity in the vertical plane for various numbers of strips $N<700$. The number of strips $N=700$ is maximal given the impact parameter $h=3h_{\text{eff}}/2=27\,\mu\text{m}$, velocity $\beta=0.5$, period $d=0.1\,$mm and the initial mean radius $\bar{\rho}_{0}=300\,$nm.
An additional possibility to detect the charge-quadrupole contribution follows from the polar dependence in (85). In most cases, the total radiation intensity is approximately $dW=dW_{e}+dW_{eQ_{2}}$, because the other corrections are at least two orders of magnitude smaller than $dW_{eQ_{2}}$. For large $N$ and $\Phi=\frac{\pi}{2}$, the maximum of the radiation intensity with respect to the polar angle $\Theta$ can be found by maximizing the following expression:
$dW\propto{\rm e}^{\frac{-2h}{h_{\text{eff}}}}\left(h_{\text{eff}}^{-3}+\delta
h_{\text{eff}}^{-5}\right)\,,\quad\delta=\frac{\eta_{Q_{2}}}{6}\frac{d^{2}}{\gamma^{2}\beta^{2}}\,.$
(96)
To find the polar angle of maximal intensity for the charge radiation, $\Theta_{e}$, we first set $\delta=0$, which gives $h_{\text{eff}}(\Theta_{e})=2h/3$ from a linear equation. A non-zero $\delta$ leads to a cubic equation. Using Cardano's formula and assuming $\delta$ to be small, we calculate the first-order correction
$h_{\text{eff}}(\Theta)=h_{\text{eff}}(\Theta_{e})-\frac{\delta}{h}\,.$ (97)
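The first-order correction (97) can be checked by maximizing (96) numerically over $h_{\text{eff}}$ for a small $\delta$: the maximum indeed shifts from the charge-only value $2h/3$ by $-\delta/h$. A minimal Python sketch in dimensionless units with $h=1$ (the value $\delta=10^{-4}$ is an illustrative choice, not taken from the text):

```python
import math

H = 1.0       # impact parameter (sets the length unit)
DELTA = 1e-4  # illustrative small quadrupole correction

def dW(x):
    # Eq. (96) as a function of the effective impact parameter x = h_eff.
    return math.exp(-2.0 * H / x) * (x**-3 + DELTA * x**-5)

# Scan a fine grid around the charge-only maximum x = 2H/3.
grid = [2.0 * H / 3.0 + i * 1e-7 for i in range(-20000, 20001)]
x_max = max(grid, key=dW)

# Eq. (97): the maximum shifts by -DELTA/H from the charge-only value 2H/3.
assert abs(x_max - (2.0 * H / 3.0 - DELTA / H)) < 1e-5
```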
Next, we write effective impact parameters explicitly in terms of the polar
angle
$\frac{\beta\gamma
d}{2\pi}(\cos(\Theta_{e})-\cos(\Theta_{e}+\delta\Theta))=-\frac{\delta}{h}$
and for small shifts of the maximum, $\cos\delta\Theta\approx 1$, $\sin\delta\Theta\approx\delta\Theta$, we obtain the estimate
$\delta\Theta=-\eta_{Q_{2}}\frac{\pi d}{3h\gamma^{3}\beta^{3}\sin(\Theta_{e})}\,.$ (98)
In figure 7 we plot an example of this effect. The approximate shifts $\delta\Theta(N=800)=-4.64^{\circ}$ and $\delta\Theta(N=400)=-1.16^{\circ}$ from (98) are in good agreement with the numerically calculated values $\delta\Theta(N=800)=-4.48^{\circ}$ and $\delta\Theta(N=400)=-1.15^{\circ}$.
Figure 7: Polar dependence of the radiation intensity and the shift of the maximum due to the contribution from $dW_{eQ_{2}}$. The number of strips $N=800$ is maximal given the impact parameter $h=h_{\text{eff}}=22\,\mu\text{m}$, velocity $\beta=0.7$, period $d=0.1\,$mm, OAM $\ell=10$ and the initial mean radius $\bar{\rho}_{0}=20\,$nm.
## 5 Conclusions
We calculated the Smith-Purcell radiation generated by a vortex electron passing an ideally conducting diffraction grating, in the wavelength range from the THz to the optical. The state of the electron is given by a Laguerre-Gaussian wave packet with orbital angular momentum. The corresponding intrinsic magnetic dipole moment and electric quadrupole moment were taken into account; the latter increases due to the "spreading" of the quantum wave packet with time.
The radiation of the magnetic moment has an azimuthal asymmetry, as for transition radiation [2]. The asymmetry increases with the orbital angular momentum of the wave packet, and for the currently achieved OAM values, $\ell\sim 10^{3}$, it does not exceed one percent.
In contrast to the radiation of the magnetic moment, the azimuthal asymmetry is absent in the radiation of the electric quadrupole moment. However, as the electron moves near the grating, the spreading of the packet leads to an increase of the quadrupole moment, which can also be seen in the radiation. Although the quadrupole radiation is small as long as the multipole expansion stays legitimate (almost always in practice), it leads to an interesting effect: while the radiation intensity from a charge grows linearly with the number of grating periods, the quadrupole contribution grows cubically, which resembles the coherence effect (superradiance) from a classical beam of many particles. In our problem, however, this is a purely quantum effect of the spatial coherence of a vortex packet. For relativistic particles, the spreading of the quadrupole moment can be neglected, but for non-relativistic and even moderately relativistic electrons (with kinetic energies of 100-300 keV) this effect can lead both to a change in the angular distribution and to an increase in the total radiation loss.
Thus we have shown that the effects of spatial coherence of wave packets with intrinsic angular momentum can play a noticeable role at non-relativistic energies of the radiating particles. Moreover, our calculations show that experimental observations of the quadrupole contribution to the S-P radiation can be made with a moderate value of OAM, $\ell\sim 100$. From Eq. (94) it follows that to study the dynamical effect coming from the wave-packet spreading, it is better to work with small OAM, choose the optimal impact parameter (95) and use the maximal grating length (7). For instance, the $4^{\circ}$ shift of the polar angle of the maximum of Smith-Purcell radiation ($\lambda=0.14\,$mm) presented in figure 7 can be achieved with $\ell=10$ and $\bar{\rho}_{0}=20\,$nm. Vortex electrons with such parameters can be generated using the method realized by J. Verbeeck et al. ($|\ell|=3$, $\bar{\rho}_{0}\sim 0.5-10\,$nm; see Fig. 2 in [41]). The resulting corrections can be detected with a standard experimental setup as in [42], upgraded to work with vortex wave packets. Since the same shift should appear for an LG wave packet with the re-scaled parameters $\tilde{\ell}$, $\bar{\rho}_{0}=20(\tilde{\ell}/\ell)\,$nm, a possible experiment can choose between focusing and vorticity.
## Acknowledgements
We are grateful to A.A. Tishchenko and P.O. Kazinski for fruitful discussions.
This work is supported by the Russian Science Foundation (Project No.
17-72-20013).
## 6 Appendix
### 6.1 Electromagnetic fields of LG wave packet in the rest frame
Consider the vortex electron described by the LG packet (4) with $n=0$. Its
electromagnetic fields represent a sum of those of the charge $e$, of the
magnetic moment ${\bm{\mu}}$, and of the electric quadrupole moment
$Q_{\alpha\beta}$, (11). The fields in cylindrical coordinates in the rest
frame were calculated in [18]. In our problem, we prefer to use the Cartesian
coordinates:
$\displaystyle
E_{x}=\frac{x}{r^{3}}\left[1+\frac{1}{4}\left(\frac{3\bar{\rho}_{0}^{2}}{r^{2}}\left(1-5\frac{z^{2}}{r^{2}}\right)+\frac{l^{2}\lambda_{C}^{2}}{\bar{\rho}_{0}^{2}}\left(\frac{3t^{2}}{r^{2}}\left(1-5\frac{z^{2}}{r^{2}}\right)+3\frac{z^{2}}{r^{2}}-1\right)\right)\right],$
(99) $\displaystyle
E_{y}=\frac{y}{r^{3}}\left[1+\frac{1}{4}\left(\frac{3\bar{\rho}_{0}^{2}}{r^{2}}\left(1-5\frac{z^{2}}{r^{2}}\right)+\frac{l^{2}\lambda_{C}^{2}}{\bar{\rho}_{0}^{2}}\left(\frac{3t^{2}}{r^{2}}\left(1-5\frac{z^{2}}{r^{2}}\right)+3\frac{z^{2}}{r^{2}}-1\right)\right)\right],$
(100) $\displaystyle
E_{z}=\frac{z}{r^{3}}\left[1+\frac{1}{4}\left(\frac{3\bar{\rho}_{0}^{2}}{r^{2}}\left(3-5\frac{z^{2}}{r^{2}}\right)+\frac{l^{2}\lambda_{C}^{2}}{\bar{\rho}_{0}^{2}}\left(\frac{3t^{2}}{r^{2}}\left(3-5\frac{z^{2}}{r^{2}}\right)+3\frac{z^{2}}{r^{2}}-1\right)\right)\right],$
(101) $\displaystyle
H_{x}=\frac{z}{r^{5}}\left(3x\frac{l}{2m}-\frac{3l^{2}\lambda_{C}^{2}}{2\bar{\rho}_{0}^{2}}ty\right),$
(102) $\displaystyle
H_{y}=\frac{z}{r^{5}}\left(3y\frac{l}{2m}-\frac{3l^{2}\lambda_{C}^{2}}{2\bar{\rho}_{0}^{2}}tx\right),$
(103) $\displaystyle
H_{z}=\frac{l}{2m}\left(3\frac{z^{2}}{r^{2}}-1\right)\frac{1}{r^{3}}$ (104)
We now transform these fields to the laboratory frame in which the particle
moves along the $z$ axis with a velocity $\langle u\rangle\equiv\beta$
according to the law
$\langle z\rangle=\beta t$
Applying Lorentz transformations we get electric fields in the laboratory
frame
$\displaystyle\displaystyle E^{(\text{lab})}_{x}=\gamma(E_{x}+\beta H_{y}),$
(105) $\displaystyle\displaystyle E^{(\text{lab})}_{y}=\gamma(E_{y}-\beta
H_{x})\,,\ E^{(\text{lab})}_{z}=E_{z}$ (106)
Simultaneously, we need to transform the coordinates and the time as follows (note that Ref. [18] treats the fields at a distant point only, which simplifies the Lorentz transformations of angular variables; here we use the general formulas):
$\displaystyle{\bm{\rho}}=\\{x,y\\}=\text{inv},\ z\rightarrow\gamma(z-\beta
t)=:R_{z},\quad t\rightarrow\gamma(t-\beta z)=:T_{z},$ (107) $\displaystyle
r^{2}\rightarrow\left(\rho^{2}+\gamma^{2}(z-\beta
t)^{2}\right)\,,\quad\gamma=(1-\beta^{2})^{-1/2}$ (108)
We omit the magnetic fields, since only the electric field is needed to calculate the surface current below.
### 6.2 Electromagnetic fields of LG wave packet in the laboratory frame
Let us introduce the vector ${\bm{R}}=\\{{\bm{\rho}},\gamma(z-\beta t)\\}$ in the laboratory frame. The components of the electric field in this frame read
$\displaystyle\displaystyle
E_{x}({\bm{r}},t)=\gamma\frac{x}{R^{3}}\Bigg{(}1+\frac{3}{4}\frac{\bar{\rho}_{0}^{2}}{R^{2}}\left(1-5\frac{R_{z}^{2}}{R^{2}}\right)+\frac{1}{4}\ell^{2}\left(\frac{\lambda_{c}}{\bar{\rho}_{0}}\right)^{2}\Big{[}3\frac{T_{z}^{2}}{R^{2}}\left(1-5\frac{R_{z}^{2}}{R^{2}}\right)+$
(109)
$\displaystyle\displaystyle+3\frac{R_{z}^{2}}{R^{2}}-6\beta\frac{R_{z}T_{z}}{R^{2}}-1\Big{]}\Bigg{)}+\frac{\ell}{2m}3\beta\gamma
y\frac{R_{z}}{R^{5}},$ (110) $\displaystyle\displaystyle
E_{y}({\bm{r}},t)=\gamma\frac{y}{R^{3}}\Bigg{(}1+\frac{3}{4}\frac{\bar{\rho}_{0}^{2}}{R^{2}}\left(1-5\frac{R_{z}^{2}}{R^{2}}\right)+\frac{1}{4}\ell^{2}\left(\frac{\lambda_{c}}{\bar{\rho}_{0}}\right)^{2}\Big{[}3\frac{T_{z}^{2}}{R^{2}}\left(1-5\frac{R_{z}^{2}}{R^{2}}\right)+$
(111)
$\displaystyle\displaystyle+3\frac{R_{z}^{2}}{R^{2}}-6\beta\frac{R_{z}T_{z}}{R^{2}}-1\Big{]}\Bigg{)}-\frac{\ell}{2m}3\beta\gamma
x\frac{R_{z}}{R^{5}},$ (112) $\displaystyle\displaystyle
E_{z}({\bm{r}},t)=\frac{R_{z}}{R^{3}}\Bigg{(}1+\frac{3}{4}\frac{\bar{\rho}_{0}^{2}}{R^{2}}\left(3-5\frac{R_{z}^{2}}{R^{2}}\right)+\frac{1}{4}\ell^{2}\left(\frac{\lambda_{c}}{\bar{\rho}_{0}}\right)^{2}\Big{[}3\frac{T_{z}^{2}}{R^{2}}\left(3-5\frac{R_{z}^{2}}{R^{2}}\right)+$
(113) $\displaystyle\displaystyle+3\frac{R_{z}^{2}}{R^{2}}-1\Big{]}\Bigg{)}.$
(114)
## References
* [1] K. Y. Bliokh, I. P. Ivanov, G. Guzzinati, L. Clark, R. Van Boxem, A. Béché, R. Juchtmans, M. A Alonso, P. Schattschneider, F. Nori, et al. Theory and applications of free-electron vortex states. Physics Reports, 690:1–70, 2017.
* [2] I. P. Ivanov and D. V. Karlovets. Detecting transition radiation from a magnetic moment. Physical review letters, 110(26):264801, 2013.
* [3] I. P. Ivanov, V. G. Serbo, and V. A. Zaytsev. Quantum calculation of the vavilov-cherenkov radiation by twisted electrons. Physical Review A, 93(5):053825, 2016.
* [4] S. J. Smith and E. M. Purcell. Visible light from localized surface charges moving across a grating. Physical Review, 92(4):1069, 1953.
* [5] J. M. Wachtel. Free-electron lasers using the smith-purcell effect. Journal of Applied Physics, 50(1):49–56, 1979.
* [6] A. N. Aleinik, A. S. Aryshev, E. A. Bogomazova, B. N. Kalinin, G. A. Naumenko, A. P. Potylitsyn, G. A. Saruev, and A. F. Sharafutdinov. Stimulated Smith-Purcell radiation. JETP Lett., 79(7):320–323, 2004.
* [7] M. Wang, P. Liu, G. Ge, and R. Dong. Free electron laser based on the Smith–Purcell radiation. Optics & Laser Technology, 39(6):1254–1257, 2007.
* [8] H. Bei, D. D. Dai, and Z. M. Dai. Simulation of Smith–Purcell radiation from compact terahertz source. High Power Laser Part. Beams, 20(2067), 2008.
* [9] G. Doucas, M. F. Kimmitt, J. H. Brownell, S. R. Trotz, and J. E. Walsh. A new type of high-resolution position sensor for ultra-relativistic beams. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 474(1):10–18, 2001.
* [10] G. Kube, H. Backe, W. Lauth, and H. Schoepe. Smith-Purcell radiation in view of particle beam diagnostic. In Proc. DIPAC, volume 1, 2003.
* [11] G. Kube. Radiation sources and their application for beam profile diagnostics. Proceedings of IBIC-2014, Monterey, California, USA, 2014.
* [12] O. I. Tolstikhin and T. Morishita. Strong-field ionization, rescattering, and target structure imaging with vortex electrons. Physical Review A, 99(6):063415, 2019.
* [13] N. Talebi. Schrödinger electrons interacting with optical gratings: quantum mechanical study of the inverse Smith–Purcell effect. New Journal of Physics, 18(12):123006, 2016.
* [14] D. V. Karlovets and A. M. Pupasov-Maksimov. Nonlinear quantum effects in electromagnetic radiation of a vortex electron. Physical Review A, 103(1):012214, 2021.
* [15] D. V. Karlovets and A. P. Potylitsyn. Generalized surface current method in the macroscopic theory of diffraction radiation. Physics Letters A, 373(22):1988–1996, 2009.
* [16] J. H. Brownell, J. Walsh, and G. Doucas. Spontaneous Smith-Purcell radiation described through induced surface currents. Physical Review E, 57(1):1075, 1998.
* [17] A. P. Potylitsyn, P. V. Karataev, and G. A. Naumenko. Resonant diffraction radiation from an ultrarelativistic particle moving close to a tilted grating. Physical Review E, 61(6):7039, 2000.
* [18] D. V. Karlovets. Dynamical enhancement of nonparaxial effects in the electromagnetic field of a vortex electron. Physical Review A, 99(4):043824, 2019.
* [19] E. Mafakheri, A. H. Tavabi, P.-H. Lu, R. Balboni, F. Venturi, C. Menozzi, G. C. Gazzadi, S. Frabboni, A. Sit, R. E. Dunin-Borkowski, et al. Realization of electron vortices with large orbital angular momentum using miniature holograms fabricated by electron beam lithography. Applied Physics Letters, 110(9):093113, 2017.
* [20] B. J. McMorran, A. Agrawal, I. M. Anderson, A. A. Herzing, H. J. Lezec, J. J. McClelland, and J. Unguris. Electron vortex beams with high quanta of orbital angular momentum. Science, 331(6014):192–195, 2011.
* [21] X. Zhong, J. Lin, S. S. Kao, Z. Liao, J. Zhu, X. Huang, R. Zhang, and H. L. Xin. Atomistic defect makes a phase plate for the generation and high-angular splitting of electron vortex beams. ACS nano, 13(4):3964–3970, 2019.
* [22] X. Zhong, S. S. Kao, J. Lin, Z. Liao, J. Zhu, X. Huang, R. Zhang, and H. L. Xin. High-angular splitting electron vortex beams generated by topological defects. Microscopy and Microanalysis, 25(S2):88–89, 2019.
* [23] D. V. Karlovets. Relativistic vortex electrons: Paraxial versus nonparaxial regimes. Physical Review A, 98(1):012137, 2018.
* [24] M. J. Padgett, F. M. Miatto, M. P. J. Lavery, A. Zeilinger, and R. W. Boyd. Divergence of an orbital-angular-momentum-carrying beam upon propagation. New Journal of Physics, 17(2):023011, 2015.
* [25] D. Karlovets and A. Zhevlakov. Intrinsic multipole moments of non-gaussian wave packets. Physical Review A, 99(2):022103, 2019.
* [26] A. J. Silenko, P. Zhang, and L. Zou. Electric quadrupole moment and the tensor magnetic polarizability of twisted electrons and a potential for their measurements. Physical Review Letters, 122(6):063201, 2019.
* [27] H. L. Andrews, C. H. Boulware, C. A. Brau, and J. D. Jarvis. Dispersion and attenuation in a Smith-Purcell free electron laser. Physical Review Special Topics-Accelerators and Beams, 8(5):050703, 2005.
* [28] V. B. Berestetskii, E. M. Lifshitz, and L. P. Pitaevskii. Quantum Electrodynamics: Volume 4, volume 4. Butterworth-Heinemann, 1982.
* [29] V. G. Bagrov, V. V. Belov, and A. Yu. Trifonov. Theory of spontaneous radiation by electrons in a trajectory-coherent approximation. Journal of Physics A: Mathematical and General, 26(22):6431, 1993.
* [30] A. I. Akhiezer and N. F. Shulga. A theory of relativistic particle radiation in the quasi-classical approximation. Physics Letters A, 144(6-7):415–418, 1990.
* [31] A. I. Akhiezer and N. F. Shulga. Semiclassical theory of high-energy particle radiation in external fields. Physics reports, 234(6):297–365, 1993.
* [32] V. N. Baier and V. M. Katkov. Processes involved in the motion of high energy particles in a magnetic field. Sov. Phys. JETP, 26:854, 1968.
* [33] V. A. Bordovitsyn, I. M. Ternov, and V. G. Bagrov. Spin light. Physics-Uspekhi, 38(9):1037, 1995.
* [34] B. Cho, T. Ichimura, R. Shimizu, and C. Oshima. Quantitative evaluation of spatial coherence of the electron beam from low temperature field emitters. Physical Review Letters, 92(24):246103, 2004.
* [35] D. Ehberger, J. Hammer, M. Eisele, M. Krüger, J. Noe, A. Högele, and P. Hommelhoff. Highly coherent electron beam from a laser-triggered tungsten needle tip. Physical Review Letters, 114(22):227601, 2015.
* [36] D. V. Karlovets. On the theory of polarization radiation in media with sharp boundaries. Journal of Experimental and Theoretical Physics, 113(1):27, 2011\.
* [37] V. Blackmore, G. Doucas, C. Perry, B. Ottewell, M. F. Kimmitt, M. Woods, S. Molloy, and R. Arnold. First measurements of the longitudinal bunch profile of a 28.5 gev beam using coherent Smith-Purcell radiation. Physical Review Special Topics-Accelerators and Beams, 12(3):032803, 2009.
* [38] A. M. Pupasov-Maksimov. Smith-Purcell radiation by a vortex electron. https://github.com/pupasov/vortex-sp, 2019.
* [39] I. P. Ivanov and D. V. Karlovets. Polarization radiation of vortex electrons with large orbital angular momentum. Physical Review A, 88(4):043840, 2013.
* [40] O. Haeberle, P. Rullhusen, J.-M. Salome, and N. Maene. Smith-Purcell radiation from electrons moving parallel to a gratingat oblique incidence to the rulings. Physical Review E, 55(4):4675, 1997.
* [41] J. Verbeeck , P. Schattschneider, S. Lazar, M. Stöger-Pollach, M., S. Löffler, A. Steiger-Thirsfeld and G. Van Tendeloo Atomic scale electron vortices for nanoresearch Applied Physics Letters, 99(20):203109, 2011.
* [42] R. Remez, A. Karnieli, S. Trajtenberg-Mills, N. Shapira, I. Kaminer, Y. Lereah, and A. Arie. Observing the quantum wave nature of free electrons through spontaneous emission. Physical Review Letters, 123(6):060401, 2019.
|
††thanks: These authors contributed equally to this work.
††thanks: Author to whom correspondence should be addressed: <EMAIL_ADDRESS>
# Demonstration and modelling of time-bin entangled photons from a quantum dot
in a nanowire
Philipp Aumann (Institute for Theoretical Physics, University of Innsbruck, Innsbruck, Austria)
Maximilian Prilmüller (Institute for Experimental Physics, University of Innsbruck, Innsbruck, Austria)
Florian Kappe (Institute for Experimental Physics, University of Innsbruck, Innsbruck, Austria)
Laurin Ostermann (Institute for Theoretical Physics, University of Innsbruck, Innsbruck, Austria)
Dan Dalacu (National Research Council of Canada, Ottawa, Canada)
Philip J. Poole (National Research Council of Canada, Ottawa, Canada)
Helmut Ritsch (Institute for Theoretical Physics, University of Innsbruck, Innsbruck, Austria)
Wolfgang Lechner (Institute for Theoretical Physics, University of Innsbruck, Innsbruck, Austria; Parity Quantum Computing GmbH, Innsbruck, Austria)
Gregor Weihs (Institute for Experimental Physics, University of Innsbruck, Innsbruck, Austria)
###### Abstract
Resonant excitation of the biexciton state in an InAsP quantum dot by a phase-
coherent pair of picosecond pulses allows preparing time-bin entangled pairs
of photons via the biexciton-exciton cascade. We show that this scheme can be
implemented for a dot embedded in an InP nanowire. The underlying physical
mechanisms can be represented and quantitatively analyzed by an effective
three-level open system master equation. Simulation parameters including decay
and intensity dependent dephasing rates are extracted from experimental data,
which in turn let us predict the resulting entanglement and optimal operating
conditions.
In a future quantum world long-distance quantum communication will allow users
to communicate in perfect privacy and it will connect quantum computers for
distributed and blind computation tasks. Quantum repeaters [1] will be
necessary in order to establish the required long-distance entanglement and
for building even the simplest quantum repeaters we will need reliable, high-
rate and high-fidelity sources of entangled photon pairs besides quantum
memories and local quantum processing. The emitted photon pairs must propagate
with low loss and low decoherence in order to cover as much distance as
possible. While the propagation loss in optical fibers is limited by intrinsic
material properties, the decoherence can be minimized by choosing a suitable
quantum information encoding [2]. Time-bin entanglement [3, 4] has emerged as
the optimal encoding for optical fiber quantum communication, because it is
immune to residual fiber birefringence as well as thermal and mechanical
fluctuations up to very high frequencies.
Figure 1: Scheme of the Time-Bin Entanglement Setup. Three phase-stable
interferometers facilitate the generation and projection of time-bin entangled
states. The delay of the pump interferometer, $\Delta t$, is chosen to be much
longer than the coherence time of the emitted photons to rule out photonic
first-order interference. The phases of the three interferometers
$\varphi_{\mathrm{P}}$, $\varphi_{b}$ and $\varphi_{x}$ are controlled via
phase plates. Each pump pulse excites the system with very low probability in
order to ensure that on average maximally one photon pair is created. The
interference of these two time bins can be observed when looking at the
coincident photon detections between outputs of the different analysis
interferometers. Inset: Quantum dot as a three level system (without dark
states). Green arrows indicate the direct population of the biexciton state
($\ket{b}$) via a virtual level (gray line). The single photon transition is
detuned from the exciton state ($\ket{x}$). Relaxation into the ground state
($\ket{g}$) results in the successive emission of two photons at different
wavelengths.
So far all sources of time-bin entanglement have been probabilistic, even the
ones that used single quantum dots[5, 6]. Most work on quantum dots as
entanglement sources has concentrated on maximizing polarization entanglement,
for which elaborate growth and tuning techniques have been developed [7].
Polarization entanglement can be converted to time-bin entanglement either
probabilistically [6] or by using ultra-high-speed optical modulators, which,
however, are very lossy and thus preclude a near-deterministic source.
Therefore we consider the direct creation of single time-bin entangled
photon pairs from semiconductor quantum dots an important goal. The only known
way to achieve this involves using at least three energy levels in the quantum
dot, one of which must be metastable [8]. While research into deterministic
time-bin entanglement from quantum dots is under way in our laboratory, in
this letter, as an intermediate step, we present the realization of
probabilistic time-bin entanglement from a quantum dot in an optimized
photonic structure.
Figure 2: Rabi Oscillations of a Quantum Dot Embedded in a Nanowire and Fit of
the Emission Probabilities to the Photon Counts (a) The solid lines correspond
to emission collected at the exciton wavelength, while the dashed lines
correspond to biexciton emission, respectively. The horizontal axis represents
the average laser power scaled such that the first maximum of the observed
oscillations occurs at $\pi$. Pink: $85\text{\,}\mathrm{ps}$ FWHM linearly
polarized pump. Brown: $20\text{\,}\mathrm{ps}$ FWHM linearly polarized pump.
Orange: $20\text{\,}\mathrm{ps}$ FWHM circular polarized pump, scaled to the
biexciton excitation angle of the brown curve. (b) We fitted the emission
probabilities predicted by the theoretical model to biexciton and exciton
emission counts for a pulse length of $85$ ps FWHM. The resulting parameter
values can be found in section S3 of the supplemental material. The dashed
line indicates the laser power corresponding to the $\pi/15$ pulse used for the
time-bin measurement. The measurement error is estimated by the square root of
the number of the counts resulting in error bars smaller than the symbols.
In the past two decades a lot of work has gone into improving the out-coupling
efficiencies of photons from quantum dots [9] e.g. via the implementation of
circular Bragg gratings [10], enhancing emission into a collectable mode.
Alternatively, realizing quantum dots embedded in tapered nanowires turned out
to be a promising platform for coherent photon emission [11, 12, 13, 14]. The
tapered part of the nanowire acts as an antenna that matches the impedance of
the nanowire waveguide mode to the vacuum and thus achieves efficient out-
coupling [15].
In the following, we report the generation of time-bin entangled photon pairs
in indium arsenide phosphide (InAsP) quantum dots embedded in indium phosphide
(InP) nanowires via a resonant two-photon excitation [16, 17] (see figure 1).
Furthermore, we present an extension of our theoretical model from previous
work [18] that includes the density matrix of the time-bin entangled photons,
which allows us to suggest optimal parameter values.
Experiment – The core of our setup is constituted by a quantum dot embedded in
a nanowire. Our samples were manufactured utilizing a selective-area vapor-
liquid-solid epitaxy which produced InAsP quantum dots embedded in defect-free
wurtzite InP nanowires [19]. A single electron-hole pair trapped in the
quantum dot is referred to as an exciton ($\ket{x}$), while the confinement of
two pairs is called a biexciton ($\ket{b}$). A recombination of a single pair
leads to the emission of a photon at a characteristic wavelength, as depicted
in the inset of figure 1. The biexciton-exciton photon cascade is used in
order to operate the quantum dot as a source of correlated photon pairs. The
emission spectrum of our quantum dot can be found in figure S1 in the
supplementary material.
Figure 3: Procedure for Simulating the Photonic Density Matrix from the
Quantum Dot Dynamics. After setting our model parameters to fit the
experimental Rabi data, we simulate the dynamics of the quantum dot’s density
matrix, obtaining the photon coincidence counts via calculating the two-photon
detection probabilities and thus reconstruct the photonic density matrix by
means of state tomography. More details on the procedure to estimate the
density matrix of the photons $\rho_{\text{photons}}$ from our theoretical
model are given in section S2 of the supplementary material.
The basic principle of the time-bin encoding scheme relies on the ability to
create a coherent superposition of two well-defined excitation processes. Its
simplest realisation consists of addressing the emitter with two pump pulses of
very low excitation probability each and postselecting on biexciton-exciton
photon pair detection events. The two excitation pulses are created in an
unbalanced Mach-Zehnder interferometer and denoted by e (early) and l (late).
The phase between the two pulses $\Delta_{\phi}$ can be modified via a phase
plate and determines the phase of the entangled state. Denoting biexciton and
exciton photons by $b$ and $x$ respectively, the created state can be written
as:
$\displaystyle\ket{\Phi}$
$\displaystyle=\frac{1}{\sqrt{2}}\Bigl{(}\ket{e}_{b}\ket{e}_{x}+e^{i\Delta_{\phi}}\ket{l}_{b}\ket{l}_{x}\Bigr{)}$
$\displaystyle=:\frac{1}{\sqrt{2}}\left(\ket{ee}+e^{i\Delta_{\phi}}\ket{ll}\right).$
(1)
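As an illustrative aside (not part of the original analysis), the state of equation 1 can be written down numerically in the basis $\{\ket{ee},\ket{el},\ket{le},\ket{ll}\}$; the pump phase $\Delta_{\phi}$ enters only through the off-diagonal coherence between $\ket{ee}$ and $\ket{ll}$:

```python
import numpy as np

def time_bin_state(delta_phi):
    """|Phi> = (|ee> + e^{i*delta_phi}|ll>)/sqrt(2) in the
    basis ordering {|ee>, |el>, |le>, |ll>}."""
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1 / np.sqrt(2)                        # |ee>
    psi[3] = np.exp(1j * delta_phi) / np.sqrt(2)   # |ll>
    return psi

psi = time_bin_state(np.pi / 4)
rho = np.outer(psi, psi.conj())                    # density matrix |Phi><Phi|
assert np.isclose(np.trace(rho).real, 1.0)
# the coherence <ee|rho|ll> carries the pump phase:
assert np.isclose(rho[0, 3], 0.5 * np.exp(-1j * np.pi / 4))
```

The diagonal of `rho` is phase-independent; only a phase-sensitive (interferometric) measurement can reveal $\Delta_{\phi}$, which is why the analysis interferometers below are needed.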
Using another two unbalanced Mach-Zehnder interferometers that are phase
stable with respect to the pump interferometer we carry out projective
measurements on the created entangled state. In order to perform quantum state
tomography, we analyze the result of $16$ different pairs of phase settings
and use a maximum likelihood approach [20, 21]. For collecting the $16$
different projections necessary for the quantum state tomography we employ
four different phase settings in the analysis interferometers each and detect
photons at each of the four output ports. We collect time tags of the detected
photons for $3600\text{\,}\mathrm{s}$ per phase setting and identify
coincident photon pairs by imposing a coincidence window of
$400\text{\,}\mathrm{p}\mathrm{s}$. The integration time was chosen such that
it would yield sufficient statistics for the maximum likelihood reconstruction
method [22].
For the generation of biexciton-exciton photon pairs, we employ resonant
pulsed two-photon excitation from $\ket{g}$ to $\ket{b}$ (see inset in figure
1). In order to calibrate and characterize the system, we observe Rabi
oscillations by measuring the photon counts as a function of the average laser
power as shown in figure 2. We see that it is critical to identify an
appropriate polarization as well as a sensible pulse duration. Choosing a
circular pump polarization violates optical selection rules and leads to
incoherent excitations rather than to a two-photon coherent coupling of the
ground and biexciton state. By comparing the oscillations resulting from a
linearly polarized pump and pulse lengths of $25$ ps and $85$ ps, we find a
significantly stronger coherence for the longer pulse. The similar slopes at
low excitation power of the biexciton and exciton emission probabilities for a
linearly polarized pump indicate the superior pair production efficiency of
this excitation scheme.
For the creation of time-bin entangled photons we thus use the optimized pulse
duration of $85$ ps [18], which substantially increases the excitation
coherence. We determine the energy of a $\pi/15$-pulse to be adequate: it
yields an excitation probability of about $7.5$ $\%$ per pulse, which keeps
the probability of emitting in both time bins below $0.6$ $\%$. Our
theoretical model (see below) underpins the feasibility of the chosen
parameters and provides the basis for even further improvements in future
work.
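The quoted numbers can be checked with one line of arithmetic: treating the early and late excitations as independent events with single-pulse probability $p \approx 7.5\,\%$, the probability of a pair being emitted in both time bins is $p^{2}$ (the independence of the two pulses is our simplifying assumption here):

```python
p = 0.075            # single-pulse excitation probability (pi/15 pulse)
p_both = p ** 2      # emission in both time bins, assuming independent pulses
assert p_both < 0.006        # consistent with the quoted bound of 0.6 %
# p_both = 0.005625, i.e. about 0.56 %
```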
Figure 4: Reconstructed Density Matrix of the Emitted Photons. Left: Real and
imaginary part of the reconstructed density matrix from the experiment
$\rho_{\mathrm{photons}}^{\mathrm{exp}}$. Right: Real and imaginary part of
the simulated density matrix $\rho_{\mathrm{photons}}^{\mathrm{sim}}$. The
agreement between the experimentally obtained and the simulated density matrix
is calculated by means of equation 12 yielding a value of around
$F_{\rho}\approx 0.96$.
Theoretical Model – We propose a quantum optical model in order to identify a
suitable set of parameter values to enhance the quality of entanglement of the
created photon pairs. This allows us to estimate the reconstructed density
matrix as shown in figure 4. Extending our earlier work [18], where we used a
model for the internal dynamics of the quantum dot, we include a procedure for
obtaining the photons’ density matrix from said quantum dot dynamics. Our
strategy is outlined in figure 3. The conceptual procedure introduced here is
not restricted to this particular experimental setup and thus can be seen as a
more fundamental framework for a general setting of time-bin entangled photons
from a quantum dot.
The pulsed laser driving field couples the ground state to the biexciton via
the ground state to exciton and exciton to biexciton transition dipoles. The
Hamiltonian in the effective interaction picture reads (for a derivation see
section S4 of the supplementary material)
$\displaystyle H=$
$\displaystyle(\Delta_{x}-\Delta_{b})\ket{x}\bra{x}-2\Delta_{b}\ket{b}\bra{b}$
(2)
$\displaystyle+\Omega(t)\Bigl{(}\ket{g}\bra{x}+\ket{x}\bra{b}+\mathrm{h.c.}\Bigr{)}.$
Here, $\Delta_{x}$ denotes the detuning from the exciton level to the laser
frequency, while $\Delta_{b}$ is the detuning from the biexciton level to the
two-photon transition, $\Omega(t)$ is the Rabi frequency featuring a Gaussian
time profile,
$\Omega(t)=\Omega_{0}\exp\left(-\frac{4\ln(2)(t-t_{0})^{2}}{\tau^{2}}\right),$
(3)
with amplitude $\Omega_{0}$, pulse duration (FWHM) $\tau$ and time offset
$t_{0}$.
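The excitation angle quoted above is set by the pulse area, i.e. the time integral of $\Omega(t)$; for the Gaussian profile of equation 3 this evaluates to $\Omega_{0}\,\tau\sqrt{\pi/(4\ln 2)}$. A quick numerical cross-check (the parameter values are illustrative, in the THz/ps units used throughout the text):

```python
import numpy as np

def rabi(t, omega0=0.05, tau=85.0, t0=0.0):
    """Gaussian Rabi profile of equation 3 (tau is the FWHM)."""
    return omega0 * np.exp(-4 * np.log(2) * (t - t0) ** 2 / tau ** 2)

# pulse area: integral of Omega(t) dt = Omega_0 * tau * sqrt(pi / (4 ln 2))
t = np.linspace(-500.0, 500.0, 20001)
vals = rabi(t)
dt = t[1] - t[0]
area_numeric = dt * (vals[:-1] + vals[1:]).sum() / 2   # trapezoidal rule
area_analytic = 0.05 * 85.0 * np.sqrt(np.pi / (4 * np.log(2)))
assert np.isclose(area_numeric, area_analytic, rtol=1e-6)
```

Note that for the two-photon $g \to b$ transition the effective excitation angle is not simply this bare pulse area; the check above only verifies the Gaussian integral itself.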
To simulate the dynamics we solve the master equation in Lindblad form
numerically, i.e.
$\dot{\rho}=i\left[\rho,H\right]+\frac{1}{2}\sum_{j=1}^{6}\left(2R_{j}\rho
R_{j}^{\dagger}-R_{j}^{\dagger}R_{j}\rho-\rho R_{j}^{\dagger}R_{j}\right).$
(4)
where $\rho=\rho_{\mathrm{dot}}(t)$ is the quantum dot density matrix. We
consider six dissipative channels associated with six different Lindblad
operators $R_{j}$, where
$\displaystyle R_{1}=$ $\displaystyle\sqrt{\gamma_{x}}\ \ket{g}\bra{x},$ (5)
$\displaystyle R_{2}=$ $\displaystyle\sqrt{\gamma_{b}}\ \ket{x}\bra{b},$ (6)
describe the radiative decay of the biexciton and exciton levels with rates
$\gamma_{b}$ and $\gamma_{x}$, respectively, while
$\displaystyle R_{3}=$
$\displaystyle\sqrt{\gamma^{\text{const}}_{xg}+\gamma_{xg}}\
(\ket{x}\bra{x}-\ket{g}\bra{g})$ (7) $\displaystyle R_{4}=$
$\displaystyle\sqrt{\gamma^{\text{const}}_{bx}+\gamma_{bx}}\
(\ket{b}\bra{b}-\ket{x}\bra{x})$ (8)
introduce dephasing. The rates
${\gamma_{bx}=\gamma^{I_{0}}_{bx}\Bigl{(}\tfrac{\Omega(t)}{\Omega_{S}}\Bigr{)}^{n}}$
and
${\gamma_{xg}=\gamma^{I_{0}}_{xg}\Bigl{(}\tfrac{\Omega(t)}{\Omega_{S}}\Bigr{)}^{n}}$
are comprised of their amplitudes $\gamma^{I_{0}}_{bx}$ and
$\gamma^{I_{0}}_{xg}$ as well as the scaled time-dependent Rabi frequency to
the $n$-th power. Throughout this work we consider $n=2$. This laser intensity
dependent dephasing can be explained by phonons coupling to the quantum dot
[23]. The scaling factor ${\Omega_{S}=1}$ THz renders the Rabi frequency
dimensionless and ensures the correct numerical values. The fit in figure 2
attributes only a minor role to the constant dephasing rates
$\gamma^{\text{const}}_{xg}$ and $\gamma^{\text{const}}_{bx}$.
In order to account for the decrease of photon counts for higher laser power
as depicted in figure 2, we introduce dark states modelling a laser power
dependent loss mechanism, as states outside the three-level approximation
become more prominent for higher laser powers. Moreover, this additional dark
state loss counteracts the increased exciton population from a single-photon
transition that appears at higher laser intensities, since the laser-dependent
dephasing broadens the spectral linewidth. For bookkeeping
purposes, we introduce two dark states $\ket{d_{x}}$ and $\ket{d_{b}}$, which
are populated by laser-dependent exciton and biexciton decay, although in
general a single dark state would suffice to produce the same effect. The
corresponding Lindblad operators are given by
$\displaystyle R_{5}=$ $\displaystyle\sqrt{\gamma_{xd}}\ \ket{d_{x}}\bra{x},$
(9) $\displaystyle R_{6}=$ $\displaystyle\sqrt{\gamma_{bd}}\
\ket{d_{b}}\bra{b},$ (10)
with laser intensity dependent decay rates
${\gamma_{xd}=\gamma^{I_{0}}_{xd}\Bigl{(}\tfrac{\Omega(t)}{\Omega_{S}}\Bigr{)}^{n}}$
and
${\gamma_{bd}=\gamma^{I_{0}}_{bd}\Bigl{(}\tfrac{\Omega(t)}{\Omega_{S}}\Bigr{)}^{n}}$,
decay amplitudes $\gamma^{I_{0}}_{xd}$ and $\gamma^{I_{0}}_{bd}$, as well as
the same power $n$ as the dephasing mechanism.
Exemplary dynamics of the quantum dot when driven by a laser pulse are
depicted in figure S2 and numerical values for the system parameters can be
found in table S2 and S3 of the supplementary material.
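To make equations 2–10 concrete, the sketch below integrates the master equation 4 for the five-level system (the three physical levels plus the two bookkeeping dark states). All parameter values here are purely illustrative stand-ins, not the fitted values from tables S2 and S3, and the constant dephasing contributions are dropped for brevity:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters in THz / ps units; NOT the fitted supplement values.
omega0, tau, t0 = 0.05, 85.0, 200.0
delta_x, delta_b = 0.0, 0.0            # detunings
g_x, g_b = 1e-3, 2e-3                  # radiative decay rates
g_xg, g_bx = 5e-4, 5e-4                # dephasing amplitudes gamma^{I0}
g_xd, g_bd = 2e-4, 2e-4                # dark-state decay amplitudes gamma^{I0}
omega_s, n = 1.0, 2                    # scaling Omega_S = 1 THz, power n = 2

def ket(i):
    v = np.zeros((5, 1))
    v[i] = 1.0
    return v

G, X, B, DX, DB = (ket(i) for i in range(5))   # |g>, |x>, |b>, |d_x>, |d_b>
op = lambda a, b: a @ b.T                      # |a><b| (real kets)

def omega(t):
    return omega0 * np.exp(-4 * np.log(2) * (t - t0) ** 2 / tau ** 2)

def hamiltonian(t):                            # equation 2
    Hd = (delta_x - delta_b) * op(X, X) - 2 * delta_b * op(B, B)
    Hc = omega(t) * (op(G, X) + op(X, B))
    return Hd + Hc + Hc.T

def lindblads(t):
    s = (omega(t) / omega_s) ** n              # laser-intensity dependence
    return [np.sqrt(g_x) * op(G, X),                       # R1: exciton decay
            np.sqrt(g_b) * op(X, B),                       # R2: biexciton decay
            np.sqrt(g_xg * s) * (op(X, X) - op(G, G)),     # R3: dephasing
            np.sqrt(g_bx * s) * (op(B, B) - op(X, X)),     # R4: dephasing
            np.sqrt(g_xd * s) * op(DX, X),                 # R5: dark-state loss
            np.sqrt(g_bd * s) * op(DB, B)]                 # R6: dark-state loss

def rhs(t, y):                                 # Lindblad equation 4
    rho = y.reshape(5, 5)
    H = hamiltonian(t)
    d = 1j * (rho @ H - H @ rho)
    for R in lindblads(t):
        d = d + R @ rho @ R.T - 0.5 * (R.T @ R @ rho + rho @ R.T @ R)
    return d.ravel()

rho0 = op(G, G).astype(complex)                # start in the ground state
sol = solve_ivp(rhs, (0.0, 400.0), rho0.ravel(),
                rtol=1e-8, atol=1e-10, max_step=1.0)
pops = np.real(np.diag(sol.y[:, -1].reshape(5, 5)))
assert np.isclose(pops.sum(), 1.0, atol=1e-6)  # trace is preserved
```

Since all six $R_{j}$ share the same structure as in the text, swapping in the fitted rates only requires changing the constants at the top; the trace check at the end is a basic consistency test of the integrator.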
Figure 5: Fidelity with the Bell State and Normalized Number of Photon Counts.
(a) Fidelity $F_{\ket{\Phi^{+}}}$ (see equation 14) between the theoretically
predicted density matrix $\rho_{\mathrm{photons}}$ and $\ket{\Phi^{+}}$ for
various values of Rabi frequency amplitude $\Omega_{0}$ and the pulse duration
$\tau$. The red dot marks the parameters chosen for figure 4, i.e. an
excitation angle of $\pi/15$ ($\Omega_{0}\approx 0.05$) and a pulse duration
of $85$ ps FWHM for the measurement of time-bin entangled photons. The dashed
lines indicate areas of constant pulse energy proportional to
$\Omega_{0}^{2}\tau$. (b) Normalized number of total counts predicted by the
simulated projective measurements. The blue line in the main plot follows a
constant count number of $0.32$. Additionally, the inset depicts the fidelity
along this line when going from small to large $\Omega_{0}$.
In order to obtain the density matrix of the emitted photons from the quantum
dot dynamics, we mimic the measurement of the photon coincidence counts in the
experiment: first, we calibrate our model by fitting the emission
probabilities
$\displaystyle
P_{i}=\gamma_{i}\int\bra{i}\rho_{\mathrm{dot}}(t)\ket{i}\mathrm{d}t$ (11)
for $i\in\\{x,b\\}$ to the biexciton and exciton Rabi data (see figure 2).
This comprehensive and numerically demanding fitting loop is outlined in section
S2 of the supplemental material. Then, the density matrix of the quantum dot
as a function of time $\rho_{\mathrm{dot}}(t)$ is the result of numerically
solving the master equation. Relating the density matrix of the photons
$\rho_{\mathrm{photons}}$ to $\rho_{\mathrm{dot}}(t)$ is achieved by
calculating the resulting photon coincidence counts, where we derive analytic
expressions for the detection probabilities of all $16$ projective
measurements (see S2 in the supplement) and subsequently use this estimate as
an input for the conventional state tomography. This procedure is depicted
schematically in figure 3. The density matrix resulting from this approach is
shown in figure 4 and compared to the density matrix obtained in the
experiment. In order to quantitatively compare the experimental results to our
simulation, we employ the following definition of the fidelity for two mixed
states [24]:
$\displaystyle F_{\rho}=\mathrm{tr}\
\sqrt{\sqrt{\rho_{\mathrm{photons}}^{\mathrm{exp}}}\
\rho_{\mathrm{photons}}^{\mathrm{sim}}\
\sqrt{\rho_{\mathrm{photons}}^{\mathrm{exp}}}}\approx 0.96.$ (12)
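Equation 12 is the standard Uhlmann fidelity for mixed states. For reference, a direct implementation via a matrix square root (a sketch assuming well-behaved density matrices, not the authors' code) reads:

```python
import numpy as np
from scipy.linalg import sqrtm

def uhlmann_fidelity(rho, sigma):
    """F = tr sqrt( sqrt(rho) sigma sqrt(rho) ), as in equation 12."""
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ sigma @ s))))

# sanity checks: F(rho, rho) = 1; for commuting states F = sum_i sqrt(p_i q_i)
rho = np.diag([0.7, 0.3]).astype(complex)
sig = np.diag([0.4, 0.6]).astype(complex)
assert np.isclose(uhlmann_fidelity(rho, rho), 1.0)
assert np.isclose(uhlmann_fidelity(rho, sig),
                  np.sqrt(0.7 * 0.4) + np.sqrt(0.3 * 0.6))
```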
The density matrices from theory and experiment share the same structural
appearance, as they show similar values at the prominent matrix elements.
While the remaining entries of the simulated density matrix appear to be
rather flat, we observe additional small fluctuations of these entries for the
density matrix from the experiment.
Entanglement Quality of the Photons – Ultimately, our goal is to achieve two-
photon emission in a perfect Bell state,
$\displaystyle\ket{\Phi^{+}}=\frac{1}{\sqrt{2}}\Bigl{(}\ket{ee}+\ket{ll}\Bigr{)}.$
(13)
Therefore, we identify suitable values for the laser intensity
$I\propto\Omega^{2}$ and its pulse duration $\tau$ in our simulation, which
can assist in maximizing the fidelity,
$\displaystyle
F_{\ket{\Phi^{+}}}=\sqrt{|\bra{\Phi^{+}}\rho_{\mathrm{photons}}\ket{\Phi^{+}}|},$
(14)
to a Bell state in the experiment. Figure 5 shows a scan of the fidelity
$F_{\ket{\Phi^{+}}}$ over the corresponding parameter space spanned by
$\Omega_{0}$ and $\tau$. Here, we study the influence of the parameter
$\Omega_{0}$ instead of the intensity $I$ as this parameter is more natural to
the theoretical model. Once the model is calibrated to the experimental data,
$\Omega_{0}$ can be converted to the average laser power.
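Because the target $\ket{\Phi^{+}}$ is a pure state, equation 14 reduces to the square root of a single overlap; a minimal numerical sketch in the same basis as equation 1:

```python
import numpy as np

phi_plus = np.zeros(4, dtype=complex)
phi_plus[[0, 3]] = 1 / np.sqrt(2)        # (|ee> + |ll>)/sqrt(2), equation 13

def bell_fidelity(rho):
    """F = sqrt(|<Phi+| rho |Phi+>|), equation 14 for a pure target."""
    return np.sqrt(np.abs(phi_plus.conj() @ rho @ phi_plus))

assert np.isclose(bell_fidelity(np.outer(phi_plus, phi_plus.conj())), 1.0)
assert np.isclose(bell_fidelity(np.eye(4) / 4), 0.5)   # fully mixed: sqrt(1/4)
```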
Similar to the Rabi oscillations in figure 2, we observe an oscillatory
pattern, which becomes less and less pronounced towards regions of higher
energy (upper right corner). This can mostly be attributed to the intensity-
dependent dephasing. For lower energies (lower left corner) the pattern
roughly follows areas of constant energy, indicated by the yellow dashed
lines. The red dot indicates the values chosen in the measurements that yield
the reconstructed time-bin encoded photonic density matrix in figure 4. We
show simulated density matrices for the same pulse length but different
average laser power in figure S4 of the supplemental material; there, the
diagonal entries of the density matrix grow towards regions of lower fidelity,
i.e. the photonic state becomes more classical. Reaching the regime
of maximal fidelity has to be deferred to a future experimental setup, where
our theoretical model can prove even more useful in fine-tuning the
experimental parameters.
For a source of entangled photons it is desirable to not only achieve a high
fidelity, but also to yield sufficient output. Figure 5 depicts the normalized
number of total expected counts of all simulated projective measurements (see
supplementary material S2). Again, we observe an oscillatory behaviour where
we find some degree of anti-correlation between the pattern of the counts and
the fidelity, i.e. dark areas with less output correspond to a relatively high
fidelity, whereas bright areas are connected to a smaller fidelity. Yet, these
two patterns are not perfectly anti-correlated, as we find slightly varying
fidelity for contours of constant counts. For some applications, a minimum
amount of photons is required. Consequently, one might be interested in the
optimal fidelity for a given photon count rate. For instance, we observe the
fidelity along a contour of constant counts in the inset of figure 5. For this
particular contour, we find the highest fidelity for long pulses with a
relatively low intensity. In cases where the rate of output photons is not an
issue, our study suggests that the optimal parameter regime is that of low
pulse energy (lower left corner).
Conclusions – In this work we have demonstrated the coherent coupling of the
ground and biexciton states of an InAsP quantum dot embedded in an InP
nanowire via an
optimised two-photon resonant excitation scheme. We have used this method to
generate time-bin entangled photons, yielding a fidelity of
$F_{\ket{\Phi^{+}}}\approx 0.90$ (see equation 14) with respect to the
maximally entangled $\ket{\Phi^{+}}$ Bell state.
Additionally, we have presented a quantum optical model for simulating the
dynamics of the quantum dot. By making use of the experimental reconstruction
method, we have introduced a scheme for predicting the density matrix of the
emitted photons based on the simulation of the dynamics of the quantum dot.
The results of the model have been compared to the outcome of the experiment.
With this, we are able to identify optimal parameter regimes in order to
further increase the fidelity of the photons’ density matrix to a Bell state
and to provide a more general toolbox for the study of time-bin entangled
photons from a quantum dot.
## Acknowledgements
We want to thank Doris Reiter and her group for fruitful discussions. P.A. and
W.L. are supported by the Austrian Science Fund (FWF) through a START grant
under Project No. Y1067-N27 and the SFB BeyondC Project No. F7108, the Hauser-
Raspe foundation and the European Union’s Horizon 2020 research and innovation
program under grant agreement No. 817482. M.P., F.K. and G.W. acknowledge
partial support by the Austrian Science Fund (FWF) through projects W1259 (DK-
ALM), I4380 (AEQuDot), and F7114 (SFB BeyondC). This material is based upon
work supported by the Defense Advanced Research Projects Agency (DARPA) under
Contract No. HR001120C0068. Any opinions, findings and conclusions or
recommendations expressed in this material are those of the author(s) and do
not necessarily reflect the views of DARPA.
## Supplementary Material
The supplementary material shows the quantum dot’s emission spectrum and
details on the experimental methods. It features exemplary dynamics of the
quantum dot upon excitation by a laser pulse and provides an in-depth
mathematical assessment of the reconstruction of the photons’ density matrix
from the quantum dot’s density matrix. Furthermore, it contains a summary of
the chosen values for the simulation parameters including the fit of the decay
rates and a derivation of the Hamiltonian in equation 2.
## Data availability
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## Conflict of Interest
The authors have no conflicts to disclose.
## References
* [1] H. J. Briegel, W. Dür, J. I. Cirac, and P. Zoller. Quantum repeaters: The role of imperfect local operations in quantum communication. Physical Review Letters, 81(26):5932–5935, 12 1998.
* [2] W. Tittel and G. Weihs. Photonic entanglement for fundamental tests and quantum communication. Quant. Inf. Comp., 1:3–56, 2001.
* [3] J. D. Franson. Bell inequality for position and time. Phys. Rev. Lett., 62:2205–2208, May 1989.
* [4] W. Tittel, J. Brendel, H. Zbinden, and N. Gisin. Violation of bell inequalities by photons more than 10 km apart. Physical Review Letters, 81(17):3563–3566, 10 1998.
* [5] Harishankar Jayakumar, Ana Predojević, Thomas Kauten, Tobias Huber, Glenn S. Solomon, and Gregor Weihs. Time-bin entangled photons from a quantum dot. Nature Communications, 5:4251 EP –, 06 2014.
* [6] Marijn A. M. Versteegh, Michael E. Reimer, Aafke A. van den Berg, Gediminas Juska, Valeria Dimastrodonato, Agnieszka Gocalinska, Emanuele Pelucchi, and Val Zwiller. Single pairs of time-bin-entangled photons. Phys. Rev. A, 92:033802, Sep 2015.
* [7] Daniel Huber, Marcus Reindl, Saimon Filipe Covre da Silva, Christian Schimpf, Javier Martín-Sánchez, Huiying Huang, Giovanni Piredda, Johannes Edlinger, Armando Rastelli, and Rinaldo Trotta. Strain-tunable gaas quantum dot: A nearly dephasing-free source of entangled photon pairs on demand. Physical Review Letters, 121(3):033902–, 07 2018.
* [8] Christoph Simon and Jean-Philippe Poizat. Creating single time-bin-entangled photon pairs. 94(3):030502, 2005.
* [9] Pascale Senellart, Glenn Solomon, and Andrew White. High-performance semiconductor quantum-dot single-photon sources. Nature nanotechnology, 12(11):1026, 2017.
* [10] Hui Wang, Hai Hu, T-H Chung, Jian Qin, Xiaoxia Yang, J-P Li, R-Z Liu, H-S Zhong, Y-M He, Xing Ding, et al. On-demand semiconductor source of entangled photons which simultaneously has high fidelity, efficiency, and indistinguishability. Physical review letters, 122(11):113602, 2019.
* [11] M. E. Reimer, G. Bulgarini, A. Fognini, R. W. Heeres, B. J. Witek, M. A. M. Versteegh, A. Rubino, T. Braun, M. Kamp, S. Höfling, D. Dalacu, J. Lapointe, P. J. Poole, and V. Zwiller. Overcoming power broadening of the quantum dot emission in a pure wurtzite nanowire. Phys. Rev. B, 93:195316, May 2016.
* [12] Michael E. Reimer, Gabriele Bulgarini, Nika Akopian, Moïra Hocevar, Maaike Bouwes Bavinck, Marcel A. Verheijen, Erik P. A. M. Bakkers, Leo P. Kouwenhoven, and Val Zwiller. Bright single-photon sources in bottom-up tailored nanowires. Nature Communications, 3:737 EP –, 03 2012.
# Machine Translationese: Effects of Algorithmic Bias on Linguistic Complexity
in Machine Translation
Eva Vanmassenhoveα
Dimitar Shterionovα
Matthew Gwilliamβ
α Cognitive Science and AI, Tilburg University, The Netherlands
{e.o.j.vanmassenhove<EMAIL_ADDRESS>
β University of Maryland, College Park
<EMAIL_ADDRESS>
###### Abstract
Recent studies in the field of Machine Translation (MT) and Natural Language
Processing (NLP) have shown that existing models amplify biases observed in
the training data. The amplification of biases in language technology has
mainly been examined with respect to specific phenomena, such as gender bias.
In this work, we go beyond the study of gender in MT and investigate how bias
amplification might affect language in a broader sense. We hypothesize that
the ‘algorithmic bias’, i.e. an exacerbation of frequently observed patterns
in combination with a loss of less frequent ones, not only exacerbates
societal biases present in current datasets but could also lead to an
artificially impoverished language: ‘machine translationese’. We assess the
linguistic richness (on a lexical and morphological level) of translations
created by different data-driven MT paradigms – phrase-based statistical (PB-
SMT) and neural MT (NMT). Our experiments show that there is a loss of lexical
and morphological richness in the translations produced by all investigated MT
paradigms for two language pairs (EN$\leftrightarrow$FR and
EN$\leftrightarrow$ES).
## 1 Introduction
The idea of translation entailing a transformation is widely recognised in the
field of Translation Studies (TS) Ippolito (2014). Translations are specific
communicative acts occurring in a particular context governed by their own
laws. Some of the features that characterize translated texts are defined as
simplification, explicitation, normalization and leveling out Baker (1999).
The fingerprints left by the translation process, and the language that results from it, have been referred to as ‘translationese’ Gellerstam (1986). Empirical
evidence of the existence of translationese can be found in studies showing
that machine learning techniques can be employed to automatically distinguish
between human translated and original text by looking at lexical and
grammatical information Baroni and Bernardini (2006); Koppel and Ordan (2011).
Translationese differs from original texts due to a combination of factors
including intentional (e.g. explicitation and normalization) and unintentional
ones (e.g. unconscious effects of the source language input on the target
language produced). Unlike other work on (human) translationese (or even
related work on ‘Post-editese’), we delve into the effects of machine
translation (MT) algorithms on language, i.e. ‘machine translationese’.
So far, generating accurate and fluent translations has been the main
objective of MT systems. As such, maintaining the richness and diversity in
the outputs has understandably not been a priority Vanmassenhove (2020).111One
might argue that for some tasks and domains, diversity can be harmful
(specific in-domain translations would prefer consistency over e.g. lexical
diversity). However, as MT systems have reached a quality that is (arguably)
close to that of human translations Läubli et al. (2018); Toral et al. (2018)
and as such are being used widely on a daily basis, we believe it is time to
look into the potential effects of (MT) algorithms on language
itself.222Google Translate alone translates more than 100 billion words per
day and is used by at least 500 million people according to estimates
(https://www.blog.google/products/translate/ten-years-of-google-translate/).
The main motivations behind this work are: (i) if algorithmic bias is indeed a
by-product of our algorithms, a statistically biased MT system might prefer
frequently occurring words (or sub-words) over others. Since MT systems do not
necessarily distinguish between different synonymous translations (lexical
richness) and morphological variants (grammatical richness), algorithmic bias
could lead to the loss of morphological variety (and thus interfere with the ability of our systems to always generate a grammatically correct option); (ii) the sociolinguistic perspective of machine translationese, since it has been established that language contact (e.g. via translationese) can entail language change Kranich (2014). If machine translationese (and other types of ‘NLPese’) is a simplified version of the training data, what does that imply from a sociolinguistic perspective, and how could this affect language in the long term?
The main objective of the presented research is to establish whether there is
indeed a quantitatively measurable difference between the linguistic richness
of an MT system’s training data and its output in terms of morphological and
lexical diversity. To do so, we conduct an in-depth analysis that goes beyond
frequently used standard lexical diversity metrics such as TTR, Yule’s I and
MTLD. We assess the lexical and morphological diversity through an adapted
version of the Lexical Frequency Profile used to assess language acquisition,
a measure of morphological diversity based on Shannon and Simpson Diversity
and an novel automatic evaluation of synonym frequency. We focus on the most
prominent data-driven MT paradigms: Neural MT (NMT), both LSTM Bahdanau et al.
(2015); Sutskever et al. (2014) and Transformer Vaswani et al. (2017), and
Phrase-Based Statistical MT (PB-SMT). To the best of our knowledge, this is the first research on the lexical and morphological diversity of machine translation output, i.e. machine translationese.
The contributions of this work can be summarised as: (i) a detailed analysis
of lexical and morphological diversity of machine translationese and the loss
thereof to quantify the effects of algorithmic bias; (ii) the adaptation of a
metric used in language acquisition for assessing lexical sophistication in
MT333In fact, our implementation of the LFP metric can be employed for any NLP
tasks.; (iii) the use of Shannon entropy and Simpson diversity to measure
morphological richness, and (iv) a novel, automatic evaluation of synonym
frequency.
## 2 Related Work
Several studies have exposed the societal biases present in datasets (racial bias Merullo et al. (2019), political bias Fan et al. (2019), gender bias Vanmassenhove and Hardmeier (2018)). Existing NLP technologies are likely to pick up biases present in the training data, and various explorations of e.g. gender
bias in NLP systems have indeed revealed the existence of harmful biases in
the output they generate Bolukbasi et al. (2016); Caliskan-Islam et al.
(2016); Garg et al. (2018); Vanmassenhove et al. (2018); Stanovsky et al.
(2019); Sun et al. (2019); Habash et al. (2019). Research related to bias has
often focused on gender or race. Especially in a field such as MT, the implicit gender in a language such as English, and its subsequent translation into morphologically richer languages with gender agreement, make it relatively easy to expose and study biases related to gender in a contrastive
linguistic setting. In the context of this paper, we would like to note that
(statistical) bias is not limited to gender or race but can be defined as any
systematic inaccuracy in one direction leading to an under (or over)
estimation of observations.
A handful of recent works have mentioned the possibility of algorithmic bias on
top of the already existing societal biases in the training data Bolukbasi et
al. (2016); Caliskan-Islam et al. (2016); Garg et al. (2018). For instance,
Zhao et al. Zhao et al. (2017) observe a phenomenon they refer to as ‘bias
amplification’. They note that in their training data an activity such as
‘cooking’ is associated 33% more often with women than with men. After training a model on that dataset, the existing disparity is amplified to 68% more associations with women.
In the field of MT, Vanmassenhove et al. Vanmassenhove et al. (2019) address
the effects of statistical bias on language generation in an MT setting. They
assess lexical diversity using standard metrics –TTR, MTLD and Yule’s K– and
conclude that the translations produced by various MT systems (PB-SMT and NMT)
are consistently less diverse than the original training data. Their approach
was conducted on NMT systems that were trained without byte-pair-encoding
(BPE) Sennrich et al. (2016) which limits the creativity of the translation
systems.
Toral Toral (2019) measures the lexical diversity of 18 state-of-the-art
systems on 6 language pairs, reaching similar conclusions. They do so focusing
specifically on post-editese. The experiments indicate that post-editese is
simpler and more normalised than human translationese. The post-editese also
shows a higher degree of interference from the source compared to the human
translations. Daems et al. Daems et al. (2017), like Toral Toral (2019), center around the automatic detection of post-editese and do not look into properties of unedited machine translationese. In Aranberri Aranberri (2020), different freely available MT systems (neural and rule-based) are compared in terms of automatic metrics (BLEU, TER) and translationese features (TTR, input/output length ratio, perplexity, etc.) to investigate how such features correlate with translation quality. Bizzoni et al. Bizzoni et al. (2020) present a comparison, using similar translationese features, of three MT architectures and of human translations of spoken and written language.
In the field of PB-SMT, Klebanov and Flor Klebanov and Flor (2013) show that
PB-SMT suffers considerably more than human translations (HT) from lexical
loss, resulting in loss of lexical tightness and text cohesion. Aharoni et al.
Aharoni et al. (2014) prove that machine-translated and human-translated sentences can be automatically distinguished, corroborating that human translations systematically differ from the translations produced by PB-SMT systems.
Aside from Vanmassenhove et al. Vanmassenhove et al. (2019), the related work discussed above uses metrics of lexical diversity to compare human
translations to (post-edited) machine translations. In this work, we compare
how and whether the output of an MT system differs (in terms of lexical and
morphological diversity) from the data it was originally trained on. This way,
we aim to investigate the effect of the algorithm (and algorithmic bias) on
language itself.
## 3 Machine Translation Systems
MT paradigms have changed quickly over the last decades. Since this is the
first attempt to quantify both the lexical and grammatical diversity of
machine translationese, we experimented with the current state-of-the-art
data-driven paradigms, LSTM and Transformer, as well as with PB-SMT. We used
data from the Europarl corpus Koehn (2005) for two language pairs,
English–French and English–Spanish in both directions (EN$\rightarrow$FR,
FR$\rightarrow$EN, EN$\rightarrow$ES and ES$\rightarrow$EN). We are interested
in both directions in order to verify whether there is a difference in terms
of (the potential loss of) diversity when comparing translations from a
morphologically poorer language (English) into morphologically richer ones
(French and Spanish) and vice versa. Our data is summarised in Table 1.444We
ought to address the fact that the Europarl data consists of both human-
uttered and translated text which have different properties in terms of
diversity. In this work we analyse the impoverishment of data when it passes
through the “filter” of the MT system, i.e. the effect of the algorithm. As the
origin of the data, human-uttered or translated, has no impact on the inherent
workings of the MT system we do not take this into account in our analysis.
Lang. pair | Train | Test | Dev
---|---|---|---
EN-FR/FR-EN | 1,467,489 | 499,487 | 7,723
EN-ES/ES-EN | 1,472,203 | 459,633 | 5,734
Table 1: Number of parallel sentences for the training, testing and
development sets.
The specifics of the MT systems we trained are:
PB-SMT For the PB-SMT systems we used Moses Koehn et al. (2007) with default
settings and a 5-gram language model with pruning of bigrams. We also tuned
each system using MERT Och and Ney (2003) until convergence or for a maximum
of 25 iterations. During translation we mask unknown words with the UNK token
to avoid (source) words bleeding through, which would artificially increase the linguistic diversity.
NMT For the RNN and Transformer systems we used OpenNMT-
py.555https://opennmt.net/OpenNMT-py/ The systems were trained for a maximum of 150K steps, saving an intermediate model every 5,000 steps, or until reaching convergence according to an early stopping criterion of no improvement in perplexity (scored on the development set) for 5 consecutive intermediate models. The
options we used for the neural systems are:
* •
RNN: size: 512, RNN type: bidirectional LSTM, number of layers of the encoder
and of the decoder: 4, attention type: MLP, dropout: 0.2, batch size: 128,
learning optimizer: Adam Kingma and Ba (2014) and learning rate: 0.0001.
* •
Transformer: number of layers: 6, size: 512, transformer_ff: 2048, number of
heads: 8, dropout: 0.1, batch size: 4096, batch type: tokens, learning
optimizer Adam with beta${}_{2}=0.998$, learning rate: 2.
All NMT systems have the learning rate decay enabled and their training is
distributed over 4 nVidia 1080Ti GPUs. The selected settings for the RNN
systems are optimal according to Britz et al. Britz et al. (2017); for the
Transformer we use the settings suggested by the OpenNMT
community666http://opennmt.net/OpenNMT-py/FAQ.html as the optimal ones that
lead to quality on par with the original Transformer work Vaswani et al.
(2017).
For training, testing and validation of the systems we used the same data. To
build the vocabularies for the NMT systems we used sub-word units, allowing
NMT to be more creative; using sub-word units also mitigates to a certain
extent the out of vocabulary problem. To compute the sub-word units we used
BPE with 50,000 merging operations for all our data sets. In Table 2 we
present the vocabulary sizes of the data used to train our PB-SMT and NMT
systems.
Lang. pair | EN (no BPE) | FR/ES (no BPE) | EN (with BPE) | FR/ES (with BPE)
---|---|---|---|---
EN-FR/FR-EN | 113,132 | 131,104 | 47,628 | 48,459
EN-ES/ES-EN | 113,692 | 168,195 | 47,639 | 49,283
Table 2: Vocabulary sizes. For completeness we also present the vocabulary
size without BPE, i.e. the number of unique words in the corpora.
The quality of our MT systems is evaluated on the test set using standard
evaluation metrics – BLEU Papineni et al. (2002) (as implemented in SacreBLEU
Post (2018)) and TER Snover et al. (2006) (as implemented in Multeval Clark et
al. (2011)). Our evaluation scores are presented in Table 3.
English as source
System | EN$\rightarrow$FR BLEU$\uparrow$ | EN$\rightarrow$FR TER$\downarrow$ | EN$\rightarrow$ES BLEU$\uparrow$ | EN$\rightarrow$ES TER$\downarrow$
---|---|---|---|---
PB-SMT | 35.7 | 50.7 | 38.6 | 45.9
LSTM | 34.2 | 50.9 | 38.2 | 45.3
TRANS | 37.2 | 48.7 | 40.9 | 43.4
English as target
System | FR$\rightarrow$EN BLEU$\uparrow$ | FR$\rightarrow$EN TER$\downarrow$ | ES$\rightarrow$EN BLEU$\uparrow$ | ES$\rightarrow$EN TER$\downarrow$
---|---|---|---|---
PB-SMT | 36.2 | 47.1 | 39.3 | 44.0
LSTM | 34.6 | 48.2 | 38.1 | 44.7
TRANS | 37.0 | 46.4 | 41.3 | 41.4
Table 3: Quality evaluation scores for our MT systems. TRANS denotes
Transformer systems.
We computed pairwise statistical significance using bootstrap resampling Koehn
(2004) and a 95% confidence interval. The results shown in Table 3 are all
statistically significant based on 1000 iterations and samples of 100
sentences. All metrics show the same performance trends for all language
pairs: Transformer (TRANS) outperforms all other systems, followed by PB-SMT,
and LSTM.
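The bootstrap resampling test above can be sketched in a few lines of pure Python. This is an illustrative simplification, not the exact Koehn (2004) implementation: we assume hypothetical per-sentence metric scores for two systems on the same test set (the original recomputes corpus-level BLEU on each resampled set), and we report the fraction of resamples on which system A outperforms system B.

```python
import random

def bootstrap_significance(scores_a, scores_b, iterations=1000, sample_size=100, seed=0):
    """Paired bootstrap resampling (after Koehn, 2004): draw random samples of
    sentence indices and count how often system A's total score beats system B's.
    `scores_a`/`scores_b` are hypothetical per-sentence metric scores."""
    rng = random.Random(seed)
    n = len(scores_a)
    wins_a = 0
    for _ in range(iterations):
        idx = [rng.randrange(n) for _ in range(sample_size)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins_a += 1
    return wins_a / iterations  # fraction of samples where A outperforms B

# Toy example: system A is better than B on every sentence,
# so A wins on every resample.
a = [0.40, 0.35, 0.50, 0.45, 0.38] * 40
b = [0.36, 0.33, 0.48, 0.44, 0.37] * 40
p_a_better = bootstrap_significance(a, b)  # → 1.0
```

A win fraction above 0.95 corresponds to significance at the 95% confidence level used in the paper.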
For all PB-SMT systems we replaced marked unknown words with a single token “UNK”. While this does not affect the computation of BLEU and TER, it allows us not to artificially boost the lexical and grammatical scores for these MT engines (see Section 4) and to assess their realistic dimensions.
## 4 Experiments and Results
Assessing linguistic complexity is a multifaceted task spanning various domains (lexis, morphology, syntax, etc.). Lexical and grammatical diversity are two of its major components Bachman (2004); Bulté et al. (2008);
Bulté (2013). As such, we conduct an analysis using (i) lexical diversity and
sophistication metrics (Section 4.2) and (ii) grammatical diversity metrics
(Section 4.3). For lexical diversity, we use the following metrics: an adapted
version of the Lexical Frequency Profile (LFP), three standard metrics
commonly used to assess diversity –TTR, Yule’s I and MTLD–, and three new
metrics based on synonym frequency in translations. To the best of our knowledge, this research is the first to employ the LFP to analyze synthetic data. For grammatical
diversity, we focus specifically on morphological inflectional diversity. We
adopt the Shannon entropy and Simpson’s diversity index to compute the entropy
of the inflectional paradigms of lemmas, measuring the abundance and the
evenness of wordforms per lemma.
Next, we will discuss the evaluation data and the aforementioned metrics
designed in order to compare the diversity of the training data with the
machine translationese. Our evaluation scripts are available at
https://github.com/dimitarsh1/BiasMT; due to its large size, the data is not hosted in the GitHub repository but is available upon request.
### 4.1 Evaluation data
To observe the effects of the MT algorithm on linguistic diversity, we used
the MT engines (Section 3) to translate the source side of the training set,
i.e. completely observed data. Data that has fully been observed during
training is most suitable for our objectives as we are interested in the
effects of the algorithm on language itself. It is also the most favourable
translation (and evaluation) scenario for the MT systems since all data has
been observed.
### 4.2 Lexical Diversity
#### Lexical Frequency Profile
To look at the lexical sophistication and diversity in the text produced by
the MT systems, we adapted the Lexical Frequency Profile (LFP) method Laufer
(1994); Laufer and Nation (1995). LFP is a measure that stems from research in
second language (L2) acquisition and student writing methods. It is designed
to study the lexical diversity or sophistication in texts produced by L2
learners. It is based on the observation that texts including a higher
proportion of less frequent words are more sophisticated than those containing
higher proportions of more frequent words Kyle (2019).
The LFP method measures diversity and sophistication by looking at frequency
bands. In its original version, the LFP analysis would distinguish between 4
bands: (i) percentage of words in a text belonging to the 1000 most frequent
words in that language, (ii) percentage of words in a text belonging to the
next 1000 most frequent words, (iii) a list of academic words that did not
occur in the first 2000 words and (iv) the remaining words. The lists used to
determine the word bands are predefined word lists such as Nation’s word lists
Nation (1984). One shortcoming of the approach is that a mismatch between
reference corpus and the target text can lead to misleading outcomes. However,
since we are looking into the (side-)effects of the training algorithm,
instead of using preset word lists in order to compute the LFP, we use the
original training data to generate the word frequency lists. This allows for a
better comparison between the original data and the machine translationese
while bypassing the potential mismatch issue.
Several studies Crossley et al. (2013); Laufer (1994); Laufer and Nation
(1995) have employed the LFP method to assess L2 acquisition in learners. These studies found that the less proficient the L2 user, the more words belong to the first band and the fewer words belong to the list of academic words or to the remaining words (bands 3 and 4, respectively, of the original formulation).
The lexical profile mentioned above is a detailed profile, showing 4 types of
words used by the learner. Because of interpretability issues, the ‘Beyond 2000’ measure is also frequently used to assess the profile of users. It distinguishes between the first two bands (comprising the first 2000 words) and the rest. This condensed profile has been found to be as reliable and valid as the original LFP, with the advantage that it reduces the profile to a single score, facilitating comparison between learners (or, in our case, MT systems).
Since we are interested in the difference between the original training data
and the output of the MT systems, we compute the frequency bands on the
original training data instead of based on pre-set word lists used in L2
research. As such, we leave out the third band consisting of a list of
academic words. Presenting and computing the LFP this way immediately gives us the ‘Beyond 2000’ metric score as well (as we distinguish between three bands only, the last one covering anything beyond the first 2000 words).
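The banded profile described above can be computed directly from token counts. The sketch below is an illustration of our three-band variant, assuming whitespace-tokenized input; the toy example uses a band size of 2 instead of the 1000 used in the paper.

```python
from collections import Counter

def frequency_bands(training_tokens, band_size=1000):
    """Rank words by frequency in the training data and assign band 1
    (most frequent band_size words), band 2 (next band_size), or band 3
    (everything else)."""
    ranked = [w for w, _ in Counter(training_tokens).most_common()]
    return {w: min(r // band_size + 1, 3) for r, w in enumerate(ranked)}

def lfp(text_tokens, band):
    """Percentage of running text occupied by each band; words unseen in
    training fall into band 3. B3 is the condensed 'Beyond 2000' score."""
    counts = Counter(band.get(tok, 3) for tok in text_tokens)
    total = len(text_tokens)
    return {f"B{b}": 100.0 * counts.get(b, 0) / total for b in (1, 2, 3)}

# Toy example with a band size of 2 instead of 1000.
train = ["the"] * 5 + ["cat"] * 4 + ["sat"] * 3 + ["mat"] * 2 + ["rug"]
profile = lfp(["the", "cat", "sat", "rug", "new"],
              frequency_bands(train, band_size=2))
# → {"B1": 40.0, "B2": 20.0, "B3": 40.0}
```

Because the bands are derived from the training corpus itself, the same `frequency_bands` mapping is applied to both the original data and each system's output, which is what makes the profiles directly comparable.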
The LFP for French, Spanish and English, from the EN$\rightarrow$FR,
EN$\rightarrow$ES, FR$\rightarrow$EN (denoted as ENFR) and ES$\rightarrow$EN
(denoted as ENES) data is presented in Table 4.
It shows that the original data is consistently more diverse than the output of the MT systems: (i) the percentage of text occupied by the 1000 most frequent words (B1) is lower for the original training data than the corresponding B1 scores for all MT systems, i.e. the 1000 most frequent words take up a smaller proportion of the original text than of the MT output; and (ii) the so-called ‘Beyond 2000’ measure, which in our LFP is equal to the third band (B3) and gives the percentage of text occupied by words outside the first two bands, is consistently higher for the original data than for the MT systems, i.e. less frequent words occupy a bigger proportion of the original data than of its machine translationese variants. Note that LFPs have been established to be large-grained, so small gains in vocabulary are likely to be obscured Kyle (2019). The results indicate a consistent and clear difference between the original data and the different types of machine translationese for all language pairs.
Aside from the different LFP scores between the training data and the
translations, we also see a difference between the languages themselves.
French and Spanish have more variety (lower B1 and higher B3 (Beyond 2000) values) compared to ENFR and ENES. Since the LFPs are computed on tokens, this
reflects the richer morphology in French and Spanish compared to English.
System | FR B1 | FR B2 | FR B3 | ES B1 | ES B2 | ES B3
---|---|---|---|---|---|---
ORIG | 79.80 | 6.59 | 13.61 | 77.80 | 6.83 | 15.36
PB-SMT | 81.78 | 6.48 | 11.74 | 79.77 | 6.86 | 13.36
LSTM | 82.95 | 6.18 | 10.88 | 80.34 | 6.84 | 12.81
TRANS | 82.01 | 6.24 | 11.75 | 82.35 | 6.99 | 10.67
System | ENFR B1 | ENFR B2 | ENFR B3 | ENES B1 | ENES B2 | ENES B3
---|---|---|---|---|---|---
ORIG | 80.83 | 7.10 | 12.07 | 80.81 | 7.11 | 12.08
PB-SMT | 82.06 | 7.04 | 10.90 | 82.25 | 7.01 | 10.74
LSTM | 83.23 | 6.93 | 9.81 | 83.29 | 6.93 | 9.78
TRANS | 82.25 | 7.05 | 10.70 | 82.35 | 6.99 | 10.67
Table 4: Lexical Frequency Profile (French, Spanish, English (ENFR and ENES)
with 3 bands (B1: 0-1000, B2: 1001-2000, B3: 2001-end) for the original data
and the output of the MT systems.
#### TTR, Yule’s I and MTLD
For completeness, we also present three more commonly used measures of lexical
diversity: type/token ratio (TTR) Templin (1975), Yule’s K (in practice we use its inverse, Yule’s I) Yule (1944), and the measure of textual lexical diversity (MTLD) McCarthy (2005).
TTR presents the ratio of the total number of different words (types) to the
total number of words (tokens). Higher TTR indicates a higher degree of
lexical diversity.
Yule’s characteristic constant (Yule’s K) Yule (1944) measures constancy of
text as the repetitiveness of vocabulary. Yule’s K and its inverse Yule’s I
are considered to be more resilient to fluctuations related to text length
than TTR Oakes and Ji (2013). The third lexical diversity metric is MTLD. MTLD
is evaluated sequentially as the mean length of sequential word strings in a
text that maintains a given TTR value McCarthy (2005).777In our experiments we
used 0.72 as a TTR threshold. We present the scores for TTR, Yule’s I and MTLD
for our data and MT engines in Table 5.
System | FR TTR | FR Yule’s I | FR MTLD | ES TTR | ES Yule’s I | ES MTLD
---|---|---|---|---|---|---
ORIG | 3.02 | 9.28 | 119.40 | 4.08 | 12.31 | 96.23
PB-SMT | 1.79 | 3.00 | 112.00 | 2.37 | 4.02 | 92.01
LSTM | 1.56 | 2.14 | 104.89 | 2.03 | 2.95 | 86.57
TRANS | 2.07 | 3.82 | 115.66 | 2.89 | 6.23 | 95.72
System | ENFR TTR | ENFR Yule’s I | ENFR MTLD | ENES TTR | ENES Yule’s I | ENES MTLD
---|---|---|---|---|---|---
ORIG | 2.89 | 6.64 | 108.70 | 2.88 | 6.61 | 108.46
PB-SMT | 1.74 | 2.07 | 94.65 | 1.82 | 2.25 | 93.18
LSTM | 1.50 | 1.53 | 86.93 | 1.44 | 1.42 | 87.91
TRANS | 2.04 | 3.10 | 101.95 | 2.09 | 3.26 | 99.62
Table 5: TTR, Yule’s I and MTLD scores. For all metrics, higher scores
indicate higher lexical richness. For ease of readability and comparison we
multiplied TTR scores by 1,000 and Yule’s I scores by 10,000.
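The three metrics above can be sketched compactly. This is an illustrative implementation under stated assumptions: the MTLD pass below is one-directional (the full measure of McCarthy (2005) averages a forward and a backward pass), and we use a common formulation of Yule’s I with M1 the number of types and M2 the sum of squared type frequencies.

```python
from collections import Counter

def ttr(tokens):
    """Type/token ratio: distinct words over total words."""
    return len(set(tokens)) / len(tokens)

def yules_i(tokens):
    """Yule's I (inverse of Yule's K): higher means richer vocabulary.
    M1 = number of types, M2 = sum of squared type frequencies."""
    freqs = Counter(tokens)
    m1 = len(freqs)
    m2 = sum(f * f for f in freqs.values())
    return (m1 * m1) / (m2 - m1)

def mtld_forward(tokens, threshold=0.72):
    """One-directional MTLD sketch: mean length of sequential runs that
    keep the running TTR above the threshold."""
    factors, types, count = 0.0, set(), 0
    for tok in tokens:
        types.add(tok)
        count += 1
        if len(types) / count <= threshold:
            factors += 1.0
            types, count = set(), 0
    if count:  # partial factor for the leftover run
        factors += (1 - len(types) / count) / (1 - threshold)
    return len(tokens) / factors if factors else float(len(tokens))
```

For example, a fully repetitive text such as `["a"] * 4` triggers a factor reset every two tokens and scores an MTLD of 2.0, while a text of all-distinct tokens never triggers a reset and scores its own length.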
The scores in Table 5 show that, overall, and according to all three metrics,
the original training data has a higher lexical diversity than the machine
translationese.
The data for the morphologically richer languages (FR, ES) as well as its
machine translationese variants (PB-SMT, LSTM and TRANS) have higher lexical
richness than the morphologically poor(er) language (EN).
#### Synonym Frequency Analysis
System | FR PTF$\downarrow$ | FR CDU$\downarrow$ | FR SynTTR$\uparrow$ | ES PTF$\downarrow$ | ES CDU$\downarrow$ | ES SynTTR$\uparrow$
---|---|---|---|---|---|---
ORIG | 9.666 | 2.725 | 15.10 | 9.131 | 4.539 | 21.13
PB-SMT | 9.715 | 2.957 | 11.87 | 9.236 | 4.637 | 17.4
LSTM | 9.748 | 3.154 | 10.96 | 9.32 | 4.782 | 15.34
TRANS | 9.717 | 3.077 | 12.25 | 9.285 | 4.687 | 17.15
Table 6: Synonym frequency metrics for our MT systems: primary translation
frequency (PTF), cosine distance from uniform (CDU) and TTR modified to only
consider words with multiple translation options (SynTTR). The SynTTR scores
were multiplied by 100,00 for easier viewing. Higher SynTTR scores indicate
greater diversity, while lower PTF and CDU scores indicate greater diversity.
The objective of synonym frequency analysis is to understand, for words with
multiple possible translations, with what frequency the various translations
for a given word appear in the translated text. We call it synonym frequency because, when translating from one language to another, a source word typically has one corresponding target word to which it is most often translated, and that primary translation can have many synonyms that constitute acceptable alternative translation options. Note that we perform this analysis only in
one direction: from English into the morphologically richer languages French
and Spanish.
To examine synonym frequency, we first lemmatize the text using
SpaCy.888https://spacy.io/ Next, we map all nouns, verbs, and adjectives in
the source to their possible translation options retrieved from bilingual
dictionaries.999English-Spanish: https://github.com/mananoreboton/en-es-en-
Dic, English-French: https://freedict.org/downloads
We then count the number of appearances of these different translation options
for the ORIG as well as the MT data. For example, for the English word “look”
with translation options in Spanish {“mirar”, “esperar”, “buscar”, “parecer”,
“dar”, “vistazo”, “aspecto”, “ojeada”, “mirada”}, the number of appearances in
the TRANS data are as follows: {(“mirar”: 4002), (“esperar”: 3302), (“buscar”:
2814), (“parecer”: 1144), (“dar”: 977), (“vistazo”: 182), (“aspecto”: 46),
(“ojeada”: 0), (“mirada”: 0)}. From this mapping of translation option to
number of appearances we take a vector consisting only of the numbers of
appearances for each translation option, and refer to this as a translated
word distribution. That is, for the aforementioned example, the distribution
vector is: {4002, 3302, 2814, 1144, 977, 182, 46, 0, 0}. We use these counts and distributions as described below.
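The construction of these translated word distributions can be sketched as a simple counting step. This is a simplification: the names below are illustrative, and a full implementation would count only over aligned sentence pairs after the lemmatization and dictionary lookup described above.

```python
from collections import Counter

def translation_distributions(source_lemmas, target_lemmas, bilingual_dict):
    """For each source lemma with known translation options, count how often
    each option appears in the (lemmatized) target text, yielding one
    translated-word distribution vector per source word."""
    target_counts = Counter(target_lemmas)
    dists = {}
    for lemma in set(source_lemmas):
        options = bilingual_dict.get(lemma)
        if options:
            dists[lemma] = [target_counts[opt] for opt in options]
    return dists

# Toy example with a three-option dictionary entry.
dists = translation_distributions(
    ["look"],
    ["mirar", "mirar", "buscar"],
    {"look": ["mirar", "buscar", "mirada"]},
)
# → {"look": [2, 1, 0]}
```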
Our first synonym frequency metric deals directly with the primary translation
frequency (PTF), where the “primary translation” is the translation for a
given source word that appears in the target text most often. We argue that
selecting secondary translation options for each source word less frequently,
and selecting the primary option more frequently, indicates a decrease of
lexical diversity. We measure the PTF by taking the average primary
translation prevalence over all source words for each MT system.
As a second metric we used the cosine distance between a uniform translated
word distribution, where each translation option would be equally prevalent,
and the actual translation distributions (we denote this metric as CDU). While
the ideal distribution of translations for a given word is almost certainly
non-uniform (and therefore not perfectly diverse), this metric still gives
valuable information about the tendencies of different systems to favor
certain translation options over others.
The third metric is a modified TTR which we refer to as Synonym TTR (or
SynTTR). Unlike the regular type/token ratio, rather than considering all tokens that appear in the text, we consider as types only the translation options
from the source-target mappings described above and as tokens we consider only
appearances of valid types. This metric exposes where translation systems
completely drop viable translation options from their vocabulary.
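Given the translated word distributions, the three metrics admit short illustrative implementations. These are sketches under stated assumptions: each source word's distribution is a vector of counts over its dictionary options, and we average per-word values with equal weight; any reporting-scale conventions used for the table are not reproduced here.

```python
import math

def ptf(distributions):
    """Average primary-translation frequency: the share of a source word's
    translation occurrences taken by its most frequent translation."""
    shares = [max(d) / sum(d) for d in distributions.values() if sum(d)]
    return sum(shares) / len(shares)

def cdu(distributions):
    """Average cosine distance between each translated-word distribution and
    a uniform distribution over its translation options."""
    dists = []
    for d in distributions.values():
        norm = math.sqrt(sum(x * x for x in d))
        if norm == 0:
            continue
        uniform = [1.0 / len(d)] * len(d)
        unorm = math.sqrt(sum(u * u for u in uniform))
        cos = sum(x * u for x, u in zip(d, uniform)) / (norm * unorm)
        dists.append(1.0 - cos)
    return sum(dists) / len(dists)

def syn_ttr(distributions):
    """Types = translation options actually used (count > 0);
    tokens = all appearances of valid options."""
    types = sum(1 for d in distributions.values() for x in d if x > 0)
    tokens = sum(sum(d) for d in distributions.values())
    return types / tokens

# The paper's "look" distribution: its primary translation "mirar"
# accounts for 4002 of 12467 total occurrences.
look = {"look": [4002, 3302, 2814, 1144, 977, 182, 46, 0, 0]}
```

A perfectly even distribution (e.g. `[2, 2]`) yields a CDU of 0, while a skewed one yields a positive distance; dropping an option entirely lowers SynTTR.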
Table 6 shows the results for these 3 metrics. Interestingly, the MT systems
can be ranked in the same order according to all these metrics: PB-SMT $>$
TRANS $>$ LSTM, where $>$ denotes the comparison of lexical diversity (higher
to lower). However, across the 3 metrics, for both language pairs, the
reference translations (ORIG) appear to be the most lexically diverse in terms
of synonym frequency, with the lowest PTF and CDU and highest SynTTR. This
reinforces the idea that MT algorithms have a negative impact on the diversity
of language.
### 4.3 Grammatical Diversity
Grammatical diversity manifests itself on the sentence (syntactic complexity)
and word level (morphological complexity). With our experiments, we focus on
the morphological complexity by averaging the inflectional diversity of all
lemmas. To do so, we adopted two measures, originating from Information
Theory: Shannon entropy Shannon (1948) and Simpson’s Diversity Index Simpson
(1949). The former emphasizes the richness aspect of diversity, while the latter emphasizes the evenness aspect. We used the Spacy-udpipe
lemmatizer to retrieve all lemmas.101010https://github.com/TakeLab/spacy-
udpipe
#### Shannon Entropy
Shannon entropy ($H$) measures the level of uncertainty associated with a
random variable ($X$). It has been applied in use-cases from economics, ecology, biology, complex systems, language and many others Page (2007, 2011). In the study of language, Shannon entropy has previously been used for estimating the entropy of language models Behr et al. (2003). We use it to measure the entropy of wordforms given a lemma. In particular, the entropy of the inflectional paradigm of a specific lemma can be computed from the base frequency of that lemma (the frequency of all wordforms associated with it) and the probabilities of all the wordforms within its inflectional paradigm. Using such a formulation of entropy allows us to measure the
morphological variety (or the loss thereof) for the machine translationese
produced by each system – higher values of $H$ indicate higher diversity and
vice-versa.
We use Equation 1 to compute the entropy of the inflectional paradigm of a
lemma.
$H(\mathcal{l})=-\sum\limits_{\mathcal{wf}\in\mathcal{l}}p(\mathcal{wf}|\mathcal{l})\log
p(\mathcal{wf}|\mathcal{l})$ (1)
$H(\mathcal{l})$ denotes the entropy of the lemma $\mathcal{l}$ and, for the
wordform $\mathcal{wf}$, $p(\mathcal{wf}|\mathcal{l})$ is computed as the
fraction of the counts of the wordform, $count(\mathcal{wf})$, to the count of
all wordforms for the lemma $\mathcal{l}$, i.e.
$p(\mathcal{wf}|\mathcal{l})=\frac{count(\mathcal{wf})}{\sum_{wf^{*}\in\mathcal{l}}count(wf^{*})}$.
We use $\in$ to indicate wordforms of a given lemma.
#### Simpson’s Diversity Index
Like Shannon entropy, Simpson’s Diversity Index ($D$) is a measure used to
determine variation in categorical data. Values close to 1 indicate higher
homogeneity, and thus lower diversity; values close to 0 indicate higher
variability, and thus higher diversity.
Following the same reasoning as with Shannon entropy, we compute Simpson’s
diversity index for each lemma and the corresponding wordforms according to
the formula in Equation 2.
$D(\mathcal{l})=\sum\limits_{\mathcal{wf}\in\mathcal{l}}{p(\mathcal{wf}|\mathcal{l})^{2}}$
(2)
We average the Shannon entropy and Simpson’s diversity index over all lemmas
to get an indicative score for each translation system or the original text,
denoted H and D, respectively. To the best of our knowledge, our work is the
first to use Shannon entropy and Simpson’s diversity index for the study of
inflectional richness at the text level. The closest to our application of
these two diversity metrics for measuring inflectional richness is the work on
morphological processing by del Prado Martín et al. (2004), which uses Shannon
entropy to compute the amount of information carried by a morphological
paradigm.
An illustration of the Shannon entropy and Simpson’s diversity index of a
lemma is given in Table 7. We list the number of occurrences for every
wordform (male singular, male plural, female singular and female plural)
appearing in our datasets for the French lemma ‘président’ (EN: president). We
then compute $H$ and $D$ by applying Equation 1 and Equation 2 accordingly.
While both the Shannon $H$ and Simpson’s $D$ scores here range between 0 and
1, for ease of readability we multiply the scores presented in Table 7 and
Table 8 by 100 to present them in the range $[0,100]$.
| lemma: président | président | présidents | présidente | présidentes | H$\uparrow$ | D$\downarrow$ |
|---|---|---|---|---|---|---|
| ORIG | 93774 | 2029 | 1490 | 8 | 18.11 | 92.95 |
| PB-SMT | 99367 | 2019 | 496 | 1 | 12.81 | 95.16 |
| LSTM | 95272 | 2039 | 291 | N/A | 12.17 | 95.3 |
| TRANS | 92946 | 1952 | 617 | N/A | 13.86 | 94.74 |
Table 7: An illustration of the Shannon entropy and the Simpson’s diversity
index computed for the occurrences of the different wordforms of the French
lemma for ‘president’ (président).
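As a sanity check, the ORIG row of Table 7 can be reproduced in a few lines of Python; we assume the natural logarithm in Equation (1) and compute D as the sum of squared wordform probabilities, which matches the table values:

```python
import math

counts = [93774, 2029, 1490, 8]           # ORIG wordform counts for 'président'
total = sum(counts)
probs = [c / total for c in counts]

H = -sum(p * math.log(p) for p in probs)  # Shannon entropy of the paradigm
D = sum(p * p for p in probs)             # Simpson's index: sum of squared probabilities

print(round(100 * H, 2), round(100 * D, 2))  # 18.11 92.95 (scores multiplied by 100)
```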
For lemmas with a single wordform, Shannon entropy and Simpson’s diversity
index will be $H=0.0$ and $D=1.0$, respectively. While this makes sense when
measuring the diversity of one morphological paradigm, such lemmas affect the
averaged scores H and D without contributing to the understanding of diversity
in a comparative study such as ours. In particular, lemmas with a single
wordform may be evidence either of low diversity (e.g., a translation system
always generating only one form) or of high diversity (e.g., rare words with a
single wordform for a particular lemma, such as synonyms of more common words,
can indicate a system’s ability to generate more diverse language in terms of
synonymy). That is why we computed H and D only on lemmas with two or more
wordforms. For completeness, we also report the number of single-wordform
lemmas. The Shannon entropy and Simpson’s diversity index for French, Spanish
and English for all datasets are presented in Table 8. The scores are shown in
the range $[0,100]$ as noted above.
| | FR H$\uparrow$ | FR D$\downarrow$ | FR Single | ES H$\uparrow$ | ES D$\downarrow$ | ES Single |
|---|---|---|---|---|---|---|
| ORIG | 75.20 | 56.42 | 79k | 78.42 | 54.96 | 92k |
| PB-SMT | 69.00 | 59.64 | 51k | 71.79 | 58.56 | 54k |
| LSTM | 69.28 | 59.48 | 53k | 72.84 | 58.29 | 55k |
| TRANS | 73.13 | 57.70 | 58k | 77.23 | 56.26 | 64k |

| | ENFR H$\uparrow$ | ENFR D$\downarrow$ | ENFR Single | ENES H$\uparrow$ | ENES D$\downarrow$ | ENES Single |
|---|---|---|---|---|---|---|
| ORIG | 59.04 | 63.43 | 78k | 59.05 | 63.42 | 78k |
| PB-SMT | 55.57 | 65.80 | 51k | 56.31 | 65.29 | 61k |
| LSTM | 53.15 | 67.02 | 50k | 53.85 | 66.64 | 48k |
| TRANS | 55.85 | 65.43 | 58k | 56.22 | 65.19 | 68k |
Table 8: Shannon entropy (H) for French, Spanish, English (ENFR and ENES) and
Simpson’s diversity index (D) for original training data and the output of the
PB-SMT, LSTM and TRANS systems. Scores are multiplied by 100 for ease of
readability.
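The filtering-and-averaging procedure described above (per-lemma H and D, restricted to lemmas with at least two wordforms) can be sketched as follows; the function name and input layout are our own choices for illustration:

```python
import math

def corpus_h_d(lemma_counts):
    """Average Shannon entropy and Simpson's index over lemmas with two or
    more wordforms; lemma_counts maps lemma -> list of wordform counts."""
    hs, ds = [], []
    for counts in lemma_counts.values():
        if len(counts) < 2:       # skip single-wordform lemmas
            continue
        total = sum(counts)
        probs = [c / total for c in counts]
        hs.append(-sum(p * math.log(p) for p in probs))
        ds.append(sum(p * p for p in probs))
    return sum(hs) / len(hs), sum(ds) / len(ds)
```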
The H and D scores in Table 8 are evidence of the negative impact of MT on
morphological diversity: the scores for ORIG consistently indicate higher
diversity. Comparing the MT systems, TRANS retains morphological diversity
better than the others. LSTM performs better than PB-SMT for translations into
the morphologically richer languages (FR and ES), but PB-SMT appears much
better than LSTM for translations into English. While the loss of lexical
diversity could, in some cases, be a desirable side effect of MT systems (in
terms of simplification or consistency), the uncontrolled loss of
morphological richness is problematic, as it can prevent systems from picking
the grammatically correct option.
## 5 Conclusions
In this work, we explore the effects of MT algorithms on the richness and
complexity of language. We establish that there is indeed a quantitatively
measurable difference between the linguistic richness of MT systems’ training
data and their output, a product of algorithmic bias. These findings are in
line with previous results described in Vanmassenhove et al. (2019). Assessing
diversity or richness in language is a multifaceted task spanning various
domains. As such, we approach it from multiple angles, focusing on lexical
diversity and sophistication, morphological variety, and a more
translation-specific metric focusing on synonymy. To do so, we analyse the
results of 9 different metrics, including established, newly proposed and
adapted ones. The metric suite we developed is unprecedented in the study of
MT quality, and we believe it could drive future research on MT evaluation.
Based on a wide range of experiments with 3 different MT architectures, we
draw the following main conclusions: (i) all 9 metrics indicate that the
original training data has more lexical and morphological diversity than the
translations produced by the MT systems; this holds for all language pairs and
directions; (ii) comparing the MT systems among themselves, there is a strong
indication (for most metrics) that Transformer models outperform the others in
terms of lexical and morphological richness. We should also note that, on
average, the ranking of the systems in terms of diversity metrics correlates
with the quality of the translations (in terms of BLEU and TER), something
that needs to be further explored in future work; (iii) the data for the
morphologically richer languages (ES, FR) has higher lexical and (evidently)
morphological diversity than the English data, both in the original data and
in the translations generated by all systems. However, for PB-SMT, LSTM and
TRANS the difference in scores is much smaller than for ORIG, indicating that
the MT systems have a stronger negative impact (in terms of diversity and
richness) on the morphologically richer languages.
## 6 Acknowledgements
We would like to thank the reviewers for their insightful comments and
feedback.
## References
* Aharoni et al. (2014) Roee Aharoni, Moshe Koppel, and Yoav Goldberg. 2014. Automatic detection of machine translated text and translation quality estimation. In _Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL2014), Volume 2: Short Papers_ , pages 289–295.
* Aranberri (2020) Nora Aranberri. 2020. Can translationese features help users select an MT system for post-editing? _Procesamiento del Lenguaje Natural_ , 64:93–100.
* Bachman (2004) Lyle F Bachman. 2004. _Statistical analyses for language assessment_. Cambridge University Press.
* Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In _Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015)_ , San Diego, CA, USA. 15pp.
* Baker (1999) Mona Baker. 1999. The role of corpora in investigating the linguistic behaviour of professional translators. _International journal of corpus linguistics_ , 4(2):281–298.
* Baroni and Bernardini (2006) Marco Baroni and Silvia Bernardini. 2006. A new approach to the study of translationese: Machine-learning the difference between original and translated text. _Literary and Linguistic Computing_ , 21(3):259–274.
* Behr et al. (2003) Frederic H. Behr, Victoria Fossum, Michael Mitzenmacher, and David Xiao. 2003. Estimating and comparing entropies across written natural languages using PPM compression. In _Proceedings of the 2003 Data Compression Conference (DCC 2003)_ , page 416, Snowbird, UT, USA.
* Bizzoni et al. (2020) Yuri Bizzoni, Tom S Juzek, Cristina España-Bonet, Koel Dutta Chowdhury, Josef van Genabith, and Elke Teich. 2020. How human is machine translationese? comparing human and machine translations of text and speech. In _Proceedings of the 17th International Conference on Spoken Language Translation (IWSLT2020)_ , pages 280–290, Online.
* Bolukbasi et al. (2016) Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In _Advances in neural information processing systems_ , pages 4349–4357.
* Britz et al. (2017) Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc Le. 2017. Massive Exploration of Neural Machine Translation Architectures. In _Proceedings of the Association for Computational Linguistics (ACL2017)_ , pages 1442–1451, Vancouver, Canada.
* Bulté (2013) Bram Bulté. 2013. The development of complexity in second language acquisition. _A dynamic systems approach (Unpublished doctoral dissertation)_.
* Bulté et al. (2008) Bram Bulté, Alex Housen, Michel Pierrard, and Siska Van Daele. 2008. Investigating lexical proficiency development over time–the case of dutch-speaking learners of french in brussels. _Journal of French Language Studies_ , 18(3):277–298.
* Caliskan-Islam et al. (2016) Aylin Caliskan-Islam, Joanna J Bryson, and Arvind Narayanan. 2016. Semantics derived automatically from language corpora necessarily contain human biases. _arXiv preprint arXiv:1608.07187_ , pages 1–14.
* Clark et al. (2011) Jonathan H Clark, Chris Dyer, Alon Lavie, and Noah A Smith. 2011. Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability. In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL2011): Human Language Technologies, Volume 2: Short Papers_ , pages 176–181, Portland, Oregon, USA.
* Crossley et al. (2013) Scott A Crossley, Tom Cobb, and Danielle S McNamara. 2013. Comparing count-based and band-based indices of word frequency: Implications for active vocabulary research and pedagogical applications. _System_ , 41(4):965–981.
* Daems et al. (2017) Joke Daems, Orphée De Clercq, and Lieve Macken. 2017. Translationese and post-editese: How comparable is comparable quality? _Linguistica Antverpiensia New Series-Themes in Translation Studies_ , 16:89–103.
* Fan et al. (2019) Lisa Fan, Marshall White, Eva Sharma, Ruisi Su, Prafulla Kumar Choubey, Ruihong Huang, and Lu Wang. 2019. In plain sight: Media bias through the lens of factual reporting. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP2019)_ , pages 6343–6349.
* Garg et al. (2018) Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. _Proceedings of the National Academy of Sciences_ , 115(16):E3635–E3644.
* Gellerstam (1986) Martin Gellerstam. 1986. Translationese in swedish novels translated from english. _Translation studies in Scandinavia_ , 1:88–95.
* Habash et al. (2019) Nizar Habash, Houda Bouamor, and Christine Chung. 2019. Automatic gender identification and reinflection in arabic. In _Proceedings of the First Workshop on Gender Bias in Natural Language Processing_ , pages 155–165.
* Ippolito (2014) Margherita Ippolito. 2014. _Simplification, Explicitation and Normalization: Corpus-Based Research into English to Italian Translations of Children’s Classics_. Cambridge Scholars Publishing.
* Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. In _Proceedings of the 3rd International Conference on Learning Representations: Poster Session_ , Banff, Canada.
* Klebanov and Flor (2013) Beata Beigman Klebanov and Michael Flor. 2013. Associative texture is lost in translation. In _Proceedings of the Workshop on Discourse in Machine Translation_ , pages 27–32, Sofia, Bulgaria.
* Koehn (2004) Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In _Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP2004)_ , pages 388–395, Barcelona, Spain.
* Koehn (2005) Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In _Proceedings of The Tenth Machine Translation Summit (MT Summit 2005)_ , pages 79–86, Phuket, Thailand.
* Koehn et al. (2007) Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open-Source Toolkit for Statistical Machine Translation. In _Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL2007)_ , pages 177–180, Prague, Czech Republic.
* Koppel and Ordan (2011) Moshe Koppel and Noam Ordan. 2011. Translationese and its dialects. In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL2011), Volume 1: Long Papers_ , pages 1318–1326.
* Kranich (2014) Svenja Kranich. 2014. Translations as a locus of language contact. In _Translation: A multidisciplinary approach_ , pages 96–115. Springer.
* Kyle (2019) Kristopher Kyle. 2019. Measuring lexical richness. _The Routledge Handbook of Vocabulary Studies_ , page 454.
* Läubli et al. (2018) Samuel Läubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? a case for document-level evaluation. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP2018)_ , pages 4791–4796, Brussels, Belgium.
* Laufer (1994) Batia Laufer. 1994. The lexical profile of second language writing: Does it change over time? _RELC journal_ , 25(2):21–33.
* Laufer and Nation (1995) Batia Laufer and Paul Nation. 1995. Vocabulary size and use: Lexical richness in l2 written production. _Applied linguistics_ , 16(3):307–322.
* McCarthy (2005) Philip M McCarthy. 2005. An Assessment of the Range and Usefulness of Lexical Diversity Measures and the Potential of the Measure of Textual, Lexical Diversity (MTLD). In _PhD Thesis, Dissertation Abstracts International, Volume 66:12_. University of Memphis, Memphis, Tennessee, USA.
* Merullo et al. (2019) Jack Merullo, Luke Yeh, Abram Handler, Alvin Grissom II, Brendan O’Connor, and Mohit Iyyer. 2019. Investigating sports commentator bias within a large corpus of American football broadcasts. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019)_ , pages 6355–6361, Hong Kong, China.
* Nation (1984) IS Paul Nation. 1984. _Vocabulary lists: Words, affixes, and stems_. English Language Institute, Victoria University of Wellington.
* Oakes and Ji (2013) Michael P Oakes and Meng (eds) Ji. 2013. Quantitative Methods in Corpus-Based Translation Studies: A Practical Guide to Descriptive Translation Research. In _Studies in Corpus Linguistics, Volume 51_ , page 361. John Benjamins Publishing Company, Amsterdam, The Netherlands.
* Och and Ney (2003) Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. In _Computational Linguistics, Volume 29:1_ , pages 19–51. MIT Press, Cambridge, Massachusetts, USA.
* Page (2007) Scott E. Page. 2007. _The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies_. Princeton University Press.
* Page (2011) Scott E. Page. 2011. _Diversity and Complexity_. Princeton University Press.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In _Proceedings of the 40th Annual Meeting on Association for Computational Linguistics (ACL 2002)_ , pages 311–318, Philadephia, PA, USA.
* Post (2018) Matt Post. 2018. A call for clarity in reporting BLEU scores. In _Proceedings of the Third Conference on Machine Translation (WMT 2018): Research Papers_ , pages 186–191, Belgium, Brussels.
* del Prado Martín et al. (2004) Fermín Moscoso del Prado Martín, Aleksandar Kostić, and R. Harald Baayen. 2004. Putting the bits together: An information theoretical perspective on morphological processing. _Cognition_ , 94(1):1–18.
* Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), Volume 1: Long Papers_ , pages 1715–1725, Berlin, Germany.
* Shannon (1948) Claude E Shannon. 1948. A mathematical theory of communication. _The Bell system technical journal_ , 27(3):379–423.
* Simpson (1949) Edward H Simpson. 1949. Measurement of diversity. _Nature_ , 163(4148):688–688.
* Snover et al. (2006) Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In _Proceedings of Association for Machine Translation in the Americas (AMTA 2006) 200:6_ , pages 223–231, Austin, Texas, USA.
* Stanovsky et al. (2019) Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 1679–1684, Florence, Italy.
* Sun et al. (2019) Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)_ , pages 1630–1640, Florence, Italy.
* Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In _Proceedings of Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems (NIPS 2014)_ , pages 3104–3112, Montreal, Quebec, Canada.
* Templin (1975) Mildred C. Templin. 1975. _Certain Language Skills in Children: Their Development and Interrelationships_. Greenwood Press, Westport, Connecticut, USA.
* Toral (2019) Antonio Toral. 2019. Post-editese: an exacerbated translationese. In _Proceedings of Machine Translation Summit XVII Volume 1: Research Track_ , pages 273–281, Dublin, Ireland. European Association for Machine Translation.
* Toral et al. (2018) Antonio Toral, Sheila Castilho, Ke Hu, and Andy Way. 2018. Attaining the unattainable? reassessing claims of human parity in neural machine translation. In _Proceedings of the Third Conference on Machine Translation (WMT 2018): Research Papers_ , pages 113–123, Brussels, Belgium. Association for Computational Linguistics.
* Vanmassenhove (2020) Eva Vanmassenhove. 2020. On the Integration of Linguistic Features into Statistical and Neural Machine Translation. _arXiv preprint arXiv:2003.14324_.
* Vanmassenhove and Hardmeier (2018) Eva Vanmassenhove and Christian Hardmeier. 2018. Europarl datasets with demographic speaker information.
* Vanmassenhove et al. (2018) Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2019)_ , pages 3003–3008, Brussels, Belgium.
* Vanmassenhove et al. (2019) Eva Vanmassenhove, Dimitar Shterionov, and Andy Way. 2019. Lost in translation: Loss and decay of linguistic richness in machine translation. In _Proceedings of Machine Translation Summit XVII (MT Summit 2019), Volume 1: Research Track_ , pages 222–232, Dublin, Ireland.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In _Proceedings of The Thirty-first Annual Conference on Neural Information Processing Systems 30 (NIPS 2017)_ , pages 5998–6008, Long Beach, CA, USA.
* Yule (1944) G. Udny Yule. 1944. _The Statistical Study of Literary Vocabulary_. Cambridge University Press.
* Zhao et al. (2017) Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), Volume 1: Long Papers_ , pages 654–664, Vancouver, Canada.
# SPEECH RECOGNITION BY SIMPLY FINE-TUNING BERT
###### Abstract
We propose a simple method for automatic speech recognition (ASR) by fine-
tuning BERT, which is a language model (LM) trained on large-scale unlabeled
text data and can generate rich contextual representations. Our assumption is
that given a history context sequence, a powerful LM can narrow the range of
possible choices and the speech signal can be used as a simple clue. Hence,
comparing to conventional ASR systems that train a powerful acoustic model
(AM) from scratch, we believe that speech recognition is possible by simply
fine-tuning a BERT model. As an initial study, we demonstrate the
effectiveness of the proposed idea on the AISHELL dataset and show that
stacking a very simple AM on top of BERT can yield reasonable performance.
Index Terms— speech recognition, BERT, language model
## 1 Introduction
Conventional automatic speech recognition (ASR) systems consist of multiple
separately optimized modules, including an acoustic model (AM), a language
model (LM) and a lexicon. In recent years, end-to-end (E2E) ASR models have
attracted much attention, due to the believe that jointly optimizing one
single model is beneficial to avoiding not only task-specific engineering but
also error propagation. Current main-stream E2E approaches include
connectionist temporal classification (CTC) [1], neural transducers [2], and
attention-based sequence-to-sequence (seq2seq) learning [3].
LMs play an essential role in ASR. Even the E2E models that implicitly
integrate an LM into optimization can benefit from LM fusion. It is therefore
worth asking: how can we make use of the full power of LMs? Consider a
situation in which we are in the middle of transcribing a speech utterance: we
have already correctly recognized a sequence of history words, and we want to
determine the next word being said. From a probabilistic point of view, a
strong LM can then generate a list of candidates, each of which is highly
likely to be the next word. The list may be so short that only one answer is
left. As a result, we need few or no clues from the speech signal to correctly
recognize the next word.
There has been rapid development of LMs in the field of natural language
processing (NLP), and one of the most epoch-making approaches is BERT [4]. Its
success comes from a framework in which a pretraining stage is followed by a
task-specific fine-tuning stage. Thanks to the un-/self-supervised objectives
adopted in pretraining, large-scale unlabeled datasets can be used for
training, making it possible to learn enriched language representations that
are powerful on various NLP tasks. BERT and its variants have created a
dominant paradigm in NLP in the past year [5, 6, 7].
In this work, we propose a novel approach to ASR, which is to simply fine-tune
a pretrained BERT model. Our method, which we call BERT-ASR, formulates ASR as
a classification problem, where the objective is to correctly classify the
next word given the acoustic speech signals and the history words. We show
that even an AM that simply averages the frame-based acoustic features
corresponding to a word can be applied to BERT-ASR to correctly transcribe
speech to a certain extent, and the performance can be further boosted by
using a more complex model.
Fig. 1: The input representation of the original BERT and the proposed BERT-ASR.

Fig. 2: Illustration of the decoding process of the proposed BERT-ASR.
## 2 BERT
BERT [4], which stands for Bidirectional Encoder Representations from
Transformers, is a pretraining method from a LM objective with a Transformer
encoder architecture [8]. The full power of BERT can be released only through
a pretraining–fine-tuning framework, where the model is first trained on a
large-scale unlabeled text dataset, and then all/some parameters are fine-
tuned on a labeled dataset for the downstream task.
The original usage of BERT mainly focused on NLP tasks, spanning token-level
and sequence-level classification, including question answering [9, 10],
document summarization [11, 12], information retrieval [13, 14], and machine
translation [15, 16], just to name a few. There have also been attempts to
incorporate BERT into ASR, including rescoring [17, 18] and generating soft
labels for training [19]. In this section, we review the fundamentals of BERT.
### 2.1 Model architecture and input representations
BERT adopts a multi-layer Transformer [8] encoder, where each layer contains a
multi-head self-attention sublayer followed by a positionwise fully connected
feedforward network. Each layer is equipped with residual connections and
layer normalization.
The input representation of BERT was designed to handle a variety of
downstream tasks, as visualized in Figure 1. First, a token embedding is
assigned to each token in the vocabulary. Some special tokens were added to
the original BERT, including a classification token ([CLS]), prepended to
every sequence, whose final hidden state is used as the aggregate sequence
representation for classification tasks, and a separation token ([SEP]) for
separating two sentences. Second, a segment embedding is added to every token
to indicate whether it belongs to sentence A or B. Finally, a learned
positional embedding is added so that the model is aware of the relative or
absolute position of each token. The input representation for every token is
then constructed by summing the corresponding token, segment, and position
embeddings.
### 2.2 Training and fine-tuning
Two self-supervised objectives were used for pretraining BERT. The first one
is masked language modeling (MLM), a denoising objective that asks the model
to reconstruct randomly masked input tokens based on context
information. Specifically, 15% of the input tokens are first chosen. Then,
each token is (1) replaced with [MASK] for 80% of the time, (2) replaced with
a random token for 10% of the time, (3) kept unchanged for 10% of the time.
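The masking rule above can be sketched as follows; this is a simplified illustration with hypothetical names, not the original BERT implementation (which operates on token IDs and also emits prediction labels):

```python
import random

def mlm_mask(tokens, vocab, mask_rate=0.15, seed=0):
    """Apply the BERT MLM corruption: pick ~15% of positions, then
    80% -> [MASK], 10% -> random vocabulary token, 10% -> unchanged."""
    rng = random.Random(seed)
    out = list(tokens)
    n = max(1, round(mask_rate * len(tokens)))
    chosen = rng.sample(range(len(tokens)), n)
    for i in chosen:
        r = rng.random()
        if r < 0.8:
            out[i] = "[MASK]"
        elif r < 0.9:
            out[i] = rng.choice(vocab)
        # else: keep the original token
    return out, chosen
```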
During fine-tuning, depending on the downstream task, minimal task-specific
parameters are introduced so that fine-tuning can be cheap in terms of data
and training efforts.
## 3 Proposed Method
In this section, we explain how we fine-tune a pretrained BERT to formulate an
LM, and then further extend it to consume acoustic speech signals to perform
ASR.
Assume we have an ASR training dataset containing $N$ speech utterances:
$\bm{D}_{\text{ASR}}=\langle\bm{X}^{(i)},\bm{y}^{(i)}\rangle_{i=1}^{N}$, with
each $\bm{y}=(y_{1},...,y_{T})$ being the transcription consisting of $T$
tokens, and each $\bm{X}=(\bm{x}_{1},...,\bm{x}_{T^{\prime}})$ denoting a
sequence of $T^{\prime}$ input acoustic feature frames. The acoustic features
are of dimension $d$, i.e., $\bm{x}_{t}\in\mathbb{R}^{d}$, and the tokens are
from a vocabulary of size $V$.
### 3.1 Training a probabilistic LM with BERT
We first show how we formulate a probabilistic LM using BERT, which we will
refer to as BERT-LM. The probability of observing a symbol sequence $\bm{y}$
can be formulated as:
$\displaystyle
P(\bm{y})=P(y_{1},\dots,y_{T})=\prod_{t=1}^{T}{P(y_{t}|y_{1},\dots,y_{t-1})}.$
(1)
The decoding (or scoring) of a given symbol sequence then becomes an iterative
process that calculates all the terms in the product, which is illustrated in
Figure 2. At the $t$-th time step, the BERT model takes a sequence of
previously decoded symbols and the [CLS] token as input, i.e.,
$(\texttt{[CLS]},y_{1},\dots,y_{t-1})$. Then, the final hidden state
corresponding to [CLS] is sent into a linear classifier, which then outputs
the probability distribution $P(y_{t}|y_{1},\dots,y_{t-1})$.
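The iterative scoring of Equation (1) can be sketched with a placeholder `next_token_probs` callable standing in for the BERT forward pass plus linear classifier (a hypothetical interface for illustration):

```python
import math

def sequence_logprob(tokens, next_token_probs):
    """Score a sequence via the chain rule: sum of log P(y_t | history)."""
    prefix = ["[CLS]"]
    total = 0.0
    for tok in tokens:
        dist = next_token_probs(prefix)   # dict mapping token -> probability
        total += math.log(dist[tok])
        prefix.append(tok)
    return total
```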
To train the model, assume we have a training dataset with $N$ sentences:
$\bm{D}_{\text{text}}=\{\bm{y}^{(i)}\}_{i=1}^{N}$. An essential technique to
train the model is to exhaustively enumerate all possible training samples.
That is, each sentence with $T$ symbols is extended to $T$ different training
samples following the rule in Equation (2):
$\displaystyle(y_{1},...,y_{T})\to\begin{cases}(\texttt{[CLS]})\\
(\texttt{[CLS]},y_{1})\\ (\texttt{[CLS]},y_{1},y_{2})\\ \dots\\
(\texttt{[CLS]},y_{1},\dots,y_{T-1})\end{cases}.$ (2)
The training of the BERT-LM then becomes simply minimizing the following
cross-entropy objective:
$\displaystyle\mathcal{L}_{\text{LM}}=-\sum_{i=1}^{N}\sum_{t=1}^{T}\log P(y^{(i)}_{t}|\texttt{[CLS]},y^{(i)}_{1},\dots,y^{(i)}_{t-1}).$
(3)
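The prefix-expansion rule of Equation (2) is straightforward to implement: each sentence of $T$ tokens yields one (history, target) pair per token. A minimal sketch:

```python
def make_lm_samples(tokens):
    """Expand a sentence into T (prefix, target) training pairs,
    following the prefix-expansion rule of Equation (2)."""
    return [(["[CLS]"] + tokens[:t], tokens[t]) for t in range(len(tokens))]
```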
Fig. 3: Illustration of the alignment between the text and the acoustic frames.

Fig. 4: The average encoder.

Fig. 5: The conv1d resnet encoder.
### 3.2 BERT-ASR
We introduce our proposed BERT-ASR in this section. Since the time resolutions
of text and speech are at completely different scales, for the model described
in Section 3.1 to be capable of taking acoustic features as input, we first
assume that we know the nonlinear alignment between an acoustic feature
sequence and the corresponding text, as depicted in Figure 3. Specifically,
for a pair of training transcription and acoustic feature sequence
$\langle(\bm{x}_{1},\dots,\bm{x}_{T^{\prime}}),(y_{1},\dots,y_{T})\rangle$, we
denote $\bm{F}_{i}$ to be the segment of features corresponding to $y_{i}$:
$\bm{F}_{i}=(\bm{x}_{t_{i-1}+1},\dots,\bm{x}_{t_{i}})\in\mathbb{R}^{(t_{i}-t_{i-1})\times d}$, which spans frames $t_{i-1}+1$ to $t_{i}$, with $t_{0}=0$. Thus,
the $T^{\prime}$ acoustic frames can be segmented into $T$ groups:
$\bm{X}=(\bm{F}_{1},\dots,\bm{F}_{T})$, and a new dataset containing segmented
acoustic feature and text pairs can be obtained:
$\bm{D}_{\text{seg}}=\langle\bm{F}^{(i)},\bm{y}^{(i)}\rangle_{i=1}^{N}$.
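Assuming the alignment end-frames $(t_{1},\dots,t_{T})$ are known, the segmentation step can be sketched as follows (an illustrative sketch in plain Python, not the authors' code):

```python
def segment_frames(frames, boundaries):
    """Split a frame sequence x_1..x_{T'} into T segments F_1..F_T.
    boundaries = (t_1, ..., t_T) are alignment end frames with t_0 = 0,
    so F_i = (x_{t_{i-1}+1}, ..., x_{t_i})."""
    segments, prev = [], 0
    for t in boundaries:
        segments.append(frames[prev:t])
        prev = t
    return segments

segs = segment_frames(list(range(1, 11)), [3, 7, 10])
# -> [[1, 2, 3], [4, 5, 6, 7], [8, 9, 10]]
```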
We can then augment the BERT-LM into BERT-ASR by injecting the acoustic
information extracted from $\bm{D}_{\text{seg}}$ into BERT-LM. Specifically,
as depicted in Figure 1, an acoustic encoder, which will be described later,
consumes the raw acoustic feature segments to generate the acoustic
embeddings. They are summed with the three types of embeddings in the original
BERT, and further sent into BERT. Note that the acoustic embedding
corresponding to the current word to be transcribed is added to the [CLS]
token as shown in Figure 2.
The probability of observing a symbol sequence $\bm{y}$ in Equation (1) can therefore be reformulated as:
$\displaystyle
P(\bm{y})=\prod_{t=1}^{T}{P(y_{t}|y_{1},\dots,y_{t-1},\bm{F}_{1},\dots,\bm{F}_{t})}.$
(4)
Note that the acoustic segment of the current time step is also taken into
consideration, which is essential for the model to correctly transcribe the
current word being said. The training objective can be derived by
reformulating Equation (3) as:
$\displaystyle\mathcal{L}_{\text{ASR}}=-\sum_{i=1}^{N}\sum_{t=1}^{T}\log P(y^{(i)}_{t}|\langle\texttt{[CLS]},\bm{F}_{t}\rangle,\langle y^{(i)}_{1},\bm{F}_{1}\rangle,\dots,\langle y^{(i)}_{t-1},\bm{F}_{t-1}\rangle).$ (5)
In a nutshell, the training of BERT-ASR involves three steps:
1. Pretrain a BERT using a large-scale text dataset.
2. Fine-tune a BERT-LM on the transcriptions of the ASR training dataset, as described in Section 3.1.
3. Fine-tune a BERT-ASR using both the text and speech data of the ASR training dataset.
### 3.3 Acoustic encoder
We now describe two kinds of architectures for the acoustic encoder mentioned
in Section 3.2. Formally, the acoustic encoder takes the whole acoustic frame
sequence $\bm{X}$ as input, and outputs the corresponding acoustic embeddings
$(AE_{1},\dots,AE_{T})$, where $AE_{t}\in\mathbb{R}^{d_{model}}$ with
$d_{model}$ being the BERT embedding dimension. The acoustic encoder must include a segmentation layer that produces the acoustic segments.
#### 3.3.1 Average encoder
We first consider a very simple average encoder, as depicted in Figure 4.
First, segmentation is performed. Then, for each $\bm{F}_{t}$, we average over the time axis, and the resulting vector is passed through a linear layer to distill the useful information while scaling the dimension from $d$ to $d_{model}$. Simple as it is, as we will show later, this average encoder already yields initial results.
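A minimal NumPy sketch of the average encoder (ours, illustrative only; the weight matrix and dimensions are stand-ins for the learned linear layer):

```python
import numpy as np

def average_encoder(segments, W, b):
    """Average encoder: mean-pool each segment F_t over the time axis,
    then a shared linear layer maps dimension d -> d_model.
    W has shape (d, d_model); b has shape (d_model,)."""
    return [np.mean(F, axis=0) @ W + b for F in segments]

d, d_model = 4, 6
rng = np.random.default_rng(0)
segments = [rng.normal(size=(n, d)) for n in (3, 5, 2)]   # three segments
W, b = rng.normal(size=(d, d_model)), np.zeros(d_model)
embeddings = average_encoder(segments, W, b)
# one d_model-dimensional acoustic embedding per segment
```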
#### 3.3.2 Conv1d resnet encoder
The drawback of the average encoder is that temporal dependencies between different acoustic segments are not considered. Therefore, we investigate a second encoder, which we will refer to as the conv1d resnet encoder, as illustrated in Figure 5. While it has the same segment and linear layers as the average encoder, we add $L$ learnable residual blocks that operate on $\bm{X}$. Each residual block contains two conv1d layers over the time axis, each followed by a ReLU activation. We expect that taking the temporal relationship between segments into account can boost performance.
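One residual block can be sketched in NumPy (ours, illustrative only; kernel width, channel sizes, and the 'same'-padding convention are assumptions, not taken from the paper):

```python
import numpy as np

def conv1d(x, kernel):
    """'Same' 1-D convolution along the time axis.
    x: (T, d); kernel: (w, d, d_out) with odd width w."""
    w, _, d_out = kernel.shape
    pad = w // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([np.einsum('wd,wde->e', xp[t:t + w], kernel)
                     for t in range(x.shape[0])])

def residual_block(x, k1, k2):
    """One block of the conv1d resnet encoder: two conv1d layers, each
    followed by ReLU, plus a skip connection preserving the (T, d) shape."""
    h = np.maximum(conv1d(x, k1), 0.0)
    h = np.maximum(conv1d(h, k2), 0.0)
    return x + h

rng = np.random.default_rng(1)
x = rng.normal(size=(10, 8))           # T=10 frames, d=8 features
k = rng.normal(size=(3, 8, 8)) * 0.1   # width-3 kernels
y = residual_block(x, k, k)
```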
Table 1: Results on the AISHELL-1 dataset. "Orac." and "Prac." denote oracle decoding and practical decoding, respectively. "Conv1d resnet X" denotes the conv1d resnet encoder with X resnet blocks.

Model | Acoustic encoder | PPL (Dev) | PPL (Test) | CER Orac. (Dev) | CER Orac. (Test) | CER Prac. (Dev) | CER Prac. (Test) | SER Orac. (Dev) | SER Orac. (Test) | SER Prac. (Dev) | SER Prac. (Test)
---|---|---|---|---|---|---|---|---|---|---|---
Trigram-LM | - | 133.32 | 127.88 | - | - | - | - | - | - | - | -
LSTM-LM | - | 79.97 | 78.80 | - | - | - | - | - | - | - | -
BERT-LM | - | 39.74 | 41.72 | - | - | - | - | - | - | - | -
BERT-ASR | Average | 5.88 | 9.02 | 65.8 | 68.9 | 96.4 | 105.8 | 60.3 | 63.5 | 91.5 | 100.3
BERT-ASR | Conv1d resnet 1 | 4.91 | 7.63 | 55.8 | 59.0 | 89.6 | 99.6 | 50.0 | 53.8 | 84.4 | 94.1
BERT-ASR | Conv1d resnet 2 | 4.77 | 6.94 | 54.6 | 58.8 | 89.7 | 99.1 | 49.5 | 53.6 | 84.6 | 93.5
BERT-ASR | Conv1d resnet 3 | 4.83 | 7.41 | 54.8 | 58.9 | 89.8 | 99.4 | 49.6 | 53.6 | 84.6 | 93.9
BERT-ASR | Conv1d resnet 4 | 4.78 | 7.29 | 54.6 | 59.0 | 89.5 | 99.3 | 49.4 | 53.9 | 84.4 | 93.8
GMM-HMM | - | - | - | 10.4 | 12.2 | - | - | - | - | - | -
DNN-HMM | - | - | - | 7.2 | 8.4 | - | - | - | - | - | -
## 4 Experiments
### 4.1 Experimental settings
We evaluate the proposed method on the AISHELL-1 dataset [20], which contains 170 hours of Mandarin speech. We used the Kaldi toolkit [21] to extract 80-dim log Mel-filterbank features plus 3-dim pitch features and normalized them. The training
data contained around 120k utterances, and the exhaustive enumeration process
described in Section 3.1 resulted in 1.7M training samples. For the first step
of the proposed BERT-ASR, i.e., pretraining a BERT model using a large-scale
text dataset (cf. Section 3.2), we adopt an updated version of BERT that uses whole word masking (WWM), whose effectiveness was verified in [22]. The major difference between the updated BERT and the classic BERT lies in the masking procedure of MLM training: if a masked token belongs to a word, then all the tokens that make up that word are masked together. This is a much more challenging task, since the model is forced to recover whole words rather than individual tokens. We directly used the hfl/chinese-bert-wwm pretrained model provided by [22]111https://github.com/ymcui/Chinese-BERT-wwm, which was trained on Chinese Wikipedia. The modeling unit was the Mandarin character. We conducted the experiments using the HuggingFace Transformers
toolkit [23]. The alignment we used during training was obtained by forced
alignment with an HMM/DNN model trained on the same AISHELL-1 training set.
We considered two decoding scenarios with respect to the alignment strategy. First, in oracle decoding we assumed that the alignment is accessible. Second, to match a practical decoding setting, as a naive attempt, we assumed that the
alignment between each utterance and the underlying text is linear, and
partitioned the acoustic frames into segments of equal lengths. The length was
calculated as the average number of frames per word in the training set, which
was 25 frames. In both scenarios, we considered beam decoding with a beam size
of 10.
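The beam decoding used in both scenarios can be sketched as follows (ours, illustrative only; the toy scorer stands in for the BERT-ASR probabilities $P(y_{t}|y_{<t},\bm{F}_{\leq t})$):

```python
import math

def beam_decode(step_logprobs, T, beam_size=10):
    """Left-to-right beam search: at each step, extend every beam with every
    symbol, score by accumulated log-probability, and keep the top beams.
    step_logprobs(prefix, t) returns {symbol: log-probability}."""
    beams = [((), 0.0)]
    for t in range(T):
        candidates = [(prefix + (sym,), score + lp)
                      for prefix, score in beams
                      for sym, lp in step_logprobs(prefix, t).items()]
        beams = sorted(candidates, key=lambda c: -c[1])[:beam_size]
    return beams[0][0]

# toy scorer over a two-symbol vocabulary that always slightly prefers "a"
toy = lambda prefix, t: {"a": math.log(0.6), "b": math.log(0.4)}
best = beam_decode(toy, T=3, beam_size=10)
# -> ("a", "a", "a")
```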
### 4.2 Main results
We report the perplexity (PPL) and character error rate (CER) in Table 1, with the former being a metric for comparing LMs and the latter for comparing ASR systems. As a reference, we first
compared the PPL between different LMs. It can be clearly observed that the
BERT-LM outperformed the conventional trigram-LM and LSTM-LM, again showing
the power of BERT as a LM.
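For reference, CER is the edit (Levenshtein) distance between hypothesis and reference character sequences, normalized by the reference length; this also explains how values above 100% (as in Table 1) can occur. A standard implementation (ours, not the paper's evaluation script):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two symbol sequences, O(len(hyp)) memory."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution / match
    return dp[-1]

def cer(refs, hyps):
    """Character error rate in percent: total edits / total reference chars."""
    edits = sum(edit_distance(r, h) for r, h in zip(refs, hyps))
    return 100.0 * edits / sum(len(r) for r in refs)

print(cer(["abcd"], ["abed"]))   # one substitution in four chars -> 25.0
```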
We then compare BERT-ASR with BERT-LM. By using a simple average encoder, a
significantly lower PPL could be obtained, showing that using acoustic clues
can greatly help guide recognition. Moreover, models with a complex acoustic
encoder like the conv1d resnet encoder could further reduce PPL. Looking at
the CERs, we observed that even with the simple average encoder, a preliminary
success could still be obtained. Furthermore, the conv1d resnet encoders
reduced the CER by almost 10%, showing that it is essential to have access to
global temporal dependencies before segmentation.
We finally consider the practical decoding scenario. There is a significant performance degradation with the equal-length segmentation, which evidences the nonlinearity of the alignment. Thus, finding an alignment-free approach will be urgent future work [24, 25]. The performance of two
conventional ASR systems directly from the original paper of AISHELL-1 [20]
are also listed, and a significant gap exists between our method and the
baselines, showing that there is still much room for improvement.
Nevertheless, to the best of our knowledge, this is the first study to obtain
an ASR system by fine-tuning a pretrained large-scale LM. Moreover, it is worth noting that the proposed method is readily applicable to n-best re-scoring [17], though in this paper we mainly focus on building an ASR system.
We thus leave this as future work.
### 4.3 Error Analysis
In this section, we examine two possible reasons for the currently unsatisfactory
results.
#### 4.3.1 Homophones in Mandarin
Mandarin is a character-based language in which the same pronunciation can map to different characters (homophones). As our method is based on a character-level BERT, it might be infeasible for the model to learn to map the same acoustic signal to different characters. To examine whether our model actually suffered from
this problem, the syllable error rates (SERs) were calculated and reported in
Table 1. It is obvious that the SERs are much lower than the CERs, showing the
existence of this problem. Thus, learning phonetically-aware representations
will be a future direction.
Table 2: Development set results when a ratio of the leading characters is assumed correctly recognized.

Model | Ratio | CER | SER
---|---|---|---
Conv1d resnet 3 | 0 | 54.8 | 49.6
Conv1d resnet 3 | 1/3 | 61.9 | 55.3
Conv1d resnet 3 | 1/2 | 57.3 | 51.4
#### 4.3.2 Error propagation
BERT benefits from the self-attention mechanism and is therefore known for its ability to capture global relationships and long-term dependencies. The full power of BERT may not be exerted with a relatively short context; that is to say, our BERT-ASR can be poor at the earlier decoding steps. As a result, errors at the beginning might propagate through the recursive decoding process.
To examine this problem, we assume that the starting characters up to a
certain ratio are correctly recognized, and start the decoding process
depending on those characters. Although we expected the error rates to
decrease as the ratio increases, as shown in Table 2, the CERs and SERs were
not lower. Thus, we conclude that error propagation was not a major issue.
## 5 Conclusion
In this work, we proposed a novel approach to ASR by simply fine-tuning BERT,
and described the detailed formulation and several essential techniques. To
verify the proposed BERT-ASR, we demonstrated initial results on the Mandarin
AISHELL-1 dataset, and analyzed two possible sources of error. In the future,
we will investigate more complex model architectures and the possibility of
multi-task learning, in order to close the gap between our and conventional
ASR systems. We also plan to evaluate the BERT-ASR on other languages, and
apply the proposed method for n-best re-scoring [17].
## References
* [1] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, “Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks,” in Proc. ICML, 2006, pp. 369–376.
* [2] A. Graves, “Sequence transduction with recurrent neural networks,” arXiv preprint arXiv:1211.3711, 2012.
* [3] W. Chan, N. Jaitly, Q. Le, and O. Vinyals, “Listen, attend and spell: A neural network for large vocabulary conversational speech recognition,” in Proc. ICASSP, 2016, pp. 4960–4964.
* [4] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” in Proc. NAACL-HLT, 2019, pp. 4171–4186.
* [5] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, “Improving language understanding by generative pre-training,” 2018.
* [6] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, “Roberta: A robustly optimized bert pretraining approach,” arXiv preprint arXiv:1907.11692, 2019.
* [7] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, “Albert: A lite bert for self-supervised learning of language representations,” arXiv preprint arXiv:1909.11942, 2019.
* [8] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is All you Need,” in Proc. NeurIPS, 2017, pp. 5998–6008.
* [9] W. Liu, P. Zhou, Z. Zhao, Z. Wang, Q. Ju, H. Deng, and P. Wang, “K-BERT: Enabling language representation with knowledge graph,” in Proc. AAAI, 2020.
* [10] C. Qu, L. Yang, M. Qiu, W. B. Croft, Y. Zhang, and M. Iyyer, “Bert with history answer embedding for conversational question answering,” in Proc. SIGIR, 2019, pp. 1133–1136.
* [11] Y. Liu, “Fine-tune bert for extractive summarization,” arXiv preprint arXiv:1903.10318, 2019.
* [12] J. Xu, Z. Gan, Y. Cheng, and J. Liu, “Discourse-aware neural extractive text summarization,” in Proc. ACL, 2020, pp. 5021–5031.
* [13] W. Lu, J. Jiao, and R. Zhang, “Twinbert: Distilling knowledge to twin-structured bert models for efficient retrieval,” arXiv preprint arXiv:2002.06275, 2020.
* [14] R. Nogueira and K. Cho, “Passage re-ranking with bert,” arXiv preprint arXiv:1901.04085, 2019.
* [15] J. Zhu, Y. Xia, L. Wu, D. He, T. Qin, W. Zhou, H. Li, and T. Liu, “Incorporating bert into neural machine translation,” in Proc. ICLR, 2020.
* [16] T. Zhang*, V. Kishore*, F. Wu*, K. Q. Weinberger, and Y. Artzi, “Bertscore: Evaluating text generation with bert,” in Proc. ICLR, 2020.
* [17] J. Shin, Y. Lee, and K. Jung, “Effective sentence scoring method using bert for speech recognition,” in Proc. ACML, 2019, vol. 101, pp. 1081–1093.
* [18] J. Salazar, D. Liang, T. Q. Nguyen, and K. Kirchhoff, “Masked language model scoring,” in Proc. ACL, July 2020, pp. 2699–2712.
* [19] H. Futami, H. Inaguma, S. Ueno, M. Mimura, S. Sakai, and T. Kawahara, “Distilling the knowledge of bert for sequence-to-sequence asr,” arXiv preprint arXiv:2008.03822, 2020.
* [20] H. Bu, J. Du, X. Na, B. Wu, and H. Zheng, “Aishell-1: An open-source mandarin speech corpus and a speech recognition baseline,” in Proc. O-COCOSDA, 2017, pp. 1–5.
* [21] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, J. Silovsky, G. Stemmer, and K. Vesely, “The kaldi speech recognition toolkit,” in Proc. ASRU, Dec. 2011.
* [22] Y. Cui, W. Che, T. Liu, B. Qin, S. Wang, and G. Hu, “Revisiting pre-trained models for chinese natural language processing,” in Proc. Findings of EMNLP, 2020.
* [23] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, et al., “Huggingface’s transformers: State-of-the-art natural language processing,” arXiv preprint arXiv:1910.03771, 2019.
* [24] N. Moritz, T. Hori, and J. L. Roux, “Triggered attention for end-to-end speech recognition,” in Proc. ICASSP, 2019, pp. 5666–5670.
* [25] L. Dong and B. Xu, “Cif: Continuous integrate-and-fire for end-to-end speech recognition,” in Proc. ICASSP, 2020, pp. 6079–6083.
# The model theory of residue rings of models of Peano Arithmetic: The prime
power case
P. D’Aquino111Dipartimento di Matematica e Fisica, Università della Campania
“L. Vanvitelli”, viale Lincoln, 5, 81100 Caserta, Italy, e-mail:
<EMAIL_ADDRESS>and A. Macintyre222Partially supported by a
Leverhulme Emeritus Fellowship.
School of Mathematics, University of Edinburgh, King’s Buildings, Peter Guthrie Tait Road, Edinburgh EH9 3FD, UK, e-mail: <EMAIL_ADDRESS>
Keywords: models of Peano Arithmetic, ultraproducts, local rings, valuations
and generalizations
###### Abstract
In [25] the second author gave a systematic analysis of definability and
decidability for rings $\mathcal{M}/p\mathcal{M}$, where $\mathcal{M}$ is a
model of Peano Arithmetic and $p$ is a prime in $\mathcal{M}$. In the present
paper we extend those results to the more difficult case of
$\mathcal{M}/p^{k}\mathcal{M}$, where $\mathcal{M}$ is a model of Peano
Arithmetic, $p$ is a prime in $\mathcal{M}$, and $k>1$. In [25] work of Ax on finite fields was used; here we use in addition work of Ax on ultraproducts of the $p$-adics.
## 1 Introduction
### 1.1
Boris Zilber asked the second author the following question:
If $\mathcal{M}$ is a nonstandard model of full Arithmetic, and $n$ is a
nonstandard element of $\mathcal{M}$ congruent to $1$ modulo all standard
integers $m$, does the ring $\mathcal{M}/n\mathcal{M}$ interpret Arithmetic?
The problem arose in work of Zilber on commutator relations in quantum
mechanics, see [29].
Although until recently little attention had been given to rings
$\mathcal{M}/n\mathcal{M}$, one would naturally expect that
$\mathcal{M}/n\mathcal{M}$ is much simpler than $\mathcal{M}$, for example
being internally finite in $\mathcal{M}$, and thus $\mathcal{M}$ should not be
interpretable in $\mathcal{M}/n\mathcal{M}$, so Zilber’s Problem should have a
negative answer.
This is in fact how things turn out. From the outset of our work we looked at
much more general problems connected to definability in the rings
$\mathcal{M}/n\mathcal{M}$, to emphasize the vast gulf between the residue
rings $\mathcal{M}/n\mathcal{M}$ and $\mathcal{M}$. We chose to work with
models $\mathcal{M}$ of Peano Arithmetic, where one inevitably encounters
Gödelian phenomena, and contrast $\mathcal{M}$ with the ring
$\mathcal{M}/n\mathcal{M}$ where one is in a strongly non-Gödelian setting,
mainly because of Ax’s work from [5], already exploited by Macintyre in [25]
to give a detailed analysis of the case when $n$ is prime.
After a lecture of Macintyre in 2015 Tom Scanlon pointed out that mere non-
interpretability can easily be obtained via the interpretation of
$\mathcal{M}/n\mathcal{M}$ as a nonstandard initial segment $[0,n-1]$, using
the definable pigeonhole principle imposed by $\mathcal{M}$ on $\mathcal{M}/n\mathcal{M}$. This is certainly the first completely clear
solution to Zilber’s Problem. But it seems to have no implications for getting
close to exact bounds for complexity of definitions in the
$\mathcal{M}/n\mathcal{M}$.
### 1.2
Our work on the $\mathcal{M}/n\mathcal{M}$ has three clearly defined stages,
of increasing difficulty.
Stage 1. $n$ prime. Then if $n$ is a standard prime $p$,
$\mathcal{M}/n\mathcal{M}\cong\mathbb{F}_{p}$, while if $n$ is a nonstandard
prime then $\mathcal{M}/n\mathcal{M}$ is a pseudofinite field (in Ax’s sense
[5]) of characteristic $0$. See [25] for this case, and the cited properties.
Pseudofinite fields of characteristic $0$ are models of the theory of all
finite prime fields, and this theory $PrimeFin$ has a nice set of first-order
axioms which we now sketch briefly.
A field $K$ is a model of $PrimeFin$ if
$1)$ $K$ is perfect
$2)$ $K$ has exactly one extension of each finite degree $n$
$3_{d})$ if $\Gamma$ is an absolutely irreducible plane curve of degree $d$
and $|K|>(d-1)^{4}$ then $\Gamma$ has a point in $K$
$4_{d})$ if $|K|\leq(d-1)^{4}$ then $|K|=p$ for some prime $p\leq(d-1)^{4}$
for each $0<d\in\mathbb{N}$.
The theory is decidable, and all models are pseudofinite, in a sense weaker than Ax’s, namely that each model is elementarily equivalent to an ultraproduct, possibly principal, of finite fields. Finally, each model of the
theory is elementarily equivalent to some $\mathcal{M}/n\mathcal{M}$, where
$\mathcal{M}\equiv\mathbb{Z}$.
Noninterpretability of $\mathcal{M}$ in any model of the theory can be seen via Hrushovski’s result that pseudofinite fields are simple; see [22].
Stage 2. $n=p^{k}$, $p$ prime, and $k>1$. This is covered in the present
paper, and is indispensable for Stage 3.
Stage 3. $n$ arbitrary. This will be considered in the last paper of the
series. We use the factorization theory in $\mathcal{M}$ of $n$ as an internal
finite product of prime powers, and an internal representation of
$\mathcal{M}/n\mathcal{M}$ as $\prod_{p|n}\mathcal{M}/p^{v_{p}(n)}\mathcal{M}$
(see [11], [7]). This gets combined with an internalized Feferman-Vaught
Theorem (see [21]), and the work of Stage 2, to get analogues of the main
results of Stage 1 and 2, and thereby give a thorough analysis in all cases of
the definability theory and axiomatizability of the
$\mathcal{M}/n\mathcal{M}$, with spin-offs on pseudofiniteness.
It is notable in Stage 2 that the $\mathcal{M}/p^{k}\mathcal{M}$ are Henselian local rings, and models of a natural set of axioms involving truncated ordered abelian groups, TOAGs (see [12]).
We make heavy use of many classical results on Henselian valuation rings, and
truncate them to get results for our $\mathcal{M}/p^{k}\mathcal{M}$. This is
not routine. Though we know a huge amount about the logic of Henselian fields
we know rather little about the logic of general Henselian local rings (even
familiar finite local rings [16]).
## 2 Algebraic properties of $\mathcal{M}/p^{k}\mathcal{M}$
### 2.1 $\mathcal{M}$ standard
Already for $\mathcal{M}$ standard there are nontrivial issues of decidability
(and definability). Let $n=p^{k}$, where $p$ is a prime and $k\geq 1$. It is
well-known that $\mathbb{Z}/p^{k}\mathbb{Z}$ is a Henselian local ring [26].
The ideals in the ring of $p$-adic integers $\mathbb{Z}_{p}$ are generated by
powers of $p$. Moreover, $\mathbb{Z}/p^{k}\mathbb{Z}$ is isomorphic in a
natural way to $\mathbb{Z}_{p}/p^{k}\mathbb{Z}_{p}$. For any fixed prime $p$
there is an existential formula in the language of rings defining
$\mathbb{Z}_{p}$ in $\mathbb{Q}_{p}$. A uniform definition of $\mathbb{Z}_{p}$
in $\mathbb{Q}_{p}$ without reference to the prime needs an existential-
universal formula, see [10]. Ax in [5] proved that the theory of the class of
valued fields $\mathbb{Q}_{p}$ as $p$ varies among the primes is decidable.
So, the theory of the class of all $\mathbb{Z}/p^{k}\mathbb{Z}$ as $p$ varies
over all primes and $k$ over all positive integers, is also decidable.
Remark 1. Whether or not $\mathcal{M}$ is standard, if $n$ is divisible by
more than one prime, $\mathcal{M}/n\mathcal{M}$ is not local. If $n$ is
divisible by finitely many primes then $\mathcal{M}/n\mathcal{M}$ is
semilocal. In general for $\mathcal{M}$ nonstandard,
$\mathcal{M}/n\mathcal{M}$ is nonstandard semilocal if $n$ is divisible by
infinitely many primes (and there are always such $n$ when $\mathcal{M}$ is
nonstandard).
Remark 2. We note that for any $\mathcal{M}$, and prime $p$, standard or
nonstandard, $\mathcal{M}$ is not Henselian with respect to the $p$-adic
valuation $v_{p}$ on $\mathcal{M}$. First, suppose $p\not=2$. If $v_{p}$ is
Henselian then $1+p$ is a square, so $1+p=y^{2}$ for some $y\in\mathcal{M}$.
Hence, $(y-1)(y+1)=p$, and so either
a) $y-1=1$ and $y+1=p$;
b) $y-1=-1$ and $y+1=-p$;
c) $y-1=p$ and $y+1=1$;
d) $y-1=-p$ and $y+1=-1$.
a) implies $p=3$. b) and c) are impossible. d) implies $p=3$.
So suppose $p=3$. Then, if Hensel’s lemma holds, $1+2\cdot 3=7$ is a square.
Clearly, $PA$ implies $7$ is not a square.
Finally, assume $p=2$. If Hensel’s lemma holds then $1+8\cdot h$ is a square
for all $h$, so $17$ is a square, impossible in $PA$.
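Remark 2 turns on the fact that $1+8h$ is always a square $2$-adically, while e.g. $17$ is not a square in any model of $PA$. A quick numeric sketch (ours, in Python, not part of the paper) builds the $2$-adic root bit by bit:

```python
def sqrt_mod_2k(a, k):
    """Square root of a ≡ 1 (mod 8) modulo 2^k, built bit by bit:
    if y^2 ≡ a (mod 2^j) with y odd and j >= 3, then exactly one of
    y, y + 2^(j-1) squares to a modulo 2^(j+1)."""
    assert a % 8 == 1 and k >= 3
    y = 1                                   # 1^2 ≡ a (mod 8)
    for j in range(3, k):
        if (y * y - a) % (1 << (j + 1)):    # wrong mod 2^(j+1): flip bit j-1
            y += 1 << (j - 1)
    return y % (1 << k)

y = sqrt_mod_2k(17, 30)
assert (y * y - 17) % 2**30 == 0            # 17 is a square in Z/2^30 Z
assert all(n * n != 17 for n in range(5))   # but not an integer square
```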
### 2.2 $\mathcal{M}$ nonstandard
Each of the rings $\mathbb{Z}/p^{k}\mathbb{Z}$ can be interpreted in
$\mathcal{M}$ in a uniform way for each prime $p$ and each positive integer
$k$, as can each of the natural maps
$\mathbb{Z}/p^{k+1}\mathbb{Z}\rightarrow\mathbb{Z}/p^{k}\mathbb{Z}$. But it is not possible to interpret $\mathbb{Z}_{p}$ in $\mathcal{M}$ as the inverse
limit of the $\mathbb{Z}/p^{k}\mathbb{Z}$’s with the associated maps.
Łoś’s theorem implies that if $D$ is a nonprincipal ultrafilter on the set of primes then the ultraproduct $\prod_{D}\mathbb{Z}_{p}$ is a Henselian
valuation ring, whose maximal ideal is $\prod_{D}\mu_{p}$ where $\mu_{p}$ is
the maximal ideal of $\mathbb{Z}_{p}$. Also the value group of the
ultraproduct of the local domains is the ultraproduct of the value groups of
the $\mathbb{Z}_{p}$’s, and so an ultrapower of $\mathbb{Z}$. Ax’s results in
[5] are needed to prove that the residue field
$\prod_{D}\mathbb{Z}_{p}/\prod_{D}\mu_{p}=\prod_{D}\mathbb{Z}_{p}/\mu_{p}$ is
a pseudofinite field.
We will first analyze the basic algebraic properties of each of the quotients
$\mathcal{M}/p^{k}\mathcal{M}$, and we will show that also for $\mathcal{M}$
nonstandard $\mathcal{M}/p^{k}\mathcal{M}$ is a local Henselian ring. We will
then appeal to classical results of model theory of Henselian fields (see
[20], [6]) to understand the theory of the class of all
$\mathcal{M}/p^{k}\mathcal{M}$ for $p$ prime and $k$ positive in
$\mathcal{M}$. We will make use also of some constructions and ideas in [15].
The valuation $v_{p}$ induces a “valuation” (which we will denote by $v$) on
the quotient ring $\mathcal{M}/p^{k}\mathcal{M}$. The residue field of
$\mathcal{M}\big{/}p^{k}\mathcal{M}$ is either $\mathbb{F}_{p}$ if $p$ is
standard, or a characteristic $0$ pseudofinite field if $p$ is nonstandard.
###### Theorem 2.1.
For general prime $p$ and $k>0$, $\mathcal{M}\big{/}p^{k}\mathcal{M}$ is a
local Henselian ring, and the unique maximal ideal is principal.
Proof: The units in $\mathcal{M}\big{/}p^{k}\mathcal{M}$ are $m+(p^{k})$ where
$m$ is prime to $p$. Clearly, the non units form an ideal, and this is the
unique maximal ideal of $\mathcal{M}\big{/}p^{k}\mathcal{M}$ and is generated
by $p+(p^{k})$.
Let $f(x)$ be a monic polynomial over $\mathcal{M}$, and
$\alpha\in\mathcal{M}$ such that $v_{p}({f}(\alpha))>0$ and
$v_{p}(f^{\prime}(\alpha))=0$. We want to show that there exists
$\beta\in\mathcal{M}$ such that $f(\beta)=0$ and $v_{p}(\alpha-\beta)>0$. We
use inside $\mathcal{M}$ the standard approximation procedure. The informal
recursion (as it would be done in $\mathbb{N}$) puts $\beta_{0}=\alpha$, and
$\beta_{1}=\beta_{0}+\epsilon_{1}$, where $\epsilon_{1}$ should be chosen
judiciously from $\mathcal{M}$. So,
$f(\beta_{1})=f(\beta_{0}+\epsilon_{1})=f(\beta_{0})+f^{\prime}(\beta_{0})\epsilon_{1}+O(\epsilon_{1}^{2})$.
Choose $\epsilon_{1}=-\frac{f(\alpha)}{f^{\prime}(\alpha)}$ (an
infinitesimal). Hence,
$v_{p}(f(\beta_{1}))=v_{p}(f(\beta_{0}+\epsilon_{1}))\geq 2v_{p}(f(\beta_{0}))>0$,
and
$v_{p}(f^{\prime}(\beta_{1}))=v_{p}(f^{\prime}(\beta_{0})+\epsilon_{1}f^{\prime\prime}(\beta_{0})+O(\epsilon_{1}^{2}))=0$.
We iterate this procedure which can be coded in $\mathcal{M}$, and we get a
sequence $\beta_{j}$’s of elements of $\mathcal{M}$, such that
$v_{p}(f(\beta_{j}))>0$, $v_{p}(f^{\prime}(\beta_{j}))=0$,
$v_{p}(f(\beta_{j}))<v_{p}(f(\beta_{j+1}))$, and
$v_{p}(\beta_{0}-\beta_{j})>0$. If $v_{p}(f(\beta_{m}))\geq k$ for some $m$,
then $\beta_{m}$ is a solution of $f$ in $\mathcal{M}\big{/}p^{k}\mathcal{M}$.
By the Pigeonhole Principle this always happens since otherwise we would have
a definable injective map from $\mathcal{M}$ into an initial segment
$[0,p^{k})$, clearly a contradiction.
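The Newton iteration in the proof can be carried out concretely in $\mathbb{Z}/p^{k}\mathbb{Z}$. The following sketch (ours, illustrative only; polynomials are given by coefficient lists, and Python ≥ 3.8 is assumed for modular inverses via `pow`):

```python
def hensel_lift(coeffs, alpha, p, k):
    """Newton iteration from the proof of Theorem 2.1: given monic f
    (coefficient list, leading coefficient first) with v_p(f(alpha)) > 0
    and v_p(f'(alpha)) = 0, return beta with f(beta) ≡ 0 (mod p^k)."""
    def ev(cs, x, m):                       # Horner evaluation mod m
        r = 0
        for c in cs:
            r = (r * x + c) % m
        return r
    d = len(coeffs) - 1
    dcoeffs = [c * (d - i) for i, c in enumerate(coeffs[:-1])]  # f'
    m = p ** k
    beta = alpha % m
    while ev(coeffs, beta, m) != 0:
        fb = ev(coeffs, beta, m)
        dfb = ev(dcoeffs, beta, m)          # a unit mod p^k by assumption
        beta = (beta - fb * pow(dfb, -1, m)) % m   # beta <- beta - f(beta)/f'(beta)
    return beta

# lift the square root of 2 from mod 7 (3^2 ≡ 2) up to mod 7^5
beta = hensel_lift([1, 0, -2], 3, 7, 5)
assert (beta * beta - 2) % 7**5 == 0
```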
Note that the condition that the unique maximal ideal is principal is
elementary. In general, in a local ring the unique maximal ideal need not be
principal.
## 3 Truncations
The principal ideals of $\mathcal{M}\big{/}p^{k}\mathcal{M}$ are generated by
$p^{j}$ for $0<j\leq k$, and are linearly ordered by the divisibility
condition with minimum $(0)$ and maximum $(p)$.
On the ring $\mathcal{M}\big{/}p^{k}\mathcal{M}$ there is a truncated valuation $v$ with values in $[0,k]$, defined by
$v(m+(p^{k}))=\begin{cases}v_{p}(m)&\mbox{ if }v_{p}(m)<k\\ k&\mbox{ if }v_{p}(m)\geq k\end{cases}$
satisfying
1. $v(x+y)\geq\min(v(x),v(y))$
2. $v(xy)=\min(k,v(x)+v(y))$
3. $v(1)=0$
4. $v(0)=k$
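These properties can be checked concretely. The following sketch (ours, illustrative only) implements the truncated valuation on residue representatives and verifies the multiplicative property exhaustively for $p=3$, $k=4$:

```python
def v_trunc(m, p, k):
    """Truncated valuation on M/p^k M: the p-adic valuation of the
    representative, capped at k (so v(0) = k)."""
    m %= p ** k
    if m == 0:
        return k
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

p, k = 3, 4
# property 2: v(xy) = min(k, v(x) + v(y)), checked over all residues
for x in range(p ** k):
    for y in range(p ** k):
        assert v_trunc(x * y, p, k) == min(k, v_trunc(x, p, k) + v_trunc(y, p, k))
```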
We can construe $v$ as a map to a “truncated ordered abelian group”, henceforth called a TOAG. In [12] axioms which are true in initial segments of ordered abelian groups are identified. We list them here. Some are obvious, while others need some calculation. In [12] it is also shown that models of these axioms can be realized as initial segments of ordered abelian groups. Notice that in [12] it is not specified whether the order is discrete. We are mostly interested in discrete orders, and some extra axioms will be added later. The language contains two binary operations $+$ and $\mathbin{\dot{-}}$ (truncated subtraction), a binary relation $\leq$, and two constants $0$ and $\tau$.
Axioms:
1. $+$ is commutative.
2. $x+0=x$ and $x+\tau=\tau$.
3. $x\leq y$ and $x_{1}\leq y_{1}$ imply $x+x_{1}\leq y+y_{1}$.
4. $+$ is associative.
5. $x+y=x+z<\tau$ implies $y=z$ (cancellation rule).
6. If $x\leq y<\tau$ then there is a unique $z$ with $x+z=y$.
7. There are $z$ such that $z<\tau$ and $x+z=\tau$, and $\tau\mathbin{\dot{-}}x=\min\{z:x+z=\tau\}$.
8. $\tau\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}x)=x$.
9. If $0\leq x,y<\tau$ and $x+y=\tau$ then $y\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}x)=x\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}y)$.
10. If $x+(y+z)=\tau$ and $y+x<\tau$ then $x\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}(y+z))=z\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}(x+y))$.
11. If $y+z<\tau$, $x+(y+z)=\tau$, and $x+y=\tau$, then $z+(y\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}x))<\tau$.
12. If $y+x=\tau$ and $y+z<\tau$ then $z+(y\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}x))<\tau$.
13. If $y+z=y+x=\tau$ and $z+(y\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}x))<\tau$ then $x+(y\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}z))=x\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}y)+z$.
14. If $y+z=y+x=\tau$ and $x+(y\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}z))=\tau$ then $z+(y\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}x))=\tau$.
15. If $y+z=y+x=x+(y\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}z))=\tau$ then $(y\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}x))\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}z)=(y\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}z))\mathbin{\dot{-}}(\tau\mathbin{\dot{-}}x)$.
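As a concrete sanity check (ours, not from [12]), the initial segment $[0,\tau]$ of $\mathbb{Z}$ with truncated addition, where $\tau\mathbin{\dot{-}}x$ is ordinary subtraction, is a model of these axioms; a few of them are verified below:

```python
def make_toag(tau):
    """The initial segment [0, tau] of the integers as a TOAG:
    truncated addition, with monus(tau, x) playing the role of
    tau .- x = min{z : x + z = tau} (here simply tau - x)."""
    add = lambda x, y: min(tau, x + y)
    monus = lambda x, y: max(0, x - y)
    return add, monus

tau = 7
add, monus = make_toag(tau)
dom = range(tau + 1)

# axiom 2: x + 0 = x and x + tau = tau
assert all(add(x, 0) == x and add(x, tau) == tau for x in dom)
# axiom 8: tau .- (tau .- x) = x
assert all(monus(tau, monus(tau, x)) == x for x in dom)
# axiom 5 (cancellation): x + y = x + z < tau implies y = z
for x in dom:
    for y in dom:
        for z in dom:
            if add(x, y) == add(x, z) < tau:
                assert y == z
```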
A proof of the following fundamental result is in [12].
###### Theorem 3.1.
Let $[0,\tau]$ be a TOAG with $+,\mathbin{\dot{-}}$ and $\leq$. Then there is an ordered abelian group $(\Gamma,\oplus,\leq_{\Gamma})$ with $P$ the semigroup of non-negative elements, and an element $\tau_{\Gamma}$ in $P$ such that $([0,\tau],+,\leq)$ is isomorphic to $([0,\tau_{\Gamma}],\oplus,\leq_{\Gamma})$, where the structure on $[0,\tau_{\Gamma}]$ is the one induced by $\Gamma$.
Our primary interest in this paper is in models of Presburger, and extra
conditions are needed for characterizing the TOAGs which are initial segments
of models of Presburger. We expand the language for TOAG with an extra
constant symbol $1$. The extra axioms are also sufficient as shown in the
following theorem in [12].
###### Theorem 3.2.
A TOAG $[0,\tau]$ with least positive element $1$ is a Presburger TOAG if and
only if it satisfies the following conditions
1. 1.
$[0,\tau]$ is discretely ordered and every positive element is a successor;
2. 2.
for every positive integer $n$ and each $x\in[0,\tau]$ there is a
$y\in[0,\tau]$ and an integer $m<n$ such that
$x=\underbrace{(y+\ldots+y)}_{n\text{ times}}+\underbrace{(1+\ldots+1)}_{m\text{ times}}$.
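In the standard truncation $[0,\tau]\subseteq\mathbb{Z}$, condition 2 is just division with remainder: the sums involved stay below $x\leq\tau$, so truncation never intervenes. A Python sketch (names ours):

```python
def decompose(x, n):
    """Return (y, m) with x = n*y + m and 0 <= m < n, as in condition 2.

    Since n*y + m == x <= tau, none of the partial sums exceed tau,
    so ordinary integer division supplies the witnesses."""
    return divmod(x, n)

# Example in [0, 100]: x = 37, n = 5 gives y = 7, m = 2, and 5*7 + 2 == 37.
for x in range(101):
    for n in range(2, 10):
        y, m = decompose(x, n)
        assert 0 <= m < n and n * y + m == x
```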
Let $PresTOAG$ be the set of axioms in the language containing $+$,
$\mathbin{\dot-}$, $\leq$, $0$ and
$\tau$ for Presburger TOAGs together with all the remainder axioms as above.
###### Theorem 3.3.
1. 1.
$PresTOAG$ is model complete.
2. 2.
$PresTOAG$ is not complete. The complete extensions are in natural
correspondence with $\hat{{\mathbb{Z}}}$, where to $f=(f(p))_{p}$ with
$f(p)\in\mathbb{Z}_{p}$ corresponds
$PresTOAG+\tau\equiv f(p)\ (\mbox{mod }p^{k})$
for every $k\geq 1$.
3. 3.
The theory of models of $PresTOAG$ has quantifier-elimination in the language
augmented by the relations $\equiv_{n}$ (divisibility by $n$) for each $n\geq
2$.
Proof: 1. Let $S_{1},S_{2}$ be models of $PresTOAG$ and assume there is a
monomorphism $S_{1}\hookrightarrow S_{2}$. Since $\tau$ is in the language,
$S_{1}$ and $S_{2}$ have the same top element. Let $S_{2}$ be an initial
segment of a model $S$ of Presburger. Let $\lambda_{1},\ldots,\lambda_{m}\in
S_{1}$. We claim the type of $\langle\lambda_{1},\ldots,\lambda_{m}\rangle$ in
$S_{1}$ is the same as the type of
$\langle\lambda_{1},\ldots,\lambda_{m}\rangle$ in $S_{2}$. By the elimination
of quantifiers for Presburger Arithmetic the type of
$\langle\lambda_{1},\ldots,\lambda_{m}\rangle$ over any $S$ is determined by
order and congruence conditions in terms of $\tau$ and $1$, and so it is the
same in both $S_{1}$ and $S_{2}$.
2. Clearly the conditions obtained from $f\in\hat{{\mathbb{Z}}}$ are consistent. The completeness comes easily from Theorem 3.2 and the elimination of quantifiers for Presburger Arithmetic.
3. This follows from elimination of quantifiers for Presburger Arithmetic.
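Concretely, for a standard $\tau$ the complete extension picked out in part 2 is determined by the coherent family of residues $\tau\bmod p^{k}$, which is exactly an element of $\hat{{\mathbb{Z}}}$. A small Python illustration of this coherence (helper name ours):

```python
def residue_family(tau, primes, depth):
    """The data tau mod p^k for each prime p and 1 <= k <= depth."""
    return {(p, k): tau % p**k for p in primes for k in range(1, depth + 1)}

fam = residue_family(360, [2, 3, 5], 4)

# Coherence: the residue mod p^(k+1) determines the residue mod p^k,
# which is what membership in the profinite completion Z-hat requires.
for p in [2, 3, 5]:
    for k in range(1, 4):
        assert fam[(p, k + 1)] % p**k == fam[(p, k)]
```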
## 4 Henselian local rings
### 4.1
In Section 3 we considered a valuation $v$ on the Henselian local ring
$\mathcal{M}/p^{k}\mathcal{M}$ with values onto the Presburger TOAG $[0,k]$.
In addition, the residue field of the local ring is $\mathbb{F}_{p}$ if
$p$ is standard, and otherwise a characteristic $0$ pseudofinite field. The
union of these alternatives says exactly that the residue field is a model of
the theory $PrimeFin$. Moreover, $v(p)=1$ if $p$ is standard. A natural
problem is whether a Henselian local ring with these properties is a quotient
of the valuation ring of a Henselian field with the same residue field, and
value group a model of Presburger Arithmetic, and in addition $v(p)=1$ if $p$
is standard.
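Henselianity here is concretely Newton's lemma for simple roots: a simple root of $f$ modulo $p$ lifts to a root modulo $p^{k}$. A minimal Python sketch of this standard lifting (the helper `hensel_lift` is ours; it uses Python 3.8+ modular inverses):

```python
def hensel_lift(f, df, root, p, k):
    """Lift a simple root of f mod p to a root of f mod p^k via Newton steps.

    Requires df(root) to be a unit mod p; precision doubles at each step."""
    x, prec = root, 1
    while prec < k:
        prec = min(2 * prec, k)
        mod = p**prec
        # Newton step: x <- x - f(x)/f'(x), inverse taken mod p^prec.
        x = (x - f(x) * pow(df(x), -1, mod)) % mod
    return x

f = lambda x: x * x - 2
df = lambda x: 2 * x
r = hensel_lift(f, df, 3, 7, 4)  # 3^2 = 2 (mod 7), lifted to mod 7^4
assert f(r) % 7**4 == 0
```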
The first order theory of a Henselian valued field $(K,v)$, in the language of
valued fields, is well understood in the particular cases when the value group
is a model of Presburger and the residue field is either a characteristic $0$
pseudofinite field or the residue field is $\mathbb{F}_{p}$ and $v(p)=1$ (i.e.
$K$ is unramified).
In these cases, $Th(K,v)$ is completely determined by the theory of the
residue field and the theory of the value group. As a consequence, one gets
that $Th(K,v)$ is decidable if and only if the theory of the residue field and
the theory of the value group are decidable.
###### Theorem 4.1.
For any $\mathcal{M}\big{/}p^{k}\mathcal{M}$ there is a ring $R$ such that
1) $R$ is a Henselian valuation domain of characteristic $0$ and unramified,
and the value group $\Gamma$ of $R$ is a $\mathbb{Z}$-group (i.e. a model of
Presburger), with initial segment $[0,k]$.
2) $\mathcal{M}\big{/}p^{k}\mathcal{M}$ is isomorphic to the quotient ring
$R\big{/}I$ for some principal ideal $I$ of $R$.
3) the residue field of $R$ is naturally isomorphic to the residue field of
$\mathcal{M}\big{/}p^{k}\mathcal{M}$.
Proof: Using the MacDowell-Specker Theorem (see [24]), choose $\mathcal{N}$ a
proper elementary end extension of $\mathcal{M}$. Let $d=p^{\delta}$ for some
$\delta\in\mathcal{N}$ and $\delta>\mathcal{M}$. By Theorem 2.1 the ring
$\mathcal{N}\big{/}p^{\delta}\mathcal{N}$ is a Henselian local ring with
respect to the valuation induced by the $p$-adic valuation on $\mathcal{N}$.
The set $\Delta=\{x\in\mathcal{N}:v_{p}(x)>v_{p}(a)\mbox{ for all
}a\in\mathcal{M}\}$ is a non-principal prime ideal of $\mathcal{N}$ and
contains $p^{\delta}$. Hence, $\mathcal{N}\big{/}\Delta$ is a domain, a local
ring with value group $\mathcal{M}$, and residue field either $\mathbb{F}_{p}$
or a characteristic $0$ pseudofinite field. Moreover,
$\mathcal{N}\big{/}\Delta$ is also Henselian since it is a homomorphic image
of $\mathcal{N}\big{/}p^{\delta}\mathcal{N}$ which is a Henselian ring. Let
$R=\mathcal{N}\big{/}\Delta$; then
$R\big{/}p^{k}R\cong\mathcal{M}\big{/}p^{k}\mathcal{M}$ (here we use that
$\mathcal{N}$ is an end extension of $\mathcal{M}$). $\Box$
Notice that the maximal ideal of $R$ is principal due to the fact that
divisibility is a linear order on valuation domains, and the value group is
discrete. As already noticed the maximal ideal of
$\mathcal{M}\big{/}p^{k}\mathcal{M}$ is principal generated by $p+(p^{k})$.
So far we have identified the $\mathcal{M}\big{/}p^{k}\mathcal{M}$ as
Henselian local rings, with two distinct cases (see [2], [3], [4]):
Case 1. $p$ standard, and $\mathcal{M}\big{/}p^{k}\mathcal{M}$ isomorphic to
some $S/\alpha S$, where $S\equiv\mathbb{Z}_{p}$, and $\alpha\in S$. In
particular, $v_{p}(p)=1$.
Case 2. $p$ nonstandard, and $\mathcal{M}\big{/}p^{k}\mathcal{M}$ isomorphic
to some $S/\alpha S$, with $S$ elementarily equivalent to a ring of power
series with exponents in a $\mathbb{Z}$-group and coefficients from a
pseudofinite field of characteristic $0$, with $\alpha\in S$.
In both cases we have the valuation onto a $PresTOAG$, and we can link up to
the results of [16] where we began to analyze the set of axioms for such
rings. Moreover, a trivial compactness argument shows that any local ring
modelling those axioms is elementarily equivalent to some
$\mathcal{M}\big{/}p^{k}\mathcal{M}$, where $\mathcal{M}\models PA$, see
below.
We note one important point. The rings $\mathcal{M}\big{/}p^{k}\mathcal{M}$
have the special property of recursive saturation, see [25], so we cannot
replace elementary equivalence by isomorphism in the preceding paragraph.
However, there are standard resplendency arguments which give the converse
when the ring $S/\alpha S$ is countable recursively saturated.
### 4.2
Note that we have not yet proved the converse, that any $S/\alpha S$ as in
Case 1 and Case 2 above is elementarily equivalent to some
$\mathcal{M}\big{/}p^{k}\mathcal{M}$.
We now do this, and in addition obtain decidability results.
###### Theorem 4.2.
Suppose $S$ is as in Case 1 or Case 2 above, and $\alpha$ is a non-unit and
$\alpha\not=0$. Then $S/\alpha S$ is elementarily equivalent to an
ultraproduct of $\mathbb{Z}/p^{k}\mathbb{Z}$, for $p$ prime and $k>0$.
Proof: By Ax-Kochen-Ershov, $S$ is elementarily equivalent to an ultraproduct
of $\mathbb{Z}_{p}$’s. So, $S/\alpha S$ is elementarily equivalent to an
ultraproduct of $\mathbb{Z}_{p}/\alpha_{p}\mathbb{Z}_{p}$, where $\alpha_{p}$
is a non-zero, nonunit of $\mathbb{Z}_{p}$, and each
$\mathbb{Z}_{p}\big{/}\alpha_{p}\mathbb{Z}_{p}$ is isomorphic to some
$\mathbb{Z}/p^{h}\mathbb{Z}$. So, $S/\alpha S$ is elementarily equivalent to
an ultraproduct of various $\mathbb{Z}/p^{h}\mathbb{Z}$, and such an ultraproduct is isomorphic
to some $\mathcal{M}\big{/}p^{k}\mathcal{M}$ where $\mathcal{M}$ is an
ultrapower of $\mathbb{Z}$ and $p$ is a prime in $\mathcal{M}$ and
$k\in\mathcal{M}$. $\Box$
###### Corollary 4.3.
The elementary theories of the $\mathcal{M}\big{/}p^{k}\mathcal{M}$ are
exactly the elementary theories of the $S/\alpha S$ where $S$ is as in Case 1
and 2.
Proof: By Theorem 4.1 and Theorem 4.2. $\Box$
Ax’s decidability results for the class of all $\mathbb{Q}_{p}$ give (via
interpretability) decidability of the class of all $S/\alpha S$, and so of the
class of all $\mathcal{M}\big{/}p^{k}\mathcal{M}$.
We summarize what we have proved so far.
###### Theorem 4.4.
1. 1.
The $\mathcal{M}\big{/}p^{k}\mathcal{M}$ are pseudofinite (or finite).
2. 2.
The theory of the $\mathcal{M}\big{/}p^{k}\mathcal{M}$ is the theory of the
$\mathbb{Z}\big{/}p^{h}\mathbb{Z}$.
3. 3.
The theory of the $\mathcal{M}\big{/}p^{k}\mathcal{M}$ is decidable.
Proof: 1) and 2) follow from Theorem 4.2. For 3), the theory of
$\mathbb{Z}\big{/}p^{h}\mathbb{Z}$ for a fixed prime $p$ and $h>0$ is
decidable by [5]. $\Box$
## 5 Axioms, definitions and model-completeness
Let $R=\mathcal{M}/p^{k}\mathcal{M}$.
If $k=1$ then $R$ is a model of the theory of finite prime fields (see [25],
[16]). The theory $PrimeFin$ of finite prime fields (and thus the theory of
the $R$’s) has uniform quantifier elimination in the definitional extension of
ring theory got by adding primitives $Sol_{n}(y_{0},\ldots,y_{n})$ expressing
$\exists t\,(y_{0}+y_{1}t+\ldots+y_{n}t^{n}=0)$, see [1], and [16] for rings arising
from models of $PA$. Models $R_{1}$ and $R_{2}$ are elementarily equivalent if
and only if they have the same characteristic and agree on all
$Sol_{n}(\ell_{0},\ldots,\ell_{n})$ for
$\ell_{0},\ldots,\ell_{n}\in\mathbb{Z}$, see [5].
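Over a finite prime field the solvability predicates can be evaluated by brute force, which is how the invariants $Sol_{n}(\ell_{0},\ldots,\ell_{n})$ separate theories in practice; a Python sketch (function name ours):

```python
def sol(coeffs, p):
    """Sol_n(y_0, ..., y_n) over F_p: does y_0 + y_1*t + ... + y_n*t^n = 0
    have a solution t in F_p?  Decided by exhaustive search."""
    return any(sum(c * pow(t, i, p) for i, c in enumerate(coeffs)) % p == 0
               for t in range(p))

# t^2 + 1 = 0 is solvable mod 5 (t = 2) but not mod 7, so F_5 and F_7
# disagree on Sol_2(1, 0, 1): one of the invariants separating their theories.
assert sol([1, 0, 1], 5) is True
assert sol([1, 0, 1], 7) is False
```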
Some $Th(R)$ are model-complete, some not. The model-complete ones are those
where $R$ is either $\mathbb{F}_{p}$ or characteristic $0$, and for each $m$
the unique extension of dimension $m$ is got by adjoining an algebraic number,
see [18]. One gets an abundance of model-complete examples from Jarden’s
result in [23] that $\{\sigma\in Gal(\mathbb{Q}^{alg}):Fix(\sigma)\mbox{ is
pseudofinite}\}$ has measure $1$. An example of a non model-complete $R$ is
one where $\mathbb{Q}^{alg}\subseteq R$ (see [5]).
In the case of $k>1$ we need to elaborate the discussion in Section 4. There
we showed that $\mathcal{M}/p^{k}\mathcal{M}$ is isomorphic to some $S/\alpha
S$ where $\alpha\in S$, $S$ is an unramified Henselian domain, with value
group a $\mathbb{Z}$-group, and residue field a model of the theory of finite
prime fields. Conversely, any such $S/\alpha S$ is elementarily equivalent to
some $\mathcal{M}/p^{k}\mathcal{M}$ for some $\mathcal{M}$ model of $PA$.
So we have to investigate axioms, elementary invariants and model-completeness
for all $R$ of the form $S/\alpha S$, where $S$ and $\alpha$ are as before.
Each is valued in a Presburger TOAG. Moreover, the TOAG is interpretable (in
$\mathcal{L}_{rings}$) in $R$, by taking as the underlying set $\Gamma$ of the
TOAG the set of principal ideals of $R$, linearly ordered by reverse
inclusion, so that $(1)$ is the least element and $(0)$ is the top element
(obviously, we get a linear order since $S$ is a valuation domain). We get a
$\oplus$ on $\Gamma$ via $(\alpha)\oplus(\beta)=(\alpha\beta)$, making
$\Gamma$ into a TOAG with $(1)$ as $0$-element, and $(0)$ as $\infty$-element.
The valuation maps $\alpha$ to $(\alpha)$. In this way we get a Presburger
TOAG. This is a crucial axiom about $R$. The $1$ of the Presburger TOAG is
$(p)$ (the maximal ideal). In [16] we show that the elementary type of the
TOAG is given by the Presburger type of the penultimate element of $\Gamma$
(in our case $(k-1)$). Moreover, any Presburger TOAG is model-complete.
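For $R=\mathbb{Z}/p^{k}\mathbb{Z}$ this interpretation is easy to compute: $v(x)$ is the largest $j\leq k$ with $p^{j}\mid x$ (with $v(0)=k$ the top element), and multiplication in $R$ realizes the truncated addition $\oplus$ on $[0,k]$. A Python sketch (naming ours):

```python
def v(x, p, k):
    """Valuation of x in Z/p^k Z onto the TOAG [0, k]; v(0) = k is the top."""
    x %= p**k
    if x == 0:
        return k
    j = 0
    while x % p == 0:
        x //= p
        j += 1
    return j

p, k = 3, 4
for a in range(p**k):
    for b in range(p**k):
        # v(ab) equals v(a) + v(b) truncated at the top element k,
        # i.e. multiplication realizes the TOAG operation (+).
        assert v(a * b, p, k) == min(v(a, p, k) + v(b, p, k), k)
```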
The maximal ideal $\mu$ of $R$ is of course definable as the set of nonunits
and the residue field of $R$ is interpretable. Naturally, we seek
“truncations” of the Ax-Kochen-Ershov theorem. On the basis of what we have
done above we get the following two basic theorems.
###### Theorem 5.1.
The theory of the class of $\mathcal{M}/p^{k}\mathcal{M}$ ($p$ prime and
$k\geq 1$) is axiomatized by the following conditions
1. 1.
Henselian local ring, valued in a Presburger TOAG, with residue field a model
of the theory of finite prime fields;
2. 2.
If the characteristic of the residue field is a prime $p$ then the valuation
of $p$ is the least positive element of the Presburger TOAG.
###### Theorem 5.2.
The elementary theory of an individual $\mathcal{M}/p^{k}\mathcal{M}$ is
uniquely determined by the Presburger type of the penultimate element of the
TOAG, and by the elementary theory of the residue field. Given a Presburger
type and a residue field, there is an $\mathcal{M}/p^{k}\mathcal{M}$ with the
corresponding invariants.
###### Remark 5.3.
1. 1.
In the above we work with the ring language, since the TOAG condition comes
from a condition on principal ideals.
2. 2.
Obviously there are decidability results parallel to the above (of the form,
for example, if the residue field is decidable, and the Presburger type of the
penultimate element of the TOAG is computable then
$\mathcal{M}/p^{k}\mathcal{M}$ is decidable).
### 5.1 Quantifier elimination
The analysis of formulas in the $\mathcal{M}/p^{k}\mathcal{M}$, rather than
the mere sentences of the preceding subsection, is quite tricky. It involves a
Denef-Pas analysis as in [16] for sentences, but with some complications, and
we give only a sketch of the argument.
By the preceding we are dealing (as far as definability is concerned) with
rings $S/\alpha S$ with $S$ a Henselian valuation domain with Presburger value
group, with residue field $k$ a model of the theory of finite prime fields, and
unramified. We note the substantial result in [10] that $S$ is uniformly
$\exists\forall$-definable in the language of rings in its field of fractions
$K$.
It is convenient to fix in $S$ an element $t$ with $v(t)=1$, i.e. the least
positive element of the value group. We take $t=p$ if $k$ has characteristic
$p$, and $t$ arbitrary otherwise (note then that the type of $t$ has many
possibilities). We can assume $K$ is $\aleph_{1}$-saturated and so has an
angular component map $ac:K\rightarrow k$. This will not be definable in
general unless we have some saturation. See [27], and [16] for the $PA$
context for the details. Note that our $\mathcal{M}/p^{k}\mathcal{M}$ are
recursively saturated, and so if $\mathcal{M}$ is countable we have an $ac$.
We now work in the 3-sorted formalism $\mathcal{L}_{Denef-Pas}$, with sorts of
$K$, $k$, and $\Gamma$ (field, residue field, and value group with added
$\infty$). We have the usual field formalism on each of $K$, $k$, and on
$\Gamma$ the usual $+,-,<,\infty$. In addition we have two trans-sort
primitives, $ac$ and $v$, the former from $K$ to $k$, and the latter from $K$
to $\Gamma\cup\{\infty\}$. $K$ has characteristic $0$, $K$ has constant $t$
as above with $t=p$ if $k$ has characteristic $p$, and $v(t)=1$.
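Over $\mathbb{Q}$ (or any setting where the integers embed in the valuation ring), an angular component can be computed as the leading $p$-adic digit, $ac(x)=(x/p^{v_{p}(x)})\bmod p$, and it is multiplicative into $k^{\times}$; a Python sketch of this standard map (names ours):

```python
def vp(x, p):
    """p-adic valuation of a nonzero integer."""
    j = 0
    while x % p == 0:
        x //= p
        j += 1
    return j

def ac(x, p):
    """Angular component: the leading p-adic digit of a nonzero integer x."""
    return (x // p**vp(x, p)) % p

p = 5
for x in range(1, 60):
    for y in range(1, 60):
        # ac is multiplicative: ac(xy) = ac(x) * ac(y) in F_p*.
        assert ac(x * y, p) == (ac(x, p) * ac(y, p)) % p
```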
Our purpose is to analyze the structure of the sets
$\{(y_{1},\ldots,y_{n})\in(S/\alpha S)^{n}:S/\alpha
S\models\psi(y_{1},\ldots,y_{n})\}$
for a ring formula $\psi$.
For convenience in applying Denef-Pas, we work in $K$ rather than in $S$, but
we exploit the uniform definability of $S$ in $K$. So, we consider variables
$x_{1},\ldots,x_{n},\alpha$ and the formula expressing (in $K$)
$x_{1},\ldots,x_{n},\alpha$ are in $S$, and
$S/\alpha S\models\psi(x_{1}+\alpha S,\ldots,x_{n}+\alpha S).$ (1)
This is a first order condition. Thus by Denef-Pas it is uniformly, except for
finitely many $p$ (characteristic of $k$), equivalent to a formula
$\Theta(x_{1},\ldots,x_{n},\alpha)$ in $\mathcal{L}_{DP}$ with no bound $K$
variables (but do not forget the constant $t$!). The basic formulas out of
which $\Theta$ is built are:
algebraic equations in $x_{1}+\alpha S,\ldots,x_{n}+\alpha S,t$ over $K$;
residue field formulas in $ac(\tau_{\ell})$ for various polynomials
$\tau_{\ell}$ over $\mathbb{Z}$ in $x_{1}+\alpha S,\ldots,x_{n}+\alpha S,t$;
Presburger formulas in $v(\mu_{m})$ for various polynomials $\mu_{m}$ in
$x_{1}+\alpha S,\ldots,x_{n}+\alpha S,t$.
In fact, $\Theta$ can be taken as a Boolean combination of these basic
formulas, as can be seen by inspection of the proof of Denef-Pas.
For the finitely many exceptional standard primes $p_{1},\ldots,p_{n}$ (which
can be found effectively from $\psi$, by [5]), $S$ is elementarily equivalent
to one of $\mathbb{Z}_{p_{1}},\ldots,\mathbb{Z}_{p_{n}}$ (the analogous easier
argument for sentences is sketched in [16]). By using Macintyre’s quantifier
elimination for each of $\mathbb{Z}_{p_{1}},\ldots,\mathbb{Z}_{p_{n}}$, and
changing the $ac$ if need be, one easily gets the power conditions
$P_{\ell}(x)$ to have the required Denef-Pas form. By using that $p=0$ in $k$
captures the characteristic $p$ condition, one can combine these finitely many
analyses with the one that works except for $p_{1},\ldots,p_{n}$ to get the
Denef-Pas result for all $\psi$, giving a new $\Theta$ that works always,
independent of $p$.
Now we can do further elimination and simplification in the other sorts. For the
Presburger sort we simply have the classical elimination down to order and
congruence conditions, provided we have a constant for the least positive
element (and above we have stipulated that $v(t)=1=$ least positive element).
So Presburger quantifiers get eliminated. This leaves the issue of
quantification over $k$. Recall that $k$ ranges over models of the theory of
finite prime fields. By [5] and [1] one easily sees that (uniformly) $k$ has
quantifier elimination down to the solvability predicates $Sol_{n}$.
Now recall that we seek elimination results in the $S/\alpha S$, where the
above takes place in $K$. However, $k$ depends only on $S$ and not on
$\alpha$, and the Presburger conditions have the same value in $S/\alpha S$ as
in $S$, for $x_{1},\ldots,x_{n}$ properly dividing $\alpha$, so in fact we have
proved (recall our starting point (1)):
###### Theorem 5.4.
Assume the previous conditions on $S$, and adjoin constant $t$ with $v(t)=1$,
and $t=p$ if $k$ has characteristic $p$. Then uniformly in $S$ for any
$\psi(x_{1},\ldots,x_{n})$ in $\mathcal{L}_{ring,t}$, the language of rings
with $t$, there is another such formula $\psi^{+}(x_{1},\ldots,x_{n},y)$ such
that, if $S$ is $\aleph_{1}$-saturated, then $S$ has an angular component $ac$ such
that for all $\alpha\in S$ with $\alpha\not=0$ and all $\beta_{1},\ldots,\beta_{n}\in
S$ properly dividing $\alpha$,
$S/\alpha S\models\psi(\beta_{1}+\alpha S,\ldots,\beta_{n}+\alpha S)\quad\Leftrightarrow\quad S\models\psi^{+}(\beta_{1},\ldots,\beta_{n},\alpha)$
where $\psi^{+}(x_{1},\ldots,x_{n},y)$ is a Boolean combination of three kinds
of sorted formulas (where now the sorts are local ring, residue ring, TOAG):
1. 1.
polynomial equations over $\mathbb{Z}[t]$ in $\beta_{1}+\alpha
S,\ldots,\beta_{n}+\alpha S$
2. 2.
solvability conditions over $\mathbb{Z}$ in $\beta_{1}+\alpha
S,\ldots,\beta_{n}+\alpha S$ and $ac(\beta_{1}),\ldots,ac(\beta_{n})$,
$ac(t)=1$ (i.e. using the predicates $Sol_{n}$)
3. 3.
Presburger conditions over monomials in $v(\beta_{1}),\ldots,v(\beta_{n})$ in
the Presburger TOAG $[0,v(t/\alpha)]$.
###### Proof.
This is sketched above, and is the obvious “truncated” analogue of Denef-Pas.
Note that [12] contains the background for 3. ∎
We do not attempt to go any deeper to minimize the role of the $ac$. This may
be worthwhile, but it is not needed for our last topic below.
### 5.2 Elementary embeddings and model-completeness
We work with rings $R\equiv S/\alpha S$, where $S$ is as above. The TOAG
valuation is algebraically interpretable in $R$. From Section 4, the
elementary type of $R$ is determined by the elementary type of the residue
field, and the Presburger type of the penultimate element of the TOAG. For
$R_{1}\equiv R_{2}$ as rings, and an embedding $f:R_{1}\rightarrow R_{2}$, we
want to find out when $f$ is elementary. There is no loss of generality in
analyzing the case when $f$ is a ring inclusion. It has no chance of being
elementary if the least positive element of the TOAG of $R_{1}$ is not the
least positive element of the TOAG of $R_{2}$. So we work in
$\mathcal{L}_{ring,t}$, the language of rings with $t$, with corresponding
assumptions on $R_{1}$ and $R_{2}$. Thus there is a natural inclusion of
residue fields $k_{1}\rightarrow k_{2}$, provided each $R_{i}$ satisfies that
$t$ generates the maximal ideal. This we now assume (recall that the
corresponding maximal ideals $\mu_{1}$ and $\mu_{2}$ are both generated by
$t$). Note one cost of adjoining $t$ is that our work has to take some account
of part of “the type of $t$”. We indicate, as we go along, what is involved.
Our purpose is to show that if the embedding $k_{1}\rightarrow k_{2}$ is
elementary then so is $R_{1}\rightarrow R_{2}$. Our convention about $t$
ensures that the induced map on TOAGs is elementary.
Now, neither $R_{1}$ nor $R_{2}$ need have an $ac$, as required for the Denef-
Pas analysis, and we have to resort to “tricks of the saturation trade” to
reduce to the case when $R_{2}$ has an $ac$ which restricts to $R_{1}$. If
there is any counterexample to our claim that if $k_{1}\rightarrow k_{2}$ is
elementary then so is $R_{1}\rightarrow R_{2}$, we select such a
counterexample (witnessed by a particular residue-field formula), and by a
standard compactness/saturation argument produce a counterexample with
stronger properties, namely those given in the next two paragraphs.
We are assuming $R_{1}\subseteq R_{2}$ (and in fact a local extension because
of the $t$-convention), $R_{1}\cong S/\alpha S$, $R_{1}\equiv R_{2}$ in the
rings language, $S$ is $\aleph_{1}$-saturated, and $k_{1}\preceq k_{2}$ (note
that since $ac(t)$ can be chosen as $1$, we need only consider $k_{1}\preceq
k_{2}$ in the ring language), and some formula $W(x_{1},\ldots,x_{n})$
witnessing that $R_{1}\not\preceq R_{2}$ (in $\mathcal{L}_{ring,t}$), i.e.
there are some $c_{1},\ldots,c_{n}$ in $R_{1}$ so that $R_{1}\models
W(c_{1},\ldots,c_{n})$ and $R_{2}\models\neg W(c_{1},\ldots,c_{n})$.
Then we have to work (quite hard) to get an $ac$ on $R_{2}$ restricting to one
on $R_{1}$. This involves modifying $R_{2}$ in general. First get an $ac$ on
$R_{1}$ using the $\aleph_{1}$-saturation of $R_{1}$: recall that $S$ is
$\aleph_{1}$-saturated and by [8] has a normalized cross-section, and thus an
$ac$ (appropriately normalized) which truncates to $R_{1}$.
Now go from $R_{2}$ to an $|R_{2}|^{+}$-saturated elementary extension
$\overline{R_{2}}$. This, by an obvious adaptation of Cherlin’s argument in [8],
gives an $\overline{ac}$ on $\overline{R_{2}}$ extending the $ac$ on $R_{1}$.
Now use $R_{1}\rightarrow\overline{R_{2}}$, which still satisfies the original
condition that $(R_{1},R_{2})$ did. By Theorem 5.4 the truncated Denef-Pas
version gives us $R_{1}\preceq\overline{R_{2}}$ since
$k_{1}\preceq\overline{k_{2}}$, where $\overline{k_{2}}$ is the residue field
of $\overline{R_{2}}$, and all polynomial equations
$f(\eta_{1},\dots,\eta_{n},ac(\eta_{1}),\dots,ac(\eta_{n}))=0,$
with $\eta_{1},\dots,\eta_{n}\in R_{1}$ maintain their truth value between
$R_{1}$ and $\overline{R_{2}}$ (trivially). But since
$R_{1}\preceq\overline{R_{2}}$ and $R_{2}\preceq\overline{R_{2}}$ we must have
$R_{1}\preceq R_{2}$.
So we now have, in full generality, the analogue of the result due to Ziegler [28].
###### Theorem 5.5.
Assume $R_{1}\subseteq R_{2}$ with the $t$ condition to guarantee
$k_{1}\subseteq k_{2}$. Then $R_{1}\preceq R_{2}$ if and only if $k_{1}\preceq
k_{2}$.
Finally this gives us a model-completeness result.
###### Theorem 5.6.
The theory of $R$ in the $t$-formalism is model-complete if and only if the
theory of the residue field $k$ is.
## 6 Neostability and interpretability
If $p$ is standard, the preceding shows that each
$\mathcal{M}\big{/}p^{k}\mathcal{M}$ is interpretable in an ultrapower of
$\mathbb{Z}_{p}$. By [17] the ultrapower is $NIP$, and so does not interpret
even $I\Delta_{0}$ (which has $IP$), a much weaker system than Peano
Arithmetic see [11].
When $p$ is nonstandard, $\mathcal{M}\big{/}p^{k}\mathcal{M}$ interprets
$\mathcal{M}\big{/}p\mathcal{M}$ which has $IP$ (see [19]), so
$\mathcal{M}\big{/}p^{k}\mathcal{M}$ has $IP$. However,
$\mathcal{M}\big{/}p^{k}\mathcal{M}$ lives in the $NTP_{2}$ environment of
neostability [9], since $\mathcal{M}\big{/}p^{k}\mathcal{M}$ is interpretable
(by the preceding) in the ring of power series in
$\mathcal{M}\big{/}p\mathcal{M}$ with value group a model of Presburger. By
[9] this ring of power series has $NTP_{2}$.
However, any model $\mathcal{M}$ of $I\Delta_{0}$ has $TP_{2}$, as we see by
the following construction. Let $a_{nm}$ be $p_{n}^{m}$ for $n,m$ positive
standard integers, and $p_{n}$ a prime in $\mathcal{M}$. Consider the formula
(of ring theory) $\varphi(x,y)$ saying that $y$ is a power of a prime $p$, and
$v_{p}(x)=v_{p}(y)$. From [11] this is given in $\mathcal{M}$ by a
$\Delta_{0}$-formula. Now,
1. 1.
the set $\{\varphi(x,a_{nm}):m\in\mathbb{N}\}$ is inconsistent, for fixed
$n$;
2. 2.
for any $f:\omega\rightarrow\omega$ the set
$\{\varphi(x,a_{nf(n)}):n\in\mathbb{N}\}$ is consistent.
1. This is clear, since the type forces $v_{p_{n}}(x)=m$, for all $m$.
2. One shows that for each $\overline{n}$, the set
$\{\varphi(x,a_{nf(n)}):n\leq\overline{n}\}$ is realized in $\mathcal{M}$,
in fact by the element $b=p_{0}^{f(0)}\cdot\ldots\cdot
p_{\overline{n}}^{f(\overline{n})}$.
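These two properties of the array $a_{nm}=p_{n}^{m}$ can be spot-checked in $\mathbb{Z}$ on a finite fragment; a Python sketch (variable names ours):

```python
def vp(x, p):
    """p-adic valuation of a nonzero integer."""
    j = 0
    while x % p == 0:
        x //= p
        j += 1
    return j

primes = [2, 3, 5, 7]
f = [3, 1, 4, 2]  # an arbitrary path through the array

# The path {phi(x, a_{n f(n)})} is realized by b = prod_n p_n^{f(n)} ...
b = 1
for p, e in zip(primes, f):
    b *= p**e
assert all(vp(b, p) == e for p, e in zip(primes, f))

# ... while each row is inconsistent: v_p(x) is a function of x, so no x
# can satisfy v_p(x) = m for two different values of m.
assert vp(b, 2) == 3 and vp(b, 2) != 4
```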
Since $TP_{2}$ is preserved by interpretation, we have
###### Theorem 6.1.
For each $k\geq 1$ no model of $I\Delta_{0}$ is interpretable in any
$\mathcal{M}\big{/}p^{k}\mathcal{M}$.
Concluding remarks. In the sequel to this paper we consider
$\mathcal{M}\big{/}n\mathcal{M}$ for general $n$. The case when $n$ has only
finitely many prime divisors is no harder than what we did above. But the
general case is much harder.
## References
* [1] A. Adler and C. Kiefe, Pseudofinite fields, procyclic fields and model-completions, in Pacific Journal of Mathematics, vol. 62, (2), (1976), pp. 305-309.
* [2] J. Ax and S. Kochen, Diophantine problems over local fields, I. American Journal of Mathematics 87, (1965), pp. 605-630.
* [3] J. Ax and S. Kochen, Diophantine problems over local fields II, American Journal of Mathematics 87, (1965), pp. 631-648.
* [4] J. Ax and S. Kochen, Diophantine problems over local fields. III, Decidable fields. Annals of Mathematics 83 (2), (1966), pp. 437-456.
* [5] J. Ax, The elementary theory of finite fields, Annals of Mathematics 88, (1968), 239-271.
* [6] S.A. Basarab, Relative elimination of quantifiers for Henselian valued fields, Annals of Pure and Applied Logic 53, (1991), pp. 51-74.
* [7] A. Berarducci and P. D’Aquino, $\Delta_{0}$-complexity of the relation $y=\prod_{i\leq n}F(i)$, Annals of Pure and Applied Logic 75, (1995), pp. 49-56.
* [8] G. Cherlin, Model Theoretic Algebra - Selected Topics. Lecture Notes in Mathematics, vol. 521, Springer, (1976).
* [9] A. Chernikov, I. Kaplan and P. Simon, Groups and fields with $NTP_{2}$, in Proceedings of the American Mathematical Society, 143, (1), (2015), pp. 395-406
* [10] R. Cluckers, J. Derakhshan, E. Leenknegt, A. Macintyre, Uniformly defining valuation rings in Henselian valued fields with finite or pseudo-finite residue fields, Annals of Pure and Applied Logic 164, (2013), pp. 1236-1246.
* [11] P. D’Aquino, Local behaviour of the Chebyshev Theorem in Models of $I\Delta_{0}$, Journal of Symbolic Logic 57, (1992), pp. 12-27
* [12] P. D’Aquino, J. Derakhshan and A. Macintyre, Truncations of ordered abelian groups, to appear in Algebra Universalis.
* [13] P. D’Aquino and A. Macintyre, Non standard finite fields over $\Delta_{0}+\Omega_{1}$ , Israel Journal of Mathematics 117, (2000), pp. 311-333.
* [14] P. D’Aquino and A. Macintyre, Quadratic forms in models of $\Delta_{0}+\Omega_{1}$, Part II: Local equivalence, Annals of Pure and Applied Logic 162, (2011), pp. 447-456.
* [15] P. D’Aquino and A. Macintyre, Primes in Models of $I\Delta_{0}+\Omega_{1}$: Density in Henselizations, in Studies in Weak Arithmetics (ed. P. Cégielski et al.), CSLI Publications, (2013), pp. 85-91.
* [16] P. D’Aquino and A. Macintyre, Model theory of some local rings, IfCoLog Journal of Logics and their Applications, vol. 4, No. 4, (2017), pp 885-899.
* [17] F. Delon, Types sur $\mathbb{C}((X))$, Study Group on Stable Theories, Second year: 1978/79, Secretariat Math., 1981.
* [18] J. Derakhshan and A. Macintyre, Model-completeness for finitely ramified henselian fields valued in a $\mathbb{Z}$-group, submitted (2020)
* [19] J.-L. Duret, Les corps faiblement algébriquement clos non séparablement clos ont la propriété d’indépendence, in Model Theory of Algebra and Arithmetic, Pacholski et al. (Eds), Springer Lecture Notes, vol. 834, (1989), pp. 135-157.
* [20] Yu.L. Ershov, On the elementary theory of maximal normed fields, Soviet Math. Dokl., vol. 6, (1965), pp. 1390-1393 (In Russian).
* [21] S. Feferman and R. Vaught, The first order properties of algebraic systems, Fundamenta Mathematicae 47, (1959), pp. 57-103.
* [22] E. Hrushovski, Pseudo-finite fields and related structures, in Model Theory and Applications, Quaderni di Matematica vol. 11, Aracne, (2002), pp. 151-212.
* [23] M. Jarden, Rational points on algebraic varieties over large number fields, Bulletin of American Mathematical Society, 75 (1969), pp. 603–606.
* [24] R. Kossak and J. Schmerl, The Structure of Models of Peano Arithmetic, Oxford University Press, (2006).
* [25] A. Macintyre, Residue fields of models of P, in Logic, Methodology and Philosophy of Science VI (L. E. Cohen et al., eds.), Hannover 1979, North-Holland, Amsterdam, (1982), pp. 193-206.
* [26] M. Nagata, Some studies on semi-local rings, Nagoya Mathematical Journal 3, (1951), pp. 23-30.
* [27] J. Pas, Uniform $p$-adic cell decomposition and local zeta functions, Journal für die reine und angewandte Mathematik vol. 399, (1989), 137–172.
* [28] M. Ziegler, Die elementare Theorie der henselschen Körper, Inaugural Dissertation Köln, 1972.
* [29] B. Zilber, The semantics of the canonical commutation relation, arXiv:1604.07745, 2016.
# Deep Learning–Based Scene Simplification for Bionic Vision
Nicole Han, University of California, Santa Barbara, CA, USA, <EMAIL_ADDRESS>;
Sudhanshu Srivastava, University of California, Santa Barbara, CA, USA, <EMAIL_ADDRESS>;
Aiwen Xu, University of California, Santa Barbara, CA, USA, <EMAIL_ADDRESS>;
Devi Klein, University of California, Santa Barbara, CA, USA, <EMAIL_ADDRESS>;
and Michael Beyeler, 0000-0001-5233-844X, University of California, Santa Barbara, CA, USA, <EMAIL_ADDRESS>
(2021)
###### Abstract.
Retinal degenerative diseases cause profound visual impairment in more than 10
million people worldwide, and retinal prostheses are being developed to
restore vision to these individuals. Analogous to cochlear implants, these
devices electrically stimulate surviving retinal cells to evoke visual
percepts (phosphenes). However, the quality of current prosthetic vision is
still rudimentary. Rather than aiming to restore “natural” vision, there is
potential merit in borrowing state-of-the-art computer vision algorithms as
image processing techniques to maximize the usefulness of prosthetic vision.
Here we combine deep learning–based scene simplification strategies with a
psychophysically validated computational model of the retina to generate
realistic predictions of simulated prosthetic vision, and measure their
ability to support scene understanding of sighted subjects (virtual patients)
in a variety of outdoor scenarios. We show that object segmentation may better
support scene understanding than models based on visual saliency and monocular
depth estimation. In addition, we highlight the importance of basing
theoretical predictions on biologically realistic models of phosphene shape.
Overall, this work has the potential to drastically improve the utility of
prosthetic vision for people blinded from retinal degenerative diseases.
retinal implant, visually impaired, scene simplification, deep learning,
simulated prosthetic vision, vision augmentation
copyright: acmcopyright. journalyear: 2021. doi: 10.1145/1122445.1122456.
conference: Augmented Humans ’21; February 22–24, 2021; Online. booktitle:
Augmented Humans ’21. price: 15.00. isbn: 978-1-4503-XXXX-X/18/06. ccs:
Human-centered computing (Accessibility technologies; Empirical studies in
visualization; Usability testing)
Figure 1. Retinal implant (‘bionic eye’) for restoring vision to people with
visual impairment. _A)_ Light captured by a camera is transformed into
electrical pulses delivered through a microelectrode array to stimulate the
retina (adapted with permission from (Weiland and Humayun, 2005)). _B)_ To
create meaningful artificial vision, we explored deep learning–based scene
simplification as a preprocessing strategy for retinal implants (reproduced
from doi:10.6084/m9.figshare.13652927 under CC-BY 4.0). As a proof of concept,
we used a neurobiologically inspired computational model to generate realistic
predictions of simulated prosthetic vision (SPV), and asked sighted subjects
(i.e., virtual patients) to identify people and cars in a novel SPV dataset of
natural outdoor scenes. In the future, this setup may be used as input to a
real retinal implant.
Deep learning–based scene simplification for bionic vision.
## 1. Introduction
Retinal degenerative diseases such as retinitis pigmentosa (RP) and age-
related macular degeneration (ARMD) lead to a gradual loss of photoreceptors
in the eye that may cause profound visual impairment in more than 10 million
people worldwide. Analogous to cochlear implants, retinal neuroprostheses
(also known as the _bionic eye_ , Fig. 1A) aim to restore vision to these
individuals by electrically stimulating surviving retinal cells to evoke
neuronal responses that are interpreted by the brain as visual percepts
(_phosphenes_). Existing devices generally provide an improved ability to
localize high-contrast objects, navigate, and perform basic orientation tasks
(Ayton et al., 2020). Future neural interfaces will likely enable applications
such as controlling complex robotic devices, extending memory, or augmenting
natural senses with artificial inputs (Fernandez, 2018).
However, despite recent progress in the field, several limitations still
restrict these devices’ ability to provide useful vision in daily life
(Beyeler et al., 2017b). Interactions between the device electronics and the
underlying neurophysiology of the retina have been shown to lead to
distortions that can severely limit the quality of the generated visual
experience (Fine and Boynton, 2015; Beyeler et al., 2019b). Other challenges
include how to improve visual acuity, enlarge the field-of-view, and reduce a
complex visual scene to its most salient components through image processing.
Rather than aiming to restore “natural” vision, there is potential merit in
borrowing computer vision algorithms as preprocessing techniques to maximize
the usefulness of bionic vision. Whereas edge enhancement and contrast
maximization are already routinely employed by current devices, relatively
little work has explored the extraction of high-level scene information.
To address these challenges, we make three contributions:
i. We adopt state-of-the-art computer vision algorithms to explore deep
learning–based scene simplification as a preprocessing strategy for bionic
vision.
ii. Importantly, we use an established and psychophysically validated
computational model of bionic vision to generate realistic predictions of
simulated prosthetic vision (SPV).
iii. We systematically evaluate the ability of these algorithms to support
scene understanding with a user study focused on a novel dataset of natural
outdoor scenes.
## 2. Background
Retinal implants are currently the only FDA-approved technology to treat
blinding degenerative diseases such as RP and ARMD. Most current devices
acquire visual input via an external camera and perform edge extraction or
contrast enhancement via an external video processing unit (VPU), before
sending the signal through wireless coils to a microstimulator implanted in
the eye or the brain (see Fig. 1A). This device receives the information,
decodes it and stimulates the visual system with electrical current, ideally
resulting in artificial vision. Two devices are already approved for
commercial use: Argus II (60 electrodes, Second Sight Medical Products, Inc.,
(Luo and da Cruz, 2016)) and Alpha-IMS (1500 electrodes, Retina Implant AG,
(Stingl et al., 2013)). In addition, PRIMA (378 electrodes, Pixium Vision,
(Lorach et al., 2015)) has started clinical trials, with others to follow
shortly (Ayton et al., 2014; Ferlauto et al., 2018).
However, a major outstanding challenge in the use of these devices is
translating electrode stimulation into a code that the brain can understand. A
common misconception is that each electrode in the grid can be thought of as a
‘pixel’ in an image (Dagnelie et al., 2007; Chen et al., 2009; Lui et al.,
2011; Perez-Yus et al., 2017; Sanchez-Garcia et al., 2019), and most retinal
implants linearly translate the grayscale value of a pixel in each video frame
to a current amplitude of the corresponding electrode in the array (Luo and da
Cruz, 2016). To generate a complex visual experience, the assumption then is
that one simply needs to turn on the right combination of pixels.
In contrast, a growing body of evidence suggests that individual electrodes do
not lead to the perception of isolated, focal spots of light (Fine and
Boynton, 2015; Beyeler et al., 2019b; Erickson-Davis and Korzybska, 2020).
Although consistent over time, phosphenes vary drastically across subjects and
electrodes (Luo et al., 2016; Beyeler et al., 2019b) and often fail to
assemble into more complex percepts (Rizzo et al., 2003; Wilke et al., 2011).
Consequently, retinal implant users do not see a perceptually intelligible
world (Erickson-Davis and Korzybska, 2020).
A recent study demonstrated that the shape of a phosphene generated by a
retinal implant depends on the retinal location of the stimulating electrode
(Beyeler et al., 2019b). Because retinal ganglion cells (RGCs) send their
axons on highly stereotyped pathways to the optic nerve, an electrode that
stimulates nearby axonal fibers would be expected to antidromically activate
RGC bodies located peripheral to the point of stimulation, leading to percepts
that appear elongated in the direction of the underlying nerve fiber bundle
(NFB) trajectory (Fig. 2, _right_). Ref. (Beyeler et al., 2019b) used a
simulated map of NFBs in each patient’s retina to accurately predict phosphene
shape, by assuming that an axon’s sensitivity to electrical stimulation:
i. decays exponentially with decay constant $\rho$ as a function of distance
from the stimulation site, and
ii. decays exponentially with decay constant $\lambda$ as a function of
distance from the cell body, measured as axon path length.
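Taken literally, these two factors multiply into a per-segment sensitivity. The sketch below is a minimal reading of points (i) and (ii); the function name and micrometer units are illustrative assumptions, and the published model’s exact parameterization of the spatial falloff may differ:

```python
import math

def axon_sensitivity(dist_um, path_um, rho=300.0, axlambda=1000.0):
    """Two-factor sensitivity of an axon segment to electrical stimulation.

    dist_um: distance from the stimulation site (micrometers)
    path_um: axon path length from the segment to the cell body (micrometers)
    rho, axlambda: the decay constants described in the text (micrometers)
    """
    return math.exp(-dist_um / rho) * math.exp(-path_um / axlambda)

# A segment at the stimulation site and at the cell body is maximally sensitive:
print(axon_sensitivity(0.0, 0.0))    # 1.0
# Sensitivity falls off with distance from the electrode...
print(axon_sensitivity(300.0, 0.0))  # exp(-1), about 0.368
```

Larger $\rho$ thus produces larger (wider) phosphenes, while larger $\lambda$ recruits segments farther along the nerve fiber bundle, elongating the percept.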
Figure 2. A simulated map of retinal NFBs (_left_) can account for visual
percepts (_right_) elicited by retinal implants (reprinted with permission
from (Beyeler et al., 2019a)). _Left_ : Electrical stimulation (red circle) of
a NFB (black lines) could activate retinal ganglion cell bodies peripheral to
the point of stimulation, leading to tissue activation (black shaded region)
elongated along the NFB trajectory away from the optic disc (white circle).
_Right_ : The resulting visual percept appears elongated as well; its shape
can be described by two parameters, $\lambda$ (spatial extent along the NFB
trajectory) and $\rho$ (spatial extent perpendicular to the NFB). See Ref.
(Beyeler et al., 2019a) for more information.
As can be seen in Fig. 2 (_left_), electrodes near the horizontal meridian are
predicted to elicit circular percepts, while other electrodes are predicted to
produce elongated percepts that will differ in angle based on whether they
fall above or below the horizontal meridian. In addition, the values of $\rho$
and $\lambda$ dictate the size and elongation of elicited phosphenes,
respectively, which may drastically affect visual outcomes. Understanding the
qualitative experience associated with retinal implants and finding ways to
improve it is therefore indispensable to the development of visual
neuroprostheses and related vision augmentation technologies.
## 3. Related Work
Most retinal implants are equipped with an external VPU that is capable of
applying simple image processing techniques to the video feed in real time. In
the near future, these techniques may include deep learning–based algorithms
aimed at improving a patient’s scene understanding.
Based on this premise, researchers have developed various image optimization
strategies, and assessed their performance by having sighted observers (i.e.,
_virtual patients_) conduct daily visual tasks under simulated prosthetic
vision (SPV) (Boyle et al., 2008; Dagnelie et al., 2007; Al-Atabany et al.,
2010; Li et al., 2018; McCarthy et al., 2014; Vergnieux et al., 2017).
Simulation allows a wide range of computer vision systems to be developed and
tested without requiring implanted devices.
SPV studies suggest that one benefit of image processing may be to provide an
importance mapping that can aid scene understanding; that is, to enhance
certain image features or regions of interest, at the expense of discarding
less important or distracting information (Boyle et al., 2008; Al-Atabany et
al., 2010; Horne et al., 2016; Sanchez-Garcia et al., 2019). This limited
compensation may be significant to retinal prosthesis patients carrying out
visual tasks in daily life.
One of the most commonly explored strategies is to highlight visually salient
information in the scene. In biologically-inspired models, visual saliency is
often defined as a _bottom-up_ process that highlights regions whose low-level
visual attributes (e.g., color, contrast, motion) may differ from their
immediate surroundings. Early work used this approach to build a visual
saliency map whose salient regions coincided with the regions gazed at by
human subjects when looking at images (Parikh et al., 2010). More recent
research showed that saliency was able to improve eye-hand coordination (Li et
al., 2017), obstacle avoidance (Stacey et al., 2011), object detection
(Weiland et al., 2012), and object recognition (Li et al., 2018; Wang et al.,
2016). However, saliency prediction improved markedly with the advent of deep
learning models, which are commonly trained on human eye movement data to
predict an observer’s fixation locations while freely-viewing a set of images.
The current state-of-the-art in saliency prediction is DeepGaze II (Kümmerer
et al., 2016), a probabilistic model that uses transfer learning from VGG-19
pre-trained on the SALICON dataset. DeepGaze has yet to be applied to the
field of bionic vision.
Current retinal prostheses are implanted in only one eye, and thus are unable
to convey binocular depth cues. Previous work has therefore explored the
possibility of obtaining depth information through additional peripherals,
such as an RGB-D sensor, and studied behavioral performance of virtual
patients typically navigating an obstacle course under SPV. For example, Ref.
(Perez-Yus et al., 2017) used depth cues to generate a simplified
representation of the ground to indicate the free space within which virtual
patients could safely walk around. Depth cues were also shown to help avoid
nearby obstacles that are notoriously hard to detect with other computer
vision algorithms, such as branches hanging from a tree (Lieby et al., 2011).
Ref. (McCarthy et al., 2014) used depth to increase the contrast of object
boundaries and showed that this method reduced the number of collisions with
ground obstacles. In addition, retinal prosthesis patients were shown to
benefit from distance information provided by a thermal sensor when trying to
avoid nearby obstacles and people (Sadeghi et al., 2019). However, recent
advances in deep learning enable the estimation of relative depth from single
monocular images, thereby eliminating the need for external depth sensors and
peripherals. One of the most promising deep neural networks is monodepth2
(Godard et al., 2019), which uses a self-supervised method to estimate per-
pixel monocular depth. Deep learning–based depth estimation has yet to be
applied to the field of bionic vision.
Finally, recent advances in semantic segmentation have found application in
bionic vision to simplify the representation of both outdoor scenes (Horne et
al., 2016) and indoor scenes (Sanchez-Garcia et al., 2019). The latter study
combined semantic and structural image segmentation to build a schematic
representation of indoor environments, which was then shown to improve object
and room identification in an SPV task (Sanchez-Garcia et al., 2019).
However, a common limitation of all the above studies is that their prosthetic
vision simulation assumed that phosphenes are small, isolated, independent
light sources. It is therefore unclear how their findings would translate to
real retinal prosthesis patients, whose phosphenes are large, elongated, and
often fail to assemble into more complex percepts (Rizzo et al., 2003; Wilke
et al., 2011; Beyeler et al., 2019b; Erickson-Davis and Korzybska, 2020). In
addition, since the above algorithms were developed in isolation and tested on
different behavioral tasks, a side-by-side comparison of their ability to aid
scene understanding is still lacking.
To address these challenges, we used a neurobiologically inspired
computational model of bionic vision to generate realistic predictions of SPV,
and applied it to several state-of-the-art computer vision algorithms that
might be used to aid scene understanding. To allow for a fair comparison
between algorithms, we asked virtual patients to make perceptual judgments
about natural outdoor scenes, and assessed their performance using objective
metrics as we systematically varied a number of model parameters.
## 4. Methods
Following the workflow outlined in Fig. 1B, we created SPV videos of various
outdoor scenes captured by a head-mounted camera (Section 4.1). We first
processed the raw videos with one of four scene simplification strategies
based on state-of-the-art computer vision algorithms (Section 4.2). We then
fed the preprocessed videos into a prosthetic vision simulator to simulate the
artificial vision likely to be experienced by different retinal prosthesis
patients (Section 4.3). Example frames of the resulting SPV videos can be seen
in Fig. 3. Finally, we conducted a user study to evaluate how well the
resulting SPV videos could support scene understanding in a variety of outdoor
scenarios (Section 4.4).
### 4.1. Visual Stimuli
Stimuli consisted of $16$ first-person videos (each $5\text{\,}\mathrm{s}$
long) recorded on the University of California, Santa Barbara (UCSB) campus
using head-mounted Tobii Pro Glasses 2. All videos were recorded outdoors in
broad daylight, and were aimed at capturing scenarios that are relevant for
orientation and mobility of a retinal prosthesis patient (e.g., walking on a
sidewalk, crossing a street, strolling through a park). The final dataset was
carefully assembled so as to uniformly cover a variety of conditions. That is,
four videos did not include any people or cars; four videos had one or more
person present; four videos had one or more cars present; and four videos had
both people and cars present.
The raw dataset is publicly available as part of the Supplementary Material
(see Section 7).
Figure 3. Example frames of the simulated prosthetic vision (SPV) dataset. _A)_
Example frames containing: neither people nor cars (N), cars only (C), both
cars and people (CP), and people only (P). _B)_ Same example frames after
being processed with different scene simplification strategies (columns) and
SPV of a $32\times 32$ electrode array with different phosphene sizes and
elongations (rows). Simulations are shown for the original masks (no SPV),
small phosphenes with no axonal stimulation
($\rho=100\,\mu\mathrm{m}$, $\lambda=0\,\mu\mathrm{m}$), medium-sized
phosphenes with intermediate axonal stimulation
($\rho=300\,\mu\mathrm{m}$, $\lambda=1000\,\mu\mathrm{m}$), and large
phosphenes with strong axonal stimulation
($\rho=500\,\mu\mathrm{m}$, $\lambda=5000\,\mu\mathrm{m}$). Phosphene size and
elongation drastically affect SPV quality, but previous work often ignored
these parameters in their predictions.
Figure 4. Example trial conducted using the SimplePhy online platform (Lago,
Miguel, 2021). After watching a five-second long video clip of a simulated
prosthetic vision (SPV) outdoor scene, participants had to indicate whether
they believe cars and people to be present in the scene. Participants also
indicated their confidence on a five-level Likert scale (1 = not confident at
all, 5 = completely confident).
### 4.2. Scene Simplification Strategies
Stimuli were processed by four different scene simplification strategies,
adapted from state-of-the-art computer vision algorithms.
#### 4.2.1. Highlighting Visually Salient Information
We used DeepGaze II (Kümmerer et al., 2016) to highlight visually salient
information in the recorded videos. DeepGaze II produced a saliency map that
assigned an importance value $\in[0,1]$ to each pixel in a given video frame.
Importance values were then linearly mapped to stimulus amplitudes applied to
the simulated retinal implant.
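In code, this linear mapping is a one-liner; the function name and the maximum-amplitude scaling factor below are illustrative assumptions, not part of the published pipeline:

```python
import numpy as np

def saliency_to_amplitudes(saliency, max_amp=1.0):
    """Linearly map per-pixel importance values in [0, 1] to stimulus
    amplitudes, as described in the text.

    saliency: 2-D array of importance values, one per pixel (e.g., the
        output of a saliency model such as DeepGaze II).
    max_amp: amplitude assigned to the most important pixel (hypothetical
        scaling, chosen here for illustration).
    """
    s = np.clip(np.asarray(saliency, dtype=float), 0.0, 1.0)
    return s * max_amp

frame = np.array([[0.0, 0.5],
                  [1.0, 0.25]])
print(saliency_to_amplitudes(frame, max_amp=2.0))
```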
#### 4.2.2. Substituting Depth for Intensity
We used a self-supervised monocular depth estimation model called monodepth2
(Godard et al., 2019) (specifically the pre-trained mono+stereo_640x192 model)
to predict a per-pixel relative depth map from each frame in the videos. We
first sorted the depth values of all pixels in a frame and removed all depth
values above the 80th percentile (where the 0th and 100th percentile referred
to the nearest and farthest pixels to the viewer, respectively). We then
applied an exponential decay on the depth values such that the closest pixels
had grayscale value 180 and the farthest pixels had grayscale value 0.
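A minimal sketch of this preprocessing step follows. The percentile cutoff and the 180-to-0 endpoints come from the text; the decay rate `k` and the shift that makes the exponential reach exactly 0 at the cutoff are assumptions, since the text does not specify them:

```python
import numpy as np

def depth_to_grayscale(depth, keep_percentile=80, k=3.0):
    """Map a relative depth map to grayscale as described in the text.

    Pixels beyond the `keep_percentile`-th depth percentile (the farthest
    20%) are discarded (set to 0). Remaining depths pass through an
    exponential decay, shifted and rescaled so the nearest pixel maps to
    180 and the cutoff depth maps to 0. `k` is a free decay parameter.
    """
    d = np.asarray(depth, dtype=float)
    cutoff = np.percentile(d, keep_percentile)
    dmin = d.min()
    span = max(cutoff - dmin, 1e-9)
    raw = np.exp(-k * (d - dmin) / span)
    floor = np.exp(-k)                      # value of `raw` at the cutoff
    gray = 180.0 * (raw - floor) / (1.0 - floor)
    gray[d > cutoff] = 0.0                  # discard the farthest pixels
    return np.clip(gray, 0.0, 180.0)

depth = np.linspace(0.0, 1.0, 101)          # toy depth ramp: 0 = nearest
gray = depth_to_grayscale(depth)
print(gray[0], gray[-1])                    # nearest pixel 180.0, farthest 0.0
```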
#### 4.2.3. Object Segmentation
To segment objects of interest from background clutter, we used a combination
of the scene parsing algorithm from the MIT Scene Parsing Benchmark (Zhou et
al., 2016, 2017) and an object segmentation algorithm called detectron2 (Wu,
Yuxin et al., 2019). Given that all the stimuli were outdoor scenes, we
obtained the detected object binary masks that were labeled as a person,
bicycle, car, or bus in each video frame. If no object was detected in the
scene, we instead represented the main structural edges from the scene-parsing
algorithm. Because the scene-parsing algorithm sometimes produced more than 50
parsed regions per scene, we reduced clutter by preserving only the regions
labeled as roads or sidewalks, from which we then extracted the structural
edges for better visualization. The resulting binary masks were then linearly
mapped to stimulus
amplitudes applied to the simulated retinal implant.
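The mask-selection logic can be sketched as follows. The numeric label IDs are hypothetical placeholders (real IDs depend on the segmentation model’s class list), and the fallback here uses the road/sidewalk regions directly rather than their extracted edges:

```python
import numpy as np

# Hypothetical label IDs; actual values depend on the model's class list.
OBJECT_LABELS = {"person": 1, "bicycle": 2, "car": 3, "bus": 4}
GROUND_LABELS = {"road": 10, "sidewalk": 11}

def scene_to_mask(label_map):
    """Binary mask of the labels of interest, as described in the text.

    If any object of interest (person, bicycle, car, bus) is present, keep
    those pixels; otherwise fall back to the road/sidewalk regions that the
    scene-parsing step would reduce to structural edges.
    """
    labels = np.asarray(label_map)
    obj = np.isin(labels, list(OBJECT_LABELS.values()))
    if obj.any():
        return obj.astype(np.uint8)
    return np.isin(labels, list(GROUND_LABELS.values())).astype(np.uint8)

frame = np.array([[0, 3],
                  [10, 0]])        # one car pixel, one road pixel
print(scene_to_mask(frame))        # only the car pixel survives
```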
#### 4.2.4. Combining Saliency, Depth, and Segmentation
Recognizing the complementary qualities of the three algorithms described
above, we wondered whether a combination of saliency, depth, and object
segmentation could further improve scene understanding. While segmentation
excels at highlighting objects of interests, it might miss regions of interest
that do not have a clear semantic label (which would be highlighted by the
more bottom-up–driven saliency detector) or nearby obstacles (which would be
highlighted by the depth algorithm). To arrive at a binary mask of salient
objects, we thresholded the saliency map to retain only the
$10\%$ most salient pixels and combined it with
the object segmentation map using a logical OR. We then scaled the grayscale
value of each pixel in the new binary mask with a quadratic function of depth,
similar to the above:
$y=-\frac{45}{16}(\frac{8}{d_{\max}-d_{\min}}x-\frac{16}{d_{\max}-d_{\min}})^{2}+180$.
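The combination logic can be sketched as follows. The depth weighting is passed in as a callable, since the constants in the quadratic above are specific to the authors’ depth range; the helper names are assumptions:

```python
import numpy as np

def combine(saliency, seg_mask, depth, depth_weight):
    """Combine saliency, segmentation, and depth as described in the text.

    saliency: 2-D importance map in [0, 1]
    seg_mask: binary object-segmentation mask
    depth: per-pixel relative depth
    depth_weight: callable mapping depth to a grayscale weight (the text
        uses a quadratic in depth peaking at 180; any monotone-decreasing
        function of depth plays the same role here)
    """
    # Keep only the 10% most salient pixels.
    thresh = np.percentile(saliency, 90)
    sal_mask = saliency >= thresh
    # Logical OR with the object-segmentation mask.
    mask = np.logical_or(sal_mask, seg_mask.astype(bool))
    # Scale each surviving pixel by a function of its depth.
    return np.where(mask, depth_weight(np.asarray(depth, dtype=float)), 0.0)

sal = np.array([[0.1, 0.9], [0.2, 0.3]])
seg = np.array([[1, 0], [0, 0]])
depth = np.array([[0.0, 1.0], [0.5, 0.5]])
out = combine(sal, seg, depth, lambda d: 180.0 * (1.0 - d))
print(out)   # nearest segmented pixel keeps full weight; far pixels fade out
```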
### 4.3. Simulated Prosthetic Vision
The preprocessed videos were then used as input stimuli to the pulse2percept
simulator (Beyeler et al., 2017a), which provides an open-source
implementation of Ref. (Beyeler et al., 2019b) (among others). The simulator
takes a downscaled version of the preprocessed image, and interprets the
grayscale value of each pixel in a video frame as a current amplitude
delivered to the simulated retinal implant. However, pulse2percept describes
the output of SPV not as a pixelated image, but determines the shape of each
phosphene based on the retinal location of the simulated implant as well as
model parameters $\rho$ and $\lambda$ (see Section 2). As can be seen in (Fig.
2, _left_), electrodes near the horizontal meridian were thus predicted to
elicit circular percepts, while other electrodes were predicted to produce
elongated percepts that differed in angle based on whether they fell above or
below the horizontal meridian.
Importantly, $\rho$ and $\lambda$ seem to vary drastically across patients
(Beyeler et al., 2019b). Although the reason for this is not fully understood,
it is clear that the choice of these parameter values may drastically affect
the quality of the generated visual experience. To cover a broad range of
potential visual outcomes, we thus simulated nine different conditions with
$\rho=\{100,300,500\}$ and $\lambda=\{0,1000,5000\}$.
To study the effect that the number of electrodes in a retinal implant has on
scene understanding, we simulated three different retinal implants consisting
of $8\times 8$, $16\times 16$, and $32\times 32$ electrodes arranged on a
rectangular grid. These sizes roughly correspond to existing and near-future
retinal implants.
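The pixel-to-amplitude scheme above can be sketched as block averaging onto an $N\times N$ electrode grid; the actual simulator’s downscaling method is not specified here, so this is one plausible implementation under that assumption:

```python
import numpy as np

def frame_to_amplitudes(frame, grid=16):
    """Downscale a grayscale frame to an N x N electrode grid by block
    averaging, interpreting each block mean as that electrode's current
    amplitude (the simple pixel-to-amplitude scheme described in the text).
    Assumes the frame dimensions are divisible by `grid`.
    """
    f = np.asarray(frame, dtype=float)
    h, w = f.shape
    bh, bw = h // grid, w // grid
    return f[:grid * bh, :grid * bw].reshape(grid, bh, grid, bw).mean(axis=(1, 3))

frame = np.zeros((64, 64))
frame[:32, :32] = 180.0                     # bright upper-left quadrant
amps = frame_to_amplitudes(frame, grid=8)
print(amps.shape, amps[0, 0], amps[7, 7])   # (8, 8) 180.0 0.0
```

The same frame can be mapped to the three simulated resolutions simply by varying `grid` between 8, 16, and 32.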
### 4.4. Virtual Patients
We recruited 45 sighted undergraduate students (ages 18–21; 31 females, 14
males) from the student pool at UCSB to act as virtual patients in our
experiments. Subjects were asked to watch SPV videos depicting various outdoor
scenes and indicate whether they believe people and/or cars to be present in
the scene. We were primarily interested in investigating their perceptual
performance as a function of the four different scene simplification
strategies, three retinal implant resolutions, and nine combinations of model
parameters $\rho$ and $\lambda$. All experiments were performed under a
protocol approved by the university’s Institutional Review Board.
#### 4.4.1. Experimental Setup and Apparatus
The experiment was set up using a recent online platform called SimplePhy
(Lago, Miguel, 2021). All subjects completed the experiment online using a
personal laptop or computer.
We used a between-subjects design where each subject was randomly assigned to
one of the nine model parameter conditions
($\rho\in\{100,300,500\}\times\lambda\in\{0,1000,5000\}$). Each condition
was completed by five different subjects. Within each condition, each subject
completed all 16 videos with the four scene simplification strategies (depth,
saliency, segmentation, combination) and three electrode grid resolutions
($8\times 8$, $16\times 16$, $32\times 32$). Therefore, each subject completed
192 trials ($16\times 4\times 3$) in total, which took about 45–60 minutes to
finish.
#### 4.4.2. Experimental Task and Procedure
Subjects underwent a short online practice session consisting of 8 practice
trials, where they were shown original videos from the head-mounted camera
alongside their corresponding SPV videos. An example trial is shown in Fig. 4.
Note that the video sequences used in the practice session did not appear in
the actual experiment. After each video, a new screen appeared (‘response
screen’ in Fig. 4) on which subjects indicated whether they believed the scene
contained any people or cars. Subjects also indicated their confidence on a
five-level Likert scale (1 = not confident at all, 5 = completely confident).
Detecting cars and people is an essential component of orientation &
mobility. Increasing a patient’s ability to detect and recognize moving
objects may help them avoid dangerous situations in real-life scenarios.
#### 4.4.3. Evaluating performance
Perceptual performance was assessed using the sensitivity index ($d^{\prime}$,
“d-prime”), which is a dimensionless statistic from signal detection theory
that can be used to measure a participant’s perceptual sensitivity (Simpson
and Fitter, 1973):
(1) $d^{\prime}=Z(\textrm{hit rate})-Z(\textrm{false discovery rate}),$
where the function $Z(p)$, with $p\in[0,1]$, is the inverse of the cumulative
distribution function of the Gaussian distribution. Here, the hit rate was
calculated from the number of trials in which a participant correctly
identified people or cars to be present, and the false-discovery rate (FDR)
was calculated from the number of trials in which a participant indicated to
see either people or cars, although none of them were present. A higher
$d^{\prime}$ indicates better ability to discriminate between trials in which
a target is present (signal) and trials in which a target is absent (noise).
$d^{\prime}=0$ indicates that a participant is performing at chance levels.
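Using Python’s standard library, $d^{\prime}$ can be computed directly from trial counts. Note that the half-count correction keeping $Z$ finite at rates of 0 or 1 is a common convention assumed here, not something the text specifies; the false-alarm term follows the text’s operational definition (target reported although none was present):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = Z(hit rate) - Z(false-alarm rate), where Z is
    the inverse of the Gaussian cumulative distribution function. Rates are
    nudged away from 0 and 1 by half a count so Z stays finite.
    """
    z = NormalDist().inv_cdf

    def rate(k, n):
        return min(max(k / n, 0.5 / n), 1.0 - 0.5 / n)

    hit_rate = rate(hits, hits + misses)
    fa_rate = rate(false_alarms, false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Perfect discrimination over 20 target and 20 no-target trials:
print(round(d_prime(20, 0, 0, 20), 2))   # 3.92
# Chance performance gives d' = 0:
print(d_prime(10, 10, 10, 10))           # 0.0
```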
We used bootstrapping to test for statistical significance. Ten thousand
bootstrap re-samples were used to estimate within-subject and between-subject
differences. All $p$ values were corrected using FDR to control the
probability of incorrectly rejecting the null hypotheses when doing multiple
comparisons (Li et al., 2015).
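A minimal sketch of both steps follows, assuming a pooled-resampling bootstrap and the standard Benjamini–Hochberg procedure for the FDR correction; the text does not spell out either implementation, so the details here are illustrative:

```python
import random

def benjamini_hochberg(p_values):
    """Benjamini-Hochberg FDR correction: the adjusted p-value of the k-th
    smallest raw p is min over j >= k of p_(j) * m / j, capped at 1."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m - 1, -1, -1):        # walk from the largest p down
        i = order[rank]
        running_min = min(running_min, p_values[i] * m / (rank + 1))
        adjusted[i] = running_min
    return adjusted

def bootstrap_mean_diff(a, b, n_boot=10000, seed=0):
    """Two-sided bootstrap p-value for a difference in means, resampling
    from the pooled data under the null hypothesis of no difference (a
    generic scheme; the study's exact resampling statistic may differ)."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_boot):
        ra = [rng.choice(pooled) for _ in a]
        rb = [rng.choice(pooled) for _ in b]
        if abs(sum(ra) / len(ra) - sum(rb) / len(rb)) >= abs(observed):
            extreme += 1
    return extreme / n_boot

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.20]))
```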
For the sake of completeness, perceptual performance was also evaluated on
four common statistical indicators: accuracy (the fraction of correct
predictions), precision (the number of correctly detected targets divided by
the number of all trials predicted to contain either people or cars), recall
(the number of correctly detected targets divided by the number of all trials
that actually contained either people or cars), and the F1 score (the harmonic
mean of precision and recall). Note that some of these quantities also enter
the computation of $d^{\prime}$.
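These four indicators follow directly from the confusion counts; the sketch below uses the standard definitions:

```python
def detection_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion counts
    (tp/fp/fn/tn = true/false positives and false/true negatives)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

print(detection_metrics(tp=6, fp=2, fn=2, tn=6))  # (0.75, 0.75, 0.75, 0.75)
```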
## 5. Results
### 5.1. Effect of Scene Simplification Strategy on Perceptual Performance
Fig. 5 shows the perceptual performance of virtual patients as a function of
the different scene simplification strategies. Subjects performed best using
the segmentation algorithm ($d^{\prime}:\mu=1.13,\sigma=1.01$). Performance
based on saliency ($d^{\prime}$: $\mu=0.07$, standard deviation
$\sigma=0.66,p<0.001$), depth ($d^{\prime}:\mu=0.29,\sigma=0.66,p<0.001$), and
combination ($d^{\prime}:\mu=1.01,\sigma=0.91,p<0.05$) was significantly
worse. Saliency performed worst, followed by depth ($p<0.01$) and the
combination algorithm ($p<0.001$). Interestingly, the combination algorithm
was not able to benefit from the complementary information contributed by the
individual saliency, depth, and segmentation algorithms.
These findings are further corroborated by other objective measures such as
accuracy, precision, and recall (see Table 1) that reveal object segmentation
as the most beneficial scene simplification strategy.
Subjects’ confidence ratings in the segmentation condition
($\mu=3.02,\sigma=1.10$) and combination condition ($\mu=2.96,\sigma=1.08$)
were both significantly higher than those in the saliency condition
($\mu=2.65,\sigma=1.12$) and the depth condition ($\mu=2.68,\sigma=1.07$; all
$p<0.001$). No difference between the saliency and depth conditions was found
($p=0.09$).
Figure 5. Effect of Scene Simplification Strategy. The $p$ values were corrected using false-discovery rate (FDR): * $p<.05$, ** $p<.01$, *** $p<.001$. The error bars show the 95$\%$ confidence interval.

Condition | Accuracy | Precision | Recall | F1
---|---|---|---|---
Saliency | 0.51 | 0.53 | 0.46 | 0.46
Depth | 0.54 | 0.56 | 0.56 | 0.53
Segmentation | 0.68 | 0.73 | 0.63 | 0.68
Combination | 0.66 | 0.72 | 0.62 | 0.67
Table 1. Virtual patient’s ability to identify people and cars in outdoor
scenes using different scene simplification strategies (bold: best overall).
### 5.2. Effect of Phosphene Size and Elongation on Perceptual Performance
Fig. 6 shows the perceptual performance of virtual patients as a function of
phosphene size ($\rho$) and elongation ($\lambda$). As expected, smaller
phosphenes led to better perceptual performance, with
$\rho=100\,\mu\mathrm{m}$ scoring significantly better
($d^{\prime}:\mu=0.81,\sigma=1.02$) than $\rho=300\,\mu\mathrm{m}$
($d^{\prime}:\mu=0.6,\sigma=0.89,p=0.03$) and $\rho=500\,\mu\mathrm{m}$
($d^{\prime}:\mu=0.52,\sigma=0.96,p=0.02$). No significant difference in
$d^{\prime}$ was found between the conditions with $\rho=300\,\mu\mathrm{m}$
and $\rho=500\,\mu\mathrm{m}$ ($p=0.28$).
Figure 6. Sensitivity index ($d^{\prime}$) as a function of phosphene width ($\rho$) and length ($\lambda$). The $p$ values were corrected using false-discovery rate (FDR): * $p<.05$, ** $p<.01$, *** $p<.001$. The error bars show the 95$\%$ confidence interval.

Figure 7. Sensitivity index ($d^{\prime}$) for each tested combination of phosphene width ($\rho$) and length ($\lambda$). The $p$ values were corrected using false-discovery rate (FDR): * $p<.05$, ** $p<.01$, *** $p<.001$. The error bars show the 95$\%$ confidence interval.

$\rho$ ($\mu\mathrm{m}$) | $\lambda$ ($\mu\mathrm{m}$) | Accuracy | Precision | Recall | F1
---|---|---|---|---|---
100 | 0 | 0.64 | 0.69 | 0.53 | 0.56
| 1000 | 0.59 | 0.58 | 0.70 | 0.62
| 5000 | 0.62 | 0.68 | 0.63 | 0.60
300 | 0 | 0.59 | 0.61 | 0.62 | 0.58
| 1000 | 0.62 | 0.63 | 0.58 | 0.58
| 5000 | 0.57 | 0.58 | 0.56 | 0.55
500 | 0 | 0.60 | 0.60 | 0.63 | 0.59
| 1000 | 0.58 | 0.56 | 0.66 | 0.59
| 5000 | 0.57 | 0.55 | 0.66 | 0.59
Table 2. Virtual patient’s ability to identify people and cars in outdoor
scenes as a function of phosphene size ($\rho$) and elongation ($\lambda$;
bold: best in box, italics: best overall).
A similar trend was evident with respect to phosphene elongation. Here,
$\lambda=0$ indicated circular phosphenes, similar to the SPV studies
described in Section 3, and led to performance similar to that of the
$\rho=100\,\mu\mathrm{m}$ condition ($d^{\prime}:\mu=0.75,\sigma=0.99$).
And while there was a trend indicating that more elongated phosphenes may
lead to poorer perceptual performance, it did not reach statistical
significance ($\lambda=1000\,\mu\mathrm{m}$:
$d^{\prime}:\mu=0.63,\sigma=0.98,p>0.05$; $\lambda=5000\,\mu\mathrm{m}$:
$d^{\prime}:\mu=0.53,\sigma=0.91,p>0.05$).
This trend was confirmed by investigating $d^{\prime}$ across all nine model
parameter conditions (see Fig. 7). Here we found a clear decreasing trend in
$d^{\prime}$ as phosphene size ($\rho$) and phosphene elongation ($\lambda$)
increased. However, notice that $d^{\prime}$ was
positive in all conditions, indicating that subjects performed better than
chance even when phosphenes were unusually large and elongated.
Similar patterns were found in all the other behavioral performance
measurements (see Table 2). Overall, the highest accuracy, precision, recall,
and F1 scores (italics) were achieved with the smallest tested phosphene size
($\rho=100\,\mu\mathrm{m}$), but not necessarily with
the shortest phosphene length.
Unfortunately, the subjects’ confidence ratings across conditions with
different phosphene sizes ($\rho$) and phosphene elongations ($\lambda$) did
not show any significant difference ($\rho=100:\mu=2.74,\sigma=1.19$;
$\rho=300:\mu=2.95,\sigma=1.07$; $\rho=500:\mu=2.80,\sigma=1.04$;
$\lambda=0:\mu=2.60,\sigma=1.14$; $\lambda=1000:\mu=3.18,\sigma=1.14$;
$\lambda=5000:\mu=2.71,\sigma=.93$, all $p>.05$).
Figure 8. Sensitivity index ($d^{\prime}$) as a function of electrode grid size. The $p$ values were corrected using false-discovery rate (FDR): * $p<.05$, ** $p<.01$, *** $p<.001$. The error bars show the 95$\%$ confidence interval.

Resolution | Accuracy | Precision | Recall | F1
---|---|---|---|---
8$\times$8 | 0.57 | 0.59 | 0.57 | 0.54
16$\times$16 | 0.61 | 0.62 | 0.64 | 0.61
32$\times$32 | 0.61 | 0.62 | 0.65 | 0.61
Table 3. Virtual patient’s ability to identify people and cars in outdoor
scenes as a function of electrode grid size (bold: best overall).
### 5.3. Effect of Electrode Grid Size on Perceptual Performance
Fig. 8 shows the perceptual performance of virtual patients as a function of
electrode grid size. As expected, performance improved as the number of
electrodes in the array was increased from $8\times 8$
($d^{\prime}:\mu=0.47,\sigma=0.87$) to $16\times 16$
($d^{\prime}:\mu=0.72,\sigma=0.93,p<0.001$). However, further increasing the
number of electrodes to $32\times 32$ did not measurably affect performance
($p=0.37$).
This finding is again corroborated by accuracy, precision, recall, and F1
scores (Table 3), indicating virtually identical performance for $16\times 16$
and $32\times 32$.
Again, no significant difference in confidence ratings was found for different
electrode array resolution ($8\times 8:\mu=2.74,\sigma=1.12;16\times
16:\mu=2.85,\sigma=1.09;32\times 32:\mu=2.89,\sigma=1.10$, all $p>0.05$).
## 6\. Discussion
### 6.1. Object Segmentation May Support Scene Understanding
The present study provides the first side-by-side comparison of several deep
learning–based scene simplification strategies for bionic vision. Considering
a number of relevant implant configurations in combination with a
psychophysically validated model of SPV, we identified object segmentation as
the most promising image processing strategy to support outdoor scene
understanding of virtual patients (see Fig. 5 and Table 1). This finding is
consistent with recent studies indicating the potential utility of semantic
segmentation for bionic vision (Horne et al., 2016; Sanchez-Garcia et al.,
2019).
Object segmentation compared favorably with two other scene simplification
strategies, one based on visual saliency and the other on monocular depth
estimation. Whereas
the saliency model struggled with the lighting conditions of the outdoor data
set (often highlighting regions of increased contrast, and falling victim to
shadows), the depth model often failed to highlight nearby obstacles. However,
these models may prove their value in less structured test environments, where
performance is less focused on semantic labeling and more concerned with the
decluttering of complex scenes or the avoidance of nearby obstacles.
### 6.2. Increased Phosphene Size Impedes Perceptual Performance
To the best of our knowledge, this study is also the first to examine SPV with a
neurobiologically inspired, psychophysically validated model of phosphene
shape (Beyeler et al., 2019b). Whereas previous studies assumed that
phosphenes are isolated, focal spots of light (Dagnelie et al., 2007; Chen et
al., 2009; Lui et al., 2011; Perez-Yus et al., 2017; Sanchez-Garcia et al.,
2019), here we systematically evaluated perceptual performance across a wide
range of common phosphene sizes ($\rho$) and elongation ($\lambda$). As
expected, the best performance was achieved with small, circular phosphenes
($\rho=100\,\mu\mathrm{m},\lambda=0$; see Fig. 6),
and increasing phosphene size and elongation negatively affected performance
(Fig. 7). This finding suggests that future studies of SPV should take into
account realistic phosphene shape when making predictions and drawing
conclusions.
However, it is worth mentioning that the sensitivity index ($d^{\prime}$)
remained positive in all tested conditions, indicating that subjects performed
better than chance even when phosphenes were unusually large and elongated.
This result suggests that all tested scene simplification strategies enabled
the virtual patients to perform above chance levels, no matter how degraded
the SPV quality.
### 6.3. Increasing the Number of Electrodes Does Not Necessarily Improve
Performance
As expected, perceptual performance improved as the size of the electrode grid
was increased from $8\times 8$ to $16\times 16$. However, further increasing
the number of electrodes to $32\times 32$ did not measurably affect
performance. This result is consistent with previous literature suggesting
that number of electrodes is not the limiting factor in retinal implants
(Behrend et al., 2011; Beyeler et al., 2017b).
### 6.4. Limitations and Future Work
Although the present results demonstrate the utility of deep learning–based
scene simplification for bionic vision, there are a number of limitations that
should be addressed in future work.
First, in an effort to focus on scenes important for orientation and mobility,
we limited our dataset to outdoor scenes. However, it would also be valuable
to evaluate the performance of different scene simplification strategies on
indoor scenarios. Because indoor scenes have different layouts and types of
objects, the algorithms studied here might perform differently than they do
on outdoor scenes. For example, the saliency model might perform
better in highlighting salient regions without the interference of light and
shadow contrasts.
Second, to keep the perceptual judgments amenable to quantitative performance
measures, we limited the current study to a simple detection task involving
common semantic object categories (i.e., people and cars). This might explain
the superior performance of the semantic segmentation algorithm, which
operates with semantic labels. In contrast, the depth and saliency algorithms
might prove more valuable when applied to open-ended navigation tasks. In the
future, we plan to conduct such SPV studies in immersive virtual reality (VR)
to gain more comprehensive insight into the behavioral performance of virtual
patients.
Third, the present study should be understood as a first step towards the
ultimate goal of creating a retinal implant supported by deep learning–based
image preprocessing. Such a device would require all processing to happen in
real time at the edge. One solution could come in the form of low-power, low-
latency neuromorphic hardware coupled with an event-based vision sensor.
Future iterations of this work may include end-to-end training of scene
simplification strategies fitted to a specific implant technology or even an
individual patient. Overall this work has the potential to drastically improve
the utility of prosthetic vision for people blinded from retinal degenerative
diseases.
## 7\. Data Availability
All raw video sequences (original and preprocessed) are available on the Open
Science Framework (https://osf.io/s2udz). SPV models were based on the
pulse2percept Python package (Beyeler et al., 2017a). Code used to implement
the scene simplification strategies is available on GitHub
(https://github.com/bionicvisionlab/2021-han-scene-simplification, v0.1).
## Acknowledgments
This work was partially supported by the National Institutes of Health (NIH
R00 EY-029329 to MB). We would like to thank Yaoyi Bai and Sikun Lin for their
contribution to an earlier version of the depth algorithm, and Asa Young for
collecting the video stimuli. We would also like to thank Dr. Miguel Lago for
technical support with regards to the SimplePhy (Lago, Miguel, 2021) online
platform.
## References
* Al-Atabany et al. (2010) W. I. Al-Atabany, T. Tong, and P. A. Degenaar. 2010\. Improved content aware scene retargeting for retinitis pigmentosa patients. _Biomed Eng Online_ 9 (Sept. 2010), 52. https://doi.org/10.1186/1475-925X-9-52
* Ayton et al. (2020) Lauren N. Ayton, Nick Barnes, Gislin Dagnelie, Takashi Fujikado, Georges Goetz, Ralf Hornig, Bryan W. Jones, Mahiul M. K. Muqit, Daniel L. Rathbun, Katarina Stingl, James D. Weiland, and Matthew A. Petoe. 2020. An update on retinal prostheses. _Clinical Neurophysiology_ 131, 6 (June 2020), 1383–1398. https://doi.org/10.1016/j.clinph.2019.11.029
* Ayton et al. (2014) Lauren N. Ayton, Peter J. Blamey, Robyn H. Guymer, Chi D. Luu, David A. X. Nayagam, Nicholas C. Sinclair, Mohit N. Shivdasani, Jonathan Yeoh, Mark F. McCombe, Robert J. Briggs, Nicholas L. Opie, Joel Villalobos, Peter N. Dimitrov, Mary Varsamidis, Matthew A. Petoe, Chris D. McCarthy, Janine G. Walker, Nick Barnes, Anthony N. Burkitt, Chris E. Williams, Robert K. Shepherd, Penelope J. Allen, and for the Bionic Vision Australia Research Consortium. 2014\. First-in-Human Trial of a Novel Suprachoroidal Retinal Prosthesis. _PLOS ONE_ 9, 12 (Dec. 2014), e115239. https://doi.org/10.1371/journal.pone.0115239 Publisher: Public Library of Science.
* Behrend et al. (2011) Matthew R. Behrend, Ashish K. Ahuja, Mark S. Humayun, Robert H. Chow, and James D. Weiland. 2011\. Resolution of the Epiretinal Prosthesis is not Limited by Electrode Size. _IEEE Transactions on Neural Systems and Rehabilitation Engineering_ 19, 4 (Aug. 2011), 436–442. https://doi.org/10.1109/TNSRE.2011.2140132
* Beyeler et al. (2017a) M. Beyeler, G. M. Boynton, I. Fine, and A. Rokem. 2017a. pulse2percept: A Python-based simulation framework for bionic vision. In _Proceedings of the 16th Science in Python Conference_ , K. Huff, D. Lippa, D. Niederhut, and M. Pacer (Eds.). 81–88. https://doi.org/10.25080/shinma-7f4c6e7-00c
* Beyeler et al. (2019a) Michael Beyeler, Geoffrey M. Boynton, Ione Fine, and Ariel Rokem. 2019a. Model-Based Recommendations for Optimal Surgical Placement of Epiretinal Implants. In _Medical Image Computing and Computer Assisted Intervention – MICCAI 2019_ _(Lecture Notes in Computer Science)_ , Dinggang Shen, Tianming Liu, Terry M. Peters, Lawrence H. Staib, Caroline Essert, Sean Zhou, Pew-Thian Yap, and Ali Khan (Eds.). Springer International Publishing, 394–402. https://doi.org/10.1007/978-3-030-32254-0_44
* Beyeler et al. (2019b) Michael Beyeler, Devyani Nanduri, James D. Weiland, Ariel Rokem, Geoffrey M. Boynton, and Ione Fine. 2019b. A model of ganglion axon pathways accounts for percepts elicited by retinal implants. _Scientific Reports_ 9, 1 (June 2019), 1–16. https://doi.org/10.1038/s41598-019-45416-4
* Beyeler et al. (2017b) M. Beyeler, A. Rokem, G. M. Boynton, and I. Fine. 2017b. Learning to see again: biological constraints on cortical plasticity and the implications for sight restoration technologies. _J Neural Eng_ 14, 5 (June 2017), 051003\. https://doi.org/10.1088/1741-2552/aa795e
* Boyle et al. (2008) Justin R. Boyle, Anthony J. Maeder, and Wageeh W. Boles. 2008. Region-of-interest processing for electronic visual prostheses. _Journal of Electronic Imaging_ 17, 1 (Jan. 2008), 013002. https://doi.org/10.1117/1.2841708 Publisher: International Society for Optics and Photonics.
* Chen et al. (2009) S. C. Chen, G. J. Suaning, J. W. Morley, and N. H. Lovell. 2009. Simulating prosthetic vision: I. Visual models of phosphenes. _Vision Research_ 49, 12 (June 2009), 1493–506.
* Dagnelie et al. (2007) G. Dagnelie, P. Keane, V. Narla, L. Yang, J. Weiland, and M. Humayun. 2007\. Real and virtual mobility performance in simulated prosthetic vision. _J Neural Eng_ 4, 1 (March 2007), S92–101. https://doi.org/10.1088/1741-2560/4/1/S11
* Erickson-Davis and Korzybska (2020) Cordelia Erickson-Davis and Helma Korzybska. 2020. What do blind people “see” with retinal prostheses? Observations and qualitative reports of epiretinal implant users. _bioRxiv_ (Feb. 2020), 2020.02.03.932905. https://doi.org/10.1101/2020.02.03.932905
* Ferlauto et al. (2018) Laura Ferlauto, Marta Jole Ildelfonsa Airaghi Leccardi, Naïg Aurelia Ludmilla Chenais, Samuel Charles Antoine Gilliéron, Paola Vagni, Michele Bevilacqua, Thomas J. Wolfensberger, Kevin Sivula, and Diego Ghezzi. 2018. Design and validation of a foldable and photovoltaic wide-field epiretinal prosthesis. _Nature Communications_ 9, 1 (March 2018), 1–15. https://doi.org/10.1038/s41467-018-03386-7
* Fernandez (2018) Eduardo Fernandez. 2018\. Development of visual Neuroprostheses: trends and challenges. _Bioelectronic Medicine_ 4, 1 (Aug. 2018), 12\. https://doi.org/10.1186/s42234-018-0013-8
* Fine and Boynton (2015) I. Fine and G. M. Boynton. 2015. Pulse trains to percepts: the challenge of creating a perceptually intelligible world with sight recovery technologies. _Philos Trans R Soc Lond B Biol Sci_ 370, 1677 (Sept. 2015), 20140208. https://doi.org/10.1098/rstb.2014.0208
* Godard et al. (2019) C. Godard, O. M. Aodha, M. Firman, and G. Brostow. 2019\. Digging Into Self-Supervised Monocular Depth Estimation. In _2019 IEEE/CVF International Conference on Computer Vision (ICCV)_. 3827–3837. https://doi.org/10.1109/ICCV.2019.00393 ISSN: 2380-7504.
* Horne et al. (2016) Lachlan Horne, Jose Alvarez, Chris McCarthy, Mathieu Salzmann, and Nick Barnes. 2016\. Semantic labeling for prosthetic vision. _Computer Vision and Image Understanding_ 149 (Aug. 2016), 113–125. https://doi.org/10.1016/j.cviu.2016.02.015
* Kümmerer et al. (2016) Matthias Kümmerer, Thomas S. A. Wallis, and Matthias Bethge. 2016\. DeepGaze II: Reading fixations from deep features trained on object recognition. _arXiv:1610.01563 [cs, q-bio, stat]_ (Oct. 2016). http://arxiv.org/abs/1610.01563 arXiv: 1610.01563.
* Lago, Miguel (2021) Lago, Miguel. 2021\. SimplePhy: An open-source tool for quick online perception experiments. _Behavior Research Methods_ (2021). https://doi.org/10.3758/s13428-020-01515-z
* Li et al. (2017) Heng Li, Tingting Han, Jing Wang, Zhuofan Lu, Xiaofei Cao, Yao Chen, Liming Li, Chuanqing Zhou, and Xinyu Chai. 2017. A real-time image optimization strategy based on global saliency detection for artificial retinal prostheses. _Information Sciences_ 415, Supplement C (Nov. 2017), 1–18. https://doi.org/10.1016/j.ins.2017.06.014
* Li et al. (2018) Heng Li, Xiaofan Su, Jing Wang, Han Kan, Tingting Han, Yajie Zeng, and Xinyu Chai. 2018. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision. _Artificial Intelligence in Medicine_ 84 (Jan. 2018), 64–78. https://doi.org/10.1016/j.artmed.2017.11.001
* Li et al. (2015) Junning Li, Yonggang Shi, and Arthur W. Toga. 2015. Controlling False Discovery Rate in Signal Space for Transformation-Invariant Thresholding of Statistical Maps. _Information processing in medical imaging : proceedings of the … conference_ 9123 (July 2015), 125–136. https://doi.org/10.1007/978-3-319-19992-4_10
* Lieby et al. (2011) P. Lieby, N. Barnes, C. McCarthy, Nianjun Liu, H. Dennett, J. G. Walker, V. Botea, and A. F. Scott. 2011\. Substituting depth for intensity and real-time phosphene rendering: Visual navigation under low vision conditions. In _2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society_. IEEE, Boston, MA, 8017–8020. https://doi.org/10.1109/IEMBS.2011.6091977
* Lorach et al. (2015) H. Lorach, G. Goetz, R. Smith, X. Lei, Y. Mandel, T. Kamins, K. Mathieson, P. Huie, J. Harris, A. Sher, and D. Palanker. 2015. Photovoltaic restoration of sight with high visual acuity. _Nat Med_ 21, 5 (May 2015), 476–82. https://doi.org/10.1038/nm.3851
* Lui et al. (2011) Wen Lik Dennis Lui, Damien Browne, Lindsay Kleeman, Tom Drummond, and Wai Ho Li. 2011. Transformative reality: Augmented reality for visual prostheses. In _2011 10th IEEE International Symposium on Mixed and Augmented Reality_. 253–254. https://doi.org/10.1109/ISMAR.2011.6092402 ISSN: null.
* Luo and da Cruz (2016) Y. H. Luo and L. da Cruz. 2016. The Argus((R)) II Retinal Prosthesis System. _Prog Retin Eye Res_ 50 (Jan. 2016), 89–107. https://doi.org/10.1016/j.preteyeres.2015.09.003
* Luo et al. (2016) Yvonne H.-L. Luo, Joe Jiangjian Zhong, Monica Clemo, and Lyndon da Cruz. 2016. Long-term Repeatability and Reproducibility of Phosphene Characteristics in Chronically Implanted Argus II Retinal Prosthesis Subjects. _American Journal of Ophthalmology_ 170 (Oct. 2016), 100–109. https://doi.org/10.1016/j.ajo.2016.07.021 Publisher: Elsevier.
* McCarthy et al. (2014) Chris McCarthy, Janine G. Walker, Paulette Lieby, Adele Scott, and Nick Barnes. 2014. Mobility and low contrast trip hazard avoidance using augmented depth. _Journal of Neural Engineering_ 12, 1 (Nov. 2014), 016003. https://doi.org/10.1088/1741-2560/12/1/016003 Publisher: IOP Publishing.
* Parikh et al. (2010) N. Parikh, L. Itti, and J. Weiland. 2010. Saliency-based image processing for retinal prostheses. _Journal of Neural Engineering_ 7, 1 (Jan. 2010), 016006. https://doi.org/10.1088/1741-2560/7/1/016006 Publisher: IOP Publishing.
* Perez-Yus et al. (2017) Alejandro Perez-Yus, Jesus Bermudez-Cameo, Gonzalo Lopez-Nicolas, and Jose J. Guerrero. 2017. Depth and Motion Cues With Phosphene Patterns for Prosthetic Vision. 1516–1525. http://openaccess.thecvf.com/content_ICCV_2017_workshops/w22/html/Perez-Yus_Depth_and_Motion_ICCV_2017_paper.html
* Rizzo et al. (2003) J. F. Rizzo, J. Wyatt, J. Loewenstein, S. Kelly, and D. Shire. 2003. Perceptual efficacy of electrical stimulation of human retina with a microelectrode array during short-term surgical trials. _Invest Ophthalmol Vis Sci_ 44, 12 (Dec. 2003), 5362–9.
* Sadeghi et al. (2019) Roksana Sadeghi, Arathy Kartha, Michael P. Barry, Paul Gibson, Avi Caspi, Arup Roy, and Gislin Dagnelie. 2019. Thermal and Distance image filtering improve independent mobility in Argus II retinal implant. _Journal of Vision_ 19, 15 (Dec. 2019), 23–23. https://doi.org/10.1167/19.15.23 Publisher: The Association for Research in Vision and Ophthalmology.
* Sanchez-Garcia et al. (2019) Melani Sanchez-Garcia, Ruben Martinez-Cantin, and Josechu J. Guerrero. 2019. Indoor Scenes Understanding for Visual Prosthesis with Fully Convolutional Networks. In _VISIGRAPP_. https://doi.org/10.5220/0007257602180225
* Simpson and Fitter (1973) A. J. Simpson and M. J. Fitter. 1973. What is the best index of detectability? _Psychological Bulletin_ 80, 6 (1973), 481–488. https://doi.org/10.1037/h0035203 Place: US Publisher: American Psychological Association.
* Stacey et al. (2011) A. Stacey, Y. Li, and N. Barnes. 2011. A salient information processing system for bionic eye with application to obstacle avoidance. In _2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society_. 5116–5119. https://doi.org/10.1109/IEMBS.2011.6091267 ISSN: 1558-4615.
* Stingl et al. (2013) K. Stingl, K. U. Bartz-Schmidt, D. Besch, A. Braun, A. Bruckmann, F. Gekeler, U. Greppmaier, S. Hipp, G. Hortdorfer, C. Kernstock, A. Koitschev, A. Kusnyerik, H. Sachs, A. Schatz, K. T. Stingl, T. Peters, B. Wilhelm, and E. Zrenner. 2013\. Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS. _Proc Biol Sci_ 280, 1757 (April 2013), 20130077\. https://doi.org/10.1098/rspb.2013.0077
* Vergnieux et al. (2017) Victor Vergnieux, Marc J.-M. Macé, and Christophe Jouffrais. 2017\. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation. _Artificial Organs_ 41, 9 (Sept. 2017), 852–861. https://doi.org/10.1111/aor.12868 Publisher: John Wiley & Sons, Ltd.
* Wang et al. (2016) Jing Wang, Heng Li, Weizhen Fu, Yao Chen, Liming Li, Qing Lyu, Tingting Han, and Xinyu Chai. 2016\. Image Processing Strategies Based on a Visual Saliency Model for Object Recognition Under Simulated Prosthetic Vision. _Artificial Organs_ 40, 1 (2016), 94–100. https://doi.org/10.1111/aor.12498
* Weiland and Humayun (2005) J. D. Weiland and M. S. Humayun. 2005. A biomimetic retinal stimulating array. _IEEE Engineering in Medicine and Biology Magazine_ 24, 5 (Sept. 2005), 14–21. https://doi.org/10.1109/MEMB.2005.1511496 Conference Name: IEEE Engineering in Medicine and Biology Magazine.
* Weiland et al. (2012) J. D. Weiland, N. Parikh, V. Pradeep, and G. Medioni. 2012\. Smart image processing system for retinal prosthesis. In _2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society_. 300–303. https://doi.org/10.1109/EMBC.2012.6345928 ISSN: 1558-4615.
* Wilke et al. (2011) R. G. H. Wilke, G. Khalili Moghadam, N. H. Lovell, G. J. Suaning, and S. Dokos. 2011. Electric crosstalk impairs spatial resolution of multi-electrode arrays in retinal implants. _Journal of Neural Engineering_ 8, 4 (June 2011), 046016. https://doi.org/10.1088/1741-2560/8/4/046016
* Wu, Yuxin et al. (2019) Wu, Yuxin, Kirillov, Alexander, Massa, Francisco, Lo, Wan-Yen, and Girshick, Ross. 2019\. Detectron2. https://github.com/facebookresearch/detectron2
* Zhou et al. (2016) Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. 2016\. Semantic understanding of scenes through the ade20k dataset. _arXiv preprint arXiv:1608.05442_ (2016).
* Zhou et al. (2017) Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. 2017\. Scene Parsing through ADE20K Dataset. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_.
# Global health science leverages established collaboration network to fight
COVID-19
Stefano Bianchini, Moritz Müller, Pierre Pelletier, Kevin Wirtz
## Abstract
How has the science system reacted to the early stages of the COVID-19
pandemic? Here we compare the (growing) international network for coronavirus
research with the broader international health science network. Our findings
show that, before the outbreak, coronavirus research realized a relatively
small and rather peculiar niche within the global health sciences. As a
response to the pandemic, the international network for coronavirus research
expanded rapidly along the hierarchical structure laid out by the global
health science network. Thus, in face of the crisis, the global health science
system proved to be structurally stable yet versatile in research. The
observed versatility supports optimistic views on the role of science in
meeting future challenges. However, the stability of the global core-periphery
structure may be worrying, because it reduces learning opportunities and
social capital of scientifically peripheral countries — not only during this
pandemic but also in its “normal” mode of operation.
Keywords COVID-19 $|$ Scientific Networks $|$ International Collaboration $|$
Health Sciences
## Introduction
International scientific collaboration has been on the rise since the early 1980s
[1]. The phenomenon is one aspect of globalization in science. International
collaboration is observed in particular among productive researchers from top-
tier universities located in advanced national scientific systems [3, 2]. The
gain is (more) excellent research [1, 3]. The tendency of ‘excellence-
attracting-excellence’, however, entails the risk of increasing stratification
not only within but also between national science systems [2, 4]. In order to
catch up scientifically, or at least not fall behind, being well connected
to global knowledge flows has become a science policy imperative in most
countries.
The paper at hand treats the outbreak of the novel coronavirus Sars-CoV-2 in
January 2020 as an exogenous shock to the international health science system.
Our main interest is in the structural effects of the shock on the
international health science network. Recent empirical studies have shown that
the scientific contribution to coronavirus related research from individual
countries has been very uneven, and these studies often frame it as a
scientific race [5, 6]. The authors of [7] investigate the international
coronavirus collaboration network and find that it has become more ‘elitist’
with the pandemic.
Our empirical analysis adds the insight that the contribution of countries to
coronavirus research is closely related to their contribution in the broader
domain of health sciences, and that the structure of the international
coronavirus research network rapidly converged to the structure of the global,
international health science network. Before we discuss the implications of
this finding, let us first turn to the empirical analysis.
## Data
We proxy scientific activity in the health sciences through peer-reviewed
articles in journals indexed by MEDLINE. The restriction to MEDLINE indexed
journals ensures that papers in the sample fall into our scope of biomedical
research and are of (minimum) scientific quality. Coronavirus related papers
are identified through a text search query suggested by PubMed Central Europe
on the papers’ titles, abstracts, and MeSH terms.
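The exact query string is the one published by PubMed Central Europe; purely as an illustration, a heavily simplified keyword filter over a paper's title, abstract, and MeSH terms might look like the following (the pattern below is a hypothetical stand-in, not the actual query):

```python
import re

# Hypothetical, much-simplified stand-in for the PubMed Central Europe query:
CORONA_PATTERN = re.compile(
    r"coronavirus|sars[- ]cov|mers[- ]cov|covid", re.IGNORECASE
)

def is_coronavirus_paper(title, abstract, mesh_terms):
    """Flag a paper as coronavirus related if the pattern matches its
    title, abstract, or any MeSH term."""
    text = " ".join([title, abstract, " ".join(mesh_terms)])
    return bool(CORONA_PATTERN.search(text))
```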
Fig 1: Countries take on the same role in coronavirus research as in the
global health sciences. (A) Coronavirus and non-coronavirus papers by month.
(B) Top 10 countries in coronavirus-related research during COVID-19. (C)
Correlation of country rankings by coronavirus and non-coronavirus research by
month. (D) Surface plot of a local regression of (log of) joint coronavirus
papers during COVID-19 on (log of) joint coronavirus and non-coronavirus
papers pre-COVID-19. (E) Country centrality based on s-core decomposition of
the coronavirus and non-coronavirus network by month.
The analysis is based on the papers’ submission dates to stay close to the
actual research activity. Our working sample includes papers submitted in the
pre-COVID-19 period (Jan.–Dec.2019), as well as in the early phase of COVID-19
(Jan.–Apr.2020). In detail, we downloaded all papers appearing in MEDLINE
journals from the PubMed database as of December 2020. Due to the time lag
from submission to acceptance, the number of paper submissions in our sample
starts to drop in May; a data artifact that may bias statistics. Therefore, we
end the analysis period in April 2020.
Our final working sample consists of 837,427 papers. Distinguishing
coronavirus related research from non-coronavirus related research, and pre-
COVID-19 period (Jan.–Dec.2019) from COVID-19 period (Jan.–Apr.2020) yields
four categories: 614,141 non-coronavirus, pre-COVID-19 papers, 571
coronavirus, pre-COVID-19 papers, 210,171 non-coronavirus, COVID-19 papers,
and 12,544 coronavirus, COVID-19 papers. (All data and scripts are available
on GitHub: https://github.com/P-Pelletier/Global-health-sciences-response-to-COVID-19.)
## Results
We first count papers per month (Fig 1A). In the pre-COVID-19 period,
coronavirus research output is relatively stable, at roughly 50 papers per
month. Starting with the January 2020 outbreak, coronavirus research grows
exponentially up to 8,159 submissions in April 2020. Other research output is
stable throughout, at about 51,000 papers, and even increases slightly with
the pandemic. Apparently, many (male) researchers took advantage of the
lockdown period to finish off research that piled up already before COVID-19
[8]. Potentially negative effects of the pandemic due to frictions in the
research machinery, or crowding out of non-coronavirus research are not yet
visible in this early period.
### National scientific production
Next, consider the contribution of individual countries to coronavirus
research. We employ a full-count assignment scheme – i.e., each paper with at
least one affiliation in a given country counts fully (one) for that country.
The distribution is highly skewed: the 10 most prolific countries generate 70
percent of coronavirus research during COVID-19, ranging from the US with
2,686 papers to Spain with 381 (Fig 1B). All these countries are big players
in the health sciences, but not all bring in a strong track record in
coronavirus research.
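The full-count assignment scheme described above can be sketched in a few lines: each paper contributes one count to every distinct country among its affiliations, with no fractional splitting (the country codes below are illustrative):

```python
from collections import Counter

def full_count(papers):
    """Full-count scheme: each paper counts once for every country
    that appears among its affiliations."""
    counts = Counter()
    for affiliation_countries in papers:
        for country in set(affiliation_countries):  # dedupe within a paper
            counts[country] += 1
    return counts

# A paper with two US affiliations and one Chinese affiliation still
# counts once for the US and once for China.
```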
So, how important was coronavirus-specific research capacity compared to
general health science capacity in the early response? Simple linear
regressions of coronavirus papers in Jan.–Apr. 2020 on pre-pandemic
coronavirus and other paper counts provide some indication (Table 1). All
variables in the regression are transformed into logs, and normalized to zero
mean and unit variance to facilitate direct interpretation of coefficient
estimates. We find that pre-pandemic coronavirus research is highly (and
significantly) correlated with coronavirus research in January 2020, while
other research is not. However, this pattern reverses within the next three
months when coronavirus research takes off. Note also that variations in the
outcome variable are increasingly well explained, with $R^{2}$ from 0.7 in
January to 0.9 in April 2020; mostly due to prior non-coronavirus research.
Table 1: Coronavirus papers (2020) on prior papers (2019).
 | Jan. ‘20 | Feb. ‘20 | Mar. ‘20 | Apr. ‘20 | Total
---|---|---|---|---|---
Coronav. ‘19 | 0.859 (0.131) | 0.727 (0.077) | 0.431 (0.044) | 0.262 (0.036) | 0.231 (0.031)
Others ‘19 | 0.008 (0.067) | 0.198 (0.055) | 0.564 (0.044) | 0.732 (0.039) | 0.765 (0.035)
$R^{2}$ | 0.748 | 0.788 | 0.876 | 0.898 | 0.910
Notes: 200 observations. All variables in logs, with zero mean and unit
variance. Standard errors in parentheses.
By the same token, countries take rapidly very similar positions in rankings
on coronavirus papers as they do in rankings on other health papers (see Fig
1C). We calculate the rank correlation coefficient $\tau_{X}$ of [10]. It is
similar to Kendall’s $\tau$, but handles ties the same as dominant
relationships (i.e. entering 1 and not 0 in the dominance matrix). This is
favorable in case of many ties in the rankings, as we have in corona pre-
Covid-19 research, but does not really change the results. The 90 percent
confidence interval around $\tau_{X}$ has been obtained through a traditional
jackknife, or leave-one-out, approach (see e.g. [11]). Fig 1C shows the
result. Until the outbreak in January 2020 (vertical dashed line in Fig 1C)
rank correlations are rather low at around 0.2. After the outbreak, the
(monthly) scientific output of countries in corona aligns with non-corona
research output until a (high) correlation of 0.8 in April 2020.
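For intuition, plain Kendall's $\tau$ (without the tie handling of $\tau_X$) is the normalized difference between concordant and discordant pairs; a small standard-library sketch:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Plain Kendall rank correlation (no tie correction); the paper
    uses the tau_x variant of [10], which treats ties differently."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs
```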
We summarize the first part of the analysis. Before the pandemic, leading
countries in the health sciences have not necessarily led coronavirus
research. Within a few months after the COVID-19 shock, leading countries in
the health sciences also led coronavirus research. The second part of the
analysis establishes the same dynamic for the international collaboration
networks.
### International scientific collaboration
Fig 2: Coronavirus research network converges to global health science
network. A) to D) are (accumulated) adjacency matrices of A) and B)
coronavirus, pre-COVID-19 network, C) non-coronavirus, pre-COVID-19 network,
D) coronavirus, COVID-19 network. In A), C) and D) countries ordered by
eigenvector centrality in non-coronavirus, pre-COVID-19 network (C). In B)
countries ordered through generalized block modeling. E) correlation
coefficient of the (monthly) coronavirus and non-coronavirus networks. All
correlations obtained are highly significant in QAP test (p$<$0.001).
We construct international scientific collaboration networks based on co-
authorship of papers in our sample. A node corresponds to a country. Edge
weights correspond to the number of joint papers (full-accounting scheme).
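Building such a network from affiliation data reduces to counting joint papers per country pair; a minimal sketch under the same full-accounting scheme (country codes illustrative):

```python
from collections import Counter
from itertools import combinations

def build_edges(papers):
    """Country co-authorship network: an edge's weight is the number of
    joint papers between the two countries (full-accounting scheme)."""
    edges = Counter()
    for affiliation_countries in papers:
        # One edge per unordered country pair on the paper, deduplicated.
        for a, b in combinations(sorted(set(affiliation_countries)), 2):
            edges[(a, b)] += 1
    return edges
```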
First, we consider link formation. Are prior collaborations on coronavirus
research (i.e. same topic), or any prior ties (i.e. different topics)
replicated for coronavirus research during the pandemic? Fig 1D provides some
indication. The surface plot is obtained from a local regression with least-
squares cross-validated bandwidths for the local constant estimator. It shows
the expected (log of) joint coronavirus papers during COVID-19, conditional on
(log of) joint coronavirus and non-coronavirus papers pre-COVID-19. Looking at
Fig 1D, we first note that most country-pairs (dots) had no pre-COVID-19 joint
coronavirus research. Their number of joint coronavirus papers during COVID-19
increases with the number of other joint papers before the pandemic. As we
increase from zero prior coronavirus papers, (expected) joint coronavirus
papers during COVID-19 (the surface in Fig 1D) also increases. Yet, it is
evident that bi-national collaboration on coronavirus related research after
the shock largely reflects bi-national collaboration on non-coronavirus
research before the shock.
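The local-constant estimator behind the surface in Fig 1D is Nadaraya-Watson kernel regression; a one-dimensional sketch with a Gaussian kernel and a fixed bandwidth (the paper cross-validates its bandwidths, which is omitted here):

```python
import math

def nadaraya_watson(x0, xs, ys, bandwidth):
    """Local-constant (Nadaraya-Watson) kernel regression estimate at x0:
    a weighted average of the ys, with Gaussian weights that decay with
    distance from x0."""
    weights = [math.exp(-(((x - x0) / bandwidth) ** 2) / 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)
```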
Consequently, countries’ network centrality in the coronavirus research
network aligns with their centrality in the overall health science network. We
capture network centrality through (normalized) s-core decomposition [9]. The
s-core ranges from 0 for isolates in the network, to 1 for (highest) core
members. Fig 1E provides the monthly s-cores on the non-coronavirus network
(left panel), coronavirus network (middle panel), and the difference of
s-cores in coronavirus and non-coronavirus networks (right panel). The figure
shows the 60 most central countries in the non-coronavirus network and applies
that same ordering across all three panels. The remaining countries are highly
peripheral in the considered networks. The left panel shows that the global
network hierarchy is very stable. The core is formed by (mostly large)
countries of the global north, China being the exception. Centrality in the
coronavirus network is more dynamic (middle panel). Pre-COVID-19, most
countries are not involved in coronavirus related collaborations and, hence,
in the extreme periphery (white). The core of the coronavirus network includes
only a few countries leading other health sciences. Saudi Arabia stands out as
it is part of the core in the coronavirus network, but peripheral in the
overall health science network. (Variations in core membership over time may
be explained by lower research activity overall which leads to more erratic
signals.) After the shock, the structure of the coronavirus network shifts
rapidly towards the hierarchy in health science at large. This is easily seen
in the right panel that shows the difference between the s-core centrality in
the coronavirus and non-coronavirus networks. Prior to the shock, s-core differences range from -1 in dark blue (for countries at the extreme periphery of the coronavirus network but in the core of the other network), through 0 in white (same s-core in both networks), up to 1 in red (for countries in the coronavirus network core but peripheral in the non-coronavirus network). After
the shock, the global core rapidly takes its role in coronavirus related
research, and so does the global periphery (all countries appear in light
colors with an s-core difference of around zero in April 2020).
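The s-core index can be understood as iterative pruning by node strength (the sum of incident edge weights); a simplified pure-Python sketch of the idea in [9], not the authors' implementation:

```python
from collections import defaultdict

def s_core_indices(edges):
    """edges: iterable of (u, v, weight). Returns a dict mapping each
    node to its s-core index: the strength threshold at which the node
    is pruned when minimal-strength nodes are removed iteratively."""
    adj = defaultdict(dict)
    for u, v, w in edges:
        adj[u][v] = adj[u].get(v, 0) + w
        adj[v][u] = adj[v].get(u, 0) + w
    adj = dict(adj)
    core = {}
    while adj:
        s_min = min(sum(nbrs.values()) for nbrs in adj.values())
        changed = True
        while changed:  # removals lower neighbours' strengths, so repeat
            changed = False
            for n in [m for m in adj if sum(adj[m].values()) <= s_min]:
                core[n] = s_min
                for v in list(adj[n]):
                    del adj[v][n]
                del adj[n]
                changed = True
    return core
```

The normalized s-core shown in Fig 1E would additionally divide these indices by the maximum, so that core members sit at 1 and isolates at 0.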
Fig 2 pictures the networks in the form of adjacency matrices. In order to facilitate comparison across networks, the adjacency matrices of the pre-pandemic coronavirus network (A), the pre-pandemic non-coronavirus network (C), and the pandemic coronavirus network (D) are all ordered by eigenvector centrality in (C).
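The eigenvector-centrality ordering used for the adjacency matrices can be sketched with plain power iteration (our own illustration, not the authors' code):

```python
def eigenvector_order(A, iters=200):
    """A: symmetric nonnegative adjacency matrix (list of lists).
    Power iteration approximates the principal eigenvector; node
    indices are returned sorted by descending centrality."""
    n = len(A)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(y) or 1.0  # max-normalize to avoid overflow
        x = [v / norm for v in y]
    return sorted(range(n), key=lambda i: -x[i])
```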
The pre-pandemic coronavirus network (A) is relatively sparse and best described by a block model (B): a regional Middle East community and a community of mostly developed countries, connected through the USA, China, and Saudi Arabia. The block model is obtained by minimizing the absolute difference of the number of papers (in logs) over blocks. This finding is robust to alternative algorithms: essentially the same (community) structure emerges from hierarchical community detection on the weighted pre-pandemic coronavirus network using both the OSLOM [12] and the Louvain [13] algorithm.
In contrast, we find that the pre-pandemic, non-coronavirus network (C)
corresponds to a nested-split graph. A nested-split graph is a specific type
of a hierarchical network, in which the most central node connects to all
other nodes, and less central nodes connect to subsets of alters of more
central nodes. Nested-split graphs emerge in network games where payoffs are
strategic complements in effort levels [14], which is a reasonable assumption
for science networks. The coronavirus network after the shock (D) closely
resembles the non-coronavirus network before the shock (C). The correlation coefficient (E) of the (monthly) coronavirus and non-coronavirus adjacency matrices (in logs) confirms this convergence. All correlations are highly significant, based on a QAP test that creates a null distribution through re-labeling of nodes while maintaining the structure of the networks [15].
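The QAP test [15] can be sketched as follows: correlate the off-diagonal entries of the two matrices, then build a null distribution by jointly permuting rows and columns of one matrix, since relabeling nodes preserves that network's structure. A minimal pure-Python sketch:

```python
import random
from statistics import fmean

def pearson(xs, ys):
    mx, my = fmean(xs), fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)
    return cov / var ** 0.5

def qap_test(A, B, n_perm=1000, seed=0):
    """Observed correlation of off-diagonal entries of A and B, plus a
    two-sided permutation p-value from relabeling B's nodes."""
    n = len(A)
    off = [(i, j) for i in range(n) for j in range(n) if i != j]
    a = [A[i][j] for i, j in off]
    obs = pearson(a, [B[i][j] for i, j in off])
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        p = list(range(n))
        rng.shuffle(p)  # joint row/column permutation of B
        if abs(pearson(a, [B[p[i]][p[j]] for i, j in off])) >= abs(obs):
            hits += 1
    return obs, hits / n_perm
```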
## Conclusion
More elitist science in the COVID-19 era? Novel ways of organizing science?
Not really. The structural shift towards a highly hierarchical system in
coronavirus related research mostly picks up established structures of the
broader, globalized health sciences. Thus, we conjecture that coronavirus
research during the pandemic will further sustain, not break with, long-term
trends in international collaborations. The pandemic feeds into an ongoing global stratification in science that systematically reduces the learning opportunities and social capital of scientifically peripheral countries.
Policy should therefore aim at a more inclusive science landscape — not only
during crises, but even more in its ‘normal mode of operation’.
## Acknowledgments
The research leading to the results of this paper has received financial
support from the CNRS through the MITI interdisciplinary program Enjeux
scientifiques et sociaux de l’intelligence artificielle-AAP 2020 [reference:
ARISE].
## References
* 1. J Adams, The fourth age of research. Nature 497, 557-560 (2013).
* 2. BF Jones, S Wuchty, B Uzzi, Multi-university research teams: Shifting impact, geography, and stratification in science. Science 322, 1259-1262 (2008).
* 3. RK Pan, K Kaski, S Fortunato, World citation and collaboration networks: uncovering the role of geography in science. Sci Rep 2, 902 (2012). https://doi.org/10.1038/srep00902
* 4. E Horlings, P Van den Besselaar, Convergence in science growth and structure of worldwide scientific output, 1993-2008. Working Paper. The Hague: Rathenau Instituut (2013). Available: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6064471. Accessed 19.01.2021.
* 5. S Aviv-Reuven, A Rosenfeld, Publication patterns’ changes due to the Covid-19 pandemic: A longitudinal and short-term scientometric analysis, arXiv, 2010.02594, (2020)
* 6. P Radanliev, D De Roure, R Walton, M Van Kleek, O Santos, F Mantilla Montalvo, M La Treall, What country, university or research institute, performed the best on COVID-19? Bibliometric analysis of scientific literature. arXiv, 2005.10082, (2020).
* 7. CV Fry, X Cai, Y Zhang, CS Wagner, Consolidation in a crisis: Patterns of international collaboration in early COVID-19 research. PLoS ONE 15(7), e0236307 (2020).
* 8. ML Bell, KC Fong, Gender differences in first and corresponding authorship in public health research submissions during the Covid-19 pandemic. AJPH 111, 159-163, (2021). https://doi.org/10.2105/AJPH.2020.305975.
* 9. M Eidsaa, E Almaas, s-core network decomposition: A generalization of k-core analysis to weighted networks. Phys. Rev. E, 88, 062819 (2013).
* 10. EJ Emond, DW Mason, A new rank correlation coefficient with application to the consensus ranking problem. J. Multi-Crit. Decis. Anal. 11, 17-28 (2002).
* 11. H Abdi, LJ Williams, “Jackknife” in Encyclopedia of Research Design, NJ Salkind, Eds. (SAGE, 2010), pp. 655-660.
* 12. A Lancichinetti, F Radicchi, JJ Ramasco, S Fortunato, Finding Statistically Significant Communities in Networks. PLoS ONE 6(4): e18961 (2011) doi:10.1371/journal.pone.0018961.
* 13. VD Blondel, J-L Guillaume, R Lambiotte, E Lefebvre (2008) Fast unfolding of communities in large networks. J. Stat. Mech. (2008) P10008.
* 14. MD König, CJ Tessone, Y Zenou, Nestedness in networks: A theoretical model and some applications. Theor. Econ., 9, 695-752 (2014).
* 15. D Krackhardt, QAP partialling as a test of spuriousness. Social Networks, 9, 171-186 (1987).
# If you’ve got it, flaunt it:
Making the most of fine-grained sentiment annotations
Jeremy Barnes, Lilja Øvrelid, and Erik Velldal
University of Oslo
Department of Informatics
<EMAIL_ADDRESS>
###### Abstract
Fine-grained sentiment analysis attempts to extract sentiment holders, targets
and polar expressions and resolve the relationship between them, but progress
has been hampered by the difficulty of annotation. Targeted sentiment analysis, on the other hand, is a narrower task, focusing on extracting sentiment targets and classifying their polarity.
In this paper, we explore whether incorporating holder and expression
information can improve target extraction and classification and perform
experiments on eight English datasets. We conclude that jointly predicting
target and polarity BIO labels improves target extraction, and that augmenting
the input text with gold expressions generally improves targeted polarity
classification. This highlights the potential importance of annotating
expressions for fine-grained sentiment datasets. At the same time, our results
show that performance of current models for predicting polar expressions is
poor, hampering the benefit of this information in practice.
## 1 Introduction
Sentiment analysis comes in many flavors, arguably the most complete of which
is what is often called fine-grained sentiment analysis Wiebe et al. (2005);
Liu (2015). This approach models the sentiment task as minimally extracting
all opinion holders, targets, and expressions in a text and resolving the
relationships between them. This complex task is further complicated by
interactions between these elements, strong domain effects, and the subjective
nature of sentiment. Take the annotated sentence in Figure 1 as an example.
Knowing that the target “UMUC” is modified by the expression “5 stars” and not
“don’t believe” is important to correctly classifying the polarity.
Additionally, the fact that this is a belief held by “some others” as opposed to the author of the sentence can help us determine the overall polarity expressed in the sentence.
[Figure 1 example: “Have seen some others giving UMUC 5 stars - don’t believe them.”, annotated with holder “some others”, targets “UMUC” and “them”, and polar expressions “5 stars” and “don’t believe”.]
Figure 1: An opinion annotation from the Darmstadt Review Corpus.
Compared to document- or sentence-level sentiment analysis, where distant
labelling schemes can be used to obtain annotated data, fine-grained
annotation of sentiment does not occur naturally, which means that current
machine learning models are often hampered by the small size of datasets.
Furthermore, fine-grained annotation is demanding, leads to relatively small datasets, and has low inter-annotator agreement Wiebe et al. (2005); Wang et al. (2017). This raises the question: is it worth it to annotate full fine-grained sentiment?
Targeted sentiment Mitchell et al. (2013); Zhang et al. (2015) is a reduction
of the fine-grained sentiment task which concentrates on extracting sentiment
targets and classifying their polarity, effectively ignoring sentiment holders
and expressions. The benefit of this setup is that it is faster to annotate
and simpler to model. But would targeted sentiment models benefit from knowing
the sentiment holders and expressions?
In this work, we attempt to determine whether holder and expression
information is useful for extracting and then classifying sentiment targets.
Specifically, we ask the following research questions:
1. RQ1: Given the time and difficulty required to annotate opinion holders, expressions, and polarity, is this information useful to extract sentiment targets?
1. (a)
Does augmenting the input text with holders and expressions improve target
extraction?
2. (b)
Do target extraction models benefit from predicting holders and expressions?
3. (c)
Do target extraction models benefit from predicting the polarity of targets
and/or expressions?
2. RQ2: Can holder and expression information improve polarity classification on extracted targets?
1. (a)
Does augmenting the input text with holders and expressions improve polarity
classification?
2. (b)
Do potential benefits of augmenting the input depend on how we model the
target, i.e., using the [CLS] embeddings, mean pooling the target embeddings,
etc.?
3. (c)
Can sentiment lexicons provide enough information on expressions to give
improvements?
We conduct a series of experiments on eight English sentiment datasets (three
with full fine-grained sentiment and five targeted) with state-of-the-art
models based on fine-tuned BERT models. We show that (1) it is possible to
improve target extraction by also trying to predict the polarity, and that (2)
classification models benefit from having access to information about
sentiment expressions. We also (3) release the code (https://github.com/ltgoslo/finegrained_modelling) to reproduce the experiments, as well as the scripts to download, preprocess, and collect the datasets into a compatible JSON format, with the hope that this allows future research on the same data.
## 2 Related work
Fine-grained approaches to sentiment analysis attempt to discover opinions
from text, where each opinion is a tuple of (opinion holder, opinion target,
opinion expression, polarity, intensity). Annotation of datasets for this
granularity requires creating in-depth annotation guidelines, training
annotators, and generally leads to lower inter-annotator scores than other
sentiment tasks, e.g., document- or sentence-level classification, as deciding
on the spans for multiple elements and their relationships is undeniably
harder than choosing a single label for a full text. Targeted sentiment, on
the other hand, generally concentrates only on target extraction and polarity classification. This has the benefit of allowing non-experts and crowd-sourcing to perform the annotation, making it easier to collect larger datasets for machine learning.
### 2.1 Datasets
The Multi-Perspective Question Answering (MPQA) dataset Wiebe et al. (2005) is the first dataset that annotated opinion holders, targets, expressions and their relationships. The newswire data leads to complex opinions and a generally
difficult task for sentiment models. Normally, the full opinion extraction
task is modelled as extraction of the individual elements (holders, targets,
and expressions) and the subsequent resolution of the relationship between
them.
The Darmstadt Review Corpora Toprak et al. (2010) contain annotated opinions
for consumer reviews of universities and services. The authors annotate
holders, targets, expressions, polarity, modifiers, and intensity. They
achieve between 0.5 and 0.8 agreement using the $agr$ method Wiebe et al.
(2005), with higher disagreement on what they call “polar targets” – targets
that have a polarity but no annotated sentiment expression – holders, and
expressions.
The Open Domain Targeted dataset Mitchell et al. (2013) makes use of crowdsourcing to annotate named entities (NEs) from scraped tweets in English and Spanish Etter et al. (2013) with their polarities. The authors use majority voting to assign
the final labels for the NEs, discarding tweets without sentiment consensus on
all NEs.
The 2014 SemEval shared task on aspect-based sentiment analysis Pontiki et al. (2014) includes labeled data from restaurant and laptop reviews for two subtasks: 1) target extraction, which they call “aspect term extraction”, and 2) classification of polarity with respect to targets (“aspect term polarity”).
As most targeted datasets only contain a single target, or multiple targets
with the same polarity, sentence-level classifiers are strong baselines. In
order to mitigate this, Jiang et al. (2019) create a Challenge dataset which
has both multiple targets and multiple polarities in each sentence. Similarly,
Wang et al. (2017) also point out that most targeted sentiment methods perform
poorly with multiple targets and propose TDParse, a corpus of UK election
tweets with multiple targets per tweet.
### 2.2 Modelling
Katiyar and Cardie (2016) explore jointly extracting holders, targets, and
expressions with LSTMs. They find that adding sentence-level and relation-level dependencies (is-from or is-about) improves extraction, but that the LSTM models lag behind CRFs with rich features.
| | Dataset | Domain | Split | # sents | avg | # hold. | avg | max | # targ. | avg | max | # expr. | avg | max | pos | neu | neg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Fine-grained | MPQA | newswire | train | 4500 | 25 | 1306 | 2.6 | 27 | 1382 | 6.1 | 56 | 1656 | 2.4 | 14 | 675 | 271 | 658 |
| | | | dev | 1622 | 23 | 377 | 2.6 | 16 | 449 | 5.3 | 41 | 552 | 2.1 | 8 | 241 | 105 | 202 |
| | | | test | 1681 | 24 | 371 | 2.8 | 32 | 405 | 6.4 | 42 | 479 | 2.0 | 8 | 166 | 89 | 199 |
| | DS. Services | service reviews | train | 5913 | 16 | 18 | 1.2 | 2 | 2504 | 1.2 | 7 | 1273 | 1.2 | 10 | 1623 | 46 | 838 |
| | | | dev | 744 | 18 | 1 | 1.7 | 3 | 288 | 1.2 | 4 | 144 | 1.4 | 5 | 103 | 1 | 104 |
| | | | test | 748 | 17 | 2 | 1 | 1 | 328 | 1.2 | 5 | 168 | 1.4 | 6 | 241 | 7 | 80 |
| | DS. Uni | university reviews | train | 2253 | 20 | 65 | 1.2 | 2 | 1252 | 1.2 | 5 | 837 | 1.9 | 9 | 495 | 149 | 610 |
| | | | dev | 232 | 9 | 17 | 1.1 | 3 | 151 | 1.2 | 3 | 106 | 1.7 | 6 | 40 | 19 | 92 |
| | | | test | 318 | 20 | 12 | 1.3 | 4 | 198 | 1.2 | 6 | 139 | 2.0 | 5 | 77 | 18 | 103 |
| Targeted | TDParse | political tweets | train | 2889 | 6.9 | - | - | - | 9088 | 1.2 | 7 | - | - | - | 1238 | 3931 | 3919 |
| | | | dev | 321 | 6.6 | - | - | - | 1040 | 1.2 | 5 | - | - | - | 128 | 454 | 458 |
| | | | test | 867 | 6.9 | - | - | - | 2746 | 1.2 | 6 | - | - | - | 378 | 1162 | 1206 |
| | SemEval R. | restaurant reviews | train | 2740 | 13 | - | - | - | 3293 | 1.4 | 19 | - | - | - | 1902 | 574 | 734 |
| | | | dev | 304 | 11.3 | - | - | - | 350 | 1.4 | 5 | - | - | - | 226 | 54 | 63 |
| | | | test | 800 | 9.6 | - | - | - | 1128 | 1.4 | 8 | - | - | - | 724 | 195 | 195 |
| | SemEval L. | laptop reviews | train | 2744 | 22.5 | - | - | - | 2049 | 1.5 | 6 | - | - | - | 870 | 402 | 747 |
| | | | dev | 304 | 21.1 | - | - | - | 244 | 1.6 | 5 | - | - | - | 99 | 44 | 96 |
| | | | test | 800 | 18.6 | - | - | - | 633 | 1.6 | 7 | - | - | - | 327 | 162 | 128 |
| | Open | tweets | train | 1903 | 12.8 | - | - | - | 2594 | 1.6 | 8 | - | - | - | 578 | 1801 | 215 |
| | | | dev | 211 | 12.3 | - | - | - | 291 | 1.6 | 6 | - | - | - | 46 | 220 | 25 |
| | | | test | 234 | 11.6 | - | - | - | 337 | 1.6 | 7 | - | - | - | 74 | 232 | 31 |
| | Challenge | restaurant reviews | train | 4297 | 8.8 | - | - | - | 11186 | 1.3 | 9 | - | - | - | 3380 | 5042 | 2764 |
| | | | dev | 500 | 8.9 | - | - | - | 1332 | 1.3 | 8 | - | - | - | 403 | 604 | 325 |
| | | | test | 500 | 8.9 | - | - | - | 1336 | 1.3 | 8 | - | - | - | 400 | 607 | 329 |
Table 1: Statistics of the datasets: number of sentences and average sentence length (in tokens), plus counts and average/max span lengths (in tokens) for holder, target, and expression annotations. The first three datasets are fully fine-grained; the remainder are targeted. Additionally, we include the distribution of polarity – restricted to positive, neutral, and negative – in each dataset.
Regarding modelling the interaction between elements, there are several
previous attempts to jointly learn to extract and classify targets, using
factor graphs Klinger and Cimiano (2013), multi-task learning He et al. (2019)
or sequence tagging with collapsed tagsets representing both tasks Li et al.
(2019). In general, the reported benefits are small, suggesting that there is only a weak relationship between target extraction and polarity classification Hu et al. (2019).
## 3 Data
One of the difficulties of working with fine-grained sentiment analysis is
that there are only a few datasets (even in English) and they come in
incompatible, competing data formats, e.g., BRAT or various flavors of XML.
With the goal of creating a simple unified format to work on fine-grained
sentiment tasks, we take the eight datasets mentioned in Section 2 – MPQA
Wiebe et al. (2005), Darmstadt Services and Universities Toprak et al. (2010),
TDParse Wang et al. (2017), SemEval Restaurant and Laptop Pontiki et al.
(2014), Open Domain Targeted Sentiment Mitchell et al. (2013), and the
Challenge dataset from Jiang et al. (2019) – and convert them to a standard
_JSON_ format. The datasets are sentence and word tokenized using NLTK Loper
and Bird (2002), except for MPQA, DS. Service and DS. Uni, which already
contain sentence and token spans. All polarity annotations are mapped to positive, negative, neutral, and conflict (we discard conflict during evaluation because there are not enough examples to properly learn this class in most datasets). In this format, each sentence contains a sentence id, the tokenized text, and a possibly empty set of opinions, each of which contains a holder, target, expression, polarity, and intensity. We allow for empty holders and expressions in order to generalize to the targeted corpora. Finally, for the corpora that do not provide a suggested train/dev/test split, we use 10 percent of the training data as development and another 10 percent as test. For training and testing models, however, we convert the datasets to CoNLL format.
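To make the unified format concrete, here is a hypothetical instance (field names are illustrative, not necessarily the exact schema of the released data), using the example sentence from Table 2, along with a projection of target spans onto the BIO tags used in the CoNLL-format training files:

```python
# Hypothetical instance of the unified JSON format; field names and span
# conventions are our own illustration, not necessarily the released schema.
sentence = {
    "sent_id": "example-0001",
    "text": ["Money", "Magazine", "rated", "E-Trade", "highly", "."],
    "opinions": [{
        "holder": [0, 2],      # token span "Money Magazine"
        "target": [3, 4],      # "E-Trade"
        "expression": [4, 5],  # "highly"
        "polarity": "positive",
        "intensity": "standard",
    }],
}

def to_bio(sent):
    """Project target spans onto BIO tags for CoNLL-style training files."""
    tags = ["O"] * len(sent["text"])
    for op in sent["opinions"]:
        start, end = op["target"]
        tags[start] = "B-target"
        for i in range(start + 1, end):
            tags[i] = "I-target"
    return tags
```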
Table 1 presents an overview of the different datasets and highlights
important differences between them. The fully fine-grained sentiment datasets
(MPQA, DS. Services, and DS. Uni) tend to be larger but have fewer targets
annotated, due to a larger number of sentences with no targets. However, the
MPQA dataset contains much longer targets than the other datasets – an average
of 6, but a maximum of 56 tokens. It also contains more opinion holders and
expressions and these also tend to be longer, all of which marks MPQA as an
outlier among the datasets. The distribution of polarity is also highly
dependent on the dataset, with DS. Services being the most skewed and SemEval
Laptop the least skewed. Finally, the challenge dataset is by far the largest
with over 11,000 training targets. Additionally, Table 6 in Appendix A shows the percentage of unique targets per dataset, as well as the percentage of targets shared between the training set and the dev and test sets. Again, MPQA has the largest number of unique targets and the least overlap (we do not, however, consider partial overlap, which may exaggerate the true uniqueness of targets).
## 4 Experimental Setup
Figure 2: Our BERT-based _target extraction_ and _classification models_ ,
with the three strategies for extraction ((1) predict only targets, (2)
predict holders, targets and expressions, and (3) predict the polarity of the
targets and expressions as well) and five strategies for sentiment
classification (passing to the softmax layer the contextualized embedding from
(1) the [CLS] embedding, (2) the first token in the target (3) averaging all
embeddings in the target phrase, (4) taking the max of the target embeddings,
(5) concatenating the max, mean, and min).
We split the task of targeted sentiment analysis into the extraction of
sentiment targets and subsequent polarity classification of extracted targets,
given their context. Figure 2 shows the two tasks and the eight models used in
the experiments. As a base model, we take the target extraction and
classification models from Xu et al. (2019), which achieve state-of-the-art
performance on the SemEval task. The approach first fine-tunes BERT Devlin et
al. (2019) on domain-specific unlabeled data as a domain-adaptation step. We
use the datasets themselves to perform this step, except for the SemEval datasets. For these, we follow Rietzler et al. (2020) and instead use larger amounts of unlabeled data – 1,710,553 and 2,000,000 sentences for SemEval Laptop and Restaurant, respectively – taken from Amazon laptop reviews He and McAuley (2016) and the Yelp Dataset Challenge (https://www.yelp.com/dataset/challenge). We further deviate from Xu et al. (2019) by not pretraining the models on the SQuAD question answering dataset and the in-domain sentiment questions which they create, as this data is not publicly available. Finally, a linear prediction layer is added on top of the BERT model and the full model is updated on the sentiment task.
For target extraction, we use the contextualized BERT embeddings as input to a
softmax layer and predict the sequence of tags. We compare three prediction
strategies:
1. Targ.: The model predicts the labels $y \in \{B, I, O\}$ for the targets only.
2. Pred.: We additionally predict the labels for holders and expressions, i.e., $y \in \{$B-holder, I-holder, B-target, I-target, B-expression, I-expression, O$\}$.
3. +Pol.: Finally, we add the polarity (positive, negative, neutral) to the annotation-specific BIO tag, which leads to an inventory of 19 labels for the full fine-grained setup and 7 for the targeted setup.
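The stated inventory sizes can be checked combinatorially; under one reading consistent with the counts (B/I prefix x annotation type x polarity, plus O: 2 x 3 x 3 + 1 = 19 and 2 x 1 x 3 + 1 = 7), a quick sketch:

```python
from itertools import product

def label_inventory(ann_types, polarities):
    """One combinatorial reading of the collapsed tagset: every B/I
    prefix crossed with annotation type and polarity, plus the O tag."""
    labels = ["O"]
    for prefix, t, pol in product("BI", ann_types, polarities):
        labels.append(f"{prefix}-{t}-{pol}")
    return labels

polarities = ["positive", "negative", "neutral"]
full = label_inventory(["holder", "target", "expression"], polarities)
targeted = label_inventory(["target"], polarities)
```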
For polarity classification, we take as a baseline the classification
architecture from Xu et al. (2019), which makes use of the two-sentence
training procedure for BERT, by prepending the target before the sentence
separation token, and then adding the full sentence after. We compare five
strategies for producing the input to the softmax layer for predicting the
sentiment of the target:
1. [Cls]: this model uses the [CLS] embedding from the final BERT layer.
2. First: uses the contextualized BERT embedding from the first token of the target in context.
3. Mean: instead takes the average of the BERT embeddings for the tokens in the target.
4. Max: uses the max of the contextualized BERT embeddings for the tokens in the target.
5. MaxMM: takes the max, min, and mean pooled representations and passes their concatenation to the softmax layer, which has been shown to perform well for sentiment tasks Tang et al. (2014). However, this triples the size of the input representation to the softmax layer.
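The five pooling strategies can be sketched over a sequence of contextualized token vectors (plain lists here for clarity, with position 0 standing in for the [CLS] vector):

```python
def pool_target(hidden, span, strategy):
    """hidden: list of token vectors (position 0 stands in for [CLS]);
    span: (start, end) token indices of the target."""
    toks = hidden[span[0]:span[1]]
    dims = range(len(hidden[0]))
    mx = [max(t[d] for t in toks) for d in dims]
    mn = [min(t[d] for t in toks) for d in dims]
    mean = [sum(t[d] for t in toks) / len(toks) for d in dims]
    if strategy == "cls":
        return hidden[0]
    if strategy == "first":
        return toks[0]
    if strategy == "mean":
        return mean
    if strategy == "max":
        return mx
    if strategy == "maxmm":  # concatenation triples the dimensionality
        return mx + mn + mean
    raise ValueError(strategy)
```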
The Targ. and [Cls] models correspond to the models used in Xu et al. (2019)
and serve as baselines. The extraction and classification models are fine-
tuned for 50 epochs using Adam with an initial learning rate of
$3\mathrm{e}{-5}$, with a linear warmup of 0.1 and all other hyperparameters
are left at default BERT settings (further details in Appendix B). The best
model on the development set is used for testing. Combined with the four input manipulations (Table 2), this leads to eleven extraction experiments (Targ. and Pred. are identical on the original data, which only has annotated targets, so for simplicity we only show the results from Targ.) and twenty classification experiments per dataset. In order to control for the effect of random initialization, we run each experiment 5 times with different random seeds and report the mean and standard deviation.
### 4.1 Training with gold annotations
| input | example |
|---|---|
| original | Money Magazine rated E-Trade highly . |
| + holders | [<H ] Money Magazine [H>] rated E-Trade highly |
| + expressions | Money Magazine [<E ] rated [E>] E-Trade [<E ] highly [E>] |
| + full | [<H ] Money Magazine [H>] [<E ] rated [E>] E-Trade [<E ] highly [E>] |
Table 2: We inform our models regarding annotations other than targets by
inserting special tags into the input text before and after annotated holders
and expressions.
Given that we are interested in knowing whether it is beneficial to include
information about additional annotations (holder, expressions, polarity), we
perform experiments where we systematically include these. We do so by adding
special tags, e.g., [<E ], into the input text surrounding the annotated
spans, as shown in Table 2. The models then have access to this information
both during training and at test time, albeit in an indirect way. For the
first set of experiments, we perform controlled experiments under ideal
conditions, i.e., having gold annotations during testing. This allows us to
isolate the effects of incorporating the additional annotations, without
worrying about noisy predictions.
### 4.2 Training with predicted expressions
It is equally important to know whether the models are able to use noisy
predicted annotations. In order to test this, we train _expression prediction
models_ on the three full fine-grained sentiment corpora. We use the same
BERT-based model and hyperparameters from the target extraction models above
and train five models with different random seeds. Preliminary results
suggested that these models had high precision, but low recall. Therefore, we
take a simple ensemble of the five trained models, where for each token, we
keep labels predicted by at least one of the expression models in order to
increase recall.
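This recall-oriented ensemble can be sketched as a per-token union of the five models' BIO predictions (ties broken by the first model that fired; a simplification of the exact merging rule):

```python
def union_ensemble(tag_seqs):
    """Per-token union of BIO predictions from several expression models:
    keep a token's expression label if at least one model predicted one
    (ties broken by the first model that fired). Recall-oriented by design."""
    merged = []
    for token_tags in zip(*tag_seqs):
        non_o = [t for t in token_tags if t != "O"]
        merged.append(non_o[0] if non_o else "O")
    return merged
```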
We perform an additional set of experiments where we use sentiment lexicons
and assume any word in these lexicons is a sentiment expression. We use the Hu
and Liu lexicon Hu and Liu (2004), the SoCal and SoCal-Google lexicons Taboada
et al. (2006) and the NRC emotion lexicon Mohammad and Turney (2013), which
also contains sentiment annotations. The lexicons contain 6,789, 5,824, 2,142,
and 5,474 entries, respectively. The MPQA and Darmstadt experiments show the
effect of predicted vs. gold expressions, as well as domain transfer. The
experiments on the targeted datasets, on the other hand, will show us whether
it is possible to improve the targeted models with predicted expressions.
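Treating lexicon hits as sentiment expressions can be sketched as follows (our simplification: single-token matches only, ignoring multi-word lexicon entries and negation):

```python
def lexicon_expressions(tokens, lexicon):
    """Tag any token whose lowercased form appears in a sentiment
    lexicon as a one-token expression, in BIO format."""
    return ["B-expression" if t.lower() in lexicon else "O" for t in tokens]
```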
## 5 Results
| Model | Input | MPQA | DS. Services | DS. Unis | Challenge | SemEval R. | SemEval L. | Open | TDParse |
|---|---|---|---|---|---|---|---|---|---|
| | Xu et al. (2019) | n/a | n/a | n/a | n/a | 78.0 | 84.3 | n/a | n/a |
| | BiLSTM-CRF | 12.2 (1) | 85.0 (1) | 84.4 (1) | 73.4 (1) | 72.5 (1) | 74.0 (1) | 62.2 (1) | 82.6 |
| Targ. | original | 14.1 (2) | 85.9 (1) | 84.6 (0) | 75.8 (1) | 51.9 (1) | 71.3 (1) | 62.0 (4) | 81.7 (3) |
| Targ. | + holders | 11.9 (1) | 84.3 (1) | 83.6 (1) | - | - | - | - | - |
| Targ. | + exp. | 11.6 (1) | 85.0 (0) | 83.4 (0) | - | - | - | - | - |
| Targ. | + full | 10.5 (2) | 84.8 (1) | 83.8 (1) | - | - | - | - | - |
| Pred. | + holders | 12.1 (2) | 86.2 (0) | 84.6 (0) | - | - | - | - | - |
| Pred. | + exp. | 14.9 (1) | 84.7 (1) | 84.5 (1) | - | - | - | - | - |
| Pred. | + full | 13.0 (3) | 85.5 (1) | 84.3 (1) | - | - | - | - | - |
| +Pol. | BiLSTM-CRF | 13.9 (1) | 85.2 (1) | 83.7 (1) | 73.6 (1) | 73.7 (1) | 74.5 (1) | 62.3 (1) | 81.8 (1) |
| +Pol. | original | 13.8 (1) | 85.4 (1) | 84.3 (1) | 76.9 (1) | 52.5 (1) | 71.6 (1) | 62.9 (1) | 83.2 (0) |
| +Pol. | + holders | 13.8 (2) | 85.6 (1) | 84.4 (1) | - | - | - | - | - |
| +Pol. | + exp. | 13.5 (2) | 85.4 (1) | 84.3 (0) | - | - | - | - | - |
| +Pol. | + full | 12.0 (1) | 86.0 (1) | 84.6 (0) | - | - | - | - | - |
Table 3: Average token-level F1 scores for the target extraction task across
five runs, (standard deviation in parenthesis). Bold numbers indicate the best
model per dataset, while blue and pink highlighting indicates an improvement
or loss in performance compared to the original data, respectively.
In this section we describe the main results from the extraction and two
classification experiments described in Section 4.
### 5.1 Target extraction
Table 3 shows the results for the extraction experiment, where token-level $\text{F}_{1}$ is measured only on targets. The models perform worse than the state of the art, as we did not fine-tune on the SQuAD question answering dataset and in-domain sentiment questions, nor perform extensive hyperparameter tuning. The average $\text{F}_{1}$ score depends strongly on the dataset – MPQA
is the most difficult dataset with 13.1 $\text{F}_{1}$ on the original data,
while the Darmstadt Universities corpus is the easiest for target extraction
with 84.6. Augmenting the input text with further annotations, but predicting
only sentiment targets (Targ. in Table 3) hurts the model performance in all
cases. Specifically, adding holder tags leads to an average drop of 1.3
percentage points (pp), expressions 1.2 and full 1.5. Attempting to
additionally predict these annotations (Pred. in Table 3) leads to mixed
results – the model leads to improvements on MPQA + exp. and Darmstadt
Services + holders, no notable difference on MPQA + full and Darmstadt
Universities + exp., and a loss on the rest.
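The token-level $\text{F}_{1}$ used in Table 3 can be read as ordinary precision and recall over target tokens, with any B-/I-target tag counting as a positive; a minimal sketch under that reading:

```python
def token_f1(gold, pred):
    """Token-level F1 on targets: a token counts as a target token if its
    BIO tag mentions 'target' (covers plain and polarity-collapsed tags)."""
    is_t = lambda tag: "target" in tag
    tp = sum(1 for g, p in zip(gold, pred) if is_t(g) and is_t(p))
    fp = sum(1 for g, p in zip(gold, pred) if not is_t(g) and is_t(p))
    fn = sum(1 for g, p in zip(gold, pred) if is_t(g) and not is_t(p))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```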
Adding the polarity to the target BIO tags (original +Pol. in Table 3) leads
to the most consistent improvements across experiments – an average of 0.5 pp
– with the largest improvement of 1.5 pp on the TDParse dataset. This suggests
a weak-to-moderate relationship between polarity and extraction, which
contradicts previous conclusions Hu et al. (2019). Finally, further adding the
holder and expression tags (+Pol. in Table 3) tends to decrease performance.
### 5.2 Polarity classification with gold annotations
Table 4 shows the macro $\text{F}_{1}$ scores for the polarity classification
task on the gold targets. The model performs better than the best reported
results on Challenge Jiang et al. (2019), and similar to previous results on
the SemEval corpora. Regarding the choice of target representation, First is
the strongest overall, with an average of 64.7 $\text{F}_{1}$ across the
original eight datasets, followed by Max (64.6), Mean (64.4), MaxMM (64.2),
and finally [Cls] (64.1). It is, however, unclear exactly which representation
is best, as it differs for each dataset. We can nevertheless conclude that [Cls] is
in general the weakest model, while either First or Max provides a good starting
point.
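As a rough sketch, the target representations compared above can be expressed as pooling operations over the contextualized subtoken embeddings of the target span. Note that `maxmm` is assumed here to concatenate max and mean pooling; this may not match the paper's exact definition of MaxMM:

```python
import numpy as np

# Minimal sketch of the target-representation pooling strategies.
# [Cls] takes the sentence-level vector; the others pool over the
# target's subtoken positions.

def pool_target(hidden, target_idx, cls_vec, strategy):
    sub = hidden[target_idx]                 # (n_subtokens, dim)
    if strategy == "cls":
        return cls_vec                       # sentence-level [CLS] vector
    if strategy == "first":
        return sub[0]                        # first subtoken only
    if strategy == "mean":
        return sub.mean(axis=0)
    if strategy == "max":
        return sub.max(axis=0)
    if strategy == "maxmm":                  # assumption: max ++ mean
        return np.concatenate([sub.max(axis=0), sub.mean(axis=0)])
    raise ValueError(strategy)

hidden = np.random.rand(6, 4)                # 6 subtokens, dim 4
rep = pool_target(hidden, [2, 3], hidden[0], "mean")
print(rep.shape)  # (4,)
```

The pooled vector is then fed to a classification head that predicts the polarity of the given target.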
Adding holder annotations to the input text delivers only small improvements
on four of the fifteen experiments, and leads to losses on seven. The +exp. model,
however, leads to significant improvements on ten experiments. The outlier
seems to be Darmstadt Services, which contains a large number of “polar
targets”, i.e., targets that do not have polar expressions. This may explain
why including this information has less effect on this dataset. Finally, +full
performs between the original input and +exp.
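Augmenting the input text with gold expression annotations can be pictured as inserting explicit marker tokens around the annotated spans before classification. The bracket-style markers below are assumptions for illustration; the actual tag format used in the experiments may differ:

```python
# Illustrative sketch of augmenting input text with expression tags.
# exp_spans holds inclusive (start, end) token indices of annotated
# sentiment expressions.

def augment_with_expressions(tokens, exp_spans):
    """Wrap each annotated expression span in explicit marker tokens."""
    out = []
    starts = {s for s, _ in exp_spans}
    ends = {e for _, e in exp_spans}
    for i, tok in enumerate(tokens):
        if i in starts:
            out.append("[EXP]")
        out.append(tok)
        if i in ends:
            out.append("[/EXP]")
    return out

tokens = ["The", "staff", "was", "incredibly", "rude"]
print(" ".join(augment_with_expressions(tokens, [(3, 4)])))
# The staff was [EXP] incredibly rude [/EXP]
```

The augmented sequence is then tokenized and classified as usual, giving the model direct access to the location of the sentiment-bearing words.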
### 5.3 Polarity classification with predicted annotations
The expression models achieve modest $\text{F}_{1}$ scores when trained and
tested on the same dataset (between 15.0 and 47.9), and poor scores when
transferred to a different dataset (between 0.9 and 14.9; further details are
shown in Table 7 in Appendix A). The lexicons often provide better cross-dataset
$\text{F}_{1}$ than the expression models trained on another dataset,
as they have relatively good precision on general sentiment terms.
Figure 3 shows a heatmap of improvements (blue) and losses (red) on the eight
datasets (x-axis) when augmenting the input text with expression tags from the
expression models and lexicons (y-axis). We compare the expression augmented
results to the original results for each pooling technique and take the
average of these improvements and losses. For a full table of all results, see
Table 5 in Appendix A.
Augmenting the input text with predicted sentiment expressions leads to losses
in 41 of the 56 averaged experiments shown in Figure 3 (or in 173 out of 280
experiments in Table 5). Curiously, the experiments that use an expression
model trained on the same dataset as the classification task, e.g., MPQA
predicted expressions on the MPQA classification task, have the largest losses
– the largest of which is MPQA (-2.78 on average). This seems to indicate that
the mismatch between the train predictions, which are near perfect, and the
rather poor test predictions is more problematic than cross-dataset
predictions, which are similar on train and test.
| | MPQA | DS. Services | DS. Unis | Challenge | SemEval R. | SemEval L. | Open | TDParse
---|---|---|---|---|---|---|---|---|---
| Previous Results | n/a | n/a | n/a | 70.3 | 80.1 | 78.3 | |
[Cls] | original | 63.5 (2) | 57.3 (1) | 57.6 (4) | 84.3 (0) | 74.1 (2) | 72.8 (1) | 54.6 (1) | 48.8 (1)
\+ holders | 63.1 (2) | 57.1 (1) | 60.5 (0) | - | - | - | - | -
\+ exp. | 64.0 (3) | 56.4 (0) | 62.9 (4) | - | - | - | - | -
\+ full | 61.9 (2) | 56.6 (1) | 62.8 (2) | - | - | - | - | -
First | original | 64.3 (2) | 57.8 (1) | 58.7 (4) | 84.4 (1) | 75.6 (1) | 74.3 (1) | 55.6 (2) | 46.6 (1)
\+ holders | 63.4 (2) | 57.7 (2) | 60.5 (3) | - | - | - | - | -
\+ exp. | 64.8 (2) | 57.0 (1) | 63.7 (2) | - | - | - | - | -
\+ full | 64.0 (1) | 55.2 (1) | 65.7 (4) | - | - | - | - | -
Mean | original | 63.5 (2) | 57.3 (1) | 60.2 (4) | 84.4 (1) | 74.1 (2) | 72.8 (1) | 56.8 (3) | 46.1 (1)
\+ holders | 63.1 (2) | 57.8 (1) | 56.7 (5) | - | - | - | - | -
\+ exp. | 64.3 (2) | 56.2 (1) | 64.1 (3) | - | - | - | - | -
\+ full | 64.2 (2) | 56.3 (1) | 63.7 (2) | - | - | - | - | -
Max | original | 60.8 (4) | 58.2 (1) | 57.8 (3) | 81.4 (1) | 73.9 (2) | 74.5 (2) | 61.4 (5) | 49.0 (3)
\+ holders | 61.9 (4) | 57.9 (1) | 53.9 (1) | - | - | - | - | -
\+ exp. | 64.3 (2) | 57.4 (1) | 61.5 (6) | - | - | - | - | -
\+ full | 62.7 (3) | 57.9 (1) | 54.5 (2) | - | - | - | - | -
MaxMM | original | 59.3 (2) | 57.8 (1) | 55.2 (3) | 81.3 (1) | 77.2 (1) | 74.5 (1) | 60.2 (5) | 48.5 (5)
\+ holders | 61.3 (1) | 57.8 (1) | 54.7 (3) | - | - | - | - | -
\+ exp. | 64.1 (2) | 59.8 (3) | 54.0 (2) | - | - | - | - | -
\+ full | 63.9 (1) | 57.7 (1) | 54.4 (4) | - | - | - | - | -
Table 4: Average macro $\text{F}_{1}$ scores for polarity classification
across five runs (standard deviation in parenthesis) on gold targets, also
adding information about holders and expressions. Bold indicates the best
model per dataset, while blue and pink highlighting indicates an improvement
or loss in performance compared to the original (targets only) data,
respectively.
The best expression prediction model is the one trained on MPQA, improving the
performance on Darmstadt Universities, Open, and SemEval Restaurants. This is
likely due to the fact that MPQA has the largest number of annotated
expressions, and that the domain is more general, leading to expression
predictions that generalize better. The expression model trained on Darmstadt
Services leads to small benefits on two corpora, while the expression model
trained on Darmstadt Universities only leads to losses.
The datasets that receive the most benefit from expression annotations are
Darmstadt Universities (6/7 experiments) and the TDParse dataset (5/7). In
both cases, the lexicon-based expression models provide more consistent
benefits than the trained expression prediction models. The fact that the
dataset that benefits most is the TDParse dataset suggests that expression
information is most useful when there are multiple targets with multiple
polarities.
There is no significant correlation between the performance of the expression
prediction model and the performance on the classification task on the three
fine-grained datasets. In fact, there is a small but insignificant negative
correlation (-0.33 p$=$0.13, -0.16 p$=$0.48, -0.26 p$=$0.25 for macro
Precision, Recall, or $\text{F}_{1}$ respectively, as measured by Pearson’s
correlation between the expression performances and the $\text{F}_{1}$ of the
classification models augmented with these predicted expressions). It seems
that the possible benefits depend more on the target dataset than on the actual
expression model used.
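The correlation check described above can be sketched as follows; the paired scores below are made-up placeholders for illustration, not the paper's measurements:

```python
import math

# Sketch of Pearson's r between expression-model F1 and the F1 of the
# classification model augmented with those predicted expressions.

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

expression_f1 = [15.0, 47.9, 18.5, 10.9, 14.9]      # placeholder values
classification_f1 = [60.3, 56.0, 59.7, 61.3, 58.2]  # placeholder values
print(f"r = {pearson_r(expression_f1, classification_f1):.2f}")
```

In practice one would also report a p-value (e.g. via `scipy.stats.pearsonr`) as the paper does, since with only a handful of datasets the correlation estimate is very noisy.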
Figure 3: Heatmap of average improvements (blue) and losses (red) on the
target classification tasks (x-axis) when augmenting the input text with
predicted sentiment expressions from the expression prediction models
(y-axis).
## 6 Conclusion
In this work we have explored the benefit of augmenting targeted sentiment
models with holder and sentiment expressions. The experiments have shown that
although augmenting text with holder and expression tags (RQ1 a) or
simultaneously predicting them (RQ1 b) has no benefit for target extraction,
predicting collapsed BIO + polarity tags consistently improves target
extraction (RQ1 c). Furthermore, augmenting the input text with gold
expressions generally improves targeted polarity classification (RQ2 a),
although it is not clear which target representation strategy is best (RQ2 b).
Furthermore, we have found benefits of including lexicon-based expressions for
the more complex targeted datasets (RQ2 c).
The rather poor performance of the learned expression models and the
difference between augmenting with gold or predicted expressions reveals the
need to improve expression prediction approaches, both by creating larger
corpora annotated with sentiment expressions, as well as performing further
research on the modelling aspect. Any future work interested in modelling more
complex sentiment phenomena should therefore be aware that we may first
require more high-quality annotated data if we wish to do so with current
state-of-the-art machine learning approaches.
Furthermore, we introduce a common format for eight standard English datasets
in fine-grained sentiment analysis and release the scripts to download and
preprocess them easily. We plan to include further datasets in our script in
the future, as well as extending our work to other languages with available
fine-grained corpora.
## References
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Etter et al. (2013) David Etter, Francis Ferraro, Ryan Cotterell, Olivia Buzek, and Benjamin Van Durme. 2013. Nerit: Named Entity Recognition for Informal Text. Technical Report 11, Human Language Technology Center of Excellence, Johns Hopkins University, Baltimore, Maryland.
* He et al. (2019) Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2019. An interactive multi-task learning network for end-to-end aspect-based sentiment analysis. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 504–515, Florence, Italy. Association for Computational Linguistics.
* He and McAuley (2016) Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In _Proceedings of the 25th International Conference on World Wide Web_ , WWW ’16, page 507–517, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
* Hu et al. (2019) Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li, and Yiwei Lv. 2019. Open-domain targeted sentiment analysis via span-based extraction and classification. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 537–546, Florence, Italy. Association for Computational Linguistics.
* Hu and Liu (2004) Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In _Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , pages 168–177, Seattle, USA.
* Jiang et al. (2019) Qingnan Jiang, Lei Chen, Ruifeng Xu, Xiang Ao, and Min Yang. 2019. A challenge dataset and effective models for aspect-based sentiment analysis. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 6279–6284, Hong Kong, China. Association for Computational Linguistics.
* Katiyar and Cardie (2016) Arzoo Katiyar and Claire Cardie. 2016. Investigating LSTMs for joint extraction of opinion entities and relations. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 919–929, Berlin, Germany. Association for Computational Linguistics.
* Klinger and Cimiano (2013) Roman Klinger and Philipp Cimiano. 2013. Bi-directional inter-dependencies of subjective expressions and targets and their value for a joint model. In _Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 848–854, Sofia, Bulgaria. Association for Computational Linguistics.
* Li et al. (2019) Xin Li, Lidong Bing, Piji Li, and Wai Lam. 2019. A unified model for opinion target extraction and target sentiment prediction. In _Proceedings the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019)_ , pages 6714–6721, Honolulu, Hawaii. AAAI Press.
* Liu (2015) Bing Liu. 2015. _Sentiment analysis: Mining Opinions, Sentiments, and Emotions_. Cambridge University Press, Cambridge, United Kingdom.
* Loper and Bird (2002) Edward Loper and Steven Bird. 2002. NLTK: The natural language toolkit. In _Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics - Volume 1_ , pages 63–70.
* Mitchell et al. (2013) Margaret Mitchell, Jacqui Aguilar, Theresa Wilson, and Benjamin Van Durme. 2013\. Open domain targeted sentiment. In _Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing_ , pages 1643–1654, Seattle, Washington, USA. Association for Computational Linguistics.
* Mohammad and Turney (2013) Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a Word-Emotion Association Lexicon. _Computational Intelligence_ , 29(3):436–465.
* Pontiki et al. (2014) Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In _Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)_ , pages 27–35, Dublin, Ireland. Association for Computational Linguistics.
* Rietzler et al. (2020) Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl. 2020. Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspect-target sentiment classification. In _Proceedings of The 12th Language Resources and Evaluation Conference_ , pages 4933–4941, Marseille, France. European Language Resources Association.
* Taboada et al. (2006) Maite Taboada, Caroline Anthony, and Kimberly Voll. 2006. Methods for Creating Semantic Orientation Dictionaries. In _Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)_ , Genoa, Italy. European Language Resources Association (ELRA).
* Tang et al. (2014) Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment-specific word embedding for twitter sentiment classification. In _Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1555–1565, Baltimore, Maryland. Association for Computational Linguistics.
* Toprak et al. (2010) Cigdem Toprak, Niklas Jakob, and Iryna Gurevych. 2010. Sentence and expression level annotation of opinions in user-generated discourse. In _Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics_ , pages 575–584, Uppsala, Sweden. Association for Computational Linguistics.
* Wang et al. (2017) Bo Wang, Maria Liakata, Arkaitz Zubiaga, and Rob Procter. 2017. TDParse: Multi-target-specific sentiment recognition on twitter. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , pages 483–493, Valencia, Spain. Association for Computational Linguistics.
* Wiebe et al. (2005) Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. _Language Resources and Evaluation (formerly Computers and the Humanities)_ , 39(2/3):164–210.
* Xu et al. (2019) Hu Xu, Bing Liu, Lei Shu, and Philip Yu. 2019. BERT post-training for review reading comprehension and aspect-based sentiment analysis. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 2324–2335, Minneapolis, Minnesota. Association for Computational Linguistics.
* Zhang et al. (2015) Meishan Zhang, Yue Zhang, and Duy-Tin Vo. 2015. Neural networks for open domain targeted sentiment. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 612–621, Lisbon, Portugal. Association for Computational Linguistics.
## Appendix A Additional tables
| | | MPQA | DS. Services | DS. Unis | Challenge | Open | SemEval R. | SemEval L. | TDParse
---|---|---|---|---|---|---|---|---|---|---
| original | [Cls] | 63.5 (2) | 57.3 (1) | 57.6 (4) | 84.3 (0) | 54.6 (1) | 74.1 (2) | 72.8 (1) | 48.8 (1)
| First | 64.3 (2) | 57.8 (1) | 58.7 (4) | 84.4 (1) | 55.6 (2) | 75.6 (1) | 74.3 (1) | 46.6 (1)
| Mean | 63.5 (2) | 57.3 (1) | 60.2 (4) | 84.4 (1) | 56.8 (3) | 74.1 (2) | 72.8 (1) | 46.1 (1)
| Max | 60.8 (4) | 58.2 (1) | 57.8 (3) | 81.4 (1) | 61.4 (5) | 73.9 (2) | 74.5 (2) | 49.0 (3)
| MaxMM | 59.3 (2) | 57.8 (1) | 55.2 (3) | 81.3 (1) | 60.2 (5) | 77.2 (1) | 74.5 (1) | 48.5 (5)
Predicted Expressions | MPQA | [Cls] | 60.3 (2) | 57.0 (1) | 61.3 (5) | 83.1 (1) | 57.5 (4) | 74.2 (2) | 72.2 (1) | 47.5 (2)
First | 61.6 (2) | 57.0 (2) | 59.8 (3) | 83.5 (1) | 55.2 (3) | 77.1 (1) | 73.9 (2) | 45.2 (1)
Mean | 60.3 (2) | 57.0 (1) | 61.3 (5) | 83.1 (1) | 57.5 (4) | 74.2 (2) | 72.2 (1) | 47.5 (2)
Max | 59.1 (2) | 58.1 (0) | 57.0 (4) | 82.3 (0) | 63.7 (1) | 75.0 (2) | 74.7 (1) | 48.5 (2)
MaxMM | 56.2 (5) | 58.1 (1) | 52.7 (2) | 81.3 (1) | 61.9 (3) | 75.6 (2) | 75.2 (1) | 45.8 (4)
DS. Services | [Cls] | 63.6 (1) | 56.3 (1) | 60.6 (1) | 82.4 (1) | 53.4 (4) | 72.1 (2) | 72.2 (1) | 49.4 (2)
First | 61.3 (2) | 54.5 (0) | 59.4 (3) | 82.6 (1) | 56.2 (9) | 76.3 (1) | 74.8 (1) | 45.4 (2)
Mean | 64.1 (2) | 56.0 (0) | 60.1 (5) | 82.4 (1) | 56.1 (3) | 73.0 (2) | 72.4 (1) | 49.8 (2)
Max | 61.4 (1) | 56.7 (1) | 55.2 (2) | 80.3 (1) | 63.0 (2) | 76.5 (1) | 74.5 (1) | 48.1 (3)
MaxMM | 58.5 (2) | 57.1 (1) | 54.8 (5) | 80.4 (1) | 59.0 (2) | 75.9 (2) | 73.2 (2) | 49.5 (4)
DS. Unis | [Cls] | 63.1 (1) | 57.0 (1) | 60.3 (3) | 82.8 (1) | 54.7 (2) | 73.6 (3) | 72.8 (1) | 48.1 (3)
First | 64.1 (2) | 56.9 (1) | 58.2 (2) | 82.6 (1) | 55.2 (3) | 70.8 (3) | 72.9 (2) | 44.4 (1)
Mean | 62.3 (1) | 57.0 (1) | 59.7 (3) | 82.8 (1) | 54.7 (2) | 73.6 (3) | 72.8 (1) | 48.1 (3)
Max | 59.6 (4) | 57.8 (1) | 53.4 (1) | 80.5 (1) | 62.0 (1) | 74.8 (2) | 71.0 (1) | 46.9 (2)
MaxMM | 59.8 (3) | 57.5 (1) | 52.3 (4) | 80.5 (0) | 59.9 (4) | 74.8 (1) | 74.2 (1) | 47.0 (4)
Lexicon Expressions | HuLiu | [Cls] | 60.3 (2) | 56.2 (1) | 60.8 (3) | 82.8 (1) | 54.0 (3) | 73.6 (1) | 73.2 (1) | 50.6 (1)
First | 61.2 (2) | 55.0 (1) | 61.2 (1) | 82.3 (1) | 46.5 (3) | 74.8 (1) | 74.2 (1) | 43.8 (1)
Mean | 60.3 (2) | 56.2 (1) | 60.8 (3) | 82.8 (1) | 54.0 (3) | 73.6 (1) | 73.2 (1) | 50.6 (1)
Max | 59.7 (3) | 57.2 (1) | 56.4 (2) | 81.0 (1) | 61.1 (2) | 75.5 (2) | 73.7 (3) | 47.2 (3)
MaxMM | 60.8 (3) | 57.1 (1) | 55.2 (2) | 80.8 (1) | 61.3 (3) | 73.8 (3) | 73.9 (1) | 49.2 (4)
NRC | [Cls] | 64.0 (2) | 56.9 (1) | 63.0 (2) | 83.1 (1) | 54.8 (3) | 72.0 (1) | 73.0 (1) | 49.4 (1)
First | 63.7 (2) | 56.9 (1) | 61.1 (2) | 83.3 (1) | 49.1 (5) | 74.9 (3) | 74.9 (0) | 46.1 (2)
Mean | 64.0 (2) | 56.9 (1) | 63.0 (2) | 83.1 (1) | 54.8 (3) | 72.0 (1) | 73.0 (1) | 49.4 (1)
Max | 61.1 (3) | 58.0 (1) | 55.6 (2) | 80.4 (1) | 62.0 (1) | 75.3 (2) | 74.6 (2) | 49.7 (3)
MaxMM | 59.5 (3) | 57.6 (1) | 56.9 (4) | 80.8 (1) | 61.3 (2) | 75.4 (2) | 74.8 (1) | 49.8 (4)
SoCal | [Cls] | 63.2 (2) | 56.6 (1) | 60.5 (4) | 83.0 (0) | 51.5 (4) | 69.8 (1) | 71.0 (1) | 50.0 (1)
First | 61.8 (2) | 53.7 (2) | 59.9 (4) | 81.8 (0) | 51.4 (6) | 72.8 (2) | 73.0 (1) | 45.4 (2)
Mean | 63.2 (2) | 56.6 (1) | 60.5 (4) | 83.0 (0) | 51.5 (4) | 69.8 (1) | 71.0 (1) | 50.0 (1)
Max | 59.2 (2) | 57.8 (1) | 54.5 (2) | 79.3 (1) | 62.3 (2) | 71.5 (4) | 71.8 (2) | 49.5 (4)
MaxMM | 59.7 (2) | 56.5 (2) | 55.6 (1) | 79.1 (1) | 60.9 (3) | 73.4 (3) | 73.0 (1) | 51.4 (2)
SoCal-Google | [Cls] | 62.6 (3) | 56.5 (1) | 60.0 (3) | 83.0 (1) | 53.2 (3) | 71.5 (1) | 72.9 (1) | 50.2 (1)
First | 62.1 (1) | 56.2 (1) | 60.8 (5) | 82.5 (1) | 49.7 (6) | 74.9 (2) | 74.0 (1) | 46.2 (0)
Mean | 62.6 (3) | 56.5 (1) | 60.0 (3) | 83.0 (1) | 53.2 (3) | 71.5 (1) | 72.9 (1) | 50.2 (1)
Max | 60.0 (3) | 57.8 (0) | 55.5 (5) | 80.9 (1) | 61.5 (3) | 74.8 (2) | 74.5 (2) | 49.5 (4)
MaxMM | 60.6 (4) | 57.1 (1) | 54.6 (3) | 80.5 (1) | 60.5 (3) | 73.5 (4) | 72.7 (2) | 45.8 (5)
Table 5: Macro $\text{F}_{1}$ scores for polarity classification of gold targets. Bold numbers indicate the best model per dataset, while blue and pink highlighting indicates an improvement or loss in performance compared to the original data (gold targets only), respectively.
| % Unique | % Overlap
---|---|---
| train | dev | test | train-dev | train-test
MPQA | 85.7 | 88.5 | 89.2 | 15 | 19
DS. Services | 36.2 | 48.6 | 47.5 | 45.0 | 35.6
DS. Uni | 35.2 | 52.9 | 45.0 | 58.5 | 47.6
TDParse | 33 | 51.8 | 41.7 | 57.4 | 47.3
SemEval R. | 36.3 | 59.8 | 49.4 | 56.4 | 33.8
SemEval L. | 45.5 | 71.7 | 64.8 | 48.9 | 33.7
Open | 85 | 92.4 | 87.1 | 23 | 24
Challenge | 23.1 | 39.0 | 39.7 | 54.1 | 52
Table 6: Analysis of targets in the datasets. % Unique describes the percentage of targets that are found only in that split. % Overlap describes the percentage of dev/test targets that are found in the train set. We disregard partial matches, e.g., “chinese food” and “food”.
| | MPQA | DS. Services | DS. Unis
---|---|---|---|---
trained | MPQA | 15.0 (1.7) | 1.0 (0.8) | 2.2 (1.2)
DS. Services | 0.9 (0.3) | 47.9 (7.3) | 14.9 (1.2)
DS. Unis | 1.4 (0.6) | 10.9 (1.5) | 18.5 (1.5)
lexicons | HuLiu | 4.7 | 17.9 | 16.0
NRC | 3.3 | 7.4 | 9.0
SoCal | 2.4 | 13.2 | 13.8
SoCal Google | 1.0 | 13.2 | 11.4
Table 7: Token-level macro $\text{F}_{1}$ scores for expression prediction
models (trained) and lexicon expressions (lexicons) when tested on the three
fine-grained datasets (x-axis). The trained model scores are the average and
standard deviation across five runs with different random seeds. The lexicon
models are deterministic and therefore only have a single score.
## Appendix B Training details
GPU Infrastructure | 1 NVIDIA P100, 16 GiB RAM
---|---
CPU Infrastructure | Intel Xeon-Gold 6126 2.6 GHz
Number of search trials | 50
Domain training duration | 2580 sec
Extraction fine-tuning duration | 15381 sec
Classification fine-tuning duration | 9080 sec
Model implementation | https://github.com/blinded/for/review
Hyperparameter | Assignment
---|---
number of epochs | 50
max. sequence length | 128
metric early stopping monitored | validation loss
batch size | 32
sentiment dropout | 0.3
learning rate optimiser | Bert Adam
fine-tuning learning rate | 3e-5
learning rate warmup proportion | 0.1
regularisation type | L2
regularisation value | 0.01
# Infinite-dimensional Thurston theory
and transcendental dynamics I:
infinite-legged spiders
Konstantin Bogdanov
###### Abstract.
We develop techniques that lay out a basis for generalizations of the famous
Thurston’s Topological Characterization of Rational Functions for an
_infinite_ set of marked points and branched coverings of infinite degree.
Analogously to the classical theorem we consider the Thurston’s $\sigma$-map
acting on a Teichmüller space which is this time infinite-dimensional — and
this leads to a completely different theory comparing to the classical
setting.
We demonstrate our techniques by giving an alternative proof of the result by
Markus Förster about the classification of exponential functions with the
escaping singular value.
###### Contents
1. 1 Introduction
2. 2 Prerequisites
1. 2.1 Escaping dynamics in the exponential family
2. 2.2 Quasiconformal maps
3. 2.3 Beurling-Ahlfors extension
4. 2.4 Teichmüller theory and quadratic differentials
3. 3 Setup of Thurston iteration
1. 3.1 Thurston iteration
2. 3.2 Captured exponential function
3. 3.3 Motivation of the “quasiregular setup”
4. 4 Strict contraction of $\sigma$
5. 5 $\operatorname{Id}$-type maps
6. 6 Spiders
1. 6.1 Curves in the punctured half-plane
2. 6.2 $H_{r}$ and Hawaiian earrings
3. 6.3 Homotopy type of a preimage
4. 6.4 Spiders
5. 6.5 Spiders and Teichmüller equivalence
7. 7 Invariant compact subset
8. 8 Proof of Classification Theorem
9. 9 Acknowledgements
## 1\. Introduction
This is the first article out of four, prepared in order to publish the
results of the author’s doctoral thesis. In the second and third papers we use the
toolbox of the infinite-dimensional Thurston theory developed here to classify
the transcendental entire functions that are compositions of a polynomial and
the exponential for which their singular values escape on disjoint dynamic
rays (the most general mode of escape). In the fourth article we investigate
continuity of such families of functions with respect to potentials and
external addresses and use the continuity argument to extend our
classification to the case of escape on the (pre-)periodic rays.
### Motivation from transcendental dynamics
A part of complex dynamics that has been studied intensively over the last
decades is the dynamics of transcendental entire functions. Good introductory
references for the subject are [EL, S].
As in polynomial dynamics, for a transcendental entire function $f$ an
important and natural concept is its _escaping set_
$I(f)=\\{z\in\mathbb{C}:f^{n}(z)\to\infty\text{ as }n\to\infty\\}$. It is
proved in [SZ] for the exponential family
$\\{e^{z}+\kappa:\kappa\in\mathbb{C}\\}$ and in [RRRS] for more general
families of functions (of bounded type and finite order) that the escaping set
of every function in the family is organized in the form of _dynamic rays_.
Roughly speaking, this means that every point $z\in I(f)$ can be joined to
$\infty$ by a unique simple arc contained in $I(f)$ (or is mapped to such an arc
after finitely many iterations), so that the iterates $f^{n}$ restricted to
the arc are injective, continuous at $\infty$ (here we agree for the sake of
convenience that $f(\infty)=\infty$), and converge to $\infty$ uniformly.
About the escaping points that belong to a dynamic ray we say that they
_escape on rays_. This is the general mode of escape for many important
families of transcendental entire functions.
In many cases the points escaping on rays can be described by their potential
(or “speed of escape”) and external address (or “combinatorics of escape”,
i.e. the sequence of dynamic rays containing the escaping orbit). This is
analogous to the Böttcher coordinates for polynomials where the points in the
complement of the filled Julia set are encoded by their potential and external
angle (another more general way to introduce “Böttcher coordinates” for
transcendental entire functions of bounded type is described in [Re]).
One of the most fruitful directions in the study of holomorphic dynamical
systems is the investigation of parameter spaces. The most famous example is
the study of the Mandelbrot set which is the set of parameters $c$ in the
complex plane for which the critical value $c$ of the polynomial $z^{2}+c$
does not escape. The simplest analogue in the transcendental world is the
space of complex parameters $\kappa$, each associated to the function
$e^{z}+\kappa$. When we consider parameters $\kappa$ for which the singular
value $\kappa$ of $e^{z}+\kappa$ escapes, we, roughly speaking, investigate a
part of the “complement of the Mandelbrot set of the exponential family”. The
results in this direction are contained in [FRS, FS, F].
Now, we can ask a question. ($\star$) _For a transcendental entire function of
finite type (i.e. with finitely many singular values), is there in its
parameter space an entire function such that its singular values escape on
rays and have initially prescribed potentials and external addresses?_
In this article we set up a flexible machinery for proving this kind of
results for general families of entire functions and apply it to reprove the
well-known classification result for the exponential family [F].
A main tool is what one might call infinite-dimensional Thurston theory in
dynamics. It is a generalization of the famous _Thurston’s Topological
Characterization of Rational Functions_ [DH]. The subject is interesting not
only as a tool but also in its own right and requires an independent
exposition.
### Infinite-dimensional Thurston theory
One of the questions of great interest in dynamics is the interplay between
topology and geometry. Put differently: when does there exist a geometric
object corresponding in a certain sense to the underlying topological
structure? And if it exists, is it unique?
A major result in this direction was proven by William Thurston.
###### Theorem (Topological Characterization of Rational Functions [DH, H2]).
A post-critically finite branched covering $f:S^{2}\to S^{2}$ with hyperbolic
orbifold is realized via a rational map if and only if there is no Thurston
obstruction.
This result gives a complete answer to the question whether there exists a
post-critically finite rational map with a prescribed combinatorics of the
covering.
So it is natural to hope that if we prove an analogous “characterization of
entire functions” for “topological entire functions” representing some
properly chosen combinatorial model, then the corresponding entire function
will have the properties required in question ($\star$). It turns out that this
can be done (at least for certain explicit families of functions), but the
proof, though it retains some similarities, such as iteration on a Teichmüller
space, goes in quite a different direction from the proof of the classical
theorem. The key differences between our case and the classical one are that we
have to consider iterations on an _infinite-dimensional_ Teichmüller space
(since the set of marked points is infinite), and that our topological entire
function is a branched covering of _infinite_ degree. This drastically changes
the approach already on the level of the setup. So far, the case of an infinite
set of marked points has been explored for different types of functions in [Br, CT,
F]. A generalization of the Characterization Theorem for post-singularly
finite topological exponential functions is provided in [HSS].
Before discussing details we present the “grand scheme” of how one can
construct the entire function demanded in question ($\star$). This “grand scheme”
is partially inspired by the ideas in [F].
1. (1)
Construct a (non-holomorphic) “model map” for which the singular values escape
on rays with desired “speed of escape” and “combinatorics”. It will define
(analogously to the classical case) the Thurston’s $\sigma$-map acting on the
Teichmüller space of the complement of the singular orbits of the “model map”.
2. (2)
Construct a compact subspace in the Teichmüller space that is invariant under
$\sigma$.
3. (3)
Prove that $\sigma$ is strictly contracting in the Teichmüller metric on the
constructed subspace.
4. (4)
Using the argument of the Banach Fixed Point Theorem prove that $\sigma$ has a
unique fixed point on the constructed subspace, and this point corresponds to
the entire function with the desired conditions on its singular orbits.
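The fixed-point argument of steps (3) and (4) rests on the standard fact that iterating a strict contraction converges to its unique fixed point. The one-dimensional toy sketch below is of course only a caricature of the infinite-dimensional Teichmüller setting, with a made-up contraction standing in for the $\sigma$-map on the invariant compact subset:

```python
import math

# Toy illustration of the Banach fixed-point argument: iterating a strict
# contraction converges to its unique fixed point. Here cos stands in
# (very loosely) for Thurston's sigma-map restricted to the invariant
# compact subset, on which it is strictly contracting.

def iterate_to_fixed_point(sigma, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_next = sigma(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

fp = iterate_to_fixed_point(math.cos, 1.0)  # cos is a contraction on [0, 1]
print(round(fp, 6))  # approximately 0.739085 (the Dottie number)
```

In the actual proof the role of the metric is played by the Teichmüller metric, and the fixed point of $\sigma$ corresponds to the desired entire function.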
This scheme was successfully employed to prove classification results for
compositions of polynomials with the exponential [B0], and we believe that it
works in a much greater generality (after adjustments of its internal
ingredients). In this article we develop techniques to treat every
item (1)-(4) and use the scheme to answer question ($\star$) in the “baby
case” of the exponential family. Of these, item (2) is the “weightiest”
one, and we elaborate on it in the two largest sections, Sections 6 and 7.
### Setup of Thurston iteration
This is what happens in the item (1) of the “grand scheme” and Section 3.
In the parameter space that we consider we pick a function $f_{0}$ that has
escaping points realizing the desired “speeds of escape” and “combinatorics”
(from question ($\star$)). Based on $f_{0}$ we construct (by a postcomposition
with a “capture”) a special quasiregular function $f$ for which its singular
orbits coincide with the corresponding escaping orbits of $f_{0}$, and which
are equal to $f_{0}$ outside of some compact set. Moreover, its singular
values escape on rays (defined analogously as for entire maps). This function
$f$ plays the role of the topological branched covering in the classical
theorem.
Based on $f$ we can construct the $\sigma$-map acting on the Teichmüller space
(Definition 2.21) of $\mathbb{C}\setminus P_{f}$ where $P_{f}$ is the post-
singular set of $f$ (i.e. the set of marked points). The map
$\sigma:[\varphi]\to[\tilde{\varphi}]$ is defined in the usual way via
Thurston diagram (Section 3).
${\mathbb{C},P_{f}}$${\mathbb{C},\tilde{\varphi}(P_{f})}$${\mathbb{C},P_{f}}$${\mathbb{C},\varphi(P_{f})}$$\scriptstyle{\tilde{\varphi}}$$\scriptstyle{f}$$\scriptstyle{g=\varphi\circ
f\circ\tilde{\varphi}^{-1}}$$\scriptstyle{\varphi}$
An attempt to define Thurston’s $\sigma$-map as in the classical theorem
for a _topological_ (rather than quasiregular) exponential function would
necessarily fail: the $\sigma$-map can still be defined on the level of
isotopy classes of homeomorphisms, but since the post-singular set is
infinite, the isotopy class of $\tilde{\varphi}\circ\varphi^{-1}$ relative
$\varphi(P_{f})$ might not necessarily contain a quasiconformal map (see
Subsection 3.3 for more details). In that case $\sigma$ can still be interpreted
as a map between different Teichmüller spaces, but we do not pursue such
constructions in this article.
In this article we set up the iteration for the case of the exponential family,
but it can easily be upgraded to _any_ family of transcendental entire functions.
### $\operatorname{Id}$-type maps and spiders
The goal of item (2) in the “grand scheme” is to construct a compact subset of
$\mathcal{T}_{f}$ which is invariant under $\sigma$.
The space $\mathcal{T}_{f}$, with its isotopies relative to infinitely many
points in $P_{f}$, is quite complicated, so one might want to look at
simpler subspaces. These considerations lead us to the notion of
$\operatorname{id}$-type maps (Definition 5.2). These are quasiconformal maps
that tend to the identity as their argument tends to $\infty$ along the
post-singular dynamic rays of $f$. An important property of these maps is that
the images of the post-singular dynamic rays of $f$ preserve their asymptotics
near $\infty$, and this property is invariant under $\sigma$ (under proper
normalization). So the $\operatorname{id}$-type maps form a $\sigma$-invariant
subspace of $\mathcal{T}_{f}$ that respects the “combinatorics” of $f$.
To every $\operatorname{id}$-type map one can associate an _infinite-legged
spider_ and run the so-called spider algorithm [HS, F]. The spider, which is
really the driving concept of the proofs, is a convenient representation of a
point in the Teichmüller space that provides a more tame description of its
homotopy information (every point in the Teichmüller space is described by the
positions of the images of $P_{f}$, which is moduli space information, together
with homotopy information). One can introduce a certain equivalence relation on
the set of spiders, called projective equivalence (Definition 6.26), for which
the following theorem holds (a more precise version is Theorem 6.28).
###### Theorem.
Projective equivalence of spiders is Teichmüller equivalence, i.e. two spiders
are projectively equivalent if and only if the associated
$\operatorname{id}$-type maps represent the same point in the Teichmüller
space.
This way we obtain a description of the quite complicated isotopy types of
homeomorphisms relative countably many marked points via a sequence of
homotopy types of simple arcs, each relative just finitely many marked points
— this is a serious simplification. Another merit of the construction is that
these homotopy types behave well under Thurston iteration (Theorem 6.23).
We demonstrate how the construction works for the exponential family, but it
generalizes to functions admitting escape on rays.
### Invariant compact subset
Such a subset is described as a list of conditions that stay invariant under
$\sigma$. We give only a very rough list here; for more details we refer
directly to Theorem 7.1.
The compact invariant subset is the set of points represented by
quasiconformal maps $\varphi$ such that:
1. (1)
$\varphi$ is of $\operatorname{id}$-type;
2. (2)
for the marked points $P_{f}$ there exists an integer $N>0$ not depending on
$\varphi$ such that the images under $\varphi$ of the first $N$ points of
$P_{f}$ are inside of the disk $\mathbb{D}_{\rho}(0)$ with $\rho>0$ not
depending on $\varphi$, and images of the other points of $P_{f}$ are outside
of $\mathbb{D}_{\rho}(0)$;
3. (3)
for every point $z\in P_{f}$ outside of $\mathbb{D}_{\rho}(0)$, $\varphi(z)$ is
contained in a small disk $U_{z}$ around $z$ so that the disks $U_{z}$ are
mutually disjoint;
4. (4)
inside of $\mathbb{D}_{\rho}(0)$ the distances between marked points are
bounded from below;
5. (5)
the isotopy type $\varphi$ relative points of $P_{f}$ outside of
$\mathbb{D}_{\rho}(0)$ is “almost” the isotopy type of identity;
6. (6)
the isotopy type $\varphi$ relative points of $P_{f}$ inside of
$\mathbb{D}_{\rho}(0)$ is not “too complicated”, i.e. there are quantitative
bounds on how many times the marked points “twist” around each other.
Conditions (2),(3) separate the complex plane into two subsets:
$\mathbb{D}_{\rho}(0)$, where we have less control over the behavior of
$\varphi$ but only finitely many post-singular points, and the complement of
$\mathbb{D}_{\rho}(0)$, where the homotopy information is trivial but there are
infinitely many marked points.
Conditions (2)-(4) describe the position in the moduli space, while (5)-(6)
encode homotopy information of the point in the Teichmüller space.
It is natural to expect that these conditions define a compact subset;
invariance, however, is less obvious.
We provide the construction for the exponential family. Generalizations for
more than one singular value require additional conditions on the orbits.
### Strict contraction of the $\sigma$-map
Item (3) of the “grand scheme” is where the infinite-dimensionality of the
Teichmüller space shows up.
The statement that $\sigma$ is strictly contracting in the classical theorem
by Thurston is deduced from the well-known theorem of Teichmüller claiming
that the quasiconformal representative minimizing the Teichmüller distance on
a surface of finite analytic type is associated to a unique normalized
integrable quadratic differential. We consider a surface of infinite analytic
type, and in our case this statement is false.
One would like to find some other way to prove strict contraction of $\sigma$
on the Teichmüller space; however, this does not seem to be an easy task.
Fortunately, we can find a way around. If we consider _asymptotically
conformal_ points in the Teichmüller space, then Strebel’s Frame Mapping
Theorem yields the existence of a unique associated normalized
integrable quadratic differential, and we can prove strict contraction of
$\sigma$ on an asymptotically conformal subspace.
In particular, the following theorem holds (a more precise version is Theorem
4.4).
###### Theorem.
$\sigma$ is invariant and strictly contracting on the subset of asymptotically
conformal points of the Teichmüller space.
In this article we prove the theorem for compositions of polynomials with the
exponential. For more general families this exact method does not work, but it
might be possible to deduce strict contraction directly from the construction
of the invariant compact subset.
### Classification Theorem
As was mentioned earlier, the existence of a fixed point in the invariant
compact subset follows by a simple application of the Banach Fixed Point
Theorem (the subset is asymptotically conformal). We also show that the entire
map corresponding to the constructed fixed point has singular values that
escape on rays with the prescribed “speed of escape” and “combinatorics”.
In particular, we reprove the well-known classification of exponential
functions with singular values escaping on rays.
###### Theorem 1.1 (Classification Theorem for $e^{z}+\kappa$. [F]).
Let $\underline{s}$ be an exponentially bounded external address that is not
(pre-)periodic, and $t>t_{\underline{s}}$ be a real number. Then in the family
$e^{z}+\kappa$ there exists a unique entire function
$e^{z}+\kappa_{\underline{s},t}$ such that its singular value
$\kappa_{\underline{s},t}$ escapes on rays, has potential $t$ and external
address $\underline{s}$.
Conversely, every function in the family $e^{z}+\kappa$ such that its singular
value escapes on non-(pre-)periodic rays is one of these.
For the definitions of an exponentially bounded external address and
$t_{\underline{s}}$ we refer the reader to Subsection 2.1.
## 2\. Prerequisites
### 2.1. Escaping dynamics in the exponential family
The simplest among the entire functions of finite type are functions having
exactly one singular value. Every such function is equal to
$A_{1}\circ\exp\circ A_{2}$ for some affine maps
$A_{1},A_{2}:\mathbb{C}\to\mathbb{C}$ and is conformally conjugate to a
function of the form $e^{z}+\kappa$. Hence, to study dynamical properties of
entire functions with one singular value, it is enough to consider the
exponential family $e^{z}+\kappa$.
For the sake of brevity we adopt the following notation.
###### Definition 2.1 ($\mathcal{N}$).
Denote by $\mathcal{N}$ the family of transcendental entire functions of the
form $e^{z}+\kappa$ for $\kappa\in\mathbb{C}$.
In this section we assemble some preliminary results and definitions about the
structure of the escaping set of functions in $\mathcal{N}$. We follow the
exposition in [SZ] though use a slightly different notation that is more
suitable for us.
It is proven in [SZ] for the exponential family, and in [RRRS] for much more
general families, that the escaping set of such functions is organized in form
of _dynamic rays_.
###### Definition 2.2 (Ray tails).
Let $f$ be a transcendental entire function. A _ray tail_ of $f$ is a
continuous curve $\gamma:[0,\infty)\to I(f)$ such that for every $n\geq 0$ the
restriction $f^{n}|_{\gamma}$ is injective with
$\lim_{t\to\infty}f^{n}(\gamma(t))=\infty$, and furthermore
$f^{n}(\gamma(t))\to\infty$ uniformly in $t$ as $n\to\infty$.
###### Definition 2.3 (Dynamic rays, escape on rays, endpoints).
A _dynamic ray_ of a transcendental entire function $f$ is a maximal injective
curve $\gamma:(0,\infty)\to I(f)$ such that $\gamma|_{[t,\infty)}$ is a ray
tail for every $t>0$.
If a point $z\in I(f)$ belongs to a dynamic ray, we say that the point
_escapes on rays_ (because in this case every iterate of the point belongs to
a dynamic ray).
If there exists a limit $z=\lim_{t\to 0}\gamma(t)$, then we say that $z$ is an
_endpoint of the dynamic ray_ $\gamma$.
From now we assume that $f\in\mathcal{N}$. Let $r\in\mathbb{R}$ be such that
the right half-plane
$\mathbb{H}_{r}=\\{z\in\mathbb{C}:\operatorname{Re\,}z>r\\}$ does not contain
the singular value of $f$. Then the preimage of $\mathbb{H}_{r}$ under $f$ has
countably many path-connected components, called _tracts_, having $\infty$ as
a boundary point, so that for every tract $T$ the restriction $f|_{T}$ is a
conformal isomorphism (strictly speaking these are tracts of $f^{2}$, so we
slightly abuse the standard terminology). We denote by $T_{n}$ the tract
contained in the infinite strip between the straight lines
$\operatorname{Im\,}z=2\pi(n-1/2)$ and $\operatorname{Im\,}z=2\pi(n+1/2)$.
This way we enumerate the tracts by integers. Clearly, this enumeration
does not depend on the particular choice of $r$.
Let $\gamma:[0,\infty)\to I(f)$ be a ray tail of $f$. Then for every $n\geq 0$
there is $t_{n}\geq 0$ and $s_{n}\in\mathbb{Z}$ such that
$f^{n}\circ\gamma|_{[t_{n},\infty)}$ is contained in the tract $T_{s_{n}}$.
Moreover, all $t_{n}$ except finitely many are equal to $0$.
###### Definition 2.4 (External address).
Let $z$ be a point escaping on rays. We say that $z$ has _external address_
$\underline{s}=(s_{0}s_{1}s_{2}...)$ where $s_{n}\in\mathbb{Z}$, if each
$f^{n}(z)$ belongs to a ray tail contained in $T_{s_{n}}$ near $\infty$.
In this case we also say that the dynamic ray (or ray tail) containing $z$ has
external address $\underline{s}$.
It is clear that the external address does not depend on a particular choice
of $\mathbb{H}_{r}$ in the definition of tracts.
On the set of external addresses we can consider the usual shift-operator
$\sigma:(s_{0}s_{1}s_{2}...)\mapsto(s_{1}s_{2}s_{3}...)$.
Now we define the function which allows us to characterize the “speed of
escape” of points escaping on rays.
###### Definition 2.5 ($F(t)$).
Denote by $F:\mathbb{R^{+}}\to\mathbb{R^{+}}$ the function
$F(t):=e^{t}-1.$
What one should know about $F$ is that its iterates grow very fast, as shown
in the next elementary lemma.
###### Lemma 2.6 (Super-exponential growth of iterates).
For every $t,k>0$ we have $F^{n}(t)/e^{n^{k}}\to\infty$ as $n\to\infty$.
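For the reader's convenience, here is a proof sketch (our own elementary computation, not taken from the source). Since $e^{s}>1+s$ for $s>0$, we have $F(t)>t$, so the orbit of any $t>0$ increases to $\infty$; moreover $F(s)\ge s+1$ and $F(s)\ge e^{s/2}$ for $s\ge 2$.

```latex
% From F(s) >= s+1 (s >= 2): F^n(t) >= n - C for all large n.
% Iterating F(s) >= e^{s/2} twice then gives, for all large n,
F^{n}(t)\;\ge\;\exp\!\Big(\tfrac{1}{2}F^{n-1}(t)\Big)
       \;\ge\;\exp\!\Big(\tfrac{1}{2}\,e^{\frac{1}{2}F^{n-2}(t)}\Big)
       \;\ge\;\exp\!\Big(\tfrac{1}{2}\,e^{\frac{n-C}{2}}\Big),
% so F^n(t)/e^{n^k} >= exp( (1/2) e^{(n-C)/2} - n^k ) -> infinity,
% because the inner exponential dominates every polynomial n^k.
```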
The next definition relates the last two notions.
###### Definition 2.7 (Exponentially bounded external address,
$t_{\underline{s}}$).
We say that the sequence $\underline{s}=(s_{0}s_{1}s_{2}...)$ is
_exponentially bounded_ if there exists $t>0$ such that $s_{n}/{F^{n}(t)}\to
0$ as $n\to\infty$. The infimum of such $t$ we denote by $t_{\underline{s}}$.
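Two quick illustrations (ours, not from the source): every bounded address is exponentially bounded with $t_{\underline{s}}=0$, and the same holds, say, for $s_{n}=2^{n}$.

```latex
% For s_n = 2^n and any t > 0, Lemma 2.6 with k = 1 gives
\frac{s_{n}}{F^{n}(t)}\;=\;\frac{e^{n\log 2}}{F^{n}(t)}
  \;\le\;\frac{e^{n}}{F^{n}(t)}\;\longrightarrow\;0,
% so every t > 0 works and t_s = 0. By contrast, one can check that an
% address growing like s_n ~ F^n(t_0) satisfies s_n/F^n(t) -> 0 only
% for t > t_0, which forces t_s >= t_0.
```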
The next statement says that no other type of external address can appear for
points escaping on rays.
###### Lemma 2.8 (Only exponentially bounded $\underline{s}$ [SZ]).
If $z\in I(f)$ escapes on rays and has external address
$\underline{s}=(s_{0}s_{1}s_{2}...)$, then $\underline{s}$ is exponentially
bounded.
The key statement about the escaping set of functions in $\mathcal{N}$ is the
following theorem, which collects several results from [SZ] that are of
interest to us.
###### Theorem 2.9 (Escape on rays and asymptotic formula [SZ]).
Let $f\in\mathcal{N}$. Then for every exponentially bounded external address
there exists a unique dynamic ray realizing it.
Further, if $\mathcal{R}_{\underline{s}}$ is the dynamic ray having
exponentially bounded external address $\underline{s}=(s_{0}s_{1}s_{2}...)$,
and no strict forward iterate of $\mathcal{R}_{\underline{s}}$ contains the
singular value of $f$, then $\mathcal{R}_{\underline{s}}$ can be parametrized
by $t\in(t_{\underline{s}},\infty)$ so that
$\mathcal{R}_{\underline{s}}(t)=t+2\pi is_{0}+O(e^{-t/2}),$ (2.1)
and
$f^{n}\circ\mathcal{R}_{\underline{s}}=\mathcal{R}_{\sigma^{n}\underline{s}}\circ
F^{n}.$
Asymptotic bounds $O(.)$ for $\mathcal{R}_{\sigma^{n}\underline{s}}(t)$ are
uniform in $n$ on every ray tail contained in $\mathcal{R}_{\underline{s}}$.
Every escaping point is mapped after finitely many iterations either onto a
dynamic ray or to an endpoint of a ray. If the singular value of $f$ does not
escape, then all dynamic rays are parametrized as above, and every escaping
point escapes either on a ray or as an endpoint of a ray.
The theorem above allows us to define the notion of potential.
###### Definition 2.10 (Potential).
Let $f\in\mathcal{N}$ and assume that $z$ escapes on rays with escaping
address $\underline{s}$. We say that $t$ is the potential of $z$ if $\lvert
f^{n}(z)-F^{n}(t)-2\pi is_{n}\rvert\to 0$ as $n\to\infty$.
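For example, combining Definition 2.10 with the asymptotic formula (2.1): if $z=\mathcal{R}_{\underline{s}}(t)$ lies on a ray as in Theorem 2.9, then

```latex
f^{n}(z)\;=\;\mathcal{R}_{\sigma^{n}\underline{s}}(F^{n}(t))
       \;=\;F^{n}(t)+2\pi i s_{n}+O\!\big(e^{-F^{n}(t)/2}\big),
% with the O(.) uniform in n by Theorem 2.9, so
% |f^n(z) - F^n(t) - 2\pi i s_n| -> 0: the potential of R_s(t) is t.
```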
From Theorem 2.9 it follows that every point escaping on rays (without
singular values on its forward orbit) has a well-defined potential, and that
different points on the same ray have different potentials.
### 2.2. Quasiconformal maps
Standard references are [A, LV]. We start with a few preliminary definitions.
###### Definition 2.11 (Quadrilateral).
A _quadrilateral_ $Q(z_{1},z_{2},z_{3},z_{4})$ is a Jordan domain $Q$ together
with a sequence $z_{1},z_{2},z_{3},z_{4}$ of boundary points called vertices
of the quadrilateral. The order of the vertices agrees with the positive
orientation with respect to $Q$. The arcs $z_{1}z_{2}$ and $z_{3}z_{4}$ are
called $a$-sides, and the arcs $z_{2}z_{3}$ and $z_{4}z_{1}$ are called
$b$-sides.
Every quadrilateral $Q$ is conformally equivalent to a unique canonical
rectangle whose $b$-sides have length 1. For a quadrilateral $Q$, the length
of the $a$-sides of its canonical rectangle is called the (conformal)
_modulus_ of $Q$ and is denoted by $\operatorname{mod\,}Q$.
###### Definition 2.12 (Maximal dilatation).
Let $U$ be a plane domain and $\psi$ be an orientation-preserving
homeomorphism of $U$. The _maximal dilatation_ of $\psi$ is the number
$K(\psi)=\sup_{\overline{Q}\subset
U}\frac{\operatorname{mod\,}\psi(Q)}{\operatorname{mod\,}Q},$
where the supremum is taken over all quadrilaterals $Q$ whose closure is
contained in $U$.
Now we can define quasiconformal maps.
###### Definition 2.13 (Quasiconformal map).
An orientation-preserving homeomorphism $\psi$ of a plane domain $U$ is called
quasiconformal if its maximal dilatation $K(\psi)$ is finite. If $K(\psi)\leq
K<\infty$, then $\psi$ is called $K$-quasiconformal.
It is easy to show that the inverse of a $K$-quasiconformal mapping is
$K$-quasiconformal, and the composition of a $K_{1}$-quasiconformal and
$K_{2}$-quasiconformal mapping is $K_{1}K_{2}$-quasiconformal.
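Both facts follow directly from Definition 2.12; here is a sketch (our computation, not from the source).

```latex
% Composition: for any quadrilateral Q with closure in U,
\operatorname{mod\,}(\psi_{2}\circ\psi_{1})(Q)
  \;\le\;K(\psi_{2})\,\operatorname{mod\,}\psi_{1}(Q)
  \;\le\;K(\psi_{1})K(\psi_{2})\,\operatorname{mod\,}Q.
% Inverse: the "rotated" quadrilateral Q' = Q(z_2,z_3,z_4,z_1) swaps
% a- and b-sides, so mod Q' = 1/mod Q. Applying the definition of
% K(\psi) to Q' yields mod Q / mod \psi(Q) <= K(\psi), i.e.
% K(\psi^{-1}) <= K(\psi); by symmetry, K(\psi^{-1}) = K(\psi).
```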
We also provide the analytic definition of quasiconformal maps. It is
equivalent to the previous one.
###### Definition 2.14 (Quasiconformal map).
A homeomorphism $\psi$ of a plane domain $U$ is quasiconformal if there exists
$k<1$ such that
1. (1)
$\psi$ has locally integrable, distributional derivatives $\psi_{z}$ and
$\psi_{\overline{z}}$ on $U$, and
2. (2)
$\lvert\psi_{\overline{z}}\rvert\leq k\lvert\psi_{z}\rvert$ almost everywhere.
Such $\psi$ is called $K$-quasiconformal, where $K=\frac{1+k}{1-k}$.
Every quasiconformal map is determined by its Beltrami coefficient.
###### Definition 2.15 (Beltrami coefficient).
The function $\mu_{\psi}(z)=\psi_{\overline{z}}(z)/{\psi_{z}(z)}$ (defined
a.e. on $U$) is called the _Beltrami coefficient_ of $\psi$.
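A standard concrete example (our illustration, not from the text): the linear stretch $\psi(z)=z+k\overline{z}$ for a fixed $0\le k<1$.

```latex
\psi_{z}=1,\qquad \psi_{\overline{z}}=k,\qquad
\mu_{\psi}(z)\equiv k,\qquad K=\frac{1+k}{1-k}.
% Indeed \psi(x) = (1+k)x on the real axis and \psi(iy) = (1-k)iy on
% the imaginary axis: circles are mapped to ellipses of axis ratio K.
```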
Prescribing the Beltrami coefficient is almost the same as giving a
quasiconformal map. Suppose $\mu(z)$ is a measurable complex-valued function
defined on a domain $U\subset\hat{\mathbb{C}}$ for which
$\lvert\lvert\mu\rvert\rvert_{L^{\infty}}=k<1$. We can ask whether it is
possible to find a quasiconformal map $\psi$ satisfying the Beltrami equation
$\psi_{\overline{z}}(z)=\mu(z)\psi_{z}(z)$ (2.2)
where the partial derivatives $\psi_{z}(z)$ and $\psi_{\overline{z}}(z)$ are
defined in the sense of distributions and are locally integrable.
The answer for $U=\hat{\mathbb{C}}$ is contained in the Measurable Riemann
Mapping Theorem.
###### Theorem 2.16 (Measurable Riemann Mapping Theorem [GL]).
The Beltrami equation (2.2) gives a one-to-one correspondence between the set of
quasiconformal homeomorphisms of $\hat{\mathbb{C}}$ that fix the points $0,1$
and $\infty$ and the set of measurable complex-valued functions $\mu$ on
$\hat{\mathbb{C}}$ for which $\lvert\lvert\mu\rvert\rvert_{L^{\infty}}<1$.
Furthermore, the normalized solution $\psi^{\mu}$ depends holomorphically on
$\mu$.
### 2.3. Beurling-Ahlfors extension
Next, we formulate two theorems that help us to reconstruct a quasiconformal
map on a half-plane from its boundary values given by a _quasisymmetric
function_.
###### Definition 2.17 (Quasisymmetric function).
Let $\xi$ be a continuous strictly increasing self-homeomorphism of the real
line, and let $\rho$ be a positive constant. We say that $\xi$ is
_$\rho$-quasisymmetric_ if for every $t>0$ and every $x\in\mathbb{R}$ we have
$\frac{1}{\rho}\leq\frac{\xi(x+t)-\xi(x)}{\xi(x)-\xi(x-t)}\leq\rho.$
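An instructive non-affine example (an elementary computation of ours, not from the source): $\xi(x)=x^{3}$ is quasisymmetric. Writing $x=st$ with $t>0$, the ratio depends only on $s$:

```latex
\frac{\xi(x+t)-\xi(x)}{\xi(x)-\xi(x-t)}
  \;=\;\frac{(s+1)^{3}-s^{3}}{s^{3}-(s-1)^{3}}
  \;=\;\frac{3s^{2}+3s+1}{3s^{2}-3s+1},
% which attains its maximum at s = 1/\sqrt{3}, with value
% (2+\sqrt3)/(2-\sqrt3) = 7+4\sqrt3, and its minimum 1/(7+4\sqrt3) at
% s = -1/\sqrt{3}; so \xi is \rho-quasisymmetric with \rho = 7+4\sqrt3.
% An affine map ax+b (a > 0) gives ratio identically 1.
```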
The first theorem says, roughly, that the restriction to the real line of a
quasiconformal self-homeomorphism of the upper half-plane is quasisymmetric.
###### Theorem 2.18 (Boundary values of homeomorphisms [LV, Theorem 6.2]).
Let $\varphi$ be a self-homeomorphism of the closure of the upper half-plane
fixing $\infty$ such that for every quadrilateral $Q$ with boundary on the
real line,
$\operatorname{mod\,}\varphi(Q)\leq K\operatorname{mod\,}Q$
for some $K\geq 1$.
Then for every $t>0$ and every $x\in\mathbb{R}$,
$\frac{1}{\lambda(K)}\leq\frac{\varphi(x+t)-\varphi(x)}{\varphi(x)-\varphi(x-t)}\leq\lambda(K),$
where $\lambda:[0,\infty]\to\mathbb{R}$ is a continuous function, not
depending on $\varphi$, with $\lambda(1)=1$.
###### Remark 2.19.
Strictly speaking, the authors in [LV] prove the theorem for
$K$-quasiconformal maps of the upper half-plane, but exactly the same proof
works in the setting above.
The second theorem says that quasisymmetric maps can be promoted to
quasiconformal maps with a sharp estimate on the maximal dilatation.
###### Theorem 2.20 (Beurling-Ahlfors extension [BA]).
Let $\xi:\mathbb{R}\to\mathbb{R}$ be a function that is $\rho$-quasisymmetric.
Then it extends to a $K$-quasiconformal self-homeomorphism of the upper half-
plane with $K\leq\rho^{2}$.
### 2.4. Teichmüller theory and quadratic differentials
Good references are [GL, H1]. To avoid unnecessary technicality we give _ad
hoc_ definitions that work specifically in our context.
###### Definition 2.21 (Teichmüller space of $\mathbb{C}\setminus V$).
Let $V$ be a discrete subset of $\mathbb{C}$ without accumulation points in
$\mathbb{C}$. The _Teichmüller space_ of the Riemann surface
$\mathbb{C}\setminus V$ is the set of quasiconformal homeomorphisms of
$\mathbb{C}\setminus V$ modulo post-composition with an affine map and isotopy
relative $V$.
The points in the Teichmüller space of $\mathbb{C}\setminus V$ are equivalence
classes $[\varphi]$ of quasiconformal homeomorphisms $\varphi$ of
$\mathbb{C}\setminus V$.
###### Remark 2.22.
A more standard definition of the Teichmüller space on a Riemann surface
involves isotopy relative the _ideal boundary_ rather than the topological
boundary. For planar domains the two definitions are equivalent [GL].
Every Teichmüller space can be equipped with a special metric.
###### Definition 2.23 (Teichmüller distance).
Let $[\varphi_{0}],[\varphi_{1}]$ be two points in the Teichmüller space of
$\mathbb{C}\setminus V$. The Teichmüller distance
$d_{T}([\varphi_{0}],[\varphi_{1}])$ is defined as
$\inf\limits_{\psi\in[\varphi_{1}\circ(\varphi_{0})^{-1}]}\log K(\psi),$
that is, the infimum of the logarithms of the maximal dilatations of the
quasiconformal maps that belong to $[\varphi_{1}\circ(\varphi_{0})^{-1}]$.
The Teichmüller space of $\mathbb{C}\setminus V$ equipped with the Teichmüller
distance is a complete metric space.
We also need the notion of holomorphic quadratic differential.
###### Definition 2.24 (Holomorphic quadratic differential).
A _holomorphic quadratic differential_ on $\mathbb{C}\setminus V$ is a
meromorphic function on $\mathbb{C}\setminus V$.
We say that a holomorphic quadratic differential $q$ is _integrable_ if $q\in
L^{1}(\mathbb{C})$ (or equivalently $q\in L^{1}(\mathbb{C}\setminus V)$). An
integrable quadratic differential has either a simple pole, or a removable
singularity at punctures (i.e. at points of $V$). Further, it has a pole of
order 4 at infinity and at least two finite simple poles.
The next theorem is an important result of Teichmüller theory.
###### Theorem 2.25 (Teichmüller Uniqueness Theorem [GL]).
Suppose $\varphi_{0}$ is a quasiconformal map of $\mathbb{C}\setminus V$
having Beltrami coefficient $k_{0}\frac{\lvert q_{0}\rvert}{q_{0}}$ with an
integrable quadratic differential $q_{0}$ and $0\leq k_{0}<1$. Let $\varphi$
be any other quasiconformal map of $\mathbb{C}\setminus V$ such that
$\varphi^{-1}\circ\varphi_{0}\in[\operatorname{id}]$ in the Teichmüller space
of $\mathbb{C}\setminus\varphi_{0}(V)$, and let $\mu_{\varphi}$ be the
Beltrami coefficient of $\varphi$. Then either there exists a set of positive
measure in $\mathbb{C}\setminus V$ on which
$\lvert\mu_{\varphi}(z)\rvert>k_{0}$, or $\mu_{\varphi}(z)=k_{0}\frac{\lvert
q_{0}\rvert}{q_{0}}$ almost everywhere.
In this article we need a special _asymptotically conformal_ subset of the
Teichmüller space.
###### Definition 2.26 (Asymptotically conformal points [GL]).
A point $[\varphi]$ in the Teichmüller space of $\mathbb{C}\setminus V$ is
called _asymptotically conformal_ if for every $\epsilon>0$ there is a compact
set $C\subset\mathbb{C}\setminus V$ and a representative $\psi\in[\varphi]$
such that $\lvert\mu_{\psi}\rvert<\epsilon$ a.e. on $(\mathbb{C}\setminus
V)\setminus C$.
Every asymptotically conformal point evidently contains a _frame mapping_.
###### Definition 2.27 (Frame mapping [GL]).
Let $\varphi_{0}$ be an extremal representative of $[\varphi_{0}]$ (i.e.
minimizing the Teichmüller distance in $[\varphi_{0}]$) in the Teichmüller
space of $\mathbb{C}\setminus V$, and let $K_{0}$ be its maximal dilatation. If
there exists a quasiconformal map $\varphi\in[\varphi_{0}]$ such that
$\operatorname*{ess\,sup}_{(\mathbb{C}\setminus V)\setminus
C}|K_{\varphi}|<K_{0}$ where $C$ is a compact subset of $\mathbb{C}\setminus
V$, we say that $\varphi$ is a frame mapping of the equivalence class
$[\varphi_{0}]$.
Since every asymptotically conformal point contains a frame mapping, the next
theorem provides a description of the extremal representatives of
asymptotically conformal points.
###### Theorem 2.28 (The Frame Mapping Theorem [GL]).
If the point $[\varphi]$ in the Teichmüller space of $\mathbb{C}\setminus V$
has a frame mapping, then it has an extremal representative $\varphi_{0}$
which has Beltrami coefficient $\mu_{\varphi_{0}}=k_{0}\dfrac{|q_{0}|}{q_{0}}$
where $q_{0}$ is an integrable holomorphic quadratic differential with
$\|q_{0}\|_{L^{1}}=1$ and $0\leq k_{0}<1$. Further, if $0<k_{0}<1$, then
$q_{0}$ is uniquely determined.
## 3\. Setup of Thurston iteration
### 3.1. Thurston iteration
In this section we introduce a procedure called Thurston iteration on a
Teichmüller space. It will be seen later that this procedure allows us to find
entire functions with the desired combinatorics of escape of their singular
values.
Let $f_{0}$ be a transcendental entire function of finite type, and
$\xi:\mathbb{C}\to\mathbb{C}$ be a quasiconformal map. Define the quasiregular
function $f=\xi\circ f_{0}$ and assume that every singular point of $f$ (i.e.
image of a singular point of $f_{0}$ under $\xi$) either escapes or is
(pre-)periodic under iterations of $f$. Let $P_{f}$ be the union of all
singular orbits of $f$ (including singular values). It is common to call
$P_{f}$ either the _post-singular_ set or the set of _marked points_.
As in the proof of Thurston’s Topological Characterization of Rational
Functions [DH], one is interested in the existence of an entire map $g$ which
is Thurston equivalent to $f$.
###### Definition 3.1 (Thurston equivalence).
We say that $f$ is _Thurston equivalent_ to an entire map $g$ if there exist
two homeomorphisms $\varphi,\psi:\mathbb{C}\to\mathbb{C}$ such that
1. (1)
$\varphi=\psi$ on $P_{f}$,
2. (2)
the following diagram commutes
$\begin{array}{ccc}
(\mathbb{C},P_{f}) & \xrightarrow{\;\psi\;} & (\mathbb{C},\psi(P_{f}))\\
{\scriptstyle f}\big\downarrow & & \big\downarrow{\scriptstyle g}\\
(\mathbb{C},P_{f}) & \xrightarrow{\;\varphi\;} & (\mathbb{C},\varphi(P_{f}))
\end{array}$
3. (3)
$\varphi$ is isotopic to $\psi$ relative $P_{f}$.
Note that, unlike in the classical theory, we consider Thurston equivalence
relative the _infinite_ set of marked points $P_{f}$.
###### Definition 3.2 ($\mathcal{T}_{f}$).
Denote by $\mathcal{T}_{f}$ the Teichmüller space of $\mathbb{C}\setminus
P_{f}$.
Then the Thurston map
$\sigma:\mathcal{T}_{f}\to\mathcal{T}_{f}$
is defined as follows. Let $[\varphi]$ be a point in $\mathcal{T}_{f}$, where
$\varphi$ is quasiconformal. Then $\varphi\circ f$ defines a conformal
structure on $\mathbb{C}$. By the Uniformization Theorem there exists a map
$\tilde{\varphi}$ that is a conformal homeomorphism from $\mathbb{C}$ with the
new complex structure (given by $\varphi\circ f$) to the standard $\mathbb{C}$
(and it is quasiconformal with respect to the standard $\mathbb{C}$).
Then by definition $\sigma[\varphi]:=[\tilde{\varphi}]$. It is easy to check
that $\sigma$ is well-defined on the level of the Teichmüller space. The
procedure is encoded in the following diagram (called the _Thurston diagram_):
$\begin{array}{ccc}
(\mathbb{C},P_{f}) & \xrightarrow{\;\tilde{\varphi}\;} & (\mathbb{C},\tilde{\varphi}(P_{f}))\\
{\scriptstyle f=\xi\circ f_{0}}\big\downarrow & & \big\downarrow{\scriptstyle g}\\
(\mathbb{C},P_{f}) & \xrightarrow{\;\varphi\;} & (\mathbb{C},\varphi(P_{f}))
\end{array}$
The map $g=\varphi\circ f\circ\tilde{\varphi}^{-1}$ corresponding to the right
vertical arrow is by construction a transcendental entire function. As in the
classical case, the $\sigma$-map is evidently continuous.
As in the classical paper [DH], we are looking for a fixed point of $\sigma$.
It is easy to see that the following lemma holds.
###### Lemma 3.3 (Thurston equivalence and fixed points of $\sigma$).
If $[\varphi]=[\tilde{\varphi}]$, then the map $g=\varphi\circ
f\circ\tilde{\varphi}^{-1}$ on the right hand side of the Thurston diagram is
Thurston equivalent to $f$.
### 3.2. Captured exponential function
Since our primary goal is to prove the Classification Theorem for the
exponential family using Thurston iteration, we have to choose the map $f$
properly. More precisely, let $f_{0}\in\mathcal{N}$ be a function with the
non-escaping (finite) singular value $v$. Next, let
$\\{a_{n}\\}_{n=0}^{\infty}$ be an orbit of $f_{0}$ that escapes on rays
$\mathcal{R}_{n}$, and $R_{n}$ be the part of the dynamic ray
$\mathcal{R}_{n}$ from $a_{n}$ to $\infty$ (i.e. the ray tail with the
endpoint $a_{n}$).
Now we describe the construction of the _capture_. It is a carefully chosen
quasiconformal homeomorphism of $\mathbb{C}$ that is equal to the identity
outside of a bounded set and maps the singular value of $f_{0}$ to $a_{0}$.
Fix a bounded Jordan domain $U\subset\mathbb{C}\setminus\bigcup_{n>0}R_{n}$
containing both $v$ and $a_{0}$. Define an isotopy
$c_{u}:\mathbb{C}\to\mathbb{C},u\in[0,1]$ through quasiconformal maps, such
that $c_{0}=\operatorname{id}$ on $\mathbb{C}$, $c_{u}=\operatorname{id}$ on
$\mathbb{C}\setminus U$ for $u\in[0,1]$, and $c_{1}(v)=a_{0}$. We say that the
quasiconformal map $c=c_{1}$ is a capture.
###### Remark 3.4.
Note that the choice of the capture is not unique; we fix one of them.
Define the map $f:=c\circ f_{0}$. It is a _quasiregular_ map whose singular
orbits coincide with $\\{a_{n}\\}_{n=0}^{\infty}$, the orbit of $a_{0}$ under
$f_{0}$ (also $R_{n+1}=f(R_{n})$). In this article we are mainly interested in
the $\sigma$-map associated to the map $f$ constructed this way.
### 3.3. Motivation of the “quasiregular setup”
A few words are in order to explain why we do not follow the classical setup
of Thurston iteration but use a _quasiregular_ rather than _topological_
branched covering $f$, as in Thurston’s Characterization Theorem. The main
trouble is that for post-singularly infinite topological branched coverings
$f$ the isotopy classes $[\varphi]$ and $\sigma[\varphi]$ generally belong to
different Teichmüller spaces (which never happens when $P_{f}$ is finite). As
an example one can simply consider the exponential.
###### Lemma 3.5 (Classical definition of $\sigma$ does not work).
Let $f(z)=e^{z}$, and let $\\{a_{n}\\}=\\{f^{\circ n}(0)\\}_{n=0}^{\infty}$ be
the orbit of its singular value $0$. Then there exist orientation-preserving
(not quasiconformal) homeomorphisms
$\varphi,\tilde{\varphi}:\mathbb{C}\to\mathbb{C}$ such that $\varphi\circ
f\circ\tilde{\varphi}^{-1}$ is an entire transcendental function, but the
isotopy class $[\varphi\circ\tilde{\varphi}^{-1}]$ (relative
$\tilde{\varphi}(P_{f})$) does not contain any quasiconformal homeomorphism.
###### Proof.
Pick an orientation-preserving homeomorphism $\varphi$ such that
$\varphi(\overline{z})=\overline{\varphi(z)}$ and $\varphi(a_{n})=a_{2n}$. In
particular, $\varphi$ preserves the real line.
Let $\tilde{\varphi}$ be a homeomorphism such that $g=\varphi\circ
f\circ\tilde{\varphi}^{-1}$ is entire — its existence can be proven as earlier
by Uniformization theorem — and normalized so that $g(z)=e^{z}+\kappa$. Since
$\varphi(0)=\varphi(a_{0})=a_{0}=0$, we have $\kappa=0$. Further, we may
assume that $\tilde{\varphi}$ also preserves the real line. Hence
$\tilde{\varphi}(a_{n})=a_{2n+1}$.
Now, consider the orientation-preserving homeomorphism
$\varphi\circ\tilde{\varphi}^{-1}$, which preserves the real line. We want to
prove that the isotopy class $[\varphi\circ\tilde{\varphi}^{-1}]$ does not
contain any quasiconformal maps.
Let $\psi\in[\varphi\circ\tilde{\varphi}^{-1}]$ and consider the annuli
$A_{k}:=\\{z\in\mathbb{C}:a_{k}<\lvert z\rvert<a_{k+2}\\}.$
Then the annulus $B_{n}=\psi(A_{2n+1})$ separates the points $0,a_{2n}$ from
the points $a_{2n+2},\infty$, and this holds for all $n>0$.
From the general theory of quasiconformal maps (see e.g. [LV]) we know that
there is a real constant $C>0$ so that
$\operatorname{mod\,}(B_{n})\leq C\operatorname{mod\,}(A_{2n}).$
Hence
$\frac{\operatorname{mod\,}(\psi(A_{2n+1}))}{\operatorname{mod\,}(A_{2n+1})}=\frac{\operatorname{mod\,}(B_{n})}{\operatorname{mod\,}(A_{2n+1})}\leq\frac{C\operatorname{mod\,}(A_{2n})}{\operatorname{mod\,}(A_{2n+1})}\to
0$
as $n\to\infty$ and $\psi$ cannot be quasiconformal. ∎
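To see why the last ratio tends to $0$, one can make the moduli explicit (our computation, using $a_{n+1}=e^{a_{n}}$ and the fact that a round annulus $\{a<\lvert z\rvert<b\}$ has modulus $\frac{1}{2\pi}\log(b/a)$):

```latex
% Since \log a_{n+1} = a_n, for n >= 1 we get
\frac{\operatorname{mod\,}(A_{2n})}{\operatorname{mod\,}(A_{2n+1})}
  \;=\;\frac{\log(a_{2n+2}/a_{2n})}{\log(a_{2n+3}/a_{2n+1})}
  \;=\;\frac{a_{2n+1}-a_{2n-1}}{a_{2n+2}-a_{2n}}
  \;\sim\;\frac{a_{2n+1}}{e^{a_{2n+1}}}
  \;\longrightarrow\;0.
```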
The solution to this issue of “jumping” between different Teichmüller spaces
is to take a quasiregular function $f$, which comes with its own complex
plane, i.e. exactly what we did in the setup of the Thurston iteration.
## 4\. Strict contraction of $\sigma$
In this section we prove that the $\sigma$-map associated to the captured
exponential function (defined as in Subsection 3.2) is strictly contracting in
the Teichmüller metric on the subset of asymptotically conformal points in the
Teichmüller space (Definition 2.26). Instead of showing this directly, we
prove a series of more general lemmas covering the case of the composition of
a polynomial with the exponential: this requires no essential additional
effort, and we will be able to use the result in subsequent articles that
handle the infinite-dimensional Thurston theory in this case.
We work in the more general setting of Subsection 3.1 (not with the captured
exponential function). More precisely, let $f_{0}$ be a transcendental entire
function of finite type, and $\xi:\mathbb{C}\to\mathbb{C}$ be a quasiconformal
map. Consider the quasiregular function $f=\xi\circ f_{0}$ and assume that
every singular point of $f$ either escapes or is (pre-)periodic. In this case
we have the map $\sigma:\mathcal{T}_{f}\to\mathcal{T}_{f}$ associated to $f$.
We want to apply it to the asymptotically conformal points in
$\mathcal{T}_{f}$.
###### Lemma 4.1 (Strict contraction of $\sigma$).
If points $[\varphi],[\psi]\in\mathcal{T}_{f}$ and their $\sigma$-images
$\sigma[\varphi],\sigma[\psi]\in\mathcal{T}_{f}$ are asymptotically conformal,
and $[\varphi]\neq[\psi]$, then
$d_{T}(\sigma[\varphi],\sigma[\psi])<d_{T}([\varphi],[\psi]).$
###### Proof.
Assume that $\sigma[\varphi]\neq\sigma[\psi]$, otherwise the statement is
trivial. Consider the joint Thurston diagram for maps $\varphi$ and $\psi$:
${\mathbb{C},\tilde{\psi}(P_{f})}$${\mathbb{C},P_{f}}$${\mathbb{C},\tilde{\varphi}(P_{f})}$${\mathbb{C},\psi(P_{f})}$${\mathbb{C},P_{f}}$${\mathbb{C},\varphi(P_{f})}$$g$$h$$f$$\tilde{\psi}$$\tilde{\varphi}$$\psi$$\varphi$
Here $g$ and $h$ are entire maps. Since we are only interested in the
Teichmüller distance between $\sigma[\varphi]$ and $\sigma[\psi]$, we can
denote $\eta:=\varphi\circ\psi^{-1}$ and consider the following wrapped
diagram:
${\mathbb{C},\tilde{\psi}(P_{f})}$${\mathbb{C},\tilde{\varphi}(P_{f})}$${\mathbb{C},\psi(P_{f})}$${\mathbb{C},\varphi(P_{f})}$$g$$\tilde{\eta}$$h$$[\eta]$
Since $g$ and $h$ are holomorphic, the $L^{\infty}$-norm of the Beltrami
differential of $\eta$ is preserved under the pull-back through $g$. It
follows immediately that
$d_{T}(\sigma[\varphi],\sigma[\psi])\leq d_{T}([\varphi],[\psi]).$
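The preservation of the norm is the standard transformation rule for Beltrami coefficients under precomposition with a holomorphic map; we recall it here for convenience (a routine computation, not specific to this setting):

```latex
\mu_{\eta\circ g}(z)=\mu_{\eta}\big(g(z)\big)\,\frac{\overline{g^{\prime}(z)}}{g^{\prime}(z)},
\qquad\text{hence}\qquad
\lvert\mu_{\eta\circ g}(z)\rvert=\lvert\mu_{\eta}(g(z))\rvert
\quad\text{and}\quad
\lVert\mu_{\eta\circ g}\rVert_{\infty}\leq\lVert\mu_{\eta}\rVert_{\infty}.
```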
Next, observe that $[\eta]$ is an asymptotically conformal point (as a
composition of such classes). Then due to the Frame Mapping Theorem 2.28 and
the Teichmüller Uniqueness Theorem 2.25 there exists a unique map $\eta_{0}$
with the smallest maximal dilatation in $[\eta]$, and its Beltrami coefficient
has the form $\mu_{\eta_{0}}=k_{0}\frac{|q_{0}|}{q_{0}}$ where $q_{0}$ is an
integrable holomorphic quadratic differential on
$\mathbb{C}\setminus\psi(P_{f})$ such that $\|q_{0}\|_{L^{1}}=1$. Let
$\tilde{\eta}_{0}$ be a solution of the Beltrami equation with the Beltrami
coefficient $\mu_{\eta_{0}\circ g}$. Then the Beltrami coefficient of
$\tilde{\eta}_{0}$ will have the form
$\mu_{\tilde{\eta}_{0}}=k_{0}\frac{|q_{0}\circ g|}{q_{0}\circ
g}\frac{|g^{\prime 2}|}{g^{\prime 2}}$.
Since $\sigma[\eta]$ is asymptotically conformal (as a composition of
asymptotically conformal points), there is a unique extremal map
$\tilde{\eta}_{1}$ in $\sigma[\eta]$, and its Beltrami differential has the form
$\mu_{\tilde{\eta}_{1}}=k_{1}\frac{|q_{1}|}{q_{1}}$ where $q_{1}$ is an
integrable quadratic differential on $\mathbb{C}\setminus\tilde{\psi}(P_{f})$
such that $\|q_{1}\|_{L^{1}}=1$, and $k_{1}\leq k_{0}$.
If $k_{1}=k_{0}$, then due to Teichmüller Uniqueness Theorem 2.25 we have
$\mu_{\tilde{\eta}_{1}}=\mu_{\tilde{\eta}_{0}}$ and $q_{0}\circ g\,g^{\prime
2}/q_{1}\in\mathbb{R}$ in $\mathbb{C}\setminus\tilde{\psi}(P_{f})$. This
implies that $q_{0}\circ g\,g^{\prime 2}/q_{1}\equiv\operatorname{const}$. But
this means that $q_{0}$ cannot have more than one pole. Indeed, by the Great
Picard Theorem $g$ attains every value, with at most one exception, infinitely
many times. Since $\lvert P_{f}\rvert>1$ ($f_{0}$ either has more than one
singular value, or belongs to the exponential family and its unique singular
value cannot be a fixed point), $P_{f}$ contains at least one such value $z$.
If $q_{0}$ has a pole at $z$, then $q_{1}$ has poles at infinitely many
preimages of $z$ under $g$. This is impossible since at most finitely many of
them belong to $P_{f}$. So $q_{1}$ can have at most one pole in $\mathbb{C}$.
For an integrable holomorphic quadratic differential on
$\mathbb{C}\setminus\tilde{\psi}(P_{f})$ this is possible only if $q_{1}=0$,
and this contradicts the choice of $q_{1}$ with $\|q_{1}\|_{L^{1}}=1$. So
$k_{1}<k_{0}$, and this means that $\sigma$ is strictly contracting. ∎
We want to use Lemma 4.1 to show that if $f_{0}$ is a composition of a
polynomial with the exponential and $\xi$ is asymptotically conformal as a map
(i.e. its maximal dilatation tends to $1$ as the argument tends to $\infty$),
then the $\sigma$-map is strictly contracting on the subset of asymptotically
conformal points in $\mathcal{T}_{f}$. In order to do so, we need to show that
$\sigma$ maps every asymptotically conformal point to another asymptotically
conformal point. We do this in the following two lemmas.
If $\mu(z)$ is the Beltrami coefficient of a quasiconformal map $\varphi$,
then we denote $K_{\varphi}(z):=\frac{1+\lvert\mu(z)\rvert}{1-\lvert\mu(z)\rvert}$.
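For the estimates below it is convenient to translate between bounds on $K_{\varphi}$ and on the Beltrami coefficient; this is a one-line computation from the definition:

```latex
K_{\varphi}(z)\leq 1+\epsilon
\quad\Longleftrightarrow\quad
\lvert\mu(z)\rvert\leq\frac{\epsilon}{2+\epsilon}
\qquad\left(\text{since } K=\frac{1+\lvert\mu\rvert}{1-\lvert\mu\rvert}\right).
```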
###### Lemma 4.2 (Correction of the dilatation).
Let $\varphi$ be a $K$-quasiconformal automorphism of $\mathbb{C}$ such that
for some $R>0$ we have
$\operatorname*{ess\,sup}_{\mathbb{C}\setminus\mathbb{D}_{R}(0)}|K_{\varphi}(z)|\leq
1+\epsilon$. Then there exists $\rho\geq R$ and a quasiconformal automorphism
$\varphi^{\prime}$ such that $\varphi\equiv\varphi^{\prime}$ on
$\mathbb{C}\setminus\mathbb{D}_{\rho}(0)$,
$\operatorname*{ess\,sup}_{\mathbb{C}}|K_{\varphi^{\prime}}|\leq(1+2\epsilon)^{3}$,
and $\varphi$ and $\varphi^{\prime}$ are isotopic relative
$(\mathbb{C}\setminus\mathbb{D}_{\rho}(0))\cup\\{0\\}$.
###### Proof.
Choose $r>1$ such that $\varphi(\mathbb{D}_{R}(0))$ is contained in a disk of
diameter $\delta<1$ with respect to the hyperbolic metric on
$\varphi(\mathbb{D}_{r}(0))$. Let $Q$ be a quadrilateral with its sides on the
boundary of $\mathbb{D}_{r}(0)$, and $M,M^{\prime}$ be moduli of $Q$ and
$\varphi(Q)$, respectively. After pre- and post-composition of
$\varphi|_{\mathbb{D}_{r}}$ with the canonical conformal maps of $Q$ and
$\varphi(Q)$ onto the rectangles $[0,M]\times[0,1]$ and
$[0,M^{\prime}]\times[0,1]$, respectively, we obtain a $K$-quasiconformal
homeomorphism $\chi$ of these rectangles (as quadrilaterals) whose maximal
dilatation does not exceed $1+\epsilon$ outside of some set having image
contained in a hyperbolic disk $D$ of diameter $\delta<1$ centered at
$z_{0}\in[0,M^{\prime}]\times[0,1]$.
Let $a:=\inf_{z\in D}\operatorname{Re\,}z,\,b:=\sup_{z\in
D}\operatorname{Re\,}z$, and $\gamma$ be the hyperbolic geodesic connecting
the leftmost and the rightmost points of $D$. Consider the partition of
$[0,M^{\prime}]\times[0,1]$ into three rectangles $A,B,C$ such that
$A=[0,a]\times[0,1],$ $B=[a,b]\times[0,1],$ $C=[b,M^{\prime}]\times[0,1].$
Note that if $z\in[0,M^{\prime}]\times[0,1]$ then its distance to the boundary
of the rectangle does not exceed $M^{\prime}/2$. Hence using the standard
estimates for the hyperbolic metric we obtain
$\operatorname{mod\,}B=b-a=M^{\prime}\int_{a}^{b}\frac{1}{2\frac{M^{\prime}}{2}}dx\leq
M^{\prime}\int_{\gamma}\lambda|dz|\leq M^{\prime}\delta$
where $\lambda$ is the hyperbolic density on $[0,M^{\prime}]\times[0,1]$.
Thus due to the estimates on the maximal dilatation of $\chi$ and Grötzsch
inequality for rectangles we have:
$\operatorname{mod\,}\varphi(Q)=M^{\prime}=\operatorname{mod\,}A+\operatorname{mod\,}B+\operatorname{mod\,}C\leq$
$(1+\epsilon)(\operatorname{mod\,}\chi^{-1}(A)+\operatorname{mod\,}\chi^{-1}(C))+M^{\prime}\delta\leq$
$(1+\epsilon)\operatorname{mod\,}Q+M^{\prime}\delta=(1+\epsilon)M+M^{\prime}\delta.$
Hence
$\operatorname{mod\,}\varphi(Q)\leq\frac{1+\epsilon}{1-\delta}\operatorname{mod\,}Q$
and by making $r$ bigger (or equivalently by making $\delta$ smaller) we can
choose the quasisymmetry constant (Definition 2.17) of
$\chi|_{\partial\mathbb{D}_{r}(0)}$ arbitrarily close to $1+\epsilon$. Choose
it to be $1+2\epsilon$. Thus by Theorem 2.20 the Beurling-Ahlfors extension
$\mathcal{B}_{\varphi}$ of $\varphi|_{\partial\mathbb{D}_{r}(0)}$ would give
us a quasiconformal homeomorphism of $\mathbb{D}_{r}(0)$ onto
$\varphi(\mathbb{D}_{r}(0))$ with the maximal dilatation not exceeding
$(1+2\epsilon)^{2}$. Outside of $\mathbb{D}_{r}(0)$ define
$\mathcal{B}_{\varphi}$ to be equal to $\varphi$.
If we choose a “fat” enough annulus
$\mathbb{D}_{\rho}(0)\setminus\overline{\mathbb{D}}_{r}(0)$ around
$\mathbb{D}_{r}(0)$ for some $\rho>r$, we can construct a quasiconformal map
$\theta^{\prime}$ such that
$\theta^{\prime}|_{\mathbb{C}\setminus\varphi(\mathbb{D}_{\rho}(0))}\equiv\operatorname{id}$,
$\theta^{\prime}(\mathcal{B}_{\varphi}(0))=0$, and its maximal dilatation on
$\varphi(\mathbb{D}_{\rho}(0))$ is not bigger than $1+2\epsilon$. Denote
$\varphi^{\prime}:=\theta^{\prime}\circ\mathcal{B}_{\varphi}$. Then
$\varphi^{\prime}(0)=0$,
$\varphi^{\prime}|_{\mathbb{C}\setminus\mathbb{D}_{\rho}}=\varphi$, and its
maximal dilatation does not exceed $(1+2\epsilon)^{3}$. That it is isotopic to
$\varphi$ relative to $(\mathbb{C}\setminus\mathbb{D}_{\rho}(0))\cup\\{0\\}$
follows from the Alexander trick. ∎
Now we are ready to show that in the case of the composition of a polynomial
with the exponential the asymptotically conformal subset is invariant under
$\sigma$.
###### Lemma 4.3 (Invariance of asymptotically conformal subset).
Let $p$ be a non-constant polynomial, $f_{0}=p\circ\exp$ and $f=\xi\circ
f_{0}$ where $\xi:\mathbb{C}\to\mathbb{C}$ is asymptotically conformal (as a
map). Then the $\sigma$-image of every asymptotically conformal point is
asymptotically conformal.
###### Proof.
Let $[\psi]\in\mathcal{T}_{f}$ be asymptotically conformal. We want to prove
that $\sigma[\psi]\in\mathcal{T}_{f}$ is also asymptotically conformal. After
solving the Beltrami equation for $\mu_{\psi\circ\xi\circ p}$ we obtain
$[\varphi]\in\mathcal{T}_{\exp(P_{f})\cup\\{0\\}}$ (note that $p(0)\in P_{f}$)
such that the following expanded Thurston diagram (with $g$ belonging to the
exponential family and a polynomial $q$) is commutative.
${\mathbb{C},P_{f}}$${\mathbb{C},\tilde{\psi}(P_{f})}$${\mathbb{C},\exp(P_{f})\cup\\{0\\}}$${\mathbb{C},\varphi(\exp(P_{f})\cup\\{0\\})}$${\mathbb{C},P_{f}}$${\mathbb{C},\psi(P_{f})}$$\exp$$\tilde{\psi}$$g$$\varphi$$\xi\circ
p$$q$$\psi$
Since $[\psi]$ and $\xi$ are asymptotically conformal (the latter as a map)
and the preimage of a neighborhood of $\infty$ under $p$ is a neighborhood of
$\infty$, the point $[\varphi]\in\mathcal{T}_{\exp(P_{f})\cup\\{0\\}}$ is also
asymptotically conformal.
If
$\operatorname*{ess\,sup}_{\mathbb{C}\setminus\mathbb{D}_{R}}|K_{\varphi}|\leq
1+\epsilon$, then using Lemma 4.2 we can find a quasiconformal homeomorphism
$\varphi^{\prime}$ which is isotopic to $\varphi$ relative $0$ and the
complement of a disk $\mathbb{D}_{\rho}(0)$ (basically we forget the finitely
many marked points contained in $\mathbb{D}_{\rho}(0)$, except $0$), and so
that
$\operatorname*{ess\,sup}_{\mathbb{C}}|K_{\varphi^{\prime}}|\leq(1+2\epsilon)^{3}$.
Consider its lift $\tilde{\varphi}^{\prime}$. It is isotopic to $\tilde{\psi}$
relative $P_{f}\setminus X$ where $X$ is a finite subset of $P_{f}$.
Next, we can always find a quasiconformal homeomorphism $\theta$, equal to
identity outside of some bounded set so that
$\theta\circ\tilde{\varphi}^{\prime}$ is isotopic to $\tilde{\psi}$ relative
$P_{f}$. Further, we can also isotope $\theta\circ\tilde{\varphi}^{\prime}$ in
small neighborhoods of marked points in this bounded set so that the obtained
map is conformal in these neighborhoods.
Since $\epsilon$ can be chosen arbitrarily small, the lemma is proven. ∎
After choosing $f_{0}\in\mathcal{N}$ and a capture $c$ as in Subsection 3.2,
Lemmas 4.1 and 4.3 immediately yield
###### Theorem 4.4 ($\sigma$ is strictly contracting on as. conf. subset).
If $f$ is the captured exponential function (defined as in Subsection 3.2),
then the subset of asymptotically conformal points in $\mathcal{T}_{f}$ is
invariant under the $\sigma$-map, and $\sigma$ is strictly contracting on it.
## 5\. $\operatorname{Id}$-type maps
The purpose of this section is to introduce the so-called
$\operatorname{id}$-type (or identity-type) maps which are quasiconformal
homeomorphisms of $\mathbb{C}$ satisfying certain conditions (see Definition
5.2). They have two important properties: they are almost equal to identity
near $\infty$ on certain dynamic rays, and the set of points
$[\varphi]\in\mathcal{T}_{f}$ that contain a map of $\operatorname{id}$-type
in their equivalence class is invariant under $\sigma$. This allows us to
reduce the Thurston iteration to the so-called Spider Algorithm [F, HS].
We work with the captured exponential function $f=c\circ f_{0}$ constructed in
Subsection 3.2 and start with a few definitions which play a central role in
the theory that we develop.
###### Definition 5.1 (Standard spider).
$S_{0}=\cup_{n\geq 0}R_{n}$ is called the _standard spider_.
###### Definition 5.2 ($\operatorname{Id}$-type maps).
A quasiconformal map $\varphi:\mathbb{C}\to\mathbb{C}$ is of
$\operatorname{id}$-type if there is an isotopy
$\varphi_{u}:\mathbb{C}\to\mathbb{C},\ u\in[0,1]$ such that
${\varphi_{1}=\varphi},\ \varphi_{0}=\operatorname{id}$ and
$\lvert\varphi_{u}(z)-z\rvert\to 0$ uniformly in $u$ as $S_{0}\ni z\to\infty$.
###### Definition 5.3 ($\operatorname{Id}$-type points in $\mathcal{T}_{f}$).
We say that $[\varphi]\in\mathcal{T}_{f}$ is of $\operatorname{id}$-type if
$[\varphi]$ contains an $\operatorname{id}$-type map.
Alternatively, we can replace $S_{0}$ in Definition 5.2 by endpoints of
$S_{0}$, i.e. $P_{f}$. In this case more quasiconformal maps are identified
as being of $\operatorname{id}$-type. Nevertheless, it is not hard to
show that they define the same subset of $\operatorname{id}$-type points in
$\mathcal{T}_{f}$. Definition 5.2 is simply more suitable for our purposes.
###### Definition 5.4 (Isotopy of $\operatorname{id}$-type maps).
We say that $\psi_{u}$ is an _isotopy of $\operatorname{id}$-type maps_ if
$\psi_{u}$ is an isotopy through maps of $\operatorname{id}$-type such that
$\lvert\psi_{u}(z)-z\rvert\to 0$ uniformly in $u$ as $S_{0}\ni z\to\infty$.
The identity is obviously of $\operatorname{id}$-type. Since $c^{-1}$ is equal
to identity outside of a compact set, it is also of $\operatorname{id}$-type,
as well as the composition $\varphi\circ c$ of the capture with any
$\operatorname{id}$-type map.
The next theorem states that the subset of $\operatorname{id}$-type points in
$\mathcal{T}_{f}$ is invariant under the $\sigma$-map.
###### Theorem 5.5 (Invariance of $\operatorname{id}$-type points).
If $[\varphi]$ is of $\operatorname{id}$-type, then $\sigma[\varphi]$ is of
$\operatorname{id}$-type as well.
More precisely, if $\varphi$ is of $\operatorname{id}$-type, then there is a
unique $\operatorname{id}$-type map $\hat{\varphi}$ such that $\varphi\circ
f\circ\hat{\varphi}^{-1}$ is entire.
Moreover, if $\varphi_{u}$ is an isotopy of $\operatorname{id}$-type maps,
then the functions $\varphi_{u}\circ f\circ\hat{\varphi}_{u}^{-1}$ have the
form $e^{z}+\kappa_{u}$ where $\kappa_{u}$ depends continuously on $u$.
###### Proof.
First, since $\hat{\varphi}$ is defined up to a postcomposition with an affine
map, and $S_{0}$ contains continuous curves joining finite points to $\infty$,
uniqueness is obvious. We only have to prove existence.
Since $c^{-1}$ is also of $\operatorname{id}$-type, $\varphi$ can be joined to
$c^{-1}$ through an isotopy of $\operatorname{id}$-type maps: we simply take,
in the correct order, the concatenation of the two isotopies to
$\operatorname{id}$ (the concatenation of $\varphi\sim\operatorname{id}$ and
$\operatorname{id}\sim c^{-1}$).
Let $\psi_{u},u\in[0,1]$ be this isotopy with $\psi_{0}=c^{-1}$ and
$\psi_{1}=\varphi$ and let $\tilde{\psi}_{u}$ be the unique isotopy of
quasiconformal homeomorphisms fixing $0,1$ and $\infty$ (or simply the
solutions of the Beltrami equation 2.2 for $\mu_{\psi_{u}\circ f}$ normalized
to fix $0,1,\infty$). Then the maps $h_{u}:=\psi_{u}\circ
f\circ\tilde{\psi}_{u}^{-1}$ will have the form
$h_{u}(z)=\beta_{u}e^{\alpha_{u}z}+\kappa_{u}$
where $\alpha_{u},\beta_{u},\kappa_{u}$ depend continuously on $u$.
Next define
$\hat{\psi}_{u}(z):=\alpha_{u}\tilde{\psi}_{u}(z)+\log(\beta_{u})$, where the
branch of the logarithm is chosen so that $\hat{\psi}_{0}=\operatorname{id}$.
In this case
$g_{u}(z)=\psi_{u}\circ f\circ\hat{\psi}_{u}^{-1}(z)=e^{z}+\kappa_{u}$
is a homotopy of entire functions such that $g_{u}\in\mathcal{N}$,
$g_{0}=f_{0}$ and $\kappa_{u}$ is continuous in $u$ and bounded on $[0,1]$.
Now, let $z\in S_{0}$ and $w=f(z)=f_{0}(z)$. Since
$\lvert\psi_{u}(w)-w\rvert\to 0$ as $S_{0}\ni w\to\infty$ uniformly in $u$, we
can write $\psi_{u}(w)=w+o(1)$.
Then
$\hat{\psi}_{u}(z)=\log(\psi_{u}(w)-\kappa_{u})=\log(f_{0}(z)-\kappa_{u}+o(1))=\log(e^{z}+\kappa_{0}-\kappa_{u}+o(1)).$
Hence, due to continuity and boundedness of $\kappa_{0}-\kappa_{u}$, for
$S_{0}\ni z\to\infty$ we have
$\lvert\hat{\psi}_{u}(z)-z\rvert\to 0$
uniformly in $u$.
The theorem is proven with the assignment $\hat{\varphi}:=\hat{\psi}_{1}$. ∎
In particular, Theorem 5.5 means that the notion of external address is
preserved when we iterate $\operatorname{id}$-type maps, in the sense that
under the proper normalization, that is, when we choose the map
$\hat{\varphi}$ out of its Teichmüller equivalence class, the images of
dynamic rays under $\hat{\varphi}$ preserve their original asymptotics
(horizontal straight lines).
###### Remark 5.6.
In the sequel we will use the hat-notation from Theorem 5.5. That is,
$\hat{\varphi}$ denotes the unique $\operatorname{id}$-type map so that
$\varphi\circ f\circ\hat{\varphi}^{-1}$ is entire. Due to Theorem 5.5 this
notation makes sense whenever $\varphi$ is of $\operatorname{id}$-type.
## 6\. Spiders
In this section we introduce the notion of a spider and a special tool
intended to bound the complexity of “spider legs” — altogether this allows us
to provide numerical estimates on the homotopy information of
$\operatorname{id}$-type points in the Teichmüller space. We start with the
latter.
### 6.1. Curves in the punctured half-plane
We begin with a definition that slightly extends the notion of homotopy of
curves fixing endpoints.
###### Definition 6.1 (Pinched endpoints homotopy).
Let $U\subset\hat{\mathbb{C}}$ be a path-connected domain, and
$\gamma:(0,\infty)\to U$ be a curve such that $\lim\limits_{t\to
0}\gamma(t)\in\overline{U}$ and
$\lim\limits_{t\to\infty}\gamma(t)\in\overline{U}$.
We denote by $[\gamma]_{U}$ the equivalence class of $\gamma$ in $U$ under
homotopies preserving the limits at $t=0$ and $t=\infty$.
If the limit of $\gamma$ at $0$ or $\infty$ belongs to $U$, this is the usual
homotopy fixing that endpoint, but when the limit is in the boundary of $U$,
the homotopies are considered through curves in $U$ that are “pinched” at the
endpoint (possibly both endpoints) in the boundary.
We consider such “pinched” homotopy types for two reasons: first, it describes
the homotopy type of the images of a ray tail (from $S_{0}$) under
$\operatorname{id}$-type maps representing the same point in the Teichmüller
space; second, as will be seen later, we obtain estimates on the “complexity”
of the homotopy information of $\operatorname{id}$-type points.
Now, we define _r-curves_ (“r” means right) which model ray tails (or more
precisely, their union with $\infty$).
###### Definition 6.2 (r-curve).
We say that a continuous curve $\gamma:[0,\infty]\to\hat{\mathbb{C}}$ is an
_r-curve_ if
1. (1)
$\gamma[0,\infty)\subset\mathbb{C}$,
2. (2)
$\gamma(\infty)=\infty$,
3. (3)
$\operatorname{Re\,}\gamma(t)\to+\infty$ as $t\to\infty$.
Let $V=\\{v_{i}\\}\subset\mathbb{C}$ be a finite or a countably infinite
sequence (of punctures) such that $\operatorname{Re\,}v_{i}\to+\infty$, and
let $\gamma$ be an r-curve such that $\gamma(0)\in V$ and
$\gamma|_{\mathbb{R}^{+}}\subset\mathbb{C}\setminus V$.
###### Remark 6.3.
In the case when $V$ is infinite the surface $\mathbb{C}\setminus V$ is
supposed to model the surface $\mathbb{C}\setminus P_{f}$.
Choose a point $r\in\mathbb{R}$ such that
$r<\min\\{\min\limits_{v\in
V}\\{\operatorname{Re\,}v\\},\min\limits_{t\in[0,\infty]}\\{\operatorname{Re\,}\gamma(t)\\}\\}.$
Both $V$ and $\gamma|_{\mathbb{R}^{+}}$ are contained in $\mathbb{H}_{r}$
while $\gamma(\infty)=\infty\in\partial\mathbb{H}_{r}$.
We want to have an algebraic description of $[\gamma]_{\mathbb{H}_{r}\setminus
V}$ (or more precisely $[\gamma|_{\mathbb{R}^{+}}]_{\mathbb{H}_{r}\setminus
V}$), for instance, in terms of certain fundamental group. But the problem is
that the endpoints of $\gamma$ are punctures, that is, the homotopy is not
allowed to pass through them. We cure this problem as follows.
Let
$H_{r}=H_{r}(V,\gamma):=(\mathbb{H}_{r}\setminus V)\cup\\{\infty\\}.$
For a curve $\delta:(0,\infty)\to\mathbb{C}\setminus V$ with $\lim_{t\to
0}\delta(t)\in V$ and $\lim_{t\to\infty}\delta(t)=\infty$ let
$\overline{\delta}:(0,\infty]\to H_{r}$ be the extension of this curve to
$\infty$, that is, $\overline{\delta}(t)=\delta(t)$ for $t\in(0,\infty)$ and
$\overline{\delta}(\infty)=\infty$.
###### Lemma 6.4.
For curves $\delta:(0,\infty)\to\mathbb{C}\setminus V$ with $\lim\limits_{t\to
0}\delta(t)\in V$ and $\lim\limits_{t\to\infty}\delta(t)=\infty$ the
correspondence $[\delta]_{\mathbb{H}_{r}\setminus
V}\mapsto[\overline{\delta}]_{H_{r}}$ is well defined and injective.
###### Proof.
That the correspondence is well defined follows from Definition 6.1.
Thus, we only need to prove that the curves $\delta_{1}$ and $\delta_{2}$ are
homotopic in $\mathbb{H}_{r}\setminus V$ if $\overline{\delta}_{1}$ and
$\overline{\delta}_{2}$ are homotopic in $H_{r}$.
Let $v\in V$ be the puncture at which both $\overline{\delta}_{1}$ and
$\overline{\delta}_{2}$ start. Let also
$\Gamma:[0,\infty]\times[0,1]\to H_{r}\cup\\{v,\infty\\}$
with $\Gamma(t,0)=\overline{\delta}_{1}$ and
$\Gamma(t,1)=\overline{\delta}_{2}$ be a homotopy between
$\overline{\delta}_{1}$ and $\overline{\delta}_{2}$ in $H_{r}$, that is, for
$u\in[0,1]$ we have $\Gamma(0,u)=v$ and $\Gamma(\infty,u)=\infty$, and no
inner point of the strip $B:=[0,\infty]\times[0,1]$ is mapped to $v$ (but can
be mapped to $\infty$).
Without loss of generality we may assume that $\Gamma([0,1]\times[0,1])$ is
bounded, otherwise just rescale $\Gamma$ in the first parameter. For every
integer $n>0$ let $\beta_{n}$ be the restriction of $\Gamma$ on
$n\times[0,1]$, that is, a trajectory of a point $\overline{\delta}_{1}(n)$
under the homotopy. Since the image under $\Gamma$ of the boundary of the
rectangle $[0,n]\times[0,1]$ is contractible, each $\beta_{n}$ is homotopic
relative endpoints to a curve $\beta^{\prime}_{n}$ contained in
$H_{r}\setminus\\{\infty\\}$. Moreover, we can make a joint choice of all
$\beta^{\prime}_{n}$ so that $\beta^{\prime}_{n}\to\infty$ as $n\to\infty$
(that is, the whole curve $\beta^{\prime}_{n}$ appears in arbitrarily small
neighborhoods of $\infty$ in $H_{r}$ when $n$ is big).
Now, let $\alpha_{n}:\partial\mathbb{D}\to H_{r}\setminus\\{\infty\\}$ be the
closed curve in $H_{r}\setminus\\{\infty\\}$ that is equal to the
concatenation
$\beta_{n}^{-1}\cdot\overline{\delta}_{1}|_{[n,n+1]}\cdot\beta_{n+1}\cdot(\overline{\delta}_{2}|_{[n,n+1]})^{-1}.$
Each $\alpha_{n}$ is contractible in $H_{r}\setminus\\{\infty\\}$, so one can
continuously extend $\alpha_{n}$ to the unit disk and obtain a map
$\alpha^{\prime}_{n}:\overline{\mathbb{D}}\to H_{r}\setminus\\{\infty\\}$ that
coincides with $\alpha_{n}$ on $\partial\mathbb{D}$. Moreover, since
$\beta^{\prime}_{n}\to\infty$, we can choose $\alpha^{\prime}_{n}$ so that
$\alpha^{\prime}_{n}(\overline{\mathbb{D}})\to\infty$. After “gluing” all
$\alpha^{\prime}_{n}$ and the restriction $\Gamma|_{[0,1]\times[0,1]}$
together we obtain the desired homotopy.
This finishes the proof of the lemma. ∎
Figure 6.1. Example of splitting $\gamma$ (above) into $\gamma_{str}$ and
$\gamma_{loop}$
Now we are ready to describe the homotopy type of $\gamma$. In the sequel we
only consider the homotopy types in $H_{r}$ and denote
$[\gamma]=[\gamma]_{H_{r}}$.
Join $\gamma(0)$ to $\infty$ by the straight horizontal ray
$\langle\gamma(0)+t,t>0\rangle$. For every intersection of this ray with some
point $v\in V$, choose a small closed disk with center at $v$ such that all
disks are mutually disjoint and contain exactly one point of $V$ (its center),
and replace the parts of the straight line which are the diameters of the
disks by the upper half-circles of the disk boundary.
The obtained curve $\gamma_{str}$ defines the homotopy class $[\gamma_{str}]$
in $H_{r}$, containing curves that approximate “from above” the straight
horizontal ray from $\gamma(0)$ to $\infty$. Consider also a loop
$\gamma_{loop}\subset H_{r}$ with the base point at $\infty$ such that
$[\gamma_{str}]\cdot[\gamma_{loop}]=[\gamma]$ in $H_{r}$. It is not hard to
show that such a loop exists: for example, one can slightly homotope $\gamma$
in $H_{r}$ so that for some $\epsilon>0$ we have
$\gamma|_{(0,\epsilon]}(t)=\gamma(0)+t$ — then
$[\gamma_{loop}]=[(\gamma_{str}|_{[\epsilon,\infty]})^{-1}]\cdot[\gamma|_{[\epsilon,\infty]}]$.
The loop homotopy type $[\gamma_{loop}]$ can be viewed as an element of the
fundamental group $\pi_{1}(H_{r},\infty)$. The homotopy class
$[\gamma_{loop}]$ and the point $\gamma(0)$ uniquely define $[\gamma]$.
From now on we assume that $V$ is finite. Any such $V$ and an r-curve $\gamma$
with $\gamma(0)\in V$ and $\gamma|_{(0,\infty)}\subset\mathbb{C}\setminus V$
define the map
$W:(V,\gamma)\mapsto W(V,\gamma)\in F(V),$
where $F(V)$ is the free group on $V$, as follows.
For every $w\in V$ construct a generator $g_{w}$ of $\pi_{1}(H_{r},\infty)$ in
the following way. Surround each $v\in V$ lying on the straight horizontal ray
$\langle w+t,t\geq 0\rangle$ by a disk $D_{v}$ with center at $v$ and radius
$r_{v}$, so that the disks are mutually disjoint and contain only one point of
$V$. Define the injective curve $a:[0,\infty]\to H_{r}$ so that $a(t)$ is
equal to $w+r_{w}+t$ when $w+r_{w}+t$ is outside of the disks $D_{v}$ and
coincides with the upper half-circles of $\partial D_{v}$ whenever $w+r_{w}+t$
passes through $D_{v}$ for $v\neq w$, and let $c(t):=w+r_{w}e^{-2\pi it}$ for
$t\in[0,1]$. Then $g_{w}:=[a^{-1}\cdot c\cdot a]$, and it does not depend on a
particular choice of radii of the disks $D_{v}$. In other words, we start at
$\infty$, move “to the left” along a straight line passing “from above” the
points $v\in V$ lying in our way, then make a loop around $w$ in the clockwise
direction, and return to $\infty$ along the same curve, passing occasional
points in $V$ “from above”.
With this preferred set of generators the homotopy type $[\gamma_{loop}]$ has
a unique representation in $\pi_{1}(H_{r},\infty)$. That is,
$[\gamma_{loop}]=g_{v_{1}}g_{v_{2}}\dots g_{v_{m}}$ where $v_{i}\in V$,
$m\in\mathbb{N}$. Define
$W(V,\gamma):=v_{1}v_{2}\dots v_{m}.$
It is clear that $W$ does not depend on a particular choice of $H_{r}$ (i.e.
the choice of $r$).
Figure 6.2. Example of a generator $g_{w}$, for a case when
$\operatorname{Im\,}w=\operatorname{Im\,}v$
###### Remark 6.5.
There is a freedom of choice for the generators and $\gamma_{str}$, at least
in the sense of passing the points of $V$ below or above; it is merely our
convention that we pass them from above.
We finish this subsection with a few straightforward but useful lemmas.
###### Lemma 6.6.
Let $\gamma\subset\mathbb{C}\setminus V$ be an r-curve with $\gamma(0)\in V$.
Further, let
$B:=[x_{1},\infty)\times[y_{1},y_{2}]$
be a (right-)infinite strip so that $\gamma|_{[0,\infty)}\subset B$. Denote
$V_{1}:=V\cap B$. Then $W(V,\gamma)\in F(V_{1})$, where $F(V_{1})$ is
identified with its image in $F(V)$ under the natural embedding.
###### Proof.
Follows from the construction of $W(V,\gamma)$. ∎
By $\lvert W(V,\gamma)\rvert$ we denote the length of the word $W(V,\gamma)$
in a free group. It is $\lvert W(V,\gamma)\rvert$ that will soon allow us to
measure the “complexity” of the homotopy information of
$\operatorname{id}$-type points in the Teichmüller space.
###### Lemma 6.7.
If $V_{1}\subset V$ and $\gamma(0)\in V_{1}$, then $\lvert
W(V_{1},\gamma)\rvert\leq\lvert W(V,\gamma)\rvert$.
###### Proof.
We can first find $W(V,\gamma)$ and then “glue in” the punctures from
$V\setminus V_{1}$. Then some generators of the fundamental group in
$H_{r}(V)$ become trivial. So the length of $W(V_{1},\gamma)$ cannot be bigger
than the length of $W(V,\gamma)$. ∎
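The mechanism in this proof is purely combinatorial and can be illustrated by a small toy computation (a hypothetical sketch with made-up generator names, not part of the paper's construction): gluing in a puncture deletes its generator from the word, and free reduction can only shorten the result.

```python
def free_reduce(word):
    """Freely reduce a word in a free group.

    A word is a list of generator labels; an inverse generator carries a
    leading '-', e.g. '-a' is the inverse of 'a'. Stack-based single pass
    handles nested cancellations.
    """
    out = []
    for letter in word:
        inv = letter[1:] if letter.startswith('-') else '-' + letter
        if out and out[-1] == inv:
            out.pop()          # cancel an adjacent pair g g^{-1}
        else:
            out.append(letter)
    return out

def glue_punctures(word, removed):
    """Model the proof of Lemma 6.7: gluing in the punctures of V \\ V_1
    makes their generators trivial (we delete them), then freely reduce."""
    return free_reduce([g for g in word if g.lstrip('-') not in removed])

# Example word in generators over V = {a, b, c}
w = ['a', 'b', '-a', 'c', 'a', '-b']
assert free_reduce(w) == w              # already reduced: |W(V, γ)| = 6
w1 = glue_punctures(w, {'b'})           # pass to V_1 = {a, c}
assert w1 == ['c', 'a']                 # 'a', '-a' cancel once 'b' is gone
assert len(w1) <= len(w)                # |W(V_1, γ)| ≤ |W(V, γ)|
```

Deleting a generator can create new adjacent inverse pairs, which is exactly why the reduced length never increases, matching the inequality in Lemma 6.7.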
###### Lemma 6.8.
Let $\gamma\subset\mathbb{C}\setminus V$ be an r-curve with strictly
increasing real part and such that $\gamma(0)\in V$. Then $\lvert
W(V,\gamma)\rvert<C_{\lvert V\rvert}$ where $C_{\lvert V\rvert}>0$ depends
only on the size of $V$.
###### Proof.
Follows from the construction of $W(V,\gamma)$. ∎
### 6.2. $H_{r}$ and Hawaiian earrings
This short subsection contains a somewhat tangential fact that we will need
later.
Let $H_{r}$ be defined as in Subsection 6.1 for a countably infinite $V$.
Clearly we can choose such representatives $g_{v_{n}}^{\prime}$ of the
“straight” generators $g_{v_{n}}$ (defined similarly as for finite $V$) so
that each of them is a Jordan curve containing $\infty$, they are pairwise
disjoint except at $\infty$, and each $g_{v_{n}}^{\prime}$ is a concatenation
$\alpha_{n}^{-1}\cdot\beta_{n}$, where $\alpha_{n},\beta_{n}$ are horizontal
paths with strictly increasing real part, parametrized by it.
Recall that the _Hawaiian earring_ is the topological space homeomorphic to
$\mathcal{H}=\cup C_{n}$, where $C_{n}=\\{z\in\mathbb{C}:\lvert
z-1/n\rvert=1/n\\}$ for $n\in\mathbb{N}$.
###### Lemma 6.9 (Hawaiian earring skeleton).
Let $H:=\cup g_{v_{n}}^{\prime}$. Then $H$ is homeomorphic to $\mathcal{H}$
and is a deformation retract of $H_{r}$.
###### Sketch of the proof.
We can construct the homeomorphism $\psi:H\to\mathcal{H}$ by mapping
homeomorphically the consecutive generators $g_{v_{n}}^{\prime}$ to the
consecutive outer circles $C_{n}$ so that $\psi(g_{v_{n}}^{\prime})=C_{n}$,
$\psi(\infty)=0$, and $\psi(\alpha_{n}(t)),\psi(\beta_{n}(t))\to 0$ as
$t\to\infty$ uniformly in $n$ whenever defined.
Such $\psi$ is a continuous bijection. Since $H$ is compact and $\mathcal{H}$
is Hausdorff, $\psi$ is a homeomorphism.
The second statement of the lemma follows easily from the fact that the
punctured closed unit disk deformation retracts onto its outer boundary. ∎
### 6.3. Homotopy type of a preimage
We are now interested in how the homotopy type, and in particular $\lvert
W(V,\gamma)\rvert$, changes after taking preimage under an entire map.
Although to prove Classification Theorem 1.1 we only need to consider the
exponential function, we study (without essential additional effort) the case
of compositions of a polynomial with the exponential, which will be the focus
of subsequent articles devoted to more general families. We begin
with two preliminary lemmas and then provide bounds for $\lvert
W(V,\tilde{\gamma})\rvert$ of a preimage curve $\tilde{\gamma}$.
Let $V\subset\mathbb{C}$ be a _finite_ set, and let $\gamma$ be an r-curve
such that $\gamma(0)\in V$ and
$\gamma|_{\mathbb{R}^{+}}\subset\mathbb{C}\setminus V$. As earlier choose
$H_{r}=H_{r}(V,\gamma)$. We again consider homotopy types of $\gamma$ in
$H_{r}$.
###### Lemma 6.10 (Piecewise linear representative).
There exists an r-curve $\gamma_{pl}\in[\gamma]$ consisting of at most
$6\lvert W(V,\gamma)\rvert+2$ straight line segments.
###### Proof.
Let $m=\lvert W(V,\gamma)\rvert$. Take a generator $[g_{w}]$ of the
fundamental group ${\pi_{1}(H_{r},\infty)}$ for some $w\in V$. There exists a
representative of the generator $[g_{w}]$ consisting of $5$ straight line
segments: $1$ segment to pass from $\infty$ to a small neighborhood of $w$,
$3$ to make a triangular loop around $w$ in this small neighborhood, and $1$
to return back to $\infty$.
Analogously $[\gamma_{str}]$ has a representative consisting of at most $2$
straight line segments.
Using the presentation of $[\gamma]=[\gamma_{str}]\cdot[\gamma_{loop}]$ and a
presentation of $[\gamma_{loop}]=g_{v_{1}}...g_{v_{m}}$ via $m$ generators, we
already have a piecewise linear representative of $[\gamma]$ containing at
most $5m+2$ straight line segments. However, this representative is not
an r-curve because it has some midpoints at $\infty$: the endpoints of the
generators $g_{v_{i}}$. This flaw can be easily fixed at the “price” of
at most $m$ additional straight line segments: if $z_{1},z_{2}\in\mathbb{C}$ are such
that $\operatorname{Re\,}z_{1}=\operatorname{Re\,}z_{2}>\max_{v\in
V}\operatorname{Re\,}v$, then the concatenation
$\langle z_{1}+t,t\in[0,\infty]\rangle\cdot\langle
z_{2}+t,t\in[0,\infty]\rangle^{-1}$
of horizontal rays is homotopic to a straight line segment between $z_{1}$ and
$z_{2}$. This way we replace pairs of neighboring infinite rays on the
generators by $\Pi$-shaped parts.
Altogether this means that $[\gamma]$ has an r-curve representative consisting
of at most $5m+2+m=6m+2$ straight line segments. ∎
###### Lemma 6.11 (Intersections bound homotopy type).
For every point $w\in V$, consider the ray $L_{w}=\langle w-t,t>0\rangle$.
Assume that $\gamma|_{(0,\infty)}$ intersects $L_{w}$ at $k_{w}<\infty$
points, and let $k=\sum_{V}k_{w}$. Then
$\lvert W(V,\gamma)\rvert\leq(k+1)\lvert V\rvert.$
###### Proof.
Let $0<t_{1}<\dots<t_{m}<\infty$ be the values of $t$ for which
$\gamma|_{(0,\infty)}$ intersects at least one of the $L_{w}$. Then $m\leq k$.
It can also happen that $m<k$ since $\gamma$ might intersect several of the
rays $L_{w}$ at the same time.
Choose
$s_{0}\in(0,t_{1}),\,s_{1}\in(t_{1},t_{2}),\dots,\,s_{m-1}\in(t_{m-1},t_{m}),\,s_{m}\in(t_{m},\infty)$,
and define
$\gamma_{0}=\gamma|_{[0,s_{0}]},\\\ \gamma_{1}=\gamma|_{[s_{0},s_{1}]},\\\
...\\\ \gamma_{m}=\gamma|_{[s_{m-1},s_{m}]},\\\
\gamma_{m+1}=\gamma|_{[s_{m},\infty]}$,
so we have $\gamma=\gamma_{0}\cdot\gamma_{1}\cdot...\cdot\gamma_{m+1}$.
For $z\in\mathbb{C}$ denote by $\delta_{z}$ the infinite ray $\langle
z+t,t\in[0,\infty]\rangle$. Then
$[\gamma]=[\gamma_{0}\cdot\delta_{s_{0}}]\cdot[\delta_{s_{0}}^{-1}\cdot\gamma_{1}\cdot\delta_{s_{1}}]\cdot...\cdot[\delta_{s_{m-1}}^{-1}\cdot\gamma_{m}\cdot\delta_{s_{m}}]\cdot[\delta_{s_{m}}^{-1}\cdot\gamma_{m+1}].$
Each $[\delta_{s_{i-1}}^{-1}\cdot\gamma_{i}\cdot\delta_{s_{i}}]$ for
$i=\overline{1,m}$ is homotopic either to a consecutive product (possibly with
inverses) of a few generators $g_{v}$ without repetitions, or it is null-
homotopic.
Further, $[\gamma_{0}\cdot\delta_{s_{0}}]$ is homotopic either to
$\gamma_{str}$, or to its concatenation with a consecutive product (possibly
with inverses) of generators $g_{v}$ without repetitions, whereas
$[\delta_{s_{m}}^{-1}\cdot\gamma_{m+1}]$ is evidently null-homotopic.
Hence
$\lvert W(V,\gamma)\rvert\leq m\lvert V\rvert+\lvert V\rvert\leq(k+1)\lvert
V\rvert.$
∎
Now we are ready to describe how $\lvert W(V,\gamma)\rvert$ behaves after
taking preimages under the exponential or a polynomial. The estimates are
justified by the following elementary lemma.
###### Lemma 6.12 (Preimages stay horizontal).
Let $f$ be either the exponential or a monic polynomial,
$\gamma:(0,\infty)\to\hat{\mathbb{C}}\setminus\\{\text{singular values of
}f\\}$ be an r-curve, and $\tilde{\gamma}$ be one of its lifts under $f$. The
following statements hold.
1. (1)
If $f$ is the exponential, then $\tilde{\gamma}$ is an r-curve.
2. (2)
If $f$ is a monic polynomial of degree $d\geq 1$, then there exists a degree
$d$ root of unity $\xi$ such that $\xi\tilde{\gamma}$ is an r-curve.
Thus, it makes sense to speak about how the homotopy type of an r-curve
changes under lifts.
###### Proposition 6.13 (Bounds for lifts by polynomials).
Let $p$ be a monic polynomial of degree $d\geq 1$ with its critical values
contained in $V$, and let the set $\tilde{V}$ be such that
$p(\tilde{V})\subset V$ and $p|_{\tilde{V}}$ is bijective. Denote by
$\tilde{\gamma}$ the homeomorphic r-curve preimage of $\gamma$ under $p$.
If $n=\lvert V\rvert$, and $m=\lvert W(V,\gamma)\rvert$, then
$\lvert W(\tilde{V},\tilde{\gamma})\rvert<6dn^{2}(m+1).$
###### Proof.
Choose some $H_{r^{\prime}}=H_{r^{\prime}}(\tilde{V},\tilde{\gamma})$ so that
$p(\mathbb{H}_{r^{\prime}})\supset\mathbb{H}_{r}$. Then every homotopy of
$\gamma$ in $H_{r}$ lifts to a homotopy of $\tilde{\gamma}$ in
$H_{r^{\prime}}$.
Next, let $\gamma_{pl}\in[\gamma]$ be the piecewise linear representative
consisting of at most $6m+2$ straight line segments as in Lemma 6.10, and
denote by $\delta$ its lift under $p$ which is an r-curve. Then $\delta$ is
homotopic to $\tilde{\gamma}$ in $H_{r^{\prime}}$.
An elementary computation shows that the preimage of a straight line segment of
$\gamma_{pl}$ under a polynomial either intersects each straight line containing
an $L_{w}$ in no more than $d$ points, or is a subset of it. If the second
case takes place for some preimages of bounded segments of $\gamma_{pl}$, we
slightly deform $\gamma_{pl}$ (in its homotopy class) so that the first
condition holds. Note that since $\delta$ is horizontal, if the preimage of
the unique infinite straight segment of $\gamma_{pl}$ is a subset of a
horizontal straight line, then it does not intersect any $L_{w}$. Hence
without loss of generality we can assume that $\delta|_{(0,\infty)}$ intersects
each $L_{w},w\in\tilde{V}$ (see Lemma 6.11) in at most $d(6m+2)$ points.
Then due to Lemma 6.11
$\lvert W(\tilde{V},\tilde{\gamma})\rvert=\lvert W(\tilde{V},\delta)\rvert\leq(nd(6m+2)+1)n=6mn^{2}d+2n^{2}d+n<6mn^{2}d+6n^{2}d=6dn^{2}(m+1).$
∎
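As a sanity check, the final chain of inequalities admits a direct numerical verification. The following sketch (the function name is ours, chosen for illustration) confirms $(nd(6m+2)+1)n<6dn^{2}(m+1)$ over a grid of admissible parameters:

```python
from itertools import product

def polynomial_lift_bound_holds(n: int, d: int, m: int) -> bool:
    """Check (n*d*(6m+2)+1)*n < 6*d*n^2*(m+1), the final estimate in the
    proof of Proposition 6.13 (n = |V|, d = deg p, m = |W(V,gamma)|)."""
    lhs = (n * d * (6 * m + 2) + 1) * n   # bound supplied by Lemma 6.11
    rhs = 6 * d * n**2 * (m + 1)          # bound stated in Proposition 6.13
    return lhs < rhs

# The inequality reduces to 2*d*n^2 + n < 6*d*n^2, i.e. n < 4*d*n^2,
# which holds for all n, d >= 1 and m >= 0.
assert all(polynomial_lift_bound_holds(n, d, m)
           for n, d, m in product(range(1, 25), range(1, 10), range(0, 25)))
```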
We are interested in an analogous result for the exponential.
###### Proposition 6.14 (Bounds for lifts by the exponential).
Let $0\in V$,
$\gamma(0)\neq 0$ and let $\tilde{V}$ be such that $\exp(\tilde{V})\subset V$,
and $\exp|_{\tilde{V}}$ is injective. Denote by $\tilde{\gamma}$ the
homeomorphic r-curve preimage of $\gamma$ under the exponential.
If $n=\lvert V\rvert=\lvert\tilde{V}\rvert+1$, and $m=\lvert
W(V,\gamma)\rvert$, then
$\lvert W(\tilde{V},\tilde{\gamma})\rvert<6n^{2}(m+1).$
###### Remark 6.15.
Note that in this case $\lvert V\rvert=\lvert\tilde{V}\rvert+1$ since the
asymptotic value of $\exp$ does not have a preimage.
###### Proof.
The proof is identical to the case of a polynomial in Proposition 6.13, except
that a preimage of a straight line segment under the exponential either
intersects $L_{w}$ in at most _one_ point (instead of $d$), or is a subset of
it. That is why the formula for the exponential coincides with the formula for
polynomials of degree $d=1$. ∎
At the end of this subsection we provide bounds for how $W(V,\gamma)$ changes
under lifts by compositions of a polynomial and the exponential.
###### Theorem 6.16 (Bounds for lifts by $p\circ\exp$).
Let $p$ be a monic polynomial of degree $d$, and $g=p\circ\exp$.
Further, let $V\subset\mathbb{C}$ be a finite set containing singular values
of $g$ and $\gamma:(0,\infty)\to\hat{\mathbb{C}}$ be an r-curve such that
$\gamma(0)\in V\setminus\\{\text{singular values of }g\\}$ and
$\gamma|_{\mathbb{R}^{+}}\subset\mathbb{C}\setminus V$.
If $\tilde{V}\subset\mathbb{C}$ is a finite set such that $g(\tilde{V})\subset
V\setminus\\{\text{singular values of }g\\}$, and $g|_{\tilde{V}}$ is
injective, then every r-curve preimage $\tilde{\gamma}$ of $\gamma$ under
$g$ satisfies
$\lvert W(\tilde{V},\tilde{\gamma})\rvert<42dn^{4}(m+1)$
where $n=\lvert V\rvert$ and $m=\lvert W(V,\gamma)\rvert$.
###### Proof.
We obtain the estimate simply by applying consecutively Proposition 6.13 and
Proposition 6.14 with the same $\lvert V\rvert=n$:
$\lvert W(\tilde{V},\tilde{\gamma})\rvert<6n^{2}(6dn^{2}(m+1)+1)\leq
42dn^{4}(m+1).$
∎
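As above, the combined two-step estimate can be checked numerically; the sketch below (with an illustrative function name of our choosing) confirms $6n^{2}(6dn^{2}(m+1)+1)\leq 42dn^{4}(m+1)$ for $n,d\geq 1$ and $m\geq 0$:

```python
from itertools import product

def composed_lift_bound_holds(n: int, d: int, m: int) -> bool:
    """Check 6*n^2*(6*d*n^2*(m+1)+1) <= 42*d*n^4*(m+1), the estimate
    combining Propositions 6.13 and 6.14 in the proof of Theorem 6.16."""
    inner = 6 * d * n**2 * (m + 1)   # bound after lifting by p (Prop. 6.13)
    lhs = 6 * n**2 * (inner + 1)     # then lifting by exp (Prop. 6.14)
    rhs = 42 * d * n**4 * (m + 1)    # bound stated in Theorem 6.16
    return lhs <= rhs

# Equivalent to 6*n^2 <= 6*d*n^4*(m+1), which holds whenever n, d >= 1, m >= 0;
# equality occurs only at n = d = 1, m = 0.
assert all(composed_lift_bound_holds(n, d, m)
           for n, d, m in product(range(1, 25), range(1, 10), range(0, 25)))
```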
### 6.4. Spiders
We now return to our usual setting with the captured exponential function
$f:=c\circ f_{0}$ defined as in Subsection 3.2. Moreover, we assume that the
external address of $a_{0}$ is not (pre-)periodic.
###### Definition 6.17 (Spider).
An image $S_{\varphi}$ of the standard spider $S_{0}$ under an
$\operatorname{id}$-type map $\varphi$ is called a _spider_.
The notion of a spider comes with a few natural follow-up definitions.
###### Definition 6.18 (Spider legs).
The image of a ray tail $R_{n}$ under a spider map is called a _leg_ (of a
spider).
###### Definition 6.19 (Subspider).
For a spider $S_{\varphi}$, a non-empty union of its legs is called a
_subspider_ of $S_{\varphi}$.
We also want an equality relation on the set of spiders.
###### Definition 6.20 (Equal spiders).
We say that the spiders $S_{\varphi}$ and $S_{\psi}$ are equal if for all
$n\geq 0$ we have $\varphi(R_{n})=\psi(R_{n})$ (as point sets).
###### Lemma 6.21 (Equal spiders define the same point in $\mathcal{T}_{f}$).
Let $S_{\varphi}=S_{\psi}$. Then $\varphi$ is isotopic to $\psi$ relative
$S_{0}$, and consequently $[\varphi]=[\psi]$.
###### Proof.
We may assume that $\varphi|_{S_{0}}=\psi|_{S_{0}}$; otherwise just isotope
$\varphi$ in disjoint neighborhoods of the legs $\\{\varphi(R_{n})\\}$ so that
the legs get reparametrized. Consider the map $\psi\circ\varphi^{-1}$. It is
equal to the identity on the locally connected boundary of the simply-connected
domain $\mathbb{C}\setminus\varphi(S_{0})$. Hence
$\psi\circ\varphi^{-1}|_{\mathbb{C}\setminus\varphi(S_{0})}$ is isotopic to
the identity relative $\varphi(S_{0})$. This implies that $[\varphi]=[\psi]$. ∎
Denote $\mathcal{O}_{n}=\\{a_{i}\in P_{f}:i\leq n\\}$.
###### Definition 6.22 (Leg homotopy word).
Let $S_{\varphi}$ be a spider. Then the _leg homotopy word_ $W_{n}^{\varphi}$
of $\varphi(R_{n})$ is
$W_{n}^{\varphi}:=W(\varphi(\mathcal{O}_{n}),\varphi(R_{n})).$
The next theorem helps to estimate how the leg homotopy words change under
Thurston iteration.
###### Theorem 6.23 (Combinatorics of preimage).
Let $\varphi$ be of $\operatorname{id}$-type. Then
$\lvert W_{n}^{\hat{\varphi}}\rvert<A(n+2)^{4}\max\\{1,\lvert
W_{n+1}^{\varphi}\rvert\\},$
where $A$ is a positive real number.
###### Proof.
This is just a restatement of Theorem 6.16 in a particular context. That is,
when $V=\varphi(\mathcal{O}_{n+1})$, $\gamma=\varphi(R_{n+1})$ and
$W(V,\gamma)=W_{n+1}^{\varphi}$. Then if we take
$\tilde{\gamma}=\hat{\varphi}(R_{n})$,
$\lvert W_{n}^{\hat{\varphi}}\rvert<42(n+2)^{4}(\lvert
W_{n+1}^{\varphi}\rvert+1)<A(n+2)^{4}\max\\{\lvert
W_{n+1}^{\varphi}\rvert,1\\},$
where $A$ is a sufficiently large positive constant independent of $n$. ∎
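The text leaves the constant unspecified; as an illustration (our assumed value, not one fixed by the paper), any $A\geq 85$ works, since $42(x+1)<A\max\{x,1\}$ for every integer $x\geq 0$, while $A=84$ fails at $x=1$. A quick numerical check:

```python
A = 85  # an admissible choice (our assumption); the statement only needs some A > 0

def absorbs_additive_one(x: int, A: int = A) -> bool:
    """Check 42*(x+1) < A*max(x, 1), which turns the bound
    42*(n+2)^4*(|W|+1) into A*(n+2)^4*max(|W|, 1)."""
    return 42 * (x + 1) < A * max(x, 1)

# x = 0 gives 42 < 85; x = 1 gives 84 < 85; for x >= 2 it reduces to 42 < 43*x.
assert all(absorbs_additive_one(x) for x in range(0, 10_000))
```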
### 6.5. Spiders and Teichmüller equivalence
In the last subsection we prove the results showing that the homotopy type of
$\operatorname{id}$-type maps can be uniquely encoded using spiders. More
precisely, for a map $\varphi$ of $\operatorname{id}$-type one can uniquely
recover its equivalence class $[\varphi]\in\mathcal{T}_{f}$ from the positions
of points in $\varphi(P_{f})$ and $W_{n}^{\varphi}$ for all $n\geq 0$.
###### Definition 6.24 (Equivalence of spiders).
We say that two spiders $S_{\varphi}$ and $S_{\psi}$ are _equivalent_ if for
all $n\geq 0$ we have $\varphi(a_{n})=\psi(a_{n})$ and legs with the same
index are homotopic, i.e. $[\varphi(R_{n})]=[\psi(R_{n})]$ in
$\mathbb{C}\setminus\varphi(P_{f})$.
###### Proposition 6.25 (Spider equivalence is Teichmüller equivalence).
Two spiders $S_{\varphi}$ and $S_{\psi}$ are equivalent if and only if
$[\varphi]=[\psi]$, i.e. $\varphi$ is isotopic to $\psi$ relative ${P_{f}}$.
###### Proof.
$(\Leftarrow)$ “If” direction follows directly from the definitions.
$(\Rightarrow)$ Assume that the spiders $S_{\varphi}$ and $S_{\psi}$ are
equivalent.
We need to prove that $S_{\varphi}$ can be isotoped into $S_{\psi}$ via an
ambient isotopy of $\mathbb{C}\setminus P_{f}$. The statement of the
proposition will then follow due to Lemma 6.21. To simplify the notation we
assume that $\varphi=\operatorname{id}$, i.e. $S_{\varphi}=S_{0}$ (in the
general case the proof is exactly the same).
The proof uses the same idea as Alexander’s trick, where we shrink all
“entanglements” into one point.
First, note that one can introduce a natural linear order on the set of the
legs of $S_{0}$: for every pair of legs $R_{i}$ and $R_{j}$ we can say which
one is _higher_ or _lower_. More precisely, choose a right half-plane
$\mathbb{H}_{r}$ containing both $R_{i}$ and $R_{j}$ and join $a_{i}$, the
endpoint of $R_{i}$, to a finite point of $\partial\mathbb{H}_{r}$ via an
injective path $\gamma$ inside of $\mathbb{H}_{r}\setminus(R_{i}\cup R_{j})$.
The union of two curves $\gamma$ and $R_{i}$ divides $\mathbb{H}_{r}$ into two
parts. We say that $R_{j}$ is _higher_ (resp. _lower_) than $R_{i}$ if it is
contained inside of the upper (resp. lower) part.
For every $n\geq 0$ there exists a minimal number $k_{n}\geq 1$ so that
$S_{0}\setminus\cup_{i=0}^{n}R_{i}$ is a disjoint union of $k_{n}$ subspiders
$S_{n}^{j}$, $j\in\\{1,...,k_{n}\\}$ such that for every
$j\in\\{1,...,k_{n}\\}$ and $i\leq n$ all legs of the subspider $S_{n}^{j}$
are simultaneously higher or lower than $R_{i}$. Clearly, $k_{n}\leq n+2$.
Let $U$ be a right half-plane containing $S_{0}$. Now define inductively a
sequence of shrinking Jordan domains contained in $U$.
* •
(Step 0) Define $k_{0}$ mutually disjoint Jordan domains $U_{0}^{j}$ so that
$S_{0}^{j}\subset U_{0}^{j}\subset U$
and the diameter of $U_{0}^{j}$ in the spherical metric is less than the
diameter of $S_{0}^{j}$ plus $1$.
* •
(Step n) Define $k_{n}$ mutually disjoint Jordan domains $U_{n}^{j}$ so that
for some $i\leq k_{n-1}$
$S_{n}^{j}\subset U_{n}^{j}\subset U_{n-1}^{i}$
and the diameter of $U_{n}^{j}$ in the spherical metric is less than the
diameter of $S_{n}^{j}$ plus $1/(n+1)$.
We are going to construct the isotopy $\psi_{u}^{\infty}$ which is the
concatenation of countably many “local” isotopies $\psi_{u}^{n},n\geq 0$.
Roughly speaking, via $\psi_{u}^{0}$ we isotope $\psi(R_{0})$ into $R_{0}$,
then via $\psi_{u}^{1}$ we isotope $\psi_{u}^{0}(R_{1})$ relative $P_{f}$ into
$R_{1}$ without moving $R_{0}$,…, then via $\psi_{u}^{n}$ we isotope
$\psi_{u}^{n-1}(R_{n})$ into $R_{n}$ without moving $R_{0},R_{1},...,R_{n-1}$,
and so on.
We construct this sequence of isotopies inductively. Without loss of
generality assume that $\psi(S_{0})\subset U$ and $\psi|_{\mathbb{C}\setminus
U}=\operatorname{id}$. Note that every leg $\psi(R_{n})$ is homotopic to
$R_{n}$ inside of $U\setminus P_{f}$ (a consequence of the fact that $\psi$ is
of $\operatorname{id}$-type).
* •
(Step 0) Let $\psi_{u}^{0},u\in[0,1]$ be an isotopy of
$\hat{\mathbb{C}}\setminus P_{f}$ (or equivalently of $\mathbb{C}\setminus
P_{f}$) so that
1. (1)
$\psi_{0}^{0}=\psi$,
2. (2)
$\psi_{1}^{0}|=\operatorname{id}$ on
$\mathbb{C}\setminus\cup_{j=1}^{k_{0}}U_{0}^{j}$,
3. (3)
$\psi_{u}^{0}\equiv\operatorname{id}$ on $\mathbb{C}\setminus U$.
* •
(Step n) Let $\psi_{u}^{n},u\in[0,1]$ be an isotopy of
$\hat{\mathbb{C}}\setminus P_{f}$ so that
1. (1)
$\psi_{0}^{n}=\psi_{1}^{n-1}$,
2. (2)
$\psi_{1}^{n}|=\operatorname{id}$ on
$\mathbb{C}\setminus\cup_{j=1}^{k_{n}}U_{n}^{j}$,
3. (3)
$\psi_{u}^{n}\equiv\operatorname{id}$ on
$\mathbb{C}\setminus\cup_{j=1}^{k_{n-1}}U_{n-1}^{j}$.
Note that every Step $n$ is well-defined since the preceding Step $n-1$
guarantees that the leg $\psi(R_{n})$ is contained inside of some
$U_{n-1}^{j}$ and is homotopic to $R_{n}$ inside of $U_{n-1}^{j}\setminus
P_{f}$.
In order to define the concatenation of these infinitely many isotopies we
reparametrize them so that $\psi_{u}^{n}$ is defined on the interval
$[1-1/2^{n},1-1/2^{n+1}]$. As $n\to\infty$, the maps $\psi_{u}^{n}$ differ from
the identity only in smaller and smaller neighborhoods of $\infty$.
Hence the infinite concatenation
$\psi_{u}^{\infty}=\psi_{u}^{0}\cdot\psi_{u}^{1}\cdot...$ is well defined.
This finishes the proof of the proposition. ∎
###### Definition 6.26 (Projective equivalence of spiders).
We say that two spiders $S_{\varphi}$ and $S_{\psi}$ are projectively
equivalent if for all $n\geq 0$ we have $\varphi(a_{n})=\psi(a_{n})$ and
$W_{n}^{\varphi}=W_{n}^{\psi}$.
###### Proposition 6.27 (Projectively equivalent spiders are equivalent).
Two spiders $S_{\varphi}$ and $S_{\psi}$ are _projectively equivalent_ if and
only if they are equivalent.
###### Proof.
$(\Leftarrow)$ If $\varphi(R_{n})$ and $\psi(R_{n})$ are homotopic relative to
$\psi(P_{f})$, then they are homotopic relative to any subset of $\psi(P_{f})$.
$(\Rightarrow)$ Assume now that the spiders $S_{\varphi}$ and $S_{\psi}$ are
projectively equivalent. Let $r\in\mathbb{R}$ be such that $\mathbb{H}_{r}$
contains both $S_{\varphi}$ and $S_{\psi}$, and consider
$H_{r}=H_{r}(\varphi(P_{f}),\varphi(R_{0}))$ (defined as before Lemma 6.4).
We want to prove that the spider legs $\varphi(R_{n})$ and $\psi(R_{n})$
belong to the same homotopy class in $H_{r}$ (the condition
$W_{n}^{\varphi}=W_{n}^{\psi}$ says only that $\varphi(R_{n})$ and
$\psi(R_{n})$ are homotopic in $H_{r}$ relative to only a finite set of
punctures $\varphi(\\{a_{i}\\}_{i=0}^{n})$).
We claim that the homotopy type of each $\varphi(R_{n})$ in $H_{r}$ is
uniquely defined by the knowledge of $W_{k}^{\varphi}$ for all $k\geq n$.
Pick $k\geq n$. Assume that we know the homotopy type of $\varphi(R_{n})$ in
the “finitely-punctured”
$H_{r}\cup\varphi(P_{f}\setminus\\{a_{i}\\}_{i=0}^{k})$ (for $k=n$ we have it
via $W_{n}^{\varphi}$) and know $W_{k+1}^{\varphi}$. Since $R_{k+1}\cap
R_{n}=\emptyset$ and $\varphi$ is of $\operatorname{id}$-type, the homotopy
type of $\varphi(R_{n})$ in
$H_{r}\cup\varphi(P_{f}\setminus\\{a_{i}\\}_{i=0}^{k+1})$ is uniquely defined.
Next, from Lemma 6.9 we know that $H_{r}$ deformation retracts onto a
subspace, homeomorphic to the Hawaiian earring $\mathcal{H}=\cup C_{n}$, so
that the “straight” generator $g_{v_{n}}$ of $\pi_{1}(H_{r},\infty)$ is mapped
to an element of $\pi_{1}(\mathcal{H},0)$ represented by the circle $C_{n}$.
It is a well-known fact about Hawaiian earrings (see e.g. [dS]) that
$\pi_{1}(\mathcal{H},0)$ is a subgroup of $\varprojlim F_{j}$, where $F_{j}$
is the fundamental group of $\cup_{1}^{j}C_{i}$ with the base point at $0$,
considered as a subgroup of $\pi_{1}(\mathcal{H},0)$ after collapsing all
other $C_{i}$ to $0$. Hence, if
$\theta_{1},\theta_{2}\in\pi_{1}(\mathcal{H},0)$ are such that
$\theta_{1}=\theta_{2}$ in $F_{j}$ for all $j$, then $\theta_{1}=\theta_{2}$
in $\pi_{1}(\mathcal{H},0)$. From this it follows that the homotopy type of
$\varphi(R_{n})$ in $H_{r}$ is uniquely determined by its homotopy types in
every $H_{r}\cup\varphi(P_{f}\setminus\\{a_{i}\\}_{i=0}^{k})$ for $k\geq n$. ∎
Propositions 6.25 and 6.27 together yield the theorem identifying the
projective equivalence of spiders with Teichmüller equivalence.
###### Theorem 6.28 (Projective equivalence of spiders is Teichmüller
equivalence).
Two spiders $S_{\varphi}$ and $S_{\psi}$ are projectively equivalent if and
only if $[\varphi]=[\psi]$, i.e. $\varphi$ is isotopic to $\psi$ relative
${P_{f}}$.
###### Remark 6.29.
Note that the only properties of $S_{0}$ that we used in the proofs of Lemma
6.21 and Propositions 6.25, 6.27 are that $S_{0}$ is contained in some right
half-plane and all $R_{n}$ are mutually disjoint (except at $\infty$)
injective r-curves with the spherical diameters tending to $0$ as
$n\to\infty$. Hence the statements are obviously true for every such more
general choice of $S_{0}$ (and after replacement of $P_{f}$ by the set of
endpoints of $S_{0}$).
## 7\. Invariant compact subset
In this section we construct a compact invariant subset $\mathcal{C}_{f}$ of
the Teichmüller space $\mathcal{T}_{f}$. In the first theorem we present the
construction and prove invariance. A statement that $\mathcal{C}_{f}$ is
compact will be the content of the second theorem.
Let $f=c\circ f_{0}$ be the captured exponential function constructed in
Subsection 3.2, and for $n\geq 0$ let $t_{n}$ be the potential of the point
$a_{n}\in P_{f}$ (then $t_{n+1}=F(t_{n})$).
Define the set $\mathcal{P}^{\prime}:=\\{\frac{t_{n+1}+t_{n}}{2}:n\geq 0\\}$
and if $\rho=\frac{t_{n+1}+t_{n}}{2}\in\mathcal{P}^{\prime}$, let
$N(\rho):=n$. Further, define $D_{\rho}:=\mathbb{D}_{\rho}(0)$ and
$M_{\rho}:=e^{2t_{N(\rho)}}$.
The subset $\mathcal{C}_{f}$ is constructed in the form of a list of
conditions.
###### Theorem 7.1 (Invariant subset).
Fix some $\rho\in\mathcal{P}^{\prime}$ and denote $N:=N(\rho)$. Let
$\mathcal{C}_{f}(\rho)\subset\mathcal{T}_{f}$ be the _closure_ of the set of
points in $\mathcal{T}_{f}$ represented by $\operatorname{id}$-type maps
$\varphi$ for which there exists an isotopy $\varphi_{u},u\in[0,1]$ of
$\operatorname{id}$-type maps such that
$\varphi_{0}=\operatorname{id},\varphi_{1}=\varphi$, and the following
conditions are simultaneously satisfied.
1. (1)
(Marked points stay inside of $D_{\rho}$) If $n\leq N$,
$\varphi_{u}(a_{n})\in D_{\rho}.$
2. (2)
(Precise asymptotics outside of $D_{\rho}$) If $n>N$, then
$\lvert\varphi_{u}(a_{n})-a_{n}\rvert<1/n.$
3. (3)
(Separation inside of $D_{\rho}$) If $k<l\leq N$, then
$\lvert\varphi_{u}(a_{k})-\varphi_{u}(a_{l})\rvert>\frac{1}{(M_{\rho})^{N-l+1}}.$
4. (4)
(Bounded homotopy) If $n\leq N$, then
$\lvert
W_{n}^{\varphi_{u}}\rvert<A^{N-n+1}\left(\frac{(N+2)!}{(n+1)!}\right)^{4}$
where $A$ is the constant from Theorem 6.23.
Then if $\rho\in\mathcal{P}^{\prime}$ is big enough, the subset
$\mathcal{C}_{f}(\rho)$ is well-defined, invariant under the $\sigma$-map and
contains $[\operatorname{id}]$.
Let us briefly discuss the content of the theorem before proving it. Our
invariant subset (and compact, as will be seen later) is described as a
closure of a set of $\operatorname{id}$-type maps $\varphi$ satisfying
conditions (1)-(4).
Conditions (1)-(2) say that the maps $\varphi$ have to be “uniformly of
$\operatorname{id}$-type”, that is, the marked points outside of a disk
$D_{\rho}$ have precise asymptotics, while inside of $D_{\rho}$ we allow some
more freedom.
Condition (3) tells us that the marked points inside of $D_{\rho}$ cannot come
very close to each other — this is necessary to keep our set bounded in the
Teichmüller space. Moreover, it is needed to control the distance to the
asymptotic value — if a marked point is too close to it, then after Thurston
iteration its preimage has the real part close to $-\infty$, and this spoils
condition (1).
Condition (4) takes care of the homotopy information and provides bounds for
leg homotopy words of points inside of $D_{\rho}$. Note that similar bounds
for marked points outside of $D_{\rho}$ are encoded implicitly in condition
(2).
###### Proof of Theorem 7.1.
Let $\mathcal{C}_{f}^{\circ}(\rho)\subset\mathcal{C}_{f}(\rho)$ be the set of
points in $\mathcal{T}_{f}$ of which we take the closure in the statement of
the theorem, i.e. represented by $\operatorname{id}$-type maps $\varphi$ for
which there exists an isotopy $\varphi_{u}$ of $\operatorname{id}$-type maps
such that $\varphi_{0}=\operatorname{id},\varphi_{1}=\varphi$ and the
conditions (1)-(4) are simultaneously satisfied. Since the $\sigma$-map is
continuous, it is enough to prove invariance of
$\mathcal{C}_{f}^{\circ}(\rho)$ for big $\rho$.
First, note that for big $\rho$ the set $\mathcal{C}_{f}^{\circ}(\rho)$
contains $[c^{-1}]$: $c^{-1}$ can be joined to identity via the isotopy
$c_{u}^{-1}$ where $c_{u}$ was constructed in Subsection 3.2.
Further, from the asymptotic formula 2.1 and the fact that
$\operatorname{Re\,}a_{n}/\operatorname{Im\,}a_{n}\to\infty$ it follows that for
all $\rho\in\mathcal{P}^{\prime}$ big enough the first $N+1$ points
$a_{0},a_{1},...,a_{N}$ of the orbit $P_{f}$ are contained in $D_{\rho}$,
while the other points are outside and mutual distances between them and from
them to $D_{\rho}$ are bigger than one. In other words, conditions (1)-(2)
allow the first $N+1$ points to move under isotopy $\varphi_{u}$ only inside
of $D_{\rho}$, while for every $n>N$ the point $a_{n}$ moves inside of a disk
$D_{n}$ of radius $1/n$, and all these disks $D_{\rho},D_{N+1},D_{N+2},...$
are mutually disjoint with the mutual distance between them bigger than one.
Moreover, we can assume that for $n>N$ we have
$\operatorname{Re\,}a_{n}+1<\operatorname{Re\,}a_{n+1}$.
For all $\varphi\in\mathcal{C}_{f}^{\circ}(\rho)$ after concatenation with
$c_{u}^{-1}$ we obtain the isotopy $\psi_{u}$ of $\operatorname{id}$-type maps
with $\psi_{0}=c^{-1},\psi_{1}=\varphi$ and satisfying conditions (1)-(4).
Then $\hat{\psi}_{u}$ is an isotopy of $\operatorname{id}$-type maps with
$\hat{\psi}_{1}=\operatorname{id}$. Let $g_{u}(z)=\psi_{u}\circ
f\circ\hat{\psi}_{u}^{-1}(z)=e^{z}+\kappa_{u}$. Now we want to prove that
$\hat{\psi}_{u}$ satisfies each of the conditions (1)-(4): from this it would follow
that $\hat{\varphi}\in\mathcal{C}_{f}^{\circ}(\rho)$.
We prove that each of the conditions (1)-(4) for $\hat{\varphi}$ follows from
the conditions (1)-(4) for $\varphi$.
(4) Note that since $\operatorname{Re\,}a_{n}+1<\operatorname{Re\,}a_{n+1}$ for
$n>N$, we have
$\lvert W_{n}^{\psi_{u}}\rvert=0<1\quad\text{for all }n>N.$
Hence from Theorem 6.23 for $n\leq N$ we get
$\lvert W_{n}^{\hat{\psi}_{u}}\rvert<A(n+2)^{4}\max\\{\lvert
W_{n+1}^{\psi_{u}}\rvert,1\\}<A(n+2)^{4}A^{N-(n+1)+1}\left(\frac{(N+2)!}{(n+2)!}\right)^{4}=$
$A^{N-n+1}\left(\frac{(N+2)!}{(n+1)!}\right)^{4}.$
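The last equality uses the telescoping identity $(n+2)^{4}\bigl((N+2)!/(n+2)!\bigr)^{4}=\bigl((N+2)!/(n+1)!\bigr)^{4}$ together with $A\cdot A^{N-(n+1)+1}=A^{N-n+1}$. The factorial identity admits a quick exact-arithmetic check (a verification sketch with a function name of our choosing):

```python
from math import factorial

def telescoping_holds(n: int, N: int) -> bool:
    """Check (n+2)^4 * ((N+2)!/(n+2)!)^4 == ((N+2)!/(n+1)!)^4 for n <= N,
    using exact integer arithmetic (the factorial quotients are integers)."""
    lhs = (n + 2)**4 * (factorial(N + 2) // factorial(n + 2))**4
    rhs = (factorial(N + 2) // factorial(n + 1))**4
    return lhs == rhs

# Holds because (n+2)!/(n+2) = (n+1)!.
assert all(telescoping_holds(n, N) for N in range(0, 20) for n in range(0, N + 1))
```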
(3) We use the formula
$\psi_{u}(a_{n+1})=e^{\hat{\psi}_{u}(a_{n})}+\kappa_{u}$. Since
$\rho=\frac{t_{N}+t_{N-1}}{2}$ and $\kappa_{u}\in D_{\rho}$, for $n\leq N$ and
big $\rho$ we have
$\operatorname{Re\,}\hat{\psi}_{u}(a_{n})=\log\lvert\psi_{u}(a_{n+1})-\kappa_{u}\rvert<\log\lvert
t_{N+1}+1+\rho\rvert<$ $\log(2t_{N+1})=\log(2F(t_{N}))<2t_{N}.$
Thus for $k<l\leq N$ and big enough $\rho$
$\lvert\psi_{u}(a_{k+1})-\psi_{u}(a_{l+1})\rvert=\lvert\int_{\hat{\psi}_{u}(a_{k})}^{\hat{\psi}_{u}(a_{l})}e^{z}dz\rvert\leq\int_{\hat{\psi}_{u}(a_{k})}^{\hat{\psi}_{u}(a_{l})}\lvert
e^{z}\rvert\lvert
dz\rvert<M_{\rho}\lvert\hat{\psi}_{u}(a_{k})-\hat{\psi}_{u}(a_{l})\rvert,$
and
$\lvert\hat{\psi}_{u}(a_{k})-\hat{\psi}_{u}(a_{l})\rvert>\frac{\lvert\psi_{u}(a_{k+1})-\psi_{u}(a_{l+1})\rvert}{M_{\rho}}>\frac{1}{(M_{\rho})^{N-l+1}}.$
(2) Since $\hat{\psi}_{u}$ is continuous in $u$ and
$\hat{\psi}_{0}=\operatorname{id}$, for $n>N$
$\hat{\psi}_{u}(a_{n})=\log(\psi_{u}(a_{n+1})-\kappa_{u})=\log(a_{n+1}+O(1)-\kappa_{u})=$
$a_{n}+\log\left(\frac{a_{n+1}+O(1)-\kappa_{u}}{e^{a_{n}}}\right)=a_{n}+\log\left(1+\frac{\kappa_{0}+O(1)-\kappa_{u}}{a_{n+1}-\kappa_{0}}\right).$
If $\rho$ (and consequently $N$) is big enough, we have
$\lvert\hat{\psi}_{u}(a_{n})-a_{n}\rvert<1/n$ for all $n>N$ (due to the very
fast growth of $\lvert a_{n}\rvert$).
(1) Instead of proving directly that $\hat{\psi}_{u}(a_{n})\in D_{\rho}$ holds
for $n\leq N$, we prove that for big $\rho$
$-\frac{\rho}{2}<\operatorname{Re\,}\hat{\psi}_{u}(a_{n})<\frac{\rho}{2}$
and
$-\frac{\rho}{2}<\operatorname{Im\,}\hat{\psi}_{u}(a_{n})<\frac{\rho}{2}.$
From this it would follow that
$\lvert\hat{\psi}_{u}(a_{n})\rvert<\rho/{\sqrt{2}}<\rho.$
In the proof of (3) we have shown that for big $\rho$ we have
$\operatorname{Re\,}\hat{\psi}_{u}(a_{n})<2t_{N}$. Clearly, $2t_{N}<\rho/2$
for big $\rho$.
Using condition (3) for $\psi_{u}$ we get
$\operatorname{Re\,}\hat{\psi}_{u}(a_{n})=\log\lvert\psi_{u}(a_{n+1})-\kappa_{u}\rvert>\log\frac{1}{M_{\rho}^{n}}=-2nt_{N}$
which is bigger than $-\rho/2$ for all $\rho$ big enough.
Since $\lvert
W_{n+1}^{\psi_{u}}\rvert<A^{N-n}\left(\frac{(N+2)!}{(n+2)!}\right)^{4}$, the
spider leg $\hat{\psi}_{u}(R_{n+1})$ makes no more than
$A^{N-n}\left(\frac{(N+2)!}{(n+2)!}\right)^{4}+1$ loops around the singular
value. Hence the difference between $2\pi s_{n}$ and the imaginary part of
$\hat{\psi}_{u}(a_{n})$ will be less than
$2\pi\left(A^{N-n}\left(\frac{(N+2)!}{(n+2)!}\right)^{4}+1\right)$. Due to the
fast growth of the sequence $\\{t_{n}\\}_{n=0}^{\infty}$, by making $\rho$
bigger we may assume that for all $n\leq N$ we have
$2\pi\left(A^{N-n}\left(\frac{(N+2)!}{(n+2)!}\right)^{4}+1+\lvert
s_{n}\rvert\right)<\frac{\rho}{2}.$
Hence $\lvert\operatorname{Im\,}\hat{\psi}_{u}(a_{n})\rvert<\rho/2$ and
$\lvert\hat{\psi}_{u}(a_{n})\rvert<\rho/\sqrt{2}<\rho$. ∎
###### Remark 7.2.
From the construction of $\mathcal{C}_{f}(\rho)$ it follows immediately that
the elements of $\mathcal{C}_{f}(\rho)$ are asymptotically conformal.
Next, we prove another key statement claiming compactness of
$\mathcal{C}_{f}(\rho)$. Note that it is very natural to expect this
compactness because of conditions (1)-(4) of Theorem 7.1. Indeed, note
that (1)-(4) are formulated in terms of how the marked points $a_{n}$ are
allowed to move under isotopies $\varphi_{u}$ of $\operatorname{id}$-type
maps: the points inside of $D_{\rho}$ have to stay inside, points outside of
$D_{\rho}$ move only slightly. At the same time the points $a_{n}$ inside of
$D_{\rho}$ are allowed to rotate around points in $\mathcal{O}_{n}$ only a
bounded number of times. Outside of $D_{\rho}$ no “rotations” are allowed and
basically nothing happens. The final remark is that the marked points are not
allowed to come too close to each other.
###### Theorem 7.3 (Compactness).
For all $\rho\in\mathcal{P}^{\prime}$ big enough $\mathcal{C}_{f}(\rho)$ is a
compact subset of $\mathcal{T}_{f}$.
###### Proof.
We are going to prove that the set $\mathcal{C}_{f}^{\circ}(\rho)$ from the
proof of Theorem 7.1 is pre-compact, or equivalently, since $\mathcal{T}_{f}$
is a metric space in the Teichmüller metric, that every sequence
$\\{[\varphi^{n}]\\}\subset\mathcal{C}_{f}^{\circ}(\rho)$ has a subsequence
that converges to some point $[\varphi]\in\mathcal{T}_{f}$.
Since each $[\varphi^{n}]\in\mathcal{C}_{f}^{\circ}(\rho)$, we can assume that
every $\varphi^{n}$ is of $\operatorname{id}$-type and $\varphi_{u}^{n}$ is an
isotopy with $\varphi_{0}^{n}=\operatorname{id},\varphi_{1}^{n}=\varphi^{n}$
satisfying conditions (1)-(4) of Theorem 7.1.
Recall from the proof of Theorem 7.1 that for all
$\rho\in\mathcal{P}^{\prime}$ big enough the first $N+1$ marked points move
under isotopy $\varphi_{u}$ only inside of $D_{\rho}$, while for every $n>N$
the point $a_{n}$ moves inside of a disk $D_{n}$ of radius $1/n$, and all
these disks $D_{\rho},D_{N+1},D_{N+2},...$ are mutually disjoint with the
mutual distance between them bigger than one. Hence we can assume that when
$u\in[0,1/2]$, we have $\varphi^{n}_{u}|_{D_{\rho}}=\operatorname{id}$ for
every $n$, and when $u\in[1/2,1]$, we have
$\varphi^{n}_{u}|_{\cup_{k={N+1}}^{\infty}D_{k}}=\operatorname{id}$ for every
$n$, that is, first the marked points inside of $D_{\rho}$ do not move, and
afterwards the marked points outside of $D_{\rho}$ do not move. This means that
we simply need to prove the theorem in two separate cases: when only the
marked points outside of $D_{\rho}$ move, and when only the marked points
inside of $D_{\rho}$ move.
In the former case the sequence $[\varphi^{n}]$ clearly has a limit point in
$\mathcal{T}_{f}$, since marked points move inside of mutually disjoint small
disks with mutual distance between them bounded from below.
Assume now that the marked points outside of $D_{\rho}$ do not move. We are
looking for a convergent subsequence in $\\{\varphi^{n}\\}$. Let $V:=P_{f}\cap
D_{\rho}$.
We say that $\varphi^{k}(V)$ and $\varphi^{l}(V)$ have the same configuration
if for every pair of points $a_{m},a_{n}\in V$ we have
$\operatorname{Re\,}\varphi^{k}(a_{m})<\operatorname{Re\,}\varphi^{k}(a_{n})\iff\operatorname{Re\,}\varphi^{l}(a_{m})<\operatorname{Re\,}\varphi^{l}(a_{n}),$
$\operatorname{Re\,}\varphi^{k}(a_{m})=\operatorname{Re\,}\varphi^{k}(a_{n})\iff\operatorname{Re\,}\varphi^{l}(a_{m})=\operatorname{Re\,}\varphi^{l}(a_{n}),$
$\operatorname{Im\,}\varphi^{k}(a_{m})<\operatorname{Im\,}\varphi^{k}(a_{n})\iff\operatorname{Im\,}\varphi^{l}(a_{m})<\operatorname{Im\,}\varphi^{l}(a_{n}),$
$\operatorname{Im\,}\varphi^{k}(a_{m})=\operatorname{Im\,}\varphi^{k}(a_{n})\iff\operatorname{Im\,}\varphi^{l}(a_{m})=\operatorname{Im\,}\varphi^{l}(a_{n}).$
Since $V$ is finite, only finitely many different configurations are possible.
After passing to a subsequence we may assume that all $\varphi^{k}(a_{n})$
converge to some points $b_{n}$, $W_{n}^{\varphi^{k}}=\operatorname{const}$
for all $a_{n}\in V$, and all $\varphi^{k}(V)$ have the same configuration.
Note that for $k$ big enough, $\varphi^{k}(a_{n})$ is compactly contained in
disjoint small disks $D_{n}\subset\mathbb{D}_{\rho+1}(0)$ centered at the
limits $b_{n}$, so we can assume this holds for all $k$.
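This configuration relation is a finite combinatorial condition, so it can be checked mechanically. A minimal sketch (the helper name is ours, not from the text) comparing two images of $V$, given as equal-length lists of complex numbers:

```python
from itertools import combinations

def same_configuration(pts_k, pts_l):
    """Decide whether two images of the same finite marked set V (equal-length
    lists of complex numbers, indexed alike) have the same configuration:
    for every pair of indices, the order relations (<, =) of the real parts
    and of the imaginary parts must agree between the two images."""
    assert len(pts_k) == len(pts_l)
    for m, n in combinations(range(len(pts_k)), 2):
        for part in ((lambda z: z.real), (lambda z: z.imag)):
            a, b = part(pts_k[m]), part(pts_k[n])
            c, d = part(pts_l[m]), part(pts_l[n])
            if (a < b) != (c < d) or (a == b) != (c == d):
                return False
    return True

# Shifting or stretching points without reordering them preserves the
# configuration; swapping two points does not.
print(same_configuration([0 + 0j, 1 + 1j], [0 + 0j, 2 + 3j]))  # True
print(same_configuration([0 + 0j, 1 + 1j], [1 + 1j, 0 + 0j]))  # False
```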
Now let $I_{n}^{k}:[0,1]\to\mathbb{C}$ be defined by
$I_{n}^{k}(u)=\varphi^{k}(a_{n})+u\left(\varphi^{k+1}(a_{n})-\varphi^{k}(a_{n})\right),$
and let $\xi_{u}^{k}$ be an isotopy of quasiconformal maps that is the
identity outside of $\cup D_{n}$, with $\xi_{0}^{k}=\operatorname{id}$ and
$\xi_{u}^{k}(\varphi^{k}(a_{n}))=I_{n}^{k}(u)$ for all $a_{n}\in V$.
Note that $[\varphi^{k+1}]=[\xi_{1}^{k}\circ\varphi^{k}]$. Indeed, while the
points move simultaneously along the segments $I_{n}^{k}$, the sets
$\xi_{u}^{k}\circ\varphi^{k}(V)$ do not change their configuration. It is easy
to see that in this case
$W_{n}^{\xi_{u}^{k}\circ\varphi^{k}}=W_{n}^{\varphi^{k}}$. But then it follows
from Theorem 6.28 that $\xi_{1}^{k}\circ\varphi^{k}$ and $\varphi^{k+1}$
define the same point in $\mathcal{T}_{f}$.
Hence $[\varphi^{k}]=[\xi_{1}^{k-1}\circ\cdots\circ\xi_{1}^{0}\circ\varphi^{0}]$
converges to some $[\varphi]$ in the Teichmüller metric. This is an easy
consequence of the fact that the Teichmüller space of the unit disk with one
marked point is homeomorphic to the unit disk. ∎
## 8\. Proof of Classification Theorem
In this section we are going to finally prove the Classification Theorem 1.1.
Before the proof we state a variation of the Banach Fixed Point Theorem that
will be needed for this.
###### Lemma 8.1 (Adjusted Banach Fixed Point Theorem).
Let $X$ be a compact complete metric space, and $\sigma:X\to X$ be a strictly
contracting map. Then $X$ contains a fixed point of $\sigma$ and it is unique.
###### Proof.
The proof is essentially the same as for the classical Banach Fixed Point
Theorem.
Assume that $X$ does not contain a fixed point of $\sigma$ and define a map
$\kappa:X\to\mathbb{R}$ so that
$\kappa(x)=d(\sigma(x),\sigma^{2}(x))/d(x,\sigma(x))$ where $d(x,y)$ is the
metric function of $X$. Since $X$ is compact, $\kappa$ attains its maximum on
$X$. Moreover, because $\sigma$ is strictly contracting, this maximum is less
than $1$. But this means that for every $x\in X$ the sequence
$x,\sigma(x),\sigma^{2}(x),\dots$ is Cauchy and, by completeness, converges to
some point in $X$. This limit is then a fixed point of $\sigma$, contradicting
our assumption. Uniqueness follows immediately from the strict contraction of
$\sigma$. ∎
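The distinction between strict and uniform contraction is worth illustrating: the map $x\mapsto x^{2}$ on $[0,1/2]$ is strictly contracting (since $|x^{2}-y^{2}|=(x+y)|x-y|<|x-y|$ for $x\neq y$) but admits no uniform constant $k<1$, yet iteration still reaches the fixed point, exactly as the compactness argument above guarantees. A small numerical sketch (illustrative only):

```python
def iterate_to_fixed_point(sigma, x0, tol=1e-12, max_iter=10_000):
    """Iterate x0, sigma(x0), sigma^2(x0), ... until consecutive terms are
    within tol, returning the approximate fixed point."""
    x = x0
    for _ in range(max_iter):
        y = sigma(x)
        if abs(y - x) < tol:
            return y
        x = y
    raise RuntimeError("iteration did not converge")

# x -> x**2 on [0, 1/2]: the contraction factor (x + y) tends to 1 near 1/2,
# so the classical Banach hypothesis fails, but the orbit still converges to
# the unique fixed point 0.
print(iterate_to_fixed_point(lambda x: x * x, 0.5) < 1e-6)
```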
Now, we prove Theorem 1.1.
###### Proof of Classification Theorem 1.1.
We show existence and uniqueness of $g$ separately.
(Existence.) Choose some $f_{0}\in\mathcal{N}$ whose singular value does not
escape. Then, by Theorem 2.9, there is a point $z\in I(f_{0})$ that escapes on
rays with potential $t$ and external address $\underline{s}$.
Let $f=c\circ f_{0}$ be the captured exponential function constructed as in
Subsection 3.2 and having singular value $z$ escaping as under $f_{0}$.
From Theorems 7.1 and 7.3 we know that there is a non-empty compact subset
$\mathcal{C}_{f}=\mathcal{C}_{f}(\rho)\subset\mathcal{T}_{f}$, invariant under
$\sigma$, all of whose elements are asymptotically conformal. Next, it follows
from Theorem 4.4 that $\sigma$ is strictly contracting in the Teichmüller
metric on $\mathcal{C}_{f}$.
Thus we have a strictly contracting map $\sigma$ on a compact complete metric
space $\mathcal{C}_{f}$. Note that it is _not_ true that $\sigma$ is uniformly
strictly contracting on $\mathcal{C}_{f}$ (think of the map $x^{2}$ on the
interval $[0,1/2]$), so we cannot apply the Banach Fixed Point Theorem
directly. But it follows from Lemma 8.1 that $\mathcal{C}_{f}$ contains a
fixed point $[\varphi_{0}]$. Note (even though we do not use this fact later)
that, because $\sigma$ is strictly contracting on $\mathcal{C}_{f}$, this
fixed point $[\varphi_{0}]$ is the unique one in $\mathcal{C}_{f}$.
Now let $g=\varphi_{0}\circ f\circ\hat{\varphi}_{0}^{-1}$. This is an entire
function that is Thurston equivalent to $f$. Moreover, since every point in
$\mathcal{C}_{f}$ is of $\operatorname{id}$-type, without loss of generality
we can assume that $\varphi_{0}$ is of $\operatorname{id}$-type.
Now we need to prove that $\varphi_{0}(a_{0})$ escapes on rays with potential
$t$ and external address $\underline{s}$.
Note first that since the map $\varphi_{0}$ is of $\operatorname{id}$-type,
the singular value $\varphi_{0}(a_{0})$ escapes on rays. Indeed, the dynamic
ray of $g$ having external address $\underline{s}$ is defined for potentials
$t>t_{\underline{s}}$ (because none of its images contains the singular
value), and the point $p$ of this ray having potential $t>t_{\underline{s}}$
has the same asymptotics under iterations of $g$ as $\varphi_{0}(a_{0})$.
Since $g^{-1}$ is uniformly contracting on its tracts near $\infty$, we see
that for some integer $N>0$ the point $g^{N}(p)$ coincides with
$g^{N}(\varphi_{0}(a_{0}))$. For the same reason we can assume that for $n\geq
N$ the legs $\varphi_{0}(R_{n})$ coincide with the ray tails of $g$ towards
$g^{n}(\varphi_{0}(a_{0}))$ (just isotope $\varphi_{0}$: every such
$\varphi_{0}(R_{n})$ is in the same homotopy class as the ray tail). Since the
preimages of $\varphi_{0}(R_{n})$ recover according to the external address,
$\varphi_{0}(R_{0})$ coincides with the ray tail of $g$ towards
$\varphi_{0}(a_{0})$.
(Uniqueness.) Assume that there exists another function $h\in\mathcal{N}$
satisfying the same conditions on the combinatorics and speed of escape of the
singular value. Let $S_{0}^{g}$ and $S_{0}^{h}$ be the standard spiders of $g$
and $h$, respectively (i.e. union of the ray tails towards the singular
orbit).
Since $S_{0}^{g}$ and $S_{0}^{h}$ have the same vertical order of legs, one
can map $P_{g}$ into $P_{h}$ with a map $\chi$ of $\operatorname{id}$-type
belonging to an asymptotically conformal point of $\mathcal{T}_{g}$ so that
$\chi(S_{0}^{g})$ is equivalent to $S_{0}^{h}$ as spiders. But then we can
define the map $\sigma:\mathcal{T}_{g}\to\mathcal{T}_{g}$ for the function $g$
(i.e., with the capture identically equal to the identity), and then the map
$\hat{\chi}$ will be isotopic to $\chi$ relative to $P_{g}$; this is an
immediate consequence of the fact that $\chi$ and $\hat{\chi}$ are of
$\operatorname{id}$-type and that $\chi$ can be lifted by the branched
covering maps $g$ and $h$. On the other hand, since $[\operatorname{id}]$ is
invariant under $\sigma$ ($g$ is entire), and $[\chi]$ is asymptotically
conformal, either the Teichmüller distance from $[\chi]$ to
$[\operatorname{id}]$ must be strictly bigger than the Teichmüller distance
from $[\hat{\chi}]$ to $[\operatorname{id}]$ (Theorem 4.4) or
$[\chi]=[\operatorname{id}]$ and $g=h$. Since $[\chi]$ is a fixed point of
$\sigma$, only the latter option is possible. ∎
## 9\. Acknowledgements
We would like to express our gratitude to our research team in Aix-Marseille
Université, especially to Dierk Schleicher who supported this project from the
very beginning, Sergey Shemyakov who carefully proofread all drafts, as well
as to Kostiantyn Drach, Mikhail Hlushchanka, Bernhard Reinke and Roman Chernov
for uncountably many enjoyable and enlightening discussions of this project at
different stages. We also want to thank Dzmitry Dudko for his multiple
suggestions that helped to advance the project, Lasse Rempe for his long list
of comments and relevant questions, and Adam Epstein for important discussions
especially in the early stages of this project.
Finally, we are grateful for funding from the Deutsche Forschungsgemeinschaft
(DFG) and from the ERC Advanced Grant “Hologram” (695621), whose support
provided excellent conditions for the development of this research project.
Aix-Marseille Université, France
Email<EMAIL_ADDRESS>
# Ultra-Broadband Kerr Microcomb Through Soliton Spectral Translation
Gregory Moille<EMAIL_ADDRESS>Joint Quantum Institute, NIST/University of
Maryland, College Park, MD, USA Microsystems and Nanotechnology Division,
National Institute of Standards and Technology, Gaithersburg, MD, USA Edgar
F. Perez Joint Quantum Institute, NIST/University of Maryland, College Park,
MD, USA Microsystems and Nanotechnology Division, National Institute of
Standards and Technology, Gaithersburg, MD, USA Jordan R. Stone Joint
Quantum Institute, NIST/University of Maryland, College Park, MD, USA
Microsystems and Nanotechnology Division, National Institute of Standards and
Technology, Gaithersburg, MD, USA Ashutosh Rao Microsystems and
Nanotechnology Division, National Institute of Standards and Technology,
Gaithersburg, MD, USA Institute for Research in Electronics and Applied
Physics, University of Maryland, College Park, MD, USA Xiyuan Lu Microsystems
and Nanotechnology Division, National Institute of Standards and Technology,
Gaithersburg, MD, USA Institute for Research in Electronics and Applied
Physics, University of Maryland, College Park, MD, USA Tahmid Sami Rahman
Joint Quantum Institute, NIST/University of Maryland, College Park, MD, USA
Yanne Chembo Institute for Research in Electronics and Applied Physics,
University of Maryland, College Park, MD, USA Kartik Srinivasan
<EMAIL_ADDRESS>Joint Quantum Institute, NIST/University of
Maryland, College Park, MD, USA Microsystems and Nanotechnology Division,
National Institute of Standards and Technology, Gaithersburg, MD, USA
(August 27, 2024)
###### Abstract
Broad bandwidth and stable microresonator frequency combs are critical for
accurate and precise optical frequency measurements in a compact and
deployable format. Typically, broad bandwidths (e.g., octave spans) are
achieved by tailoring the microresonator’s geometric dispersion. However,
geometric dispersion engineering alone may be insufficient for sustaining
bandwidths well beyond an octave. Here, through spectral translation induced
by the nonlinear mixing between the soliton and a secondary pump, we greatly
expand the bandwidth of the Kerr soliton microcomb far beyond the anomalous
geometric dispersion region on both sides of the spectrum. We show that such
nonlinear mixing can be summarized through the concept of synthetic
dispersion, highlighting the frequency matching of the nonlinear process.
Through detailed numerical simulations, we show that the synthetic dispersion
model captures the system’s key physical behavior, in which the second pump
enables the non-degenerate four-wave mixing process of Bragg scattering, which
spectrally translates the soliton and produces new dispersive waves on both
sides of the spectrum, all while preserving low-noise properties across the
full comb bandwidth. We experimentally demonstrate these concepts by pumping a
silicon nitride microring resonator at 1063 nm and 1557 nm to enable the
spectral translation of a single soliton microcomb so that a total comb
bandwidth close to 1.6 octaves is realized (from 137 THz to 407 THz). We
further examine the low-noise characteristics of the resulting comb state,
through beat note measurements across the spectrum, measurements of the comb
tooth spacing in both the primary and spectrally-translated portions, and use
of an electro-optic comb to probe the region in which the primary and
spectrally-translated comb portions overlap. Such ultra-broadband microcombs
provide new opportunities for full microcomb stabilization in optical
frequency synthesis and optical atomic clocks, while the synthetic dispersion
concept can extend microcomb operation to wavelengths that are hard to reach
solely through geometric dispersion engineering.
Microresonator frequency combs are promising for chip-scale metrology
applications including coherent range measurements Riemensberger _et al._
(2020), spectroscopy Picqué and Hänsch (2019), and optical clocks Yu _et al._
(2019); Newman _et al._ (2019). These applications are typically realized in
the dissipative Kerr soliton (DKS) regime of microcomb operation Herr _et
al._ (2016), and often rely on stabilization of the comb repetition rate and
carrier-envelope offset frequency, the latter usually through a $f$-2$f$
interferometer Udem _et al._ (2002); Diddams _et al._ (2020). $f$-2$f$
stabilization requires at least an octave of comb bandwidth, which can be
achieved through geometric dispersion engineering Coen _et al._ (2012);
Okawachi _et al._ (2014) to create coherent dispersive waves (DWs) that
broaden the comb spectrum Brasch _et al._ (2016); Li _et al._ (2017);
Pfeiffer _et al._ (2017); Karpov _et al._ (2018); Yu _et al._ (2019).
Although DWs significantly increase microcomb bandwidth, the power of these
enhanced comb teeth is still orders of magnitude lower than the pump, so that
the $f$-2$f$ technique remains challenging. For example, an end-to-end comb
bandwidth of one octave is insufficient for self-referencing using the
(centrally located) high-power pump. More complicated resonator cross-sections
Moille _et al._ (2018) and stacks of different materials Zhang _et al._
(2012); Dorche _et al._ (2020) have been proposed to alter the dispersion in
support of ultra-broadband combs, yet remain to be demonstrated
experimentally. Other approaches for super-octave microcomb generation include
combining $\chi^{(2)}$ and $\chi^{(3)}$ effects Chen _et al._ (2020), but
such broadband combs usually present spectral gaps Szabados _et al._ (2020),
and the suitability of such combs Chen _et al._ (2020); Hendry _et al._
(2020); Bruch _et al._ (2020) for metrology has not been shown.
Here, we present a low-noise microcomb whose span extends across 1.6 octaves,
without spectral gaps, bridging the telecom and near-visible
wavelengths. This is made possible through dual pumping, in which the second
pump enables the $\chi^{(3)}$ process of four-wave mixing Bragg scattering
(FWM-BS) Yulin _et al._ (2004); Xu _et al._ (2013); Li _et al._ (2016) to
significantly broaden the typical DKS state, by spectral translation of the
soliton into other spectral bands. Using the dual pump scheme, we demonstrate
that the DKS teeth, acting as the signal in the FWM-BS process, can be
translated to new frequencies and effectively create new DWs on both sides of
the original DKS spectrum, broadening its bandwidth by more than a factor of two.
The parametric nature of the FWM-BS process is such that phase coherence is
expected to be maintained, which we probe experimentally through a series of
noise measurements. In particular, heterodyne beat notes across the spectrum,
measurements of the comb tooth spacing in both the original DKS portion and
the spectrally translated portion, and a measurement of the relative noise
between the comb teeth in the overlap between the two microcomb portions, are
all consistent with the picture that FWM-BS spectrally translates the soliton
– thereby preserving its repetition rate – to the spectral region surrounding
the second pump, and the resulting 1.6 octave comb operates in a low-noise
state. The incorporation of the FWM-BS spectral translation mechanism allows
for a tunability and engineering of new DWs well beyond the limits imposed by
geometric and material dispersion on conventional singly-pumped microresonator
DKS states. To better understand the potential of this system, we introduce
the new concept of synthetic dispersion, which captures the underlying physics
and predicts the comb behavior as a function of resonator geometry and pump
frequencies. Simultaneously, we perform a detailed numerical study using a
single multi-pump Lugiato-Lefever Equation Chembo and Menyuk (2013); Taheri
_et al._ (2017) that accounts for the full set of $\chi^{(3)}$ processes
occurring in the resonator and validates the novel concept of synthetic
dispersion. The synthetic dispersion framework is further validated by close
correspondence with our experimental measurements of ultra-broadband
microcombs created by FWM-BS spectral translation of a DKS state.
Figure 1: $|$ Principle of synthetic dispersion through four-wave mixing Bragg
scattering (FWM-BS). a, Integrated dispersion (panel I) of a ring resonator
with two zero-crossings. Under driving by a single pump called the primary
pump (pp), only one DW at high frequency is created (the potential low
frequency DW is not generated due to power considerations), as shown in the
green spectrum in panel II. This part of the spectrum is referred to as the
primary portion. Inclusion of another pump, the synthesis pump (sp), allows
for the FWM-BS condition to be respected, especially considering the negative
(lower frequency) idler process (panel I). It effectively translates comb
teeth surrounding the primary pump to spectral positions surrounding the
synthesis pump, creating a synthesized portion of the frequency comb (panel
II, purple spectrum). b, The same negative idler frequency and momentum
conservation condition is met for larger detuning between the primary pump and
the signal, as the higher order dispersion coefficients allow for a roll-off
of the integrated dispersion, and therefore another mode at the same
integrated dispersion value as at the synthesis pump exists (panel I). Such
efficient FWM-BS effectively creates a new DW at low frequency (panel II). c,
Now considering the higher frequency idler due to FWM-BS, both the mode
number matching and frequency matching conditions are changed (panel
I). The frequency matching condition now imposes that the integrated
dispersion at the idler must be of equal and opposite sign to the value at the
synthesis pump, and therefore a DW in the primary portion of the comb must
already exist (e.g., at 355 THz) to allow a change of sign of
$D_{\mathrm{int}}$ (panel I). In our case, this frequency matching condition
is met twice, but the mode matching condition permits only the high-frequency
mode to undergo FWM-BS. In the same manner as previously, it effectively
creates a new DW at high frequency (at 410 THz), extending the frequency comb
bandwidth on this side of the spectrum (panel II). d, Introduction of a
synthetic dispersion $D_{\mathrm{synth}}$ (purple, right axis) that captures
the nature of the translated portions as new DWs at low and high frequencies
and exhibits a clear difference from the integrated dispersion of the
resonator (solid green, right axis). The resulting LLE simulated comb spectrum
with a single pump (green) and a dual pump (purple) exhibit a clear difference
of bandwidth. The synthetic integrated dispersion emulates the dual pump
system by an effective single pump system (dark purple line) and shows close
agreement with the dual pump simulation. 0 dB is referenced to 1 mW, i.e.,
dBm.
Spectral translation and the synthetic dispersion framework–A microresonator’s
integrated dispersion $D_{\mathrm{int}}$ represents the variation of the cold
cavity resonance frequencies away from an equidistant frequency grid (i.e. the
DKS comb teeth) spaced by $D_{1}/2\pi$, the free spectral range (FSR) around
the primary pump Kippenberg _et al._ (2018), hereafter annotated with the
label $pp$.
$D_{\mathrm{int}}(\mu)=\omega_{\mathrm{res}}(\mu)-\omega_{\mathrm{\textsc{dks}}}(\mu)=\omega_{\mathrm{res}}(\mu)-\left(\omega_{\mathrm{pp}}+D_{1}\mu\right)$,
with $\mu$ defined as the mode order relative to the pumped mode
$\mu_{\mathrm{pp}}=0$ (i.e. $D_{\mathrm{int}}(\mu_{\mathrm{pp}})=0$),
$\omega_{\mathrm{res}}(\mu)$ being the cavity resonance frequency of the mode
$\mu$, and $\omega_{\mathrm{DKS}}(\mu)$ being the $\mu^{\text{th}}$ DKS comb
tooth frequency. Hence, when the cavity resonances match the DKS comb teeth
frequencies, i.e., $D_{\mathrm{int}}(\mu_{\mathrm{\textsc{dw}}})=0$, a
resonant enhancement occurs, leading to DW creation at the mode position
$\mu_{\mathrm{\textsc{dw}}}$ (Fig. 1a). In this case (and for the
microresonators we study below), $D_{\mathrm{int}}$ is such that only one
primary DW ($DW^{\prime}$) is created, as the second zero crossing (on the low
frequency side) is too far from the primary pump to yield appreciable energy.
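The DW condition $D_{\mathrm{int}}(\mu_{\textsc{dw}})=0$ can be located numerically from a truncated Taylor expansion of the resonance frequencies. The following sketch uses invented dispersion coefficients (not the values of the device studied here) purely to illustrate how the higher-order roll-off produces a zero crossing away from the pump:

```python
import numpy as np

def integrated_dispersion(mu, D2, D3, D4):
    """D_int(mu) = (D2/2!) mu^2 + (D3/3!) mu^3 + (D4/4!) mu^4, the deviation
    of the cold-cavity resonances from the equidistant grid omega_pp + D1*mu."""
    return D2 / 2 * mu**2 + D3 / 6 * mu**3 + D4 / 24 * mu**4

# Invented coefficients (rad/s): anomalous D2 > 0 with negative higher-order
# terms gives the roll-off responsible for a zero crossing away from mu = 0.
mu = np.arange(-200, 301)
Dint = integrated_dispersion(mu.astype(float),
                             D2=2 * np.pi * 20e3,
                             D3=-2 * np.pi * 150,
                             D4=-2 * np.pi * 1.0)
# Dispersive-wave candidates: strict sign changes of D_int (the double zero
# at mu = 0 is excluded automatically by the strict inequality).
dw_modes = mu[:-1][Dint[:-1] * Dint[1:] < 0]
print(dw_modes)  # → [274]
```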
The use of an auxiliary pump enables straightforward access to soliton states,
through an effective temperature compensation mechanism that bypasses the
thermal bistability Lu _et al._ (2019); Zhou _et al._ (2019); Zhang _et
al._ (2019). Moreover, simultaneous spectral broadening of the comb,
attributed to cross-phase modulation (XPM) effects, has also recently been
observed Zhang _et al._ (2020), though the magnitude of the effect was
limited. Here, we consider a dual-pumped system in a regime where much more
significant spectral broadening is realized. We pinpoint the strong increase
in comb bandwidth as originating from an interband four-wave
mixing Bragg scattering (FWM-BS) process. FWM-BS is mediated by the
combination of a strong, secondary pump (hereafter referred to as the
synthesis pump $sp$) and the primary pump $pp$, and results in phase-coherent
spectral translation of comb teeth across wide spectral gaps determined by the
difference in pump frequencies. While FWM-BS of a single frequency continuous
wave input has been demonstrated in a microcavity Li _et al._ (2016), and its
role in soliton-DW mixing in the context of optical fibers has previously been
studied in intraband Xu _et al._ (2013) and interband cases Yulin _et al._
(2004), here we show how it can play a critical role in the creation of ultra-
broadband microresonator frequency combs.
We consider a FWM-BS framework where the signal can be any comb tooth of the
primary soliton ($pDKS$) resulting from the primary pump $pp$ (i.e., the comb
that would be obtained through a single pump, called the primary portion), and
is converted into an idler that is another spectral component of the comb,
hereafter called the synthesized portion ($sDKS$), through application of the
synthesis pump $sp$. This process must respect the fundamental criteria of
energy and momentum conservation, which in a ring resonator translate to
frequency matching and azimuthal mode number matching
$\omega_{\textsc{sDKS}}=\omega_{\mathrm{pDKS}}\pm|\omega_{\textsc{pp}}-\omega_{\mathrm{sp}}|$
and
$\mu_{\textsc{sDKS}}=\mu_{\mathrm{pDKS}}\pm|\mu_{\textsc{pp}}-\mu_{\mathrm{sp}}|$,
respectively. Using the integrated dispersion previously defined, these
fundamental conditions can be summarized in a single equation (see Extended
Data Section S-1):
$\left(\mu_{\mathrm{pDKS}}\pm\mu_{\mathrm{pp}}\right)D_{1}=\left(D_{\mathrm{int}}(\mu_{\mathrm{sDKS}}^{\pm})\pm D_{\mathrm{int}}(\mu_{\mathrm{sp}})\right)+\left(\mu_{\mathrm{sp}}\pm\mu_{\mathrm{sDKS}}^{\pm}\right)D_{1}$ (1)
This results in a simple pair of conditions for FWM-BS based on the idler that
is considered:
$\begin{cases}\Delta D_{\mathrm{int}}^{-}=D_{\mathrm{int}}(\mu_{\mathrm{sDKS}}^{-})-D_{\mathrm{int}}(\mu_{\mathrm{sp}})=0\\ \Delta\mu^{-}=\mu_{\mathrm{sDKS}}^{-}-\mu_{\mathrm{sp}}=\mu_{\mathrm{pDKS}}-\mu_{\mathrm{pp}}\end{cases}$ (2)
$\begin{cases}\Delta D_{\mathrm{int}}^{+}=D_{\mathrm{int}}(\mu_{\mathrm{sDKS}}^{+})+D_{\mathrm{int}}(\mu_{\mathrm{sp}})=0\\ \Delta\mu^{+}=\mu_{\mathrm{sDKS}}^{+}-\mu_{\mathrm{pp}}=\mu_{\mathrm{pDKS}}-\mu_{\mathrm{sp}}\end{cases}$ (3)
Here the superscript $\pm$ denotes the two kinds of idler that can result from
a FWM-BS process, one at higher frequency (+) and the other at lower frequency
(-) than the signal. Due to phase matching considerations, only one of the
two can satisfy the FWM-BS conditions for a given signal.
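These matching conditions lend themselves to a direct numerical scan. The sketch below uses a toy parabolic $D_{\mathrm{int}}$ and a hard tolerance standing in for the Lorentzian efficiency roll-off described later, so the numbers are illustrative only:

```python
import numpy as np

def negative_idler_pairs(mu, Dint, mu_pp, mu_sp, tol):
    """Enumerate (signal, idler) mode pairs allowed by the negative-idler
    FWM-BS conditions of Eq. (2):
      momentum:  mu_i - mu_sp = mu_s - mu_pp
      energy:    |Dint(mu_i) - Dint(mu_sp)| < tol (approximate matching)."""
    idx = {int(m): i for i, m in enumerate(mu)}
    Dint_sp = Dint[idx[int(mu_sp)]]
    pairs = []
    for mu_s in mu:
        mu_i = int(mu_sp) + int(mu_s) - int(mu_pp)  # momentum conservation
        if mu_i in idx and abs(Dint[idx[mu_i]] - Dint_sp) < tol:
            pairs.append((int(mu_s), mu_i))
    return pairs

# Toy parabolic dispersion: only the comb tooth at the primary pump
# (mu_s = 0) finds a frequency-matched idler, at the synthesis pump mode.
mu = np.arange(-300, 301)
Dint = 0.5 * mu.astype(float) ** 2
print(negative_idler_pairs(mu, Dint, mu_pp=0, mu_sp=-200, tol=100.0))
# → [(0, -200)]
```

With a realistic $D_{\mathrm{int}}$ that rolls over, a second solution appears at large mode spacing, which is precisely the new low-frequency DW discussed next.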
The first case to consider is for a signal that is close to the main pump
(Fig. 1a). Only the low frequency idler satisfies the condition in Eq. 2.
Therefore, the FWM-BS process translates a DKS comb tooth close to the primary
pump into a comb tooth close to the synthesis pump. As the mismatch in the
integrated dispersion $\Delta D_{\mathrm{int}}^{-}$ (i.e. mismatch in the
fundamental energy conservation of the FWM process) increases with the mode
spacing $\Delta\mu^{-}$, the efficiency of the FWM-BS process decreases,
giving rise to the Lorentzian spectral shape around the synthesis pump.
However, comb teeth close to the primary pump are not the only ones that can
be efficiently spectrally translated. Due to the higher order dispersion
coefficients leading to a roll-off of the integrated dispersion, the energy
conservation condition $\Delta D_{\mathrm{int}}^{-}=0$ is met again for large
enough mode spacing (Fig. 1b). Therefore, the mode whose integrated dispersion
value equals that at the synthesis pump will undergo efficient FWM-BS. This
process extends the frequency comb bandwidth on the low
frequency side, with a new DW generated near 160 THz. The system we consider
presents a large asymmetry in the integrated dispersion, and therefore the
condition $\Delta D_{\mathrm{int}}^{-}=0$ is only met once. However, even for
symmetric integrated dispersion in which there is an additional $\Delta
D_{\mathrm{int}}^{-}=0$ on the high frequency side, no corresponding high
frequency DW will be generated through this negative idler process. This is
because the phase matching criterion from Eq. 2 requires the mode spacing
between the synthesis pump and the synthesized portion to match the separation
between the primary pump and the primary portion, and such a separation is too
large for the primary portion to contribute an adequate power signal for the
FWM-BS process.
That being said, Eq. 3 indicates that a higher frequency idler can be
generated for any modes whose integrated dispersion is equal and opposite in
sign to the integrated dispersion value at the synthesis pump. A comb tooth
from the synthesis portion $\mu_{\mathrm{sDKS}}^{+}$ must respect momentum
conservation, which in this case means that the mode spacing between the
signal (comb tooth from the primary portion) and the synthesis pump must match
the mode spacing between the idler (comb tooth from the synthesized portion)
and the primary pump. Once again, due to the higher order dispersion that
allows a zero crossing of the integrated dispersion, negative values of
$D_{\mathrm{int}}$ are possible, and an efficient FWM-BS process can happen at
frequencies beyond the original high-frequency DW of the single pumped DKS.
This results in a new DW near 409 THz, extending the bandwidth of the
frequency comb to higher frequencies, i.e. toward the visible. Interestingly,
another frequency matching condition in our case would occur at low frequency
(near 138 THz), as another zero crossing of the integrated dispersion happens
on this side of the spectrum. However, momentum conservation is not respected,
as the idler must have a higher mode number than the signal, and therefore no
FWM-BS happens here.
Figure 2: $|$ Spectral translation to create ultra-broadband microcombs. A
microring resonator with $RW$=1117 nm is pumped by a primary pump at 282 THz
and synthesis pump at 192 THz. The comb behavior surrounding the primary pump
is spectrally translated to the synthesis pump portion. a, Primary comb
generation with low primary pump power near threshold. The comb spacing is
equal to seven free spectral ranges (FSRs) and this spacing is reproduced
around the synthesis pump, where idlers are created seven FSRs away from the
synthesis pump, highlighting the mixing process between the two pumps and the
primary portion comb teeth. b, Primary comb generation at a higher primary
pump power where, as previously, the spectral spacing in the primary portion
is matched by that in the synthesis portion, as expected by the FWM-BS theory.
c, Two-soliton state, where the characteristic 8 FSR modulation in the comb
envelope is replicated near the synthesis pump. The inset shows the LLE-
calculated two-soliton pulse arrangement that results in the simulated comb
envelope shown in red. We highlight the missing comb tooth in the primary
portion ($\Delta\mu$=-4), whose absence is translated onto the synthesized
portion of the comb, respecting the FWM-BS phase matching condition. d, Single
soliton state, where the impact of the synthesis pump is to expand the comb
bandwidth to 1.6 octaves and create new DWs on both ends of the spectrum. The
spectrum agrees with the generalized LLE solution using the dual-pump model
(red line), and greatly exceeds the expected spectrum if just the primary pump
is applied (dashed green line). The phase coherent nature of the generated
comb is further verified through beat note measurements with narrow linewidth
lasers throughout the comb spectrum (four left insets). The noise floor for
each measurement is shown in dashed lines, and is higher in the O-band due to
use of an additional RF amplifier. The rightmost inset shows the LLE
simulation of the expected time-domain behavior under dual pumping (red) and
if only the primary pump is applied (green). The horizontal bars at the bottom
of the graph compare the span achieved here with octave-spanning DKSs from
Refs. Spencer _et al._ (2018); Yu _et al._ (2019). We note that the low
frequency portion of the spectrum exhibits OSA artefacts, at 146 THz, 159 THz,
and $<$135 THz; the shortest DW at 141 THz is not impacted by these artefacts.
0 dB is referenced to 1 mW, i.e., dBm.
Using this fundamental property that FWM-BS translates the primary DKS comb
teeth into new spectral regions while maintaining the comb tooth spacing, we
introduce $D_{\mathrm{synt}}$ (Fig. 1d), a synthetic dispersion that captures
the essence of the FWM processes we have presented. $D_{\mathrm{synt}}$ is an
effective integrated dispersion that incorporates the combined influence of
the geometric integrated dispersion and the synthesis-pump-induced FWM-BS
processes. $D_{\mathrm{synt}}$ is essentially a piece-wise shifted version of
$D_{\text{int}}$, with the two being equal in the spectral region surrounding
the primary pump, and differing in the regions where FWM-BS causes a
broadening of the spectrum and the generation of new DWs. In these regions, we
simply use the FWM-BS conditions for creating the new DWs to determine how to
shift $D_{\text{int}}$ (either up or down) so that its zero crossings are
appropriately located. Stitching together the different regions of the
synthetic dispersion is accomplished by taking into account FWM-BS power
considerations, and in particular, where FWM-BS is less efficient. In our
case, this is at the midpoint between the primary pump and the synthesis pump,
and in between the primary DW (i.e., that generated by the original DKS) and
the high frequency FWM-BS DW. This approach provides a linear approximation of
where the DWs will be created ($D_{\mathrm{synt}}$=0) and helps estimate the
spectral extent of the frequency comb. To test its validity, we simulate the
behavior of the system using a generalized version of the Lugiato-Lefever
Equation (LLE), described in detail in the Extended Material Eq. 12. This
version of the equation has not been subjected to typical simplifications, and
in particular, the pumped modes remain as phase terms relative to the center
of the frequency domain, so that the evolution of the intracavity electric
field under multiple driving fields can be studied Taheri _et al._ (2017).
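As a rough numerical illustration of the piecewise construction described above, the sketch below stitches a constant-shifted copy of a toy integrated dispersion onto the unshifted primary region and reads off DW positions from the zero crossings. The dispersion coefficients, stitch point, and shift value are invented for illustration; in practice the shift follows from the FWM-BS DW conditions.

```python
import numpy as np

def synthetic_dispersion(mu, d_int, stitch_points, shifts):
    """Piecewise-shifted integrated dispersion: d_int plus a constant
    offset per region, with region edges ('stitch points') placed where
    FWM-BS is least efficient."""
    region = np.searchsorted(stitch_points, mu)
    return d_int + np.asarray(shifts)[region]

mu = np.arange(-400, 401)                      # mode number vs. primary pump
D2, D3 = 2.0e-3, -2.0e-5                       # toy dispersion coefficients
d_int = 0.5 * D2 * mu**2 + D3 * mu**3 / 6.0    # primary-referenced D_int

# One synthesis-side region (mu < -125), shifted down so a new zero
# crossing, i.e. a new DW, appears near mu = -300; the primary DW sits
# near mu = +300 by construction of the toy coefficients.
d_synt = synthetic_dispersion(mu, d_int, stitch_points=[-125],
                              shifts=[-180.0, 0.0])

# Predicted DW modes: sign changes of the synthetic dispersion (the pump
# mode mu = 0 also appears trivially, since D_int is referenced to it).
crossings = mu[:-1][np.diff(np.sign(d_synt)) != 0]
# Drop the artificial jump at the stitch point itself.
dw_modes = crossings[np.abs(crossings + 125) > 1]
```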
Ultra-broadband microcombs – To experimentally study the above phenomena, we
perform measurements on Si$_3$N$_4$ microring resonators whose design and basic
characterization are described in the Methods and Extended Data. The
resonators are pumped in two bands, with a primary pump around 1063 nm and a
synthesis pump around 1557 nm, and the coupling enabled by a tailored pulley
waveguide geometry that realizes a relatively flat coupling rate across a wide
spectral range Moille _et al._ (2019) (see Extended Data Section S-4). We
first show the spectral translation nature of the dual pump system for
different microcomb states. Figure 2a-b shows the spectral behavior for
primary combs generated just above threshold and at a higher power,
respectively, where in both cases we observe that the comb tooth spacing (7
FSR and 10 FSR, respectively) surrounding the primary pump is retained around
the synthesis pump and the higher frequency region between 350 THz to 400 THz.
This behavior persists as we reach the soliton regime, in which the synthesis
pump provides both a new nonlinear mixing mechanism (FWM-BS) as well as
thermal stabilization, with clear signatures of soliton steps observed
(Extended Data Fig. S3). Figure 2c shows the results for a two-soliton state,
and in Fig. 2d, spectral translation of a single soliton state and the
generation of additional DWs that greatly expand the comb spectrum is
demonstrated. In each of these states, the spectral separation between the
comb lines remains the same between the primary portion and the synthesized
one, illustrating that the comb lines from the synthesized portion are
unlikely to be due to the synthesis pump alone, and instead are a result of
the mixing between both pumps and the primary portion comb lines. In addition,
the clear translation of the two-soliton comb envelope modulation pattern onto
the synthesis component confirms this point. This spectral pattern (due to the
relative phase of the two pulses circulating in the cavity) effectively acts
as a modulation of the signal power in the FWM-BS process, and the replication
of this pattern in the synthesized portion of the comb follows the expectation
for FWM-BS that the generated idler power is linearly proportional to the
input signal power. For example, a comb tooth 4 FSRs below the primary pump
($\Delta\mu=-4$) is absent (likely due to an avoided mode crossing), and this
is replicated by the absence of a comb tooth 4 FSRs below the synthesis pump.
This also emphasizes the phase-matching condition and is consistent with the
FWM-BS framework described previously.
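The linear signal-to-idler mapping described above is simple to sketch: each primary tooth at offset $\Delta\mu$ from the primary pump maps to an idler at the same offset from the synthesis pump, so any feature of the primary envelope, including a missing tooth, is replicated. The flat envelope and conversion efficiency below are invented for illustration.

```python
# Toy primary comb: tooth powers indexed by offset dmu from the primary pump.
primary = {dmu: 1.0 for dmu in range(-8, 9)}
del primary[-4]        # missing tooth (e.g., from an avoided mode crossing)

eta = 0.01             # illustrative FWM-BS conversion efficiency (constant)

# FWM-BS: idler power is linear in signal power, at the same offset dmu,
# now measured from the synthesis pump instead of the primary pump.
synthesized = {dmu: eta * power for dmu, power in primary.items()}

assert -4 not in synthesized        # the absence is replicated
```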
Figure 3: $|$ Ultra-broadband soliton spectral translation: geometric
dependence. Geometric dispersion impacts the location of the generated DWs,
much like the case in singly-pumped DKS devices. Here, microrings are pumped
at 192 THz and 282 THz with a pump power of 200 mW and 250 mW respectively,
for ring widths (RWs) from 1080 nm to 1125 nm. In each case, an ultra-
broadband microcomb is generated in which the soliton comb teeth surrounding
the primary pump are spectrally translated by the synthesis pump. Single pump
DKS states are also showcased for each RW, where thermal stability was
obtained through cross-polarized counterclockwise pumping at 305 THz. The
solid black lines represent the expected single soliton spectra calculated
through the dual-pumped generalized LLE and single-pump LLE for their
corresponding counterpart experiments. The dashed lines represent the
microcomb envelope prediction using the synthetic dispersion and single-pump
LLE. The light grey solid lines are visual guides showcasing the shift of the
DWs with RW. 0 dB is referenced to 1 mW, i.e., dBm.
Figure 4: $|$ Overlap of
the primary and synthesized portions of the comb spectrum. a, Spectrally-
translated single DKS spectrum with the highlighted overlap region. The
integrated dispersion computed at the primary pump frequency (282 THz) is
shown in green, with an annotation highlighting the expected frequency offset
between the DKS primary portion comb tooth and the cavity resonance at the
synthesis pump frequency. b, Zoom-in of the overlap region, highlighting the
primary portion of the DKS (blue) and the synthesized portion (red). The beat
note of pairs of adjacent comb teeth separated by the overlap-offset frequency
$\delta f$ are displayed in the insets. c, Measured comb tooth spacing for
both the primary and synthesized portion in the overlap region, exhibiting a
uniform value across the spectrum that is ring width dependent. The linear FSR
around the main pump is measured and reported in the gray dashed line, and is
close to the measured repetition rate of the DKS. The error bars represent the
variance of the measured $f_{\mathrm{rep}}$ for different pairs of comb teeth.
d, Measurement of the overlap-offset frequency $\delta f$ and the predicted
value of the integrated dispersion about the primary pump, evaluated at the
synthesis pump frequency ($D_{\mathrm{int}}(\mu_{\mathrm{sp}})$). The overlap-
offset frequency is uniform across the overlap region, and is well-predicted
by $D_{\mathrm{int}}(\mu_{\mathrm{sp}})$, indicative of the intrinsic detuning
between the DKS teeth and the cavity resonances frequencies. The error bars
represent the variance of the measured $\delta f$ for different pairs of comb
teeth. 0 dB is referenced to 1 mW, i.e., dBm.
When reaching the single soliton state, the comb extends from 137 THz to 407
THz, a span allowing $f$-3$f$ stabilization and a significant increase in
bandwidth relative to state-of-the-art DKS microcombs Spencer _et al._
(2018); Yu _et al._ (2019). The comb envelope is in good agreement with the
predictions of the generalized LLE model, which incorporates both the primary
and synthesis pumps, in terms of the overall comb envelope, the spectral
positions of the different DWs, and the $>$80 dB dynamic range in comb tooth
power across the ultra-broadband spectral range. Finally, we note that the LLE
provides insight about the nature of the intracavity field (Fig. 2c-d,
insets). In the time domain, it predicts a two soliton pulse (Fig. 2c) and a
single soliton pulse (Fig. 2d) with an estimated pulse duration (full-width at
half-maximum) of 16 fs. The pulses sit on a modulated background
characteristic of DWs on both sides of the spectrum, and the pulse itself
shows some amount of structure. In contrast, we also plot the expected time-domain behavior for a singly-pumped soliton state with a single DW (Fig. 2d, rightmost inset),
where the background modulation is larger on one side of the pulse, and the
pulse itself shows no additional structure. This highlights the continuous
wave nature of the translated portion of the spectrum, which does not create
new pulses, but rather increases the background modulation in the same manner
as DWs do. We note that the pulsed behavior from the time-domain simulations
is consistent with coherence across the whole 1.6 octave bandwidth of the
comb.
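The quoted span can be checked with a few lines of arithmetic. The 137 THz and 407 THz band edges are taken from the text; the wavelength conversion simply uses $f = c/\lambda$.

```python
import math

C = 299_792_458.0                     # speed of light, m/s

def thz(wavelength_nm):
    """Vacuum wavelength in nm -> frequency in THz."""
    return C / (wavelength_nm * 1e-9) / 1e12

f_low, f_high = 137.0, 407.0          # comb edges in THz (from the text)
octaves = math.log2(f_high / f_low)   # ~1.57, the quoted ~1.6 octave span
ratio = f_high / f_low                # ~2.97, close to the factor of 3
                                      # relevant for f-3f self-referencing

# The same edges expressed as wavelengths (~737 nm and ~2190 nm):
span_thz = thz(737) - thz(2190)       # ~270 THz total extent
```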
To probe the comb coherence in the single soliton case, we perform beat note
measurements with narrow linewidth tunable lasers (at 970 nm, 1270 nm, 1420
nm, and 1520 nm) positioned at different locations within the comb spectrum,
covering many different parts of the comb with different spectral shapes. In
each case (insets to Fig. 2d), the beat note is a single tone, which is a
signature of the phase-coherent nature of the frequency comb Herr _et al._
(2012); Raja _et al._ (2019) and is to be contrasted with the multiple beat
notes that might be expected for modulation instability processes that lead to
sub-comb formation. The beat notes at 1270 nm and 970 nm evaluate the primary
portion of the DKS, and are thus expected to exhibit clear single tones,
reflecting the single pulse nature of the DKS in the resonator. The beat notes
at 1420 nm and at 1520 nm, which lie in spectrally-translated portion near the
synthesis pump, support the hypothesis of the binding of the synthesis pump
with the single soliton and the coherence of the synthesized component of the
comb. Later, we will strengthen these conclusions through heterodyne
measurements between the two portions of the comb.
Soliton spectral translation: geometric dependence — To explicitly demonstrate
the impact of the synthesis laser on spectral translation of soliton
microcombs, in Fig. 3 we compare spectra generated when both primary and
synthesis pump lasers drive nonlinear processes in a series of microresonators
to the case where only the primary pump drives soliton generation. Thermal
stabilization in the latter case is achieved by a counterpropagating cross-
polarized laser at 980 nm; the opposite propagation direction and orthogonal
polarization ensure that its impact on nonlinear dynamics is minimized. For
each microring, we see that the effect of the synthesis laser is to mediate
spectral translation and new DW generation, while leaving the portion
surrounding the original soliton state essentially unchanged, as predicted
from Fig. 1. The microrings differ only in their ring widths, which range from
1080 nm to 1125 nm, and since the microring cross-section strongly influences
the integrated geometric dispersion and the resulting synthetic dispersion
under dual pumping, we expect this $RW$ variation to impact the generated comb
spectra and the DW positions. Each device exhibits an ultra-broadband
spectrum, and as expected, all DWs tune with ring width. The generalized LLE
described in the Extended Data Section S-2 provides good agreement (solid
black lines in Fig. 3) with the obtained experimental spectra and reproduces
the observed DW tuning. In addition, the single-pumped synthetic dispersion
simulations (dashed black lines in Fig. 3) provide similar predictions for the
comb envelope and DW positions, highlighting the utility of our heuristic
model and its use as a predictive tool to design ultra-broadband frequency
combs. However, it is important to note that both the single-pump LLE and the
more generalized $N$-pump LLE rely on the basic assumption of a single fixed
frequency grid, indexed by the mode number $\mu$, through Fourier transform of
the fast time temporal profile. Hence these models do not capture any
frequency discrepancy between the DKS teeth and the nearest cavity resonances.
Figure 5: $|$ Relative phase noise between the primary and synthesis portions. a,
Schematic of the measurement setup through which the relative phase noise of
two comb teeth in the stitching region, one from the primary comb portion and
one from the synthesis comb portion, is measured with the assistance of a 1300
nm EO comb. b, Spectrum of the microcomb device under investigation. c, Two
microcomb teeth from the stitching region (blue) plotted along with the EO
comb (purple). The EO comb teeth are not fully resolved due to the limited
spectral resolution of the optical spectrum analyzer in comparison to the EO
comb repetition frequency (6.0156 GHz). d, Beat note of the EO comb with the
two microcomb teeth. e, Mixing of the two beat notes produced between the EO
comb and the microcomb teeth to retrieve the relative phase noise of the
microcomb teeth, without the (free-running) 1300 nm laser phase noise
contribution.
Composite frequency comb overlap frequency offset — Closer inspection of Fig.
2d and every spectrum in Fig. 3 reveals an important feature: the primary
portion and the synthesized portion do not overlap perfectly, resulting in a
composite frequency comb. Although we are pumping the same mode family in both
bands, the overlap region in the comb spectrum (Fig. 4a and b) exhibits pairs
of adjacent comb teeth with an overlap-offset frequency $\delta
f=f_{\mathrm{sDKS}}-f_{\mathrm{pDKS}}$ that is smaller than the repetition
rate of the DKS. We now consider whether $\delta f$ remains the same across
this ‘stitching’ region.
We characterize the comb tooth spacing, i.e. the repetition rate
$f_{\mathrm{rep}}$ in the primary and synthesized portions of the comb, for
the state shown in Fig. 4a. This is done by measuring beat notes between a
helper laser and the comb teeth, with the helper laser wavelength determined
by a wavemeter (accuracy $\approx$ 50 MHz). Through measurement across the
helper laser tuning range (228 THz to 238 THz), we find that (Fig. 4b)
$f_{\mathrm{rep}}$ is uniform and equal for both the primary and synthesized
portions, and its value is close to the FSR around the primary pump
(equivalently, $\delta f$ is uniform throughout the overlap region). We then
repeat measurement of $f_{\mathrm{rep}}$ and $\delta f$ for devices with
different $RW$, and find that these conclusions hold, with the specific
measured values dependent on $RW$ (Fig. 4c-d). These conclusions are
understood by the fact that the FWM-BS process must respect frequency
matching, and thus the soliton repetition rate will be transferred to the
synthesized portion of the comb.
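A minimal sketch of the bookkeeping behind these measurements, assuming comb-tooth frequencies near the helper laser have already been recovered from the beat notes. All frequency values below are illustrative, not the measured device values.

```python
# Toy comb-tooth frequencies (Hz); f_rep and delta_f are placeholders.
f_rep = 500.0e9                      # tooth spacing (both portions)
delta_f = 200.0e9                    # overlap-offset frequency

f0_primary = 230.0e12                # a primary-portion tooth in the overlap
primary_teeth = [f0_primary + n * f_rep for n in range(5)]
synth_teeth = [f + delta_f for f in primary_teeth]  # same spacing, offset

def spacing(f_n, f_m, n, m):
    """Repetition rate inferred from two teeth with indices n and m."""
    return (f_n - f_m) / (n - m)

# Both portions yield the same spacing, and adjacent primary/synthesized
# pairs all yield the same overlap offset, as observed in Fig. 4.
rep_p = spacing(primary_teeth[4], primary_teeth[0], 4, 0)
rep_s = spacing(synth_teeth[3], synth_teeth[1], 3, 1)
offsets = [s - p for p, s in zip(primary_teeth, synth_teeth)]
```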
In addition, contrary to the strongly coupled dual-pump regime that has been
theoretically studied Taheri _et al._ (2017); Hansson and Wabnitz
(2014), the synthesis pump here does not change the DKS repetition rate or
shift the resonance frequency enough to enter the regime where Arnold
tongues and synchronization might exist Skryabin _et al._ (2021). We believe
that such locking cannot happen in our devices because the overlap frequency
shift cannot be compensated by the Kerr shift at the synthesis pump
($\approx-1.3$ GHz). Thus, this overlap-offset frequency between the primary
and synthesized portions of the comb can be understood as the fundamental
discrepancy between the soliton comb tooth frequency and the cavity resonance
frequency Guo _et al._ (2017). As such, we expect that $\delta f$ should be
given by the value of the primary pump integrated dispersion evaluated at the
synthesis pump frequency $D_{\mathrm{int}}(f_{\mathrm{sp}})$, in the absence
of strong overlap between the primary and the synthesized elements, which is
our case since the synthesis pump is placed outside of the primary soliton
spectral envelope. In Fig. 4d, we compare $\delta f$ and the theoretical value
of $D_{\mathrm{int}}(f_{\mathrm{sp}})$, and find that they are in good
agreement and within the expected Kerr shift (a couple of GHz) induced by the
synthesis pump.
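As a numerical sketch of this prediction, a toy second-order dispersion evaluated a few hundred FSRs from the primary pump already reaches the $\approx$200 GHz scale of the measured offset. Both $D_2$ and $\mu_{\mathrm{sp}}$ below are illustrative, not fitted device values.

```python
def d_int(mu, D2, D3=0.0):
    """Integrated dispersion (Hz) referenced to the primary pump,
    truncated at third order."""
    return 0.5 * D2 * mu**2 + D3 * mu**3 / 6.0

D2 = 4.4e6       # toy second-order dispersion, Hz per mode squared
mu_sp = -300     # synthesis-pump mode relative to the primary pump (toy)

delta_f_pred = d_int(mu_sp, D2)   # ~2e11 Hz, i.e. the ~200 GHz scale
```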
Probing the relative noise between the two portions of the comb — Thus far we
have shown that the spectrum produced by our dual-pump system is consistent
with the picture in which FWM-BS mediates soliton spectral translation, with
new DWs generated on both the low and high frequency sides of the spectrum.
Moreover, we have explicitly shown that the comb tooth spacing is translated,
and there is an overall shift between the two portions of the spectrum. In
addition, the narrow heterodyne beat notes across the spectrum suggest that
the primary and synthesis portions of the comb each exhibit low noise, but so
far, their relative noise has not been experimentally considered. Now, if FWM-
BS is indeed the dominant process in the spectral broadening of the primary
DKS state into an ultra-broadband comb, we should expect that no added phase
noise (other than that of the synthesis laser) will be accrued on the
synthesis component teeth, and that there will be low noise between the
synthesis portion and the primary portion of the comb. To investigate this, we
seek to mix two comb teeth from the overlap region, where one comb tooth comes
from the primary portion of the comb and the other tooth comes from the
synthesis portion. Because of the large frequency offset $\delta f\approx$ 200
GHz, as observed earlier, we are unable to directly measure such a large beat
frequency. Instead, we use an electro-optic modulation (EO) comb, generated
through phase modulating an independent 1300 nm laser, to span across $\delta
f$ with a measurable (6.0156 GHz) repetition rate (Fig. 5a-c), and the beat
note of both microcomb teeth with respect to the EO comb is recorded (Fig.
5d). As the 1300 nm laser that seeds the EO comb is free-running and not
locked to the microcomb (which itself is free running), the obtained beat note
is relatively broad. However, by mixing together the two beat notes in a
specific configuration, we can suppress this helper laser phase noise. In
particular, if the EO comb teeth are correctly positioned on the same side of
the DKS comb teeth, for instance both at larger frequency, the two beat notes
can be mixed together in order to suppress the phase noise of the 1300 nm
helper laser (Fig. 5a). The obtained signal after mixing will only correspond
to the beating between the two component teeth of the DKS down-shifted by a
fixed frequency that is related to the microwave signal generator that drives
the EO comb (and which has very low phase noise in comparison to the free-
running laser systems). The expected behavior based on an underlying nonlinear
process dominated by the FWM-BS is corroborated in experiment, where a narrow
single tone, indicating a low-noise state between the two portions of the
comb, is observed (Fig. 5e). Given that a DKS (the primary portion of the
comb) is a low-noise state, this suggests that the spectral translation
process has resulted in a synthesis portion that is also low-noise.
Stabilization of the DKS and the pump lasers has not been performed, which
limits our ability to draw a final conclusion about the phase coherence
between the two portions of the comb. Nevertheless, the earlier measurements
and theoretical analysis, which show that the repetition rate in both portions
of the comb is unvarying and identical to within our measurement capability,
suggest, as expected from the physical picture of FWM-BS, that the soliton
spectral translation process results in a single, fully phase-coherent comb.
The frequency of each comb tooth is then known once the repetition rate, the
carrier-envelope offset frequency, and the overlap-offset frequency $\delta f$
are known, with the value of $\delta f$ determined by the integrated
dispersion at the synthesis pump.
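The cancellation at the heart of this measurement is simple algebra, sketched below: the helper-laser frequency enters both beat notes identically and drops out of their difference, leaving only the tooth separation and a multiple of the low-noise EO drive frequency. Only the 6.0156 GHz EO repetition is taken from the text; the tooth frequencies and EO tooth indices are illustrative.

```python
def mixed_beat(laser_offset_hz):
    """Difference of the two EO-comb beat notes for a given excursion of
    the free-running 1300 nm helper laser. The result is independent of
    the excursion: the helper-laser noise cancels in the mixing."""
    f_m = 6.0156e9                        # EO comb repetition (from the text)
    f_laser = 230.0e12 + laser_offset_hz  # free-running seed (illustrative)

    f_p = 229.9e12                        # primary-portion tooth (toy value)
    f_s = f_p + 200.0e9                   # synthesized tooth, delta_f ~ 200 GHz

    # EO teeth chosen on the same side (above) of each microcomb tooth.
    k, j = -16, 17
    beat_p = (f_laser + k * f_m) - f_p
    beat_s = (f_laser + j * f_m) - f_s
    return beat_p - beat_s                # = (k - j) * f_m + (f_s - f_p)
```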
Hence, based on the experimental demonstrations throughout this work, it seems
likely that the ultra-broadband microcombs we have demonstrated can be used
for metrological purposes, such as $f-2f$ self-referencing, provided that,
along with the comb repetition frequency and carrier-envelope offset
frequency, the overlap-offset frequency $\delta f$ is measured. In such a
scenario, where both pumps can be independently stabilized, the large power
available from the synthesis pump would be of particular appeal, as it could
be efficiently frequency doubled and, through proper dispersion engineering of
the resonator, could be made resonant with the high frequency DW.
Discussion — Zhang and colleagues have recently realized a similar dual
pumping configuration, in which an auxiliary laser at 1330 nm spectrally
broadens a 1550 nm soliton microcomb down to wavelengths of around 1275 nm
Zhang _et al._ (2020). The spectrum extends from 1275 nm to 1720 nm (taken at
the -50 dB points relative to the maximum), an extent of 60 THz, and the
auxiliary laser is responsible for about 22 THz of spectral extension on the
high frequency side, resulting in about a 1.5$\times$ increase of the
bandwidth of the comb. In contrast, here our 1557 nm auxiliary laser causes a
broadening of the comb spectrum on both low and high frequency sides of the
original soliton spectrum centered at 1063 nm, with a comb extending from 737
nm to 2190 nm, an extent of 270 THz, and the auxiliary laser is responsible
for about 93 THz and 79 THz of spectral extension on the low and high
frequency sides, respectively (173 THz broadening in total), in our case
extending the comb by more than a factor of 2.6. Perhaps more important than the
characteristics of the comb broadening is its fundamental physical origin. In
Zhang _et al._ (2020), spectral broadening is generally attributed to XPM
effects, which leaves open many questions about the relationship between the
original comb and the spectrally broadened region - in particular, whether any
offset between the two regions is indicative of independent frequency combs.
Here, we show that the dominant process behind spectral broadening in our
system is FWM-BS, so that soliton comb teeth generated by the primary pump are
spectrally translated to both the low frequency and high frequency sides of
the spectrum. In addition, we show that the overlap offset frequency between
the primary region of the comb and the spectrally translated one is inherent
to the microring resonator geometry, and is not a signature of two independent
frequency combs. In particular, the natural discrepancy between the DKS comb
teeth and cavity resonance frequency causes the synthesis pump to be offset
from the nearest DKS comb tooth by the value of the integrated dispersion at
this pumped mode, which we confirm through measurement of the overlap offset
frequency as a function of ring geometry. Finally, we have performed several
measurements that indicate that the FWM-BS process in our system directly
translates primary DKS comb teeth to another spectral window, resulting in a
comb state with low-noise in each portion as well as in the overlap region,
pointing to the potential use of such ultra-broadband frequency combs for
metrological purposes. This allows us to introduce a simplifying tool for
designing this new kind of frequency comb, by summarizing the non-linear
interaction and the position of all the DWs generated by the dual-pump system
through the synthetic dispersion. Such coherent ultra-broadband frequency
combs through DKS spectral translation could find many applications, in
particular, by harnessing the high power of the synthesis pump, which would
aid in $f$-2$f$ self referencing, for monolithic integration of $f$-3$f$
within a $\chi^{(3)}$ platform, and by pushing the limit of coherent DW
generation further into the visible.
## References
* Riemensberger _et al._ (2020) Johann Riemensberger, Anton Lukashchuk, Maxim Karpov, Wenle Weng, Erwan Lucas, Junqiu Liu, and Tobias J. Kippenberg, “Massively parallel coherent laser ranging using a soliton microcomb,” Nature 581, 164–170 (2020).
* Picqué and Hänsch (2019) Nathalie Picqué and Theodor W. Hänsch, “Frequency comb spectroscopy,” Nature Photonics 13 (2019), 10.1038/s41566-018-0347-5.
* Yu _et al._ (2019) Su-Peng Yu, Travis C. Briles, Gregory T. Moille, Xiyuan Lu, Scott A. Diddams, Kartik Srinivasan, and Scott B. Papp, “Tuning Kerr-Soliton Frequency Combs to Atomic Resonances,” Physical Review Applied 11, 044017 (2019).
* Newman _et al._ (2019) Zachary L. Newman, Vincent Maurice, Tara Drake, Jordan R. Stone, Travis C. Briles, Daryl T. Spencer, Connor Fredrick, Qing Li, Daron Westly, B. R. Ilic, Boqiang Shen, Myoung-Gyun Suh, Ki Youl Yang, Cort Johnson, David M. S. Johnson, Leo Hollberg, Kerry J. Vahala, Kartik Srinivasan, Scott A. Diddams, John Kitching, Scott B. Papp, and Matthew T. Hummon, “Architecture for the photonic integration of an optical atomic clock,” Optica 6, 680 (2019).
* Herr _et al._ (2016) Tobias Herr, Michael L. Gorodetsky, and Tobias J. Kippenberg, “Nonlinear Optical Cavity Dynamics,” (John Wiley & Sons, Ltd, 2016).
* Udem _et al._ (2002) Th. Udem, R. Holzwarth, and T. W. Hänsch, “Optical frequency metrology,” Nature 416 (2002), 10.1038/416233a.
* Diddams _et al._ (2020) Scott A. Diddams, Kerry Vahala, and Thomas Udem, “Optical frequency combs,” Science 369 (2020), 10.1126/science.aay3676.
* Coen _et al._ (2012) Stéphane Coen, Hamish G Randle, Thibaut Sylvestre, and Miro Erkintalo, “Modeling of octave-spanning Kerr frequency combs using a generalized mean-field Lugiato–Lefever model,” Optics Letters 38 (2012), 10.1364/ol.38.000037.
* Okawachi _et al._ (2014) Yoshitomo Okawachi, Michael R. E. Lamont, Kevin Luke, Daniel O. Carvalho, Mengjie Yu, Michal Lipson, and Alexander L. Gaeta, “Bandwidth shaping of microresonator-based frequency combs via dispersion engineering,” Optics Letters 39, 3535 (2014).
* Brasch _et al._ (2016) Victor Brasch, Michael Geiselmann, Tobias Herr, Grigoriy Lihachev, Martin HP Pfeiffer, Michael L. Gorodetsky, and Tobias J. Kippenberg, “Photonic chip–based optical frequency comb using soliton Cherenkov radiation,” Science 351 (2016).
* Li _et al._ (2017) Qing Li, Travis C. Briles, Daron A. Westly, Tara E. Drake, Jordan R. Stone, B. Robert Ilic, Scott A. Diddams, Scott B. Papp, and Kartik Srinivasan, “Stably accessing octave-spanning microresonator frequency combs in the soliton regime,” Optica 4 (2017), 10.1364/OPTICA.4.000193.
* Pfeiffer _et al._ (2017) Martin H. P. Pfeiffer, Clemens Herkommer, Junqiu Liu, Hairun Guo, Maxim Karpov, Erwan Lucas, Michael Zervas, and Tobias J. Kippenberg, “Octave-spanning dissipative Kerr soliton frequency combs in Si${}_{\textrm{3}}$N${}_{\textrm{4}}$ microresonators,” Optica 4, 684–691 (2017).
* Karpov _et al._ (2018) Maxim Karpov, Martin H. P. Pfeiffer, Junqiu Liu, Anton Lukashchuk, and Tobias J. Kippenberg, “Photonic chip-based soliton frequency combs covering the biological imaging window,” Nature Communications 9 (2018), 10.1038/s41467-018-03471-x.
* Moille _et al._ (2018) Gregory Moille, Qing Li, Sangsik Kim, Daron Westly, and Kartik Srinivasan, “Phased-locked two-color single soliton microcombs in dispersion-engineered Si${}_{\textrm{3}}$N${}_{\textrm{4}}$ resonators,” Optics Letters 43 (2018), 10.1364/OL.43.002772.
* Zhang _et al._ (2012) Lin Zhang, Qiang Lin, Yang Yue, Yan Yan, Raymond G. Beausoleil, and Alan E. Willner, “Silicon waveguide with four zero-dispersion wavelengths and its application in on-chip octave-spanning supercontinuum generation,” Optics express 20 (2012).
* Dorche _et al._ (2020) Ali Eshaghian Dorche, Doğan Timuçin, Krishnan Thyagarajan, Thomas Wunderer, Noble Johnson, and David Schwartz, “Advanced dispersion engineering of a III-nitride micro-resonator for a blue frequency comb,” Optics Express 28, 30542–30554 (2020).
* Chen _et al._ (2020) Hao-Jing Chen, Qing-Xin Ji, Heming Wang, Qi-Fan Yang, Qi-Tao Cao, Qihuang Gong, Xu Yi, and Yun-Feng Xiao, “Chaos-assisted two-octave-spanning microcombs,” Nature Communications 11 (2020), 10.1038/s41467-020-15914-5.
* Szabados _et al._ (2020) Jan Szabados, Danila N. Puzyrev, Yannick Minet, Luis Reis, Karsten Buse, Alberto Villois, Dmitry V. Skryabin, and Ingo Breunig, “Frequency Comb Generation via Cascaded Second-Order Nonlinearities in Microresonators,” Physical Review Letters 124 (2020), 10.1103/PhysRevLett.124.203902.
* Hendry _et al._ (2020) Ian Hendry, Luke S. Trainor, Yiqing Xu, Stéphane Coen, Stuart G. Murdoch, Harald G. L. Schwefel, and Miro Erkintalo, “Experimental observation of internally pumped parametric oscillation and quadratic comb generation in a $\chi^{\textrm{(2)}}$ whispering-gallery-mode microresonator,” Optics Letters 45 (2020), 10.1364/OL.385751.
* Bruch _et al._ (2020) Alexander W. Bruch, Xianwen Liu, Zheng Gong, Joshua B. Surya, Ming Li, Chang-Ling Zou, and Hong X. Tang, “Pockels Soliton Microcomb,” Nature Photonics (2020), 10.1038/s41566-020-00704-8.
* Yulin _et al._ (2004) A. V. Yulin, D. V. Skryabin, and P. St J. Russell, “Four-wave mixing of linear waves and solitons in fibers with higher-order dispersion,” Optics Letters 29 (2004), 10.1364/OL.29.002411.
* Xu _et al._ (2013) Y. Q. Xu, M. Erkintalo, G. Genty, and S. G. Murdoch, “Cascaded Bragg scattering in fiber optics,” Optics Letters 38, 142 (2013).
* Li _et al._ (2016) Qing Li, Marcelo Davanço, and Kartik Srinivasan, “Efficient and low-noise single-photon-level frequency conversion interfaces using silicon nanophotonics,” Nature Photonics 10 (2016), 10.1038/nphoton.2016.64.
* Chembo and Menyuk (2013) Yanne K Chembo and Curtis R Menyuk, “Spatiotemporal Lugiato-Lefever formalism for Kerr-comb generation in whispering-gallery-mode resonators,” Physical Review A 87 (2013), 10.1103/PhysRevA.87.053852.
* Taheri _et al._ (2017) Hossein Taheri, Andrey B. Matsko, and Lute Maleki, “Optical lattice trap for Kerr solitons,” The European Physical Journal D 71 (2017), 10.1140/epjd/e2017-80150-6.
* Kippenberg _et al._ (2018) Tobias J. Kippenberg, Alexander L. Gaeta, Michal Lipson, and Michael L. Gorodetsky, “Dissipative Kerr solitons in optical microresonators,” Science 361 (2018), 10.1126/science.aan8083.
* Lu _et al._ (2019) Zhizhou Lu, Weiqiang Wang, Wenfu Zhang, Sai T. Chu, Brent E. Little, Mulong Liu, Leiran Wang, Chang-Ling Zou, Chun-Hua Dong, Bailing Zhao, and Wei Zhao, “Deterministic generation and switching of dissipative Kerr soliton in a thermally controlled micro-resonator,” AIP Advances 9, 025314 (2019).
* Zhou _et al._ (2019) Heng Zhou, Yong Geng, Wenwen Cui, Shu-Wei Huang, Qiang Zhou, Kun Qiu, and Chee Wei Wong, “Soliton bursts and deterministic dissipative Kerr soliton generation in auxiliary-assisted microcavities,” Light: Science & Applications 8, 50 (2019).
* Zhang _et al._ (2019) Shuangyou Zhang, Jonathan M. Silver, Leonardo Del Bino, Francois Copie, Michael T. M. Woodley, George N. Ghalanos, Andreas Ø. Svela, Niall Moroney, and Pascal Del’Haye, “Sub-milliwatt-level microresonator solitons with extended access range using an auxiliary laser,” Optica 6, 206 (2019).
* Zhang _et al._ (2020) Shuangyou Zhang, Jonathan M. Silver, Toby Bi, and Pascal Del’Haye, “Spectral extension and synchronization of microcombs in a single microresonator,” Nature Communications 11, 6384 (2020).
* Spencer _et al._ (2018) Daryl T. Spencer, Tara Drake, Travis C. Briles, Jordan Stone, Laura C. Sinclair, Connor Fredrick, Qing Li, Daron Westly, B. Robert Ilic, Aaron Bluestone, Nicolas Volet, Tin Komljenovic, Lin Chang, Seung Hoon Lee, Dong Yoon Oh, Myoung-Gyun Suh, Ki Youl Yang, Martin H. P. Pfeiffer, Tobias J. Kippenberg, Erik Norberg, Luke Theogarajan, Kerry Vahala, Nathan R. Newbury, Kartik Srinivasan, John E. Bowers, Scott A. Diddams, and Scott B. Papp, “An optical-frequency synthesizer using integrated photonics,” Nature 557, 81–85 (2018).
* Moille _et al._ (2019) Gregory Moille, Qing Li, Travis C. Briles, Su-Peng Yu, Tara Drake, Xiyuan Lu, Ashutosh Rao, Daron Westly, Scott B. Papp, and Kartik Srinivasan, “Broadband resonator-waveguide coupling for efficient extraction of octave-spanning microcombs,” Optics Letters 44, 4737–4740 (2019).
* Herr _et al._ (2012) T. Herr, K. Hartinger, J. Riemensberger, C. Y. Wang, E. Gavartin, R. Holzwarth, M. L. Gorodetsky, and T. J. Kippenberg, “Universal formation dynamics and noise of Kerr-frequency combs in microresonators,” Nature Photonics 6 (2012), 10.1038/nphoton.2012.127.
* Raja _et al._ (2019) Arslan S. Raja, Andrey S. Voloshin, Hairun Guo, Sofya E. Agafonova, Junqiu Liu, Alexander S. Gorodnitskiy, Maxim Karpov, Nikolay G. Pavlov, Erwan Lucas, Ramzil R. Galiev, Artem E. Shitikov, John D. Jost, Michael L. Gorodetsky, and Tobias J. Kippenberg, “Electrically pumped photonic integrated soliton microcomb,” Nature Communications 10 (2019), 10.1038/s41467-019-08498-2.
* Hansson and Wabnitz (2014) T. Hansson and S. Wabnitz, “Bichromatically pumped microresonator frequency combs,” Phys. Rev. A 90, 013811 (2014).
* Skryabin _et al._ (2021) D. V. Skryabin, Z. Fan, A. Villois, and D. N. Puzyrev, “Threshold of complexity and arnold tongues in kerr-ring microresonators,” Phys. Rev. A 103 (2021), 10.1103/PhysRevA.103.L011502.
* Guo _et al._ (2017) H. Guo, M. Karpov, E. Lucas, A. Kordts, M. H. P. Pfeiffer, V. Brasch, G. Lihachev, V. E. Lobanov, M. L. Gorodetsky, and T. J. Kippenberg, “Universal dynamics and deterministic switching of dissipative Kerr solitons in optical microresonators,” Nature Physics 13, 94–102 (2017).
* Hansson _et al._ (2014) T Hansson, D Modotto, and S Wabnitz, “On the numerical simulation of Kerr frequency combs using coupled mode equations,” Optics Communications 312, 134–136 (2014).
Methods
Device design— We use 775 nm thick Si$_{3}$N$_{4}$ ring resonators, which were
fabricated at Ligentec SA, with a fixed ring radius of 23 µm, a ring width
($RW$) that is varied between 1088 nm and 1140 nm across the devices, and a
surrounding silica cladding. The access waveguides for coupling to/from the
rings are tapered down to 200 nm at the facets, resulting in about 6 dB and 5
dB insertion losses per facet at 192 THz and 282 THz respectively. We use a
pulley waveguide with a width of $W=550$ nm, a length of $L_{\mathrm{c}}=9$
µm, and a gap $G=370$ nm. The expected frequency-dependent coupling, computed
using the coupled mode theory formalism developed in ref. Moille _et al._
(2019), exhibits a resonance-free spectrum and $Q_{c}$ that varies within one
order of magnitude over an octave (Extended Data Fig. S2).
Data availability
The data that supports the plots within this paper and other findings of this
study are available from the corresponding authors upon reasonable request.
Acknowledgments
The ring resonators were fabricated at Ligentec Inc. The authors thank
Alfredo de Rossi and Sylvain Combrié for fruitful discussions.
Funding
The authors acknowledge funding from the DARPA APHI, DARPA ACES, DARPA DODOS,
and NIST-on-a-chip programs. A.R. and X.L. acknowledge support under the
Cooperative Research Agreement between the University of Maryland and NIST-
PML, Award no. 70NANB10H193.
Author contributions
G.M developed the theoretical framework, performed the simulations, designed
the ring resonators and conducted the experiments. E.P., J.R.S., and T.S.R.
helped with the experiments, A.R. helped with ring resonator design, and Y.C.
and X.L contributed in the understanding of the physical phenomenon. G.M. and
K.S. wrote the manuscript, with input from all authors, and K.S supervised the
project. All the authors contributed and discussed the content of this
manuscript.
Competing interests
The authors declare no competing interests.
Additional Information
Correspondence and requests for materials should be addressed to G.M. and K.S.
## Extended Data
### Four-Wave Mixing Bragg Scattering Matching Conditions
Four-wave mixing (FWM) relies on the basic conditions of energy and momentum
conservation, which in a microring resonator translate to frequency ($\omega$)
and azimuthal mode number ($M$) conservation. In the case of FWM Bragg
Scattering (FWM-BS), the signal photon (which in our case is a comb tooth from
the primary portion of the comb, labeled $pDKS$) can be scattered to two
spectrally-translated idlers, labeled as $sDKS^{\pm}$, such that these
fundamental energy/momentum conservation equations become:
$\begin{cases}&\omega_{\mathrm{pDKS}}\pm\omega_{\mathrm{pp}}=\omega_{\mathrm{sDKS}}^{\pm}\pm\omega_{\mathrm{sp}}\\\
&M_{\mathrm{pDKS}}\pm M_{\mathrm{pp}}=M_{\mathrm{sDKS}}^{\pm}\pm
M_{\mathrm{sp}}\end{cases}$ (4)
The frequency shift in this spectral translation process is set by the
difference in the frequencies of the primary pump $pp$ and synthesis pump
$sp$, and both a higher frequency idler ($\omega_{\mathrm{sDKS}}^{+}$) and a
lower frequency idler ($\omega_{\mathrm{sDKS}}^{-}$) relative to the signal
can be generated, if they are phase- and frequency-matched Li _et al._
(2016).
In a multi-mode resonator, such as the ring resonators studied here, the
integrated dispersion ($D_{\mathrm{int}}$) effectively characterizes the
discrepancy between the modal frequencies and that of a zero-dispersion
resonator, i.e., one in which all resonances are separated by the same free
spectral range (FSR). $D_{\mathrm{int}}$ is usually referenced at the pumped
mode, so that we introduce the relative mode number with respect to the pumped
mode $\mu=M-M_{\mathrm{pp}}$ (i.e., $\mu_{\mathrm{pp}}=0$), with:
$D_{\mathrm{int}}(\mu)=\omega_{\mathrm{res}}(\mu)-\left(\omega_{\mathrm{pp}}+D_{1}\mu\right)$
(5)
where $\omega_{\mathrm{res}}(\mu)$ are the cavity resonance frequencies, and
$D_{1}=2\pi FSR$, computed at the primary pumped mode.
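As a minimal numerical sketch of Eq. 5, the snippet below evaluates $D_{\mathrm{int}}(\mu)$ for resonance frequencies modeled by a Taylor expansion about the pumped mode; the coefficients $D_{2}$ and $D_{3}$ are illustrative placeholders, not values fitted to the devices in this work.

```python
import numpy as np

def d_int(mu, D2, D3):
    """Integrated dispersion D_int(mu) = omega_res(mu) - (omega_pp + D1*mu),
    with omega_res modeled as omega_pp + D1*mu + (D2/2)*mu**2 + (D3/6)*mu**3.
    The D1 term cancels by construction, leaving the higher-order terms."""
    return D2 / 2 * mu**2 + D3 / 6 * mu**3

# Illustrative coefficients (rad/s), not fitted device values
mu = np.arange(-150, 151)
Dint = d_int(mu, D2=2 * np.pi * 25e6, D3=-2 * np.pi * 15e3)
assert Dint[mu == 0][0] == 0   # D_int is referenced to the pumped mode
```

The nonzero $D_{3}$ term makes $D_{\mathrm{int}}$ asymmetric in $\mu$, which is the asymmetry invoked below to explain why both higher- and lower-frequency idlers can be matched.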
From the above equations, we can compute the frequency of each resonance
of interest from the integrated dispersion, assuming that the signal
$\mu_{\mathrm{pDKS}}$ is part of the primary DKS and that the idler
$\mu_{\mathrm{sDKS}}^{\pm}$ is part of the spectrally-translated (synthesized)
portion of the DKS, and hence on a fixed frequency grid relative to the
synthesis pump. We then have:
$\displaystyle\omega_{\mathrm{pDKS}}=\omega_{\mathrm{pp}}+\mu_{\mathrm{pDKS}}D_{1}$
$\displaystyle\omega_{\mathrm{sp}}=\omega_{\mathrm{pp}}+\mu_{\mathrm{sp}}D_{1}+D_{\mathrm{int}}(\mu_{\mathrm{sp}})$
(6)
$\displaystyle\omega_{\mathrm{sDKS}}^{\pm}=\omega_{\mathrm{pp}}+\mu_{\mathrm{sDKS}}^{\pm}D_{1}+D_{\mathrm{int}}(\mu_{\mathrm{sDKS}}^{\pm})$
This allows us to rewrite the energy/momentum conservation condition for FWM-
BS of Eq. 4 in a single equation such that:
$\displaystyle\left(\mu_{\mathrm{pDKS}}\pm\mu_{\mathrm{pp}}\right)D_{1}=$
$\displaystyle\Big{(}D_{\mathrm{int}}(\mu_{\mathrm{sDKS}}^{\pm})\pm
D_{\mathrm{int}}(\mu_{\mathrm{sp}})\Big{)}+$
$\displaystyle\left(\mu_{\mathrm{sp}}\pm\mu_{\mathrm{sDKS}}^{\pm}\right)D_{1}$
(7)
Because of the asymmetric nature of the integrated dispersion (due to odd-order
dispersion coefficients), both high and low frequency idlers can be obtained,
for different input signals that satisfy the above equation. First, we
study the case of the lower frequency idlers, as described in Fig. 1(a-b).
Eq. 7 in this case becomes:
$\displaystyle\left(\mu_{\mathrm{pDKS}}-\mu_{\mathrm{pp}}\right)D_{1}=$
$\displaystyle\Big{(}D_{\mathrm{int}}(\mu_{\mathrm{sDKS}}^{-})-D_{\mathrm{int}}(\mu_{\mathrm{sp}})\Big{)}+$
$\displaystyle\left(\mu_{\mathrm{sDKS}}^{-}-\mu_{\mathrm{sp}}\right)D_{1}$ (8)
For efficient idler generation, we start by assuming that momentum is
conserved to allow FWM-BS. In the frequency comb, this is trivial as it
involves the simple condition that the number of resonator modes separating
the primary pump and the signal $\mu_{\mathrm{pDKS}}-\mu_{\mathrm{pp}}$ must
be the same as the number of modes separating the synthesis pump and the idler
$\mu_{\mathrm{sDKS}}^{-}-\mu_{\mathrm{sp}}$. This will be respected due to the
large number of comb teeth that can act as the signal. Assuming these two
conditions, frequency- and phase-matching for FWM-BS becomes:
$\displaystyle\begin{cases}D_{\mathrm{int}}(\mu_{\mathrm{sDKS}}^{-})=D_{\mathrm{int}}(\mu_{\mathrm{sp}})\\\
\mu_{\mathrm{sDKS}}^{-}-\mu_{\mathrm{sp}}=\mu_{\mathrm{pDKS}}-\mu_{\mathrm{pp}}\end{cases}$
(9)
This implies that any resonator mode, as long as it is at a lower frequency
than the largest frequency comb tooth, will undergo FWM-BS if it exhibits
the same integrated dispersion value as that at the synthesis pump frequency.
The power of this idler will be limited by the primary comb tooth that
realizes momentum conservation. In our case, we have the synthesis pump at a
lower frequency than the primary pump, so that this condition describes both
the frequency comb shape around the synthesis pump and the newly created DW at
lower frequency.
On the other hand, the higher frequency idler respects the equation:
$\displaystyle
D_{\mathrm{int}}(\mu_{\mathrm{sDKS}}^{+})+\left(\mu_{\mathrm{sDKS}}^{+}-\mu_{\mathrm{pp}}\right)D_{1}=$
$\displaystyle-
D_{\mathrm{int}}(\mu_{\mathrm{sp}})+\left(\mu_{\mathrm{pDKS}}-\mu_{\mathrm{sp}}\right)D_{1}$
(10)
Momentum conservation will be respected for any spacing between the primary
comb tooth and synthesis pump $\mu_{\mathrm{pDKS}}-\mu_{\mathrm{sp}}$ that
matches the idler and primary pump spacing
$\mu_{\mathrm{sDKS}}^{+}-\mu_{\mathrm{pp}}$, leading to the condition:
$\displaystyle\begin{cases}D_{\mathrm{int}}(\mu_{\mathrm{sDKS}}^{+})=-D_{\mathrm{int}}(\mu_{\mathrm{sp}})\\\
\mu_{\mathrm{sDKS}}^{+}-\mu_{\mathrm{pp}}=\mu_{\mathrm{pDKS}}-\mu_{\mathrm{sp}}\end{cases}$
(11)
Therefore, any resonator mode that is at a higher frequency than the
lowest frequency comb tooth, and whose integrated dispersion is equal and
opposite in sign to that at the synthesis pump frequency, will undergo a
FWM-BS process. Interestingly, a DW in the primary comb portion must occur between the
primary pump and the idler, as an integrated dispersion zero-crossing must
happen for the above condition to hold. In Fig. 1(d), this DW occurs at 358
THz, for example.
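The matching conditions of Eqs. (9) and (11) can be illustrated with a short search for modes whose $D_{\mathrm{int}}$ equals $\pm D_{\mathrm{int}}(\mu_{\mathrm{sp}})$. The cubic dispersion curve and the synthesis pump mode below are arbitrary placeholders (in arbitrary units), not the device dispersion.

```python
import numpy as np

def crossings(mu, Dint, target):
    """Mode numbers where D_int(mu) crosses `target` (sign change between
    adjacent modes); exact hits at a grid point, such as the synthesis pump
    mode itself, are not flagged."""
    f = Dint - target
    idx = np.where(np.sign(f[:-1]) * np.sign(f[1:]) < 0)[0]
    return mu[idx]

# Toy cubic integrated dispersion and a hypothetical synthesis pump mode
mu = np.arange(-300, 301)
Dint = 0.5 * mu**2 - 0.002 * mu**3
mu_sp = -60
target = Dint[mu == mu_sp][0]

low_idlers = crossings(mu, Dint, target)    # Eq. (9):  D_int(mu) = +D_int(mu_sp)
high_idlers = crossings(mu, Dint, -target)  # Eq. (11): D_int(mu) = -D_int(mu_sp)
# For this toy curve, the first condition is met near mu = 81 and 228, and the
# second near mu = 265, beyond the D_int zero crossing at mu = 250 -- consistent
# with a DW lying between the primary pump and the higher-frequency idler.
```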
Figure S1: $|$ Tuning of the newly generated DWs by tuning of the synthetic
dispersion. a, Fixed geometric integrated dispersion (green) and resulting
synthetic dispersion (red to blue) for different synthesis pump frequencies
and a fixed primary pump frequency. b, Results of generalized LLE simulations
showing how the comb spectrum changes as a function of synthesis pump
frequency, and how the position of the newly generated DWs is predicted by the
zero crossings of the synthetic dispersion curves in a. The solid green curve
shows the spectral envelope when only the primary pump is applied.
### Lugiato Lefever Model
The Lugiato-Lefever Equation (LLE) can be derived from basic coupled mode
theory Hansson _et al._ (2014); Chembo and Menyuk (2013), which is similar to
the temporal approach using the nonlinear Schrödinger equation (NLSE) with the
assumption of periodic boundary conditions and a slowly varying envelope Coen
_et al._ (2012). In many treatments, the detuning of the pump is simplified
and introduced as a linear term in the equation, making it convenient for
solving a single pump system. However, it has been demonstrated in ref. Taheri
_et al._ (2017) that avoiding such approximation allows for solving a system
under $N$ driving forces. Instead of choosing the frequency grid of the
resonant modes with an origin defined by a single pump, the grid is determined
by the spectral domain under study (with the origin placed at the center of
the grid). The generalized LLE becomes:
$\displaystyle\frac{\partial A(t,\tau)}{\partial t}=$
$\displaystyle\left(-\frac{\alpha}{2}-i\gamma L|A|^{2}\right)A$ (12)
$\displaystyle+$ $\displaystyle
it_{\mathrm{r}}\mathrm{FT}^{-1}\left[D_{\mathrm{int}}^{0}\tilde{A}(t,\omega)\right]$
$\displaystyle+$
$\displaystyle\sum_{\mathrm{p}}\kappa_{\mathrm{ext}}|_{\mathrm{p}}\mathcal{F}_{p}\mathrm{e}^{i\sigma_{\mathrm{p}}}$
where the subscript $p$ refers to either the synthesis pump or the primary
pump ($p=\\{\mathrm{pp};\mathrm{sp}\\}$), and the index $0$ denotes the center
of the simulation domain, so that $D_{\mathrm{int}}^{0}$ is the integrated
dispersion of the resonator referenced to the center of the simulation domain,
with $\mu_{0}=0$. Here $\tilde{A}(t,\omega)=\mathrm{FT}\left[{A}(t,\tau)\right]$
is the Fourier transform of the field, and
$\mathcal{F}_{p}=-i\sqrt{P_{\mathrm{in_{p}}}}$ represents the $p$th driving
field (either the primary or synthesis pump). The phase of the $p$th driving
field is
$\sigma_{\mathrm{p}}=\delta\omega_{p}t+D_{\mathrm{int}}^{0}(\mu_{p})t-\mu_{p}\theta$:
the part involving the fast time (or resonator angle $\theta$) corresponds to
the spectral shift away from the center of the simulation domain, while the
slow-time ($t$) phase accounts for the dispersion shift away from the uniform
spectral grid and for the relative detuning of the $p$th pump from its pumped
mode. Finally, $\kappa_{\mathrm{ext}}|_{\mathrm{p}}$ is the waveguide-resonator
coupling rate for the $p$th pumped mode, $\alpha$ is the total loss of the
resonator, $\gamma$ is the non-linear coefficient (here assumed constant),
$L$ is the resonator round-trip length, and $t_{\mathrm{r}}$ is the round trip
time. We note that the synthetic dispersion model replaces
$D_{\mathrm{int}}^{0}$ with $D_{\mathrm{synth}}$ and uses a single pump field
on a symmetric domain.
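The structure of Eq. (12) can be sketched with a minimal dimensionless split-step integration with two driving terms. Every parameter value below (grid size, losses, toy quadratic dispersion, pump modes, detunings, and drive strengths) is a placeholder chosen for numerical convenience, not a device value from this work.

```python
import numpy as np

N = 256                               # fast-time / mode-grid size
mu = np.fft.fftfreq(N) * N            # relative mode numbers in FFT ordering
theta = 2 * np.pi * np.arange(N) / N  # resonator angle (fast time)

alpha_loss, gamma_L, t_r = 0.2, 1.0, 1.0
D_int0 = 0.01 * mu**2                 # toy quadratic integrated dispersion
pumps = [(0, -0.5, 0.3),              # (mu_p, detuning, kappa_ext * F_p): "primary"
         (-40, -0.5, 0.2)]            # hypothetical "synthesis" pump

def step(A, t, dt):
    # local loss + Kerr phase (exact exponential of the pointwise term)
    A = A * np.exp((-alpha_loss / 2 - 1j * gamma_L * np.abs(A) ** 2) * dt)
    # dispersion applied in the mode basis
    A = np.fft.ifft(np.exp(1j * t_r * D_int0 * dt) * np.fft.fft(A))
    # driving fields with phases sigma_p = (dw + D_int0(mu_p)) t - mu_p * theta
    for mu_p, dw, drive in pumps:
        A = A + dt * drive * np.exp(1j * ((dw + 0.01 * mu_p**2) * t - mu_p * theta))
    return A

A = np.zeros(N, dtype=complex)
dt = 0.05
for k in range(400):
    A = step(A, k * dt, dt)           # field settles to a bounded steady state
```

For these placeholder parameters the field stays bounded; reproducing the comb dynamics of the paper would of course require the actual device dispersion and pump powers.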
Figure S2: $|$ Pulley scheme for broadband resonator-waveguide coupling. a,
Calculated coupling quality factor $Q_{\mathrm{c}}$ for a waveguide width =
550 nm and a ring resonator with $RR=23$ µm and $RW=1100$ nm, for three
different pulley lengths $L_{\mathrm{c}}=\\{9$, $10$, $11$} µm.
$L_{\mathrm{c}}=9$ $\mu$m provides nearly equal $Q_{\mathrm{c}}$ at the
primary and synthesis pump frequencies and limited variation in
$Q_{\mathrm{c}}$ across the full comb spectrum. In contrast, conventional
straight coupling, shown in black, results in severe overcoupling at the
synthesis pump wavelength and severe undercoupling at high frequencies. b,
Experimental ultra-broadband DKS spectrum for $RW=1103$ nm, illustrating the
different regions of interest for in-coupling and out-coupling to/from the
ring resonator. 0 dB is referenced to 1 mW, i.e., dBm.
### Dispersive Wave Tuning Independent of Geometric and Material Dispersion
In this section we further justify use of the term ‘synthetic dispersion’ to
capture the net effects of the new nonlinear-wave mixing processes that occur
in the dual pump system. Usually, dispersion is driven by two main components.
The first is material dispersion, where the chromatic dependence is such that
the dispersion becomes more normal the closer the wavelength is to the band-
gap, and is a significant challenge to reaching short wavelengths in
integrated frequency combs. The second is geometric dispersion, where the
wavelength-dependent confinement of the light within a guided mode geometry
provides a modification of the phase velocity of light different from the bulk
material trend, therefore providing a counter-balance to material dispersion.
Thus, for a fixed platform (i.e. a chosen set of photonic materials), the
typical view is that the geometry must be changed in order to modify the
dispersion of the resonator, and hence the resulting spectral shape of the
generated DKS. In this work, we have shown that by introducing the synthesis
pump, we provide a new degree of freedom in controlling the comb spectrum.
Apart from the significant spectral broadening mediated by the FWM-BS
processes we have described, control of the synthesis pump frequency results
in tuning of the DWs (Fig. S1b). This is a consequence of a change in the
synthesis pumped mode resulting in new modes that satisfy the FWM matching
conditions. The net result is that DW positions are modified without requiring
modification of the geometry or material. This tuning of the DW positions with
synthesis pump frequency is well-reproduced by the synthetic dispersion
(Fig. S1a), whose zero-crossings closely match the DW positions.
### Coupling Design
In order to in-couple two pumps separated by close to 100 THz and out-couple
close to a 300 THz bandwidth comb, straight waveguide coupling cannot be used
as it exhibits a coupling quality factor $Q_{\mathrm{c}}$ (inversely
proportional to the coupling rate $\kappa_{\mathrm{ext}}$) varying by orders
of magnitude over an octave Moille _et al._ (2019). However, pulley couplers
exhibiting phase mismatch can achieve much more spectrally flat coupling over
the bandwidth of interest. In our system, we use a pulley waveguide with a
width of $W=550$ nm, a length of $L_{\mathrm{c}}=9$ µm, and a gap $G=370$ nm.
Such coupling, computed using the coupled mode theory (CMT) formalism
developed in ref. Moille _et al._ (2019), exhibits a resonance-free spectrum
and $Q_{c}$ that varies within one order of magnitude over an octave (Fig.
S2). This allows for efficient extraction of the whole ultra-broadband dual-
pumped frequency comb without any gap in the spectrum, thanks to the
resonance-less coupling. However, this design is quite sensitive to phase
mismatch, and hence to the geometric parameters that define the coupling:
longer pulley lengths would exhibit coupling resonances that limit either the
coupling of the long wavelength pump (case $L_{c}=10$ µm) or that of the
spectral region between the primary pump and the synthesis pump (case
$L_{c}=11$ µm). However, the optimal flat coupling over a large bandwidth
comes at the cost that the coupling quality factor $Q_{c}$ is lower than the
intrinsic quality factor ($Q_{i}\approx 10^{6}$), resulting in over-coupled
devices.
Figure S3: $|$ Experimental setup. Two amplified lasers at 282 THz (1063 nm)
and 192 THz (1557 nm) pump a microring resonator. The output signal is split
between different paths that allow the observation of the spectrum with an OSA
and the comb power exhibiting the DKS step signature (experimental data
labeled as ‘Soliton detection’). One output path is used to measure the DKS
comb tooth frequency through beat-note measurement with different frequency
lasers and a wavemeter for absolute wavelength measurement. Pwr: Powermeter,
CTL: Continuous Tunable Laser, PC: Polarization Controller, PD: PhotoDiode,
OSA: Optical Spectrum Analyzer, ESA: Electrical Spectrum Analyzer, WDM:
Wavelength Demultiplexer. Figure S4: $|$ Linear measurement of the ring
resonator. a, Transmission spectrum around 1550 nm for $RW=1088$ (I) and
$1103$ nm (II), with a zoom-in of a representative resonance highlighted by
the black dashed line in II displayed in panel III, with the nonlinear least
squares fit shown as a red dashed line. The 42 MHz MZI that serves as a
calibration is displayed in green. b, Transmission spectrum around 1050 nm for
the same resonators, namely $RW=1088$ (I) and $1103$ nm (II), with the fit of
a representative resonance highlighted by the black dashed line in II. The
nonlinear least squares fit to the zoomed-in data is shown as a red dashed
line in III. The 83 MHz MZI that serves as a calibration for the 1060 nm band
is displayed in green.
### Characterization setup
The characterization setup used in measurement of the microcomb devices is
shown in Fig. S3. The resonator is pumped in the transverse electric (TE)
polarization in two bands, with a primary pump at 1063 nm amplified by a
ytterbium-doped fiber amplifier and a synthesis pump at 1557 nm amplified with
an erbium-doped fiber amplifier, which are both coupled to the chip using a
wavelength division multiplexer and a lensed fiber. The on-chip primary pump
power, i.e. at 282 THz, is $P_{\mathrm{pp}}=200$ mW (7.5 dB insertion loss per
facet) and the on-chip synthesis pump power $P_{\mathrm{sp}}=250$ mW (6 dB
insertion loss per facet). The generated microcomb is out-coupled and split
between an optical spectrum analyzer and another path allowing for beat-note
measurements and comb-power detection. As expected with the synthesis pump
scheme, appropriate choice of the frequency of each laser thermally stabilizes
the ring resonator and results in a relatively pure Kerr response (i.e.,
thermal effects mitigated) with a clear signature of soliton steps for
different soliton orders (inset to Fig. S3), ultimately reaching the lowest
order single soliton state.
### Linear Measurements
Linear measurements to determine the mode family and the quality factor of the
ring resonator, in both the 1550 and 1060 nm band, are performed by sweeping a
continuous tunable laser (CTL) and recording the transmission spectrum (Fig.
S4). A Mach-Zehnder Interferometer (MZI), with a free spectral range of 42 MHz
and 83 MHz for the 1550 nm and 1060 nm band respectively, is used to calibrate
the laser sweeps. This allows retrieval of the quality factor of the
resonances in both bands, which are in the over-coupled regime as expected
from the broadband-flat coupling described in the previous section. The modes
in the spectra shown in Fig. S4 exhibit $Q_{\mathrm{i}}\approx 1.1\times
10^{6}$ and $Q_{\mathrm{c}}\approx 3.5\times 10^{5}$ for the average intrinsic
and coupled quality factors, respectively.
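The loaded quality factor implied by the quoted average values follows from the standard single-mode relation $1/Q_{\mathrm{loaded}}=1/Q_{\mathrm{i}}+1/Q_{\mathrm{c}}$; a small sketch:

```python
# Loaded quality factor from the average intrinsic and coupling quality
# factors quoted above; Q_c < Q_i confirms the over-coupled regime.
Q_i, Q_c = 1.1e6, 3.5e5

Q_loaded = 1 / (1 / Q_i + 1 / Q_c)
print(round(Q_loaded))                # about 2.7e5, dominated by the coupling loss
```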
# Smooth numbers in Beatty sequences
Roger Baker Department of Mathematics
Brigham Young University
Provo, UT 84602, U.S.A<EMAIL_ADDRESS>
###### Abstract.
Let $\theta$ be an irrational number of finite type and let $\psi\geq 0$. We
consider numbers in the Beatty sequence of integer parts,
$\mathcal{B}(x)=\\{\lfloor\theta n+\psi\rfloor:1\leq n\leq x\\}.$
Let $C>3$. Writing $P(n)$ for the largest prime factor of $n$ and $|\ldots|$
for cardinality, we show that
$|\\{n\in\mathcal{B}(x):P(n)\leq y\\}|=\frac{1}{\theta}\,\Psi(\theta x,y)\
(1+o(1))$
as $x\to\infty$, uniformly for $y\geq(\log x)^{C}$. Here $\Psi(X,y)$ denotes
the number of integers up to $X$ with $P(n)\leq y$. The range of $y$ extends
that given by Akbal [1]. The work of Harper [8] plays a key role in the proof.
###### Key words and phrases:
Beatty sequence, exponential sums over smooth numbers.
###### 2020 Mathematics Subject Classification:
Primary 11N25; secondary 11L03
## 1\. Introduction
A positive integer $n$ is said to be $y$-smooth if $P(n)$, the largest prime
factor of $n$, is at most $y$. We write $\mathfrak{S}(y)$ for the set of
$y$-smooth numbers in $\mathbb{N}$ and
$\Psi(x,y)=|\mathfrak{S}(y)\cap[1,x]|,$
where $|\ldots|$ denotes cardinality.
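For very small ranges, $\Psi(x,y)$ can be computed directly; the sketch below uses a largest-prime-factor sieve (purely illustrative, since the theorem concerns asymptotic ranges far beyond direct computation).

```python
def smooth_count(x, y):
    """Psi(x, y): the number of n <= x with largest prime factor P(n) <= y,
    via a direct largest-prime-factor sieve (fine for small x only)."""
    P = [0] * (x + 1)                 # P[n] ends up as the largest prime factor
    for p in range(2, x + 1):
        if P[p] == 0:                 # p is prime
            for m in range(p, x + 1, p):
                P[m] = p              # later (larger) primes overwrite
    return sum(1 for n in range(1, x + 1) if P[n] <= y)  # P[1] = 0, so 1 counts

print(smooth_count(100, 5))           # the 5-smooth numbers up to 100: 34 of them
```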
Let $\theta>1$ be an irrational number and $\psi\in[0,\infty)$. Arithmetic
properties of the Beatty sequence
$\mathcal{B}(x)=\\{\lfloor\theta n+\psi\rfloor:1\leq n\leq x\\}$
(where $\lfloor\,\rfloor$ denotes the integer part) have been studied in [1, 3,
4, 7], for example. One may conjecture that for $x$ large and $y=y(x)$ not too
small, say $x\geq y\geq(\log x)^{C}$ where $C>1$, we have
(1.1) $|\\{n\in\mathcal{B}(x):P(n)\leq y\\}|=\frac{1}{\theta}\,\Psi(\theta
x,y)\quad(1+o(1))$
where $o(1)$ denotes a quantity tending to 0 as $x$ tends to infinity. Banks
and Shparlinski [4] obtained (1.1) (in a slightly different form) uniformly
for
$\exp((\log x)^{2/3+\varepsilon})\leq y\leq x.$
(We write $\varepsilon$ for an arbitrary positive number.) Under the
additional condition that $\theta$ is of finite type, Akbal [1] obtained (1.1)
uniformly for
(1.2) $\exp((\log\log x)^{5/3+\varepsilon})\leq y\leq x.$
In the present paper we extend the range (1.2).
###### Theorem 1.
Let $\theta>1$ be an irrational number of finite type and $\psi\geq 0$. Then
(1.1) holds uniformly for
$(\log x)^{3+\varepsilon}\leq y\leq x.$
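The asymptotic (1.1) can be illustrated numerically at toy scale; the check below takes $\theta=\sqrt{2}$, $\psi=0$, and small $x$, $y$ (values chosen only for speed), and compares the two sides.

```python
import math

def largest_prime_factor(n):
    """P(n) by trial division; P(1) is taken to be 1."""
    p, best = 2, 1
    while p * p <= n:
        while n % p == 0:
            best, n = p, n // p
        p += 1
    return max(best, n) if n > 1 else best

theta = math.sqrt(2)                  # irrational of finite type (quadratic)
x, y = 2000, 20                       # tiny ranges, purely for illustration

beatty = {math.floor(theta * n) for n in range(1, x + 1)}       # psi = 0
lhs = sum(1 for m in beatty if largest_prime_factor(m) <= y)
rhs = sum(1 for n in range(1, int(theta * x) + 1)
          if largest_prime_factor(n) <= y) / theta              # Psi(theta x, y)/theta
print(lhs, round(rhs, 1))             # the two counts are comparable, as in (1.1)
```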
We recall that an irrational number $\theta$ is said to be of finite type if
($\|\ldots\|$ denoting distance to the nearest integer) we have
$\|m\theta\|\geq\frac{c}{m^{\kappa}}\quad(m\in\mathbb{N})$
for some $c>0$ and $\kappa>0$. We note that if $\theta$ is of finite type,
then so is $\theta^{-1}$.
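The finite-type condition is easy to probe numerically for the classical example $\theta=(1+\sqrt{5})/2$, which is of type $\kappa=1$ with $\liminf_m m\|m\theta\|=1/\sqrt{5}$:

```python
import math

theta = (1 + math.sqrt(5)) / 2        # golden ratio

def dist_to_nearest_int(t):
    return abs(t - round(t))

# m * ||m * theta|| stays bounded away from zero, as finite type requires
vals = [m * dist_to_nearest_int(m * theta) for m in range(1, 5000)]
print(min(vals))
```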
Theorem 1 depends on an estimate for the exponential sum
$S(\theta)=\sum_{\begin{subarray}{c}n\leq x\\\\[2.84526pt]
n\in\,\mathfrak{S}(y)\end{subarray}}e(n\theta).$
Akbal [1] uses the estimate of Fouvry and Tenenbaum [6]: for $3\leq
y\leq\sqrt{x}$, $q\in\mathbb{N}$, $(a,q)=1$, and $\delta\in\mathbb{R}$,
$S\left(\frac{a}{q}+\delta\right)\ll x(1+|\delta
x|)\log^{3}x\left(\frac{y^{1/2}}{x^{1/4}}+\frac{1}{q^{1/2}}+\left(\frac{qy}{x}\right)^{1/2}\right).$
This is unhelpful when, say, $y=(\log x)^{C}$ $(C>1)$ since the trivial bound
for $S(\theta)$ is $x^{1-1/C+o(1)}$ (see e.g. [9]).
We use a procedure of Harper [8, Theorem 1], and to state his result we
introduce some notation. For $2\leq y\leq x$, let $u=\frac{\log x}{\log y}$
and let $\alpha=\alpha(x,y)$ be the solution of
$\sum_{p\leq y}\,\frac{\log p}{p^{\alpha}-1}=\log x.$
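Since the left-hand side of the defining equation is strictly decreasing in $\alpha$, the saddle point $\alpha(x,y)$ can be located by bisection; a sketch (edge cases ignored, tolerances arbitrary):

```python
import math

def primes_upto(y):
    sieve = bytearray([1]) * (y + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(y**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, y + 1, p)))
    return [p for p in range(2, y + 1) if sieve[p]]

def alpha(x, y):
    """Solve sum_{p <= y} log p / (p^a - 1) = log x for a by bisection;
    a in (0, 2) covers the ranges of interest here."""
    ps, target = primes_upto(y), math.log(x)
    f = lambda a: sum(math.log(p) / (p**a - 1) for p in ps) - target
    lo, hi = 1e-6, 2.0
    while hi - lo > 1e-9:
        mid = (lo + hi) / 2
        if f(mid) > 0:                # sum still too large: increase a
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(alpha(10**6, 100))              # lies in (0, 1) since y < x
```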
For convenience, when $\theta=\frac{a}{q}+\delta$ as above, we write
$L=2(1+|\delta x|)$
and
(1.3) $M=u^{3/2}\log u\log x(\log L)^{1/2}(\log qy)^{1/2}.$
In Theorem 1 of [8] it is shown that whenever
(1.4) $q^{2}L^{2}y^{3}\leq x,$
we have
(1.5) $S\left(\frac{a}{q}+\delta\right)\ll\Psi(x,y)\big(q(1+|\delta
x|)\big)^{-\frac{1}{2}+\frac{3}{2}(1-\alpha(x,y))}M.$
We cannot use this bound directly since (1.4) is too restrictive. We adapt
Harper’s argument to obtain
###### Theorem 2.
Let $f$ be a completely multiplicative function, $|f(n)|\leq 1$
$(n\in\mathbb{N})$. Let
$S(f,\theta)=\sum_{\begin{subarray}{c}n\leq x\\\\[2.84526pt]
n\in\mathfrak{S}(y)\end{subarray}}f(n)e(\theta n).$
Let $q\in\mathbb{N}$, $(a,q)=1$, and $\delta\in\mathbb{R}$. Then, with
$\alpha=\alpha(x,y)$, we have
$S\left(f,\frac{a}{q}+\delta\right)\ll\Psi(x,y)\left\\{\big(q(1+|\delta
x|)\big)^{-\frac{1}{2}+\frac{3}{2}\,(1-\alpha)}M+x^{\alpha/2}(qLy^{3})^{\frac{1}{2}}\sqrt{\log
y\log q}\right\\}.$
To save space, we refer frequently to [8] in our proof of Theorem 2 in Section
2. Theorem 1 is deduced in a straightforward manner from Theorem 2 in Section
3.
The factor $f(n)$ in Theorem 2 is not needed elsewhere in the paper, but it
requires no significant effort to include it.
I would like to thank Adam Harper for helpful comments concerning his proof of
(1.5).
## 2\. Proof of Theorem 2.
###### Lemma 1.
Let $2\leq y\leq x$ and $d\geq 1$. Then we have
$\Psi\left(\frac{x}{d},y\right)\ll\frac{1}{d^{\alpha(x,y)}}\ \Psi(x,y).$
###### Proof.
See de la Bretèche and Tenenbaum [5, Théorème 2.4 (i)]. ∎
We write $p(n)$ for the smallest prime factor of $n\in\mathbb{N}$. We begin
the proof of Theorem 2 by noting that the result is trivial for $qLy^{3}\geq
x^{\alpha}$. Suppose now that $qLy^{3}<x^{\alpha}$. Every $y$-smooth number in
$[qLy^{2},x]$ can be written uniquely in the form $mn$, where
$qLy<m\leq qLy^{2},\ \frac{m}{P(m)}\leq qLy,\ m\in\mathfrak{S}(y)$
and
$\frac{qLy^{2}}{m}\leq n\leq\frac{x}{m}\ ,\ p(n)\geq P(m),\ \
n\in\mathfrak{S}(y).$
(We take $m$ to consist of the product of the smallest prime factors of the
number.) With $\theta=\frac{a}{q}+\delta$, we have
(2.1) $\displaystyle S(f,\theta)$
$\displaystyle=\sum_{\begin{subarray}{c}qLy^{2}\leq n\leq x\\\\[2.84526pt]
n\in\mathfrak{S}(y)\end{subarray}}f(n)e(n\theta)+O(\Psi(qLy^{2},y))$
$\displaystyle=U+O(\Psi(qLy^{2},y))$
where
$U=\sum_{\begin{subarray}{c}qLy<m\leq qLy^{2}\\\\[2.84526pt] m/P(m)\leq
qLy\\\\[2.84526pt] m\in\mathfrak{S}(y)\end{subarray}}\
\sum_{\begin{subarray}{c}\frac{qLy^{2}}{m}\leq n\leq\frac{x}{m}\\\\[2.84526pt]
p(n)\geq P(m)\\\\[2.84526pt]
n\in\mathfrak{S}(y)\end{subarray}}f(mn)e(mn\theta).$
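The unique $m\cdot n$ splitting used above (grow $m$ from the smallest prime factors until it first exceeds the threshold $qLy$) can be sketched as follows, with $T$ standing in for $qLy$; this is purely illustrative and does not validate its inputs.

```python
def split_smooth(N, T):
    """Split N as m * n where m is the product of the smallest prime factors
    of N, grown until it first exceeds T. Then T < m <= T * P(m) and
    m / P(m) <= T, matching the decomposition in the text with T = qLy."""
    factors, d, rest = [], 2, N
    while d * d <= rest:
        while rest % d == 0:
            factors.append(d)
            rest //= d
        d += 1
    if rest > 1:
        factors.append(rest)          # prime factors in nondecreasing order
    m = 1
    for p in factors:
        if m > T:
            break
        m *= p                        # absorb smallest primes until m passes T
    return m, N // m

m, n = split_smooth(2**5 * 3**4 * 5**2, 50)   # N = 64800, threshold T = 50
print(m, n)                                    # here 50 < m and m / P(m) <= 50
```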
We now decompose $U$ as
(2.2) $U=\sum_{0\leq j\leq\frac{\log y}{\log 2}}U_{j}$
where
$U_{j}=\sum_{p\leq y}\ \ \sum_{\begin{subarray}{c}2^{j}qLy<m\leq
qLy\min(2^{j+1},p)\\\\[2.84526pt] P(m)=p\end{subarray}}f(m)\
\sum_{\begin{subarray}{c}\frac{qLy^{2}}{m}\leq n\leq\frac{x}{m}\\\\[2.84526pt]
p(n)\geq p\\\\[2.84526pt] n\in\mathfrak{S}(y)\end{subarray}}f(n)e(mn\theta),$
noting that if $P(m)=p\leq y$, then $m$ is $y$-smooth, and the condition
$\frac{m}{P(m)}\leq qLy$ can be written as $m\leq qLyp$.
We apply the Cauchy-Schwarz inequality to $U_{j}$. Let $\sum\limits_{m}$
denote
\sum_{\begin{subarray}{c}2^{j}qLy<m\leq qLy\,\min(2^{j+1},p)\\\\[2.84526pt]
P(m)=p\end{subarray}}$
We obtain
$\displaystyle U_{j}$ $\displaystyle\leq\sqrt{\sum_{p\leq y}\ \sum_{m}1}\
\sqrt{\sum_{2^{j}\leq p\leq y}\
\,\sum_{\frac{2^{j}qLy}{p}<m^{\prime}\leq\frac{qLy}{p}\,\min(2^{j+1},p)}\Bigg{|}\sum_{\frac{qLy^{2}}{m^{\prime}p}\leq
n\leq\frac{x}{m^{\prime}p},p(n)\geq
p,n\in\mathfrak{S}(y)}f(n)e(m^{\prime}pn\theta)\Bigg{|}^{2}}$
$\displaystyle\ll\sqrt{\Psi(2^{j+1}qLy,y)}\ \,\sqrt{\sum_{2^{j}\leq p\leq y}\
\,\sum_{\begin{subarray}{c}n_{1},n_{2}\leq\frac{x}{2^{j}qLy}\\\\[2.84526pt]
p(n_{1}),p(n_{2})\geq p\\\\[2.84526pt]
n_{1},n_{2}\in\mathfrak{S}(y)\end{subarray}}\
\min\left\\{\frac{2^{j+1}qLy}{p},\frac{1}{\|(n_{1}-n_{2})p\theta\|}\right\\}}$
For the last step, we open the square and sum the geometric progression over
$m^{\prime}$. We may restrict the sum over primes to $p\geq 2^{j}$, since
otherwise the sum over $m$ is empty. Our final bound here for $U_{j}$ exactly
matches [8].
Let
$\mathfrak{T}_{j}(r):=\max_{1\leq b\leq r-1}\
\sum_{\begin{subarray}{c}n_{1},n_{2}\leq\frac{x}{2^{j}qLy}\\\\[2.84526pt]
n_{1},n_{2}\in\mathfrak{S}(y)\\\\[2.84526pt] n_{1}-n_{2}\equiv
b\operatorname{mod}r\end{subarray}}\,1.$
Just as in [8], after distinguishing the cases $p\mid q$ and $p\nmid q$, we
arrive at
(2.3) $U_{j}\ll\sqrt{\Psi(2^{j+1}qLy,y)}\,\big(\sqrt{S_{1}}+\sqrt{S_{2}}\big),$
with
$\displaystyle S_{1}=S_{1}(j):=\sum_{\begin{subarray}{c}2^{j}\leq p\leq
y\\\\[2.84526pt] p\nmid q\end{subarray}}\ $
$\displaystyle\mathfrak{T}_{j}(q)\sum_{b=1}^{q-1}\min\left\\{\frac{2^{j+1}qLy}{p}\,,\frac{q}{p}\right\\}$
$\displaystyle+\sum_{\begin{subarray}{c}2^{j}\leq p\leq y\\\\[2.84526pt] p\mid
q\end{subarray}}\mathfrak{T}_{j}\left(\frac{q}{p}\right)\
\sum_{b=1}^{(q/p)-1}\min\left\\{\frac{2^{j+1}qLy}{p},\frac{q}{pb}\right\\}$
and
$\displaystyle S_{2}=S_{2}(j):=\sum_{\begin{subarray}{c}2^{j}\leq p\leq
y\\\\[2.84526pt] p\nmid q\end{subarray}}\frac{1}{p}\
\sum_{\begin{subarray}{c}n_{1},n_{2}\leq\frac{x}{2^{j}qLy}\\\\[2.84526pt]
n_{1}-n_{2}\equiv 0\operatorname{mod}q\\\\[2.84526pt]
n_{1},n_{2}\in\mathfrak{S}(y)\end{subarray}}\min\left\\{2^{j+1}qLy,\frac{1}{|(n_{1}-n_{2})\delta|}\right\\}$
$\displaystyle+\sum_{\begin{subarray}{c}2^{j}\leq p\leq y\\\\[2.84526pt] p\mid
q\end{subarray}}\frac{1}{p}\
\sum_{\begin{subarray}{c}n_{1},n_{2}\leq\frac{x}{2^{j}qLy}\\\\[2.84526pt]
n_{1}-n_{2}\equiv 0\operatorname{mod}{q/p}\\\\[2.84526pt]
n_{1},n_{2}\in\mathfrak{S}(y)\end{subarray}}\min\left\\{2^{j+1}qLy,\frac{1}{|(n_{1}-n_{2})\delta|}\right\\}.$
We now depart to an extent from the argument in [8]; we have extra terms in
the upper bounds in Lemma 2 below, which arise because we do not assume the
upper bound (1.4).
###### Lemma 2.
Let $(\log x)^{1.1}\leq y\leq x^{1/3}$, $q\geq 1$, and $L=2(1+|\delta x|)$.
Then for any $j$, $0\leq j\leq\frac{\log y}{\log 2}$, and any prime $p\mid q$,
we have
(2.4) $\displaystyle\mathfrak{T}_{j}(q)$
$\displaystyle\ll\frac{\Psi(x/2^{j}qLy,y)^{2}}{q}\ q^{1-\alpha(x,y)}\log
x+y\Psi\left(\frac{x}{2^{j}qLy},y\right)$ and (2.5)
$\displaystyle\mathfrak{T}_{j}\left(\frac{q}{p}\right)$
$\displaystyle\ll\frac{\Psi(x/2^{j}qLy,y)^{2}}{q/p}\
\left(\frac{q}{p}\right)^{1-\alpha(x,y)}\log x$ $\displaystyle\hskip
162.6075pt+y\Psi\left(\frac{x}{2^{j}qLy},y\right).$
Under the same hypotheses, we have
(2.6)
$\displaystyle\sum_{\begin{subarray}{c}n_{1},n_{2}\leq\frac{x}{2^{j}qLy}\\\\[2.84526pt]
n_{1}-n_{2}\equiv 0\operatorname{mod}q\\\\[2.84526pt]
n_{1},n_{2}\in\mathfrak{S}(y)\end{subarray}}$
$\displaystyle\min\left\\{2^{j+1}qLy,\frac{1}{|(n_{1}-n_{2})\delta|}\right\\}$
$\displaystyle\ll
2^{j}y\,\Psi\left(\frac{x}{2^{j}qLy},y\right)^{2}(qL)^{1-\alpha(x,y)}\log
x\log L$ $\displaystyle\hskip
54.2025pt+2^{j}qLy^{2}\Psi\left(\frac{x}{2^{j}qLy},y\right)$
and, for any prime $p\mid q$,
(2.7)
$\displaystyle\sum_{\begin{subarray}{c}n_{1},n_{2}\leq\frac{x}{2^{j}qLy}\\\\[2.84526pt]
n_{1}-n_{2}\equiv 0\operatorname{mod}{q/p}\\\\[2.84526pt]
n_{1},n_{2}\in\mathfrak{S}(y)\end{subarray}}$
$\displaystyle\min\left\\{2^{j+1}qLy,\frac{1}{|(n_{1}-n_{2})\delta|}\right\\}$
$\displaystyle\ll
p2^{j}y\,\Psi\left(\frac{x}{2^{j}qLy},y\right)^{2}\left(\frac{qL}{p}\right)^{1-\alpha(x,y)}\log
x\log L$ $\displaystyle\hskip
54.2025pt+2^{j}qLy^{2}\,\Psi\left(\frac{x}{2^{j}qLy},y\right).$
###### Proof.
We have
$\mathfrak{T}_{j}(q)=\max_{1\leq b\leq
q-1}\,\sum_{\begin{subarray}{c}n_{1}\leq x/2^{j}qLy\\\\[2.84526pt]
n_{1}\in\mathfrak{S}(y)\end{subarray}}\ \sum_{\begin{subarray}{c}n_{2}\leq
x/2^{j}qLy\\\\[2.84526pt] n_{2}\equiv
n_{1}-b\operatorname{mod}q\\\\[2.84526pt]
n_{2}\in\mathfrak{S}(y)\end{subarray}}1,$
and just as in [8] the inner sum is
$\ll y+\frac{\Psi(x/2^{j}qLy,y)}{q}\,q^{1-\alpha(x,y)}\log x.$
This leads to the bound (2.4) on summing over $n_{1}$. The bound (2.5) follows
in exactly the same way.
To prove (2.6), (2.7) we distinguish two cases. If $|\delta|\leq 1/x$, then
$L=2(1+|\delta x|)\asymp 1$, and the bounds can be proved exactly as above on
bounding $\min\left\\{2^{j+1}qLy,\frac{1}{|(n_{1}-n_{2})\delta|}\right\\}$ by
$2^{j+1}qLy\ll 2^{j}qy$.
Now suppose $|\delta|>1/x$, so that $L\asymp|\delta x|$. We partition the sum
in (2.6) dyadically. Let us use $\sum^{\dagger}$ to denote a sum over pairs of
integers $n_{1}$, $n_{2}\leq\frac{x}{2^{j}qLy}$ that are $y$-smooth and
satisfy $n_{1}-n_{2}\equiv 0\operatorname{mod}q$. Then we have
$\displaystyle\sideset{}{{}^{\dagger}}{\sum}_{|n_{1}-n_{2}|\leq\frac{x}{2^{j}qL^{2}y}}$
$\displaystyle\min\left\\{2^{j+1}qLy,\frac{1}{|(n_{1}-n_{2})\delta|}\right\\}$
$\displaystyle\ll
2^{j}qLy\sum_{\begin{subarray}{c}n_{1}\leq\frac{x}{2^{j}qLy}\\\\[2.84526pt]
n_{1}\in\mathfrak{S}(y)\end{subarray}}\
\sum_{\begin{subarray}{c}|n_{2}-n_{1}|\leq\frac{x}{2^{j}qL^{2}y}\\\\[2.84526pt]
n_{2}\in\mathfrak{S}(y)\\\\[2.84526pt] n_{2}\equiv
n_{1}\operatorname{mod}q\end{subarray}}1.$
Following [8], but as above not absorbing a term $y$, the last expression is
$\displaystyle\ll
2^{j}qLy\sum_{\begin{subarray}{c}n_{1}\leq\frac{x}{2^{j}qLy}\\\\[2.84526pt]
n_{1}\in\mathfrak{S}(y)\end{subarray}}\left\\{\frac{\Psi(x/2^{j}qLy,y)}{qL}\,(qL)^{1-\alpha}\log
x+y\right\\}$ $\displaystyle\ll
2^{j}y\Psi(x/2^{j}qLy,y)^{2}(qL)^{1-\alpha}\log
x+2^{j}qLy^{2}\Psi(x/2^{j}qLy,y).$
Similarly, for any $r$, $0\leq r\leq(\log L)/\log 2$, we have
$\displaystyle\sideset{}{{}^{\dagger}}{\sum}_{\frac{2^{r}x}{2^{j}qL^{2}y}<|n_{1}-n_{2}|\leq\frac{2^{r+1}x}{2^{j}qL^{2}y}}\min\left\\{2^{j+1}qLy,\frac{1}{|(n_{1}-n_{2})\delta|}\right\\}$
$\displaystyle\hskip 72.26999pt\ll\frac{2^{j}qLy}{2^{r}}\
\sum_{\begin{subarray}{c}n_{1}\leq\frac{x}{2^{j}qLy}\\\\[2.84526pt]
n_{1}\in\mathfrak{S}(y)\end{subarray}}\
\sum_{\begin{subarray}{c}|n_{2}-n_{1}|\leq\frac{2^{r+1}x}{2^{j}qL^{2}y}\\\\[2.84526pt]
n_{2}\in\mathfrak{S}(y)\\\\[2.84526pt] n_{2}\equiv
n_{1}\operatorname{mod}q\end{subarray}}1$
$\displaystyle\qquad\ll\frac{2^{j}qLy}{2^{r}}\
\sum_{\begin{subarray}{c}n_{1}\leq\frac{x}{2^{j}qLy}\\\\[2.84526pt]
n_{1}\in\mathfrak{S}(y)\end{subarray}}\left\\{\Psi\left(\frac{x}{2^{j}qLy},y\right)\left(\frac{2^{r}}{qL}\right)^{\alpha}\log
x+y\right\\}$ $\displaystyle\qquad\ll
2^{j}y\Psi\left(\frac{x}{2^{j}qLy},y\right)^{2}\left(\frac{2^{r}}{qL}\right)^{\alpha}\log
x+\frac{2^{j}qLy^{2}}{2^{r}}\,\Psi\left(\frac{x}{2^{j}qLy},y\right).$
The bound (2.6) follows on summing over $r$.
The bound (2.7) follows in exactly the same way; we lose a factor $p^{\alpha}$
in the first term on the right-hand side in (2.7) because of the weaker
congruence condition. This completes the proof of Lemma 2. ∎
We now assemble our bounds to prove Theorem 2. As noted in [8], for any $p\leq
y$ we have
(2.8)
$\sum_{b=1}^{q-1}\min\left\\{\frac{2^{j+1}qLy}{p},\frac{q}{b}\right\\}\ll
q\log q$
and, for $p\mid q$,
(2.9)
$\sum_{b=1}^{(q/p)-1}\min\left\\{\frac{2^{j+1}qLy}{p},\frac{q}{pb}\right\\}\ll\frac{q}{p}\log
q.$
Let
$A_{j}=\Bigg{(}\sum_{\begin{subarray}{c}2^{j}\leq p\leq y\\\\[2.84526pt]
p\nmid q\end{subarray}}\frac{1}{p}+\sum_{\begin{subarray}{c}2^{j}\leq p\leq
y\\\\[2.84526pt] p\mid
q\end{subarray}}1\Bigg{)}2^{j}y\Psi\left(\frac{x}{2^{j}qLy}\,,y\right)(qL)^{1-\alpha}\log
x\log L.$
We deduce from (2.1)–(2.3), Lemma 2, and (2.8)–(2.9) that
(2.10) $S\left(f,\frac{a}{q}+\delta\right)\ll\Psi(qLy^{2},y)+A+B_{1}+B_{2}$
where
$\displaystyle A$ $\displaystyle=\sum_{0\leq j\leq\frac{\log y}{\log
2}}\sqrt{\Psi(2^{j+1}qLy,y)}\sqrt{\frac{y}{\log y}\,\log
q\Psi\left(\frac{x}{2^{j}qLy},y\right)^{2}q^{1-\alpha}\log x}$
$\displaystyle\quad+\sum_{0\leq j\leq\frac{\log y}{\log
2}}\sqrt{\Psi(2^{j+1}qLy,y)}\sqrt{A_{j}},$ $\displaystyle B_{1}$
$\displaystyle=\sum_{0\leq j\leq\frac{\log y}{\log
2}}\sqrt{\Psi(2^{j+1}qLy,y)}\ \sqrt{\sum_{2^{j}\leq p\leq y}q\log q\cdot
y\Psi\left(\frac{x}{2^{j}qLy},y\right)},$ and $\displaystyle B_{2}$
$\displaystyle=\sum_{0\leq j\leq\frac{\log y}{\log
2}}\sqrt{\Psi(2^{j+1}qLy,y)}\sqrt{\frac{qLy^{3}}{\log
y}\,\Psi\left(\frac{x}{2^{j}qLy},y\right)}.$
For the estimation of $A$ we can appeal to [8]:
(2.11) $A\ll\Psi(x,y)(qL)^{-\frac{1}{2}+\frac{3}{2}\,(1-\alpha(x,y))}M.$
The $\Psi$ functions in $B_{1}$ and $B_{2}$ are estimated using Lemma 1. Thus
$\displaystyle\sqrt{\Psi\left(\frac{x}{2^{j}qLy},y\right)\Psi(2^{j+1}qLy,y)}$
$\displaystyle\ll\Psi(x,y)(2^{j}qLy)^{-\alpha/2}\left(\frac{x}{2^{j+1}qLy}\right)^{-\alpha/2}$
$\displaystyle\ll\Psi(x,y)x^{-\alpha/2},$
leading to (a slightly wasteful) bound
(2.12) $B_{1}+B_{2}\ll\Psi(x,y)\sqrt{y^{3}qL\log q\log y}\ x^{-\alpha/2}.$
The remaining term to be estimated is $\Psi(qLy^{2},y)$, and Lemma 1 gives
(2.13) $\Psi(qLy^{2},y)\ll\left(\frac{x}{qLy^{2}}\right)^{-\alpha}\Psi(x,y).$
This term can be absorbed into the right-hand side of (2.12) since
(2.14) $(qLy^{3})^{\alpha-\frac{1}{2}}\ll(qLy^{3})^{\alpha/2}\ll
x^{\alpha/2}.$
Theorem 2 follows on combining (2.10)–(2.14).
## 3\. Proof of Theorem 1
###### Lemma 3.
Let $u_{1},\ldots,u_{N}\in\mathbb{R}$. Then for any $J\in\mathbb{N}$ and any
$\rho\leq\sigma\leq\rho+1$, we have
$\displaystyle\big{|}|\\{1\leq n\leq N$
$\displaystyle:u_{n}\in[\rho,\sigma]\operatorname{mod}1\\}|-(\sigma-\rho)N\big{|}$
$\displaystyle\qquad\leq\frac{N}{J+1}+3\sum_{j=1}^{J}\
\frac{1}{j}\,\Bigg{|}\sum_{n=1}^{N}e(ju_{n})\Bigg{|}.$
###### Proof.
See e.g. [2], Theorem 2.1. ∎
###### Proof of Theorem 1.
In view of the result of [1] cited above, we may suppose that
(3.1) $(\log x)^{3+\varepsilon}\leq y\leq\exp((\log\log x)^{2}).$
We note that $\lfloor\theta n+\psi\rfloor=m$ for some $m$ if and only if
(3.2) $0<\left\\{\frac{m+1-\psi}{\theta}\right\\}\leq\frac{1}{\theta},$
so that, applying Lemma 3,
$\displaystyle\sum_{n\in\mathcal{B}(x,y)}1$
$\displaystyle=\sum_{\begin{subarray}{c}m\leq\theta x\\\\[2.84526pt]
m\in\mathfrak{S}(y)\\\\[2.84526pt] \text{(3.2) holds}\end{subarray}}1+O(1)$
$\displaystyle=\theta^{-1}\Psi(\theta x,y)+O\Bigg{(}\frac{\Psi(\theta
x,y)}{\log x}+\sum_{j=1}^{[\log x]}\
\frac{1}{j}\Bigg{|}\sum_{\begin{subarray}{c}m\leq\theta x\\\\[2.84526pt]
m\in\mathfrak{S}(y)\end{subarray}}e\left(\frac{jm}{\theta}\right)\Bigg{|}\Bigg{)}$
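The equivalence asserted in (3.2) can be checked directly; the following sketch (assuming, as is standard for Beatty sequences, that $\theta>1$) records the computation:
$\lfloor\theta n+\psi\rfloor=m\;\Longleftrightarrow\;m\leq\theta n+\psi<m+1\;\Longleftrightarrow\;\frac{m+1-\psi}{\theta}-\frac{1}{\theta}\leq n<\frac{m+1-\psi}{\theta}.$
Writing $t=(m+1-\psi)/\theta$, the interval $[t-1/\theta,t)$ has length $1/\theta<1$, and it contains an integer exactly when $0<\\{t\\}\leq 1/\theta$: if so, $n=\lfloor t\rfloor$ lies in the interval, while if $\\{t\\}=0$ or $\\{t\\}>1/\theta$ no integer does.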
For our purposes, then, it suffices to show that
(3.3) $\sum_{\begin{subarray}{c}m\leq\theta x\\\\[2.84526pt]
m\in\mathfrak{S}(y)\end{subarray}}e(j\theta^{-1}m)\ll\Psi(\theta x,y)(\log
x)^{-2}$
uniformly for $1\leq j\leq\log x$. We deduce this from Theorem 2.
By Dirichlet’s theorem there is $q\in\mathbb{N}$, $1\leq q\leq x^{1/2}$, and
$a\in\mathbb{N}$ with $(a,q)=1$ such that
$\delta:=j\theta^{-1}-\frac{a}{q}\in\left[-\frac{1}{qx^{1/2}},\frac{1}{qx^{1/2}}\right].$
Now
$|qj\theta^{-1}-a|\geq\frac{c_{1}}{(qj)^{\kappa}}$
for positive constants $c_{1}$ and $\kappa$, hence
$x^{-1/2}\geq q|\delta|\geq\frac{c_{1}}{(qj)^{\kappa}}$
giving
$q\gg x^{1/2\kappa}(\log x)^{-1}.$
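The last step can be unpacked as follows, recalling that $1\leq j\leq\log x$ (here $c$ denotes a positive constant depending only on $c_{1}$ and $\kappa$):
$\frac{c_{1}}{(qj)^{\kappa}}\leq x^{-1/2}\;\Longrightarrow\;(qj)^{\kappa}\geq c_{1}x^{1/2}\;\Longrightarrow\;q\geq\frac{c\,x^{1/2\kappa}}{j}\gg x^{1/2\kappa}(\log x)^{-1}.$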
We apply Theorem 2 with $j\theta^{-1}$ in place of $\theta$, and $\theta x$ in
place of $x$. Note that
$qL=2q(1+|\delta\theta x|)\ll x^{1/2}$
so that, in view of (3.1),
(3.4) $qLy^{3}\ll x^{5/9}.$
Now, since $y\geq(\log x)^{3+\varepsilon}$, we have
$\alpha(\theta x,y)\geq\frac{2}{3}+\eta\quad(\eta=\eta(\varepsilon)>0);$
see [9], for example. Thus, abbreviating $\alpha(\theta x,y)$ to $\alpha$,
$q^{-\frac{1}{2}+\frac{3}{2}\,(1-\alpha)}\ll q^{-\eta}\ll x^{-\eta/3\kappa}.$
Now, with $M$ as in (1.3) with $\theta x$ in place of $x$,
(3.5) $q^{-\frac{1}{2}+\frac{3}{2}\,(1-\alpha)}M\ll x^{-\eta/4\kappa}.$
Next, recalling (3.4),
(3.6) $(\theta x)^{-\alpha/2}(qLy^{3})^{1/2}\sqrt{\log y\log q}\ll x^{-1/20}.$
Combining (3.5), (3.6) we obtain (3.3). This completes the proof of Theorem 1.
∎
## References
* [1] Y. Akbal, A note on values of Beatty sequences that are free of large prime factors, Colloq. Math. 160 (2020), 53–63.
* [2] R.C. Baker, Diophantine Inequalities, Clarendon Press, Oxford, 1986.
* [3] R.C. Baker and L. Zhao, Gaps between primes in Beatty sequences, Acta Arith. 172 (2016), 207–242.
* [4] W.D. Banks and I.E. Shparlinski, Prime divisors in Beatty sequences, J. Number Theory 123 (2007), 413–425.
* [5] R. de la Brétèche and G. Tenenbaum, Propriétés statistiques des entiers friables, Ramanujan J. 9 (2005), 139–202.
* [6] E. Fouvry and G. Tenenbaum, Entiers sans grand facteur premier en progressions arithmetiques, Proc. London Math. Soc. 63 (1991), 449–494.
* [7] G. Harman, Primes in Beatty sequences in short intervals, Mathematika 62 (2016), 572–586.
* [8] A. Harper, Minor arcs, mean values and restriction theory for exponential sums over smooth numbers, Compos. Math. 152 (2016), 1121–1158.
* [9] A. Hildebrand and G. Tenenbaum, Integers without large prime factors, J. Théor. Nombres Bordeaux 5 (1993), 411–484.
# Fractional double phase Robin problem involving variable order-exponents
without Ambrosetti-Rabinowitz condition
Reshmi Biswas , Sabri Bahrouni and Marcos L. Carvalho Mathematics
Department, Indian Institute of technology Guwahati, Guwahati, Assam 781039,
India Mathematics Department, Faculty of Sciences, University of Monastir,
5019 Monastir, Tunisia Mathematics Institute, Universidade Federal de Goiás,
Brazil<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
We consider a fractional double phase Robin problem involving variable order
and variable exponents. The nonlinearity $f$ is a Carathéodory function
satisfying some hypotheses which do not include the Ambrosetti-Rabinowitz type
condition. By using variational methods, we investigate the multiplicity of
solutions.
###### Key words and phrases:
Variable-order fractional $p(\cdot)$-Laplacian, Double phase problem, Robin
boundary condition, variational methods
2010 Mathematics Subject Classifications: 35R11, 35S15, 47G20, 47J30.
###### Contents
1. 1 Introduction
2. 2 Preliminaries results
1. 2.1 Variable exponent Lebesgue spaces
2. 2.2 Variable order fractional Sobolev spaces with variable exponents
3. 3 Functional setting
4. 4 Proof of Theorem 1.1
1. 4.1 Abstract results
2. 4.2 Geometric condition
3. 4.3 Cerami condition
5. 5 Proof of Theorem 1.2
6. 6 Proof of Theorem 1.3
7. 7 Proof of Theorem 1.4
8. 8 Proof of Theorem 1.5
## 1\. Introduction
In the last few decades, problems involving the $p(x)$-Laplacian, defined as
$(-\Delta)_{p(x)}u:=-\operatorname{div}(|\nabla u|^{p(x)-2}\nabla u),\ x\in\mathbb{R}^{N},$ where
$p:\mathbb{R}^{N}\to[1,\infty)$ is a continuous function, have been studied
intensively due to its major real world appearances in several mathematical
models, for e.g., electrorheological fluid flow, image restorations, etc. (see
[1, 14, 42, 49]). Various parametric boundary value problems with variable
exponents can be found in the book of Rădulescu-Repovš [39] and also one can
refer to the book by Diening et al. [17] for the properties of such operator
and associated variable exponent Lebesgue spaces and variable exponent Sobolev
spaces.
On the other hand, recently, great attention has been focused on the study of
fractional and nonlocal operators of elliptic type, both for pure mathematical
research and in view of concrete real-world applications. In most of these
applications, a fundamental tool to treat these types of problems is provided by the so-
called fractional order Sobolev spaces. The literature on nonlocal operators
and on their applications is very interesting and, up to now, quite large. We
also refer to the recent monographs [18, 36] for a thorough variational
approach of nonlocal problems.
A bridge between fractional order theories and Sobolev spaces with variable
settings was first provided in [27]. In that paper, the authors defined the
fractional Sobolev spaces with variable exponents and introduced the
corresponding fractional $p(\cdot)$-Laplacian as
(1.1) $(-\Delta)_{p(\cdot)}^{s}u(x):=P.V.\int_{\Omega}\frac{\mid
u(x)-u(y)\mid^{p(x,y)-2}(u(x)-u(y))}{\mid
x-y\mid^{N+sp(x,y)}}dy,~{}~{}x\in\Omega,$
where P.V. denotes Cauchy’s principal value,
$p:\Omega\times\Omega\to\mathbb{R}$ is a continuous function with
$1<p(x,y)<\infty$ and $0<s<1$, where $\Omega$ is a smooth domain. The idea of
studying such spaces and the associated operator defined in (1.1) arises from
a natural inquisitiveness to see what results can be recovered when the
standard local $p(x)$-Laplace operator is replaced by the fractional
$p(\cdot)$-Laplacian. Continuing with this thought and inspired by the vast
applications of variable order derivatives (see, e.g., [29, 30, 32, 33, 41,
44] and references therein), Biswas and Tiwari [11] introduced the variable
order fractional Sobolev spaces with variable exponent and corresponding
variable-order fractional $p(\cdot)$-Laplacian by imposing variable growth on
the fractional order $s$, given in (1.1), to study some elliptic problems. In
fact, results regarding fractional $p(\cdot)$-Laplace equations and variable-
order fractional $p(\cdot)$-Laplace equations are in progress, for example, we
refer to [4, 9, 24, 25] and [6, 12, 47], respectively.
In this paper, we are interested in the following problem:
(1.2)
$\begin{cases}\mathcal{L}_{p_{1},p_{2}}^{s}(u)+|u|^{\overline{p}_{1}(x)-2}u+|u|^{\overline{p}_{2}(x)-2}u=f(x,u)&\mbox{in}\
\Omega\\\
\mathcal{N}^{s}_{p_{1},p_{2}}(u)+\beta(x)(|u|^{\overline{p}_{1}(x)-2}u+|u|^{\overline{p}_{2}(x)-2}u)=0&\mbox{in}\
\mathbb{R}^{N}\setminus\overline{\Omega},\end{cases}$
where
$\mathcal{L}_{p_{1},p_{2}}^{s}(u):=(-\Delta)^{s(.)}_{p_{1}(.)}(u)+(-\Delta)^{s(.)}_{p_{2}(.)}(u),$
$\mathcal{N}^{s}_{p_{1},p_{2}}(u):=\mathcal{N}^{s(\cdot)}_{p_{1}(\cdot)}(u)+\mathcal{N}^{s(\cdot)}_{p_{2}(\cdot)}(u)$
and
(1.3) $\displaystyle(-\Delta)^{s(.)}_{p_{i}(.)}u(x)=\displaystyle
P.V.\int_{\Omega}\frac{|u(x)-u(y)|^{p_{i}(x,y)-2}(u(x)-u(y))}{|x-y|^{N+s(x,y)p_{i}(x,y)}}\,dy,\;i=1,2,\quad\text{for
}x\in{\Omega},$ (1.4)
$\displaystyle\mathcal{N}^{s(\cdot)}_{p_{i}(\cdot)}u(x)=\int_{\Omega}\frac{|u(x)-u(y)|^{p_{i}(x,y)-2}(u(x)-u(y))}{|x-y|^{N+s(x,y)p_{i}(x,y)}}\,dy\quad\text{for
}x\in\mathbb{R}^{N}\setminus\overline{\Omega}.$
Here P.V. denotes the Cauchy principal value, $\Omega\subset\mathbb{R}^{N}$
is a bounded smooth domain, $s,p_{1},p_{2}$ are continuous functions such that
$\overline{p}_{i}(x):=p_{i}(x,x),~{}i=1,2,~{}\overline{s}(x):=s(x,x)$ with
appropriate assumptions described later. The variable exponent $\beta$
verifies the assumption
($\beta$) $\beta\in
L^{\infty}(\mathbb{R}^{N}\setminus\Omega)\quad\text{and}\quad\beta\geq 0\
\text{in}\ \mathbb{R}^{N}\setminus\Omega.$
The operator, defined in (1.2), is called double phase type operator which has
some important applications in biophysics, plasma physics, reaction-diffusion,
etc. (see [13, 22, 45], for e.g.). For more details on applications of such
operators in the constant exponent setup, that is, $(p,q)$-Laplacian equations,
we refer to the survey article [34]; see also [8, 40] for the nonconstant
case. This present paper generalizes some results contained in [37] and [43]
to the case of nonlocal partial differential equations with variable
exponents. If $p_{1}$ and $p_{2}$ are constants, then (1.2) becomes the usual
nonlocal constant exponent differential equation discussed in [3, 15]. Several
results for $(p,q)$-Laplacian problems set in bounded domains and in the whole
of $\mathbb{R}^{N}$ can be found in [10, 21, 35] and the references therein. But if
either $p_{1}$ or $p_{2}$ is a non-constant function, then (1.2) has a more
complicated structure, due to its non-homogeneity and to the presence of
several nonlinear terms; only a few recent works deal with such problems. For
instance, in [48], the authors generalized the double phase problem involving a
local version of the fractional operator with variable exponents, discussed in
[16], and studied the problem involving variable order fractional
$p(\cdot)\&q(\cdot)$-Laplacian but with homogeneous Dirichlet boundary datum,
that is, $u=0$ in $\mathbb{R}^{N}\setminus\Omega.$
We now fix some notation. For any set $\mathcal{D}$ and any
function $\Phi:\mathcal{D}\rightarrow\mathbb{R}$, we set
$\displaystyle\Phi^{-}:=\inf_{\mathcal{D}}\Phi(x)\quad\text{and}\quad\Phi^{+}:=\sup_{\mathcal{D}}\Phi(x).$
We define the function space
$C_{+}(\mathcal{D}):=\\{\Phi:\mathcal{D}\to\mathbb{R}\text{ is uniformly
continuous}:1<\Phi^{-}\leq\Phi^{+}<\infty\\}.$
We consider the following hypotheses on the variable order $s$ and on the
variable exponents $p_{1},p_{2}:$
* $(H_{1})$
$s:\mathbb{R}^{N}\times\mathbb{R}^{N}\rightarrow(0,1)$ is a uniformly
continuous and symmetric function, i.e., $s(x,y)=s(y,x)$ for all
$(x,y)\in\mathbb{R}^{N}\times\mathbb{R}^{N}$ with $0<s^{-}\leq s^{+}<1$.
* $(H_{2})$
$p_{i}\in C_{+}(\mathbb{R}^{N}\times\mathbb{R}^{N})$ are uniformly continuous
and symmetric functions, i.e., $p_{i}(x,y)=p_{i}(y,x),i=1,2$ for all
$(x,y)\in\mathbb{R}^{N}\times\mathbb{R}^{N}$ with $1<p_{1}^{-}\leq
p_{1}^{+}<p_{2}^{-}\leq p_{2}^{+}<+\infty$ such that $s^{+}p_{i}^{+}<N$.
First we study our problem without assuming the well known Ambrosetti-
Rabinowitz (AR, in short) type condition on the nonlinearity $f$, which is
given as
$(AR)\qquad\quad{\exists\theta>p_{2}^{+}}\text{ s.t.}\;\;tf(x,t)>\theta
F(x,t),\;\;\forall|t|>0.$
As is well known, under $(AR)$, any Palais-Smale sequence of the corresponding energy
functional is bounded, which plays an important role in the application of
variational methods. In our problem the nonlinearity
$f:\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ is a Carathéodory function
such that $f(x,0)=0$ for a.e. $x\in\Omega.$ The further assumptions on $f$ are
given below.
* $(f_{1})$
There exists $a\in L^{\infty}(\Omega)$ such that $|f(x,t)|\leq
a(x)\left(1+|t|^{r(x)-1}\right),$ for a.e. $x\in\Omega$ and for all
$t\in\mathbb{R},$ where $r\in C_{+}(\mathbb{R}^{N})$ with $p_{2}^{+}<r^{-}\leq
r(x)<\frac{N\overline{p}_{2}(x)}{N-\overline{s}(x)\overline{p}_{2}(x)}:={{p_{2}}_{s}}^{*}(x)$.
* $(f_{2})$
If $F(x,t):=\int_{0}^{t}f(x,s)ds$, then
$\displaystyle\lim_{|t|\to+\infty}\frac{F(x,t)}{|t|^{p_{2}^{+}}}=0$ uniformly
for a.e. $x\in\Omega.$
* $(f_{3})$
$\displaystyle\lim_{|t|\to 0}\frac{f(x,t)}{|t|^{p_{2}^{+}-2}t}=0$ uniformly
for a.e. $x\in\Omega$.
* $(f_{4})$
Let $\mathcal{F}(x,t)=tf(x,t)-p_{2}^{+}F(x,t).$ Then there exists $b\in
L^{1}(\Omega)$ such that
$\mathcal{F}(x,t)\leq\mathcal{F}(x,\tau)+b(x)\ \text{for a.e.}\ x\in\Omega,\
\text{all}\ 0\leq t\leq\tau\ \text{or all}\ \tau\leq t\leq 0.$
Consider the following function
$g(x,t)=t|t|^{{\frac{p_{2}^{+}}{2}}-2}\log(1+|t|).$
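To illustrate why this $g$ fails $(AR)$, here is a brief sketch (writing $p$ for $p_{2}^{+}$ and $G(x,t):=\int_{0}^{t}g(x,\tau)\,d\tau$):
$t\,g(x,t)=|t|^{p/2}\log(1+|t|),\qquad G(x,t)\sim\frac{2}{p}\,|t|^{p/2}\log(1+|t|)\quad(|t|\to\infty),$
so that $t\,g(x,t)/G(x,t)\to p/2<p_{2}^{+}$ as $|t|\to\infty$; hence no $\theta>p_{2}^{+}$ can satisfy $t\,g(x,t)>\theta G(x,t)$ for all large $|t|$.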
One can check that $g$ does not satisfy $(AR)$ but does satisfy
$(f_{1})$-$(f_{4})$. Therefore, by dropping the $(AR)$ condition, we not only
invite complications in establishing the compactness of Palais-Smale sequences
but also include a larger class of nonlinearities. To overcome this
difficulty, we analyze the Cerami condition (see Definition 4.1), which is
more appropriate for the setting of our problem. Finally, we are in a position
to state the main results of this article.
###### Theorem 1.1.
Let hypotheses $(H_{1})$-$(H_{2})$, $(\beta)$ and $(f_{1})$-$(f_{4})$ hold.
Then there exists a non-trivial weak solution of (1.2).
Next, for odd nonlinearities $f(x,t),$ we state existence results for
infinitely many solutions using the Fountain theorem and the Dual Fountain
theorem, respectively.
###### Theorem 1.2.
Let hypotheses $(H_{1})$-$(H_{2})$, $(\beta)$ and $(f_{1})$-$(f_{4})$ hold.
Also let $f(x,-t)=-f(x,t).$ Then the problem (1.2) has a sequence of
nontrivial weak solutions with unbounded energy.
###### Theorem 1.3.
Let hypotheses $(H_{1})$-$(H_{2})$, $(\beta)$ and $(f_{1})$-$(f_{4})$ hold.
Also let $f(x,-t)=-f(x,t)$. Then the problem (1.2) has a sequence of
nontrivial weak solutions with negative critical values converging to zero.
We prove the next theorem using the symmetric mountain pass theorem.
###### Theorem 1.4.
Let hypotheses $(H_{1})$-$(H_{2})$, $(\beta)$ and $(f_{1})$-$(f_{4})$ hold.
Also let $f(x,-t)=-f(x,t)$. Then the problem (1.2) has a sequence of
nontrivial weak solutions with unbounded energy characterized by a minimax
argument.
In the next theorem, we consider the following concave and convex type
nonlinearity $f:$
* $(f_{5})$
For $\lambda>0$ and $q,r\in C_{+}(\Omega)$ with $1<q^{-}\leq q^{+}<p_{1}^{-}$
and $p_{2}^{+}<r^{-}$,
$f(x,t)=\lambda|t|^{q(x)-2}t+|t|^{r(x)-2}t.$
###### Theorem 1.5.
Let hypotheses $(H_{1})$-$(H_{2})$, $(\beta)$ and $(f_{5})$ hold. Then for all
$\lambda>0,$ the problem (1.2) has a sequence of nontrivial weak solutions
converging to $0$ with negative energy.
It is noteworthy that, to the best of our knowledge, we are the first
to study the above existence results for the problem (1.2) driven
by the double phase variable-order fractional
$p_{1}(\cdot)\&p_{2}(\cdot)$-Laplacian involving a Robin boundary condition and
non-AR type nonlinearities.
###### Remark 1.6.
Throughout this paper $C$ represents a generic positive constant which may vary
from line to line.
## 2\. Preliminaries results
### 2.1. Variable exponent Lebesgue spaces
In this section, we first recall some basic properties of the variable exponent
Lebesgue spaces, which we will use to prove our main results.
For $q\in C_{+}(\Omega)$ define the variable exponent Lebesgue space
$L^{q(\cdot)}(\Omega)$ as
$L^{q(\cdot)}(\Omega):={\Big{\\{}u:\Omega\to\mathbb{R}\
\text{is measurable}:\int_{\Omega}|u(x)|^{q(x)}\;dx<+\infty\Big{\\}}}$
which is a separable, reflexive, uniformly convex Banach space (see [17, 19])
with respect to the Luxemburg norm
$\|{u}\|_{L^{q(\cdot)}(\Omega)}:=\inf\Big{\\{}\eta>0:\int_{\Omega}\Big{|}\frac{u(x)}{\eta}\Big{|}^{q(x)}\;dx\leq
1\Big{\\}}.$
Define the modular $\rho_{\Omega}^{q}:\ L^{q(\cdot)}(\Omega)\to\mathbb{R}$ as
$\rho_{\Omega}^{q}(u):=\int_{\Omega}|u|^{q(x)}\;dx\quad\text{for all }u\in
L^{q(\cdot)}(\Omega).$
###### Proposition 2.1.
([19]) Let $u_{n},u\in L^{q(\cdot)}(\Omega)\setminus\\{0\\}$. Then the
following properties hold:
* (i)
$\eta=\|u\|_{L^{q(\cdot)}(\Omega)}$ if and only if
$\rho_{\Omega}^{q}(\frac{u}{\eta})=1.$
* (ii)
$\rho_{\Omega}^{q}(u)>1$ $(=1;\ <1)$ if and only if
$\|u\|_{L^{q(\cdot)}(\Omega)}>1$ $(=1;\ <1)$, respectively.
* (iii)
If $\|u\|_{L^{q(\cdot)}(\Omega)}>1$, then
$\|u\|_{L^{q(\cdot)}(\Omega)}^{q^{-}}\leq\rho_{\Omega}^{q}(u)\leq\|u\|_{L^{q(\cdot)}(\Omega)}^{q^{+}}$.
* (iv)
If $\|u\|_{L^{q(\cdot)}(\Omega)}<1$, then
$\|u\|_{L^{q(\cdot)}(\Omega)}^{q^{+}}\leq\rho_{\Omega}^{q}(u)\leq\|u\|_{L^{q(\cdot)}(\Omega)}^{q^{-}}$.
* (v)
${\displaystyle\lim_{n\to+\infty}}\|u_{n}-u\|_{L^{q(\cdot)}(\Omega)}=0\iff{\displaystyle\lim_{n\to+\infty}}\rho_{\Omega}^{q}(u_{n}-u)=0.$
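Parts (ii)–(iv) are often invoked together through the following combined two-sided bound, an immediate consequence of the proposition, valid for every $u\in L^{q(\cdot)}(\Omega)$:
$\min\big\\{\|u\|_{L^{q(\cdot)}(\Omega)}^{q^{-}},\,\|u\|_{L^{q(\cdot)}(\Omega)}^{q^{+}}\big\\}\leq\rho_{\Omega}^{q}(u)\leq\max\big\\{\|u\|_{L^{q(\cdot)}(\Omega)}^{q^{-}},\,\|u\|_{L^{q(\cdot)}(\Omega)}^{q^{+}}\big\\}.$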
Let $q^{\prime}$ be the conjugate function of $q,$ that is,
$1/q(x)+1/q^{\prime}(x)=1$.
###### Proposition 2.2.
(Hölder inequality) ([19]) For any $u\in L^{q(\cdot)}(\Omega)$ and $v\in
L^{q^{\prime}(\cdot)}(\Omega)$, we have
$\Big{|}\int_{\Omega}uv\,dx\Big{|}\leq
2\|u\|_{L^{q(\cdot)}(\Omega)}\|{v}\|_{L^{q^{\prime}(\cdot)}(\Omega)}.$
###### Lemma 2.3.
([23, Lemma A.1]) Let $\vartheta_{1}(x)\in L^{\infty}(\Omega)$ such that
$\vartheta_{1}\geq 0,\;\vartheta_{1}\not\equiv 0.$ Let
$\vartheta_{2}:\Omega\to\mathbb{R}$ be a measurable function such that
$\vartheta_{1}(x)\vartheta_{2}(x)\geq 1$ a.e. in $\Omega.$ Then for every
$u\in L^{\vartheta_{1}(x)\vartheta_{2}(x)}(\Omega),$
$\parallel|u|^{\vartheta_{1}(\cdot)}\parallel_{L^{\vartheta_{2}(x)}(\Omega)}\leq\parallel
u\parallel_{L^{\vartheta_{1}(x)\vartheta_{2}(x)}(\Omega)}^{\vartheta_{1}^{-}}+\parallel
u\parallel_{L^{\vartheta_{1}(x)\vartheta_{2}(x)}(\Omega)}^{\vartheta_{1}^{+}}.$
### 2.2. Variable order fractional Sobolev spaces with variable exponents
Next, we define the fractional Sobolev spaces with variable order and variable
exponents (see [11]). Define
$\displaystyle W$
$\displaystyle=W^{s(\cdot,\cdot),\overline{p}(\cdot),p(\cdot,\cdot)}(\Omega)$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}:=\Big{\\{}u\in
L^{\overline{p}(\cdot)}(\Omega):\int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^{p(x,y)}}{\eta^{p(x,y)}|x-y|^{N+s(x,y)p(x,y)}}dxdy<\infty,\text{
for some }\eta>0\Big{\\}}$
endowed with the norm
$\|u\|_{W}:=\inf\Big{\\{}\eta>0:\rho_{W}\left(\frac{u}{\eta}\right)<1\Big{\\}},$
where
$\rho_{W}(u):=\int_{\Omega}\left|u\right|^{\overline{p}(x)}dx+\int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}\,dxdy$
is a modular on $W.$ Then, $(W,\|\cdot\|_{W})$ is a separable reflexive Banach
space (see [11, 24]). On $W$ we also make use of the following norm:
$|u|_{W}:=\|u\|_{L^{\overline{p}(\cdot)}(\mathbb{R}^{N})}+[u]_{W},$
where the seminorm $[\cdot]_{W}$ is defined as follows:
$[u]_{W}:=\inf\Big{\\{}\eta>0:\int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^{p(x,y)}}{\eta^{p(x,y)}|x-y|^{N+s(x,y)p(x,y)}}\,dxdy<1\Big{\\}}.$
Note that $\|\cdot\|_{W}$ and $|\cdot|_{W}$ are equivalent norms on $W$ with
the relation
(2.1) $\frac{1}{2}\|u\|_{W}\leq|u|_{W}\leq 2\|u\|_{W},\ \
\text{~{}for~{}all~{}}u\in W.$
The following embedding result is studied in [11]. We also refer to [24] where
the authors proved the same result when $s(x,y)=s$ is constant.
###### Theorem 2.4 (Sub-critical embedding).
Let $\Omega$ be a smooth bounded domain in $\mathbb{R}^{N}$ or
$\Omega=\mathbb{R}^{N}$. Let $s$ and $p$ satisfy $(H_{1})$ and $(H_{2}),$
respectively and $\gamma\in C_{+}(\overline{\Omega})$ satisfy
$1<\gamma(x)<p_{s}^{*}(x)$ for all $x\in\overline{\Omega}$. In addition, when
$\Omega=\mathbb{R}^{N}$, $\gamma$ is uniformly continuous and
$\overline{p}(x)<\gamma(x)$ for all $x\in\mathbb{R}^{N}$ and
$\inf_{x\in\mathbb{R}^{N}}\left(p_{s}^{*}(x)-\gamma(x)\right)>0$. Then, it
holds that
(2.2) $W\hookrightarrow L^{\gamma(\cdot)}(\Omega).$
Moreover, the embedding is compact.
Notations:
* •
$\delta_{\Omega}^{p}(u)=\displaystyle\int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}\,dxdy.$
* •
For any measurable set $\mathcal{S},$ $|\mathcal{S}|$ denotes the Lebesgue
measure of the set.
## 3\. Functional setting
Now, we give the variational framework of problem (1.2). Let $s,p$ satisfy
$(H_{1})$, $(H_{2}),$ respectively. We set
$\displaystyle|u|_{{X{{}_{p}}}}:=[u]_{s(\cdot),p(\cdot),\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}+\|u\|_{L^{\overline{p}(\cdot)}(\Omega)}+\left\|\beta^{\frac{1}{\overline{p}(\cdot)}}u\right\|_{L^{\overline{p}(\cdot)}(\mathcal{C}\Omega)},$
where $\mathcal{C}\Omega=\mathbb{R}^{N}\setminus\Omega$ and
$\displaystyle
X^{s(\cdot)}_{p(\cdot)}:=\left\\{u\colon\mathbb{R}^{N}\to\mathbb{R}\text{
measurable }:\ |u|_{{X}_{p}}<\infty\right\\}.$
By following standard arguments, it can be seen that $X^{s(\cdot)}_{p(\cdot)}$
is a reflexive Banach space with respect to the norm $|\cdot|_{X_{p}}$ (see [7,
Proposition 3.1]).
Note that the norm $|\cdot|_{X_{p}}$ is equivalent on
$X_{p(\cdot)}^{s(\cdot)}$ to the following norm:
(3.1) $\displaystyle\begin{split}\|u\|_{X_{p}}&=\inf\left\\{\eta\geq 0\
\bigg{|}\ \rho_{p}\left(\frac{u}{\eta}\right)\leq 1\right\\}\\\
&=\inf\left\\{\eta\geq 0\ \bigg{|}\
\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{|u(x)-u(y)|^{p(x,y)}}{\eta^{p(x,y)}p(x,y)|x-y|^{N+s(x,y)p(x,y)}}\,dx\,dy+\int_{\Omega}\frac{|u|^{\overline{p}(x)}}{\overline{p}(x)\eta^{\overline{p}(x)}}\,dx\right.\\\
&\qquad\qquad\qquad\quad\left.+\int_{\mathcal{C}\Omega}\frac{\beta(x)}{\eta^{\overline{p}(x)}\overline{p}(x)}|u|^{\overline{p}(x)}\,dx\leq
1\right\\},\end{split}$
where the modular $\rho_{p}\colon X_{p(\cdot)}^{s(\cdot)}\to\mathbb{R}$ is
defined by
$\displaystyle\rho_{p}\left(u\right)$
$\displaystyle=\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{|u(x)-u(y)|^{p(x,y)}}{p(x,y)|x-y|^{N+s(x,y)p(x,y)}}\,dx\,dy+\int_{\Omega}\frac{|u|^{\overline{p}(x)}}{\overline{p}(x)}\,dx$
(3.2)
$\displaystyle\quad+\int_{\mathcal{C}\Omega}\frac{\beta(x)}{\overline{p}(x)}|u|^{\overline{p}(x)}\,dx.$
The following lemma will be helpful in later considerations. The proof of this
lemma follows by arguments similar to those in [19].
###### Lemma 3.1.
Let $s,p$ and $\beta$ satisfy $(H_{1})$, $(H_{2})$ and $(\beta),$
respectively, and let $u\in X_{p(\cdot)}^{s(\cdot)}$. Then the following hold:
1. (i)
For $u\neq 0$ we have: $\|u\|_{X_{p}}=\eta$ if and only if
$\rho_{p}(\frac{u}{\eta})=1$;
2. (ii)
If $\|u\|_{X_{p}}<1$ then
$\|u\|_{X_{p}}^{p^{+}}\leq\rho_{p}(u)\leq\|u\|_{X_{p}}^{p^{-}}$;
3. (iii)
If $\|u\|_{X_{p}}>1$ then
$\|u\|_{X_{p}}^{p^{-}}\leq\rho_{p}(u)\leq\|u\|_{X_{p}}^{p^{+}}$.
###### Lemma 3.2.
Let $\Omega$ be a smooth bounded domain in $\mathbb{R}^{N}.$ Let $s$ and $p$
satisfy $(H_{1})$ and $(H_{2}),$ respectively, and let $(\beta)$ hold. Then for any
$\gamma\in C_{+}(\overline{\Omega})$ satisfying $1<\gamma(x)<p_{s}^{*}(x)$ for
all $x\in\overline{\Omega},$ there exists a constant
$C(s,p,N,\gamma,\Omega)>0$ such that
$\|u\|_{L^{\gamma(\cdot)}(\Omega)}\leq
C(s,p,N,\gamma,\Omega)\|u\|_{X_{p}}\quad\text{for all }u\in X_{p(\cdot)}^{s(\cdot)},$
moreover this embedding is compact.
###### Proof.
It can easily be seen that $\|u\|_{W}\leq\|u\|_{X_{p}}.$ Now by applying
Theorem 2.4 we get our desired result. ∎
In order to deal with fractional $p_{1}(\cdot)\&p_{2}(\cdot)$-Laplacian
problems, we consider the space
$X:=X^{s(\cdot)}_{p_{1}(\cdot)}\cap X^{s(\cdot)}_{p_{2}(\cdot)}$
endowed with the norm
$|u|_{X}=\|u\|_{{X}_{p_{1}}}+\|u\|_{{X}_{p_{2}}}.$
Clearly $X$ is a reflexive and separable Banach space with respect to the above
norm. It is not difficult to see that we can make use of another norm on $X$
equivalent to $|\cdot|_{X}$, given as
$\|u\|:=\|u\|_{X}=\inf\left\\{\eta\geq 0\ \bigg{|}\
\rho\left(\frac{u}{\eta}\right)\leq 1\right\\},$
where the modular $\rho:X\to\mathbb{R}$ is defined as
$\rho(u)=\rho_{p_{1}}(u)+\rho_{p_{2}}(u)$
such that $\rho_{p_{1}},\rho_{p_{2}}$ are described as in (3.2).
###### Lemma 3.3.
Let hypotheses $(H_{1})$-$(H_{2})$ and $(\beta)$ be satisfied and let $u\in
X$. Then the following hold:
1. (i)
For $u\neq 0$ we have: $\|u\|=\eta$ if and only if $\rho(\frac{u}{\eta})=1$;
2. (ii)
If $\|u\|<1$ then $\|u\|^{p_{2}^{+}}\leq\rho(u)\leq\|u\|^{p_{1}^{-}}$;
3. (iii)
If $\|u\|>1$ then $\|u\|^{p_{1}^{-}}\leq\rho(u)\leq\|u\|^{p_{2}^{+}}$.
###### Lemma 3.4.
Let $\Omega$ be a smooth bounded domain in $\mathbb{R}^{N}.$ Let $s$ and
$p_{i}$ satisfy $(H_{1})$ and $(H_{2}),$ respectively, for $i=1,2$, and let $(\beta)$
hold. Then for any $\gamma\in C_{+}(\overline{\Omega})$ satisfying
$1<\gamma(x)<{p}_{2_{s}}^{*}(x)$ for all $x\in\overline{\Omega},$ there exists
a constant $C(s,p_{i},N,\gamma,\Omega)>0$ such that
$\|u\|_{L^{\gamma(\cdot)}(\Omega)}\leq
C(s,p_{i},N,\gamma,\Omega)\|u\|\text{~{}~{}~{}~{}for all~{}~{}}u\in X,$
moreover this embedding is compact.
###### Proof.
The proof directly follows from the definition of $\|u\|$ and Lemma 3.2. ∎
Throughout this article $X^{*}$ represents the topological dual of $X$.
###### Lemma 3.5.
Let hypotheses $(H_{1})$-$(H_{2})$ and $(\beta)$ be satisfied. Then
$\rho:X\to\mathbb{R}$ and $\rho^{\prime}:X\to X^{*}$ have the following
properties:
* $(i)$
The function $\rho$ is of class $C^{1}(X,\mathbb{R})$ and $\rho^{\prime}:X\to
X^{*}$ is coercive, that is,
$\frac{\langle\rho^{\prime}(u),u\rangle}{\|u\|}\to+\infty\text{\;\;as\;}\|u\|\to+\infty.$
* $(ii)$
$\rho^{\prime}$ is a strictly monotone operator.
* $(iii)$
$\rho^{\prime}$ is a mapping of type $(S_{+})$, that is, if
$u_{n}\rightharpoonup u$ in $X$ and
$\displaystyle\limsup_{n\to+\infty}\langle\rho^{\prime}(u_{n}),u_{n}-u\rangle\leq
0$, then $u_{n}\to u$ strongly in $X$.
###### Proof.
The proof of this result is similar to that of [9, Lemma 4.2], noting that
the sets $\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}$ and
$\Omega\times\Omega$ play symmetric roles. ∎
As proved in [7, Proposition 3.6], the following integration by parts formula
arises naturally for $u\in C^{2}$ functions:
$\displaystyle\frac{1}{2}\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}|u(x)-u(y)|^{p(x,y)-2}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+s(x,y)p(x,y)}}\,dx\,dy$
(3.3)
$\displaystyle=\int_{\Omega}v(-\Delta)^{s(\cdot)}_{p(\cdot)}u\,dx+\int_{\mathcal{C}\Omega}v\mathcal{N}^{s(\cdot)}_{p(\cdot)}u\,dx.$
The previous integration by parts formula leads to the following definition:
###### Definition 3.6.
We say that $u\in X$ is a weak solution to (1.2) if for any $v\in X$ we have
$\displaystyle\mathcal{H}(u,v)$
$\displaystyle=\frac{1}{2}\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{|u(x)-u(y)|^{p_{1}(x,y)-2}(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+s(x,y)p_{1}(x,y)}}dxdy+\int_{\Omega}|u|^{\overline{p}_{1}(x)-2}uvdx$
$\displaystyle+\frac{1}{2}\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{|u(x)-u(y)|^{p_{2}(x,y)-2}(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+s(x,y)p_{2}(x,y)}}dxdy+\int_{\Omega}|u|^{\overline{p}_{2}(x)-2}uvdx$
$\displaystyle-\int_{\Omega}f(x,u)v\,dx+\int_{\mathcal{C}\Omega}\beta(x)|u|^{\overline{p}_{1}(x)-2}uv\,dx+\int_{\mathcal{C}\Omega}\beta(x)|u|^{\overline{p}_{2}(x)-2}uv\,dx$
$\displaystyle=0.$
The problem considered in the present paper has a variational structure:
its weak solutions can be found as critical points of the associated energy
functional. Hence, the problem is amenable to the methods of critical point
theory.
The energy functional associated with problem (1.2) is the functional
$\mathcal{I}\colon X\to\mathbb{R}$ given by
$\displaystyle\mathcal{I}(u)$
$\displaystyle=\frac{1}{2}\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{|u(x)-u(y)|^{p_{1}(x,y)}}{p_{1}(x,y)|x-y|^{N+s(x,y)p_{1}(x,y)}}dxdy+\int_{\Omega}\frac{1}{\overline{p}_{1}(x)}|u|^{\overline{p}_{1}(x)}dx$
$\displaystyle+\frac{1}{2}\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{|u(x)-u(y)|^{p_{2}(x,y)}}{p_{2}(x,y)|x-y|^{N+s(x,y)p_{2}(x,y)}}dxdy+\int_{\Omega}\frac{1}{\overline{p}_{2}(x)}|u|^{\overline{p}_{2}(x)}dx$
$\displaystyle+\int_{\mathcal{C}\Omega}\frac{\beta(x)|u|^{\overline{p}_{1}(x)}}{\overline{p}_{1}(x)}\,dx+\int_{\mathcal{C}\Omega}\frac{\beta(x)|u|^{\overline{p}_{2}(x)}}{\overline{p}_{2}(x)}\,dx-\int_{\Omega}F(x,u)dx.$
A direct computation from [7, Proposition 3.8] shows that the functional
$\mathcal{I}$ is well defined on $X$ and $\mathcal{I}\in C^{1}(X,\mathbb{R})$
with
$\langle\mathcal{I}^{{}^{\prime}}(u),v\rangle=\mathcal{H}(u,v)\quad\text{for
any}\quad v\in X.$
Thus the weak solutions of (1.2) are precisely the critical points of
$\mathcal{I}$.
## 4\. Proof of Theorem 1.1
### 4.1. Abstract results
###### Definition 4.1.
Let $E$ be a Banach space and $E^{*}$ be its topological dual. Suppose that
$\Phi\in C^{1}(E)$. We say that $\Phi$ satisfies the Cerami condition at the
level $c\in\mathbb{R}$ $($the $(C)_{c}$-condition for short$)$ if the
following is true:
$``\text{every sequence}\ (u_{n})_{n\in\mathbb{N}}\subseteq E\ \text{such
that}\ \Phi(u_{n})\to c\ \text{and}$
$(1+\|u_{n}\|_{E})\Phi^{{}^{\prime}}(u_{n})\to 0\ \text{in}\ E^{*}\ \text{as}\
n\to+\infty$ $\text{admits a strongly convergent subsequence}".$
If this condition holds at every level $c\in\mathbb{R}$, then we say that
$\Phi$ satisfies the Cerami condition (the $C$-condition for short).
The $(C)_{c}$-condition is weaker than the $(PS)_{c}$-condition. However, it
was shown in [28] that the $(C)_{c}$-condition suffices to obtain a deformation
lemma, which is fundamental in order to get some minimax theorems. Thus we
have:
###### Theorem 4.2.
If there exist $e\in E$ and $r>0$ such that
$\|e\|>r,\quad\max(\Phi(0),\Phi(e))\leq\inf_{\|x\|=r}\Phi(x),$
and $\Phi\colon E\to\mathbb{R}$ satisfies the $(C)_{c}$-condition with
$c=\inf_{\gamma\in\Gamma}\max_{t\in[0,1]}\Phi(\gamma(t)),$
where
$\Gamma=\\{\gamma\in C([0,1],E)\colon\gamma(0)=0,\ \gamma(1)=e\\},$
then $c\geq\inf_{\|x\|=r}\Phi(x)$ and $c$ is a critical value of $\Phi$.
### 4.2. Geometric condition
###### Lemma 4.3.
Let $(H_{1})$-$(H_{2})$, $(\beta)$ and $(f_{1})$-$(f_{4})$ hold. Then
1. $(i)$
there exist $\alpha>0$ and $R>0$ such that
$\displaystyle\mathcal{I}(u)\geq R>0\quad\text{for any }u\in X\text{ with
}\ \|u\|=\alpha;$
2. $(ii)$
there exists $\varphi\in X$ such that $\mathcal{I}(\varphi)<0$.
###### Proof.
$(i)$ For any $\epsilon>0$, by the assumptions $(f_{1})$-$(f_{3})$, we have
(4.1) $F(x,t)\leq\epsilon|t|^{p_{2}^{+}}+C(\epsilon)|t|^{r(x)},\quad\text{for
a.e.}\ x\in\Omega\ \text{and all}\ t\in\mathbb{R}.$
Hence, using Theorem 2.4, Lemma 3.3, Lemma 2.3 and Lemma 3.4, for any $u\in X$
with $\|u\|<1$ (i.e., $\|u\|_{X_{p_{i}}}<1$, $i=1,2$), we obtain
$\displaystyle\mathcal{I}(u)$
$\displaystyle=\frac{1}{2}\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{|u(x)-u(y)|^{p_{1}(x,y)}}{p_{1}(x,y)|x-y|^{N+s(x,y)p_{1}(x,y)}}dxdy+\int_{\Omega}\frac{1}{\overline{p}_{1}(x)}|u|^{\overline{p}_{1}(x)}dx$
$\displaystyle+\frac{1}{2}\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{|u(x)-u(y)|^{p_{2}(x,y)}}{p_{2}(x,y)|x-y|^{N+s(x,y)p_{2}(x,y)}}dxdy+\int_{\Omega}\frac{1}{\overline{p}_{2}(x)}|u|^{\overline{p}_{2}(x)}dx$
$\displaystyle+\int_{\mathcal{C}\Omega}\frac{\beta(x)|u|^{\overline{p}_{1}(x)}}{\overline{p}_{1}(x)}\,dx+\int_{\mathcal{C}\Omega}\frac{\beta(x)|u|^{\overline{p}_{2}(x)}}{\overline{p}_{2}(x)}\,dx$
$\displaystyle\quad-\int_{\Omega}F(x,u)\,dx$
$\displaystyle\geq\frac{1}{2}\rho(u)-{\epsilon}\int_{\Omega}|u|^{p_{2}^{+}}\,dx-C(\epsilon)\int_{\Omega}|u|^{r(x)}\,dx$
$\displaystyle\geq\frac{1}{2}\|u\|^{p_{2}^{+}}-\epsilon\|u\|^{p_{2}^{+}}_{L^{p_{2}^{+}}(\Omega)}-C(\epsilon)\left\\{\|u\|^{r^{-}}_{L^{r(\cdot)}(\Omega)}+\|u\|^{r^{+}}_{L^{r(\cdot)}(\Omega)}\right\\}$
$\displaystyle\geq\frac{1}{2}\|u\|^{p_{2}^{+}}-\epsilon
C\|u\|^{p_{2}^{+}}-C^{{}^{\prime}}(\epsilon)\|u\|^{r^{-}}$
$\displaystyle=\left(\frac{1}{2}-\epsilon
C\right)\|u\|^{p_{2}^{+}}-C^{{}^{\prime}}(\epsilon)\|u\|^{r^{-}},$
where $C^{{}^{\prime}}(\epsilon)>0$ is a constant. Consider
$0<\epsilon<\frac{1}{4C}.$
Since $p_{2}^{+}<r^{-}$, we can choose $\alpha\in(0,1)$ sufficiently small
such that for all $u\in X$ with $\|u\|=\alpha$
$\displaystyle\mathcal{I}(u)\geq\alpha^{p_{2}^{+}}\left(\frac{1}{2}-{\epsilon
C}\right)-C^{{}^{\prime}}(\epsilon)\alpha^{r^{-}}=R>0.$
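Here the choice of $\alpha$ can be made explicit (a sketch, using $\frac{1}{2}-\epsilon C>\frac{1}{4}$ for $\epsilon<\frac{1}{4C}$):

```latex
\mathcal{I}(u)
 \geq\alpha^{p_{2}^{+}}\Big(\tfrac{1}{4}-C'(\epsilon)\,\alpha^{\,r^{-}-p_{2}^{+}}\Big),
\qquad
\text{so any }\alpha\in(0,1)\text{ with }
C'(\epsilon)\,\alpha^{\,r^{-}-p_{2}^{+}}<\tfrac{1}{8}
\text{ gives }\mathcal{I}(u)\geq\tfrac{1}{8}\,\alpha^{p_{2}^{+}}>0,
```

which is possible precisely because $r^{-}>p_{2}^{+}$ makes the exponent $r^{-}-p_{2}^{+}$ positive.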
The proof of $(i)$ is complete.
$(ii)$ It follows from $(f_{1})$ and $(f_{2})$ that for any positive constant
$M$, there exists a corresponding positive constant $C_{M}$ such that
(4.2) $\displaystyle F(x,t)\geq{M}|t|^{p_{2}^{+}}-C_{M}\quad\text{for a.e.}\ x\in\Omega\ \text{and all}\ t\in\mathbb{R}.$
Let $e\in X,~{}e>0$ with $\|e\|=1$ and $\int_{\Omega}|e|^{p_{2}^{+}}dx>0$ and
$t>1$. Then, using Lemma 3.3 and (4.2), we get
$\displaystyle\mathcal{I}(te)$
$\displaystyle=\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}t^{p_{1}(x,y)}\frac{|e(x)-e(y)|^{p_{1}(x,y)}}{2p_{1}(x,y)|x-y|^{N+sp_{1}(x,y)}}\,dx\,dy+\int_{\mathcal{C}\Omega}t^{\overline{p}_{1}(x)}\frac{\beta(x)|e|^{\overline{p}_{1}(x)}}{\overline{p}_{1}(x)}\,dx$
$\displaystyle\quad+\int_{\Omega}t^{\overline{p}_{1}(x)}\frac{|e|^{\overline{p}_{1}(x)}}{\overline{p}_{1}(x)}\,dx$
$\displaystyle+\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}t^{p_{2}(x,y)}\frac{|e(x)-e(y)|^{p_{2}(x,y)}}{2p_{2}(x,y)|x-y|^{N+sp_{2}(x,y)}}\,dx\,dy+\int_{\mathcal{C}\Omega}t^{\overline{p}_{2}(x)}\frac{\beta(x)|e|^{\overline{p}_{2}(x)}}{\overline{p}_{2}(x)}\,dx$
$\displaystyle\quad+\int_{\Omega}t^{\overline{p}_{2}(x)}\frac{|e|^{\overline{p}_{2}(x)}}{\overline{p}_{2}(x)}\,dx-\int_{\Omega}F(x,te)\,dx$
$\displaystyle\leq
t^{p_{2}^{+}}\rho(e)-t^{p_{2}^{+}}M\int_{\Omega}|e|^{p_{2}^{+}}\,dx+|\Omega|C_{M}$
$\displaystyle=t^{p_{2}^{+}}\left[1-M\int_{\Omega}|e|^{p_{2}^{+}}\,dx\right]+|\Omega|C_{M}.$
We choose $M$ sufficiently large so that
$1-M\int_{\Omega}|e|^{p_{2}^{+}}\,dx<0$; then
$\lim_{t\to+\infty}\mathcal{I}(te)=-\infty.$
Hence, there exists some $t_{0}>0$ such that $\mathcal{I}(\varphi)<0,$ where
$\varphi=t_{0}e.$ Thus the proof of $(ii)$ is complete. ∎
### 4.3. Cerami condition
###### Proposition 4.4.
If hypotheses $(H_{1})$-$(H_{2})$, $(\beta)$ and $(f_{1})$-$(f_{4})$ hold,
then the functional $\mathcal{I}$ satisfies the $(C)_{c}$-condition for any
$c\in\mathbb{R}.$
###### Proof.
In this proof the value of the constant $C$ changes from line to line. We
consider a sequence $(u_{n})_{n\geq 1}\subset X$ such that
(4.3) $|\mathcal{I}(u_{n})|\leq C\quad\text{for some}\quad C>0\quad\text{and
for all}\ n\geq 1,$ (4.4) $(1+\|u_{n}\|)\mathcal{I}^{{}^{\prime}}(u_{n})\to
0\quad\text{in}\quad X^{*}\quad\text{as}\ n\to+\infty.$
From (4.4), we have
(4.5) $|\mathcal{H}(u_{n},v)|\leq\frac{\epsilon_{n}\|v\|}{1+\|u_{n}\|},$
for all $v\in X$, where $\epsilon_{n}\to 0$ as $n\to+\infty$.
In (4.5), we choose $v=u_{n}\in X$ and obtain for all $n\in\mathbb{N}$
$\displaystyle-\frac{1}{2}\delta^{p_{1}}_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}(u_{n})-\frac{1}{2}\delta^{p_{2}}_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}(u_{n})-\rho_{\Omega}^{p_{1}}(u_{n})-\rho_{\Omega}^{p_{2}}(u_{n})$
$\displaystyle-\int_{C\Omega}\beta(x)|u_{n}|^{\overline{p}_{1}(x)}dx-\int_{C\Omega}\beta(x)|u_{n}|^{\overline{p}_{2}(x)}dx+\int_{\Omega}f(x,u_{n}(x))u_{n}(x)dx$
(4.6) $\displaystyle\leq\epsilon_{n}.$
Also, by (4.3) we have for all $n\in\mathbb{N}$,
$\displaystyle\frac{1}{2p_{1}^{+}}\delta^{p_{1}}_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}(u_{n})+\frac{1}{2p_{2}^{+}}\delta^{p_{2}}_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}(u_{n})+\frac{1}{p_{1}^{+}}\rho_{\Omega}^{p_{1}}(u_{n})+\frac{1}{p_{2}^{+}}\rho_{\Omega}^{p_{2}}(u_{n})$
$\displaystyle+\frac{1}{p_{1}^{+}}\int_{C\Omega}\beta(x)|u_{n}|^{\overline{p}_{1}(x)}dx+\frac{1}{p_{2}^{+}}\int_{C\Omega}\beta(x)|u_{n}|^{\overline{p}_{2}(x)}dx$
(4.7) $\displaystyle-\int_{\Omega}F(x,u_{n}(x))dx\leq C.$
Multiplying (4.7) by $p_{2}^{+}$ and adding the resulting inequality to (4.6), we obtain
(4.8) $\int_{\Omega}\mathcal{F}(x,u_{n}(x))dx\leq C\quad\text{for some}\quad
C>0\quad\text{and all}\ n\in\mathbb{N}.$
Claim: The sequence $(u_{n})_{n\geq 1}\subset X$ is bounded.
We argue by contradiction. Suppose that the claim is not true. We may assume
that
(4.9) $\|u_{n}\|\to+\infty\quad\text{as}\quad n\to+\infty.$
We set $w_{n}:=\frac{u_{n}}{\|u_{n}\|}$ for all $n\in\mathbb{N}$. Then
$\|w_{n}\|=1$, for all $n\in\mathbb{N}$. Using reflexivity of $X$ and Lemma
3.4, up to a subsequence, still denoted by $(w_{n})_{n\geq 1}$, as
$n\to+\infty,$ we get
(4.10) $w_{n}\rightharpoonup w\ \text{~{}weakly in}\ X\quad\text{and}\quad
w_{n}\to w\ \text{~{} strongly in}\
L^{\gamma(\cdot)}(\Omega),\;1<\gamma(x)<{p_{2}}_{s}^{*}(x).$
We claim that $w=0$. Indeed, if not then the set
$\widehat{\Omega}:=\\{x\in\Omega:w(x)\not=0\\}$ has positive Lebesgue measure,
i.e., $|\widehat{\Omega}|>0.$ Hence, $|u_{n}(x)|\to+\infty$ for a.e.
$x\in\widehat{\Omega}$ as $n\to+\infty$. On account of hypothesis $(f_{2})$,
for a.e. $x\in\widehat{\Omega}$ we have
(4.11)
$\displaystyle\frac{F(x,u_{n}(x))}{\|u_{n}\|^{p_{2}^{+}}}=\frac{F(x,u_{n}(x))}{|u_{n}(x)|^{p_{2}^{+}}}|w_{n}(x)|^{p_{2}^{+}}\to+\infty\
\text{~{}as~{}}n\to+\infty.$
Then by Fatou’s lemma, we obtain
(4.12)
$\int_{\widehat{\Omega}}\frac{F(x,u_{n}(x))}{\|u_{n}\|^{p_{2}^{+}}}dx\to+\infty\text{~{}~{}as~{}}{n\to+\infty}.$
Hypotheses $(f_{1})$-$(f_{2})$ imply there exists $K>0$ such that
(4.13) $\frac{F(x,t)}{|t|^{p_{2}^{+}}}\geq 1\quad\text{for a.e.}\ x\in\Omega,\
\text{all}\ |t|>K.$
By $(f_{1}),$ there exists a positive constant $\widehat{C}>0$ such that
(4.14) $\displaystyle|F(x,t)|\leq\widehat{C},\text{ for
all~{}}(x,t)\in\overline{\Omega}\times[-K,K].$
Now from (4.13) and (4.14), we get
(4.15) $\displaystyle F(x,t)>C_{0}\text{ ~{}for
all~{}}(x,t)\in\overline{\Omega}\times\mathbb{R},$
where $C_{0}\in\mathbb{R}$ is a constant. The above relation implies
$\displaystyle\frac{F(x,u_{n}(x))-C_{0}}{\|u_{n}\|^{p_{2}^{+}}}\geq
0\text{~{}~{} for all~{}}x\in\overline{\Omega},\text{ for
all~{}}n\in\mathbb{N}.$
that is,
(4.16)
$\displaystyle\frac{F(x,u_{n}(x))}{|u_{n}(x)|^{p_{2}^{+}}}|w_{n}(x)|^{p_{2}^{+}}-\frac{C_{0}}{\|u_{n}\|^{p_{2}^{+}}}\geq
0\text{~{}~{} for all~{}}x\in\overline{\Omega},\text{ for
all~{}}n\in\mathbb{N}.$
By (4.3), (4.9), (4.12), (4.16) and using the fact $\|w_{n}\|=1,$ Lemma 3.1
and Fatou’s lemma, we have
$\displaystyle+\infty$
$\displaystyle=\left[\int_{\widehat{\Omega}}\liminf_{n\to+\infty}\frac{F(x,u_{n}(x))|w_{n}(x)|^{p_{2}^{+}}}{|u_{n}(x)|^{p_{2}^{+}}}dx-\int_{\widehat{\Omega}}\limsup_{n\to+\infty}\frac{C_{0}}{\|u_{n}\|^{p_{2}^{+}}}dx\right]$
$\displaystyle=\int_{\widehat{\Omega}}\liminf_{n\to+\infty}\left[\frac{F(x,u_{n}(x))|w_{n}(x)|^{p_{2}^{+}}}{|u_{n}(x)|^{p_{2}^{+}}}-\frac{C_{0}}{\|u_{n}\|^{p_{2}^{+}}}\right]dx$
$\displaystyle\leq\liminf_{n\to+\infty}\int_{\widehat{\Omega}}\left[\frac{F(x,u_{n}(x))|w_{n}(x)|^{p_{2}^{+}}}{|u_{n}(x)|^{p_{2}^{+}}}-\frac{C_{0}}{\|u_{n}\|^{p_{2}^{+}}}\right]dx$
$\displaystyle\leq\liminf_{n\to+\infty}\int_{{\Omega}}\left[\frac{F(x,u_{n}(x))|w_{n}(x)|^{p_{2}^{+}}}{|u_{n}(x)|^{p_{2}^{+}}}-\frac{C_{0}}{\|u_{n}\|^{p_{2}^{+}}}\right]dx$
$\displaystyle=\left[\liminf_{n\to+\infty}\int_{{\Omega}}\frac{F(x,u_{n}(x))|w_{n}(x)|^{p_{2}^{+}}}{|u_{n}(x)|^{p_{2}^{+}}}dx-\limsup_{n\to+\infty}\int_{{\Omega}}\frac{C_{0}}{\|u_{n}\|^{p_{2}^{+}}}dx\right]$
$\displaystyle=\liminf_{n\to+\infty}\int_{{\Omega}}\frac{F(x,u_{n}(x))}{\|u_{n}\|^{p_{2}^{+}}}dx$
$\displaystyle=\liminf_{n\to+\infty}\bigg{[}\frac{1}{2}\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{1}{\|u_{n}\|^{p_{2}^{+}-p_{1}(x,y)}}\frac{|w_{n}(x)-w_{n}(y)|^{p_{1}(x,y)}}{p_{1}(x,y)|x-y|^{N+s(x,y)p_{1}(x,y)}}dxdy$
$\displaystyle\quad+\frac{1}{2}\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{1}{\|u_{n}\|^{p_{2}^{+}-p_{2}(x,y)}}\frac{|w_{n}(x)-w_{n}(y)|^{p_{2}(x,y)}}{p_{2}(x,y)|x-y|^{N+s(x,y)p_{2}(x,y)}}dxdy$
$\displaystyle\quad+\int_{\Omega}\frac{1}{\|u_{n}\|^{p_{2}^{+}-\overline{p}_{1}(x)}}\frac{|w_{n}|^{\overline{p}_{1}(x)}}{\overline{p}_{1}(x)}dx+\int_{\Omega}\frac{1}{\|u_{n}\|^{p_{2}^{+}-\overline{p}_{2}(x)}}\frac{|w_{n}|^{\overline{p}_{2}(x)}}{\overline{p}_{2}(x)}dx$
$\displaystyle\quad+\int_{\mathcal{C}\Omega}\frac{\beta(x)|w_{n}|^{\overline{p}_{1}(x)}}{\|u_{n}\|^{p_{2}^{+}-\overline{p}_{1}(x)}\overline{p}_{1}(x)}\,dx+\int_{\mathcal{C}\Omega}\frac{\beta(x)|w_{n}|^{\overline{p}_{2}(x)}}{\|u_{n}\|^{p_{2}^{+}-\overline{p}_{2}(x)}\overline{p}_{2}(x)}dx-\frac{\mathcal{I}(u_{n})}{\|u_{n}\|^{p_{2}^{+}}}\bigg{]}$
(4.17) $\displaystyle\leq\liminf_{n\to+\infty}\rho(w_{n})=1.$
Thus we arrive at a contradiction. Hence, $w=0$. Let $\mu\geq 1$ and set
$\kappa:=(2\mu)^{\frac{1}{p_{2}^{-}}}\geq 1$.
Evidently, from (4.10) we have
$w_{n}\to 0\quad\text{~{} strongly in}\quad
L^{\gamma(\cdot)}(\Omega),~{}1<\gamma(x)<{p_{2}}_{s}^{*}(x)$
which combining with $(f_{1})$-$(f_{3})$ and Lebesgue dominated convergence
theorem yields that
(4.18) $\displaystyle\int_{\Omega}F(x,\kappa w_{n})\,dx\to 0\ \text{as}\
n\to+\infty.$
We can find $t_{n}\in[0,1]$ such that
(4.19) $\mathcal{I}(t_{n}u_{n})=\max_{0\leq t\leq 1}\mathcal{I}(tu_{n}).$
Because of (4.9), for sufficiently large $n\in\mathbb{N},$ we have
(4.20) $0<\frac{(2\mu)^{\frac{1}{p_{2}^{-}}}}{\|u_{n}\|}\leq 1.$
Using (4.18), (4.19) and (4.20) and recalling that $\|w_{n}\|=1,$ for
sufficiently large $n\in\mathbb{N},$ it follows that
$\displaystyle\mathcal{I}(t_{n}u_{n})\geq\mathcal{I}\left(\kappa\frac{u_{n}}{\|u_{n}\|}\right)$
$\displaystyle=\mathcal{I}(\kappa w_{n})$
$\displaystyle\geq(\kappa)^{p_{2}^{-}}\frac{1}{2}\rho(w_{n})-\int_{\Omega}F(x,\kappa
w_{n})dx$ $\displaystyle=2\mu\cdot\frac{1}{2}\rho(w_{n})-\int_{\Omega}F(x,\kappa
w_{n})dx$ $\displaystyle=\mu+o_{n}(1).$
Since $\mu>0$ is arbitrary, we have
(4.21) $\mathcal{I}(t_{n}u_{n})\to+\infty\quad\text{as}\quad n\to+\infty.$
From the assumption $f(x,0)=0$ and (4.3) we know that
(4.22) $\mathcal{I}(0)=0\ \text{and}\ \mathcal{I}(u_{n})\leq C\ \text{for
all}\ n\in\mathbb{N}.$
By (4.21) and (4.22), we can infer that, for $n\in\mathbb{N}$ large,
(4.23) $t_{n}\in(0,1).$
From (4.19) and (4.23), we can see that for all $n\in\mathbb{N}$ sufficiently
large,
(4.24) $\displaystyle
0=t_{n}\frac{d}{dt}\mathcal{I}(tu_{n})|_{t=t_{n}}=\langle\mathcal{I}^{{}^{\prime}}(t_{n}u_{n}),t_{n}u_{n}\rangle,$
so,
$\displaystyle\frac{1}{2}\delta^{p_{1}}_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}(t_{n}u_{n})+\rho^{p_{1}}_{\Omega}(t_{n}u_{n})+\int_{\mathcal{C}\Omega}\beta(x)|t_{n}u_{n}|^{\overline{p}_{1}(x)}\,dx$
(4.25)
$\displaystyle+\frac{1}{2}\delta^{p_{2}}_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}(t_{n}u_{n})+\rho^{p_{2}}_{\Omega}(t_{n}u_{n})+\int_{\mathcal{C}\Omega}\beta(x)|t_{n}u_{n}|^{\overline{p}_{2}(x)}\,dx-\int_{\Omega}f(x,t_{n}u_{n})t_{n}u_{n}\,dx=0.$
From hypothesis $(f_{4})$, we obtain for all $n\in\mathbb{N},$
$\mathcal{F}(x,t_{n}u_{n})\leq\mathcal{F}(x,u_{n})+b(x)\ \text{for a.e.}\
x\in\Omega,$
that is,
(4.26)
$f(x,t_{n}u_{n})(t_{n}u_{n})\leq\mathcal{F}(x,u_{n})+b(x)+p_{2}^{+}F(x,t_{n}u_{n})\
\text{for a.e.}\ x\in\Omega.$
Combining (4.25) and (4.26), we deduce
$\displaystyle\frac{1}{2}\delta^{p_{1}}_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}(t_{n}u_{n})+\rho^{p_{1}}_{\Omega}(t_{n}u_{n})+\int_{\mathcal{C}\Omega}\beta(x)|t_{n}u_{n}|^{\overline{p}_{1}(x)}\,dx$
$\displaystyle+\frac{1}{2}\delta^{p_{2}}_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}(t_{n}u_{n})+\rho^{p_{2}}_{\Omega}(t_{n}u_{n})+\int_{\mathcal{C}\Omega}\beta(x)|t_{n}u_{n}|^{\overline{p}_{2}(x)}\,dx-
p_{2}^{+}\int_{\Omega}F(x,t_{n}u_{n})dx$
$\displaystyle\leq\int_{\Omega}\mathcal{F}(x,u_{n})dx+\|b\|_{L^{1}(\Omega)}\
\text{for all}\ n\in\mathbb{N},$
and hence by (4.8), we get
(4.27) $p_{2}^{+}\mathcal{I}(t_{n}u_{n})\leq C\ \text{for all}\
n\in\mathbb{N}.$
We compare (4.21) and (4.27) and arrive at a contradiction. Thus the claim
follows.
On account of this claim, we may assume that
(4.28) $u_{n}\rightharpoonup u\ \text{~{} weakly in}\ X\quad\text{and}\quad
u_{n}\to u\ \text{~{} strongly in}\ L^{\gamma(\cdot)}(\Omega),\
1<\gamma(x)<{p_{2}}_{s}^{*}(x).$
We show in what follows that
$u_{n}\to u\ \text{in}\ X.$
Using (4.28), we have
(4.29) $\displaystyle
o_{n}(1)=\langle\mathcal{I}^{{}^{\prime}}(u_{n}),u_{n}-u\rangle\geq\frac{1}{2}\langle\rho^{\prime}(u_{n}),u_{n}-u\rangle-\int_{\Omega}f(x,u_{n})(u_{n}-u)dx.$
Now by $(f_{1}),$ the Hölder inequality, (4.28), the boundedness of $(u_{n})_{n}$ and
Lemma 2.3, we obtain
$\displaystyle\left|\int_{\Omega}f(x,u_{n})(u_{n}-u)dx\right|$
$\displaystyle\leq\|a\|_{L^{\infty}(\Omega)}\left[\int_{\Omega}|u_{n}-u|\cdot 1\,dx+\int_{\Omega}|u_{n}|^{r(x)-1}|u_{n}-u|dx\right]$
$\displaystyle\leq\|a\|_{L^{\infty}(\Omega)}\left[\|u_{n}-u\|_{L^{r(\cdot)}(\Omega)}\left(1+|\Omega|\right)^{r^{\prime+}}+\|u_{n}-u\|_{L^{r(\cdot)}(\Omega)}\||u_{n}|^{r(\cdot)-1}\|_{L^{r^{\prime}(\cdot)}(\Omega)}\right]$
$\displaystyle\leq
C\left[\|u_{n}-u\|_{L^{r(\cdot)}(\Omega)}+\|u_{n}-u\|_{L^{r(\cdot)}(\Omega)}\left(\|u_{n}\|_{L^{r(\cdot)}(\Omega)}^{r^{+}-1}+\|u_{n}\|_{L^{r(\cdot)}(\Omega)}^{r^{-}-1}\right)\right]$
$\displaystyle\quad\to 0\text{\;\; as\;}n\to+\infty.$
Hence, combining (4.29) with the estimate above and using the $(S_{+})$ property of
$\rho^{\prime}$ (see Lemma 3.5), we have $u_{n}\to u$ strongly in $X$ as
$n\to+\infty$, which shows that the $(C)_{c}$-condition is satisfied. The
proof is now complete. ∎
## 5\. Proof of Theorem 1.2
To prove Theorem 1.2 we need the Fountain theorem of Bartsch [5, Theorem
2.5] (see also [46, Theorem 3.6]). We recall the next lemma from [20].
###### Lemma 5.1.
Let $E$ be a reflexive and separable Banach space. Then there are
$\\{e_{n}\\}\subset E$ and $\\{g_{n}^{*}\\}\subset E^{*}$ such that
$E=\overline{span\\{e_{n}:n=1,2,3,\ldots\\}},~{}~{}E^{*}=\overline{span\\{g_{n}^{*}:n=1,2,3,\ldots\\}},$
and
$\langle
g_{i}^{*},e_{j}\rangle=\left\\{\begin{array}[]{rl}&1\text{~{}~{}~{}~{}if~{}~{}~{}}i=j\\\
&0\text{~{}~{}~{}~{}if~{}~{}~{}}i\not=j.\end{array}\right.$
Let us denote
(5.1) $\displaystyle
E_{n}=span\\{e_{n}\\},~{}~{}~{}X_{k}=\bigoplus_{n=1}^{k}E_{n}\text{
~{}~{}~{}and }Y_{k}=\overline{\bigoplus_{n=k}^{\infty}E_{n}}.$
Now we recall the following Fountain theorem from [2]:
###### Theorem 5.2 (Fountain theorem).
Assume that $\Phi\in C^{1}(E,\mathbb{R})$ satisfies the Cerami condition
$(C)_{c}$ for all $c\in\mathbb{R}$ and $\Phi(-u)=\Phi(u).$ If for each
sufficiently large $k\in\mathbb{N},$ there exists $\varrho_{k}>\delta_{k}>0$
such that
* ${(\mathcal{A}_{1})}$
$b_{k}:=\inf\\{\Phi(u):u\in Y_{k},~{}\|u\|_{E}=\delta_{k}\\}\to+\infty,$ as
$k\to+\infty,$
* $\rm(\mathcal{A}_{2})$
$a_{k}:=\max\\{\Phi(u):u\in X_{k},~{}\|u\|_{E}=\varrho_{k}\\}\leq 0.$
Then $\Phi$ has a sequence of critical points $(u_{k})_{k}$ such that
$\Phi(u_{k})\to+\infty.$
###### Proof of Theorem 1.2.
Define $X_{k}$ and $Y_{k}$ as in (5.1) for the reflexive, separable Banach
space $X$. Now $\mathcal{I}$ is even and satisfies the Cerami condition $(C)_{c}$
for all $c\in\mathbb{R}$ (see Proposition 4.4). So it remains to show that the
conditions $(\mathcal{A}_{1})$-$(\mathcal{A}_{2})$ hold for our problem.
* $(\mathcal{A}_{1}):$
For large $k\in\mathbb{N},$ set
(5.2) $\displaystyle\alpha_{k}=\sup_{u\in
Y_{k},~{}\|u\|=1}\|u\|_{L^{\gamma(\cdot)}(\Omega)},$
where $\gamma\in C_{+}(\overline{\Omega})$ satisfies
$1<\gamma(x)<{p}_{2_{s}}^{*}(x)$ for all $x\in\overline{\Omega}.$ We claim that
(5.3) $\displaystyle\lim_{k\to+\infty}\alpha_{k}=0.$
Suppose to the contrary that there exist $\epsilon^{\prime}>0,$ $k_{0}\geq 0$ and a
sequence $(u_{k})_{k}$ in $Y_{k}$ such that
$\|u_{k}\|=1\text{
and}~{}~{}\|u_{k}\|_{L^{\gamma(\cdot)}(\Omega)}\geq\epsilon^{\prime}$
for all $k\geq k_{0}.$ Since $(u_{k})_{k}$ is bounded in $X,$ there exists
$u_{0}\in X$ such that, up to a subsequence still denoted by $(u_{k})_{k},$ we
have $u_{k}\rightharpoonup u_{0}$ weakly in $X$ as $k\to+\infty$ and
$\langle g_{j}^{*},u_{0}\rangle=\lim_{k\to+\infty}\langle
g_{j}^{*},u_{k}\rangle=0$
for $j=1,2,3,\cdots.$ Thus $u_{0}=0.$ In addition, using Lemma 3.4 we
obtain
$\epsilon^{\prime}\leq\lim_{k\to+\infty}\|u_{k}\|_{L^{\gamma(\cdot)}(\Omega)}=\|u_{0}\|_{L^{\gamma(\cdot)}(\Omega)}=0,$
a contradiction. Hence, (5.3) holds true. Let $u\in Y_{k}$ with $\|u\|>1.$
Note that (5.3) implies $\alpha_{k}<1$ for large $k\in\mathbb{N}.$ Thus using
Lemma 2.3, Lemma 3.3 and (5.2) and (4.1) with $\epsilon=1$ for
$k\in\mathbb{N}$ large enough, we get
$\displaystyle\mathcal{I}(u)$
$\displaystyle\geq\frac{1}{2}\rho(u)-\int_{\Omega}|u|^{p_{2}^{+}}\,dx-C(1)\int_{\Omega}|u|^{r(x)}\,dx$
$\displaystyle\geq\frac{1}{2}\|u\|^{p_{1}^{-}}-\|u\|^{p_{2}^{+}}_{L^{p_{2}^{+}}(\Omega)}-C(1)\left\\{\|u\|^{r^{-}}_{L^{r(\cdot)}(\Omega)}+\|u\|^{r^{+}}_{L^{r(\cdot)}(\Omega)}\right\\}$
$\displaystyle\geq\frac{1}{2}\|u\|^{p_{1}^{-}}-\alpha_{k}^{p_{2}^{+}}C_{1}\|u\|^{p_{2}^{+}}-C_{2}\\{\alpha_{k}^{r^{-}}\|u\|^{r^{-}}+\alpha_{k}^{r^{+}}\|u\|^{r^{+}}\\}$
(5.4)
$\displaystyle\geq\frac{1}{2}\|u\|^{p_{1}^{-}}-C\alpha_{k}\|u\|^{r^{+}},$
where $C,C_{1},C_{2}$ are some positive constants.
Define the function $\mathcal{G}:\mathbb{R}\to\mathbb{R},$
$\mathcal{G}(t)=\frac{1}{2}t^{p_{1}^{-}}-C\alpha_{k}t^{r^{+}}.$
Then a simple computation shows that $\mathcal{G}$ attains its maximum at
$\delta_{k}={\left(\frac{p_{1}^{-}}{2r^{+}{C}\alpha_{k}}\right)^{1/(r^{+}-p_{1}^{-})}}$
and the maximum value of $\mathcal{G}$ is
$\displaystyle\mathcal{G}(\delta_{k})$
$\displaystyle=\frac{1}{2}\left(\frac{p_{1}^{-}}{2r^{+}{C}\alpha_{k}}\right)^{{p_{1}^{-}}/(r^{+}-p_{1}^{-})}-{{C}}\alpha_{k}\left(\frac{p_{1}^{-}}{2r^{+}{C}\alpha_{k}}\right)^{{r^{+}}/(r^{+}-p_{1}^{-})}$
$\displaystyle=\left(\frac{1}{2}\right)^{r^{+}/(r^{+}-p_{1}^{-})}\left(\frac{1}{{{C}}\alpha_{k}}\right)^{{p_{1}^{-}}/(r^{+}-p_{1}^{-})}\left(\frac{p_{1}^{-}}{r^{+}}\right)^{p_{1}^{-}/(r^{+}-p_{1}^{-})}\bigg{(}1-\frac{p_{1}^{-}}{r^{+}}\bigg{)}.$
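For the reader's convenience, here is the elementary computation behind $\delta_{k}$ and the value $\mathcal{G}(\delta_{k})$ (a sketch):

```latex
% Critical point of G(t) = (1/2) t^{p_1^-} - C \alpha_k t^{r^+} on (0, +infinity):
\mathcal{G}'(t)
 =\tfrac{p_{1}^{-}}{2}\,t^{\,p_{1}^{-}-1}-C\alpha_{k}r^{+}\,t^{\,r^{+}-1}=0
\;\Longleftrightarrow\;
t^{\,r^{+}-p_{1}^{-}}=\frac{p_{1}^{-}}{2r^{+}C\alpha_{k}},
\quad\text{i.e. } t=\delta_{k}.
% Substituting back, using C \alpha_k \delta_k^{r^+ - p_1^-} = p_1^-/(2 r^+):
\mathcal{G}(\delta_{k})
 =\delta_{k}^{\,p_{1}^{-}}\Big(\tfrac{1}{2}-C\alpha_{k}\,\delta_{k}^{\,r^{+}-p_{1}^{-}}\Big)
 =\tfrac{1}{2}\,\delta_{k}^{\,p_{1}^{-}}\Big(1-\tfrac{p_{1}^{-}}{r^{+}}\Big).
```

In particular, as $\alpha_{k}\to 0$ we have $\delta_{k}\to+\infty$, which is exactly what drives (5.5).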
Since $p_{1}^{-}<r^{+}$ and $\alpha_{k}\to 0$ as $k\to+\infty,$ we obtain
(5.5) $\mathcal{G}(\delta_{k})\to+\infty\ \text{as}\ k\to+\infty.$
Again, (5.3) implies $\delta_{k}\to+\infty$ as $k\to+\infty.$ Thus for $u\in
Y_{k}$ with $\|u\|=\delta_{k},$ taking into account (5.4) and
(5.5), it follows that as $k\to+\infty$
$b_{k}=\inf_{u\in Y_{k},\|u\|=\delta_{k}}\mathcal{I}(u)\to+\infty.$
* $(\mathcal{A}_{2}):$
Let us assume that the assertion $(\mathcal{A}_{2})$ does not hold for some
$k.$ So there exists a sequence $(u_{n})_{n}\subset X_{k}$ such that
(5.6) $\displaystyle\|u_{n}\|\to+\infty\ \text{and}\ \mathcal{I}(u_{n})\geq
0.$
Let us take $w_{n}:=\frac{u_{n}}{\|u_{n}\|},$ then $w_{n}\in X$ and
$\|w_{n}\|=1.$ Since $X_{k}$ is of finite dimension, there exists $w\in
X_{k}\setminus\\{0\\}$ such that up to a subsequence, still denoted by
$(w_{n})_{n},$ $w_{n}\to w$ strongly and $w_{n}(x)\to w(x)$ a.e. $x\in\Omega$
as $n\to+\infty.$ If $w(x)\not=0$ then $|u_{n}(x)|\to+\infty$ as
$n\to+\infty.$ Similarly to (4.11), it follows that for a.e. $x\in\Omega$ with $w(x)\neq 0$,
(5.7)
$\displaystyle\frac{F(x,u_{n}(x))}{|u_{n}(x)|^{p_{2}^{+}}}|w_{n}(x)|^{p_{2}^{+}}\to+\infty.$
Hence, using (5.6) and applying Fatou’s lemma, we have
(5.8)
$\frac{1}{\|u_{n}\|^{p_{2}^{+}}}\int_{\Omega}F(x,u_{n})dx=\int_{\Omega}\frac{F(x,u_{n})}{|u_{n}(x)|^{p_{2}^{+}}}|w_{n}(x)|^{p_{2}^{+}}dx\to+\infty\
\text{as}\ n\to+\infty.$
Since $\|u_{n}\|>1$ for large $n\in\mathbb{N}$, from Lemma 3.3 and
(5.8), we obtain as $n\to+\infty$
$\displaystyle\mathcal{I}(u_{n})$
$\displaystyle\leq\rho(u_{n})-\int_{\Omega}F(x,u_{n})\,dx$
$\displaystyle\leq\|u_{n}\|^{p_{2}^{+}}-\int_{\Omega}F(x,u_{n})\,dx$
$\displaystyle=\bigg{(}1-\frac{1}{\|u_{n}\|^{p_{2}^{+}}}\int_{\Omega}F(x,u_{n})\;dx\bigg{)}\|u_{n}\|^{p_{2}^{+}}\to-\infty,$
a contradiction to (5.6). Therefore, for sufficiently large $k\in\mathbb{N},$
we can get $\varrho_{k}>\delta_{k}>0$ such that $(\mathcal{A}_{2})$ holds for
$u\in X_{k}$ with $\|u\|=\varrho_{k}$.
∎
## 6\. Proof of Theorem 1.3
For proving Theorem 1.3, we first recall the Dual fountain theorem due to
Bartsch and Willem (see [46, Theorem 3.18]). Considering Lemma 5.1 and using
the reflexivity and separability of the Banach space $X$ we can define $X_{k}$
and $Y_{k}$ appropriately.
###### Definition 6.1.
For $c\in\mathbb{R},$ we say that $\mathcal{I}$ satisfies the $(C)_{c}^{*}$
condition (with respect to $Y_{k}$) if any sequence $(u_{k})_{k}$ in $X$ with
$u_{k}\in Y_{k}$ such that
$\mathcal{I}(u_{k})\to
c\text{~{}~{}and~{}~{}}\|\mathcal{I}^{\prime}_{|_{Y_{k}}}(u_{k})\|_{X^{*}}(1+\|u_{k}\|)\to
0,\text{~{}~{}as~{}~{}~{}}k\to+\infty$
contains a subsequence converging to a critical point of $\mathcal{I},$ where
$X^{*}$ is the dual of $X.$
###### Theorem 6.2 (Dual fountain Theorem).
Let $\Phi\in C^{1}(E,\mathbb{R})$ satisfy $\Phi(-u)=\Phi(u).$ If for each
$k\geq k_{0}$ there exist $\varrho_{k}>\delta_{k}>0$ such that
* $(\mathcal{B}_{1})$
$a_{k}=\inf\\{\Phi(u):u\in Y_{k},\,\,\|u\|_{E}=\varrho_{k}\\}\geq 0;$
* $(\mathcal{B}_{2})$
$b_{k}=\sup\\{\Phi(u):u\in X_{k},\,\,\|u\|_{E}=\delta_{k}\\}<0;$
* $(\mathcal{B}_{3})$
$d_{k}=\inf\\{\Phi(u):u\in Y_{k},\,\,\|u\|_{E}\leq\varrho_{k}\\}\to 0$ as
$k\to+\infty;$
* $(\mathcal{B}_{4})$
$\Phi$ satisfies the $(C)_{c}^{*}$ condition for every $c\in[d_{k_{0}},0).$
Then $\Phi$ has a sequence of negative critical values converging to $0$.
###### Remark 6.3.
Note that, in [46], assuming that the energy functional associated to the
problem satisfies $(PS)_{c}^{*}$ condition the Dual fountain theorem is
obtained using Deformation theorem which is still valid under Cerami
condition. Therefore, like many critical point theorems the Dual fountain
theorem holds under $(C)_{c}^{*}$ condition.
The next lemma is due to [26, Lemma 3.2].
###### Lemma 6.4.
Suppose that the hypotheses in Theorem 1.3 hold, then $\mathcal{I}$ satisfies
the $(C)_{c}^{*}$ condition.
###### Proof of Theorem 1.3.
For the reflexive, separable Banach space $X,$ define $X_{k}$ and $Y_{k}$ as
in (5.1). From the assumptions we have that $\mathcal{I}$ is even, and by Lemma
6.4 we get that $\mathcal{I}$ satisfies the Cerami condition $(C)_{c}^{*}$ for all
$c\in\mathbb{R}.$ Thus, for proving Theorem 1.3, we are only left with verifying
the conditions $(\mathcal{B}_{1})$-$(\mathcal{B}_{3}).$
$(\mathcal{B}_{1})$: For all $u\in Y_{k}$ with $\|u\|<1,$ arguing similarly as
in the verification of $(\mathcal{A}_{1}),$ we can derive
$\displaystyle\mathcal{I}(u)$
$\displaystyle\geq\frac{1}{2}\left[\rho_{p_{1}}(u)+\rho_{p_{2}}(u)\right]-\int_{\Omega}F(x,u)\,dx$
$\displaystyle\geq\frac{1}{2}\rho(u)-\int_{\Omega}|u|^{p_{2}^{+}}\,dx-C(1)\int_{\Omega}|u|^{r(x)}\,dx$
$\displaystyle\geq\frac{1}{2}\|u\|^{p_{2}^{+}}-\|u\|^{p_{2}^{+}}_{L^{p_{2}^{+}}(\Omega)}-C(1)\left\\{\|u\|^{r^{-}}_{L^{r(\cdot)}(\Omega)}+\|u\|^{r^{+}}_{L^{r(\cdot)}(\Omega)}\right\\}$
$\displaystyle\geq\frac{1}{2}\|u\|^{p_{2}^{+}}-\alpha_{k}^{p_{2}^{+}}C_{1}\|u\|^{p_{2}^{+}}-C_{2}\\{\alpha_{k}^{r^{-}}\|u\|^{r^{-}}+\alpha_{k}^{r^{+}}\|u\|^{r^{+}}\\}$
(6.1) $\displaystyle\geq\frac{1}{2}\|u\|^{p_{2}^{+}}-C_{4}\alpha_{k}\|u\|.$
Let us choose
$\varrho_{k}=\left(2C_{4}\alpha_{k}\right)^{1/{(p_{2}^{+}-1)}},$ so that
$\frac{1}{2}\varrho_{k}^{p_{2}^{+}-1}=C_{4}\alpha_{k}.$ Since
$p_{2}^{+}>1,$ (5.3) implies that
(6.2) $\varrho_{k}\to 0\text{~{}~{}as~{}}k\to+\infty.$
Thus for $u\in Y_{k}$ with $\|u\|=\varrho_{k}$ and for sufficiently large
$k\in\mathbb{N},$ from (6.1) we have $\mathcal{I}(u)\geq 0.$
$(\mathcal{B}_{2})$: Suppose assertion $(\mathcal{B}_{2})$ does not hold true
for some given $k\in\mathbb{N}.$ Then there exists a sequence $(v_{n})_{n}$ in
$X_{k}$ such that
(6.3) $\displaystyle\|v_{n}\|\to+\infty,~{}~{}~{}~{}~{}\mathcal{I}(v_{n})\geq
0.$
Now arguing in a similar way as in the proof of assertion $(\mathcal{A}_{2})$
of Theorem 5.2, we obtain (5.7) and (5.8) which combining with Lemma 3.3 imply
that
$\displaystyle\mathcal{I}(v_{n})$
$\displaystyle\leq\rho(v_{n})-\int_{\Omega}F(x,v_{n})\,dx$
$\displaystyle\leq{\|v_{n}\|}^{p_{2}^{+}}-\int_{\Omega}F(x,v_{n})\,dx$
$\displaystyle=\bigg{(}1-\frac{1}{\|v_{n}\|^{p_{2}^{+}}}\int_{\Omega}F(x,v_{n})\;dx\bigg{)}\|v_{n}\|^{p_{2}^{+}}\to-\infty\text{\;\;as\;}n\to+\infty,$
a contradiction to (6.3). So, there exists $k_{0}\in\mathbb{N}$ such that for
all $k\geq k_{0}$ we have $1>\varrho_{k}>\delta_{k}>0$ such that for $u\in
X_{k}$ with $\|u\|=\delta_{k}$ the condition $(\mathcal{B}_{2})$ holds true.
$(\mathcal{B}_{3})$: Since $X_{k}\cap Y_{k}\not=\emptyset,$ we get that
$d_{k}\leq b_{k}<0.$ Now for $u\in Y_{k}$ with $\|u\|\leq\varrho_{k},$ by (6.1),
we get
we get
$\displaystyle\mathcal{I}(u)\geq-C_{4}\alpha_{k}\|u\|\geq-
C_{4}\alpha_{k}\varrho_{k}.$
Therefore, combining (5.3) and (6.2), we obtain
$d_{k}\geq-C_{4}\alpha_{k}\varrho_{k}\to 0\text{~{}~{} as~{}}k\to+\infty.$
Since $d_{k}<0,$ it follows that $\lim_{k\to+\infty}d_{k}=0.$ Thus the proof
of the theorem is complete. ∎
## 7\. Proof of Theorem 1.4
First we state the following $\mathbb{Z}_{2}$-symmetric version of the mountain
pass theorem, due to [38, Theorem 9.12]. Here again we mention that in
[38] this theorem is proved under the $(PS)$-condition, but the proof remains
valid under the $C$-condition.
###### Theorem 7.1 (Symmetric mountain pass theorem).
Let $E$ be a real infinite dimensional
Banach space and $\Phi\in C^{1}(E,\mathbb{R})$ be an even functional
satisfying the $(C)_{c}$ condition. Also let $\Phi$ satisfy the following:
* $(\mathcal{D}_{1})$
$\Phi(0)=0$ and there exist two constants $\nu,\mu>0$ such that
$\Phi(u)\geq\mu$ for all $u\in E$ with $\|u\|_{E}=\nu.$
* $(\mathcal{D}_{2})$
for all finite dimensional subspaces $\widehat{E}\subset E$ there exists
$\overline{R}=\overline{R}(\widehat{E})>0$ such that $\Phi(u)\leq 0$ for all
$u\in\widehat{E}\setminus B_{\overline{R}}(\widehat{E}),$ where
$B_{\overline{R}}(\widehat{E})=\\{u\in\widehat{E}:\|u\|_{E}\leq\overline{R}\\}.$
Then $\Phi$ possesses an unbounded sequence of critical values characterized by a
minimax argument.
###### Proof of Theorem 1.4.
From the hypotheses of the theorem it follows that $\mathcal{I}$ is even and
we have $\mathcal{I}(0)=0.$ Now we will prove that $\mathcal{I}$ satisfies the
assertions in Theorem 7.1.
* $(\mathcal{D}_{1}):$
It follows from Lemma 4.3$(i)$.
* $(\mathcal{D}_{2}):$
To show this, we first claim that for any finite-dimensional subspace $Y$ of $X$
there exists $\overline{R}_{0}=\overline{R}_{0}(Y)$ such that
$\mathcal{I}(u)<0$ for all $u\in Y\setminus B_{\overline{R}_{0}}(Y),$ where
$B_{\overline{R}_{0}}(Y)=\\{u\in Y:\|u\|\leq\overline{R}_{0}\\}.$ Fix $u\in
Y,\;\|u\|=1.$ For $t>1$, using (4.2) and Lemma 3.3, we get
$\displaystyle\mathcal{I}(tu)$
$\displaystyle\leq\rho(tu)-\int_{\Omega}F(x,tu)\,dx$ $\displaystyle\leq
t^{p_{2}^{+}}\rho(u)-t^{p_{2}^{+}}M\int_{\Omega}|u|^{p_{2}^{+}}\,dx+|\Omega|C_{M}$
(7.1)
$\displaystyle=t^{p_{2}^{+}}\left[1-M\|u\|_{L^{p_{2}^{+}}(\Omega)}^{p_{2}^{+}}\,\right]+|\Omega|C_{M}.$
Since $Y$ is finite-dimensional, all norms are equivalent on $Y$, which implies
that there exists some constant $C(Y)>0$ such that
$C(Y)\|u\|\leq\|u\|_{L^{p_{2}^{+}}(\Omega)}.$ Therefore, from
(7.1), we obtain
$\displaystyle\mathcal{I}(tu)$ $\displaystyle\leq
t^{p_{2}^{+}}\left[1-M(C(Y))^{p_{2}^{+}}\|u\|^{p_{2}^{+}}\,\right]+|\Omega|C_{M}$
$\displaystyle=t^{p_{2}^{+}}\left[1-M(C(Y))^{p_{2}^{+}}\,\right]+|\Omega|C_{M}.$
Now, by choosing $M$ sufficiently large such that
$M>\frac{1}{(C(Y))^{p_{2}^{+}}},$ from the last relation we obtain
$\mathcal{I}(tu)\to-\infty\text{\;\;\; as\;\;}t\to+\infty.$
Hence, there exists $\overline{R}_{0}>0$ large enough such that
$\mathcal{I}(u)<0$ for all $u\in Y$ with $\|u\|=\overline{R}$ and
$\overline{R}\geq\overline{R}_{0}$. Therefore, $\mathcal{I}$ verifies
$(\mathcal{D}_{2})$.
∎
## 8\. Proof of Theorem 1.5
First, we recall a new variant of Clark’s theorem (see [31, Theorem 1.1]).
###### Theorem 8.1.
Let $E$ be a Banach space and $\Phi\in C^{1}(E,\mathbb{R})$. Let $\Phi$ be even
with $\Phi(0)=0$. Also assume that $\Phi$ satisfies the $(PS)$-condition and is
bounded from below. If for any $k\in\mathbb{N}$ there exists a $k$-dimensional
subspace $E^{k}$ of $E$ and $\beta_{k}>0$ such that
$\displaystyle\sup_{E^{k}\cap B_{\beta_{k}}}\Phi(u)<0,$ where
$B_{\beta_{k}}=\\{u\in E:\|u\|_{E}=\beta_{k}\\}$, then at least one of the
following conclusions holds:
* $(\mathcal{M}_{1})$
There exists a sequence of critical points $(u_{k})_{k}$ satisfying
$\Phi(u_{k})<0$ for all $k$ and $\|u_{k}\|_{E}\to 0$ as $k\to+\infty.$
* $(\mathcal{M}_{2})$
There exists $l>0$ such that for any $0<b<l$ there exists a critical point $u$
such that $\|u\|_{E}=b$ and $\Phi(u)=0.$
The corresponding energy functional is given as
$\displaystyle\mathcal{I}(u)$
$\displaystyle=\frac{1}{2}\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{|u(x)-u(y)|^{p_{1}(x,y)}}{p_{1}(x,y)|x-y|^{N+s(x,y)p_{1}(x,y)}}dxdy+\int_{\Omega}\frac{1}{\overline{p}_{1}(x)}|u|^{\overline{p}_{1}(x)}dx$
$\displaystyle+\frac{1}{2}\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{|u(x)-u(y)|^{p_{2}(x,y)}}{p_{2}(x,y)|x-y|^{N+s(x,y)p_{2}(x,y)}}dxdy+\int_{\Omega}\frac{1}{\overline{p}_{2}(x)}|u|^{\overline{p}_{2}(x)}dx$
$\displaystyle+\int_{\mathcal{C}\Omega}\frac{\beta(x)|u|^{\overline{p}_{1}(x)}}{\overline{p}_{1}(x)}\,dx+\int_{\mathcal{C}\Omega}\frac{\beta(x)|u|^{\overline{p}_{2}(x)}}{\overline{p}_{2}(x)}\,dx$
$\displaystyle\quad-\lambda\int_{\Omega}\frac{|u|^{q(x)}}{q(x)}dx-\int_{\Omega}\frac{|u|^{r(x)}}{r(x)}dx.$
Next, we will prove the following lemma:
###### Lemma 8.2.
Suppose the hypotheses in Theorem 1.5 hold. Then $\mathcal{I}$ satisfies
$(PS)_{c}$ for any $c\in\mathbb{R}.$
###### Proof.
Let $(v_{n})_{n}$ be a sequence in $X$ such that
(8.1) $\displaystyle\mathcal{I}(v_{n})\to
c\text{\;\;\;and\;\;}\mathcal{I}^{\prime}(v_{n})\to
0\text{\;in\;}X^{*}\text{\;\; as\;\;}n\to+\infty.$
Therefore,
(8.2) $\displaystyle\langle\mathcal{I}^{\prime}(v_{n}),v_{n}-v_{0}\rangle\to
0\text{\;\;as\;}n\to+\infty.$
Hence, we have that $(v_{n})_{n}$ is bounded in $X.$ If not, then
$\|v_{n}\|\to+\infty$ as $n\to+\infty.$ Using (8.1), (8.2) and $(f_{5}),$ we
deduce
$\displaystyle 1+C+\|v_{n}\|$
$\displaystyle\geq\mathcal{I}(v_{n})-\frac{1}{q^{-}}\langle\mathcal{I}^{\prime}(v_{n}),v_{n}\rangle$
$\displaystyle\geq\frac{1}{2}\left[\rho(v_{n})-\int_{\Omega}F(x,v_{n})dx-\frac{1}{q^{-}}\rho(v_{n})+\frac{1}{q^{-}}\int_{\Omega}f(x,v_{n})v_{n}dx\right]$
$\displaystyle\geq\frac{1}{2}\left[\left(1-\frac{1}{q^{-}}\right)\|v_{n}\|^{p^{-}}+\left(\frac{1}{q^{-}}-\frac{1}{r^{-}}\right)\int_{\Omega}|v_{n}|^{r(x)}dx\right]$
(8.3)
$\displaystyle\geq\frac{1}{2}\left(1-\frac{1}{q^{-}}\right)\|v_{n}\|^{p^{-}},$
which is a contradiction to the fact that $\|v_{n}\|\to+\infty$ as $n\to+\infty$.
Now, since $X$ is reflexive, up to a subsequence, still denoted by
$(v_{n})_{n}$, we have $v_{n}\rightharpoonup v_{0}$ weakly as $n\to+\infty.$
Therefore, by Theorem 3.4, arguing similarly as in (4.3), we
obtain
(8.4) $\displaystyle v_{n}\to v_{0}\text{\;\;strongly
in\;}L^{\gamma}(\Omega),\,1<\gamma(x)<p_{s}^{*}(x)\text{\;\;
and\;\;}v_{n}(x)\to v_{0}(x)\text{\;a.e. in\;}\Omega.$
By $(f_{5}),$ the Hölder inequality, (8.4), the boundedness of $(v_{n})_{n}$ and
Lemma 2.3, arguing in a similar fashion as in (4.3), we obtain
(8.5) $\displaystyle\left|\int_{\Omega}f(x,v_{n})(v_{n}-v_{0})dx\right|\to
0\text{\;\; as\;}n\to+\infty.$
Hence, combining (8.2) and (8.5) and using the $(S_{+})$ property of
$\rho^{\prime},$ we have $v_{n}\to v_{0}$ strongly in $X$ as $n\to+\infty.$
∎
###### Proof of Theorem 1.5.
From the hypotheses we have that $\mathcal{I}$ is even and $\mathcal{I}(0)=0.$
Also Lemma 8.2 ensures that $\mathcal{I}$ satisfies $(PS)$-condition. But note
that, $\mathcal{I}$ is not bounded from below on $X.$ Hence, we will use a
truncation technique. For that choose $h\in C^{\infty}([0,\infty),[0,1])$ such
that
$h(t)=\begin{cases}1&\text{ if }t\in[0,l_{0}],\\ 0&\text{ if }t\in[l_{1},\infty),\end{cases}$
where $l_{0}<l_{1}$ and set $\Psi(u):=h(\|u\|).$ Now we define the truncated
functional $\mathcal{J}$ as:
$\displaystyle\mathcal{J}(u)$
$\displaystyle=\frac{1}{2}\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{|u(x)-u(y)|^{p_{1}(x,y)}}{p_{1}(x,y)|x-y|^{N+s(x,y)p_{1}(x,y)}}dxdy+\int_{\Omega}\frac{1}{\overline{p}_{1}(x)}|u|^{\overline{p}_{1}(x)}dx$
$\displaystyle+\frac{1}{2}\int_{\mathbb{R}^{2N}\setminus(\mathcal{C}\Omega)^{2}}\frac{|u(x)-u(y)|^{p_{2}(x,y)}}{p_{2}(x,y)|x-y|^{N+s(x,y)p_{2}(x,y)}}dxdy+\int_{\Omega}\frac{1}{\overline{p}_{2}(x)}|u|^{\overline{p}_{2}(x)}dx$
$\displaystyle+\int_{\mathcal{C}\Omega}\frac{\beta(x)|u|^{\overline{p}_{1}(x)}}{\overline{p}_{1}(x)}\,dx+\int_{\mathcal{C}\Omega}\frac{\beta(x)|u|^{\overline{p}_{2}(x)}}{\overline{p}_{2}(x)}\,dx$
$\displaystyle\quad-\Psi(u)\left(\lambda\int_{\Omega}\frac{|u|^{q(x)}}{q(x)}dx+\int_{\Omega}\frac{|u|^{r(x)}}{r(x)}dx\right).$
Then $\mathcal{J}\in C^{1}(X,\mathbb{R})$ and $\mathcal{J}(0)=0.$ Also
$\mathcal{J}$ is even. Moreover from Lemma 8.2, it can be shown that
$\mathcal{J}$ satisfies the $(PS)$-condition. Now we will show that $\mathcal{J}$ is
bounded from below. For $\|u\|>1,$ using Lemma 3.3, we get
$\displaystyle\mathcal{J}(u)\geq\frac{1}{2}\rho(u)\geq\frac{1}{2}\|u\|^{p_{1}^{-}}\to+\infty$
as $\|u\|\to+\infty,$ that is, $\mathcal{J}$ is coercive and hence bounded from
below on $X.$ Next, we claim that $\mathcal{J}$ verifies the assertion
$(\mathcal{M}_{1})$ of Theorem 8.1. For any $k\in\mathbb{N}$ and
$0<R_{k}<l_{0}<1$ let us set
$\mathcal{B}_{R_{k}}=\\{u\in X:\|u\|=R_{k}\\}.$
Also consider the $k$-dimensional subspace $X^{k}$ of $X$. Then for $u\in
X^{k}\cap\mathcal{B}_{R_{k}}$ there are some constants $K_{1},K_{2}>0$ such
that
$\displaystyle\mathcal{J}(u)$ $\displaystyle\leq\rho(u)-\int_{\Omega}F(x,u)dx$
$\displaystyle\leq\|u\|^{p_{1}^{-}}-\frac{\lambda}{q^{+}}\int_{\Omega}|u|^{q(x)}dx-\frac{1}{r^{+}}\int_{\Omega}|u|^{r(x)}dx$
(8.6)
$\displaystyle\leq\|u\|^{p_{1}^{-}}-K_{1}^{q^{+}}\frac{\lambda}{q^{+}}\|u\|^{q^{+}},$
since $X^{k}$ is finite-dimensional and hence all norms
on it are equivalent. Now, letting $R_{k}\to 0$ as $k\to+\infty$, from (8.6) we get
$\sup_{X^{k}\cap\mathcal{B}_{R_{k}}}\mathcal{J}(u)<0$, since
$q^{+}<p_{1}^{-}.$ Furthermore, for a given $u\in X$, from (8.6) it follows that
$\mathcal{J}(tu)<0$ as $t\to 0^{+},$ that is, $\mathcal{J}(u)\not=0.$ Thus
$\mathcal{J}$ does not satisfy $(\mathcal{M}_{2}).$ Therefore, by appealing to
Theorem 8.1, there exists a sequence of critical points $(v_{k})_{k}$ of
$\mathcal{J}$ in $X$ such that $\|v_{k}\|\to 0$ as $k\to+\infty.$ So, for
$l_{0}>0$ there exists $\hat{k}_{0}\in\mathbb{N}$ such that for all
$k\geq\hat{k}_{0}$ we have $\|v_{k}\|<l_{0}$, which implies that
$\mathcal{J}(v_{k})=\mathcal{I}(v_{k})$ for all $k\geq\hat{k}_{0}.$ Since the
critical points of $\mathcal{I}$ are the solutions to (1.2), the theorem
follows. ∎
## References
* [1] E. Acerbi and G. Mingione, Regularity results for stationary electrorheological fluids, Arch. Rational Mech. Anal., 164 (2002), 213-259.
* [2] C.O. Alves, On superlinear $p(x)$-Laplacian equations in $\mathbb{R}^{N}$, Nonlinear Anal., 73 (2010), 2566-2579.
* [3] V. Ambrosio and V. Rădulescu, Fractional double-phase patterns: concentration and multiplicity of solutions, J. Math. Pures Appl., 142 (2020), 101-145.
* [4] A. Bahrouni, Comparison and sub-supersolution principles for the fractional $p(x)$-Laplacian, J. Math. Anal. Appl., 458 (2018), 1363–1372.
* [5] T. Bartsch, Infinitely many solutions of a symmetric Dirichlet problem, Nonlinear Anal., 20(10) (1993), 1205-1216.
* [6] S. Bahrouni and H. Ounaies, Strauss and Lions type theorems for the fractional Sobolev spaces with variable exponent and applications to nonlocal Kirchhoff-Choquard problem, Mediterr. J. Math., 18(46) (2021), doi:10.1007/s00009-020-01661-w.
* [7] A. Bahrouni, V. Rădulescu, and P. Winkert, Robin fractional problems with symmetric variable growth, J. Math. Phys, 61 (2020), 101503.
* [8] A. Bahrouni, V. Rădulescu, and D. Repovš, Double phase transonic flow problems with variable growth: nonlinear patterns and stationary waves, Nonlinearity, 32(7) (2019), 2481-2495.
* [9] A. Bahrouni and V. D. Rădulescu, On a new fractional Sobolev space and applications to nonlocal variational problems with variable exponent, Discrete Contin. Dyn. Syst. Ser. S, 11(3) (2018), 379.
* [10] G. Bonanno, G. Molica Bisci, and V. Rădulescu, Quasilinear elliptic non-homogeneous Dirichlet problems through Orlicz-Sobolev spaces, Nonlinear Anal., 75(12) (2012), 4441-4456.
* [11] R. Biswas and S. Tiwari, Variable order nonlocal Choquard problem with variable exponents, Complex Var. Elliptic Equ., (2020), 1-23, doi:10.1080/17476933.2020.1751136.
* [12] R. Biswas and S. Tiwari, On a class of Kirchhoff-Choquard equations involving variable-order fractional $p(\cdot)$\- Laplacian and without Ambrosetti-Rabinowitz type condition, Topol. Methods Nonlinear Anal., in press.
* [13] L. Cherfils and Y. Ilyasov, On the stationary solutions of generalized reaction diffusion equations with $p\&q$-Laplacian, Comm. Pure Appl. Anal., 4 (2005), 922.
* [14] Y. Chen, S. Levine, and M. Rao, Variable exponent, linear growth functionals in image restoration, SIAM J. Appl. Math., 66(4) (2006), 1383-1406.
* [15] C. Chen and J. Bao, Existence, nonexistence, and multiplicity of solutions for the fractional $p\&q$-Laplacian equation in $R^{N}$, Bound. Value Probl., 16(153) (2016).
* [16] N. T. Chung, H. Q. Toan, On a class of fractional Laplacian problems with variable exponents and indefinite weights, Collect. Math., 71 (2020), 223–237.
* [17] L. Diening, P. Harjulehto, P. Hasto, and M. Ruzicka, Lebesgue and Sobolev spaces with variable exponents, Springer-Verlag, Heidelberg, 2011.
* [18] S. Dipierro, M. Medina, and E. Valdinoci, Fractional elliptic problems with critical growth in the whole of $\mathbb{R}^{n}$, Lecture Notes, Scuola Normale Superiore di Pisa, vol. 15, Pisa, 2017.
* [19] X. Fan and D. Zhao, On the spaces $L^{p(x)}(\Omega)$ and $W^{m,p(x)}(\Omega)$, J. Math. Anal. Appl., 263(2) (2001), 424-446.
* [20] M. Fabian, P. Habala, P. Hájek, V. Montesinos, and V. Zizler, Banach Space Theory: The basis for linear and nonlinear analysis, Springer, New York, 2011.
* [21] G. M. Figueiredo, Existence of positive solutions for a class of $p,q$ elliptic problems with critical growth on $R^{N}$, J. Math. Anal. Appl., 378 (2011), 507–518.
* [22] P. C. Fife, Mathematical aspects of reacting and diffusing systems, Lecture notes in Biomath, vol. 28, Springer, Berlin, 1979.
* [23] J. Giacomoni, S. Tiwari, and G. Warnault, Quasilinear parabolic problem with $p(x)$-Laplacian: existence, uniqueness of weak solutions and stabilization, NoDEA Nonlinear Differential Equations Appl., 23(3) (2016), 24.
* [24] K. Ho and Y. H. Kim, A-priori bounds and multiplicity of solutions for nonlinear elliptic problems involving the fractional $p(\cdot)$-Laplacian, Nonlinear Anal., 188 (2019), 179-201.
* [25] K. Ho and Y. H. Kim, The concentration-compactness principles for $W^{s,p(\cdot,\cdot)}(\mathbb{R}^{N})$ and application, Adv. Nonlinear Anal., 10(1), 816-848.
* [26] E. J. Hurtado, O. H. Miyagaki, and R. S. Rodrigues, Existence and multiplicity of solutions for a class of elliptic equations without Ambrosetti–Rabinowitz type conditions, J. Dynam. Differential Equations, 30 (2018), 405-432.
* [27] U. Kaufmann, J. D. Rossi, and R. E. Vidal, Fractional Sobolev spaces with variable exponents and fractional $p(x)$-Laplacians, Electron. J. Qual. Theory Differ. Equ., 76 (2017), 1-10.
* [28] N. C. Kourogenis and N. S. Papageorgiou, Nonsmooth critical point theory and nonlinear elliptic equations at resonance, Kodai Math. J., 23 (2000), 108-135.
* [29] K. Kikuchi and A. Negoro, On Markov processes generated by pseudodifferential operators of variable order, Osaka J. Math., 34 (1997), 319-335.
* [30] H. G. Leopold, Embedding of function spaces of variable order of differentiation, Czechoslovak Math. J., 49 (1999), 633-644.
* [31] Z. Liu and Z-Q. Wang, On Clark’s theorem and its applications to partially sublinear problems, Ann. Inst. H. Poincaré Anal. Non Linéaire, 32 (2015), 1015-1037.
* [32] C. F. Lorenzo and T. T. Hartley, Initialized fractional calculus, Int. J. Appl. Math., 3 (2000), 249-265.
* [33] C. F. Lorenzo and T. T. Hartley, Variable order and distributed order fractional operators, Nonlinear Dynam, 29 (2002), 57-98.
* [34] S. Marano and S. Mosconi, Some recent results on the Dirichlet problem for $(p,q)$\- Laplacian equation, Discrete Contin. Dyn. Syst. Ser. S, 11 (2018), 279-291.
* [35] G. Molica Bisci and V. Rădulescu, Ground state solutions of scalar field fractional Schrödinger equations, Calc. Var. Partial Differ. Equ., 54(3) (2015), 2985-3008.
* [36] G. Molica Bisci, V. Rădulescu, and R. Servadei, Variational methods for nonlocal fractional problems, Encyclopedia of mathematics and its applications, vol. 162, Cambridge University Press, Cambridge, 2016.
* [37] N. S. Papageorgiou, V. D. Rădulescu, and D. D. Repovš, Existence and multiplicity of solutions for double-phase Robin problems, Bull. London Math. Soc., (2020), doi:10.1112/blms.12347.
* [38] P. H. Rabinowitz, Minimax methods in critical point theory with applications to differential equations, CBMS regional conference series in Mathematics, vol. 65, American Mathematical Society, Providence, 1986.
* [39] V. Rădulescu, D. Repovš, Partial differential equations with variable exponents, CRC Press, Boca Raton, FL, 2015.
* [40] V. Rădulescu, Isotropic and anistropic double-phase problems: old and new, Opuscula Math., 39(2) (2019), 259-279.
* [41] M. D. Ruiz-Medina, V. V. Anh, and J. M. Angulo, Fractional generalized random fields of variable order, Stoch. Anal. Appl., 22 (2004), 775-799.
* [42] M. Ružička, Electrorheological fluids: Modeling and mathematical theory, Lecture notes in Mathematics, vol. 1748, Springer, Berlin, 2000
* [43] X. Shi, V. D. Rădulescu, D. D. Repovš, and Q. Zhang, Multiple solutions of double phase variational problems with variable exponent, Adv. Calc. Var., doi:10.1515/acv-2018-0003.
* [44] H. Sun, W. Chen, H. Wei, and Y. Q. Chen, A comparative study of constant–order and variable–order fractional models in characterizing memory property of systems, Eur. Phys. J. Spec. Top., 193 (2011), 185-192.
* [45] H. Wilhelmsson, Explosive instabilities of reaction-diffusion equations, Phys. Rev. A, 36 (1987), 965-966.
* [46] M. Willem, Minimax theorems, vol. 24, Birkhäuser, Boston, 1996.
* [47] J. Zuo and A. Fiscella, A critical Kirchhoff type problem driven by a $p(\cdot)$-fractional Laplace operator with variable $s(\cdot)$-order, Math. Methods Appl. Sci., (2020), doi:10.1002/mma.6813.
* [48] J. Zuo, A. Fiscella, and A. Bahrouni, Existence and multiplicity results for $p(\cdot)\&q(\cdot)$ fractional Choquard problems with variable order, Complex Var. Elliptic Equ., (2020), doi:10.1080/17476933.2020.1835878.
* [49] V. V. Zhikov, Averaging of functionals of the calculus of variations and elasticity theory, Math. USSR. Izv., 29(1) (1987), 33-36.
# End-to-End Language Identification using Multi-Head Self-Attention and 1D
Convolutional Neural Networks
###### Abstract
In this work, we propose a new approach for language identification using
multi-head self-attention combined with raw waveform based 1D convolutional
neural networks for Indian languages. Our approach uses an encoder, multi-head
self-attention, and a statistics pooling layer. The encoder learns features
directly from raw waveforms using 1D convolution kernels and an LSTM layer.
The LSTM layer captures temporal information between the features extracted by
the 1D convolutional layer. The multi-head self-attention layer takes outputs
of the LSTM layer and applies self-attention mechanisms on these features with
M different heads. This process helps the model give more weightage to the
more useful features and less weightage to the less relevant features.
Finally, the frame-level features are combined using a statistics pooling
layer to extract the utterance-level feature vector for label prediction. We
conduct all our experiments on the 373 hrs of audio data for eight different
Indian languages. Our experiments show that our approach outperforms the
baseline model by an absolute 3.69% improvement in F1-score and achieves the
best F1-score of 95.90%. Our approach also shows that using raw waveform
models gets a 1.7% improvement in performance compared to the models built
using handcrafted features.
Index Terms— multi-head self-attention, language identification, 1D-CNNs
## 1 INTRODUCTION
Recent developments in the area of deep neural networks have shown tremendous
improvements in speech systems, including speech recognition [1, 2, 3], emotion
recognition [20], and speaker identification [13]. Previously, the language
identification field was dominated by i-vectors [5], which traditionally use
Gaussian mixture models. Even today, i-vectors are considered to be among the
best models in low-data regimes. However, recent developments in the
field of deep learning show that deep neural networks are one of the
dominating approaches in language identification. Previously, W. Geng et al.
[4] proposed to use deep features extracted from a neural network trained for
speech recognition and showed that deep neural models are capable of obtaining
excellent performance over the classical systems [7, 9, 10, 15, 19]. Recently,
time-delay neural networks (TDNN) have shown excellent performance for speech
recognition tasks [21]. The x-vector [13], built for speaker identification, was
also used for language identification [14], and it is shown to be one of the best
methods for language identification. The recent trend in sequence-to-sequence
mapping problems [18] involves the attention mechanism [16]. The attention
mechanism is one of the most widely used techniques in sequence
mapping problems [17], and today's state-of-the-art speech recognition models
are built using attention-based models. These models process sequential
inputs by iteratively selecting relevant features using the attention technique.
Due to the efficiency of the attention technique for sequence mapping
problems, A. Vaswani et al. [22] proposed to use stacks of attention layers
alone and showed remarkable results in machine translation. Motivated by
that work [22], this paper proposes to use multi-head self-attention in
combination with 1D convolutional neural network front-end processing for
language identification. Our proposed model takes the raw waveform as input
directly and extracts features that are useful for the LID task. The model
consists of a sequence of Residual blocks [23] of 1D convolutional layers to
extract features from the raw audio. Since a convolutional layer does not
have the capability to capture temporal information, we use an LSTM layer on
top of the convolutional network to capture time-varying information from the
input. The LSTM output feature sequence is fed into a multi-head
self-attention block consisting of multiple attention heads to select
important features from different parts of the input feature sequence using
attention weighting. Finally, an utterance-level feature vector is generated
using a statistics pooling layer, and we classify the utterance-level feature
to predict the class label. The organization of the paper is as follows. In
section 2, we explain our proposed approach in detail. In section 3, we give a
detailed analysis of the dataset collection and curation process, and in
section 4, we explain our experimental setup in detail. Finally, in section 5,
we describe our results.
## 2 Proposed method
In this section, we explain our proposed approach in detail. The detailed
model architecture is shown in Figure 1. Our model consists of 3 main stages:
1) an encoder layer, which includes multiple 1D convolutional layers with
residual connections and an LSTM layer, 2) a multi-head self-attention layer to
select important features for language identification using attention
weighting, and 3) a statistics pooling layer to obtain the utterance-level feature
vector for classification. The model takes the raw audio waveform as input
and applies initial 1D convolution operations along with 1D max-pooling, as shown
in Figure 1. The initial convolutional layer features then go through a series
of three 1D Residual blocks followed by an LSTM layer. We then use multi-head
self-attention to extract relevant features from different parts of the input.
The statistics pooling layer generates a single utterance-level feature vector
containing language-discriminative properties. This utterance-level feature
vector is fed into a projection layer followed by a softmax layer to predict
the class label. We explain the details of each of these blocks in the
following section.
Fig. 1: Proposed model architecture
### 2.1 Encoder
The encoder of our model consists of a series of three residual blocks
combined with a single LSTM layer, as shown in Figure 1. The encoder takes a
raw waveform signal and applies an initial 1D convolution operation consisting
of 64 filters of 1x7 filter size followed by a max-pooling operation. The max-
pooling is applied with a kernel size of 1x3 with stride 2. After the initial
convolution and max-pooling operation, we send the output through a sequence of
residual blocks. The details of a single residual block are shown in Figure 1.
Each residual block operates with 1D convolution kernels of size 1x3: Residual
Block-1 consists of 2*64 convolution kernels, Residual Block-2
consists of 2*128 convolution kernels, and Residual Block-3
contains 2*256 convolution kernels, each of size 1x3. The outputs of Residual
Block-2 and Residual Block-3 go through a 1D max-pooling operation. Since
LSTMs are known to capture long-range dependencies between frames in speech,
we use an LSTM after the final residual block to capture temporal
information. The output of Residual Block-3 after the max-pooling operation is
sent to a single unidirectional LSTM layer with a hidden size of 256. Let
$\boldsymbol{X_{n}=[x_{1},x_{2}..x_{n},...x_{N}]}$ be the raw audio sequence
with N samples.
$\boldsymbol{H^{A}}=\textsf{Encoder}(\boldsymbol{X_{n}})$ (1)
where Encoder is the mapping function consisting of the initial 1D
convolutional layer and max-pooling operation, the sequence of three residual
blocks Residual Block-1, Residual Block-2 and Residual Block-3, and the LSTM
layer. After this operation, we obtain a feature sequence
$\boldsymbol{H^{A}=[h_{1},h_{2}.....h_{T}]}$ of length T (T$<<$N). After the
convolution operations, $\boldsymbol{H^{A}}$ can be viewed as a feature matrix
whose x-axis is the time dimension and whose y-axis is the feature dimension.
The feature dimension in our case is 256, since it equals the hidden size of
the LSTM.
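As a rough illustration of the temporal downsampling the encoder front-end performs, the NumPy sketch below applies a single 1D convolution kernel followed by max-pooling to a toy waveform. The kernel size (1x7), pooling window (1x3) and stride (2) follow the values quoted above, but the random single-channel kernel is an illustrative stand-in, not the trained 64-filter layer.

```python
import numpy as np

def conv1d(x, w, stride=1):
    """Valid 1D cross-correlation of signal x with kernel w."""
    k = len(w)
    T = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], w) for i in range(T)])

def maxpool1d(x, k=3, stride=2):
    """1D max-pooling with window size k and the given stride."""
    T = (len(x) - k) // stride + 1
    return np.array([x[i * stride:i * stride + k].max() for i in range(T)])

# Toy raw waveform standing in for a 4 sec, 16 kHz clip (64000 samples).
rng = np.random.default_rng(0)
x = rng.standard_normal(64000)
w = rng.standard_normal(7)          # one 1x7 kernel (the model uses 64 of them)
h = maxpool1d(conv1d(x, w), k=3, stride=2)
print(len(x), len(h))               # the frame sequence is much shorter than N
```

In the full encoder, the three residual blocks and their pooling layers shorten the sequence further, which is why the resulting feature sequence has length T with T << N.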
### 2.2 Multi-Head Self-Attention
In this section, we describe multi-head self-attention in detail. It consists
of 3 different linear blocks: one for the query, one for the key, and another
for the value. Each linear block consists of M independent linear layers, where
M is the number of heads. The multi-head attention block takes the features
$\boldsymbol{H^{A}=[h_{1},h_{2}.....h_{T}]}$ from the LSTM and applies a linear
transformation to create $\boldsymbol{Q_{i}}$, $\boldsymbol{K_{i}}$ and
$\boldsymbol{V_{i}}$ using the $\boldsymbol{i^{\text{th}}}$ linear layers, where
$\boldsymbol{i=[1,2.....M]}$ and M is the total number of attention heads. The
$\boldsymbol{Q_{i}}$, $\boldsymbol{K_{i}}$ and $\boldsymbol{V_{i}}$ are fed
into a scaled dot-product attention layer. The scaled dot-product attention
$\boldsymbol{A_{i}}$ for the $\boldsymbol{i^{\text{th}}}$ head is defined as
follows.
$\boldsymbol{A_{i}}=\textsf{Softmax}\bigg(\boldsymbol{\frac{Q_{i}K_{i}^{T}}{\sqrt{d_{q}}}}\bigg)\boldsymbol{V_{i}}$
(2)
Where $\boldsymbol{d_{q}}$ is the dimension of the query vector. We combine
the attention outputs from all the heads using simple concatenation and feed
the result into the feed-forward layer.
$\boldsymbol{A}=\textsf{Concat}(\boldsymbol{A_{1},A_{2},A_{3}...A_{i}.....A_{M}}){W_{0}}$
(3)
Where $\boldsymbol{A_{i}}$ is a $\boldsymbol{d_{q}}\times\boldsymbol{T}$
dimensional matrix. Since the Concat operation is applied along the feature
dimension of all the matrices, the final output attention matrix
$\boldsymbol{A}$ from the multi-head attention block has dimensions
$\boldsymbol{Md_{q}}\times\boldsymbol{T}$.
The multi-head attention layer helps in finding features that are more
relevant for language identification. The scaled dot product attention
achieves this by giving more weighting to the more relevant features and less
weighting to less relevant features. Due to the presence of multiple heads in
the attention layer, this process selects features from different parts of the
input and helps in obtaining better language classification performance.
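The multi-head computation described above can be sketched in NumPy as follows. It uses the standard scaled dot-product form of [22] (softmax of $QK^{T}/\sqrt{d_q}$, as in equation (2)); the head count, feature sizes, and random projection matrices are illustrative placeholders, and rows here index time while columns index features.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(H, Wq, Wk, Wv, W0):
    """H: (T, d) frame features; Wq/Wk/Wv: lists of M (d, d_q) head projections."""
    heads = []
    for q_w, k_w, v_w in zip(Wq, Wk, Wv):
        Q, K, V = H @ q_w, H @ k_w, H @ v_w             # each (T, d_q)
        heads.append(softmax(Q @ K.T / np.sqrt(Q.shape[1])) @ V)
    return np.concatenate(heads, axis=1) @ W0           # (T, M*d_q) projected

rng = np.random.default_rng(0)
T, d, d_q, M = 10, 256, 32, 4                           # illustrative sizes
H = rng.standard_normal((T, d))
Wq = [rng.standard_normal((d, d_q)) for _ in range(M)]
Wk = [rng.standard_normal((d, d_q)) for _ in range(M)]
Wv = [rng.standard_normal((d, d_q)) for _ in range(M)]
W0 = rng.standard_normal((M * d_q, M * d_q))
A = multi_head_self_attention(H, Wq, Wk, Wv, W0)
print(A.shape)
```

Each row of the softmax output sums to one, so every head forms a convex combination over the T frames before the head outputs are concatenated along the feature dimension.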
### 2.3 Statistics pooling
The idea of the statistics pooling layer is similar to max pooling. In the
case of statistics pooling, we compute the mean and standard deviation from
frame-level features. The mean and standard deviation features are
concatenated to create the utterance-level feature vector, as described in the
equation below. Let $\boldsymbol{A=[a_{1},a_{2}.....a_{T}]}$ be the output
from the multi-head attention block.
$\boldsymbol{P}=\textsf{Concat}({\textsf{mean}(\boldsymbol{A})},{\textsf{std}(\boldsymbol{A})})$
(4)
Where $\boldsymbol{a_{i}}$ is a feature vector of dimension
$\boldsymbol{M*d_{q}}$ and $\boldsymbol{P}$ is the final pooled feature vector
from the statistics pooling layer. Since the dimension of the utterance-level
feature vector $\boldsymbol{P}$ becomes large when M is large, we add a
projection layer on top of the statistics pooling layer (Figure 1) in order to
reduce the dimension of $\boldsymbol{P}$. We take the output from this
projection layer to visualize the utterance-level embeddings for different
languages.
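The statistics pooling step reduces a variable-length frame sequence to a fixed-size utterance vector by concatenating the per-dimension mean and standard deviation, per equation (4). A minimal NumPy sketch (feature sizes are illustrative):

```python
import numpy as np

def statistics_pooling(A):
    """A: (T, F) frame-level features -> (2*F,) utterance-level vector."""
    return np.concatenate([A.mean(axis=0), A.std(axis=0)])

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 128))   # T frames of M*d_q-dimensional features
P = statistics_pooling(A)
print(P.shape)
```

Because the reduction is over the time axis, the output dimension depends only on the feature dimension, not on the utterance length, which is what allows a single classifier head to handle clips of any duration.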
## 3 Dataset
In this section, we describe our data collection process. We collect and
curate videos from YouTube (www.youtube.com) using manual labeling. We ask
annotators to look for videos in eight languages on YouTube and manually
verify that a video does not contain multiple Indian languages. Most
of these videos contain background noise or music signals. Sometimes a video
may contain a mix of English and other Indian languages due to code-mixing. We
use an in-house speech vs. non-speech detection model to detect only the
speech segments. We clip only the speech segments from every video and discard
the non-speech parts. After preprocessing, our total dataset
contains 373.27 hrs of audio data for 8 Indian languages: Hindi, English,
Kannada, Tamil, Telugu, Malayalam, Gujarati and Marathi. These languages are
officially spoken in the North and South regions of India. We split the
dataset into training and evaluation parts, and the statistics of the two
parts are shown in Table 1.
Dataset    | Train            | Eval
           | Duration  Files  | Duration  Files
English    | 44.21     15963  | 11.28     4074
Kannada    | 35.95     12988  | 8.90      3216
Gujarati   | 30.24     10933  | 7.312     2642
Hindi      | 38.79     14004  | 9.72      3510
Malayalam  | 34.98     12636  | 8.94      3228
Tamil      | 63.15     22774  | 15.95     5753
Telugu     | 35.07     12666  | 8.54      3087
Marathi    | 16.23     5873   | 4.013     1449
Table 1: Train and evaluation splits for different languages (Duration is in
Hrs)
## 4 Experiments
We conduct all our experiments on the in-house dataset collected for 8 Indian
languages. Our proposed model consists of an encoder, a multi-head self-
attention block, and a statistics pooling layer followed by a projection and a
softmax layer. We randomly select a 4 sec audio segment from each audio file
during training. Since our data has a sampling rate of 16 kHz, we get 64000
samples from every file during training, and we feed this 1x64000 dimensional
signal into our encoder. We conduct multiple experiments to see the
effectiveness of the multi-head self-attention module for language
identification. We first train a standalone 1D convolutional neural network
model as the first baseline model; we refer to this system as ResNet. We also
train a 1D convolutional neural network in combination with a unidirectional
LSTM as a second baseline, and we refer to it as ResNet-LSTM. Finally, our
proposed model is built using a 1D convolutional neural network, LSTM, and
multi-head self-attention. We refer to it as ResNet-LSTM-MHA-Raw.
We also conduct multiple experiments to see the effect of the audio duration
used during training. We train 3 different models, ResNet-LSTM-MHA-2Sec,
ResNet-LSTM-MHA-3Sec, and ResNet-LSTM-MHA-4Sec, which take 2 sec, 3 sec and
4 sec of audio data, respectively, during training. Our final experiments study
the effectiveness of using raw waveform methods instead of handcrafted
features. We set up an experiment to train the model using MFCC features as
inputs instead of the raw waveform. We extract 13-dimensional MFCC features
(with delta and double-delta) for every 25 ms window using a 10 ms frameshift
for this experiment. The MFCC based model is referred to as
ResNet-LSTM-MHA-MFCC, while the raw waveform based model is referred to as
ResNet-LSTM-MHA-Raw in this paper. We use the Adam [24] optimizer to train all
our models with a learning rate of 0.001 for up to 25 epochs, with a batch size
of 64. We train all our models using the PyTorch (https://pytorch.org/)
toolkit.
## 5 RESULTS
In this section, we describe the evaluation of the different models and their
performances. We train 2 baseline models, ResNet and ResNet-LSTM. The first
baseline model, ResNet, consists of a sequence of 3 Residual blocks made up of
1D convolution kernels. We can think of this model as a ResNet [23] with the
average pooling layer replaced by a statistics pooling layer. This model takes
4 sec of raw audio data and predicts the language label. The ResNet model has an
F1-score of 89.67% on the test dataset. The second baseline model, ResNet-LSTM,
uses the same setting as baseline-1 but has an extra LSTM layer on
top of the CNN in order to capture long-range temporal information. The
performance of this model is 92.21%, as shown in Table 2. We compare our
baseline models with our proposed model ResNet-LSTM-MHA-RAW, which contains a
multi-head attention layer and operates on the raw waveform as input. Table
2 shows that our model gets a 3.69% absolute improvement in F1-score compared to
the second baseline model. We also create a model that takes MFCC features as
input instead of raw audio, referred to as ResNet-LSTM-MHA-MFCC. We show that
raw waveform based models can get a 1.7% improvement over handcrafted
feature-based models.
System | F1-Score
---|---
ResNet (baseline-1) | 89.67%
ResNet-LSTM (baseline-2) | 92.21%
ResNet-LSTM-MHA-MFCC (ours) | 94.22%
ResNet-LSTM-MHA-Raw (ours) | **95.90%**

Table 2: Comparison of different architectures for language identification.
Bold indicates the best performance.
In order to see the effect of the input length during training, we conduct
experiments training the model on 2-second, 3-second, and 4-second audio
segments; we refer to these models as ResNet-LSTM-MHA-2Sec,
ResNet-LSTM-MHA-3Sec, and ResNet-LSTM-MHA-4Sec, respectively. The results are
shown in Table 3, which indicates that longer training segments tend to improve
the F1-score on the test data, since they provide longer context.
Table 3: Comparison of models trained with different segment durations. Bold
indicates the best performance.

System | F1-Score
---|---
ResNet-LSTM-MHA-2Sec | 92.64%
ResNet-LSTM-MHA-3Sec | 94.40%
ResNet-LSTM-MHA-4Sec | **95.90%**
Finally, we visualize the utterance-level embeddings extracted from the
projection layer for all the languages. We extract embeddings for 6500 randomly
selected test utterances and reduce their dimensionality to 2 using the t-SNE
technique. The t-SNE plot of the 2-D embeddings is shown in Figure 2. It can be
clearly seen that the proposed model learns highly language-discriminative
features at the segment level.
Fig. 2: t-SNE plot of utterance-level embeddings
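The dimensionality-reduction step behind Figure 2 can be sketched with scikit-learn's t-SNE. The array sizes, perplexity, and random data below are illustrative stand-ins, not the paper's actual embeddings:

```python
import numpy as np
from sklearn.manifold import TSNE

# Illustrative stand-ins for the utterance-level embeddings and labels.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 32))   # (num utterances, embedding dim)
labels = rng.integers(0, 8, size=200)     # one of the 8 language labels

# Reduce the embeddings to 2-D; each point would then be scattered and
# colored by its language label to produce a plot like Figure 2.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)
print(coords.shape)  # (200, 2)
```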
## 6 Conclusion
In this work, we propose a new architecture for language identification using
multi-head self-attention and 1D convolutional neural networks. We propose to
use the raw waveform directly as input, instead of handcrafted features, and to
learn language-discriminative features via 1D convolution operations. Our model
uses multi-head self-attention to select the features most important for the
language identification task, and a statistics pooling approach to extract an
utterance-level language representation from frame-level features. We collect
and curate 373 hours of audio data for 8 Indian languages: Hindi, English,
Kannada, Tamil, Telugu, Malayalam, Gujarati, and Marathi. Our experiments show
that multi-head self-attention combined with a raw-waveform-based 1D
convolutional neural network obtains the best performance on our evaluation
dataset. We extract utterance-level embeddings for our evaluation data and
visualize the clustering effect using t-SNE; the visualization clearly shows
that the model learns highly language-discriminative features.
## References
* [1] W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals, “Listen, Attend and Spell: A Neural Network for Large Vocabulary Conversational Speech Recognition” in ICASSP, 2016
* [2] S. Kim, T. Hori, and S. Watanabe, “Joint CTC-attention based endto-end speech recognition using multi-task learning,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 4835–4839.
* [3] Ronan Collobert, Christian Puhrsch, and Gabriel Synnaeve, “Wav2letter: an end-to-end convnet based speech recognition system,” CoRR, vol. abs/1609.03193, 2016.
* [4] W. Geng, J. Li, S. Zhang, X. Cai, and B. Xu, “Multilingual tandem bottleneck feature for language identification,” in Sixteenth Annual Conference of the International Speech Communication Association, 2015.
* [5] N. Dehak, P. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, “Front-end factor analysis for speaker verification,” IEEE Transactions on Audio, Speech,and Language Processing vol. 19, no. 4,pp. 788–798, 2011.
* [6] Y. Song, B. Jiang, Y. Bao, S. Wei, and L.-R. Dai, “I-vector representation based on bottleneck features for language identification,” Electronics Letters, vol. 49, no. 24, pp. 1569–1570, 2013
* [7] I. Lopez-Moreno, J. Gonzalez-Dominguez, O. Plchot, D. Martinez, J. Gonzalez-Rodriguez, and P. Moreno, “Automatic language identification using deep neural networks”, in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 5337–5341
* [8] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, 1997.
* [9] J. Gonzalez-Dominguez, I. Lopez-Moreno, H. Sak, J. Gonzalez Rodriguez, and P. J. Moreno, “Automatic language identification using long short-term memory recurrent neural networks.” in INTERSPEECH, 2014, pp. 2155–2159.
* [10] S. Ganapathy, K. J. Han, S. Thomas, M. K. Omar, M. Van Segbroeck, and S. S. Narayanan, “Robust language identification using convolutional neural network features,” in INTERSPEECH, 2014.
* [11] A. Lozano-Diez, R. Zazo-Candil, J. Gonzalez-Dominguez, D. T. Toledano, and J. Gonzalez-Rodriguez, “An end-to-end approach to language identification in short utterances using convolutional neural networks,” in INTERSPEECH, 2015.
* [12] D. Garcia-Romero and A. McCree, “Stacked long-term TDNN for spoken language recognition,” in Proc. Interspeech, 2016, pp. 3226–3230.
* [13] D. Snyder, D. Garcia-Romero, G. Sell, D. Povey, and S. Khudanpur,“X-vectors: Robust dnn embeddings for speaker recognition,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018
* [14] D. Snyder, D. Garcia-Romero, A. McCree, G. Sell, D. Povey, and S. Khudanpur, “Spoken language recognition using x-vectors,” in Odyssey: The Speaker and Language Recognition Workshop, 2018
* [15] C Bartz, T Herold, H Yang, and C Meinel ,“Language identification using deep convolutional recurrent neural networks”, CoRR,abs/1708.04811, 2017
* [16] D. Bahdanau, K. Cho, and Y. Bengio. “Neural machine translation by jointly learning to align and translate”, arXiv preprint arXiv:1409.0473, 2014
* [17] J. Chorowski, D. Bahdanau, K. Cho, and Y. Bengio, “End-to-end continuous speech recognition using attention-based recurrent nn: first results,” arXiv preprint arXiv:1412.1602, 2014
* [18] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in neural information processing systems, 2014, pp. 3104–3112.
* [19] M. Ravanelli and Y. Bengio, “Speaker recognition from raw waveform with SincNet,” in Proc. of SLT, 2018.
* [20] M. Sarma, P. Ghahremani, D. Povey, N. K. Goel, K. K. Sarma, and N. Dehak, “Emotion identification from raw speech signals using DNNs,” in Proc. Interspeech, 2018, pp. 3097–3101.
* [21] V. Peddinti, D. Povey, and S. Khudanpur, “A time delay neural network architecture for efficient modeling of long temporal contexts,” in Proceedings of INTERSPEECH, 2015.
* [22] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems, 2017, pp. 5998–6008
* [23] K. He, X. Zhang, S. Ren, and J. Sun. “Deep residual learning for image recognition,” Computer Vision and Pattern Recognition (CVPR), 2016
* [24] Diederik P. Kingma and Jimmy Ba, “Adam: A Method for Stochastic Optimization”, In Proceedings of the International Conference on Learning Representations (ICLR), 2014
* [25] Okabe, Koji and Koshinaka, Takafumi and Shinoda, Koichi,“Attentive Statistics Pooling for Deep Speaker Embedding”, In Interspeech 2018
# Fairness through Social Welfare Optimization
Violet (Xinying) Chen, Stevens Institute of Technology, <EMAIL_ADDRESS>
J. N. Hooker, Carnegie Mellon University, <EMAIL_ADDRESS>
(July 2022)
###### Abstract
We propose social welfare optimization as a general paradigm for formalizing
fairness in AI systems. We argue that optimization models allow formulation of
a wide range of fairness criteria as social welfare functions, while enabling
AI to take advantage of highly advanced solution technology. Rather than
attempting to reduce bias between selected groups, one can achieve equity
across all groups by incorporating fairness into the social welfare function.
This also allows a fuller accounting of the welfare of the individuals
involved. We show how to integrate social welfare optimization with both rule-
based AI and machine learning, using either an in-processing or a post-
processing approach. We present empirical results from a case study as a
preliminary examination of the validity and potential of these integration
strategies.
## 1 Introduction
Artificial intelligence is increasingly used not only to solve problems, but
to recommend action decisions that range from awarding mortgage loans to
granting parole. The prospect of making decisions immediately raises the
question of ethics and fairness. If ethical norms are to be incorporated into
artificial decision making, these norms must somehow be automated or
formalized. The leading approaches to this challenge include
* •
value alignment, which strives to train or modify AI systems to reflect human
ethical values automatically, e.g. Allen et al. (2005); Russell (2019);
Gabriel (2020);
* •
logical formulations of ethical and fairness principles that attempt to
represent them precisely enough to govern a rule-based AI system, e.g.
Bringsjord et al. (2006); Lindner et al. (2020); Hooker and Kim (2018); and
* •
statistical fairness metrics that aim to ensure that benefits are allocated
equitably in the decision process, e.g. Dwork et al. (2012); Mehrabi et al.
(2019); Chouldechova and Roth (2020).
Each of these approaches can be useful in a suitable context. We wish to
propose, however, an alternate framework for formalizing ethics and fairness
that has received less attention:
* •
social welfare optimization, which allows one to achieve equity or fairness by
maximizing a social welfare function.
Welfare economics has long used social welfare functions (SWFs) as a tool to
measure the desirability of a given distribution of benefits and harms. A SWF
is a function of the utility levels allocated to affected parties, where
utility reflects a party’s gain or loss as a consequence of the decisions of
interest. Using a SWF motivates explicit consideration of the downstream
outcomes of fairness and equity criteria. In contrast to leading notions of AI
fairness that focus on eliminating disparity between groups, SWFs allow a
broader perspective that emphasizes fairness in the welfare impacts of
decisions.
AI research is beginning to recognize the importance of a welfare perspective
on fairness (e.g., Corbett-Davies and Goel (2018); Hu and Chen (2020)), due in
part to its potential for aligning fairness concepts with social well-being.
Despite this rising attention, there is no general framework for incorporating
welfare-based fairness into AI systems. In this paper, we utilize social
welfare optimization as the core component of one possible framework. This
framework allows one to take advantage of the flexibility of SWFs to represent
a wide range of fairness and equity concepts, as well as to harness powerful
optimization solvers. Optimization methods are of course already employed in
AI to train neural networks, calibrate machine learning models, and the like.
Our proposal is to formalize welfare perspectives on AI fairness through
social welfare optimization.
We begin below by stating some specific advantages of social welfare
optimization as a paradigm for implementing equity and fairness in AI. We then
introduce a bit of notation, state the basic optimization problem, and present
an example of mortgage loan processing. Drawing motivation from this example,
we describe a general framework for welfare-based fairness through various
integration schemes between AI and optimization. We continue with a review of
related work in both the operations research and AI communities, and we
examine several SWFs to illustrate how they can capture a variety of fairness
concepts. We then show how the mortgage example can be implemented in a simple
setting, and observe that social welfare maximization can improve group
parity. We conclude by outlining possible research directions.
### 1.1 Advantages of Social Welfare Optimization
The optimization of social welfare functions offers several advantages as a
framework for incorporating fairness into AI.
* •
Social welfare functions provide a broader perspective on fairness than can be
achieved by focusing exclusively on bias and concepts of parity across groups.
They can represent a wide range of fairness concepts and take into account the
level of benefit or harm to those affected. There is no need to decide which
specific groups should be protected, because maximizing social welfare tends
to promote equity across all groups.
* •
Social welfare functions allow one to balance equity and efficiency in a
principled way. Where equity is an issue, there is often a desire for
efficiency as well. A social welfare approach obliges one to consider how
equity and utilitarian goals should be represented and balanced when one
chooses the function to be maximized.
* •
Optimization models allow one to harness powerful optimization methods, which
have been developed and refined over a period of 80 years or more. A wide
variety of social welfare functions can be formulated for solution by highly
advanced linear, nonlinear, and mixed integer programming solvers.
Formulations are provided in Anonymous (2022) for the SWFs mentioned herein.
* •
Optimization models offer enormous flexibility to include constraints on the
problem. Decisions are normally made in the context of resource constraints or
other limitations on possible options. These can be represented as constraints
in the optimization problem, as nearly all state-of-the-art optimization
methods are designed for constrained optimization.
## 2 The Basic Optimization Problem
The general problem of maximizing social welfare can be stated
$\max_{\bm{d}}\bigl\{W\bigl(\bm{U}(\bm{d})\bigr)\;\big|\;\bm{d}\in S_{\bm{d}}\bigr\}$ (1)
where $\bm{d}=(d_{1},\ldots,d_{n})$ is a vector of resources distributed
across stakeholders $1,\ldots,n$, and $S_{\bm{d}}$ is the set of feasible
values of $\bm{d}$ permitted by resource limits and other constraints.
$\bm{U}=(U_{1},\ldots,U_{n})$ is a vector of utility functions, where
$U_{i}(\bm{d})$ defines the utility experienced by stakeholder $i$ as a result
of the resource distribution $\bm{d}$. We can normally write $U_{i}(\bm{d})$
as $U_{i}(d_{i})$, since a stakeholder’s utility typically depends only on the
resources allotted to that stakeholder. Finally, $W(\bm{u})$ is a social
welfare function that measures the desirability of a vector $\bm{u}$ of
utilities. Problem (1) maximizes social welfare over all feasible resource
allocations.
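As an illustration, problem (1) can be instantiated with linear utilities $U_i(d_i)=\alpha_i d_i$ and a Rawlsian max-min SWF $W(\bm{u})=\min_i u_i$, which reduces to a linear program. The sketch below solves such an instance with SciPy; the numbers are illustrative, and this is only one special case of (1), not the paper's general formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Max-min instance of (1): maximize min_i alpha_i * d_i subject to a
# shared budget B and per-stakeholder caps r_i (all values illustrative).
alpha = np.array([1.0, 2.0, 4.0])   # utility per unit of resource
r = np.array([5.0, 5.0, 5.0])       # requested (max) amounts
B = 6.0                             # total budget

n = len(alpha)
# Decision vector x = (d_1, ..., d_n, t), where t = min utility.
c = np.zeros(n + 1)
c[-1] = -1.0                        # linprog minimizes, so minimize -t
A_ub = np.zeros((n + 1, n + 1))
for i in range(n):                  # t - alpha_i * d_i <= 0
    A_ub[i, i] = -alpha[i]
    A_ub[i, -1] = 1.0
A_ub[n, :n] = 1.0                   # sum_i d_i <= B
b_ub = np.zeros(n + 1)
b_ub[n] = B
bounds = [(0, ri) for ri in r] + [(0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
d, t = res.x[:n], res.x[-1]
print(np.round(d, 3), round(t, 3))
```

At the optimum, the budget is exhausted and every stakeholder ends up with the same utility level $t$; stakeholders with lower $\alpha_i$ receive more of the resource.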
## 3 Example: Mortgage Loan Processing
We use mortgage loan processing as an example, as it is a much-discussed
application of AI-based decision making. Issues of fairness arise when an AI
system is more likely to deny loans to members of certain groups, perhaps
reflecting minority status or gender. A frequently used remedy is to apply
statistical bias metrics to detect the problem and adjust the decision
algorithms in an attempt to solve it.
Yet bias is only one element of a broader decision-making context. For one
thing, there is a clear utilitarian imperative. The reason for automating
mortgage decisions in the first place is to predict more accurately who will
default, because defaults are costly for the bank and devastating to home
buyers. The desire for accurate prediction is, at root, a desire to maximize
utility. Furthermore, bias is regarded as unfair in large part because it
reduces the welfare of a segment of society that is already disadvantaged. An
aversion to bias is, to a great degree, grounded in a desire for distributive
justice in general. All this suggests that loan decisions should be designed
to achieve what we really want: efficiency and distributive justice, rather
than focusing exclusively on predictive accuracy and group parity.
The social welfare function $W$ in (1) should be selected to balance
efficiency and equity in a suitable fashion; we consider some candidate SWFs
in Section 6. The stakeholders $1,\ldots,n$ might include the loan applicants,
the bank, the bank’s stockholders, and the community at large. For simplicity,
we focus on the loan applicants as stakeholders. The utility function $\bm{U}$
converts a given set of loan decisions $\bm{d}=(d_{1},\ldots,d_{n})$ to a
vector of expected utilities $\bm{u}=(u_{1},\ldots,u_{n})=\bm{U}(\bm{d})$ that
the stakeholders experience as a result. More specifically, we let $d_{i}$
denote the approved loan amount, so that $0\leq d_{i}\leq r_{i}$, where $r_{i}$ is
the amount requested by applicant $i$, and $d_{i}=0$ means the loan request is denied.
The utility measure $u_{i}=U_{i}(d_{i})$ for applicant $i$ could depend on the
applicant’s financial situation as well as the amount of the loan, as for
example when the overall benefits for receiving a loan is greater for an
applicant who is less well-off. The SWF can reflect a preference for granting
loans to disadvantaged applicants even when they have a somewhat higher
probability of default, so as to ensure a more just distribution of utility.
This could have the effect of avoiding bias against minority groups, but as
part of a more comprehensive assessment of social welfare.
## 4 A General Framework
Drawing motivation from the mortgage example, we formalize a general framework
for designing AI systems embedded with welfare-based fairness considerations.
### Step 1: Specify the decision problem
We begin by specifying the needed components of the decision problem. This
step is critical for the success of later steps as it ensures we have a
precise understanding of the problem scope and context. We highlight some key
components that commonly exist in problem instances. Additional elements may
be needed for specific problems.
* •
Task. The task is defined by the resources to be allocated and the action to
be taken. In the mortgage example, the bank’s task is to decide upon the loan
amount to be granted to each applicant.
* •
Stakeholders. Stakeholders are individuals or groups directly or indirectly
affected by the decisions. They are the utility recipients in the social
welfare model.
* •
Goals. The goals are desirable outcomes of the decision problem and serve as
guiding principles for selecting the social welfare function $W(\bm{u})$. The
goals can encompass total utility created as well as how it is distributed.
* •
Constraints. These are restrictions that limit which actions are feasible and
therefore define the domain $S_{\bm{d}}$. A main source of restriction is the
scarcity of resources. In the mortgage example, the bank is subject to budget
constraints and total risk tolerance.
### Step 2: Define utility and social welfare functions
The appropriate definition of utility varies with the application. It can
sometimes be identified directly with quantity of resources allocated to an
individual, in which case $U_{i}(d_{i})=d_{i}$. In other cases, it can be
measured by the financial benefit provided or the negative cost incurred by
resources allocated, which tends to result in linear utility functions
$U_{i}(d_{i})=\alpha_{i}d_{i}$. In health applications, it is often measured
in quality-adjusted life years (QALYs). When there are decreasing returns to
scale, a concave nonlinear utility function $U_{i}(d_{i})$ can be used.
A suitable social welfare function likewise depends on the application. In
some contexts, minimizing inequality may be the overriding goal, while in
others one may wish to give special priority to the disadvantaged or combine
efficiency and equity in some fashion. We provide below a sampling of SWFs
that may be appropriate. Additional SWFs are surveyed in Anonymous (2022).
### Step 3: Develop a decision model
Decision models for social welfare optimization can be developed for either
rule-based AI systems or machine learning.
#### Rule-based AI
Rule-based systems are increasingly recognized for their capacity to support
principled and transparent AI in various application domains. For instance,
Brandom (2018) observes the trend in autonomous vehicle industry whereby
“companies have shifted to rule-based AI, an older technique that lets
engineers hard-code specific behaviors or logic into an otherwise self-
directed system.” Kim et al. (2021) demonstrate that ethical rules can be
precisely represented as logical propositions suitable for inclusion in a rule
base.
Social welfare can be incorporated in a rule-based system in either of two
ways. One uses the social welfare optimization problem to guide the selection
of rules to encode directly into the AI system. It then relies on the system
to make decisions as cases arise. In the mortgage example, the bank may pre-
specify applicant classes, based on applicant financial data, and determine
decision rules for these classes using a social welfare optimization model.
Such a rule-based system is straightforward to use: for a new loan applicant,
the bank would first identify which class the applicant belongs to, then
approve the loan if the corresponding rule for the class says so and reject
otherwise.
Alternatively, one can include rules in the AI rule base that provide
instructions for formulating the optimization problem and for choosing actions
based on the optimal solution. This is consistent with the proposal from
Bringsjord et al. (2006) that one could constrain AI systems with ethical
principles formalized as logic statements, such as if-then statements. In the
example, the bank may consider rules that require applicants with certain
features to receive reasonable prioritization, and these rules can be captured
as constraints or incorporated into the objective function in the optimization
model. Furthermore, when making the final loan decisions, the bank may define
rules about implementing the allocation solution obtained from the
optimization problem.
#### Machine Learning
There are at least two ways in which social welfare optimization can be
integrated with machine learning. A post-processing approach trains the
machine to predict the information needed to formulate the welfare
optimization model, whereupon the model is solved for decisions. An in-
processing approach combines the loss function of the learning mechanism with
a social welfare function, perhaps by taking a convex combination of the two.
As a result, the machine yields decisions that are already welfare sensitive,
and no post-processing is necessary.
We can make this more precise for supervised learning as follows. Suppose the
training data set is $\mathcal{D}=\{(\bm{x}_{i},y_{i})\}_{i=1}^{n}$, where
$\bm{x}_{i}$ is the feature vector and $y_{i}$ is the true label. A supervised
learning method trains a model $h$ with the accuracy of the predicted labels
$\{h(\bm{x}_{i})\}$ as the primary goal. The ML literature has studied a large
number of forms for $h$, ranging from simple functional forms in logistic
regression and support vector machines to more complex structures such as
decision trees and neural networks.
In the post-processing approach, the prediction step focuses solely on
accuracy through minimization of a loss function $\mathcal{L}$. The prediction
and decision steps can be formalized as
$\displaystyle\begin{aligned} &\textit{Prediction step: } h^{*}=\operatorname*{argmin}_{h}\,\mathcal{L}(h,\mathcal{D});\\ &\textit{Decision step: } \bm{d}^{*}=\operatorname*{argmax}_{\bm{d}}\bigl\{W\bigl(\bm{U}(\bm{d})\bigr):d_{i}=d\bigl(\bm{x}_{i},h^{*}(\bm{x}_{i})\bigr)\bigr\}.\end{aligned}$
In the mortgage example, we might suppose that $h(\bm{x}_{i})$ is the
probability that applicant $i$ will repay the loan if it is granted. Then if
$d_{i}$ is the loan amount, we could set
$U_{i}(\bm{d})=U_{i}(d_{i})=h^{*}(\bm{x}_{i})d_{i}$. It is notable that all
supervised learning methods are suitable for this type of integration, and the
decision maker has the flexibility to choose the ML method that fits the
problem’s context and computational requirement.
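Under the illustrative utility $U_i(\bm{d})=h^{*}(\bm{x}_i)d_i$ above, the two-step post-processing pipeline can be sketched as follows. The synthetic data, the budget constraint, and the utilitarian SWF are our assumptions for the sketch, not the paper's specification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from scipy.optimize import linprog

# Post-processing sketch: (1) train h to predict repayment probability,
# (2) solve a welfare LP over loan amounts.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                        # applicant features
y = (X @ np.array([1.0, -0.5, 0.3]) + rng.normal(size=500) > 0).astype(int)

h = LogisticRegression().fit(X, y)                   # prediction step

X_new = rng.normal(size=(5, 3))                      # new applicants
p = h.predict_proba(X_new)[:, 1]                     # repayment probabilities
r = np.full(5, 100.0)                                # requested amounts
B = 250.0                                            # lending budget

# Decision step (utilitarian SWF):
#   maximize sum_i p_i * d_i  s.t.  sum_i d_i <= B, 0 <= d_i <= r_i.
res = linprog(-p, A_ub=np.ones((1, 5)), b_ub=[B],
              bounds=[(0, ri) for ri in r])
d = res.x
print(np.round(d, 1))
```

With a utilitarian SWF the LP simply funds applicants in order of predicted repayment probability until the budget is spent; an equity-sensitive SWF would instead spread the budget across less advantaged applicants.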
The in-processing approach can be viewed as a type of in-processing fair ML
method, but it is distinguished from most of the literature in that we modify
the standard accuracy objective in the training model to reflect social
welfare. A simple scheme is to take a weighted sum of the training loss and the
negative social welfare. Then, if $h(\bm{x}_{i})$ is the predicted decision for
individual $i$, we have
$\displaystyle\textit{Prediction/decision step: } h^{*}=\operatorname*{argmin}_{h}\bigl\{\mathcal{L}(h,\mathcal{D})-\lambda W(\bm{U}(\bm{d})):d_{i}=h(\bm{x}_{i})\bigr\}$
The success of this type of integration is contingent on whether the resulting
training model can be solved efficiently.
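A minimal sketch of this weighted objective, assuming a logistic model $h$ and a utilitarian SWF equal to the mean predicted approval probability (both our illustrative choices, not the paper's):

```python
import numpy as np

# Sketch of minimizing  L(h, D) - lambda * W(U(d))  by gradient descent,
# with W taken as the mean predicted approval probability.
rng = np.random.default_rng(1)
X_raw = rng.normal(size=(200, 3))
y = (X_raw[:, 0] > 0).astype(float)
X = np.hstack([np.ones((200, 1)), X_raw])   # add an intercept column

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, lam):
    """Gradient of mean log-loss minus lam * mean predicted approval."""
    p = sigmoid(X @ w)
    g_loss = X.T @ (p - y) / len(y)            # logistic-loss gradient
    g_welfare = X.T @ (p * (1 - p)) / len(y)   # gradient of mean(p)
    return g_loss - lam * g_welfare

def train(lam, steps=500, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * grad(w, lam)
    return w

p_fair = sigmoid(X @ train(lam=0.2))   # welfare-sensitive predictions
p_base = sigmoid(X @ train(lam=0.0))   # accuracy-only predictions
# The welfare term pushes predicted approvals upward on average.
print(round(p_fair.mean(), 3), round(p_base.mean(), 3))
```

The same scheme extends to neural models by adding the negative-welfare term to the training loss, provided the combined objective remains tractable to optimize.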
As a final remark, we briefly note the integration potential for two other
core machine learning methods, unsupervised learning and reinforcement
learning. Fairness has been studied in both methods, but progress is much more
limited relative to fair supervised learning. With respect to unsupervised
learning, we can easily apply post-processing integration to clustering
methods (for example) and utilize the trained clusters as input to specify the
optimization problem. In the loan example, the bank can use clustering
algorithms to categorize financial profiles that will play a role in the
optimization formulation. Recent work in fair clustering, e.g. Abraham et al.
(2019) and Deepak and Abraham (2020), explores an in-processing strategy to
extend K-means clustering to include fairness considerations by adding a
fairness component to the usual K-means objective function.
In reinforcement learning (RL), the goal is to search for a reward-maximizing
policy in a dynamic environment that is typically modelled as a Markov
Decision Process. Defining and achieving fairness in RL is more challenging
due to the sequential and dynamic structure. Weng (2019); Siddique et al.
(2020) propose a novel framework for fair multi-objective reinforcement
learning based on welfare optimization. The key component of their proposal is
to replace the standard reward objective with a particular social welfare
function on the reward distribution. This precisely exemplifies in-processing
integration and demonstrates the potential of social welfare optimization for
seeking fairness in RL. In addition, Chohlas-Wood et al. (2021) propose a
multi-armed bandit inspired framework for learning a fair policy, where
fairness is defined with respect to the consequences of the decisions. A core
element of their approach is a social welfare optimization problem, where the
objective characterizes a trade-off between fairness and efficiency.
## 5 Previous Work
Social welfare optimization is already fairly well established in the
operations research literature, and it is beginning to attract interest in the
AI community. We view AI in a broad sense to refer to all models and
algorithms that can support decision-making. Our proposal is that the AI community expand
these initial efforts into a general research program for formulating
fairness. We review here some of the previous work in both literatures.
An excellent survey of equity models used in operations research is provided
by Karsu and Morton (2015). We mention a few examples that combine equity and
efficiency. Bandwidth allocation in telecommunication networks is a popular
application studied in early works on fair resource allocation (Luss (1999);
Ogryczak and Śliwiński (2002); Ogryczak et al. (2008)). For problems in this
domain, a standard setup is to interpret bandwidth as utility and define a SWF
that is consistent with a Rawlsian maximin criterion. The corresponding
optimization problem seeks equitable allocations that optimize the worst
performance among activities or services that compete for bandwidth. Project
assignment is another application where fairness is often relevant, as the
involved stakeholders may have different preferences over projects. For
instance, Chiarandini et al. (2019) work with a real-life decision to assign
projects to university students. They use student rankings of projects as
utilities and study a variety of SWFs that capture different fairness-
efficiency balancing principles. Fair optimization has also received attention
in humanitarian operations. Eisenhandler and Tzur (2019) study an important
logistical challenge in food bank operations: food pickup and distribution.
They design a combined routing and resource allocation model to seek both a
fair allocation of food to different agencies and efficient delivery of as much
food as possible. The utilities of agencies are measured by the amount of food
delivered, and an SWF is selected that combines total utility with the Gini
coefficient.
Mostajabdaveh et al. (2019) consider a disaster preparation task of selecting
shelter locations and assigning neighborhoods to shelters. They choose a SWF
that combines the Gini coefficient with neighborhood utilities based on the
travel distances to their assigned shelter.
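As a concrete instance of combining utility with the Gini coefficient, one simple SWF scales total utility by one minus the Gini coefficient of the utility vector; the exact functional forms in the cited papers differ, so this is only an illustration:

```python
import numpy as np

def gini(u):
    """Gini coefficient of a nonnegative utility vector (0 = perfect equality)."""
    u = np.sort(np.asarray(u, dtype=float))
    n = len(u)
    if u.sum() == 0:
        return 0.0
    index = np.arange(1, n + 1)
    return (2 * (index * u).sum()) / (n * u.sum()) - (n + 1) / n

def gini_swf(u):
    """Total utility discounted by inequality: sum(u) * (1 - Gini(u))."""
    return np.sum(u) * (1 - gini(u))

equal = np.array([10.0, 10.0, 10.0])
skewed = np.array([28.0, 1.0, 1.0])
# Same total utility, but the equal allocation scores higher.
print(gini_swf(equal), gini_swf(skewed))
```

This kind of SWF rewards efficiency (through the total) while penalizing unequal distributions (through the Gini term).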
Recent AI research has developed efficient algorithms that take fairness into
account. This effort is not directly comparable to our proposal in that it
develops algorithms to solve specific problems that have a fairness component,
rather than formulating optimization models that can be submitted to state-of-
the-art software. Algorithmic design tasks are often associated with fair
matching decisions, such as kidney exchange McElfresh and Dickerson (2018),
paper-reviewer assignment in peer review Stelmakh et al. (2019), or online
decision procedures for a complex situation such as ridesharing Nanda et al.
(2020).
Fair machine learning is a rapidly growing field in recent years. Fair ML
methods in literature can be categorized as pre-, in-, or post-processing,
which respectively seek fairness by modifying standard ML methods before,
during, or after the training phase. The majority of fair ML methods seek to
eliminate bias and discrimination in standard ML models, via fairness notions
that measure a certain type of disparity in the generated predictions. Many of
these methods rely on optimization in the fairness-seeking components. Pre-
processing methods can use optimization models to find the best data
modifications to the training data to prevent bias and disparity (see e.g.
Zemel et al. (2013); Calmon et al. (2017)). Similarly, post-processing methods
can use optimization models to determine the optimal tuning rules to adjust
the predictions generated from the trained model to seek fairness (see e.g.
Hardt et al. (2016); Alabdulmohsin (2020)). Moreover, fairness through
optimization fits naturally into in-processing methods, which modify standard
ML models by adding fairness constraints or including fairness components in
objective function (see e.g. Zafar et al. (2019); Olfat and Aswani (2018);
Donini et al. (2018)).
In contrast to this dominant statistical view of fairness, an emerging
research thread advocates welfare-based fairness in ML to seek better
compatibility between fair ML and distributive justice. This is in line with
our proposal of using social welfare functions to capture a broader
perspective on fairness. We next discuss a few representative papers in this
thread, and review their chosen utility and social welfare definitions. Liu et
al. (2018) look into the delayed impacts of fairness in machine learning on
the welfare of the involved people. Using loan processing as the main running
example, they suppose people have a performance variable, such as a credit
score, that they would like to improve over time. The paper's main results
demonstrate that imposing fairness, with the aim of benefiting certain
protected groups, on a machine learning model does not guarantee long-term improvements
for the targeted groups. In their analysis, they define utility functions
based on the performance variables and individuals’ expected outcomes, and use
a utilitarian sum of individual utilities as the central decision maker’s
social welfare objective. Heidari et al. (2018) consider a standard supervised
learning setting with true labels $\\{y_{i}\\}$ and predicted labels
$\\{\hat{y}_{i}\\}$. They define the utility function as a function of
$y_{i},\hat{y}_{i}$, and the specific form is chosen to reflect whether $i$
is risk-averse, risk-neutral, or risk-seeking, and how close the predicted outcome
$\hat{y}_{i}$ is to $i$’s desirable outcome. They then define a utilitarian
sum of these individual utilities as the social welfare measure, and propose
to add a constraint on this social welfare value to standard ML models as an
in-processing fair ML approach. Hu and Chen (2020) study a similar utility
definition without the risk component in a classification setup. They evaluate
the overall welfare associated with classification decisions through comparing
a vector of welfare values, which measure the utilitarian welfare by group.
Also in a classification setting, Corbett-Davies and Goel (2018) suppose each
group has fixed benefits and costs associated with classification outcomes,
and these values are used as parameters in the utility functions. A group’s
utility aggregates the benefits and costs that individuals of the group incur
from their classification outcomes. A more refined view of utility is studied
in Heidari et al. (2019): they partition one’s actual utility into an effort-
based component and an advantage component. Utilizing this partition, they
group individuals by effort-based utilities and propose a fairness measure
equivalent to the expected advantage utility of the worst-off group.
## 6 A Sampling of Social Welfare Functions
We briefly review a collection of SWFs to illustrate how they can embody
various conceptions of equity. For each, we indicate the type of optimization
model it yields, and whether it is appropriate for our running example of
mortgage loan processing. We classify the SWFs as pure fairness metrics,
functions that combine fairness and efficiency, and statistical fairness
metrics.
### 6.1 Pure fairness measures
Social welfare functions that measure fairness alone, without an element of
efficiency, are of two basic types: inequality metrics and fairness for the
disadvantaged.
Inequality metrics abound in the economics literature. Some simple ones are
represented by the following SWFs (which negate the inequality measure):
$W(\bm{u})=\left\{\begin{array}{ll}{\displaystyle-(1/\bar{u})(u_{\max}-u_{\min})}&\mbox{for the {\em relative range}}\\ {\displaystyle-(1/\bar{u})\sum_{i}|u_{i}-\bar{u}|}&\mbox{for the {\em relative mean deviation}}\\ {\displaystyle-(1/\bar{u})\Big[(1/n)\sum_{i}(u_{i}-\bar{u})^{2}\Big]^{1/2}}&\mbox{for the {\em coefficient of variation}}\end{array}\right.$
There is also the well-known Gini coefficient, which is proportional to the
area between the Lorenz curve and a diagonal line representing perfect
equality. It corresponds to the SWF
$W(\bm{u})=1-\frac{1}{2\bar{u}n^{2}}\sum_{i,j}|u_{i}-u_{j}|$
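These inequality metrics translate directly into code. The following is a minimal sketch (function names are ours, not the paper's) that evaluates each negated inequality SWF on a utility vector:

```python
import numpy as np

def relative_range_swf(u):
    """Negated relative range: -(u_max - u_min) / mean(u)."""
    u = np.asarray(u, dtype=float)
    return -(u.max() - u.min()) / u.mean()

def relative_mean_deviation_swf(u):
    """Negated relative mean deviation: -(1/mean) * sum |u_i - mean|."""
    u = np.asarray(u, dtype=float)
    return -np.abs(u - u.mean()).sum() / u.mean()

def coefficient_of_variation_swf(u):
    """Negated coefficient of variation: -(population std / mean)."""
    u = np.asarray(u, dtype=float)
    return -np.sqrt(np.mean((u - u.mean()) ** 2)) / u.mean()

def gini_swf(u):
    """W(u) = 1 - Gini coefficient, via pairwise absolute differences."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    return 1.0 - np.abs(u[:, None] - u[None, :]).sum() / (2.0 * u.mean() * n ** 2)
```

All four SWFs attain their maximum (0, or 1 for the Gini-based SWF) at perfect equality.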
Other fairness-based SWFs are concerned with the lot of the disadvantaged. The
Hoover index measures the fraction of total utility that would have to be
transferred from the richer half of the population to the poorer half to
achieve perfect equality. The SWF is
$W(\bm{u})=-\frac{1}{2n\bar{u}}\sum_{i}|u_{i}-\bar{u}|$
The Hoover index is proportional to the relative mean deviation and can
therefore be optimized using the same model.
The McLoone index compares the total utility of individuals at or below the
median utility to the utility they would enjoy if all were brought up to the
median utility. The index is 1 if nobody’s utility is strictly below the
median and approaches 0 if there is a long lower tail. The SWF is
$W(\bm{u})=\frac{1}{|I(\bm{u})|\tilde{u}}\sum_{i\in I(\bm{u})}u_{i}$
where $\tilde{u}$ is the median of utilities in $\bm{u}$ and $I(\bm{u})$ is
the set of indices of utilities at or below the median.
The Hoover and McLoone indices measure only the relative welfare of
disadvantaged parties, and not their absolute welfare. The maximin criterion
addresses both. It is based on the Difference Principle of John Rawls, which
states that inequality should exist only to the extent it is necessary to
improve the lot of the worst-off (Rawls (1999); Freeman (2003); Richardson and
Weithman (1999)). It can be plausibly extended to a lexicographic maximum
principle. The SWF is simply
$W(\bm{u})=\min_{i}\{u_{i}\}$
Purely fairness-oriented SWFs can be used when equity is truly the only issue
of concern. Consequently, they might be unsuitable for the mortgage problem,
where overall utility is a prime consideration.
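As an illustration (ours, not the paper's), the Hoover, McLoone, and maximin criteria can likewise be evaluated directly from a utility vector:

```python
import numpy as np

def hoover_swf(u):
    """Negated Hoover index: -(1 / (2 n mean)) * sum |u_i - mean|."""
    u = np.asarray(u, dtype=float)
    return -np.abs(u - u.mean()).sum() / (2 * len(u) * u.mean())

def mcloone_swf(u):
    """Total utility of those at or below the median, relative to the
    utility they would have if all of them were raised to the median."""
    u = np.asarray(u, dtype=float)
    med = np.median(u)
    lower = u[u <= med]
    return lower.sum() / (len(lower) * med)

def maximin_swf(u):
    """Rawlsian maximin criterion: welfare of the worst-off individual."""
    return float(np.min(u))
```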
### 6.2 Combining fairness and efficiency
Several SWFs combine equity and efficiency, sometimes with a parameter that
regulates the relative importance of each. Perhaps the best known is alpha
fairness, for which the SWF is
$W_{\alpha}(\bm{u})=\left\{\begin{array}{ll}{\displaystyle\frac{1}{1-\alpha}\sum_{i}u_{i}^{1-\alpha}}&\mbox{for }\alpha\geq 0,\;\alpha\neq 1\\ {\displaystyle\sum_{i}\log(u_{i})}&\mbox{for }\alpha=1\end{array}\right.$
Larger values of $\alpha$ imply a greater emphasis on equity, with $\alpha=0$
corresponding to a pure utilitarian criterion $\sum_{i}u_{i}$, and
$\alpha=\infty$ to a pure maximin criterion. An important special case is
$\alpha=1$, which corresponds to proportional fairness, also known as the Nash
bargaining solution. It is widely used in telecommunications and other
engineering applications. Both proportional fairness and alpha fairness have
been given axiomatic and bargaining justifications (Nash (1950); Harsanyi
(1977); Rubinstein (1982); Binmore et al. (1986); Lan et al. (2010)).
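A minimal sketch of the alpha-fair SWF (ours; it assumes strictly positive utilities when $\alpha\geq 1$):

```python
import numpy as np

def alpha_fairness_swf(u, alpha):
    """Alpha-fair SWF: alpha=0 is utilitarian, alpha=1 is proportional
    fairness (the Nash bargaining solution), and large alpha
    approaches the maximin criterion."""
    u = np.asarray(u, dtype=float)
    if alpha == 1:
        return np.log(u).sum()
    return (u ** (1 - alpha)).sum() / (1 - alpha)
```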
The Kalai-Smorodinsky (K–S) bargaining solution, proposed as an alternative to
the Nash bargaining solution, minimizes each person’s relative concession.
That is, it provides everyone the largest possible utility relative to the
maximum one could obtain if other players are disregarded, subject to the
condition that all persons receive the same fraction $\beta$ of their maximum.
In addition to the bargaining justification of Kalai and Smorodinsky (1975),
this approach has been defended by Thompson (1994) and is implied by the
contractarian philosophy of Gautier (1983). The SWF can be formulated
$W(\bm{u})=\left\{\begin{array}{ll}\sum_{i}u_{i},&\mbox{if $\bm{u}=\beta\bm{u}^{\max}$ for some $\beta$ with $0\leq\beta\leq 1$}\\ 0,&\mbox{otherwise}\end{array}\right.$
where $u_{i}^{\max}=\max_{(\bm{x},\bm{u})\in S_{\mathbf{x}\bm{u}}}u_{i}$ for
each $i$.
Williams and Cookson (2000) suggest two threshold criteria for combining
maximin and utilitarian objectives in a 2-person context. One uses maximin
until the cost of fairness becomes too great, whereupon it switches to
utilitarianism, and the other does the opposite. Hooker and Williams (2012)
generalize the former to $n$ persons by proposing the following SWF:
$W_{\Delta}(\bm{u})=(n-1)\Delta+\sum_{i=1}^{n}\max\big\{u_{i}-\Delta,\,u_{\min}\big\}$
where $u_{\min}=\min_{i}\{u_{i}\}$. The parameter $\Delta$ regulates the
equity/efficiency trade-off in a way that may be easier to interpret in
practice than the $\alpha$ parameter: parties whose utility is within $\Delta$
of the lowest utility receive special priority. Thus the disadvantaged are
favored, and $\Delta$ defines who is disadvantaged. As with the $\alpha$
parameter, $\Delta=0$ corresponds to a purely utilitarian criterion and
$\Delta=\infty$ to a maximin criterion.
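The threshold SWF is straightforward to evaluate; a minimal sketch (ours):

```python
import numpy as np

def hooker_williams_swf(u, delta):
    """W_Delta(u) = (n-1)*Delta + sum_i max{u_i - Delta, u_min}.
    Delta=0 recovers the utilitarian sum; for large Delta the ranking
    of utility vectors matches the maximin criterion."""
    u = np.asarray(u, dtype=float)
    return (len(u) - 1) * delta + np.maximum(u - delta, u.min()).sum()
```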
One weakness of the model is that the actual utility levels of disadvantaged
parties other than the very worst-off have no effect on the measurement of
social welfare, as long as those utilities are within $\Delta$ of the lowest.
As a result, the socially optimal solution may not be as sensitive to equity
as one might desire. Chen and Hooker (2020, 2022) address this issue by
combining utilitarianism with a leximax rather than a maximin criterion. A
leximax (lexicographic maximum) solution is one in which the smallest utility
is maximized, then subject to this value the second smallest is maximized, and
so forth. Chen and Hooker combine leximax and utilitarian criteria by
maximizing a sequence of threshold SWFs, thereby obtaining more satisfactory
solutions.
### 6.3 Statistical bias metrics
While we argue that bias metrics afford an overly narrow perspective on
fairness, they nonetheless can be expressed as SWFs if desired. The utility
vector $\bm{u}$ becomes simply a binary vector in which $u_{i}=1$ if
individual $i$ is selected for some benefit, and $u_{i}=0$ otherwise. We set
constant $a_{i}=1$ when person $i$ actually qualifies for selection (as for
example when person $i$ in the mortgage training set repaid the loan), and
$a_{i}=0$ otherwise. Two groups are compared, respectively indexed by $N$ and
$N^{\prime}$. One is a protected group, such as a minority subpopulation, and
the other consists of the rest of the population.
For example, demographic parity has the SWF
$W(\bm{u})=1-\left|\frac{1}{|N|}\sum_{i\in
N}u_{i}-\frac{1}{|N^{\prime}|}\sum_{i\in N^{\prime}}u_{i}\right|$
Equalized odds can be measured in two ways, one of which is equality of
opportunity:
$W(\bm{u})=1-\left|\frac{\sum_{i\in N}a_{i}u_{i}}{\sum_{i\in
N}a_{i}}-\frac{\sum_{i\in N^{\prime}}a_{i}u_{i}}{\sum_{i\in
N^{\prime}}a_{i}}\right|$
and the other is predictive rate parity:
$W(\bm{u})=1-\left|\frac{\sum_{i\in N}a_{i}u_{i}}{\sum_{i\in
N}u_{i}}-\frac{\sum_{i\in N^{\prime}}a_{i}u_{i}}{\sum_{i\in
N^{\prime}}u_{i}}\right|$
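Expressed as code (an illustrative sketch; function names are ours), each bias-metric SWF compares a rate between the protected group and the rest of the population:

```python
import numpy as np

def demographic_parity_swf(u, protected):
    """W = 1 - |selection rate in N - selection rate in N'|.
    u: 0/1 selection decisions; protected: boolean group mask."""
    u, g = np.asarray(u, dtype=float), np.asarray(protected, dtype=bool)
    return 1 - abs(u[g].mean() - u[~g].mean())

def equal_opportunity_swf(u, a, protected):
    """W = 1 - |true positive rate in N - TPR in N'|; a_i = 1 if i qualifies."""
    u, a = np.asarray(u, dtype=float), np.asarray(a, dtype=float)
    g = np.asarray(protected, dtype=bool)
    tpr = lambda m: (a[m] * u[m]).sum() / a[m].sum()
    return 1 - abs(tpr(g) - tpr(~g))

def predictive_rate_parity_swf(u, a, protected):
    """W = 1 - |precision in N - precision in N'|."""
    u, a = np.asarray(u, dtype=float), np.asarray(a, dtype=float)
    g = np.asarray(protected, dtype=bool)
    prec = lambda m: (a[m] * u[m]).sum() / u[m].sum()
    return 1 - abs(prec(g) - prec(~g))
```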
Bias measures take no account of efficiency. One can, of course, maximize
predictive accuracy subject to constraints on the amount of bias, but this has
a number of drawbacks:
* •
As previously argued, it provides a very limited perspective on the utility
actually created by decisions. Indeed, the utility vector consists only of 0–1
choices.
* •
Bias measurement forces one to designate “protected groups” (as indicated by
the index set $N$). There is no clear principle for selecting which groups
should be protected, unless one is content simply to recognize those mandated
by law.
* •
There is no consensus on which bias measure is suitable in a given context, if
any. Bias measures were developed by statisticians to measure predictive
accuracy, not to assess fairness.
* •
There is no principle for balancing equity and efficiency. By contrast, when
equity is part of the objective function, as in social welfare maximization,
the choice of that function obliges one to justify the equity/efficiency
trade-off mechanism in a transparent manner.
* •
Bias measurement forces one to identify a priori which individuals in a
training set should be selected for benefits (as indicated by $a_{i}$). In a
social welfare approach, no prior decisions of this kind are necessary.
## 7 Case Study: Mortgage Loan Processing
We demonstrate the proposed framework on our running example. Specifically, we
develop loan decision models integrated with machine learning for social
welfare optimization. Recall from Section 3 that a bank faces the task of
processing loan applications and allocating available funds to the approved applicants. For
this task, the stakeholders are loan applicants. The bank has a conventional
efficiency-driven goal to approve qualified applicants and avoid loan
defaults. Additionally, the bank aims to achieve a fair distribution of
utilities among the loan applicants.
### 7.1 Decision models
Suppose an applicant has a feature profile $\bm{x}_{i}$ and a true label
$y_{i}\in\{1,-1\}$ indicating his/her qualification status, namely, whether
he/she would default on an approved loan. The bank has data on past loan
processing decisions and repayment outcomes from applicants of mixed profiles.
inform loan decisions, the bank can train ML models based on historical loan
processing data to predict applicants’ default risks.
We consider logistic regression, one of the popular ML methods in practice. A
standard logistic regression model is trained through the following loss
minimization problem, where $\bm{\theta}$ is the decision boundary.
$\displaystyle\begin{aligned}
\min_{\bm{\theta}}~{}C\sum_{i}\log(1+e^{-y_{i}\langle\bm{\theta},\bm{x}_{i}\rangle})+\lambda_{1}\left\lVert\bm{\theta}\right\rVert_{1}\end{aligned}$
(2)
Based on this model, $\frac{1}{1+e^{-\langle\bm{\theta},\bm{x}_{i}\rangle}}$
can be interpreted as the probability of $y_{i}=1$, which we use as the
predicted probability for an applicant with the features $\bm{x}_{i}$ to repay
a loan. To construct a classifier from the logistic regression model, we use a
threshold $\tau\in[0,1]$ on these output probabilities. When
$\frac{1}{1+e^{-\langle\bm{\theta},\bm{x}_{i}\rangle}}>\tau$, $i$ will be
labeled positive, i.e. $\hat{y}_{i}=1$; otherwise, the predicted label is
negative, $\hat{y}_{i}=-1$. The typical threshold is $\tau=0.5$, and
increasing the threshold corresponds to a stricter positive classification
standard.
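The prediction and thresholding steps above can be sketched as follows (function names are ours):

```python
import numpy as np

def repay_probability(theta, X):
    """Predicted probability of y_i = 1: 1 / (1 + exp(-<theta, x_i>))."""
    return 1.0 / (1.0 + np.exp(-X @ theta))

def classify(theta, X, tau=0.5):
    """Label +1 when the predicted repay probability exceeds tau,
    otherwise -1; raising tau makes positive classification stricter."""
    return np.where(repay_probability(theta, X) > tau, 1, -1)
```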
#### Post-processing integration
In this integration scheme, we utilize the predicted repay probabilities
$\hat{p}_{i}=\frac{1}{1+e^{-\langle\bm{\theta},\bm{x}_{i}\rangle}}$ in loan
processing. Based on $\\{\hat{p}_{i}\\}$, we use the following model to
optimize the social welfare in loan decisions.
$\displaystyle\begin{aligned} \max_{\bm{u},\bm{d}}~&W(\bm{u})\\ \text{s.t. }&u_{i}=\hat{p}_{i}d_{i},\;d_{i}\in[0,r_{i}]~\forall i;~\sum_{i}d_{i}\leq B.\end{aligned}$ (3)
As discussed in Section 4, an applicant’s expected utility $u_{i}$ from the
loan decision is dependent on the prediction $\hat{p}_{i}$. The utility
definition characterizes that $i$’s utility from receiving a loan of amount
$d_{i}$ is exactly $d_{i}$ if $i$ could repay the loan, but is $0$ if $i$
would default.
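With the utilitarian SWF $W(\bm{u})=\sum_{i}u_{i}$, model (3) reduces to a linear program. Below is a minimal sketch using scipy.optimize.linprog (the paper solves its models with Gurobi; the function name and toy data are ours, and nonlinear SWFs such as alpha fairness would require a general nonlinear solver instead):

```python
import numpy as np
from scipy.optimize import linprog

def utilitarian_loan_allocation(p_hat, r, B):
    """Maximize sum_i p_hat_i * d_i  s.t.  0 <= d_i <= r_i,  sum_i d_i <= B."""
    p_hat, r = np.asarray(p_hat, dtype=float), np.asarray(r, dtype=float)
    n = len(p_hat)
    res = linprog(
        c=-p_hat,                        # linprog minimizes, so negate
        A_ub=np.ones((1, n)), b_ub=[B],  # budget constraint sum_i d_i <= B
        bounds=[(0.0, ri) for ri in r],  # 0 <= d_i <= r_i
    )
    return res.x  # optimal loan amounts d
```

On toy data, applicants with the highest predicted repay probability are funded first, which illustrates the efficiency-driven behavior of the utilitarian objective.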
#### In-processing integration
In this approach, social welfare optimization is integrated directly into the
prediction/decision model. The additional hyperparameter $\lambda_{2}\geq 0$
regulates the importance level of $W(\bm{u})$ in the training phase.
$\displaystyle\begin{aligned} \min_{\bm{\theta},\bm{u}}~&C\sum_{i}\log(1+e^{-y_{i}\langle\bm{\theta},\bm{x}_{i}\rangle})+\lambda_{1}\left\lVert\bm{\theta}\right\rVert_{1}-\lambda_{2}W(\bm{u})\\ \text{s.t. }&u_{i}=g_{y_{i}}\hat{y}_{i}+b_{y_{i}},~\forall i\\ &\hat{y}_{i}=1\text{ if }\langle\bm{\theta},\bm{x}_{i}\rangle\geq 0,~\hat{y}_{i}=-1\text{ otherwise},~\forall i.\end{aligned}$ (4)
In this training problem, each involved applicant receives a predicted label
$\hat{y}$ as the decision. We assume a linear utility function
$u_{i}(\hat{y}_{i})=g_{y_{i}}\hat{y}_{i}+b_{y_{i}}$, where the coefficients
$g_{y_{i}},b_{y_{i}}$ are assigned based on the true label $y_{i}$. Specifically, we assign
$g_{1}=0.25,g_{-1}=-0.25$ and $b_{1}=0.5,b_{-1}=0.25$. An applicant has four
possible outcomes from the prediction task: true positive (TP) when a
qualified applicant is correctly classified, and similarly false positive
(FP), true negative (TN), false negative (FN). With the assigned parameters,
the four outcomes have the following utilities:
$u^{TP}=0.75,u^{TN}=0.5,u^{FP}=0,u^{FN}=0.25$. As shown, we have defined
utilities to let correct predictions provide greater benefits than incorrect
predictions, which fits the bank’s efficiency-driven goal to identify
qualified applicants reliably. Additionally, we set a FP outcome to be more
costly for a disqualified applicant than a FN outcome for a qualified
applicant.
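These outcome utilities can be checked directly from the assigned parameters, reading the predicted labels as $\hat{y}\in\{1,-1\}$, consistent with the threshold classifier (a minimal sanity check, not code from the paper):

```python
# Utility parameters keyed by the true label y in {1, -1}
g = {1: 0.25, -1: -0.25}
b = {1: 0.5, -1: 0.25}

def utility(y, y_hat):
    """u_i = g_{y_i} * y_hat_i + b_{y_i}, with predicted labels in {1, -1}."""
    return g[y] * y_hat + b[y]

outcomes = {
    "TP": utility(1, 1),    # qualified applicant approved
    "TN": utility(-1, -1),  # unqualified applicant rejected
    "FP": utility(-1, 1),   # unqualified applicant approved
    "FN": utility(1, -1),   # qualified applicant rejected
}
```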
For both integrated models, we consider the following social welfare
definitions: utilitarian SWF $W(\bm{u})=\sum_{i}u_{i}$, alpha fairness
$W(\bm{u})=2\sum_{i}u_{i}^{1/2}$ at $\alpha=0.5$, proportional fairness
$W(\bm{u})=\sum_{i}\log{u_{i}}$ and the maximin criterion
$W(\bm{u})=\min_{i}u_{i}$. We have discussed in Section 6 that these SWFs
reflect different positions on the spectrum from pure efficiency to pure
fairness. Our experimental results will illustrate that the choice of SWF
affects the welfare of the resulting decisions.
### 7.2 Dataset and implementation details
We work with a small-scale instance based on the German credit dataset
(Hofmann (1994)). This dataset is a common choice for investigating fair
machine learning models. It contains $1000$ entries of people labeled as
having good ($y_{i}=1$) or bad ($y_{i}=-1$) credit risks. Each entry has $20$
attributes describing relevant information including age, gender, employment
status, etc. For the purpose of our numerical study, we view these people as
loan applicants. Those with good credit risks are qualified applicants who
will repay a loan, and the others are unqualified and will default.
On this dataset, we use an 80/20 training-testing split, namely, 800 entries
are used to train a logistic regression model, and the remaining 200 entries
form the testing dataset and represent the new loan applicants to process. For
each person $i$ in the testing dataset, we randomly generate the requested
loan amount $r_{i}\in[0,100]$. The bank's available fund is $B=5000$. We
repeat all experiment instances on $5$ randomly generated training-testing
splits and refer to the average statistics for analysis. In the training
models, we fix $C=1,\lambda_{1}=10^{-4}$ and test a range of values for
$\lambda_{2}\in\\{0.1,1,10,50,100\\}$ for the in-processing integration. While
more advanced hyperparameter tuning on all three parameters could be useful
for improving the predictive accuracy, we adopt the simpler parameter setups
to focus on understanding the impacts of the social welfare optimization.
We solve the standard logistic regression model using the scipy.optimize.minimize
function with the 'SLSQP' method. We solve all the other optimization models
using Gurobi 9.1.2. All code is written in Python 3.8 on a desktop PC
running Windows 10 Pro 64-bit with Intel(R) Core(TM) i7-7600U CPU @ 2.80GHz
processors and 24 GB of RAM.
### 7.3 Results and findings
From examining the dataset, we notice that the labels are unbalanced across
age ranges. For example, among the $548$ individuals younger than $35$ years
old, 65% have a positive true label (i.e. they are viewed as qualified
applicants), whereas 76% of the 452 older applicants are labeled positive. In
the AI Fairness 360 Toolkit, Bellamy et al. (2018) identify the older
individuals as more privileged in ML models trained using the German credit
dataset. We first verify this observation in the standard logistic regression
models.
group | positive rate (%) | true positive rate (%) | true negative rate (%)
---|---|---|---
young | 70.33 $\pm$ 2.76 | 82.03 $\pm$ 3.74 | 51.05 $\pm$ 5.23
old | 82.14 $\pm$ 3.21 | 91.01 $\pm$ 4.61 | 45.27 $\pm$ 3.39
Table 1: Classification outcomes for young and old groups on test dataset.
Reported numbers are means and standard deviations from $5$ random train-test
splits.
As shown in Table 1, the standard prediction model without fairness
considerations violates demographic parity and equality of opportunity against
the young group. We next demonstrate that applying social welfare optimization
with both integration methods could improve group parity through seeking
welfare-based fairness.
In the post-processing method, we focus on comparing the loan distributions to
young and old applicants. We consider two performance metrics: the ratio
between the total received loans and the total requested loans by applicants
in either group, and the ratio computed based on only the qualified
applicants. Since the old group is more privileged in the prediction task, the
applicants in this group are better positioned for loan distribution if the
objective is primarily efficiency driven. For example, Fig. 1 shows that
optimizing a utilitarian SWF leads to higher approved ratios for the old
group. By contrast, optimizing an SWF with greater emphasis on fairness,
e.g. proportional fairness, yields fairer loan distributions across groups.
We also note that improving fairness costs efficiency,
as the overall loan approval ratios are lower than the utilitarian solutions.
(a) Approval ratios for all applicants
(b) Approval ratios for qualified applicants
Figure 1: Loan distributions for young and old groups from post-integration
with four SWF objectives. Reported numbers are means from 5 random test
datasets.
In the in-processing method, we compare the positive classification rates and
true positive rates from classification. Figs. 2(a) and 2(b) illustrate that,
when social welfare optimization has a small weight in training, optimizing
proportional fairness and the maximin criterion leads to better fairness
outcomes for both groups, whereas the other two options only slightly reduce
the disparities. As the social welfare weight increases, all the SWF choices
provide effective improvement in group parities. Additionally, we note that
including social welfare optimization in the training problem reduces the test
accuracy by a small amount (see Fig. 3).
(a) Positive rates (%), $\lambda_{2}=0.1$
(b) True positive rates (%), $\lambda_{2}=0.1$
(c) Positive rates (%), $\lambda_{2}=1$
(d) True positive rates (%), $\lambda_{2}=1$
Figure 2: Classification outcomes for young and old groups on test dataset
using standard logistic regression and in-processing integrated models with
four SWFs. Reported numbers are means from 5 random train-test splits. The
performances at $\lambda_{2}\in\\{10,50,100\\}$ are highly similar to those at
$\lambda_{2}=1$ shown in (c), (d).
Figure 3: Test accuracy over different strengths of social welfare
integration. Plots are generated based on the means from $5$ random
train-test splits. The choices of $\lambda_{2}$ have small impacts on test
accuracy. Among the social welfare definitions, integrated models with alpha
fairness and proportional fairness have slightly higher test accuracy.
## 8 Discussion and Conclusion
We formalize a general framework for using optimization to incorporate
welfare-based fairness into AI applications. The framework provides a
guideline for formulating a decision task into a social welfare optimization
problem. In particular, we illustrate how optimization can be integrated with
rule-based AI systems and ML models. By expanding the fairness problem to the
optimization of social welfare functions, one can achieve a broader
perspective on fairness that are driven by the well-beings of stakeholders and
characterize the broader fairness concepts in a principled way. On a small
scale loan processing application, we demonstrate that decision models
integrating social welfare optimization with ML could improve fairness among
groups without requiring pre-specified group labels as protected vs.
unprotected.
We conclude the paper by outlining a brief research program to explore some
key questions related to the framework.
* •
The presented general framework opens up numerous questions. For integration
with rule-based AI, one important direction is to investigate how to build
ethics-sensitive rule bases to fit into different social welfare optimization
scenarios. Previous works on formulating ethics principles into rules, e.g.
Bringsjord et al. (2006); Kim et al. (2021), may provide guidance for this
direction. For integration with ML, future research could explore the
empirical direction further on large scale and real-world applications, which
could also provide engagement opportunities with practitioners to specify
context-specific utility and social welfare definitions. Theoretical
connections between social welfare optimization and group parity improvement
could be useful to explore as well.
* •
Although optimization solvers have been developed over decades, not all
classes of optimization models are readily solvable by state-of-the-art
software. For practical use of social welfare optimization models, one may
need to apply available computational strategies or design problem-specific
heuristics to speed up solving the optimization problems.
* •
The social welfare functions we consider are of a static nature, that is, a
SWF does not attempt to capture potential dynamics in the utilities. A dynamic
perspective may be required in sequential decision problems where decisions
need to be made repeatedly and the selected actions have incremental impacts
on the long term social welfare. Future research could explore how to extend
the presented optimization based framework to fit a dynamic view of welfare
and fairness. Although this is not a trivial task, there are many well-developed
techniques to utilize, such as stochastic optimization and Markov decision processes.
## References
* Abraham et al. [2019] S. S. Abraham, S. S. Sundaram, et al. Fairness in clustering with multiple sensitive attributes. _arXiv preprint arXiv:1910.05113_ , 2019.
* Alabdulmohsin [2020] I. Alabdulmohsin. Fair classification via unconstrained optimization. _arXiv preprint arXiv:2005.14621_ , 2020.
* Allen et al. [2005] C. Allen, I. Smit, and W. Wallach. Artificial morality: Top-down, bottom-up, and hybrid approaches. _Ethics and Information Technology_ , 7:149–155, 2005.
* Anonymous [2022] Anonymous. A guide to formulating equity and fairness in an optimization model (submitted). Technical report, Carnegie Mellon University, 2022.
* Bellamy et al. [2018] R. K. Bellamy, K. Dey, M. Hind, S. C. Hoffman, S. Houde, K. Kannan, P. Lohia, J. Martino, S. Mehta, A. Mojsilovic, et al. AI fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. _arXiv preprint arXiv:1810.01943_ , 2018.
* Binmore et al. [1986] K. Binmore, A. Rubinstein, and A. Wolinsky. The Nash bargaining solution in economic modeling. _RAND Journal of Economics_ , 17:176–188, 1986.
* Brandom [2018] R. Brandom. Self-driving cars are headed toward an AI roadblock. _The Verge_ , 2018.
* Bringsjord et al. [2006] S. Bringsjord, K. Arkoudas, and P. Bello. Toward a general logicist methodology for engineering ethically correct robots. _IEEE Intelligent Systems_ , 21:38–44, 2006.
* Calmon et al. [2017] F. Calmon, D. Wei, B. Vinzamuri, K. N. Ramamurthy, and K. R. Varshney. Optimized pre-processing for discrimination prevention. In _Advances in Neural Information Processing Systems_ , pages 3992–4001, 2017.
* Chen and Hooker [2020] V. Chen and J. N. Hooker. A just approach balancing Rawlsian leximax fairness and utilitarianism. In _AAAI/ACM Conference on AI, Ethics, and Society (AIES)_ , pages 221–227, 2020.
* Chen and Hooker [2022] V. Chen and J. N. Hooker. Combining leximax fairness and efficiency in a mathematical programming model. _European Journal of Operational Research_ , 299:235–248, 2022.
* Chiarandini et al. [2019] M. Chiarandini, R. Fagerberg, and S. Gualandi. Handling preferences in student-project allocation. _Annals of Operations Research_ , 275(1):39–78, 2019.
* Chohlas-Wood et al. [2021] A. Chohlas-Wood, M. Coots, E. Brunskill, and S. Goel. Learning to be fair: A consequentialist approach to equitable decision-making. _arXiv preprint arXiv:2109.08792_ , 2021.
* Chouldechova and Roth [2020] A. Chouldechova and A. Roth. A snapshot of the frontiers of fairness in machine learning. _Communications of the ACM_ , 63(5):82–89, 2020.
* Corbett-Davies and Goel [2018] S. Corbett-Davies and S. Goel. The measure and mismeasure of fairness: A critical review of fair machine learning. _arXiv preprint arXiv:1808.00023_ , 2018.
* Deepak and Abraham [2020] P. Deepak and S. S. Abraham. Representativity fairness in clustering. In _WebSci_ , pages 202–211, 2020.
* Donini et al. [2018] M. Donini, L. Oneto, S. Ben-David, J. S. Shawe-Taylor, and M. Pontil. Empirical risk minimization under fairness constraints. In _Advances in Neural Information Processing Systems_ , pages 2791–2801, 2018.
* Dwork et al. [2012] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. S. Zemel. Fairness through awareness. In _Symposium on Innovations in Theoretical Computer Science (ITCS)_ , pages 214–226, 2012.
* Eisenhandler and Tzur [2019] O. Eisenhandler and M. Tzur. The humanitarian pickup and distribution problem. _Operations Research_ , 67:10–32, 2019.
* Freeman [2003] S. Freeman, editor. _The Cambridge Companion to Rawls_. Cambridge University Press, 2003.
* Gabriel [2020] I. Gabriel. Artificial intelligence, values, and alignment. _Minds and Machines_ , 30:411–437, 2020.
* Gautier [1983] D. Gautier. _Morals by Agreement_. Oxford University Press, 1983.
* Hardt et al. [2016] M. Hardt, E. Price, and N. Srebro. Equality of opportunity in supervised learning. In _Advances in neural information processing systems_ , pages 3315–3323, 2016.
* Harsanyi [1977] J. C. Harsanyi. _Rational Behavior and Bargaining Equilibrium in Games and Social Situations_. Cambridge University Press, 1977.
* Heidari et al. [2018] H. Heidari, C. Ferrari, K. Gummadi, and A. Krause. Fairness behind a veil of ignorance: A welfare analysis for automated decision making. In _Advances in Neural Information Processing Systems_ , pages 1265–1276, 2018.
* Heidari et al. [2019] H. Heidari, M. Loi, K. P. Gummadi, and A. Krause. A moral framework for understanding fair ML through economic models of equality of opportunity. In _Proceedings of the Conference on Fairness, Accountability, and Transparency_ , pages 181–190, 2019.
* Hofmann [1994] H. Hofmann. German credit dataset. https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data), accessed 2021-05-20. University of California, Irvine, School of Information and Computer Sciences, 1994.
# Efficient CNN Building Blocks for Encrypted Data
Nayna Jain1,4, Karthik Nandakumar2, Nalini Ratha3, Sharath Pankanti5, Uttam
Kumar 1
###### Abstract
Machine learning on encrypted data can address the concerns related to privacy
and legality of sharing sensitive data with untrustworthy service providers,
while leveraging their resources to facilitate extraction of valuable insights
from otherwise non-shareable data. Fully Homomorphic Encryption (FHE) is a
promising technique to enable machine learning and inferencing while providing
strict guarantees against information leakage. Since deep convolutional neural
networks (CNNs) have become the machine learning tool of choice in several
applications, several attempts have been made to harness CNNs to extract
insights from encrypted data. However, existing works focus only on ensuring
data security and ignore the security of model parameters. They also report high-level implementations without providing rigorous analysis of the accuracy,
security, and speed trade-offs involved in the FHE implementation of generic
primitive operators of a CNN such as convolution, non-linear activation, and
pooling. In this work, we consider a Machine Learning as a Service (MLaaS)
scenario where both input data and model parameters are secured using FHE.
Using the CKKS scheme available in the open-source HElib library, we show that
operational parameters of the chosen FHE scheme such as the degree of the
cyclotomic polynomial, depth limitations of the underlying leveled HE scheme,
and the computational precision parameters have a major impact on the design
of the machine learning model (especially, the choice of the activation
function and pooling method). Our empirical study shows that the choice of these design parameters results in significant trade-offs between
accuracy, security level, and computational time. Encrypted inference
experiments on the MNIST dataset indicate that other design choices such as
ciphertext packing strategy and parallelization using multithreading are also
critical in determining the throughput and latency of the inference process.
## Introduction
Deep neural networks have proven to be a promising technology due to their
ability to achieve competitive machine learning performance in diverse domains
including computer vision. Ever since AlexNet achieved a top-5 error rate of
16.4% using Convolutional Neural Networks (CNN) for the image classification
task on the ImageNet dataset of 1.2 million high-resolution images as part of
ILSVRC 2012 (Krizhevsky, Sutskever, and Hinton 2012) competition, various CNN
architectures (Simonyan and Zisserman 2014)(Szegedy et al. 2015)(Zeiler and
Fergus 2013) have been developed to improve accuracy and other performance
metrics. Though the continuous improvement in the performance of CNNs has
hastened their adoption in broader computer vision applications such as mobile
and embedded devices (Howard et al. 2017)(Phan et al. 2020), the large
computing/memory resource requirements of CNNs present practical challenges in
applying them in many applications. This is often overcome by outsourcing the
network training and/or inference computations to the Cloud, which is commonly
referred to as Machine Learning as a Service (MLaaS).
Figure 1: In a conventional Machine Learning as a Service (MLaaS) scenario,
both the data and model parameters are unencrypted. Existing works on secure
inference assume only the data is encrypted. Our proposed approach encrypts
both the data and model parameters.
In MLaaS, there are three parties: (i) the end-user, who requires predictions
on sensitive data, (ii) the cloud service provider, who has access to large
computational resources, and (iii) the model owner, who has developed the
inference model, trained using machine learning algorithms that may be
proprietary. In conventional MLaaS scenarios, the cloud service providers
typically have access to the inference models as well as the content of the
query data. In many applications, such access to data and the model is
undesirable because the query data/models could reveal sensitive information.
Moreover, sharing of such data may be prohibited under emerging privacy
regulations such as GDPR. This has led to the development of a variety of
privacy preserving machine learning algorithms. In this paper, we focus on a
specific class of privacy preserving machine learning algorithms that rely on
Homomorphic Encryption (HE).
Most of the approaches presented in the literature for encrypted inference
encrypt only the input data and do not encrypt the model parameters. This is
based on the assumption that the same entity owns the model and runs the cloud
service. However, access to the model parameters could lead to competitive
disadvantage for the model owner, when the model owner and cloud service
provider are separate entities (e.g., a start-up providing the MLaaS service
on an Amazon or Google cloud). Hence, it becomes imperative to protect both the
input data and model parameters in such scenarios (see Figure 1). (Bost et al. 2015) discussed encrypted models in the context of hyperplane decision, Naive Bayes, and decision tree algorithms. In this work, we focus on the
scenario where the model owner and cloud service provider are separate
entities and the cloud provider is not trusted. Therefore, we encrypt the
model parameters (note that the model architecture is still available to the cloud service provider in the clear) in addition to the input data, using the end-user’s public key (as shown in Figure 1). The advantage of this approach
is that the cloud service provider cannot derive any benefit from the model
directly. The disadvantage of model parameter re-encryption using the specific
client’s public key is the increased computation and communication burden of
the model owner.
### Contributions
The objective of this work is to develop the component building blocks for
enabling generic CNN inference on encrypted data. We use the implementation of
the CKKS scheme available in the HElib library (Halevi and Shoup 2020) for all
our analysis. However, we note that the lack of a bootstrapping implementation
for CKKS makes it impossible to perform inferencing on any arbitrary CNN.
Hence, we design depth-constrained CNNs that operate within the multiplicative
depth limitations of the underlying leveled HE scheme. Our contributions are
four-fold:
1. An MLaaS scenario for convolutional neural networks is proposed where both the input data and model parameters are encrypted, thereby enabling a model owner to provide the MLaaS service on an untrusted cloud.
2. CNN inference on encrypted data using the CKKS scheme is presented, which eliminates the need for careful model parameter quantization.
3. Implementation and performance analysis of illustrative linear operators such as convolution and matrix multiplication, as well as non-linear operations such as ReLU and max pooling, in the encrypted domain is provided.
4. Various multithreading strategies are explored, and our analysis reveals that the optimal strategy involves a complex interplay between system resources and the computations involved.
## Background and Related Work
### Homomorphic Encryption
Homomorphic encryption schemes enable a class of computations on the
ciphertexts (i.e., encrypted data) without decrypting them (Acar et al. 2018).
Let $\llbracket x\rrbracket$ denote the encryption of value $x$ using a public
key $pk$. A cryptosystem is said to be _fully homomorphic_ (FHE) (Gentry 2009)
if it enables computation of any arbitrary function
$f(x_{1},x_{2},\cdots,x_{d})$ in the encrypted domain. In other words, there
exists a function $g$ such that
$f(x_{1},x_{2},\cdots,x_{d})=\mathcal{D}\left(g(\llbracket
x_{1}\rrbracket,\llbracket x_{2}\rrbracket,\cdots\llbracket
x_{d}\rrbracket),sk\right)$, where $\mathcal{D}$ represents the corresponding
decryption mechanism of the cryptosystem using private/secret key $sk$.
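As a toy illustration of the homomorphic property just defined (not the CKKS scheme used in this paper), textbook RSA is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. The sketch below uses tiny, insecure parameters purely for demonstration; CKKS additionally supports homomorphic addition and approximate floating-point arithmetic.

```python
# Toy, insecure textbook RSA with tiny parameters, demonstrating the
# homomorphic property: Enc(a) * Enc(b) decrypts to a * b (mod n).

n = 3233                      # n = 61 * 53
e = 17                        # public exponent
d = 2753                      # private exponent: e * d = 1 mod phi(n) = 3120

def encrypt(x):
    return pow(x, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 9
c = (encrypt(a) * encrypt(b)) % n   # operate on ciphertexts only
print(decrypt(c))                   # -> 63, i.e. a * b recovered
```

Here $g$ is simply modular multiplication of the ciphertexts; a *fully* homomorphic scheme must support arbitrary circuits, which textbook RSA does not.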
Recently, a number of FHE schemes have been proposed based on the hardness of
the Ring Learning With Errors (RLWE) problem. The most well-known examples of
such schemes are BFV (Brakerski 2012)(Fan and Vercauteren 2012), BGV
(Brakerski, Gentry, and Vaikuntanathan 2014), TFHE (Chillotti et al. 2020),
and CKKS (Cheon et al. 2017) cryptosystems, all of which support both additive
and multiplicative homomorphism. Among the well-known FHE schemes, only CKKS
natively supports floating-point operations, albeit with approximate (limited
precision) computations. Most of the previous attempts at encrypted inference
coarsely quantize the weights of a neural network in order to satisfy the
requirements of the underlying HE schemes (Gilad-Bachrach et al. 2016; Lou and
Jiang 2019). In this paper, we explore the use of the CKKS scheme, which
eliminates the need for transformations of operands as integers.
FHE is typically obtained by combining a leveled HE scheme with a
bootstrapping operation. Since each HE computation increases the “noise” level
of a ciphertext, a leveled HE scheme has a (parameter-dependent) limit on the
computational (multiplicative) depth of the arithmetic circuit that can be
computed, while still allowing for meaningful decryption. This limitation is
overcome by using Gentry’s bootstrapping technique (Gentry 2009), which
“refreshes” the ciphertext and reduces its noise level (at the cost of relying
on circular security). While bootstrapping can enable computation of
arithmetic circuits with arbitrary multiplicative depth, it comes at a
prohibitively high computational cost. For some FHE constructs such as CKKS,
there is no existing implementation of true “Gentry-style” bootstrapping. This
leads to an intractable conundrum. While modern machine (deep) learning
algorithms rely on their depth to achieve good generalization performance
(Poggio et al. 2017), deeper networks are inherently more difficult to compute
in the encrypted domain without bootstrapping. Therefore, optimizing the
computational depth to achieve an acceptable trade-off between accuracy and
practical computational feasibility is one of the fundamental challenges in
encrypted deep learning.
### Secure Machine Learning Using FHE
Every supervised machine learning algorithm involves two steps: (i) learning
the model from training data; (ii) inference on new unknown samples. We
presume that an efficient model has already been trained from data available
in the clear. Our focus is on how to carry out encrypted inference with minimal
leakage of information for all the parties in MLaaS while providing guaranteed
privacy.
Privacy preserving machine learning can have different challenges and threats
based on what is required to be private. In (Al-Rubaie and Chang 2019),
various types of attacks (_e.g._, reconstruction attacks, model inversion
attacks, membership inference attacks) that could potentially reveal
private/proprietary information from data/model have been discussed. The work
also presents various cryptographic and non-cryptographic approaches to thwart
these attacks. Key techniques that have been explored to achieve privacy
preserving machine learning include differential privacy, secure multiparty
computation (SMPC), and homomorphic encryption. Homomorphic encryption has
been considered more suitable for cloud-based applications, but its practical feasibility is still in question. SMPC-based methods have generally been deemed more practical, but they require interaction between the client and server, resulting in large communication costs. Differential privacy techniques lack formal security guarantees under more conservative attack models.
Most of the recent work in privacy-preserving machine learning (Gilad-Bachrach
et al. 2016)(Juvekar, Vaikuntanathan, and Chandrakasan 2018) (Chou et al.
2018)(Dathathri et al. 2019) has primarily focused on secure inference where
the model has already been trained in the clear. Once the model has been
trained, these methods use either SMPC or homomorphic encryption techniques to
enable inference. While CryptoNets (Gilad-Bachrach et al. 2016) rely only on
homomorphic encryption to enable inferencing over private input data, Gazelle
(Juvekar, Vaikuntanathan, and Chandrakasan 2018) uses a combination of
homomorphic encryption and garbled circuits to achieve two orders of magnitude
speed improvement. Faster CryptoNets (Chou et al. 2018) have demonstrated
improved performance over that of CryptoNets by leveraging sparsity properties
and reduced the amortized inference time from minutes to seconds. CHET
(Dathathri et al. 2019) provides an optimizing compiler for Fully Homomorphic
Neural Network Inference. It provides a high-level user framework that automates the tuning of parameters for security and performance without degrading accuracy, and offers high- and low-level intermediate representations that can be linked with different homomorphic libraries. Their results showed that automated homomorphic circuits from the compiler outperform circuits that were manually tuned. An encrypted inference service built on top of TensorFlow using secure multiparty computation is discussed in (Dahl et al. 2018). Encrypted models were considered in (Bost et al. 2015) for hyperplane decision, Naive Bayes, and decision tree algorithms, but not for convolutional neural networks.
### Convolutional Neural Network (CNN)
A CNN is a multi-layered neural network that is usually applied to image/video
data. Let $\mbox{\boldmath$X$}(\ell)$ and $\mbox{\boldmath$Z$}(\ell)$ denote
the input and output of layer $\ell$ ($\ell=1,2,\cdots,D$) in the network.
Each layer in the network performs specific arithmetic operations on the
outputs of the previous layer and passes the result to the subsequent layer,
i.e., $\mbox{\boldmath$X$}(\ell)=\mbox{\boldmath$Z$}(\ell-1)$ and
$\mbox{\boldmath$Z$}(0)$ is the raw input data. This structure allows higher-
level abstract features to be computed as non-linear functions of lower-level
features (starting with the raw data). The first few layers of a CNN generally
consist of three types of operations:
1.
Convolution: This is a linear operation, where the input to the layer ($X$) is convolved with a kernel ($W$) to generate a filtered output $Y$. (For convenience, we drop the layer index $\ell$ in the subsequent discussion.) Let $\mbox{\boldmath$A$}_{i,j}$ denote the element in the $i$-th
row and $j$-th column of a matrix $A$. Suppose that the size of input $X$ is
$(M\times N)$ and the size of kernel $W$ is $(P\times Q)$. Typically, one can
pad the input with zeros to compute the filter responses at the edges. Let
$\widetilde{X}$ be the zero-padded version of the input $X$.
$\mbox{\boldmath$Y$}_{m,n}=\langle\mbox{\boldmath$\widehat{X}$}_{m,n},\mbox{\boldmath$\widehat{W}$}\rangle,$
(1)
where $\langle\mbox{\boldmath$a$},\mbox{\boldmath$b$}\rangle$ is the inner
product of two vectors $a$ and $b$, $\widehat{W}$ is the vectorized
(flattened) version of the kernel $W$, and
$\mbox{\boldmath$\widehat{X}$}_{m,n}$ is the vectorized version of the input
window (a window of size $P\times Q$ with
$\mbox{\boldmath$\widetilde{X}$}_{m,n}$ as the top-left element) selected from
the padded image. For the sake of simplicity, the stride length is assumed to
be $1$ in equation (1).
2.
Activation: In this step, a point-wise non-linear function is applied to the
filter responses. The commonly used activation functions are rectified linear
unit (ReLU), sigmoid, and hyperbolic tangent (tanh). In this paper, we
consider only the ReLU activation function, which is defined as follows:
$ReLU(a)=max(0,a)=\begin{cases}0,&\mbox{if }a\leq 0\\\ a,&\mbox{if
}a>0.\end{cases}$ (2)
3.
Pooling: This is typically used for dimensionality reduction, where multiple
responses within a neighborhood are pooled together. While max pooling is
often used for selecting the most dominant response in each neighborhood and
is a non-linear operation, mean pooling computes the average response in each
neighborhood and is a linear operation. The final output ($Z$) of the
activation and pooling layers can be modeled as a function $f$ of the filter
responses, i.e., $\mbox{\boldmath$Z$}=f(\mbox{\boldmath$Y$})$, where $f$ is
non-linear.
After the initial layers, the data is flattened into a vector and a few fully
connected (FC) layers are added to the network. The output of each node in the
FC layer is often computed by applying a non-linear activation function to the
weighted average of its inputs, which includes a bias term that always emits
value 1. This can be mathematically represented as
$\mbox{\boldmath$Z$}=f(\mbox{\boldmath$W$}\mbox{\boldmath$X$})$, where $W$ is
the weight matrix of the FC layer and $f$ is the activation function. The
final layer in a CNN is usually a softmax layer that provides an approximate
probability distribution over the class labels.
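The three building blocks above can be sketched in plaintext NumPy. This is illustrative only (not the encrypted implementation); the convolution follows equation (1), computing each output as the inner product of a vectorized input window with the flattened kernel.

```python
import numpy as np

def conv2d(X, W):
    """Valid convolution with stride 1, as in equation (1): each output is
    the inner product of a vectorized input window with the flattened kernel."""
    M, N = X.shape
    P, Q = W.shape
    w_hat = W.ravel()
    Y = np.empty((M - P + 1, N - Q + 1))
    for m in range(M - P + 1):
        for n in range(N - Q + 1):
            Y[m, n] = X[m:m + P, n:n + Q].ravel() @ w_hat
    return Y

def relu(Y):
    return np.maximum(0.0, Y)            # equation (2)

def mean_pool(Y):
    """Non-overlapping 2x2 mean pooling (a linear operation)."""
    M, N = Y.shape
    return Y[:M // 2 * 2, :N // 2 * 2].reshape(M // 2, 2, N // 2, 2).mean(axis=(1, 3))

X = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"
W = np.ones((3, 3))
Z = mean_pool(relu(conv2d(X, W)))
print(Z)  # -> [[67.5]]
```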
## Proposed Approach
In this work, we employ a typical CNN architecture as an illustrative example.
Though FHE schemes allow arbitrary computations on the encrypted data, two key
challenges need to be circumvented before applying a CNN on encrypted inputs.
The first challenge is the appropriate packing of input data to efficiently
make use of the Single Instruction Multiple Data (SIMD) operations. The next
challenge stems from the inherent inability of most FHE schemes to directly
compute non-linear functions (with the exception of polynomials). Often, the
non-linear function is approximated by an iterative algorithm or a polynomial
function. Since the approximation error is generally inversely proportional to
the computational depth (using more iterations or higher-degree polynomials
leads to less approximation error), the trade-off between computational depth
and accuracy must be carefully managed.
Figure 2: Ciphertext packing for batched inference. (a) Ciphertext
$\llbracket\mbox{\boldmath$X$}\rrbracket_{m,n}$ contains the encrypted values
of pixel $\llbracket x_{m,n}^{(\cdot)}\rrbracket$ from all the $K$ images in a
batch, where $K$ is the number of slots in the ciphertext. (b) Ciphertext
$\llbracket\mbox{\boldmath$W$}\rrbracket_{p,q}$ contains the encrypted value
of kernel weight $\llbracket w_{p,q}\rrbracket$ in all the $K$ slots.
#### Convolution Operator:
We employ the CKKS scheme for encrypted operations. The native plaintext in
the CKKS scheme is a polynomial in the cyclotomic ring, which enables packing
multiple plaintext values into different “slots” in the ciphertext. This
ciphertext packing enables parallelization of addition and multiplication
operations through SIMD operations. However, it must be emphasized that it is
not possible to randomly access values in the individual slots of the
ciphertext after packing. Since only a limited set of operations (e.g.,
rotation of values in the slots) are possible within a ciphertext, the
benefits of SIMD operations can be fully realized only when there is minimal
interaction between the slots. Therefore, we follow a more conservative and
straightforward approach to ciphertext packing that amortizes the
computational time by processing a batch of images in parallel.
Let $K$ be the number of slots available in the ciphertext. Given a batch of
$K$ images
$\mbox{\boldmath$X$}^{(1)},\mbox{\boldmath$X$}^{(2)},\cdots,\mbox{\boldmath$X$}^{(K)}$,
where each image is of size $M\times N$, we represent the encrypted image as a
ciphertext matrix $\llbracket\mbox{\boldmath$X$}\rrbracket$ of size $M\times
N$ as shown in Figure 2(a). The ciphertext
$\llbracket\mbox{\boldmath$X$}\rrbracket_{m,n}$ contains the encrypted values
of pixel $(m,n)$ from all the $K$ images, i.e.
$\llbracket\mbox{\boldmath$X$}\rrbracket_{m,n}=\left[\llbracket
x_{m,n}^{(1)}\rrbracket,\llbracket x_{m,n}^{(2)}\rrbracket,\cdots,\llbracket
x_{m,n}^{(K)}\rrbracket\right]$. Similarly, the encrypted convolution kernel
is also represented as a ciphertext matrix
$\llbracket\mbox{\boldmath$W$}\rrbracket$ of size $P\times Q$, where $P\times
Q$ is the kernel size. (Encryption of the convolution kernel addresses the more general scenario where the model provider may be different from the cloud service provider.) However, as shown in Figure 2(b), the encrypted weight
value $\llbracket w_{p,q}\rrbracket$ is repeated across all the slots of the
ciphertext $\llbracket\mbox{\boldmath$W$}\rrbracket_{p,q}$.
While the above simple packing approach substantially increases the memory requirement, it greatly simplifies the implementation of the convolution operator. One can directly treat the encrypted image matrix
$\llbracket\mbox{\boldmath$X$}\rrbracket$ and the encrypted kernel
$\llbracket\mbox{\boldmath$W$}\rrbracket$ as equivalent of the plaintext
matrices $X$ and $W$, respectively, and compute the convolution results as the
inner product between the vectorized versions of the selected image window and
the kernel. Thus, equation (1) can be modified as follows:
$\llbracket\mbox{\boldmath$Y$}\rrbracket_{m,n}=\langle\llbracket\mbox{\boldmath$\widehat{X}$}\rrbracket_{m,n},\llbracket\mbox{\boldmath$\widehat{W}$}\rrbracket\rangle.$
(3)
When there is no padding involved, the above approach requires
$(M-P+1)(N-Q+1)PQ$ ciphertext multiplications (henceforth denoted as CT-CT
mult) and ciphertext additions (henceforth denoted as CT-CT add) and consumes
a multiplicative depth of $1$.
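The packing strategy and operation count above can be simulated in plaintext NumPy, treating each "ciphertext" as a length-$K$ vector of slots; the slot-wise arithmetic stands in for the SIMD behavior of CKKS ciphertext operations. This is a sketch under that analogy, not HElib code.

```python
import numpy as np

np.random.seed(0)                        # reproducible toy data

K, M, N, P, Q = 8, 6, 6, 3, 3            # toy batch size, image size, kernel size
images = np.random.rand(K, M, N)         # a batch of K plaintext images
kernel = np.random.rand(P, Q)

# Packing per Figure 2: X_ct[m, n] is a K-slot "ciphertext" holding pixel
# (m, n) of every image; W_ct[p, q] repeats one kernel weight in all K slots.
X_ct = images.transpose(1, 2, 0)                  # shape (M, N, K)
W_ct = np.repeat(kernel[:, :, None], K, axis=2)   # shape (P, Q, K)

mults = 0
Y_ct = np.zeros((M - P + 1, N - Q + 1, K))
for m in range(M - P + 1):
    for n in range(N - Q + 1):
        acc = np.zeros(K)
        for p in range(P):
            for q in range(Q):
                acc += X_ct[m + p, n + q] * W_ct[p, q]  # one CT-CT mult + add
                mults += 1
        Y_ct[m, n] = acc

# Matches the stated count of (M-P+1)(N-Q+1)PQ ciphertext multiplications.
print(mults)  # -> 144
```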
#### Activation and Pooling Operators:
In this paper, we implement a ReLU function using a polynomial approximation.
In (Gottemukkula 2020), a number of polynomial approximations have been
proposed for the ReLU function. Since one of our objectives is to minimize the
computational depth, we choose the following polynomial of degree 2 for our
approximation.
$g(u)=0.47+0.50u+0.09u^{2},u\in[-\sqrt{2},\sqrt{2}].$ (4)
A linear transformation may be required to limit $u$ within the appropriate
range before the above polynomial approximation is applied. The polynomial
approximation of ReLU consumes only a multiplicative depth of 1 and requires 1
CT-CT mult, 2 CT-PT mults (multiplication a ciphertext with a plaintext
operand), 2 CT-CT adds, and 1 CT-PT add (addition of a plaintext operand to a
ciphertext) operations.
For pooling, we consider a mean pooling operator in a $2\times 2$
neighborhood, which is a linear operation. While mean pooling does not involve
any ciphertext multiplication (CT-CT mults), it does require $4$ CT-CT adds
and $1$ CT-PT mult. Our activation and pooling operations can be summarized as
follows:
$\llbracket\mbox{\boldmath$Z$}\rrbracket_{i,j}=f\left(\llbracket\mbox{\boldmath$Y$}\rrbracket_{2i-1,2j-1},\llbracket\mbox{\boldmath$Y$}\rrbracket_{2i,2j-1},\llbracket\mbox{\boldmath$Y$}\rrbracket_{2i-1,2j},\llbracket\mbox{\boldmath$Y$}\rrbracket_{2i,2j}\right),$
(5)
where
$f(a,b,c,d)=(g(a)+g(b)+g(c)+g(d))/4.$ (6)
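A plaintext check of the approximation in equation (4) and the fused activation-plus-pooling of equations (5)-(6) can be written in a few lines of NumPy (illustrative only, not encrypted-domain code):

```python
import numpy as np

def g(u):
    """Degree-2 ReLU approximation of equation (4), valid on [-sqrt(2), sqrt(2)]."""
    return 0.47 + 0.50 * u + 0.09 * u ** 2

def act_pool(Y):
    """Fused activation + 2x2 mean pooling, equations (5)-(6)."""
    G = g(Y)
    M, N = G.shape
    return G[:M // 2 * 2, :N // 2 * 2].reshape(M // 2, 2, N // 2, 2).mean(axis=(1, 3))

# Worst-case gap between g and the exact ReLU over the valid interval.
u = np.linspace(-np.sqrt(2), np.sqrt(2), 1001)
print(round(float(np.max(np.abs(g(u) - np.maximum(0.0, u)))), 2))  # -> 0.47

Y = np.array([[0.5, -0.5], [1.0, -1.0]])
print(act_pool(Y))   # one pooled, "activated" value for the 2x2 block
```

Note that the largest gap occurs at $u=0$, where $g(0)=0.47$ while $ReLU(0)=0$; as the text notes, a linear transformation of the inputs may be needed to keep values in a range where the approximation is acceptable.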
## Experimental Results
### Environment
For FHE, we chose IBM’s open-source library named HElib (Halevi and Shoup
2020) version 1.0.1. Since FHE-based experiments require significant compute capability and efficiency, we chose the native C++ API of the HElib library. The configuration details of the computing environment are listed
below:
* System used: Virtual Machine ppc64le POWER9
* CPUs: 112 CPUs with 14 sockets, 1 core/socket, 8 threads per core
* Memory: 511 GB RAM
* Operating system: Fedora Core 32
### Impact of FHE parameters
Three key parameters define the operational characteristics of the FHE scheme
and determine the security level ($\lambda$). The plaintext space of CKKS is
the set of polynomials in the cyclotomic ring $\mathbb{Z}[X]/(\Phi_{m}(X))$,
where $\Phi_{m}(X)$ is the $m^{th}$ cyclotomic polynomial with degree given by
Euler’s totient $\phi(m)$. The number of slots ($K$) in the ciphertext is
given by $K=\phi(m)/2$. The second key parameter is $L$, which is the bitsize
of the modulus of a freshly encrypted ciphertext. Since we are using a leveled
HE scheme, $L$ determines the depth of the circuit that can be evaluated
without bootstrapping. Increasing $L$ allows more computations
(multiplications) to be performed before hitting the noise threshold. Finally,
the parameter $r$ determines the computational precision in the encrypted
domain. While larger values of $m$ (and hence $\phi(m)$) increase the security
level $\lambda$, larger values of $L$ and $r$ generally decrease the security.
In our experiments, we choose $m=2^{16}$ (corresponding to $\phi(m)=32768$ and
$K=16384$), $r=35$, and vary $L$ depending on the computational depth
required.
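For $m$ a power of two, Euler's totient is simply $\phi(m)=m/2$, so the slot count quoted above can be verified directly:

```python
# For m a power of two, the m-th cyclotomic polynomial has degree
# phi(m) = m / 2, and a CKKS ciphertext offers K = phi(m) / 2 slots.

m = 2 ** 16
phi_m = m // 2       # Euler's totient of a power of two
K = phi_m // 2       # number of ciphertext slots
print(phi_m, K)      # -> 32768 16384
```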
We first evaluate the impact of the depth parameter $L$ on the computational
time, security level, and ciphertext size. For measuring the computational
time, we consider the average time required for the adding/multiplying two
ciphertexts (filled with constant values in all the slots) over 1000 trials.
As shown in Figure 3, increasing $L$ increases the computational time and
ciphertext size in a linear fashion. Moreover, the rate of increase is much
higher for multiplication of two ciphertexts (CT-CT mult) than for the
addition of two ciphertexts (CT-CT add). Our analysis also indicates that for
the first multiplication to be successful, a minimum value of $L=200$ bits is
required and each additional multiplication in the circuit (increase in the
multiplicative depth by 1) requires an increase in the value of $L$ by
approximately 100 bits.
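Based on the empirical rule just stated (roughly 200 bits for the first multiplication plus about 100 bits per additional level), a rule-of-thumb helper might look like the following. This is only an approximation; the actual requirement depends on the precision parameter $r$ and the scheme internals.

```python
def min_modulus_bits(depth):
    """Rough bitsize of the fresh-ciphertext modulus L needed for a circuit
    of the given multiplicative depth, per the empirical observation above:
    ~200 bits for the first multiplication, ~100 bits per extra level.
    Rule of thumb only -- not an exact HElib requirement."""
    return 200 + 100 * (depth - 1)

for depth in (1, 2, 3, 5):
    print(depth, min_modulus_bits(depth))
```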
It must be highlighted that the impact of $L$ on the security estimate
($\lambda$) is non-linear. Note that evaluation of deeper (multiplicative
depth) circuits would require large values of $L$. This in turn would require
a large increase in the value of $m$ to maintain the same level of security,
consequently slowing down the computations significantly as well as increasing
the memory requirements (due to larger ciphertext sizes). Therefore, it is
critical to constrain the depth parameter $L$ to a reasonable value and design
the machine learning model to fit the depth constraint.
Figure 3: Impact of the depth parameter $L$ on (a) computational time, (b)
security level, and (c) ciphertext size. For this experiment, the value of $m$
and $r$ are set to $2^{16}$ and $35$, respectively. The computational time is
the average time required for adding/multiplying two ciphertexts over 1000
trials.
### Dataset and Inference Results
We use the MNIST dataset for our experiments. This dataset consists of $60000$
$28\times 28$ grayscale images of the $10$ digits (0-9), along with a test set
of $10000$ images. We designed a simple CNN model that consists of a single
convolutional layer with $28$ filters (each having a kernel size of $3\times
3$) without any padding, followed by _ActPool1_ layer (polynomial
approximation of ReLU and mean pooling), a flattening layer (whose output
dimension is $13\times 13\times 28=4732$), a single fully connected layer of
size $4732\times 10$, and a final softmax layer. The CNN is trained using the
Keras library for $10$ epochs using the sparse_categorical_crossentropy loss function and the Adam optimizer with a learning rate of $0.01$. The test accuracy
of this simple CNN model on plaintext images was found to be $97.86$%, which
is only marginally lower than that of a CNN where the approximate ReLU
function is replaced with the standard ReLU function ($98.39$%).
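The layer dimensions quoted above can be verified with simple arithmetic:

```python
# Layer dimensions of the simple CNN described above: a 28x28 MNIST image,
# 28 filters of size 3x3 with no padding, then 2x2 mean pooling.

M = N = 28                       # input image size
P = Q = 3                        # kernel size
F = 28                           # number of filters

conv_out = M - P + 1             # valid convolution, stride 1
pool_out = conv_out // 2         # 2x2 mean pooling
flat = pool_out * pool_out * F   # flattening layer output dimension
print(conv_out, pool_out, flat)  # -> 26 13 4732
```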
Next, we focus on the problem of encrypted inference where both the inputs to
the CNN and weights of the network are encrypted. To implement inference based
on the above simple CNN model, the depth parameter $L$ was set to $600$ and
the values of $m=2^{16}$ and $r=35$ were retained. The conservative security
estimate based on these parameters is $128$ bits, which is obtained by using
an in-built library call in HElib. Note that the inference process does not
require the computation of the softmax function, which is monotonic in nature.
Thus, the user can decrypt the output of the fully connected layer to infer
the predicted class, which is the class with the maximum output at the FC
layer. Based on the above HE parameters, the number of slots ($K$) available
is more than the number of test images ($10000$). Thus, the complete inference
process could be executed in the encrypted domain in one batch without any
loss of test accuracy ($97.86$%).
The time required for the execution of each layer in the CNN during encrypted
inference is shown in Table 1. It must be emphasized that the reported timings
in Table 1 are for evaluating a single filter on a single thread. The results
for the complete network (all $28$ filters) are shown in the next sub-section
on multi-threading. As one would expect, the convolution layer is the primary computational bottleneck. On the other hand, the fully connected (FC) layer presents a significant memory bottleneck. Even for a single filter, the FC layer requires holding
($(M-P+1)(N-Q+1)(R+1)/4$) ciphertexts in memory, where $R$ is the number of
classes (($R=10$ for MNIST). The number of ciphertexts gets inflated by a
factor of $28$ when all the filters are considered, which makes
parallelization more difficult.
Operation | Execution Time (in seconds) for Single Filter and Single Thread
---|---
Convolution | 487.4
Approximate ReLU | 102.1
Mean pooling | 16.9
Fully Connected | 123.4
Total (including overhead) | 812.6
Table 1: Execution time for different layers of the proposed simple CNN model.
### Parallelization using multi-threading
HElib supports thread level parallelism (inherited from the underlying NTL
library), which can speed up the execution time. Theoretically, the design of
the proposed simple CNN allows us to compute the response of each convolution
filter as well as the subsequent activation, pooling, and FC layers
independently. However, in practice this was not feasible due to memory
constraints. Moreover, our computing environment has 112 CPUs, but the model
has only 28 filters. Therefore, to make the best use of the available
resources we had to implement a nested multi-threading strategy.
In the nested multi-threading strategy, we spawn $F$ (we choose $F=14$ or
$F=28$) threads for the filters, and each filter thread in turn spawns $C$
convolution threads (we choose $C=1,3,5$ or $7$). Each convolutional thread
operates on a horizontal partition of the full image. For example, when $C=7$,
the $28\times 28$ image is partitioned into $7$ horizontal sub-images with
appropriate overlap (to avoid boundary issues). Since the approximate RELU is
a pixel-wise operation, it can be applied to each horizontal partition
independently. But since the mean pool operation could span across partitions,
we have to wait for all the convolution threads to complete before meanpool is
applied. Finally, due to memory constraints, we could not proceed with the FC
layer directly. Instead, we waited for all the filter threads to complete and
then spawn $H$ new threads, one for each of the $10$ classes (columns of the
FC layer matrix). As earlier, these $H$ class threads further spawn $J$
channel threads to parallelize the matrix multiplication task along rows of
the FC layer matrix.
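The nested filter/convolution threading can be sketched in plain Python. This is a structural illustration only: the real implementation operates on HElib ciphertexts rather than plaintext arrays, and the partition counts and kernel values below are arbitrary. Each filter task partitions the image into horizontal strips with a $(P-1)$-row overlap so that a "valid" convolution on each strip reproduces the corresponding rows of the full result:

```python
from concurrent.futures import ThreadPoolExecutor

def conv2d_valid(img, kern):
    """Plain 'valid' 2-D convolution (cross-correlation) on lists of lists."""
    M, N = len(img), len(img[0])
    P, Q = len(kern), len(kern[0])
    return [[sum(img[i + a][j + b] * kern[a][b]
                 for a in range(P) for b in range(Q))
             for j in range(N - Q + 1)]
            for i in range(M - P + 1)]

def conv2d_partitioned(img, kern, n_strips):
    """Split the image into horizontal strips with (P-1)-row overlap,
    convolve each strip in its own thread, and stitch the outputs."""
    M, P = len(img), len(kern)
    rows_out = M - P + 1
    bounds = [(k * rows_out // n_strips, (k + 1) * rows_out // n_strips)
              for k in range(n_strips)]
    def work(lo, hi):
        # strip needs input rows lo .. hi+P-2 to avoid boundary issues
        return conv2d_valid(img[lo:hi + P - 1], kern)
    with ThreadPoolExecutor(max_workers=n_strips) as ex:
        parts = ex.map(lambda b: work(*b), bounds)
        return [row for part in parts for row in part]

def run_filters(img, kernels, n_strips):
    """Outer thread per filter; each filter thread spawns convolution threads."""
    with ThreadPoolExecutor(max_workers=max(1, len(kernels))) as ex:
        return list(ex.map(lambda k: conv2d_partitioned(img, k, n_strips),
                           kernels))
```

The same pattern repeats after the pooling barrier: class threads over the FC columns, each spawning channel threads over the FC rows.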
Figure 4: Execution time for various number of filter and convolution threads.
In this experiment, we fix the number of class and channel threads to $10$ and
$1$, respectively.
Figure 4 shows the execution times for the above nested multi-threading
strategy. For this experiment, we fix $H=10$ and $J=1$ and vary $F$ and $C$.
From this figure, we can observe that having more filter threads generally
leads to faster execution. Since the maximum number of available threads is
112, we observe that utilizing only 70-80 threads provides the optimal results
because it achieves the best compromise between the number of threads and
thread efficiency. In particular, when $F=28$ and $C=3$, 84 threads are
utilized in total and it results in the lowest execution time of $864$
seconds. Similarly, when the number of channel threads $J$ is increased, the
best results were obtained when $J=7$ leading to a total execution time of
$561$ seconds.
Thus, the proposed multi-threading strategy leads to an approximately $40\times$
improvement over the time required for computing all $28$ filters on a single
thread ($28\times 812\approx 22736$ seconds, based on the per-filter total in
Table 1). This is achieved by
utilizing $70-80$ threads on average. If the memory constraints can be
surmounted, there is scope for further refinement in the proposed multi-
threaded strategy.
It must be emphasized that the values reported in the above paragraphs are the
actual execution times and not the amortized time. Since $K$ images are
processed in parallel exploiting the SIMD mechanism, the amortized time per
image could be orders of magnitude lower. Since $K=16384$ in our experiments,
the amortized execution time for inference on a single image can be estimated
as $34$ milliseconds. Note that most of the reported literature on inference
based on encrypted MNIST data (e.g., (Gilad-Bachrach et al. 2016; Chou et al.
2018)) use the BFV encryption scheme available in the Microsoft SEAL library
implemented on a x86 platform. Hence, it is not possible to directly compare
the execution times reported in this work with those reported in the
literature. However, we do note that the wall-clock run time reported in
(Gilad-Bachrach et al. 2016) is 250 seconds, with only 10 $5\times 5$ filters
and a stride length of 2, which is comparable to our run time of 561 seconds
(with 28 filters and a stride length of 1). Moreover, unlike (Gilad-Bachrach
et al. 2016), the weights of the convolution and fully-connected layers are
encrypted in our proposed implementation.
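The speedup and amortized-cost figures quoted above follow directly from the measured timings:

```python
# Headline numbers: single-thread baseline vs. best nested multi-threading,
# and the SIMD-amortized per-image cost.
single_thread_total_s = 28 * 812      # 28 filters x ~812 s/filter (Table 1)
best_parallel_s = 561                 # F=28, C=3, J=7 configuration

speedup = single_thread_total_s / best_parallel_s   # ~40x

K = 16384                             # ciphertext slots -> images per batch
amortized_ms = best_parallel_s / K * 1e3            # ~34 ms per image
```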
It must be highlighted that the ciphertext packing strategy used in this work
has high latency for a single inference. This could be addressed using
different packing strategies as discussed in (Brutzkus, Gilad-Bachrach, and
Elisha 2019), which can reduce the latency at the cost of decreasing the
throughput. For example, for single image inference with a different packing
strategy (the entire image is encrypted within a single ciphertext), we were
able to achieve a total inference time of 8.8 seconds with encrypted model
parameters. When the model parameters are not encrypted, the latency can be
further reduced to 2.5 seconds, which is comparable to the latency of 2.2
seconds reported in (Brutzkus, Gilad-Bachrach, and Elisha 2019).
## Conclusions and Future Work
In this paper, we addressed the computational complexity of deep
convolutional neural network inference on encrypted data. The approach
presented in the paper describes a secure way to compute the convolution, non-
linear activation and pooling layers using the CKKS FHE scheme. We
exploited two features to tame the complexity: SIMD ciphertext packing and
thread level parallelism. In addition, we empirically set the depth of the
circuit to further minimize the execution time. Using a recent method to
approximate ReLU, we were able to build a bootstrap-free ReLU function in FHE.
Since the max pooling layer requires bootstrapping, we replaced it with a mean
pooling layer. Overall, we show that it is possible to implement encrypted
inference in reasonable time by investing in the right CNN design and
parameter choices.
In future work, we will integrate other layers, with the end goal of
supporting full inference and training on encrypted data. We also observe that
an overall latency of $561$ seconds could be too high for some applications.
The limitation of the ciphertext packing strategy used in this work is high
latency for a single inference and high memory requirement. This could be
addressed using different packing strategies as discussed in (Brutzkus, Gilad-
Bachrach, and Elisha 2019), which can reduce the latency while also decreasing
the throughput. Since all the ideas contained in (Brutzkus, Gilad-Bachrach,
and Elisha 2019) are practical HE implementation nuances that do not require
any change to the machine learning model, they can be readily applied to the
proposed approach. We leave this analysis for future work.
## Acknowledgement
The work was initiated when authors Nandakumar/Ratha/Pankanti were part of IBM
Research. We thank IBM, USA for providing us the computing facilities. Author
Kumar acknowledges Infosys Foundation for the financial support to IIIT-
Bangalore through the Infosys Foundation Career Development Chair Professor.
## References
* Acar et al. (2018) Acar, A.; Aksu, H.; Uluagac, A. S.; and Conti, M. 2018. A Survey on Homomorphic Encryption Schemes: Theory and Implementation. _ACM Computing Surveys_ 51(4).
* Al-Rubaie and Chang (2019) Al-Rubaie, M.; and Chang, J. 2019. Privacy-Preserving Machine Learning: Threats and Solutions. _IEEE Security & Privacy_ 17: 49–58. doi:10.1109/MSEC.2018.2888775.
* Bost et al. (2015) Bost, R.; Popa, R.; Tu, S.; and Goldwasser, S. 2015. Machine Learning Classification over Encrypted Data. doi:10.14722/ndss.2015.23241.
* Brakerski (2012) Brakerski, Z. 2012. Fully homomorphic encryption without modulus switching from classical GapSVP. In _Annual Cryptology Conference_ , 868–886. Springer.
* Brakerski, Gentry, and Vaikuntanathan (2014) Brakerski, Z.; Gentry, C.; and Vaikuntanathan, V. 2014. (Leveled) fully homomorphic encryption without bootstrapping. _ACM Transactions on Computation Theory (TOCT)_ 6(3): 1–36.
* Brutzkus, Gilad-Bachrach, and Elisha (2019) Brutzkus, A.; Gilad-Bachrach, R.; and Elisha, O. 2019. Low latency privacy preserving inference. In _International Conference on Machine Learning_ , 812–821.
* Cheon et al. (2017) Cheon, J. H.; Kim, A.; Kim, M.; and Song, Y. 2017. Homomorphic encryption for arithmetic of approximate numbers. In _International Conference on the Theory and Application of Cryptology and Information Security_ , 409–437. Springer.
* Chillotti et al. (2020) Chillotti, I.; Gama, N.; Georgieva, M.; and Izabachène, M. 2020. TFHE: fast fully homomorphic encryption over the torus. _Journal of Cryptology_ 33(1): 34–91.
* Chou et al. (2018) Chou, E.; Beal, J.; Levy, D.; Yeung, S.; Haque, A.; and Fei-Fei, L. 2018. Faster CryptoNets: Leveraging Sparsity for Real-World Encrypted Inference. _CoRR_ abs/1811.09953. URL http://arxiv.org/abs/1811.09953.
* Dahl et al. (2018) Dahl, M.; Mancuso, J.; Dupis, Y.; Decoste, B.; Giraud, M.; Livingstone, I.; Patriquin, J.; and Uhma, G. 2018. _Private Machine Learning in TensorFlow using Secure Computation_.
* Dathathri et al. (2019) Dathathri, R.; Saarikivi, O.; Chen, H.; Laine, K.; Lauter, K.; Maleki, S.; Musuvathi, M.; and Mytkowicz, T. 2019. CHET: An Optimizing Compiler for Fully-Homomorphic Neural-Network Inferencing. In _PLDI 2019_ , 142–156. ACM. URL https://www.microsoft.com/en-us/research/publication/chet-an-optimizing-compiler-for-fully-homomorphic-neural-network-inferencing/.
* Fan and Vercauteren (2012) Fan, J.; and Vercauteren, F. 2012. Somewhat Practical Fully Homomorphic Encryption. Cryptology ePrint Archive, Report 2012/144. https://eprint.iacr.org/2012/144.
* Gentry (2009) Gentry, C. 2009. Fully homomorphic encryption using ideal lattices. In _Proceedings of the forty-first annual ACM symposium on Theory of computing_ , 169–178.
* Gilad-Bachrach et al. (2016) Gilad-Bachrach, R.; Dowlin, N.; Laine, K.; Lauter, K.; Naehrig, M.; and Wernsing, J. 2016. Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy. In _International Conference on Machine Learning_ , 201–210.
* Gottemukkula (2020) Gottemukkula, V. 2020. Polynomial Activation Functions. ICLR 2020 Open Review. https://openreview.net/attachment?id=rkxsgkHKvH&name=original_pdf.
* Halevi and Shoup (2020) Halevi, S.; and Shoup, V. 2020. _HElib - An Implementation of homomorphic encryption_. https://github.com/shaih/HElib/ (accessed March 30, 2020).
* Howard et al. (2017) Howard, A. G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; and Adam, H. 2017. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. _CoRR_ abs/1704.04861. URL http://arxiv.org/abs/1704.04861.
* Juvekar, Vaikuntanathan, and Chandrakasan (2018) Juvekar, C.; Vaikuntanathan, V.; and Chandrakasan, A. 2018. GAZELLE: A low latency framework for secure neural network inference. In _27th $USENIX$ Security Symposium ($USENIX$ Security 18)_, 1651–1669.
* Krizhevsky, Sutskever, and Hinton (2012) Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Pereira, F.; Burges, C. J. C.; Bottou, L.; and Weinberger, K. Q., eds., _Advances in Neural Information Processing Systems_ , 1097–1105. Curran Associates, Inc.
* Lou and Jiang (2019) Lou, Q.; and Jiang, L. 2019. SHE: A Fast and Accurate Deep Neural Network for Encrypted Data. In _Advances in Neural Information Processing Systems_ , 10035–10043.
* Phan et al. (2020) Phan, H.; Huynh, D. T.; He, Y.; Savvides, M.; and Shen, Z. 2020. MoBiNet: A Mobile Binary Network for Image Classification. In _The IEEE Winter Conference on Applications of Computer Vision (WACV)_.
* Poggio et al. (2017) Poggio, T.; Mhaskar, H.; Rosasco, L.; Miranda, B.; and Liao, Q. 2017. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: a review. _International Journal of Automation and Computing_ 14(5): 503–519.
* Simonyan and Zisserman (2014) Simonyan, K.; and Zisserman, A. 2014. Very Deep Convolutional Networks for Large-Scale Image Recognition. _arXiv 1409.1556_ .
* Szegedy et al. (2015) Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; and Rabinovich, A. 2015. Going Deeper with Convolutions. In _Computer Vision and Pattern Recognition (CVPR)_. URL http://arxiv.org/abs/1409.4842.
* Zeiler and Fergus (2013) Zeiler, M.; and Fergus, R. 2013. Visualizing and Understanding Convolutional Neural Networks. volume 8689.
# Cosmic-ray current-driven instabilities – revisiting environmental
conditions
Brian Reville1, Gwenael Giacinti1 and Robyn Scott2
1Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, Heidelberg 69117,
Germany
2 Centre for Plasma Physics, Queen’s University Belfast, University Road,
Belfast BT7 1NN, Northern Ireland E-mail<EMAIL_ADDRESS>
(Accepted 2021 January. Received 2021 January; in original form 2020 November)
###### Abstract
The growth of magneto-hydrodynamic fluctuations relevant to cosmic-ray
confinement in and near their sources is revisited, together with the effects
of local plasma conditions. We consider cases where cosmic rays penetrate a
medium which may contain a fraction of neutral particles, and explore the
possible effects of high-order cosmic-ray anisotropies. An algorithm for
calculating the dispersion relation for arbitrary distributions and
anisotropies is presented, and a general solution for power-law cosmic-ray
distributions is provided. Implications for the resulting instabilities near
strong Galactic cosmic-ray sources are discussed. We argue that cosmic-ray
streaming in weakly ionised plasmas eliminates the need for the existence of
an evanescent band in the dispersion relation, a conclusion which may be
confirmed by gamma-ray observations. The necessity for additional multi-scale
numerical simulations is highlighted, as understanding the non-linear
behaviour is crucial.
###### keywords:
plasmas – instabilities – (ISM:) cosmic rays
pubyear: 2020; pagerange: Cosmic-ray current-driven instabilities –
revisiting environmental conditions–C.5.2
## 1 Introduction
Young supernova remnants (SNR) are sources of high-energy particles, revealed
by their non-thermal x-ray and $\gamma$-ray emission (e.g. Berezhko et al.,
2002; Hinton & Hofmann, 2009). If they are also sources of cosmic rays (CRs),
as is likely the case, then the escape and transport in their local
environment should hint at the processes determining confinement, and offer a
potential probe of the time-history and conditions near the source. Such
information is required to understand their contribution to the total Galactic
CR population.
Studies on CR escape from sources, dating back to the early works of Kulsrud &
Cesarsky (1971), Skilling (1971) and others, implicitly assume that CR
transport, at least at low energies, is dominated by scattering on resonantly-
excited Alfvén waves. Close to efficient CR sources, this is however unlikely
to be the case. For example, on scales relevant to the energetic particles
emitting the $>$TeV $\gamma$-rays now detected from several young SNRs,
estimates for the growth-time of resonantly excited Alfvén modes can exceed
the lifetime of the SNR in question. The present understanding is that the
non-linear development of a much faster non-resonant short-wavelength
instability provides the required scattering and confinement (Bell, 2004; Bell
et al., 2013).
The concept of magnetic field amplification, and self-confinement underpins
all current approaches to model acceleration of CRs to the highest energies in
SNRs. It will likewise be a requirement of any conceivable alternative source.
The model put forward by Zirakashvili & Ptuskin (2008) and Bell et al. (2013)
establishes an upper-limit to the maximum achievable CR energy; directly
related to the ability of escaping high-energy particles to amplify the
magnetic field (non-resonantly) to a level that inhibits further escape. In
this model, the system is continuously leaking a fraction of particles at the
highest energies, their mean free path growing with distance from the shock.
The picture bears similarities with the free escape boundary approach commonly
used as a closure approximation in non-linear steady state models (e.g.
Ellison & Eichler, 1984; Reville et al., 2009).
This slightly artificial picture is conceptually simple in that it
conveniently separates the energetic particles into two populations; one being
transported in a highly developed and likely complex field topology close to
the shock, and another streaming freely into an ambient medium. The former
regime has been the focus of numerous numerical investigations (e.g. Bell,
2004; Zirakashvili et al., 2008; Reville et al., 2008; Gargaté et al., 2010;
Rogachevskii et al., 2012; Reville & Bell, 2013; Bai et al., 2015; Marret et
al., 2021) although the inherent multi-dimensional, non-linear and multi-scale
nature of the problem still poses serious challenges. For the latter scenario,
capturing the transport and self-regulated scattering in the immediate
surrounding medium is an equally challenging problem, as an accurate model
demands connecting the global structure of the accelerator with the local
large scale turbulent magnetic field, neither of which are completely
understood. Ultimately a unified theory that self-consistently captures the
complete global picture bridging the two regimes is desired, but a clear
understanding of the underlying assumptions is an essential first step.
Using test particle simulations in an isotropic Kolmogorov turbulent field,
Giacinti et al. (2012) demonstrated that in the process of diffusing away from
a source, before CRs have diffused over several correlation lengths $L_{c}$ of
the large scale field, the transport is highly filamentary. The filaments
develop due to the anisotropic nature of particle diffusion in magnetised
plasmas (e.g. Isenberg & Jokipii, 1979). Giacalone (2017) has shown that
similar filaments may also develop on smaller scales close to the shock, for a
given field realisation. Efforts to incorporate this phenomenon into the
standard picture have been applied, although for practical purposes in a
reduced one-dimensional picture (Ptuskin et al., 2008; Malkov et al., 2013)
while others have included the role of neutral particles in damping/stabilising
the self-excited modes (Nava et al., 2016; D’Angelo et al., 2018; Brahimi et
al., 2020). These works have exclusively worked within the confines of the
standard quasi-linear theory framework discussed previously. Bell et al.
(2013) on the other hand, made a first attempt to apply advances in our
understanding of non-resonant excitation of magnetic fields into a more self-
consistent picture of escape from SNRs. This same approach has also been
applied by Blasi et al. (2015) to the escape of CRs from galaxies.
In this work, we revisit the linear analysis of unstable modes, developing a
more generic framework for treating the linear perturbations, with the aim of
providing fresh insight into the problem, while also laying the groundwork for
future numerical investigations. The outline of the paper is as follows. In
the next section we describe the general framework, including the key
assumptions. Section 3 details application of the new results to ionised
plasmas, using new results from Cas A to motivate the parameter choices. In
section 4, the combined effects of neutrals and cosmic-rays are investigated.
Due to the lengthy nature of the calculation, the derivation of the cosmic-ray
response is left to the appendix.
## 2 Cosmic-ray anisotropy and generalised MHD instability
We consider the generic case of transport in the neighbourhood of a strong CR
source. A magneto-hydrodynamic description for the background plasma is
adopted, which for simplicity we take to be composed of electrons and protons
only. Combining the single fluid equations, while accounting for the presence
of cosmic-rays and neutral hydrogen, the momentum equation for the charge
carrying thermal fluid reads (Bell, 2004; Reville et al., 2007)
$\displaystyle\rho_{\rm i}\frac{\mathrm{d}\boldsymbol{u}_{\rm i}}{\mathrm{d}t}=-\nabla P_{\rm th}-\frac{1}{4\pi}\boldsymbol{B}\times(\nabla\times\boldsymbol{B})-\rho_{\rm i}\nu_{\mathrm{in}}\left(\boldsymbol{u}_{\mathrm{i}}-\boldsymbol{u}_{\mathrm{n}}\right)-\frac{1}{c}(\boldsymbol{j}_{\mathrm{cr}}-n_{\mathrm{cr}}q_{\rm cr}\boldsymbol{u}_{\rm i})\times\boldsymbol{B}.$ (1)
The last term ($n_{\mathrm{cr}}q_{\rm cr}\boldsymbol{u}_{\rm
i}\times\boldsymbol{B}$) arises due to the fact that the return current is
drawn relative to the background fluid, and it is this term that ultimately
determines the neutral streaming velocity in the fully ionised case. $\nu_{\rm
in}$ is the momentum exchange rate due to elastic charge-exchange collisions
with neutral hydrogen atoms. A derivation of this equation is provided in
Appendix A, where previous kinetic approaches are also reviewed.
Adopting ideal MHD, Faraday’s equation reads
$\displaystyle\frac{\partial\boldsymbol{B}}{\partial t}=\bm{\nabla}\times(\boldsymbol{u}_{\rm i}\times\boldsymbol{B})~{}.$ (2)
The above equations must be solved together with momentum conservation for the
neutral fluid
$\displaystyle\rho_{n}\frac{\mathrm{d}\boldsymbol{u}_{\rm n}}{\mathrm{d}t}=-\nabla P_{\rm n}-{\rho}_{\rm i}\nu_{\mathrm{in}}\left(\boldsymbol{u}_{\mathrm{n}}-\boldsymbol{u}_{\mathrm{i}}\right)~{}.$ (3)
where from momentum conservation, we have $\rho_{\rm n}\nu_{\rm
ni}={\rho}_{\rm i}\nu_{\mathrm{in}}$. The following expression for the ion-
neutral collision frequency, valid in the range $10^{2}{\rm K}\ll T<10^{6}{\rm
K}$ (Kulsrud & Cesarsky, 1971) is used
$\displaystyle\nu_{\rm in}=8.9\times 10^{-9}n_{\rm n}{T_{\rm i}}^{0.4}~{}{\rm s}^{-1}~{},$ (4)
where $T_{\rm i}$ is the ion temperature in eV. For lower temperatures
$T\approx 10^{2}$ K, we use the results of Osterbrock (1961) which, since we
neglect Helium contributions throughout this work, is simply
$\displaystyle\nu_{\rm in}=2.3\times 10^{-9}{n}_{\rm n}~{}{\rm s}^{-1}~{}.$
(5)
We take this value to hold at lower temperatures, since the same physical
arguments presented in Osterbrock (1961) apply. Note however, for such small
temperature the weakly collisional regime may come in to play, which requires
a different analysis. Bell et al. (2020) have recently demonstrated that a
related instability, the magneto-collisional instability, occurs in this
regime.
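Equations (4) and (5) can be combined into a single helper. The crossover temperature used below (about $0.034$ eV, a few hundred K, chosen as the point where the two expressions coincide) is our own choice, since the text only states that equation (5) holds at and below $T\approx 10^{2}$ K:

```python
def nu_in(n_n, T_i_eV):
    """Ion-neutral momentum-exchange rate in s^-1, after eqs. (4) and (5).

    n_n    : neutral hydrogen number density [cm^-3]
    T_i_eV : ion temperature [eV]

    The crossover temperature is an assumption: it is taken where the two
    published expressions coincide (~0.034 eV, i.e. a few hundred K).
    """
    T_match = (2.3 / 8.9) ** 2.5          # where eq. (4) equals eq. (5)
    if T_i_eV < T_match:
        return 2.3e-9 * n_n               # eq. (5), Osterbrock (1961)
    return 8.9e-9 * n_n * T_i_eV ** 0.4   # eq. (4), Kulsrud & Cesarsky (1971)
```

A convenient by-product of this choice is that the rate is continuous across the switch.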
It is also known that the environments of SNRs have significant amounts of
dust, as do many parts of the ISM. A framework for adding this additional
charged species to the system of equations and a detailed linear analysis has
been performed by Squire et al. (2020). We will not consider this additional
complication here.
We proceed with a standard linear analysis, looking at circularly polarised
modes propagating parallel to a mean background field with average CR current
along the mean magnetic field direction: $\bm{j}_{0}\|\bm{B}_{0}$. Following
Bell (2004), we make the substitution $j_{\bot}/j_{0}=\sigma B_{\bot}/B_{0}$
i.e. $\sigma$ describes the response of the CR current to small transverse
fluctuations in the magnetic field. Working in the local fluid rest frame
($\langle\bm{u}_{\rm i}\rangle=0$), and excluding any possible drift between
neutrals and the ionised fluid to lowest order, the general dispersion
relation is
$\displaystyle\omega^{2}\left[1+\frac{{\rm i}\nu_{\rm in}}{\omega+{\rm i}\chi\nu_{\rm in}}\right]=k^{2}v_{\rm A}^{2}-\epsilon\frac{kB_{0}j_{0}}{\rho_{\rm i}c}\left[1-\sigma_{1}-\frac{\omega}{kv_{\rm cr}}(1-\sigma_{2})\right]~{},$ (6)
where $v_{\rm A}=B_{0}/\sqrt{4\pi\rho_{i}}$ is the Alfvén velocity. The
cosmic-ray drift velocity $v_{\rm cr}=j_{0}/n_{\rm cr}q_{\rm cr}$ is
introduced here, as well as an ionisation ratio $\chi=\rho_{\rm i}/\rho_{\rm
n}=\nu_{\rm ni}/\nu_{\rm in}$ (footnote 1: as opposed to the ionisation
_fraction_ $X\equiv\rho_{\rm i}/\rho_{\rm tot}=\chi/(1+\chi)$). For convenience,
we have also separated out the $\omega$ dependent terms as
$\sigma(\omega,k)=\sigma_{1}(k)-(\omega/v_{cr}k)\sigma_{2}(k)$ (see Appendix
B).
A generalised expression for $\sigma$ is given in the Appendix where an
expansion of the cosmic-ray anisotropy to arbitrary order in Legendre
polynomials is presented. Other basis functions (e.g. Chebyshev) are equally
straightforward to derive, however, here we aim to develop a framework to
compare to future simulations which will build on the work of Reville & Bell
(2013) who used the related spherical harmonic functions as a basis. The
penultimate expression for $\sigma$, which involves an integral over the
magnitude of momentum, is given in a form that allows for straightforward
numerical calculation applicable to an arbitrary distribution. In
the present work, we focus on a fixed power-law distribution at all orders of
the expansion $f_{\ell}\propto p^{-s}\Theta(p;p_{1},p_{2})$, where
$\Theta(x;a,b)$ is a top hat function, equal to unity for $a<x<b$ and zero
otherwise. In this situation, the integrals, although cumbersome, can still be
expressed in terms of standard functions, and can be shown to reduce to
previous results in the appropriate limits. e.g. first order expansions with
$s=4$ were originally presented in Bell (2004), while thermal/composition
effects, and alternative power-laws have been explored in Reville et al.
(2007). We revisit and extend these results in the following.
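As a minimal numerical sketch of the kind of momentum integral involved, the snippet below evaluates $n_{\rm cr}(>p)=4\pi\int_{p}^{\infty}fp^{2}\,dp$ (a quantity used later in the text) for the top-hat power law $f\propto p^{-s}\Theta(p;p_{1},p_{2})$, and cross-checks the quadrature against the closed form. The full $\sigma$ integrals of Appendix B are not reproduced here:

```python
import math

def f_powerlaw(p, A, s, p1, p2):
    """f(p) = A p^-s Theta(p; p1, p2): power law with top-hat support."""
    return A * p ** (-s) if p1 <= p <= p2 else 0.0

def n_cr_above(p, A, s, p1, p2, n=4000):
    """n_cr(>p) = 4 pi int_p^inf f(q) q^2 dq via log-spaced trapezoid rule."""
    lo, hi = max(p, p1), p2
    if lo >= hi:
        return 0.0
    xs = [lo * (hi / lo) ** (i / n) for i in range(n + 1)]
    ys = [4.0 * math.pi * f_powerlaw(x, A, s, p1, p2) * x * x for x in xs]
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(n))

def n_cr_above_exact(p, A, s, p1, p2):
    """Closed form for s != 3, used to cross-check the quadrature."""
    lo = max(p, p1)
    return 4.0 * math.pi * A * (lo ** (3 - s) - p2 ** (3 - s)) / (s - 3)
```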
Higher order anisotropies (to second order in Legendre polynomials) have been
presented in Bykov et al. (2011), although there the emphasis was on the
firehose instability. Using the asymptotic $kr_{\rm g,2}\ll 1$ values given in
the appendix, where $r_{\rm g,2}=r_{\rm g}(p_{2})$ is the gyro-radius of the
maximum energy cosmic-rays, independent of $s$ the usual firehose expression
is found
$\displaystyle\omega^{2}=k^{2}v_{\rm
A}^{2}\left(1-\frac{P_{\|}-P_{\bot}}{B_{0}^{2}/4\pi}\right)~{},$ (7)
where $P_{\|,\bot}$ are the parallel and perpendicular components (w.r.t. the
guide field) of the cosmic-ray pressure tensor (Bykov et al., 2012; Scott et
al., 2017). If the pressure anisotropy $\Delta P_{\rm cr}=P_{\|}-P_{\bot}$ is
large enough the system is firehose unstable. We will see that for what
concerns the acceleration of the highest energy particles, this regime is
generally too slow to be of any influence, at least for the cases we consider.
We may also compare our result to that of the resonant ion-cyclotron
instability, valid in the weak damping approximation. As we will show below,
this case applies in the limit ${v_{\rm cr}}/{c}\ll U_{B}/U_{\rm cr}$, where
$U_{B}$ and $U_{\rm cr}$ are the energy densities of the magnetic field and
CRs respectively. In this limit the first term on the rhs of equation 2
dominates and the real part of the dispersion relation corresponds to left or
right propagating Alfvén waves ${\rm Re}({\omega})=\pm kv_{\rm A}$. Using the
asymptotic expressions given in equations (73) and (74) of the Appendix, we
recover the familiar linear growth rate expression for the resonant streaming
instability
$\displaystyle{\rm Im}({\omega})=\pm{\Omega_{0}}\left.\frac{\pi}{4}\frac{s-3}{s-2}\frac{n_{\rm cr}\left(>p\right)}{n_{i}}\left(\frac{3}{s}\frac{v_{\rm cr}}{v_{\rm A}}\mp 1\right)\right|_{p=eB/ck}~{},$
where $\Omega_{0}=eB_{0}/m_{p}c$ is the non-relativistic gyro-frequency, and
$n_{\rm cr}(>p)=4\pi\int_{p}^{\infty}fp^{2}dp$. Note that only waves with
$\omega/k$ and $v_{\rm cr}$ of the same sign have positive growth. We see here
also that the neutral streaming speed has the correct form (e.g. Melrose &
Wentzel, 1970), namely that steady streaming is proportional to the Alfvén
velocity. If the self-generated waves reduce the streaming velocity to the
threshold condition $v_{\rm cr}=sv_{\rm A}/3$, the next order term in the
anisotropy can be included. Using the results in the appendix B5, the pressure
anisotropy instability recently investigated by Zweibel (2020) is recovered.
The instabilities are not mutually exclusive, but rather limiting forms of the
resonantly driven instability for vanishing first and second order
anisotropies respectively.
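The resonant growth-rate expression above translates directly into code. All quantities are treated as dimensionless inputs, and $n_{\rm cr}(>p)/n_{i}$ is assumed to be supplied already evaluated at the resonant momentum $p=eB/ck$:

```python
import math

def im_omega_resonant(Omega0, s, ncr_over_ni, vcr_over_vA, forward=True):
    """Linear growth/damping rate of the resonant streaming instability.

    forward=True : wave with omega/k of the same sign as v_cr (upper signs);
    forward=False: counter-propagating wave (lower signs).
    ncr_over_ni is n_cr(>p)/n_i at the resonant momentum p = eB/ck.
    """
    sgn = 1.0 if forward else -1.0
    return (sgn * Omega0 * (math.pi / 4.0) * (s - 3.0) / (s - 2.0)
            * ncr_over_ni * ((3.0 / s) * vcr_over_vA - sgn))
```

Only the co-propagating branch has positive growth, and it vanishes at the threshold drift $v_{\rm cr}=sv_{\rm A}/3$, consistent with the discussion above.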
Interest in cosmic-ray streaming has been renewed in recent years, where
significant efforts have been made to implement the process in large galaxy-
scale simulations (Jiang & Oh, 2018; Thomas & Pfrommer, 2019; Chan et al.,
2019). In this regard, it is necessary to highlight the range of validity of
the streaming instability result above. The previously mentioned limit on
applicability of the weak-damping approximation corresponds to a cosmic-ray
energy density of $U_{\rm cr}\ll 10~{}(v_{\rm cr}/v_{\rm
A})^{-1}n_{gas}^{1/2}B_{\rm\mu G}$ keV cm-3. This limit has a simple physical
interpretation, being equivalent to the condition that the
$\bm{j}_{cr}\times\bm{B}$ force on the background fluid does not dominate over
the magnetic tension (a necessary condition for Alfvén wave propagation) on
any scale in the resonant range. The above limit is not always guaranteed to
be the case, particularly in the neighbourhoods of strong CR sources, where
both $U_{\rm cr}$ and $v_{\rm cr}/v_{\rm A}$ are enhanced relative to Galactic
averages. When this condition is violated, the short-wavelength non-resonant
branch first investigated by Bell (2004) dominates. However, as we show below,
the resonant modes also become increasingly sensitive to the nature and
details of the anisotropy.
With the goal of generalising the prescription for parallel propagating modes
in magnetised plasmas containing a population of drifting CRs in mind, it will
prove convenient to introduce the dimensionless variables
$(\hat{\omega},\hat{\nu})=(\omega,\nu_{\rm in})\times r_{\rm g1}/v_{\rm A}$
and $\hat{k}=kr_{\rm g1}$, where $r_{\rm g1}$ is the gyro-radius of the lowest
energy cosmic rays in the background mean field. We keep the treatment
general, such that $v_{\rm cr}$ can take any value, but will specialise later
to the case of shock precursors.
In dimensionless form, equation (2) reads
$\displaystyle\hat{\omega}^{2}\left[1+\frac{{\rm i}\hat{\nu}}{\hat{\omega}+{\rm i}\chi\hat{\nu}}\right]=\hat{k}^{2}-\epsilon\zeta_{1}\hat{k}\left[1-\sigma_{1}-\frac{\hat{\omega}}{\hat{k}}\frac{v_{\rm A}}{v_{\rm cr}}(1-\sigma_{2})\right]~{},$ (8)
where we have introduced the dimensionless quantity
$\displaystyle\zeta_{1}=\frac{r_{\rm g1}B_{\|}j_{\|}}{\rho_{\rm i}v_{\rm A}^{2}c}=\frac{4\pi n_{\rm cr}p_{1}v_{\rm cr}}{B_{\|}^{2}}=\frac{1}{2}\frac{(s-4)}{(s-3)}\frac{1-\left(\frac{p_{1}}{p_{2}}\right)^{s-3}}{1-\left(\frac{p_{1}}{p_{2}}\right)^{s-4}}\left(\frac{U_{cr}}{U_{\rm B}}\right)\frac{v_{\rm cr}}{c}~{}.$ (9)
In this notation, the condition for Bell-instability to operate is
$\zeta_{1}>1$. If this condition is not satisfied, the magnetic tension
dominates at $kr_{\rm g,1}=1$, and the non-resonant branch is effectively
stabilised. The weak-damping result given above generally applies in this
case, provided the anisotropy is small. If the anisotropy is large,
specifically if $\Delta P_{\rm cr}/P_{\rm cr}>v_{\rm cr}/{c}$, the leading
order terms are comparable. The effect of increased anisotropy will be shown
in the next section. The second form for $\zeta_{1}$ given in (9) above is
close (footnote 2: for the $s=4$ case, note $\lim_{n\rightarrow
0}\frac{x^{n}-1}{n}=\ln x$) to that used in Bell (2004), although the
assumption that $v_{\rm cr}=u_{\rm sh}$ is no longer implicit. Note that
setting $U_{B}$ to the total magnetic
energy (turbulent + mean field) in this expression, the saturation condition
given in Bell (2004) corresponds to $\zeta_{1}=1$.
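The spectral factor in the second form of (9), including the $s=4$ limit noted above, can be sketched as follows. This is an illustration only; the function names and the example numbers in the usage note are our own assumptions.

```python
import math

def zeta1(s, p1_over_p2, Ucr_over_UB, vcr_over_c):
    """Drive parameter zeta_1 from equation (9).

    For s == 4 the spectral factor is evaluated in the limit quoted in the
    text, using lim_{n -> 0} (x**n - 1)/n = ln(x).
    """
    r = p1_over_p2
    if abs(s - 4.0) < 1e-9:
        spectral = (1.0 - r) / math.log(1.0 / r)
    else:
        spectral = ((s - 4.0) / (s - 3.0)
                    * (1.0 - r ** (s - 3.0)) / (1.0 - r ** (s - 4.0)))
    return 0.5 * spectral * Ucr_over_UB * vcr_over_c

def bell_unstable(*args):
    """Non-resonant (Bell) branch operates when zeta_1 > 1."""
    return zeta1(*args) > 1.0
```

For $s=4.17$ and $p_{2}/p_{1}=10^{4.5}$, the spectral factor is $\approx 0.09$, so the instability condition becomes a condition on $(U_{\rm cr}/U_{\rm B})(v_{\rm cr}/c)$.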
An alternative approach introduced by Bell et al. (2013), which is more
applicable to CR escape upstream of a shock, is to define an escaping flux
$j_{\|}\bar{p}_{\rm esc}c=\eta_{\rm esc}e\rho_{\rm tot}u_{\rm sh}^{3}$, where
$\eta_{\rm esc}$ represents the fraction of the total mass energy density
processed by the shock that is carried away by escaping particles, and
$\bar{p}_{\rm esc}$ the average momentum of these escaping particles. Setting
this to $p_{1}$, we find
$\displaystyle\zeta_{1}=\eta_{\rm esc}M_{\rm A,n}^{2}\frac{u_{\rm sh}}{c},$
(10)
where $M_{\rm A,n}=u_{\rm sh}/v_{\rm A,n}$ is the Alfvénic Mach number of the shock.
Here we introduce the reduced Alfvén velocity $v_{\rm
A,n}=B/\sqrt{4\pi(\rho_{\rm i}+\rho_{\rm n})}$ in anticipation of shocks that
encounter an incompletely ionised medium in the far upstream. This reduces to
the expression in Bell et al. (2013) in the limit $\rho_{n}=0$.
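Equation (10) with the reduced Alfvén velocity can be sketched numerically (cgs units; the function names and the example shock parameters in the test are illustrative assumptions):

```python
import math

C_LIGHT = 2.9979e10  # speed of light [cm/s]

def v_alfven_reduced(B, rho_i, rho_n=0.0):
    """Reduced Alfven speed v_A,n = B / sqrt(4 pi (rho_i + rho_n)) [cm/s]."""
    return B / math.sqrt(4.0 * math.pi * (rho_i + rho_n))

def zeta1_escape(eta_esc, u_sh, B, rho_i, rho_n=0.0):
    """zeta_1 from the escaping-flux prescription, equation (10)."""
    M_An = u_sh / v_alfven_reduced(B, rho_i, rho_n)
    return eta_esc * M_An ** 2 * u_sh / C_LIGHT
```

Including neutral inertia ($\rho_{\rm n}>0$) lowers $v_{\rm A,n}$, raising $M_{\rm A,n}$ and hence $\zeta_{1}$; setting $\rho_{\rm n}=0$ recovers the Bell et al. (2013) expression.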
We explore the new generalised dispersion relation below for a selection of
specific relevant cases.
Figure 1: Real (left) and imaginary (right) components of the phase speeds for
waves in a fully ionised background, normalised to the ambient Alfvén
velocity, as a function of pressure anisotropy $\Delta_{P}=(\Delta P_{\rm
cr}/P_{\rm cr})/(2(u_{\rm sh}/c)^{2})$ (i.e. the fractional pressure
anisotropy relative to the diffusion approximation result). Dashed curves are
for modes rotating in the same sense as the driving cosmic rays; solid lines
are for the opposite sense. We use values expected for Cas A: $v_{\rm
cr}=u_{\rm sh}=5,000~{}{\rm km~{}s}^{-1}$, $\rho_{\rm i}=2.34\times 10^{-24}$
g cm$^{-3}$ and $B_{0}=3~{}\mu$G. We take $p_{2}/p_{1}=10^{4.5}$, such that for
$E_{\rm min}=1$ GeV, $E_{\rm max}\approx 30$ TeV. We have taken a CR efficiency
of $P_{\rm cr}/\rho_{\rm i}u_{\rm sh}^{2}=0.2$ (see text). Note the cut-off in
the real part of the frequency below $kr_{\rm g2}=10^{-4.5}$ due to onset of
the firehose instability.
## 3 Strongly ionised plasmas - $\chi\rightarrow\infty$
Observations of several SNRs in H$\alpha$ provide unambiguous evidence that
SNR shocks interact with partially ionised plasmas. As previously pointed out
by Reville & Bell (2013), the strong heating in the non-linear phase of CR-
driven magnetic field amplification is likely to ionise most material before
it reaches the shock, subject to the obvious condition that the driving is
strong enough to reach this stage of non-linear development. The apparent
anti-correlation between synchrotron X-ray rims, a tracer of strong magnetic
field amplification, and H$\alpha$ emission in some cases (Winkler & Long,
1997) indicates that this might be the case. Hence, the completely ionised
scenario likely applies to the developed precursors of strong SNR shocks. Here
we apply the new generalised expressions, motivated by recent observational
results which better constrain the cosmic-ray parameters.
#### Cas A:
Recent observations by Ahnen et al. (2017) and in particular the combined
Fermi-VERITAS analysis of Abeysekara et al. (2020) provide robust constraints
on the CR parameters in Cas A. While a full numerical model that captures
self-consistent acceleration and field growth is desirable, here we simply aim
to use the revised dispersion relation to explore the various modes that
mediate the scattering. We focus on the pure hadronic model of Abeysekara et
al. (2020), which indicates a total CR energy $E_{\rm CR}\approx 1.7\times
10^{50}$ erg, with best fit above the pion bump exhibiting a power-law with
exponent $s=4.17$ and exponential cut-off at $pc=17$ TeV. Adopting for the
total available energy of the SNR a value of $E_{\rm SNR}=2-5\times 10^{51}$
erg (Chevalier & Oishi, 2003), the hadronic model implies a total (i.e.
integrated over the SNR lifetime) CR acceleration efficiency $\approx 5\%$.
To determine the dispersion of the linear modes, we require two additional
pieces of information: the cosmic-ray energy density in the precursor, and
limits on the higher-order CR anisotropy. For the latter, we will restrict
ourselves to second order, i.e. pressure anisotropy. For a shock radius of
$2.5$ pc, adopting the total CR energy in Cas A given by Abeysekara et al.
(2020), the average energy density is $\left\langle U_{\rm
cr}\right\rangle\approx 60$ keV cm-3. This is to be compared with the mean
energy density of the shock processed material $\bar{U}_{\rm
sh}\equiv\frac{1}{2}\rho_{\rm i}u_{\rm sh}^{2}\approx 180$ keV cm-3. In the
latter estimate, we have adopted the same gas density upstream of the shock
$\rho_{\rm i}=2.34\times 10^{-24}$ g cm-3 as used in Abeysekara et al. (2020).
In terms of hydrodynamic feedback, this gives $\eta\equiv P_{\rm
cr}/\rho_{0}u_{\rm sh}^{2}\approx 0.2$. Comparable hydrodynamic efficiencies
for other young remnants have been claimed previously in the literature,
although here we will adopt it as an upper limit, as it cannot yet be ruled
out that most of the energy resides deep in the SNR interior.
Figure 2: Real component of the phase speeds (left) and growth rates (right)
in a fully ionised background, as a function of cosmic-ray
acceleration efficiency $\eta=P_{\rm cr}/\rho_{\rm i}u_{\rm sh}^{2}$. Pressure
anisotropy is set to the diffusion approximation $\Delta_{P}=2(u_{\rm
sh}/c)^{2}$, and $\eta_{0}=0.2$ (see text). All other parameters are as in
Fig. 1. In the lower plot, for the conditions given, the relevant time scale
for growth is $r_{\rm g1}/v_{\rm A}\approx 10^{6}(p_{1}/m_{\rm p}c)$ s. We
recall that the maximum growth rate is predicted to be ${\rm Im}(\omega)r_{\rm
g1}/v_{\rm A}=\zeta_{1}/2$, where in these plots $\zeta_{1}\approx
1200\eta/\eta_{0}$.
With this picture in mind, we can repeat the estimates of Bell et al. (2013),
assuming the non-resonant mode dominates, and that confinement demands a
minimum of $N\approx 5$ growth times ($t_{\rm NR}=2r_{\rm g1}/v_{\rm
A}\zeta_{1}$). In a quasi-steady state, we expect the escaping flux to match
the average accelerating flux close to the cut-off (Drury et al., 1999),
$\displaystyle j_{\rm acc}(p_{\rm max})=\int_{p_{\rm max}-\Delta p}^{p_{\rm
max}}d^{3}p~{}f(\bm{p})\bm{v}\cdot\bm{n}\approx\frac{4\pi\Delta u}{3}p_{\rm
max}^{3}f_{0}(p_{\rm max})~{},$ (11)
where $\bm{v}\cdot\bm{n}$ is the projection of the particle velocity vector
onto the shock normal, $\Delta p$ the average momentum change per cycle, and
$\Delta u$ the velocity jump across the shock. Assuming the cosmic-rays
populate a power-law distribution, extending without break to low energies,
$\displaystyle j_{\rm acc}(p_{\rm max})=\frac{P_{\rm cr}}{\rho_{i}u_{\rm
sh}^{2}}n_{i}\Delta u\left(\frac{u_{\rm sh}}{c}\right)^{2}\frac{(p_{\rm
max}/m_{p}c)^{3-s}}{F(p_{0},p_{\rm max})}$ (12)
with
$\displaystyle F(p_{0},p_{\rm max})=\int_{p_{0}/m_{p}c}^{p_{\rm
max}/m_{p}c}\frac{u^{4-s}}{\sqrt{1+u^{2}}}~{}du~{}.$
This integral can be expressed as a combination of hypergeometric functions,
but it suffices to know that it is of order unity for any realistic choice of
parameters. Using this value for the escaping flux of CRs upstream, the
confinement condition $t_{\rm NR}<t_{\rm SNR}/N$ thus becomes
$\displaystyle\left(\frac{p_{\rm max}}{m_{\rm
p}c}\right)^{s-3}=\frac{3}{4}\frac{P_{\rm cr}}{\rho_{i}u_{\rm
sh}^{2}}\left(\frac{u_{\rm sh}}{c}\right)^{3}\frac{\omega_{\rm pi}t_{\rm
SNR}}{NF}~{},$ (13)
where $\omega_{\rm pi}$ is the background ion plasma frequency upstream of the
shock. For $\eta=0.2$, and $s=4.17$, this gives $E_{\rm max}\approx 50$ TeV,
reasonably close to the observed cut-off, suggesting the efficiency of
$\eta\approx 0.2$ is justified.
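The confinement estimate can be reproduced numerically. The sketch below (our own construction) evaluates $F(p_{0},p_{\rm max})$ by quadrature and solves equation (13) by fixed-point iteration; the assumed age of $\sim 340$ yr, the lower momentum cut $p_{0}=10^{-2}\,m_{\rm p}c$, and the helper names are assumptions not fixed by the text.

```python
import math

S, ETA = 4.17, 0.2            # spectral index and CR efficiency (from the text)
U_SH, C = 5.0e8, 2.9979e10    # shock speed and speed of light [cm/s]
N_GROWTH = 5                  # required number of growth times
T_SNR = 340.0 * 3.156e7       # assumed age of Cas A [s]
N_I = 2.34e-24 / 1.6726e-24   # upstream ion number density [cm^-3], ~1.4
OMEGA_PI = 1.32e3 * math.sqrt(N_I)  # proton plasma frequency [rad/s]

def F(p0, pmax, n=4000):
    """F(p0, p_max) of the text, integrated on a logarithmic grid in u = p/m_p c."""
    lo, hi = math.log(p0), math.log(pmax)
    h = (hi - lo) / n
    total, prev = 0.0, None
    for i in range(n + 1):
        u = math.exp(lo + i * h)
        # extra power of u comes from du = u dln(u)
        f = u ** (5.0 - S) / math.sqrt(1.0 + u * u)
        if prev is not None:
            total += 0.5 * (prev + f) * h
        prev = f
    return total

def pmax_over_mpc(p_init=1.0e4, iters=40):
    """Fixed-point solution of equation (13) for p_max / m_p c."""
    p = p_init
    for _ in range(iters):
        rhs = (0.75 * ETA * (U_SH / C) ** 3 * OMEGA_PI * T_SNR
               / (N_GROWTH * F(1e-2, p)))
        p = rhs ** (1.0 / (S - 3.0))
    return p

E_max_TeV = pmax_over_mpc() * 0.938e-3  # proton rest energy 0.938 GeV
```

With these (assumed) inputs the iteration lands at a cut-off of order tens of TeV, in line with the $E_{\rm max}\approx 50$ TeV quoted in the text.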
Regarding the pressure anisotropy, the standard diffusion approximation
predicts $\Delta P_{\rm cr}/P_{\rm cr}=2(u_{\rm sh}/c)^{2}$ (Jokipii &
Williams, 1992). However, as shown by Luo & Melrose (2009), the non-resonant
short wavelength instability of Bell (2004) leads to an _enhancement_ of the
anisotropy, and requires resonant scattering to counteract it. It is thus
desirable to explore the relevant time-scales involved using the updated
constraints in this object.
We first consider the case relevant to confined particles that occupy the
precursor in the immediate upstream of the shock. As shown above, the
acceleration efficiency, as inferred by the joint Fermi/VERITAS observations,
is close to 20%, unless of course the CRs are mostly confined deep inside the
SNR. We thus explore the possible parameter space, keeping fixed the inferred
spectral index of $s=4.17$. In figures 1 and 2 we plot the phase velocity and
growth rates for the full dispersion relation, truncating the expansion at
second order. In figure 1 the CR efficiency is kept fixed at the upper limit
inferred from observations, and we explore the effect of increasing
anisotropy. The first thing to note is that the anisotropy has no effect on
the maximum growth rate of the Bell instability
$\displaystyle\Gamma_{\rm max}=\frac{1}{2}\zeta_{1}\frac{v_{\rm A}}{r_{\rm
g1}}~{}.$ (14)
This results from the non-resonant nature of the instability, for which only
the total current matters. The CR current is kept fixed for all plots in
Fig. 1. Second, and contrary to the standard picture, only at extremely short
wavelengths (or small $\zeta_{1}$) does the usual Alfvén branch emerge. At all
other wavelengths, the modes are highly super-Alfvénic and also dispersive.
There is thus no unique wave-frame, and naively, one might anticipate this to
have an effect on the transport in the precursor, and as a consequence the
acceleration. The dispersion-free ($\omega/k=$ constant) behaviour that is
commonly assumed, strictly speaking, only applies to $s=4$ power-law spectra, or
in the $\zeta_{1}\ll 1$ limit, where the waves are Alfvénic. The imaginary
components are shown on the right, where we similarly scale by $k$ to
highlight the different growth rates for left and right circularly-polarised
modes. The blue lines in Figures 1 and 2 show the same data. We do not explore
the effect of wave dispersion further here, as the extremely short time-scales
associated with the non-resonant waves are likely to dominate the field growth
and subsequent particle transport (e.g. Reville & Bell, 2013; Bell et al.,
2013).
Figure 3: Real ($\omega_{r}$) and imaginary ($\omega_{i}$) components of the
frequency showing phase velocity and damping rates in the absence of cosmic
rays for $\chi=1$ (top) and $\chi=10^{-2}$ (bottom). Both plots assume a fixed
$n_{\rm tot}=n_{\rm i}+n_{\rm n}=10$ cm$^{-3}$, a temperature of $1$ eV, a magnetic
field of $5~{}\mu$G, while $r_{\rm g1}$ corresponds to that of a $100$ GeV
proton in this field. In these units, we find for $\chi=10^{-2}$, the
evanescent band occurs in the range $0.3<kr_{\rm g1}<0.8$, as shown.
In figure 2, we show the dependence of the phase velocity and growth rates on
the cosmic-ray efficiency $\eta$ relative to the reference value
$\eta_{0}=0.2$. This demonstrates the sensitivity of the growth rate and
phase speeds to the assumed cosmic-ray efficiency, as well as the effect of a
radial decrease in $\zeta_{1}$. As expected, provided $\zeta_{1}>1$,
scatterers in the resonant band are not adequately described by Alfvén waves,
as is often assumed to be the case. Thus, in contrast to the standard model,
the range over which the scattering fluctuations are non-Alfvénic may extend
over a significantly larger volume than previously considered. Recently,
several groups have performed kinetic simulations of the resonant ion-
cyclotron instability in the weak-damping limit (e.g. Holcomb & Spitkovsky,
2019; Bai et al., 2019b). We will explore this process, and its impact on the
surrounding environment, in a separate numerical study.
For Cas A parameters, the reference time $r_{\rm g1}/v_{\rm A}$ is on the
order of a few weeks, while the inverse lifetime of Cas A in these units is
$r_{\rm g1}/(v_{\rm A}t_{\rm SNR})\approx 10^{-4}$. This coincidentally is close to the
growth rate for waves resonant with particles close to the inferred cut-off in
Cas A, provided acceleration remains efficient. Long wavelength, non-resonant
firehose unstable fluctuations however appear to be too slow to be of
significance, although an enhancement by almost an order of magnitude could be
achieved if the anisotropy is greatly enhanced relative to its diffusive limit
(see Fig 1). The existence of such strong anisotropy would need to be properly
motivated.
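The reference time quoted above is easily verified. This is a back-of-the-envelope sketch in cgs units, assuming $p_{1}=m_{\rm p}c$ ($\sim 1$ GeV protons) and the Cas A field and density used in Fig. 1:

```python
import math

M_P, C, E_ESU = 1.6726e-24, 2.9979e10, 4.8032e-10
B, RHO_I = 3.0e-6, 2.34e-24            # mean field [G], ion mass density [g cm^-3]

r_g1 = M_P * C ** 2 / (E_ESU * B)      # gyro-radius of a p = m_p c proton [cm]
v_A = B / math.sqrt(4.0 * math.pi * RHO_I)  # Alfven speed [cm/s]
t_ref_weeks = r_g1 / v_A / (7.0 * 86400.0)
```

The result is a few weeks, consistent in order of magnitude with the $r_{\rm g1}/v_{\rm A}\approx 10^{6}\,(p_{1}/m_{\rm p}c)$ s quoted in the caption of Fig. 2.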
## 4 Cosmic-ray propagation in partially/weakly ionised plasmas
The linear theory of parallel propagating plane-waves in incompletely ionised
magnetised plasmas is well known (Kulsrud & Pearce, 1969; Zweibel & Shull,
1982; Tagger et al., 1995; Xu et al., 2016), as is its effect on diffusive
shock acceleration (Drury et al., 1996). Interest around this topic has grown
in recent years, primarily in the context of escape from SNRs (Bykov et al.,
2013; Nava et al., 2019; Inoue, 2019), but also regarding the penetration of
CRs into molecular clouds including their role as an ionising agent, as well
as their transport in the multi-phase ISM (Padovani et al., 2009; Phan et al.,
2018; Brahimi et al., 2020). Complexities associated with the field topology
and its impact on CR observables such as the Boron to Carbon ratio have also been
explored (D’Angelo et al., 2018).
The analysis here follows the approach of Reville et al. (2007). The standard
QLT approach of applying the diffusion approximation of Skilling (1971):
$\partial f/\partial\mu\approx\lambda_{\rm mfp}\partial f_{0}/\partial x$ ($x$
here is distance along the mean field) only holds if gradients are long
relative to the mean free path $\lambda_{\rm mfp}$. This may not be satisfied
for dynamically evolving systems, and hence we keep $\zeta_{1}$ as a free
parameter. Our aim here is partly to bring attention to areas where more
numerical efforts are required to address this problem self-consistently.
#### Incompletely ($\chi\sim 1$) and weakly ionised gases without cosmic rays
For incompletely ionised ($\chi>1$) gases, the linear analysis is well known,
and we refer the reader to Zweibel & Shull (1982) for a clear presentation.
The key limiting results can be read off Fig. 3. At high frequencies,
$\omega\gg\nu_{\rm in}$, inter-species collisions do not much affect the
oscillations. This results in slowly damped Alfvén waves. At low frequencies
$\omega\ll\nu_{\rm ni}$, the ions and neutrals are collisionally coupled,
which results in the two species oscillating as a single fluid with increased
inertia. This has the effect of reducing the Alfvén velocity $\omega/k=v_{\rm
A,n}$, and since the coupling is more effective at lower frequencies, the
damping naturally decreases rapidly ($\propto k^{2}$) as one moves to longer
wavelengths. The upper blue line in the right-hand panel corresponds to
damping rates of non-propagating perturbations in the neutral fluid, should
they exist.
The case of weakly ionised gases ($\chi\ll 1$) reveals a number of additional
complexities, and is of course more relevant to CRs impacting on dense
molecular material. In the absence of CRs, and subject to the condition
$\chi<1/8$, waves in the so-called evanescent range $2\nu_{\rm ni}/v_{\rm
A,n}<k<\nu_{\rm in}/2v_{\rm A}$ do not propagate (Zweibel & Shull, 1982). For
sharp resonance ($k=1/r_{\rm g}$) this equates to cosmic-ray energies in the
approximate range
$\displaystyle 0.4\frac{B_{5\mu{\rm G}}^{2}}{n_{\rm i}^{1/2}n_{\rm n}T_{\rm
i,eV}^{0.4}}<E_{\rm TeV}<15\frac{B_{5\mu{\rm G}}^{2}}{\chi_{0.01}^{1/2}n_{\rm
i}^{1/2}n_{\rm n}T_{\rm i,eV}^{0.4}}~{},$ (15)
with reference values given in subscripts. For many relevant parameters, this
falls in an energy range pertinent for GeV-TeV $\gamma$-ray observations.
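The energy window of equation (15) can be sketched as a function, with the arguments scaled to the reference values given in the subscripts (the function name and the scaling convention are our reading of the notation):

```python
import math

def evanescent_band_TeV(B_5muG=1.0, n_i=1.0, n_n=1.0, T_i_eV=1.0, chi_001=1.0):
    """CR energies [TeV] resonant with the evanescent wave range, equation (15).

    Arguments are in units of the reference values: B in 5 muG, densities in
    cm^-3, ion temperature in eV, chi in units of 0.01.
    """
    common = B_5muG ** 2 / (math.sqrt(n_i) * n_n * T_i_eV ** 0.4)
    return 0.4 * common, 15.0 * common / math.sqrt(chi_001)
```

At the reference values this returns the (0.4, 15) TeV window of equation (15); denser or less ionised gas shifts and widens the band accordingly.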
Tagger et al. (1995) have pointed out that the wave propagation can be viewed
in two ways (see also Soler et al., 2013). In the standard approach
of taking coherent plane waves with real $k$, no solutions for the real part
of $\omega$ are found in this wave band. However, taking the opposite approach
of considering a wave-packet at fixed real $\omega$ and examining how a wave
grows/decays as it propagates, Tagger et al. (1995) show that waves in this
band do in fact propagate, damping as they do so. This latter approach is also
well-suited to shock precursor studies, where wave amplitudes are expected to
develop inhomogeneously as they approach the shock. A comparison of the two
approaches is shown in Fig. 3, where the lower panels are for an ionisation
factor $\chi=10^{-2}$. As expected, the two methods agree at high and low
frequencies, but perturbations excited with Fourier components in the
evanescent band can clearly be seen to propagate only in the latter approach.
Implications are discussed in section 5.
Figure 4: Effect of cosmic-rays on evanescent wave-band (shaded-region) for
the cases of WNM (left) and CNM (right) as a function of cosmic ray drift
speed. Here $M_{\rm A}=v_{\rm cr}/v_{\rm A}$, solid (dashed) lines are for
$U_{\rm cr}/U_{B}=10~(100)$. The cosmic rays are taken to have a spectral index
of $s=4.3$ and $E_{\rm min}=1$ GeV. For the WNM we adopt the parameters
$n_{i}+n_{n}=0.5$ cm$^{-3}$, $T=5,000$ K and $\chi^{-1}=50$, while for the CNM we
take $n_{i}+n_{n}=50$ cm$^{-3}$, $T=50$ K and $\chi=10^{-3}$. In both cases the
magnetic field is set to $5~{}\mu$G. Note that for the two dashed lines
$\zeta_{1}>1$.
#### Weakly ionised gases with cosmic rays
Here, we focus exclusively on gases in which $\chi<1/8$, i.e. the threshold
condition for the evanescent band to exist (Zweibel & Shull, 1982). The
inclusion of CRs into the calculations should be considered with some care. It
is convenient to continue working with the cosmic-ray and magnetic energy
densities, as both can in principle be inferred from observations. However,
one should check, in extremely weakly ionised cases, that the MHD
approximation holds, i.e. $n_{\rm cr}\ll n_{\rm i}$.
The short-wavelength $\zeta_{1}>1$ case was previously considered by Reville
et al. (2008). The present results extend that analysis to arbitrary
wavelengths, and to upper cut-offs in the distribution. As discussed in the
appendix, the contribution from the end points of the distribution must be
correctly accounted for to ensure consistent results.
The damping of scatterers due to ion-neutral collisions in and near CR
accelerators has been explored previously in the context of spectral breaks
(Malkov et al., 2011), cosmic ray escape from SNR (e.g. Xu et al., 2016;
Brahimi et al., 2020), and either the penetration through (Cesarsky & Volk,
1978; Morlino & Gabici, 2015; Inoue, 2019) or exclusion from (e.g. Skilling &
Strong, 1976) molecular clouds. Such studies are particularly interesting when
they occur in the neighbourhood of a CR source. As discussed above, the
existence of an evanescent band is an artefact of the chosen approach, and may
not exist in a turbulent plasma, provided the MHD cascade continuously
replenishes the wave-band in question. Here we show that even a minute
presence of CR anisotropy also causes waves in the evanescent band to
propagate.
In Figure 4 we consider gas conditions representative of the warm and cold
neutral medium phases (WNM and CNM respectively) of the ISM, and the effect of
varying CR anisotropy. The details can be read from the caption. Note that we
adopted a modest value of $U_{\rm cr}/U_{B}=10$ in these plots, and explored
the effect of increased anisotropy. The dashed curves correspond to $U_{\rm
cr}/U_{B}=100$ to show an example of the $\zeta_{1}>1$ limit. This again shows how
waves deviate from the standard Alfvén branch in the strongly driven regime. While
not shown here, it is easily shown that provided $\zeta_{1}\ll 1$, the
imaginary part of the frequency matches the expectation from a linear addition
of the damping and growth rates (with an appropriate choice for the Alfvén
velocity at long wavelengths for the latter). This also turns out to be a
reasonable approximation to the growth/decay in the $\zeta_{1}>1$ limit, i.e.
$\displaystyle{\rm Im}(\omega)\approx{\rm Im}(\omega)_{\rm no\,CRs}+G(k){\rm
Im}(\omega)_{\rm no\,neutrals}~{},$ (16)
where
$\displaystyle G(k)=\left\{\begin{array}[]{cc}1&k>\nu_{\rm in}/2v_{\rm A}\\
\sqrt{\chi}&k<2\sqrt{\chi}\nu_{\rm in}/v_{\rm A}\end{array}\right.$ (19)
and with a linear fit in between. This renormalising factor is necessary to
account for the increased inertia of the plasma. The agreement is typically
good to within a few percent although slightly larger variations may be found
close to the evanescent band that occurs in the absence of CRs.
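The combined estimate of equations (16) and (19) can be sketched as follows. The bridge between the two limits follows the "linear fit in between" mentioned above, though its exact form (linear in $k$) is our assumption:

```python
import math

def G(k, nu_in, v_A, chi):
    """Renormalising factor of equation (19), with a linear bridge in k."""
    k_hi = nu_in / (2.0 * v_A)                  # short-wavelength boundary
    k_lo = 2.0 * math.sqrt(chi) * nu_in / v_A   # long-wavelength boundary
    if k >= k_hi:
        return 1.0
    if k <= k_lo:
        return math.sqrt(chi)
    w = (k - k_lo) / (k_hi - k_lo)
    return math.sqrt(chi) + w * (1.0 - math.sqrt(chi))

def im_omega_total(im_no_crs, im_no_neutrals, k, nu_in, v_A, chi):
    """Approximate net growth/damping rate, equation (16)."""
    return im_no_crs + G(k, nu_in, v_A, chi) * im_no_neutrals
```

Note that $k_{\rm lo}<k_{\rm hi}$ requires $\chi<1/16$, consistent with the $\chi<1/8$ regime considered here.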
It is well known that for a fully ionised plasma, the maximum growth rate for
field-aligned CR driven modes is given by the expression in equation (14),
regardless of the value of $\zeta_{1}$ (e.g. Bell, 2013). However, there is a
fundamental difference between the physical processes underlying the
instability in the two limits of $\zeta_{1}$ greater than or less than unity.
The presence of neutrals can expose this difference.
It is easily verified that the magnitude of the ion-neutral damping rate is
always less than the oscillation frequency $kv_{\rm A}G$. If $\zeta_{1}>1$,
the maximum growth rate is also $k_{\rm max}v_{\rm A}G$, where $k_{\rm
max}=\zeta_{1}/2r_{\rm g1}$ is the wavenumber of the fastest growing mode.
This is merely confirmation of the proof in Reville et al. (2007), that the
non-resonant instability cannot be stabilised by neutral damping. The
opposite is true for resonant modes. If $\zeta_{1}<1$, and assuming the
distribution is peaked at an energy corresponding to gyroradius $r_{\rm g1}$,
by definition $k_{\rm max}=1/r_{\rm g1}$ and $\gamma_{\rm max}<Gkv_{\rm A}$
and hence can be stabilised.
We thus seek roots to the equation
$\displaystyle{\rm Im}(\omega)_{\rm no\,CRs}+G(k){\rm Im}(\omega)_{\rm
no\,neutrals}=0~{}.$ (20)
Two possibilities exist. If $r_{\rm g1}<\zeta_{1}v_{\rm A}/\nu_{\rm in}$ (c.f.
the lower value in the inequality (15)) growth is still possible, however for
small $\zeta_{1}$ this may correspond to very low energies. On the other hand,
if $r_{\rm g1}>\zeta_{1}v_{\rm A}/\nu_{\rm in}$ net damping is expected at
short wavelength, but since the damping rate at long wavelengths scales as
$k^{2}$, and resonantly driven modes scale as $k^{s-3}$ (neglecting pressure
anisotropy effects), higher energy particles, if present, can still produce
net growth.
Thus it is possible to define a critical energy below which damping must
dominate:
$\displaystyle r_{\rm g}^{5-s}=\frac{r_{\rm
g1}^{4-s}}{\sqrt{\chi}\zeta_{1}}\frac{v_{\rm A}}{\nu_{in}}~{}.$ (21)
Taking for simplicity $s=4$, we see this equates to an energy $\zeta_{1}^{-1}$
times the upper energy limit in the inequality (15), with all energies below
this being unable to self-generate scattering waves. While it is tempting to
speculate on the impact of this effect for escape from SNR, or indeed
penetration into a molecular cloud, we will defer such discussion until
reliable numerical simulations can explore the full multi-scale aspects.
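Equation (21) can be sketched as a function (names are illustrative; note that for $s=4$ the result is independent of $r_{\rm g1}$ and reduces to $\zeta_{1}^{-1}$ times the upper limit of the band in (15), as stated above):

```python
import math

def r_g_critical(s, r_g1, zeta1, chi, v_A, nu_in):
    """Critical gyro-radius of equation (21); particles with r_g below this
    cannot produce net growth of their own scattering waves."""
    rhs = r_g1 ** (4.0 - s) / (math.sqrt(chi) * zeta1) * (v_A / nu_in)
    return rhs ** (1.0 / (5.0 - s))
```

For $s\neq 4$ the residual $r_{\rm g1}$ dependence encodes where the resonantly driven growth ($\propto k^{s-3}$) overtakes the $k^{2}$ damping.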
## 5 Discussion
The stability of parallel propagating transverse MHD fluctuations in the
presence of an external current of cosmic rays has been explored. The solution
given in equation (2), and in particular the method for determining the
cosmic-ray response provided in the appendix, generalises many of the existing
results in the literature. The approach can readily be applied to other plasma
conditions, where parallel propagating modes are of interest. A limitation of
the analytic results presented here is the use of a single power-law
dependence at all orders of anisotropy. For example, in the context of shock
precursors, one might anticipate the anisotropy at large momenta to have a
different spatial profile to that at low momenta, which might for example
enhance the growth rate of the fire-hose instability. A follow-up study of
cosmic-ray anisotropy in multi-scale self-generated fields will exploit the
generic form presented here.
The standard quasi-linear theory approach to studying cosmic-ray transport and
self-confinement applies in the limit of $\zeta_{1}<1$, or $U_{\rm
cr}/U_{B}\ll c/v_{\rm cr}$. In the Galactic disk, we expect $U_{\rm cr}\approx
U_{B}$ and $v_{\rm cr}\approx v_{\rm A}\ll c$, meaning this condition is
generally satisfied. However, near sources, where both $U_{\rm cr}$ and
$v_{\rm cr}$ are greatly enhanced, this is generally not the case. Similar
conclusions may apply for the low density environment at the periphery of the
Galactic halo (e.g. Blasi & Amato, 2019). We plan to bridge the two regimes in
an upcoming numerical study.
The transport of cosmic-rays in the ISM, and the related topic of their escape
from sources, has recently received significant interest, with particular
emphasis on the different phases of the ISM. The role of ion-neutral damping
features heavily in these studies. That different conceptual approaches to the
problem produce opposite conclusions regarding propagation of Alfvénic
fluctuations at intermediate frequencies in weakly ionised plasmas ($\nu_{\rm
ni}<\omega<\nu_{\rm in}$) is already known (Tagger et al., 1995; Soler et al.,
2013). However, studies to date have focussed on only one such
approach, which, motivated by the new results, we now argue is in fact less
likely to apply.
Turbulent MHD cascades can be conceptualised as interactions between colliding
wave-packets (Kraichnan, 1965). To mediate efficient exchange of energy
between Fourier modes, the semi-classical conservation laws for three-wave
couplings: $\bm{k_{1}}+\bm{k_{2}}=\bm{k_{3}}$ and
$\omega(\bm{k_{1}})+\omega(\bm{k_{2}})=\omega(\bm{k_{3}})$ should be
satisfied. Here $\omega(\bm{k})$ is the 3D generalisation of the dispersion
relation in this paper (Bell (2005) provides the result without neutrals
for a fixed CR current, while Soler et al. (2013) provide the result for
neutrals without CRs). It is well-known that for incompressible MHD, coupling
requires either vanishing $\bm{k}_{1}\cdot\bm{v}_{\rm A}$ or
$\bm{k}_{2}\cdot\bm{v}_{\rm A}$, i.e. one of the Fourier modes must be non-
propagating and in a plane perpendicular to the guide field. This generally
results in an anisotropic turbulence spectrum (see Zhou et al., 2004, for a
review). If the waves are however dispersive, which will always be the case in
the neighbourhood of a hypothetical evanescent band, it is straightforward to
show that this is no longer a requirement, provided $\omega\neq 0$. Thus,
based on the dispersion curves shown in section 4, there is no physical
justification to exclude waves with $\bm{k_{3}}$ falling within the evanescent
band, and allowing them to propagate with phase velocity between the two
Alfvén velocity limits, as shown in Fig 3. The existence of a region of non-
propagating fluctuations would appear to be a highly idealised concept. The
dispersive nature of the waves however may still cause modifications to the
resulting turbulence spectrum and may yet leave an imprint on energetic
particle transport. To our knowledge, there has been no thorough investigation
of this effect, which may reveal itself in the $\gamma$-ray emission from CRs
in molecular clouds.
To this end, recent studies of the diffuse $\gamma$-ray emission from giant
molecular clouds with Fermi-LAT (e.g. Aharonian et al., 2020) hint at a
dependence of the Galactic cosmic-ray density with Galacto-centric radius,
although this could also reflect a correlation with nearby sources. The lack
of any clear trend in the spectra, however, hints at the absence of unique
energy-dependent behaviour. Surveys that look at higher energies with instruments such
as HAWC (Lauer, 2015), LHAASO (Bai et al., 2019a) and the anticipated southern
hemisphere counterpart SWGO (Albert et al., 2019), may provide a different
picture. The analysis presented in this work is important for the transport at
such energies, and future observations in the VHE to UHE $\gamma$-ray regimes
may reveal new insight into the role of cosmic-ray feedback on molecular
clouds. Unfortunately, we currently lack predictive capability.
Finally, we note that attention was restricted in the present study to
relativistic particles. The results could however be extended to non-
relativistic particles without difficulty. Since low energy cosmic rays may
play an important role in the ionisation of dense molecular clouds, the
ability of self-induced scattering to inhibit penetration into the cores of
such clouds warrants further investigation.
## Acknowledgements
The authors wish to thank J. Kirk and H. Voelk for comments and discussion. BR
acknowledges conversations with attendees of the Multiscale Phenomena in
Plasma Astrophysics workshop at the Kavli Institute for Theoretical Physics.
This research was supported in part by the National Science Foundation under
Grant No. NSF PHY-1748958.
## DATA AVAILABILITY
No new data were generated or analysed in support of this research.
## References
* Abeysekara et al. (2020) Abeysekara A. U., et al., 2020, ApJ, 894, 51
* Achterberg (1983) Achterberg A., 1983, A&A, 119, 274
* Aharonian et al. (2020) Aharonian F., Peron G., Yang R., Casanova S., Zanin R., 2020, Phys. Rev. D, 101, 083018
* Ahnen et al. (2017) Ahnen M. L., et al., 2017, MNRAS, 472, 2956
* Albert et al. (2019) Albert A., et al., 2019, arXiv e-prints, p. arXiv:1902.08429
* Amato & Blasi (2009) Amato E., Blasi P., 2009, MNRAS, 392, 1591
* Bai et al. (2015) Bai X.-N., Caprioli D., Sironi L., Spitkovsky A., 2015, ApJ, 809, 55
* Bai et al. (2019a) Bai X., et al., 2019a, arXiv e-prints, p. arXiv:1905.02773
* Bai et al. (2019b) Bai X.-N., Ostriker E. C., Plotnikov I., Stone J. M., 2019b, ApJ, 876, 60
* Bell (2004) Bell A. R., 2004, MNRAS, 353, 550
* Bell (2005) Bell A. R., 2005, MNRAS, 358, 181
* Bell (2013) Bell A. R., 2013, Astroparticle Physics, 43, 56
* Bell et al. (2013) Bell A. R., Schure K. M., Reville B., Giacinti G., 2013, MNRAS, 431, 415
* Bell et al. (2020) Bell A. R., Kingham R. J., Watkins H. C., Matthews J. H., 2020, Plasma Physics and Controlled Fusion, 62, 095026
* Berezhko et al. (2002) Berezhko E. G., Ksenofontov L. T., Völk H. J., 2002, A&A, 395, 943
* Blasi & Amato (2019) Blasi P., Amato E., 2019, Phys. Rev. Lett., 122, 051101
* Blasi et al. (2015) Blasi P., Amato E., D’Angelo M., 2015, Phys. Rev. Lett., 115, 121101
* Brahimi et al. (2020) Brahimi L., Marcowith A., Ptuskin V. S., 2020, A&A, 633, A72
* Bykov et al. (2011) Bykov A. M., Osipov S. M., Ellison D. C., 2011, MNRAS, 410, 39
* Bykov et al. (2012) Bykov A. M., Ellison D. C., Renaud M., 2012, Space Sci. Rev., 166, 71
* Bykov et al. (2013) Bykov A. M., Malkov M. A., Raymond J. C., Krassilchtchikov A. M., Vladimirov A. E., 2013, Space Sci. Rev., 178, 599
* Cesarsky & Volk (1978) Cesarsky C. J., Volk H. J., 1978, A&A, 70, 367
* Chan et al. (2019) Chan T. K., Kereš D., Hopkins P. F., Quataert E., Su K. Y., Hayward C. C., Faucher-Giguère C. A., 2019, MNRAS, 488, 3716
* Chevalier & Oishi (2003) Chevalier R. A., Oishi J., 2003, ApJ, 593, L23
* D’Angelo et al. (2018) D’Angelo M., Morlino G., Amato E., Blasi P., 2018, MNRAS, 474, 1944
* Drury et al. (1996) Drury L. O., Duffy P., Kirk J. G., 1996, A&A, 309, 1002
* Drury et al. (1999) Drury L. O., Duffy P., Eichler D., Mastichiadis A., 1999, A&A, 347, 370
* Ellison & Eichler (1984) Ellison D. C., Eichler D., 1984, ApJ, 286, 691
* Gargaté et al. (2010) Gargaté L., Fonseca R. A., Niemiec J., Pohl M., Bingham R., Silva L. O., 2010, ApJ, 711, L127
* Giacalone (2017) Giacalone J., 2017, ApJ, 848, 123
* Giacinti et al. (2012) Giacinti G., Kachelrieß M., Semikoz D. V., 2012, Phys. Rev. Lett., 108, 261101
* Hinton & Hofmann (2009) Hinton J. A., Hofmann W., 2009, ARA&A, 47, 523
* Holcomb & Spitkovsky (2019) Holcomb C., Spitkovsky A., 2019, ApJ, 882, 3
* Ichimaru (1973) Ichimaru S., 1973, Basic principles of plasma physics, a statistical approach.. Benjamin Frontiers in Physics
* Inoue (2019) Inoue T., 2019, ApJ, 872, 46
* Isenberg & Jokipii (1979) Isenberg P. A., Jokipii J. R., 1979, ApJ, 234, 746
* Jiang & Oh (2018) Jiang Y.-F., Oh S. P., 2018, ApJ, 854, 5
* Jokipii & Williams (1992) Jokipii J. R., Williams L. L., 1992, ApJ, 394, 184
* Kraichnan (1965) Kraichnan R. H., 1965, Physics of Fluids, 8, 1385
* Krall & Trivelpiece (1973) Krall N. A., Trivelpiece A. W., 1973, Principles of plasma physics. McGraw-Hill, New York
* Kulsrud & Cesarsky (1971) Kulsrud R. M., Cesarsky C. J., 1971, Astrophys. Lett., 8, 189
* Kulsrud & Pearce (1969) Kulsrud R., Pearce W. P., 1969, ApJ, 156, 445
* Lauer (2015) Lauer R. J., 2015, arXiv e-prints, p. arXiv:1509.07561
* Luo & Melrose (2009) Luo Q., Melrose D., 2009, MNRAS, 397, 1402
* Malkov et al. (2011) Malkov M. A., Diamond P. H., Sagdeev R. Z., 2011, Nature Communications, 2, 194
* Malkov et al. (2013) Malkov M. A., Diamond P. H., Sagdeev R. Z., Aharonian F. A., Moskalenko I. V., 2013, ApJ, 768, 73
* Marret et al. (2021) Marret A., Ciardi A., Smets R., Fuchs J., 2021, MNRAS, 500, 2302
* Melrose & Wentzel (1970) Melrose D. B., Wentzel D. G., 1970, ApJ, 161, 457
* Morlino & Gabici (2015) Morlino G., Gabici S., 2015, MNRAS, 451, L100
* Nava et al. (2016) Nava L., Gabici S., Marcowith A., Morlino G., Ptuskin V. S., 2016, MNRAS, 461, 3552
* Nava et al. (2019) Nava L., Recchia S., Gabici S., Marcowith A., Brahimi L., Ptuskin V., 2019, MNRAS, 484, 2684
* Osterbrock (1961) Osterbrock D. E., 1961, ApJ, 134, 270
* Padovani et al. (2009) Padovani M., Galli D., Glassgold A. E., 2009, A&A, 501, 619
* Phan et al. (2018) Phan V. H. M., Morlino G., Gabici S., 2018, MNRAS, 480, 5167
* Ptuskin et al. (2008) Ptuskin V. S., Zirakashvili V. N., Plesser A. A., 2008, Advances in Space Research, 42, 486
* Reville & Bell (2013) Reville B., Bell A. R., 2013, MNRAS, 430, 2873
* Reville et al. (2006) Reville B., Kirk J. G., Duffy P., 2006, Plasma Physics and Controlled Fusion, 48, 1741
* Reville et al. (2007) Reville B., Kirk J. G., Duffy P., O’Sullivan S., 2007, A&A, 475, 435
* Reville et al. (2008) Reville B., O’Sullivan S., Duffy P., Kirk J. G., 2008, MNRAS, 386, 509
* Reville et al. (2009) Reville B., Kirk J. G., Duffy P., 2009, ApJ, 694, 951
* Rogachevskii et al. (2012) Rogachevskii I., Kleeorin N., Brandenburg A., Eichler D., 2012, ApJ, 753, 6
* Scott et al. (2017) Scott R., Reville B., Spitkovsky A., 2017, in 6th International Symposium on High Energy Gamma-Ray Astronomy. p. 040041 (arXiv:1610.08302), doi:10.1063/1.4968945
* Skilling (1971) Skilling J., 1971, ApJ, 170, 265
* Skilling & Strong (1976) Skilling J., Strong A. W., 1976, A&A, 53, 253
* Soler et al. (2013) Soler R., Carbonell M., Ballester J. L., Terradas J., 2013, ApJ, 767, 171
* Squire et al. (2020) Squire J., Hopkins P. F., Quataert E., Kempski P., 2020, arXiv e-prints, p. arXiv:2011.02497
* Tagger et al. (1995) Tagger M., Falgarone E., Shukurov A., 1995, A&A, 299, 940
* Thomas & Pfrommer (2019) Thomas T., Pfrommer C., 2019, MNRAS, 485, 2977
* Winkler & Long (1997) Winkler P. F., Long K. S., 1997, ApJ, 491, 829
* Xu et al. (2016) Xu S., Yan H., Lazarian A., 2016, ApJ, 826, 166
* Zhou et al. (2004) Zhou Y., Matthaeus W. H., Dmitruk P., 2004, Reviews of Modern Physics, 76, 1015
* Zirakashvili & Ptuskin (2008) Zirakashvili V. N., Ptuskin V. S., 2008, ApJ, 678, 939
* Zirakashvili et al. (2008) Zirakashvili V. N., Ptuskin V. S., Völk H. J., 2008, ApJ, 678, 255
* Zweibel (2020) Zweibel E. G., 2020, ApJ, 890, 67
* Zweibel & Everett (2010) Zweibel E. G., Everett J. E., 2010, ApJ, 709, 1412
* Zweibel & Shull (1982) Zweibel E. G., Shull J. M., 1982, ApJ, 259, 859
## Appendix A Dispersion relation: kinetic vs fluid approach
For completeness, we compare different approaches to deriving the linear
dispersion relation for circularly polarised modes propagating parallel to a
background field ($\bm{B}_{0}=B_{0}\bm{\hat{x}}$). Following standard
textbooks, we explore first-order small perturbations as a spectrum of
periodic transverse waves (e.g. Ichimaru, 1973; Krall & Trivelpiece, 1973).
For a
uniform background, the zeroth order total current and charge density must
vanish. Taking all first order quantities to be of the form
$\xi(x,t)=\xi_{k}{\rm e}^{i(kx-\omega t)}$, with $\bm{k\cdot\delta E}_{k}=0$,
Maxwell's equations can be combined to give
$\displaystyle(\omega^{2}-c^{2}k^{2})\bm{\delta E}_{k}+4\pi
i\omega\sum_{s}\bm{j}_{\bot,s}=0~{},$ (22)
where
$\displaystyle\bm{j}_{\bot,s}=q_{s}\int\delta f_{k,s}\bm{v}_{\bot}d^{3}p~{},$
(23)
is the perturbed current associated with species $s$, with $q_{s}$ its charge.
The summation is taken over all species in the plasma.
To derive a dispersion relation, it is necessary to describe the response of
the current to the fluctuating electric field. This is determined from the
Vlasov equation, using the zeroth-order helical trajectories along the mean
field (see for example Krall & Trivelpiece, 1973, chapter 8), and gives
$\displaystyle\bm{j}_{\bot,s}=i\frac{q_{s}^{2}}{2}\sum_{n=\pm 1}\int
d^{3}p\frac{v_{\bot}p_{\bot}}{\omega-n\omega_{{\rm g},s}-kv_{\|}+{\rm
i}0^{+}}\left[\frac{kv_{\|}}{\omega}\left(\frac{\partial f_{s}}{\partial
p_{\bot}^{2}}-\frac{\partial f_{s}}{\partial p_{\|}^{2}}\right)-\frac{\partial
f_{s}}{\partial p_{\bot}^{2}}\right]\left(\begin{array}[]{cc}1&in\\\
-in&1\end{array}\right)\bm{\delta E}_{k}~{}.$ (26)
Here $f_{s}(p_{\|},p_{\bot}^{2})$ is the gyrotropic zeroth-order equilibrium
solution of species $s$. The ${\rm i}0^{+}$ term ensures the integration is
taken along the Landau contour, and $\omega_{{\rm g},s}=q_{s}B_{0}/\gamma
m_{s}c$ is the relativistic gyro frequency. Introducing the complex quantities
$j_{\bot}=j_{y}+{\rm i}\epsilon j_{z}$ (and similarly for $\delta E$), where
$\epsilon=\pm 1$ determines the polarisation, the current takes the rather
simple form
$\displaystyle j_{\bot,s}={\rm i}{q_{s}^{2}}\int
d^{3}p\frac{v_{\bot}p_{\bot}}{\omega-\epsilon\omega_{{\rm g},s}-kv_{\|}+{\rm
i}0^{+}}\left[\frac{kv_{\|}}{\omega}\left(\frac{\partial f_{s}}{\partial
p_{\bot}^{2}}-\frac{\partial f_{s}}{\partial p_{\|}^{2}}\right)-\frac{\partial
f_{s}}{\partial p_{\bot}^{2}}\right]{\delta E}_{k}~{}.$ (27)
This expression is found in most textbooks; however, it proves useful to write
it out explicitly here, as we will use it in the next section.
Applying the same notation to Eq (22) and substituting in the expression for
the current, one recovers the familiar expression
$\displaystyle\frac{c^{2}k^{2}}{\omega^{2}}-1=\sum_{s}\mathcal{X}_{s}$ (28)
where the plasma susceptibility
$\displaystyle\mathcal{X}_{s}=\frac{4\pi q_{s}^{2}}{\omega}\int
d^{3}p\frac{v_{\bot}p_{\bot}}{\omega-\epsilon\omega_{{\rm g},s}-kv_{\|}+{\rm
i}0^{+}}\left[\frac{\partial f_{s}}{\partial
p_{\bot}^{2}}+\frac{kv_{\|}}{\omega}\left(\frac{\partial f_{s}}{\partial
p_{\|}^{2}}-\frac{\partial f_{s}}{\partial p_{\bot}^{2}}\right)\right]$ (29)
has been introduced.
This is the starting point for kinetic investigations of circularly polarised
linear modes that propagate parallel to a guide field. As is standard, an
equilibrium condition must be considered that satisfies the general
constraints of zero net charge and current. When a current is provided by
streaming cosmic rays, a compensating return current must be drawn by the
background plasma. Since the cosmic rays are generally small by way of number,
one approach is to simply assume the thermal electrons establish a slow drift
relative to the background ions that provides the required return current
(e.g. Achterberg, 1983; Reville et al., 2006; Luo & Melrose, 2009). An
alternative scenario involves only a sub-population of electrons are drawn to
provide the return current (Amato & Blasi, 2009; Zweibel & Everett, 2010). We
note however, that such a configuration is also two-stream unstable, and for
realistic scenarios, is likely to relax rapidly to the former. In the work of
Amato & Blasi (2009) the two approaches are shown to ultimately give
equivalent results, at least in the low frequency regime of interest. In this
regime, it is of course expected that the results approach the MHD limit,
which is insensitive to the microphysics of the thermal gas.
In the fluid approach, one need only consider the bulk properties of all
quantities, although a fully general description still requires a kinetic
treatment of the CRs. The fluid approach has the considerable advantage of
allowing easy inclusion of collisions, something that would be undesirably
cumbersome in a kinetic approach.
Starting from the fluid equations for the thermal species, which for
simplicity we limit to be electrons, ions and neutrals, for each species we
have
$\displaystyle n_{s}m_{s}\frac{d\bm{u}_{s}}{dt}=-\nabla
P_{s}+n_{s}q_{s}\left(\bm{E}+\frac{1}{c}\bm{u}_{s}\times\bm{B}\right)+n_{s}m_{s}\sum_{\alpha\neq
s}\nu_{s\alpha}(\bm{u}_{s}-\bm{u}_{\alpha})$ (30)
where $\nu_{s\alpha}$ is the interspecies momentum exchange rate, and we have
assumed isotropic pressure for each species. In the following, we consider
only collisions between the ions and neutrals.
We proceed by summing over the charged components in the usual fashion, to
give
$\displaystyle\rho\frac{d\bm{u}}{dt}=-\nabla P_{\rm
th}+\rho_{i}\nu_{in}(\bm{u}_{i}-\bm{u}_{n})+\left(\sum_{s=i,e}n_{s}q_{s}\right)\bm{E}+\frac{1}{c}\left(\sum_{s=i,e}n_{s}q_{s}\bm{u}_{s}\right)\times\bm{B}$
(31)
where $\rho=n_{i}m_{i}+n_{e}m_{e}\approx\rho_{i}$ and
$\bm{u}=(n_{i}m_{i}\bm{u}_{i}+n_{e}m_{e}\bm{u}_{e})/\rho\approx\bm{u}_{i}$.
In the absence of CRs, the charge and current densities appearing in the last
two terms reduce to $0$ and $\bm{j}_{tot}$ respectively. However, as in the
kinetic description above, we must not neglect the CR contribution. Thus, from
quasineutrality and Ampere's law, we have
$\displaystyle\sum_{s=i,e}n_{s}q_{s}=-n_{\rm cr}q_{\rm
cr}\mbox{~{}~{}~{}and~{}~{}~{}}\frac{c}{4\pi}\bm{\nabla}\times\bm{B}=\bm{j}_{\rm
cr}+\sum_{s=i,e}n_{s}q_{s}\bm{u}_{s}$ (32)
Substituting into the above, we find
$\displaystyle\rho_{i}\frac{d\bm{u_{i}}}{dt}=-\nabla P_{\rm
th}+\rho_{i}\nu_{in}(\bm{u}_{i}-\bm{u}_{n})-n_{\rm cr}q_{\rm
cr}\bm{E}+\frac{1}{c}\left(\frac{c}{4\pi}\bm{\nabla}\times\bm{B}-\bm{j}_{\rm
cr}\right)\times\bm{B}$ (33)
which, in the ideal MHD limit $\bm{E}=-\bm{u}\times\bm{B}/c$, reduces to
Equation (2).
The kinetic elements of this problem are introduced via $\bm{j}_{\rm cr}$,
which we describe in the next section.
## Appendix B The cosmic-ray magnetic response
Following Bell (2004), we wish to express the response of the cosmic-ray
current to fluctuations in the magnetic field. As discussed and summarised
above, the equivalent procedure for electric fields can be found in most
plasma textbooks. However, we can easily connect the two methods. Using the
complex notation from the previous section, it follows from Faraday's law,
$\displaystyle\delta E=-{\rm i}\epsilon\frac{\omega}{ck}B_{\bot}~{},$ (34)
which on substitution into (27) gives
$j_{\bot}=\epsilon q^{2}\int
d^{3}p\frac{p_{\bot}v_{\bot}}{\omega-\epsilon\omega_{g}-kv_{\|}+{\rm
i}0^{+}}\left[\frac{v_{\|}}{c}\left(\frac{\partial f}{\partial
p_{\bot}^{2}}-\frac{\partial f}{\partial
p_{\|}^{2}}\right)-\frac{\omega}{ck}\frac{\partial f}{\partial
p_{\bot}^{2}}\right]B_{\bot}~{}.$
Note that we have dropped the species subscript, as we are concerned only
with CRs.
We seek a general formalism to determine the response for a given Legendre
polynomial expansion of the distribution function. In this case it is
convenient to first transform to spherical momentum coordinates
$j_{\bot}=\epsilon\frac{\pi q^{2}}{k}\int dp~{}p\int
d\mu\frac{1-\mu^{2}}{\mu-\mu_{0}-{\rm i}0^{+}}\left[\frac{v}{c}\frac{\partial
f}{\partial\mu}+\frac{\omega}{ck}\left(p\frac{\partial f}{\partial
p}-\mu\frac{\partial f}{\partial\mu}\right)\right]B_{\bot}~{},$ (35)
where we have introduced
$\mu_{0}=\frac{\omega}{kv}-\epsilon\frac{\omega_{g}}{kv}$, with $k>0$.
Expansion of the distribution in a Legendre polynomial series $f(p,\mu)=\sum
f_{\ell}(p)P_{\ell}(\mu)$, leads to
$j_{\bot}=\frac{4\pi q^{2}}{3k}\left[\sum_{n=0}^{\infty}\int
dp~{}pA_{n}I_{n}\right]B_{\bot}~{},$
where from Rodrigues’ formula one can show
$\displaystyle
A_{n}=\frac{v}{c}(n+1)\sum_{\ell=n+1}^{\infty}a^{\ell}_{n+1}{f_{\ell}}+\frac{\omega}{ck}\sum_{\ell=n}^{\infty}a^{\ell}_{n}\left(p\frac{\partial}{\partial
p}-n\right)f_{\ell}~{},$
with generalised binomial coefficients
$a^{\ell}_{n}=2^{\ell}\binom{\ell}{n}\binom{\frac{\ell+n-1}{2}}{\ell}~{}.$
The integration over pitch angle is contained in the term
$I_{n}=\frac{3\epsilon}{4}\int_{-1}^{1}d\mu\frac{(1-\mu^{2})\mu^{n}}{\mu-\mu_{0}-{\rm
i}0^{+}}\enspace.$
Recall that $j_{\bot}/j_{0}=\sigma B_{\bot}/B_{0}$ and therefore
$\sigma=\frac{4\pi q^{2}}{3k}\frac{B_{0}}{j_{0}}\sum_{n=0}^{\infty}\int
dp~{}pA_{n}I_{n}~{}.$
This format is convenient when the series is to be truncated after a finite
number of terms, i.e. setting $f_{\ell>L_{\rm max}}=0$. Considering, for
example, $L_{\rm max}=3$, the first few $A_{n}$ terms are
$\displaystyle
A_{0}=\frac{v}{c}\left(f_{1}-\frac{3}{2}f_{3}\right)+\frac{\omega}{ck}p\frac{\partial~{}}{\partial
p}\left(f_{0}-\frac{1}{2}f_{2}\right)~{},$ $\displaystyle
A_{1}=3\frac{v}{c}f_{2}+\frac{\omega}{ck}\left(p\frac{\partial}{\partial
p}-1\right)\left(f_{1}-\frac{3}{2}f_{3}\right)~{},$ $\displaystyle
A_{2}=\frac{15}{2}\frac{v}{c}f_{3}+\frac{3}{2}\frac{\omega}{ck}\left(p\frac{\partial
f_{2}}{\partial p}-2f_{2}\right)~{},$ $\displaystyle
A_{3}=\frac{5}{2}\frac{\omega}{ck}\left(p\frac{\partial f_{3}}{\partial
p}-3f_{3}\right)~{}.$
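Though not written out in this form in the text, the coefficients $a^{\ell}_{n}$ are simple to generate numerically. The following Python sketch (an illustrative implementation, using exact rational arithmetic; not part of the paper) evaluates the generalised binomial coefficient as a finite product and reproduces the coefficients appearing in the explicit $A_{n}$ terms above.

```python
import math
from fractions import Fraction

def a_coeff(ell, n):
    """a_n^ell = 2^ell * C(ell, n) * C((ell+n-1)/2, ell), where the second
    binomial coefficient is the generalised (half-integer argument) one,
    evaluated via the product x(x-1)...(x-ell+1)/ell!."""
    out = Fraction(2) ** ell * math.comb(ell, n)
    x = Fraction(ell + n - 1, 2)
    for i in range(ell):  # product form of the generalised binomial C(x, ell)
        out *= (x - i) / (i + 1)
    return out

# Reproduces the coefficients of the explicit A_n terms above, e.g.
# a_1^1 = 1 and a_1^3 = -3/2 give the (v/c)(f_1 - (3/2) f_3) piece of A_0,
# while 2 a_2^2 = 3 gives the 3 (v/c) f_2 piece of A_1.
```

The exact rationals make the comparison with the listed $A_{n}$ coefficients unambiguous, with no floating-point rounding to interpret.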
For the integral over $\mu$ in $I_{n}$, we note that if $|\mu_{0}|<1$, the
integral has a simple pole at $\mu=\mu_{0}$, and hence in general we may write
$I_{n}=\frac{3\epsilon}{4}\left\\{{\rm i}\pi\left[1-{\rm
Min}(1,\mu_{0}^{2})\right]\mu_{0}^{n}+\mathcal{P}\int_{-1}^{1}d\mu\frac{(1-\mu^{2})\mu^{n}}{\mu-\mu_{0}}\right\\}~{},$
with $\mathcal{P}$ denoting the Cauchy principal value. The $I_{n}$ satisfy
the recurrence relation
$I_{n+1}=\mu_{0}I_{n}+\frac{3\epsilon}{2}\frac{1+\cos n\pi}{(n+1)(n+3)}$
and hence in computing the pitch angle integration it is necessary to specify
only $I_{0}$:
$I_{0}=\frac{3\epsilon}{4}\left\\{\left(1-\mu_{0}^{2}\right)\ln\left|\frac{1-\mu_{0}}{1+\mu_{0}}\right|-2\mu_{0}+{\rm
i}\pi\left[1-{\rm Min}(1,\mu_{0}^{2})\right]\right\\}~{}.$
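For numerical work, the closed form for $I_{0}$ together with the recurrence relation is all that is required. A minimal Python sketch (illustrative, not from the paper; `eps` denotes the polarisation $\epsilon=\pm 1$, and $\mu_{0}=\pm 1$ is excluded to avoid the logarithmic singularity):

```python
import math

def I_list(mu0, nmax, eps=1):
    """Return [I_0, ..., I_nmax] for real mu0 (mu0 != +/-1).
    I_0 from its closed form; higher orders from the recurrence
    I_{n+1} = mu0*I_n + (3*eps/2)*(1 + cos(n*pi)) / ((n+1)*(n+3))."""
    # Landau pole contributes an imaginary part only for |mu0| < 1
    im = math.pi * (1.0 - min(1.0, mu0**2))
    re = (1 - mu0**2) * math.log(abs((1 - mu0) / (1 + mu0))) - 2 * mu0
    I = [0.75 * eps * complex(re, im)]
    for n in range(nmax):
        I.append(mu0 * I[-1] + 1.5 * eps * (1 + (-1) ** n) / ((n + 1) * (n + 3)))
    return I

# e.g. at mu0 = 0 and eps = +1: I_0 = 3*pi*i/4, I_1 = 1, I_2 = 0,
# which can be checked directly against the principal-value integral.
```

For $|\mu_{0}|>1$ the pole lies outside the integration range and the result is purely real, as the assertions below confirm.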
This formalism is still completely general (we have not made any assumptions
about $\omega$ in high/low frequency limits, etc.). It is particularly
convenient for numerical calculations, as the final integral over momentum can
be performed using standard methods. The approach may thus be applied to other
scenarios where field-aligned modes are of interest. A similar result can be
found using Chebyshev polynomials as the angular basis functions. However,
since one of our motivations is to provide a general framework for comparison
with future simulations that use an expansion in the related spherical
harmonic basis functions (Reville & Bell, 2013), we leave this extension to
the reader for brevity.
Before proceeding to derive expressions for power-law distributions of
relativistic particles, we make a brief digression to demonstrate how the
above technique can be applied to other cases, such as a slowly drifting
Maxwellian distribution. To compare with previous results, we use the
previous expression for the susceptibility, Equation (29), which in spherical
coordinates reads
$\mathcal{X}_{s}=-\frac{4\pi^{2}q^{2}c}{\omega^{2}}\int dp~{}p\int
d\mu\frac{1-\mu^{2}}{\mu-\mu_{0}-{\rm i}0^{+}}\left[\frac{v}{c}\frac{\partial
f}{\partial\mu}+\frac{\omega}{ck}\left(p\frac{\partial f}{\partial
p}-\mu\frac{\partial f}{\partial\mu}\right)\right]~{}.$
Noting the similarity to equation (35), the same expansion approach applies,
and we find
$\mathcal{X}_{s}=-\epsilon\frac{16\pi^{2}q^{2}c}{3\omega^{2}}\left[\sum_{n=0}^{\infty}\int
dp~{}pA_{n}I_{n}\right]~{}.$
Consider a Maxwell-Boltzmann distribution of particles with mass $m$,
temperature $T$ and a non-relativistic drift velocity
$\bm{v}_{d}=v_{d}\bm{\hat{x}}$:
$f_{MB}(\bm{p})=\frac{n}{(2\pi p_{th}^{2})^{3/2}}{\rm
e}^{-(\bm{p}-\bm{p}_{d})^{2}/2p_{th}^{2}}$
where $p_{th}=\sqrt{mk_{B}T}$, and $\bm{p}_{d}=m\bm{v}_{d}$. As above, the
distribution is expanded in Legendre polynomials,
$f_{\ell}=\frac{2\ell+1}{2}\frac{n}{(2\pi p_{th}^{2})^{3/2}}{\rm
e}^{-\frac{{p}^{2}+p_{d}^{2}}{2p_{th}^{2}}}\int_{-1}^{1}d\mu
P_{\ell}(\mu)~{}\exp\left[{\frac{pp_{d}}{p_{th}^{2}}\mu}\right]$
which, provided $p_{d}\ll p_{th}$, converges rapidly. The first three terms are
$\displaystyle f_{0}=\frac{p_{th}^{2}}{2pp_{d}}\left[\frac{n}{(2\pi
p_{th}^{2})^{3/2}}\left({\rm
e}^{-\frac{1}{2}\left(\frac{p-p_{d}}{p_{th}}\right)^{2}}-{\rm
e}^{-\frac{1}{2}\left(\frac{p+p_{d}}{p_{th}}\right)^{2}}\right)\right]~{}\approx~{}\frac{n}{(2\pi
p_{th}^{2})^{3/2}}{\rm
e}^{-p^{2}/2p_{th}^{2}}\left[1+\frac{p_{d}^{2}}{p_{th}^{2}}\frac{p^{2}-3p_{th}^{2}}{6p_{th}^{2}}\right]$
$\displaystyle f_{1}=\frac{3p_{th}^{2}}{2pp_{d}}\left[\frac{n}{(2\pi
p_{th}^{2})^{3/2}}\left({\rm
e}^{-\frac{1}{2}\left(\frac{p-p_{d}}{p_{th}}\right)^{2}}+{\rm
e}^{-\frac{1}{2}\left(\frac{p+p_{d}}{p_{th}}\right)^{2}}\right)~{}-~{}2f_{0}\right]~{}\approx~{}\frac{n}{(2\pi
p_{th}^{2})^{3/2}}{\rm
e}^{-p^{2}/2p_{th}^{2}}\left[\frac{p_{d}p}{p_{th}^{2}}\right]$ $\displaystyle
f_{2}=5\left[f_{0}-\frac{p_{th}^{2}}{pp_{d}}f_{1}\right]~{}\approx~{}\frac{n}{(2\pi
p_{th}^{2})^{3/2}}{\rm
e}^{-p^{2}/2p_{th}^{2}}\left[\frac{p^{2}_{d}p^{2}}{3p_{th}^{4}}\right]$
where the approximate terms are accurate to
$\mathcal{O}\left[(p_{d}/p_{th})^{2}\right]$.
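As a concrete check of these expansions, the closed forms for $f_{0}$ and $f_{1}$ can be compared against their small-$p_{d}/p_{th}$ approximations. The Python sketch below is illustrative (the parameter values are arbitrary choices, not taken from the paper):

```python
import math

n, pth, pd, p = 1.0, 1.0, 0.01, 1.5        # assumed illustrative parameters
N = n / (2 * math.pi * pth**2) ** 1.5      # Maxwellian normalisation
Em = math.exp(-0.5 * ((p - pd) / pth) ** 2)
Ep = math.exp(-0.5 * ((p + pd) / pth) ** 2)

# Exact Legendre coefficients (closed forms above)
f0 = pth**2 / (2 * p * pd) * N * (Em - Ep)
f1 = 3 * pth**2 / (2 * p * pd) * (N * (Em + Ep) - 2 * f0)

# Leading-order approximations, accurate to O[(pd/pth)^2]
g = N * math.exp(-0.5 * (p / pth) ** 2)
f0_approx = g * (1 + (pd / pth) ** 2 * (p**2 - 3 * pth**2) / (6 * pth**2))
f1_approx = g * pd * p / pth**2
```

With $p_{d}/p_{th}=10^{-2}$ the exact and approximate coefficients agree to within the expected next-order corrections.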
Keeping only terms to first order in $p_{d}/p_{th}$, it follows that for
wavelengths $kv_{th}/\omega_{\rm g}\ll 1$,
$\mathcal{X}_{s}=\frac{\omega_{\rm p}^{2}(\omega-
kv_{d})}{\omega^{2}(\omega-\epsilon\omega_{\rm g})}\left[{\rm
i}\sqrt{\frac{\pi}{2}}\mu_{0}{\rm e}^{-Z^{2}/2}-\frac{1}{Z^{2}}-1\right]$
where $\omega_{\rm p}=(4\pi nq^{2}/m)^{1/2}$ is the plasma frequency, and
$Z=\mu_{0}|_{v=v_{th}}$. This agrees with the standard result for the long
wavelength limit of the plasma dispersion function (e.g. Ichimaru, 1973).
While this approach is less direct, the resulting algorithm is simple to
implement numerically. Most importantly, it may prove useful in situations
where the moments of the distribution function do not reduce to standard
integrals.
## Appendix C Solving for power-law cosmic-ray distributions
To simplify the analysis that follows (although not strictly necessary at this
stage), we consider only modes with $\omega\ll\omega_{g}$ (which can be
checked after the fact) making
$\mu_{0}=-\epsilon\omega_{g}/kv\equiv-\epsilon\lambda$. Here we have defined
the normalised wavelength $\lambda=\omega_{g}/kv=(kr_{g})^{-1}$. In this limit
$\displaystyle
I_{0}=\frac{3}{2}\lambda+\frac{3}{4}\left(1-\lambda^{2}\right)\ln\left|\frac{1+\lambda}{1-\lambda}\right|+\epsilon\frac{3\pi{\rm
i}}{4}[1-{\rm Min}(1,\lambda^{2})]$ $\displaystyle
I_{n+1}=\epsilon\left[\frac{3(1+\cos n\pi)}{2(n+1)(n+3)}-\lambda I_{n}\right]$
We seek a closed-form solution to develop astrophysical applications. To this
end, we follow Bell (2004), taking a power-law distribution
$f_{\ell}(p)=\phi_{\ell}p^{-s}\Theta(p;p_{1},p_{2})$ where
$\Theta(p;p_{1},p_{2})=H(p-p_{1})-H(p-p_{2})$, and
$H(p)=\int_{-\infty}^{p}\delta(x)dx$ is the Heaviside step function. The
assumption that all orders in the expansion have the same power-law shape is
an obvious shortcoming, as in non-equilibrium systems the anisotropy may well
have a non-trivial momentum dependence. We defer any detailed discussion to
future numerical experiments; a full non-linear theory is beyond the scope of
this paper.
In the following, we consider only situations where $p_{1}\gg mc$ such that we
can approximate $v/c=1$. We note that
$p\frac{\partial f_{\ell}}{\partial
p}-nf_{\ell}=\phi_{\ell}p^{1-s}\left[\delta(p-p_{1})-\delta(p-p_{2})\right]-(n+s)f_{\ell}$
In their final results, Bell (2004) and Reville et al. (2007) ignored the
contribution from the end-point delta functions, as they play no role in the
short wavelength limit. However, they are essential in the long wavelength
limit, and neglecting them would lead to the unphysical result
$\lim_{k\rightarrow 0}\sigma\neq 1$, i.e. cosmic rays would not follow the
field lines on large scales!
Substituting into $A_{n}$, and changing variable to
$\lambda=\lambda_{1}p_{1}/p=\lambda_{2}p_{2}/p$, we find
$\displaystyle A_{n}$
$\displaystyle=\frac{\lambda^{s-1}}{(p_{1}\lambda_{1})^{s}}\left\\{(n+1)\sum_{\ell=n+1}^{{L_{\rm
max}}}a^{\ell}_{n+1}{\phi_{\ell}}\lambda+\frac{\omega}{ck}\sum_{\ell=n}^{{L_{\rm
max}}}a^{\ell}_{n}\phi_{\ell}\left[\lambda_{1}^{2}\delta(\lambda-\lambda_{1})-\lambda_{2}^{2}\delta(\lambda-\lambda_{2})-(n+s)\lambda\right]\right\\}\Theta(\lambda;\lambda_{2},\lambda_{1})$
$\displaystyle=\frac{\lambda^{s-1}}{(p_{1}\lambda_{1})^{s}}\Theta(\lambda;\lambda_{2},\lambda_{1})\tilde{A}_{n}$
Likewise, changing variable in the momentum integral it follows that
$\sigma=\frac{4\pi}{3n_{\rm cr}}\frac{c}{v_{\rm
cr}}(p_{1}\lambda_{1})^{3-s}\sum_{n=0}^{L_{\rm
max}}\int_{\lambda_{2}}^{\lambda_{1}}d\lambda~{}\lambda^{s-4}\tilde{A}_{n}I_{n}$
where we have introduced the bulk CR velocity $v_{\rm cr}=j_{\rm cr}/qn_{\rm
cr}$. From here on, we restrict our attention to positive $k$ (and $\lambda$)
and define the integral
$K_{n}^{m}(\lambda)=\int^{\lambda}d\zeta~{}\zeta^{m}I_{n}(\zeta)~{},$
which also satisfies a useful recursion relation:
$K_{n}^{m}+\epsilon K_{n+1}^{m-1}=\frac{3}{2}\frac{1+\cos
n\pi}{(n+1)(n+3)}\frac{\lambda^{m}}{m}\enspace.$ (36)
Using this allows us to write
$\sigma=\frac{4\pi}{3n_{\rm cr}}(p_{1}\lambda_{1})^{3-s}\sum_{n=0}^{L_{\rm
max}}\left[(n+1)\frac{c}{v_{\rm cr}}\sum_{\ell=n+1}^{{L_{\rm
max}}}a^{\ell}_{n+1}{\phi_{\ell}}K_{n}^{s-3}+\frac{\omega}{v_{\rm
cr}k}\sum_{\ell=n}^{{L_{\rm
max}}}a^{\ell}_{n}\phi_{\ell}\left(\lambda^{s-2}I_{n}-(n+s)K_{n}^{s-3}\right)\right]_{\lambda_{2}}^{\lambda_{1}}$
Finally, for convenience in the main part of the text, we split the linear
susceptibility into frequency-dependent and frequency-independent parts,
$\sigma=\left[\sigma_{1}-\frac{\omega}{kv_{\rm
cr}}\sigma_{2}\right]_{\lambda_{2}}^{\lambda_{1}},$ (37)
where
$\displaystyle\sigma_{1}=\frac{4\pi}{3n_{\rm
cr}}(p_{1}\lambda_{1})^{3-s}\frac{c}{v_{\rm cr}}\sum_{n=0}^{L_{\rm
max}}\sum_{\ell=n+1}^{{L_{\rm
max}}}(n+1)a^{\ell}_{n+1}{\phi_{\ell}}K_{n}^{s-3}$
$\displaystyle\sigma_{2}=\frac{4\pi}{3n_{\rm
cr}}(p_{1}\lambda_{1})^{3-s}\sum_{n=0}^{L_{\rm max}}\sum_{\ell=n}^{{L_{\rm
max}}}a^{\ell}_{n}\phi_{\ell}\left[(n+s)K_{n}^{s-3}-\lambda^{s-2}I_{n}\right]\enspace.$
### C.1 Evaluating $K_{n}^{m}$
From the definition of $I_{n}$ it follows that $K_{n}^{m}$ contains only
integrals of powers of $\lambda$, together with the logarithmic integral
$L_{m}=\int
d\lambda~{}\lambda^{m}\left(1-\lambda^{2}\right)\ln\left|\frac{1+\lambda}{1-\lambda}\right|\enspace,$
where $m\geq 0$. The latter has the following solution for general $m$:
$L_{m}=\left[\frac{\lambda^{1+m}}{1+m}-\frac{\lambda^{3+m}}{3+m}\right]\ln\left|\frac{1+\lambda}{1-\lambda}\right|+F_{m+2}-F_{m}$
where Gauss’ Hypergeometric function appears via
$F_{m}(\lambda)={\rm
Re}\left[\frac{2\lambda^{2+m}}{(1+m)(2+m)}\,_{2}F_{1}\left(1,1+\frac{m}{2};2+\frac{m}{2};\lambda^{2}\right)\right]\,.$
We note the limiting forms of $F_{m}$ are
$F_{m}(\lambda)\approx\frac{2\lambda^{m}}{(1+m)}\left\\{\begin{array}[]{cc}\frac{\lambda^{2}}{2+m}&\lambda\ll
1\\\ \frac{(-1)}{m}&\lambda\gg 1\end{array}\right.\enspace,$
where the long wavelength limit is only accurate for $m>1$. This completes our
description of the necessary equations to generate the full solution for an
arbitrary order expansion. We conclude by writing out the first few terms:
$\displaystyle
K_{0}^{m}=\frac{3}{2}\frac{\lambda^{m+2}}{m+2}+\frac{3}{4}L_{m}+\epsilon\frac{3\pi{\rm
i}}{4}\left\\{\begin{array}[]{cc}\frac{\lambda^{m+1}}{m+1}-\frac{\lambda^{m+3}}{m+3}&|\lambda|<1\\\
\frac{1}{m+1}-\frac{1}{m+3}&|\lambda|\geq 1\end{array}\right.$ (40)
$\displaystyle
K_{1}^{m}=\epsilon\left[\frac{\lambda^{m+1}}{m+1}-K_{0}^{m+1}\right]~{},\enspace\enspace
K_{2}^{m}=K_{0}^{m+2}-\frac{\lambda^{m+2}}{m+2}\enspace.$ (41)
Below we consider some specific integer examples which have special regular
solutions.
### C.2 Specific solutions for integer values of $s$
We introduce a final recurrence relation, this time for $F_{m}$, valid for
integer values of $m\geq 2$:
$(m+1)F_{m}=(m-1)F_{m-2}-2\frac{\lambda^{m}}{m}\enspace,$ (42)
with $F_{0}=-\ln|1-\lambda^{2}|$ and
$F_{1}=\frac{1}{2}\ln\left|\frac{1+\lambda}{1-\lambda}\right|-\lambda$. With
the above recurrence relations, the $K_{n}^{m}$ expressions for different
integer power-law exponents are trivially found. As examples, we list below
the expressions for the first few terms $m=0,1,2$, corresponding to $s=3,4,5$
respectively:
$\displaystyle
K_{0}^{0}=\frac{1}{2}\lambda^{2}-\frac{1}{4}(\lambda+2)(\lambda-1)^{2}\ln\left|\frac{1+\lambda}{1-\lambda}\right|+\ln\left|1+\lambda\right|+\epsilon\frac{\pi{\rm
i}}{4}\left\\{\begin{array}[]{cc}\lambda(3-\lambda^{2})&|\lambda|<1\\\
2&|\lambda|\geq 1\end{array}\right.$ (45) $\displaystyle
K_{1}^{0}=\epsilon\left[\frac{1}{8}\lambda(5-3\lambda^{2})+\frac{3}{16}\left(1-\lambda^{2}\right)^{2}\ln\left|\frac{1+\lambda}{1-\lambda}\right|\right]-\frac{3\pi{\rm
i}}{16}\left\\{\begin{array}[]{cc}\lambda^{2}(2-\lambda^{2})&|\lambda|<1\\\
1&|\lambda|\geq 1\end{array}\right.$ (48) $\displaystyle
K_{2}^{0}=\left[-\frac{1}{10}\lambda^{2}\left(4-3\lambda^{2}\right)-\frac{1}{20}\left(3\lambda^{5}-5\lambda^{3}+2\right)\ln\left|\frac{1+\lambda}{1-\lambda}\right|+\frac{1}{5}\ln\left|1+\lambda\right|\right]+\epsilon\frac{\pi{\rm
i}}{20}\left\\{\begin{array}[]{cc}\lambda^{3}(5-3\lambda^{2})&|\lambda|<1\\\
2&|\lambda|\geq 1\end{array}\right.$ (51) $\displaystyle
K_{0}^{1}=\frac{3}{8}\lambda(1+\lambda^{2})-\frac{3}{16}\left(1-\lambda^{2}\right)^{2}\ln\left|\frac{1+\lambda}{1-\lambda}\right|+\epsilon\frac{3\pi{\rm
i}}{16}\left\\{\begin{array}[]{cc}\lambda^{2}(2-\lambda^{2})&|\lambda|<1\\\
1&|\lambda|\geq 1\end{array}\right.$ (54) $\displaystyle
K_{1}^{1}=\epsilon\left[\frac{1}{10}\lambda^{2}\left(4-3\lambda^{2}\right)+\frac{1}{20}\left(3\lambda^{5}-5\lambda^{3}+2\right)\ln\left|\frac{1+\lambda}{1-\lambda}\right|-\frac{1}{5}\ln\left|1+\lambda\right|\right]-\frac{\pi{\rm
i}}{20}\left\\{\begin{array}[]{cc}\lambda^{3}(5-3\lambda^{2})&|\lambda|<1\\\
2&|\lambda|\geq 1\end{array}\right.$ (57) $\displaystyle
K_{2}^{1}=\frac{1}{24}\lambda\left(6\lambda^{4}-7\lambda^{2}+3\right)-\frac{1}{16}\left(2\lambda^{6}-3\lambda^{4}+1\right)\ln\left|\frac{1+\lambda}{1-\lambda}\right|+\epsilon\frac{\pi{\rm
i}}{16}\left\\{\begin{array}[]{cc}\lambda^{4}(3-2\lambda^{2})&|\lambda|<1\\\
1&|\lambda|\geq 1\end{array}\right.$ (60) $\displaystyle
K_{0}^{2}=\frac{1}{10}\lambda^{2}\left(1+3\lambda^{2}\right)-\frac{1}{20}\left(3\lambda^{5}-5\lambda^{3}+2\right)\ln\left|\frac{1+\lambda}{1-\lambda}\right|+\frac{1}{5}\ln\left|1+\lambda\right|+\epsilon\frac{\pi{\rm
i}}{20}\left\\{\begin{array}[]{cc}\lambda^{3}(5-3\lambda^{2})&|\lambda|<1\\\
2&|\lambda|\geq 1\end{array}\right.$ (63) $\displaystyle
K_{1}^{2}=\epsilon\left[-\frac{1}{24}\lambda\left(6\lambda^{4}-7\lambda^{2}+3\right)+\frac{1}{16}\left(2\lambda^{6}-3\lambda^{4}+1\right)\ln\left|\frac{1+\lambda}{1-\lambda}\right|\right]-\frac{\pi{\rm
i}}{16}\left\\{\begin{array}[]{cc}\lambda^{4}(3-2\lambda^{2})&|\lambda|<1\\\
1&|\lambda|\geq 1\end{array}\right.$ (66) $\displaystyle
K_{2}^{2}=\frac{1}{140}\left[2\lambda^{2}\left(3-16\lambda^{2}+15\lambda^{4}\right)-3\left(5\lambda^{7}-7\lambda^{5}+2\right)\ln\left|\frac{1+\lambda}{1-\lambda}\right|+12\ln\left|1+\lambda\right|\right]+\epsilon\frac{3\pi{\rm
i}}{140}\left\\{\begin{array}[]{cc}\lambda^{5}(7-5\lambda^{2})&|\lambda|<1\\\
2&|\lambda|\geq 1\end{array}\right.$ (69)
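These closed forms are easily checked numerically: by construction $dK_{0}^{0}/d\lambda=I_{0}(\lambda)$, and equations (41) relate $K_{1}^{0}$ to $K_{0}^{1}$. A Python sketch of such a consistency check (illustrative, not from the paper; $\epsilon=+1$ throughout):

```python
import math

def L(lam):
    return math.log(abs((1 + lam) / (1 - lam)))

def I0(lam):  # Appendix C form of I_0, eps = +1
    im = 0.75 * math.pi * (1.0 - min(1.0, lam**2))
    return complex(1.5 * lam + 0.75 * (1 - lam**2) * L(lam), im)

def K00(lam):  # K_0^0
    im = 0.25 * math.pi * (lam * (3 - lam**2) if abs(lam) < 1 else 2.0)
    re = 0.5 * lam**2 - 0.25 * (lam + 2) * (lam - 1) ** 2 * L(lam) + math.log(abs(1 + lam))
    return complex(re, im)

def K01(lam):  # K_0^1
    im = 3 * math.pi / 16 * (lam**2 * (2 - lam**2) if abs(lam) < 1 else 1.0)
    return complex(0.375 * lam * (1 + lam**2) - 0.1875 * (1 - lam**2) ** 2 * L(lam), im)

def K10(lam):  # K_1^0
    im = -3 * math.pi / 16 * (lam**2 * (2 - lam**2) if abs(lam) < 1 else 1.0)
    return complex(0.125 * lam * (5 - 3 * lam**2) + 0.1875 * (1 - lam**2) ** 2 * L(lam), im)

lam, h = 0.5, 1e-6
dK = (K00(lam + h) - K00(lam - h)) / (2 * h)  # numerical d K_0^0 / d lambda
# dK should agree with I0(lam); K10(lam) should equal lam - K01(lam), eq. (41)
```

Such spot checks are a cheap safeguard against transcription errors when implementing the longer expressions.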
### C.3 Moments of the distribution function
To help visualise the above results, and to replace the somewhat opaque
$\phi_{\ell}$ normalisation, it is convenient to introduce more familiar fluid
quantities. Using the orthonormality of the Legendre polynomials, one can
easily recover the cosmic-ray density, current and pressure tensor,
respectively:
$\displaystyle n_{\rm cr}$ $\displaystyle=\int d^{3}p~{}f=4\pi\int
dp~{}p^{2}f_{0}$ $\displaystyle j_{\rm cr}$ $\displaystyle=q\int
d^{3}p~{}v_{x}f=\frac{4\pi}{3}q\int dp~{}p^{2}vf_{1}$ $\displaystyle
T^{\mu\nu}_{\rm cr}$ $\displaystyle=\int
d^{3}p^{\mu}v^{\nu}f=\frac{4\pi}{3}\int
dp~{}p^{3}v\left[f_{0}\delta^{\mu\nu}+\frac{f_{2}}{5}{\rm
diag}\left(2,-1,-1\right)\right]$
From the latter, we identify the isotropic and anisotropic pressures
$\displaystyle P_{\rm cr}=\frac{4\pi}{3}\int
dp~{}p^{3}vf_{0}~{},\enspace\enspace\Delta P_{\rm cr}\equiv T_{\rm
cr}^{11}-T_{\rm cr}^{22}=\frac{4\pi}{5}\int dp~{}p^{3}vf_{2}\enspace.$
For power-law distributions as we have used in the previous section, we can
re-express the first few $\phi_{n}$ in terms of these fluid parameters:
$\frac{4\pi}{3n_{\rm cr}}\phi_{0}=\frac{1}{3}\alpha=\frac{P_{cr}}{n_{\rm
cr}c}\kappa\enspace\enspace,\enspace\frac{4\pi}{3n_{\rm
cr}}\phi_{1}=\frac{v_{\rm
cr}}{c}\alpha\enspace\enspace,\enspace\frac{4\pi}{3n_{\rm
cr}}\phi_{2}=\frac{5}{9}\frac{\Delta P_{cr}}{P_{cr}}\alpha$
where
$\alpha=\left\\{\begin{array}[]{cc}\left[\ln\left(\frac{p_{2}}{p_{1}}\right)\right]^{-1}&s=3\\\
(s-3)\left[p_{1}^{3-s}-p_{2}^{3-s}\right]^{-1}&s\neq
3\end{array}\right.~{}~{}\mbox{and}~{}~{}~{}~{}~{}\kappa=\left\\{\begin{array}[]{cc}\left[\ln\left(\frac{p_{2}}{p_{1}}\right)\right]^{-1}&s=4\\\
(s-4)\left[p_{1}^{4-s}-p_{2}^{4-s}\right]^{-1}&s\neq 4\end{array}\right.$
are the relevant momentum integrals.
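By construction, $\alpha\int_{p_{1}}^{p_{2}}p^{2-s}\,dp=1$ and $\kappa\int_{p_{1}}^{p_{2}}p^{3-s}\,dp=1$, which provides a direct numerical check of these normalisations. A quick Python sketch (illustrative; Simpson quadrature after the substitution $p={\rm e}^{u}$):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def alpha(s, p1, p2):
    if s == 3:
        return 1.0 / math.log(p2 / p1)
    return (s - 3) / (p1 ** (3 - s) - p2 ** (3 - s))

def kappa(s, p1, p2):
    if s == 4:
        return 1.0 / math.log(p2 / p1)
    return (s - 4) / (p1 ** (4 - s) - p2 ** (4 - s))

p1, p2 = 1.0, 1.0e4                      # assumed illustrative momentum range
u1, u2 = math.log(p1), math.log(p2)
for s in (3.0, 3.5, 4.0, 4.5):
    # with p = e^u, dp = e^u du: the integrands become exp((3-s)u), exp((4-s)u)
    n_int = simpson(lambda u: math.exp((3.0 - s) * u), u1, u2)
    P_int = simpson(lambda u: math.exp((4.0 - s) * u), u1, u2)
    # alpha * n_int and kappa * P_int should both be ~1
```

The logarithmic branches at $s=3$ and $s=4$ are recovered smoothly, as the assertions below verify.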
Figure 5: $\sigma_{1}$ plotted for different power-law indices $s$. In all
cases $p_{2}/p_{1}=10^{4}$. For the real part, $1-{\rm Re}\sigma$ is plotted,
to emphasise how it approaches zero as $k\rightarrow 0$, as it must.
### C.4 Susceptibilities: example to second order in $f_{\ell}$ and limiting cases
Using these expressions, we provide the required scalar susceptibilities, i.e.
the response of the CR current to linear magnetic perturbations. The relative
magnitudes of the different moments will depend on the physical scenario under
investigation. For example, in the diffusion approximation, it is generally
found that $\phi_{\ell+1}\approx(v_{\rm cr}/c)\phi_{\ell}$ (e.g. Jokipii &
Williams, 1992).
As we have introduced only the first three fluid moments, we retain only the
terms in the Legendre expansion up to $f_{2}$, in which case
$\displaystyle\sigma_{1}=\alpha\left(\lambda_{1}p_{1}\right)^{3-s}\left[K_{0}^{s-3}+\frac{5}{3}\frac{c}{v_{\rm
cr}}\frac{\Delta
P_{cr}}{P_{cr}}K_{1}^{s-3}\right]_{\lambda_{2}}^{\lambda_{1}}$
$\displaystyle{\rm and}$ (70)
$\displaystyle\sigma_{2}=\frac{\alpha}{3}\left(\lambda_{1}p_{1}\right)^{3-s}\left[sK_{0}^{s-3}-\lambda^{s-2}I_{0}+3\frac{v_{\rm
cr}}{c}\left((s+1)K_{1}^{s-3}-\lambda^{s-2}I_{1}\right)+\frac{5}{6}\frac{\Delta
P_{cr}}{P_{cr}}\left(\lambda^{s-2}\left(I_{0}-3I_{2}\right)-sK_{0}^{s-3}+3(s+2)K_{2}^{s-3}\right)\right]_{\lambda_{2}}^{\lambda_{1}}$
where expressions for $K_{n}^{m}$ are found in equations (41) above.
The real and imaginary parts of $\sigma_{1}$ are plotted in Figure 5, where we
have taken $\Delta P_{\rm cr}>0$ in all cases. A crucial test of the solution
is that in the long wavelength limit ($kr_{g,2}\ll 1$), the cosmic rays must
follow the magnetic field, i.e. $j_{\bot}/j_{\|}=B_{\bot}/B_{\|}$, or
equivalently, $\sigma\rightarrow 1$. We observe that this is indeed the case.
$\sigma_{2}$ is dominated by the leading two terms in equation (70), and
plotting it for different parameters is unnecessary, although graphically
$\sigma_{2}$ looks very similar to the left column of Figure 5. This is to be
expected, since it is this term that stabilises the resonant streaming
instability when $\omega/v_{\rm cr}k$ approaches unity, which in turn
determines the neutral streaming speed of the cosmic rays. Strong anisotropy
of course changes this (see for example Zweibel, 2020).
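The $\sigma\rightarrow 1$ limit is also easy to verify numerically. The Python sketch below (illustrative, not from the paper) takes $s=4$, $\Delta P_{\rm cr}=0$ and $p_{2}/p_{1}=100$ as assumed example values, so only the $K_{0}^{1}$ term of equation (70) survives, and evaluates $\sigma_{1}$ deep in the non-resonant long-wavelength regime $\lambda_{2}\gg 1$:

```python
import math

def K01(lam):
    """Real part of K_0^1 for lam > 1; the imaginary part is constant there
    and cancels when differenced between lambda_2 and lambda_1."""
    return 0.375 * lam * (1 + lam**2) - 0.1875 * (1 - lam**2) ** 2 * math.log((lam + 1) / (lam - 1))

def sigma1(krg1, p1=1.0, p2=100.0, s=4.0):
    """sigma_1 from eq. (70) with Delta P_cr = 0; alpha = (s-3)/(p1^{3-s}-p2^{3-s})."""
    alpha = (s - 3) / (p1 ** (3 - s) - p2 ** (3 - s))
    lam1 = 1.0 / krg1          # lambda_1 = (k r_{g,1})^{-1}
    lam2 = lam1 * p1 / p2      # lambda scales as 1/p, so lambda_1 p_1 = lambda_2 p_2
    return alpha * (lam1 * p1) ** (3 - s) * (K01(lam1) - K01(lam2))

# As k -> 0 (both lambda_1, lambda_2 >> 1), sigma_1 -> 1:
# the cosmic rays follow the field lines on large scales.
```

Halving $kr_{g,1}$ by an order of magnitude drives $\sigma_{1}$ correspondingly closer to unity, consistent with the behaviour shown in Figure 5.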
### C.5 Long-wavelength approximations
For $\lambda\gg 1$ one can show for non-integer $m$
$\displaystyle
K_{0}^{m}\approx\frac{\lambda^{m}}{m}+\frac{1}{5}\frac{\lambda^{m-2}}{m-2}+\epsilon\frac{3\pi{\rm
i}}{4}\left[\frac{1}{1+m}-\frac{1}{3+m}\right]$ $\displaystyle
K_{1}^{m}\approx-\frac{\epsilon}{5}\frac{\lambda^{m-1}}{m-1}-\frac{3\pi{\rm
i}}{4}\left[\frac{1}{2+m}-\frac{1}{4+m}\right]$ $\displaystyle
K_{2}^{m}\approx\frac{1}{5}\frac{\lambda^{m}}{m}+\epsilon\frac{3\pi{\rm
i}}{4}\left[\frac{1}{3+m}-\frac{1}{5+m}\right]$ $\displaystyle
I_{0}\approx\lambda^{-1},~{}~{}~{}~{}~{}I_{1}~{}~{}\approx~{}~{}-\frac{\epsilon}{5}\lambda^{-2},~{}~{}~{}~{}~{}I_{2}~{}~{}\approx~{}~{}\frac{1}{5}\lambda^{-1}$
Integer cases can be found from the final expressions using L’Hôpital’s rule.
#### C.5.1 Non-resonant long-wavelength
For $\lambda_{1}\gg\lambda_{2}\gg 1$ the imaginary parts of the integrals
vanish, while to order $k^{2}$
$1-\sigma_{1}=kr_{g,1}\left[\epsilon\frac{(s-3)}{3(s-4)}\frac{c}{v_{\rm
cr}}\frac{\Delta
P_{cr}}{P_{cr}}\left(\frac{1-\left(\frac{p_{1}}{p_{2}}\right)^{s-4}}{1-\left(\frac{p_{1}}{p_{2}}\right)^{s-3}}\right)-\frac{(s-3)}{5(s-5)}\left(\frac{1-\left(\frac{p_{1}}{p_{2}}\right)^{s-5}}{1-\left(\frac{p_{1}}{p_{2}}\right)^{s-3}}\right)kr_{g,1}\right]$
(71)
and
$1-\sigma_{2}=kr_{g,1}\left[\epsilon\frac{s-3}{s-4}\frac{v_{\rm
cr}}{c}\left(\frac{1-\left(\frac{p_{1}}{p_{2}}\right)^{s-4}}{1-\left(\frac{p_{1}}{p_{2}}\right)^{s-3}}\right)-\frac{s}{15}\frac{s-3}{s-5}\left(1-\frac{5}{6}\frac{\Delta
P_{cr}}{P_{cr}}\right)\left(\frac{1-\left(\frac{p_{1}}{p_{2}}\right)^{s-5}}{1-\left(\frac{p_{1}}{p_{2}}\right)^{s-3}}\right)kr_{g,1}\right]$
(72)
From these, the standard firehose instability result follows.
#### C.5.2 Resonance
For $\lambda_{1}>1>\lambda_{2}$ resonance is possible. We simplify the picture
by taking the limit of $\lambda_{2}\rightarrow 0$ and $\lambda_{1}\gg 1$,
finding
$1-\sigma_{1}=kr_{g,1}\left[\epsilon\frac{(s-3)}{3(s-4)}\frac{c}{v_{\rm
cr}}\frac{\Delta
P_{cr}}{P_{cr}}-\frac{(s-3)}{5(s-5)}kr_{g,1}\right]-\frac{3\pi{\rm
i}}{2}\left(kr_{g,1}\right)^{s-3}\left[\frac{\epsilon(s-3)}{s(s-2)}-\frac{5(s-3)}{3(s^{2}-1)}\frac{c}{v_{\rm
cr}}\frac{\Delta P_{cr}}{P_{cr}}\right]$ (73)
and
$1-\sigma_{2}=kr_{g,1}\left[\epsilon\frac{s-3}{s-4}\frac{v_{\rm
cr}}{c}-\frac{s}{15}\frac{s-3}{s-5}\left(1-\frac{5}{6}\frac{\Delta
P_{cr}}{P_{cr}}\right)kr_{g,1}\right]-\frac{\pi{\rm
i}}{2}\left(kr_{g,1}\right)^{s-3}\left[\frac{\epsilon(s-3)}{(s-2)}-\frac{3}{(s-1)}\frac{v_{\rm
cr}}{c}+\frac{5}{6}\frac{\Delta
P_{cr}}{P_{cr}}\left(\frac{3}{s}-\frac{\epsilon}{s-2}\right)\right]$ (74)
Since the pressure diverges for $s<4$, this result only applies to cases in
which $s>4$. For harder spectra, one can combine with the previous results for
$\lambda_{2}\ll 1$, in which case the leading-order terms effectively cancel;
i.e., the firehose instability operates only in the long-wavelength, non-resonant limit.
# Single-site Rydberg addressing in 3D atomic arrays for quantum computing
with neutral atoms
Xiao-Feng Shi School of Physics and Optoelectronic Engineering, Xidian
University, Xi’an 710071, China
###### Abstract
Neutral atom arrays are promising for large-scale quantum computing, especially
because it is possible to prepare large-scale qubit arrays. An unsolved issue
is how to selectively excite one qubit deep in a 3D atomic array to Rydberg
states. In this work, we show two methods for this purpose. The first method
relies on a well-known result: in a dipole transition between two quantum
states driven by two off-resonant fields of equal strength but opposite
detunings $\pm\Delta$, the transition is characterized by two counter-rotating
Rabi frequencies $\Omega e^{\pm i\Delta t}$ [or $\pm\Omega e^{\pm i\Delta t}$
if the two fields have a $\pi$-phase difference]. This pair of detuned fields
leads to a time-dependent Rabi frequency $2\Omega\cos(\Delta t)$ [or
$2i\Omega\sin(\Delta t)$], so that a full transition between the two levels is
recovered. We show that when the two detuned fields are sent in different
directions, one atom in a 3D optical lattice can be selectively addressed for
Rydberg excitation, and when its state is restored, the state of any nontarget
atoms irradiated in the light path is also restored. Moreover, we find that
the Rydberg excitation by this method can significantly suppress the
fundamental blockade error of a Rydberg gate, paving the way for a high-
fidelity entangling gate with a commonly used quasi-rectangular pulse that is
easily obtained by pulse pickers. Along the way, we find a second method for
single-site Rydberg addressing in 3D, where a selected target atom can be
excited to Rydberg state while preserving the state of any nontarget atom due
to a spin echo sequence. The capability to selectively address a target atom
in 3D atomic arrays for Rydberg excitation makes it possible to design a
large-scale neutral-atom information processor based on Rydberg blockade.
## I introduction
Neutral atom arrays have become a promising platform for large-scale quantum
computing Jaksch _et al._ (2000); Saffman _et al._ (2010, 2011); Saffman
(2016); Weiss and Saffman (2017); Adams _et al._ (2019). The attraction lies
in the fact that one can not only prepare large-scale qubit arrays and
initialize the qubit states with high fidelity, but can also realize
high-fidelity single-qubit gates and easily read out the quantum information stored in the
(2014); Xia _et al._ (2015); Zeiher _et al._ (2015); Ebert _et al._ (2015);
Wang _et al._ (2016); Barredo _et al._ (2016). The fidelity of two-qubit
entangling gates has been growing steadily Wilk _et al._ (2010); Isenhower
_et al._ (2010); Zhang _et al._ (2010); Maller _et al._ (2015); Zeng _et
al._ (2017); Picken _et al._ (2019); Levine _et al._ (2018); Graham _et
al._ (2019); Levine _et al._ (2019), from the initial value of $73\%$
Isenhower _et al._ (2010) to $97\%$ Levine _et al._ (2019), which points to
the likelihood that a high-fidelity universal gate set can be experimentally
realized in the near future with the required accuracy of measurement-free
fault-tolerant quantum computing Crow _et al._ (2016). Nevertheless, there
are still many unsolved issues toward this goal as reviewed in Ref. Saffman
(2016).
One issue for large-scale quantum information processing in a 3-dimensional
(3D) qubit array is how to selectively excite one qubit deep in the lattice
between ground and Rydberg states. To date, there have been demonstrations of
two-qubit gates in a two-dimensional (2D) array of neutral atoms Maller _et
al._ (2015); Graham _et al._ (2019), where the single-site addressing without
affecting nontarget atoms is guaranteed by sending laser fields perpendicular
to the 2D array, and thus is not applicable in a 3D lattice. There have been
several experiments on single-qubit gates in 3D Wang _et al._ (2015, 2016) or
2D Xia _et al._ (2015) atomic arrays by using a hybrid irradiation of laser
and microwave fields. These methods depend on shifting the atomic transition
frequency by sending laser fields upon target atoms, where the Stark shifts
(divided by the Planck constant) should be comparable to the inverse of the
gate durations of several hundred $\mu$s. To the best of our knowledge,
however, there is no method of single-site Rydberg addressing in a 3D lattice
where nontarget atoms in the light path are not influenced.
In this work, we present two methods for selectively exciting one qubit in any
site of a 3D qubit array to Rydberg state. The first method relies on a well-
known phenomenon: the action of two symmetrically detuned laser pulses is
equivalent to that of a monochromatic field whose amplitude is sinusoidally
modulated in time, as studied in Ref. Goreslavsky _et al._ (1980). This means
that a full dipole transition can also occur by absorbing two fractional
photons, i.e., by absorbing half of a photon with energy $E_{ge}+\hbar\Delta$,
and half of another photon with energy $E_{ge}-\hbar\Delta$, where $\hbar$ and
$\Delta$ are the reduced Planck constant and frequency detuning, respectively.
This resonance can be called off-resonance-induced resonance (ORIR). In ORIR,
one laser field is sent along one direction while the other is sent along
another direction, and together they excite a target atom. The nontarget atom
can accumulate a phase shift or even experience state evolution in general, but
we show that our ORIR-based theory can eliminate its effect when the state of
the target atom is restored. Besides the ORIR-based method, we show a second
method with a three-level ladder type system, where one target atom can be
excited to Rydberg state while preserving the state of any nontarget atom. In
the second method, the state of the target atom can pick up a $\pi$ phase shift
upon the completion of the ground-Rydberg-ground state transfer; such a $\pi$
phase is crucial in the controlled-phase gate based on Rydberg blockade. These
methods make it possible to couple only one atom to the Rydberg state in 3D.
We further show that ORIR can lead to high fidelity in the Rydberg blockade
gate. In quantum computing with neutral atoms and Rydberg interactions Jaksch
_et al._ (2000); Saffman _et al._ (2010, 2011); Saffman (2016); Weiss and
Saffman (2017); Adams _et al._ (2019), it has been an outstanding challenge
to design a practical high-fidelity entangling gate Goerz _et al._ (2014);
Theis _et al._ (2016); Shi (2017); Petrosyan _et al._ (2017); Shi (2018a);
Shen _et al._ (2019); Shi (2019); Yu _et al._ (2019); Levine _et al._
(2019). A traditional method to achieve an entangling gate by Rydberg
interactions is via the blockade mechanism Jaksch _et al._ (2000), in which
there is a fundamental blockade error. We show that when the Rabi frequency
$\Omega$ in the usual method is replaced by an ORIR-induced Rabi frequency
$i\Omega\sin(\Delta t)$, the blockade error can be suppressed by more than two
orders of magnitude. More importantly, this reduction of error by ORIR is robust
against the variation of the blockade interaction, which compares favorably to
other pulse-shaping-based methods for suppressing the blockade error. Thus,
ORIR can effectively remove the blockade error, making it possible to realize
a high-fidelity neutral-atom entangling gate with quasi-rectangular pulses
that are easily attainable by pulse pickers.
The remainder of this work is organized as follows. In Sec. II, we give
details about how ORIR appears. In Sec. III, we study two methods for single-
site Rydberg addressing in a 3D optical lattice; a comparison between the two
methods is given at the end of Sec. III. In Sec. IV, we show that ORIR can
suppress a fundamental rotation error in the Rydberg blockade gate. In Sec. V,
we discuss other applications of ORIR. Section VI gives a brief summary.
## II Resonance from off resonance
Figure 1: (a) Schematic and population dynamics of Rabi oscillations between
$|g\rangle$ and $|e\rangle$ driven by an external field with a detuning of
$\Delta$ or $-\Delta$. The initial state is $|\psi(0)\rangle=|g\rangle$. (b)
When the transition $|g\rangle\leftrightarrow|e\rangle$ is driven by two
external coherent fields with opposite detunings $\pm\Delta$, an effective
Rabi frequency of $2\Omega\cos(\Delta t)$ appears. This leads to a resonant
transition between $|g\rangle$ and $|e\rangle$. The condition
$\Delta/\Omega=2/\pi$ is used in both (a) and (b).
The reason that ORIR occurs is that the dipole coupling between an atom
and two symmetrically detuned laser fields can be characterized by a
sinusoidal Rabi frequency $2i\Omega\sin(\Delta t)$ or $2\Omega\cos(\Delta t)$
Goreslavsky _et al._ (1980). To put it in perspective we consider a pair of
external coherent electromagnetic fields of equal strength but with opposite
detunings $\pm\Delta$ applied for the dipole transition
$|g\rangle\leftrightarrow|e\rangle$. In the dipole approximation, these two
fields lead to a pair of counter-rotating Rabi frequencies
$\Omega_{\pm}=\Omega e^{\pm i\Delta t}$. When only one of these two detuned
fields is applied, the expectation value of the population in $|e\rangle$ has
an upper bound $\Omega^{2}/(\Omega^{2}+\Delta^{2})$ if the initial state is
$|g\rangle$ Shi (2017, 2018a), as shown in Fig. 1(a). But when the two
oppositely detuned fields are applied simultaneously and because
$\Omega_{+}+\Omega_{-}=2\Omega\cos(\Delta t)$, a full transition between
$|g\rangle$ and $|e\rangle$ becomes attainable within a time of less than
$|\pi/\Delta|$ as long as $|\Omega/\Delta|\geq\pi/2$. The population dynamics
in the two-level system in response to the detuned driving is shown in Fig. 1
when $\Omega/\Delta=\pi/2$. Figures 1(a) and 1(b) show that the dynamics in
the two-level system driven by the two oppositely detuned fields dramatically
differs from that when only one detuned driving is present. This phenomenon
results from the quantum interference between the two Rabi oscillations with
opposite detunings. Remarkably, the speed of this off-resonance-induced
resonant transition is comparable to its resonant counterpart: when
$|\Omega/\Delta|=\pi/2$, a full transition between the two states is achieved
within a time of $\pi t_{\pi}/2$ that is comparable to the duration,
$t_{\pi}=\pi/(2\Omega)$, of a $\pi$ pulse required for a full transition in
the resonant case with the Rabi frequency $2\Omega$. This type of multi-photon
ORIR can occur in various electric or magnetic dipole transitions of an atom
or molecule, either natural or artificial.
We give a detailed mathematical argument for the above discussion. For a two-
level system in Fig. 1, the eigenenergies of $|g\rangle$ and $|e\rangle$ are
zero and $E_{\text{ge}}$, respectively. The electric dipole transition between
$|g\rangle$ and $|e\rangle$ is driven by two laser fields, one with central
(angular) frequency $\omega+\Delta$, and the other with central frequency
$\omega-\Delta$, where $\omega=E_{\text{ge}}/\hbar$. The Hamiltonian (divided
by $\hbar$) of the matter-light coupling in the dipole approximation is
$\displaystyle\hat{H}(t)$ $\displaystyle=$
$\displaystyle\omega|e\rangle\langle
e|+[\Omega_{1}(e^{it(\omega+\Delta)}+e^{-it(\omega+\Delta)})|e\rangle\langle
g|$ (1)
$\displaystyle+\Omega_{2}(e^{it(\omega-\Delta)}+e^{-it(\omega-\Delta)})|e\rangle\langle
g|+\text{H.c.}]/2,$
where we assume that $\Omega_{1}$ and $\Omega_{2}$ are real, and “H.c.”
denotes the Hermitian conjugate. In principle, there is a phase difference
$\varphi_{0}=(\mathbf{k}_{1}-\mathbf{k}_{2})\cdot\mathbf{r}$ between
$\Omega_{1}$ and $\Omega_{2}$, where the subscript $1(2)$ distinguishes the
two transitions with opposite detunings. If the Rydberg state $|e\rangle$ is
excited by counterpropagating fields along $\mathbf{z}$ via two-photon
excitation through an intermediate state with a GHz-scale detuning
$\delta_{1(2)}$, and if meanwhile the fields for $\mathbf{k}_{1}$ and
$\mathbf{k}_{2}$ copropagate, $\varphi_{0}$ becomes
$2(\Delta+\delta_{2}-\delta_{1})z/c$, where $z$ is the $z$ coordinate
of the atom and $c$ is the speed of light. The difference between $\delta_{1}$ and
$\delta_{2}$ is much larger than $\Omega_{1(2)}$ so that a common intermediate
state is used for the two two-photon transitions; for instance, $\delta_{1}$
and $\delta_{2}$ can have different signs. Because $\Delta\sim\Omega_{1(2)}$
and the effective two-photon Rabi frequency $\Omega_{1(2)}$ is on the order of
MHz, $\varphi_{0}\approx 2(\delta_{2}-\delta_{1})z/c$. For atoms cooled to
temperatures around $1$ mK or below, the change of $z$ within several $\mu$s is
on the order of $\mu$m, which means that $\varphi_{0}$ is almost constant;
furthermore, $|\delta_{2}-\delta_{1}|/c\ll k_{1},k_{2}$, and thus we can
assume that this phase has been compensated by adjustment of the overall
phases carried by the laser beams. This is technically possible since the
phase fluctuation of a laser field can be made negligible Saffman and Walker
(2005); Wineland _et al._ (1998). But for single-site Rydberg addressing with
our methods (shown in Sec. III), the fields for $\Omega_{1}$ and those for
$\Omega_{2}$ propagate along different directions, e.g., one along
$\mathbf{z}$ and the other along $\mathbf{x}$, then
$\varphi_{0}\approx\mathcal{K}(z-x)$, where $\mathcal{K}$ is the difference of
the wavevectors of the upper and lower transitions. For configurations with
$\mathcal{K}$ of several $10^{6}~\mathrm{m}^{-1}$ Isenhower _et al._ (2010), the
fluctuation of the qubit locations leads to large errors in $\varphi_{0}$, and
it is necessary to cool atoms to very low temperatures to establish ORIR.
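As an order-of-magnitude check of the atomic drift (our own estimate, not from the paper, using the 1D rms thermal speed $\sqrt{k_{B}T/m}$ for rubidium-87 and a representative $5~\mu$s timescale):

```python
import math

kB = 1.380649e-23                      # J/K, Boltzmann constant
m = 87 * 1.66053907e-27                # kg, mass of rubidium-87
for T in (1.0, 1e-3, 1e-6):            # temperatures in K
    v = math.sqrt(kB * T / m)          # 1D rms thermal speed, m/s
    dz = v * 5e-6                      # displacement over ~5 microseconds, m
    print(T, v, dz)
```

Only at millikelvin-scale temperatures and below does the displacement over several microseconds drop to the micrometer scale and below.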
Using the operator $\hat{\mathcal{R}}=\omega|e\rangle\langle e|$ for a
rotating-frame transform,
$e^{i\hat{\mathcal{R}}t}\hat{H}e^{-i\hat{\mathcal{R}}t}-\hat{\mathcal{R}}$,
the Hamiltonian becomes,
$\displaystyle\hat{H}(t)$ $\displaystyle=$
$\displaystyle\Omega_{1}(e^{it(2\omega+\Delta)}+e^{-it\Delta})|e\rangle\langle
g|/2$
$\displaystyle+\Omega_{2}(e^{it(2\omega-\Delta)}+e^{it\Delta})|e\rangle\langle
g|/2+\text{H.c.}.$
When $\Omega_{1}=\Omega_{2}=\Omega$, the above equation simplifies to,
$\displaystyle\hat{H}(t)$ $\displaystyle=$
$\displaystyle\Omega(e^{2it\omega}+1)\cos(t\Delta)|e\rangle\langle
g|+\text{H.c.}.$
We further assume $\Delta\sim|\Omega|\ll\omega$ so that the Bloch-Siegert
shift is negligible Cohen-Tannoudji _et al._ (1998). In this case, $e^{\pm
2it\omega}$ is rapidly oscillating and can be discarded according to the
rotating wave approximation. This leads to the following Hamiltonian,
$\displaystyle\hat{H}(t)$ $\displaystyle=$
$\displaystyle\Omega\cos(t\Delta)(|e\rangle\langle g|+|g\rangle\langle e|).$
Starting from the initial state, $|\psi(0)\rangle=|g\rangle$, the system
wavefunction $|\psi(t)\rangle=C_{g}(t)|g\rangle+C_{e}(t)|e\rangle$ evolves
according to
$\displaystyle C_{g}(t)$ $\displaystyle=$
$\displaystyle\cos\left[\frac{\Omega}{\Delta}\sin(t\Delta)\right],$
$\displaystyle C_{e}(t)$ $\displaystyle=$
$\displaystyle-i\sin\left[\frac{\Omega}{\Delta}\sin(t\Delta)\right].$ (2)
When $\frac{|\Omega|}{\Delta}\geq\frac{\pi}{2}$ is satisfied, the transition
probability from the ground state to the excited state can reach 1. A critical
condition for a full transition is
$\displaystyle\frac{|\Omega|}{\Delta}=\frac{\pi}{2},~{}~{}t=\frac{\pi}{2\Delta}.$
Remarkably, the time for the transition from the ground state to the excited
state is $\frac{\pi}{2\Delta}$, which is only a factor of $\pi/2$ longer than the
time $\frac{\pi}{2|\Omega|}$ of a $\pi$ pulse in a resonant transition with a
Rabi frequency $2\Omega$.
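As a numerical sanity check (our own sketch, not the authors' code), one can integrate the two-level Schrödinger equation with the rotating-wave Hamiltonian $\hat{H}(t)=\Omega\cos(t\Delta)(|e\rangle\langle g|+|g\rangle\langle e|)$ and compare against the closed form of Eq. (2); $\hbar=1$ throughout:

```python
import math

# Because H(t) = Omega*cos(Delta*t)*sigma_x commutes with itself at all times,
# composing exact small rotations exp(-i*theta*sigma_x) gives the evolution.
def evolve(omega, delta, t_final, steps=20000):
    cg, ce = 1.0 + 0j, 0j                  # start in |g>
    dt = t_final / steps
    for n in range(steps):
        t = (n + 0.5) * dt                 # midpoint rule for the cos envelope
        theta = omega * math.cos(delta * t) * dt
        cg, ce = (math.cos(theta) * cg - 1j * math.sin(theta) * ce,
                  -1j * math.sin(theta) * cg + math.cos(theta) * ce)
    return cg, ce

delta = 1.0
omega = (math.pi / 2) * delta              # critical condition |Omega/Delta| = pi/2
t = math.pi / (2 * delta)                  # predicted full-transfer time
cg, ce = evolve(omega, delta, t)
closed = -1j * math.sin((omega / delta) * math.sin(delta * t))   # Eq. (2)
print(abs(ce) ** 2)                        # ~1: full |g> -> |e> transfer
```

The numerical amplitude agrees with Eq. (2), and the excited-state population reaches unity at $t=\pi/(2\Delta)$.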
ORIR is a useful method in quantum control with atomic ions Sørensen and
Mølmer (1999). Below, we show its applicability in neutral atoms.
## III Single-site Rydberg addressing in 3D arrays
ORIR can be used for Rydberg addressing of a single atom in a 3D optical
lattice. In general, the difficulty of Rydberg addressing in a 3D lattice lies
in two aspects. First, sending lasers to a target atom in a dense 3D lattice
can influence other atoms along the light path, and it is unlikely that the
state of a nontarget atom remains intact after fully Rydberg exciting and
deexciting a target atom. Second, the atoms along the light path can exhibit
Rydberg blockade, giving the problem a many-body complexity. One may imagine
that for a two-photon excitation of $s$\- or $d$-orbital Rydberg atoms, the
two laser beams can be sent along different directions; because both laser
fields are largely detuned, only the target atom at the intersection of the
two beams is Rydberg excited. However, a phase twist can occur for the
nontarget atom even if it is pumped only by an off-resonant field [see, for
example, the dashed curve in Fig. 3(b) of Ref. Shi (2019)]. So, there can be a
phase shift to the atomic state of a nontarget qubit illuminated by the laser
fields addressing the lower transition in our problem. In fact, even if the
detuning of the laser field addressing the lower transition is 10 times larger
than its Rabi frequency $\Omega_{\text{g}}$, the phase shift to the ground
state can reach $\pi/2$ if the atom is irradiated for a time of
$20\pi/\Omega_{\text{g}}$. These issues can be tackled by using ORIR-based
optical spin echo in a very small lattice, and a microwave spin echo assisted
by ORIR in a relatively large one. Here, the optical spin echo is for
restoring the state of nontarget atoms if no Rydberg interaction exists
between the nontarget atoms, while the microwave spin echo is for removing
many-body effect if there is Rydberg blockade in the nontarget atoms.
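The $\pi/2$ estimate above can be reproduced with the standard far-detuned ac Stark shift $\Omega_{\text{g}}^{2}/(4\delta_{\text{g}})$ (our assumed form for the shift; the text only quotes the resulting phase):

```python
import math

# Units are arbitrary; only the ratios quoted in the text matter.
omega_g = 1.0                          # Rabi frequency of the lower transition
delta_g = 10 * omega_g                 # detuning, 10x the Rabi frequency
shift = omega_g ** 2 / (4 * delta_g)   # far-detuned ac Stark shift (assumed form)
t = 20 * math.pi / omega_g             # irradiation time quoted in the text
phase = shift * t
print(phase / math.pi)                 # ~0.5, i.e. the pi/2 phase shift
```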
We take a system shown in Fig. 2 for illustration. For a small array, the
laser spot is small enough so that only an atom at the beam axis can be
irradiated, and the laser intensity is of similar magnitude at the target atom
and any nontarget atom along the light path. So, the Rabi frequencies are of
similar magnitude for all the irradiated atoms in the problem. In Fig. 2, a
cubic lattice with $3\times 5\times 3$ sites is shown, the lattice constant is
$\mathcal{L}$, and the relevant transition is between a hyperfine state of
$F=2$ and a high-lying $s$-orbital Rydberg state of rubidium-87. The
quantization axis is along $[101]$ which can be specified by a magnetic field.
All laser fields are $\pi$-polarized along the quantization axis. We want to
excite the central atom (denoted by the red ball), which is located at the
origin of the Cartesian coordinate (the $\mathbf{x}-\mathbf{y}-\mathbf{z}$
arrows are only for clarifying the directions). In a rotating-frame as in Sec.
II, a two-photon transition $|1\rangle\leftrightarrow|r\rangle$ with Rabi
frequency $\Omega_{\perp}=\Omega e^{i(t\Delta_{\perp}+\varphi_{\perp})}$ is
created by sending focused Gaussian beams along
$\mathbf{l}_{\perp}=[\overline{1}21]$, with the foci at the center of the
target site. Here, $\varphi_{\perp}$ is determined by the phases of the laser
oscillators and the distance that the light travels from the laser sources to
the target atom; for brevity, we write
$\varphi_{\perp}=\mathbf{k}_{\perp}\cdot\mathbf{r}$, where
$\mathbf{k}_{\perp}$ is the wavevector of the fields. Furthermore,
$\Delta_{\perp}$ is the overall detuning of the two-photon transition between
the ground state $|1\rangle$ and the Rydberg state $|r\rangle$, which should
be much smaller than the detuning $\delta$ for the transition
$|1\rangle\rightarrow|e\rangle$. In our method, $\Delta$ is several MHz, thus
$\delta$ should be at least several hundred MHz. Meanwhile, another set of
lasers are sent along $\mathbf{l}_{\shortparallel}=[12\overline{1}]$ which
also focus at the same target atom. In method I, the lasers along
$\mathbf{l}_{\perp}$ and $\mathbf{l}_{\shortparallel}$ are of similar
wavelengths, but with some difference so that the two two-photon transitions
are built via very different one-photon detunings at the intermediate state.
In the same rotating frame, the lasers along $\mathbf{l}_{\shortparallel}$
drive the transition $|1\rangle\leftrightarrow|r\rangle$ by a Rabi frequency
$\Omega_{\shortparallel}=\Omega
e^{i(t\Delta_{\shortparallel}+\varphi_{\shortparallel})}$.
Our scheme requires the condition of
$e^{i\varphi_{\perp}},~{}e^{i\varphi_{\shortparallel}}=\pm 1$. Because both
$\varphi_{\perp}$ and $\varphi_{\shortparallel}$ are determined by the laser
sources and the length of the light path, it is possible to tune the laser
phases to the condition of
$e^{i\varphi_{\perp}},~{}e^{i\varphi_{\shortparallel}}=\pm 1$ if the position
fluctuation of qubits is negligible. For optically cooled qubits before each
experimental cycle, there is fluctuation of the qubit positions, leading to
fluctuation in $\varphi_{\perp}$ and $\varphi_{\shortparallel}$. For the
configuration in Fig. 2(b), we have
$|\mathbf{k}_{\perp}|=|\mathbf{k}_{\shortparallel}|\approx 5\times
10^{6}~\mathrm{m}^{-1}$ when the upper and lower fields propagate oppositely. This means
that the fluctuation of the qubit position along the light path should be much
smaller than $1~{}\mu$m to validate the method. In the experiment of Ref.
Graham _et al._ (2019), the transverse (longitudinal) position fluctuation of
the qubit is $0.27~{}(1.47)~{}\mu$m, which will lead to large fluctuation of
$\varphi_{\perp}$ and $\varphi_{\shortparallel}$ if similar traps are used
here. Thus it is necessary to use sufficiently deep traps and efficient
cooling. This is why the methods shown below cannot be implemented by sending
one laser field along $\mathbf{l}_{\perp}$ for a one-photon transition
$|1\rangle\rightarrow|e\rangle$, another field along
$\mathbf{l}_{\shortparallel}$ for the other one-photon transition
$|e\rangle\rightarrow|r\rangle$, where $|e\rangle$ is the intermediate state
shown in Fig. 2(b). This is because the wavevectors for these two one-photon
transitions are much larger, which makes the condition of
$e^{i\varphi_{\perp}}=e^{i\varphi_{\shortparallel}}=\pm 1$ even more
challenging to satisfy.
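A back-of-the-envelope check of these numbers (our own arithmetic): the phase spread is roughly $|\mathbf{k}|\,\delta z$, so the quoted position fluctuations translate into radian-scale phase errors:

```python
k = 5e6                                # m^-1, |k| for the configuration in Fig. 2(b)
for label, sigma in (("transverse", 0.27e-6), ("longitudinal", 1.47e-6)):
    dphi = k * sigma                   # phase spread in radians
    print(label, dphi)                 # both of order 1 rad or larger
```

Since enforcing $e^{i\varphi}=\pm 1$ requires phase errors far below a radian, this confirms that much tighter confinement than in the quoted experiment is needed.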
An alternative solution to the above issue is to use naturally existing
transitions with negligible wavevector. For instance, the Rydberg excitation
via two counterpropagating fields for $6^{1}S_{1}\rightarrow
6^{1}P_{1}\rightarrow n^{1}S_{0}$ (or $n^{1}D_{2}$) of ytterbium suffers from
a negligible Doppler dephasing because $k\lesssim 10^{5}~\mathrm{m}^{-1}$ (see, for
example, Fig.1 of Ref. Lehec _et al._ (2018)). In this latter case, it is
necessary to store quantum information in the nuclear spin states of, e.g.,
171Yb (or 173Yb) for the purpose of quantum computing with Rydberg
interaction. For Rydberg excitation of one of the two nuclear spin qubit
states, one can first put the other nuclear spin qubit state to the metastable
excited state ${}^{3}P_{0}$ to avoid leakage. However, it is beyond the scope
of this work to give details about this latter scheme of a Rydberg gate based on
171Yb. We assume that the fluctuation of $\varphi_{\perp}$ and
$\varphi_{\shortparallel}$ has been suppressed.
Below, Secs. III.1 and III.2 show two optical spin-echo methods for the
excitation of target atoms, termed as method I and method II. The above
discussion is applicable to method I. In method II, the lasers along
$\mathbf{l}_{\perp}$ are for the transition
$|1\rangle\leftrightarrow|r\rangle$, but those along
$\mathbf{l}_{\shortparallel}$ are for the transition between $|r\rangle$ and
another Rydberg state $|R\rangle$ via a low-lying intermediate state. Because
$|r\rangle$ is near $|R\rangle$, the phase $\varphi_{\shortparallel}$ can be
easily set in method II. The discussion about the phase above is applicable
for both $\varphi_{\shortparallel}$ and $\varphi_{\perp}$ in method I and
$\varphi_{\perp}$ in method II.
It is useful to briefly show why two methods are introduced. For brevity, we
use “one pulse” when the laser fields along $\mathbf{l}_{\perp}$ and
$\mathbf{l}_{\shortparallel}$ irradiate the system simultaneously for a
certain duration. In method I, one pulse is used for the Rydberg excitation,
and the nontarget atoms can have some residual population in the Rydberg state
when the target atom is excited to the Rydberg state. After the second pulse,
both the target and nontarget atoms return to the ground state. No phase shift
occurs for the target atom because of the spin echo. In method II, two pulses
are used for Rydberg excitation. The two pulses form an optical spin echo
sequence for the nontarget atoms so that they have no population in the
Rydberg state when the target atom is in the Rydberg state. Similarly, two
similar pulses can pump the target atom back to its ground state. Because the
target atom experiences no spin echo, it can accumulate a $\pi$ phase shift
upon its state restoration. When there is Rydberg interaction between
nontarget atoms, a microwave spin echo is used to reverse the sign of the
Rydberg interaction, shown in Sec. III.3. In Sec. III.4, we study the issue of
divergence of the laser beam. In Sec. III.5, we study the application of the
methods in the Rydberg blockade gate. We give a detailed comparison between
the two methods in Sec. III.6.
Figure 2: (a) Diagram of targeting one atom by two detuned laser beams in a 3D
lattice of rubidium atoms. Origin of the coordinate system is at the center of
the target site. The quantization axis is along $[101]$; the two light paths
are along $\mathbf{l}_{\perp}=[\overline{1}21]$ and
$\mathbf{l}_{\shortparallel}=[12\overline{1}]$, respectively. The red atom
labeled as “C” is the target atom. Atom $E$ and atom $F$ (atom $A$ and atom
$G$) are nontarget atoms at the beam axis of fields along $\mathbf{l}_{\perp}$
($\mathbf{l}_{\shortparallel}$). (b) Atomic transition diagram. Both the light
fields propagating along $\mathbf{l}_{\perp}$ and
$\mathbf{l}_{\shortparallel}$ are polarized along the quantization axis,
inducing the same transition. The detuning for the transition
$|1\rangle\rightarrow|e\rangle$ is much larger than the two-photon detuning
$\Delta$. (c) Time-dependence of the Rabi frequencies for method I. The
effective Rabi frequency is $2i\Omega\sin(\Delta t)$ at the target atom.
### III.1 Method I: one pulse for Rydberg excitation
We first show Rydberg excitation of the target atom by one pulse. There are
three stages to implement the optical spin echo, two optical pumping and one
wait time. The wait can have a duration of $t_{\mu}=2\pi/\Delta$, during which
a microwave field can be used to induce a transition between two different
Rydberg states as discussed later; for Rydberg blockade gate, another atom
near the target atom can be excited to the Rydberg state during the wait.
In the first stage, laser fields along $\mathbf{l}_{\perp}$ and along
$\mathbf{l}_{\shortparallel}$ induce the following transition
$\displaystyle\hat{H}_{\perp}^{(1)}$ $\displaystyle=$
$\displaystyle\Omega_{\perp}e^{it\Delta}|r\rangle_{\perp}\langle
1|/2+\text{H.c.},$ $\displaystyle\hat{H}_{\shortparallel}^{(1)}$
$\displaystyle=$
$\displaystyle-\Omega_{\shortparallel}e^{-it\Delta}|r\rangle_{\shortparallel}\langle
1|/2+\text{H.c.}$ (3)
during $t\in[0,~{}t_{0})$, where $t_{0}=\pi/\Delta$ and the subscript
$\alpha=\perp(\shortparallel)$ distinguishes notations for atoms in the two
different light paths. For Gaussian beams, the values of $\Omega_{\alpha}$
reach their maximum at the target atom, where $\Omega=\pi\Delta/4$. The
above Hamiltonian is in a rotating frame with
$\hat{\mathcal{R}}=\omega|r\rangle\langle r|$. To show the echo for the
nontarget atoms, we use the rotating frame with
$\hat{\mathcal{R}}_{\mp}\equiv\hat{\mathcal{R}}\mp\Delta|r\rangle\langle r|$
for the atoms along the two different light paths, and use $\psi~{}(\Psi)$ to
denote the wavefunction in the interaction (Schrödinger) picture. The
wavefunctions in the frame $\hat{\mathcal{R}}_{\mp}$ are
$\displaystyle|\psi(t)\rangle_{\perp}$ $\displaystyle=$ $\displaystyle
e^{it\hat{\mathcal{R}}_{-}}|\Psi(t)\rangle_{\perp},$
$\displaystyle|\psi(t)\rangle_{\shortparallel}$ $\displaystyle=$
$\displaystyle e^{it\hat{\mathcal{R}}_{+}}|\Psi(t)\rangle_{\shortparallel},$
and the Hamiltonians are
$\displaystyle\hat{\mathcal{H}}_{\perp}^{(1)}$ $\displaystyle=$
$\displaystyle\Delta|r\rangle_{\perp}\langle
r|+\Omega_{\perp}(|r\rangle_{\perp}\langle 1|+\text{H.c.})/2,$
$\displaystyle\hat{\mathcal{H}}_{\shortparallel}^{(1)}$ $\displaystyle=$
$\displaystyle-\Delta|r\rangle_{\shortparallel}\langle
r|-\Omega_{\shortparallel}(|r\rangle_{\shortparallel}\langle
1|+\text{H.c.})/2.$ (4)
The wavefunction evolves according to
$e^{-i\hat{\mathcal{H}}_{\alpha}t}|\psi(0)\rangle_{\alpha}$, where
$|\psi(0)\rangle_{\alpha}=|1\rangle_{\alpha}$ is the initial wavefunction. In
Eqs. (3) and (4), the Rabi frequency $\Omega$ for the target atom is equal to
the value of $\Omega_{\perp}$ (or $\Omega_{\shortparallel}$) at the foci of
the laser fields. Because of the Gaussian profile of the fields, $\Omega$ is
larger than $\Omega_{\perp}$ and $\Omega_{\shortparallel}$ of any nontarget
atom. Equation (3) means that the state of the target atom evolves according
to the Hamiltonian
$\displaystyle\hat{H}(t)$ $\displaystyle=$ $\displaystyle
i\Omega\sin(t\Delta)(|r\rangle\langle 1|-|1\rangle\langle r|)$ (5)
in the rotating frame of $\hat{\mathcal{R}}$, and one can show that when the
initial state of the target atom is $|1\rangle$, its state becomes $|r\rangle$
at the end of the first pulse.
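To make this last statement concrete, note that $\hat{H}(t)$ in Eq. (5) commutes with itself at all times, so the pulse can be integrated directly; the following Python sketch (an illustration added here, not from the original work) verifies the complete transfer $|1\rangle\rightarrow|r\rangle$ at $t_{0}=\pi/\Delta$ for $\Omega=\pi\Delta/4$.

```python
import numpy as np

# H(t) = i*Omega*sin(Delta*t)*(|r><1| - |1><r|), Eq. (5); in the basis
# ordering (|1>, |r>) the Schrodinger equation dpsi/dt = -i H psi becomes
# dpsi/dt = Omega*sin(Delta*t) * A @ psi with a real antisymmetric A.
A = np.array([[0.0, -1.0],
              [1.0, 0.0]])

def evolve(Delta, Omega, t_final, steps=4000):
    psi = np.array([1.0, 0.0])        # initial state |1>
    dt = t_final / steps
    f = lambda t, y: Omega * np.sin(Delta * t) * (A @ y)
    t = 0.0
    for _ in range(steps):            # classical fourth-order Runge-Kutta
        k1 = f(t, psi)
        k2 = f(t + dt / 2, psi + dt / 2 * k1)
        k3 = f(t + dt / 2, psi + dt / 2 * k2)
        k4 = f(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return psi**2                      # populations (psi stays real here)

Delta = 1.0                            # work in units of Delta
p1, pr = evolve(Delta, np.pi * Delta / 4, np.pi / Delta)
# pr is ~1: the target atom ends the first pulse in |r>
```

The analytic solution is a rotation by the angle $\Omega(1-\cos t\Delta)/\Delta$, which equals $\pi/2$ at $t_{0}$ for $\Omega=\pi\Delta/4$, consistent with the numerical result.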
Second, a wait of duration $t_{\mu}$ is inserted, during which nothing is done. If we would like to suppress the many-body effect as discussed later, a microwave
transition can be used to transfer the state $|r\rangle$ to another Rydberg
state $|r^{\prime}\rangle$. There can be a phase difference between $\langle
r|\psi(t_{0})\rangle$ and $\langle r^{\prime}|\psi(t_{0}+t_{\mu})\rangle$
determined by the microwave fields, and the subsequent laser fields in the
third stage should appropriately compensate this phase. But to show the
essence of the optical spin echo, we assume nothing is done during the wait.
The state of a nontarget atom evolves to
$|\Psi(t_{0}+t_{\mu})\rangle_{\perp}=e^{-it_{\mu}\hat{\mathcal{R}}}e^{-it_{0}\hat{\mathcal{R}}_{-}}e^{-it_{0}\hat{\mathcal{H}}_{\perp}^{(1)}}|\psi(0)\rangle_{\perp},$
$|\Psi(t_{0}+t_{\mu})\rangle_{\shortparallel}=e^{-it_{\mu}\hat{\mathcal{R}}}e^{-it_{0}\hat{\mathcal{R}}_{+}}e^{-it_{0}\hat{\mathcal{H}}_{\shortparallel}^{(1)}}|\psi(0)\rangle_{\shortparallel},$ (6)
at the end of the wait.
Third, during $t\in[t_{0}+t_{\mu},~{}2t_{0}+t_{\mu})$, laser fields along both
$\mathbf{l}_{\perp}$ and $\mathbf{l}_{\shortparallel}$ are sent for the
following transitions,
$\hat{H}_{\perp}^{(2)}=-\Omega_{\perp}e^{-it\Delta}|r\rangle_{\perp}\langle 1|/2+\text{H.c.},\qquad\hat{H}_{\shortparallel}^{(2)}=\Omega_{\shortparallel}e^{it\Delta}|r\rangle_{\shortparallel}\langle 1|/2+\text{H.c.},$ (7)
so that the Hamiltonian for the target atom is still given by Eq. (5). When
$t_{\mu}=2\pi/\Delta$, one can show that the state of the target atom is
$|\Psi(t)\rangle=|1\rangle$ at the time $t=2t_{0}+t_{\mu}$. Now, we use the rotating frame with $\hat{\mathcal{R}}_{\pm}$ for the atoms along the two different light paths, so that the Hamiltonians become
$\hat{\mathcal{H}}_{\perp}^{(2)}=-\Delta|r\rangle_{\perp}\langle r|-\Omega_{\perp}(|r\rangle_{\perp}\langle 1|+\text{H.c.})/2,$
$\hat{\mathcal{H}}_{\shortparallel}^{(2)}=\Delta|r\rangle_{\shortparallel}\langle r|+\Omega_{\shortparallel}(|r\rangle_{\shortparallel}\langle 1|+\text{H.c.})/2.$ (8)
The state of the nontarget atoms becomes,
$|\psi(2t_{0}+t_{\mu})\rangle_{\perp}=e^{-it_{0}\hat{\mathcal{H}}_{\perp}^{(2)}}e^{i(t_{0}+t_{\mu})\hat{\mathcal{R}}_{+}}e^{-it_{\mu}\hat{\mathcal{R}}}e^{-it_{0}\hat{\mathcal{R}}_{-}}e^{-it_{0}\hat{\mathcal{H}}_{\perp}^{(1)}}|\psi(0)\rangle_{\perp},$
$|\psi(2t_{0}+t_{\mu})\rangle_{\shortparallel}=e^{-it_{0}\hat{\mathcal{H}}_{\shortparallel}^{(2)}}e^{i(t_{0}+t_{\mu})\hat{\mathcal{R}}_{-}}e^{-it_{\mu}\hat{\mathcal{R}}}e^{-it_{0}\hat{\mathcal{R}}_{+}}e^{-it_{0}\hat{\mathcal{H}}_{\shortparallel}^{(1)}}|\psi(0)\rangle_{\shortparallel}.$ (9)
Because
$e^{i(t_{0}+t_{\mu})\hat{\mathcal{R}}_{+}}e^{-it_{\mu}\hat{\mathcal{R}}}e^{-it_{0}\hat{\mathcal{R}}_{-}}=e^{4i\pi|r\rangle\langle r|}$, this product is equal to the identity, since it acts on either $|r\rangle$ or $|1\rangle$; similarly,
$e^{i(t_{0}+t_{\mu})\hat{\mathcal{R}}_{-}}e^{-it_{\mu}\hat{\mathcal{R}}}e^{-it_{0}\hat{\mathcal{R}}_{+}}=\hat{1}$.
Because $\hat{\mathcal{H}}_{\alpha}^{(2)}=-\hat{\mathcal{H}}_{\alpha}^{(1)}$,
we have
$|\psi(2t_{0}+t_{\mu})\rangle_{\perp}=e^{-it_{0}\hat{\mathcal{H}}_{\perp}^{(2)}}e^{-it_{0}\hat{\mathcal{H}}_{\perp}^{(1)}}|\psi(0)\rangle_{\perp}=|\psi(0)\rangle_{\perp},$
$|\psi(2t_{0}+t_{\mu})\rangle_{\shortparallel}=e^{-it_{0}\hat{\mathcal{H}}_{\shortparallel}^{(2)}}e^{-it_{0}\hat{\mathcal{H}}_{\shortparallel}^{(1)}}|\psi(0)\rangle_{\shortparallel}=|\psi(0)\rangle_{\shortparallel},$ (10)
where $\alpha=\perp,\shortparallel$. So, the state of the nontarget atom is restored whatever the magnitude of $\Omega_{\alpha}$ is. The value of $t_{\mu}=2\pi/\Delta$ is chosen so that Eq. (9) can be simplified to Eq. (10). In fact, it can be an arbitrary value of
$t_{\mu}^{\prime}\in(0,~{}t_{\mu})$, but each set of the laser fields used
during the third stage should have an extra phase
$\varphi=t_{\mu}^{\prime}\Delta$ so that
$(\Omega_{\perp},~{}\Omega_{\shortparallel})$ becomes
$(\Omega_{\perp}e^{i\varphi},~{}\Omega_{\shortparallel}e^{-i\varphi})$.
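The phase bookkeeping behind the identity used above can be made explicit: the frame rotations all commute because each is proportional to $|r\rangle\langle r|$, so their exponents add, and with $t_{0}=\pi/\Delta$ and $t_{\mu}=2\pi/\Delta$ the accumulated phase on $|r\rangle$ is

$(t_{0}+t_{\mu})(\omega+\Delta)-t_{\mu}\omega-t_{0}(\omega-\Delta)=(2t_{0}+t_{\mu})\Delta=4\pi,$

which reproduces $e^{4i\pi|r\rangle\langle r|}=\hat{1}$; the $\shortparallel$ case yields $-4\pi$ and likewise reduces to $\hat{1}$.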
Figure 3: Numerical result for the excitation and deexcitation of Rydberg
state $|r\rangle$ in Method I. (a) and (b) show the time evolution of the
population in $|1\rangle$ (solid curve) and $|r\rangle$ (dashed curve) for the
nontarget atom and target atom, respectively, with
$(\Delta,~{}\Omega)/2\pi=(4,~{}\pi)$ MHz, $t_{\mu}=2\pi/\Delta$, and the Rabi
frequency for the nontarget atom is $\Omega_{\perp}=0.73\Omega$. The same
state evolution occurs if $t_{\mu}=0$.
As shown above, the merit of the method is that it can avoid bringing other
atoms to Rydberg states upon the completion of the deexcitation of the target
atom. The method works well when the light intensity is different at atoms
along the light path. The focused Gaussian beam, even with a very small waist,
can still have a Rayleigh length as large as $X=26~{}\mu$m Wang _et al._
(2016). At a spot away from the foci by $\mathfrak{l}_{0}$ for the beam
propagating along $\mathbf{l}_{\shortparallel}$, the laser intensity at the
beam axis is smaller than that at the foci by about
$X^{2}/(X^{2}+\mathfrak{l}_{0}^{2})$. This fact leads to different magnitudes of the Rabi frequency at different nontarget atoms, and this variation means that it is impossible to simultaneously restore the states of all the atoms along the light path unless spin echo is used.
We continue to analyze the condition when the blockade interaction is
included. If the lattice constant is $\mathcal{L}=10~{}\mu$m Isenhower _et
al._ (2010), the values of $\Omega_{\alpha}=\Omega_{\shortparallel}$ (or
$\Omega_{\perp}$) for a nontarget atom and $\Omega$ of the target atom are
comparable because there are only two nontarget atoms in the light path along
$\mathbf{l}_{\shortparallel}$, denoted as atom $A$ and atom $G$ in Fig. 2(a).
Atom $A$ is away from the foci by $\mathfrak{l}_{0}=\pm\sqrt{6}\mathcal{L}$,
and thus its Rabi frequency $\Omega_{\alpha}$ is only about $0.73\Omega$ when
$X=26~{}\mu$m. We suppose that the van der Waals interaction between Rydberg
atoms is $V=C_{6}/\mathcal{L}^{6}$ and $V/\Omega\approx 12$ [see text
following Eq. (28) for the reason of this choice]. Then, the blockade
interaction between the target atom and the nontarget atom is only $V/216$. On
the other hand, the Rydberg population in the nontarget atom (if it is
initialized in the state $|1\rangle$) is tiny, as shown above, and thus the
influence of the blockade interaction for the excitation of the target atom is
negligible. For the same reason, the Rydberg blockade does not perturb the
spin-echo time evolution in the nontarget atom since
$V/216\ll\Omega_{\alpha}$. So, the Rydberg addressing of one atom in a small
3D lattice with 45 sites in Fig. 2 is achievable.
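The $0.73\Omega$ estimate follows directly from the intensity falloff $X^{2}/(X^{2}+\mathfrak{l}_{0}^{2})$; a minimal check in Python (variable names are ours):

```python
import math

X = 26.0                     # Rayleigh length in micrometers (Wang et al. 2016)
L = 10.0                     # lattice constant in micrometers
l0 = math.sqrt(6) * L        # distance of atom A from the foci
# intensity falls off as X^2/(X^2 + l0^2), so the Rabi frequency
# (proportional to the field amplitude) falls off as the square root
ratio = X / math.sqrt(X**2 + l0**2)   # ~0.73
```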
Using Eqs. (3) and (7) in the interaction picture with $\hat{\mathcal{R}}$, we
numerically study the state evolution of the target atom and the nontarget
atom with parameters $(\Delta,~{}\Omega)/2\pi=(4,~{}\pi)$ MHz,
$t_{\mu}=2\pi/\Delta$ and, as an example,
$\Omega_{\perp}=\Omega_{\shortparallel}=0.73\Omega$. Because the population
evolution for the nontarget atom $\alpha=\perp$ is the same as that for
$\alpha=\shortparallel$, we take $\alpha=\perp$ as an example and ignore the
Rydberg blockade. The results are shown in Fig. 3, where the population in
$|r\rangle$ is 0.23 during $t\in[t_{0},~{}t_{0}+t_{\mu})$ for the nontarget
atom when the target atom is in the Rydberg state. As a result of the optical
spin echo, the state of the nontarget atom returns to the ground state at the
end of the pulse sequence.
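The dynamics in Fig. 3(a) can be reproduced with elementary linear algebra; the sketch below (our illustration, not the authors' code) propagates a nontarget atom with $\hat{\mathcal{H}}_{\perp}^{(1)}$ of Eq. (4) and then with $\hat{\mathcal{H}}_{\perp}^{(2)}=-\hat{\mathcal{H}}_{\perp}^{(1)}$, recovering both the transient Rydberg population near 0.23 and the exact state restoration.

```python
import numpy as np

def propagator(H, t):
    # exp(-i H t) for a Hermitian H via eigendecomposition
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

Delta = 1.0                       # work in units of Delta
Omega = np.pi * Delta / 4         # target-atom Rabi frequency
Omega_p = 0.73 * Omega            # nontarget-atom Rabi frequency (Sec. III.1)
t0 = np.pi / Delta

# H_perp^(1) = Delta|r><r| + Omega_p(|r><1| + H.c.)/2 in the basis (|1>, |r>)
H1 = np.array([[0, Omega_p / 2],
               [Omega_p / 2, Delta]], dtype=complex)

psi = propagator(H1, t0) @ np.array([1, 0], dtype=complex)
p_r_wait = abs(psi[1])**2         # Rydberg population during the wait, ~0.23
psi = propagator(-H1, t0) @ psi   # second pulse: H^(2) = -H^(1)
p_1_final = abs(psi[0])**2        # restored to ~1, whatever Omega_p is
```

Because the second propagator is exactly the inverse of the first, the restoration holds for any $\Omega_{\perp}$, which is the essence of the optical spin echo.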
One concern about the applicability of this method is that there is Rydberg-
state decay in any irradiated nontarget atom during the pulse sequence, which
can cause decoherence for many qubits. But the decay is negligible: if the
state is $|\psi(0)\rangle=|1\rangle$ for a nontarget atom, the probability of
decay for the atom is $T_{\text{de}}/\tau$, where $\tau$ is the lifetime of
the Rydberg state, and
$T_{\text{de}}=\int_{0}^{2t_{0}+t_{\mu}}|\langle r|\Psi(t)\rangle|^{2}dt,$ (11)
which is $0.093~{}\mu$s and $0.375~{}\mu$s for the processes in Figs. 3(a) and
3(b), respectively. For an $s$\- or $d$-orbital state of rubidium with principal quantum number $100$, $\tau$ is about $320~{}\mu$s at a temperature of $300$ K
Beterov _et al._ (2009), which gives a decay error of about
$E_{\text{decay-n}}=2.9\times 10^{-4}$ for the nontarget atom, and $1.2\times
10^{-3}$ for the target atom; when the sequence in Fig. 3 is used for the
pumping of the control qubit in the Rydberg blockade gate Jaksch _et al._
(2000), the decay of one nontarget atom contributes an error
$E_{\text{decay-n}}/2$ to the gate fidelity. Moreover, the state of a
nontarget qubit can also be $|0\rangle$, which does not respond to the light
irradiation. The farther the nontarget atom is from the foci, the smaller the
value of $\Omega_{\perp}$ will be. With $\Omega_{\perp}=0.3\Omega$, we have
$T_{\text{de}}=0.02~{}\mu$s, which gives a decay error about five times
smaller than that in Fig. 3(a). These analyses show that the Rydberg-state
decay for the nontarget atoms is negligible.
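The error estimates above are simple ratios $T_{\text{de}}/\tau$; for transparency, the arithmetic with the values quoted in the text:

```python
tau = 320.0              # Rydberg lifetime in microseconds (Beterov et al. 2009)
T_de_nontarget = 0.093   # integrated Rydberg time, Fig. 3(a), microseconds
T_de_target = 0.375      # integrated Rydberg time, Fig. 3(b), microseconds

E_nontarget = T_de_nontarget / tau   # ~2.9e-4
E_target = T_de_target / tau         # ~1.2e-3
E_far = 0.02 / tau                   # Omega_perp = 0.3*Omega case, ~6e-5
```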
### III.2 Method II: two-pulse excitation of Rydberg states
For high fidelity quantum control, it is desirable to avoid Rydberg-state
decay whenever possible. The scheme in Sec. III.1 has a wait time during which
the nontarget atoms have some probability in the Rydberg state, shown in Fig.
3(a). This leads to extra Rydberg-state decay that hampers the protocol.
Besides, the target atom cannot acquire any phase shift upon its state
restoration because of the spin echo. In this section, we show a method that
leaves no Rydberg population in the nontarget atom when the target atom is
excited to the Rydberg state. Moreover, the target atom can have a $\pi$ phase
shift when its state is restored. For brevity, we only show the Rydberg
excitation of the target atom since its deexcitation is achieved with a
similar process.
In the first pulse of duration $t_{0}=\pi/\Delta$, laser fields along
$\mathbf{l}_{\perp}$ similar to that in Eq. (3) are used for the following
transition,
$\hat{H}_{\perp}^{(1)}=\Omega_{\perp}e^{it\Delta}|r\rangle_{\perp}\langle 1|/2+\text{H.c.},$ (12)
and meanwhile laser fields along $\mathbf{l}_{\shortparallel}$ induce a two-
photon transition between $|r\rangle$ and $|R\rangle$ via a low-lying
intermediate state $|p\rangle$,
$\hat{H}_{\shortparallel}^{(1)}=\Omega_{\shortparallel}e^{-it\Delta}|R\rangle_{\shortparallel}\langle r|/2+\text{H.c.},$ (13)
where $|p\rangle$ should be a state higher than $|e\rangle$, and the fields
used for $|r\rangle\rightarrow|p\rangle\rightarrow|R\rangle$ should be optical
or infrared.
After the first pulse, a wait duration $t_{\mu}$ elapses as in Sec. III.1,
where microwave pumping can be used if necessary Shi (2018a). As shown in Sec.
III.1, the duration of the wait can be adjusted without altering the optical
spin echo, and hence we can assume $t_{\mu}=0$ for brevity.
In the second pulse of duration $t_{0}$, laser fields along
$\mathbf{l}_{\perp}$ are sent for the following transition,
$\hat{H}_{\perp}^{(2)}=-\Omega_{\perp}e^{-it\Delta}|r\rangle_{\perp}\langle 1|/2+\text{H.c.},$ (14)
while the laser fields along $\mathbf{l}_{\shortparallel}$ are used for
$\hat{H}_{\shortparallel}^{(2)}=\Omega_{\shortparallel}e^{it\Delta}|R\rangle_{\shortparallel}\langle r|/2+\text{H.c.}.$ (15)
Whatever the magnitude of $\Omega_{\perp}$ is, the state of a nontarget atom
irradiated by the laser fields along $\mathbf{l}_{\perp}$ can return to its
initial state, as shown in Sec. III.1. Because there is no population in
$|r\rangle$ or $|R\rangle$ for any nontarget atom irradiated by the fields
along $\mathbf{l}_{\shortparallel}$, those atoms are unaffected. Below, we study
the state evolution for the target atom.
Figure 4: Numerical result by Eqs. (12)-(15) for the excitation of the Rydberg
state $|R\rangle$ in Method II. (a) shows the time evolution of the population
in $|1\rangle$ (solid curve) and $|r\rangle$ (dotted curve) for the nontarget
atom $E$ (or atom $F$, see Fig. 2). The Rabi frequency for the nontarget atom
$E$ is $\Omega_{\perp}=0.73\Omega$ as estimated in Sec. III.1. (b) shows the
time evolution of the population in $|1\rangle$ (solid curve), $|R\rangle$
(dashed curve), and $|r\rangle$ (dotted curve) for the target atom (labeled as
atom $C$ in Fig. 2), with $\Delta/2\pi=4$ MHz, $\Omega=1.2247\Delta$, and
$t_{\mu}=0$. In (b), the initial state is $|1\rangle$, and the final state is
$i|R\rangle$ with an error smaller than $10^{-8}$. The same optical spin echo
for the nontarget atom and the Rydberg excitation of the target atom are
achieved with $t_{\mu}=2\pi/\Delta$.
Equations (12)-(15) are written in a rotating frame with
$\hat{\mathcal{R}}=\omega|r\rangle\langle r|+\omega_{R}|R\rangle\langle R|,$ (16)
where $\omega_{R}$ is the energy (divided by $\hbar$) of the state
$|R\rangle$. If we use a rotating frame with
$\hat{\mathcal{R}}_{\mp}=\hat{\mathcal{R}}\mp\Delta|r\rangle\langle r|$ for
the first (second) pulse, the Hamiltonians of the target atom become
$\hat{\mathcal{H}}^{(1)}=\Delta|r\rangle\langle r|+(\Omega_{\shortparallel}|R\rangle\langle r|+\Omega_{\perp}|r\rangle\langle 1|+\text{H.c.})/2,$
$\hat{\mathcal{H}}^{(2)}=-\Delta|r\rangle\langle r|+(\Omega_{\shortparallel}|R\rangle\langle r|-\Omega_{\perp}|r\rangle\langle 1|+\text{H.c.})/2.$ (17)
For $\Delta\gg\Omega_{\perp},\Omega_{\shortparallel}$, an effective pumping
emerges between $|1\rangle$ and $|R\rangle$ with a Rabi frequency of
$\Omega_{\perp}\Omega_{\shortparallel}/(2\Delta)$ for both
$\hat{\mathcal{H}}^{(1)}$ and $\hat{\mathcal{H}}^{(2)}$. But the time for a
complete excitation of $|R\rangle$ is
$2\pi\Delta/(\Omega_{\perp}\Omega_{\shortparallel})$. If $N$ cycles of the optical spin-echo sequence are used, the condition $\Delta/\sqrt{\Omega_{\perp}\Omega_{\shortparallel}}=\sqrt{N}$ leads to a complete excitation of the target qubit. However, using many cycles lengthens the sequence, so for efficient quantum control the case of $N=1$ is best. We numerically found that with
$\Omega_{\perp}=\Omega_{\shortparallel}=1.2247\Delta$, a complete excitation
of the ground state $|1\rangle$ to the Rydberg state $i|R\rangle$ is achieved
with one spin-echo cycle, shown in Fig. 4; the residual population in
$|1\rangle$ and $|r\rangle$ is $1.9(5.6)\times 10^{-9}$ at the time of
$t=2t_{0}$ in Fig. 4(b). The time for the nontarget atom [atom $E$ of Fig.
2(a)] to be in the Rydberg state is $T_{\text{de}}=0.07~{}\mu$s in Fig. 4(a),
which means that the decay probability for the nontarget atom is
$T_{\text{de}}/\tau=2\times 10^{-4}$ if $\tau=320~{}\mu$s, i.e., negligible.
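The transfer $|1\rangle\rightarrow i|R\rangle$ can be verified by composing the two pulse propagators of $\hat{\mathcal{H}}^{(1)}$ and $\hat{\mathcal{H}}^{(2)}$ above; in the sketch below (our illustration, not the authors' code) we take $\Omega_{\perp}=\Omega_{\shortparallel}=\sqrt{3/2}\,\Delta\approx 1.2247\Delta$.

```python
import numpy as np

def propagator(H, t):
    # exp(-i H t) for a Hermitian H via eigendecomposition
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

Delta = 1.0
Omega = np.sqrt(1.5) * Delta       # ~1.2247*Delta, the numerically found value
t0 = np.pi / Delta

# basis ordering (|1>, |r>, |R>); H^(1) and H^(2) as in Eq. (17)
H1 = np.array([[0, Omega / 2, 0],
               [Omega / 2, Delta, Omega / 2],
               [0, Omega / 2, 0]], dtype=complex)
H2 = np.array([[0, -Omega / 2, 0],
               [-Omega / 2, -Delta, Omega / 2],
               [0, Omega / 2, 0]], dtype=complex)

psi = propagator(H2, t0) @ propagator(H1, t0) @ np.array([1, 0, 0], dtype=complex)
# psi is (numerically) i|R>: no residual population in |1> or |r>
```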
Figure 4(a) shows the population dynamics for atom $E$ of Fig. 2. As estimated in the third-to-last paragraph of Sec. III.1,
if the Rabi frequency in Eq. (12) for atom $C$ is $\Omega$, then atom $E$ is
pumped according to Eq. (12) with a Rabi frequency $0.73\Omega$, which was
used in Fig. 4(a). The population of $|R\rangle$ is ignored in Fig. 4(a) because the field amplitude for the transition
$|r\rangle\leftrightarrow|R\rangle$ at atom $E$ is negligible: the distance
from $E$ to the beam axis of the field propagating along
$\mathbf{l}_{\shortparallel}$ is $r_{\perp,E}\approx 2.3\mathcal{L}$, while
atom $B$ is only away from this beam axis by about $r_{\perp,B}\approx
0.46\mathcal{L}$. The field drops off as $e^{-r_{\perp}^{2}/w^{2}}$ at a point a distance $r_{\perp}$ from the beam axis,
where $w$ is the beam radius. As detailed later below Eq. (22), we can suppose
that the field amplitude at atom $B$ is no more than $1/e$ of that at the beam
axis, i.e., $e^{-r_{\perp,B}^{2}/w^{2}}\leq 1/e$. Because
$r_{\perp,E}^{2}/r_{\perp,B}^{2}\approx 25$, the field amplitude for the
transition $|r\rangle\leftrightarrow|R\rangle$ at atom $E$ can be ignored.
Similarly, we can ignore any pumping of atom $A$ (or atom $G$) by the field
propagating along $\mathbf{l}_{\perp}$. In Fig. 2(a), there are other atoms
very near to the beam axes, such as atom $D$. But it is away from the beam
axis of the fields along either $\mathbf{l}_{\perp}$ or
$\mathbf{l}_{\shortparallel}$ by about $0.9\mathcal{L}\approx 2r_{\perp,B}$,
which means that the field amplitude for either beam at atom $D$ is smaller
than $1/e^{4}$ of that at the beam axis if conditions like
$e^{-r_{\perp,B}^{2}/w^{2}}\leq 1/e$ apply to both beams. This means that only
the target atom, i.e., atom $C$ experiences pumping by the two laser beams.
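The suppression factors in this argument follow from the Gaussian transverse profile; a short check (our variable names, with the assumption $e^{-r_{\perp,B}^{2}/w^{2}}=1/e$, i.e., $w=r_{\perp,B}$):

```python
import math

L = 1.0                      # lattice constant (arbitrary units)
r_B = 0.46 * L               # atom B's distance from the beam axis
r_E = 2.3 * L                # atom E's distance from the l_parallel beam axis

w2 = r_B**2                  # assume e^{-r_B^2/w^2} = 1/e, so w = r_B
rel_E = math.exp(-r_E**2 / w2)          # suppression at atom E, ~e^{-25}
rel_D = math.exp(-(2 * r_B)**2 / w2)    # atom D at ~2 r_B off axis: e^{-4}
```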
In Fig. 4(b), the final phase $\pi/2$ for the state $|R\rangle$ arises in
response to the sequential pumping with
$(\hat{H}_{\perp}^{(1)},\hat{H}_{\shortparallel}^{(1)})$ and
$(\hat{H}_{\perp}^{(2)},\hat{H}_{\shortparallel}^{(2)})$ defined in Eqs.
(12)-(15); the same pulse sequence as in Fig. 4 can deexcite the target atom
from $i|R\rangle$ to $-|1\rangle$ which can lead to the conditional $\pi$
phase if method II is used for the Rydberg blockade gate Jaksch _et al._
(2000). Note that if only
$(\hat{H}_{\perp}^{(1)},\hat{H}_{\shortparallel}^{(1)})$ is used for Rydberg
excitation with a time of $2t_{0}$, the final state of the target atom is
$-|R\rangle$, but a subsequent deexcitation by this sequence can not lead to a
conditional $\pi$ phase for entanglement generation.
Because the energy difference between $|r\rangle$ and $|R\rangle$ is smaller
than that between $|r(1)\rangle$ and $|p\rangle$ by several orders of
magnitude, the wavevector for $\Omega_{\shortparallel}$ is negligible when the
field for $|r\rangle\rightarrow|p^{\prime}\rangle$ and the field for
$|p^{\prime}\rangle\rightarrow|R\rangle$ copropagate, where
$|p(p^{\prime})\rangle$ is the intermediate state for
$|r\rangle\leftrightarrow|1(R)\rangle$. Thus, it is easy to set the phase for
$\Omega_{\shortparallel}$ although that for $\Omega_{\perp}$ is still an
issue. In this sense, the position distribution of the qubit will induce error
only through the phase fluctuation of $\Omega_{\perp}$, and the fidelity for
method II can be much larger than that of method I. For more detail about this
issue, see the third paragraph at the beginning of Sec. III.
### III.3 Rydberg addressing in a relatively large 3D lattice
For a nontarget atom away from the foci of the Gaussian beam, the relevant
Rydberg blockade can be comparable to the Rabi frequency $\Omega_{\alpha}$ for
the nontarget atom in the light path along $\mathbf{l}_{\alpha}$, where
$\alpha=\perp,\shortparallel$. Then, we should include the blockade in the
analysis about the state evolution of the nontarget atom. First of all,
because any nontarget atom lying away from the beam axis is pumped by a very
small field, its population in Rydberg states is negligible, and to a first approximation it does not induce a dynamical phase on the target atom. Second,
the blockade between two atoms along the beam axis is $V/216$ as shown in Sec.
III.1, so the blockade interaction between the target atom and a nontarget
atom can be neglected. Then, we can separate the atoms influenced by the
lasers into two groups: the target atom, and all the nontarget atoms. These
two groups do not disturb each other. But there is many-body physics in the
second group: for two nontarget atoms away from the foci of the laser beams by
$\mathfrak{l}_{0}\gg X$, $\Omega_{\alpha}$ can be comparable to the
interaction $V/216$ between them because $\Omega_{\alpha}/\Omega\propto
X/\sqrt{X^{2}+\mathfrak{l}_{0}^{2}}$. So, the single-particle optical spin
echo in Secs. III.1 and III.2 is not sufficient to restore the state of the
nontarget atoms.
#### III.3.1 Microwave spin echo for method I
A microwave spin-echo sequence can eliminate the many-body imprinted Rydberg
excitation in the nontarget atoms. As an example, we take method I shown in
Sec. III.1 to explain this. The required modification is that between the two
stages of time evolution shown in Hamiltonian (3) and (7), we should add a
two-photon microwave transition between $|r\rangle$ and another Rydberg state
$|r^{\prime}\rangle$
$\hat{H}_{\mu}=i\Omega_{\mu}|r^{\prime}\rangle\langle r|/2+\text{H.c.},$ (18)
where the microwave field covers all atoms in the system, but will only
influence states that are initially in $|r\rangle$; one can assume that the
strength of the microwave field is the same for all atoms in the system. The
Rabi frequency of this microwave field should be much larger than the blockade
interactions between any two atoms in the problem, and the signs of the
blockade interactions of $|rr\rangle$ and $|r^{\prime}r^{\prime}\rangle$
should be opposite. This latter requirement is achievable if $|r\rangle$ and
$|r^{\prime}\rangle$ are $s$\- and $d$-orbital states of 87Rb, respectively
Walker and Saffman (2008). If the ratio between the $C_{6}$ coefficients of
$|r^{\prime}r^{\prime}\rangle$ and $|rr\rangle$ is $\varkappa$, the amplitude
and detuning of the laser fields used in the latter stage of Eq. (7) should be
$\varkappa\Omega$ and $\varkappa\Delta$, respectively. Then, the duration of
the second pulse in the microwave spin echo becomes
$t_{1}=\pi/|\varkappa\Delta|=t_{0}/|\varkappa|$. Because $\varkappa$ can
differ from $-1$, it is necessary to either add appropriate phases in the
laser fields, or add extra wait time. To explain this, we take a single
nontarget atom as an example. Suppose the energy of the state
$|r^{\prime}\rangle$ is $\hbar\omega^{\prime}$, then the rotating frame with
it is given by
$\hat{\mathcal{R}}^{\prime}=\hat{\mathcal{R}}+\omega^{\prime}|r^{\prime}\rangle\langle
r^{\prime}|$, and the operators $\hat{\mathcal{R}}_{\pm}$ in Secs. III.1 and
III.2 change to
$\hat{\mathscr{R}}_{\pm}=(\omega^{\prime}\pm|\varkappa|\Delta)|r^{\prime}\rangle\langle
r^{\prime}|$, which are applicable after the microwave pulse. If the microwave
field lasts for a duration of $t_{\mu}$, an extra wait time $t_{\text{w}}$
changes Eq. (9) to
$|\psi(t_{\text{f}})\rangle_{\perp}=e^{-it_{1}\hat{\mathscr{H}}_{\perp}^{(2)}}e^{i(t_{\text{w}}+t_{\mu}+t_{0})\hat{\mathscr{R}}_{+}}e^{-i(t_{\text{w}}+t_{\mu}+t_{0})\hat{\mathcal{R}}^{\prime}}e^{-it_{\mu}\hat{H}_{\mu}}e^{it_{0}\hat{\mathcal{R}}^{\prime}}e^{-it_{0}\hat{\mathcal{R}}_{-}}e^{-it_{0}\hat{\mathcal{H}}_{\perp}^{(1)}}|\psi(0)\rangle_{\perp},$
$|\psi(t_{\text{f}})\rangle_{\shortparallel}=e^{-it_{1}\hat{\mathscr{H}}_{\shortparallel}^{(2)}}e^{i(t_{\text{w}}+t_{\mu}+t_{0})\hat{\mathscr{R}}_{-}}e^{-i(t_{\text{w}}+t_{\mu}+t_{0})\hat{\mathcal{R}}^{\prime}}e^{-it_{\mu}\hat{H}_{\mu}}e^{it_{0}\hat{\mathcal{R}}^{\prime}}e^{-it_{0}\hat{\mathcal{R}}_{+}}e^{-it_{0}\hat{\mathcal{H}}_{\shortparallel}^{(1)}}|\psi(0)\rangle_{\shortparallel},$ (19)
where $t_{\text{f}}=t_{1}+t_{\text{w}}+t_{\mu}+t_{0}$ is the time at the end
of the sequence, and
$\hat{\mathscr{H}}_{\perp}^{(2)}=-|\varkappa|\Omega_{\perp}e^{-it|\varkappa|\Delta}|r^{\prime}\rangle_{\perp}\langle 1|/2+\text{H.c.},\qquad\hat{\mathscr{H}}_{\shortparallel}^{(2)}=|\varkappa|\Omega_{\shortparallel}e^{it|\varkappa|\Delta}|r^{\prime}\rangle_{\shortparallel}\langle 1|/2+\text{H.c.}.$ (20)
Compared to Eq. (9), the different magnitudes of the detuning of the laser
fields before and after the wait in Eq. (19) requires appropriate wait time to
validate the microwave spin echo. To show this, one can verify that with
$\Omega_{\mu}t_{\mu}=\pi$, the operator $e^{-it_{\mu}\hat{H}_{\mu}}$ is
equivalent to $(|r^{\prime}\rangle\langle r|-$H.c.) when acting on a state
$|r(r^{\prime})\rangle$, and is $\hat{1}$ otherwise. Thus the microwave spin
echo in the two equations in Eq. (19) requires
$|\varkappa|\Delta(t_{\text{w}}+t_{\mu}+t_{0})+t_{0}\Delta=2n\pi$, where $n$
is an integer, which gives
$t_{\text{w}}+t_{\mu}=\frac{2n\pi}{|\varkappa|\Delta}-t_{0}(1+1/|\varkappa|).$ (21)
For fast quantum control, the minimal positive integer $n$ leads to the
smallest $t_{\text{w}}+t_{\mu}$. When the above condition is satisfied, the
many-body Hamiltonian $\hat{H}^{\prime}$ during
$t\in[t_{\text{w}}+t_{\mu}+t_{0},~{}t_{\text{w}}+t_{\mu}+t_{0}+t_{1})$ is
exactly $\varkappa$ times the Hamiltonian during $t\in[0,~{}t_{0})$; the only
modification is that the involved Rydberg state is different. As a
consequence, the many-body system of the nontarget atoms will have their
states restored even though they experience many-body state evolution; in
fact, even if there is interaction between the target and nontarget qubit, the
microwave field also changes the state of the target atom, and hence the many-
body effect can still be removed. For a numerical simulation, see the appendix
of Ref. Shi (2018a) where the duration of $t_{\text{w}}+t_{\mu}$ is ignored
supposing the spin-echo condition is satisfied.
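Equation (21) can be evaluated for the parameters used later in Fig. 5 ($|\varkappa|=0.936$, $\Delta/2\pi=4$ MHz); the sketch below (our illustration) finds the minimal positive $n$ and recovers $t_{\text{w}}+t_{\mu}=(1/|\varkappa|-1)\pi/\Delta$, i.e., the $\pi$-pulse duration quoted in the caption of Fig. 5 with $t_{\text{w}}=0$.

```python
import math

Delta = 2 * math.pi * 4e6     # rad/s
kappa = 0.936                 # |varkappa| for the 100d(s) states of rubidium
t0 = math.pi / Delta

# smallest positive integer n making t_w + t_mu of Eq. (21) positive
n = 1
while 2 * n * math.pi / (kappa * Delta) <= t0 * (1 + 1 / kappa):
    n += 1
wait = 2 * n * math.pi / (kappa * Delta) - t0 * (1 + 1 / kappa)
# with t_w = 0, wait equals t_mu = (1/kappa - 1)*pi/Delta, a few nanoseconds
```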
#### III.3.2 Microwave spin echo for method II
The process above is applicable to method II, too. Because there is no
nontarget atom in the light path along $\mathbf{l}_{\shortparallel}$, we focus
on the nontarget atoms in the light path along $\mathbf{l}_{\perp}$. As in
Sec. III.3.1, a microwave field pumps $|r\rangle$ to $|r^{\prime}\rangle$ so
as to reverse the sign of the Rydberg interaction. Because the microwave field
influences all atoms, the state component $|r\rangle$ of the target qubit also
changes likewise; but the target atom can have some population in $|R\rangle$,
too. To completely remove the many-body effect in the system, we should
reverse the sign of interaction in $|r^{\prime}R\rangle$, too. This can be
realized by using a superposition of two Rydberg states Shi _et al._ (2016);
Shi and Kennedy (2017); Shi (2018b) as $|R\rangle$, so that its interaction
with a nontarget atom can also change by a ratio of $\varkappa$. For more
detail about this method, see Ref. Shi and Kennedy (2017).
In Fig. 5, we show the microwave spin echo with one atom by using
$\varkappa=-52.6/56.2$, which corresponds to the $100d(s)$ state of rubidium
Shi (2018a). Figure 5 shows that the nontarget atom irradiated by the light
along $\mathbf{l}_{\perp}$ indeed returns to the ground state, and the target
atom is excited to the Rydberg state $|R\rangle$. The final phase of the
Rydberg state is $\pi/2$; when the same spin-echo sequence in Fig. 5 is used
for deexcitation, the final state is $-|1\rangle$ for the target atom.
Figure 5: Numerical result for the excitation of the Rydberg state $|R\rangle$
in Method II with a microwave field $\pi$-pulse between the two optical
pulses. (a) shows the time evolution of the population in $|1\rangle$ (solid
curve), $|r\rangle$ (dotted curve), and $|r^{\prime}\rangle$ (dash-dotted
curve) for the nontarget atom $E$ (or atom $F$). (b) shows the time evolution
of the population in $|R\rangle$ (dashed curve), $|1\rangle$, $|r\rangle$, and
$|r^{\prime}\rangle$ for the target atom. Before the microwave pulse,
$\Delta/2\pi=4$ MHz and $\Omega=1.2247\Delta$; after the microwave pulse, the
detuning and Rabi frequency become $|\varkappa|\Delta$ and $|\varkappa|\Omega$
with $\varkappa=-0.936$ (see text), and the microwave field lasts for a
$\pi$-pulse duration of $t_{\mu}=(1/|\varkappa|-1)\pi/\Delta$ with
$\Omega_{\mu}=\pi/t_{\mu}$. The Rabi frequency for the nontarget atom is
$\Omega_{\perp}=0.73\Omega$ in (a).
### III.4 Limitation from divergence of laser beam
The above method assumes that at most one nontarget atom is irradiated near
$\mathfrak{l}_{0}$ for each laser beam. If more than one nontarget atom is irradiated near $\mathfrak{l}_{0}$, the Rydberg interactions between the nontarget atom at the beam axis and an atom off the axis cannot maintain the ratio $\varkappa$ between the two pulses of the optical spin echo. This is because the interaction between $s$-orbital Rydberg atoms is isotropic, but that between $d$-orbital atoms is not Walker and Saffman (2008). Even if the Rydberg interaction along the light path changes by a factor of $\varkappa$ after the microwave irradiation, the interaction between two atoms not along the light path cannot.
We now study the condition that approximately validates the microwave spin echo in
Sec. III.3. We first consider two nontarget atoms shown in Fig. 2(a), labeled
as $A$ and $B$, where $B$ is not at the beam axis, but $A$ is, and thus atom
$A$ is irradiated. Because the field amplitude decays rapidly away from the
beam axis, atom $B$ is irradiated with a small field. For the configuration in
Fig. 2, the angle between $AC$ and $BC~{}(AB)$ is only $0.3~{}(0.4)$ radian,
so that the interaction between them is approximately isotropic Walker and
Saffman (2008), and if no atom even farther from the beam axis than $B$ is
irradiated, the many-body spin echo is valid.
Whether nontarget atoms farther from the beam axis than atom $B$ are irradiated is determined
by the lattice constant and the laser beam. To study the decay of the light
intensity away from the foci, we use $\mathfrak{l}_{0}$ as the longitudinal
coordinate, and $r_{\perp}$ as the radial coordinate. The focus is at atom
$C$, which is the atom at the beam axis nearest to $A$. The coordinate of $B$
is $(\mathfrak{l}_{0},r_{\perp 0})$. From the geometry of the lattice, one can
locate another atom $B^{\prime}$ at $(\mathfrak{l}_{1},2r_{\perp 0})$, where
$\mathfrak{l}_{1}\approx\mathfrak{l}_{0}$. The beam radius of the laser field
near $(\mathfrak{l}_{1},0)$ is given by
$w(\mathfrak{l}_{1})=w(0)\sqrt{X^{2}+\mathfrak{l}_{1}^{2}}/X=\lambda\sqrt{\mathfrak{l}_{1}^{2}/w^{2}(0)+\pi^{2}w^{2}(0)/\lambda^{2}}/\pi,$ (22)
where $w(0)$ is the beam radius at the beam waist and $\lambda$ is the
wavelength of the laser field. The electric field amplitude of the laser is
$\propto e^{-r_{\perp}^{2}/w^{2}(\mathfrak{l}_{1})}$, which means that the
electric field is $e^{-r_{\perp 0}^{2}/w^{2}(\mathfrak{l}_{1})}\mathcal{E}$
(or $e^{-4r_{\perp 0}^{2}/w^{2}(\mathfrak{l}_{1})}\mathcal{E}$) for the atom
$B$ (or $B^{\prime}$), where $\mathcal{E}$ is the amplitude at
$(\mathfrak{l}_{1},0)$. If it is reasonable to neglect atomic excitation when
the field amplitude of the laser light falls to $e^{-4}$ of that at the beam
axis, $w(\mathfrak{l}_{1})$ should be smaller than $r_{\perp 0}$ so that $B^{\prime}$ is not
irradiated. From Eq. (22), one finds that
$w(\mathfrak{l}_{0})\geq\sqrt{2\lambda\mathfrak{l}_{0}/\pi}$. For Fig. 2(a),
one can show that $r_{\perp 0}=0.46\mathcal{L}$. So, it is necessary to have
$\mathfrak{l}_{0}<0.33\mathcal{L}^{2}/\lambda$ if we want the field at
$B^{\prime}$ to be smaller than $\mathcal{E}/e^{4}$. But for the sake of
convenience in experiments, it is useful to let the two laser fields propagate
only along the two directions, $[12\overline{1}]$ and $[\overline{1}21]$, so
that by merely sweeping the laser fields in parallel one can address any site of
the lattice. Then, to derive the largest permissible lattice, we must consider that the lasers can be focused at an atom at any edge of the lattice; one finds that the lattice size should be smaller than
$(1+\mathcal{N})\times(1+2\mathcal{N})\times(1+\mathcal{N})$, where
$\mathcal{N}$ is the largest integer that is smaller than
$0.13\mathcal{L}/\lambda$. This means that our method applies only to lattices
with a sufficiently large lattice constant. If a small Rydberg blockade is
needed, relatively large lattice constants like $16.5~{}\mu$m are required Shi
(2019); in such a case and with $\lambda=0.78~{}\mu$m, a lattice of the size
$3\times 5\times 3$ in Fig. 2 can allow arbitrary single-site addressing by
our method. For $\lambda\leq 0.78~{}\mu$m in the case of rubidium,
$0.13\mathcal{L}/\lambda\approx 1$ if $\mathcal{L}=6~{}\mu$m, which means that
a $2\times 3\times 2$ lattice can be addressed only if the lattice constant is
as large as $6~{}\mu$m. This limitation from the beam divergence can be
tackled by sending laser fields along various directions other than the two
shown in Fig. 2 so that a larger lattice can be used for this purpose although
it involves more experimental complexity.
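The lattice-size bound above follows from elementary arithmetic and can be checked numerically. Below is a minimal sketch (our own; the function names are ours) that evaluates $\mathcal{N}=\lfloor 0.13\mathcal{L}/\lambda\rfloor$ and the diffraction-limited waist bound $w(\mathfrak{l}_{0})\geq\sqrt{2\lambda\mathfrak{l}_{0}/\pi}$ stated in the text, reproducing the $3\times 5\times 3$ and $2\times 3\times 2$ examples.

```python
import math

def largest_lattice(L_um: float, wavelength_um: float):
    """Largest addressable lattice (1+N) x (1+2N) x (1+N) for lattice
    constant L and wavelength lambda, with N = floor(0.13 L / lambda).
    A tiny epsilon guards against floating-point edge cases."""
    N = math.floor(0.13 * L_um / wavelength_um + 1e-9)
    return (1 + N, 1 + 2 * N, 1 + N)

def min_beam_radius(l0_um: float, wavelength_um: float) -> float:
    """Smallest achievable beam radius at distance l0 from the waist,
    minimized over the waist size: w(l0) >= sqrt(2 lambda l0 / pi)."""
    return math.sqrt(2 * wavelength_um * l0_um / math.pi)

print(largest_lattice(16.5, 0.78))  # lattice constant 16.5 um, lambda 0.78 um
print(largest_lattice(6.0, 0.78))   # lattice constant 6 um
```

With $\mathfrak{l}_{0}=0.33\mathcal{L}^{2}/\lambda$, `min_beam_radius` returns approximately $0.46\mathcal{L}$, consistent with the condition $r_{\perp 0}=0.46\mathcal{L}$ quoted for Fig. 2(a).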
A possible solution to the problem of system size is to trap two types of
atoms in one lattice. In Fig. 9 of Ref. Beterov and Saffman (2015), a two-
dimensional lattice trapping both cesium and rubidium is studied. If such a
lattice is extended to 3D in a configuration so that the atoms nearest to a
rubidium are all cesium atoms, it should be possible to address the rubidium
atoms with laser fields that propagate through cesium atoms. This method may
allow more qubits to be trapped. However, it is an open question whether such
a 3D lattice can be prepared.
### III.5 Application in Rydberg blockade gate
Until now, we have assumed that there is no Rydberg atom near the target atom
before exciting it to Rydberg states. In the context of the Rydberg blockade
gate Jaksch _et al._ (2000), a Rydberg atom near the target qubit is called
the control qubit. Before proceeding, we note that if the same Rydberg state
is used for the two qubits, the microwave field used in the spin-echo sequence
will influence both the control and target qubits, and thus can transfer the
Rydberg state of the control qubit. Then, the interaction between the control
qubit and any nontarget atoms reverses sign, satisfying the required
spin-echo condition.
Compared to method I, method II is more useful when the control qubit is
already in the Rydberg state. We assume that the control qubit is away from
the target qubit by one lattice constant $\mathcal{L}$, and take the
configuration in Fig. 2(a) for analysis. For a concrete discussion, we assume
that the atom labeled by $D$ in Fig. 2(a) is the control qubit, which is
already in the Rydberg state $|R_{1}\rangle$, and we are to excite the target
atom $C$ to the Rydberg state $|R_{2}\rangle$ to prepare entanglement through
the blockade interaction $V$ of the state $|R_{1}R_{2}\rangle$. Now atom $D$
is very near to the laser beams along both $\mathbf{l}_{\shortparallel}$ and
$\mathbf{l}_{\perp}$, and one can choose different Rydberg states
$|R_{1}\rangle$ and $|R_{2}\rangle$ when exciting the control and target
qubit, respectively. When $|R_{1}\rangle$ and $|R_{2}\rangle$ have a GHz
energy separation, the light field used for the target will not alter the
state of the control qubit. If microwave spin echo is used in this case, three
sets of microwave fields should be employed, one to transfer $|r\rangle$ to
$|r^{\prime}\rangle$ (as in Fig. 5), and the other two for
$|R_{1(2)}\rangle\rightarrow|R_{1(2)}^{\prime}\rangle$. This can still realize
the microwave spin echo when the interactions of the states
$(|R_{1}r\rangle,|R_{1}R_{2}\rangle)$ and those of
$(|R_{1}^{\prime}r^{\prime}\rangle,|R_{1}^{\prime}R_{2}^{\prime}\rangle)$
change by a common ratio $\varkappa$; to realize this, superposition states
can be used as $|r^{\prime}\rangle$ and $|R_{1(2)}^{\prime}\rangle$ Shi _et
al._ (2016); Shi and Kennedy (2017); Shi (2018b). But for method I, nontarget
atoms along the laser fields in both the $\mathbf{l}_{\shortparallel}$ and
$\mathbf{l}_{\perp}$ directions can be excited to the Rydberg state. Then,
because atom $B$ and atom $D$ have a distance of $\sqrt{3}\mathcal{L}$, the
interaction between them is $V/3^{3}$, which can be comparable to the Rabi
frequency and hamper the gate operation. This means that it is necessary to
avoid exciting atom $B$ if method I is used, which places a significant limit
on the system
size. But if method II is used, the light field along
$\mathbf{l}_{\shortparallel}$ will not excite any Rydberg state, thus atom $B$
is not a problem. This means that both method I and method II can be used for
the excitation of the control qubit, but method II is more useful for the
$2\pi$ pulse for the target qubit in the Rydberg blockade gate Jaksch _et
al._ (2000).
### III.6 Comparison between method I and method II
The two methods shown in Secs. III.1 and III.2 have their own advantages and
shortcomings, and can be used for different purposes.
First, both methods will leave the state of any nontarget atoms intact when
the state of the target atom is restored to the ground state. However, method
I only needs one pulse for exciting the target atom to the Rydberg state,
while method II needs two.
Second, although the excitation and deexcitation of the target atom form an
optical spin-echo cycle in method I, the excitation pulse alone does not form
a spin echo. So, the nontarget atoms can have some residual population in the
Rydberg state when the target atom is pumped to the Rydberg state. For method
II, Rydberg excitation of the target atom is by two sets of lasers: those
along $\mathbf{l}_{\shortparallel}$ form a spin-echo cycle, but those along
$\mathbf{l}_{\perp}$ do not. Although the field along $\mathbf{l}_{\perp}$ does
not form a spin-echo sequence, it does not excite any nontarget atoms, as shown
in Sec. III.2. So, no nontarget atom will have any population in the Rydberg
state when the target atom is completely excited to Rydberg states in method
II.
Third, the state of the target atom does not pick up any phase upon the
completion of its state restoration in method I. For method II, the target
atom does not experience any spin echo, and a $\pi$ phase is imprinted when
its state is restored to the ground state. This means that for the Rydberg
gate in Ref. Jaksch _et al._ (2000), method I can be used for the control
qubit but not for the target qubit, while method II can be used for both the
control and target qubits.
## IV High-fidelity quantum gates
Another application of ORIR is in achieving high fidelity in a Rydberg quantum
gate Jaksch _et al._ (2000); Saffman _et al._ (2010); Saffman (2016); Weiss
and Saffman (2017). Although it is theoretically possible to achieve high
fidelity beyond $0.999$ in an entangling quantum gate by Rydberg interactions
Goerz _et al._ (2014); Theis _et al._ (2016); Shi (2017); Petrosyan _et
al._ (2017); Shi (2018a, 2019), the experimental implementation is difficult
Wilk _et al._ (2010); Isenhower _et al._ (2010); Zhang _et al._ (2010);
Maller _et al._ (2015); Jau _et al._ (2016); Zeng _et al._ (2017); Picken
_et al._ (2019), with the best fidelity below $0.98$ Graham _et al._ (2019);
Levine _et al._ (2019). This is partly due to a number of technical issues de
Léséleuc _et al._ (2018). But even if these issues are removed with improved
technology Levine _et al._ (2018); Graham _et al._ (2019); Levine _et al._
(2019), the fidelity of the best-known Rydberg quantum gate, i.e., the Rydberg
blockade gate Jaksch _et al._ (2000), is limited by the blockade error Zhang
_et al._ (2012). Below, we show that ORIR can effectively suppress this
blockade error.
The Rydberg blockade gate maps the input states from
$\\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\\}$ to
$\\{|00\rangle,-|01\rangle,-|10\rangle,-|11\rangle\\}$, and is realized in
three steps: apply a $\pi$ pulse to the control qubit; apply a $2\pi$ pulse to
the target; apply another $\pi$ pulse to the control qubit Jaksch _et al._
(2000). These three steps induce the following state evolution for the input
state $|11\rangle$:
$\displaystyle|11\rangle\xrightarrow[\text{For
control}]{\pi}-i|r1\rangle\xrightarrow[\text{For
target}]{2\pi}-i|r1\rangle\xrightarrow[\text{For control}]{\pi}-|11\rangle,$
where $|r\rangle$ is a Rydberg state. The second step in the above equation is
perfect if the Rydberg blockade $V$ of the state $|rr\rangle$ is infinitely
large compared to the Rabi frequency $\Omega$ of the atom-field interaction;
for any finite $V$, it attains a blockade error on the order of
$\Omega^{2}/V^{2}$. There is also a phase error, but it can be removed
effectively by inserting a phase difference in the laser field at the middle
of the second step of the gate, i.e., a global phase shift on the target qubit
Zhang _et al._ (2012); the blockade error, however, sets a fundamental limit
to the achievable fidelity of the gate. For
this reason, there are many theoretical proposals to suppress the blockade
error Goerz _et al._ (2014); Theis _et al._ (2016); Shi (2017); Petrosyan
_et al._ (2017); Shi (2018a, 2019); Shen _et al._ (2019); Yu _et al._
(2019); Levine _et al._ (2019).
We show that the blockade error can be suppressed by replacing the usual Rabi
frequency $\Omega$ with a time-dependent Rabi frequency $i\Omega\sin(\Delta
t)$ formed by ORIR. In practice, this is done by replacing a resonant laser
of power $P$ by two laser beams of opposite phases, one with detuning $\Delta$
and power $P/4$, and the other with detuning $-\Delta$ and power $P/4$. This
results in an effective Rabi frequency $i\Omega\sin(\Delta t)$. In contrast to
the single-site Rydberg addressing studied in Sec. III, here the fields with
detuning $\Delta$ and those with detuning $-\Delta$ can propagate in one
direction, and hence it is easier to set the relative phase of the laser
fields to realize Eq. (1). In this case, the Hamiltonian for the input state
$|11\rangle$ during the second step of the traditional protocol
$\displaystyle\hat{H}=\frac{1}{2}\left(\begin{array}{cc}2V&\Omega\\ \Omega&0\end{array}\right)\qquad(25)$
is replaced by
$\displaystyle\hat{H}=\left(\begin{array}{cc}V&i\Omega\sin(\Delta t)/2\\ -i\Omega\sin(\Delta t)/2&0\end{array}\right)\qquad(28)$
in our scheme. The basis is $\\{|rr\rangle,~{}|r1\rangle\\}$ in the two
matrices of Eqs. (25) and (28). To induce the transform
$|01\rangle\rightarrow-|01\rangle$ for the input state $|01\rangle$, one can
choose $\Omega/\Delta=\pi$ and a duration $t=\pi/\Delta$ for the second step,
as can be derived by a method similar to that used for Eq. (2).
Here, the condition of $\Omega/\Delta=\pi$ and $t=\pi/\Delta$ is not
compatible with an optical spin echo sequence because it does not allow the
transform from Eq. (9) to Eq. (LABEL:method01finalstate2), which is in
contrast to the case studied in Sec. III.1. This means that to use the method
in this section for addressing one atom in 3D lattice, only atoms near the
very edge of the atomic array can be excited so that no nontarget atoms will
be excited.
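The decomposition above rests on a simple identity: two fields of opposite phase, each with Rabi frequency $\Omega/2$ (coupling $\Omega/4$ in the rotating frame) and detunings $\pm\Delta$, combine into the effective coupling $i\Omega\sin(\Delta t)/2$ of Eq. (28). A stdlib-only sketch (our own) verifying it numerically:

```python
import cmath
import math

def two_beam_coupling(Omega, Delta, t):
    """Sum of the two opposite-phase fields detuned by +/- Delta; each
    beam carries power P/4, hence Rabi frequency Omega/2 and a coupling
    of Omega/4 in the rotating frame."""
    return (Omega / 4) * cmath.exp(1j * Delta * t) - (Omega / 4) * cmath.exp(-1j * Delta * t)

def effective_coupling(Omega, Delta, t):
    """Effective ORIR coupling i*Omega*sin(Delta t)/2 appearing in Eq. (28)."""
    return 1j * (Omega / 2) * math.sin(Delta * t)

# Check the identity at a set of sample times
for k in range(100):
    t = 0.01 * k
    assert abs(two_beam_coupling(1.0, 0.7, t) - effective_coupling(1.0, 0.7, t)) < 1e-12
print("two detuned opposite-phase beams == i*Omega*sin(Delta t)/2")
```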
To analyze the blockade errors for the two schemes in Eqs. (25) and (28), we
take the setup in Levine _et al._ (2018) to estimate the system parameters.
Figure 1 of Levine _et al._ (2018) shows that the laser fields for the
optical trap propagate (along $\mathbf{z}$) perpendicular to the quantization
axis (labeled $\mathbf{x}$) of the atom.
Levine _et al._ (2018) shows that the r.m.s. fluctuation of the qubit spacing
along $\mathbf{x}$ is $\varsigma_{x}=0.2~{}\mu$m. If the position fluctuation
along $\mathbf{y}$ or $\mathbf{z}$ is about 10 times that along
$\mathbf{x}$, as in Isenhower _et al._ (2010), we can assume
$\varsigma_{y}=\varsigma_{z}=2~{}\mu$m. The two-qubit spacing is
$L=5.7~{}\mu$m in Levine _et al._ (2018), which means that the actual
blockade $V$ can be very different from the chosen value $V_{0}$: fluctuation
along $\mathbf{x}$ can lead to $V/V_{0}=(1\pm
2\varsigma_{x}/L)^{-6}~{}(\approx 0.67,1.55)$, while that along $\mathbf{z}$
can lead to $V/V_{0}=(1+4\varsigma_{z}^{2}/L^{2})^{-3}~{}(\approx 0.3)$. For a
conservative estimate, we consider $(V-V_{0})/V_{0}\in[-0.25,~{}0.25]$. The
entanglement between the ground and Rydberg states in Levine _et al._ (2018)
used $(\Omega,~{}V_{0})/2\pi=(2,~{}30)$ MHz, while the actual Rabi frequency
for the entanglement was $\sqrt{2}\Omega$ due to the many-body enhancement.
Then, we take $V_{0}/\Omega=12$ as an example to study the rotation error of a
Rydberg blockade process.
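The rotation errors of the two schemes can be reproduced with a short numerical propagation of Eqs. (25) and (28). Below is a stdlib-only sketch (our own RK4 integrator, not the authors' code) in units of $\hbar=\Omega=1$ at $V=V_{0}=12\Omega$; one should find a leakage on the order of $10^{-4}$ for the traditional scheme and on the order of $10^{-6}$ for the ORIR scheme, consistent with Fig. 7.

```python
import math

def rk4_propagate(h_func, psi, t0, t1, steps=20000):
    """Fourth-order Runge-Kutta integration of i d|psi>/dt = H(t)|psi>
    for a two-level system; h_func(t) returns the 2x2 Hamiltonian."""
    dt = (t1 - t0) / steps
    def deriv(t, y):
        H = h_func(t)
        return [-1j * (H[0][0] * y[0] + H[0][1] * y[1]),
                -1j * (H[1][0] * y[0] + H[1][1] * y[1])]
    t = t0
    for _ in range(steps):
        k1 = deriv(t, psi)
        y2 = [psi[i] + 0.5 * dt * k1[i] for i in range(2)]
        k2 = deriv(t + 0.5 * dt, y2)
        y3 = [psi[i] + 0.5 * dt * k2[i] for i in range(2)]
        k3 = deriv(t + 0.5 * dt, y3)
        y4 = [psi[i] + dt * k3[i] for i in range(2)]
        k4 = deriv(t + dt, y4)
        psi = [psi[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
               for i in range(2)]
        t += dt
    return psi

Omega = 1.0               # Rabi frequency (our unit)
V = 12.0                  # blockade, V0/Omega = 12 as in the text
Delta = Omega / math.pi   # so that Omega/Delta = pi

# Basis {|rr>, |r1>}; start in |r1> after the control qubit's pi pulse.
psi0 = [0.0 + 0.0j, 1.0 + 0.0j]

# Traditional scheme, Eq. (25): constant coupling, duration 2*pi/Omega.
H_trad = lambda t: [[V, Omega / 2], [Omega / 2, 0.0]]
p_trad = abs(rk4_propagate(H_trad, psi0, 0.0, 2 * math.pi / Omega)[0]) ** 2

# ORIR scheme, Eq. (28): coupling i*Omega*sin(Delta t)/2, duration pi/Delta.
H_orir = lambda t: [[V, 1j * Omega * math.sin(Delta * t) / 2],
                    [-1j * Omega * math.sin(Delta * t) / 2, 0.0]]
p_orir = abs(rk4_propagate(H_orir, psi0, 0.0, math.pi / Delta)[0]) ** 2

print(f"blockade error: traditional {p_trad:.2e}, ORIR {p_orir:.2e}")
```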
Figure 6: Population loss in the state $-i|r1\rangle$ after the second
step of the Rydberg gate shown with the common logarithm and parameters
$V_{0}:\Omega:\Delta=12:1:\pi^{-1}$. $V_{0}$ is the blockade interaction of
$|rr\rangle$ when both qubits are located at the center of their traps; the
actual Rydberg blockade $V$ can differ from $V_{0}$ due to the thermal motion
of the qubits. The solid curve denotes results from propagating the
wavefunction by the Hamiltonian in Eq. (28) for a duration of $\pi/\Delta$.
The dashed curve shows the population error of the state $-i|r1\rangle$ after
an evolution time of $2\pi/\Omega$ under the Hamiltonian in Eq. (25). When
averaged over all the $V$ in the figure, the population errors are $9.5\times
10^{-6}$ and $3.7\times 10^{-3}$ for the ORIR-based method and the traditional
method, respectively. These two averages respectively become $1.7\times
10^{-5}$ and $4.6\times 10^{-3}$ if a larger interval of
$(V-V_{0})/V_{0}\in[-0.5,~{}0.5]$ is considered.
The population loss in the state $|r1\rangle$ after the second step of the
gate is shown in Fig. 6 for different $(V-V_{0})/V_{0}\in[-0.25,~{}0.25]$. The
dashed curve in Fig. 6 shows that the blockade error in the traditional method
is on the order of $10^{-3}$ for most values of $V$. Although at some special
values of $(V-V_{0})/V_{0}$ (for example, at around $-1/300$) a tiny
population leakage of less than $10^{-5}$ can appear with the traditional
method, the blockade error quickly rises to the level of $10^{-3}$ when $V$
deviates from that special value. Thus it is difficult to use the traditional
method to achieve a small blockade error since it is challenging to
sufficiently suppress the fluctuation of the qubit positions. On the other
hand, the solid curve in Fig. 6 shows that the population loss in $|r1\rangle$
is much smaller when the second step is characterized by the effective
Hamiltonian in Eq. (28). In fact, the blockade error in the ORIR-based
protocol of Fig. 6 is on the order of $10^{-5}$ for most values of $V$, two
orders of magnitude smaller than the error of the traditional method; this
effectively removes the blockade error.
The removal of blockade error by Eq. (28) is quite different from other
proposals of high-fidelity gates by time-dependent pumping. For example,
compared to the method of analytical derivative removal by adiabatic gate in
Ref. Theis _et al._ (2016), our method only needs rectangular (or
quasi-rectangular) pulses that can be easily chopped from a continuous laser
field with a pulse picker. Moreover, the ORIR-based gate does not have a
strict dependence on a specific blockade interaction for achieving the optimal
fidelity, which can be found by comparing the solid curve in Fig. 6 for our
method and Fig. 3 of Ref. Theis _et al._ (2016) for the other method [when
comparing, we note that a fourth of $1-|\langle r1|\psi\rangle|^{2}$ in Fig. 6
contributes to the gate infidelity]. In fact, as an example, if the value of
$V_{0}/\Omega$ in Fig. 6 increases to $30~{}(50)$, the population errors
averaged over the interval $(V-V_{0})/V_{0}\in[-0.25,~{}0.25]$ become
$1.3\times 10^{-6}$ and $5.9\times 10^{-4}$ ($4.3\times 10^{-7}$ and
$2.1\times 10^{-4}$) in the ORIR-based method and the traditional method,
respectively. These data show that the ORIR-based protocol can still suppress
the blockade error by more than two orders of magnitude when $V/\Omega$ is
large.
Figure 7: Time evolution of $1-|\langle r1|\psi\rangle|^{2}$ during the second
step of the Rydberg gate shown by the common logarithm with the condition of
$V=V_{0}=12\Omega$. (a) shows the result from the ORIR-based method by using
the Hamiltonian in Eq. (28). (b) shows the results of the traditional method
by using the Hamiltonian in Eq. (25). The red circles denote the population
errors at the end of the second step of the gate, which are $4.3\times
10^{-6}$ and $1.2\times 10^{-4}$ in (a) and (b), respectively.
To understand the mechanism for the suppression of the blockade error by ORIR,
we show the time evolution of the population in $|rr\rangle$, which is equal
to $1-|\langle r1|\psi\rangle|^{2}$, during the second step for our method and
the traditional method in Fig. 7(a) and 7(b), respectively. Figure 7(a) shows
that the population leakage with ORIR increases with a minor oscillation
pattern during the first half of the pulse, but decreases with a similar
pattern during the latter half of the pulse. This removal of the blockade
error is similar to the suppression of the blockade error with the spin-echo
method studied in Ref. Shi (2018a), although the underlying physics is
different.
### IV.1 Gate error from finite edges and timing errors of laser pulses
As shown in Appendix A, the finite rise and fall times of laser pulses,
together with errors in their timing synchronization, also limit the accuracy
of the quantum gate. In most experiments on Rydberg quantum gates, an $s$\- or
$d$-orbital Rydberg state was used Wilk _et al._ (2010); Isenhower _et al._
(2010); Zhang _et al._ (2010); Maller _et al._ (2015); Zeng _et al._
(2017); Picken _et al._ (2019); Levine _et al._ (2018); Graham _et al._
(2019); Levine _et al._ (2019). Such states are usually excited via a
two-photon process through a far-detuned intermediate state. In typical
experiments, the laser beams for the upper transition are left on for several
tens of nanoseconds longer than the lasers for the lower transitions Maller
_et al._ (2015). Then, the onset and cutoff of the optical pumping are
determined by the lasers of the lower transitions. So, we only need to
consider four timing errors
$\\{t_{\text{ge}+}^{\text{(s)}},t_{\text{ge}-}^{\text{(s)}},t_{\text{ge}+}^{\text{(e)}},t_{\text{ge}-}^{\text{(e)}}\\}$
for the two lower laser beams, where $t_{\text{ge}\pm}^{\text{(s)}}$ and
$t_{\text{ge}\pm}^{\text{(e)}}$ are the deviations of the actual start and
cutoff times from the desired ones (as
illustrated in Fig. 9); ideally, they should be zero. These errors will
influence both $|01\rangle$ and $|11\rangle$ among the four input states.
According to the discussion in the fourth paragraph of this section and Levine
_et al._ (2018), we choose $\Omega/2\pi=2$ MHz as the desired value of
$\Omega_{\text{ge}\pm}$. As for the Rydberg blockade, we assume
$V_{0}/\Omega=12$ and choose, as an example, $V=0.97V_{0}$, where the
population error in $|r1\rangle$ is about $10^{-5}$ ($\approx$ average error
for the solid curve in Fig. 6) in the ideal case. The optical excitation
occurs during $[0,~{}T]$ in the ideal case, where $T$ is the desired pulse
duration. We suppose that the rise or fall of the laser pulses takes $20$
ns Urban _et al._ (2009); Maller _et al._ (2015); Levine _et al._ (2018).
The optimal value of $T$ corresponds to the smallest population error in
$|01\rangle$. This is because the population leakage is less sensitive for the
input state $|11\rangle$ since the transition
$|r1\rangle\rightarrow|rr\rangle$ is blocked during the second step of the
gate. With a numerically found optimal $T$ of $795.3963$ ns, the population
leakage in $|01\rangle$ ($|r1\rangle$) is $4.4~{}(7.7)\times 10^{-6}$. We then
use this $T$ to investigate the population errors in $|01\rangle$ and
$|r1\rangle$ when the timing error is included.
Figure 8: Round (square) symbols show the average population loss in the state
$|01\rangle~{}(|r1\rangle)$ with a r.m.s. timing error $\sigma_{\text{t}}$ for
the start and end of the two laser pulses. Calculation was based on
propagating the state with the Hamiltonian in Eq. (LABEL:Hwithtimingerror)
during $[t_{\text{ge}\pm}^{\text{(s)}},~{}T+t_{\text{ge}\pm}^{\text{(e)}}]$.
The average was calculated with Eq. (34) by letting
$t_{\text{ge}\pm}^{\text{(s)}}$ and $t_{\text{ge}\pm}^{\text{(e)}}$ take the
values
$\\{-5\sigma_{\text{t}},-4\sigma_{\text{t}},\cdots,~{}5\sigma_{\text{t}}\\}$.
Here $(\Omega,~{}V,~{}\Delta)/2\pi=(2,~{}23.28,~{}2/\pi)$ MHz.
To include possible timing error in the second step with optical pumping for
the target qubit, we cast Eq. (28) into
$\displaystyle\hat{H}=\left(\begin{array}{cc}V&\frac{\Omega_{\text{ge}+}}{4}e^{i\Delta t}-\frac{\Omega_{\text{ge}-}}{4}e^{-i\Delta t}\\ \frac{\Omega_{\text{ge}+}}{4}e^{-i\Delta t}-\frac{\Omega_{\text{ge}-}}{4}e^{i\Delta t}&0\end{array}\right),\qquad(31)$
where $\Omega_{\text{ge}\pm}$ is nonzero during
$[t_{\text{ge}\pm}^{\text{(s)}},~{}T+t_{\text{ge}\pm}^{\text{(e)}}]$, and has
a rising and falling edge during the first and last $20$ ns of the pulse. Each
set of nonzero timing errors can result in a gate error. Following Appendix A,
the average error is calculated with Eq. (34) where the summation runs through
the $11^{4}$ sets of
$\\{t_{\text{ge}+}^{\text{(s)}},t_{\text{ge}-}^{\text{(s)}},t_{\text{ge}+}^{\text{(e)}},t_{\text{ge}-}^{\text{(e)}}\\}$.
With the r.m.s. error $\sigma_{\text{t}}\in[0.2,~{}5]$ ns, the population
losses in $|01\rangle$ and $|r1\rangle$ are shown by the round and square
symbols in Fig. 8, respectively. One can find that even with a very large
fluctuation characterized by $\sigma_{\text{t}}=5$ ns, the lost population in
$|01\rangle$ ($|r1\rangle$) is only $3.0~{}(1.8)\times 10^{-4}$ after the
second step, which is much smaller than the blockade error in the traditional
method even without pulse defects (see the dashed curve in Fig. 6). This
suggests that ORIR is useful for high-fidelity quantum gates based on the
mechanism of Rydberg blockade.
## V Discussions
The ORIR-based time-dependent Rabi frequencies can also be used in trapped
ions. Compared to neutral atoms, the entangling gates with trapped ions can
attain a much better accuracy Tan _et al._ (2015); Ballance _et al._ (2015,
2016); Gaebler _et al._ (2016). But compared to ions, neutral atoms are
better suited for building large-scale qubit arrays Wang _et al._ (2015); Xia _et al._
(2015); Barredo _et al._ (2016); Wang _et al._ (2016). Thus, it is useful to
combine these two merits in one system; indeed, there is intense
interest to trap and manipulate Rydberg ions Higgins _et al._ (2017a, b);
Engel _et al._ (2018), as well as theoretical effort to design protocols for
quantum entanglement between Rydberg ions Müller _et al._ (2008); Li _et
al._ (2013); Li and Lesanovsky (2014). The methods in Müller _et al._ (2008);
Li _et al._ (2013); Li and Lesanovsky (2014) depend on time-dependent Rabi
frequencies and, in fact, the protocol in Li _et al._ (2013) uses a Rabi
frequency $\propto\sin(\Delta t)$ that can be easily realized with quasi-
rectangular laser fields chopped from continuous lasers with pulse pickers.
Recently, adiabatic excitation of a single trapped Rydberg ion was
experimentally demonstrated by using two sinusoidal Rabi frequencies in a
ladder configuration Higgins _et al._ (2017b). Reference Higgins _et al._
(2017b) reported that the reason to have used sinusoidal Rabi frequencies is
that they result in a better fidelity than using Gaussian pulses. These
results show that ORIR, which can offer sinusoidal Rabi frequencies by only
using continuous laser and pulse pickers, is useful for Rydberg excitation in
trapped ions.
## VI Conclusions
In conclusion, we have studied the application of an off-resonance-induced
resonance (ORIR) in single-site Rydberg addressing in a 3D lattice and high-
fidelity Rydberg gate. These applications benefit from a time-dependent Rabi
frequency $2\Omega\cos(\Delta t)$ [or $2i\Omega\sin(\Delta t)$] that naturally
arises from the action of two symmetrically detuned coherent fields of equal
[or opposite] phase. ORIR can be implemented with quasi-rectangular laser
pulses chopped from a continuous laser by an electro-optic modulator or an
acousto-optic modulator, a method commonly used in experiments with Rydberg
atoms.
In the study of quantum gates by Rydberg interactions of neutral atoms, ORIR
can enable single-site Rydberg addressing in a 3D optical lattice while
leaving irradiated nontarget atoms intact upon the completion of the Rydberg
deexcitation of the target atom; moreover, we show that ORIR can suppress a
fundamental rotation error in the Rydberg blockade gate, making it possible to
achieve high fidelity with only quasi-rectangular pulses. Along the way, we
find another method for single-site Rydberg addressing in a 3D lattice by
using a ladder-type system. In this latter method, spin echo is used for any
nontarget atom so that its state is restored when a single target atom is
pumped to the Rydberg state, and the target atom picks up a $\pi$ phase when
its state is restored to the ground state. These methods make it possible to
entangle two selected qubits deep in a 3D atomic array and pave the way to
large-scale quantum processors based on Rydberg atoms.
## ACKNOWLEDGMENTS
The author is grateful to Yan Lu and Tian Xia for fruitful discussions. This
work was supported by the National Natural Science Foundation of China under
Grant No. 11805146 and the 111 Project (No. B17035).
## Appendix A Transfer error from failure of timing synchronization of laser
pulses
The ORIR-based application in the quantum gate and single-site addressing
depends on timing synchronization of laser pulses, which is discussed in this
appendix.
First, the timing synchronization of the laser pulses can be realized by using
acousto-optic multi-channel modulators made with tellurium dioxide crystals,
which can deflect multiple laser beams
(http://www.sintecoptronics.com/aom.asp). To address arrays of rubidium atoms
with two-photon excitation of Rydberg
states, a Doppler-free multiwavelength (780 and 480 nm) acousto-optic
deflector was experimentally demonstrated about a dozen years ago Kim _et
al._ (2008). The deflector in Kim _et al._ (2008) can simultaneously diffract
the two incident optical wavelengths with a common diffraction angle. So, it
is possible to realize a multi-channel modulator for deflecting several laser
lights. In this case, the durations of the two laser pulses in Fig. 9 will be
identical so that there will be no error due to failure of the timing
synchronization.
Second, the timing synchronization can be approximately achieved by using
commercially available high-precision digital delay generators. In Ref.
Saßmannshausen _et al._ (2013), a single delay generator (Quantum Composer
9528) was used to control all timings that were relevant for Rydberg-state
excitation and detection. This type of delay generator has a resolution of
0.25 ns and r.m.s. jitter of 0.4 ns Qua , and is widely used in the
experimental study on quantum physics Matthews _et al._ (2011); de Groote
_et al._ (2015), quantum chemistry Zhu _et al._ (2019), combustion Eitel _et
al._ (2017); Bonebrake _et al._ (2019), accelerator physics Smith (2010), and
aerodynamics Watkins (2013). The precision of commercial digital delay
generators can also be surprisingly high. For example, the pulse generated by
a laser synchronization module in EKS has a resolution of 25 ps, an accuracy
of 25 ps$+10^{-6}\times$delay, and a jitter of less than 30 ps. However, such
ultrahigh precision is not required for most applications, so such digital
delay generators are not widely known. To use these digital delay generators for
timing, the transmission time of the control signal should also be considered,
which is possible by adjusting the length of the cable.
Figure 9: Failure of timing synchronization of the laser pulses. The topmost
curve shows the shape of the two lower laser pulses when they have no timing
error. (a) and (b) show the pulse shapes with timing errors.
We suppose the pulse timing is controlled by a commercial digital delay
generator and assume that the r.m.s. timing errors of the onset and cutoff of
the pulse are both $\sigma_{\text{t}}$. The beginning of the pulse is at a
time $t$ that obeys the Gaussian distribution
$f(t|0,\sigma_{\text{t}})=e^{-t^{2}/(2\sigma_{\text{t}}^{2})}/\sqrt{2\pi\sigma_{\text{t}}^{2}}$,
while the end of the pulse is at a time $t$ that is distributed according to
$f(t|T,\sigma_{\text{t}})=e^{-(t-T)^{2}/(2\sigma_{\text{t}}^{2})}/\sqrt{2\pi\sigma_{\text{t}}^{2}}$,
where $T$ is the desired pulse duration. The timing diagram of the four laser
beams is shown in Fig. 9. Altogether, there are four timing errors, i.e., the
two timing errors of the pulse arrival, $t_{\text{ge}\pm}^{\text{(s)}}$, of
the two laser lights for the lower transition, and their cutoff,
$t_{\text{ge}\pm}^{\text{(e)}}$. The fidelity $\mathcal{F}$ for each
realization of the gate is a function of these four timing errors. Then, the
expected fidelity is given by
$\displaystyle\overline{\mathcal{F}}=\int\mathcal{F}(t_{\text{ge}+}^{\text{(s)}},t_{\text{ge}-}^{\text{(s)}},t_{\text{ge}+}^{\text{(e)}},t_{\text{ge}-}^{\text{(e)}})\,\mathcal{P}(t_{\text{ge}+}^{\text{(s)}},t_{\text{ge}-}^{\text{(s)}},t_{\text{ge}+}^{\text{(e)}},t_{\text{ge}-}^{\text{(e)}})\,\mathscr{D}_{4},\qquad(33)$
where
$\displaystyle\mathcal{P}$ $\displaystyle=$ $\displaystyle
f(t_{\text{ge}+}^{\text{(s)}}|0,\sigma_{\text{t}})f(t_{\text{ge}-}^{\text{(s)}}|0,\sigma_{\text{t}})f(t_{\text{ge}+}^{\text{(e)}}|T,\sigma_{\text{t}})f(t_{\text{ge}-}^{\text{(e)}}|T,\sigma_{\text{t}}),$
and $\mathscr{D}_{4}$ indicates a four-dimensional integral given by
$dt_{\text{ge}+}^{\text{(s)}}dt_{\text{ge}-}^{\text{(s)}}dt_{\text{ge}+}^{\text{(e)}}dt_{\text{ge}-}^{\text{(e)}}$.
Each value of $\mathcal{F}$ in Eq. (33) is evaluated by integrating the
Schrödinger equation through the pumping process, which makes the
four-dimensional integral in Eq. (33) time-consuming to evaluate. Since the
timing errors obey a normal distribution, the contribution from large timing
errors is negligible. For this reason, we approximate Eq. (33) by considering
eleven values $\in\\{\pm 5\sigma_{\text{t}},\pm 4\sigma_{\text{t}},\pm
3\sigma_{\text{t}},\pm 2\sigma_{\text{t}},\pm\sigma_{\text{t}},0\\}$ for the
integration over each of the four timing errors, so that
$\displaystyle\overline{\mathcal{F}}\approx\frac{\sum\mathcal{F}\mathcal{P}}{\sum\mathcal{P}},\qquad(34)$
where ‘$\sum$’ sums over the $11^{4}$ sets of
$\\{t_{\text{ge}+}^{\text{(s)}},t_{\text{ge}-}^{\text{(s)}},t_{\text{ge}+}^{\text{(e)}},t_{\text{ge}-}^{\text{(e)}}\\}$.
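The discrete approximation of Eq. (34) is straightforward to implement. Below is a minimal sketch (our own); `fid` is a placeholder for the fidelity obtained by integrating the Schrödinger equation for one set of the four timing errors, and the Gaussian weights need not be normalized since they cancel in the ratio.

```python
import math
from itertools import product

def average_fidelity(fid, sigma_t, n_sigma=5):
    """Approximate Eq. (33) as in Eq. (34): a Gaussian-weighted average of
    fid over the 11-point grid {-5 sigma_t, ..., +5 sigma_t} for each of
    the four timing errors (11**4 sets in total)."""
    grid = [k * sigma_t for k in range(-n_sigma, n_sigma + 1)]
    def w1(dev):  # un-normalized Gaussian weight for one timing deviation
        return math.exp(-dev * dev / (2 * sigma_t ** 2))
    num = den = 0.0
    for devs in product(grid, repeat=4):
        w = w1(devs[0]) * w1(devs[1]) * w1(devs[2]) * w1(devs[3])
        num += w * fid(*devs)
        den += w
    return num / den

# Toy check: a fidelity that degrades quadratically with the timing errors
toy = average_fidelity(lambda a, b, c, d: 1.0 - (a*a + b*b + c*c + d*d), 0.4)
print(f"toy averaged fidelity: {toy:.3f}")
```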
## References
* Jaksch _et al._ (2000) D. Jaksch, J. I. Cirac, P. Zoller, S. L. Rolston, R. Côté, and M. D. Lukin, Fast Quantum Gates for Neutral Atoms, Phys. Rev. Lett. 85, 2208 (2000).
* Saffman _et al._ (2010) M. Saffman, T. G. Walker, and K. Mølmer, Quantum information with Rydberg atoms, Rev. Mod. Phys. 82, 2313 (2010).
* Saffman _et al._ (2011) M. Saffman, X. L. Zhang, A. T. Gill, L. Isenhower, and T. G. Walker, Rydberg state mediated Quantum gates and entanglement of pairs of Neutral atoms, J. Phys.: Conf. Ser. 264, 012023 (2011).
* Saffman (2016) M. Saffman, Quantum computing with atomic qubits and Rydberg interactions: Progress and challenges, J. Phys. B 49, 202001 (2016).
* Weiss and Saffman (2017) D. S. Weiss and M. Saffman, Quantum computing with neutral atoms, Phys. Today 70, 44 (2017).
* Adams _et al._ (2019) C. S. Adams, J. D. Pritchard, and J. P. Shaffer, Rydberg atom quantum technologies, (2019), arXiv:1907.09231 .
* Aljunid _et al._ (2009) S. A. Aljunid, M. K. Tey, B. Chng, T. Liew, G. Maslennikov, V. Scarani, and C. Kurtsiefer, Phase Shift of a Weak Coherent Beam Induced by a Single Atom, Phys. Rev. Lett. 103, 153601 (2009).
* Leung _et al._ (2014) V. Y. F. Leung, D. R. M. Pijn, H. Schlatter, L. Torralbo-Campo, A. L. La Rooij, G. B. Mulder, J. Naber, M. L. Soudijn, A. Tauschinsky, C. Abarbanel, B. Hadad, E. Golan, R. Folman, and R. J. C. Spreeuw, Magnetic-film atom chip with 10 μm period lattices of microtraps for quantum information science with Rydberg atoms, Rev. Sci. Instrum. 85, 053102 (2014).
* Nogrette _et al._ (2014) F. Nogrette, H. Labuhn, S. Ravets, D. Barredo, L. Béguin, A. Vernier, T. Lahaye, and A. Browaeys, Single-Atom Trapping in Holographic 2D Arrays of Microtraps with Arbitrary Geometries, Phys. Rev. X 4, 021034 (2014).
* Xia _et al._ (2015) T. Xia, M. Lichtman, K. Maller, A. W. Carr, M. J. Piotrowicz, L. Isenhower, and M. Saffman, Randomized Benchmarking of Single-Qubit Gates in a 2D Array of Neutral-Atom Qubits, Phys. Rev. Lett. 114, 100503 (2015).
* Zeiher _et al._ (2015) J. Zeiher, P. Schauß, S. Hild, T. Macrì, I. Bloch, and C. Gross, Microscopic Characterization of Scalable Coherent Rydberg Superatoms, Phys. Rev. X 5, 031015 (2015).
* Ebert _et al._ (2015) M. Ebert, M. Kwon, T. G. Walker, and M. Saffman, Coherence and Rydberg Blockade of Atomic Ensemble Qubits, Phys. Rev. Lett. 115, 093601 (2015).
* Wang _et al._ (2016) Y. Wang, A. Kumar, T.-Y. Wu, and D. S. Weiss, Single-qubit gates based on targeted phase shifts in a 3D neutral atom array, Science 352, 1562 (2016).
* Barredo _et al._ (2016) D. Barredo, S. de Léséleuc, V. Lienhard, T. Lahaye, and A. Browaeys, An atom-by-atom assembler of defect-free arbitrary two-dimensional atomic arrays, Science 354, 1021 (2016).
* Wilk _et al._ (2010) T. Wilk, A. Gaëtan, C. Evellin, J. Wolters, Y. Miroshnychenko, P. Grangier, and A. Browaeys, Entanglement of Two Individual Neutral Atoms Using Rydberg Blockade, Phys. Rev. Lett. 104, 010502 (2010).
* Isenhower _et al._ (2010) L. Isenhower, E. Urban, X. L. Zhang, A. T. Gill, T. Henage, T. A. Johnson, T. G. Walker, and M. Saffman, Demonstration of a Neutral Atom Controlled-NOT Quantum Gate, Phys. Rev. Lett. 104, 010503 (2010).
* Zhang _et al._ (2010) X. L. Zhang, L. Isenhower, A. T. Gill, T. G. Walker, and M. Saffman, Deterministic entanglement of two neutral atoms via Rydberg blockade, Phys. Rev. A 82, 030306 (2010).
* Maller _et al._ (2015) K. M. Maller, M. T. Lichtman, T. Xia, Y. Sun, M. J. Piotrowicz, A. W. Carr, L. Isenhower, and M. Saffman, Rydberg-blockade controlled-not gate and entanglement in a two-dimensional array of neutral-atom qubits, Phys. Rev. A 92, 022336 (2015).
* Zeng _et al._ (2017) Y. Zeng, P. Xu, X. He, Y. Liu, M. Liu, J. Wang, D. J. Papoular, G. V. Shlyapnikov, and M. Zhan, Entangling Two Individual Atoms of Different Isotopes via Rydberg Blockade, Phys. Rev. Lett. 119, 160502 (2017).
* Picken _et al._ (2019) C. J. Picken, R. Legaie, K. McDonnell, and J. D. Pritchard, Entanglement of neutral-atom qubits with long ground-Rydberg coherence times, Quantum Sci. Technol. 4, 015011 (2019).
* Levine _et al._ (2018) H. Levine, A. Keesling, A. Omran, H. Bernien, S. Schwartz, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletić, and M. D. Lukin, High-fidelity control and entanglement of Rydberg atom qubits, Phys. Rev. Lett. 121, 123603 (2018).
* Graham _et al._ (2019) T. M. Graham, M. Kwon, B. Grinkemeyer, Z. Marra, X. Jiang, M. T. Lichtman, Y. Sun, M. Ebert, and M. Saffman, Rydberg mediated entanglement in a two-dimensional neutral atom qubit array, (2019), arXiv:1908.06103 .
* Levine _et al._ (2019) H. Levine, A. Keesling, G. Semeghini, A. Omran, T. T. Wang, S. Ebadi, H. Bernien, M. Greiner, V. Vuletić, H. Pichler, and M. D. Lukin, Parallel implementation of high-fidelity multi-qubit gates with neutral atoms, Phys. Rev. Lett. 123, 170503 (2019).
* Crow _et al._ (2016) D. Crow, R. Joynt, and M. Saffman, Improved Error Thresholds for Measurement-Free Error Correction, Phys. Rev. Lett. 117, 130503 (2016).
* Wang _et al._ (2015) Y. Wang, X. Zhang, T. A. Corcovilos, A. Kumar, and D. S. Weiss, Coherent Addressing of Individual Neutral Atoms in a 3D Optical Lattice, Phys. Rev. Lett. 115, 043003 (2015).
* Goreslavsky _et al._ (1980) S. P. Goreslavsky, N. B. Delone, and V. P. Krainov, The dynamics and spontaneous radiation of a two-level atom in a bichromatic field, J. Phys. B 13, 2659 (1980).
* Goerz _et al._ (2014) M. H. Goerz, E. J. Halperin, J. M. Aytac, C. P. Koch, and K. B. Whaley, Robustness of high-fidelity Rydberg gates with single-site addressability, Phys. Rev. A 90, 032329 (2014).
* Theis _et al._ (2016) L. S. Theis, F. Motzoi, F. K. Wilhelm, and M. Saffman, High-fidelity Rydberg-blockade entangling gate using shaped, analytic pulses, Phys. Rev. A 94, 032306 (2016).
* Shi (2017) X.-F. Shi, Rydberg Quantum Gates Free from Blockade Error, Phys. Rev. Appl. 7, 064017 (2017).
* Petrosyan _et al._ (2017) D. Petrosyan, F. Motzoi, M. Saffman, and K. Mølmer, High-fidelity Rydberg quantum gate via a two-atom dark state, Phys. Rev. A 96, 042306 (2017).
* Shi (2018a) X.-F. Shi, Accurate Quantum Logic Gates by Spin Echo in Rydberg Atoms, Phys. Rev. Appl. 10, 034006 (2018a).
* Shen _et al._ (2019) C.-P. Shen, J.-L. Wu, S.-L. Su, and E. Liang, Construction of robust Rydberg controlled-phase gates, Opt. Lett. 44, 2036 (2019).
* Shi (2019) X.-F. Shi, Fast, Accurate, and Realizable Two-Qubit Entangling Gates by Quantum Interference in Detuned Rabi Cycles of Rydberg Atoms, Phys. Rev. Appl. 11, 044035 (2019).
* Yu _et al._ (2019) D. Yu, H. Wang, D. Ma, X.-D. Zhao, and J. Qian, Adiabatic and high-fidelity quantum gates with hybrid Rydberg-Rydberg interactions, Opt. Express 27, 23080 (2019).
* Saffman and Walker (2005) M. Saffman and T. G. Walker, Analysis of a quantum logic device based on dipole-dipole interactions of optically trapped Rydberg atoms, Phys. Rev. A 72, 022347 (2005).
* Wineland _et al._ (1998) D. J. Wineland, C. Monroe, W. M. Itano, D. Leibfried, B. E. King, and D. M. Meekhof, Experimental issues in coherent quantum-state manipulation of trapped atomic ions, J. Res. Natl. Inst. Stand. Technol. 103, 259 (1998).
* Cohen-Tannoudji _et al._ (1998) C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg, _Atom-Photon Interactions_ (John Wiley & Sons, Inc., New York, 1998).
* Sørensen and Mølmer (1999) A. Sørensen and K. Mølmer, Quantum Computation with Ions in Thermal Motion, Phys. Rev. Lett. 82, 1971 (1999).
* Lehec _et al._ (2018) H. Lehec, A. Zuliani, W. Maineult, E. Luc-Koenig, P. Pillet, P. Cheinet, F. Niyaz, and T. F. Gallagher, Laser and microwave spectroscopy of even-parity Rydberg states of neutral ytterbium and multichannel quantum defect theory analysis, Phys. Rev. A 98, 062506 (2018).
* Beterov _et al._ (2009) I. Beterov, I. Ryabtsev, D. Tretyakov, and V. Entin, Quasiclassical calculations of blackbody-radiation-induced depopulation rates and effective lifetimes of Rydberg nS, nP, and nD alkali-metal atoms with n$\leq$80, Phys. Rev. A 79, 052504 (2009).
* Walker and Saffman (2008) T. G. Walker and M. Saffman, Consequences of Zeeman degeneracy for the van der Waals blockade between Rydberg atoms, Phys. Rev. A 77, 032723 (2008).
* Shi _et al._ (2016) X.-F. Shi, P. Svetlichnyy, and T. A. B. Kennedy, Spin – charge separation of dark-state polaritons in a Rydberg medium, J. Phys. B 49, 074005 (2016).
* Shi and Kennedy (2017) X.-F. Shi and T. A. B. Kennedy, Annulled van der Waals interaction and fast Rydberg quantum gates, Phys. Rev. A 95, 043429 (2017).
* Shi (2018b) X.-F. Shi, Universal Barenco quantum gates via a tunable non-collinear interaction, Phys. Rev. A 97, 032310 (2018b).
* Beterov and Saffman (2015) I. I. Beterov and M. Saffman, Rydberg blockade, Förster resonances, and quantum state measurements with different atomic species, Phys. Rev. A 92, 042710 (2015).
* Jau _et al._ (2016) Y.-Y. Jau, A. M. Hankin, T. Keating, I. H. Deutsch, and G. W. Biedermann, Entangling atomic spins with a Rydberg-dressed spin-flip blockade, Nat. Phys. 12, 71 (2016).
* de Léséleuc _et al._ (2018) S. de Léséleuc, D. Barredo, V. Lienhard, A. Browaeys, and T. Lahaye, Analysis of imperfections in the coherent optical excitation of single atoms to Rydberg states, Phys. Rev. A 97, 053803 (2018).
* Zhang _et al._ (2012) X. L. Zhang, A. T. Gill, L. Isenhower, T. G. Walker, and M. Saffman, Fidelity of a Rydberg-blockade quantum gate from simulated quantum process tomography, Phys. Rev. A 85, 042310 (2012).
* Urban _et al._ (2009) E. Urban, T. A. Johnson, T. Henage, L. Isenhower, D. D. Yavuz, T. G. Walker, and M. Saffman, Observation of Rydberg blockade between two atoms, Nat. Phys. 5, 110 (2009).
* Tan _et al._ (2015) T. R. Tan, J. P. Gaebler, Y. Lin, Y. Wan, R. Bowler, D. Leibfried, and D. J. Wineland, Multi-element logic gates for trapped-ion qubits, Nature 528, 380 (2015).
* Ballance _et al._ (2015) C. J. Ballance, V. M. Schäfer, J. P. Home, D. J. Szwer, S. C. Webster, D. T. C. Allcock, N. M. Linke, T. P. Harty, D. P. L. Aude Craik, D. N. Stacey, A. M. Steane, and D. M. Lucas, Hybrid quantum logic and a test of Bell’s inequality using two different atomic species, Nature 528, 384 (2015).
* Ballance _et al._ (2016) C. J. Ballance, T. P. Harty, N. M. Linke, M. A. Sepiol, and D. M. Lucas, High-Fidelity Quantum Logic Gates Using Trapped-Ion Hyperfine Qubits, Phys. Rev. Lett. 117, 060504 (2016).
* Gaebler _et al._ (2016) J. P. Gaebler, T. R. Tan, Y. Lin, Y. Wan, R. Bowler, A. C. Keith, S. Glancy, K. Coakley, E. Knill, D. Leibfried, and D. J. Wineland, High-Fidelity Universal Gate Set for $Be^{+}$ Ion Qubits, Phys. Rev. Lett. 117, 060505 (2016).
* Higgins _et al._ (2017a) G. Higgins, W. Li, F. Pokorny, C. Zhang, F. Kress, C. Maier, J. Haag, Q. Bodart, I. Lesanovsky, and M. Hennrich, A single strontium Rydberg ion confined in a Paul trap, Phys Rev X 7, 021038 (2017a).
* Higgins _et al._ (2017b) G. Higgins, F. Pokorny, C. Zhang, Q. Bodart, and M. Hennrich, Coherent control of a single trapped Rydberg ion, Phys. Rev. Lett. 119, 220501 (2017b).
* Engel _et al._ (2018) F. Engel, T. Dieterle, T. Schmid, C. Tomschitz, C. Veit, N. Zuber, R. Löw, T. Pfau, and F. Meinert, Observation of Rydberg Blockade Induced by a Single Ion, Phys. Rev. Lett. 121, 193401 (2018).
* Müller _et al._ (2008) M. Müller, L. Liang, I. Lesanovsky, and P. Zoller, Trapped Rydberg ions: From spin chains to fast quantum gates, New J. Phys. 10, 093009 (2008).
* Li _et al._ (2013) W. Li, A. W. Glaetzle, R. Nath, and I. Lesanovsky, Parallel execution of quantum gates in a long linear ion chain via Rydberg mode shaping, Phys. Rev. A 87, 052304 (2013).
* Li and Lesanovsky (2014) W. Li and I. Lesanovsky, Entangling quantum gate in trapped ions via Rydberg blockade, Appl. Phys. B 114, 37 (2014).
* (60) http://www.sintecoptronics.com/aom.asp .
* Kim _et al._ (2008) S. Kim, R. R. McLeod, M. Saffman, and K. H. Wagner, Doppler-free, multiwavelength acousto-optic deflector for two-photon addressing arrays of Rb atoms in a quantum information processor, Appl. Opt. 47, 1816 (2008).
* Saßmannshausen _et al._ (2013) H. Saßmannshausen, F. Merkt, and J. Deiglmayr, High-resolution spectroscopy of Rydberg states in an ultracold cesium gas, Phys. Rev. A 87, 032519 (2013).
* (63) https://www.quantumcomposers.com .
* Matthews _et al._ (2011) J. C. F. Matthews, A. Politi, D. Bonneau, and J. L. O’Brien, Heralding two-photon and four-photon path entanglement on a chip, Phys. Rev. Lett. 107, 163602 (2011).
* de Groote _et al._ (2015) R. P. de Groote, I. Budinčević, J. Billowes, M. L. Bissell, T. E. Cocolios, G. J. Farooq-Smith, V. N. Fedosseev, K. T. Flanagan, S. Franchoo, R. F. Garcia Ruiz, H. Heylen, R. Li, K. M. Lynch, B. A. Marsh, G. Neyens, R. E. Rossel, S. Rothe, H. H. Stroke, K. D. A. Wendt, S. G. Wilkins, and X. Yang, Use of a Continuous Wave Laser and Pockels Cell for Sensitive High-Resolution Collinear Resonance Ionization Spectroscopy, Phys. Rev. Lett. 115, 132501 (2015).
* Zhu _et al._ (2019) C. Zhu, R. Frigge, A. M. Turner, M. J. Abplanalp, B.-J. Sun, Y.-l. Chen, H. H. Chang, and R. I. Kaiser, A vacuum ultraviolet photoionization study on the formation of methanimine (CH2NH) and ethylenediamine (NH2CH2CH2NH2) in low temperature interstellar model ices exposed to ionizing radiation, Phys. Chem. Chem. Phys. 21, 1952 (2019).
* Eitel _et al._ (2017) F. Eitel, J. Pareja, A. Johchi, B. Böhm, D. Geyer, and A. Dreizler, Temporal evolution of auto-ignition of ethylene and methane jets propagating into a turbulent hot air co-flow vitiated with NOx, Combustion and Flame 177, 193 (2017).
* Bonebrake _et al._ (2019) J. M. Bonebrake, D. L. Blunck, J. K. Lefkowitz, and T. M. Ombrello, The effect of nanosecond pulsed high frequency discharges on the temperature evolution of ignition kernels, Proceedings of the Combustion Institute 37, 5561 (2019).
* Smith (2010) S. L. Smith, First commissioning results from the non-scaling FFAG accelerator, EMMA, EuCard-CON-2010-058, (2010).
* Watkins (2013) A. N. Watkins, Experimental Results for Temporally Overlapping Pulses from Quantel EverGreen 200 Laser (https://ntrs.nasa.gov/search.jsp?R=20140003162), (2013).
* (71) See the datasheet retrieved from https://ekspla.com/product/sy4000-synchronization-module .
# Transition slow-down by Rydberg interaction of neutral atoms and a fast
controlled-NOT quantum gate
Xiao-Feng Shi School of Physics and Optoelectronic Engineering, Xidian
University, Xi’an 710071, China
###### Abstract
Exploring controllable interactions lies at the heart of quantum science.
Neutral Rydberg atoms provide a versatile route toward flexible interactions
between single quanta. Previous efforts mainly focused on the excitation
annihilation (EA) effect of the Rydberg blockade due to its robustness against
interaction fluctuation. We study another effect of the Rydberg blockade,
namely, the transition slow-down (TSD). In TSD, a ground-Rydberg cycling in
one atom slows down a Rydberg-involved state transition of a nearby atom,
which is in contrast to EA that annihilates a presumed state transition. TSD
can lead to an accurate controlled-NOT (CNOT) gate with a sub-$\mu$s duration
of about $2\pi/\Omega+\epsilon$ using two pulses, where $\epsilon$ is a negligible
transient time to implement a phase change in the pulse and $\Omega$ is the
Rydberg Rabi frequency. The speedy and accurate TSD-based CNOT makes neutral
atoms comparable to superconducting systems and superior to ion-trap systems.
## I Introduction
There have been exciting advances in Rydberg atom quantum science recently Jaksch
_et al._ (2000); Lukin _et al._ (2001); Saffman _et al._ (2010); Saffman
(2016); Weiss and Saffman (2017); Firstenberg _et al._ (2016); Adams _et
al._ (2020); Browaeys and Lahaye (2020), because the strong dipole-dipole
interaction can be switched on and off coherently and rapidly.
Such interaction enables simulation of quantum many-body physics Gurian _et
al._ (2012); Tretyakov _et al._ (2017); De Léséleuc _et al._ (2019); Schauß
_et al._ (2015); Labuhn _et al._ (2016); Zeiher _et al._ (2016, 2017);
Bernien _et al._ (2017); de Léséleuc _et al._ (2018); Guardado-Sanchez _et
al._ (2018); Kim _et al._ (2018); Keesling _et al._ (2019); Ding _et al._
(2020); Borish _et al._ (2020), probing and manipulation of single photons
Dudin and Kuzmich (2012); Peyronel _et al._ (2012); Firstenberg _et al._
(2013); Li _et al._ (2013); Gorniaczyk _et al._ (2014); Baur _et al._
(2014); Tiarks _et al._ (2014); Li _et al._ (2016); Busche _et al._ (2017);
Ripka and Pfau (2018); Liang _et al._ (2018); Li _et al._ (2019); Thompson
_et al._ (2017), large-scale entanglement generation Ebert _et al._ (2015);
Omran _et al._ (2019), and quantum computation Ebert _et al._ (2015); Omran
_et al._ (2019); Wilk _et al._ (2010); Isenhower _et al._ (2010); Zhang _et
al._ (2010); Maller _et al._ (2015); Zeng _et al._ (2017); Levine _et al._
(2019); Graham _et al._ (2019); Madjarov _et al._ (2020); Jau _et al._
(2016); Levine _et al._ (2018); Picken _et al._ (2019); Tiarks _et al._
(2019); Jo _et al._ (2020). To date, however, most effort has focused on the
effect of excitation annihilation (EA) proposed in Jaksch _et al._ (2000);
Lukin _et al._ (2001) and reviewed in Saffman _et al._ (2010); Saffman
(2016); Weiss and Saffman (2017); Firstenberg _et al._ (2016); Adams _et
al._ (2020); Browaeys and Lahaye (2020), although there are other categories
as summarized in Table 1. Belonging to the Rydberg blockade regime, EA
involves single-atom Rydberg excitation and hence is not sensitive to the
fluctuation of interaction. Besides EA, one can also excite both qubits to
Rydberg states Jaksch _et al._ (2000), or explore the antiblockade regime
Ates _et al._ (2007); Amthor _et al._ (2010), or use the resonant dipole-
dipole flip Thompson _et al._ (2017); De Léséleuc _et al._ (2019). These
latter processes, however, involve two-atom Rydberg excitation and are
sensitive to fluctuations of the qubit separation, which can substantially
reduce the fidelity of quantum controls based on them Jo _et al._ (2020).
It is an open question whether there is another means other than EA to explore
the Rydberg blockade regime for efficient and high-fidelity quantum control.
Here, we study an unexplored transition slow-down (TSD) effect of the dipole-
dipole interaction between Rydberg atoms. When the state of one atom
oscillates back and forth between ground and Rydberg states, its Rydberg
interaction does not block a Rydberg-involved state swap in a nearby atom, but
slows it down, and the fold of slow-down is adjustable. The resulting TSD
is a slow-down of the state transfer in one of the two atoms, even though
the collective Rabi frequency is enhanced by the many-body effect. Although
it appears as a slow-down, a controlled TSD can in fact drastically
speed up a crucial element of a quantum computing circuit, namely the
controlled-NOT (CNOT), a key two-qubit entangling gate in
the circuit model of quantum computing in both theory Nielsen and Chuang
(2000); Williams (2011); Ladd _et al._ (2010); Shor (1997); Bremner _et al._
(2002); Shende _et al._ (2004) and experiment Peruzzo _et al._ (2014);
Debnath _et al._ (2016). TSD is a new route toward efficient, flexible, and
high-fidelity quantum control over neutral atoms because it operates in
the strong blockade regime, which is robust to fluctuations of the interaction.
Application | Category | Type of $V$ | Theoretical proposal | Realized
---|---|---|---|---
TSD | Blockade ($V\gg\Omega$); robust to fluctuation of $V$ | Van der Waals; dipole-dipole | Here | CNOT; Toffoli (this work)
Excitation annihilation | Blockade ($V\gg\Omega$); robust to fluctuation of $V$ | Van der Waals; dipole-dipole | Jaksch _et al._ (2000); Lukin _et al._ (2001) | CNOT: Isenhower _et al._ (2010); Zhang _et al._ (2010); Maller _et al._ (2015); Zeng _et al._ (2017); Levine _et al._ (2019); Graham _et al._ (2019); Tiarks _et al._ (2019). Entanglement: Ebert _et al._ (2015); Omran _et al._ (2019); Wilk _et al._ (2010); Isenhower _et al._ (2010); Zhang _et al._ (2010); Maller _et al._ (2015); Jau _et al._ (2016); Zeng _et al._ (2017); Levine _et al._ (2018); Picken _et al._ (2019); Levine _et al._ (2019); Graham _et al._ (2019); Madjarov _et al._ (2020); for photons: Tiarks _et al._ (2019)
Phase shift | Blockade ($V\gg\Omega$); robust to fluctuation of $V$ | Van der Waals; dipole-dipole | Jaksch _et al._ (2000) | Many-body; optics: Schauß _et al._ (2015); Labuhn _et al._ (2016); Zeiher _et al._ (2016, 2017); Bernien _et al._ (2017); de Léséleuc _et al._ (2018); Guardado-Sanchez _et al._ (2018); Kim _et al._ (2018); Keesling _et al._ (2019); Ding _et al._ (2020); Borish _et al._ (2020); Dudin and Kuzmich (2012); Peyronel _et al._ (2012); Firstenberg _et al._ (2013); Li _et al._ (2013); Gorniaczyk _et al._ (2014); Baur _et al._ (2014); Tiarks _et al._ (2014); Li _et al._ (2016); Busche _et al._ (2017); Ripka and Pfau (2018); Liang _et al._ (2018); Li _et al._ (2019)
Antiblockade | Frozen interaction; sensitive to fluctuation of $V$ | Dipole-dipole; Van der Waals | Ates _et al._ (2007) | Amthor _et al._ (2010); Jo _et al._ (2020)
State flip | Frozen interaction; sensitive to fluctuation of $V$ | Dipole-dipole | (Intrinsic) | Gurian _et al._ (2012); Tretyakov _et al._ (2017); De Léséleuc _et al._ (2019); Thompson _et al._ (2017)
Table 1: Summary of quantum science and technology based on two-body Rydberg
interaction of neutral atoms (references are representative rather than complete).
There are in general two categories, the strong blockade regime and the frozen
interaction regime; the former has been studied most widely because of its
robustness to fluctuations of the interaction.
Though Rydberg atoms have kindled ambitions for large-scale
quantum computing Jaksch _et al._ (2000); Lukin _et al._ (2001); Saffman
_et al._ (2010); Saffman (2016); Weiss and Saffman (2017), and recent
experiments demonstrated remarkable advances Wilk _et al._ (2010); Isenhower
_et al._ (2010); Zhang _et al._ (2010); Maller _et al._ (2015); Jau _et
al._ (2016); Zeng _et al._ (2017); Levine _et al._ (2018); Picken _et al._
(2019); Levine _et al._ (2019); Graham _et al._ (2019); Madjarov _et al._
(2020); Tiarks _et al._ (2019); Jo _et al._ (2020), further progress toward
neutral-atom quantum computing is hindered by the difficulty of preparing a fast
and accurate CNOT. This is partly because each of those CNOT gates was
realized via combining an EA-based controlled-Z ($C_{Z}$) and a series of
single-qubit gates Isenhower _et al._ (2010); Zhang _et al._ (2010); Maller
_et al._ (2015); Zeng _et al._ (2017); Graham _et al._ (2019); Levine _et
al._ (2019), leading to CNOT durations dominated by single-qubit operations
(e.g., over $4~{}\mu$s in Levine _et al._ (2019); Graham _et al._ (2019)).
In contrast, a TSD-based CNOT has a duration of about $2\pi/\Omega+\epsilon$ using
only two Rydberg pulses, i.e., it needs no single-qubit rotations, where
$\epsilon$ is a transient moment to implement a phase change in the pulses and
$\Omega$ is the Rydberg Rabi frequency. Using values of $\Omega$ and $\epsilon$
from Graham _et al._ (2019); Levine _et al._ (2019), the TSD-based CNOT
(duration $\sim 0.3~{}\mu$s) would be orders of magnitude faster than those in
Graham _et al._ (2019); Levine _et al._ (2019), which means that a neutral-
atom CNOT can be much faster than the CNOT (or ground Bell-state gate) by
trapped ions Cirac and Zoller (1995); Sørensen and Mølmer (1999); Ballance
_et al._ (2015, 2016); Gaebler _et al._ (2016) (notice that the fast ion-trap
gates in Schäfer _et al._ (2018); Zhang _et al._ (2020) are phase gates).
Although still inferior to superconducting circuits You and Nori (2005);
Devoret and Schoelkopf (2013) where CNOT gate times can be around $50$ ns
Barends _et al._ (2014), the TSD-based CNOT is applicable in scalable
neutral-atom platforms that are ideal for long-lived storage of quantum
information at room temperature.
## II A two-state case
The simplest model of TSD consists of two nearby atoms, each pumped by a laser
pulse that induces a transition between a ground state $|1\rangle$ and a
Rydberg state $|r\rangle$. The Rabi frequency is $\Omega_{\text{c(t)}}$ for
the control (target) qubit, and the dipole-dipole interaction $V$ in
$|rr\rangle$ is assumed large compared to $\Omega_{\text{c(t)}}$ so that
$|rr\rangle$ is not populated. For Rydberg interaction of the van der Waals
type, $V$ is limited because in this regime the native dipole-dipole
interaction should be much smaller than the energy gaps between nearby two-
atom Rydberg states Walker and Saffman (2008), and thus the qubit spacing
should be large enough. On the other hand, a direct dipole-dipole interaction
can be huge for high-lying Rydberg states although it is no longer a pure
energy shift. In this sense, the residual blockade error of the order
$\sim\Omega_{\text{c(t)}}^{2}/V^{2}$ Saffman and Walker (2005) can be
negligible in the strong dipole-dipole interaction regime as long as the two-
atom spacing is beyond the LeRoy radius.
Figure 1 shows a contrast between EA and a two-state TSD, where the pulse sent
to the control atom is applied during $t\in[t_{0},~{}t_{0}+t_{1})$, and that
to the target atom is during $t\in[0,~{}2t_{1})$ with
$t_{0}<t_{1}\equiv\pi/\Omega_{\text{t}}$. Starting from an initial two-atom
state $|11\rangle$, the wavefunction at $t=t_{0}$ becomes
$\displaystyle|\psi(t_{0})\rangle$ $\displaystyle=$
$\displaystyle\cos(\Omega_{\text{t}}t_{0}/2)|11\rangle-i\sin(\Omega_{\text{t}}t_{0}/2)|1r\rangle.$
(1)
During $t\in[t_{0},~{}t_{0}+t_{1})$, the system Hamiltonian is
$\displaystyle\hat{H}$ $\displaystyle=$
$\displaystyle[(\Omega_{\text{c}}|r1\rangle+\Omega_{\text{t}}|1r\rangle)\langle
11|/2+\text{H.c.}]+\hat{H^{\prime}},$ (2)
where $\hat{H^{\prime}}$ includes excitation between
$\\{|1r\rangle,~{}|r1\rangle\\}$ and $|rr\rangle$ and the dipole-dipole flip
from $|rr\rangle$. Focusing on the strong interaction regime,
$\hat{H^{\prime}}$ can be discarded because $|rr\rangle$ is not coupled Jaksch
_et al._ (2000) and hence $\hat{H}=\bar{\Omega}|\mathbb{R}\rangle\langle
11|/2+\text{H.c.}$, where
$\bar{\Omega}\equiv\sqrt{\Omega_{\text{c}}^{2}+\Omega_{\text{t}}^{2}}$ and
$|\mathbb{R}\rangle\equiv(\Omega_{\text{c}}|r1\rangle+\Omega_{\text{t}}|1r\rangle)/\bar{\Omega}$.
When $\alpha\equiv\Omega_{\text{c}}/\Omega_{\text{t}}=\sqrt{15}$, we have
$\bar{\Omega}t_{1}=4\pi$, so that the wavefunction at $t=t_{0}+t_{1}$, given
by $e^{-it_{1}\hat{H}}|\psi(t_{0})\rangle$, is equal to Eq. (1); it is as if
nothing has happened to the target qubit upon completion of the drive
of the control qubit. Then, because the pulse for the control qubit ends at
$t=t_{0}+t_{1}$, the continuous pumping on the target qubit drives the state
to $|1r\rangle$ at $t=2t_{1}$, i.e., a $2\pi$ pulse, instead of a $\pi$ pulse,
completes the transition $|11\rangle\rightarrow|1r\rangle$, which corresponds
to a 2-fold slow-down.
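This two-state TSD can be checked with a minimal numerical sketch (scipy is assumed for matrix exponentials, and $\Omega_{\text{t}}=1$ is an arbitrary unit choice; the Hamiltonians and pulse timings are exactly those defined above):

```python
import numpy as np
from scipy.linalg import expm

# Basis ordering: {|11>, |1r>, |r1>}; |rr> is discarded in the
# strong-blockade regime, as in the text.
O_t = 1.0                        # target Rabi frequency Ω_t
O_c = np.sqrt(15) * O_t          # α = Ω_c/Ω_t = √15
t1 = np.pi / O_t                 # t_1 = π/Ω_t
t0 = 0.4 * t1                    # any t_0 < t_1

# Target-only drive: (Ω_t/2)(|1r><11| + h.c.)
H_t = 0.5 * O_t * np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
# Both atoms driven: (Ω_t|1r> + Ω_c|r1>)<11|/2 + h.c.
H_b = 0.5 * np.array([[0, O_t, O_c], [O_t, 0, 0], [O_c, 0, 0]])

# With α = √15 one has Ω̄·t_1 = 4π, so the overlapped segment is an identity:
assert np.allclose(expm(-1j * t1 * H_b), np.eye(3), atol=1e-9)

# Full sequence: target alone for t0, both atoms for t1, target alone for t1-t0.
psi = np.array([1.0, 0.0, 0.0], dtype=complex)   # start in |11>
psi = expm(-1j * (t1 - t0) * H_t) @ expm(-1j * t1 * H_b) \
      @ expm(-1j * t0 * H_t) @ psi
pop_1r = abs(psi[1])**2   # a 2π target pulse completes |11> -> |1r>, so ≈ 1
```

The assertion makes the mechanism explicit: the period when the control atom is also driven contributes only an identity, so the target qubit needs twice the pulse area, i.e., the 2-fold slow-down.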
Figure 1: Comparison between excitation annihilation (EA) and transition slow-
down (TSD) in two states. Black and blue (gray) circles represent ground and
Rydberg states, respectively. Hollow circles denote Rydberg states not
populated. (a) The first and second pulses in EA are sent to the control and
target qubits, respectively, both for the transition between a ground state
$|1\rangle$ and a Rydberg state $|r\rangle$. (b) State evolution starting from
the two-qubit state $|11\rangle$ in EA. The blockade effect is in the second
pulse, where the transition $|r1\rangle\leftrightarrow|rr\rangle$ is
annihilated when $\Omega_{\text{t}}\ll V$. (c) In a two-state TSD, one pulse
is applied to the target qubit with duration $2t_{\pi}$, within which another
pulse is applied to the control qubit with duration $t_{\pi}$, where
$t_{\pi}=\pi/\Omega_{\text{t}}$. The delay between the pulses for the two
qubits shall be smaller than $t_{\pi}$. (d) The parameter
$\alpha\equiv\Omega_{\text{c}}/\Omega_{\text{t}}$ depends on the desired
extent of TSD. With $\alpha=\sqrt{15}$, the Rydberg pumping in the target
qubit, $|11\rangle\rightarrow|1r\rangle$, requires a $2\pi$ pulse, i.e., twice
the duration needed when the control qubit is not pumped. Figure 2: A three-state TSD and
a fast CNOT by two pulses. (a) A three-state TSD is realized by sending to
each qubit two pulses, each with equal duration $\pi/\Omega_{\text{c}}$. The
Rabi frequencies are $\Omega_{\text{c}}~{}(\Omega_{\text{c}})$ and
$\Omega_{\text{t}}~{}(-\Omega_{\text{t}})$ for the control and target qubits
in the first (second) pulse, where
$\Omega_{\text{t}}=\sqrt{1.5}\Omega_{\text{c}}$. (b) Relevant transitions for
$|10\rangle\rightarrow|11\rangle$. Transitions involving $|00\rangle$ and
$|01\rangle$ are not shown because they do not involve dipole-dipole
interaction. That the arrows point left or right does not mean change of
angular momentum but only for clarifying the protocol. (c) The input state
$|00\rangle$ transitions back to itself because of the spin echo by
$\Omega_{\text{t}}\rightarrow-\Omega_{\text{t}}$. (d) With the same pumping,
the input state $|10\rangle$ transitions to $|11\rangle$ at the end of the
pulse sequence. The time for implementing the phase change between the two
pulses is not shown for brevity (see text). The duration of the two pulses is
$2\pi/\Omega_{\text{c}}=\sqrt{6}\pi/\Omega_{\text{t}}$, corresponding to a
$\sqrt{3}$-fold slow-down because the state swap
$|0\rangle\rightarrow|r\rangle\rightarrow|1\rangle$ in the target qubit can
complete by a pulse duration $\sqrt{2}\pi/\Omega_{\text{t}}$ if no TSD is
used.
## III A fast CNOT
To show the strength of TSD in quantum information, we would like to consider
TSD in three states. This is because the two-state TSD ends up in a Rydberg
state which is not stable, while quantum information shall be encoded with two
qubit states $|0\rangle$ and $|1\rangle$ in the more stable ground manifold.
For frequently used rubidium and cesium atoms, $|0\rangle$ and $|1\rangle$ can
be chosen from the two hyperfine-split ground levels with a frequency
difference of several gigahertz.
Consider a ground-Rydberg-ground transition chain
$|0\rangle\leftrightarrow|r\rangle\leftrightarrow|1\rangle$, i.e., a state
swap between the two long-lived qubit states via a metastable Rydberg state
$|r\rangle$. The three-state TSD is implemented by pumping the control and
target qubits with respective Rabi frequencies $\Omega_{\text{c}}$ and
$\Omega_{\text{t}}$ for the same duration, with transition
$|1\rangle\xleftrightarrow{\Omega_{\text{c}}}|r\rangle$ for the control qubit
and
$|0\rangle\xleftrightarrow{\Omega_{\text{t}}}|r\rangle\xleftrightarrow{\Omega_{\text{t}}}|1\rangle$
for the target qubit. This requires a setup capable of pumping both qubit
states to a common Rydberg state in the target qubit, which is feasible as
experimentally demonstrated for two-atom entanglement about a decade ago Wilk
_et al._ (2010) and for $W$-state preparation in atom ensembles later Ebert
_et al._ (2015). Because $|0\rangle$ is not pumped in the control qubit, the
Hamiltonian is
$\hat{\mathcal{H}}_{c0}(\Omega_{\text{t}})=\Omega_{\text{t}}(|00\rangle+|01\rangle)\langle
0r|/2+$H.c. for the input states $|00\rangle$ and $|01\rangle$, where the
subscript $c0$ ($c1$) denotes that the Hamiltonian applies when the input
state for the control qubit is $|0(1)\rangle$. For the remaining input states,
we consider the ordered basis
$\\{|1r\rangle,|r1\rangle,|r0\rangle,|11\rangle,|10\rangle\\}$ for the
following Hamiltonian
$\displaystyle\hat{\mathcal{H}}_{c1}(\Omega_{\text{c}},\Omega_{\text{t}})$
$\displaystyle=$
$\displaystyle\frac{1}{2}\left(\begin{array}[]{ccccc}0&0&0&\Omega_{\text{t}}&\Omega_{\text{t}}\\\
0&0&0&\Omega_{\text{c}}&0\\\ 0&0&0&0&\Omega_{\text{c}}\\\
\Omega_{\text{t}}&\Omega_{\text{c}}&0&0&0\\\
\Omega_{\text{t}}&0&\Omega_{\text{c}}&0&0\end{array}\right),$ (8)
where we ignore the two-atom Rydberg state $|rr\rangle$ with reasons shown
below Eq. (2). By diagonalizing Eq. (8) one has
$\displaystyle|10\rangle$ $\displaystyle=$
$\displaystyle(|\mathscr{R}_{4}\rangle-|\mathscr{R}_{3}\rangle+|\mathscr{R}_{2}\rangle-|\mathscr{R}_{1}\rangle)/2,$
$\displaystyle|11\rangle$ $\displaystyle=$
$\displaystyle(|\mathscr{R}_{4}\rangle-|\mathscr{R}_{3}\rangle-|\mathscr{R}_{2}\rangle+|\mathscr{R}_{1}\rangle)/2,$
(9)
where $|\mathscr{R}_{k}\rangle$ with $k=1-4$ are four eigenvectors of Eq. (8),
with eigenvalues
$(\Omega_{\text{c}},~{}-\Omega_{\text{c}},~{}\overline{\Omega},-\overline{\Omega})/2$,
respectively [the fifth eigenvector does not enter Eq. (9)], where
$\overline{\Omega}\equiv\sqrt{\Omega_{\text{c}}^{2}+2\Omega_{\text{t}}^{2}}$
and
$\displaystyle|\mathscr{R}_{1,2}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{2}[|r1\rangle-|r0\rangle\pm(|11\rangle-|10\rangle)],$
(10) $\displaystyle|\mathscr{R}_{3,4}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{2\overline{\Omega}}[\Omega_{\text{c}}(|r1\rangle+|r0\rangle)\pm\overline{\Omega}(|11\rangle+|10\rangle)+2\Omega_{\text{t}}|1r\rangle].$
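This spectrum, and the two-pulse spin-echo sequence built on it later in this section (each pulse of duration $\pi/\Omega_{\text{c}}$, with $|\Omega_{\text{t}}|/\Omega_{\text{c}}=\sqrt{6}/2$ and the sign of $\Omega_{\text{t}}$ flipped between the pulses), can be verified numerically. The following sketch assumes scipy and sets $\Omega_{\text{c}}=1$ as the unit:

```python
import numpy as np
from scipy.linalg import expm

O_c = 1.0
O_t = np.sqrt(6) / 2 * O_c             # condition |Ω_t|/Ω_c = √6/2
O_bar = np.sqrt(O_c**2 + 2 * O_t**2)   # Ω̄, here equal to 2Ω_c

def H_c1(o_c, o_t):
    """Eq. (8) in the ordered basis {|1r>, |r1>, |r0>, |11>, |10>}."""
    h = np.zeros((5, 5))
    h[0, 3] = h[0, 4] = o_t   # |1r> <-> |11>, |10>
    h[1, 3] = o_c             # |r1> <-> |11>
    h[2, 4] = o_c             # |r0> <-> |10>
    return 0.5 * (h + h.T)

# Spectrum is {±Ω_c/2, ±Ω̄/2, 0}, matching the eigenvalues quoted above.
vals = np.sort(np.linalg.eigvalsh(H_c1(O_c, O_t)))
assert np.allclose(vals, [-O_bar / 2, -O_c / 2, 0, O_c / 2, O_bar / 2])

# Two pulses of duration t_pi = π/Ω_c, with Ω_t -> -Ω_t in the second:
t_pi = np.pi / O_c
U = expm(-1j * t_pi * H_c1(O_c, -O_t)) @ expm(-1j * t_pi * H_c1(O_c, O_t))
# Control in |1>: the target qubit is flipped, |10> <-> |11>.
assert abs(U[3, 4]) > 1 - 1e-9 and abs(U[4, 3]) > 1 - 1e-9

# Control in |0>: the ladder |00> <-> |0r> <-> |01> with H_c0(-Ω_t) = -H_c0(Ω_t),
# so the echo exactly undoes pulse 1 and |00>, |01> return to themselves.
H_c0 = lambda o_t: 0.5 * o_t * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
U0 = expm(-1j * t_pi * H_c0(-O_t)) @ expm(-1j * t_pi * H_c0(O_t))
assert np.allclose(U0, np.eye(3))
```

The four assertions together reproduce the CNOT truth table of the spin-echo-assisted TSD: the target state swaps when the control is in $|1\rangle$ and is untouched when the control is in $|0\rangle$.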
We proceed to describe an accurate and exceedingly fast CNOT by a spin-echo
assisted TSD. The sequence consists of two pulses, each with duration
$\pi/\Omega_{\text{c}}$ and condition
$|\Omega_{\text{t}}|/\Omega_{\text{c}}=\sqrt{6}/2$, shown in Fig. 2(a). A
$\pi$ phase change (requiring a time $\epsilon$) is inserted between the
pulses sent to the target qubit so as to induce spin echo, where $\epsilon$
can be around $10$ ns Levine _et al._ (2019). The spin echo suppresses the
state swap of the target qubit if the control qubit is initialized in
$|0\rangle$ Shi (2018a, 2020a); but when the control qubit is initialized in
$|1\rangle$, it is pumped to $|r\rangle$ which results in TSD in the target
qubit, i.e., the transition
$|0\rangle\rightleftharpoons|r\rangle\rightleftharpoons|1\rangle$ in the
target qubit, which can occur with a pulse duration
$\sqrt{2}\pi/\Omega_{\text{t}}$ if no TSD is used Shi (2018b), will be slowed
down by a factor of $\sqrt{3}$. The mechanism is understood in two steps. First,
during $t\in[0,~{}t_{\pi})$ with $t_{\pi}=\pi/\Omega_{\text{c}}$, the input
state $|\psi_{c0}(0)\rangle=|00\rangle$ or $|01\rangle$ evolves according to
$e^{-it\mathcal{H}_{c0}(\Omega_{\text{t}})}|\psi_{c0}(0)\rangle$, and after a
$\pi$ phase change to the Rydberg Rabi frequency which may require a finite
transient time $\epsilon$, the state evolution becomes
$e^{-i(t-t_{\pi}-\epsilon)\mathcal{H}_{c0}(-\Omega_{\text{t}})}|\psi_{c0}(t_{\pi})\rangle$
during the second pulse, leading to
$e^{-it_{\pi}\mathcal{H}_{c0}(-\Omega_{\text{t}})}|\psi_{c0}(t_{\pi})\rangle=e^{-it_{\pi}\mathcal{H}_{c0}(-\Omega_{\text{t}})}e^{-it_{\pi}\mathcal{H}_{c0}(\Omega_{\text{t}})}|\psi_{c0}(0)\rangle=|\psi_{c0}(0)\rangle$
at the end of the sequence. Second, according to Eq. (10), the first pulse
during $t\in[0,~{}t_{\pi})$ evolves the input state
$|\psi_{c1}(0)\rangle=|10\rangle$ according to
$\displaystyle\frac{1}{2}(e^{\frac{it\overline{\Omega}}{2}}|\mathscr{R}_{4}\rangle-e^{\frac{-it\overline{\Omega}}{2}}|\mathscr{R}_{3}\rangle+e^{\frac{it\Omega_{\text{c}}}{2}}|\mathscr{R}_{2}\rangle-e^{-\frac{it\Omega_{\text{c}}}{2}}|\mathscr{R}_{1}\rangle),$
(11)
which becomes
$\displaystyle|\psi_{c1}(t_{\pi})\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{2}(i|\mathscr{R}_{1}\rangle+i|\mathscr{R}_{2}\rangle+|\mathscr{R}_{3}\rangle-|\mathscr{R}_{4}\rangle)$
(12)
at $t=t_{\pi}$ because
$(t_{\pi}\Omega_{\text{c}},~{}t_{\pi}\overline{\Omega})=(\pi,~{}2\pi)$. In Eq.
(12), the eigenvectors $|\mathscr{R}_{j}\rangle$, $j=1-4$, are defined in Eq.
(10) with Rabi frequencies $(\Omega_{\text{c}},~{}\Omega_{\text{t}})$ during
$t\in[0,~{}t_{\pi})$. During $t\in t_{\pi}+\epsilon+[0,~{}t_{\pi})$, the
Hamiltonian has Rabi frequencies $(\Omega_{\text{c}},~{}-\Omega_{\text{t}})$
with the eigenvectors $|\mathscr{R}_{1,2}\rangle$ the same as in Eq. (10), but
the expressions for $|\mathscr{R}_{3,4}\rangle$ become
$\displaystyle|\mathscr{R}_{3,4}^{\prime}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{2\overline{\Omega}}[\Omega_{\text{c}}(|r1\rangle+|r0\rangle)\pm\overline{\Omega}(|11\rangle+|10\rangle)-2\Omega_{\text{t}}|1r\rangle].$ (13)
Using Eqs. (10) and (13) we cast Eq. (12) into
$\displaystyle|\psi_{c1}(t_{\pi})\rangle=e^{-it_{\pi}\mathcal{H}_{c1}(\Omega_{\text{c}},-\Omega_{\text{t}})}|\psi_{c1}(0)\rangle,$
(14)
i.e., the state at $t=t_{\pi}$ is as if the Rabi frequencies
$(\Omega_{\text{c}},~{}-\Omega_{\text{t}})$ had been used during the first pulse.
Equation (14) is the key feature enabling a fast CNOT. After the second pulse,
the state at $t=2t_{\pi}+\epsilon$ becomes
$e^{-it_{\pi}\mathcal{H}_{c1}(\Omega_{\text{c}},-\Omega_{\text{t}})}|\psi_{c1}(t_{\pi})\rangle$
that can be rewritten as
$e^{-2it_{\pi}\mathcal{H}_{c1}(\Omega_{\text{c}},-\Omega_{\text{t}})}|\psi_{c1}(0)\rangle$
thanks to Eq. (14), which further reduces to $|11\rangle$ according to Eqs.
(9) and (11). A similar analysis shows that if the initial state is
$|11\rangle$, it maps to $|10\rangle$ upon completion of the pulse sequence. So the
following map is realized
$\displaystyle\\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\\}\rightarrow\\{|00\rangle,|01\rangle,|11\rangle,|10\rangle\\}$
(15)
which is the standard CNOT. A numerical simulation of the population evolution
by $\mathcal{H}_{c0}$ and $\mathcal{H}_{c1}$ in Eq. (8) is shown in Figs. 2(c)
and 2(d) for the input states $|00\rangle$ and $|10\rangle$, respectively,
where the transient time $\epsilon$ is ignored for brevity.
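The two-pulse sequence can also be checked with a few lines of code. The sketch below (Python/NumPy; $\Omega_{\text{c}}=1$ units and the neglect of the transient time $\epsilon$ are illustrative assumptions) evolves the computational input states through the two pulses, flipping the sign of $\Omega_{\text{t}}$ between them, and recovers the map of Eq. (15):

```python
import numpy as np
from numpy.linalg import eigh

Oc = 1.0
Ot = np.sqrt(6) / 2 * Oc          # condition |Omega_t|/Omega_c = sqrt(6)/2
t_pi = np.pi / Oc                 # duration of each of the two pulses

def U(H, t):
    """Exact propagator exp(-i H t) via spectral decomposition (H is Hermitian)."""
    w, v = eigh(H)
    return (v * np.exp(-1j * w * t)) @ v.conj().T

def Hc0(Ot_):
    """Control in |0>: basis {|0r>, |00>, |01>}; only the target field acts."""
    H = np.zeros((3, 3)); H[0, 1] = H[0, 2] = Ot_ / 2
    return H + H.T

def Hc1(Ot_):
    """Control in |1>: basis {|1r>, |r1>, |r0>, |11>, |10>}, Eq. (8)."""
    H = np.zeros((5, 5)); H[0, 3] = H[0, 4] = Ot_ / 2; H[1, 3] = H[2, 4] = Oc / 2
    return H + H.T

# Two pulses; the target-field sign is flipped (pi phase change) for pulse 2.
U0 = U(Hc0(-Ot), t_pi) @ U(Hc0(Ot), t_pi)   # acts on {|0r>,|00>,|01>}
U1 = U(Hc1(-Ot), t_pi) @ U(Hc1(Ot), t_pi)   # acts on {|1r>,|r1>,|r0>,|11>,|10>}

e = np.eye(5)
out10 = U1 @ e[4]                 # input |10> (basis index 4)
out11 = U1 @ e[3]                 # input |11> (basis index 3)
print(abs(out10[3])**2)           # population of |11>: |10> -> |11>
print(abs(out11[4])**2)           # population of |10>: |11> -> |10>
print(abs((U0 @ np.eye(3)[1])[1])**2)  # |00> returns to |00>
```

Because $\hat{\mathcal{H}}_{c0}(-\Omega_{\text{t}})=-\hat{\mathcal{H}}_{c0}(\Omega_{\text{t}})$, the control-in-$|0\rangle$ block composes to the identity exactly, while the control-in-$|1\rangle$ block swaps $|10\rangle$ and $|11\rangle$.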
High fidelity is possible with the TSD-based CNOT. To estimate the intrinsic
fidelity limited by Rydberg-state decay and Doppler dephasing, we consider
atomic levels and Rydberg laser Rabi frequencies in recent experiments Graham
_et al._ (2019); Levine _et al._ (2019), and numerically found that the
fidelity of the TSD-based CNOT or Bell state would reach $99.9\%$ ($99.8\%$)
with experimentally affordable effective temperatures $T_{a}=5~{}(15)~{}\mu$K
of qubit motion Picken _et al._ (2019); Zeng _et al._ (2017); Graham _et
al._ (2019). As detailed in Appendix A.2, this estimate assumes that the field
for $|0\rangle\rightarrow|r\rangle$ and that for
$|1\rangle\rightarrow|r\rangle$ in the target qubit copropagate, leading to
opposite phases in the two transitions of the chain
$|0\rangle\xrightarrow{\Omega_{\text{t}}e^{itkv_{t}}}|r\rangle\xrightarrow{\Omega_{\text{t}}e^{-itkv_{t}}}|1\rangle$,
where $k$ is the wavevector and $v_{t}$ the speed of the target qubit along
the propagation direction of light. Then, the state transfer from $|0\rangle$
to $|1\rangle$ (or the reverse) picks up two opposite phases $\pm tkv_{t}$
which partly suppresses the dephasing. Fortunately, our method is not strictly
dependent on the above dephasing-resilient configuration. For example, for the
worst case of counterpropagating fields so that the sequential state transfer
from $|0\rangle$ to $|1\rangle$ picks up a total phase $2tkv_{t}$, simulation
in Appendix A shows that fidelity over $99.6\%$ is achievable for $T_{a}\sim
15~{}\mu$K at room temperature. The robustness against the Doppler dephasing
benefits from the avoidance of shelving the control atom on the Rydberg level
in free flight during pumping the target atom as required in an EA-based CNOT
Müller _et al._ (2009); Shi (2018b).
The above analysis assumes $V/2\pi=500$ MHz corresponding to an interatomic
distance $l=3~{}\mu$m. When $V/2\pi=100$ MHz with larger $l=4.6~{}\mu$m (where
crosstalk can be negligible Graham _et al._ (2019)), the fidelity would be
$99.8\%$ ($99.7\%$) with $T_{a}=5~{}(15)~{}\mu$K at room temperature (see
Appendix B for details). This benefits from the fact that our CNOT is insensitive to
changes of the interaction $V$ in the strong blockade regime. Finally, with
fluctuation (of relative Gaussian width $\sigma$) of the two Rabi frequencies
for the two transitions $|0(1)\rangle\leftrightarrow|r\rangle$ in the target
qubit, a very small extra error $\in(0.02,~{}0.4)\%$ arises for
$\sigma\in(1,~{}5)\%$ as detailed in Appendix C. So, high fidelity is
achievable since the power fluctuation of Rydberg lasers can be well
suppressed Levine _et al._ (2018, 2019).
## IV Discussion and conclusions
The CNOT in Eq. (15) is implemented within a short time
$2\pi/\Omega_{\text{c}}=\sqrt{6}\pi/\Omega_{\text{t}}$ plus a transient moment
$\epsilon$ for a phase change between the two pulses, where $\epsilon$ can be
negligible as in Levine _et al._ (2019). Although a phase twist is used in
both Levine _et al._ (2019) and here, our TSD method is drastically different
in physics from the method of Levine _et al._ (2019) as discussed in Appendix
D. Note that Eq. (15) does not depend on any single-qubit gates if one starts
from Eq. (8). If $s$ or $d$-orbital Rydberg states are used, two-photon
transitions can be applied, which lead to ac Stark shifts for the ground and
Rydberg states. As shown in Appendix E, these shifts are annulled in Eq. (8),
which is achievable by choosing an appropriate ratio between the detuning at the
intermediate state and the magnitudes of the fields for the lower and upper
transitions Maller _et al._ (2015).
Our CNOT gate duration can be $\lesssim 0.3~{}\mu$s with Rydberg Rabi
frequencies ($2\pi\times(3.5\sim 4.6)$ MHz) like those realized in Graham _et
al._ (2019); Levine _et al._ (2019) that showed CNOT gate durations orders of
magnitude longer than here. Previous CNOT gates Zhang _et al._ (2010); Maller
_et al._ (2015); Zeng _et al._ (2017); Graham _et al._ (2019); Levine _et
al._ (2019); Isenhower _et al._ (2010) required two or more single-qubit
rotations to convert $C_{Z}$ to CNOT, which was usually achieved with two-
frequency Raman light Isenhower _et al._ (2010); Zhang _et al._ (2010); Zeng
_et al._ (2017); Levine _et al._ (2019), or with microwave driving assisted
by Stark shift of laser fields Graham _et al._ (2019); Maller _et al._
(2015). High-fidelity realizations of such gates came with long gate durations
Xia _et al._ (2015); Wang _et al._ (2016); Graham _et al._ (2019); Levine
_et al._ (2019), although they can also be carried out rapidly in
proof-of-principle experiments Yavuz _et al._ (2006); Jones _et al._ (2007) where
high fidelity was not the main concern. The atom traps have finite lifetimes Saffman
(2016) and a faster protocol can lead to more CNOT cycles before the qubit
arrays should be reloaded. Because a practical quantum processor tackling real
problems uses a series of quantum gates including CNOT Nielsen and Chuang
(2000); Williams (2011); Ladd _et al._ (2010); Shor (1997); Bremner _et al._
(2002); Shende _et al._ (2004); Peruzzo _et al._ (2014); Debnath _et al._
(2016), certain computation tasks can be executed only with a fast enough CNOT
in each cycle of laser cooling and loading of the atom array. The studied speedup
of this minimal functional quantum circuit offers a way to confront the finite
lifetime of atom traps, which limits the capability of a
neutral-atom quantum processor. Moreover, concerning the ratio between the
coherence time and the CNOT gate time, the TSD method makes neutral-atom
systems competitive with superconducting and ion-trap systems as compared in
detail in Appendix F.
In conclusion, we have explored another effect of Rydberg blockade by dipole-
dipole interaction in neutral atoms, namely, the effect of transition slow-
down (TSD). We show that TSD can speed up a neutral-atom quantum computer by
proposing an exceedingly fast TSD-based CNOT realized by two pulses separated
by a short transient moment for changing the phase of the pulse. The exotic
TSD is a remarkable feature of dipole-dipole interactions, beyond the
well-known excitation annihilation effect, that can push Rydberg-atom quantum
science to another level.
## ACKNOWLEDGMENTS
The author thanks Yan Lu for useful discussions. This work is supported by the
National Natural Science Foundation of China under Grant No. 11805146 and
Natural Science Basic Research plan in Shaanxi Province of China under Grant
No. 2020JM-189.
## Appendix A Gate fidelity
Before analyzing the gate error, we would like to emphasize that TSD is
particularly useful for speeding up the minimal functional quantum circuit,
namely, the CNOT. The CNOT protocols by EA in previous methods involve
multiple switchings of the external control fields, which bring extra
complexity to the experimental implementation, elongate the gate duration, and
introduce extra errors from the fluctuation of the control fields and from
the intrinsic atomic Doppler broadening. Previous efforts to suppress these
errors include a $C_{Z}$ gate based on quantum interference Shi (2019);
Levine _et al._ (2019), but going from $C_{Z}$ to the more useful CNOT (or to
create Bell states in the ground-state manifold) still needs several single-
qubit operations Graham _et al._ (2019); Levine _et al._ (2019); Isenhower
_et al._ (2010); Zhang _et al._ (2010); Maller _et al._ (2015); Zeng _et
al._ (2017). For example, the first experimental neutral-atom CNOT needed five
or seven pulses Isenhower _et al._ (2010), and the recent CNOT in Levine _et
al._ (2019) used two (four) pulses in the control (target) qubit besides
several short pulses for phase compensation. The central procedure of the CNOT
in Levine _et al._ (2019) is via combining (i) a two-qubit $C_{Z}$-like gate
by two pulses of duration about $2.7\pi/\Omega$ with a phase change inserted
between the pulses that requires an extra transient time $\epsilon$, (ii) a
short pulse to compensate an intrinsic phase to recover a $C_{Z}$ from the
$C_{Z}$-like gate, and (iii) two single-qubit rotations of duration
$\pi/\Omega_{\text{hf}}$ ($\approx 2~{}\mu$s therein) in the target qubit,
where $\Omega$ is the Rydberg laser Rabi frequency and $\Omega_{\text{hf}}$ is
the hyperfine laser Rabi frequency between the two qubit states. Page 4 of
Ref. Levine _et al._ (2019) indicates that the $C_{Z}$-like gate needs
$0.4~{}\mu$s, and it had
$(\Omega_{\text{hf}},~{}\Omega)/(2\pi)=(0.25,~{}3.5)~{}$MHz (see pages 1 and 2
of Levine _et al._ (2019)), thus
$\epsilon=0.4~{}\mu$s$-2.732\pi/\Omega\approx 9.7$ ns; we assume such a fast
phase change time here. Figure 3(d) of Ref. Levine _et al._ (2019) presents a
CNOT sequence with durations equal to those of two $X(\pi/2)$, one $X(\pi)$,
and the $C_{Z}$-like, i.e.,
$2\times\pi/(2\Omega_{\text{hf}})+\pi/\Omega_{\text{hf}}+2.7\pi/\Omega$, and
thus the CNOT sequence needs about $4.4~{}\mu$s therein (the actual gate
durations should be larger when accounting for gaps between pulses therein).
In contrast, the TSD-based CNOT needs a duration of about
$\sqrt{6}\pi/\Omega_{\text{t}}$ which is only $0.27~{}(0.35)~{}\mu$s with
$\Omega_{\text{t}}/(2\pi)=4.6~{}(3.5)$ MHz from Ref. Graham _et al._ (2019)
(Ref. Levine _et al._ (2019)).
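The timing estimates quoted above follow from simple arithmetic; the minimal sketch below (Python; all inputs are the parameter values quoted from the cited references, in units of $\mu$s and rad/$\mu$s) reproduces them:

```python
import numpy as np

MHz = 2 * np.pi   # convert f in MHz to angular frequency in rad/us (Omega = 2*pi*f)

# Parameters quoted from Levine et al. (2019): Omega_hf/2pi = 0.25 MHz, Omega/2pi = 3.5 MHz
O_hf, O = 0.25 * MHz, 3.5 * MHz

eps = 0.4 - 2.732 * np.pi / O                     # transient time for the phase change
cz_like = 2.7 * np.pi / O                         # C_Z-like gate duration
cnot_levine = 2 * (np.pi / (2 * O_hf)) + np.pi / O_hf + cz_like  # two X(pi/2) + X(pi) + C_Z-like

# TSD-based CNOT: duration sqrt(6)*pi/Omega_t
tsd = lambda Ot: np.sqrt(6) * np.pi / Ot

print(f"epsilon ~ {eps * 1e3:.1f} ns")                      # about 9.7 ns
print(f"Levine-style CNOT sequence ~ {cnot_levine:.1f} us") # about 4.4 us
print(f"TSD CNOT: {tsd(4.6 * MHz):.2f} us at Omega_t/2pi = 4.6 MHz, "
      f"{tsd(3.5 * MHz):.2f} us at 3.5 MHz")                # about 0.27 and 0.35 us
```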
One may imagine that if the single-qubit rotations in Refs. Graham _et al._
(2019); Levine _et al._ (2019) are implemented by the transition chain
$|0\rangle\leftrightarrow|r\rangle\leftrightarrow|1\rangle$, then their gate
durations can also be small. But this assumption is problematic.
This is because the single-qubit rotations necessary to transform $C_{Z}$ to
CNOT are two $X(\pi/2)$ gates that transfer $\\{|0\rangle,~{}|1\rangle\\}$ to
$\\{|0\rangle-i|1\rangle,~{}-i|0\rangle+|1\rangle\\}/\sqrt{2}$ (see Fig. 5(a)
of Graham _et al._ (2019) or Fig. 3(d) of Levine _et al._ (2019)). But the
pumping by
$\hat{\mathbb{H}}(\Omega_{\text{t}})=\Omega_{\text{t}}(|0\rangle+|1\rangle)\langle
r|/2+$H.c. cannot achieve this since one can easily prove that starting from
$|\varphi\rangle=|0\rangle$, the populations in
$\\{|0\rangle,~{}|r\rangle,~{}|1\rangle\\}$ will be
$\frac{1}{2}\\{2\cos^{4}\frac{\theta}{2},\sin^{2}\theta,~{}2\sin^{4}\frac{\theta}{2}\\}$,
where $\theta=\frac{t\Omega_{\text{t}}}{\sqrt{2}}$. This means that there is
no way to use the resonant transition chain
$|0\rangle\leftrightarrow|r\rangle\leftrightarrow|1\rangle$ for the $X(\pi/2)$
rotation. On the other hand, one may also imagine that a succession of a $\pi$
pulse on $|1\rangle\rightarrow|r\rangle$, a $\pi/2$ pulse on
$|r\rangle\rightarrow|0\rangle$, and a $\pi$ pulse on
$|r\rangle\rightarrow|1\rangle$ can, e.g., realize an $X(\pi/2)$ rotation.
However, the extra time to shelve an atom in Rydberg state leads to extra
Doppler dephasing, and the frequent turning on and off of Rydberg lasers can
lead to extra atom loss Maller _et al._ (2015).
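The claimed populations can be checked directly; the minimal sketch below (Python/NumPy; $\Omega_{\text{t}}=1$ is an illustrative unit) evolves $|0\rangle$ under $\hat{\mathbb{H}}(\Omega_{\text{t}})$ and compares with $\frac{1}{2}\{2\cos^{4}\frac{\theta}{2},\sin^{2}\theta,~{}2\sin^{4}\frac{\theta}{2}\}$, $\theta=t\Omega_{\text{t}}/\sqrt{2}$:

```python
import numpy as np

Ot = 1.0
# H = Ot*(|0> + |1>)<r|/2 + h.c. in the basis {|0>, |r>, |1>}
H = np.zeros((3, 3))
H[1, 0] = H[1, 2] = Ot / 2
H = H + H.T

w, v = np.linalg.eigh(H)
psi0 = np.array([1.0, 0.0, 0.0])            # start in |0>

for t in [0.3, 1.0, 2.5, 4.0]:
    psi = (v * np.exp(-1j * w * t)) @ (v.conj().T @ psi0)
    theta = t * Ot / np.sqrt(2)
    expect = np.array([np.cos(theta / 2)**4,        # population of |0>
                       np.sin(theta)**2 / 2,        # population of |r>
                       np.sin(theta / 2)**4])       # population of |1>
    assert np.allclose(np.abs(psi)**2, expect, atol=1e-10)

# Note: the |r> population vanishes only at theta = n*pi, where the qubit
# populations are 0 or 1, so the equal superposition needed for X(pi/2)
# is never reached, consistent with the argument above.
```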
Below, we analyze gate imperfections due to the prevailing intrinsic Rydberg-
state decay and Doppler broadening. These are the dominant intrinsic errors in
gate operations Graham _et al._ (2019), while technical issues such as laser
noise are in principle not fundamental. From here to Appendix C, we take, as an
example, 87Rb qubits and consider the intermediate level $6P_{3/2}$ for
Rydberg pumping, where the two detunings at $6P_{3/2}$ should be different for
the two transitions $|0\rangle\xleftrightarrow{\Omega_{\text{t}}}|r\rangle$
and $|1\rangle\xleftrightarrow{\Omega_{\text{t}}}|r\rangle$. For an
$s$-orbital rubidium Rydberg state with principal quantum number around $70$,
the lifetime of $|r\rangle$ is about $\tau=400~{}(150)~{}\mu$s at a
temperature of $4~{}(300)$ K by the estimate in Beterov _et al._ (2009). When
the lower and upper fields counterpropagate along, e.g., $\mathbf{z}$, the
wavevector is $k=2\pi(1/420.3-1/1012.7)~{}\text{nm}^{-1}$ Shi (2020b). We assume that the
two qubits are initially located at the centers of the traps at $(0,0,x_{0})$
and $(0,0,0)$ respectively. The traps are usually turned off during Rydberg
pumping, and the free flight of the qubits leads to time-dependent Rabi
frequencies $\Omega_{\text{c}}e^{ik(x_{0}+tv_{c})}$ for the control qubit, and
$(\Omega_{\text{t}}e^{itkv_{t}},\Omega_{\text{t}}e^{itkv_{t}})$ for the two
transitions $|0(1)\rangle\leftrightarrow|r\rangle$ in the target qubit, where
$(v_{c},~{}v_{t})$ are the projection of velocity along $\mathbf{z}$ for the
control and target qubits, respectively. With a finite atomic temperature
$T_{\text{a}}$, there is a finite distribution
$\mathscr{D}(v_{c})\mathscr{D}(v_{t})$ for the speeds $(v_{c},v_{t})$, where
$\mathscr{D}(v)$ is a Gaussian Graham _et al._ (2019); Shi (2020b).
### A.1 Rydberg-state decay
The gate duration, when neglecting the transient time ($<10$ ns as shown above)
for the phase change between the pulses, is
$t_{\text{g}}=\frac{2\pi}{\Omega_{\text{c}}}$; the main decay error arises
while the atoms are in the Rydberg state Saffman and Walker (2005), supposing the
intermediate state is largely detuned. By using the estimate in Zhang _et
al._ (2012), the decay error of the TSD-based CNOT gate can be approximated as
$\displaystyle E_{\text{decay}}=\frac{1}{4\tau}\int dt\Big[\sum_{|\psi(0)\rangle=|00\rangle,|01\rangle}|\langle 0r|\psi(t)\rangle|^{2}+\sum_{|\psi(0)\rangle=|10\rangle,|11\rangle}\big(|\langle r0|\psi(t)\rangle|^{2}+|\langle r1|\psi(t)\rangle|^{2}+|\langle 1r|\psi(t)\rangle|^{2}\big)\Big],$ (16)
which is $0.39t_{\text{g}}/\tau$ by numerical simulation. Considering a set of
experimentally feasible Graham _et al._ (2019) values of Rydberg Rabi
frequencies, $\Omega_{\text{c}}=\Omega_{\text{t}}/\sqrt{1.5}=2\pi\times 3.6$
MHz, we have $E_{\text{decay}}=2.7~{}(7.2)\times 10^{-4}$ for qubits in an
environment temperature of $4~{}(300)$ K. Alternatively, a more detailed
numerical simulation by using the optical Bloch equation in the Lindblad form
with correct branching ratios Shi (2019) can predict a slightly lower
$E_{\text{decay}}$ because some population can decay to qubit states that will
again contribute to the gate operation.
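The factor $0.39$ in $E_{\text{decay}}$ can be reproduced with a short simulation. The sketch below (Python/NumPy; $\Omega_{\text{c}}=1$ units, exact spectral propagation per pulse, and neglect of the transient time are illustrative simplifications) integrates the Rydberg populations of Eq. (16) over the two-pulse sequence:

```python
import numpy as np

Oc = 1.0
Ot = np.sqrt(1.5) * Oc                 # i.e. |Omega_t|/Omega_c = sqrt(6)/2
t_pi = np.pi / Oc
t_g = 2 * t_pi
ts = np.linspace(0.0, t_pi, 2001)
dt = ts[1] - ts[0]

def states(H, psi0):
    """|psi(t)> on the grid ts via spectral decomposition of the Hermitian H."""
    w, v = np.linalg.eigh(H)
    c = v.conj().T @ psi0
    return [(v * np.exp(-1j * w * t)) @ c for t in ts]

def Hc0(Ot_):                          # basis {|0r>, |00>, |01>}
    H = np.zeros((3, 3)); H[0, 1] = H[0, 2] = Ot_ / 2
    return H + H.T

def Hc1(Ot_):                          # basis {|1r>, |r1>, |r0>, |11>, |10>}, Eq. (8)
    H = np.zeros((5, 5)); H[0, 3] = H[0, 4] = Ot_ / 2; H[1, 3] = H[2, 4] = Oc / 2
    return H + H.T

def trap(y):                           # trapezoidal rule on the grid ts
    return dt * (y.sum() - 0.5 * (y[0] + y[-1]))

def ryd_time(H1, H2, psi0, idx):
    """Time-integrated Rydberg population (basis indices idx) over the two pulses."""
    s1 = states(H1, psi0)
    s2 = states(H2, s1[-1])
    pop = lambda ss: np.array([sum(abs(s[i])**2 for i in idx) for s in ss])
    return trap(pop(s1)) + trap(pop(s2))

total = 0.0
for j in (1, 2):                       # inputs |00>, |01>; Rydberg state |0r>
    total += ryd_time(Hc0(Ot), Hc0(-Ot), np.eye(3)[j], [0])
for j in (3, 4):                       # inputs |11>, |10>; Rydberg states |1r>,|r1>,|r0>
    total += ryd_time(Hc1(Ot), Hc1(-Ot), np.eye(5)[j], [0, 1, 2])

factor = total / (4 * t_g)             # E_decay = factor * t_g / tau, cf. Eq. (16)
print(round(factor, 3))                # ~0.39
```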
### A.2 Doppler dephasing
For the TSD-based CNOT to be resilient to Doppler dephasing, the two fields
for the lower transitions $|0\rangle\rightarrow|p\rangle$ and
$|1\rangle\rightarrow|p\rangle$ shall copropagate along $\mathbf{z}$, and
those for their upper transitions $|p\rangle\rightarrow|r\rangle$ shall
copropagate along $-\mathbf{z}$, so that the wavevectors for
$|0(1)\rangle\rightarrow|r\rangle$ have (approximately) the same value $k$.
Here $|p\rangle$ is symbolic for the intermediate $6P_{3/2}$ state and one
shall bear in mind that the detunings at $|p\rangle$ for the two transition
chains must have a large difference; alternatively one can use different fine
states in the $6p$ manifold for the two transition chains (the numerical
results in Tables 2 and 3 stay similar). Then, the transition
$|0\rangle\xrightarrow{\Omega_{\text{t}}}|r\rangle\xrightarrow{\Omega_{\text{t}}}|1\rangle$
becomes
$\displaystyle|0\rangle\xrightarrow{\Omega_{\text{t}}e^{itkv_{t}}}|r\rangle\xrightarrow{\Omega_{\text{t}}e^{-itkv_{t}}}|1\rangle$
(17)
when accounting for the atom drift. The above transition mainly transfers the
population between the two hyperfine states, which means that if negligible
population stays at $|r\rangle$, the two phases $tkv_{t}$ and $-tkv_{t}$ cancel
at any moment, leading to negligible phase noise because
$tkv_{t}-tkv_{t}=0$ for the population transfer from $|0\rangle$ to
$|1\rangle$. This cancellation is not perfect since some population always
resides at $|r\rangle$ during the process. Nonetheless, the partial phase
cancellation suppresses the Doppler dephasing compared to the usual case.
For the control qubit, the transition
$|1\rangle\xrightarrow{\Omega_{\text{c}}e^{itkv_{c}}}|r\rangle$ still has the
usual Doppler dephasing. However, the pumping
$|1\rangle\xrightarrow{\Omega_{\text{c}}e^{itkv_{c}}}|r\rangle$ in the control
is immersed in the TSD and does not put much population in the Rydberg state.
Numerical simulation shows $\int dt[|\langle r0|\psi(t)\rangle|^{2}+|\langle
r1|\psi(t)\rangle|^{2}]=0.31t_{\text{g}}$ for either
$|\psi(0)\rangle=|10\rangle$ or $|11\rangle$ in each gate sequence.
To show the robustness of the TSD-based CNOT against the Doppler dephasing, we
numerically simulate the state evolution by using
$\hat{\mathcal{H}}_{c0}(\Omega_{\text{t}}e^{itkv_{t}})=\Omega_{\text{t}}(e^{-itkv_{t}}|00\rangle+e^{-itkv_{t}}|01\rangle)\langle
0r|/2+$H.c. for the input states $|00\rangle$ and $|01\rangle$, and
$\displaystyle\hat{\mathcal{H}}_{c1}(\Omega_{\text{c}}e^{itkv_{c}},\Omega_{\text{t}}e^{itkv_{t}})$
$\displaystyle=$
$\displaystyle\frac{1}{2}\left(\begin{array}{cccccc}2V&\Omega_{\text{c}}e^{itkv_{c}}&\Omega_{\text{t}}e^{itkv_{t}}&\Omega_{\text{t}}e^{itkv_{t}}&0&0\\ \Omega_{\text{c}}e^{-itkv_{c}}&0&0&0&\Omega_{\text{t}}e^{itkv_{t}}&\Omega_{\text{t}}e^{itkv_{t}}\\ \Omega_{\text{t}}e^{-itkv_{t}}&0&0&0&\Omega_{\text{c}}e^{itkv_{c}}&0\\ \Omega_{\text{t}}e^{-itkv_{t}}&0&0&0&0&\Omega_{\text{c}}e^{itkv_{c}}\\ 0&\Omega_{\text{t}}e^{-itkv_{t}}&\Omega_{\text{c}}e^{-itkv_{c}}&0&0&0\\ 0&\Omega_{\text{t}}e^{-itkv_{t}}&0&\Omega_{\text{c}}e^{-itkv_{c}}&0&0\end{array}\right),$ (24)
for the input states $|10\rangle$ and $|11\rangle$, where Eq. (24) is written
in the basis
$\\{|rr\rangle,|1r\rangle,|r1\rangle,|r0\rangle,|11\rangle,|10\rangle\\}$
and $V$ denotes the interaction of the state $|rr\rangle$. Because of the
Doppler dephasing, the gate map in the basis
$\\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\\}$ changes from the ideal
form
$\displaystyle U$ $\displaystyle=$
$\displaystyle\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{array}\right),$ (29)
to
$\displaystyle\mathscr{U}$ $\displaystyle=$
$\displaystyle\left(\begin{array}{cccc}a&b&0&0\\ c&d&0&0\\ 0&0&e&f\\ 0&0&g&h\end{array}\right),$ (34)
where $a,b,c$, and $d$ can be calculated by sequentially using
$\hat{\mathcal{H}}_{c0}(\Omega_{\text{t}})$ and
$\hat{\mathcal{H}}_{c0}(-\Omega_{\text{t}})$ for the input states $|00\rangle$
and $|01\rangle$, while $e,f,g$, and $h$ can be calculated using,
sequentially, $\hat{\mathcal{H}}_{c1}(\Omega_{\text{c}},\Omega_{\text{t}})$
and $\hat{\mathcal{H}}_{c1}(\Omega_{\text{c}},-\Omega_{\text{t}})$, for the
input states $|10\rangle$ and $|11\rangle$. To study the robustness to Doppler
dephasing, we would like to see errors mainly from the Doppler dephasing if
the blockade error is negligible. So we adopt a large blockade interaction
$V/(2\pi)=500$ MHz as in Ref. Saffman _et al._ (2020). We choose
$\Omega_{\text{c}}=\Omega_{\text{t}}/\sqrt{1.5}=2\pi\times 3.6$ MHz, define
the rotation error by Pedersen _et al._ (2007)
$\displaystyle E_{\text{ro}}$ $\displaystyle=$ $\displaystyle
1-\frac{1}{20}\left[|\text{Tr}(U^{\dagger}\mathscr{U})|^{2}+\text{Tr}(U^{\dagger}\mathscr{U}\mathscr{U}^{\dagger}U)\right],$
(35)
and evaluate the ensemble average with
$\displaystyle\overline{E_{\text{ro}}}\approx\frac{\sum_{v_{c}}\sum_{v_{t}}E_{\text{ro}}(v_{c},~{}v_{t})\mathscr{D}(v_{c})\mathscr{D}(v_{t})}{\sum_{v_{c}}\sum_{v_{t}}\mathscr{D}(v_{c})\mathscr{D}(v_{t})},$
(36)
where the sum is over about $10^{4}$ sets of speeds $(v_{c},~{}v_{t})$, with
$v_{c(t)}$ taking $101$ values equally spaced from $-0.5$ to $0.5$ m/s,
because the atomic speed has little chance to exceed $0.5$ m/s at the
temperatures $T_{a}\leq 50~{}\mu$K considered in this work. The approximation
by Eq. (36) has little difference from a rigorous integration. More details
can be found in Shi (2020b) for the method of numerical simulation.
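As a sketch of how Eqs. (35) and (36) can be evaluated, the following illustrative Python script ignores $|rr\rangle$ (i.e., assumes perfect blockade), removes the Doppler phases by a rotating frame on the Rydberg states so that each pulse is a single matrix exponential with Doppler detunings, and uses a coarser $41$-point velocity grid than the $101$-point grid of the text. It is a simplified stand-in for the full simulation, not the authors' code, and is meant only to show the structure of the ensemble average:

```python
import numpy as np

# Parameters quoted in the text (angular frequencies in rad/us)
Oc = 2 * np.pi * 3.6
Ot = np.sqrt(1.5) * Oc
t_pi = np.pi / Oc
k = 2 * np.pi * (1 / 420.3e-9 - 1 / 1012.7e-9)   # effective wavevector, rad/m
kB, mRb = 1.381e-23, 87 * 1.66e-27               # J/K, kg (87Rb)

def U(H, t):
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * w * t)) @ v.conj().T

def Hc0(sign, d_t):
    # basis {|0r>,|00>,|01>}; rotating frame turns the Doppler phase into detuning d_t
    H = np.zeros((3, 3)); H[0, 1] = H[0, 2] = sign * Ot / 2
    H = H + H.T; H[0, 0] = d_t
    return H

def Hc1(sign, d_c, d_t):
    # basis {|1r>,|r1>,|r0>,|11>,|10>}; detunings d_t on |1r>, d_c on |r1>,|r0>
    H = np.zeros((5, 5))
    H[0, 3] = H[0, 4] = sign * Ot / 2
    H[1, 3] = H[2, 4] = Oc / 2
    H = H + H.T
    H[0, 0] = d_t; H[1, 1] = H[2, 2] = d_c
    return H

Uideal = np.eye(4, dtype=complex)[[0, 1, 3, 2]]   # Eq. (29): CNOT in {00,01,10,11}

def E_ro(vc, vt):
    d_c, d_t = k * vc * 1e-6, k * vt * 1e-6       # detunings in rad/us
    M0 = U(Hc0(-1, d_t), t_pi) @ U(Hc0(+1, d_t), t_pi)
    M1 = U(Hc1(-1, d_c, d_t), t_pi) @ U(Hc1(+1, d_c, d_t), t_pi)
    S = np.zeros((4, 4), dtype=complex)
    S[:2, :2] = M0[1:, 1:]                        # {|00>,|01>} block of the gate map
    S[2:, 2:] = M1[np.ix_([4, 3], [4, 3])]        # {|10>,|11>} block
    t1 = abs(np.trace(Uideal.conj().T @ S))**2    # Eq. (35)
    t2 = np.trace(S @ S.conj().T).real            # = Tr(U^dag S S^dag U), U unitary
    return 1 - (t1 + t2) / 20

def Ebar(T):                                      # Eq. (36): Gaussian-weighted grid
    vs = np.linspace(-0.5, 0.5, 41)
    wts = np.exp(-mRb * vs**2 / (2 * kB * T))
    num = den = 0.0
    for wc, vc in zip(wts, vs):
        for wt_, vt in zip(wts, vs):
            num += wc * wt_ * E_ro(vc, vt)
            den += wc * wt_
    return num / den

E5, E10 = Ebar(5e-6), Ebar(10e-6)
print(E5, E10)   # small errors growing with temperature (cf. Table 2)
```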
Case 1.–By using $\hat{\mathcal{H}}_{c0}(\Omega_{\text{t}}e^{itkv_{t}})$ (for
$\\{|00\rangle,|01\rangle\\}$) and
$\hat{\mathcal{H}}_{c1}(\Omega_{\text{c}}e^{itkv_{c}},\Omega_{\text{t}}e^{itkv_{t}})$
(for $\\{|10\rangle,|11\rangle\\}$) for the first pulse, and
$\hat{\mathcal{H}}_{c0}(\Omega_{\text{t}}e^{itkv_{t}})$ and
$\hat{\mathcal{H}}_{c1}(\Omega_{\text{c}}e^{itkv_{c}},-\Omega_{\text{t}}e^{itkv_{t}})$
for the second pulse, the ensemble-averaged rotation errors are given in the
second row of Table 2, for effective atomic temperatures starting from
$T_{a}=5~{}\mu$K, which is achievable in experiments Picken _et al._ (2019).
From these results, one can see that the CNOT fidelity
$1-E_{\text{decay}}-\overline{E_{\text{ro}}}$ can reach $99.9\%$ with qubits
cooled to around $T_{a}=10~{}(5)~{}\mu$K in a $4~{}(300)~{}$K environment.
Case 2.–To further suppress the Doppler dephasing, we consider switching the
propagation directions of the fields for the target qubit between the two pulses.
In other words, the Hamiltonians (for $\\{|00\rangle,|01\rangle\\}$) are
$\hat{\mathcal{H}}_{c0}(\Omega_{\text{t}}e^{itkv_{t}})$ and
$\hat{\mathcal{H}}_{c0}(-\Omega_{\text{t}}e^{-itkv_{t}})$ for the first and
second pulses, respectively. The ensemble-averaged rotation errors are shown
in the third row of Table 2, from which one can see that the
CNOT fidelity $1-E_{\text{decay}}-\overline{E_{\text{ro}}}$ improves slightly
when this latter configuration is employed. The mechanism for the extra
suppression of Doppler dephasing in this latter case is that the population
going, e.g., from $|0\rangle$ to $|1\rangle$ acquires some phase error from the
actions $\Omega_{\text{t}}e^{-it^{\prime}kv_{t}}|1\rangle\langle r|$ and
$\Omega_{\text{t}}e^{itkv_{t}}|r\rangle\langle 0|$; subsequently, the second
pulse pumps $|1\rangle$ to $|0\rangle$ and induces some phase error from the
actions $-\Omega_{\text{t}}e^{-it^{\prime}kv_{t}}|r\rangle\langle 1|$ and
$-\Omega_{\text{t}}e^{itkv_{t}}|0\rangle\langle r|$, so that the phase terms
are continuous between the two pulses, which favors the desired transition.
For case 1, the pumping from $|0\rangle$ to
$|r\rangle$ has a phase term $tkv_{t}$ at the end of pulse 1, which becomes
$-tkv_{t}$ when the population begins to go back at the start of pulse 2,
i.e., there is a phase jump. More details about the influence on gate fidelity
from atom-drift-induced phase change in Rabi frequencies can be found in Shi
(2020b). Since this latter configuration is somewhat involved and the increase
of fidelity is marginal, it will mainly be of interest once technical issues are
resolved so that fidelity levels beyond $99\%$ become technically achievable.
| | $T_{a}~{}(\mu$K) | 5 | 10 | 15 | 20 | 50
---|---|---|---|---|---|---
Case 1 | $10^{4}\times\overline{E_{\text{ro}}}$ | $4.31$ | $8.09$ | $11.9$ | $15.6$ | $38.2$
Case 2 | $10^{4}\times\overline{E_{\text{ro}}}$ | $3.11$ | $5.69$ | $8.26$ | $10.8$ | $26.2$
Table 2: Rotation error (scaled up by $10^{4}$; excluding Rydberg-state decay) of the TSD-based CNOT with $\Omega_{c}/2\pi=3.5$ MHz for two cases; in case 2 the propagation directions of the fields on the target qubit are switched between the first and second pulses, while no such switching occurs in case 1. Here Eq. (35) was used, which accounts for the phase errors and the truth-table errors.
| | $T_{a}~{}(\mu$K) | 5 | 10 | 15 | 20 | 50
---|---|---|---|---|---|---
Case 1 | $10^{4}\times\overline{E_{\text{Bell}}}$ | $2.86$ | $5.27$ | $7.67$ | $10.1$ | $24.4$
Case 2 | $10^{4}\times\overline{E_{\text{Bell}}}$ | $2.57$ | $4.67$ | $6.78$ | $8.88$ | $21.4$
Table 3: Fidelity error (scaled up by $10^{4}$; excluding Rydberg-state decay) of the Bell state by the TSD-based CNOT, with the same pulse sequence as used for Table 2. $\overline{(\cdots)}$ denotes the ensemble average as in Eq. (36).
The results in Table 2 account for both the population errors and the phase
errors. In experiments, the phase errors are usually less important, especially
in the characterization of the Bell state Levine _et al._ (2019); Graham _et
al._ (2019). Thus we continue to study the strength of the TSD-based CNOT for
achieving high-fidelity Bell states.
### A.3 High-fidelity Bell states
A TSD-based CNOT can map the initial product state
$|\psi(0)\rangle=(|00\rangle+|10\rangle)/\sqrt{2}$ to the Bell state
$|\Phi\rangle=(|00\rangle+|11\rangle)/\sqrt{2}$ without resorting to extra
single-qubit gates (except the state initialization). The Rydberg
superposition time is equal to that in Sec. A.1. We follow Ref. Levine _et
al._ (2019) by evaluating the infidelity as
$E_{\text{Bell}}=1-\langle\Phi|\rho(t)|\Phi\rangle$, where
$\rho(t)=|\psi\rangle\langle\psi|$ with $|\psi\rangle$ evolved by using the
pulse sequence as in Sec. A.2 for the input state $|\psi(0)\rangle$. The
numerical results are given in Table 3, which shows that at room temperature,
the Bell state can be created with a fidelity $99.88\%$ if qubits are cooled
to the level of $T_{a}=10~{}\mu$K, which is affordable according to the atom
cooling achieved in previous experiments Picken _et al._ (2019); Zeng _et
al._ (2017).
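In the ideal noise-free, perfect-blockade limit, this Bell-state preparation can be checked on the combined eight-dimensional space. The minimal sketch below (Python/NumPy; $\Omega_{\text{c}}=1$ units are an illustrative choice, and $|rr\rangle$ is dropped to model perfect blockade) applies the two-pulse sequence to $(|00\rangle+|10\rangle)/\sqrt{2}$ and recovers $|\Phi\rangle$ with unit fidelity up to numerical precision:

```python
import numpy as np

Oc = 1.0
Ot = np.sqrt(1.5) * Oc
t_pi = np.pi / Oc

# Basis {|00>,|01>,|0r>,|10>,|11>,|1r>,|r0>,|r1>}; |rr> dropped (perfect blockade)
IDX = {s: i for i, s in enumerate(["00", "01", "0r", "10", "11", "1r", "r0", "r1"])}

def H(sign):
    h = np.zeros((8, 8))
    # target field: |x0> <-> |xr> and |x1> <-> |xr> for control state x in {0,1}
    for a, b in [("00", "0r"), ("01", "0r"), ("10", "1r"), ("11", "1r")]:
        h[IDX[a], IDX[b]] = sign * Ot / 2
    # control field: |1y> <-> |ry>
    for a, b in [("10", "r0"), ("11", "r1")]:
        h[IDX[a], IDX[b]] = Oc / 2
    return h + h.T

def U(h, t):
    w, v = np.linalg.eigh(h)
    return (v * np.exp(-1j * w * t)) @ v.conj().T

psi = np.zeros(8); psi[IDX["00"]] = psi[IDX["10"]] = 1 / np.sqrt(2)
psi = U(H(-1), t_pi) @ U(H(+1), t_pi) @ psi      # two pulses, sign of Ot flipped

bell = np.zeros(8); bell[IDX["00"]] = bell[IDX["11"]] = 1 / np.sqrt(2)
F = abs(bell @ psi)**2                           # fidelity with (|00> + |11>)/sqrt(2)
print(F)
```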
Finally, we would like to emphasize that it is not so crucial to have the
Doppler-resilient configuration (with copropagating fields) for the two
transition chains in the target qubit, as discussed around Eq. (17). Consider
the worst case with largest dephasing, i.e., if the two sets of fields
counterpropagate so that Eq. (17) becomes
$|0\rangle\xrightarrow{\Omega_{\text{t}}e^{itkv_{t}}}|r\rangle\xrightarrow{\Omega_{\text{t}}e^{itkv_{t}}}|1\rangle$,
then the motional dephasing error should be larger. For example, we
numerically found that the Bell-state errors in the second row (case 1) of
Table 3 become $(9.72\times 10^{-4},~{}1.90\times 10^{-3},~{}2.82\times
10^{-3},~{}3.74\times 10^{-3},~{}9.22\times 10^{-3})$ for
$T_{a}=(5,~{}10,~{}15,~{}20,~{}50)~{}\mu$K; similar results can be found for
the TSD-based CNOT. With $E_{\text{decay}}=7.2\times 10^{-4}$ for qubits in an
environment of $300$ K, the Bell-state fidelity would be $99.65\%$ at
$T_{a}=15~{}\mu$K. This still-high fidelity means that, for experimental
convenience, high fidelity is possible with any configuration of the
propagation directions of the two sets of fields, provided errors other than the
intrinsic Doppler dephasing and Rydberg-state decay can be avoided.
Figure 3: Energy spectrum for two 87Rb atoms lying along the quantization axis
when initialized in the two-atom pair state
$|ss\rangle\equiv|70S_{1/2},m_{J}=1/2;70S_{1/2},m_{J}=1/2\rangle$ Šibalić _et
al._ (2017). Here 1 and 2 label the two states that have largest overlap with
$|ss\rangle$ and nearest to the unperturbed energy (labeled as zero).
## Appendix B Interatomic distances
The dipole-dipole interaction $V$ is a function of interatomic distance
$\mathcal{L}$. For $V$ to be large enough so that the blockade error is
negligible, $\mathcal{L}$ should be small enough. In order to avoid
wavefunction overlap, the distance between the nuclei of the two Rydberg atoms
shall be larger than the Le Roy distance which can be calculated by using the
open source library of Ref. Šibalić _et al._ (2017). For the parameters
chosen as an example in this work, the Le Roy distance is $1.50~{}\mu$m for
two rubidium atoms in the state $70S_{1/2}$, and we consider longer
interatomic distances in the range $\mathcal{L}>2.6~{}\mu$m. The dipole-dipole
interaction will couple the two-atom state
$|70S_{1/2},m_{J}=1/2;70S_{1/2},m_{J}=1/2\rangle$ to many other states. To
find the interaction, we consider states $|nL;n^{\prime}L^{\prime}\rangle$
coupled from the initial states ($n$ and $L$ denote the principal and angular
momentum quantum numbers, respectively), with $|n-70|,|n^{\prime}-70|\leq 5$,
and $|L|,|L^{\prime}|\leq 4$ (because $ss$ states couple to $pp$ states, which
couple to $d$-orbital and to $f$-orbital states, and so on), and consider
energy gaps between pair states within $2\pi\times 25$ GHz. After
diagonalization, the resulting energy map is shown in Fig. 3, where the color
denotes the population of the state
$|70S_{1/2},m_{J}=1/2;70S_{1/2},m_{J}=1/2\rangle$ in each diagonalized state.
Equivalently, the color indicates the probability of populating a given
eigenstate when one atom is already in $|70S_{1/2},m_{J}=1/2\rangle$ and the
other atom is Rydberg pumped via, e.g., the $5P_{3/2}$ intermediate state.
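As a cross-check of the Le Roy distance quoted above, one can estimate it from the hydrogenic expectation value $\langle r^{2}\rangle$ and the rubidium quantum defect. A minimal sketch; the quantum-defect value $\delta_{0}\approx 3.13$ for Rb $nS$ states is an assumed input, not a number taken from this work:

```python
from math import sqrt

A0 = 5.29177210903e-11  # Bohr radius (m)

def r2_hydrogenic(n_eff, l):
    """Hydrogenic <r^2> in units of a0^2 for effective principal quantum
    number n_eff and orbital quantum number l."""
    return (n_eff**2 / 2.0) * (5 * n_eff**2 + 1 - 3 * l * (l + 1))

# Rb nS_1/2 quantum defect is about 3.13 (assumed value)
n_eff = 70 - 3.13
r_rms = sqrt(r2_hydrogenic(n_eff, 0)) * A0  # rms orbital radius (m)

# Le Roy radius: R_LR = 2 (sqrt(<r1^2>) + sqrt(<r2^2>)); both atoms in 70S
R_LR = 2 * (r_rms + r_rms)
print(f"rms radius: {r_rms * 1e9:.0f} nm, Le Roy distance: {R_LR * 1e6:.2f} um")
```

This simple estimate already reproduces the $1.50~\mu$m value obtained with the full library calculation.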
For the TSD to hold, we focus on the strong-blockade regime where the two-atom
Rydberg state shown in Fig. 3 is barely populated. In this case, the
eigenstate with the smallest eigenenergy matters; more precisely, the coupled
state with the smallest eigenenergy (marked in red) plays the dominant role.
In Fig. 3, one can find that for $\mathcal{L}\in[2.6,~{}4.6]~\mu$m, there
are mainly three eigenstates, one with a positive energy (labeled as state 1),
and the other two with negative energy. For those two that have negative
energy, the one we focus on is the state with more overlap with
$|ss\rangle\equiv|70S_{1/2},m_{J}=1/2;70S_{1/2},m_{J}=1/2\rangle$, whose
energy is sometimes lower and sometimes higher than the energy of the other
with less population in $|ss\rangle$. The two states we focus on are labeled
by 1 and 2 in Fig. 3. State 1 has the largest component in $|ss\rangle$: at
$\mathcal{L}=\\{2.6,~{}3.0,~{}3.6,~{}4.0,~{}4.6\\}\mu$m, the amplitude overlap
between state 1 and $|ss\rangle$ is
$\\{0.64,~{}0.74,~{}0.84,~{}0.9,~{}0.95\\}$. For state 2, its overlap with
$|ss\rangle$ is $\\{0.49,~{}0.43,~{}0.32\\}$ when
$\mathcal{L}=\\{2.6,~{}3.0,~{}3.6\\}\mu$m and is negligible when $\mathcal{L}$
is beyond $4.0~{}\mu$m. The eigenenergies of these two states are listed in
Table 4, from which one can see that around $\mathcal{L}=3.0~{}\mu$m the
interaction $V=2\pi\times 510$ MHz can be represented by the eigenenergy of
state 2, which is the largest among the cases shown in Table 4.
| | $\mathcal{L}~{}(\mu$m) | 2.6 | 3.0 | 3.6 | 4.0 | 4.6
---|---|---|---|---|---|---
State 1 | $V/(2\pi)$ (MHz) | 1600 | 780 | 340 | 180 | 94
State 2 | $V/(2\pi)$ (MHz) | $-280$ | $-510$ | $-590$ | $-650$ | $-670$

Table 4: Eigenenergies of the two eigenstates of the dipole-coupled two-atom
states that have the largest overlap with the state
$|70S_{1/2},m_{J}=1/2;70S_{1/2},m_{J}=1/2\rangle$ and that are nearest to
zero, labeled as states 1 and 2 in Fig. 3.

$V/(2\pi)$ (MHz) | $T~{}(\mu$K) | 5 | 10 | 15 | 20 | 50
---|---|---|---|---|---|---
50 | $10^{4}\times\overline{E_{\text{ro}}}$ | $56.5$ | $60.3$ | $64.1$ | $67.8$ | $90.3$
100 | $10^{4}\times\overline{E_{\text{ro}}}$ | $17.0$ | $20.8$ | $24.5$ | $28.3$ | $58.6$
200 | $10^{4}\times\overline{E_{\text{ro}}}$ | $7.09$ | $10.9$ | $14.6$ | $18.4$ | $41.0$
300 | $10^{4}\times\overline{E_{\text{ro}}}$ | $5.25$ | $9.03$ | $12.8$ | $16.6$ | $39.1$
400 | $10^{4}\times\overline{E_{\text{ro}}}$ | $4.61$ | $8.39$ | $12.2$ | $15.9$ | $38.5$
Table 5: Rotation error (scaled by $10^{4}$; excluding Rydberg-state decay)
of the TSD-based CNOT with $\Omega_{c}/(2\pi)=3.5$ MHz, for different values
of $V$.
To have $V=2\pi\times 500$ MHz, the above study shows that placing the two
qubits at a distance of about $\mathcal{L}=3.0~{}\mu$m is necessary. With
such a distance, the analysis in Ref. Graham _et al._ (2019) shows that the
crosstalk error is about $0.5\%$ if the waist ($1/e^{2}$ intensity radius) of
the laser beams is $w=3.0~{}\mu$m. To reduce the crosstalk, it is possible to
use super-Gaussian Rydberg beams, as detailed in Ref. Gillen-Christandl _et
al._ (2016). Another possibility is to use higher Rydberg states so that the
dipole-dipole interaction can be as large as $V=2\pi\times 500$ MHz at a
larger interatomic distance, and hence the laser-beam crosstalk can be
avoided.
If only commonly used Gaussian Rydberg beams are employed for a Rydberg state
of principal quantum number around $n=70$, then smaller values of $V$ can be
used at larger interatomic distances so as to avoid crosstalk. Table 5 shows
the rotation error of our gate for $V/(2\pi)\in[50,~{}400]$ MHz: with
$V=2\pi\times 100$ MHz, the rotation error is about $1.7\times 10^{-3}$ at
$T_{a}=5~{}\mu$K, and the interatomic distance would be around
$\mathcal{L}=4.6~\mu$m, where crosstalk can be safely ignored
($\Omega^{\prime}/\Omega\sim e^{-2\mathcal{L}^{2}/w^{2}}=0.009$) if the waists
of the lower and upper lasers are both $w=3.0~{}\mu$m. Taking the decay error
into account, our gate would then have a fidelity of $99.76\%$ with
$T_{a}=5~{}\mu$K at room temperature.
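The two numbers quoted in this paragraph can be reproduced directly. A small sketch, assuming (as the text does) that the Rydberg-decay error simply adds to the rotation error from Table 5:

```python
from math import exp

# Gaussian-beam crosstalk at the neighboring qubit: the Rabi-frequency
# ratio falls off as exp(-2 L^2 / w^2)
L = 4.6  # interatomic distance (um)
w = 3.0  # 1/e^2 intensity radius of the Rydberg beams (um)
crosstalk = exp(-2 * L**2 / w**2)
print(f"Omega'/Omega = {crosstalk:.3f}")  # ~0.009

# Gate fidelity at T_a = 5 uK and V = 2pi x 100 MHz, adding the rotation
# error from Table 5 to the Rydberg-decay error quoted earlier
E_ro = 17.0e-4
E_decay = 7.2e-4
print(f"fidelity = {1 - (E_ro + E_decay):.2%}")  # ~99.76%
```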
Figure 4: Fidelity error from the fluctuation in the Rabi frequencies for the
two transitions $|0\rangle\leftrightarrow|r\rangle$ and
$|1\rangle\leftrightarrow|r\rangle$; the fluctuation obeys a Gaussian
distribution of width $\sigma_{\Omega}$. The error is $0.00376$ for
$\sigma_{\Omega}=5\%$.
## Appendix C Amplitude fluctuation of laser fields
Our TSD-based CNOT requires the Rabi frequencies for the two transitions
$|0\rangle\leftrightarrow|r\rangle\leftrightarrow|1\rangle$ to be equal in the
target qubit. Here we investigate the impact on the gate fidelity if this
condition is not satisfied.
We assume that the Rabi frequency $\Omega$ obeys a Gaussian distribution
$\displaystyle\mathscr{G}(\Omega)$ $\displaystyle=$
$\displaystyle\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(\Omega-\Omega_{k})^{2}}{2\sigma^{2}}}$
(37)
around the desired value $\Omega_{k}$, where $k\in\\{$t1, t2$\\}$ denotes the
two channels $|0\rangle\leftrightarrow|r\rangle$ and
$|1\rangle\leftrightarrow|r\rangle$. Using the Gaussian-distributed Rabi
frequencies in the target qubit, one can evaluate the averaged rotation error as
$\displaystyle\overline{E_{\text{ro}}}\approx\frac{\sum_{\Omega_{\text{t1}}}\sum_{\Omega_{\text{t2}}}E_{\text{ro}}(\Omega_{\text{t1}},\Omega_{\text{t2}})\mathscr{G}(\Omega_{\text{t1}})\mathscr{G}(\Omega_{\text{t2}})}{\sum_{\Omega_{\text{t1}}}\sum_{\Omega_{\text{t2}}}\mathscr{G}(\Omega_{\text{t1}})\mathscr{G}(\Omega_{\text{t2}})},$
(38)
where $E_{\text{ro}}(\Omega_{\text{t1}},\Omega_{\text{t2}})$ is the gate
rotation error evaluated with $\Omega_{\text{t1}}$ and $\Omega_{\text{t2}}$
for the two transitions $|0\rangle\leftrightarrow|r\rangle$ and
$|1\rangle\leftrightarrow|r\rangle$. In Eq. (38), the integration is
approximated by a sum over 121 sets of
$(\Omega_{\text{t1}},\Omega_{\text{t2}})$, with each Rabi frequency taking
the values $(\Omega-\Omega_{k})/\Omega_{k}\in\\{0,~{}\pm 1,~{}\pm 2,~{}\pm
3,~{}\pm 4,~{}\pm 5\\}\sigma$. With $\sigma\in\\{0.01,~{}0.02,~{}0.03,~{}0.04,~{}0.05\\}$,
the result is shown in Fig. 4. One can see that even with a rather large
relative fluctuation of Gaussian width $5\%$, the fidelity remains larger than
$99.6\%$. The Doppler dephasing and the Rabi-frequency fluctuation are largely
independent, so the gate fidelity can be estimated by combining Table 2 and
Fig. 4 when the fluctuation of the Rabi frequencies is not suppressed.
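The discrete average of Eq. (38) is straightforward to implement. In this sketch the error surface $E_{\text{ro}}(\Omega_{\text{t1}},\Omega_{\text{t2}})$ is replaced by a hypothetical quadratic model, so only the averaging procedure itself is illustrated; the paper instead uses the numerically computed gate error:

```python
import numpy as np

def averaged_error(E_ro, Omega_k, sigma_rel):
    """Discrete approximation of Eq. (38): Gaussian-weighted average of the
    rotation error over fluctuations of the two target-qubit Rabi
    frequencies, using the same 11x11 grid of offsets 0, +/-1 sigma, ...,
    +/-5 sigma as in the text."""
    sigma = sigma_rel * Omega_k
    offsets = np.arange(-5, 6) * sigma
    weights = np.exp(-offsets**2 / (2 * sigma**2))  # Gaussian weights
    O1, O2 = np.meshgrid(Omega_k + offsets, Omega_k + offsets)
    W = np.outer(weights, weights)
    return np.sum(E_ro(O1, O2) * W) / np.sum(W)

# Hypothetical quadratic error surface (illustration only)
Omega0 = 1.0
E_model = lambda O1, O2: ((O1 - Omega0)**2 + (O2 - Omega0)**2) / Omega0**2

for s in (0.01, 0.03, 0.05):
    print(f"sigma = {s:.0%}: averaged error = {averaged_error(E_model, Omega0, s):.2e}")
```

For this quadratic model the averaged error scales as $\sigma^{2}$, as expected for small symmetric fluctuations around an error minimum.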
In Ref. Levine _et al._ (2018), the power fluctuation of the Rydberg lasers
was suppressed below $1\%$ for preparing ground-Rydberg entanglement, which
corresponds to relative noise below about $0.5\%$ in the Rydberg Rabi
frequencies (since $\Omega\propto\sqrt{P}$). More than one year later, the
same group reported preparation of high-fidelity ground-state entanglement in
Ref. Levine _et al._ (2019), a significant advance. We assume that the laser
noise in Ref. Levine _et al._ (2019) was suppressed at least as well as in
the earlier experiment of Ref. Levine _et al._ (2018), and thus consider
relative Gaussian widths of the fluctuation up to $5\%$.
## Appendix D Comparison with other fast Rydberg gates
It is useful to compare our CNOT with other fast entangling methods based on
Rydberg atoms. The comparison focuses on the physical mechanisms and their
application in quantum computing.
A popular method to generate entanglement by Rydberg interactions is to use
the EA mechanism Jaksch _et al._ (2000). The standard way to use it is a
three-pulse sequence for a $C_{Z}$ gate in the form of
$\displaystyle C_{Z}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{cccc}1&0&0&0\\\ 0&-1&0&0\\\ 0&0&-1&0\\\
0&0&0&-1\\\ \end{array}\right)$ (43)
in the basis $\\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\\}$, where the
blockade takes effect in the state $|11\rangle$. To use it for quantum
computing in the circuit model, the $C_{Z}$ gate in Eq. (43) requires single-
qubit gates to become a CNOT. With this method, a recent experiment Graham
_et al._ (2019) realized a CNOT with a duration of more than 100 $\mu$s (with
Rydberg Rabi frequencies of $2\pi\times 4.6$ MHz), limited by the slow
single-qubit gates. Physically, this method relies on a $\pi$ phase that
fails to accumulate because the Rabi cycle for the input state $|11\rangle$
is suppressed by the Rydberg blockade.
The other recent experiment in Ref. Levine _et al._ (2019) used detuned Rabi
cycles for entanglement combined with detunings found by optimal control. The
method is essentially identical to the interference method proposed in Ref.
Shi (2019), as can be easily verified by looking at the similar structure of
the gate matrix [with the same basis as in Eq. (43)]
$\displaystyle C_{\text{phase}}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{cccc}1&0&0&0\\\ 0&e^{i\alpha}&0&0\\\
0&0&e^{i\alpha}&0\\\ 0&0&0&e^{i\beta}\\\ \end{array}\right)$ (48)
in Refs. Levine _et al._ (2019); Shi (2019) (Ref. Shi (2019) proposed two
gates, of which the first is quoted here). Compared to the initial
interference method, Ref. Levine _et al._ (2019) used parameters found by
optimal control and a phase twist in the Rydberg pumping within the blockade
regime so as to realize $2\alpha-\beta=\pi$ (thus the gate in Ref. Levine _et
al._ (2019) can be called a $C_{Z}$-like gate). Because of the detuned Rabi
cycles used, there is no way to realize a gate with
$\\{\alpha,~{}\beta\\}/\pi=\\{N_{1},N_{2}\\}$, where $N_{j}$ is an integer
with $j=1,2$. To use it in quantum computing, several single-qubit gates must
be applied to transform the gate of Ref. Levine _et al._ (2019) into a CNOT,
so their final CNOT duration was more than $4~{}\mu$s with Rydberg Rabi
frequencies of $2\pi\times 3.5$ MHz. Without the detuned Rabi cycles, the
method of Ref. Levine _et al._ (2019) cannot work; its physical mechanism is
the accumulation of dynamical phases in detuned Rabi cycles.
To compare with the gates in Eqs. (43) and (48), which have a diagonal form,
the TSD-based CNOT can be rewritten as
$\displaystyle C_{\text{TSD}}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{cccc}1&0&0&0\\\ 0&1&0&0\\\ 0&0&-1&0\\\
0&0&0&1\\\ \end{array}\right),$ (53)
with the basis
$\\{|0\overline{0}\rangle,|0\overline{1}\rangle,|1\overline{0}\rangle,|1\overline{1}\rangle\\}$,
where $|\overline{1}(\overline{0})\rangle=(|0\rangle\pm|1\rangle)/\sqrt{2}$.
The CNOT duration would be less than $0.4~{}\mu$s with Rydberg Rabi
frequencies $\leq 2\pi\times 3.5$ MHz by the TSD method. The blockade
interaction is in $|1\overline{1}\rangle$ and the pumping on the target qubit
induces a transition from $|1\overline{1}\rangle$ to
$|\mathscr{S}_{+}\rangle\equiv(\Omega_{\text{c}}|r\overline{1}\rangle+\sqrt{2}\Omega_{\text{t}}|1r\rangle)/\overline{\Omega}$
with a Rabi frequency
$\overline{\Omega}=\sqrt{\Omega_{\text{c}}^{2}+2\Omega_{\text{t}}^{2}}$ while
further excitation to $|rr\rangle$ is blocked. Each of the two pulses in the
TSD-based CNOT sequence induces a complete $2\pi$ rotation
$|1\overline{1}\rangle\rightarrow-i|\mathscr{S}_{+}\rangle\rightarrow-|1\overline{1}\rangle$
which imparts a $\pi$ phase change to $|1\overline{1}\rangle$. The change of
the sign of $\Omega_{\text{t}}$ between the two pulses does not change this
picture, so a total phase of $2\pi$ is accumulated in
$|1\overline{1}\rangle$. Since $e^{2i\pi}=1$, $|1\overline{1}\rangle$ acquires
no phase term in practice. On the other hand, the input state
$|1\overline{0}\rangle$ experiences only the two $\pi$ pulses (i.e., one
$2\pi$ pulse) on the control qubit, which imparts a $\pi$ phase change to it.
For the input state $|0\overline{1}\rangle$, the pumping of the target qubit
undergoes a spin echo, so no phase appears for it. So, the physical
mechanism of the gate in Eq. (53) is fundamentally different from those in
Eqs. (43) and (48); in fact, the drastically different forms of the three gate
maps reveal this. In brief, Eq. (43) relies on a missed $\pi$ phase change in
an annihilated Rabi cycle, Eq. (48) relies on three phase changes in three
detuned Rabi cycles, and Eq. (53) relies on a $\pi$ phase change in a
resonant Rabi cycle. For quantum computing, the gate in Eq. (53) is
exactly a CNOT in the basis of
$\\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\\}$, and thus is more useful
compared to Eqs. (43) and (48).
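The statement that Eq. (53) is exactly a CNOT in the computational basis can be verified by undoing the basis change on the target qubit; a minimal numerical check:

```python
import numpy as np

# C_TSD is diagonal in the hybrid basis {|0 0b>, |0 1b>, |1 0b>, |1 1b>},
# where |0b> = (|0> - |1>)/sqrt(2) and |1b> = (|0> + |1>)/sqrt(2)
D = np.diag([1.0, 1.0, -1.0, 1.0])

# Columns of M are |0b> and |1b> written in the computational basis
M = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2)
T = np.kron(np.eye(2), M)  # the basis change acts on the target qubit only

# Rewrite the gate in the computational basis {|00>, |01>, |10>, |11>}
G = T @ D @ T.conj().T

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
print(np.allclose(G, CNOT))  # True
```

The control-qubit block for $|1\rangle$ becomes $M\,\mathrm{diag}(-1,1)\,M^{\dagger}=X$, which is exactly the target bit flip of a CNOT.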
Because a phase twist is used in realizing both Eq. (48) and Eq. (53), one may
guess that they are similar. But the following facts show their distinct
physics: (i) detuned Rydberg pumping is used in Eq. (48), but resonant Rydberg
pumping is used in Eq. (53); (ii) three input states acquire phase terms in
Eq. (48), but only one input state acquires a phase term in Eq. (53); (iii)
the population in the Rydberg state can reach 1 for the fourth input state
$|1\overline{1}\rangle$ in Eq. (53), while there is no way to realize such an
effect in Eq. (48). This last property means that the mechanism for
realizing Eq. (53) can be extended to a high-fidelity multi-qubit gate:
one can use TSD to excite the input state $|1\overline{1}\rangle$ to
$|\mathscr{S}_{+}\rangle$ which can block the Rydberg pumping in a nearby atom
(of course care shall be taken for the design of such a gate); notice that the
three-qubit gate in Ref. Levine _et al._ (2019) requires exciting the two
edge atoms to block the Rydberg pumping in the middle atom which results in
error due to the residual blockade between the two edge atoms. The method in
Eq. (48) uses detuned Rabi cycles and cannot create a full Rydberg excitation
for the blocked input state $|11\rangle$, so it cannot be extended to a
high-fidelity multi-qubit gate. This again shows that the underlying physics
of Eq. (53) differs from that of Eq. (48).
Note that the basis transform used to write the TSD-based CNOT in Eq. (53)
serves only to clarify the physics; it is not meant for practical use, since
in a large-scale quantum computer the encoding of quantum information should
be based on a common qubit basis in all registers (otherwise loading and
retrieving information becomes problematic). For example, to
initialize the atomic arrays, the quantization axis is fixed by applying,
e.g., an external magnetic field, and the qubit states $|0\rangle$ and
$|1\rangle$ are chosen from two hyperfine levels with different energy. So,
the states
$|\overline{1}(\overline{0})\rangle=(|0\rangle\pm|1\rangle)/\sqrt{2}$ are no
longer eigenstates in the presence of the applied external fields, and new
state-detection schemes would have to be designed. Moreover, in a large-scale
quantum computer the role of control or target is not fixed for any qubit: a
computational task is divided into a series of unitary operations, and in
different operations the same qubit sometimes serves as a control and
sometimes as a target. Hence, if one used a hybrid encoding with
$|0(1)\rangle$ for the control and $|\overline{1}(\overline{0})\rangle$ for
the target, an exceedingly large number of transformations between
$|0(1)\rangle$ and $|\overline{1}(\overline{0})\rangle$ would be required in
the quantum circuit.
So, although such a hybrid encoding may be of interest for a
proof-of-principle study in theory or experiment, it is impractical for a
realistic large-scale quantum computer.
## Appendix E AC Stark shifts
A detailed study of the ac Stark shifts in a two-photon Rydberg excitation is
given in Ref. Maller _et al._ (2015); its Appendix B presents a calculation
of the shifts that accounts for the hyperfine splitting of the intermediate
state and shows that their compensation is feasible. Below, we ignore the
hyperfine splitting of the intermediate state and give a brief introduction;
more details can be found in Ref. Maller _et al._ (2015).
Here, we choose the qubit states as $|1\rangle=|6S_{1/2},F=3,m_{F}=3\rangle$
and $|0\rangle=|6S_{1/2},F=4,m_{F}=4\rangle$ of cesium. The state $|1\rangle$
is driven to $|p\rangle$ (in the $7P_{1/2}$ manifold) by left-hand circularly
polarized laser fields (note that this differs slightly from
Ref. Maller _et al._ (2015)), and $|p\rangle$ is further excited to
$|r\rangle=|82S_{1/2},m_{J},m_{I}\rangle$. The Hamiltonian for a two-photon
Rydberg excitation is (in this section we explicitly put $\hbar$ in
Hamiltonians)
$\displaystyle\hat{H}_{\text{2-pho;0}}$ $\displaystyle=$
$\displaystyle\hbar[\Omega_{1}|p\rangle\langle 1|/2+\Omega_{2}|r\rangle\langle
p|/2+\text{H.c.}]$ (54) $\displaystyle+\hbar\Delta|p\rangle\langle p|,$
where $\Delta$ is defined as the frequency of the laser field minus the
frequency of the atomic transition. Note that we have not included the
nonresonant shifts in the equation above. When $\Delta$ is large compared
to the decay rate of $|p\rangle$, the intermediate state can be adiabatically
eliminated, leading to
$\displaystyle\hat{H}_{\text{2-pho}}$ $\displaystyle=$
$\displaystyle\hbar\\{[\Omega_{\text{eff}}|r\rangle\langle
1|/2+\text{H.c.}]+\Delta_{r}|r\rangle\langle r|$ (55)
$\displaystyle+\Delta_{q1}|1\rangle\langle 1|\\},$
where $\Omega_{\text{eff}}=\Omega_{1}\Omega_{2}/(2\Delta)$, and according to
Ref. Maller _et al._ (2015) the effective detuning at the level $|r\rangle$
is
$\displaystyle\Delta_{r}(\omega_{1},\omega_{2},\Delta,\mathcal{E}_{1},\mathcal{E}_{2})$
$\displaystyle=$
$\displaystyle\frac{\Omega_{2}^{2}}{4\Delta}-\frac{e^{2}}{4m_{e}\hbar}\left(\frac{\mathcal{E}_{1}^{2}}{\omega_{1}^{2}}+\frac{\mathcal{E}_{2}^{2}}{\omega_{2}^{2}}\right),$
(56)
where $e$ is the elementary charge, $m_{e}$ the mass of the electron, and
$\omega_{j}$ and $\mathcal{E}_{j}$ are the frequency and electric field
amplitude of the laser field, where $j=1$ and $2$ for the lower and upper
transitions, respectively. Similarly, for the ground state, the effective
detuning is
$\displaystyle\Delta_{q1}(\omega_{1},\omega_{2},\Delta,\mathcal{E}_{1},\mathcal{E}_{2})$
$\displaystyle=$
$\displaystyle\frac{\Omega_{1}^{2}}{4\Delta}-\frac{1}{4\hbar}\left(\alpha_{1}\mathcal{E}_{1}^{2}+\alpha_{2}\mathcal{E}_{2}^{2}\right),$
(57)
where $\alpha_{1}$ and $\alpha_{2}$ are the nonresonant polarizabilities from
the lower and upper fields that can be calculated via the sum over transitions
to other states except the intermediate state $|p\rangle$. Meanwhile, there is
a shift on the ground state $|0\rangle$
$\displaystyle\Delta_{q0}(\omega_{1},\omega_{2},\Delta,\mathcal{E}_{1},\mathcal{E}_{2})$
$\displaystyle=$
$\displaystyle\frac{\mathscr{C}^{2}\Omega_{1}^{2}}{4(\Delta+\omega_{q})}-\frac{1}{4\hbar}(\alpha_{1}\mathcal{E}_{1}^{2}$
(58) $\displaystyle+\alpha_{2}\mathcal{E}_{2}^{2}),$
where $\omega_{q}$ is the frequency separation between $|0\rangle$ and
$|1\rangle$, and $\mathscr{C}$ is a factor determined by the selection rules.
Because $\alpha_{2}/\alpha_{1}=-16.3$ and $\alpha_{1}$ is negative, the
off-resonant shift is negative, so the resonant shift must be positive. Hence
both $\Delta$ in Eq. (57) and $\Delta+\omega_{q}$ in Eq. (58) must be
positive, which further requires $\mathscr{C}^{2}>1$.
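The adiabatic elimination that takes Eq. (54) to Eq. (55) can also be checked numerically by propagating the full three-level Hamiltonian and comparing with the effective two-photon Rabi frequency $\Omega_{\text{eff}}=\Omega_{1}\Omega_{2}/(2\Delta)$. The parameter values below are arbitrary illustrative choices; with $\Omega_{1}=\Omega_{2}$ the light shifts of $|1\rangle$ and $|r\rangle$ are equal, so the two-photon transition stays resonant:

```python
import numpy as np
from scipy.linalg import expm

# Three-level Hamiltonian of Eq. (54) in the basis {|1>, |p>, |r>}, hbar = 1.
# Illustrative (arbitrary-unit) parameters with Delta >> Omega_1, Omega_2.
O1, O2, Delta = 1.0, 1.0, 50.0
H = np.array([[0.0,    O1 / 2, 0.0   ],
              [O1 / 2, Delta,  O2 / 2],
              [0.0,    O2 / 2, 0.0   ]])

# Effective two-photon Rabi frequency after eliminating |p>
O_eff = O1 * O2 / (2 * Delta)

# Starting in |1>, the population should be almost fully transferred to |r>
# after half an effective Rabi period; leakage into |p> is of order
# (Omega / 2 Delta)^2 ~ 1e-4 here
psi = expm(-1j * H * (np.pi / O_eff)) @ np.array([1.0, 0.0, 0.0])
print(f"P_r at t = pi/O_eff: {abs(psi[2])**2:.4f}")
```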
When $\Delta$ is much larger than the hyperfine splitting of the intermediate
$7P_{1/2}$ state, we can ignore the hyperfine splitting and write the
intermediate state as $|7P_{1/2},m_{J},m_{I}\rangle$. When right-hand
circularly polarized fields couple the qubit states to the $7P_{1/2}$ states,
the square of the ratio between the $|0\rangle\leftrightarrow|p\rangle$
coupling and the $|1\rangle\leftrightarrow|p\rangle$ coupling is
$\displaystyle\mathscr{C}^{2}$ $\displaystyle=$
$\displaystyle\frac{\sum_{m_{J},m_{I}}\left(\sum_{m_{e}}C_{m_{J},1,m_{e}}^{1/2,1,1/2}C_{m_{e},m_{I},0}^{1/2,7/2,4}\right)^{2}}{\sum_{m_{J},m_{I}}\left(\sum_{m_{e}}C_{m_{J},1,m_{e}}^{1/2,1,1/2}C_{m_{e},m_{I},0}^{1/2,7/2,3}\right)^{2}},$
(59)
which is 8. To shift
$\Delta_{r}(\omega_{1},\omega_{2},\Delta,\mathcal{E}_{1},\mathcal{E}_{2})$ to
zero, one can simply adjust the frequency of the laser fields addressing the
upper transition. To compensate the Stark shifts at $|0\rangle$ and
$|1\rangle$, Eqs. (57) and (58) indicate that
$\mathscr{C}^{2}=1+\omega_{q}/\Delta$, which means that we should choose
$\Delta=\omega_{q}/(\mathscr{C}^{2}-1)\approx 2\pi\times 1.3$ GHz (for
rubidium it would be $2\pi\times 0.97$ GHz), close to values used in
experiments Isenhower _et al._ (2010); Maller _et al._ (2015). With
$\Delta=\omega_{q}/(\mathscr{C}^{2}-1)$ satisfied, an appropriate set of
$(\Delta,\mathcal{E}_{1},\mathcal{E}_{2})$ satisfying Eq. (57) also satisfies
Eq. (58). With the given data for the radial coupling between the ground and
$|p\rangle$ states and the values of $\alpha_{1,2}$ (page 6 of Maller _et
al._ (2015)), Eq. (57) reduces to
$\overline{\Delta}[16.3\left(\frac{\mathcal{E}_{2}}{\mathcal{E}_{1}}\right)^{2}-1]=2.98$,
where $\overline{\Delta}=\Delta\times 10^{-9}~\text{s}$, which can be used to
fix the ratio $|\mathcal{E}_{2}/\mathcal{E}_{1}|$.
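The detunings quoted above follow directly from $\Delta=\omega_{q}/(\mathscr{C}^{2}-1)$ with $\mathscr{C}^{2}=8$ from Eq. (59) and the hyperfine clock frequencies of Cs and Rb; for rubidium the same ratio is assumed here purely for illustration:

```python
# Ground-state Stark-shift compensation requires C^2 = 1 + omega_q / Delta,
# i.e., Delta = omega_q / (C^2 - 1), with C^2 = 8 from Eq. (59)
C2 = 8.0

# Hyperfine qubit (clock) frequencies in GHz
omega_q = {"Cs": 9.192631770, "Rb": 6.834682611}

for atom, wq in omega_q.items():
    print(f"{atom}: Delta = 2*pi x {wq / (C2 - 1):.2f} GHz")
```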
We have shown how to compensate the Stark shifts when pumping applies only to
the transition $|1\rangle\leftrightarrow|r\rangle$. For the target qubit,
another transition, $|0\rangle\leftrightarrow|r\rangle$, is required. Then
four fields produce Stark shifts in the two qubit states (two equations), and
the Rabi frequencies for $|0(1)\rangle\leftrightarrow|r\rangle$ must be equal
(a third equation). Moreover, there are two detunings at the intermediate
states for these two transitions. Altogether, there are three equations for
six variables, and hence the problem is solvable. The Stark shift at
$|r\rangle$ can be compensated by adjusting the central frequencies of the
laser fields. So, Eq. (24), or Eq. (3) in the main text, can be realized.
For one-photon excitation of $p$-orbital Rydberg states, the off-resonant ac
Stark shift on $|0\rangle$ or $|1\rangle$ from the resonant field is positive,
but one can use another field of larger wavelength to induce a negative shift.
Then, a set of equations similar to Eqs. (57) and (58) can be established to
compensate the Stark shifts. For the shift on the Rydberg state, one can shift
the wavelength of the laser field to recover the resonance condition for
$|r\rangle$.
## Appendix F CNOT in other systems
It is useful to compare the speed of the CNOT in different physical systems
using the ratio $\Xi=T_{\text{coh}}/T_{\text{g}}$, where $T_{\text{coh}}$ is
the coherence time (the pure spin dephasing time $T_{2}$ or the
inhomogeneously broadened $T_{2}^{\ast}$), and $T_{\text{g}}$ is the duration
of the CNOT. First, for neutral-atom qubit systems, a coherence time of 7
seconds has been measured in a large neutral-atom array (see the left column
of page 2 of Wang _et al._ (2016)). That coherence time was realized to
ensure high fidelity of single-qubit gates with durations of 80 $\mu$s; in
principle, much longer coherence times can be achieved. To make a
conservative estimate, we assume a coherence time of 7 s for neutral atoms.
With the protocol in this manuscript, the CNOT duration would be about 0.3
$\mu$s with Rydberg Rabi frequencies like those realized in Refs. [43, 44].
The figure of merit would then be $\Xi=2.3\times 10^{7}$.
Second, for trapped ions, several recent and advanced results can be inferred
from the literature. However, one should bear in mind that some fast
entangling gates are not CNOT gates. Any two-qubit entangling gate can be
repeated several times to form a CNOT when assisted by single-qubit gates, as
demonstrated in Bremner _et al._ (2002), but the CNOT is the gate directly
useful to quantum computation in the circuit model Shor (1997); Bremner _et
al._ (2002); Shende _et al._ (2004); Peruzzo _et al._ (2014); Debnath _et
al._ (2016). So, we focus on data for trapped ions where CNOT gates were
demonstrated. (i) In Ref. Ballance _et al._ (2015), Bell states of trapped
ions were created with gate times of 27.4 $\mu$s. Coherence times were not
explicitly characterized, but near its end Ref. Ballance _et al._ (2015)
states coherence times of 60 s, indicating that the coherence times of the
qubits were around 60 s. The figure of merit would thus be
$60~\text{s}/(27.4~\mu\text{s})$, about $\Xi=2.2\times 10^{6}$. (ii) In Ref.
Ballance _et al._ (2016), Bell states of trapped ions were prepared by pulses
with durations from 50 $\mu$s to 100 $\mu$s; very short gate times of about
3.8 $\mu$s were also shown. The Supplemental Material of Ref. Ballance _et
al._ (2016) gives a coherence time of about 6 s, so the figure of merit is
$6~\text{s}/(3.8~\mu\text{s})$, about $\Xi=1.6\times 10^{6}$. (iii) In Ref.
Gaebler _et al._ (2016), Bell states of trapped ions were prepared by pulses
of duration about 30 $\mu$s (Fig. 6 of Ref. Gaebler _et al._ (2016) shows
gates with even longer durations), and the coherence time is about 1.5 s, so
the figure of merit would be $1.5~\text{s}/(30~\mu\text{s})$, about
$\Xi=5\times 10^{4}$.
Third, for superconducting qubits, very fast CNOT gates were reported in Ref.
Barends _et al._ (2014). The energy relaxation time $T_{1}$ is usually
shorter than the phase coherence time $T_{2}^{\ast}$ in superconducting
circuits. In Barends _et al._ (2014), the measured $T_{1}$ was
$20-40~{}\mu$s, with values up to $57~{}\mu$s recorded. The Supplementary
Information of Ref. Barends _et al._ (2014) shows that the $C_{Z}$ gate
durations can be as short as $38$ ns and the single-qubit gate times as small
as $10$ ns, so the duration of a CNOT can be $T_{\text{g}}=58$ ns, leading to
a figure of merit $\Xi=10^{3}$. Even if the fast gate of Ref. Barends _et
al._ (2014) were realized in a superconducting system with a longer coherence
time $T_{1}=70~{}\mu$s Rigetti _et al._ (2012), the figure of merit
$\Xi=1.2\times 10^{3}$ would still be much smaller than those of neutral
atoms.
Concerning the ratio between the coherence time and the CNOT duration, the
above comparison shows that TSD makes neutral atoms advantageous compared to
trapped ions and superconducting systems.
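The figure-of-merit comparison in this appendix can be tabulated in a few lines, using the values quoted above; the superconducting entry combines the 58 ns gate of Barends _et al._ (2014) with the longer $T_{1}=70~\mu$s of Rigetti _et al._ (2012):

```python
# Coherence times T_coh (s) and CNOT durations T_g (s) quoted in this appendix
platforms = {
    "neutral atoms (TSD)":           (7.0,    0.3e-6),
    "trapped ions, Ballance 2015":   (60.0,   27.4e-6),
    "trapped ions, Ballance 2016":   (6.0,    3.8e-6),
    "trapped ions, Gaebler 2016":    (1.5,    30e-6),
    "superconducting, Barends 2014": (70e-6,  58e-9),
}

for name, (t_coh, t_g) in platforms.items():
    print(f"{name:32s} Xi = {t_coh / t_g:.1e}")
```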
## References
* Jaksch _et al._ (2000) D. Jaksch, J. I. Cirac, P. Zoller, S. L. Rolston, R. Côté, and M. D. Lukin, Fast Quantum Gates for Neutral Atoms, Phys. Rev. Lett. 85, 2208 (2000).
* Lukin _et al._ (2001) M. D. Lukin, M. Fleischhauer, R. Cote, L. M. Duan, D. Jaksch, J. I. Cirac, and P. Zoller, Dipole Blockade and Quantum Information Processing in Mesoscopic Atomic Ensembles, Phys. Rev. Lett. 87, 037901 (2001).
* Saffman _et al._ (2010) M. Saffman, T. G. Walker, and K. Mølmer, Quantum information with Rydberg atoms, Rev. Mod. Phys. 82, 2313 (2010).
* Saffman (2016) M. Saffman, Quantum computing with atomic qubits and Rydberg interactions: Progress and challenges, J. Phys. B 49, 202001 (2016).
* Weiss and Saffman (2017) D. S. Weiss and M. Saffman, Quantum computing with neutral atoms, Phys. Today 70, 44 (2017).
* Firstenberg _et al._ (2016) O. Firstenberg, C. S. Adams, and S. Hofferberth, Nonlinear quantum optics mediated by Rydberg interactions, J. Phys. B 49, 152003 (2016).
* Adams _et al._ (2020) C. S. Adams, J. D. Pritchard, and J. P. Shaffer, Rydberg atom quantum technologies, J. Phys. B 53, 012002 (2020).
* Browaeys and Lahaye (2020) A. Browaeys and T. Lahaye, Many-body physics with individually controlled Rydberg atoms, Nat. Phys. 16, 132 (2020).
* Gurian _et al._ (2012) J. H. Gurian, P. Cheinet, P. Huillery, A. Fioretti, J. Zhao, P. L. Gould, D. Comparat, and P. Pillet, Observation of a resonant four-body interaction in cold cesium rydberg atoms, Phys. Rev. Lett. 108, 023005 (2012).
* Tretyakov _et al._ (2017) D. B. Tretyakov, I. I. Beterov, E. A. Yakshina, V. M. Entin, I. I. Ryabtsev, P. Cheinet, and P. Pillet, Observation of the Borromean three-body Förster resonances for three interacting Rb Rydberg atoms, Phys. Rev. Lett. 119, 173402 (2017).
* De Léséleuc _et al._ (2019) S. De Léséleuc, V. Lienhard, P. Scholl, D. Barredo, S. Weber, N. Lang, H. P. Büchler, T. Lahaye, and A. Browaeys, Observation of a symmetry-protected topological phase of interacting bosons with Rydberg atoms, Science 365, 775 (2019).
* Schauß _et al._ (2015) P. Schauß, J. Zeiher, T. Fukuhara, S. Hild, M. Cheneau, T. Macrì, T. Pohl, I. Bloch, and C. Gross, Crystallization in Ising quantum magnets, Science 347, 1455 (2015).
* Labuhn _et al._ (2016) H. Labuhn, D. Barredo, S. Ravets, S. de Léséleuc, T. Macrì, T. Lahaye, and A. Browaeys, A highly-tunable quantum simulator of spin systems using two-dimensional arrays of single Rydberg atoms, Nature 534, 667 (2016).
* Zeiher _et al._ (2016) J. Zeiher, R. V. Bijnen, P. Schauß, S. Hild, J.-y. Choi, T. Pohl, I. Bloch, and C. Gross, Many-body interferometry of a Rydberg-dressed spin lattice, Nat. Phys. 12, 1095 (2016).
* Zeiher _et al._ (2017) J. Zeiher, J.-Y. Choi, A. Rubio-Abadal, T. Pohl, R. van Bijnen, I. Bloch, and C. Gross, Coherent many-body spin dynamics in a long-range interacting Ising chain, Phys. Rev. X 7, 041063 (2017).
* Bernien _et al._ (2017) H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletic, and M. D. Lukin, Probing many-body dynamics on a 51-atom quantum simulator, Nature 551, 579 (2017).
* de Léséleuc _et al._ (2018) S. de Léséleuc, S. Weber, V. Lienhard, D. Barredo, H. P. Büchler, T. Lahaye, and A. Browaeys, Accurate Mapping of Multilevel Rydberg Atoms on Interacting Spin- 1 /2 Particles for the Quantum Simulation of Ising Models, Phys. Rev. Lett. 120, 113602 (2018).
* Guardado-Sanchez _et al._ (2018) E. Guardado-Sanchez, P. T. Brown, D. Mitra, T. Devakul, D. A. Huse, P. Schauß, and W. S. Bakr, Probing the Quench Dynamics of Antiferromagnetic Correlations in a 2D Quantum Ising Spin System, Phys. Rev. X 8, 021069 (2018).
* Kim _et al._ (2018) H. Kim, Y.-J. Park, K. Kim, H. S. Sim, and J. Ahn, Detailed Balance of Thermalization dynamics in Rydberg atom quantum simulators, Phys. Rev. Lett. 120, 180502 (2018).
* Keesling _et al._ (2019) A. Keesling, A. Omran, H. Levine, H. Bernien, H. Pichler, S. Choi, R. Samajdar, S. Schwartz, P. Silvi, S. Sachdev, P. Zoller, M. Endres, M. Greiner, V. Vuletić, and M. D. Lukin, Quantum Kibble–Zurek mechanism and critical dynamics on a programmable Rydberg simulator, Nature 568, 207 (2019).
* Ding _et al._ (2020) D.-S. Ding, H. Busche, B.-S. Shi, G.-C. Guo, and C. S. Adams, Phase Diagram and Self-Organizing Dynamics in a Thermal Ensemble of Strongly Interacting Rydberg Atoms, Phys. Rev. X 10, 21023 (2020).
* Borish _et al._ (2020) V. Borish, O. Marković, J. A. Hines, S. V. Rajagopal, and M. Schleier-Smith, Transverse-Field Ising Dynamics in a Rydberg-Dressed Atomic Gas, Phys. Rev. Lett. 124, 063601 (2020).
* Dudin and Kuzmich (2012) Y. O. Dudin and A. Kuzmich, Strongly interacting Rydberg excitations of a cold atomic gas, Science 336, 887 (2012).
* Peyronel _et al._ (2012) T. Peyronel, O. Firstenberg, Q.-Y. Liang, S. Hofferberth, A. V. Gorshkov, T. Pohl, M. D. Lukin, and V. Vuletić, Quantum nonlinear optics with single photons enabled by strongly interacting atoms, Nature 488, 57 (2012).
* Firstenberg _et al._ (2013) O. Firstenberg, T. Peyronel, Q.-Y. Liang, A. V. Gorshkov, M. D. Lukin, and V. Vuletić, Attractive photons in a quantum nonlinear medium, Nature 502, 71 (2013).
* Li _et al._ (2013) L. Li, Y. O. Dudin, and A. Kuzmich, Entanglement between light and an optical atomic excitation, Nature 498, 466 (2013).
* Gorniaczyk _et al._ (2014) H. Gorniaczyk, C. Tresp, J. Schmidt, H. Fedder, and S. Hofferberth, Single-Photon Transistor Mediated by Interstate Rydberg Interactions, Phys. Rev. Lett. 113, 053601 (2014).
* Baur _et al._ (2014) S. Baur, D. Tiarks, G. Rempe, and S. Dürr, Single-Photon Switch Based on Rydberg Blockade, Phys. Rev. Lett. 112, 073901 (2014).
* Tiarks _et al._ (2014) D. Tiarks, S. Baur, K. Schneider, S. Dürr, and G. Rempe, Single-Photon Transistor Using a Förster Resonance, Phys. Rev. Lett. 113, 053602 (2014).
* Li _et al._ (2016) J. Li, M.-T. Zhou, B. Jing, X.-J. Wang, S.-J. Yang, X. Jiang, K. Mølmer, X.-H. Bao, and J.-W. Pan, Hong-Ou-Mandel Interference between Two Deterministic Collective Excitations in an Atomic Ensemble, Phys. Rev. Lett. 117, 180501 (2016).
* Busche _et al._ (2017) H. Busche, P. Huillery, S. W. Ball, T. Ilieva, M. P. A. Jones, and C. S. Adams, Contactless nonlinear optics mediated by long-range Rydberg interactions, Nat. Phys. 13, 655 (2017).
* Ripka and Pfau (2018) F. Ripka and T. Pfau, A room-temperature single-photon source based on strongly interacting Rydberg atoms, Science 449, 446 (2018).
* Liang _et al._ (2018) Q.-Y. Liang, A. V. Venkatramani, S. H. Cantu, T. L. Nicholson, M. J. Gullans, A. V. Gorshkov, J. D. Thompson, C. Chin, M. D. Lukin, and V. Vuleti, Observation of three-photon bound states in a quantum nonlinear medium, Science 786, 783 (2018).
* Li _et al._ (2019) J. Li, M.-T. Zhou, C.-W. Yang, P.-F. Sun, J.-L. Liu, X.-H. Bao, and J.-W. Pan, Semi-Deterministic Entanglement between a Single Photon and an Atomic Ensemble, Phys. Rev. Lett. 123, 140504 (2019).
* Thompson _et al._ (2017) J. D. Thompson, T. L. Nicholson, Q.-Y. Liang, S. H. Cantu, A. V. Venkatramani, S. Choi, I. A. Fedorov, D. Viscor, T. Pohl, M. D. Lukin, and V. Vuletić, Symmetry-protected collisions between strongly interacting photons, Nature 542, 206 (2017).
* Ebert _et al._ (2015) M. Ebert, M. Kwon, T. G. Walker, and M. Saffman, Coherence and Rydberg Blockade of Atomic Ensemble Qubits, Phys. Rev. Lett. 115, 093601 (2015).
* Omran _et al._ (2019) A. Omran, H. Levine, A. Keesling, G. Semeghini, T. T. Wang, S. Ebadi, H. Bernien, A. S. Zibrov, H. Pichler, S. Choi, J. Cui, M. Rossignolo, P. Rembold, S. Montangero, T. Calarco, M. Endres, M. Greiner, V. Vuletić, and M. D. Lukin, Generation and manipulation of Schrödinger cat states in Rydberg atom arrays, Science 365, 570 (2019).
* Wilk _et al._ (2010) T. Wilk, A. Gaëtan, C. Evellin, J. Wolters, Y. Miroshnychenko, P. Grangier, and A. Browaeys, Entanglement of Two Individual Neutral Atoms Using Rydberg Blockade, Phys. Rev. Lett. 104, 010502 (2010).
* Isenhower _et al._ (2010) L. Isenhower, E. Urban, X. L. Zhang, A. T. Gill, T. Henage, T. A. Johnson, T. G. Walker, and M. Saffman, Demonstration of a Neutral Atom Controlled-NOT Quantum Gate, Phys. Rev. Lett. 104, 010503 (2010).
* Zhang _et al._ (2010) X. L. Zhang, L. Isenhower, A. T. Gill, T. G. Walker, and M. Saffman, Deterministic entanglement of two neutral atoms via Rydberg blockade, Phys. Rev. A 82, 030306(R) (2010).
* Maller _et al._ (2015) K. M. Maller, M. T. Lichtman, T. Xia, Y. Sun, M. J. Piotrowicz, A. W. Carr, L. Isenhower, and M. Saffman, Rydberg-blockade controlled-not gate and entanglement in a two-dimensional array of neutral-atom qubits, Phys. Rev. A 92, 022336 (2015).
* Zeng _et al._ (2017) Y. Zeng, P. Xu, X. He, Y. Liu, M. Liu, J. Wang, D. J. Papoular, G. V. Shlyapnikov, and M. Zhan, Entangling Two Individual Atoms of Different Isotopes via Rydberg Blockade, Phys. Rev. Lett. 119, 160502 (2017).
* Levine _et al._ (2019) H. Levine, A. Keesling, G. Semeghini, A. Omran, T. T. Wang, S. Ebadi, H. Bernien, M. Greiner, V. Vuletić, H. Pichler, and M. D. Lukin, Parallel implementation of high-fidelity multi-qubit gates with neutral atoms, Phys. Rev. Lett. 123, 170503 (2019).
* Graham _et al._ (2019) T. M. Graham, M. Kwon, B. Grinkemeyer, Z. Marra, X. Jiang, M. T. Lichtman, Y. Sun, M. Ebert, and M. Saffman, Rydberg mediated entanglement in a two-dimensional neutral atom qubit array, Phys. Rev. Lett. 123, 230501 (2019).
* Madjarov _et al._ (2020) I. S. Madjarov, J. P. Covey, A. L. Shaw, J. Choi, A. Kale, A. Cooper, H. Pichler, V. Schkolnik, J. R. Williams, and M. Endres, High-fidelity entanglement and detection of alkaline-earth Rydberg atoms, Nat. Phys. 16, 857 (2020).
* Jau _et al._ (2016) Y.-Y. Jau, A. M. Hankin, T. Keating, I. H. Deutsch, and G. W. Biedermann, Entangling atomic spins with a Rydberg-dressed spin-flip blockade, Nat. Phys. 12, 71 (2016).
* Levine _et al._ (2018) H. Levine, A. Keesling, A. Omran, H. Bernien, S. Schwartz, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletić, and M. D. Lukin, High-fidelity control and entanglement of Rydberg atom qubits, Phys. Rev. Lett. 121, 123603 (2018).
* Picken _et al._ (2019) C. J. Picken, R. Legaie, K. McDonnell, and J. D. Pritchard, Entanglement of neutral-atom qubits with long ground-Rydberg coherence times, Quantum Sci. Technol. 4, 015011 (2019).
* Tiarks _et al._ (2019) D. Tiarks, S. Schmidt-Eberle, T. Stolz, G. Rempe, and S. Dürr, A photon–photon quantum gate based on Rydberg interactions, Nat. Phys. 15, 124 (2019).
* Jo _et al._ (2020) H. Jo, Y. Song, M. Kim, and J. Ahn, Rydberg atom entanglements in the weak coupling regime, Phys. Rev. Lett. 124, 33603 (2020).
* Ates _et al._ (2007) C. Ates, T. Pohl, T. Pattard, and J. M. Rost, Antiblockade in Rydberg Excitation of an Ultracold Lattice Gas, Phys. Rev. Lett. 98, 023002 (2007).
* Amthor _et al._ (2010) T. Amthor, C. Giese, C. S. Hofmann, and M. Weidemüller, Evidence of Antiblockade in an Ultracold Rydberg Gas, Phys. Rev. Lett. 104, 013001 (2010).
* Nielsen and Chuang (2000) M. A. Nielsen and I. L. Chuang, _Quantum Computation and Quantum Information_ (Cambridge University Press, Cambridge, 2000).
* Williams (2011) C. P. Williams, _Explorations in Quantum Computing_ , 2nd ed., edited by D. Gries and F. B. Schneider, Texts in Computer Science (Springer-Verlag, London, 2011).
* Ladd _et al._ (2010) T. D. Ladd, F. Jelezko, R. Laflamme, Y. Nakamura, C. Monroe, and J. L. O’Brien, Quantum computers. Nature (London) 464, 45 (2010).
* Shor (1997) P. W. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM J. Comput. 26, 1484 (1997).
* Bremner _et al._ (2002) M. J. Bremner, C. M. Dawson, J. L. Dodd, A. Gilchrist, A. W. Harrow, D. Mortimer, M. A. Nielsen, and T. J. Osborne, Practical Scheme for Quantum Computation with Any Two-Qubit Entangling Gate, Phys. Rev. Lett. 89, 247902 (2002).
* Shende _et al._ (2004) V. V. Shende, I. L. Markov, and S. S. Bullock, Minimal universal two-qubit controlled-NOT-based circuits, Phys. Rev. A 69, 062321 (2004).
* Peruzzo _et al._ (2014) A. Peruzzo, J. McClean, P. Shadbolt, M. H. Yung, X. Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O’Brien, A variational eigenvalue solver on a photonic quantum processor, Nat. Comm. 5, 4213 (2014).
* Debnath _et al._ (2016) S. Debnath, N. M. Linke, C. Figgatt, K. A. Landsman, K. Wright, and C. Monroe, Demonstration of a small programmable quantum computer with atomic qubits, Nature 536, 63 (2016).
* Cirac and Zoller (1995) J. I. Cirac and P. Zoller, Quantum Computations with Cold Trapped Ions, Phys. Rev. Lett. 74, 4091 (1995).
* Sørensen and Mølmer (1999) A. Sørensen and K. Mølmer, Quantum Computation with Ions in Thermal Motion, Phys. Rev. Lett. 82, 1971 (1999).
* Ballance _et al._ (2015) C. J. Ballance, V. M. Schafer, J. P. Home, S. D. J, W. S. C, A. D. T. C, N. M. Linke, T. P. Harty, D. P. L. A. Craik, D. N. Stacey, a. M. Steane, and D. M. Lucas, Hybrid quantum logic and a test of Bell’s inequality using two different atomic species, Nature 528, 384 (2015).
* Ballance _et al._ (2016) C. J. Ballance, T. P. Harty, N. M. Linke, M. A. Sepiol, and D. M. Lucas, High-Fidelity Quantum Logic Gates Using Trapped-Ion Hyperfine Qubits, Phys. Rev. Lett. 117, 060504 (2016).
* Gaebler _et al._ (2016) J. P. Gaebler, T. R. Tan, Y. Lin, Y. Wan, R. Bowler, A. C. Keith, S. Glancy, K. Coakley, E. Knill, D. Leibfried, and D. J. Wineland, High-Fidelity Universal Gate Set for $Be^{+}$ Ion Qubits, Phys. Rev. Lett. 117, 060505 (2016).
* Schäfer _et al._ (2018) V. M. Schäfer, C. J. Ballance, K. Thirumalai, L. J. Stephenson, T. G. Ballance, A. M. Steane, and D. M. Lucas, Fast quantum logic gates with trapped-ion qubits, Nature 555, 75 (2018).
* Zhang _et al._ (2020) C. Zhang, F. Pokorny, W. Li, G. Higgins, A. Pöschl, I. Lesanovsky, and M. Hennrich, Submicrosecond entangling gate between trapped ions via Rydberg interaction, Nature 580, 345 (2020).
* You and Nori (2005) J. Q. You and F. Nori, Superconducting circuits and quantum information, Phys. Today 58, 42 (2005).
* Devoret and Schoelkopf (2013) M. H. Devoret and R. J. Schoelkopf, Superconducting Circuits for Quantum Information: An Outlook, Science , 1169 (2013).
* Barends _et al._ (2014) R. Barends, J. Kelly, A. Megrant, A. Veitia, D. Sank, E. Jeffrey, T. C. White, J. Mutus, A. G. Fowler, B. Campbell, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, C. Neill, P. O’Malley, P. Roushan, A. Vainsencher, J. Wenner, A. N. Korotkov, A. N. Cleland, and J. M. Martinis, Superconducting quantum circuits at the surface code threshold for fault tolerance, Nature 508, 500 (2014).
* Walker and Saffman (2008) T. G. Walker and M. Saffman, Consequences of Zeeman degeneracy for the van der Waals blockade between Rydberg atoms, Phys. Rev. A 77, 032723 (2008).
* Saffman and Walker (2005) M. Saffman and T. G. Walker, Analysis of a quantum logic device based on dipole-dipole interactions of optically trapped Rydberg atoms, Phys. Rev. A 72, 022347 (2005).
* Shi (2018a) X.-F. Shi, Accurate Quantum Logic Gates by Spin Echo in Rydberg Atoms, Phys. Rev. Appl. 10, 034006 (2018a).
* Shi (2020a) X.-F. Shi, Single-site Rydberg addressing in 3D atomic arrays for quantum computing with neutral atoms, J. Phys. B 53, 054002 (2020a).
* Shi (2018b) X.-F. Shi, Deutsch, Toffoli, and CNOT Gates via Rydberg Blockade of Neutral Atoms, Phys. Rev. Appl. 9, 051001 (2018b).
* Müller _et al._ (2009) M. Müller, I. Lesanovsky, H. Weimer, H. P. Büchler, and P. Zoller, Mesoscopic Rydberg Gate Based on Electromagnetically Induced Transparency, Phys. Rev. Lett. 102, 170502 (2009).
* Xia _et al._ (2015) T. Xia, M. Lichtman, K. Maller, A. W. Carr, M. J. Piotrowicz, L. Isenhower, and M. Saffman, Randomized Benchmarking of Single-Qubit Gates in a 2D Array of Neutral-Atom Qubits, Phys. Rev. Lett. 114, 100503 (2015).
* Wang _et al._ (2016) Y. Wang, A. Kumar, T.-Y. Wu, and D. S. Weiss, Single-qubit gates based on targeted phase shifts in a 3D neutral atom array, Science 352, 1562 (2016).
* Yavuz _et al._ (2006) D. D. Yavuz, P. B. Kulatunga, E. Urban, T. A. Johnson, N. Proite, T. Henage, T. G. Walker, and M. Saffman, Fast Ground State Manipulation of Neutral Atoms in Microscopic Optical Traps, Phys Rev Lett 96, 063001 (2006).
* Jones _et al._ (2007) M. P. A. Jones, J. Beugnon, A. Gaëtan, J. Zhang, G. Messin, A. Browaeys, and P. Grangier, Fast quantum state control of a single trapped neutral atom, Phys. Rev. A 75, 040301(R) (2007).
* Shi (2019) X.-F. Shi, Fast, Accurate, and Realizable Two-Qubit Entangling Gates by Quantum Interference in Detuned Rabi Cycles of Rydberg Atoms, Phys. Rev. Appl. 11, 044035 (2019).
* Beterov _et al._ (2009) I. I. Beterov, I. I. Ryabtsev, D. B. Tretyakov, and V. M. Entin, Quasiclassical calculations of blackbody-radiation-induced depopulation rates and effective lifetimes of Rydberg nS, nP, and nD alkali-metal atoms with n$\leq$80, Phys. Rev. A 79, 052504 (2009).
* Shi (2020b) X.-F. Shi, Suppressing Motional Dephasing of Ground-Rydberg Transition for High-Fidelity Quantum Control with Neutral Atoms, Phys. Rev. Appl. 13, 024008 (2020b).
* Zhang _et al._ (2012) X. L. Zhang, A. T. Gill, L. Isenhower, T. G. Walker, and M. Saffman, Fidelity of a Rydberg-blockade quantum gate from simulated quantum process tomography, Phys. Rev. A 85, 042310 (2012).
* Saffman _et al._ (2020) M. Saffman, I. I. Beterov, A. Dalal, E. J. Paez, and B. C. Sanders, Symmetric Rydberg controlled- Z gates with adiabatic pulses control target, Phys. Rev. A 101, 62309 (2020).
* Pedersen _et al._ (2007) L. H. Pedersen, N. M. Møller, and K. Mølmer, Fidelity of quantum operations, Phys. Lett. A 367, 47 (2007).
* Šibalić _et al._ (2017) N. Šibalić, J. D. Pritchard, C. S. Adams, and K. J. Weatherill, ARC: An open-source library for calculating properties of alkali Rydberg atoms, Comp. Phys. Comm. 220, 319 (2017).
* Gillen-Christandl _et al._ (2016) K. Gillen-Christandl, G. D. Gillen, M. J. Piotrowicz, and M. Saffman, Comparison of Gaussian and super Gaussian laser beams for addressing atomic qubits, Appl. Phys. B 122, 131 (2016).
* Rigetti _et al._ (2012) C. Rigetti, J. M. Gambetta, S. Poletto, B. L. T. Plourde, J. M. Chow, A. D. Córcoles, J. A. Smolin, S. T. Merkel, J. R. Rozen, G. A. Keefe, M. B. Rothwell, M. B. Ketchen, and M. Steffen, Superconducting qubit in a waveguide cavity with a coherence time approaching 0.1 ms, Phys. Rev. B 86, 100506(R) (2012).
|
# A Survey on Limit Theorems for Toeplitz Type Quadratic Functionals of
Stationary Processes and Applications
Mamikon S. Ginovyan and Murad S. Taqqu
###### Abstract
This is a survey of recent results on central and non-central limit theorems
for quadratic functionals of stationary processes. The underlying processes
are Gaussian, linear or Lévy-driven linear processes with memory, and are
defined either in discrete or continuous time. We focus on limit theorems for
Toeplitz and tapered Toeplitz type quadratic functionals of stationary
processes with applications in parametric and nonparametric statistical
estimation theory. We discuss questions concerning Toeplitz matrices and
operators, Fejér-type singular integrals, and Lévy-Itô-type and Stratonovich-
type multiple stochastic integrals. These are the main tools for obtaining
limit theorems.
Keywords. Central and non-central limit theorems; Toeplitz type quadratic
functional; stationary processes; spectral density; Brownian motion;
parametric and nonparametric estimation. 2010 Mathematics Subject
Classification. Primary: 60F05, 60G10, 60G15; secondary: 62F12, 62G05.
###### Contents
1 Introduction
  1.1 Notation and conventions
  1.2 The functionals under consideration
  1.3 A brief history
  1.4 Frequency-domain conditions
  1.5 Methods and tools
  1.6 The structure of the paper
2 The model: second-order stationary process
  2.1 Key notions and some basic results
    2.1.1 Second-order (wide-sense) stationary process
    2.1.2 Spectral representations
    2.1.3 Kolmogorov’s isometric isomorphism theorem
  2.2 Linear processes. Existence of spectral density functions
  2.3 Lévy-driven linear process
  2.4 Dependence (memory) structure of the model
    2.4.1 Short memory models
    2.4.2 Discrete-time long-memory and anti-persistent models
    2.4.3 Continuous-time long-memory and anti-persistent models
3 CLT for Toeplitz type quadratic functionals for Gaussian and linear processes
  3.1 Frequency domain conditions
  3.2 Time domain conditions
  3.3 Operator conditions
4 Functional limit theorems for Gaussian and linear models
  4.1 Functional CLT for $Q_{T}(\tau)$
  4.2 Functional NCLT for $Q_{T}(\tau)$
    4.2.1 Non-central limit theorems (discrete-time)
    4.2.2 Non-central limit theorems (continuous-time)
5 Functional limit theorems for Lévy-driven linear models
  5.1 Central limit theorems
  5.2 Non-central limit theorems
6 CLT for Tapered Toeplitz Quadratic Functionals
  6.1 The problem
  6.2 Statistical motivation
  6.3 Central limit theorems for tapered quadratic functional $Q_{T}^{h}$
    6.3.1 CLT for Gaussian models
    6.3.2 CLT for Lévy-driven stationary linear models
7 Applications
  7.1 Nonparametric estimation of spectral functionals
  7.2 Parametric estimation: The Whittle procedure
8 Methods and tools
  8.1 The characteristic functions and cumulant criteria for the CLT
  8.2 Approximation of traces of products of Toeplitz matrices and operators
  8.3 Approximation method for the CLT
  8.4 Fejér-type singular integrals
  8.5 Lévy-Itô-type and Stratonovich-type multiple stochastic integrals
  8.6 Power counting theorems
## 1 Introduction
A significant part of large-sample statistical inference relies on limit
theorems of probability theory, which involve sums and quadratic functionals
of stationary observations. Depending on the memory (or dependence) structure
of the underlying processes, these functionals, once normalized, can have
different limits, and the proofs of such limit theorems generally use
different methods. In this paper, we focus on the quadratic functionals. The
term ’central limit theorem’ (CLT) refers to a statement that a suitably
standardized quadratic functional converges in distribution to a Gaussian
random variable. Limit theorems where a suitably standardized quadratic
functional converges in distribution to a non-Gaussian random variable are
termed ’non-central limit theorems’ (NCLT).
We present results on central and non-central limit theorems for Toeplitz and
tapered Toeplitz type quadratic functionals of stationary processes with
applications in parametric and nonparametric statistical estimation theory.
The underlying processes are Gaussian, linear or Lévy-driven linear processes
with memory, and are defined either in discrete or continuous time. We also
discuss some questions concerning Toeplitz matrices and operators, Fejér-type
singular integrals, Lévy-Itô-type and Stratonovich-type multiple stochastic
integrals, and power counting theorems. These are the main tools for obtaining
limit theorems, but they are also of interest in themselves.
### 1.1 Notation and conventions.
The following notation and conventions are used throughout the paper.
The symbol ’$:=$’ stands for ’by definition’. c.t. := continuous-time; d.t. := discrete-time; s.d. := spectral density; c.f. := covariance function; CLT := central limit theorem; NCLT := non-central limit theorem. The symbol
’$\stackrel{{\scriptstyle d}}{{=}}$’ stands for equality of the finite-
dimensional distributions. The symbol ’$\overset{d}{\rightarrow}$’ stands for
convergence in distribution. The symbol ’$\overset{f.d.d.}{\longrightarrow}$’
stands for convergence of finite-dimensional distributions. The symbol
’$\Longrightarrow$’ stands for weak convergence. The notation
$X_{T}\overset{d}{\rightarrow}\eta\sim N(0,\sigma^{2})$ as $T\to\infty$ will
mean that the distribution of the random variable $X_{T}$ tends (as
$T\to\infty$) to the centered normal distribution with variance $\sigma^{2}$.
$\mathbb{E}[\cdot]$ := expectation operator. $\hbox{tr}[A]$ := trace of an operator (matrix) $A$. ${\mathbb{I}}_{A}(\cdot)$ := indicator of a set $A\subset\Lambda$. WN$(0,1)$ := standard white noise.
The standard symbols $\mathbb{N}$, $\mathbb{Z}$ and $\mathbb{R}$ denote the
sets of natural, integer and real numbers, respectively. By $\Lambda$ we
denote the frequency domain, that is, $\Lambda:=\mathbb{R}$ in the c.t. case,
and $\Lambda:=[-\pi,\pi]$ in the d.t. case. By
$L^{p}(\mu):=L^{p}(\Lambda,\mu)$ ($p\geq$1) we denote the weighted Lebesgue
space with respect to the measure $\mu$, and by $||\cdot||_{p,\mu}$ we denote
the norm in $L^{p}(\mu)$. In the special case where $d\mu(\lambda)=d\lambda$,
we will use $L^{p}$ and $||\cdot||_{p}$, respectively. By $l^{p}$ ($p\geq$1)
we denote the space of $p$-summable sequences. The letters $C$ and $c$
with or without indices are used to denote positive constants, the values of
which can vary from line to line.
### 1.2 The functionals under consideration
Let $\\{X(u),\ u\in\mathbb{U}\\}$ be a centered real-valued stationary process
with spectral density (s.d.) $f(\lambda)$, $\lambda\in\Lambda$ and covariance
function (c.f.) $r(t)$, $t\in\mathbb{U}$. We consider simultaneously the
continuous-time (c.t.) case, where $\mathbb{U}=\mathbb{R}:=(-\infty,\infty)$,
and the discrete-time (d.t.) case, where $\mathbb{U}=\mathbb{Z}:=\\{0,\pm
1,\pm 2,\ldots\\}$. The domain $\Lambda$ of the frequency variable $\lambda$
is $\Lambda=\mathbb{R}$ in the c.t. case, and $\Lambda:=[-\pi,\pi]$ in the
d.t. case.
We first survey the recent results concerning the asymptotic distribution (as
$T\to\infty$) of the following Toeplitz type quadratic functionals of the
process $X(u)$:
$Q_{T}:=\left\\{\begin{array}[]{ll}\sum_{t=1}^{T}\sum_{s=1}^{T}\widehat{g}(t-s)X(t)X(s)&\mbox{in
the d.t.\ case},\\\ \\\
\int_{0}^{T}\int_{0}^{T}\widehat{g}(t-s)X(t)X(s)\,dt\,ds&\mbox{in the c.t.\
case},\end{array}\right.$ (1.1)
where
$\widehat{g}(t):=\int_{\Lambda}e^{i\lambda t}\,g(\lambda)\,d\lambda,\quad
t\in\mathbb{U}$ (1.2)
is the Fourier transform of some real, even, integrable function $g(\lambda),$
$\lambda\in\Lambda$. We will refer to $g(\lambda)$ and to its Fourier
transform $\widehat{g}(t)$ as a generating function and generating kernel for
the functional $Q_{T}$, respectively. In the d.t. case the functions
$f(\lambda)$ and $g(\lambda)$ are assumed to be $2\pi$-periodic and
periodically extended to $\mathbb{R}$. In the c.t. case the process $X(u)$ is
assumed measurable and mean square continuous.
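The discrete-time functional in (1.1) is simply a quadratic form in the $T\times T$ Toeplitz matrix with entries $\widehat{g}(t-s)$. A minimal NumPy sketch of this computation (the white-noise sample and the choice $g(\lambda)=\cos(k\lambda)$, whose kernel is $\widehat{g}(u)=\pi\,\mathbb{1}\{|u|=k\}$, are illustrative):

```python
import numpy as np

def toeplitz_quadratic_form(x, g_hat):
    """Discrete-time Q_T = sum_{t,s=1}^T g_hat(t-s) X(t) X(s) of eq. (1.1)."""
    T = len(x)
    u = np.arange(T)
    B = g_hat(u[:, None] - u[None, :])   # Toeplitz matrix with entries g_hat(t - s)
    return x @ B @ x

# g(lambda) = cos(k*lambda) on [-pi, pi] has kernel g_hat(u) = pi * 1{|u| = k},
# so Q_T / (2*pi*T) is the lag-k sample covariance (cf. the discussion of
# Rosenblatt's covariance estimation in Section 1.4).
k = 1
g_hat = lambda u: np.pi * (np.abs(u) == k).astype(float)

rng = np.random.default_rng(0)
x = rng.standard_normal(500)                                       # white-noise sample
cov_k = toeplitz_quadratic_form(x, g_hat) / (2 * np.pi * len(x))   # ~ r(k), i.e. ~ 0 here
```

For long samples one would of course exploit the Toeplitz structure (e.g., FFT-based matrix-vector products) rather than form the dense matrix.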
The limit distributions of the functionals in (1.1) are completely determined
by the spectral density $f(\lambda)$ (or covariance function $r(t)$) and the
generating function $g(\lambda)$ (or generating kernel $\widehat{g}(t)$), and
depending on their properties, the limits can be either Gaussian (that is,
$Q_{T}$ with an appropriate normalization obeys the central limit theorem), or
non-Gaussian.
The following two questions arise naturally:
* (a)
Under what conditions on $f(\lambda)$ (resp. $r(t)$) and $g(\lambda)$ (resp.
$\widehat{g}(t)$) will the limits be Gaussian? (CLT-problem).
* (b)
Describe the limit distributions, if they are non-Gaussian (NCLT-problem).
We discuss here these questions both for d.t. and c.t. stationary processes,
and survey the recent results.
We also survey recent results concerning functional central and non-central
limit theorems for the following processes, generated by quadratic functionals
in (1.1):
$Q_{T}(\tau):=\left\\{\begin{array}[]{ll}\sum_{t=1}^{[T\tau]}\sum_{s=1}^{[T\tau]}\widehat{g}(t-s)X(t)X(s)&\mbox{in
the d.t.\ case},\\\ \\\
\int_{0}^{T\tau}\int_{0}^{T\tau}\widehat{g}(t-s)X(t)X(s)\,dt\,ds&\mbox{in the
c.t.\ case},\end{array}\right.$ (1.3)
where $\tau\in[0,1]$ and $[\,\cdot\,]$ stands for the greatest-integer (integer part) function.
We say that a functional central limit theorem (FCLT) for $Q_{T}(\tau)$ holds
if the process $Q_{T}(\tau)$ with an appropriate normalization converges
weakly in $C[0,1]$ in the c.t. case (and in $D[0,1]$ in the d.t. case) to
Brownian motion. We say that we have a functional non-central limit theorem
(FNCLT) for the process $Q_{T}(\tau)$ if the limit is non-Gaussian or, if
Gaussian, it is not Brownian motion.
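In discrete time, the process (1.3) is obtained by evaluating the same quadratic form on the first $[T\tau]$ observations. A sketch on a finite $\tau$-grid (the short-range kernel below is an illustrative choice):

```python
import numpy as np

def q_process(x, g_hat, taus):
    """Q_T(tau) = sum_{t,s=1}^{[T*tau]} g_hat(t-s) X(t) X(s) of eq. (1.3)."""
    T = len(x)
    values = []
    for tau in taus:
        n = int(T * tau)                      # [T*tau]: greatest-integer part
        u = np.arange(n)
        B = g_hat(u[:, None] - u[None, :])
        values.append(x[:n] @ B @ x[:n])
    return np.array(values)

rng = np.random.default_rng(1)
x = rng.standard_normal(400)
g_hat = lambda u: (np.abs(u) <= 1).astype(float)       # short-range generating kernel
path = q_process(x, g_hat, np.linspace(0.0, 1.0, 11))  # one sample path over [0, 1]
```

The FCLT and FNCLT of Section 4 concern the weak limit of such paths after centering and normalization.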
The tapered case. In the spectral analysis of stationary processes the data
are frequently tapered before calculating the statistics of interest. Instead
of the original data $\\{X(t)$, $0\leq t\leq T\\}$ the tapered data
$\\{h(t)X(t)$, $0\leq t\leq T\\}$ with the data taper $h(t)$ are used for all
further calculations. The benefits of tapering the data have been widely
reported in the literature (see, e.g., Bloomfield [14], Brillinger [17],
Dahlhaus [24, 25, 26], Dahlhaus and Künsch [28], Guyon [69], and references
therein). For example, data-tapers are introduced to reduce the so-called
’leakage effects’, that is, to obtain better estimation of the spectrum of the
model in the case where it contains high peaks. Another application of data-tapers arises in situations in which some of the data values are missing. Also,
the use of tapers leads to bias reduction, which is especially important when
dealing with spatial data. In this case, the tapers can be used to fight the
so-called ’edge effects’.
In this case, to establish asymptotic properties of the corresponding
estimators, we have to study the asymptotic distribution (as $T\to\infty$) of
the following Toeplitz type tapered quadratic functionals of the process
$X(u)$:
$Q_{T}^{h}:=\left\\{\begin{array}[]{ll}\sum_{t=1}^{T}\sum_{s=1}^{T}\widehat{g}(t-s)h_{T}(t)h_{T}(s)X(t)X(s)&\mbox{in
the d.t.\ case},\\\ \\\
\int_{0}^{T}\int_{0}^{T}\widehat{g}(t-s)h_{T}(t)h_{T}(s)X(t)X(s)\,dt\,ds&\mbox{in
the c.t.\ case},\end{array}\right.$ (1.4)
where $\widehat{g}(t)$, $t\in\mathbb{U}$ is the Fourier transform of some
integrable even function $g(\lambda)$, $\lambda\in\Lambda$, and
$h_{T}(t):=h(t/T)$ with a taper function $h(t)$, $t\in[0,1]$.
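In discrete time, (1.4) amounts to forming the same Toeplitz quadratic form on the tapered data $h_T(t)X(t)$. A sketch with a classical Hann-type taper (the taper and kernel choices are illustrative, not prescribed by the survey):

```python
import numpy as np

def tapered_quadratic_form(x, g_hat, h):
    """Discrete-time Q_T^h = sum_{t,s} g_hat(t-s) h(t/T) h(s/T) X(t) X(s), eq. (1.4)."""
    T = len(x)
    y = h(np.arange(1, T + 1) / T) * x   # tapered data h_T(t) X(t)
    u = np.arange(T)
    B = g_hat(u[:, None] - u[None, :])
    return y @ B @ y

hann = lambda s: 0.5 * (1.0 - np.cos(2.0 * np.pi * s))   # Hann taper on [0, 1]
g_hat = lambda u: (u == 0).astype(float)   # with this kernel Q_T^h = sum_t h_T(t)^2 X(t)^2

rng = np.random.default_rng(2)
x = rng.standard_normal(256)
q_h = tapered_quadratic_form(x, g_hat, hann)
```

The taper vanishes at the endpoints of $[0,1]$, which is exactly what damps the ’leakage’ and ’edge’ effects discussed above.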
Quadratic functionals of the form (1.1) and (1.4) arise naturally in the
context of nonparametric and parametric statistical estimation of the spectrum
of stationary processes based on the non-tapered and tapered data,
respectively, and their limiting distributions are needed to establish the asymptotic properties of the corresponding estimators. For instance, when we
are interested in nonparametric estimation of a linear integral functional
$J(f)$ in $L^{p}(\Lambda)$, $p>1$, then a natural statistical estimator for
$J(f)$ is the linear integral functional of the empirical spectral density
(periodogram) of the process $X(t)$, which is a quadratic functional of the
form (1.1) in the non-tapered case, and of the form (1.4) in the tapered case.
In the case of parametric estimation, for example, the Whittle estimation
procedure is based on the minimization of quadratic functionals of the form
(1.1) and (1.4).
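To make the Whittle remark concrete: for an AR(1) model $X_t=\phi X_{t-1}+\varepsilon_t$, the Whittle objective reduces to a periodogram-weighted quadratic expression in $\phi$, since $\int_{-\pi}^{\pi}\log|1-\phi e^{-i\lambda}|^{-2}\,d\lambda=0$ for $|\phi|<1$. The sketch below (the AR(1) model, grid search, and sample size are illustrative choices) minimizes it by grid search:

```python
import numpy as np

def periodogram(x):
    """I_T(lambda_j) at Fourier frequencies lambda_j = 2*pi*j/T, j = 1, ..., T-1."""
    T = len(x)
    lam = 2.0 * np.pi * np.arange(1, T) / T
    return lam, np.abs(np.fft.fft(x)[1:]) ** 2 / (2.0 * np.pi * T)

def whittle_ar1(x, grid=np.linspace(-0.95, 0.95, 381)):
    """Whittle estimate of phi: f_phi(lam) is proportional to
    |1 - phi e^{-i lam}|^{-2}, and the log-f term integrates to a constant,
    so it suffices to minimize sum_j I(lam_j) * |1 - phi e^{-i lam_j}|^2."""
    lam, I = periodogram(x)
    obj = [np.sum(I * np.abs(1.0 - phi * np.exp(-1j * lam)) ** 2) for phi in grid]
    return grid[int(np.argmin(obj))]

rng = np.random.default_rng(3)
phi_true, T = 0.5, 4000
x = np.empty(T)
x[0] = rng.standard_normal()
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()
phi_hat = whittle_ar1(x)   # close to phi_true at this sample size
```

Section 7.2 treats the Whittle procedure, and its asymptotics, in full generality.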
### 1.3 A brief history
The problem of describing the asymptotic distribution of Toeplitz type
quadratic forms and functionals of stationary processes has a long history,
and goes back to the classical monograph by Grenander and Szegő [67], where
the CLT-problem for Gaussian processes was considered as an application of the
authors’ theory of the asymptotic behavior of the trace of products of
truncated Toeplitz matrices and operators.
Later the CLT-problem has been studied by a number of authors. Here we mention
only some significant contributions. For the d.t. short memory processes, the
problem was studied by Bentkus [10], Ibragimov [75] and Rosenblatt [87], in
connection with statistical estimation of the spectral and covariance
functions. Since 1986, there has been a renewed interest in this problem,
related to the statistical inferences for long memory (long-range dependence)
and intermediate memory (anti-persistent) processes (see, e.g., Avram [6], Fox
and Taqqu [39], Giraitis and Surgailis [60], Giraitis and Taqqu [62],
Has’minskii and Ibragimov [70], Ginovian and Sahakian [52], Terrin and Taqqu
[102], and references therein). In particular, Avram [6], Fox and Taqqu [39],
Ginovian and Sahakian [52], Giraitis and Surgailis [60], Giraitis and Taqqu
[62] have obtained sufficient conditions for quadratic form $Q_{T}$ to obey
the central limit theorem.
In the case of c.t. stationary Gaussian processes the CLT-problem for Toeplitz
type quadratic functionals was studied in a number of papers. We cite merely
the papers Avram et al. [7], Bai et al. [8, 9], Bryc and Dembo [19], Ginovyan
[43, 45, 48], Ginovyan and Sahakyan [53], Ibragimov [75], Leonenko and Sakhno
[80], where additional references can be found. The NCLT-problem have been
studied in Bai et al. [9], Giraitis and Taqqu [65], and Terrin and Taqqu
[101].
Central and non-central limit theorems for tapered quadratic forms of a d.t.
long memory Gaussian stationary fields have been proved in Doukhan et al.
[32]. A central limit theorem for tapered quadratic functionals $Q_{T}^{h}$,
in the case where the underlying model $X(t)$ is a Lévy-driven c.t. stationary
linear process has been proved in Ginovyan and Sahakyan [56].
The problem of nonparametric and parametric estimation of the spectrum of the
process $X(t)$ both for d.t. and c.t. cases based on the non-tapered data has
been considered by many authors (see, e.g., Avram et al. [7], Bentkus [10],
Dahlhaus [27], Dzhaparidze [27], Fox and Taqqu [38], Gao et al. [41], Ginovyan
[42, 43, 46, 47, 48, 49], Giraitis et al. [59], Giraitis and Surgailis [60],
Giraitis and Taqqu [64], Guyon [69], Hasminskii and Ibragimov [70], Ibragimov
[75, 76], Leonenko and Sakhno [80], Taniguchi [95], Taniguchi and Kakizawa
[96], Taqqu [98], and references therein). The problem in the tapered case was
studied in Alomari et al. [1], Dahlhaus [24, 26], Dahlhaus and Künsch [28],
Ginovyan [50], Ginovyan and Sahakyan [56], and Ludeña and Lavielle [82].
### 1.4 Frequency-domain conditions
Conditions that are expressed in terms of the spectral density $f(\lambda)$
and the generating function $g(\lambda)$ will be termed frequency-domain
conditions, while conditions that are in terms of the covariance function
$r(t)$ and the generating kernel $\widehat{g}(t)$ will be termed time-domain
conditions.
There are three sets of frequency-domain conditions for functionals of the
form (1.1) to obey the CLT, and these conditions separate the ranges of CLT
and NCLT:
* (a)
the $(L^{p},L^{q})$ condition,
* (b)
the $(\alpha,\beta)$ condition,
* (c)
the trace condition.
All three are compensation conditions, meaning that the good behavior of one
function, say $g$, can compensate for the bad behavior of $f$ and vice versa.
(a) The $(L^{p},L^{q})$ condition. Let $f\in L^{p}$ $(p\geq 2)$ and $g\in
L^{q}$ $(q\geq 2)$. If $1/p+1/q\leq 1/2,$ then the functional $Q_{T}$ with an
appropriate normalization obeys the CLT (see Theorem 3.1(C)), while when $1/p+1/q>1/2$, $Q_{T}$ in general does not obey the CLT. This condition
goes back to the classical works of Rosenblatt [86, 87], where when estimating
the covariance function $r(t)$ of a d.t. Gaussian process, it was shown that
for a sample covariance $\widehat{r}(t)$ (which is a functional of the form
(1.1) with $g(\lambda)=\cos(t\lambda)$) to obey the CLT, the spectral density
$f$ should satisfy the condition $f\in L^{p}$ $(p\geq 2)$.
(b) The $(\alpha,\beta)$ condition. If both the spectral density $f$ and the
generating function $g$ are regularly varying at the origin of orders $\alpha$
and $\beta$, respectively, then it is the sum $\alpha+\beta$ that determines
the limiting distribution of $Q_{T}$. When $\alpha+\beta\leq 1/2,$ the limiting distribution of $Q_{T}$ is Gaussian, that is, $Q_{T}$ with an appropriate normalization obeys the CLT (see Theorem 3.3), while when $\alpha+\beta>1/2,$ the limiting distribution of $Q_{T}$ is non-Gaussian
(see Theorem 4.6).
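The two explicit compensation conditions are direct inequality checks and can be transcribed literally (with the boundary cases included in the CLT regime, as stated above):

```python
def gaussian_limit_lp_lq(p, q):
    """(L^p, L^q) condition: with f in L^p and g in L^q (p, q >= 2),
    Q_T obeys the CLT when 1/p + 1/q <= 1/2."""
    return 1.0 / p + 1.0 / q <= 0.5

def gaussian_limit_alpha_beta(alpha, beta):
    """(alpha, beta) condition: f and g regularly varying at the origin of
    orders alpha and beta; the limit of Q_T is Gaussian iff alpha + beta <= 1/2."""
    return alpha + beta <= 0.5

# A bounded g (q = infinity) compensates for an f that is merely in L^2:
assert gaussian_limit_lp_lq(2, float("inf"))
# Long memory in f (alpha = 0.4) plus a singular g (beta = 0.3) leaves the CLT regime:
assert not gaussian_limit_alpha_beta(0.4, 0.3)
```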
(c) The trace condition. This condition, which is more general and implies
both conditions (a) and (b), is an implicit condition. It is expressed in
terms of traces of products of truncated Toeplitz matrices (in the d.t. case)
and operators (in the c.t. case). The idea here is to approximate the traces
of products of Toeplitz matrices and operators (which are no longer Toeplitz)
by the traces of a Toeplitz matrix and a Toeplitz operator, respectively. Let
$A_{T}(f)$ denote either the $T\times T$ Toeplitz matrix $B_{T}(f)$ or the
$T$-truncated Toeplitz operator $W_{T}(f)$ generated by the spectral density
$f$, and let $A_{T}(g)$ denote either the $T\times T$ Toeplitz matrix
$B_{T}(g)$, or the $T$-truncated Toeplitz operator $W_{T}(g)$ generated by the
function $g$ (for definitions see Section 3.1). The trace condition is: if
$fg\in L^{2}$
and
$T^{-1}\bigl{[}\hbox{tr}\bigl{(}A_{T}(f)A_{T}(g)\bigr{)}^{2}-\hbox{tr}\bigl{(}A_{T}(f^{2}g^{2})\bigr{)}\bigr{]}\to
0\quad{\rm as}\quad T\to\infty,$
then the quadratic functional $Q_{T}$ in (1.1) with an appropriate
normalization obeys the CLT (see Theorems 3.1(A) and 3.4).
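For smooth $f$ and $g$, the trace approximation behind this condition can be checked numerically. With the Fourier convention (1.2) applied to both functions (so that $B_{T}(f)$ has entries $\widehat{f}(t-s)$), the classical trace asymptotics give $T^{-1}\hbox{tr}\bigl{(}B_{T}(f)B_{T}(g)\bigr{)}^{2}\to(2\pi)^{3}\int_{\Lambda}f^{2}g^{2}\,d\lambda$; the power of $2\pi$ depends on that convention. A sketch with illustrative trigonometric-polynomial choices of $f$ and $g$:

```python
import numpy as np

lam = np.linspace(-np.pi, np.pi, 4096, endpoint=False)   # frequency grid on Lambda
dlam = 2.0 * np.pi / lam.size

def fourier_coeffs(func, max_lag):
    """hat{f}(u) = int_{-pi}^{pi} e^{i lam u} f(lam) d lam for u = 0..max_lag (f even)."""
    vals = func(lam)
    return np.array([np.sum(np.cos(k * lam) * vals) * dlam for k in range(max_lag + 1)])

def toeplitz_matrix(coeffs, T):
    u = np.arange(T)
    return coeffs[np.abs(u[:, None] - u[None, :])]

f = lambda lam_: 1.0 + 0.5 * np.cos(lam_)   # short-memory spectral density (positive)
g = lambda lam_: 1.0 + 0.5 * np.cos(lam_)   # generating function

T = 300
Bf = toeplitz_matrix(fourier_coeffs(f, T - 1), T)
Bg = toeplitz_matrix(fourier_coeffs(g, T - 1), T)
M = Bf @ Bg
lhs = np.trace(M @ M) / T                                   # T^{-1} tr (B_T(f) B_T(g))^2
rhs = (2.0 * np.pi) ** 3 * np.sum(f(lam) ** 2 * g(lam) ** 2) * dlam
rel_err = abs(lhs - rhs) / rhs   # small, and shrinking in T, for these smooth symbols
```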
We will also discuss the time-domain counterparts of $(L^{p},L^{q})$ and
$(\alpha,\beta)$ conditions.
### 1.5 Methods and tools.
The most commonly used methods to prove central limit theorems for Toeplitz
type quadratic forms and functionals are:
* (a)
the method of cumulants or moments,
* (b)
the approximation method,
* (c)
the method of characteristic functions.
To prove the non-central limit theorems for Toeplitz type quadratic
functionals, the following were used:
* (a)
the spectral representation of the underlying process,
* (b)
the properties of Lévy-Itô-type and Stratonovich-type multiple stochastic
integrals,
* (c)
the power counting theorem.
Some details of the above methods are described in Section 8.
### 1.6 The structure of the paper
The paper is structured as follows. In Section 2 we describe the model of
interest, a stationary process, and recall some key notions and results from
the theory of stationary processes. In Section 3 we present sufficient
conditions for Toeplitz type quadratic forms and functionals of the form (1.1)
to obey the CLT in the case where the model is either a Gaussian or a linear
process. Section 4 contains functional central and noncentral limit theorems
for processes generated by Toeplitz type quadratic forms and functionals for
Gaussian and linear models. Section 5 is devoted to the functional central and
noncentral limit theorems for Lévy-driven linear models. In Section 6 we
discuss the case of tapered Toeplitz quadratic functionals, and state central
limit theorems. Section 7 contains some applications, involving nonparametric
estimation of spectral functionals and the Whittle parametric estimation
procedure. In Section 8 we briefly discuss the methods and tools used to
prove central and noncentral limit theorems for Toeplitz type quadratic forms
and functionals.
## 2 The model: second-order stationary process
In this section we introduce the model of interest, a second-order stationary
process, and recall some key notions and results from the theory of stationary
processes.
### 2.1 Key notions and some basic results
#### 2.1.1 Second-order (wide-sense) stationary process
Let $\\{X(u),\ u\in\mathbb{U}\\}$ be a centered real-valued second-order
(wide-sense) stationary process defined on a probability space
$(\Omega,\mathcal{F},P)$ with covariance function $r(t)$, that is,
$\mathbb{E}[X(u)]=0,\quad r(u)=\mathbb{E}[X(t+u)X(t)],\quad u,t\in\mathbb{U},$
where $\mathbb{E}[\cdot]$ stands for the expectation operator with respect to
measure $P$. We consider simultaneously the c.t. case, where
$\mathbb{U}=\mathbb{R}:=(-\infty,\infty)$, and the d.t. case, where
$\mathbb{U}=\mathbb{Z}:=\\{0,\pm 1,\pm 2,\ldots\\}$. We assume that $X(u)$ is
a non-degenerate process, that is, ${\rm
Var}[X(u)]=\mathbb{E}|X(u)|^{2}=r(0)>0$. (Without loss of generality, we
assume that $r(0)=1$). In the c.t. case the process $X(u)$ is also assumed
mean-square continuous, that is, $\mathbb{E}[X(t)-X(s)]^{2}\to 0$ as $t\to s$.
This assumption is equivalent to the covariance function $r(u)$ being
continuous at $u=0$ (see, e.g., Cramér and Leadbetter [23], Section 5.2).
#### 2.1.2 Spectral representations
By the Herglotz theorem in the d.t. case, and the Bochner-Khintchine theorem
in the c.t. case (see, e.g., Cramér and Leadbetter [23], Doob [31], Ibragimov
and Linnik [77]), there is a finite measure $\mu$ on
$(\Lambda,\mathfrak{B}(\Lambda))$, where $\Lambda=\mathbb{R}$ in the c.t.
case, and $\Lambda=[-\pi,\pi]$ in the d.t. case, and $\mathfrak{B}(\Lambda)$
is the Borel $\sigma$-algebra on $\Lambda$, such that for any $u\in\mathbb{U}$
the covariance function $r(u)$ admits the following spectral representation:
$r(u)=\int_{\Lambda}\exp\\{i\lambda u\\}d\mu(\lambda),\quad u\in\mathbb{U}.$
(2.1)
The measure $\mu$ in (2.1) is called the spectral measure of the process
$X(u)$. The function $F(\lambda):=\mu[-\pi,\lambda]$ in the d.t. case and
$F(\lambda):=\mu[-\infty,\lambda]$ in the c.t. case, is called the spectral
function of the process $X(t)$. If $F(\lambda)$ is absolutely continuous (with
respect to Lebesgue measure), then the function
$f(\lambda):=dF(\lambda)/d\lambda$ is called the spectral density of the
process $X(t)$. Notice that if the spectral density $f(\lambda)$ exists, then
$f(\lambda)\geq 0$, $f(\lambda)\in L^{1}(\Lambda)$, and (2.1) becomes
$r(u)=\int_{\Lambda}\exp\\{i\lambda u\\}f(\lambda)d\lambda,\quad
u\in\mathbb{U}.$ (2.2)
Thus, the covariance function $r(u)$ and the spectral function $F(\lambda)$
(resp. the spectral density function $f(\lambda)$) are equivalent
specifications of the second order properties for a stationary process
$\\{X(u),\ u\in\mathbb{U}\\}$.
By the well-known Cramér theorem (see, e.g., Cramér and Leadbetter [23]) for
any stationary process $\\{X(u),\,u\in\mathbb{U}\\}$ with spectral measure
$\mu$ there exists an orthogonal stochastic measure $Z=Z(B)$,
$B\in\mathfrak{B}(\Lambda)$, such that for every $u\in\mathbb{U}$ the process
$X(u)$ admits the following spectral representation:
$X(u)=\int_{\Lambda}\exp\\{i\lambda u\\}dZ(\lambda),\quad u\in\mathbb{U}.$
(2.3)
Moreover, $E\left[|Z(B)|^{2}\right]=\mu(B)$ for every
$B\in\mathfrak{B}(\Lambda)$. For the definition and properties of orthogonal
stochastic measures and of the stochastic integral in (2.3) we refer, e.g., to
Cramér and Leadbetter [23].
#### 2.1.3 Kolmogorov’s isometric isomorphism theorem
Given a probability space $(\Omega,\mathcal{F},P)$, define the $L^{2}$-space
of random variables $\xi=\xi(\omega)$, $\mathbb{E}[\xi]=0$:
$L^{2}(\Omega):=\left\\{\xi:\,||\xi||^{2}:=\int_{\Omega}|\xi(\omega)|^{2}dP(\omega)<\infty\right\\}.$
(2.4)
Then $L^{2}(\Omega)$ becomes a Hilbert space with the following inner product:
for $\xi,\eta\in L^{2}(\Omega)$
$(\xi,\eta)=\mathbb{E}[\xi\eta]=\int_{\Omega}\xi(\omega){\eta(\omega)}dP(\omega).$
(2.5)
For $a,b\in\mathbb{U}$, $-\infty\leq a\leq b\leq\infty,$ we define the space
$H_{a}^{b}(X)$ to be the closed linear subspace of the space $L^{2}(\Omega)$
spanned by the random variables $X(u,\omega)$, $u\in[a,b]$:
$H_{a}^{b}(X):=\overline{sp}\\{X(u),\,\,a\leq u\leq b\\}_{L^{2}(\Omega)}.$
(2.6)
Observe that the subspace $H_{a}^{b}(X)$ consists of all finite linear
combinations $\sum_{k=1}^{n}c_{k}X(u_{k})$ ($a\leq u_{k}\leq b$), as well as
their $L^{2}(\Omega)$-limits.
The space $H(X):=H_{-\infty}^{\infty}(X)$ is called the Hilbert space
generated by the process $X(u)$, or the time-domain of $X(u)$.
Let $\mu$ be the spectral measure of the process
$\\{X(u),\,u\in\mathbb{U}\\}$. Consider the weighted $L^{2}$-space
$L^{2}(\mu):=L^{2}(\mu,\Lambda)$ of complex-valued functions
$\varphi(\lambda),\,\lambda\in\Lambda$, defined by
$L^{2}(\mu):=\left\\{\varphi(\lambda):\,||\varphi||^{2}_{\mu}:=\int_{\Lambda}|\varphi(\lambda)|^{2}d\mu(\lambda)<\infty\right\\}.$
(2.7)
Then $L^{2}(\mu)$ becomes a Hilbert space with the following inner product:
for $\varphi,\psi\in L^{2}(\mu)$
$(\varphi,\psi)_{\mu}=\int_{\Lambda}\varphi(\lambda)\overline{\psi}(\lambda)d\mu(\lambda).$
(2.8)
The Hilbert space $L^{2}(\mu,\Lambda)$ is called the frequency-domain of the
process $X(u)$.
Kolmogorov’s isometric isomorphism theorem states that for any stationary
process $X(u)$, $u\in\mathbb{U}$, with spectral measure $\mu$ there exists a
unique isometric isomorphism $V$ between the time- and frequency-domains
$H(X)$ and $L^{2}(\mu)$, such that $V[X(u)]=e^{iu\lambda}$ for any
$u\in\mathbb{U}.$
Thus, any linear problem in the time-domain $H(X)$ can be translated into one
in the frequency-domain $L^{2}(\mu)$, and vice versa. This fact allows one to
study stationary processes using analytic methods.
### 2.2 Linear processes. Existence of spectral density functions
We will consider here stationary processes possessing spectral density
functions. For the following results we refer to Cramér and Leadbetter [23],
Doob [31], Ibragimov and Linnik [77].
* (a)
The spectral function $F(\lambda)$ of a d.t. stationary process
$\\{X(u),\,u\in\mathbb{Z}\\}$ is absolutely continuous (with respect to the
Lebesgue measure), $F(\lambda)=\int_{-\pi}^{\lambda}f(x)dx$, if and only if it
can be represented as an infinite moving average:
$X(u)=\sum_{k=-\infty}^{\infty}a(u-k)\xi(k),\quad\sum_{k=-\infty}^{\infty}|a(k)|^{2}<\infty,$
(2.9)
where $\\{\xi(k),k\in\mathbb{Z}\\}\sim$ WN(0,1) is a standard white-noise,
that is, a sequence of orthonormal random variables.
* (b)
The covariance function $r(u)$ and the spectral density $f(\lambda)$ of $X(u)$
are given by formulas:
$r(u)=\mathbb{E}X(u)X(0)=\sum_{k=-\infty}^{\infty}a(u+k)a(k),$ (2.10)
and
$f(\lambda)=\frac{1}{2\pi}|\widehat{a}(\lambda)|^{2}=\frac{1}{2\pi}\left|\sum_{k=-\infty}^{\infty}a(k)e^{-ik\lambda}\right|^{2},\quad\lambda\in\Lambda.$
(2.11)
* (c)
In the case where $\\{\xi(k),k\in\mathbb{Z}\\}$ is a sequence of Gaussian
random variables, the process $\\{X(u),\,u\in\mathbb{Z}\\}$ is Gaussian.
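The moving-average formulas (2.10)-(2.11) can be checked numerically. The sketch below is illustrative and not from the paper: the finite filter coefficients `a` are arbitrary, and we verify that the time-domain covariance $\sum_{k}a(u+k)a(k)$ agrees with $\int e^{i\lambda u}f(\lambda)\,d\lambda$ computed from the spectral density (2.11).

```python
import numpy as np

# Illustrative check of (2.10)-(2.11) for a finite moving average (the filter
# coefficients below are arbitrary): the time-domain covariance
# r(u) = sum_k a(u+k) a(k) must agree with r(u) = int e^{i lam u} f(lam) d lam,
# where f(lam) = (1/2pi) |sum_k a(k) e^{-i k lam}|^2.

a = np.array([1.0, 0.6, -0.3, 0.1])            # a(0..3), zero elsewhere

def r_time(u):
    u = abs(u)
    if u >= len(a):
        return 0.0
    return sum(a[u + k] * a[k] for k in range(len(a) - u))

def r_freq(u, n=4096):
    lam = -np.pi + 2 * np.pi * np.arange(n) / n
    ahat = sum(a[k] * np.exp(-1j * k * lam) for k in range(len(a)))
    fdens = np.abs(ahat)**2 / (2 * np.pi)      # spectral density (2.11)
    return ((np.exp(1j * lam * u) * fdens).sum() * (2 * np.pi / n)).real

for u in range(5):
    print(u, r_time(u), r_freq(u))             # the two columns agree
```

Since $f$ is a trigonometric polynomial here, the periodic rectangle rule integrates it exactly up to floating-point error.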
Similar results hold in the c.t. case; namely, the following statements hold.
* (a)
The spectral function $F(\lambda)$ of a c.t. stationary process
$\\{X(u),\,u\in\mathbb{R}\\}$ is absolutely continuous (with respect to
Lebesgue measure), $F(\lambda)=\int_{-\infty}^{\lambda}f(x)dx$, if and only if
it can be represented as an infinite continuous moving average:
$X(u)=\int_{\mathbb{R}}a(u-t)d\xi(t),\quad\int_{\mathbb{R}}|a(t)|^{2}dt<\infty,$
(2.12)
where $\\{\xi(t),t\in\mathbb{R}\\}$ is a process with orthogonal increments
and $\mathbb{E}|d\,\xi(t)|^{2}=dt$.
* (b)
The covariance function $r(u)$ and the spectral density $f(\lambda)$ of $X(u)$
are given by formulas:
$r(u)=\mathbb{E}X(u)X(0)=\int_{\mathbb{R}}a(u+x)a(x)dx,$ (2.13)
and
$f(\lambda)=\frac{1}{2\pi}|\widehat{a}(\lambda)|^{2}=\frac{1}{2\pi}\left|\int_{\mathbb{R}}e^{-i\lambda
t}a(t)dt\right|^{2},\quad\lambda\in\mathbb{R}.$ (2.14)
* (c)
In the case where $\\{\xi(t),t\in\mathbb{R}\\}$ is a Gaussian process, the
process $\\{X(u),\,u\in\mathbb{R}\\}$ is Gaussian.
### 2.3 Lévy-driven linear process
We first recall that a Lévy process, $\\{\xi(t),\ t\in\mathbb{R}\\}$ is a
process with independent and stationary increments, continuous in probability,
with sample-paths which are right-continuous with left limits
(càdlàg) and $\xi(0)=\xi(0-)=0$. The Wiener process
$\\{B(t),\ t\geq 0\\}$ and the centered Poisson process
$\\{N(t)-\mathbb{E}N(t),\ t\geq 0\\}$ are typical examples of centered Lévy
processes. A Lévy-driven linear process $\\{X(t),\ t\in\mathbb{R}\\}$ is a
real-valued c.t. stationary process defined by (2.12), where $\xi(t)$ is a
Lévy process satisfying the conditions:
$\mathbb{E}\xi(t)=0$, $\mathbb{E}\xi^{2}(1)=1$ and
$\mathbb{E}\xi^{4}(1)<\infty$.
In the case where $\xi(t)=B(t)$, $X(t)$ is a Gaussian process.
The function $a(\cdot)$ in representations (2.9) and (2.12) plays the role of
a time-invariant filter, and the linear processes defined by (2.9) and (2.12)
can be viewed as the output of a linear filter $a(\cdot)$ applied to the
process $\\{\xi(u),\ u\in\mathbb{U}\\}$, called the innovation or driving
process of $X(u)$.
Processes of the form (2.12) appear in many fields of science (economics,
finance, physics, etc.), and cover a large class of popular models in c.t.
time series modeling. For instance, the so-called c.t. autoregressive moving
average (CARMA) models, which are the c.t. analogs of the classical
autoregressive moving average (ARMA) models in d.t. case, are of the form
(2.12) and play a central role in the representations of c.t. stationary time
series (see, e.g., Brockwell [18]).
### 2.4 Dependence (memory) structure of the model
In the frequency domain setting, the statistical and spectral analysis of
stationary processes requires two types of conditions on the spectral density
$f(\lambda).$ The first type controls the singularities of $f(\lambda)$, and
involves the dependence (or memory) structure of the process, while the
second type controls the smoothness of $f(\lambda).$
We will distinguish the following types of stationary models:
(a) short memory (or short-range dependent),
(b) long memory (or long-range dependent),
(c) intermediate memory (or anti-persistent).
The memory structure of a stationary process is essentially a measure of the
dependence between all the variables in the process, considering the effect of
all correlations simultaneously. Traditionally memory structure has been
defined in the time domain in terms of decay rates of the autocorrelations, or
in the frequency domain in terms of rates of explosion of low frequency
spectra (see, e.g., Beran [11], Beran et al. [12], Giraitis et al. [59],
Guégan [68], Robinson [85], and references therein).
It is convenient to characterize the memory structure in terms of the spectral
density function.
#### 2.4.1 Short memory models
Much of statistical inference is concerned with short memory stationary
models, where the spectral density $f(\lambda)$ of the model is bounded away
from zero and infinity, that is, there are constants $C_{1}$ and $C_{2}$ such
that
$0<C_{1}\leq f(\lambda)\leq C_{2}<\infty.$
A typical d.t. short memory model is the stationary Autoregressive
Moving Average ARMA$(p,q)$ process $X(t)$, defined to be a stationary
solution of the difference equation:
$\psi_{p}(B)X(t)=\theta_{q}(B)\varepsilon(t),\quad t\in\mathbb{Z},$
where $\psi_{p}$ and $\theta_{q}$ are polynomials of degrees $p$ and $q$,
respectively, $B$ is the backshift operator defined by $BX(t)=X(t-1)$, and
$\\{\varepsilon(t),t\in\mathbb{Z}\\}$ is a d.t. white noise, that is, a
sequence of zero-mean, uncorrelated random variables with variance
$\sigma^{2}$. The covariance $r(k)$ of an ARMA$(p,q)$ process is exponentially
bounded:
$|r(k)|\leq Cr^{k},\quad k=1,2,\ldots;\quad 0<C<\infty;\,\,0<r<1,$
and the spectral density $f(\lambda)$ is a rational function (see, e.g.,
Brockwell and Davis [19], Section 3.1):
$f(\lambda)=\frac{\sigma^{2}}{2\pi}\cdot\frac{|\theta_{q}(e^{-i\lambda})|^{2}}{|\psi_{p}(e^{-i\lambda})|^{2}}.$
(2.15)
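The boundedness claim behind the short-memory definition can be seen concretely from (2.15). The sketch below is illustrative (the ARMA(1,1) parameter values are arbitrary choices, not from the paper): it evaluates the rational spectral density on a grid and confirms it is bounded away from zero and infinity.

```python
import numpy as np

# Sketch: the rational ARMA spectral density (2.15) for an ARMA(1,1) process
# (1 - 0.5 B) X(t) = (1 + 0.3 B) eps(t), evaluated on a grid; it is bounded
# away from zero and infinity, the defining short-memory property.
# phi, theta, sigma2 are arbitrary illustrative values.

phi, theta, sigma2 = 0.5, 0.3, 1.0
lam = np.linspace(-np.pi, np.pi, 10001)
z = np.exp(-1j * lam)
f = sigma2 / (2 * np.pi) * np.abs(1 + theta * z)**2 / np.abs(1 - phi * z)**2

print(f.min(), f.max())   # 0 < C1 <= f(lam) <= C2 < infinity
```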
A typical c.t. short-memory model is the stationary c.t. ARMA$(p,q)$
process, denoted by CARMA$(p,q)$, which is defined to be the solution of a
$p$th order stochastic differential equation with a suitable initial condition
and driven by a standard Brownian motion and its derivatives up to and
including order $0\leq q<p$. The spectral density function $f(\lambda)$ of the
CARMA$(p,q)$ process is given by the following formula (see, e.g., Brockwell
[18]):
$f(\lambda)=\frac{\sigma^{2}}{2\pi}\cdot\frac{|\beta(i\lambda)|^{2}}{|\alpha(i\lambda)|^{2}},$
(2.16)
where $\alpha(z)$ and $\beta(z)$ are polynomials of degrees $p$ and $q$,
respectively.
Another important c.t. short memory model is the Ornstein-Uhlenbeck process,
which is a Gaussian stationary process with covariance function
$r(t)=\sigma^{2}e^{-\alpha|t|}$ ($t\in\mathbb{R}$), and spectral density
$f(\lambda)=\frac{\sigma^{2}}{\pi}\cdot\frac{\alpha}{\lambda^{2}+\alpha^{2}},\quad\alpha>0,\,\lambda\in\mathbb{R}.$
(2.17)
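This covariance/spectral-density pair can be verified by direct numerical integration. The sketch below is illustrative (the values of $\sigma^2$, $\alpha$ and $t$ are arbitrary): it checks that $\int e^{i\lambda t}f(\lambda)\,d\lambda$ with $f(\lambda)=(\sigma^{2}/\pi)\,\alpha/(\lambda^{2}+\alpha^{2})$ recovers $r(t)=\sigma^{2}e^{-\alpha|t|}$.

```python
import numpy as np

# Sketch: numerical check of the Ornstein-Uhlenbeck Fourier pair --
# r(t) = sigma^2 e^{-alpha |t|} recovered as int e^{i lam t} f(lam) d lam
# from f(lam) = (sigma^2 / pi) * alpha / (lam^2 + alpha^2).
# sigma2, alpha and the t values are arbitrary.

sigma2, alpha = 1.0, 2.0
lam = np.linspace(-400.0, 400.0, 1_000_001)    # wide grid: f has heavy tails
f = sigma2 / np.pi * alpha / (lam**2 + alpha**2)
dlam = lam[1] - lam[0]

for t in (0.0, 0.5, 1.0):
    r_num = (np.cos(lam * t) * f).sum() * dlam
    print(t, r_num, sigma2 * np.exp(-alpha * abs(t)))   # columns agree to ~1e-3
```

The small residual comes from truncating the slowly decaying tails of $f$ at $|\lambda|=400$.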
#### 2.4.2 Discrete-time long-memory and anti-persistent models
Data in many fields of science (economics, finance, hydrology, etc.), however,
is well modeled by stationary processes whose spectral densities are unbounded
or vanishing at some fixed points (see, e.g., Beran [11], Guégan [68], Palma
[83], Taqqu [97] and references therein).
A long-memory model is defined to be a stationary process with unbounded
spectral density, and an anti-persistent model is a stationary process with
spectral density vanishing at some fixed points.
In the discrete context, a basic long-memory model is the Autoregressive
Fractionally Integrated Moving Average ARFIMA$(0,d,0)$ process $X(t)$,
defined to be a stationary solution of the difference equation (see, e.g.,
Brockwell and Davis [19], Section 13.2):
$(1-B)^{d}X(t)=\varepsilon(t),\quad 0<d<1/2,$
where $B$ is the backshift operator and $\varepsilon(t)$ is a d.t. white noise
defined above. The spectral density $f(\lambda)$ of $X(t)$ is given by
$f(\lambda)=|1-e^{-i\lambda}|^{-2d}=(2\sin(\lambda/2))^{-2d},\quad
0<\lambda\leq\pi,\quad 0<d<1/2.$ (2.18)
Notice that $f(\lambda)\thicksim c\,|\lambda|^{-2d}$ as $\lambda\to 0$, that
is, $f(\lambda)$ blows up at $\lambda=0$ like a power function, which is the
typical behavior of a long memory model.
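The stated power-law blow-up at the origin is easy to confirm numerically. The sketch below is illustrative ($d=0.3$ is an arbitrary value in the long-memory range): it checks that the ratio $f(\lambda)/|\lambda|^{-2d}$ tends to 1 as $\lambda\to 0$ for the density (2.18).

```python
import numpy as np

# Sketch: the ARFIMA(0,d,0) spectral density f(lam) = (2 sin(lam/2))^{-2d}
# behaves like |lam|^{-2d} at the origin -- the long-memory signature.
# d = 0.3 is an arbitrary value in the long-memory range (0, 1/2).

d = 0.3
for lam in (1e-1, 1e-2, 1e-3):
    f = (2 * np.sin(lam / 2))**(-2 * d)
    print(lam, f * lam**(2 * d))   # ratio f(lam) / lam^{-2d} tends to 1
```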
A typical example of an anti-persistent model is the ARFIMA$(0,d,0)$ process
$X(t)$ with $d<0$, whose spectral density $f(\lambda)=|1-e^{-i\lambda}|^{-2d}$
vanishes at $\lambda=0$. Note that the condition $d<1/2$ ensures that
$\int_{-\pi}^{\pi}f(\lambda)d\lambda<\infty$, implying that the process $X(t)$
is well defined because $E|X(t)|^{2}=\int_{-\pi}^{\pi}f(\lambda)d\lambda.$
Data can also occur in the form of a realization of a 'mixed' short-long-
intermediate-memory stationary process $X(t)$. A well-known example of such a
process, which appears in many applied problems, is an ARFIMA$(p,d,q)$ process
$X(t)$ defined to be a stationary solution of the difference equation:
$\psi_{p}(B)(1-B)^{d}X(t)=\theta_{q}(B)\varepsilon(t),\quad d<1/2,$
where $B$ is the backshift operator, $\varepsilon(t)$ is a d.t. white noise,
and $\psi_{p}$ and $\theta_{q}$ are polynomials of degrees $p$ and $q$,
respectively. The spectral density $f_{X}(\lambda)$ of $X(t)$ is given by
$f_{X}(\lambda)=|1-e^{-i\lambda}|^{-2d}f(\lambda),\quad d<1/2,$ (2.19)
where $f(\lambda)$ is the spectral density of an ARMA$(p,q)$ process, given by
(2.15). Observe that for $0<d<1/2$ the model $X(t)$ specified by (2.19)
displays long-memory, for $d<0$ – intermediate-memory, and for $d=0$ – short-
memory. For $d\geq 1/2$ the function $f_{X}(\lambda)$ in (2.19) is not
integrable, and thus it cannot represent a spectral density of a stationary
process. Also, if $d\leq-1$, then the series $X(t)$ is not invertible in the
sense that it cannot be used to recover a white noise $\varepsilon(t)$ by
passing $X(t)$ through a linear filter (see, e.g., Bondon and Palma [16], and
Brockwell and Davis [19]).
The ARFIMA$(p,d,q)$ processes, first introduced by Granger and Joyeux [66],
and Hosking [73], became very popular due to their ability to provide a good
characterization of the long-run properties of many economic and financial
time series. They are also very useful for modeling multivariate time series,
since they are able to capture a larger number of long term equilibrium
relations among economic variables than the traditional multivariate ARIMA
models (see, e.g., Guégan [68], and Henry and Zaffaroni [71] for a survey on
this topic).
Another important long-memory model is the fractional Gaussian noise (fGn). To
define fGn we first consider the fractional Brownian motion (fBm)
$\\{B_{H}(t),t\in\mathbb{R}\\}$ with Hurst index $H$, $0<H<1$, defined to be a
centered Gaussian $H$-self-similar process having stationary increments, that
is, $B_{H}(t)$ satisfies the following conditions:
(a) $B_{H}(0)=0$, $\mathbb{E}[B_{H}(t)]=0$, $t\in\mathbb{R}$;
(b) $\\{B_{H}(at),t\in\mathbb{R}\\}\stackrel{{\scriptstyle
d}}{{=}}\\{a^{H}B_{H}(t),t\in\mathbb{R}\\}$ for any $a>0$;
(c) $\\{B_{H}(t+u)-B_{H}(u),t\in\mathbb{R}\\}\stackrel{{\scriptstyle
d}}{{=}}\\{B_{H}(t),t\in\mathbb{R}\\}$ for each fixed $u\in\mathbb{R}$;
(d) the covariance function is given by
${\rm
Cov}(B_{H}(s),B_{H}(t))=\frac{\sigma_{0}^{2}}{2}\left[|t|^{2H}+|s|^{2H}-|t-s|^{2H}\right],$
where the symbol $\stackrel{{\scriptstyle d}}{{=}}$ stands for equality of the
finite-dimensional distributions, and $\sigma_{0}^{2}={\rm Var}B_{H}(1)$. Then
the increment process
$\\{X(k):=B_{H}(k+1)-B_{H}(k),k\in\mathbb{Z}\\},$
called fractional Gaussian noise (fGn), is a d.t. centered Gaussian stationary
process with covariance function
$r(k)=\frac{\sigma_{0}^{2}}{2}\left[|k+1|^{2H}-2|k|^{2H}+|k-1|^{2H}\right],\quad
k\in\mathbb{Z},$ (2.20)
and spectral density function
$f(\lambda)=c\,|1-e^{-i\lambda}|^{2}\sum_{k=-\infty}^{\infty}|\lambda+2\pi
k|^{-(2H+1)},\quad-\pi\leq\lambda\leq\pi,$ (2.21)
where $c$ is a positive constant.
It follows from (2.21) that $f(\lambda)\thicksim c\,|\lambda|^{1-2H}$ as
$\lambda\to 0$, that is, $f(\lambda)$ blows up if $H>1/2$ and tends to zero if
$H<1/2$. Also, comparing (2.18) and (2.21), we observe that, up to a constant,
the spectral density of fGn has the same behavior at the origin as
ARFIMA$(0,d,0)$ with $d=H-1/2.$
Thus, the fGn $\\{X(k),k\in\mathbb{Z}\\}$ has long-memory if $1/2<H<1$ and is
anti-persistent if $0<H<1/2$. The variables $X(k)$, $k\in\mathbb{Z}$, are
independent if $H=1/2$. For more details we refer to Samorodnitsky and Taqqu
[89] and Taqqu [97, 99].
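The long-memory behavior of fGn can be made concrete. The sketch below is illustrative ($H=0.8$ is an arbitrary value in the long-memory range, $\sigma_0^2=1$): it writes the fGn covariance as the symmetric second difference $r(k)=\tfrac{1}{2}[|k+1|^{2H}-2|k|^{2H}+|k-1|^{2H}]$ and checks the standard asymptotics $r(k)\sim H(2H-1)k^{2H-2}$, which makes $\sum_k|r(k)|$ diverge for $1/2<H<1$.

```python
import numpy as np

# Sketch: fGn covariance (sigma_0^2 = 1) as a second difference of |k|^{2H};
# r(k) ~ H(2H-1) k^{2H-2}, so sum_k |r(k)| diverges when 1/2 < H < 1.
# H = 0.8 is an arbitrary value in the long-memory range.

H = 0.8

def r(k):
    k = abs(k)
    return 0.5 * ((k + 1)**(2 * H) - 2 * k**(2 * H) + abs(k - 1)**(2 * H))

for k in (10, 100, 1000):
    print(k, r(k) / (H * (2 * H - 1) * k**(2 * H - 2)))   # ratio tends to 1
```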
#### 2.4.3 Continuous-time long-memory and anti-persistent models
In the continuous context, a basic process which has commonly been used to
model long-range dependence is the fractional Brownian motion (fBm) $B_{H}$
with Hurst index $H$, defined above, which can be regarded as a Gaussian
process having a "spectral density":
$f(\lambda)=c|\lambda|^{-(2H+1)},\quad c>0,\ \,\,0<H<1,\
\,\,\lambda\in\mathbb{R}.$ (2.22)
The form (2.22) can be understood in a generalized sense (see Yaglom [108],
Section 24, Flandrin [36], Solo [91]), since the fBm $B_{H}$ is a
nonstationary process.
A proper stationary model in lieu of fBm is the fractional Riesz-Bessel motion
(fRBm), introduced in Anh et al. [2], and defined as a c.t. Gaussian process
$X(t)$ with spectral density
$f(\lambda)=c\,|\lambda|^{-2\alpha}(1+\lambda^{2})^{-\beta},\quad\lambda\in\mathbb{R},\,0<c<\infty,\,0<\alpha<1,\,\beta>0.$
(2.23)
The exponent $\alpha$ determines the long-range dependence, while the exponent
$\beta$ indicates the second-order intermittency of the process (see, e.g.,
Anh et al. [3] and Gao et al. [41]).
Notice that the process $X(t)$, specified by (2.23), is stationary if
$0<\alpha<1/2$ and is non-stationary with stationary increments if
$1/2\leq\alpha<1.$ Observe also that the spectral density (2.23) behaves as
$O(|\lambda|^{-2\alpha})$ as $|\lambda|\to 0$ and as
$O(|\lambda|^{-2(\alpha+\beta)})$ as $|\lambda|\to\infty$. Thus, under the
conditions $0<\alpha<1/2$, $\beta>0$ and $\alpha+\beta>1/2,$ the function
$f(\lambda)$ in (2.23) is well-defined for both $|\lambda|\to 0$ and
$|\lambda|\to\infty$ due to the presence of the component
$(1+\lambda^{2})^{-\beta}$, $\beta>0$, which is the Fourier transform of the
Bessel potential.
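These two asymptotic regimes of the fRBm density can be checked directly. The sketch below is illustrative (the values of $c$, $\alpha$, $\beta$ are arbitrary admissible choices): both normalized ratios tend to 1 in their respective limits.

```python
import numpy as np

# Sketch: tail behavior of the fRBm spectral density (2.23),
# f(lam) = c |lam|^{-2 alpha} (1 + lam^2)^{-beta}; it behaves like
# c |lam|^{-2 alpha} as lam -> 0 and like c |lam|^{-2(alpha+beta)} as
# lam -> infinity.  c, alpha, beta are arbitrary admissible values
# (0 < alpha < 1/2, beta > 0, alpha + beta > 1/2).

c, alpha, beta = 1.0, 0.3, 0.5
f = lambda lam: c * np.abs(lam)**(-2 * alpha) * (1 + lam**2)**(-beta)

for lam in (1e-1, 1e-3):
    print(lam, f(lam) * np.abs(lam)**(2 * alpha) / c)             # -> 1 (origin)
for lam in (1e1, 1e3):
    print(lam, f(lam) * np.abs(lam)**(2 * (alpha + beta)) / c)    # -> 1 (infinity)
```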
Comparing (2.22) and (2.23), we observe that the spectral density of fBm with
Hurst index $H=\alpha-1/2$ is the limiting case, as $\beta\to 0$, of the
spectral density of fRBm.
Another important c.t. long-memory model is the CARFIMA$(p,H,q)$ process,
which is defined to be the solution of a $p$th order stochastic differential
equation with a suitable initial condition and driven by a fractional Brownian
motion with Hurst parameter $H$ and its derivatives up to and including order
$0\leq q<p$. The spectral density function $f(\lambda)$ of the
CARFIMA$(p,H,q)$ process is given by the following formula (see, e.g., Tsai
and Chan [104]):
$f(\lambda)=\frac{\sigma^{2}}{2\pi}\Gamma(2H+1)\sin(\pi
H)|\lambda|^{1-2H}\frac{|\beta(i\lambda)|^{2}}{|\alpha(i\lambda)|^{2}},$
(2.24)
where $\alpha(z)$ and $\beta(z)$ are polynomials of degrees $p$ and $q$,
respectively. Notice that for $H=1/2$, the spectral density function given by
(2.24) becomes that of the short-memory CARMA$(p,q)$ process, given by (2.16).
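The reduction at $H=1/2$ is a one-line computation: the long-memory prefactor in (2.24) collapses to 1. The tiny sketch below (illustrative; the $\lambda$ values are arbitrary) confirms $\Gamma(2H+1)\sin(\pi H)|\lambda|^{1-2H}=1$ when $H=1/2$.

```python
from math import gamma, sin, pi

# Sketch: at H = 1/2 the prefactor Gamma(2H+1) sin(pi H) |lam|^{1-2H} in the
# CARFIMA density (2.24) equals 1 for every lam, so (2.24) reduces to the
# short-memory CARMA density (2.16), as stated in the text.

H = 0.5
for lam in (0.1, 1.0, 10.0):
    print(lam, gamma(2 * H + 1) * sin(pi * H) * abs(lam)**(1 - 2 * H))  # 1.0
```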
## 3 CLT for Toeplitz type quadratic functionals for Gaussian and linear
processes
In this section we present sufficient conditions for quadratic forms and
functionals of the form (1.1) to obey the CLT. The processes considered will
be d.t. and c.t. Gaussian or linear processes with memory. The matrix and the
operator that characterize the quadratic form and functional are Toeplitz.
As mentioned in the introduction, the limit distributions of the
functionals in (1.1) are completely determined by the spectral density
$f(\lambda)$ (or covariance function $r(t)$) and the generating function
$g(\lambda)$ (or generating kernel $\widehat{g}(t)$). Conditions that are in
terms of the spectral density $f(\lambda)$ and the generating function
$g(\lambda)$ will be called frequency-domain conditions, while conditions that
are in terms of the covariance function $r(t)$ and the generating kernel
$\widehat{g}(t)$ will be called time-domain conditions.
### 3.1 Frequency domain conditions
Let $\\{X(u),\ u\in\mathbb{U}\\}$ be a centered real-valued Gaussian
stationary process with spectral density $f(\lambda)$, $\lambda\in\Lambda$ and
covariance function $r(t):=\widehat{f}(t)$, $t\in\mathbb{U}$, where
$\mathbb{U}$ and $\Lambda$ are as in Section 2. We are interested in the
asymptotic distribution (as $T\to\infty$) of the following Toeplitz type
quadratic functionals of the process $X(u)$:
$Q_{T}:=Q_{T}(f,g)=\left\\{\begin{array}[]{ll}\sum_{t=1}^{T}\sum_{s=1}^{T}\widehat{g}(t-s)X(t)X(s)&\mbox{in
the d.t.\ case},\\ \int_{0}^{T}\int_{0}^{T}\widehat{g}(t-s)X(t)X(s)\,dt\,ds&\mbox{in the c.t.\
case},\end{array}\right.$ (3.1)
where $\widehat{g}(t)$ is the Fourier transform of some real, even, integrable
function $g(\lambda),$ $\lambda\in\Lambda$. In the d.t. case the functions
$f(\lambda)$ and $g(\lambda)$ are assumed to be $2\pi$-periodic and
periodically extended to $\mathbb{R}$.
Note. We include the function $f$ in the notation $Q_{T}(f,g)$ to emphasize
that the distribution of the quadratic form depends also on the spectral
density $f$. Let $Q_{T}$ be as in (3.1). By $\widetilde{Q}_{T}$ we denote the
standard normalized quadratic functional:
$\widetilde{Q}_{T}:=T^{-1/2}\,\left(Q_{T}-\mathbb{E}[Q_{T}]\right).$ (3.2)
As before, the notation
$\widetilde{Q}_{T}\overset{d}{\rightarrow}\eta\sim N(0,\sigma^{2})\quad{\rm
as}\quad T\to\infty$ (3.3)
will mean that the distribution of the random variable $\widetilde{Q}_{T}$
tends (as $T\to\infty$) to the centered normal distribution with variance
$\sigma^{2}$.
Toeplitz matrices and operators arise naturally in the theory of stationary
processes, and serve as tools to study many topics of the spectral and
statistical analysis of d.t. and c.t. stationary processes.
We first define the truncated Toeplitz matrices and operators generated by
integrable real symmetric functions.
Let $\psi(\lambda)$ be an integrable real symmetric function defined on
$\Lambda=[-\pi,\pi]$. For $T=1,2,\ldots$, the $(T\times T)$-truncated Toeplitz
matrix generated by $\psi(\lambda)$, denoted by $B_{T}(\psi)$, is defined by
the following equation (see, e.g., Ginovyan and Sahakyan [52], and Grenander
and Szegő [67]):
$B_{T}(\psi):=\|\widehat{\psi}(t-s)\|_{t,s=1,2,\ldots,T}=\left(\begin{array}[]{llll}\widehat{\psi}(0)&\widehat{\psi}(-1)&\cdots&\widehat{\psi}(1-T)\\
\widehat{\psi}(1)&\widehat{\psi}(0)&\cdots&\widehat{\psi}(2-T)\\
\vdots&\vdots&\ddots&\vdots\\
\widehat{\psi}(T-1)&\widehat{\psi}(T-2)&\cdots&\widehat{\psi}(0)\end{array}\right),$
(3.4)
where $\widehat{\psi}(t)=\int_{\Lambda}e^{i\lambda
t}\,\psi(\lambda)\,d\lambda$ $(t\in\mathbb{Z})$ are the Fourier coefficients
of $\psi$.
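Definition (3.4) is straightforward to realize numerically. The sketch below is illustrative (the generating function $\psi$ is an arbitrary trigonometric polynomial, chosen so its Fourier coefficients are known exactly): it computes $\widehat{\psi}(t)$ by quadrature and assembles the symmetric matrix $B_{T}(\psi)$.

```python
import numpy as np

# Minimal sketch of definition (3.4): build B_T(psi) from the Fourier
# coefficients psihat(t) = int_{-pi}^{pi} e^{i lam t} psi(lam) d lam.
# psi below is an arbitrary trigonometric polynomial, so the coefficients
# are known exactly: psihat(0) = 2 pi, psihat(+-1) = pi, all others 0.

def B_T(psi, T, n=1 << 12):
    grid = -np.pi + 2 * np.pi * np.arange(n) / n    # uniform grid on [-pi, pi)
    t = np.arange(T)
    # psi real and even => real coefficients and a symmetric matrix
    coeffs = (np.cos(np.outer(t, grid)) * psi(grid)).sum(axis=1) * (2 * np.pi / n)
    return coeffs[np.abs(np.subtract.outer(t, t))]  # entry (t, s) = psihat(t - s)

psi = lambda x: 1.0 + np.cos(x)
M = B_T(psi, 4)
print(M)   # diagonal 2*pi, first off-diagonals pi, remaining entries ~ 0
```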
Given a real number $T>0$ and an integrable real symmetric function
$\psi(\lambda)$ defined on $\mathbb{R}:=(-\infty,\infty)$, the $T$-truncated
Toeplitz operator (also called Wiener-Hopf operator) generated by
$\psi(\lambda)$, denoted by $W_{T}(\psi)$, is defined by the following
equation (see, e.g., Ginovyan [45], Ginovyan and Sahakyan [53], Grenander and
Szegő [67], Ibragimov [75], and Kac [79]):
$[W_{T}(\psi)u](t)=\int_{0}^{T}\hat{\psi}(t-s)u(s)ds,\quad u(s)\in
L^{2}[0,T],$ (3.5)
where $\widehat{\psi}(t)=\int_{\mathbb{R}}e^{i\lambda
t}\,\psi(\lambda)\,d\lambda$ $(t\in\mathbb{R})$ is the Fourier transform of
$\psi(\lambda)$.
Let $A_{T}(f)$ and $A_{T}(g)$ denote either the $T$-truncated Toeplitz
operators (in the c.t. case), or the $T\times T$ Toeplitz matrices (in the
d.t. case) generated by the functions $f$ and $g$, respectively. Observe that
$A_{T}(f)$ is the covariance matrix (or operator) of the process $\\{X(u),\
u\in\mathbb{U}\\}$.
We assume below that $f,g\in L^{1}(\Lambda)$, and with no loss of generality,
that $g\geq 0$. Also, we set
$\sigma^{2}_{0}:=16\pi^{3}\int_{\Lambda}f^{2}(\lambda)g^{2}(\lambda)\,d\lambda.$
(3.6)
As usual $\Lambda=[-\pi,\pi]$ in the d.t. case and $\Lambda=\mathbb{R}$ in the
c.t. case.
The theorems that follow contain sufficient conditions expressed in terms of
$f(\lambda)$ and $g(\lambda)$ to ensure central limit theorems for standard
normalized quadratic functionals $\widetilde{Q}_{T}$ both for d.t. and c.t.
Gaussian processes. Some of the assumptions imposed on $f$ allow for long-
range dependence ($f(0)=\infty$), others for discontinuities at other
frequencies. Sometimes the good behavior of one function, say $g$, can
compensate for the bad behavior of $f$, and vice versa.
###### Theorem 3.1.
Let $f$, $g$, $A_{T}(f)$, $A_{T}(g)$, and $\widetilde{Q}_{T}$ be as above.
Each of the following conditions is sufficient for the functional $Q_{T}$ to obey
the CLT, that is,
$\widetilde{Q}_{T}\overset{d}{\rightarrow}\eta\sim
N(0,\sigma_{0}^{2})\quad{\rm as}\quad T\to\infty,$ (3.7)
with $\sigma^{2}_{0}$ given by (3.6).
* (A)
$f\cdot g\in L^{1}(\Lambda)\cap L^{2}(\Lambda)$ and
$\chi_{2}(\widetilde{Q}_{T}):=\frac{2}{T}\hbox{tr}\bigl{[}A_{T}(f)A_{T}(g)\bigr{]}^{2}\longrightarrow\sigma_{0}^{2},$
(3.8)
where $\hbox{tr}[A]$ stands for the trace of the operator (or the matrix) $A$.
* (B)
The function
$\varphi({\bf
u}):=\varphi(u_{1},u_{2},u_{3})=\int_{\Lambda}f(\lambda)g(\lambda-
u_{1})f(\lambda-u_{2})g(\lambda-u_{3})\,d\lambda$ (3.9)
belongs to $L^{2}(\Lambda^{3})$ and is continuous at ${\bf 0}=(0,0,0).$
* (C)
$f\in L^{1}(\Lambda)\cap L^{p}(\Lambda)$ $(p\geq 2)$ and $g\in
L^{1}(\Lambda)\cap L^{q}(\Lambda)$ $(q\geq 2)$ with $1/p+1/q\leq 1/2.$
* (D)
$f\in L^{1}(\Lambda)\cap L^{2}(\Lambda)$, $g\in L^{1}(\Lambda)\cap
L^{2}(\Lambda)$, $fg\in L^{2}(\Lambda)$ and
$\int_{\Lambda}f^{2}(\lambda)g^{2}(\lambda-\mu)\,d\lambda\longrightarrow\int_{\Lambda}f^{2}(\lambda)g^{2}(\lambda)\,d\lambda\quad{\rm
as}\quad\mu\to 0.$
###### Remark 3.1.
Observe that assertion (A) implies assertions (B) – (D), and assertion (B)
implies assertions (C) and (D) (see Giraitis and Surgailis [60], Ginovyan and
Sahakyan [52, 53]). For the d.t. case: assertions (A) and (D) were proved in
Giraitis and Surgailis [60] (see also Giraitis et al. [59]); assertion (B)
was proved in Ginovyan and Sahakyan [52]; assertion (C) for $p=q=\infty$ was
first established by Grenander and Szegő ([67], Section 11.7), while the case
$p=2$, $q=\infty$ was proved by Ibragimov [75] and Rosenblatt [87], in the
general d.t. case assertion (C) was proved by Avram [6]. For the c.t. case,
assertions (A) – (D) were proved in Ginovyan [45] and in Ginovyan and Sahakyan
[53].
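Condition (A) can also be observed numerically in the d.t. case. The sketch below is illustrative and not from the paper: $f$ and $g$ are arbitrary smooth trigonometric polynomials. It builds the Toeplitz matrices of Section 3.1 from the Fourier coefficients $\widehat{\psi}(t)=\int e^{i\lambda t}\psi(\lambda)\,d\lambda$ and checks that $\chi_{2}(\widetilde{Q}_{T})=(2/T)\,\hbox{tr}[B_{T}(f)B_{T}(g)]^{2}$ approaches $\sigma_{0}^{2}=16\pi^{3}\int f^{2}g^{2}\,d\lambda$ as $T$ grows, as in (3.8).

```python
import numpy as np

# Illustrative numerical sketch of condition (3.8): for arbitrary smooth test
# functions f and g (not taken from the paper), the normalized trace
# chi2 = (2/T) tr[B_T(f) B_T(g)]^2 approaches
# sigma0^2 = 16 pi^3 int f^2 g^2 d lam as T grows.

n = 1 << 13                                    # quadrature grid size
grid = -np.pi + 2 * np.pi * np.arange(n) / n   # uniform grid on [-pi, pi)
w = 2 * np.pi / n                              # quadrature weight

f = lambda x: 1.0 + 0.5 * np.cos(x)            # short-memory spectral density
g = lambda x: 2.0 + np.cos(2 * x)              # generating function

def toeplitz_matrix(psi, T):
    """B_T(psi) with entries psihat(t - s), psihat(t) = int e^{i lam t} psi."""
    t = np.arange(T)
    coeffs = (np.cos(np.outer(t, grid)) * psi(grid)).sum(axis=1) * w
    return coeffs[np.abs(np.subtract.outer(t, t))]

sigma0_sq = 16 * np.pi**3 * (f(grid)**2 * g(grid)**2).sum() * w

for T in (25, 100, 400):
    M = toeplitz_matrix(f, T) @ toeplitz_matrix(g, T)
    chi2 = 2.0 / T * np.trace(M @ M)
    print(T, chi2 / sigma0_sq)                 # ratio approaches 1
```

Since both test functions have finitely many nonzero Fourier coefficients, the matrices are banded and the edge effect responsible for the $O(1/T)$ discrepancy is easy to see.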
The following theorem provides conditions for the CLT to hold when either $f$
or $g$ is a sufficiently smooth function. Given a number $\alpha$
($0<\alpha<1$), denote by $\rm{Lip}_{\alpha}(\Lambda)$ the class of Lipschitz
functions of order $\alpha$ on $\Lambda$. By definition, $\phi\in\rm{Lip}_{\alpha}(\Lambda)$ if
there exists a constant $C<\infty$ such that
$|\phi(x)-\phi(y)|\leq C|x-y|^{\alpha}\quad\text{for all}\quad x,y\in\Lambda.$
###### Theorem 3.2.
Let either $f\in\rm{Lip}_{\alpha}(\Lambda)$ or
$g\in\rm{Lip}_{\alpha}(\Lambda)$ with $\alpha>1/2$, and let $fg\in
L^{2}(\Lambda)$. Then $\widetilde{Q}_{T}\overset{d}{\rightarrow}\eta\sim
N(0,\sigma^{2}_{0})$ as $T\to\infty$ with $\sigma^{2}_{0}$ given by (3.6).
Theorem 3.2 for the d.t. case was proved in Giraitis and Taqqu [61]. For the
c.t. case it can be proved similarly.
To state the next theorem, we need to introduce a class of slowly varying
functions at zero. Recall that a function $u(\lambda)$,
$\lambda\in\mathbb{R}$, is called slowly varying at zero if it is non-negative
and for any $t>0$
$\lim_{\lambda\rightarrow 0}\frac{u(t\lambda)}{u(\lambda)}=1.$
Denote by $SV_{0}(\Lambda)$ the class of slowly varying functions at zero
$u(\lambda)$, $\lambda\in\Lambda$, satisfying the following conditions: for
some $a>0$, $u(\lambda)$ is bounded on $[-a,a]$, $\lim_{\lambda\to
0}u(\lambda)=0,$ $u(\lambda)=u(-\lambda)$ and $0<u(\lambda)<u(\mu)$ for
$0<\lambda<\mu<a$. An example of a function belonging to $SV_{0}(\Lambda)$ is
$u(\lambda)=\left|\ln|\lambda|\right|^{-\gamma}$ with $\gamma>0$ and $a=1$.
###### Theorem 3.3.
Assume that the functions $f$ and $g$ are integrable on $\mathbb{R}$ and
bounded outside any neighborhood of the origin, and satisfy for some $a>0$ the
following conditions:
$f(\lambda)\leq|\lambda|^{-\alpha}L_{1}(\lambda),\quad|g(\lambda)|\leq|\lambda|^{-\beta}L_{2}(\lambda),\quad\lambda\in[-a,a],$
(3.10)
for some $\alpha<1,\ \beta<1$ with $\alpha+\beta\leq 1/2$, where $L_{1}(x)$
and $L_{2}(x)$ are slowly varying functions at zero satisfying
$\displaystyle L_{i}\in SV_{0}(\mathbb{R}),\ \
\lambda^{-(\alpha+\beta)}L_{i}(\lambda)\in L^{2}[-a,a],\ i=1,2.$ (3.11)
Then $\widetilde{Q}_{T}\overset{d}{\rightarrow}\eta\sim N(0,\sigma^{2}_{0})$
as $T\to\infty$ with $\sigma^{2}_{0}$ given by (3.6).
###### Remark 3.2.
For the d.t. case the result of Theorem 3.3 under the condition
$\alpha+\beta<1/2$ was first obtained by Fox and Taqqu [39]. For the general
case, including the critical value $\alpha+\beta=1/2$ it was proved in
Ginovyan and Sahakyan [52]. For the c.t. case the result was proved in
Ginovyan and Sahakyan [53].
###### Remark 3.3.
The conditions $\alpha<1$ and $\beta<1$ in Theorem 3.3 ensure that the Fourier
transforms of $f$ and $g$ are well defined. Observe that when $\alpha>0$ the
process $X(t)$ may exhibit long-range dependence. We also allow here
$\alpha+\beta$ to assume the critical value 1/2. The assumptions $f\cdot g\in
L^{1}(\mathbb{R})$, $f,g\in L^{\infty}(\mathbb{R}\setminus[-a,a])$ and (3.11)
imply that $f\cdot g\in L^{2}(\mathbb{R})$, so that the variance
$\sigma^{2}_{0}$ in (3.6) is finite.
###### Remark 3.4.
In Theorem 3.3, the assumption that $L_{1}(x)$ and $L_{2}(x)$ belong to
$SV_{0}(\mathbb{R})$ instead of merely being slowly varying at zero is done in
order to deal with the critical case $\alpha+\beta=1/2$. Suppose that we are
away from this critical case, namely, $f(x)=|x|^{-\alpha}l_{1}(x)$ and
$g(x)=|x|^{-\beta}l_{2}(x)$, where $\alpha+\beta<1/2$, and $l_{1}(x)$ and
$l_{2}(x)$ are slowly varying at zero functions. Assume also that $f(x)$ and
$g(x)$ are integrable and bounded on $(-\infty,-a)\cup(a,+\infty)$ for any
$a>0$. We claim that Theorem 3.3 applies. Indeed, choose
$\alpha^{\prime}>\alpha$, $\beta^{\prime}>\beta$ with
$\alpha^{\prime}+\beta^{\prime}<1/2$. Write
$f(x)=|x|^{-\alpha^{\prime}}|x|^{\delta}l_{1}(x)$, where
$\delta=\alpha^{\prime}-\alpha>0$. Since $l_{1}(x)$ is slowly varying, when
$|x|$ is small enough, for some $\epsilon\in(0,\delta)$ we have
$|x|^{\delta}l_{1}(x)\leq|x|^{\delta-\epsilon}$. One can in turn bound
$|x|^{\delta-\epsilon}$ by $c\left|\ln|x|\right|^{-1}\in SV_{0}(\mathbb{R})$
for small $|x|$. Hence, for $|x|<1$ small enough,
$f(x)\leq|x|^{-\alpha^{\prime}}\left(c\left|\ln|x|\right|^{-1}\right).$
Similarly, for $|x|<1$ small enough,
$g(x)\leq|x|^{-\beta^{\prime}}\left(c\left|\ln|x|\right|^{-1}\right).$
All the assumptions in Theorem 3.3 are now readily checked with $\alpha$ and
$\beta$ replaced by $\alpha^{\prime}$ and $\beta^{\prime}$, respectively.
###### Remark 3.5.
The functions
$f(\lambda)=|\lambda|^{-\alpha}|\ln|\lambda||^{-\gamma}\quad\mbox{and}\quad
g(\lambda)=|\lambda|^{-\beta}|\ln|\lambda||^{-\gamma},$ (3.12)
where $\alpha<1,$ $\beta<1,$ $\alpha+\beta\leq 1/2$ and $\gamma>1/2$, provide
examples of spectral density $f(\lambda)$ and generating function $g(\lambda)$
satisfying the conditions of Theorem 3.3 (see Ginovyan and Sahakyan [52]).
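Indeed, for the functions (3.12) the only condition in (3.11) that is not immediate is the square integrability at the critical value $\alpha+\beta=1/2$, and this is exactly where the requirement $\gamma>1/2$ enters: with $0<a<1$, substituting $v=-\ln\lambda$,

```latex
\int_{0}^{a}\lambda^{-2(\alpha+\beta)}\left|\ln\lambda\right|^{-2\gamma}d\lambda
  =\int_{0}^{a}\lambda^{-1}\left|\ln\lambda\right|^{-2\gamma}d\lambda
  =\int_{-\ln a}^{\infty}v^{-2\gamma}\,dv<\infty
  \quad\Longleftrightarrow\quad
  \gamma>\frac{1}{2}.
```

When $\alpha+\beta<1/2$ the exponent $2(\alpha+\beta)$ is less than 1 and the integral converges for every $\gamma>0$.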
###### Remark 3.6.
The slowly varying functions $L_{1}$ and $L_{2}$ in (3.10) are important
because they provide a great flexibility in the choice of spectral density $f$
and generating function $g$. Theorem 3.3 shows that in the critical case
$\alpha+\beta=1/2$ the limit distribution of the standard normalized quadratic
form $\widetilde{Q}_{T}$ given by (3.2) is Gaussian and depends on the slowly
varying factors $L_{1}$ and $L_{2}$ through $f$ and $g$.
###### Remark 3.7.
The functions $f(\lambda)$ and $g(\lambda)$ in Theorem 3.3 have singularities
at the point $\lambda=0$, and are bounded outside any neighborhood of this point.
It can be shown that the choice of the point $\lambda=0$ is not essential, and
instead it can be taken to be any point $\lambda_{0}\in[-a,a]$. Using the
properties of the products of Toeplitz matrices and operators it can be shown
that Theorem 3.3 remains valid when $f(\lambda)$ and $g(\lambda)$ have
singularities of the form (3.12) at the same finite number of points of the
segment $[-a,a]$ (see Ginovyan and Sahakyan [52]).
###### Remark 3.8.
Assertion (A) of Theorem 3.1 implies Theorem 3.3. On the other hand, for
functions $f(\lambda)=\lambda^{-3/4}$ and $g(\lambda)=\lambda^{3/4}$
satisfying the conditions of Theorem 3.3, the function
$\varphi(u_{1},u_{2},u_{3})$ in (3.9) is not defined for $u_{2}=0$, $u_{1}\neq
0$, $u_{3}\neq 0$, showing that assertion (B) of Theorem 3.1 generally does
not imply Theorem 3.3 (see Ginovyan and Sahakyan [52]).
Giraitis and Surgailis [60] proved that assertion (A) of Theorem 3.1 remains
valid for d.t. linear processes. More precisely, they proved the following
theorem.
###### Theorem 3.4.
Let $\\{X(u),\,u\in\mathbb{Z}\\}$ be a d.t. stationary linear process of the
form (2.9) with spectral density $f$. Let $Q_{T}$ be a quadratic form
generated by a function $g$ given by (3.1), and let $B_{T}(f)$ and $B_{T}(g)$
be the $T\times T$ Toeplitz matrices generated by the functions $f$ and $g$,
respectively (see (3.4)). Assume that
$\frac{1}{T}\hbox{tr}\bigl{[}B_{T}(f)B_{T}(g)\bigr{]}^{2}\longrightarrow
8\pi^{3}\int_{-\pi}^{\pi}f^{2}(\lambda)g^{2}(\lambda)d\lambda<\infty.$ (3.13)
Then the CLT holds for $Q_{T}$, that is,
$\widetilde{Q}_{T}:=T^{-1/2}\,\left(Q_{T}-\mathbb{E}[Q_{T}]\right)\overset{d}{\rightarrow}\eta\sim
N(0,\sigma^{2})\quad{\rm as}\quad T\to\infty,$ (3.14)
where
$\sigma^{2}=16\pi^{3}\int_{-\pi}^{\pi}f^{2}(\lambda)g^{2}(\lambda)d\lambda+\kappa_{4}\left[2\pi\int_{-\pi}^{\pi}f(\lambda)g(\lambda)d\lambda\right]^{2},$
(3.15)
and $\kappa_{4}$ is the fourth cumulant of $\xi(0)$.
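The convergence in (3.13) is easy to check numerically when $f$ and $g$ are trigonometric polynomials, since the Toeplitz matrices are then banded. The sketch below is purely illustrative: the choices $f(\lambda)=2+\cos\lambda$ and $g(\lambda)=1+\frac{1}{2}\cos 2\lambda$ are ours, and we assume the convention $\hat{h}(k)=\int_{-\pi}^{\pi}e^{ik\lambda}h(\lambda)\,d\lambda$ for the entries of $B_{T}(h)$. It compares $\frac{1}{T}\hbox{tr}[B_{T}(f)B_{T}(g)]^{2}$ with the limit $8\pi^{3}\int_{-\pi}^{\pi}f^{2}g^{2}\,d\lambda$:

```python
import numpy as np
from scipy.linalg import toeplitz

# Illustrative trigonometric polynomials (our choice, not from the text):
#   f(lam) = 2 + cos(lam),  g(lam) = 1 + (1/2) cos(2 lam)
f = lambda lam: 2.0 + np.cos(lam)
g = lambda lam: 1.0 + 0.5 * np.cos(2.0 * lam)

def toeplitz_from_coeffs(coeffs, T):
    """B_T(h) with entries hat-h(j-k), assuming the convention
    hat-h(k) = int_{-pi}^{pi} e^{i k lam} h(lam) dlam (h real and even)."""
    c = np.zeros(T)
    for k, v in coeffs.items():
        if k < T:
            c[k] = v
    return toeplitz(c)  # symmetric Toeplitz matrix

# Exact Fourier coefficients of f and g under the convention above.
f_coeffs = {0: 4.0 * np.pi, 1: np.pi}
g_coeffs = {0: 2.0 * np.pi, 2: 0.5 * np.pi}

# Right-hand side of (3.13): 8 pi^3 * int_{-pi}^{pi} f^2 g^2 (Riemann sum).
lam = np.linspace(-np.pi, np.pi, 200001)
target = 8.0 * np.pi**3 * (f(lam)**2 * g(lam)**2).mean() * 2.0 * np.pi

for T in (50, 200, 800):
    M = toeplitz_from_coeffs(f_coeffs, T) @ toeplitz_from_coeffs(g_coeffs, T)
    print(T, np.trace(M @ M) / T / target)  # ratio tends to 1 as T grows
```

The printed ratios approach 1 at rate $O(1/T)$, the deficit coming from boundary effects of the truncation.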
The next result, which was proved in Giraitis and Taqqu [61], extends Theorem
3.2 to the case of d.t. linear processes.
###### Theorem 3.5.
Let $\\{X(u),\,u\in\mathbb{Z}\\}$ be a d.t. stationary linear process of the
form (2.9) with spectral density $f$ and $\mathbb{E}[\xi^{2}(0)]=1$, and let
$Q_{T}$ be a quadratic form generated by a function $g$ given by (3.1).
Suppose either $f\in\rm{Lip}_{\alpha}(\Lambda)$ or
$g\in\rm{Lip}_{\alpha}(\Lambda)$ with $\alpha>1/2$, and $fg\in
L^{2}(\Lambda)$. Then $\widetilde{Q}_{T}\overset{d}{\rightarrow}\eta\sim
N(0,\sigma^{2})$ as $T\to\infty$ with $\sigma^{2}$ given by (3.15).
Giraitis and Surgailis [60] conjectured that the relation (3.13), and hence,
the CLT for $Q_{T}$, holds under the single condition that the integral on the
right hand side of (3.13) is finite. Ginovyan [44] showed that the finiteness
of this integral does not guarantee convergence in (3.13), and conjectured
that positivity and finiteness of the integral in (3.13) is sufficient
for $Q_{T}$ to obey the CLT.
The next proposition shows that positivity and finiteness of the integral
in (3.13) is also not sufficient for $Q_{T}$ to obey the CLT
(see Ginovyan and Sahakyan [52]).
###### Proposition 3.1.
There exist a spectral density $f(\lambda)$ and a generating function
$g(\lambda)$ such that
$0<\int_{-\pi}^{\pi}f^{2}(\lambda)\,g^{2}(\lambda)\,d\lambda<\infty$ (3.16)
and
$\limsup_{T\to\infty}\chi_{2}(\widetilde{Q}_{T})=\limsup_{T\to\infty}\frac{2}{T}\hbox{tr}\left(B_{T}(f)B_{T}(g)\right)^{2}=\infty,$
(3.17)
that is, the condition (3.16) does not guarantee convergence in (3.13), and
hence is not sufficient for $Q_{T}$ to obey the CLT.
To construct functions $f(\lambda)$ and $g(\lambda)$ satisfying (3.16) and
(3.17), for a fixed $p\geq 2$ we choose a number $q>1$ to satisfy $1/p+1/q>1$,
and for such $p$ and $q$ we consider the functions $f_{0}(\lambda)$ and
$g_{0}(\lambda)$ defined by
$f_{0}(\lambda)=\left\\{\begin{array}[]{ll}\left({2^{s}}/{s^{2}}\right)^{1/p},&\mbox{if
$2^{-s-1}\leq\lambda\leq 2^{-s},\,s=2m$}\\\ 0,&\mbox{if
$2^{-s-1}\leq\lambda\leq 2^{-s},\,s=2m+1$},\end{array}\right.$ (3.18)
$g_{0}(\lambda)=\left\\{\begin{array}[]{ll}\left({2^{s}}/{s^{2}}\right)^{1/q},&\mbox{if
$2^{-s-1}\leq\lambda\leq 2^{-s},\,s=2m+1$}\\\ 0,&\mbox{if
$2^{-s-1}\leq\lambda\leq 2^{-s},\,s=2m$},\end{array}\right.$ (3.19)
where $m$ ranges over the positive integers. For an arbitrary finite positive constant $C$
we set $g_{\pm}(\lambda)=g_{0}(\lambda)\pm C$. Then the functions $f=f_{0}$
and $g=g_{+}$ or $g=g_{-}$ satisfy (3.16) and (3.17) (for details we refer to
Ginovyan [44], and Ginovyan and Sahakyan [52]). Consequently, for these
functions the quadratic form $Q_{T}$ does not obey the CLT, and it is of
interest to describe the limiting non-Gaussian distribution of $Q_{T}$ in this
special case.
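In this construction, verifying (3.16) is straightforward: the supports of $f_{0}$ and $g_{0}$ are disjoint, so $f_{0}g_{0}=0$, and with $g_{+}=g_{0}+C$ and the even extension of $f_{0}$ to $[-\pi,\pi]$,

```latex
\int_{-\pi}^{\pi}f_{0}^{2}(\lambda)\,g_{+}^{2}(\lambda)\,d\lambda
  =C^{2}\int_{-\pi}^{\pi}f_{0}^{2}(\lambda)\,d\lambda
  =2C^{2}\sum_{m=1}^{\infty}\left(\frac{2^{2m}}{(2m)^{2}}\right)^{2/p}2^{-2m-1}
  =C^{2}\sum_{m=1}^{\infty}\frac{2^{2m(2/p-1)}}{(2m)^{4/p}},
```

which is positive and finite for every $p\geq 2$: for $p>2$ the series is dominated by a geometric series, and for $p=2$ it equals $\sum_{m}(2m)^{-2}$. The delicate part of the argument is the divergence (3.17).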
### 3.2 Time domain conditions
In this subsection we present time-domain sufficient conditions for a
quadratic form $Q_{T}$ of the form (3.1) to obey the CLT. That is, the
conditions are in terms of the covariance function $r(t)$ and the generating
kernel $\widehat{g}(t)$. The processes considered here will be d.t. Gaussian
or linear processes with memory. Observe that the time-domain conditions
stated below are, in general, not equivalent to the frequency-domain
conditions stated in Theorems 3.1 and 3.3. The method of proof for
establishing the CLT is also different under the time-domain conditions. In
this case, diagrams are used and the method of moments is applied. This is
because, in the frequency domain, one can use an approximation technique,
essentially replacing the possibly unbounded spectral density by a bounded
one, which allows one to approximate the bivariate quadratic forms by
univariate sums of $m$-dependent random variables. Such an approximation
technique, however, does not work with time-domain conditions, and one has to
deal directly with the covariances, which decrease slowly.
The next results contain sufficient conditions in terms of the covariance
function $r(t)$ and the generating kernel $\widehat{g}(t)$ ensuring central
limit theorems for standard normalized quadratic form $\widetilde{Q}_{T}$ for
the d.t. Gaussian processes (see Fox and Taqqu [37], and Giraitis and Taqqu
[62]).
###### Theorem 3.6.
If the covariance function $r(t)$ and the generating kernel $\widehat{g}(t)$
satisfy the following condition:
$r\in l^{p}\,(p\geq 1)\quad\text{and}\quad\widehat{g}\in l^{q}\,(q\geq
1)\quad\text{with}\quad 1/p+1/q\geq 3/2,$
then the CLT holds for the quadratic form $Q_{T}$ with limiting variance
$\sigma^{2}_{0}$ given by (3.6).
###### Remark 3.9.
In fact, in Giraitis and Taqqu [62] (see also Giraitis and Taqqu [63]) a more
general result was proved, stating that Theorem 3.6 remains true for quadratic
forms of the form:
$Q_{T,m,n}=\sum_{t=1}^{T}\sum_{s=1}^{T}\widehat{g}(t-s)P_{m,n}(X(t),X(s)),\quad
T\in\mathbb{N},$ (3.20)
where $P_{m,n}(X(t),X(s))$ is a bivariate Appell polynomial (Wick power) of
the linear variables $X(t)$ and $X(s)$, $m,n\geq 0$, $m+n\geq 1$, and $X(t)$
is a d.t. linear process of the form (2.9). Also, observe that
$Q_{T,1,1}=Q_{T}$.
The next theorem, which was proved in Giraitis and Taqqu [62], shows that
under some rather restrictive conditions on the covariance function $r(t)$ and
the generating kernel $\widehat{g}(t)$, the long-range dependence of the
process $X(t)$ can be compensated by the fast decay of the generating kernel
$\widehat{g}(t)$ in such a way that the CLT for $Q_{T}$ holds. These
conditions ensure, in fact, that the sufficient conditions in the frequency
domain, provided in Theorem 3.3, are satisfied. The theorem involves quasi-
monotone sequences: a sequence $\\{b(t),t\in\mathbb{Z}\\}$ is said to be
quasi-monotonically convergent to $0$ if $b(t)\to 0$ and $b(t+1)\leq
b(t)(1+c/t)$ as $t\to\infty$ for some $c>0$. The sequence $b(t)$ has bounded
variation if $\sum_{t=1}^{\infty}|b(t+1)-b(t)|<\infty$.
###### Theorem 3.7.
Assume that $r(t)=|t|^{-\gamma_{1}}L_{1}(|t|),$
$\widehat{g}(t)=|t|^{-\gamma_{2}}L_{2}(|t|)$ with $0<\gamma_{1},\gamma_{2}<3$
and $\gamma_{1}+\gamma_{2}>3/2$. Suppose in addition that both sequences
$\\{r(t)\\}$ and $\\{{\widehat{g}(t)}\\}$ have bounded variation and are
quasi-monotonically convergent to $0$; if $1<\gamma_{1}<3$, $r(t)$ has the
same sign for large $t$ and satisfies $\sum_{t\in\mathbb{Z}}r(t)=0$; if
$1<\gamma_{2}<3$, $\widehat{g}(t)$ has the same sign for large $t$ and
satisfies $\sum_{t\in\mathbb{Z}}\widehat{g}(t)=0$. Then the CLT holds for the
quadratic form $Q_{T}$. The limiting variance is expressed by (3.6) provided
that $0<\gamma_{1},\gamma_{2}<1$.
### 3.3 Operator conditions
In this subsection we assume that the model $X(t)$ is a d.t. Gaussian process
defined on a probability space $(\Omega,\mathcal{F},P)$, and we are interested
in asymptotic normality of the quadratic form $Q_{T}:=Q_{T}(f,g)$ given by
(3.1) (d.t. case). Recall the notation (see formula (3.2))
$\widetilde{Q}_{T}(f,g):=T^{-1/2}\,\left(Q_{T}(f,g)-\mathbb{E}[Q_{T}(f,g)]\right).$
(3.21)
and (see formula (3.6))
$\sigma^{2}(f,g):=16\pi^{3}\int_{-\pi}^{\pi}f^{2}(\lambda)g^{2}(\lambda)\,d\lambda.$
(3.22)
We denote by $L^{2}(dP)$ the $L^{2}$-space constructed from a probability
measure $P$ such that $X(t)\in L^{2}(dP)$, $t\in\mathbb{Z}$.
Solev and Gerville-Reache [90] observed that for a fixed spectral density $f$
and a number $T\in\mathbb{N}$ the quadratic form $\widetilde{Q}_{T}(f,g)$ in
(3.21) can be regarded as the value of a linear operator
$\mathbf{\widetilde{Q}}_{T}:g\mapsto\widetilde{Q}_{T}(f,g)$. It turns out that
in order to study the asymptotic normality of the quadratic form
$\widetilde{Q}_{T}(f,g)$, it is enough to understand for what sets
$B_{\varepsilon}$ (possibly, depending on $f$) we have
$\limsup_{T\to\infty}\sup_{g\in
B_{\varepsilon}}||\widetilde{Q}_{T}(f,g)||_{L^{2}(dP)}\leq\varepsilon.$ (3.23)
It can be shown that the function $||\widetilde{Q}_{T}(f,g)||_{L^{2}(dP)}$
possesses the following symmetry property regarded as a function of $(f,g)$:
if $g$ is a nonnegative function, then
$||\widetilde{Q}_{T}(f,g)||^{2}_{L^{2}(dP)}=||\widetilde{Q}_{T}(g,f)||^{2}_{L^{2}(dP)}.$
(3.24)
The next result plays a key role in the proof of the asymptotic normality of
$\widetilde{Q}_{T}(f,g)$ (see Solev and Gerville-Reache [90]).
###### Theorem 3.8.
Let $\mathcal{G}_{0}$ and $\mathcal{G}$ ($\mathcal{G}_{0}\subset\mathcal{G}$)
be linear subsets in the space $L^{1}[-\pi,\pi]$. Assume that the following
conditions hold:
* (a)
for every $g_{0}\in\mathcal{G}_{0}$ the limit
$\lim_{T\to\infty}||\widetilde{Q}_{T}(f,g_{0})||^{2}_{L^{2}(dP)}=\sigma^{2}(f,g_{0})<\infty.$
exists, and
$\widetilde{Q}_{T}(f,g_{0})\overset{d}{\rightarrow}\eta_{0}\sim
N(0,\sigma^{2}(f,g_{0}))\quad{\rm as}\quad T\to\infty;$
* (b)
for every $g\in\mathcal{G}$ and a number $\varepsilon>0$ there exists a
function $g_{0}\in\mathcal{G}_{0}$ such that
$\limsup_{T\to\infty}||\widetilde{Q}_{T}(f,g-g_{0})||_{L^{2}(dP)}\leq\varepsilon,\quad||g-g_{0}||_{f^{2}}\leq\varepsilon,$
where $||\cdot||_{L^{2}(dP)}$ and $||\cdot||_{f^{2}}$ are the norms in the
spaces $L^{2}(dP)$ and $L^{2}(f^{2},[-\pi,\pi])$, respectively. Then
$\sigma^{2}(f,g)<\infty$, the limit
$\lim_{T\to\infty}||\widetilde{Q}_{T}(f,g)||^{2}_{L^{2}(dP)}=\sigma^{2}(f,g)$
exists, and
$\widetilde{Q}_{T}(f,g)\overset{d}{\rightarrow}\eta\sim
N(0,\sigma^{2}(f,g))\quad{\rm as}\quad T\to\infty.$ (3.25)
Thus, if the result holds for every $g_{0}\in\mathcal{G}_{0}$ (part (a)),
then it holds for every $g\in\mathcal{G}$ that can be approximated by such
$g_{0}$ in the sense of part (b).
Given a function $\phi\in L^{1}[-\pi,\pi]$ and an interval
$I\subset[-\pi,\pi]$ of length $|I|$, we set
$\\{\phi\\}_{I}:=\frac{1}{|I|}\int_{I}\phi(x)dx,$
and define
$\lambda(\phi):=\sup_{I}\\{\phi\\}_{I}\times\\{1/\phi\\}_{I},$ (3.26)
where the supremum is taken over all intervals $I\subset[-\pi,\pi]$. The
condition $\lambda(\phi)<\infty$ implies a type of smoothness of the function
$\psi=\log|\phi|$ in the neighborhoods of points where it becomes infinite
(for details, see Hunt et al. [74]).
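For example, the quantity (3.26) can be evaluated explicitly for a pure power singularity $\phi(x)=|x|^{-\alpha}$ with $0<\alpha<1$. On the intervals $I=[0,h]$ touching the singularity,

```latex
\{\phi\}_{[0,h]}\times\{1/\phi\}_{[0,h]}
  =\frac{1}{h}\int_{0}^{h}x^{-\alpha}dx\cdot\frac{1}{h}\int_{0}^{h}x^{\alpha}dx
  =\frac{h^{-\alpha}}{1-\alpha}\cdot\frac{h^{\alpha}}{1+\alpha}
  =\frac{1}{1-\alpha^{2}},
```

uniformly in $h$; general intervals are handled similarly, and $\lambda(|x|^{-\alpha})<\infty$ precisely when $|\alpha|<1$ (this is the $A_{2}$ condition of Hunt et al. [74]). Thus the theorem below covers majorants with power singularities, provided $f_{*}g_{*}\in L^{2}[-\pi,\pi]$.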
The following result was obtained by Solev and Gerville-Reache [90].
###### Theorem 3.9.
Assume that the following conditions hold:
* (a)
$f\leq f_{*}$, $|g|\leq g_{*}$ and $\lambda(f_{*}),\lambda(g_{*})<\infty$,
* (b)
$f_{*}g_{*}\in L^{2}[-\pi,\pi]$.
Then the quadratic form $Q_{T}(f,g)$ obeys the CLT, that is, the relation
(3.25) is satisfied.
## 4 Functional limit theorems for Gaussian and linear models
In this section, we establish weak convergence in $C[0,1]$ for c.t. case and
in $D[0,1]$ for d.t. case of normalized stochastic processes, generated by
Toeplitz type quadratic functionals of a Gaussian stationary process,
exhibiting long-range dependence. Specifically, we are interested in
describing the limiting behavior (as $T\to\infty$) of the following process
$Q_{T}(\tau)$, generated by Toeplitz type quadratic functionals of a Gaussian
stationary process $\\{X(u),\ u\in\mathbb{U}\\}$ with spectral density $f$:
$Q_{T}(\tau)=\left\\{\begin{array}[]{ll}\sum_{t=1}^{[T\tau]}\sum_{s=1}^{[T\tau]}\widehat{g}(t-s)X(t)X(s)&\mbox{in
the d.t.\ case},\\\ \\\
\int_{0}^{T\tau}\int_{0}^{T\tau}\widehat{g}(t-s)X(t)X(s)\,dt\,ds&\mbox{in the
c.t.\ case},\end{array}\right.$ (4.1)
where $\tau\in[0,1]$, $\widehat{g}(t)$, $t\in\mathbb{U}$ is the Fourier
transform of some integrable even function $g(\lambda)$, $\lambda\in\Lambda$,
and $[\,\cdot\,]$ stands for the integer part.
The limit of the process (4.1) again is completely determined by the spectral
density $f(\lambda)$ (or covariance function $r(t)$) and the generating
function $g(\lambda)$ (or generating kernel $\widehat{g}(t)$), and depending
on their properties, the limiting process can be Gaussian or not. We say that
we have a functional non-central limit theorem (FNCLT) for the process
$Q_{T}(\tau)$ if the limit is non-Gaussian or, if Gaussian, it is not Brownian
motion. We say that a functional central limit theorem (FCLT) for
$Q_{T}(\tau)$ holds if we have the following weak convergence in $C[0,1]$ in
the c.t. case (and in $D[0,1]$ in the d.t. case):
$\widetilde{Q}_{T}(\tau)\Longrightarrow\sigma B(\tau),\quad\tau\in[0,1],$
(4.2)
where $\sigma>0$, $B(\tau)$ is a standard Brownian motion, and
$\widetilde{Q}_{T}(\tau)$ is the standard normalized version of the process
$Q_{T}(\tau)$ in (4.1):
$\widetilde{Q}_{T}(\tau):=T^{-1/2}\left(Q_{T}(\tau)-\mathbb{E}[Q_{T}(\tau)]\right),\quad\tau\in[0,1].$
(4.3)
### 4.1 Functional CLT for $Q_{T}(\tau)$
We first state functional central limit theorems, when the limit of the
normalized process (4.3) is Brownian motion.
In the d.t. case the next result was obtained in Giraitis and Taqqu [65] (see
also Fox and Taqqu [38] and Giraitis and Taqqu [61]).
###### Theorem 4.1.
If the covariance function $r(t)$ and the generating kernel $\widehat{g}(t)$
satisfy the following condition: $r\in l^{p}$ $(p\geq 1)$ and $\widehat{g}\in
l^{q}$ $(q\geq 1)$ with $1/p+1/q\geq 3/2,$ then for the process $Q_{T}(\tau)$
the FCLT holds, that is, the convergence (4.2) holds in $D[0,1]$ with limiting
variance $\sigma^{2}$ given by
$\sigma^{2}:=\sum_{t,s,v\in\mathbb{Z}}\widehat{g}(t)\widehat{g}(s)\mathrm{Cov}\left(P(X(v),X(v+t)),P(X(0),X(s))\right),$
(4.4)
where $P(X(t),X(s)):=X(t)X(s)-\mathbb{E}[X(t)X(s)]$.
Now we consider the c.t. case. The next result, which is an extension of
Theorem 3.1(A), involves the convergence of finite-dimensional distributions
of the process $\widetilde{Q}_{T}(\tau)$ to those of a standard Brownian
motion (see Bai et al. [8]).
###### Theorem 4.2.
Assume that the spectral density $f(\lambda)$ and the generating function
$g(\lambda)$ satisfy the following conditions:
$f\cdot g\in L^{1}(\mathbb{R})\cap L^{2}(\mathbb{R})$ (4.5)
and
$\mathbb{E}[\widetilde{Q}_{T}^{2}(1)]\rightarrow
16\pi^{3}\int_{\mathbb{R}}f^{2}(x)g^{2}(x)dx~{}\text{ as }T\rightarrow\infty.$
(4.6)
Then we have the following convergence of the finite-dimensional distributions
$\widetilde{Q}_{T}(\tau)\overset{f.d.d.}{\longrightarrow}\sigma_{0}B(\tau),$
where $\widetilde{Q}_{T}(\tau)$ is as in (4.3), $B(\tau)$ is a standard
Brownian motion, and (see (3.6)):
$\sigma^{2}_{0}:=16\pi^{3}\int_{\mathbb{R}}f^{2}(x)g^{2}(x)dx.$ (4.7)
To extend the convergence of finite-dimensional distributions in Theorem 4.2
to the weak convergence in the space $C[0,1]$, we impose an additional
condition on the underlying Gaussian process $X(t)$ and on the generating
function $g$. It is convenient to impose this condition in the time domain,
that is, on the covariance function $r:=\hat{f}$ and the generating kernel
$a:=\hat{g}$. The following condition is an analog of the assumption in
Theorem 4.1
$r(\cdot)\in L^{p}(\mathbb{R}),\quad a(\cdot)\in
L^{q}(\mathbb{R})\quad\text{for some}\quad p,q\geq
1,~{}\frac{1}{p}+\frac{1}{q}\geq\frac{3}{2}.$ (4.8)
###### Remark 4.1.
In fact under (4.5), the condition (4.8) is sufficient for the convergence in
(4.6). Indeed, let $\bar{p}=p/(p-1)$ and $\bar{q}=q/(q-1)$ be the Hölder
conjugates of $p$ and $q$, respectively. Since $1\leq p,q\leq 2$, one has by
the Hausdorff-Young inequality that $\|f\|_{\bar{p}}\leq
c_{p}\|r\|_{p},~{}\|g\|_{\bar{q}}\leq c_{q}\|a\|_{q},$ and hence, in view of
(4.8) we have
$f(\cdot)\in L^{\bar{p}},\quad g(\cdot)\in
L^{\bar{q}},\quad\frac{1}{\bar{p}}+\frac{1}{\bar{q}}=2-\frac{1}{p}-\frac{1}{q}\leq
1/2.$ (4.9)
Then the convergence in (4.6) follows from Theorem 3.1(C) (see the proof of
Theorem 3 of Ginovyan and Sahakyan [53] in the c.t. case, and Giraitis and
Surgailis [60] in the d.t. case).
###### Remark 4.2.
Observe that condition (4.8) is satisfied if the functions $r(t)$ and $a(t)$
satisfy the following: there exist constants $C>0$, $\alpha^{*}$ and
$\beta^{*}$, such that
$|r(t)|\leq C(1\wedge|t|^{\alpha^{*}-1}),\qquad|a(t)|\leq
C(1\wedge|t|^{\beta^{*}-1}),$ (4.10)
where $0<\alpha^{*},\beta^{*}<1/2$ and $\alpha^{*}+\beta^{*}<1/2$. Indeed, to
see this, note first that $r(\cdot),\ a(\cdot)\in L^{\infty}(\mathbb{R})$.
Then one can choose $p,q\geq 1$ such that $p(\alpha^{*}-1)<-1$ and
$q(\beta^{*}-1)<-1$, which entails that $r(\cdot)\in L^{p}(\mathbb{R})$ and
$a(\cdot)\in L^{q}(\mathbb{R})$. Since $1/p+1/q<2-\alpha^{*}-\beta^{*}$ and
$2-\alpha^{*}-\beta^{*}>3/2$, one can further choose $p,q$ to satisfy
$1/p+1/q\geq 3/2$.
The next results, two functional central limit theorems, extend Theorems
3.1(A) and 3.3 to weak convergence in the space $C[0,1]$ of the stochastic
process $\widetilde{Q}_{T}(\tau)$ to a standard Brownian motion.
###### Theorem 4.3.
Let the spectral density $f$ and the generating function $g$ satisfy condition
(4.5). Let the covariance function $r(t)$ and the generating kernel $a(t)$
satisfy condition (4.8). Then we have the following weak convergence in
$C[0,1]$:
$\widetilde{Q}_{T}(\tau)\Longrightarrow\sigma_{0}B(\tau),$
where $\widetilde{Q}_{T}(\tau)$ is as in (4.3), $\sigma_{0}$ is as in (4.7),
and $B(\tau)$ is a standard Brownian motion.
Recall the class $SV_{0}(\mathbb{R})$ of slowly varying at zero functions
$u(x)$, $x\in\mathbb{R}$, satisfying the following conditions: for some $a>0$,
$u(x)$ is bounded on $[-a,a]$, $\lim_{x\to 0}u(x)=0,$ $u(x)=u(-x)$ and
$0<u(x)<u(y)$ for $0<x<y<a$.
###### Theorem 4.4.
Assume that the functions $f$ and $g$ are integrable on $\mathbb{R}$ and
bounded outside any neighborhood of the origin, and satisfy for some $a>0$
$f(x)\leq|x|^{-\alpha}L_{1}(x),\quad|g(x)|\leq|x|^{-\beta}L_{2}(x),\quad
x\in[-a,a]$ (4.11)
for some $\alpha<1,\ \beta<1$ with $\alpha+\beta\leq 1/2$, where $L_{1}(x)$
and $L_{2}(x)$ are slowly varying at zero functions satisfying
$L_{i}\in SV_{0}(\mathbb{R}),\quad x^{-(\alpha+\beta)}L_{i}(x)\in
L^{2}[-a,a],\quad i=1,2.$ (4.12)
Let, in addition, the covariance function $r(t)$ and the generating kernel
$a(t)$ satisfy condition (4.8). Then we have the following weak convergence in
$C[0,1]$:
$\widetilde{Q}_{T}(\tau)\Longrightarrow\sigma_{0}B(\tau),$
where $\widetilde{Q}_{T}(\tau)$ is as in (4.3), $\sigma_{0}$ is as in (4.7),
and $B(\tau)$ is a standard Brownian motion.
###### Remark 4.3.
The proofs of Theorems 4.2 - 4.4, given in Bai et al. [8], use the method
developed in Ginovyan and Sahakyan [53], which in turn is based on
approximations of traces of products of truncated Toeplitz operators (see
Section 8.1).
### 4.2 Functional NCLT for $Q_{T}(\tau)$
#### 4.2.1 Non-central limit theorems (discrete-time)
For d.t. Gaussian processes the problem of description of the limit
distribution of the quadratic form:
$Q_{T}:=\sum_{t=1}^{T}\sum_{s=1}^{T}\widehat{g}(t-s)X(t)X(s),\quad
T\in\mathbb{N},$ (4.13)
if it is non-Gaussian, goes back to the classical papers of Rosenblatt
[86]–[88]. Later this problem was studied in a series of papers by Fox and
Taqqu, Taqqu, and Terrin and Taqqu (see, e.g., [37], [97], [100], [101],
[102], and references therein). Suppose that the spectral density $f(\lambda)$
and the generating function $g(\lambda)$ are regularly varying functions at
the origin:
$f(\lambda)=|\lambda|^{-\alpha}L_{1}(\lambda)\quad{\rm and}\quad
g(\lambda)=|\lambda|^{-\beta}L_{2}(\lambda),\quad\alpha<1,\beta<1,$ (4.14)
where $L_{1}(\lambda)$ and $L_{2}(\lambda)$ are slowly varying functions at
zero, which are bounded on bounded intervals. The conditions $\alpha<1$ and
$\beta<1$ ensure that the Fourier coefficients of $f$ and $g$ are well
defined. When $\alpha>0$ the model $\\{X(t),t\in\mathbb{Z}\\}$ exhibits long
memory.
It is the sum $\alpha+\beta$ that determines the asymptotic behavior of the
quadratic form $Q_{T}$. If $\alpha+\beta\leq 1/2$, then by Theorem 3.3 the
standard normalized quadratic form
$T^{-1/2}\left(Q_{T}-\mathbb{E}[Q_{T}]\right)$
converges in distribution to a Gaussian random variable. If
$\alpha+\beta>1/2$, the convergence to a Gaussian distribution fails.
Consider the embedding of the discrete sequence $\\{Q_{T},\,T\in\mathbb{N}\\}$
into a continuous-time process
$\\{Q_{T}(\tau),\,T\in\mathbb{N},\,\tau\in[0,1]\\}$ defined by
$Q_{T}(\tau):=\sum_{t=1}^{[T\tau]}\sum_{s=1}^{[T\tau]}\widehat{g}(t-s)X(t)X(s),$
(4.15)
where $[\,\cdot\,]$ stands for the integer part. The problem of interest
is to describe the limit distribution of the following normalized process:
$\widetilde{Q}_{T}(\tau):=d_{T}^{-1}\left(Q_{T}(\tau)-\mathbb{E}[Q_{T}(\tau)]\right)$
(4.16)
where $d_{T}$ is a suitably chosen normalization factor.
In [86] (see also [88]) Rosenblatt showed that if a d.t. centered Gaussian
process $X(t)$ has covariance function $r(t)=(1+t^{2})^{\alpha/2-1/2}$ with
$1/2<\alpha<1$, then the random variable
$\widetilde{Q}_{T}:=T^{-\alpha}\sum_{t=1}^{T}\left[X^{2}(t)-1\right]$
has a non-Gaussian limiting distribution, and described this distribution in
terms of characteristic functions. This is a special case of (4.16) with
$\tau=1$, $1/2<\alpha<1$, $\beta=0$ and $d_{T}=T^{\alpha}$. In [97] (see also
[100]) Taqqu extended Rosenblatt’s result by showing that the stochastic
process
$\widetilde{Q}_{T}(\tau):=T^{-\alpha}\sum_{t=1}^{[T\tau]}\left[X^{2}(t)-1\right]$
converges (as $T\to\infty$) weakly in $D[0,1]$ to a process (called the
Rosenblatt process) which has the double Wiener-Itô integral representation
$Q(\tau):=C_{\alpha}\int_{\mathbb{R}^{2}}^{{}^{\prime\prime}}\frac{e^{i\tau(x+y)}-1}{i(x+y)}|x|^{-\alpha/2}|y|^{-\alpha/2}dZ(x)dZ(y),$
(4.17)
where $Z(\cdot)$ is a complex-valued Gaussian random measure defined on the
Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R})$, and satisfying $EZ(B)=0$,
$E|Z(B)|^{2}=|B|$, and $\overline{Z(-B)}=Z(B)$ for any
$B\in\mathcal{B}(\mathbb{R})$. The double prime in the integral (4.17)
indicates that the integration excludes the diagonals $x=\pm y$.
Notice that the distribution of the random variable $Q(\tau)$ in (4.17) for
$\tau=1$ is described in Veillette and Taqqu [105].
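Rosenblatt's non-central limit can be observed in simulation. The sketch below is illustrative: the parameter choices $\alpha=0.8$, $T=400$ and the number of replicates are ours. It draws independent copies of $\widetilde{Q}_{T}=T^{-\alpha}\sum_{t=1}^{T}[X^{2}(t)-1]$ for a Gaussian process with covariance $r(t)=(1+t^{2})^{\alpha/2-1/2}$ and checks that the empirical distribution is strongly right-skewed, in contrast with the symmetric Gaussian limit of the CLT regime:

```python
import numpy as np
from scipy.linalg import cholesky, toeplitz
from scipy.stats import skew

# Illustrative values (our choice): memory parameter, sample length, replicates.
alpha, T, reps = 0.8, 400, 3000
rng = np.random.default_rng(0)

# Rosenblatt's covariance r(t) = (1 + t^2)^{alpha/2 - 1/2}; note r(0) = 1.
t = np.arange(T, dtype=float)
R = toeplitz((1.0 + t**2) ** (alpha / 2.0 - 0.5))
L = cholesky(R + 1e-10 * np.eye(T), lower=True)  # tiny jitter for numerical safety

# Replicates of Q_tilde = T^{-alpha} * sum_t (X(t)^2 - 1).
X = L @ rng.standard_normal((T, reps))
Q = (X**2 - 1.0).sum(axis=0) / T**alpha

# A Gaussian limit would be symmetric; here the empirical distribution is
# markedly right-skewed, consistent with the non-Gaussian Rosenblatt limit.
print("empirical skewness:", skew(Q))
```

The positive skewness persists as $T$ grows, reflecting the nonvanishing higher cumulants of the Rosenblatt distribution.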
The next result, proved in Terrin and Taqqu [101] using power counting
theorems (see Section 8.6), describes the non-Gaussian limit distribution of
the suitable normalized process $Q_{T}(\tau)$.
###### Theorem 4.5.
Let $f(\lambda)$ and $g(\lambda)$ be as in (4.14) with $\alpha<1$, $\beta<1$,
and with factors $L_{1}(\lambda)$ and $L_{2}(\lambda)$ that are slowly varying
at zero and bounded on bounded intervals. Let the process $Q_{T}(\tau)$ be as in
(4.15). Then for $\alpha+\beta>1/2$ the process:
$Z_{T}(\tau):=\frac{1}{T^{\alpha+\beta}L_{1}(1/T)L_{2}(1/T)}\,\left(Q_{T}(\tau)-\mathbb{E}[Q_{T}(\tau)]\right)$
(4.18)
converges (as $T\to\infty$) weakly in $D[0,1]$ to
$Z(\tau):=\int_{\mathbb{R}^{2}}^{{}^{\prime\prime}}K_{\tau}(x,y)dZ(x)dZ(y),$
(4.19)
where
$K_{\tau}(x,y)=|xy|^{-\alpha/2}\int_{\mathbb{R}}\frac{e^{i\tau(x+u)}-1}{i(x+u)}\cdot\frac{e^{i\tau(y-u)}-1}{i(y-u)}|u|^{-\beta}du.$
(4.20)
The double prime in the integral (4.19) indicates that the integration
excludes the diagonals $x=\pm y$.
###### Remark 4.4.
The limiting process in (4.19) is real-valued, non-Gaussian, and satisfies
$\mathbb{E}[Z(\tau)]=0$ and
$\mathbb{E}[Z^{2}(\tau)]=\int_{\mathbb{R}^{2}}|K_{\tau}(x,y)|^{2}dxdy$. It is
self-similar with parameter $H=\alpha+\beta\in(1/2,2)$, that is, the processes
$\\{Z(a\tau),\tau\in[0,1]\\}$ and $\\{a^{H}Z(\tau),\tau\in[0,1]\\}$ have the
same finite dimensional distributions for all $a>0$.
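The self-similarity exponent $H=\alpha+\beta$ can be read off from (4.19) and (4.20) by a scaling argument: the substitution $u\mapsto u/a$ in (4.20) shows that $K_{a\tau}(x/a,y/a)=a^{\alpha+\beta+1}K_{\tau}(x,y)$ (the factor $|xy/a^{2}|^{-\alpha/2}$ contributes $a^{\alpha}$, each of the two fractions contributes $a$, and $|u|^{-\beta}du$ contributes $a^{\beta-1}$). Since $dZ(x/a)\overset{d}{=}a^{-1/2}dZ(x)$, changing variables $x\mapsto x/a$, $y\mapsto y/a$ in (4.19) gives

```latex
Z(a\tau)\overset{d}{=}a^{\alpha+\beta+1}\,a^{-1/2}\cdot a^{-1/2}\,Z(\tau)
        =a^{\alpha+\beta}\,Z(\tau),
```

and the same computation, carried out jointly in $\tau$, yields the equality of finite-dimensional distributions.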
#### 4.2.2 Non-central limit theorems (continuous-time)
Now we state a _non-central limit theorem_ for the process $Q_{T}(\tau)$ in
the c.t. case, that is, when $Q_{T}(\tau)$ in (4.1) is defined through
integrals. Let the spectral density $f$ and the generating function $g$
satisfy
$f(x)=|x|^{-\alpha}L_{1}(x)\quad{\rm and}\quad g(x)=|x|^{-\beta}L_{2}(x),\quad
x\in\mathbb{R},\quad\alpha<1,\,\beta<1,$ (4.21)
with slowly varying at zero functions $L_{1}(x)$ and $L_{2}(x)$ such that
$\int_{\mathbb{R}}|x|^{-\alpha}L_{1}(x)dx<\infty\quad{\rm
and}\quad\int_{\mathbb{R}}|x|^{-\beta}L_{2}(x)dx<\infty.$
We assume in addition that the functions $L_{1}(x)$ and $L_{2}(x)$ satisfy the
following condition, called Potter’s bound (see Giraitis et al. [59], formula
(2.3.5)): for any $\epsilon>0$ there exists a constant $C>0$ so that if $T$ is
large enough, then
$\frac{L_{i}(u/T)}{L_{i}(1/T)}\leq C(|u|^{\epsilon}+|u|^{-\epsilon}),\quad
i=1,2.$ (4.22)
Note that a sufficient condition for (4.22) to hold is that $L_{1}(x)$ and
$L_{2}(x)$ are bounded on intervals $[a,\infty)$ for any $a>0$, which is the
case for the slowly varying functions in Theorem 4.4.
Now we are interested in the limit process of the following normalized version
of the process $Q_{T}(\tau)$ given by (4.1), with $f$ and $g$ as in (4.21):
$Z_{T}(\tau):=\frac{1}{T^{\alpha+\beta}L_{1}(1/T)L_{2}(1/T)}\left(Q_{T}(\tau)-\mathbb{E}[Q_{T}(\tau)]\right).$
(4.23)
###### Theorem 4.6.
Let $f$ and $g$ be as in (4.21) with $\alpha<1$, $\beta<1$ and slowly varying
at zero functions $L_{1}(x)$ and $L_{2}(x)$ satisfying (4.22), and let
$Z_{T}(\tau)$ be as in (4.23). Then if $\alpha+\beta>1/2$, we have the
following weak convergence in the space $C[0,1]$:
$Z_{T}(\tau)\Longrightarrow Z(\tau),$
where the limit process $Z(\tau)$ is given by
$Z(\tau)=\int_{\mathbb{R}^{2}}^{\prime\prime}H_{\tau}(x_{1},x_{2})W(dx_{1})W(dx_{2}),$
(4.24)
with
$H_{\tau}(x_{1},x_{2})=|x_{1}x_{2}|^{-\alpha/2}\int_{\mathbb{R}}\left[\frac{e^{i\tau(x_{1}+u)}-1}{i(x_{1}+u)}\right]\cdot\left[\frac{e^{i\tau(x_{2}-u)}-1}{i(x_{2}-u)}\right]|u|^{-\beta}du~{},$
(4.25)
where $W(\cdot)$ is a complex Gaussian random measure with Lebesgue control
measure, and the double prime in the integral (4.24) indicates that the
integration excludes the diagonals $x_{1}=\pm x_{2}$.
###### Remark 4.5.
Let $P_{T}$ and $P$ denote the measures generated in $C[0,1]$ by the processes
$Z_{T}(\tau)$ and $Z(\tau)$ given by (4.23) and (4.24), respectively. Then
Theorem 4.6 can be restated as follows: under the conditions of Theorem 4.6,
the measure $P_{T}$ converges weakly in $C[0,1]$ to the measure $P$ as
$T\rightarrow\infty$. Similar assertions can be stated for Theorems 4.3 and
4.4.
###### Remark 4.6.
Comparing Theorems 4.5 and 4.6, we see that the limit process $Z(\tau)$ is the
same for both continuous- and discrete-time models. It is also worth noting
that although the statements of Theorems 4.5 and 4.6 are similar, the proofs
are different. The proof of Theorem 4.5, given in Terrin and Taqqu [101], uses
hard technical analysis based on power counting theorems, while the proof of
Theorem 4.6, given in Bai et al. [9], is simple and uses the spectral
representation of the underlying process and properties of Wiener-Itô
integrals.
## 5 Functional limit theorems for Lévy-driven linear models
In this section, we survey results involving functional central and non-
central limit theorems for a suitably normalized stochastic process:
$Q_{T}(\tau):=\int_{0}^{T\tau}\int_{0}^{T\tau}\widehat{g}(t-s)X(t)X(s)dtds,\quad\tau\in[0,1]$
(5.1)
in the general case where the underlying model $\\{X(t),\ t\in\mathbb{R}\\}$
is a c.t. linear process driven by Lévy noise $\xi(t)$ with a time-invariant
filter $a(\cdot)$, that is,
$X(t)=\int_{\mathbb{R}}a(t-s)d\xi(s),\quad\int_{\mathbb{R}}|a(s)|^{2}ds<\infty.$
(5.2)
These theorems, which were stated and proved in Bai et al. [9], extend the
results stated in Section 4 for Wiener-driven processes and show that under
some $(L^{p},L^{q})$-type conditions imposed on the filter $a(\cdot)$ and the
generating kernel $\widehat{g}(\cdot)$ of the quadratic functional, the
process $Q_{T}(\tau)$ obeys a central limit theorem, that is, the finite-
dimensional distributions of the standard $\sqrt{T}$ normalized process
$Q_{T}(\tau)$ in (5.1) tend to those of a normalized standard Brownian motion.
In contrast, when the functions $a(\cdot)$ and $\widehat{g}(\cdot)$ decay
slowly at a power-law rate, we have a non-central limit theorem for
$Q_{T}(\tau)$, that is, the finite-dimensional distributions of the process $Q_{T}(\tau)$,
normalized by $T^{\gamma}$ for some $\gamma>1/2$, tend to those of a non-
Gaussian non-stationary-increment self-similar process which can be
represented by a double stochastic Wiener-Itô integral on $\mathbb{R}^{2}$.
We point out that the proofs of the central limit theorems given in Bai et al.
[9] were based on an approximation approach, which reduces the quadratic
integral form to a single integral form, while the proofs of the non-central
limit theorems used the spectral representation of the underlying process,
the properties of Wiener-Itô integrals, and a continuous analog of a method for
establishing convergence in distribution of quadratic functionals to double
Wiener-Itô integrals, developed by Surgailis [94] (see also Giraitis et al.
[59]).
It is worth noting that if the underlying process $X(t)$ is not necessarily
Gaussian, additional complications arise due to the contribution of the random
diagonal term in the double stochastic integral with respect to Lévy noise,
which is not present in the case of Gaussian noise (see Remark 5.2 below). For
this model, in Avram et al. [7], a central limit theorem for the quadratic
functional $Q_{T}(1)$ was stated (without proof) under some
$(L^{p},L^{q})$-type conditions imposed on the spectral density $f(\lambda)$
and the generating function $g(\lambda)$ (see Remark 5.5 below). For a related
study of the sample covariances of Lévy-driven moving average processes we
refer to the papers by Cohen and Lindner [22], and Spangenberg [93].
In this section, we follow the paper of Bai et al. [9] and use the following
notation.
The symbol $\ast$ will stand for the convolution:
$(\phi\ast\psi)(u)=\int_{\mathbb{R}}\phi(u-x)\psi(x)dx,$
while the symbol $\bar{\ast}$ denotes the reversed convolution:
$(\phi\bar{\ast}\psi)(u)=\int_{\mathbb{R}}\phi(u+x)\psi(x)dx,\quad\phi^{\bar{\ast}2}:=\phi\bar{\ast}\phi.$
By $\mathcal{F}$ and $\mathcal{F}^{-1}$ we denote the Fourier and the inverse
Fourier transforms:
$(\mathcal{F}\phi)(u)=\widehat{\phi}(u)=\int_{\mathbb{R}}e^{ixu}\phi(x)dx,\quad(\mathcal{F}^{-1}\phi)(u)=\frac{1}{2\pi}\int_{\mathbb{R}}e^{-ixu}\phi(x)dx.$
We will use the following well-known identities:
$\mathcal{F}(\phi\ast\psi)=\mathcal{F}(\phi)\cdot\mathcal{F}(\psi)\quad\text{and}\quad\mathcal{F}(\phi\bar{\ast}\psi)=\mathcal{F}(\phi)\cdot\overline{\mathcal{F}(\psi)}.$
(5.3)
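The identities (5.3) can be sanity-checked numerically (an illustration, not part of the survey). The Gaussian test functions and the evaluation points below are arbitrary choices, and all integrals are approximated by Riemann sums on a truncated grid.

```python
import numpy as np

# Grid for Riemann-sum approximations of integrals over R.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

phi = np.exp(-x**2)                 # test function phi (arbitrary choice)
psi = np.exp(-2.0 * (x - 1.0)**2)   # test function psi (arbitrary choice)

def ft(vals, u):
    """(F f)(u) = int e^{ixu} f(x) dx, approximated by a Riemann sum."""
    return np.sum(np.exp(1j * x * u) * vals) * dx

# (phi * psi)(u) and the reversed convolution (phi bar* psi)(u) on the grid.
conv_vals = np.array([np.sum(np.exp(-(u - x)**2) * psi) * dx for u in x])
rconv_vals = np.array([np.sum(np.exp(-(u + x)**2) * psi) * dx for u in x])

err = 0.0
for u in (0.0, 0.7, -1.3):
    # F(phi * psi) = F(phi) F(psi)
    err = max(err, abs(ft(conv_vals, u) - ft(phi, u) * ft(psi, u)))
    # F(phi bar* psi) = F(phi) conj(F(psi))  (psi is real-valued here)
    err = max(err, abs(ft(rconv_vals, u) - ft(phi, u) * np.conj(ft(psi, u))))
assert err < 1e-8
```

Since the Gaussians decay rapidly and vanish at the grid boundary, the Riemann sums are essentially exact, and both identities hold up to rounding error.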
### 5.1 Central limit theorems
Let $\\{X(t),\ t\in\mathbb{R}\\}$ be a centered real-valued linear process
given by (5.2) with filter $a(\cdot)\in L^{2}(\mathbb{R})$. Recall that the
covariance function $r(t)$ and the spectral density $f(\lambda)$ of $X(t)$ are
given by formulas (2.13) and (2.14), respectively.
The theorem that follows contains $(L^{p},L^{q})$-type time-domain sufficient
conditions for the process $Q_{T}(\tau)$ to obey the central limit theorem
(see Bai et al. [9]).
###### Theorem 5.1.
Let $X(t)$ be as in (5.2), and let $Q_{T}(\tau)$ be as in (5.1). Assume that
$a(\cdot)\in L^{p}(\mathbb{R})\cap
L^{2}(\mathbb{R}),\quad\widehat{g}(\cdot)\in L^{q}(\mathbb{R})$ (5.4)
with
$1\leq p,q\leq 2,\quad\frac{2}{p}+\frac{1}{q}\geq\frac{5}{2}.$ (5.5)
Then
$\widetilde{Q}_{T}(\tau):=T^{-1/2}\left(Q_{T}(\tau)-\mathbb{E}[Q_{T}(\tau)]\right)\overset{f.d.d.}{\longrightarrow}\sigma
B(\tau),$ (5.6)
where the symbol $\overset{f.d.d.}{\longrightarrow}$ stands for convergence of
finite-dimensional distributions, $B(\tau)$ is a standard Brownian motion, and
$\sigma^{2}=\int_{\mathbb{R}}[2K_{A}(v)+\kappa_{4}K_{B}(v)]~{}dv,$ (5.7)
where $\kappa_{4}$ is the fourth cumulant of $\xi(1)$, and
$K_{A}(v)=\Big{(}(a\ast b)^{\bar{\ast}2}\cdot a^{\bar{\ast}2}\Big{)}(v),\quad
K_{B}(v)=\Big{(}(a{\ast}b)\cdot a\Big{)}^{\bar{\ast}2}(v).$ (5.8)
###### Remark 5.1.
Young’s convolution inequality (see, e.g., Bogachev [15], Theorem 3.9.4)
states that for any numbers $p,p_{1},q$ satisfying $1\leq p,q\leq
p_{1}\leq\infty$ and $\frac{1}{p_{1}}=\frac{1}{p}+\frac{1}{q}-1$, and for any
functions $f\in L^{p}(\mathbb{R})$, $g\in L^{q}(\mathbb{R})$, the function
$f\ast g$ is defined almost everywhere, $f\ast g\in L^{p_{1}}(\mathbb{R})$,
and
$\|f\ast g\|_{p_{1}}\leq\|f\|_{p}\|g\|_{q}.$ (5.9)
Applying this inequality to the convolution in (2.13), we get
$\|r\|_{p_{1}}\leq\|a\|_{p}^{2}<\infty$, where $1+1/p_{1}=2/p$. Hence the
relations (5.4) and (5.5) imply that
$r(\cdot)\in L^{p_{1}}(\mathbb{R}),\quad b(\cdot)\in
L^{q}(\mathbb{R}),\quad\frac{1}{p_{1}}+\frac{1}{q}=\frac{2}{p}-1+\frac{1}{q}\geq\frac{5}{2}-1=\frac{3}{2}.$
(5.10)
The condition (5.10) is sufficient for the convergence in Theorem 5.1 to hold
in the case where $\xi(t)$ is Brownian motion (see Theorem 4.3). In fact, in
this case, the convergence in Theorem 5.1 holds under an even weaker condition
imposed on the generating function $g(\lambda)$ and the spectral density
$f(\lambda)$ of $X(t)$ (see Theorem 4.2).
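Young's inequality (5.9) is easy to illustrate numerically (a sketch, not part of the survey). The script below takes the admissible exponents $p=1$, $q=p_{1}=2$ and arbitrary test functions, approximating the norms and the convolution by Riemann sums.

```python
import numpy as np

x = np.linspace(-15.0, 15.0, 3001)
dx = x[1] - x[0]

f = np.exp(-np.abs(x))                     # f in L^1(R)
g = np.exp(-x**2) * (1.0 + np.sin(3 * x))  # g in L^2(R) (arbitrary choice)

def lp_norm(vals, p):
    """||v||_p, approximated by a Riemann sum."""
    return (np.sum(np.abs(vals)**p) * dx) ** (1.0 / p)

# (f * g)(u) on the grid, by a Riemann sum.
fg = np.array([np.sum(np.exp(-np.abs(u - x)) * g) * dx for u in x])

p, q, p1 = 1.0, 2.0, 2.0   # admissible: 1/p1 = 1/p + 1/q - 1
lhs = lp_norm(fg, p1)
rhs = lp_norm(f, p) * lp_norm(g, q)
assert lhs <= rhs          # Young: ||f * g||_{p1} <= ||f||_p ||g||_q
```

The inequality holds with a strict gap here; the discretization error is negligible relative to that gap.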
###### Remark 5.2.
In contrast to the cases where the model is either a d.t. linear process (see
Giraitis and Surgailis [60]), or a c.t. Gaussian process (see Bai et al. [8]),
it is convenient to impose the time-domain conditions (5.4) and (5.5) on the
functions $a(\cdot)$ and $\widehat{g}(\cdot)$, instead of on the spectral
density $f(\lambda)$ and the generating function $g(\lambda)$. This allows the
analysis of the random diagonal term which arises from the double stochastic
integral with respect to a non-Gaussian Lévy process. In the d.t. case the
random diagonal term is estimated by the full double sum (see Giraitis and
Surgailis [60], relation (2.3)), while in the c.t. Gaussian case, there is no
such random diagonal term (see Ginovyan and Sahakyan [53]). In the c.t. non-
Gaussian case, there is a random diagonal term in the form of a single
stochastic integral that cannot be controlled by the double integral, and
hence it should be treated separately (see the proof of Theorem 2.1 of Bai et
al. [9]).
###### Remark 5.3.
Observe that the long-run variance $\sigma^{2}$ given by (5.7) can be
expressed in terms of the spectral density $f(\lambda)$ and the generating
function $g(\lambda)$, provided that these functions satisfy some regularity
conditions. Indeed, using the equalities in (5.3) and the Parseval-Plancherel
theorem, under suitable integrability conditions on $a(\cdot)$ and $b(\cdot)$,
we can write
$\displaystyle\int_{\mathbb{R}}K_{A}(v)dv$
$\displaystyle=\int_{\mathbb{R}}(a\ast
b)^{\bar{\ast}2}(v)a^{\bar{\ast}2}(v)dv=\frac{1}{2\pi}\int_{\mathbb{R}}\mathcal{F}\left((a\ast
b)^{\bar{\ast}2}\right)(\lambda)\overline{\mathcal{F}\left(a^{\bar{\ast}2}\right)(\lambda)}d\lambda=$
$\displaystyle=\frac{1}{2\pi}\int_{\mathbb{R}}|\mathcal{F}(a\ast
b)(\lambda)|^{2}|\mathcal{F}(a)(\lambda)|^{2}d\lambda=\frac{1}{2\pi}\int_{\mathbb{R}}|\widehat{a}(\lambda)\widehat{b}(\lambda)|^{2}|\widehat{a}(\lambda)|^{2}d\lambda$
$\displaystyle=8\pi^{3}\int_{\mathbb{R}}f(\lambda)^{2}g(\lambda)^{2}d\lambda,$
where in the last equality we used the facts that $|\widehat{a}|^{2}=2\pi f$ and
$\widehat{b}=2\pi g$ (because $b(\cdot)$ is an even function). Similarly, we
have
$\displaystyle\int_{\mathbb{R}}K_{B}(v)dv$
$\displaystyle=\int_{\mathbb{R}}dv\int_{\mathbb{R}}dx\Big{(}(a{\ast}b)\cdot
a\Big{)}(x)\Big{(}(a{\ast}b)\cdot
a\Big{)}(x+v)=\left(\int_{\mathbb{R}}(a{\ast}b)(x)a(x)dx\right)^{2}$
$\displaystyle=\frac{1}{4\pi^{2}}\left(\int_{\mathbb{R}}\widehat{a}(\lambda)\widehat{b}(\lambda)\overline{\widehat{a}(\lambda)}d\lambda\right)^{2}=4\pi^{2}\left[\int_{\mathbb{R}}f(\lambda)g(\lambda)d\lambda\right]^{2}.$
So an alternative expression for $\sigma^{2}$ in (5.7) is
$\sigma^{2}=16\pi^{3}\int_{\mathbb{R}}f(\lambda)^{2}g(\lambda)^{2}d\lambda+\kappa_{4}\left[2\pi\int_{\mathbb{R}}f(\lambda)g(\lambda)d\lambda\right]^{2},$
(5.11)
which should be compared with Avram et al. [7] (Theorem 4.1), and formula
(3.15) for an analogous expression in the d.t. case.
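The Parseval computations above can be verified numerically for one concrete pair (a sketch, not part of the survey). With the illustrative choices $a(x)=e^{-x^{2}/2}$ and $b(x)=e^{-x^{2}}$ one has $f(\lambda)=e^{-\lambda^{2}}$ and $g(\lambda)=\frac{\sqrt{\pi}}{2\pi}e^{-\lambda^{2}/4}$, and the time-domain integrals of $K_{A}$ and $K_{B}$ from (5.8) should match the spectral expressions entering (5.11).

```python
import numpy as np

N = 1501
x = np.linspace(-15.0, 15.0, N)   # symmetric grid, odd N (so np.convolve "same" aligns)
dx = x[1] - x[0]

a = np.exp(-x**2 / 2)   # filter: F a(l) = sqrt(2 pi) e^{-l^2/2}, so f(l) = e^{-l^2}
b = np.exp(-x**2)       # kernel: F b(l) = sqrt(pi) e^{-l^2/4},  so g(l) = F b(l)/(2 pi)

ab = np.convolve(a, b, mode="same") * dx   # (a * b) sampled on the grid

def rev2(phi):
    """phi^{bar*2}(v) = int phi(v+x) phi(x) dx, sampled at lags k*dx."""
    return np.correlate(phi, phi, mode="full") * dx

int_KA = np.sum(rev2(ab) * rev2(a)) * dx   # time-domain int K_A(v) dv, cf. (5.8)
int_KB = (np.sum(ab * a) * dx) ** 2        # time-domain int K_B(v) dv = (int (a*b) a)^2

lam = np.linspace(-15.0, 15.0, 3001)
dl = lam[1] - lam[0]
f = np.exp(-lam**2)
g = np.sqrt(np.pi) / (2.0 * np.pi) * np.exp(-lam**2 / 4)

spec_KA = 8.0 * np.pi**3 * np.sum(f**2 * g**2) * dl    # = (first term of (5.11)) / 2
spec_KB = 4.0 * np.pi**2 * (np.sum(f * g) * dl) ** 2   # = coefficient of kappa_4 in (5.11)
assert abs(int_KA - spec_KA) < 1e-3 * spec_KA
assert abs(int_KB - spec_KB) < 1e-3 * spec_KB
```

For this Gaussian pair the $K_{B}$ integral can also be computed by hand: $\int(a\ast b)a\,dx=2\pi/\sqrt{5}$, so $\int K_{B}=4\pi^{2}/5$, in agreement with $4\pi^{2}(\int fg)^{2}$.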
###### Remark 5.4.
The d.t. analog of Theorem 5.1 with $\tau=1$ and $\xi$ being Gaussian was
established in Giraitis and Surgailis [60]. A special case of Theorem 5.1 with
$\tau=1$ and $\xi$ being Gaussian was established in Ginovian [45], and in
Ginovian and Sahakyan [53]. Theorem 5.1 for the Wiener-driven model
($\kappa_{4}=0$) was proved in Bai et al. [8].
###### Remark 5.5.
For the Lévy-driven model with $\tau=1$ and $\sigma^{2}$ given by (5.11), a version
of Theorem 5.1 was stated in Avram et al. [7] (Theorem 4.1). They impose
$(L^{p},L^{q})$-type conditions on the spectral density $f(\cdot)$ and the
generating function $g(\cdot)$, and assume the existence of all moments of the
driving Lévy process $\xi(t)$. The details of the proof of Theorem 4.1 in
Avram et al. [7] are unfortunately omitted, and it is not clear how the omitted
details of the method-of-moments proof can be carried out, given the complexity
of computing the moments of multiple integrals with respect to non-Gaussian
Lévy noise (see Peccati and Taqqu [84], Chapter 7).
The following corollary contains sufficient conditions for the assumptions in
Theorem 5.1 to hold. These conditions involve bounds on the tails of the functions
$a(\cdot)$ and $b(\cdot)$ by suitable power functions (see Bai et al. [9]).
###### Corollary 5.1.
The convergence in (5.6) holds if the functions $a(\cdot)$ and $b(\cdot)$
satisfy the following conditions:
$a(\cdot),\,\,b(\cdot)\in L^{\infty}(\mathbb{R}),\quad|a(x)|\leq
c|x|^{\alpha/2-1},\quad|b(x)|\leq c|x|^{\beta-1}$ (5.12)
with
$0<\alpha,\beta<1,\quad\alpha+\beta<1/2.$
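As an illustration of the CLT regime in Theorem 5.1 (not part of the survey), the following Monte Carlo sketch discretizes the Wiener-driven case ($\kappa_{4}=0$). The choices $a(x)=e^{-|x|}$ and $\widehat{g}(x)=e^{-x^{2}}$ satisfy (5.4)-(5.5) with $p=1$, $q=2$; the horizon $T$, step $dt$, replicate count, and the loose diagnostic thresholds are all arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

T, dt, reps = 300.0, 0.25, 800
t = np.arange(0.0, T, dt)        # observation grid for Q_T(1)
s = np.arange(-15.0, T, dt)      # noise grid (the filter tail e^{-|x|} is negligible beyond)

A = np.exp(-np.abs(t[:, None] - s[None, :]))    # a(t - s),   a(x) = e^{-|x|}
G = np.exp(-((t[:, None] - t[None, :])**2))     # ghat(t - s) = e^{-(t-s)^2}

dxi = rng.standard_normal((reps, len(s))) * np.sqrt(dt)  # Brownian increments dxi(s)
X = dxi @ A.T                                   # X(t) = int a(t - s) dxi(s), one row per replicate
Q = ((X @ G) * X).sum(axis=1) * dt * dt         # Q_T(1) as a Riemann double sum

z = (Q - Q.mean()) / Q.std()                    # standardized samples of Q_T(1)
skew = np.mean(z**3)
coverage = np.mean(np.abs(z) < 1.96)
assert abs(skew) < 0.8           # roughly symmetric, as the CLT predicts for large T
assert 0.90 < coverage < 0.99    # near-Gaussian two-sided 95% coverage
```

At finite $T$ the quadratic functional retains some right-skewness (it is a positive quadratic form), which decays like $T^{-1/2}$; the thresholds above are deliberately loose.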
### 5.2 Non-central limit theorems
We now state the non-central limit theorems. We make the following assumptions
on the functions $a(\cdot)$ and $b(\cdot)$, and on their Fourier transforms
$\widehat{a}(\cdot)$ and $\widehat{b}(\cdot)$.
###### Assumption 5.1.
The Fourier transform $\widehat{a}(\cdot)$ of $a(\cdot)\in L^{2}(\mathbb{R})$
satisfies
$\widehat{a}(x)=A(x)|x|^{-\alpha/2}L_{1}^{1/2}(x),$
where $L_{1}(x)$ is an even non-negative function slowly varying at zero and
bounded on intervals $[c,\infty)$ for any $c>0$, and $A(x)$ is a complex-
valued function satisfying $|A(x)|=1$, and $\lim_{x\rightarrow
0^{+}}A(x)=A_{0}$ for some $A_{0}$ on the complex unit circle (since
$\widehat{a}(-x)=\overline{\widehat{a}(x)}$, we also have $\lim_{x\rightarrow
0^{-}}A(x)=\overline{A_{0}}$).
###### Assumption 5.2.
The generating function $\widehat{b}(\cdot)\in L^{1}(\mathbb{R})$ and
satisfies
$\widehat{b}(x)=|x|^{-\beta}L_{2}(x),$
where $L_{2}(x)$ is an even non-negative function slowly varying at zero and
bounded on intervals $[c,\infty)$ for any $c>0$.
###### Assumption 5.3.
The parameters $\alpha$ and $\beta$ above satisfy
$-1/2<\alpha<1,\quad-1/2<\beta<1,\quad\alpha+\beta>1/2.$ (5.13)
###### Assumption 5.4.
There exist numbers $\alpha^{*}$ and $\beta^{*}$ satisfying
$0<\alpha^{*},\beta^{*}<1,\quad 1<\alpha^{*}+\beta^{*}<\alpha+\beta+1/2,$
such that
$|a(x)|\leq C|x|^{\alpha^{*}/2-1},\quad|b(x)|\leq C|x|^{\beta^{*}-1}.$
The proof of the following theorem can be found in Bai et al. [9].
###### Theorem 5.2.
Suppose that Assumptions 5.1–5.4 hold. Then as $T\rightarrow\infty$ we have
$Z_{T}(\tau):=\frac{1}{T^{\alpha+\beta}L_{1}(1/T)L_{2}(1/T)}\left(Q_{T}(\tau)-\mathbb{E}Q_{T}(\tau)\right)\overset{f.d.d.}{\longrightarrow}Z_{\alpha,\beta}(\tau),$
(5.14)
where
$Z_{\alpha,\beta}(\tau)=\frac{1}{2\pi}\int_{\mathbb{R}^{2}}^{\prime\prime}|x_{1}x_{2}|^{-\alpha/2}\int_{\mathbb{R}}\frac{e^{i\tau(x_{1}+u)}-1}{i(x_{1}+u)}\frac{e^{i\tau(x_{2}-u)}-1}{i(x_{2}-u)}|u|^{-\beta}du~{}W(dx_{1})W(dx_{2}),$
(5.15)
where the double prime ′′ indicates the exclusion of the diagonals
$x_{1}=\pm x_{2}$, and $W(\cdot)$ is a complex-valued Brownian
motion (see Section 8.5).
###### Remark 5.6.
The regular variation conditions on $\widehat{a}(\cdot)$ and
$\widehat{b}(\cdot)$ in Assumptions 5.1–5.3 generally do not follow from
the corresponding regular variation conditions imposed on the inverse Fourier
transforms $a(\cdot)$ and $b(\cdot)$. This implication only holds under some
additional assumptions on the slowly varying factors $L_{1}(\cdot)$ and
$L_{2}(\cdot)$ of $a(\cdot)$ and $b(\cdot)$. For instance, it will hold if we
have (see Bingham et al. [13], formula (4.3.7))
$a(x)=x^{\alpha/2-1}\ell_{1}(x)1_{[0,\infty)}(x),\quad
b(x)=|x|^{\beta-1}\ell_{2}(x),$ (5.16)
where $0<\alpha<1$, $0<\beta<1$, $\alpha+\beta>1/2$, and $\ell_{1}(x)$ and
$\ell_{2}(x)$ are even non-negative functions which are locally bounded,
slowly varying at infinity and _quasi-monotone_. Recall that a slowly varying
function $l(\cdot)$ is said to be quasi-monotone if it has locally bounded
variation, and for all $\delta>0$, one has (see Bingham et al. [13], Section
2.7):
$\int_{0}^{x}t^{\delta}|d\ell(t)|=O(x^{\delta}\ell(x))\quad{\rm as}\quad
x\to\infty.$
A sufficient condition for a slowly varying $\ell(x)$ with locally bounded
variation to be quasi-monotone is that $x^{\delta}\ell(x)$ is increasing and
$x^{-\delta}\ell(x)$ is decreasing when $x$ is large enough, for any
$\delta>0$ (see Theorem 1.5.5 and Corollary 2.7.4 in Bingham et al. [13]).
Notice also that Assumption 5.4 will be satisfied if (5.16) holds.
###### Remark 5.7.
Let the functions $a(\cdot)$ and $b(\cdot)$ be as in (5.16) with $\alpha<0$ or
$\beta<0$ (by (5.13) only one of $\alpha$ and $\beta$ can be negative). Assume
that $\alpha<0$ and $\beta>0$. Then for the corresponding regular variation of
$\widehat{a}(\cdot)$ to hold, one needs to impose in addition that
$\int_{0}^{\infty}a(x)dx=0$. In this case, one does not need to assume quasi-
monotonicity for $\ell_{1}$ (see Corollary 1.40 of Soulier [92]). Similar
considerations hold if $\beta<0$ and $\alpha>0$ instead.
###### Remark 5.8.
Note that Assumption 5.1 holds with $\alpha=0$ if $a(\cdot)\in
L^{1}(\mathbb{R})$ and $\int_{0}^{\infty}a(x)dx\neq 0$, and Assumption 5.2 holds
with $\beta=0$ if $b(\cdot)\in L^{1}(\mathbb{R})$ and
$\int_{0}^{\infty}b(x)dx\neq 0$.
The next theorem contains time-domain representations for the limiting process
$Z_{\alpha,\beta}(\tau)$ in (5.15) in the case $\alpha,\beta\geq 0$ (see Bai
et al. [9], Theorem 2.3).
###### Theorem 5.3.
The limiting process $Z_{\alpha,\beta}(\tau)$ in (5.15) admits the following
time-domain representations:
1. (a)
when $\alpha>0$, $\beta>0$:
$Z_{\alpha,\beta}(\tau)\overset{f.d.d.}{=}c_{\alpha,\beta}\int_{\mathbb{R}^{2}}^{\prime}\int_{0}^{\tau}\int_{0}^{\tau}|u-v|^{\beta-1}(u-x_{1})_{+}^{\alpha/2-1}(v-x_{2})_{+}^{\alpha/2-1}dudv~{}B(dx_{1})B(dx_{2}),$
(5.17)
where
$c_{\alpha,\beta}=\frac{\Gamma(1-\beta)\sin(\beta\pi/2)}{\pi\Gamma(\alpha/2)^{2}};$
2. (b)
when $\alpha>1/2$, $\beta=0$:
$Z_{\alpha,\beta}(\tau)\overset{f.d.d.}{=}c_{\alpha}\int_{\mathbb{R}^{2}}^{\prime}\int_{0}^{\tau}(u-x_{1})_{+}^{\alpha/2-1}(u-x_{2})_{+}^{\alpha/2-1}du~{}B(dx_{1})B(dx_{2}),$
(5.18)
where
$c_{\alpha}=\frac{\sin(\alpha\pi/2)\Gamma(1-\alpha/2)}{\pi\Gamma(\alpha/2)};$
3. (c)
when $\alpha=0$, $\beta>1/2$:
$Z_{\alpha,\beta}(\tau)\overset{f.d.d.}{=}c_{\beta}\int_{[0,\tau]^{2}}^{\prime}|x_{1}-x_{2}|^{\beta-1}~{}B(dx_{1})B(dx_{2}),$
(5.19)
where
$c_{\beta}=\frac{\Gamma(1-\beta)\sin(\beta\pi/2)}{\pi}.$
Here $B(\cdot)$ is the real Brownian random measure and the prime ′ in the
integrals indicates the exclusion of the diagonals.
###### Remark 5.9.
In view of (5.1) and (5.16), the representation (5.17) gives explicit
insight into the convergence in Theorem 5.2 (see Theorem 5.4 below). The process
in (5.18) is known as the Rosenblatt process (see Taqqu [97]), and the
corresponding convergence in Theorem 5.2 is the c.t. analog of the d.t. case
considered in Fox and Taqqu [37]. The representation (5.19) arises because
for $\alpha=0$ the underlying process $X(t)$ has short memory, and in this
case one expects that in the limit $X(t)dt$ in (5.1) can be replaced by
the white noise $B(dt)$.
###### Remark 5.10.
It is of interest to obtain appropriate elementary expressions for the time-
domain representation of the limiting process $Z_{\alpha,\beta}(\tau)$,
similar to (5.17) - (5.19), in the cases where either $\alpha$ or $\beta$
satisfying (5.13) is negative.
Using the time-domain representation (5.17), one can state a non-central limit
theorem in the case where $\alpha,\beta>0$ without going to the spectral
domain. This simplifies the assumptions imposed on the functions $a(\cdot)$
and $b(\cdot)$ (see Bai et al. [9]).
###### Theorem 5.4.
Suppose that the functions $a(\cdot)$ and $b(\cdot)$ are given by (5.16),
where $0<\alpha<1$, $0<\beta<1$, $\alpha+\beta>1/2$, and $\ell_{1}(x)$ and
$\ell_{2}(x)$ are even functions slowly varying at infinity and bounded on
bounded intervals. Then as $T\rightarrow\infty$ we have
$\displaystyle\frac{1}{T^{\alpha+\beta}\ell_{1}(T)\ell_{2}(T)}\left(Q_{T}(\tau)-\mathbb{E}[Q_{T}(\tau)]\right)\overset{f.d.d.}{\longrightarrow}$
$\displaystyle\int_{\mathbb{R}^{2}}^{\prime}\int_{0}^{\tau}\int_{0}^{\tau}|u-v|^{\beta-1}(u-x_{1})_{+}^{\alpha/2-1}(v-x_{2})_{+}^{\alpha/2-1}dudv~{}B(dx_{1})B(dx_{2}).$
## 6 CLT for Tapered Toeplitz Quadratic Functionals
### 6.1 The problem
In this section we consider the asymptotic distribution (as
$T\to\infty$) of the following tapered Toeplitz type quadratic functionals of
the centered stationary process $X(u)$, $u\in\mathbb{U}$, with spectral
density $f(\lambda)$, $\lambda\in\Lambda$:
$Q_{T}^{h}:=\left\\{\begin{array}[]{ll}\sum_{t=1}^{T}\sum_{s=1}^{T}\widehat{g}(t-s)h_{T}(t)h_{T}(s)X(t)X(s)&\mbox{in
the d.t.\ case},\\\ \\\
\int_{0}^{T}\int_{0}^{T}\widehat{g}(t-s)h_{T}(t)h_{T}(s)X(t)X(s)\,dt\,ds&\mbox{in
the c.t.\ case},\end{array}\right.$ (6.1)
where $\widehat{g}(t)$ is the Fourier transform of some integrable even
function $g(\lambda)$ and
$h_{T}(t):=h(t/T)$ (6.2)
with a taper function $h(\cdot)$ to be specified below.
The limit distribution of the functional (6.1) is completely determined by the
functions $f$, $g$ and $h$, and depending on their properties it can be either
Gaussian (that is, $Q^{h}_{T}$ with an appropriate normalization obeys a central
limit theorem) or non-Gaussian.
We discuss here the case where the limit distribution is Gaussian, and present
sufficient conditions, in terms of the functions $f$, $g$ (and $h$), ensuring a
central limit theorem for the standard normalized tapered quadratic functional
$Q_{T}^{h}$.
We will assume that the taper function $h(\cdot)$ satisfies the following
assumption.
###### Assumption 6.1.
The taper $h:\mathbb{R}\to\mathbb{R}$ is a continuous nonnegative function of
bounded variation and of bounded support $[0,1]$, such that $H_{k}\neq 0$,
where
$H_{k}:=\int_{0}^{1}h^{k}(t)dt,\quad k\in\mathbb{N}:=\\{1,2,\ldots\\}.$ (6.3)
Note. The case where $h(t)={\mathbb{I}}_{[0,1]}(t)$, where
${\mathbb{I}}_{[0,1]}(\cdot)$ denotes the indicator of the segment $[0,1]$,
will be referred to as the non-tapered case.
###### Remark 6.1.
In the d.t. case, an example of a taper function $h(t)$ satisfying Assumption
6.1 is the Tukey-Hanning taper function $h(t)=0.5(1-\cos(\pi t))$ for
$t\in[0,1]$. In the c.t. case, a simple example of a taper function $h(t)$
satisfying Assumption 6.1 is the function $h(t)=1-t$ for $t\in[0,1]$ (see,
e.g., Anh et al. [5]).
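The constants $H_{k}$ in (6.3) are easy to compute for the two tapers just mentioned. The following check (an illustration, not part of the survey) confirms the exact values $H_{1}=1/2$, $H_{2}=3/8$, $H_{4}=35/128$ for the Tukey-Hanning taper, and $H_{k}=1/(k+1)$ for $h(t)=1-t$, all obtainable by direct integration.

```python
import numpy as np

def H(k, h, n=200000):
    """H_k = int_0^1 h(t)^k dt, approximated by the midpoint rule."""
    t = (np.arange(n) + 0.5) / n
    return np.mean(h(t) ** k)

tukey_hanning = lambda t: 0.5 * (1.0 - np.cos(np.pi * t))
linear_taper = lambda t: 1.0 - t

# Tukey-Hanning: H_1 = 1/2, H_2 = 3/8, H_4 = 35/128.
assert abs(H(1, tukey_hanning) - 1.0 / 2.0) < 1e-8
assert abs(H(2, tukey_hanning) - 3.0 / 8.0) < 1e-8
assert abs(H(4, tukey_hanning) - 35.0 / 128.0) < 1e-8
# h(t) = 1 - t: H_k = 1/(k + 1).
assert abs(H(4, linear_taper) - 1.0 / 5.0) < 1e-8
```

The value $H_{4}$ is the one that enters the limiting variances (6.20) and (6.30) below.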
### 6.2 Statistical motivation
Much of statistical inference (parametric and nonparametric estimation,
hypothesis testing) about the spectrum or the covariance of a stationary
process $\\{X(u),\ u\in\mathbb{U}\\}$ is based on the sample:
$\mathbf{X}_{T}:=\left\\{\begin{array}[]{ll}\\{X(t),\,t=1,\ldots,T\\}&\mbox{in
the d.t.\ case},\\\ \\{X(t),\,0\leq t\leq T\\}&\mbox{in the c.t.\
case}\end{array}\right.$ (6.4)
In the statistical analysis of stationary processes, however, the data are
frequently tapered before calculating the statistics of interest, and the
statistical inference procedure, instead of the original data $\mathbf{X}_{T}$
given by (6.4), is based on the tapered data $\mathbf{X}_{T}^{h}$:
$\mathbf{X}_{T}^{h}:=\left\\{\begin{array}[]{ll}\\{h_{T}(t)X(t),\,t=1,\ldots,T\\}&\mbox{in
the d.t.\ case},\\\ \\{h_{T}(t)X(t),\,0\leq t\leq T\\}&\mbox{in the c.t.\
case},\end{array}\right.$ (6.5)
with a taper function $h(t)$, $t\in\mathbb{R}$.
The benefits of tapering the data have been widely reported in the literature
(see, e.g., Bloomfield [14], Brillinger [17], Dahlhaus [24, 25, 26], Dahlhaus
and Künsch [28], Guyon [69], and references therein). For example, data tapers
are introduced to reduce the so-called 'leakage effects', that is, to obtain a
better estimate of the spectrum of the model in the case where it contains
high peaks. Another application of data tapers is to situations in which some of
the data values are missing. Also, the use of tapers leads to bias reduction,
which is especially important when dealing with spatial data. In this case,
tapers can be used to mitigate the so-called 'edge effects'.
Quadratic functionals of the form (6.1) appear both in nonparametric and
parametric estimation of the spectrum of the process $X(t)$ based on the
tapered data (6.5). For instance, when we are interested in nonparametric
estimation of a linear integral functional in $L^{p}(\Lambda)$, $p>1$, of the
form:
$J=J(f):=\int_{\Lambda}f(\lambda)g(\lambda)d\lambda,$ (6.6)
where $g(\lambda)\in L^{q}(\Lambda)$, $1/p+1/q=1$, then a natural statistical
estimator for $J(f)$ is the linear integral functional of the empirical
spectral density (periodogram) of the process $X(t)$. To define this
estimator, we first introduce some notation.
Denote by $H_{k,T}(\lambda)$ the tapered Dirichlet type kernel, defined by
$H_{k,T}(\lambda):=\left\\{\begin{array}[]{ll}\sum_{t=1}^{T}h_{T}^{k}(t)e^{-i\lambda
t}&\mbox{in the d.t.\ case},\\\ \\\\[-2.84526pt]
\int_{0}^{T}h_{T}^{k}(t)e^{-i\lambda t}dt&\mbox{in the c.t.\
case}.\end{array}\right.$ (6.7)
Define the finite Fourier transform of the tapered data (6.5):
$d^{h}_{T}(\lambda):=\left\\{\begin{array}[]{ll}\sum_{t=1}^{T}h_{T}(t)X(t)e^{-i\lambda
t}&\mbox{in the d.t.\ case},\\\ \\\\[-2.84526pt]
\int_{0}^{T}h_{T}(t)X(t)e^{-i\lambda t}dt&\mbox{in the c.t.\
case},\end{array}\right.$ (6.8)
and the tapered periodogram $I^{h}_{T}(\lambda)$ of the process $X(t)$:
$\displaystyle
I^{h}_{T}(\lambda):=\frac{1}{C_{T}}\,d^{h}_{T}(\lambda)d^{h}_{T}(-\lambda)=\left\\{\begin{array}[]{ll}\frac{1}{C_{T}}\left|\sum_{t=1}^{T}h_{T}(t)X(t)e^{-i\lambda
t}\right|^{2}&\mbox{in the d.t.\ case},\\\ \\\\[-2.84526pt]
\frac{1}{C_{T}}\left|\int_{0}^{T}h_{T}(t)X(t)e^{-i\lambda
t}dt\right|^{2}&\mbox{in the c.t.\ case}.\end{array}\right.$ (6.12)
where
$C_{T}:=2\pi H_{2,T}(0)=2\pi\int_{0}^{T}h_{T}^{2}(t)dt=2\pi H_{2}\,T\neq 0.$
(6.13)
Notice that in the non-tapered case ($h(t)={\mathbb{I}}_{[0,1]}(t)$), we have
$C_{T}=2\pi T$.
As an estimator $J_{T}^{h}$ for the functional $J(f)$, given by (6.6), based on
the tapered data (6.5), we consider the averaged tapered periodogram (a
simple 'plug-in' statistic), defined by
$\displaystyle J_{T}^{h}$ $\displaystyle=$ $\displaystyle
J(I^{h}_{T}):=\int_{\Lambda}I^{h}_{T}(\lambda)g(\lambda)d\lambda.$ (6.14)
In view of (6.1), (6.12) and (6.14) we have
$\displaystyle
J_{T}^{h}=C_{T}^{-1}Q_{T}^{h}=\left\\{\begin{array}[]{ll}\frac{1}{C_{T}}\sum_{t=1}^{T}\sum_{s=1}^{T}\widehat{g}(t-s)h_{T}(t)h_{T}(s)X(t)X(s)&\mbox{in
the d.t.\ case},\\\ \\\\[-2.84526pt]
\frac{1}{C_{T}}\int_{0}^{T}\int_{0}^{T}\widehat{g}(t-s)h_{T}(t)h_{T}(s)X(t)X(s)\,dt\,ds&\mbox{in
the c.t.\ case},\end{array}\right.$ (6.18)
where $C_{T}$ is as in (6.13), and $\widehat{g}(t)$ is the Fourier transform
of function $g(\lambda)$.
Thus, to study the asymptotic properties of the estimator $J_{T}^{h}$, we have
to study the asymptotic distribution (as $T\to\infty$) of the tapered Toeplitz
type quadratic functional $Q_{T}^{h}$ given by (6.1) (for details see Section
7).
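To make the estimator (6.14) concrete, here is a small d.t. simulation (an illustrative sketch, not from the survey). For Gaussian white noise the spectral density is $f\equiv 1/(2\pi)$; the taper is Tukey-Hanning, the weight $g(\lambda)=e^{-\lambda^{2}}$ is an arbitrary choice, and the integral over $(-\pi,\pi]$ is approximated by a Riemann sum over the Fourier frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 4096
t = np.arange(1, T + 1)
h = 0.5 * (1.0 - np.cos(np.pi * t / T))   # Tukey-Hanning taper h_T(t) = h(t/T)
X = rng.standard_normal(T)                # Gaussian white noise: f(lam) = 1/(2 pi)

C_T = 2.0 * np.pi * np.sum(h**2)          # C_T = 2 pi H_{2,T}(0), cf. (6.13)
d = np.fft.fft(h * X)                     # tapered finite Fourier transform (6.8)
I = np.abs(d)**2 / C_T                    # tapered periodogram (6.12) at 2 pi k / T

k = np.arange(1, T // 2)                  # Fourier frequencies in (0, pi)
lam = 2.0 * np.pi * k / T
g = np.exp(-lam**2)                       # an illustrative even weight g(lambda)

# Averaged tapered periodogram (6.14), using evenness to cover (-pi, pi):
J_hat = 2.0 * np.sum(I[k] * g) * (2.0 * np.pi / T)
J_true = 2.0 * np.sum(g / (2.0 * np.pi)) * (2.0 * np.pi / T)   # int f g on the same grid
assert abs(J_hat - J_true) < 0.1 * J_true
```

For white noise the tapered periodogram is unbiased for $f\equiv 1/(2\pi)$ at each Fourier frequency, so $J_{T}^{h}$ concentrates around $J(f)$; the 10% tolerance is very loose relative to the actual fluctuation at this $T$.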
### 6.3 Central limit theorems for tapered quadratic functional $Q_{T}^{h}$
We will use the following notation. By $\widetilde{Q}^{h}_{T}$ we denote the
standard normalized quadratic functional:
$\widetilde{Q}^{h}_{T}=T^{-1/2}\,\left(Q^{h}_{T}-\mathbb{E}[Q^{h}_{T}]\right).$
(6.19)
Also, we set
$\sigma^{2}_{h}:=16\pi^{3}H_{4}\int_{\Lambda}f^{2}(\lambda)g^{2}(\lambda)\,d\lambda,$
(6.20)
where $H_{4}$ is as in (6.3). The notation
$\widetilde{Q}^{h}_{T}\overset{d}{\rightarrow}\eta\sim
N(0,\sigma^{2}_{h})\quad{\rm as}\quad T\to\infty$ (6.21)
will mean that the distribution of the random variable $\widetilde{Q}^{h}_{T}$
tends (as $T\to\infty$) to the centered normal distribution with variance
$\sigma^{2}_{h}$.
Let $\psi(\lambda)$ be an integrable real symmetric function defined on
$[-\pi,\pi]$, and let $h(t)$, $t\in[0,1]$, be a taper function. For
$T=1,2,\ldots$, the $(T\times T)$-truncated tapered Toeplitz matrix generated
by $\psi$ and $h$, denoted by $B_{T}^{h}(\psi)$, is defined by the following
equation (see (3.4) for the non-tapered case):
$B_{T}^{h}(\psi):=\|\widehat{\psi}(t-s)h_{T}(t)h_{T}(s)\|_{t,s=1,2\ldots,T},$
(6.22)
where $\widehat{\psi}(t)$ $(t\in\mathbb{Z})$ are the Fourier coefficients of
$\psi$.
Given a real number $T>0$ and an integrable real symmetric function
$\psi(\lambda)$ defined on $\mathbb{R}$, the $T$-truncated tapered Toeplitz
operator (also called the tapered Wiener-Hopf operator) generated by $\psi$ and a
taper function $h$, denoted by $W_{T}^{h}(\psi)$, is defined as follows (see
(3.5) for the non-tapered case):
$[{W}^{h}_{T}(\psi)u](t)=\int_{0}^{T}\hat{\psi}(t-s)u(s)h_{T}(s)ds,\quad
u(s)\in L^{2}([0,T];h_{T}),$ (6.23)
where $\hat{\psi}(\cdot)$ is the Fourier transform of $\psi(\cdot)$, and
$L^{2}([0,T];h_{T})$ denotes the weighted $L^{2}$-space with respect to the
measure $h_{T}(t)dt$.
Let $A_{T}^{h}(f)$ be either the $T\times T$ tapered Toeplitz matrix
$B_{T}^{h}(f)$, or the $T$-truncated tapered Toeplitz operator $W_{T}^{h}(f)$
generated by the spectral density $f$ and taper $h$, and let $A_{T}^{h}(g)$
denote either the $T\times T$ tapered Toeplitz matrix, or the $T$-truncated
tapered Toeplitz operator generated by the functions $g$ and $h$.
#### 6.3.1 CLT for Gaussian models
The theorems that follow extend the results of Theorems 3.1 and 3.3 to the
tapered case. We assume that the model process $X(t)$ is Gaussian, and with no
loss of generality, that $g\geq 0$. The following theorems were proved in
Ginovyan and Sahakyan [57].
###### Theorem 6.1.
Assume that $f\cdot g\in L^{1}(\Lambda)\cap L^{2}(\Lambda)$, the taper
function $h$ satisfies Assumption 6.1, and for $T\to\infty$
$\chi_{2}(\widetilde{Q}^{h}_{T})=\frac{2}{T}\hbox{tr}\bigl{[}A^{h}_{T}(f)A^{h}_{T}(g)\bigr{]}^{2}\longrightarrow\sigma^{2}_{h},$
(6.24)
where $\sigma^{2}_{h}$ is as in (6.20). Then
$\widetilde{Q}^{h}_{T}\overset{d}{\rightarrow}\eta\sim N(0,\sigma^{2}_{h})$ as
$T\to\infty$.
###### Theorem 6.2.
Assume that the function
$\varphi(x_{1},x_{2},x_{3})=\int_{\Lambda}f(u)g(u-x_{1})f(u-x_{2})g(u-x_{3})\,du$
(6.25)
belongs to $L^{2}(\Lambda^{3})$ and is continuous at $(0,0,0)$, and the taper
function $h$ satisfies Assumption 6.1. Then
$\widetilde{Q}^{h}_{T}\overset{d}{\rightarrow}\eta\sim N(0,\sigma^{2}_{h})$ as
$T\to\infty$.
###### Theorem 6.3.
Assume that $f(\lambda)\in L^{p}(\Lambda)$ $(p\geq 1)$ and $g(\lambda)\in
L^{q}(\Lambda)$ $(q\geq 1)$ with $1/p+1/q\leq 1/2$, and the taper function $h$
satisfies Assumption 6.1. Then
$\widetilde{Q}^{h}_{T}\overset{d}{\rightarrow}\eta\sim N(0,\sigma^{2}_{h})$ as
$T\to\infty$.
###### Theorem 6.4.
Let $f\in L^{2}(\Lambda)$, $g\in L^{2}(\Lambda)$, $fg\in L^{2}(\Lambda)$,
$\int_{\Lambda}f^{2}(\lambda)g^{2}(\lambda-\mu)\,d\lambda\longrightarrow\int_{\Lambda}f^{2}(\lambda)g^{2}(\lambda)\,d\lambda\quad{\rm
as}\quad\mu\to 0,$ (6.26)
and let the taper function $h$ satisfy Assumption 6.1. Then
$\widetilde{Q}^{h}_{T}\overset{d}{\rightarrow}\eta\sim N(0,\sigma^{2}_{h})$ as
$T\to\infty$.
To state the next theorem, we recall the class $SV_{0}(\mathbb{R})$ of slowly
varying functions at zero $u(\lambda)$, $\lambda\in\mathbb{R}$, satisfying the
following conditions: for some $a>0$, $u(\lambda)$ is bounded on $[-a,a]$,
$\lim_{\lambda\to 0}u(\lambda)=0,$ $u(\lambda)=u(-\lambda)$ and
$0<u(\lambda)<u(\mu)$ for $0<\lambda<\mu<a$.
###### Theorem 6.5.
Assume that the functions $f$ and $g$ are integrable on $\mathbb{R}$ and
bounded outside any neighborhood of the origin, and satisfy for some $a>0$
$f(\lambda)\leq|\lambda|^{-\alpha}L_{1}(\lambda),\quad|g(\lambda)|\leq|\lambda|^{-\beta}L_{2}(\lambda),\quad\lambda\in[-a,a],$
(6.27)
for some $\alpha<1,\ \beta<1$ with $\alpha+\beta\leq 1/2$, where $L_{1}(x)$
and $L_{2}(x)$ are slowly varying functions at zero satisfying
$\displaystyle L_{i}\in SV_{0}(\mathbb{R}),\ \
\lambda^{-(\alpha+\beta)}L_{i}(\lambda)\in L^{2}[-a,a],\ i=1,2.$ (6.28)
Also, let the taper function $h$ satisfy Assumption 6.1. Then
$\widetilde{Q}^{h}_{T}\overset{d}{\rightarrow}\eta\sim N(0,\sigma^{2}_{h})$ as
$T\to\infty$.
As in Remark 3.3, the conditions $\alpha<1$ and $\beta<1$ in Theorem 6.5
ensure that the Fourier transforms of $f$ and $g$ are well defined. Observe
that when $\alpha>0$ the process $X(t)$ may exhibit long-range dependence. We
also allow here $\alpha+\beta$ to assume the critical value 1/2. The
assumptions $f\cdot g\in L^{1}(\Lambda)$, $f,g\in
L^{\infty}(\Lambda\setminus[-a,a])$ and (6.28) imply that $f\cdot g\in
L^{2}(\Lambda)$, so that the variance $\sigma^{2}_{h}$ in (6.20) is finite.
#### 6.3.2 CLT for Lévy-driven stationary linear models
Now we assume that the underlying model $X(t)$ is a Lévy-driven stationary
linear process defined by (2.12), where $a(\cdot)$ is a filter from
$L^{2}(\mathbb{R})$, and $\xi(t)$ is a Lévy process satisfying the conditions:
$\mathbb{E}\xi(t)=0$, $\mathbb{E}\xi^{2}(1)=1$ and
$\mathbb{E}\xi^{4}(1)<\infty$.
The central limit theorem that follows is a tapered counterpart of Theorem 5.1
and was proved in Ginovyan and Sahakyan [56].
###### Theorem 6.6.
Assume that the filter $a(\cdot)$ and the generating kernel
$\widehat{g}(\cdot)$ are such that
$a(\cdot)\in L^{p}(\mathbb{R})\cap
L^{2}(\mathbb{R}),\quad\widehat{g}(\cdot)\in L^{q}(\mathbb{R}),\quad 1\leq
p,q\leq 2,\quad\frac{2}{p}+\frac{1}{q}\geq\frac{5}{2},$ (6.29)
and the taper $h$ satisfies Assumption 6.1. Then
$\widetilde{Q}^{h}_{T}\overset{d}{\rightarrow}\eta\sim N(0,\sigma^{2}_{L,h})$
as $T\to\infty$, where
$\sigma^{2}_{L,h}=16\pi^{3}H_{4}\int_{\mathbb{R}}f^{2}(\lambda)g^{2}(\lambda)d\lambda+\kappa_{4}4\pi^{2}H_{4}\left[\int_{\mathbb{R}}f(\lambda)g(\lambda)d\lambda\right]^{2},$
(6.30)
and $H_{4}$ is as in (6.3).
###### Remark 6.2.
Notice that if the underlying process $X(t)$ is Gaussian, then in formula
(6.30) only the first term remains, so $\sigma^{2}_{L,h}=\sigma^{2}_{h}$
(see (6.20)), because in this case $\kappa_{4}=0$. On the other hand, the
condition (6.29) is more restrictive than the conditions in Theorems 6.1–6.5.
Thus, for Gaussian processes Theorems 6.1–6.5 improve Theorem 6.6.
###### Remark 6.3.
Central and non-central limit theorems for tapered quadratic forms of d.t.
long-memory Gaussian stationary fields have been proved in Doukhan et al.
[32].
## 7 Applications
### 7.1 Nonparametric estimation of spectral functionals
Suppose we observe a realization $\mathbf{X}_{T}:=\\{X(u),\ 0\leq u\leq T\\}$
(or $u=1,\ldots,T$ in the d.t. case) of a centered stationary process
$X(t)$ with an unknown spectral density function $f(\lambda)$,
$\lambda\in\Lambda$. We assume that $f(\lambda)$ belongs to a given (infinite-
dimensional) class $\mathbf{\Theta}\subset L^{p}(\Lambda)$ $(p\geq 1)$ of
spectral densities possessing some smoothness properties. Let $\Phi(\cdot)$ be
some known functional, the domain of definition of which contains
$\mathbf{\Theta}$. The problem is to estimate the value $\Phi(f)$ of the
functional $\Phi(\cdot)$ at an unknown point $f\in\mathbf{\Theta}$ on the
basis of observation $\mathbf{X}_{T},$ and investigate the asymptotic (as
$T\to\infty$) properties of the suggested estimators.
This problem for linear and some nonlinear smooth functionals for d.t. and
c.t. stationary models has been extensively discussed in the literature (see,
e.g., Dahlhaus and Wefelmeyer [29], Ginovyan [42, 43, 46, 48, 49], Ginovyan
and Sahakyan [56, 57], Has’minskii and Ibragimov [70], Taniguchi [95],
Taniguchi and Kakizawa [96], and references therein).
In this section we apply the results of Section 6.3 to show that the statistic
$J_{T}^{h}$ given by (6.14) is a consistent and asymptotically normal
estimator for the linear functional $J(f)$ given by (6.6). We follow the
papers Ginovyan and Sahakyan [56, 57]. To state the corresponding results, we
first introduce the $L^{p}$-Hölder class and set up a set of assumptions.
Given numbers $p\geq 1$, $0<\alpha<1$,
$r\in\mathbb{N}_{0}:=\mathbb{N}\cup\\{0\\}$, we set $\beta=\alpha+r$ and
denote by $H_{p}(\beta)$ the $L^{p}$-Hölder class, that is, the class of those
functions $\psi(\lambda)\in L^{p}(\Lambda)$, which have $r$-th derivatives in
$L^{p}(\Lambda)$ and with some positive constant $C$ satisfy
$||\psi^{(r)}(\cdot+\lambda)-\psi^{(r)}(\cdot)||_{p}\leq C|\lambda|^{\alpha}.$
Now we list the assumptions.
###### Assumption 7.1.
The filter $a(\cdot)$ and the generating kernel $\widehat{g}(\cdot)$ are such
that
$a(\cdot)\in L^{p}(\Lambda)\cap L^{2}(\Lambda),\quad\widehat{g}(\cdot)\in
L^{q}(\Lambda)$ (7.1)
with
$1\leq p,q\leq 2,\quad\frac{2}{p}+\frac{1}{q}\geq\frac{5}{2}.$ (7.2)
###### Assumption 7.2.
The spectral density $f$ and the generating function $g$ are such that $f,g\in
L^{1}(\Lambda)\cap L^{2}(\Lambda)$ and $g$ is of bounded variation.
###### Assumption 7.3.
The spectral density $f$ and the generating function $g$ are such that $f\in
H_{p}(\beta_{1})$, $\beta_{1}>0$, $p\geq 1$ and $g(\lambda)\in
H_{q}(\beta_{2})$, $\beta_{2}>0$, $q\geq 1$ with $1/p+1/q=1$, and one of the
conditions a)–d) is satisfied:
a) $\beta_{1}>1/p$, $\beta_{2}>1/q$
b) $\beta_{1}\leq 1/p$, $\beta_{2}\leq 1/q$ and $\beta_{1}+\beta_{2}>1/2$
c) $\beta_{1}>1/p$, $1/q-1/2<\beta_{2}\leq 1/q$
d) $\beta_{2}>1/q$, $1/p-1/2<\beta_{1}\leq 1/p$.
###### Remark 7.1.
In Ginovyan [45] it was proved that if Assumption 7.3 is satisfied, then there
exist numbers $p_{1}$ $(p_{1}>p)$ and $q_{1}$ $(q_{1}>q)$, such that
$H_{p}(\beta_{1})\subset L_{p_{1}}$, $H_{q}(\beta_{2})\subset L_{q_{1}}$ and
$1/{p_{1}}+1/{q_{1}}\leq 1/2$.
The next theorem controls the bias $\mathbb{E}(J_{T}^{h})-J$ and provides
sufficient conditions ensuring the proper rate of convergence of the bias to
zero, which is needed to obtain the asymptotic normality of the estimator
$J_{T}^{h}$.
Specifically, we have the following result, which was proved in Ginovyan and
Sahakyan [56].
###### Theorem 7.1.
Let the functionals $J:=J(f)$ and $J_{T}^{h}:=J(I^{h}_{T})$ be defined by
(6.6) and (6.14), respectively. Then under Assumptions 7.2 (or 7.3) and 6.1
the following asymptotic relation holds:
$\displaystyle T^{1/2}\left[\mathbb{E}(J_{T}^{h})-J\right]\to 0\quad{\rm
as}\quad T\to\infty.$ (7.3)
The next theorem, which is an immediate consequence of Theorem 6.6, contains
sufficient conditions for functional $J_{T}^{h}$ to obey the central limit
theorem.
###### Theorem 7.2.
Let $J:=J(f)$ and $J_{T}^{h}:=J(I^{h}_{T})$ be defined by (6.6) and (6.14),
respectively. Then under Assumptions 7.1 and 6.1 the functional $J_{T}^{h}$
obeys the central limit theorem. More precisely, we have
$\displaystyle
T^{1/2}\left[J_{T}^{h}-\mathbb{E}(J_{T}^{h})\right]\overset{d}{\rightarrow}\eta\quad{\rm
as}\quad T\to\infty,$ (7.4)
where the symbol $\overset{d}{\rightarrow}$ stands for convergence in
distribution, and $\eta$ is a normally distributed random variable with mean
zero and variance $\sigma^{2}_{h}(J)$ given by
$\sigma^{2}_{h}(J)=4\pi
e(h)\int_{\mathbb{R}}f^{2}(\lambda)g^{2}(\lambda)d\lambda+\kappa_{4}e(h)\left[\int_{\mathbb{R}}f(\lambda)g(\lambda)d\lambda\right]^{2}.$
(7.5)
Here $\kappa_{4}$ is the fourth cumulant of $\xi(1)$, and
$e(h):=\frac{H_{4}}{H_{2}^{2}}=\int_{0}^{1}h^{4}(t)dt\left(\int_{0}^{1}h^{2}(t)dt\right)^{-2}.$
(7.6)
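The tapering factor $e(h)$ in (7.6) is straightforward to evaluate numerically. The sketch below (the taper choices are illustrative) computes $e(h)$ for the rectangular taper $h\equiv 1$, for which $e(h)=1$, and for the cosine-bell taper $h(t)=\sin(\pi t)$, for which $H_{2}=1/2$, $H_{4}=3/8$, so $e(h)=3/2$:

```python
import numpy as np

def tapering_factor(h, n=100_000):
    """Approximate e(h) = H_4 / H_2^2, H_k = int_0^1 h^k(t) dt (midpoint rule)."""
    t = (np.arange(n) + 0.5) / n
    ht = h(t)
    return np.mean(ht ** 4) / np.mean(ht ** 2) ** 2

# Rectangular (non-tapered) case h = 1: every H_k = 1, so e(h) = 1.
e_rect = tapering_factor(lambda t: np.ones_like(t))

# Cosine-bell taper h(t) = sin(pi*t): H_2 = 1/2, H_4 = 3/8, so e(h) = 3/2.
e_cos = tapering_factor(lambda t: np.sin(np.pi * t))

print(e_rect, e_cos)
```

Since $H_{2}^{2}=\big(\int_{0}^{1}h^{2}(t)\,dt\big)^{2}\leq\int_{0}^{1}h^{4}(t)\,dt=H_{4}$ by the Cauchy–Schwarz inequality, $e(h)\geq 1$: tapering never decreases the asymptotic variance in (7.5).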
Taking into account the equality
$\displaystyle
T^{1/2}\left[J_{T}^{h}-J\right]=T^{1/2}\left[\mathbb{E}(J_{T}^{h})-J\right]+T^{1/2}\left[J_{T}^{h}-\mathbb{E}(J_{T}^{h})\right],$
(7.7)
as an immediate consequence of Theorems 7.1 and 7.2, we obtain the next result
that contains sufficient conditions for a simple "plug-in" statistic
$J(I^{h}_{T})$ to be an asymptotically normal estimator for a linear spectral
functional $J(f)$.
###### Theorem 7.3.
Let the functionals $J:=J(f)$ and $J_{T}^{h}:=J(I^{h}_{T})$ be defined by
(6.6) and (6.14), respectively. Then under Assumptions 7.1, 7.2 (or 7.3) and
6.1 the statistic $J_{T}^{h}$ is an asymptotically normal estimator for
functional $J$. More precisely, we have
$\displaystyle
T^{1/2}\left[J_{T}^{h}-J\right]\overset{d}{\rightarrow}\eta\quad{\rm as}\quad
T\to\infty,$ (7.8)
where $\eta$ is as in Theorem 7.2, that is, $\eta$ is a normally distributed
random variable with mean zero and variance $\sigma^{2}_{h}(J)$ given by (7.5)
and (7.6).
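To see Theorem 7.3 at work numerically, consider the simplest discrete-time, non-tapered setting: a Gaussian AR(1) model with coefficient $\phi$ and weight $g(\lambda)=\cos\lambda$, for which $J(f)=\int_{-\pi}^{\pi}f(\lambda)\cos\lambda\,d\lambda=r(1)=\phi/(1-\phi^{2})$. The sketch below (the model and all parameter values are illustrative choices) forms the plug-in estimate from the periodogram at the Fourier frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, T = 0.5, 20_000

# Simulate a stationary Gaussian AR(1): X(t) = phi*X(t-1) + xi(t).
x = np.zeros(T + 500)
eps = rng.standard_normal(T + 500)
for t in range(1, T + 500):
    x[t] = phi * x[t - 1] + eps[t]
x = x[500:]  # drop burn-in

# Periodogram at the Fourier frequencies lambda_j = 2*pi*j/T.
lam = 2 * np.pi * np.arange(T) / T
I_T = np.abs(np.fft.fft(x)) ** 2 / (2 * np.pi * T)

# Plug-in estimate of J(f) = int f(l)*g(l) dl with g(l) = cos(l); here
# J(f) equals the lag-1 autocovariance r(1) = phi / (1 - phi^2).
J_hat = (2 * np.pi / T) * np.sum(I_T * np.cos(lam))
J_true = phi / (1 - phi ** 2)
print(J_hat, J_true)
```

In this special case the plug-in estimate reduces to the circular lag-1 sample autocovariance, which makes the consistency transparent.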
In the Gaussian case we have a more accurate result. The next theorem, which was
proved in Ginovyan and Sahakyan [57], states that for Gaussian models
Assumptions 7.3 and 6.1 are sufficient for the statistic $J_{T}^{h}$ to be an
asymptotically normal estimator for functional $J$.
###### Theorem 7.4.
Let the functionals $J:=J(f)$ and $J_{T}^{h}:=J(I^{h}_{T})$ be defined by
(6.6) and (6.14), respectively. Then under Assumptions 7.3 and 6.1 the
statistic $J_{T}^{h}$ is an asymptotically normal estimator for functional
$J$. More precisely, we have
$\displaystyle
T^{1/2}\left[J_{T}^{h}-J\right]\overset{d}{\rightarrow}\eta\quad{\rm as}\quad
T\to\infty,$ (7.9)
where $\eta$ is a normally distributed random variable with mean zero and
variance $\sigma^{2}_{h}(J)$ given by
$\sigma^{2}_{h}(J)=4\pi
e(h)\int_{\mathbb{R}}f^{2}(\lambda)g^{2}(\lambda)d\lambda,\quad
e(h):=H_{4}H_{2}^{-2},$ (7.10)
and $H_{k}$ is as in (6.3).
### 7.2 Parametric estimation: The Whittle procedure
We assume here that the spectral density $f(\lambda)$ belongs to a given
parametric family of spectral densities
$\mathcal{F}:=\\{f(\lambda,\theta):\,\theta\in\Theta\\}$, where
$\theta:=(\theta_{1},\ldots,\theta_{p})$ is an unknown parameter and $\Theta$
is an open subset in the Euclidean space $\mathbb{R}^{p}$. The problem of
interest is to estimate the unknown parameter $\theta$ on the basis of the
tapered data (6.5), and investigate the asymptotic (as $T\to\infty$)
properties of the suggested estimators, depending on the dependence (memory)
structure of the model $X(t)$ and the smoothness of its spectral density $f$.
There are different methods of estimation: maximum likelihood, Whittle,
minimum contrast, etc. Here we focus on the Whittle method.
The Whittle estimation procedure, originally devised for d.t. short memory
stationary processes, is based on smoothed periodogram analysis in the
frequency domain, involving approximation of the likelihood function and
asymptotic properties of empirical spectral functionals (see Whittle [107]).
Since its discovery, the Whittle estimation method has played a major role in
the asymptotic theory of parametric estimation in the frequency domain, and has
been the focus of interest of many statisticians. Their aim was to weaken the
conditions needed to guarantee the validity of the Whittle approximation for
d.t. short memory models, to find analogues for long and intermediate memory
models, to find conditions under which the Whittle estimator is asymptotically
equivalent to the exact maximum likelihood estimator, and to extend the
procedure to the c.t. models and random fields.
For the d.t. case, it was shown that for Gaussian and linear stationary models
the Whittle approach leads to consistent and asymptotically normal estimators
under short, intermediate and long memory assumptions. Moreover, it was shown
that in the Gaussian case the Whittle estimator is also asymptotically
efficient in the sense of Fisher (see, e. g., Dahlhaus [27], Dzhaparidze [33],
Fox and Taqqu [39], Giraitis and Surgailis [60], Guyon [69], Heyde and Gay
[72], Taniguchi and Kakizawa [96], Walker [106], and references therein).
For c.t. models, the Whittle estimation procedure has been considered, for
example, in Anh et al. [4], Avram et al. [7], Casas and Gao [21], Dzhaparidze
[33], Gao [40], Gao et al. [41], Leonenko and Sakhno [80], Tsai and Chan
[104], where additional references can also be found. In this case, it was
proved that the Whittle estimator is consistent and asymptotically normal.
The Whittle estimation procedure based on the d.t. tapered data has been
studied in Alomari et al. [1], Dahlhaus [24], Dahlhaus and Künsch [28], Guyon
[69], Ludeña and Lavielle [82]. In the case where the underlying model is a
Lévy-driven c.t. linear process with possibly unbounded or vanishing spectral
density function, consistency and asymptotic normality of the Whittle
estimator was established in Ginovyan [50].
To explain the idea behind the Whittle estimation procedure, assume for
simplicity that the underlying process $X(t)$ is a d.t. Gaussian process, and
we want to estimate the parameter $\theta$ based on the sample
$X_{T}:=\\{X(t),\,t=1,\ldots,T\\}$. A natural approach is to find the maximum
likelihood estimator (MLE) $\widehat{\theta}_{T,MLE}$ of $\theta$, that is, to
maximize the log-likelihood function $L_{T}(\theta)$, which in this case takes
the form:
$L_{T}(\theta)=-\frac{T}{2}\ln 2\pi-\frac{1}{2}\ln\det
B_{T}(f_{\theta})-\frac{1}{2}X^{\prime}_{T}[B_{T}(f_{\theta})]^{-1}X_{T},$
where $B_{T}(f_{\theta})$ is the Toeplitz matrix generated by $f_{\theta}$.
Unfortunately, the above function is difficult to handle, and no explicit
expression for the estimator $\widehat{\theta}_{T,MLE}$ is known (even in the
case of simple models). An approach, suggested by P. Whittle, called the
Whittle estimation procedure, is to approximate the term $\ln\det
B_{T}(f_{\theta})$ by $\frac{T}{2}\int_{-\pi}^{\pi}\ln
f_{\theta}(\lambda)d\lambda$ and the inverse matrix $[B_{T}(f_{\theta})]^{-1}$
by the Toeplitz matrix $B_{T}(1/f_{\theta})$. This leads to the following
approximation of the log-likelihood function $L_{T}(\theta)$, introduced by P.
Whittle [107], and called Whittle functional:
$L_{T,W}(\theta)=-\frac{1}{4\pi}\int_{-\pi}^{\pi}\left[\ln
f_{\theta}(\lambda)+\frac{I_{T}(\lambda)}{f_{\theta}(\lambda)}\right]\,d\lambda,$
where $I_{T}(\lambda)$ is the ordinary periodogram of the process $X(t)$.
Now maximizing the Whittle functional $L_{T,W}(\theta)$ with respect to
$\theta$, we get the Whittle estimator $\widehat{\theta}_{T}$ for $\theta$. It
can be shown that if
$T^{-1/2}\left(L_{T}(\theta)-L_{T,W}(\theta)\right)\to 0\quad{\rm as}\quad
T\to\infty\quad{\rm in\,\,probability,}$
then the MLE $\widehat{\theta}_{T,MLE}$ and the Whittle estimator
$\widehat{\theta}_{T}$ are asymptotically equivalent in the sense that
$\widehat{\theta}_{T}$ also is consistent, asymptotically normal and
asymptotically Fisher-efficient (see, e.g., Dzhaparidze [33]).
In the continuous context, the Whittle procedure of estimation of a spectral
parameter $\theta$ based on the sample $X_{T}:=\\{X(t),\,0\leq t\leq T\\}$ is
to choose the estimator $\widehat{\theta}_{T}$ to minimize the weighted
Whittle functional:
$U_{T}(\theta):=\frac{1}{4\pi}\int_{\mathbb{R}}\left[\ln
f(\lambda,\theta)+\frac{I_{T}(\lambda)}{f(\lambda,\theta)}\right]\cdot
w(\lambda)\,d\lambda,$ (7.11)
where $I_{T}(\lambda)$ is the continuous periodogram of $X(t)$, and
$w(\lambda)$ is a weight function ($w(-\lambda)=w(\lambda)$, $w(\lambda)\geq
0$, $w(\lambda)\in L^{1}(\mathbb{R})$) for which the integral in (7.11) is
well defined. A commonly used weight function is
$w(\lambda)=1/(1+\lambda^{2})$.
The Whittle procedure of estimation of a spectral parameter $\theta$ based on
the tapered sample (6.5) is to choose the estimator $\widehat{\theta}_{T,h}$
to minimize the weighted tapered Whittle functional:
$U_{T,h}(\theta):=\frac{1}{4\pi}\int_{\Lambda}\left[\log
f(\lambda,\theta)+\frac{I^{h}_{T}(\lambda)}{f(\lambda,\theta)}\right]\cdot
w(\lambda)\,d\lambda,$ (7.12)
where $I^{h}_{T}(\lambda)$ is the tapered periodogram of $X(t)$, given by
(6.12), and $w(\lambda)$ is a weight function for which the integral in (7.12)
is well defined. Thus,
$\widehat{\theta}_{T,h}:=\underset{\theta\in\Theta}{\rm
Arg\,min}\,U_{T,h}(\theta).$ (7.13)
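To make the procedure concrete, the following minimal sketch carries out Whittle estimation in the simplest discrete-time, non-tapered setting, for a simulated Gaussian AR(1) model (the model, sample size and grid search are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
phi0, T = 0.6, 5_000

# Simulate an AR(1) with unit innovation variance.
x = np.zeros(T + 500)
eps = rng.standard_normal(T + 500)
for t in range(1, T + 500):
    x[t] = phi0 * x[t - 1] + eps[t]
x = x[500:]  # drop burn-in

# Periodogram at the positive Fourier frequencies.
lam = 2 * np.pi * np.arange(1, T // 2) / T
I_T = np.abs(np.fft.fft(x)[1:T // 2]) ** 2 / (2 * np.pi * T)

def whittle(phi):
    # AR(1) spectral density f_phi(l) = 1 / (2*pi*|1 - phi*e^{-il}|^2).
    f = 1.0 / (2 * np.pi * (1 - 2 * phi * np.cos(lam) + phi ** 2))
    # Discretized Whittle functional: log f + I_T / f, averaged over frequencies.
    return np.mean(np.log(f) + I_T / f)

# Minimize the Whittle functional by grid search over the parameter.
grid = np.linspace(-0.95, 0.95, 1901)
phi_hat = grid[np.argmin([whittle(p) for p in grid])]
print(phi_hat)
```

In the tapered continuous-time setting one would instead minimize (7.12), replacing the periodogram by the tapered periodogram $I^{h}_{T}$ and inserting the weight $w(\lambda)$.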
Here we follow the paper Ginovyan [50]. To state results involving properties
of the Whittle estimator, we first introduce the following set of assumptions.
###### Assumption 7.4.
The true value $\theta_{0}$ of the parameter $\theta$ belongs to a compact set
$\Theta$, which is contained in an open set $S$ in the $p$-dimensional
Euclidean space $\mathbb{R}^{p}$, and $f(\lambda,\theta_{1})\neq
f(\lambda,\theta_{2})$ whenever $\theta_{1}\neq\theta_{2}$ almost everywhere
in $\Lambda$ with respect to the Lebesgue measure.
###### Assumption 7.5.
The functions $f(\lambda,\theta)$, $f^{-1}(\lambda,\theta)$ and
$\frac{\partial}{\partial\theta_{k}}f^{-1}(\lambda,\theta)$, $k=1,\ldots,p$,
are continuous in $(\lambda,\theta)$.
###### Assumption 7.6.
The functions $f:=f(\lambda,\theta)$ and
$g:=w(\lambda)\frac{\partial}{\partial\theta_{k}}f^{-1}(\lambda,\theta)$
satisfy Assumptions 7.2 or 7.3 for all $k=1,\ldots,p$ and $\theta\in\Theta$.
###### Assumption 7.7.
The functions $a:=a(\lambda,\theta)$ and $b:=\widehat{g}$, where $g$ is as in
Assumption 7.6, satisfy Assumption 7.1.
###### Assumption 7.8.
The functions
$\frac{\partial^{2}}{\partial\theta_{k}\partial\theta_{j}}f^{-1}(\lambda,\theta)$
and
$\frac{\partial^{3}}{\partial\theta_{k}\partial\theta_{j}\partial\theta_{l}}f^{-1}(\lambda,\theta)$,
$k,j,l=1,\ldots,p$, are continuous in $(\lambda,\theta)$ for
$\lambda\in\Lambda$, $\theta\in N_{\delta}(\theta_{0})$, where
$N_{\delta}(\theta_{0}):=\\{\theta:\,|\theta-\theta_{0}|<\delta\\}$ is some
neighborhood of $\theta_{0}$.
###### Assumption 7.9.
The matrices
$\displaystyle
W(\theta):=\|w_{ij}(\theta)\|,\,\,A(\theta):=\|a_{ij}(\theta)\|,\,\,B(\theta):=\|b_{ij}(\theta)\|\quad
i,j=1,\ldots,p$ (7.14)
are positive definite, where
$\displaystyle w_{ij}(\theta)$ $\displaystyle=$
$\displaystyle\frac{1}{4\pi}\int_{\Lambda}\frac{\partial}{\partial\theta_{i}}\ln
f(\lambda,\theta)\frac{\partial}{\partial\theta_{j}}\ln
f(\lambda,\theta)w(\lambda)d\lambda,$ (7.15) $\displaystyle a_{ij}(\theta)$
$\displaystyle=$
$\displaystyle\frac{1}{4\pi}\int_{\Lambda}\frac{\partial}{\partial\theta_{i}}\ln
f(\lambda,\theta)\frac{\partial}{\partial\theta_{j}}\ln
f(\lambda,\theta)w^{2}(\lambda)d\lambda,$ (7.16) $\displaystyle
b_{ij}(\theta)$ $\displaystyle=$
$\displaystyle\frac{\kappa_{4}}{16\pi^{2}}\int_{\Lambda}\frac{\partial}{\partial\theta_{i}}\ln
f(\lambda,\theta)w(\lambda)d\lambda\int_{\Lambda}\frac{\partial}{\partial\theta_{j}}\ln
f(\lambda,\theta)w(\lambda)d\lambda,$ (7.17)
and $\kappa_{4}$ is the fourth cumulant of $\xi(1)$.
Consistency of the Whittle estimator. The next theorem contains sufficient
conditions for the Whittle estimator to be consistent.
###### Theorem 7.5.
Let $\widehat{\theta}_{T,h}$ be the Whittle estimator defined by (7.13) and
let $\theta_{0}$ be the true value of parameter $\theta$. Then, under
Assumptions 7.4–7.7 and 6.1, the statistic $\widehat{\theta}_{T,h}$ is a
consistent estimator for $\theta$, that is,
$\widehat{\theta}_{T,h}\to\theta_{0}$ in probability as $T\to\infty$.
Asymptotic normality of the Whittle estimator. Having established the
consistency of the Whittle estimator $\widehat{\theta}_{T,h}$, we can go on to
obtain the limiting distribution of
$T^{1/2}\left(\widehat{\theta}_{T,h}-\theta_{0}\right)$ in the usual way by
applying Taylor’s formula, the mean value theorem, and Slutsky’s
arguments. Specifically we have the following result, showing that under the
above assumptions, the Whittle estimator $\widehat{\theta}_{T,h}$ is
asymptotically normal.
###### Theorem 7.6.
Suppose that Assumptions 7.4–7.9 and 6.1 are satisfied. Then the Whittle
estimator $\widehat{\theta}_{T,h}$ of an unknown spectral parameter $\theta$
based on the tapered data (6.5) is asymptotically normal. More precisely, we
have
$\displaystyle
T^{1/2}\left(\widehat{\theta}_{T,h}-\theta_{0}\right)\overset{d}{\rightarrow}N_{p}\left(0,e(h)\Gamma(\theta_{0})\right)\quad{\rm
as}\quad T\to\infty,$ (7.18)
where $N_{p}(\cdot,\cdot)$ denotes the $p$-dimensional normal law,
$\overset{d}{\rightarrow}$ stands for convergence in distribution,
$\displaystyle\Gamma(\theta_{0})=W^{-1}(\theta_{0})\left(A(\theta_{0})+B(\theta_{0})\right)W^{-1}(\theta_{0}),$
(7.19)
where the matrices $W$, $A$ and $B$ are defined in (7.14)-(7.17), and the
tapering factor $e(h)$ is given by formula (7.6).
## 8 Methods and tools
In this section we briefly discuss the methods and tools, used to prove the
central and noncentral limit theorems for Toeplitz type quadratic forms and
functionals stated in Sections 3–6, as well as the results stated in Section
7.
As mentioned in the introduction, the most commonly used methods to prove
central limit theorems are: (a) the method of characteristic functions, (b)
the method of cumulants or moments, and (c) the approximation method.
To prove the central limit theorems stated in this paper, either the method of
cumulants or the approximation method was used, depending on the underlying
model. If the underlying model is Gaussian, the method of cumulants was
applied; otherwise the approximation method was applied, which reduces the
quadratic integral form to a single integral form. To prove the non-central
limit theorems, we used the spectral representation of the underlying process,
the properties of Lévy-Itô-type and Stratonovich-type multiple stochastic
integrals, and power counting theorems. Some details of the above methods are
described below.
### 8.1 The characteristic functions and cumulant criteria for the CLT
The characteristic functions criterion for the CLT is based on the fact that
convergence in distribution is equivalent to the pointwise convergence of the
corresponding characteristic functions. The general cumulant criterion for the
CLT is based on the following result (see, e.g., Giraitis et al. [59],
Corollary 4.2.1).
###### Proposition 8.1.
Let the random variables $\\{X_{T},T\in\mathbb{R}\\}$ have all moments finite,
and let $\mathbb{E}[X_{T}]\to 0$, $\mathrm{Var}[X_{T}]\to\sigma^{2}$, and
$\chi_{k}(X_{T}):=\mathrm{Cum}_{k}(X_{T})\to 0$ for all $k=3,4,\ldots$ as
$T\to\infty$. Then $X_{T}\overset{d}{\rightarrow}X\sim N(0,\sigma^{2})$ as
$T\to\infty$.
The characteristic functions and cumulant criteria for the CLT for quadratic
functionals are based on the following general result (see Ibragimov [75]).
Let $\xi$ be a Gaussian random variable with values in a separable Hilbert
space $H$. In other words, $\xi$ is a random variable with characteristic
functional:
$\varphi(h)={\exp}\left\\{m(h)-\frac{1}{2}(Rh,h)\right\\},\quad h\in H,$ (8.1)
where $m(h)$ is a continuous linear functional and the correlation operator
$R$ is a self-adjoint completely continuous operator with finite trace. We
assume, without loss of generality, that $m(h)\equiv 0$. Let $A$ be some
linear self-adjoint bounded operator.
The proof of the following result can be found in Ibragimov [75].
###### Proposition 8.2.
Let the operators $R$ and $A$ be as above. The quadratic form $(A\xi,\xi)$ has
the same distribution as the sum
$\sum_{k=1}^{\infty}\lambda_{k}\xi_{k}^{2}$, where $\xi_{k}$ are
independent $N(0,1)$ Gaussian random variables and $\lambda_{k}$ are the
eigenvalues of the operator $B:=RA.$
###### Remark 8.1.
It can easily be shown that the sets of non-zero eigenvalues of the operators
$RA$, $AR$ and $R^{1/2}AR^{1/2}$ coincide, where $R^{1/2}$ is the positive
definite square root of $R$.
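Proposition 8.2 can be checked numerically through the first two cumulants: $\chi_{1}(Q)=\hbox{tr}(RA)=\sum_{k}\lambda_{k}$ and $\chi_{2}(Q)=2\,\hbox{tr}[(RA)^{2}]=2\sum_{k}\lambda_{k}^{2}$, where $\lambda_{k}$ are the eigenvalues of $RA$ (cf. the cumulant formula (8.3)). A minimal Monte Carlo sketch with an arbitrary $4$-dimensional example (all matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 4

# An arbitrary covariance operator R (positive definite) and symmetric A.
M = rng.standard_normal((p, p))
R = M @ M.T + p * np.eye(p)
S = rng.standard_normal((p, p))
A = (S + S.T) / 2

# Eigenvalues of B = RA (real, since RA is similar to R^{1/2} A R^{1/2}).
lam = np.linalg.eigvals(R @ A).real

# Monte Carlo moments of the quadratic form Q = (A xi, xi), xi ~ N(0, R).
L = np.linalg.cholesky(R)
xi = (L @ rng.standard_normal((p, 200_000))).T
Q = np.einsum('ij,jk,ik->i', xi, A, xi)

print(Q.mean(), lam.sum())             # E[Q]   = sum of lambda_k
print(Q.var(), 2 * (lam ** 2).sum())   # Var[Q] = 2 * sum of lambda_k^2
```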
As mentioned above, Toeplitz matrices and operators arise naturally in the
theory of stationary processes, and serve as tools to study many topics of
the spectral and statistical analysis of d.t. and c.t. stationary processes.
Let $A_{T}(f)$ denote the covariance matrix (or operator) of the process
$\\{X(u),\ u\in\mathbb{U}\\}$, that is, $A_{T}(f)$ is either the $T\times T$
Toeplitz matrix ($B_{T}(f)$), or the $T$-truncated Toeplitz operator
$W_{T}(f)$ generated by the spectral density $f$, and let $A_{T}(g)$ denote
either the $T\times T$ Toeplitz matrix, or the $T$-truncated Toeplitz operator
generated by the function $g$ (for definitions see formulas (3.4) and (3.5)).
As a consequence of Proposition 8.2, we have the following result, which gives
a link between the distribution of the quadratic functional $Q_{T}$ in (1.1)
and the trace problem for Toeplitz matrices and operators (see, e.g., Ginovyan
et al. [58], Grenander and Szegő [67], Ibragimov [75]).
* 1.
The quadratic functional $Q_{T}$ in (1.1) has the same distribution as the sum
$\sum_{k=1}^{\infty}\lambda_{k,T}\xi_{k}^{2}$
($\sum_{k=1}^{T}\lambda_{k,T}\xi_{k}^{2}$ in the d.t. case), where
$\\{\xi_{k},k\geq 1\\}$ are independent $N(0,1)$ Gaussian random variables and
$\\{\lambda_{k,T},k\geq 1\\}$ are the eigenvalues of the operator
$A_{T}(f)A_{T}(g)$.
* 2.
The characteristic function $\varphi_{T}(t)$ of $Q_{T}$ is given by
$\varphi_{T}(t)=\prod_{k=1}^{\infty}|1-2it\lambda_{k,T}|^{-1/2}.$ (8.2)
* 3.
The $k$–th order cumulant $\chi_{k}(\cdot)$ of $Q_{T}$ is given by
$\displaystyle\chi_{k}(Q_{T})=2^{k-1}(k-1)!\sum_{j=1}^{\infty}\lambda_{j,T}^{k}=2^{k-1}(k-1)!\,\hbox{tr}\,[A_{T}(f)A_{T}(g)]^{k},$
(8.3)
where $\hbox{tr}[A]$ stands for the trace of an operator $A$.
The product in (8.2) and the sum in (8.3) are over $j=1,\ldots,T$ in the d.t.
case.
##### The tapered case.
To study the asymptotic distribution (as $T\to\infty$) of the tapered
functional $Q^{h}_{T}$, given by (6.1), we use the method of cumulants, the
frequency-domain approach, and the technique of truncated tapered Toeplitz
matrices and operators.
Let $A_{T}^{h}(f)$ be either the $T\times T$ tapered Toeplitz matrix
$B_{T}^{h}(f)$, or the $T$-truncated tapered Toeplitz operator $W_{T}^{h}(f)$
generated by the spectral density $f$ and taper $h$, and let $A_{T}^{h}(g)$
denote either the $T\times T$ tapered Toeplitz matrix, or the $T$-truncated
tapered Toeplitz operator generated by the functions $g$ and $h$ (for
definitions see formulas (6.22) and (6.23)). Similar to the non-tapered case,
we have the following results (cf. Ginovyan et al. [58], Grenander and Szegő
[67], Ibragimov [75]).
* 1.
The quadratic functional $Q^{h}_{T}$ in (6.1) has the same distribution as the
sum $\sum_{j=1}^{\infty}\lambda_{j,T}\xi_{j}^{2}$, where $\\{\xi_{j},j\geq
1\\}$ are independent $N(0,1)$ Gaussian random variables and
$\\{\lambda_{j,T},j\geq 1\\}$ are the eigenvalues of the operator
$A_{T}^{h}(f)\,A_{T}^{h}(g)$.
* 2.
The characteristic function $\varphi(t)$ of $Q^{h}_{T}$ is given by the formula:
$\varphi(t)=\prod_{j=1}^{\infty}|1-2it\lambda_{j,T}|^{-1/2}.$ (8.4)
* 3.
The $k$–th order cumulant $\chi_{k}(Q^{h}_{T})$ of $Q^{h}_{T}$ is given by the
formula:
$\displaystyle\chi_{k}(Q^{h}_{T})=2^{k-1}(k-1)!\sum_{j=1}^{\infty}\lambda_{j,T}^{k}=2^{k-1}(k-1)!\,\hbox{tr}\,[A_{T}^{h}(f)\,A_{T}^{h}(g)]^{k}.$
(8.5)
Thus, to describe the asymptotic distribution of the quadratic functional
$Q^{h}_{T}$, we have to control the traces and eigenvalues of the products of
truncated tapered Toeplitz operators and matrices.
### 8.2 Approximation of traces of products of Toeplitz matrices and
operators
The trace approximation problem for truncated Toeplitz operators and matrices
has been discussed in detail in the survey paper Ginovyan et al. [58] for the
non-tapered case. Here we present some important results, which were used to
prove the CLT for quadratic functionals $Q^{h}_{T}$.
Let $h$ be a taper function satisfying Assumption 6.1. Let $A^{h}_{T}(\psi)$
be either the $T\times T$ tapered Toeplitz matrix $B^{h}_{T}(\psi)$, or the
$T$-truncated tapered Toeplitz operator $W^{h}_{T}(\psi)$ generated by a
function $\psi$ (for definitions see formulas (6.22) and (6.23)).
Observe that, in view of (6.3), (6.13), (6.22) and (6.23), we have
$\frac{1}{T}\hbox{tr}\left[A^{h}_{T}(\psi)\right]=\frac{1}{T}\cdot\widehat{\psi}(0)\cdot\int_{0}^{T}h_{T}^{2}(t)dt=2\pi
H_{2}\int_{\Lambda}\psi(\lambda)d\lambda.$ (8.6)
What happens to the relation (8.6) when $A^{h}_{T}(\psi)$ is replaced by a
product of Toeplitz matrices (or operators)? Observe that the product of
Toeplitz matrices (resp. operators) is not a Toeplitz matrix (resp. operator).
The idea is to approximate the trace of the product of Toeplitz matrices
(resp. operators) by the trace of a Toeplitz matrix (resp. operator) generated
by the product of the corresponding generating functions. More precisely, let
$\\{\psi_{1},\psi_{2},\ldots,\psi_{m}\\}$ be a collection of integrable real
symmetric functions defined on $\Lambda$. Let $A_{T}^{h}(\psi_{i})$ be either
the $T\times T$ tapered Toeplitz matrix $B_{T}^{h}(\psi_{i})$, or the
$T$-truncated tapered Toeplitz operator $W_{T}^{h}(\psi_{i})$ generated by a
function $\psi_{i}$ and a taper function $h$. Define
$\displaystyle
S_{A,\mathcal{H},h}(T):=\frac{1}{T}\hbox{tr}\left[\prod_{i=1}^{m}A_{T}^{h}(\psi_{i})\right],\quad
M_{\Lambda,\mathcal{H},h}:=(2\pi)^{m-1}H_{m}\int_{\Lambda}\left[\prod_{i=1}^{m}\psi_{i}(\lambda)\right]\,d\lambda,$
and let
$\displaystyle\Delta(T):=\Delta_{A,{\Lambda},\mathcal{H},h}(T)=|S_{A,\mathcal{H},h}(T)-M_{{\Lambda},\mathcal{H},h}|.$
(8.7)
###### Proposition 8.3.
Let $\Delta(T)$ be as in (8.7). Each of the following conditions is sufficient
for
$\Delta(T)=o(1)\quad{\rm as}\quad T\to\infty.$ (8.8)
* (C1)
$\psi_{i}\in L^{1}(\Lambda)\cap L^{p_{i}}(\Lambda)$, $p_{i}>1$,
$i=1,2,\ldots,m$, with $1/p_{1}+\cdots+1/p_{m}\leq 1$.
* (C2)
The function $\varphi({\bf u})$ defined by
$\varphi({\bf u}):\,=\,\int_{\Lambda}\psi_{1}(\lambda)\psi_{2}(\lambda-
u_{1})\psi_{3}(\lambda-u_{2})\cdots\psi_{m}(\lambda-u_{m-1})\,d\lambda,$ (8.9)
where ${\bf u}=(u_{1},u_{2},\ldots,u_{m-1})\in\Lambda^{m-1}$, belongs to
$L^{m-2}(\Lambda^{m-1})$ and is continuous at ${\bf
0}=(0,0,\ldots,0)\in\Lambda^{m-1}$.
###### Remark 8.2.
In the nontapered case, Proposition 8.3 was proved in Ginovyan et al. [58]; in
the tapered case, it was proved in Ginovyan [51]. Proposition 8.3 was used to
prove parts (B) and (C) of Theorem 3.1 (in the nontapered case) and Theorems
6.2 and 6.3 (in the tapered case). In the special case $m=4$,
$\psi_{1}=\psi_{3}:=f$ and $\psi_{2}=\psi_{4}:=g$, in Ginovyan and Sahakyan
[52] (in the d.t. case) and in Ginovyan and Sahakyan [53] (in the c.t. case)
it was proved that the conditions of Theorem 3.1(D) and Theorem 3.3 are also
sufficient for (8.8).
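The approximation (8.8) can also be observed numerically. The sketch below treats the simplest non-tapered special case ($h\equiv 1$, so $H_{m}=1$) with $m=2$ and bounded trigonometric $\psi_{i}$ satisfying (C1); the specific functions and truncation size are illustrative choices:

```python
import numpy as np

# Fourier coefficients psi_hat(u) = int_{-pi}^{pi} psi(l) cos(u*l) dl for real,
# even psi, via a midpoint rule (very accurate for smooth periodic integrands).
n = 20_000
dl = 2 * np.pi / n
lam = -np.pi + (np.arange(n) + 0.5) * dl

def fourier_coeffs(psi, max_lag):
    vals = psi(lam)
    return np.array([np.sum(vals * np.cos(u * lam)) * dl for u in range(max_lag)])

def toeplitz_matrix(coeffs):
    idx = np.arange(len(coeffs))
    return coeffs[np.abs(idx[:, None] - idx[None, :])]

psi1 = lambda l: (1 + np.cos(l)) / (2 * np.pi)
psi2 = lambda l: (1 + 0.5 * np.cos(l)) / (2 * np.pi)

T = 1000
B1 = toeplitz_matrix(fourier_coeffs(psi1, T))
B2 = toeplitz_matrix(fourier_coeffs(psi2, T))

# (1/T) tr[B_T(psi1) B_T(psi2)]  versus  2*pi*H_2*int psi1*psi2 (here H_2 = 1).
S_T = np.einsum('ij,ji->', B1, B2) / T
M = 2 * np.pi * np.sum(psi1(lam) * psi2(lam)) * dl
print(S_T, M)  # both close to 1.25
```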
###### Remark 8.3.
More results concerning the trace approximation problem for truncated Toeplitz
operators and matrices can be found in Ginovyan and Sahakyan [54, 55],
Ginovyan et al. [58] and Lieberman and Phillips [81].
### 8.3 Approximation method for the CLT
The approximation method for the CLT for quadratic functionals is based on
approximating the quadratic functional $Q_{T}$ by a univariate sum (integral)
of $m$-dependent random variables, and then using the CLT for such variables
(see, e.g., Giraitis et al. [59], Section 4.5, and Giraitis and Surgailis
[60]).
Let $X_{i}(u)$ ($i=1,2$) be two linear processes of the form (2.9),
subordinated to the same orthonormal sequence $\\{\xi(k),\,k\in\mathbb{Z}\\}$,
with square summable covariance functions $r_{i}(u)$ ($i=1,2$), that is,
$X_{i}(u)=\sum_{k=-\infty}^{\infty}a_{i}(u-k)\xi(k),\quad\quad\sum_{k=-\infty}^{\infty}|a_{i}(k)|^{2}<\infty,\quad
i=1,2,$ (8.10)
and
$\sum_{u=-\infty}^{\infty}|r_{i}(u)|^{2}<\infty,\quad i=1,2.$ (8.11)
Denote
$S_{T}:=\sum_{u=1}^{T}X_{1}(u)X_{2}(u).$ (8.12)
###### Proposition 8.4.
Let $X_{i}(u)$ $(i=1,2)$ and $S_{T}$ be as above, and let the quadratic form
$Q_{T}$ be as in (1.1). Then the following assertions hold.
* (a)
The distribution of $T^{-1/2}(S_{T}-\mathbb{E}[S_{T}])$ tends to the centered
normal distribution with variance:
$\sigma^{2}:=\sum_{u=-\infty}^{\infty}r_{1}(u)r_{2}(u)+\kappa_{4}r_{1,2}^{2},$
(8.13)
where $\kappa_{4}$ is the fourth cumulant of $\xi(0)$, and
$r_{1,2}=\mathbb{E}[X_{1}(0)X_{2}(0)]$.
* (b)
$\mathrm{Var}(Q_{T}-S_{T})=o(T)$ as $T\to\infty$.
A similar result is true for c.t. linear processes of the form (2.12), where
now we have (see [53]):
$S_{T}:=\int_{0}^{T}X_{1}(u)X_{2}(u)du.$
### 8.4 Fejér-type singular integrals
We define Fejér-type kernels and singular integrals, and state some of their
properties that were used to prove the limit theorems stated in Section 7.
For a number $k$ ($k=2,3,\ldots$) and a taper function $h$ satisfying
Assumption 6.1, consider the following Fejér-type "tapered" kernel function:
$\Phi^{h}_{k,T}({\bf u}):=\frac{H_{T}({\bf
u})}{(2\pi)^{k-1}H_{k,T}(0)},\quad{\bf
u}=(u_{1},\ldots,u_{k-1})\in\mathbb{R}^{k-1},$ (8.14)
where
$H_{T}({\bf u}):={H_{1,T}(u_{1})\cdots
H_{1,T}(u_{k-1})H_{1,T}\left(-\sum_{j=1}^{k-1}u_{j}\right)},$ (8.15)
and the function $H_{k,T}(\cdot)$ is defined by (6.7) with $H_{k,T}(0)=T\cdot
H_{k}\neq 0$ (see (6.3)).
The proofs of the propositions that follow can be found in Ginovyan and
Sahakyan [56]. The next result shows that, similar to the classical Fejér
kernel, the "tapered" kernel $\Phi^{h}_{k,T}({\bf u})$ is an approximate
identity (see Ginovyan and Sahakyan [56], Lemma 3.4).
###### Proposition 8.5.
For any $k=2,3,\ldots$ and a taper function $h$ satisfying Assumption 6.1 the
kernel $\Phi^{h}_{k,T}({\bf u})$, ${\bf
u}=(u_{1},\ldots,u_{k-1})\in{\mathbb{R}}^{k-1}$, possesses the following
properties:
* a)
$\sup_{T>0}\int_{{\mathbb{R}}^{k-1}}\left|\Phi^{h}_{k,T}({\bf
u})\right|\,d{\bf u}=C_{1}<\infty;$
* b)
$\int_{{\mathbb{R}}^{k-1}}\Phi^{h}_{k,T}({\bf u})\,d{\bf u}=1;$
* c)
$\lim_{T\to\infty}\int_{{\mathbb{E}}^{c}_{\delta}}\left|\Phi^{h}_{k,T}({\bf
u})\right|\,d{\bf u}=0$ for any $\delta>0;$
* d)
If $k>2$ for any $\delta>0$ there exists a constant $M_{\delta}>0$ such that
for $T>0$
$\left\|\Phi^{h}_{k,T}\right\|_{L^{p_{k}}({\mathbb{E}^{c}_{\delta}})}\leq
M_{\delta},$ (8.16)
where $p_{k}=\frac{k-2}{k-3}$ for $k>3$, $p_{3}=\infty$ and
${\mathbb{E}}_{\delta}^{c}={\mathbb{R}}^{k-1}\setminus{\mathbb{E}}_{\delta},\quad{\mathbb{E}}_{\delta}=\\{{\bf
u}=(u_{1},\ldots,u_{k-1})\in{\mathbb{R}}^{k-1}:\,|u_{i}|\leq\delta,\,i=1,\ldots,k-1\\}.$
* e)
If the function $\Psi\in L^{1}(\mathbb{R}^{k-1})\bigcap
L^{k-2}(\mathbb{R}^{k-1})$ is continuous at ${\bf v}=(v_{1},\ldots,v_{k-1})$
(here $L^{0}$ denotes the space of measurable functions), then
$\lim_{T\to\infty}\int_{\mathbb{R}^{k-1}}\Psi({\bf u}+{\bf
v})\Phi^{h}_{k,T}({\bf u})d{\bf u}=\Psi({\bf v}).$ (8.17)
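For orientation, in the non-tapered case ($h\equiv 1$) with $k=2$, the kernel (8.14) reduces to the classical Fejér-type kernel $\Phi_{2,T}(u)=\frac{1}{2\pi T}\left(\frac{\sin(Tu/2)}{u/2}\right)^{2}$, and properties b) and e) can be checked numerically; the test function $\Psi$ below is an illustrative choice:

```python
import numpy as np

T = 300

def fejer_kernel(u, T):
    # Non-tapered k = 2 kernel (1/(2*pi*T)) * (sin(T*u/2)/(u/2))^2, written
    # via np.sinc (sin(pi*z)/(pi*z)) to handle u = 0 smoothly.
    return (T / (2 * np.pi)) * np.sinc(T * u / (2 * np.pi)) ** 2

# Midpoint grid on [-40, 40] (the kernel's tails beyond are negligible).
n, a = 400_000, 40.0
du = 2 * a / n
u = -a + (np.arange(n) + 0.5) * du
Phi = fejer_kernel(u, T)

mass = np.sum(Phi) * du                   # property b): total mass -> 1

Psi = lambda v: np.exp(-v ** 2)           # smooth integrable test function
v0 = 0.3
conv = np.sum(Psi(u + v0) * Phi) * du     # property e): -> Psi(v0)
print(mass, conv, Psi(v0))
```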
Denote
$\Delta_{2,T}^{h}:=\int_{\mathbb{R}^{2}}f(\lambda)g(\lambda+\mu)\Phi^{h}_{2,T}(\mu)d\lambda
d\mu-\int_{\mathbb{R}}f(\lambda)g(\lambda)d\lambda,$ (8.18)
where $\Phi^{h}_{2,T}(\mu)$ is given by (8.14), (8.15).
The next two propositions, which were used to prove Theorems 7.1 and 7.3, give
information on the rate of convergence to zero of $\Delta_{2,T}^{h}$ as
$T\to\infty$.
###### Proposition 8.6.
Assume that Assumptions 7.2 and 6.1 are satisfied. Then the following
asymptotic relation holds:
$\Delta_{2,T}^{h}=o\left(T^{-1/2}\right)\quad{\rm as}\quad T\to\infty.$ (8.19)
###### Proposition 8.7.
Assume that Assumptions 7.3 and 6.1 are satisfied. Then the following
inequality holds:
$|\Delta_{2,T}^{h}|\leq
C_{h}\begin{cases}T^{-(\beta_{1}+\beta_{2})},&\text{if}\ \
\beta_{1}+\beta_{2}<1\\\ T^{-1}\ln T,&\text{if}\ \ \beta_{1}+\beta_{2}=1\\\
T^{-1},&\text{if}\ \ \beta_{1}+\beta_{2}>1,\end{cases}\qquad T>0,$ (8.20)
where $C_{h}$ is a constant depending on $h$.
Notice that in the non-tapered case ($h(t)={\mathbb{I}}_{[0,1]}(t)$), the
results stated above were proved in Ginovyan and Sahakyan [53] (see also
Ginovyan and Sahakyan [54, 55]).
### 8.5 Lévy-Itô-type and Stratonovich-type multiple stochastic integrals
To prove limit theorems for quadratic functionals of Lévy-driven c.t. linear
models, multiple _off-diagonal_ (Itô-type) and _with-diagonal_
(Stratonovich-type) stochastic integrals with respect to Lévy noise were used.
In this subsection we introduce these integrals and briefly discuss their
properties (see, e.g., Bai et al. [9], Farré et al. [35], Peccati and Taqqu
[84]).
Let $f$ be a function in $L^{2}(\mathbb{R}^{k})$. Then the following
off-diagonal multiple stochastic integral, called the Itô-Lévy integral, is
well-defined:
$I_{k}^{\xi}(f)=\int_{\mathbb{R}^{k}}^{\prime}f(x_{1},\ldots,x_{k})\xi(dx_{1})\ldots\xi(dx_{k}),$
(8.21)
where $\xi(t)$ is a Lévy process with $\mathbb{E}\xi(t)=0$ and
$\mathrm{Var}[\xi(t)]=\sigma_{\xi}^{2}t$, and the prime ′ indicates that we do
not integrate on the diagonals $x_{i}=x_{j}$, $i\neq j$. The multiple integral
$I_{k}^{\xi}(\cdot)$ satisfies the following inequality:
$\|I_{k}^{\xi}(f)\|_{L^{2}(\Omega)}^{2}\leq
k!\sigma_{\xi}^{2k}\|f\|_{L^{2}(\mathbb{R}^{k})}^{2},$ (8.22)
and the inequality in (8.22) becomes equality if $f$ is symmetric:
$\|I_{k}^{\xi}(f)\|_{L^{2}(\Omega)}^{2}=k!\sigma_{\xi}^{2k}\|f\|_{L^{2}(\mathbb{R}^{k})}^{2}.$
(8.23)
Observe that if in (8.21), $\xi(\cdot)=B(\cdot)$, where $B(\cdot)$ is the
real-valued Brownian motion, then the corresponding integral:
$I_{k}^{B}(f)=\int_{\mathbb{R}^{k}}^{\prime}f(x_{1},\ldots,x_{k})B(dx_{1})\ldots
B(dx_{k})$ (8.24)
is called multiple Wiener-Itô integral (see Itô [78]).
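As an illustrative check of the isometry (8.23) in the Brownian case $\xi=B$, one can discretize $[0,1]$ and remove the diagonal terms by hand. The Monte Carlo sketch below assumes the symmetric integrand $f=g\otimes g$ with $g\equiv 1$ (so $2!\,\sigma_{\xi}^{4}\|f\|^{2}=2$); the discretization scheme is our own, not taken from the cited sources.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 200, 20_000                 # grid size on [0, 1], Monte Carlo samples
dx = 1.0 / n
g = np.ones(n)                     # f(x, y) = g(x) g(y), symmetric, ||f|| = 1

dB = rng.normal(0.0, np.sqrt(dx), size=(N, n))   # Brownian increments
I1 = dB @ g                        # single integral  int g dB
diag = (dB ** 2) * (g ** 2)        # the diagonal terms i = j
I2 = I1 ** 2 - diag.sum(axis=1)    # off-diagonal double integral I_2^B(f)

# Isometry (8.23): E[I_2^B(f)^2] = 2! * ||f||^2 = 2 for this f.
var_est = I2.var()
```

The estimate `var_est` is close to 2, up to the discretization bias of order $1/n$ and Monte Carlo noise.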
The Wiener-Itô integral can also be defined with respect to the complex-valued
Brownian motion:
$I_{k}^{W}(g)=\int_{\mathbb{R}^{k}}^{\prime\prime}g(u_{1},\ldots,u_{k})W(du_{1})\ldots
W(du_{k}),$ (8.25)
where $g\in L^{2}(\mathbb{R}^{k})$ is a complex-valued function satisfying
$g(-u_{1},\ldots,-u_{k})=\overline{g(u_{1},\ldots,u_{k})}$, and $W(\cdot)$ is
a complex-valued Brownian motion (with real and imaginary parts being
independent) viewed as a random integrator (see, e.g., Embrechts and Maejima
[34]), and the double prime ′′ indicates the exclusion of the hyper-diagonals
$u_{p}=\pm u_{q}$, $p\neq q$.
The next result, which can be deduced from Proposition 9.3.1 of Peccati and
Taqqu [84] and Proposition 4.2 of Dobrushin [30] (see Bai et al. [9]), gives a
relationship between the integrals $I_{k}^{B}(\cdot)$ and $I_{k}^{W}(\cdot)$,
defined by (8.24) and (8.25), respectively.
###### Proposition 8.8.
Let $f_{j}(\cdot)$ be real-valued functions in $L^{2}(\mathbb{R}^{k_{j}})$,
$j=1,\ldots,J$, and let
$\widehat{f}_{j}(w_{1},\ldots,w_{k_{j}})=\int_{\mathbb{R}^{k_{j}}}f_{j}(x_{1},\ldots,x_{k_{j}})e^{i\left(x_{1}w_{1}+\ldots+x_{k_{j}}w_{k_{j}}\right)}dx_{1}\ldots
dx_{k_{j}}$
be the $L^{2}$-Fourier transform of $f_{j}(\cdot)$. Then
$\Big{(}I_{k_{1}}^{B}(f_{1}),\ldots,I_{k_{J}}^{B}(f_{J})\Big{)}\overset{d}{=}\Big{(}(2\pi)^{-k_{1}/2}I_{k_{1}}^{W}\left(\widehat{f}_{1}A^{\otimes_{k_{1}}}\right),\ldots,(2\pi)^{-k_{J}/2}I_{k_{J}}^{W}\left(\widehat{f}_{J}A^{\otimes_{k_{J}}}\right)\Big{)},$
for any function $A(u):\mathbb{R}\rightarrow\mathbb{C}$ such that $|A(u)|=1$
and $A(w)=\overline{A(-w)}$ almost everywhere, where $A^{\otimes
k}(w_{1},\ldots,w_{k}):=A(w_{1})\cdots A(w_{k})$.
In the next proposition we state a stochastic Fubini theorem (see Bai et al.
[9], Lemma 3.1, or Peccati and Taqqu [84], Theorem 5.12.1).
###### Proposition 8.9.
Let $(S,\mu)$ be a measure space with $\mu(S)<\infty$, and let
$f(s,x_{1},\ldots,x_{k})$ be a function on $S\times\mathbb{R}^{k}$ such that
$\int_{S}\int_{\mathbb{R}^{k}}f^{2}(s,x_{1},\ldots,x_{k})dx_{1}\ldots
dx_{k}\mu(ds)<\infty,$
then we can change the order of the multiple stochastic integration
$I_{k}^{\xi}(\cdot)$ and the deterministic integration
$\int_{S}f(s,\cdot)\mu(ds)$:
$\int_{S}I_{k}^{\xi}\big{(}f(s,\cdot)\big{)}\mu(ds)=I_{k}^{\xi}\left(\int_{S}f(s,\cdot)\mu(ds)\right).$
The _with-diagonal_ counterpart of the Lévy-Itô integral $I_{k}^{\xi}(f)$,
called a _Stratonovich-type_ stochastic integral, is defined by
$\mathring{I}_{k}^{\xi}(f):=\int_{\mathbb{R}^{k}}f(x_{1},\ldots,x_{k})\xi(dx_{1})\ldots\xi(dx_{k}),$
(8.26)
which includes all the diagonals. We refer to Farré et al. [35] for a
comprehensive treatment of Stratonovich-type integrals
$\mathring{I}_{k}^{\xi}(f)$. Observe that for the with-diagonal integral
$\mathring{I}_{k}^{\xi}(f)$ to be well-defined, the integrand $f$ needs also
to be square-integrable on all the diagonals of $\mathbb{R}^{k}$ (see Bai et
al. [9], Farré et al. [35]).
The with-diagonal integral $\mathring{I}_{k}^{\xi}(f)$ can be expressed by
off-diagonal integrals of lower orders using the Hu-Meyer formula (see Farré
et al. [35], Theorem 5.9). In the special case when $k=2$, we have
$\mathring{I}_{2}^{\xi}(f)=\int_{\mathbb{R}^{2}}^{\prime}f(x_{1},x_{2})\xi(dx_{1})\xi(dx_{2})+\int_{\mathbb{R}}f(x,x)\xi^{(2)}_{c}(dx)+\int_{\mathbb{R}}f(x,x)dx,$
(8.27)
where
$\xi^{(2)}_{c}(t)=\xi^{(2)}(t)-\mathbb{E}\xi^{(2)}(t)=\xi^{(2)}(t)-|t|,$
(8.28)
and $\xi^{(2)}(t)$ is the quadratic variation of $\xi(t)$, which is non-
deterministic if $\xi(t)$ is non-Gaussian (see Farré et al. [35], equation
(10)). The centered process $\xi^{(2)}_{c}(t)$ is called a second order
_Teugels martingale_ , which is a Lévy process with the same filtration as
$\xi(t)$, whose quadratic variation is deterministic:
$[\xi^{(2)}_{c}(t),\xi^{(2)}_{c}(t)]=\kappa_{4}t,$
where $\kappa_{4}$ is the fourth cumulant of $\xi(1)$. For any $g,h\in
L^{2}(\mathbb{R})$, one has (see Farré et al. [35])
$\mathbb{E}\left[\int_{\mathbb{R}}g(x)\xi^{(2)}_{c}(dx)\int_{\mathbb{R}}h(x)\xi^{(2)}_{c}(dx)\right]=\kappa_{4}\int_{\mathbb{R}}g(x)h(x)dx.$
(8.29)
The decomposition (8.27) implies that
$\mathbb{E}\left[\mathring{I}_{2}^{\xi}(f)\right]=\int_{\mathbb{R}}f(x,x)dx.$
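For Brownian $\xi$ the quadratic variation is deterministic, so $\kappa_{4}=0$ and $\xi^{(2)}_{c}\equiv 0$, and (8.27) reduces to $\mathring{I}_{2}^{B}(f)=I_{2}^{B}(f)+\int f(x,x)\,dx$. The Monte Carlo sketch below checks the implied mean formula for the assumed integrand $f(x,y)=\sin x\,\sin y$ on $[0,1]^{2}$; the discretization is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 200, 20_000
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx      # midpoint grid on [0, 1]
g = np.sin(x)                      # f(x, y) = sin(x) sin(y) = g(x) g(y)

dB = rng.normal(0.0, np.sqrt(dx), size=(N, n))
I2_strat = (dB @ g) ** 2           # with-diagonal double sum (all i, j kept)

# E[ I2_strat ] should equal int_0^1 f(x, x) dx = int_0^1 sin^2(x) dx
target = float(np.sum(g ** 2) * dx)
mean_est = float(I2_strat.mean())
```

Here the off-diagonal part $I_{2}^{B}(f)$ has mean zero, so the mean of the with-diagonal sum recovers the trace term $\int f(x,x)\,dx$.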
Notice that for any $f\in L^{2}(\mathbb{R}^{2})$ and $g\in L^{2}(\mathbb{R})$,
the following two integrals, of which the first is an off-diagonal double
integral and the second a single integral with respect to the Teugels
martingale $\xi_{c}^{(2)}(t)$:
$\int_{\mathbb{R}^{2}}^{\prime}f(x_{1},x_{2})\xi(dx_{1})\xi(dx_{2})\quad\text{and}\quad\int_{\mathbb{R}}g(x)\xi^{(2)}_{c}(dx),$
(8.30)
are uncorrelated (see Bai et al. [9]).
### 8.6 Power counting theorems
Power counting theorems provide convergence conditions for some classes of
integrals on $\mathbb{R}^{n}$ whose integrands are products of functions
bounded near zero by powers of linearly dependent affine functionals and near
infinity by different powers of those functionals. These theorems are useful
in studying asymptotic distributions of statistics of time series models with
long-range dependence. The results stated below were used in Terrin and Taqqu
[101] to establish non-central limit theorems for quadratic forms $Q_{T}$ of
d.t. stationary processes with long-range dependence (see Theorem 4.5).
First we introduce some notation. Let $T:=\\{L_{1},\ldots,L_{m}\\}$ be a
collection of linear functionals $L_{i}(\bf x)$ on $\mathbb{R}^{n}$,
$i=1,\ldots,m$; the symbol $T$ will denote this collection throughout. For
numbers $0<a_{i}\leq b_{i}$, $c_{i}>0$, and real constants
$\alpha_{i}$ and $\beta_{i}$ we define
$P_{1}({\bf x}):=f_{1}(L_{1}({\bf x}))f_{2}(L_{2}({\bf x}))\cdots
f_{m}(L_{m}({\bf x})),$ (8.31)
where the functions $f_{i}$, $(i=1,\ldots,m)$ satisfy the condition:
$|f_{i}(y)|\leq\left\\{\begin{array}[]{ll}c_{i}|y|^{\alpha_{i}}&\mbox{if}\quad|y|<a_{i}\\\
c_{i}|y|^{\beta_{i}}&\mbox{if}\quad|y|>b_{i},\end{array}\right.$ (8.32)
and $|f_{i}(y)|$ is bounded above in the interval $(a_{i},b_{i})$,
$i=1,...,m$. The constants $\alpha_{i}$ and $\beta_{i}$ are called the
exponents of $|y|$ around $0$ and $\infty$, respectively. For $W\subset T$ we
set $s(W):={\rm span}(W)\cap T$, where ${\rm span}(W)$ denotes the linear span
of $W$. Define
$d_{0}(W):=r(W)+\sum_{s(W)}\alpha_{i}\quad\text{and}\quad
d_{\infty}(W):=r(T)-r(W)+\sum_{T\setminus s(W)}\beta_{i}.$ (8.33)
A summation over a set $E$ means summation over the set $\\{i:L_{i}\in E\\}$,
$|E|$ denotes the cardinality of $E$ and $r(E)$ is the rank of $E$, that is,
the number of linearly independent functionals in $E$. We call a set $W$
padded if for every $L$ in $W$, $L$ is also in $s(W\setminus\\{L\\})$, that
is, $L$ can be obtained as a linear combination of other elements in $W$.
Observe that $W=\emptyset$ is padded, and if $W$ is linearly independent, then
$d_{0}(W)=|W|+\sum_{s(W)}\alpha_{i}$.
The proofs of the following results can be found in Terrin and Taqqu [101]
(see also Fox and Taqqu [39], and Terrin and Taqqu [103]).
###### Proposition 8.10.
Suppose that $r(T)=n$, that (a) $d_{0}(W)>0$ for every nonempty subset $W$ of
$T$ with $s(W)=W$, and that (b) $d_{\infty}(W)<0$ for every proper subset $W$
of $T$ with $s(W)=W$, including the empty set. Then
$\int_{\mathbb{R}^{n}}|P_{1}({\bf x})|d{\bf x}<\infty,$ (8.34)
where $P_{1}({\bf x})$ is as in (8.31).
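The subset conditions of Proposition 8.10 can be checked mechanically in the special case where the $L_{i}$ are the coordinate functionals $L_{i}({\bf x})=x_{i}$ on $\mathbb{R}^{n}$, so that $s(W)=W$ and $r(W)=|W|$ for every subset $W$. The sketch below (the function name and example exponents are our own) enumerates the subsets and reproduces the familiar integrability criterion for products $|x_{1}|^{\alpha_{1}}\cdots|x_{n}|^{\alpha_{n}}$ with power-law tails.

```python
from itertools import chain, combinations

def subsets(T):
    return chain.from_iterable(combinations(T, r) for r in range(len(T) + 1))

def integrable(alphas, betas):
    """Power-counting check in the special case of coordinate functionals
    L_i(x) = x_i on R^n, where s(W) = W and r(W) = |W| for every W,
    so conditions (a) and (b) reduce to subset sums of the exponents."""
    n = len(alphas)
    T = list(range(n))
    for W in map(set, subsets(T)):
        # (a) d_0(W) = |W| + sum_{i in W} alpha_i > 0 for nonempty W
        if W and len(W) + sum(alphas[i] for i in W) <= 0:
            return False
        # (b) d_inf(W) = n - |W| + sum_{i not in W} beta_i < 0 for proper W
        if len(W) < n and n - len(W) + sum(betas[i] for i in T if i not in W) >= 0:
            return False
    return True

# |x1 x2|^(-1/2) near zero with |x_i|^(-2) tails: integrable on R^2
print(integrable([-0.5, -0.5], [-2.0, -2.0]))   # -> True
# an exponent alpha_1 <= -1 makes the integral diverge at the origin
print(integrable([-1.2, 0.0], [-2.0, -2.0]))    # -> False
```

For general linear functionals one would additionally have to compute spans and ranks, which this simplified sketch avoids.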
Proposition 8.10 can be extended to the class of functionals of the form
$L_{i}(\bf x)+\theta_{i}$, where $\theta_{i}$ is a constant. For $i=1,...,m$
let $\alpha_{i}$ and $\theta_{i}$ be real constants, and let $M_{i}(\bf x)$ be
a linear functional on $\mathbb{R}^{n}$. Put $L_{i}(\bf x):=M_{i}(\bf
x)+\theta_{i}$ and set
$P_{2}({\bf x}):=|L_{1}({\bf x})|^{\alpha_{1}}\cdots|L_{m}({\bf
x})|^{\alpha_{m}}.$ (8.35)
###### Proposition 8.11.
Let $s(W)$ and $d_{0}(W)$ be as above. Suppose that $d_{0}(W)>0$ for every
nonempty subset $W$ of $T$ with $s(W)=W$. Then
$\int_{[-t,t]^{n}}|P_{2}({\bf x})|d{\bf x}<\infty\quad\text{for all}\quad
t>0,$ (8.36)
where $P_{2}({\bf x})$ is as in (8.35).
###### Remark 8.4.
If $\alpha_{i}>-1$ and $\beta_{i}\geq-1$ in Proposition 8.10, then it suffices
to verify the conditions (a) and (b) for sets $W$ that are also padded. If
$\alpha_{i}>-1$ in Proposition 8.11, then it suffices to consider subsets $W$
that are also padded (see Terrin and Taqqu [101, 103]).
## References
* [1] Alomari, H. M., Frías, M.P., Leonenko, N. N., Ruiz-Medina, M.D. Sakhno, L. and Torres, A. (2017). Asymptotic properties of parameter estimates for random fields with tapered data, Electronic Journal of Statistics 11, 3332–3367.
* [2] Anh, V.V., Angulo, J.M. and Ruiz-Medina, M.D. (1999). Possible long-range dependence in fractional random fields. J. Statist. Plann. Inference 80, 95–110.
* [3] Anh, V.V., Leonenko, N.N. and McVinish, R. (2001). Models for fractional Riesz-Bessel motion and related processes. Fractals 9, 329–346.
* [4] Anh, V.V., Leonenko, N.N. and Sakhno, L.M. (2007). Minimum contrast estimation of random processes based on information of second and third orders. J. Statist. Planning Inference. 137, 1302–1331.
* [5] Anh, V.V., Leonenko, N.N. and Sakhno, L.M. (2009). Evaluation of bias in high-order spectral estimation. Theor. Probability and Math. Statist. 80, 1–14.
* [6] Avram, F. (1988). On bilinear forms in Gaussian random variables and Toeplitz matrices. Probab. Theory Related Fields 79, 37–45.
* [7] Avram, F., Leonenko, N. N. and Sakhno, L. (2010). On a Szegö type limit theorem, the Hölder-Young-Brascamp-Lieb inequality, and the asymptotic theory of integrals and quadratic forms of stationary fields, ESAIM: Probability and Statistics 14, 210–255.
* [8] Bai, S., Ginovyan, M. S. and Taqqu, M. S. (2015). Functional limit theorems for Toeplitz quadratic functionals of continuous time Gaussian stationary processes. Statistics and Probability Letters 104, 58-67.
* [9] Bai, S., Ginovyan, M. S. and Taqqu, M. S. (2016). Limit theorems for quadratic forms of Lévy-driven continuous time linear processes. Stochast. Process. Appl. 126, 1036–1065.
* [10] Bentkus, R. (1972). On the error of the estimate of the spectral function of a stationary process. Lit. Mat. Sb. 12, 55–71.
* [11] Beran, J. (1994). Statistics for Long-Memory Processes. Chapman & Hall, New York.
* [12] Beran, J., Feng, Y., Ghosh, S. and Kulik, R. (2013). Long-Memory Processes: Probabilistic Properties and Statistical Methods. Springer, Berlin.
* [13] Bingham, N.H., Goldie, C.M. and Teugels, J.L. (1989). Regular Variation. Cambridge University Press, New York.
* [14] Bloomfield, P. (1976). Fourier Analysis of Time Series. Wiley, New York.
* [15] Bogachev, V.I. (2007) Measure Theory, Vol. 1, Springer, New York.
* [16] Bondon, P. and Palma, W. (2006). A class of antipersistent processes. J. Time Ser. Anal. 28, 261–273.
* [17] Brillinger, D.R. (1981). Time Series: Data Analysis and Theory, Holden Day, San Francisco.
* [18] Brockwell, P.J. (2014). Recent results in the theory and applications of CARMA processes. Annals of the Institute of Statistical Mathematics 66(4), 647–685.
* [19] Brockwell, P.J. and Davis, R.A. (1991). Time Series: Theory and Methods, 2nd ed. Springer, New York.
* [20] Bryc, W. and Dembo, A. (1997) Large deviations for quadratic functionals of Gaussian processes. J. Theoret. Probab. 10, 307–332.
* [21] Casas, I. and Gao, J. (2008). Econometric estimation in long-range dependent volatility models: Theory and practice. Journal of Econometrics. 147, 72–83.
* [22] Cohen, S. and Lindner, A. (2013). A central limit theorem for the sample autocorrelations of a Lévy driven continuous time moving average process, J. Statist. Plann. Inference 143 (8), 1295–1306.
* [23] Cramér, H. and Leadbetter, M.R. (1967) Stationary and Related Stochastic Processes: Sample Function Properties and Their Applications. John Wiley & Sons, New York.
* [24] Dahlhaus, R. (1983). Spectral analysis with tapered data, J. Time Ser. Anal. 4, 163-174.
* [25] Dahlhaus, R. (1984). Parameter estimation of stationary processes with spectra containing strong peaks. In “Robust and Nonlinear Time Series Analysis” (Eds. Franke, Härdle and Martin). Lecture Notes in Statistics, 26, 50–67.
* [26] Dahlhaus, R. (1985). A functional limit theorem for tapered empirical spectral functions, Stoch. Process. Appl. 19, 135–149.
* [27] Dahlhaus, R. (1989). Efficient parameter estimation for self-similar processes. Ann. Statist. 17 1749–1766.
* [28] Dahlhaus, R. and Künsch, H (1987). Edge effects and efficient parameter estimation for stationary random fields, Biometrika 74(4), 877–882.
* [29] Dahlhaus, R. and Wefelmeyer, W. (1996) Asymptotically optimal estimation in misspecified time series models, Ann. Statist. 24, 952–974.
* [30] Dobrushin, R. L. (1979). Gaussian and their subordinated self-similar random generalized fields, Ann. Probab. 7(1), 1–28.
* [31] Doob, J.L. (1953). Stochastic Processes, Wiley, New York.
* [32] Doukhan, P., León, J. and Soulier, P. (1996). Central and non central limit theorems for strongly dependent stationary Gaussian field. Rebrape, 10, 205–223.
* [33] Dzhaparidze, K. (1986). Parameter Estimation and Hypothesis Testing in Spectral Analysis of Stationary Time Series, Springer, New York.
* [34] Embrechts, P. and Maejima, M. (2002). Selfsimilar Processes, Princeton University Press.
* [35] Farré, M., Jolis, M. and Utzet, F. (2010). Multiple Stratonovich integral and Hu-Meyer formula for Lévy processes, Ann. Probab. 38(6), 2136–2169.
* [36] Flandrin, P. (1989). On the spectrum of fractional Brownian motions. IEEE Trans. Inform. Theory 35, 197–199.
* [37] Fox, R. and Taqqu, M.S. (1985). Non-central limit theorems for quadratic forms in random variables having long-range dependence. Ann. Probab. 13(2), 428–446.
* [38] Fox, R. and Taqqu, M.S. (1986). Large-sample properties of parameter estimation for strongly dependent stationary Gaussian time series. Ann. Statist. 14, 517–532.
* [39] Fox, R. and Taqqu, M.S. (1987). Central limit theorem for quadratic forms in random variables having long-range dependence. Probab. Theory Related Fields 74, 213–240.
* [40] Gao, J. (2004). Modelling long-range dependent Gaussian processes with application in continuous time financial models. J. Appl.Probab. 41, 467–482.
* [41] Gao J., Anh V., Heyde C. and Tieng Q. (2001). Parameter estimation of stochastic processes with long-range dependence and intermittency. J. Time Ser. Anal. 22, 517–535.
* [42] Ginovyan, M.S. (1988). Asymptotically efficient nonparametric estimation of functionals on spectral density with zeros, Theory Probab. Appl. 33 315–322.
* [43] Ginovyan, M.S. (1988). On estimate of the value of the linear functional in a spectral density of stationary Gaussian process. Theory Probab. Appl. 33, 777–781.
* [44] Ginovyan, M.S. (1993). A note on central limit theorem for Toeplitz type quadratic forms in stationary Gaussian variables. J. Cont. Math. Anal. 28, 78–81.
* [45] Ginovyan, M.S. (1994). On Toeplitz type quadratic functionals in Gaussian stationary process. Probab. Theory Related Fields. 100, 395–406.
* [46] Ginovyan, M.S. (1995). Asymptotic properties of spectrum estimate of stationary Gaussian process. J. Cont. Math. Anal. 30 1–17.
* [47] Ginovyan, M.S. (2003). Asymptotically efficient nonparametric estimation of nonlinear spectral functionals. Acta Appl. Math. 78, 145–154.
* [48] Ginovyan, M.S. (2011). Efficient estimation of spectral functionals for Gaussian stationary models. Communications on Stochastic Analysis 5, 211–232.
* [49] Ginovyan, M.S. (2011). Efficient estimation of spectral functionals for continuous time stationary models. Acta Appl. Math. 115, 233–254.
* [50] Ginovyan, M.S. (2020). Parameter Estimation for Lévy-Driven continuous time Linear Models with Tapered Data. Acta Appl. Math. 169 (1), 79–97.
* [51] Ginovyan, M.S. (2021). Goodness-of-fit tests for stationary Gaussian processes with tapered data. Acta Appl. Math. 171(1), 1–12.
* [52] Ginovyan, M.S. and Sahakyan, A. A. (2005). On the Central Limit Theorem for Toeplitz Quadratic Forms of Stationary Sequences. Theory Probab. and Appl. 49, 612 - 628.
* [53] Ginovyan, M.S. and Sahakyan, A. A. (2007). Limit Theorems for Toeplitz quadratic functionals of continuous time stationary process, Probab. Theory Relat. Fields 138, 551–579.
* [54] Ginovyan, M.S. and Sahakyan, A. A. (2012). Trace approximations of products of truncated Toeplitz operators. Theory Probab. Appl. 56(1), 57–71.
* [55] Ginovyan, M.S. and Sahakyan, A. A. (2013). On the trace approximations of products of Toeplitz matrices. Statist. Probab. Lett. 83(3), 753-760.
* [56] Ginovyan, M.S. and Sahakyan, A. A. (2019). Estimation of spectral functionals for Lévy-driven continuous time linear models with tapered data, Electronic Journal of Statistics 13, 255–283.
* [57] Ginovyan, M.S. and Sahakyan, A. A. (2019). Limit theorems for tapered Toeplitz quadratic functionals of continuous time Gaussian stationary processes. J. Cont. Math. Anal. 54(4), 222–239.
* [58] Ginovyan, M.S., Sahakyan, A. A. and Taqqu M. S. (2014). The trace problem for Toeplitz matrices and operators and its impact in probability. Probability Surveys 11, 393–440.
* [59] Giraitis, L., Koul, H. and Surgailis, D. (2012). Large Sample Inference for Long Memory Processes. Imperial College Press, London.
* [60] Giraitis, L. and Surgailis, D. (1990). A central limit theorem for quadratic forms in strongly dependent linear variables and its application to asymptotical normality of Whittle’s estimate. Probab. Theory Related Fields. 86, 87–104.
* [61] Giraitis, L. and Taqqu, M.S. (1997). Limit theorems for bivariate Appell polynomials: Part I. Central limit theorems. Probab. Theory Related Fields, 107, 359–381.
* [62] Giraitis, L. and Taqqu, M.S. (1998). Central limit theorems for quadratic forms with time-domain conditions. Annals of Probability. 26(1), 377–398.
* [63] Giraitis, L. and Taqqu, M.S. (1999). Convergence of normalized quadratic forms, J. Statist. Plann. Inference 80 (1), 15–35.
* [64] Giraitis, L. and Taqqu, M.S. (1999). Whittle estimator for finite-variance non-Gaussian long-memory time series. Ann. Statist. 27 (1), 178–203.
* [65] Giraitis, L. and Taqqu, M.S. (2001). Functional non-central and central limit theorems for bivariate Appell polynomials. Journal of Theoretical Probability, 14(2), 393–426.
* [66] Granger, C.W.J. and Joyeux, R. (1980). An introduction to long-memory time series and fractional differencing. J. Time Ser. Anal. 1, 15–29.
* [67] Grenander, U. and Szegő, G. (1958). Toeplitz Forms and Their Applications. University of California Press, Berkeley and Los Angeles.
* [68] Guégan, D. (2005). How can we define the concept of long memory? An econometric survey. Econometric Reviews 24, 113–149.
* [69] Guyon, X. (1995). Random Fields on a Network: Modelling, Statistics and Applications, Springer, New York.
* [70] Hasminskii, R.Z. and Ibragimov, I.A. (1986). Asymptotically efficient nonparametric estimation of functionals of a spectral density function. Probab. Theory Related Fields 73, 447–461.
* [71] Henry, M. and Zaffaroni, P. (2003). The long range dependence paradigm for macroeconomics and finance. In Long range dependence: Theory and applications. Doukhan, P., Oppenheim, G., Taqqu, M.S. Eds. Birkhäuser, Boston, 417–438.
* [72] Heyde, C.C. and Gay, R. (1993). Smoothed periodogram asymptotics and estimation for processes and fields with possible long-range dependence. Stochastic Processes and their Applications 45, 169–182.
* [73] Hosking, J.R.M. (2002). Fractional differencing. Biometrika 68, 165–176.
* [74] Hunt, R. A., Muckenhoupt, B. and Wheeden, R. L. (1973). Weighted Norm Inequalities for the Conjugate Function and Hilbert Transform. Trans. of the AMS 176, 227–251.
* [75] Ibragimov, I.A. (1963). On estimation of the spectral function of a stationary Gaussian process. Theory Probab. Appl. 8, 366–401.
* [76] Ibragimov, I.A. (1967). On maximum likelihood estimation of parameters of the spectral density of stationary time series. Theory Probab. Appl. 12, 115–119.
* [77] Ibragimov, I.A. and Linnik, Yu. V. (1971). Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff Publishing Groningen, The Netherlands.
* [78] Itô, K. (1951). Multiple Wiener Integral. J. Math. Soc. Japan 3(1), 157-169.
* [79] Kac, M. (1954). Toeplitz matrices, translation kernels and a related problem in probability theory. Duke Math. J. 21, 501–509.
* [80] Leonenko, N. N. and Sakhno, L. M. (2006). On the Whittle estimators for some classes of continuous-parameter random processes and fields. Stat. Probab. Lett. 76, 781–795.
* [81] Lieberman, O. and Phillips, P.C.B. (2004). Error bounds and asymptotic expansions for Toeplitz product functionals of unbounded spectra. J. Time Ser. Anal. 25, 733–753.
* [82] Ludeña, C. and Lavielle, M. (1999). The Whittle Estimator for Strongly Dependent Stationary Gaussian Fields. Scand. J. Stat. 26, 433–450.
* [83] Palma, W. (2007). Long-memory time series. Wiley, New York.
* [84] Peccati, G. and Taqqu, M.S. (2011). Wiener Chaos: Moments, Cumulants and Diagrams: A Survey with Computer Implementation, Springer, New York.
* [85] Robinson, P.M. (2003) Long memory time series. In Time series with long memory. Robinson, P.M. Ed. Oxford University Press, Oxford, 4–32.
* [86] Rosenblatt M. (1961). Independence and dependence. In Proc. 4th Berkeley Symp. Math. Stat. Probab. 2, 431–443.
* [87] Rosenblatt M. (1962). Asymptotic behavior of eigenvalues of Toeplitz forms. J. Math. and Mech. 11, 941–950.
* [88] Rosenblatt M. (1979). Some limit theorems for partial sums of quadratic forms in stationary Gaussian variables. Z. Wahr. verw. Gebiete 49, 125–132.
* [89] Samorodnitsky, G. and Taqqu, M.S. (1994). Stable Non-Gaussian Random Processes. Chapman & Hall/CRC, New York.
* [90] Solev, V. N. and Gerville-Reache, L. (2006). Large Toeplitz operators and quadratic forms generated by a stationary Gaussian sequence. Journal of Mathematical Sciences 139(3), 6625-6630.
* [91] Solo, V. (1992). Intrinsic random functions and the paradox of 1/f noise. SIAM J. Appl. Math. 52 270–291.
* [92] Soulier, P. (2009). Some applications of regular variation in probability and statistics. Unpublished Notes.
* [93] Spangenberg, F. (2015). A central limit theorem for the sample autocovariance of a continuous time moving average process with long memory. ArXiv preprint arXiv:1502.04851.
* [94] Surgailis, D. (1982). Zones of attraction of self-similar multiple integrals. Lithuanian Mathematical Journal 22 (3), 327–340.
* [95] Taniguchi, M. (1987). Minimum contrast estimation for spectral densities of stationary processes. J. R. Stat. Soc. Ser. B-Stat. Methodol. 49, 315–325.
* [96] Taniguchi, M. and Kakizawa, Y. (2000). Asymptotic Theory of Statistical Inference for Time Series. Academic Press, New York.
* [97] Taqqu, M.S. (1975). Weak convergence to fractional Brownian motion and the Rosenblatt process. Z. Wahr. verw. Gebiete 31, 287–302.
* [98] Taqqu, M.S. (1987). Toeplitz matrices and estimation of time series with long-range dependence. In Proc. of the First World Congress of the Bernoulli Society. VNU Science Press, BV: Utrecht, The Netherlands 1, 75–83.
* [99] Taqqu, M.S. (2003). Fractional Brownian motion and long-range dependence. In Long range dependence: Theory and applications. Doukhan, P., Oppenheim, G., Taqqu, M.S. Eds. Birkhäuser, Boston, 5–38.
* [100] Taqqu, M.S. (2011). The Rosenblatt process. In Selected works of Murray Rosenblatt, Selected works in Probab. and Statist. Davis R.A. et al. Eds. Springer, Boston, 29–45.
* [101] Terrin, N. and Taqqu, M.S. (1990). A noncentral limit theorem for quadratic forms of Gaussian stationary sequences. J. Theoret. Probab. 3, 449–475.
* [102] Terrin, N. and Taqqu, M.S. (1991). Convergence to a Gaussian limit as the normalization exponent tends to 1/2. Statist. Probab. Lett. 11, 419–427.
* [103] Terrin, N. and Taqqu, M.S. (1991). Power Counting Theorem in Euclidean Space. In: Random Walks, Brownian Motion, and Interacting Particle Systems. Progress in Probability. Durrett R., Kesten H. Eds. Vol. 28. Birkhäuser, Boston, MA, 425–440.
* [104] Tsai, H. and Chan, K. S. (2005). Quasi-maximum likelihood estimation for a class of continuous-time long memory processes. J. Time Ser. Anal. 26(5), 691–713.
* [105] Veillette, M. and Taqqu, M.S. (2013). Properties and numerical evaluation of the Rosenblatt distribution. Bernoulli 19, 982–1005.
* [106] Walker, A.M. (1964). Asymptotic properties of least-squares estimates of parameters of the spectrum of a stationary non-deterministic time-series. J. Austr. Math. Soc. 4, 363–384.
* [107] Whittle, P. (1951). Hypothesis Testing in Time Series. Hafner, New York.
* [108] Yaglom, A.M. (1986). The Correlation Theory of Stationary and Related Random Processes, Vol. 1, Springer, New York.
Mamikon S. Ginovyan:
Department of Mathematics and Statistics, Boston University,
111 Cummington Mall, Boston, MA 02215, USA
e-mail<EMAIL_ADDRESS>
Murad S. Taqqu:
Department of Mathematics and Statistics, Boston University,
111 Cummington Mall, Boston, MA 02215, USA
e-mail<EMAIL_ADDRESS>
## Preliminaries
See Kemeny and Snell [kemenysnell1983] or Norris [norris1998markov]
for basic facts about Markov chains. Throughout this paper, except where
otherwise noted, Markov chains are assumed to have a finite state space
indexed by positive integers $1,\dots,n$ for some $n$; when we consider Markov
chains with countably infinite state spaces, we will assume that for each
state $i$ there are only finitely many states $j$ such that the transition
probability $P_{ij}$ from state $i$ to state $j$ is positive. We can represent
a Markov chain by a weighted directed graph $G=(V,E)$ whose vertices $v_{i}\in
V$ are the allowed states and the weight of the directed edge
$(v_{i},v_{j})\in E$ is $P_{ij}$. For brevity, we will use the words state and
vertex to refer to both the state in the Markov chain and the corresponding
vertex in $G$, passing back and forth between the abstract Markov chain and
its concrete embodiment as a random walk on $G$. We say that state $i$ is an
absorbing state if $P_{ii}=1$ (equivalently if $G$ has a loop at $v_{i}$ with
weight 1). Notice that for every vertex $v\in G$, the sum of the weights of
all edges $(v,w)$ is 1.
Let $H=P-I$, where $I$ is the $n$-by-$n$ identity matrix. Let $H_{i}$,
$P_{i}$, and $I_{i}$ denote the $i$th rows of the matrices $H$, $P$, and $I$
respectively. Observe that $I_{i}=e_{i}$ (the $i$th unit vector) and that $-H$
is the Laplacian of the graph. The Markov chain admits at least one stationary
measure $\ppi$ for which the vector $v=[\ppi(1),\dots,\ppi(n)]$ satisfies
$vP=v$ (this follows from the assumption that the Markov chain is finite; see,
e.g., norris1998markov), so 1 is an eigenvalue of $P$ and 0 is an eigenvalue
of $H$. All eigenvalues of $P$ have magnitude at most 1. When the Markov chain
is irreducible (that is, when every state can be reached from every other
state in some finite number of steps), 1 is a simple eigenvalue of $P$ and 0
is a simple eigenvalue of $H$, so that $H$ has rank $n-1$, and the rows of
$H$, taken with integer coefficients, generate an $(n-1)$-dimensional sublattice
of the space of vectors with entries summing to zero; in this case, there is a
unique stationary measure $\ppi$ satisfying $\ppi(1)+\cdots+\ppi(n)=1$.
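A stationary vector can be computed numerically by solving $vP=v$ together with the normalization; the sketch below does this for the three-state doubly-reflecting random walk that serves as the running example later in this section (the least-squares formulation is an illustrative choice, not the paper's method).

```python
import numpy as np

# Doubly-reflecting random walk on three states.
P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
H = P - np.eye(3)

# Solve v H = 0 together with the normalization v.sum() == 1,
# stacked as one overdetermined linear system.
A = np.vstack([H.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
# pi is approximately [1/3, 1/3, 1/3]: the walk is doubly stochastic,
# so the uniform measure is stationary.
```

For an irreducible chain the solution is unique, matching the rank-$(n-1)$ discussion above.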
We now informally introduce the hunger game on the weighted directed graph $G$
by comparing it to the chip-firing model and the rotor-router model before
offering a more technical definition.
The “goodness” of a deterministic analogue of a random process can be assessed
by the notion of discrepancy. If some numerical characteristic of the
deterministic process converges to a corresponding numerical characteristic of
the random process as simulation time goes to infinity, one can try to
determine the rate of convergence.
The simplest deterministic analogues of Markov chains were invented by Engel
[engel1975probabilistic, engel1976does] under the name of the stochastic
abacus (though the term chip-firing is more common nowadays). Suppose that all
the
transition probabilities $P_{ij}$ are rational, and that we have positive
integers $d_{1},\dots,d_{n}$ such that $d_{i}P_{ij}$ is an integer for all
$i,j$. Assume that the Markov chain is irreducible. Define a chip-
configuration as an $n$-tuple of nonnegative integers
$\mathbf{c}=(\mathbf{c}_{1},\dots,\mathbf{c}_{n})$; say that a chip-configuration
is stable if $\mathbf{c}_{i}<d_{i}$ whenever $i$ is a non-absorbing state. If
$\mathbf{c}_{i}\geq d_{i}$, then we obtain another chip-configuration by firing
at $i$, replacing $\mathbf{c}$ by
$\mathbf{c^{\prime}}=\mathbf{c}+d_{i}H_{i}$; if we represent the Markov chain
by drawing a graph with vertices $v_{1},\dots,v_{n}$ and we represent the
chip-configuration $\mathbf{c}$ by putting $\mathbf{c}_{j}$ chips at $v_{j}$
for all $j$, then firing means sending $d_{i}P_{ij}$ chips from $i$ to $j$ for
each $j\neq i$. Chip-firing can be used to find the stationary probability
measure of an irreducible Markov chain as follows. Put sufficiently many chips
on the state-graph so that stabilization is impossible no matter how many
firings are performed, and start performing firings however one wishes; when
we encounter a chip-configuration we have seen before (as must happen
eventually), the vector that records the number of times each state has fired
will be a stationary vector.
Suppose we have the Markov chain given by the Markov matrix
$P=\begin{pmatrix}\frac{1}{2}&\frac{1}{2}&0\\\
\frac{1}{2}&0&\frac{1}{2}\\\
0&\frac{1}{2}&\frac{1}{2}\end{pmatrix},$
representing a doubly-reflecting random walk. Its corresponding graph is shown
in the figure below.
Figure: A graph $G$ corresponding to a doubly-reflecting random walk.
If we place 1 chip at state 1, 2 chips at state 2, and 1 chip at state 3, then
state 2 is unstable, so we may fire at 2, turning the chip-configuration
(1,2,1) into the chip-configuration (2,0,2). Then firing at 1 and at 3 brings
us back to (1,2,1). The vector that records the number of times each state
fired is (1,1,1), which is indeed a stationary vector for this Markov chain.
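The example above can be replayed mechanically. The following is a minimal sketch (not from the original paper); it relies on the fact that the firing sequence here returns to the initial configuration, so the cumulative firing counts form the stationary vector.

```python
# Chip-firing for the doubly-reflecting random walk with d_1 = d_2 = d_3 = 2,
# so that d_i * P_ij is the integer number of chips sent from i to j.
moves = [[1, 1, 0],   # d_1 * P_1j  (self-loop chips stay at state 1)
         [1, 0, 1],   # d_2 * P_2j
         [0, 1, 1]]   # d_3 * P_3j
d = [2, 2, 2]

def fire(c, i):
    # firing at i: c' = c + d_i * H_i, i.e. remove d_i chips from i
    # and send d_i * P_ij chips to each j
    c = list(c)
    c[i] -= d[i]
    for j, m in enumerate(moves[i]):
        c[j] += m
    return tuple(c)

c = (1, 2, 1)            # initial chip-configuration from the example
fired = [0, 0, 0]
seen = {c}
while True:
    # with 4 chips on 3 states, some state always has >= 2 chips,
    # so stabilization is impossible
    unstable = [i for i in range(3) if c[i] >= d[i]]
    i = unstable[0]
    c = fire(c, i)
    fired[i] += 1
    if c in seen:        # a repeated configuration: `fired` is stationary
        break
    seen.add(c)
print(c, fired)          # (1, 2, 1) [1, 1, 1]
```

The firing counts (1, 1, 1) are proportional to the uniform stationary measure of this chain, as in the text.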
The rotor-router model [holroyd2010rotor] is a different scheme for imitating
Markov chains deterministically. Assume as above that the Markov chain has
rational transition probabilities, with $d_{i}$ as above. Represent the Markov
chain using a directed graph with $d_{i}P_{ij}$ parallel edges from $v_{i}$ to
$v_{j}$, so that $v_{i}$ has outdegree $d_{i}$. Each vertex distributes
arriving chips along its outgoing edges in a cyclic manner, not sending a chip
along any edge for a second time until it has sent a chip along every edge at
least once, and thereafter always sending the next chip along the edge along
which it has sent a chip least recently. Inasmuch as the vertex with the chip
gets to decide where the chip goes next, we call this “supply-side” management
of the chip’s movement. Assume that the Markov chain is irreducible. It can be
shown that once the chip enters an infinite loop (as must happen eventually),
the fraction of the time that the chip spends at vertex $v_{i}$ is
proportional to the steady-state $\ppi(i)$.

We use the same Markov chain as
example: 3 chip. Suppose we start with the chip at $v_{1}$ and begin the game
by sending the chip to $v_{2}$, then $v_{3}$, then $v_{3}$ again. Since at
this point the chip has already traveled along the edge from $v_{3}$ to
$v_{3}$, it now travels along the edge from $v_{3}$ to $v_{2}$. As the chip
has already gone from $v_{2}$ to $v_{3}$, the rotor-router protocol dictates
that it must now travel along the other edge from $v_{2}$ and go to $v_{1}$.
Similarly, as it has already gone from $v_{1}$ to $v_{2}$, now it must travel
from $v_{1}$ to $v_{1}$. Thereafter the process cycles forever. Since within
each cycle the chip spends 2 steps at each vertex, (2,2,2) is a stationary
vector.

The hunger game introduced in this article can be seen as a “demand-
side” management system: each vertex has a “hunger” for chips, determined by
its expectation of receiving chips from neighboring vertices that have been
previously visited. When a vertex $v$ receives a chip, the neighboring
vertices’ hunger increases in accordance with the transition probabilities from
$v$ to those vertices, and $v$ sends its chip to the vertex with the highest
hunger (regardless of whether that vertex is a neighbor of $v$).

Now we give a
more formal definition of the hunger game. It is simplest to start with the
situation in which the chain runs forever without restarts (in contrast to
chains that will be restarted when they enter an absorbing state). We also
start with the case in which the state space is finite, with $|V|=n$,
deferring discussion of infinite-state spaces until later. The hunger vector
$\mathbf{h}\in\R^{n}$ represents the hunger at each vertex in $V$; we will
sometimes call it the hunger state to emphasize its interpretation as a state
of the hunger game system. At each step, whichever vertex has the highest
hunger receives the chip; if vertex $v_{i}$ receives the chip, then the hunger
vector $\mathbf{h}$ is updated by adding $H_{i}=P_{i}-I_{i}$ to it,
corresponding to the increase in vertices’ hunger from the presence of this
chip at $v_{i}$ but also the satiation of $v_{i}$ after receiving a chip. If
multiple states are tied for the highest hunger, we break the tie by choosing
the lowest-indexed such vertex. Since each row of $H$ has entries summing to
0, total hunger never changes. Each relocation of the chip is referred to as
firing the chip; specifically, when the chip is relocated at $i$, we say the
chip fires to $i$. Unlike rotor-router or chip-firing, under the hunger game
rules a chip need not be fired to a vertex adjacent to its
current location; see example: 3 hunger . In fact, the determination of the
chip’s next location depends only on the hunger state $\mathbf{h}$ and not the
current location of the chip. The step can be described purely in terms of the
matrix $H$ and the vector $\mathbf{h}$ without any reference to chips, via the
rule “Replace $\mathbf{h}$ by $\mathbf{h}^{\prime}=\mathbf{h}+H_{i}$ where $i$
maximizes $\mathbf{h}_{i}$, choosing the smallest such $i$ in the event of a
tie.”

We use the same Markov chain as example: 3 chip. Starting with
$\mathbf{h}=0$, as shown in subfig:ex2init , regardless of the initial
location of the chip, we fire the chip to $v_{1}$ under the tie-breaking rule,
yielding $\mathbf{h}=\left[-\frac{1}{2},\frac{1}{2},0\right]$ as shown in
subfig:ex2fire1 . After this, we fire to $v_{2}$, as shown in subfig:ex2fire2
, and then fire to $v_{3}$, as shown in subfig:ex2fire3 . Notice that we have
returned back to the initial hunger state $\mathbf{h}=\mathbf{0}$, so this
process repeats, visiting states 1, 2, 3, then back to 1, and so on. Notice
that we fire the chip from 3 to 1 even though $P_{31}=0$.
Figure : The initial hunger state $\mathbf{h}=\mathbf{0}$. Figure :
$\mathbf{h}$ after firing to $v_{1}$, shown in blue. States with updated
hungers are shown in yellow. Figure : $\mathbf{h}$ after firing to $v_{2}$,
shown in blue. States with updated hungers are shown in yellow. Figure :
$\mathbf{h}$ after firing to $v_{3}$, shown in blue. States with updated
hungers are shown in yellow. Figure : The hunger game on a doubly-reflecting
random walk.
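The firing sequence in this example can also be replayed mechanically; a minimal sketch (not from the paper), using $H=P-I$ and the lowest-index tie-breaking rule:

```python
# Hunger-game dynamics for the doubly-reflecting random walk above.
P = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
n = 3
# H = P - I, whose row i is added to h when the chip fires to vertex i
H = [[P[i][j] - (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]

h = [0.0, 0.0, 0.0]          # initial hunger state
visits = []
for _ in range(6):
    i = h.index(max(h))      # hungriest vertex; ties broken by lowest index
    visits.append(i + 1)     # 1-based state labels as in the text
    h = [h[k] + H[i][k] for k in range(n)]

print(visits, h)             # [1, 2, 3, 1, 2, 3] [0.0, 0.0, 0.0]
```

After three firings the hunger vector returns to $\mathbf{0}$ and the visit pattern 1, 2, 3 repeats, matching the figure sequence.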
If one ignores the chip and focuses on the entries of the hunger vector, the
hunger game can be viewed as a greedy variant of chip-firing. Specifically, if
all entries of $\mathbf{h}$ and $P$ are rational, we can without loss of
generality assume they are all non-negative integers (since the firing rule is
invariant under affine transformation of the hunger vector) and create a chip-
configuration of the customary kind in which the number of chips at $i$ is
$\mathbf{h}_{i}$. Then the firing rule, translated to the new context, tells
us to fire the vertex $i$ that has the most chips, with ties resolved as
before. When the state space is (countably) infinite, our hunger vectors are
infinite sequences. We restrict ourselves to sequences that are bounded and
take on only finitely many distinct values; this ensures that there exists at
least one $i$ for which $\mathbf{h}_{i}$ equals $\sup_{i}\mathbf{h}_{i}$, from
which it follows that a smallest such $i$ exists. As each vertex has only
finitely many outgoing edges, this assumption ensures that after any finite
number of steps in the hunger game there are only finitely many distinct
values of hunger, so a vertex of highest hunger can be found and the lowest-
indexed one can be chosen, ad infinitum. For Markov chains with absorbing
states we vary the procedure slightly. We start with a graph devoid of chips,
add a chip at an initial vertex, and then follow the rule for moving the chip
described above, with the extra stipulation that when the chip reaches an
absorbing vertex, it gets removed from the graph. In more detail, we define
chip addition operators $E_{i}$ as follows: Given an initial hunger vector
$\mathbf{h}$, we place a chip at $v_{i}$ (increasing the hunger of the
neighbors of $v_{i}$) and add $P_{i}$ to $\mathbf{h}$, and we then repeatedly
move the chip to the currently hungriest vertex (the lowest-indexed one, in
the event of a tie), simultaneously incrementing the hunger vector by the row
of $H$ corresponding to the chip’s new location, until we arrive at an
absorbing vertex $v_{k}$, at which point we subtract $P_{k}=I_{k}$ from the
current hunger vector and remove the chip from $v_{k}$. We define
$E_{i}(\mathbf{h})$ to be the final hunger vector, and call $E_{i}$ the chip
addition operator at $i$. It is possible for $E_{i}(\mathbf{h})$ to be
undefined, in the event that the process never arrives at an absorbing state,
but we will show in lemma: finite terminate that for finite absorbing chains
the process must terminate so that $E_{i}$ is well-defined; moreover, each
chip addition operator preserves total hunger, since the sum of the entries
increases by 1 when $P_{i}$ is added, stays the same each time a row of $H$ is
added, and decreases by 1 when $P_{k}$ is subtracted. As in the previous
situation, the process can be described purely in terms of vector and matrix
operations without reference to $G$ or a chip. Given a vector
$\mathbf{h}\in\R^{n}$, add row $P_{i}$ to $\mathbf{h}$. Thereafter, if $j$ is
the unique value such that $h_{j^{\prime}}<h_{j}$ for all $j^{\prime}<j$ and
$h_{j}\geq h_{j^{\prime}}$ for all ${j^{\prime}}>j$, add $H_{j}$ to
$\mathbf{h}$, unless $j$ is an absorbing state (call it $k$), in which case
subtract $P_{k}$ from the hunger vector and stop, calling the result
$E_{i}(\mathbf{h})$. Suppose we have the Markov chain represented by the graph
in fig:ex1G . It has absorbing states $v_{1}$ and $v_{5}$.
Figure : A graph $G$ corresponding to an absorbing Markov chain.
Let us compute $E_{3}(\mathbf{0})$. Starting with $\mathbf{h}=\mathbf{0}$, for
our first step we add a chip to $v_{3}$ to yield $\mathbf{h}=[0,0.6,0,0.4,0]$,
as shown in subfig:ex1insert . After this, we follow additional steps of the
hunger game process, firing the chip successively to $v_{2}$, $v_{4}$, and
$v_{5}$, as shown in subfig:ex1fire3 . Having reached an absorbing state, the
final step is to remove the chip from $v_{5}$, resulting in
$E_{3}(\mathbf{0})=[0.2,0.2,0.4,-0.6,-0.2]$, as shown in subfig:ex1remove .
Figure : The effect of inserting a chip at $v_{3}$, shown in blue. States with
updated hungers are shown in yellow. Figure : $\mathbf{h}$ as the chip fires
successively to $v_{2}$, $v_{4}$, and $v_{5}$, shown in blue. Updated hungers
are shown in yellow. Figure : $\mathbf{h}$ after removing chip from $v_{5}$,
shown in yellow. Figure : The hunger game on $G$ from fig:ex1G after inserting
a chip at $v_{3}$.
Notice that the total hunger is 0 at the start, increases to 1 when the chip
is added at $v_{3}$, stays 1 as the chip moves through $G$, and decreases to 0
when the chip is removed at the end. Since increasing the hunger at every
vertex by the same amount has no effect on the dynamics of the hunger game,
when our Markov chain is finite we will often assume that total hunger is 0.
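The computation of $E_{3}(\mathbf{0})$ above can be checked mechanically. The following is a minimal sketch (not from the paper), with the transition matrix read off the graph in fig:ex1G:

```python
# Chip-addition operator E_i for the absorbing chain of fig:ex1G.
# States 1 and 5 (indices 0 and 4) are absorbing.
P = [[1.0, 0.0, 0.0, 0.0, 0.0],
     [0.2, 0.6, 0.2, 0.0, 0.0],
     [0.0, 0.6, 0.0, 0.4, 0.0],
     [0.0, 0.0, 0.2, 0.0, 0.8],
     [0.0, 0.0, 0.0, 0.0, 1.0]]
absorbing = {0, 4}
n = 5

def E(i, h):
    """Add a chip at state i, run the hunger game until absorption."""
    h = [h[k] + P[i][k] for k in range(n)]       # insert chip: add row P_i
    while True:
        j = h.index(max(h))                      # hungriest; lowest index on ties
        if j in absorbing:
            h[j] -= 1.0                          # subtract P_j = I_j, remove chip
            return h
        h = [h[k] + P[j][k] for k in range(n)]   # add H_j = P_j - I_j
        h[j] -= 1.0

print([round(x, 6) for x in E(2, [0.0] * n)])    # [0.2, 0.2, 0.4, -0.6, -0.2]
```

The chip fires successively to $v_{2}$, $v_{4}$, and $v_{5}$, reproducing $E_{3}(\mathbf{0})=[0.2,0.2,0.4,-0.6,-0.2]$ from the text.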
# Non-relativistic Effective Quantum Mechanics of the Coulomb Interaction
David M. Jacobs <EMAIL_ADDRESS>
Physics Department, Norwich University
158 Harmon Dr, Northfield, VT 05663
Matthew Jankowski
Physics Department, Hamilton College
198 College Hill Rd., Clinton, NY 13323
###### Abstract
We apply the ideas of effective field theory to nonrelativistic quantum
mechanics. Utilizing an artificial boundary of ignorance as a calculational
tool, we develop the effective theory using boundary conditions to encode
short-ranged effects that are deliberately not modeled; thus, the boundary
conditions play a role similar to the effective action in field theory.
Unitarity is temporarily violated in this method, but is preserved on average.
As a demonstration of this approach, we consider the Coulomb interaction and
find that this effective quantum mechanics can predict the bound state
energies to very high accuracy with a small number of fitting parameters. It
is also shown to be equivalent to the theory of quantum defects, but derived
here using an _effective_ framework. The method respects electromagnetic gauge
invariance and also can describe decays due to short-ranged interactions, such
as those found in positronium. Effective quantum mechanics appears applicable
for systems that admit analytic long-range descriptions, but whose short-
ranged effects are not reliably or efficiently modeled. Potential applications
of this approach include atomic and condensed matter systems, but it may also
provide a useful perspective for the study of black holes.
## I Introduction
Effective field theory (EFT) has had many successes within subfields of
physics that include condensed matter, particle physics, astrophysics, and
cosmology. The success of an effective field theory depends on a hierarchy of
scales; the momenta or wavelengths of the experimental probes or observations
of a system must be markedly different from the scale(s) of the processes not
described, at least in detail, by the effective theory.
In this work we apply many of the ideas of EFT to quantum mechanics. The
starting point of our discussion is the description of contact
interactions, or delta-function potentials, in quantum mechanics in two and
three dimensions. It is known that such potentials sometimes require elaborate
regularization and renormalization schemes to ensure physically sensible
results are obtained Jackiw (1995). In Jackiw (1995) it was advocated that
non-trivial boundary conditions are a preferable alternative to using
delta functions.
When applied to bound Coulomb states, this non-trivial boundary condition
method – also known as the method of self-adjoint extension – has been shown
to produce energy levels that obey Rydberg’s formula, at least when the
boundary condition parameter is small and proportional to the quantum defect
Beck (2016). In Beck (2016) it was shown that a unique boundary condition can
provide an effective description of “UV” physics near the origin, such as the
effect of a finite nuclear radius or the Darwin fine-structure correction –
really anything that is, or may be approximated as, a delta-function potential.
There are two notable limitations to the analysis in Beck (2016): (1) because
no non-trivial $\ell\neq 0$ solution to the Schrodinger equation is
normalizable, the method applies only to $s$-states ($\ell=0$) and (2) it
does not reproduce the Rydberg-Ritz formula, the more accurate bound state
energy formula in which the quantum defect is energy-dependent Gallagher
(1994).
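For orientation, the two bound-state formulas can be compared numerically. The sketch below is illustrative only: the defect parameters `delta0` and `delta2` are hypothetical, and the energy dependence $\delta(E)=\delta_{0}+\delta_{2}E/\mathrm{Ry}$ is one common leading parametrization, not a result taken from this paper.

```python
RY = 13.605693  # Rydberg energy in eV

def rydberg(n, delta0):
    # Rydberg formula with an energy-independent quantum defect
    return -RY / (n - delta0) ** 2

def rydberg_ritz(n, delta0, delta2, iters=50):
    # Rydberg-Ritz: the defect itself depends on energy,
    # delta(E) = delta0 + delta2 * E / RY; solve the implicit
    # relation by fixed-point iteration.
    E = rydberg(n, delta0)
    for _ in range(iters):
        E = -RY / (n - (delta0 + delta2 * E / RY)) ** 2
    return E

# delta0 = 0.1 and delta2 = 0.05 are hypothetical defect parameters.
for n in range(2, 5):
    print(n, rydberg(n, 0.1), rydberg_ritz(n, 0.1, 0.05))
```

Because the energy-dependent correction is small, the fixed-point iteration converges rapidly; the two formulas differ at the level of the energy dependence of the defect, which is exactly what the method of Beck (2016) fails to capture.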
The motivation of Jacobs (2016) was to extract a useful effective theory that
would apply for _all_ angular momentum states. In that work a finite region of
space encompassing the origin was omitted from analysis, thereby naively
obviating the need to discard the non-trivial $\ell\neq 0$ solutions. The
radius of what is referred to as the _boundary of ignorance_ , $r_{b}$ was
interpreted as a kind of short-distance cutoff on which the boundary
conditions effectively capture the omitted physics. In order to enforce
unitarity conservation, however, the limit $r_{b}\to 0$ had to be taken at the
end of any calculation; it follows that the analysis of Jacobs (2016) only
reproduces the results of Beck (2016). Thereafter, results in Burgess et al.
(2017a, b) demonstrated similar results using an effective field theory of
point particles.
In Jacobs (2019), it was finally realized that violation of unitarity is not
necessarily a problem if the violation is only temporary because the norm of a
state function _averages_ to unity. Therefore it is not necessary to take the
limit $r_{b}\to 0$ to arrive at a useful theory. From another point of view,
one should not need to have a complete description of a system to arbitrarily
short distances from the origin, just as the framework of EFT does not require
a complete understanding of a theory to arbitrarily large momentum.
Recall that for a one-dimensional system described by a scalar potential and a
boundary at the coordinate $x=x_{b}$, all eigenmodes must obey the same
(Robin) boundary condition in a standard analysis, i.e.
$\psi_{i}(x_{b})+{\cal Z}\psi_{i}^{\prime}(x_{b})=0\,,$ (1)
where the modes are labelled by generic index $i$, and ${\cal Z}$ is a real
number; for example, ${\cal Z}=0$ corresponds to the Dirichlet condition. The
central equation of Jacobs (2019) is what results from promoting the boundary
condition to be mode-dependent, i.e. ${\cal Z}\to{\cal Z}_{i}$, or
$\psi_{i}(x_{b})+{\cal Z}_{i}\psi_{i}^{\prime}(x_{b})=0\,.$ (2)
Because the boundary condition varies for different eigenmodes, this theory is
not _instantaneously_ hermitian or unitary; however, those standard conditions
do hold when averaged over sufficiently long times. (Unitarity violation
appears to be a consequence of only considering the domain $x\geq x_{b}$,
whereas a particle can in reality propagate in and out of the omitted region,
$0\leq x<x_{b}$.)
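To make the role of a Robin condition like (1) concrete, here is a small self-contained numerical illustration; this toy example and its parameter values are our own, not taken from the paper. A free particle on $[x_{b},L]$ with $x_{b}=0$, a Robin condition $\psi+{\cal Z}\psi^{\prime}=0$ at $x_{b}$, and a hard (Dirichlet) wall at $L$ has $\psi(x)=\sin(k(L-x))$, and the Robin condition yields the transcendental equation $\tan(kL)={\cal Z}k$:

```python
import math

L, Z = 1.0, 0.3  # arbitrary illustrative choices

def f(k):
    # Robin condition at x = 0: psi(0) + Z * psi'(0) = sin(kL) - Z k cos(kL)
    return math.sin(k * L) - Z * k * math.cos(k * L)

def bisect(a, b, tol=1e-12):
    # simple bisection; assumes f(a) and f(b) have opposite signs
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# For Z = 0 (Dirichlet) the lowest root of this branch is k = pi / L;
# the Robin parameter shifts it upward, into (pi, 3*pi/2).
k1 = bisect(math.pi + 1e-9, 1.5 * math.pi - 1e-9)
print(k1)
```

The boundary parameter thus shifts each eigenvalue away from its Dirichlet value, which is the mechanism by which ${\cal Z}_{i}$ encodes short-distance physics mode by mode.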
In this article we significantly extend the analysis of Jacobs (2019),
demonstrating that this approach can be successfully applied in three
dimensions with coupling to the full electromagnetic gauge field. We devote
the bulk of our effort to analysis of hydrogenic atoms and arrive at the
theory of quantum defects, albeit using a different framework from earlier
analyses (see, e.g., a well-known review by Seaton Seaton (1983)). The
theoretical framing here is in the same spirit as that of effective field
theory; however, we do not appeal to a Lagrangian formalism. We start with the
Schrodinger equation, using a Hamiltonian valid at long distance; the
conditions of the wavefunction on the boundary of ignorance play a role
analogous to the effective action. We therefore consider this method to be a
demonstration of what can be called effective quantum mechanics.
In Section II we derive the three-dimensional version of the boundary
condition (2) that respects electromagnetic gauge invariance and discuss its
consequences. In Section III we analyze the bound states of the Coulomb
problem and derive the theory of quantum defects as a consequence of a low-
energy effective theory describing the broken $SO(3)\otimes SO(3)$ symmetry of
the Schrodinger-Coulomb problem. In Section IV we check the effective method
against synthetically-generated data for a UV-complete model of an extended
nucleus. In Section V we consider the successes and limitations of this non-
relativistic theory applied to real systems. In Section VI we consider
decaying states. In Section VII we conclude with a discussion of our results,
list possible applications, and mention some outstanding issues. Throughout
this article we use the natural unit convention $\hbar=c=1$.
## II Boundary Condition and Consequences
The dynamics of a point charge of mass $m$ and charge $Q$ coupled
electromagnetically is described by the Hamiltonian
$H=\frac{\left(\vec{p}-Q\vec{A}\right)^{2}}{2m}+Q\varphi\,.$ (3)
We expect that the boundary condition (2) can be promoted to a spherically
symmetric and gauge-invariant version, namely
$R_{i}(r_{b})+{\cal Z}_{i}(r_{b})\,D_{r}R_{i}(r_{b})=0\,,$ (4)
where
$D_{r}=\partial_{r}-iQA_{r}\,,$ (5)
and $A_{r}$ is the radial component of the vector potential, $\vec{A}$. It
would then follow that, under a local $U(1)$ transformation of the wave
function and the electromagnetic field characterized by the function $\Omega$,
$\displaystyle\Psi$ $\displaystyle=$ $\displaystyle e^{iQ\Omega}\Psi^{\prime}$
$\displaystyle\vec{A}$ $\displaystyle=$
$\displaystyle\vec{A}^{\prime}+\vec{\nabla}\Omega$ $\displaystyle\varphi$
$\displaystyle=$ $\displaystyle\varphi^{\prime}-\dot{\Omega}\,,$ (6)
equation (4) will be invariant.
We derive (4) by mandating conservation of probability associated with _a
single eigenmode_ where, for simpler notation, we note that $\Psi$ represents
such a mode:
$\frac{d}{dt}(\Psi,\Psi)=-\int dV\,\nabla\cdot\vec{J}=0\,,$ (7)
where the probability current density $\vec{J}$ following from the Hamiltonian
(3) is
$\vec{J}=\frac{i}{2m}\left[\left(\nabla\Psi^{*}\right)\Psi-\Psi^{*}\left(\nabla\Psi\right)+2iQ\vec{A}\left\lvert\Psi\right\rvert^{2}\right]\,.$
(8)
Given the spherically symmetric boundary at $r=r_{b}$, and separability of the
eigenmode as
$\Psi=e^{-i\omega t}R\left(r\right)Y_{\ell m}\left(\theta,\phi\right)\,,$ (9)
the divergence theorem may be used to demonstrate that
$\left(D_{r}R\right)^{*}R-R^{*}D_{r}R\bigg{|}_{r=r_{b}}=0\,,$ (10)
where $D_{r}$ is as defined in (5). Following Bonneau et al. (2001), one may
then introduce an arbitrary constant $w$ with units of length and define the
dimensionless complex quantities
$\displaystyle x$ $\displaystyle\equiv$ $\displaystyle R(r_{b})$ (11)
$\displaystyle y$ $\displaystyle\equiv$ $\displaystyle
wD_{r}R(r)|_{r=r_{b}}\,,$ (12)
so that equation (10) is then equivalent to
$\left\lvert x+iy\right\rvert^{2}-\left\lvert x-iy\right\rvert^{2}=0\,.$ (13)
The quantities $x+iy$ and $x-iy$ have equal magnitude and so differ only by
an arbitrary phase factor $e^{i\theta}$; it follows that
$R(r_{b})-\cot{\frac{\theta}{2}}wD_{r}R(r)\bigg{|}_{r=r_{b}}=0\,,$ (14)
which is boundary condition (4) once we make the identification
${\cal Z}=-\cot{\frac{\theta}{2}}w\,;$ (15)
again, we note that $\theta$, $w$ and, therefore, ${\cal Z}$ are unique to the
_specific eigenmode_ in question.
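The equivalence used between (10) and (13) can be spot-checked numerically; the following small check is our own and not part of the derivation. For any complex $x$ and $y$, $|x+iy|^{2}-|x-iy|^{2}=2i(y\bar{x}-x\bar{y})=-4\,\mathrm{Im}(\bar{x}y)$, so the two conditions vanish together:

```python
import random

random.seed(1)
for _ in range(1000):
    x = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    y = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    lhs = abs(x + 1j * y) ** 2 - abs(x - 1j * y) ** 2
    rhs = -4 * (x.conjugate() * y).imag
    assert abs(lhs - rhs) < 1e-12
print("identity holds on 1000 random samples")
```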
There are consequences of promoting the boundary condition to be mode-
dependent. For example, the Hamiltonian is not Hermitian, which is observed by
computing (note that the _outward_ normal to the boundary points in the inward
radial direction, hence the change of sign in the last line of (II)):
$\displaystyle\langle
H\Psi_{i},\Psi_{j}\rangle-\langle\Psi_{i},H\Psi_{j}\rangle$
$\displaystyle=-\frac{1}{2m}\int
dV\,\nabla\cdot\left[\left(\vec{D}\Psi_{i}\right)^{*}\Psi_{j}-\Psi_{i}^{*}\left(\vec{D}\Psi_{j}\right)\right]$
$\displaystyle=\frac{1}{2m}r_{b}^{2}\\!\int\\!d\Omega\left[\left(D_{r}\Psi_{i}\right)^{*}\Psi_{j}-\Psi_{i}^{*}\left(D_{r}\Psi_{j}\right)\right]_{r=r_{b}},$
(16)
which is not generally zero for two distinct eigenstates labeled by $i$ and
$j$. In particular, because of the orthogonality of the spherical harmonics,
this term can be nonzero only when states $i$ and $j$ have the same angular momentum
quantum numbers (this would also include the spin quantum number if it were
considered in this analysis).
If the angular quantum numbers are the same, e.g. $\ell_{i}=\ell_{j}$, etc.,
then
$\langle H\Psi_{i},\Psi_{j}\rangle-\langle\Psi_{i},H\Psi_{j}\rangle\\\
=\frac{1}{2m}r_{b}^{2}\left({\cal Z}_{i}-{\cal
Z}_{j}\right)\left[\left(D_{r}R_{i}\right)^{*}\left(D_{r}R_{j}\right)\right]_{r=r_{b}}e^{-i\left(\omega_{j}-\omega_{i}\right)t}\,,$
(17)
Equation (17) is gauge invariant and never zero when $i\neq j$, but does
average to zero over a period
$2\pi/\left\lvert\omega_{i}-\omega_{j}\right\rvert$; the same is true about
the inner product of two distinct eigenmodes. It may therefore be said that
hermiticity and orthogonality do not generally hold at each instant, but they
do in a time-averaged sense.
Unitarity is also temporarily violated. By construction, the norm of each
eigenmode is equal to unity for all time, but the same cannot be said for a
composite state. Following Jacobs (2019), consider the state
$\Upsilon=c_{i}\Psi_{i}+c_{j}\Psi_{j}\,,$ (18)
for two complex coefficients, $c_{i}$ and $c_{j}$. We assume the standard
normalization condition
$\left\lvert c_{i}\right\rvert^{2}+\left\lvert c_{j}\right\rvert^{2}=1\,,$
(19)
which will be justified below. The inner product of the composite state with
itself is therefore
$\langle\Upsilon,\Upsilon\rangle=1+c_{i}^{*}c_{j}\langle\Psi_{i},\Psi_{j}\rangle+c_{j}^{*}c_{i}\langle\Psi_{j},\Psi_{i}\rangle\,.$
(20)
The last two offending terms do not vanish because the eigenmodes are not
instantaneously orthogonal. Because the time derivative of the inner product
is
$\frac{d}{dt}\langle\Psi_{i},\Psi_{j}\rangle=i\langle
H\Psi_{i},\Psi_{j}\rangle+\text{cx. conj.}$ (21)
the time derivative of the norm of the composite state is
$\displaystyle\frac{d}{dt}\langle\Upsilon,\Upsilon\rangle$ $\displaystyle=$
$\displaystyle
c_{i}^{*}c_{j}\frac{d}{dt}\langle\Psi_{i},\Psi_{j}\rangle+\text{cx. conj.}$
(22) $\displaystyle=$ $\displaystyle-\left({\cal Z}_{i}-{\cal
Z}_{j}\right)\left\lvert\rho_{ij}\right\rvert\sin{\left[\left(\omega_{i}-\omega_{j}\right)t+\theta_{ij}\right]}\,,$
where
$\rho_{ij}=\frac{c_{i}^{*}c_{j}}{m}r_{b}^{2}\left[\left(D_{r}R_{i}\right)^{*}\left(D_{r}R_{j}\right)\right]\bigg{|}_{r=r_{b}}$
(23)
and
$\theta_{ij}=\arg{\rho_{ij}}\,.$ (24)
Apparently, the norm of the composite state is
$\langle\Upsilon,\Upsilon\rangle=1+\frac{\left({\cal Z}_{i}-{\cal
Z}_{j}\right)}{\omega_{i}-\omega_{j}}\left\lvert\rho_{ij}\right\rvert\cos{\left[\left(\omega_{i}-\omega_{j}\right)t+\theta_{ij}\right]}\,,$
(25)
which averages to unity over a period
$2\pi/\left\lvert\omega_{i}-\omega_{j}\right\rvert$.
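As a quick numerical check of this statement (a minimal sketch; the amplitude, frequency difference, and phase below are arbitrary illustrative values, not quantities computed from any particular pair of eigenmodes), the oscillating norm of equation (25) can be averaged over one beat period:

```python
import math

def time_averaged_norm(amp, delta_omega, theta, steps=1000):
    """Midpoint-rule average of N(t) = 1 + amp*cos(delta_omega*t + theta)
    over one beat period T = 2*pi/|delta_omega|; cf. equation (25)."""
    T = 2 * math.pi / abs(delta_omega)
    dt = T / steps
    total = sum(1 + amp * math.cos(delta_omega * (k + 0.5) * dt + theta)
                for k in range(steps)) * dt
    return total / T

# the cosine averages away over a full period, leaving unity
avg = time_averaged_norm(amp=0.3, delta_omega=2.7, theta=0.4)
```

The cosine term integrates to zero exactly over a full period, so the average reproduces the unit norm to machine precision.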
The above analysis suggests that the best way to normalize a composite state
is to demand
$\langle\Upsilon,\Upsilon\rangle_{T}\equiv 1\,,$ (26)
where the subscript indicates an averaging over a time, $T$, of appropriate
length. A similar condition is apparently obeyed for the orthogonality of
modes
$\langle\Psi_{i},\Psi_{j}\rangle_{T}=0\,,$ (27)
which justifies (19), as well as the hermiticity of the Hamiltonian,
$\displaystyle\langle
H\Psi_{i},\Psi_{j}\rangle_{T}=\langle\Psi_{i},H\Psi_{j}\rangle_{T}\,.$ (28)
Precisely what is considered _appropriately_ long depends on the physical
system being studied. Because we focus almost exclusively on Coulomb bound
states in the sections below, let us consider a composite state built from two
such eigenmodes with the same angular quantum numbers. Clearly, averaging over
a time longer than $2\pi/\left\lvert\omega_{i}-\omega_{j}\right\rvert$ is
sufficient according to the above analysis, and this is always larger than the
time scale associated with any omitted short-distance physics. $T$ must also
be much shorter than any processes not included in this analysis that occur
over relatively _long_ times, such as the spontaneous transition time between
the two states; this condition is always satisfied here, because transitions
between states with the same angular momentum are accompanied by the emission
of at least two photons, and such processes occur at a rate that does not
exceed $\sim\left\lvert\omega_{i}-\omega_{j}\right\rvert\alpha^{5}$ (see,
e.g., Fitzpatrick (2015)). Although this by no means constitutes a proof that this
method will work for all systems, it does suggest that its success as an
effective theory depends on a clear hierarchy of _time_ scales, in addition to
length scales.
## III Coulomb States
Here we consider an electron of charge $-e$ bound to a positive nucleus of
charge $Ze$, so that the _long-distance_ Hamiltonian is given by equation (3)
with $\vec{A}=0$ and scalar potential
$\varphi=\frac{Ze}{r}\,.$ (29)
The time-independent radial Schrodinger equation is
$-\frac{1}{r^{2}}\partial_{r}\left(r^{2}R^{\prime}(r)\right)+\left(\frac{\ell\left(\ell+1\right)}{r^{2}}-\frac{2\kappa}{r}+q^{2}\right)R(r)=0$
(30)
where
$\kappa=Zm\alpha\,,$ (31)
$\alpha\simeq 1/137$ is the fine structure constant, and the energy
eigenvalues are defined by
$E=-\frac{q^{2}}{2m}\,.$ (32)
As described in Jacobs (2016), there is one independent solution to this
differential equation that is guaranteed to be square-integrable in the
$r\to\infty$ limit (one of us, D.M.J., would like to acknowledge Harsh Mathur
for explaining the importance of this particular, decaying, linear combination
of the two solutions to the confluent hypergeometric equation). We write this
as, up to a normalization constant,
$R(r)=e^{-qr}\left(2qr\right)^{\ell}\,U\left(1+\ell-\frac{\kappa}{q}\Big{|}2(\ell+1)\Big{|}2qr\right)\,,$
(33)
where $U$ is Tricomi’s confluent hypergeometric function.
Quantization of the energies comes from application of the boundary condition
(4). It must be obeyed in such a way that any observables, such as the energy
or, equivalently, $q$ are independent of the location of the boundary. Because
this is a long-distance effective theory, we expect the spatial scale of the
wavefunction to be much larger than the boundary radius, or $qr_{b}\ll 1$.
This means that, in principle, equation (4) could be expanded to arbitrary
order in $qr_{b}$ and would then provide an arbitrarily precise analysis. Any
function of $q$ could then be solved for; in particular, we find it best to
solve for $\psi\left(1+\ell-\frac{\kappa}{q}\right)$, where
$\psi(z)\equiv\frac{\Gamma^{\prime}(z)}{\Gamma(z)}$ (34)
is the digamma function.
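For reference, the digamma function is straightforward to evaluate numerically. The sketch below (pure Python, purely illustrative and not code used in this work) combines the recurrence $\psi(x+1)=\psi(x)+1/x$ with the standard asymptotic expansion, and can be checked against the special value $\psi(1)=-\gamma$:

```python
import math

def digamma(x):
    """Digamma for real x > 0: lift the argument with
    psi(x) = psi(x+1) - 1/x, then apply the asymptotic series
    psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + 1/(120x^4) - 1/(252x^6)."""
    acc = 0.0
    while x < 15.0:
        acc -= 1.0 / x   # recurrence step
        x += 1.0
    inv2 = 1.0 / (x * x)
    tail = math.log(x) - 0.5 / x - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))
    return acc + tail

EULER_GAMMA = 0.5772156649015329
```

The same recurrence is the identity used below to organize the series solution of the boundary condition.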
The digamma function is readily seen to appear in the series form of the
Tricomi function, given in Appendix A. By using the digamma identity,
$\psi(1+z)=\psi(z)+\frac{1}{z}\,,$ (35)
one could, in principle, solve equation (4) to write
$\psi\left(1+\ell-\frac{\kappa}{q}\right)=F_{\ell}\left[{\cal
Z}(r_{b}),r_{b}\right]\,,$ (36)
where $F_{\ell}$ is a function of both the boundary function ${\cal Z}(r_{b})$
and $r_{b}$, and accurate up to a particular order in the expansion parameter,
$qr_{b}$. One could differentiate this equation with respect to $r_{b}$ and
demand that it be equal to zero, resulting in a first order differential
equation for the boundary function ${\cal Z}(r_{b})$. It could then be said
that ${\cal Z}(r_{b})$ runs, in the sense of a renormalization group, with the
boundary radius in a way that ensures that the eigenvalues do not depend on
where the boundary is; this was first described in Jacobs (2016), but a
similar procedure may be found in Burgess et al. (2017a).
However, at this point we simply integrate with respect to $r_{b}$ to
implicitly solve for ${\cal Z}(r_{b})$, i.e.
$F_{\ell}\left[{\cal Z}(r_{b}),r_{b}\right]=\chi_{\ell}(q)\,,$ (37)
where $\chi_{\ell}(q)$ is an arbitrary integration function that must only be
constant with respect to $r_{b}$. As in Jacobs (2019), we posit that
$\chi_{\ell}(q)$ captures the unspecified interactions behind the boundary,
$r<r_{b}$. This establishes the result
$\psi\left(1+\ell-\frac{\kappa}{q}\right)=\chi_{\ell}(q)\,.$ (38)
We follow this apparently tautological procedure because, in practice, solving
explicitly for ${\cal Z}(r_{b})$ is cumbersome even at lowest order in the
$qr_{b}$ expansion and for $\ell=0$; it becomes increasingly challenging at
higher order and at higher values of $\ell$. Instead, given the series form of
the Tricomi function (see Appendix A) and boundary condition (4) that must be
satisfied for arbitrary $r_{b}$, we make the generic ansatz for the boundary
function (an arbitrary length scale of $\left(2\kappa\right)^{-1}$ was put
into the argument of the logarithm; any other choice can be made with a
corresponding redefinition of the $c_{j}$'s),
${\cal Z}(r_{b})=\kappa^{-1}\sum_{j=1}^{\infty}\left(c_{j}+d_{j}\ln{2\kappa
r_{b}}\right)\left(2\kappa r_{b}\right)^{j}\,.$ (39)
By solving (4) for each term proportional to $r_{b}^{j}$ and
$\ln{r_{b}}\,r_{b}^{j}$ we can determine the dimensionless coefficients
$c_{j}$ and $d_{j}$ uniquely, up to the arbitrary integration function
$\chi_{\ell}(q)$ which must appear in any equation containing
$\psi\left(1+\ell-\frac{\kappa}{q}\right)$. It may be verified that
$\chi_{\ell}(q)$ only appears in the $c_{j}$ and does not first appear in the
series until $c_{2\ell+2}$; it then appears in all subsequent $c_{j}$ which
also may be understood through use of the digamma identity (35).
Below we will not explicitly refer to $\chi_{\ell}(q)$, but its presence is
implied in any discussion of $c_{2\ell+2}$, which we simply refer to as the
integration function.
We do not, at present, have an analysis valid for arbitrary $\ell$. However,
we have checked that the following procedure works at least up to $\ell=3$; it
therefore seems implausible that it would not work to arbitrarily high $\ell$.
We explicitly show the procedure for $\ell=0$ and $\ell=1$ below and the
$\ell=2$ analysis may be found in Appendix B. The summary is that at each
$\ell$ we may write the solutions as deviations from their canonical form as
$q=\frac{\kappa}{n-\delta_{\ell}}$ (40)
where $n$ is an integer and $\delta_{\ell}$ is called the _quantum defect_
(see, e.g., Seaton (1983)). For each $\ell$-state we have considered it is
possible to write the defect in the form
$\delta_{\ell}=\delta_{\ell(0)}+\lambda_{\ell(1)}\frac{E}{\Lambda}+\lambda_{\ell(2)}\left(\frac{E}{\Lambda}\right)^{2}+\dots\,,$
(41)
in other words, a low energy expansion in $E/\Lambda$, where $\Lambda$ is a
high energy (UV) scale.
The connection between non-trivial boundary conditions and the quantum defect
ansatz of (40) was first made in Beck (2016). The ansatz for $q$, equation
(40), is a deviation from the canonical solutions,
$q=\frac{\kappa}{n}\,,$ (42)
and is motivated by two distinct considerations. The first reason is obvious:
from an experimental point of view, hydrogenic atoms and highly-excited
(Rydberg) states of large atoms are known to display spectra that are largely
in agreement with (42) – this was, of course, one of the earliest successes of
quantum mechanics. This canonical case apparently corresponds to the limit
$\chi_{\ell}\to\pm\infty$, suggesting that it or, equivalently, the
$c_{2\ell+2}$ will actually take on very large (but finite) values when this
method is applied to real systems.
The second reason to use the canonical solutions as a point of departure is
that they are special from the theoretical point of view; they are the unique
solutions for which there exists an $n^{2}$-fold degeneracy at each energy
level, $n$. This can be traced to the presence of a “hidden” $SO(3)\otimes SO(3)$
symmetry, the result of a conserved Runge-Lenz vector, in addition to angular
momentum (see, e.g., Weinberg (2012)). As deviations are made from the
canonical solution (42), therefore, one could say that the $SO(3)\otimes
SO(3)$ symmetry is broken to the usual $SO(3)$ symmetry associated with
3-dimensional rotations Beck (2016). Although the Runge-Lenz vector operator
continues to be conserved, the $\ell$-dependent boundary conditions mean that
it acts on a different domain than that of the Hamiltonian, making it an
unphysical operator (see, e.g., Al-Hashimi and Wiese (2008) for a discussion
of an analogous problem on a conical space in two dimensions).
### III.1 Effective description of $\ell=0$ bound states
For the $\ell=0$ solutions, the term-by-term consideration of the boundary
condition (4) with the ansatz of (39) yields
$c_{2}=-\gamma-\frac{q}{4\kappa}-\frac{1}{2}\left[\ln{\frac{q}{\kappa}}+\psi\left(1-\frac{\kappa}{q}\right)\right]\,,$
(43)
where $\gamma$ is the Euler-Mascheroni constant. Performing an asymptotic
expansion of the digamma function yields (although this series does not
converge, any truncation will be increasingly accurate as $\left\lvert
E\right\rvert$ decreases)
$c_{2}=-\frac{\pi}{2}\cot{\pi\frac{\kappa}{q}}-\gamma-\frac{1}{24}\frac{E}{E_{\text{Ry}}}-\frac{1}{240}\left(\frac{E}{E_{\text{Ry}}}\right)^{2}+{\cal O}\left(\frac{E}{E_{\text{Ry}}}\right)^{3}\,,$ (44)
where
$E_{\text{Ry}}\equiv\frac{\kappa^{2}}{2m}\,.$ (45)
We make the ansatz
$q=\frac{\kappa}{n-\delta}$ (46)
where, as argued in Jacobs (2019), it should always be possible to define
$\left\lvert\delta\right\rvert\leq 1/2$ (in Jacobs (2019) the definition
$n-\delta\equiv\tilde{n}-\tilde{\delta}$ was made, where $\tilde{n}$ is the
closest integer to $n-\delta$, so that
$\left\lvert\tilde{\delta}\right\rvert<1/2$ was guaranteed; in the interest of
clarity we do not adopt that notation and instead mandate
$\left\lvert\delta\right\rvert<1/2$). It appears that this is the only
departure from the original quantum defect model, wherein there is no such
restriction on the size of the defect (see, e.g., Gallagher (1994)). We expand
equation (44) in small $\delta$ and find that the defect can be solved for
implicitly as
$\delta=\frac{\left(1-\frac{\pi^{2}}{3}\delta^{2}-\frac{\pi^{4}}{45}\delta^{4}+{\cal
O}\left(\delta^{6}\right)\right)}{2c_{2}+2\gamma+\frac{1}{12}\frac{E}{E_{\text{Ry}}}+\frac{1}{120}\left(\frac{E}{E_{\text{Ry}}}\right)^{2}+{\cal
O}\left(\frac{E}{E_{\text{Ry}}}\right)^{3}}\,.$ (47)
We do not know what functional form the integration function – or $c_{2}$ –
should have, but two comments are warranted. Firstly, deviations from the
canonical Coulomb spectrum are assumed here to be the result of short-
ranged/high-energy physics _not included explicitly_ in the Coulomb potential,
and therefore we expect those deviations not to depend explicitly on the ratio
$E/E_{\text{Ry}}$. Secondly, a series form for $c_{2}$ as an expansion in $E$
over some high energy scale is arguably the simplest guess, and is also
consistent with the well-known and successful approach taken when writing down
an effective action in the context of effective field theory. Not knowing _a
priori_ what the coefficients of this expansion should be, we parametrize the
denominator of equation (47) to be in the series form
$A_{0}+A_{1}\frac{E}{\Lambda}+A_{2}\left(\frac{E}{\Lambda}\right)^{2}+\dots\,,$
(48)
where $\Lambda$ is a high energy scale and, in the parlance of field theory,
we call the $A_{i}$ _renormalized_ expansion coefficients. Equivalently, the
integration function could apparently be written
$c_{2}=B_{0}+B_{1}\frac{E}{\Lambda}+B_{2}\frac{E^{2}}{\Lambda^{2}}+\dots\,,$
(49)
where the _bare_ expansion coefficients, $B_{i}$ are related to their
renormalized counterparts by
$\displaystyle B_{0}$ $\displaystyle=$ $\displaystyle\frac{A_{0}}{2}-\gamma$
$\displaystyle B_{1}$ $\displaystyle=$
$\displaystyle\frac{A_{1}}{2}-\frac{1}{24}\frac{\Lambda}{E_{\text{Ry}}}$
$\displaystyle B_{2}$ $\displaystyle=$
$\displaystyle\frac{A_{2}}{2}-\frac{1}{240}\frac{\Lambda^{2}}{E_{\text{Ry}}^{2}}\,,$
(50)
and so on. Summarizing, we have
$\delta=\frac{\left(1-\frac{\pi^{2}}{3}\delta^{2}-\frac{\pi^{4}}{45}\delta^{4}+{\cal
O}\left(\delta^{6}\right)\right)}{A_{0}+A_{1}\frac{E}{\Lambda}+A_{2}\left(\frac{E}{\Lambda}\right)^{2}+\dots}\,,$
(51)
which can be iteratively solved for $\delta$. Without any loss of generality,
we therefore write
$\delta=\delta_{0}+\lambda_{1}\frac{E}{\Lambda}+\lambda_{2}\left(\frac{E}{\Lambda}\right)^{2}+\dots\,,$
(52)
where $\delta_{0}$ and the $\lambda_{i}$ are dimensionless coefficients. One
could speculate that, because $\delta\to 0$ as the canonical solutions are
recovered, $\delta_{0}$ is proportional to $E_{\text{Ry}}/\Lambda$, possibly
raised to a positive power. We demonstrate below that this is indeed the case,
at least when applied to the hydrogen atom.
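The iterative solution of equation (51) is simple enough to sketch numerically. In the fragment below (illustrative only; the values chosen for $A_{0}$, $A_{1}$, $A_{2}$, $E_{\text{Ry}}$, and $\Lambda$ are arbitrary and not fit to any physical system), the energy is kept self-consistent with the defect through $E=-E_{\text{Ry}}/(n-\delta)^{2}$:

```python
import math

def solve_defect(n, A, e_ry=1.0, lam=100.0, tol=1e-14, max_iter=200):
    """Fixed-point iteration of the truncated equation (51):
    delta = (1 - (pi^2/3) d^2 - (pi^4/45) d^4) / (A0 + A1 E/L + A2 (E/L)^2),
    with E = -e_ry/(n - delta)^2 recomputed on each pass."""
    a0, a1, a2 = A
    delta = 0.0
    for _ in range(max_iter):
        e_over_lam = -e_ry / (n - delta) ** 2 / lam
        numer = 1.0 - (math.pi ** 2 / 3) * delta ** 2 - (math.pi ** 4 / 45) * delta ** 4
        new = numer / (a0 + a1 * e_over_lam + a2 * e_over_lam ** 2)
        if abs(new - delta) < tol:
            return new
        delta = new
    raise RuntimeError("fixed-point iteration did not converge")

delta = solve_defect(n=2, A=(50.0, 1.0, 0.5))
```

Because the canonical spectrum corresponds to very large $A_{0}$, starting the iteration at $\delta=0$ converges in a handful of steps.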
### III.2 Effective description of $\ell=1$ bound states
Following the procedure used in the previous section, for $\ell=1$ we discover
$c_{4}=\frac{9-16\gamma}{256}-\frac{3}{64}\frac{q}{\kappa}-\frac{\left(1-\gamma\right)}{16}\frac{q^{2}}{\kappa^{2}}+\frac{1}{64}\frac{q^{3}}{\kappa^{3}}\\\
+\frac{q^{2}-\kappa^{2}}{32\kappa^{2}}\left[\ln{\frac{q}{\kappa}}+\psi\left(2-\frac{\kappa}{q}\right)\right]\,.$
(53)
After expanding the digamma function in small $q/\kappa$ and writing this
expression in terms of energies we find
$\delta=\frac{\left(1+\frac{E}{E_{\text{Ry}}}\right)\left(1-\frac{\pi^{2}}{3}\delta^{2}-\frac{\pi^{4}}{45}\delta^{4}+{\cal
O}\left(\delta^{6}\right)\right)}{32c_{4}\\!-\\!\frac{9}{8}\\!+\\!2\gamma\\!-\\!\frac{11-24\gamma}{12}\frac{E}{E_{\text{Ry}}}\\!+\\!\frac{11}{120}\left(\frac{E}{E_{\text{Ry}}}\right)^{2}\\!+\\!{\cal
O}\left(\frac{E}{E_{\text{Ry}}}\right)^{3}}\,.$ (54)
Here a convenient parametrization of the denominator in equation (54) is the
series form
$\left(1+\frac{E}{E_{\text{Ry}}}\right)\left(A_{0}+A_{1}\frac{E}{\Lambda}+A_{2}\left(\frac{E}{\Lambda}\right)^{2}+\dots\right)\,,$
(55)
or, equivalently,
$c_{4}=B_{0}+B_{1}\frac{E}{\Lambda}+B_{2}\frac{E^{2}}{\Lambda^{2}}+\dots$ (56)
where
$\displaystyle B_{0}$ $\displaystyle=$
$\displaystyle\frac{A_{0}}{32}+\frac{9}{256}-\frac{\gamma}{16}$ $\displaystyle
B_{1}$ $\displaystyle=$
$\displaystyle\frac{A_{1}}{32}+\left(\frac{11}{384}-\frac{\gamma}{16}\right)\frac{\Lambda}{E_{\text{Ry}}}$
$\displaystyle B_{2}$ $\displaystyle=$
$\displaystyle\frac{A_{2}}{32}-\frac{11}{3840}\left(\frac{\Lambda}{E_{\text{Ry}}}\right)^{2}\,.$
(57)
The result is the same as that of the $\ell=0$ case, namely $\delta$ may be
put in a form identical to equations (51) and (52). The $\ell=2$ analysis
follows similarly and may be found in Appendix B.
### III.3 Brief comments about scattering
Because of the apparent equivalence between this effective approach and that
of quantum defect theory we do not dwell on the analysis of scattering states.
We simply note that, whereas in bound state calculations the definition
$q^{2}=-2mE$ is made, for scattering one defines the wave number $k$ by
$k^{2}=2mE\,,$ (58)
where the energy, $E>0$. This suggests an analytic continuation of the
integration function $\chi_{\ell}(q^{2})$ in its argument, $q^{2}\to-k^{2}$;
in other words, an analytic continuation of the defects $\delta_{\ell}(E)$ from
$E<0$ to $E>0$. This is precisely what is known to occur within quantum defect
theory and we direct the interested reader toward the relevant literature
(see, e.g., Seaton (1983) and references therein).
## IV Fits to synthetic data
Here we consider the long-range Coulomb potential modified at short distance
with a specific UV-completion, namely one in which there is a constant
“nuclear” charge density. The scalar potential is therefore
$\varphi=\begin{cases}\frac{Zer^{2}}{R_{\text{nuc}}^{3}},~{}~{}~{}~{}~{}&(0\leq
r\leq R_{\text{nuc}})\\\
\frac{Ze}{r},~{}~{}~{}~{}~{}~{}~{}&(r>R_{\text{nuc}})\,.\end{cases}$ (59)
The time-independent radial Schrodinger equation in the nuclear interior is
$-\frac{1}{r^{2}}\partial_{r}\left(r^{2}R^{\prime}(r)\right)+\left(\frac{\ell\left(\ell+1\right)}{r^{2}}-\frac{4r^{2}}{b^{4}}+q^{2}\right)R(r)=0$
(60)
where
$b^{4}\equiv\frac{2R_{\text{nuc}}^{3}}{Zm\alpha}\,.$ (61)
Imposing regularity at the origin, the solution to (60) may be written, up to
a normalization constant, as
$R(r)=e^{-i\left(\frac{r}{b}\right)^{2}}\left(\frac{r}{b}\right)^{\ell}\\\
\times
M\left(\frac{2\ell+3}{4}-\frac{i}{8}\left(qb\right)^{2}\Big{|}\frac{2\ell+3}{2}\Big{|}2i\left(\frac{r}{b}\right)^{2}\right)\,,$
(62)
where $M$ is Kummer’s hypergeometric function.
It is important to separately consider two different types of hydrogenic
systems, namely those in which deviations from the Coulomb potential occur at
radii that are smaller and larger than the Bohr radius. For this reason we
consider two examples in which the nuclear radius, $R_{\text{nuc}}$, satisfies
either $\kappa R_{\text{nuc}}<1$ or $\kappa R_{\text{nuc}}>1$. The
synthetic bound state energies of this UV-complete model are generated by
matching the interior and exterior solutions, equations (33) and (62), and
their first derivatives at $r=R_{\text{nuc}}$.
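The interior solution is easy to evaluate directly from its series. The sketch below (a minimal illustration with arbitrary sample parameters, $qb=1$ and $\ell=0$, and a fixed series truncation; it is not the code used to generate the figures) implements Kummer's $M$ and confirms that the combination in equation (62) is real, as a regular solution of a real differential equation must be:

```python
import cmath

def kummer_m(a, b, z, terms=80):
    """Kummer's confluent hypergeometric M(a; b; z) by direct series,
    M = sum_k (a)_k / (b)_k * z^k / k!, truncated at a fixed depth."""
    term = 1.0 + 0.0j
    total = 1.0 + 0.0j
    for k in range(terms):
        term *= (a + k) * z / ((b + k) * (k + 1))
        total += term
    return total

def interior_r(x, qb, ell=0):
    """Interior radial solution of equation (62), up to normalization,
    as a function of x = r/b."""
    a = (2 * ell + 3) / 4 - 1j * qb ** 2 / 8
    b = (2 * ell + 3) / 2
    return cmath.exp(-1j * x ** 2) * x ** ell * kummer_m(a, b, 2j * x ** 2)

vals = [interior_r(x, qb=1.0) for x in (0.3, 0.8, 1.5)]
```

The reality follows from the Kummer transformation $M(a|b|z)=e^{z}M(b-a|b|-z)$, since here $b-\bar{a}=a$; numerically the imaginary parts vanish to rounding error.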
To display the robustness of the effective theory we apply it to bound states
with a leading order (LO) fit using only $\delta_{\ell(0)}$, next-to-leading
order (NLO) by fitting for $\delta_{\ell(0)}$ and $\lambda_{\ell(1)}$, and
next-to-next-to leading order (NNLO) by fitting for $\delta_{\ell(0)}$,
$\lambda_{\ell(1)}$, and $\lambda_{\ell(2)}$. We assume both $m$ and $\alpha$
are perfectly known by some independent means, and utilize equations (32),
(40), and (41) to fit to the lowest energy levels, i.e. those with the largest
$\left\lvert E\right\rvert$, so as to make predictions for the higher energy
levels.
In Figures 1 and 2 we display the relative error in the predicted energy
levels for $\kappa R_{\text{nuc}}=0.31$ when $\ell=0$ and $\ell=1$,
respectively; in those figures we normalize the energies to the ground state,
$E_{0}$.
Figure 1: Relative errors in the $\ell=0$ binding energies, compared to the
UV-complete model, wherein $\kappa R_{\text{nuc}}=0.31$. Figure 2: Relative
errors in the $\ell=1$ binding energies, compared to the UV-complete model
wherein $\kappa R_{\text{nuc}}=0.31$, as in Figure 1.
For the large nuclear radius we choose $\kappa R_{\text{nuc}}=2.73$. The
method here is unsuccessful unless one fits the effective theory to the
synthetic data starting at somewhat higher energy levels. This is consistent
with the fact that this method is a _long-distance_ effective theory; thus one
should only expect it to provide accurate predictions when the characteristic
length scale of the wavefunction is large compared to the nuclear radius, or
$qR_{\text{nuc}}<1$. We choose not to fit the first 20 synthetic levels
(ground state and 19 excited states), beginning our fits near the canonical
eigenvalue of $q=\kappa/21$ and can therefore make predictions beginning near
$q=\kappa/22$. In Figures 3 and 4 we display our results for $\ell=0$ and
$\ell=1$, respectively, normalizing the energies to the 21st excited state,
$E_{21}$. Although the errors initially grow marginally as higher energy
levels are considered, eventually there is a turnover and the errors begin to
decrease. In any case, at any given energy level, the effective method gives
predictions that are always more accurate at higher order.
Figure 3: Relative errors in the $\ell=0$ binding energies, compared to the
UV-complete model, wherein $\kappa R_{\text{nuc}}=2.73$. Figure 4: Relative
errors in the $\ell=1$ binding energies, compared to the UV-complete model,
wherein $\kappa R_{\text{nuc}}=2.73$, as in Figure 3.
## V Application to physical systems
In so far as Rydberg atoms are concerned, the $\delta$ ansatz of (41) is
equivalent to the usual quantum defect method(s), wherein the modified
Rydberg-Ritz expression is often written
$\delta_{n\ell
j}=\delta_{0}+\frac{\delta_{2}}{\left(n-\delta_{0}\right)^{2}}+\frac{\delta_{4}}{\left(n-\delta_{0}\right)^{4}}+\dots\,,$
(63)
for some experimentally determined constants $\delta_{0},\delta_{2}$, etc.
Gallagher (1994). The only difference with our approach, as mentioned in
Section III.1, is that here we restrict the size of the defect to obey
$\left\lvert\delta_{0}\right\rvert<1$. For example, the measured transition
frequencies of the alkalis ${}^{23}\text{Na}$, ${}^{39}\text{K}$, and
${}^{85}\text{Rb}$ are fit with the original defect model to give a leading
order $s$-state defect, $\delta_{0}^{\text{QDT}}\simeq 1.380$, $2.180$ and
$3.131$, respectively Gallagher (1994). In the effective model these are
simply interpreted as $\delta_{0}=0.380$, $0.180$, and $0.131$. In other
words, the ground state of an alkali atom corresponds to $n=1$, rather than
the principal quantum number corresponding to its row in the periodic table.
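This relabeling is just the arithmetic identity $n-1.380=(n-1)-0.380$: shifting the integer label absorbs the integer part of the defect while leaving every energy unchanged. A minimal numerical illustration for the sodium $s$-series (using the hydrogen Rydberg constant and ignoring finite-mass and relativistic corrections; the integer assigned to the lowest state is purely a labeling convention):

```python
RY = 13.605693  # eV; hydrogen Rydberg energy, corrections ignored

def rydberg_levels(delta0, n_start, count):
    """E_n = -RY / (n - delta0)^2 for `count` consecutive levels."""
    return [-RY / (n - delta0) ** 2 for n in range(n_start, n_start + count)]

# traditional quantum-defect labeling vs. the integer-shifted one
qdt_levels = rydberg_levels(1.380, n_start=3, count=5)
eff_levels = rydberg_levels(0.380, n_start=2, count=5)
```

The two lists coincide term by term, so no measured transition frequency can distinguish the two conventions.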
Fitting the hydrogen spectrum, for example, one can achieve reasonably
accurate results; however, the predictions become only marginally more
accurate at higher order in the effective theory, and this is likely because
of relativistic effects that are not accounted for. Consider the effective
theory applied to a particular state of a hydrogenic atom in which the nucleus
has a charge $+e$. At leading order, $\delta_{\ell}=\delta_{\ell(0)}$ so that,
expanding in small $\delta_{\ell(0)}$, the energy levels are
$E=-\frac{m\alpha^{2}}{2n^{2}}-\frac{m\alpha^{2}}{n^{3}}\delta_{\ell(0)}-\frac{3m\alpha^{2}}{2n^{4}}\delta_{\ell(0)}^{2}+\dots$
(64)
The first term corresponds to the canonical eigenvalues, whereas the second
term is proportional to the corrections that are usually obtained using
perturbation theory; in particular, short-ranged corrections to the Coulomb
potential proportional to $1/r^{2}$, $1/r^{3}$, etc., as well as a delta-
function centered about $r=0$ give corrections proportional to $n^{-3}$ (see,
e.g., Drake and Swainson (1990)). Let us call those potential corrections
$U_{\text{rel}}$. Perturbation theory is used to correct the canonical energy
levels by an amount
$\left<\Psi\right|U_{\text{rel}}\left|\Psi\right>=-\frac{m\alpha^{4}}{\left(2j+1\right)n^{3}}\,,$
(65)
where $j=\ell\pm 1/2$ Berestetskii et al. (1982). Matching with the effective
method indicates that
$\delta_{\ell(0)}={\cal O}(\alpha^{2})\,,$ (66)
or $\delta_{\ell(0)}\propto E_{\text{Ry}}/\Lambda$, which is true when we set
the high energy scale $\Lambda=m$, the mass of the electron.
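The expansion in equation (64) is easily verified numerically: with $m\alpha^{2}$ scaled to unity, the truncation error of the series should shrink like $\delta_{\ell(0)}^{3}$. A small self-contained check (illustrative values of $n$ and the defect; nothing here is fit to hydrogen):

```python
def exact_energy(n, d):
    """E = -1/(2 (n - d)^2) in units where m*alpha^2 = 1."""
    return -1.0 / (2.0 * (n - d) ** 2)

def series_energy(n, d):
    """Equation (64) truncated at second order in the defect d."""
    return -1.0 / (2.0 * n ** 2) - d / n ** 3 - 1.5 * d ** 2 / n ** 4

n = 2
err_small = abs(exact_energy(n, 1e-3) - series_energy(n, 1e-3))
err_smaller = abs(exact_energy(n, 1e-4) - series_energy(n, 1e-4))
# shrinking the defect by 10 shrinks the truncation error by ~1000 (cubic)
```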
At the next-to-leading order, we apparently have
$\delta_{\ell}=\delta_{\ell(0)}-\lambda_{\ell(1)}\frac{\alpha^{2}}{2n^{2}}\,.$
(67)
With an additional parameter there is, of course, an improved fit to the
hydrogen spectrum; however, it is only a marginal improvement. Although
equation (64) is modified, the effect of the parameter $\lambda_{\ell(1)}$
only appears at order $n^{-5}$, whereas there is already trouble with the
order $n^{-4}$ term. This is because there is a remaining fine-structure
effect that is not captured by $U_{\text{rel}}$ but instead comes from the
relativistic correction to the kinetic energy. That kinetic correction amounts
to
$+\frac{3m\alpha^{4}}{8n^{4}}$ (68)
for all $\ell$ states of Hydrogen, which cannot be accounted for
simultaneously with the order $n^{-3}$ correction in equation (65). In any
case it would not be appropriate; the relativistic correction to the kinetic
energy is not a short-ranged effect that should be hidden behind the boundary
of ignorance.
Although this and the preceding sections demonstrate the utility of the non-
relativistic effective quantum mechanics, a relativistically corrected version
of the theory is clearly warranted. Those results will appear in forthcoming
work Jacobs (2021).
## VI Decays due to UV effects
Here we consider if and how the analysis must be modified if the eigenmode in
question decays at a rate, $\Gamma$, via some interaction(s) near the origin
(this effective method can describe decays of states due to short-ranged
effects, such as annihilation; the analysis presented here does not describe,
e.g., transitions from one state to another). In other words,
normalizing the state at $t=0$,
$\langle\Psi,\Psi\rangle=e^{-\Gamma t}\,,$ (69)
where, for simpler notation, we note that $\Psi$ represents a single time-
dependent eigenmode. We assume the eigenmodes may be written in the variable-
separated form as
$\Psi=e^{-i\omega t-\frac{\Gamma}{2}t}R\left(r\right)Y_{\ell
m}\left(\theta,\phi\right)\,,$ (70)
where $\omega$ is real and any normalization constant is absorbed into $R(r)$.
In the case of a decaying state we must modify equation (7) to
$\frac{d}{dt}\langle\Psi,\Psi\rangle=-\int dV\,\nabla\cdot\vec{J}=-\Gamma
e^{-\Gamma t}\,.$ (71)
The probability current density is still given by equation (8), but here the
application of the divergence theorem results in a modification to equation
(10), namely
$\left(D_{r}R\right)^{*}R-R^{*}D_{r}R\bigg{|}_{r=r_{b}}=\frac{2im}{r_{b}^{2}}\Gamma\,.$
(72)
Multiplying this equation by $w$, an arbitrary constant with units of length,
and making the same definitions for $x$ and $y$ as in equation (11), it may be
verified that equation (72) is equivalent to
$\left\lvert x+iy\left(1-\frac{mw\Gamma}{r_{b}^{2}\left\lvert
y\right\rvert^{2}}\right)\right\rvert^{2}=\left\lvert
x-iy\left(1+\frac{mw\Gamma}{r_{b}^{2}\left\lvert
y\right\rvert^{2}}\right)\right\rvert^{2}\,.$ (73)
Because the two quantities inside the moduli are again equal up to a phase
factor $e^{i\theta}$, one may write
$R-\left(\cot{\frac{\theta}{2}}w+i\frac{m\Gamma}{r_{b}^{2}\left\lvert
D_{r}R\right\rvert^{2}}\right)D_{r}R=0\,.$ (74)
It follows that the boundary condition is a complexified form of (4), namely
$R(r_{b})+\left({\cal Z}_{\text{re}}(r_{b})+i{\cal
Z}_{\text{im}}(r_{b})\right)D_{r}R(r_{b})=0\,,$ (75)
where
${\cal Z}_{\text{re}}(r_{b})=-\cot{\frac{\theta}{2}}w\,,$ (76)
and
${\cal Z}_{\text{im}}(r_{b})=-\frac{m\Gamma}{\left\lvert
r_{b}D_{r}R\right\rvert^{2}}\,.$ (77)
The presence of $D_{r}R\left(r_{b}\right)$ in the definition ${\cal
Z}_{\text{im}}(r_{b})$ may seem strange because it suggests a non-linearly
realized boundary condition; however, we remind the reader that this analysis
is meant to be approximate. We have shown that if the eigenmode is _exactly_
proportional to $e^{-\frac{\Gamma}{2}t}$, then it follows that equation (77)
must hold _exactly_. However, such a rigid expectation is inappropriate.
Clearly, this method is not capable of describing, e.g., deviations from a
pure exponential decay law, which are known to occur at very short times after
an unstable state has been established (see, e.g., Chiu et al. (1977)). The
method described here is intended as a long-distance – hence a long-time –
effective theory. We therefore suggest that equation (77) gives only a
qualitative relationship: ${\cal Z}_{\text{im}}(r_{b})$ is both proportional
to the decay rate of the system and bounded according to ${\cal
Z}_{\text{im}}(r_{b})\leq 0$. Beyond these qualitative features, we only
assume that ${\cal Z}_{\text{im}}(r_{b})$ has _some_ dependence on $r_{b}$
that can be determined in the analysis of a particular system.
Consider positronium, a system described at long distance by a Coulomb
potential with a reduced mass of $m/2$. The analysis from Section III follows
in a nearly identical fashion, but the energy eigenvalues and quantum defects
are complex, i.e.
$E=-\frac{q^{2}}{m}=\omega-i\frac{\Gamma}{2}\,,$ (78)
where
$q=\frac{m\alpha}{2\left(n-\delta\right)}\,,$ (79)
and, at lowest order,
$\delta=\delta_{0,\text{re}}+i\delta_{0,\text{im}}\,.$ (80)
We will further assume that $\delta_{0,\text{im}}\ll 1$ so that a perturbative
expansion in small $\delta$ is still possible and therefore the analysis of
Section III is equally valid; we confirm this below. From equations (78),
(79), and (80), it is apparent that the real part of the energy
$\omega=-\frac{m\left(Z\alpha\right)^{2}}{4n^{2}}-\frac{m\left(Z\alpha\right)^{2}}{2n^{3}}\delta_{0,\text{re}}+\dots\,,$
(81)
whereas the decay rate is given by
$\Gamma=\frac{m\left(Z\alpha\right)^{2}}{n^{3}}\delta_{0,\text{im}}+\dots\,,$
(82)
which displays the standard $n^{-3}$ dependence expected from equation (65).
At higher order, $\delta_{0,\text{im}}$ would affect $\omega$ as well;
however, we have already established in Section V that this analysis is
limited because it is missing relativistic corrections and therefore only
these lowest-order results are worth reporting here.
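The relation between the complex defect and the decay rate can be checked directly. In the fragment below (arbitrary illustrative values of $n$, $\delta_{0,\text{re}}$, and $\delta_{0,\text{im}}$; units chosen so that $m\alpha^{2}=1$ and $Z=1$), the exact $\Gamma=-2\,\mathrm{Im}\,E$ from equation (78) is compared with the leading-order expression (82):

```python
def positronium_energy(n, delta):
    """E = -q^2/m with q = m*alpha/(2(n - delta)); in units m*alpha^2 = 1
    this reads E = -1/(4 (n - delta)^2), cf. equations (78) and (79)."""
    return -1.0 / (4.0 * (n - delta) ** 2)

n = 2
delta = complex(1e-3, 1e-5)              # delta_re + i*delta_im, both small
E = positronium_energy(n, delta)
gamma_exact = -2.0 * E.imag              # from E = omega - i*Gamma/2
gamma_leading = delta.imag / n ** 3      # equation (82) with m*alpha^2 = 1
```

The two agree up to corrections of relative order $\delta_{0,\text{re}}/n$, as expected from the expansion of equation (78) in the small defect.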
Within Quantum Electrodynamics, the lowest order decay rate of positronium is
predicted to be (see, e.g., Cassidy (2018))
$\Gamma_{\text{QED}}=\begin{cases}\frac{m\alpha^{5}}{2n^{3}}~{}~{}~{}~{}~{}~{}&\text{(singlet)}\\\
\frac{4}{9\pi}\left(\pi^{2}-9\right)\frac{m\alpha^{6}}{2n^{3}}~{}~{}~{}~{}~{}~{}&\text{(triplet)}\end{cases}$
(83)
which means that matching to that UV-complete theory would yield
$\delta_{0,\text{im}}=\begin{cases}{\cal
O}(\alpha^{3})~{}~{}~{}~{}~{}~{}&\text{(singlet)}\\\ {\cal
O}(\alpha^{4})~{}~{}~{}~{}~{}~{}&\text{(triplet)}\,.\end{cases}$ (84)
## VII Discussion
We have shown how to construct a nonrelativistic effective quantum mechanics
in three dimensions for systems possessing spherical symmetry. The short-
distance cutoff length, $r_{b}$, is a conceptual and calculational crutch used
to derive our results and ultimately vanishes from any final result. The role
of the boundary function is that of a coupling constant; each mode “feels” a
different coupling constant that varies with energy. A high energy scale,
$\Lambda$, appears in the low-energy expansion of physical quantities, such as
bound-state energies. We focused primarily on the Coulomb interaction and have found
non-trivial results for _all_ angular momentum states, ultimately showing an
equivalence to quantum defect theory. We have also shown the method provide a
means of describing decays due to effects at short distance.
The most pressing question is how to apply this approach relativistically for application to high-precision spectroscopy of atoms and molecules. Rydberg atoms are of particular importance because they have potential applications in quantum computing and electromagnetic field sensing, for example Adams et al. (2019). There also appears to be a pertinent application to positronium, in particular because of a recently discovered discrepancy between a measured transition frequency in that system and the predictions from QED Gurung et al. (2020). These ideas presumably also have applications in the areas of condensed matter, particle physics, and possibly gravitation. Like the Coulomb interaction, black holes provide a $1/r$ potential at long distances and exhibit a kind of boundary, the event horizon, behind which information is obscured.
## Acknowledgements
Thanks are owed to Harsh Mathur, with whom many discussions were had during
early stages of this work. Thanks are also owed to the late Bryan Lynn for
raising the question about decaying systems and positronium, in particular.
One of us (MJ) would like to thank Hamilton College for funding during a
portion of this work and another of us (DMJ) would like to thank the Hamilton
College Physics Department for its hospitality during early stages of this
work.
## Appendix A Series form of the Tricomi function
The solutions to the Schrödinger-Coulomb problem involve solutions to the confluent hypergeometric equation
$yg^{\prime\prime}(y)+\left(B-y\right)g^{\prime}(y)-Ag(y)=0\,.$ (85)
A standard textbook analysis, e.g., Arfken and Weber (2005), involves a series ansatz from which one may show that there are two independent solutions
$M(A,B,y)=\sum_{n=0}^{\infty}\frac{1}{n!}\frac{A^{(n)}}{B^{(n)}}y^{n}\,,$ (86)
known as Kummer’s function, and
$M_{2}(A,B,y)=y^{1-B}M(A+1-B,2-B,y)\,.$ (87)
The Tricomi function, $U(A,B,y)$, is the special linear combination of the two that is guaranteed to decay as $y\to\infty$, usually defined as
$U(A,B,y)\equiv\frac{\Gamma{(1-B)}}{\Gamma{(A-B+1)}}M(A,B,y)+\frac{\Gamma(B-1)}{\Gamma(A)}M_{2}(A,B,y)\,;$ (88)
this along with, e.g., $M(A,B,y)$ may be chosen as a linearly independent set
of solutions as long as $B$ is not an integer greater than $1$.
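Since the series (86) converges for all $y$, Kummer's function is easy to evaluate directly. The sketch below (an illustration, not part of the paper) sums the series using the rising factorials $A^{(n)}$ and $B^{(n)}$, and checks it against the elementary identity $M(A,A,y)=e^{y}$; it assumes $B$ is not a non-positive integer, where the series breaks down.

```python
import math

def kummer_M(A, B, y, terms=60):
    """Kummer's confluent hypergeometric function M(A,B,y) from the
    series of Eq. (86), built from rising factorials A^(n), B^(n)."""
    total, rising_A, rising_B = 0.0, 1.0, 1.0
    for n in range(terms):
        total += rising_A / rising_B * y**n / math.factorial(n)
        rising_A *= A + n  # A^(n+1) = A^(n) * (A + n)
        rising_B *= B + n
    return total

# Sanity check: when A = B every ratio A^(n)/B^(n) is 1, so M(A,A,y) = e^y.
print(kummer_M(2.5, 2.5, 1.0))  # ~ e = 2.71828...
```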
In the Coulomb problem, however, $B=2+2\ell$, so care must be taken to
understand the series form of $U(A,B,y)$. One can, for example, let
$B=2+2\ell+\epsilon$, where $\epsilon$ is treated as perturbatively small; in
the end one can let $\epsilon\to 0$ and show that the Tricomi function may be
written exactly as
$U\left(A,2+2\ell,y\right)=\frac{1}{\Gamma\left(A\right)\Gamma\left(A-2\ell-1\right)}\left(-\sum_{n=0}^{2\ell}\frac{\left(-y\right)^{n-2\ell-1}}{n!}\Gamma\left(A-2\ell-1+n\right)\Gamma\left(2\ell+1-n\right)+\sum_{n=0}^{\infty}\frac{y^{n}}{n!}\frac{\Gamma(A+n)}{\Gamma(2\ell+2+n)}\left[\psi(A+n)-\psi(2\ell+2+n)-\psi(n+1)+\ln{y}\right]\right)\,.$ (89)
## Appendix B Effective description of $\ell=2$ bound states
Here we find
$c_{6}=\frac{59-72\gamma}{373\,248}-\frac{5}{20\,736}\frac{q}{\kappa}+\frac{\left(180\gamma-197\right)}{186\,624}\frac{q^{2}}{\kappa^{2}}+\frac{5}{6912}\frac{q^{3}}{\kappa^{3}}+\frac{\left(35-24\gamma\right)}{31\,104}\frac{q^{4}}{\kappa^{4}}-\frac{1}{5184}\frac{q^{5}}{\kappa^{5}}-\frac{\left(4q^{4}-5q^{2}\kappa^{2}+\kappa^{4}\right)}{10\,368\,\kappa^{4}}\left[\ln{\frac{q}{\kappa}}+\psi\left(3-\frac{\kappa}{q}\right)\right]\,.$ (90)
After expanding the digamma function in small $q/\kappa$ and writing this in
terms of energies we find
$c_{6}=-\frac{\pi}{10\,368}\left(1+\frac{5E}{E_{\text{Ry}}}+\frac{4E^{2}}{E_{\text{Ry}}^{2}}\right)\cot{\pi\frac{\kappa}{q}}+\frac{59-72\gamma}{373\,248}-\frac{360\gamma-283}{373\,248}\left(\frac{E}{E_{\text{Ry}}}\right)-\frac{629-960\gamma}{1\,244\,160}\left(\frac{E}{E_{\text{Ry}}}\right)^{2}+{\cal O}\left(\frac{E}{E_{\text{Ry}}}\right)^{3}\,.$ (91)
We make the defect ansatz in equation (46) to find
$\delta=\left(1+\frac{5E}{E_{\text{Ry}}}+\frac{4E^{2}}{E_{\text{Ry}}^{2}}\right)\left(1-\frac{\pi^{2}}{3}\delta^{2}-\frac{\pi^{4}}{45}\delta^{4}+{\cal O}\left(\delta^{6}\right)\right)\times\left[10\,368\,c_{6}-\frac{59}{36}+2\gamma-\left(\frac{283}{36}-10\gamma\right)\frac{E}{E_{\text{Ry}}}+\left(\frac{629}{120}-8\gamma\right)\left(\frac{E}{E_{\text{Ry}}}\right)^{2}+{\cal O}\left(\frac{E}{E_{\text{Ry}}}\right)^{3}\right]^{-1}\,.$ (92)
We may parametrize the denominator of equation (92) to be in the series form
$\left(1+\frac{5E}{E_{\text{Ry}}}+\frac{4E^{2}}{E_{\text{Ry}}^{2}}\right)\left(A_{0}+A_{1}\frac{E}{\Lambda}+A_{2}\left(\frac{E}{\Lambda}\right)^{2}+\dots\right)\,,$
(93)
or, equivalently,
$c_{6}=B_{0}+B_{1}\frac{E}{\Lambda}+B_{2}\frac{E^{2}}{\Lambda^{2}}+\dots$ (94)
where
$B_{0}=\frac{A_{0}}{10\,368}+\frac{59}{373\,248}-\frac{\gamma}{5184}\,,\quad B_{1}=\frac{A_{1}}{10\,368}+\left(\frac{283}{373\,248}-\frac{5\gamma}{5184}\right)\frac{\Lambda}{E_{\text{Ry}}}\,,\quad B_{2}=\frac{A_{2}}{10\,368}+\left(\frac{629}{1\,244\,160}-\frac{\gamma}{1296}\right)\left(\frac{\Lambda}{E_{\text{Ry}}}\right)^{2}\,.$ (95)
It follows that $\delta$ may be put in a form identical to equations (51) and
(52).
## References
* Jackiw (1995) R. Jackiw, _Diverse topics in theoretical and mathematical physics_ (World Scientific, 1995).
* Beck (2016) S. Beck, Ph.D. thesis, Case Western Reserve University (2016), URL http://rave.ohiolink.edu/etdc/view?acc_num=case1465577450.
* Gallagher (1994) T. F. Gallagher, _Rydberg Atoms_ , Cambridge Monographs on Atomic, Molecular and Chemical Physics (Cambridge University Press, 1994).
* Jacobs (2016) D. M. Jacobs, J. Phys. A49, 295203 (2016), eprint 1511.03954.
* Burgess et al. (2017a) C. P. Burgess, P. Hayman, M. Williams, and L. Zalavari, JHEP 04, 106 (2017a), eprint 1612.07313.
* Burgess et al. (2017b) C. P. Burgess, P. Hayman, M. Rummel, M. Williams, and L. Zalavari, JHEP 07, 072 (2017b), eprint 1612.07334.
* Jacobs (2019) D. M. Jacobs, Phys. Rev. A 100, 062122 (2019), eprint 1909.13407.
* Seaton (1983) M. J. Seaton, Reports on Progress in Physics 46, 167 (1983), URL https://doi.org/10.1088%2F0034-4885%2F46%2F2%2F002.
* Bonneau et al. (2001) G. Bonneau, J. Faraut, and G. Valent, Am.J.Phys. 69, 322 (2001), eprint quant-ph/0103153.
* Fitzpatrick (2015) R. Fitzpatrick, _Quantum Mechanics_ (World Scientific, 2015), URL https://www.worldscientific.com/doi/abs/10.1142/9645.
* Weinberg (2012) S. Weinberg, _Lectures on quantum mechanics_ (Cambridge University Press, 2012).
* Al-Hashimi and Wiese (2008) M. H. Al-Hashimi and U. J. Wiese, Annals of Physics 323, 82 (2008), eprint 0707.4379.
* Drake and Swainson (1990) G. W. F. Drake and R. A. Swainson, Phys. Rev. A 42, 1123 (1990), URL https://link.aps.org/doi/10.1103/PhysRevA.42.1123.
* Berestetskii et al. (1982) V. Berestetskii, E. Lifshitz, and L. Pitaevskii, _Quantum Electrodynamics: Volume 4_ , Course of theoretical physics (Elsevier Science, 1982), ISBN 9780750633710, URL https://books.google.com/books?id=URL5NKX8vbAC.
* Jacobs (2021) D. Jacobs, in preparation (2021).
* Chiu et al. (1977) C. Chiu, E. Sudarshan, and B. Misra, Phys. Rev. D 16, 520 (1977).
* Cassidy (2018) D. Cassidy, Eur. Phys. J. D 72, 53 (2018).
* Adams et al. (2019) C. S. Adams, J. D. Pritchard, and J. P. Shaffer, Journal of Physics B: Atomic, Molecular and Optical Physics 53, 012002 (2019), URL https://doi.org/10.1088/1361-6455/ab52ef.
* Gurung et al. (2020) L. Gurung, T. Babij, S. Hogan, and D. Cassidy, Phys. Rev. Lett. 125, 073002 (2020).
* Arfken and Weber (2005) G. B. Arfken and H. J. Weber, _Mathematical Methods for Physicists_ (Elsevier Academic Press, 2005).
# Statistical Characterization of Wireless MIMO Channels in Mode-Stirred
Enclosures
Mir Lodro, Steve Greedy, Sirio Belga Fedeli, Christopher Smartt, Ana
Vukovic, David Thomas and Gabriele Gradoni
Mir Lodro, Steve Greedy, Christopher Smartt, Ana Vukovic, David Thomas and Gabriele Gradoni are with the George Green Institute for Electromagnetic Research (GGIEMR), University of Nottingham, UK. Gabriele Gradoni is also with British Telecommunications and the University of Cambridge, UK. Sirio Belga Fedeli is with the Department of Mathematics, King’s College London, UK.
###### Abstract
We present the statistical characterization of a 2x2 Multiple-Input Multiple-
Output wireless link operated in a mode-stirred enclosure, with channel state
information available only at the receiver (agnostic transmitter). Our
wireless channel measurements are conducted in absence of line of sight and
varying the inter-element spacing between the two antenna elements in both the
transmit and receive array. The mode-stirred cavity is operated: i) at a low
number of stirrer positions to create statistical inhomogeneity; ii) at two
different loading conditions, empty and with absorbers, in order to mimic a
wide range of realistic equipment level enclosures. Our results show that two
parallel channels are obtained within the confined space at both the operating
conditions. The statistical characterization of the wireless channel is
presented in terms of coherence bandwidth, path loss, delay spread and Rician
factor, and wideband channel capacity. It is found that the severe multipath
fading supported by a highly reflecting environment creates unbalance between
the two Multiple-Input Multiple-Output channels, even in presence of
substantial losses. Furthermore, the channel capacity has a multi-modal
distribution whose average and variance scale monotonically with the number of
absorbers. Results are of interest in IoT devices, including wireless chip-to-
chip and device-to-device communications, operating in highly reflective
environments.
###### Index Terms:
SDR, USRP, MIMO, IoT, Mode-Stirred Enclosure, Channel Characterization.
## I Introduction
Statistical channel characterization and modeling is one of the foremost steps when Multiple-Input Multiple-Output (MIMO) wireless systems are to be deployed in complex and highly reflective radio propagation environments. In scenarios where small devices communicate with compact antenna arrays in confined environments, a situation occurring for example in chip-to-chip and device-to-device communications, the role of multipath fading becomes increasingly important and traditional channel models fail to describe the fading statistics. Measurement campaigns for statistical characterization and modeling of compact MIMO systems operating within confined environments deserve more attention, especially in view of the design and roll-out of Internet-of-Things networks. The MIMO wireless channel characterization can be tackled from different perspectives that depend on various factors, such as the type and number of transmit and receive antennas, their heights, polarization, and the layout of the propagation channel. The wireless channel is normally influenced by the number of scatterers, their material profile, as well as their distribution around transmitter and receiver. Remarkably, in [1] different approaches have been used to study a Single-Input Single-Output (SISO) chip-to-chip (C2C) channel in a computer casing. Other investigators [2][3][4] have presented two-ring models for a stationary wireless C2C channel in a metal enclosure with a SISO system. The studies in [5][6] show measurements for ultra-wideband (UWB) board-to-board communication within a computer chassis, and found a uniform propagation environment independent of transmit/receive antenna location, with small variations in channel path gains. Further, in [7] the investigators performed channel measurements in a metal cabinet, and in [8][9] they performed channel measurements in ICT equipment.
In this work, a metal mode-stirred enclosure is used to emulate a complex MIMO
propagation channel. Additionally, the metal enclosure is modified by rotating
an irregular stirrer inside the metal enclosure, under different loading
conditions. This provides a means to emulate a dynamical multi-path channel
supported by an highly-reflective environment. This study complements the
recent study of low-dimensional MIMO systems operated in a large scale
dynamical environment emulated by a reverberation chamber operated with a live
base station [10]. Our observations point towards the need of abetter
understanding of the fundamental propagation mechanisms in complex
electromagnetic environments, as well as their relation with information
theoretic metrics to evaluate the achievable performance of future wireless
communication systems.
After the introduction in Section I, Section II explains the baseband model for MIMO channel measurements and parameter extraction. The SDR-based MIMO measurement setup is explained in Section III. In Section IV, we present key statistical parameters of the MIMO channel, such as delay spread, coherence bandwidth, path loss, Rician factor, and channel capacity statistics, completing their statistical characterization. In Section V we draw conclusions and outline future perspectives.
## II SDR Based Parameter Extraction for Virtual MIMO
The MATLAB/Simulink framework is used as a baseband waveform development
platform to extract the channel transfer function parameters. We use a virtual
MIMO setting to measure MIMO channel transfer matrix through a software-
defined-radio (SDR) platform. The virtual MIMO array measurements are suitable
for wireless environments with fixed antenna arrays. The scattering parameters
are measured using a frequency-sweep approach where the local oscillators of
the transmit and receive Universal Software Radio Peripherals (USRPs), both
X310, are synchronously tuned from 5.50 GHz to 5.70 GHz. The center frequency
of the local oscillators of the transmit and receive USRPs was also tuned
synchronously. The sweep signal generated in Simulink is updated every 0.025
seconds and the local oscillator stability time was 0.1 seconds. Therefore, at every frequency there were four snapshots of $N=10,000$ complex samples, and the total time for one frequency sweep over the 501 frequency points was $0.1\times 501=50.1$ seconds, excluding Simulink initialization and start-up time. The baseband sampling rate was set to 400 ksps and the RF front-end sampling rate was also 400 ksps. The two X310
USRPs received 10 MHz and 1 PPS reference signals from Octoclock-G using equal
length cables. We recorded both complex I and Q samples per SDR channel as
well as received power per SDR channel along with the metadata information of
each measurement.
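The sweep bookkeeping described above can be sketched in a few lines of Python (illustrative only; the 400 kHz step is the frequency resolution from Table I and the 0.1 s dwell is the local-oscillator stability time quoted in the text):

```python
# Frequency-sweep plan for the 5.50-5.70 GHz band with 400 kHz steps
# and 0.1 s of local-oscillator settling per point (values from the text).
f_start, f_stop, f_step = 5.50e9, 5.70e9, 400e3
dwell = 0.1  # seconds per frequency point

n_points = int(round((f_stop - f_start) / f_step)) + 1
freqs = [f_start + k * f_step for k in range(n_points)]
print(n_points)          # -> 501
print(n_points * dwell)  # ~ 50.1 s per sweep, excluding start-up time
```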
The frequency transfer function is measured by sweeping the local oscillators
of the two USRP synchronously. Figure 1 shows the baseband model to perform
the frequency sweep followed by the measurement of the received signal. It was
assured that there is no underflow and overflow during the measurement cycle,
which is important in MIMO channel characterization. This model is thus used to perform MIMO measurements using the virtual MIMO technique, where the channel is measured in two phases. In the first phase the instant channel between transmit antenna 1 and receive antennas 1 and 2 is measured; in the second phase the instant channel between transmit antenna 2 and receive antennas 1 and 2 is measured. Figure 2 shows four frequency sweeps, each of 200 MHz bandwidth. The full MIMO channel measurement can be obtained with the same baseband model without switching transmitting channels. This can be achieved by transmitting two baseband complex sinusoids at different frequencies while sweeping the local oscillators synchronously. In the baseband model, a lowpass filter and a bandpass filter can then be employed to measure the magnitudes of the two received sinusoids.
Figure 1: Simulink model for the MIMO frequency transfer function measurement.
Figure 2: Number of frequency sweeps.
The scattering parameter measurement steps are presented in the flowchart of Fig. 3.
Figure 3: USRP based channel measurement flowchart.
## III Measurement Setup
Compact MIMO wireless systems, including C2C communication platforms [11], can be tested and characterized in the GHz regime by employing our measurement setup, which combines SDR devices with a small reverberation chamber. Here, the SDR-based test-bed was used to obtain the complex frequency transfer function of the MIMO wireless C2C channel. The frequency was swept from 5.50 GHz to 5.70 GHz.
The channel measurements were performed with four identical and vertically
polarized monopole antennas connected with input and output ports of the SDR
using RF cables. Figure 4 shows the sketch of the USRP-based 2x2 MIMO
measurement setup. The two USRPs are connected to independent host Personal
Computers for signal generation and transmission over the fading channel
emulated in the metal enclosure. This is an important aspect to avoid inter-board or intra-board RF interference: a significant channel leakage was observed when the 2x2 MIMO channel was measured using the same X310 USRP. Therefore, this type of setup gives a general assessment of compact MIMO channels, but antenna cabling needs to be handled carefully. Alternatively, a 2x2 MIMO system using independent USRPs as transmitter and receiver could also be developed by connecting both USRPs to a single high-performance PC. This would require the host PC to have dual NICs installed, or one 10G Ethernet card with two interface ports. In this way, data acquisition is performed on the same PC and packet mismatches caused by different baseband model runtimes can be avoided, hence increasing error vector magnitude (EVM) and bit error rate (BER)
our solution the channel transfer function is measured by sweeping the local
oscillator frequency of transmit and receive USRP simultaneously over a
specified bandwidth.
Figure 4: Complete sketch of the no-line-of-sight MIMO channel measurement
setup inside a metal enclosure.
Figure 5 shows the front view of the USRP-based 2x2 MIMO testbed. It shows the front panels of the two X310 USRPs and the two UBX-160 RF daughterboards per USRP. The Octoclock-G is used to supply 10 MHz and 1 PPS synchronization signals to both USRPs. In order to avoid leakage and the effect of intra-board and inter-board interference, a separate UBX-160 RF card is used for each transmit and receive channel.
Figure 5: Front view of the 2x2 MIMO measurement setup based on two X310
USRPs.
Figure 6 shows the sketch of the 2x2 MIMO channel measurement setup, where the transmitter and receiver units are located a distance $D$ apart. Furthermore, the transmit and receive elements are separated by distances $d_{t}$ and $d_{r}$, respectively.
((a))
((b))
Figure 6: MIMO Configuration (a) MIMO channel matrix $\mathbf{H}$ (b) MIMO
channel elements.
((a))
((b))
((c))
((d))
Figure 7: Measurement setup: (a) Overview of measurement setup (b) Inside view
of measurement setup showing panel and inside view of the metal enclosure (c)
Metallic panel with a conducting sheet creating the no-line-of-sight (d)
Asymmetric mode-stirrer.
We used the USRP X310 platforms from Ettus Research for our wideband channel
sounder to perform wideband measurements using four similar vertically
polarized monopole antennas. Inside the enclosure, we also used a metal sheet
of dimension $13\,cm\times 20\,cm$ to create the MIMO no-line-of-sight (NLOS)
propagation environment. This form of channel arrangement is known to have
increased MIMO channel capacity as it reduces the possibility of generating
rank-deficient or degenerate MIMO channels. In other words, it creates a rich
multi-path fading environment which is supposed to maximize the MIMO channel
capacity. The metal enclosure has dimensions of $h\times\ell\times w$ of
$45\,cm\times 37\,cm\times 55\,cm$ with a mechanical mode-stirrer in it as
shown in Fig. 7(d). The mode-stirrer was controlled using an external Yuasa
battery. We performed the NLOS MIMO channel measurements in an empty metal enclosure first, and then in presence of losses, when the metal enclosure was loaded with 2, 4 and 8 RF absorber cones (of the type normally employed to coat the interior of anechoic chambers). We conducted NLOS MIMO channel measurements for two sets of transmit and receive inter-element distances at ten different stirrer positions. Since the SDR sweep time was higher than the coherence time of the channel, we performed frequency response measurements at stationary stirrer positions. The measurements were repeated when the metal enclosure was empty and when it was sequentially loaded with two, four and eight RF absorber cones. The values of the measurement parameters are summarized in Table I.
TABLE I: MEASUREMENT SYSTEM PARAMETERS

Parameters | Value
---|---
Bandwidth | 200 MHz
Measurement points | 501
Start frequency | 5.50 GHz
Stop frequency | 5.70 GHz
Tx/Rx polarization | Vertical
Time resolution | 5 ns
Frequency resolution | 400 kHz
USRP master clock rate | 200 MHz
Sample rate | 400 ksps
USRP decimation/interpolation | 500
## IV Measurement Results
### IV-A MIMO System Model
The discrete-time input-output relationship of a MIMO system with $N_{T}$ transmit antennas and $N_{R}$ receive antennas is given as:
$\mathbf{y=Hx+n},$ (1)
where $\mathbf{y}\in\mathbb{C}^{N_{R}\times 1}$ is the complex-valued received signal, $\mathbf{x}\in\mathbb{C}^{N_{T}\times 1}$ is the complex-valued transmitted signal, $\mathbf{n}\in\mathbb{C}^{N_{R}\times 1}$ is an additive Gaussian noise term, and $\mathbf{H}\in\mathbb{C}^{N_{R}\times N_{T}}$ is the MIMO channel matrix of dimensions $N_{R}\times N_{T}$, with complex-valued subchannel elements denoting path gains between the $N_{T}$ transmit and $N_{R}$ receive antennas.
### IV-B Channel Capacity
The maximum rate of information that a channel can support with arbitrarily low probability of error is known as the channel capacity [12]. The wideband MIMO channel capacity over a bandwidth $B$ is given by [13][14]:
$C=\frac{1}{B}\int_{f_{min}}^{f_{max}}\log_{2}\det\left(\mathbf{I}+\frac{\rho}{N_{T}}\mathbf{H}(f)\mathbf{H}(f)^{H}\right)\,df\quad bits/s/Hz.$ (2)
Equivalently, for measurements performed at $N_{f}$ discrete frequency points distributed across a range $[-B/2,B/2]$, the capacity is defined as:
$C=\frac{1}{N_{f}}\sum_{k=1}^{N_{f}}\log_{2}\det\left(\mathbf{I}+\frac{\rho}{N_{T}}\mathbf{H}(f_{k})\mathbf{H}(f_{k})^{H}\right)\quad bits/s/Hz,$ (3)
where $B$ is the frequency bandwidth, $\mathbf{H}(f)$ is the complex-valued wideband frequency transfer function of the MIMO channel, and $\rho$ is the signal-to-noise ratio (SNR) per receive antenna. A $200\,MHz$ channel bandwidth has been evaluated with 501 subchannels of $400\,kHz$ each. For a proper MIMO system
operation, the MIMO channel matrix should demonstrate useful properties.
Firstly, it should not be rank deficient, i.e., it should not be subject to a reduction of rank due to keyhole or pin-hole channels. More precisely, keyhole channels are known as degenerate MIMO channels whose rank is 1; in this condition, the capacity of a degenerate MIMO channel is equal to that in SISO mode. Secondly, the MIMO channel matrix should exhibit a condition number $\geq 1$, i.e. $\geq 0\,dB$. We recall that the rank of the channel matrix $\mathbf{H}$ is the number of non-zero singular values; it indicates the number of decodable data streams that can be spatially multiplexed across the wireless link.
Operatively, the condition number of a MIMO channel matrix is defined as ratio
of the maximum singular value $\sigma_{max}$ to minimum singular value
$\sigma_{min}$:
$K(\mathbf{H})=\frac{\sigma_{max}}{\sigma_{min}}\geq 1,$ (4)
which can be expressed in dB as:
$K(\mathbf{H})_{dB}=20\log_{10}\left(K(\mathbf{H})\right).$ (5)
We also remind that the singular values $\sigma_{i}$ of the channel matrix are
related to the eigenvalues as $\lambda_{i}=\sigma_{i}^{2}$. Therefore, the
condition number in terms of eigenvalues of MIMO channel matrix $\mathbf{H}$
is given as:
$K(\mathbf{H})_{dB}=10\log_{10}\frac{\lambda_{max}}{\lambda_{min}}\geq 0dB.$
(6)
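For a 2x2 channel, the singular values, condition number, and narrowband capacity follow in closed form from the Hermitian Gram matrix $\mathbf{H}^{H}\mathbf{H}$. The Python sketch below is illustrative only: it assumes $N_{T}=2$, a single frequency point, a full-rank channel, and uses the standard determinant form of the MIMO capacity.

```python
import math

def mimo_metrics(H, rho=10.0):
    """Condition number (dB) and narrowband capacity of a full-rank 2x2
    complex channel matrix H (given as a list of two rows)."""
    # Entries of the Hermitian Gram matrix G = H^H H.
    a = sum(abs(H[i][0])**2 for i in range(2))
    d = sum(abs(H[i][1])**2 for i in range(2))
    b = sum(H[i][0].conjugate() * H[i][1] for i in range(2))
    # Closed-form eigenvalues of the 2x2 Hermitian matrix G.
    mean = (a + d) / 2
    disc = math.sqrt(((a - d) / 2)**2 + abs(b)**2)
    lam_max, lam_min = mean + disc, mean - disc
    K_dB = 10 * math.log10(lam_max / lam_min)  # condition number in dB
    # log2 det(I + (rho/N_T) G), the standard MIMO capacity per subchannel.
    C = math.log2((1 + rho / 2 * lam_max) * (1 + rho / 2 * lam_min))
    return K_dB, C

# An identity channel is perfectly conditioned (0 dB); a keyhole (rank-1)
# channel would drive lam_min, and the second capacity factor, to zero.
K_dB, C = mimo_metrics([[1 + 0j, 0j], [0j, 1 + 0j]])
print(K_dB)  # -> 0.0
```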
Figure 8: Complex transfer function of MIMO subchannels.
Figure 8 shows the transfer function of the MIMO sub-channel elements for our 2x2 wireless link. The transfer function shows that diversity is introduced by the highly reflective environment created by the metal enclosure. Additionally, it can be observed that the transfer functions of the subchannel elements have a similar shape but a different path loss profile in each channel element. This is expected, as no buffer decoupling and matching network has been used to achieve equally strong channels [15].
Figure 9: Eigenvalues of wireless chip-to-chip channel.
Figure 9 shows the empirical cumulative distribution function (CDF) of the eigenvalues of the MIMO wireless channel. It can be seen that the gain difference at $10\%$ probability is $1\,dB$. Therefore, at $10\%$ probability the condition number is $1\,dB$, which meets the expected definition of a proper MIMO channel matrix.
### IV-C Coherence Bandwidth
The coherence bandwidth is measured from the auto-correlation of the frequency transfer function $H(f)$. It is the bandwidth over which the signal fluctuations are considered flat, and it is determined by measuring the width of the frequency auto-correlation function at a particular threshold, usually above 0.5. It is generally helpful for the selection of the design parameters of wireless communication systems in any unknown, complex radio propagation environment. This parameter can greatly affect the complexity of a wireless C2C communication system, as the inclusion and selection of an equalizer as a design component for combating inter-symbol interference (ISI) depends on it [16]. The coherence bandwidth is inversely related to the RMS delay spread; in terms of the RMS delay spread, at a threshold of 0.5 it is given by:
$B_{c}\approx\frac{1}{5\tau_{rms}},$ (7)
while at threshold of 0.9 reads:
$B_{c}\approx\frac{1}{50\tau_{rms}}.$ (8)
Figure 10: Coherence bandwidth of two SDR channels.
The coherence bandwidth can also be defined by using the concept of average
mode-bandwidth in a loaded RC [17][18]. Figure 10 shows the measured coherence
bandwidth of the two MIMO channels. It is shown that the maximum coherence
bandwidth value is $2\,MHz$.
### IV-D Path Loss Model
The path loss is important for the design of wireless links and is calculated by inverting the path gains, as discussed in [19]. The path loss can be expressed as:
$PL(d)_{dB}=PL(d_{0})_{dB}+10\alpha\log_{10}\left(\frac{d}{d_{0}}\right)+X_{\sigma},$ (9)
where $\alpha$ represents the path loss exponent and $X_{\sigma}$ represents the attenuation caused by shadowing, which follows a Gaussian distribution with zero mean and standard deviation $\sigma$. We indicate by $PL(d_{0})_{dB}$ the path loss at the reference distance $d_{0}$:
$PL(d_{0})=20\log_{10}\left(\frac{4\pi d_{0}}{\lambda}\right).$ (10)
The path-loss model is obtained by performing a least-squares fit of the measured data; linear fitting gives the intercept and slope. In this way, similarly to (9), a statistical path loss model with different path loss exponents and shadowing variables is obtained. Figure 11 shows the path loss of the two received USRP channels.
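A least-squares fit of the log-distance model (9) reduces to ordinary linear regression in the variable $10\log_{10}(d/d_{0})$. The sketch below runs on synthetic data (the distances, exponent, and shadowing level are invented for illustration, not the paper's measurements) and recovers the intercept $PL(d_{0})$ and the slope $\alpha$:

```python
import math, random

def fit_path_loss(d, pl_dB, d0=1.0):
    """Least-squares fit of Eq. (9): PL(d) = PL(d0) + 10*alpha*log10(d/d0).
    Returns (PL(d0) in dB, path loss exponent alpha)."""
    x = [10 * math.log10(di / d0) for di in d]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(pl_dB) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, pl_dB))
             / sum((xi - xbar)**2 for xi in x))
    return ybar - slope * xbar, slope  # intercept PL(d0), exponent alpha

# Synthetic measurements: PL(d0) = 40 dB, alpha = 1.8, 0.5 dB shadowing.
random.seed(0)
dist = [0.1 * k for k in range(2, 30)]
pl = [40 + 18 * math.log10(di) + random.gauss(0, 0.5) for di in dist]
pl0, alpha = fit_path_loss(dist, pl)
print(round(alpha, 2), round(pl0, 1))  # close to 1.8 and 40.0
```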
((a))
((b))
Figure 11: Linear fitting of the channel parameters (a) path loss and (b)
power delay profile.
### IV-E Delay Spread
The delay spread is a useful parameter that gives the amount of dispersion a wireless channel undergoes in a physical propagation environment. It is measured by selecting a threshold in dB above the noise floor, or below the maximum peak, of the power delay profile (PDP). Therefore, the number of MPCs received is strictly a function of the threshold and of the dynamic range of the receiver. The delay spread is given by the following expression [20]:
$\tau_{RMS}=\sqrt{\frac{\sum_{k}P(\tau_{k})\tau_{k}^{2}}{\sum_{k}P(\tau_{k})}-\left(\frac{\sum_{k}P(\tau_{k})\tau_{k}}{\sum_{k}P(\tau_{k})}\right)^{2}},$ (11)
or more compactly by:
$\tau_{RMS}=\sqrt{\overline{\tau^{2}}-\overline{\tau}^{2}},$ (12)
where $P(\tau_{k})$ is the power of the $k$th multi-path component (MPC) arriving at delay $\tau_{k}$, $\overline{\tau}$ is the mean excess delay, and $\overline{\tau^{2}}$ is the second moment of the PDP, with:
$\overline{\tau}=\frac{\sum_{k}P(\tau_{k})\tau_{k}}{\sum_{k}P(\tau_{k})}.$
(13)
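The power-weighted moments translate directly into code. The Python sketch below uses the standard definition $\tau_{RMS}=\sqrt{\overline{\tau^{2}}-\overline{\tau}^{2}}$ on a toy exponential PDP (an assumption of this example), sampled at the 5 ns time resolution of Table I, and derives the corresponding coherence-bandwidth estimate of Eq. (7):

```python
import math

def rms_delay_spread(p, tau):
    """Power-weighted RMS delay spread: sqrt(mean(tau^2) - mean(tau)^2)."""
    ptot = sum(p)
    mean_tau = sum(pk * tk for pk, tk in zip(p, tau)) / ptot
    mean_tau2 = sum(pk * tk**2 for pk, tk in zip(p, tau)) / ptot
    return math.sqrt(mean_tau2 - mean_tau**2)

# Toy exponentially decaying PDP with a 30 ns decay constant.
tau = [5e-9 * k for k in range(40)]
pdp = [math.exp(-tk / 30e-9) for tk in tau]
t_rms = rms_delay_spread(pdp, tau)
b_c = 1 / (5 * t_rms)  # coherence bandwidth at the 0.5 threshold, Eq. (7)
print(t_rms, b_c)      # a few tens of ns, and a few MHz
```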
Figure 11(b) shows the PDP of the two orthogonal MIMO channels.
### IV-F Rician Factor
The Rician factor is the ratio between the strong line-of-sight (LOS) component and the scattered NLOS components. Various methods are available in the literature for the calculation of the Rician factor from narrowband and wideband measurements. The Rician factor helps quantify the presence of rich multipath scattering in the radio propagation environment, especially if there is no direct coupling between transmit and receive antennas. Specifically, this is useful when the measurements are performed in a reverberating environment such as an electromagnetic RC or a metal enclosure. The Rician factor is the ratio of the direct power $P_{d}$ to the scattered power $P_{s}$ [21][22][23, 24]:
$K=\frac{P_{d}}{P_{s}}=\frac{|\overline{S21}|^{2}}{\overline{|S21-\overline{S21}|^{2}}}.$
(14)
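Equation (14) can be applied directly to recorded complex samples. The sketch below is illustrative (the sample generation is synthetic, not measured data): it estimates $K$ from $S_{21}$ samples and verifies that a purely diffuse, zero-mean field yields $K\approx 0$, while adding a fixed coherent component raises $K$ to the expected power ratio.

```python
import random

def rician_K(s21):
    """K-factor from complex S21 samples, Eq. (14):
    K = |mean(S21)|^2 / mean(|S21 - mean(S21)|^2)."""
    n = len(s21)
    mu = sum(s21) / n
    p_scattered = sum(abs(s - mu)**2 for s in s21) / n
    return abs(mu)**2 / p_scattered

random.seed(1)
# Zero-mean complex Gaussian samples: a diffuse field with no LOS component.
diffuse = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(20000)]
print(rician_K(diffuse))  # ~ 0 (negligible direct coupling)
# A fixed coherent component of power 9 over scattered power 2 gives K ~ 4.5:
print(rician_K([s + 3 for s in diffuse]))
```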
Figure 12: Complex cloud of MIMO sub-channels centered around zero.
Figure 12 shows the scatter plot of the sub-channel elements of the MIMO channel matrix. The real and imaginary values are grouped around zero, which indicates that sufficient scattering occurs and that the direct coupling, or LOS component, is negligible.
### IV-G Distribution of the Channel Capacity
We now turn our attention to the statistics of the channel capacity $C$, starting from measured data. Figures 13, 14, 15, and 16 show the probability density function (PDF) of the channel capacity for the chamber in empty and selected loaded conditions. The empirical distributions show isolated and dominant peaks and are therefore multimodal. Moreover, the intervals separating consecutive peaks are controlled by the amount of losses, while the multimodality seems to disappear as the losses increase. This behaviour, far from a Gaussian distribution, is hardly analytically accessible. Furthermore, idealised assumptions commonly used for statistical channel modelling, as well as fading characterization, need to be revisited from a theoretical point of view. However, for sufficiently small $\rho$, the range of $C$ allows one to simplify the investigation, i.e. typical realizations of $C$ are given by:
$C\approx\frac{\rho}{2B\ln 2}\int_{f_{min}}^{f_{max}}df\,\mathrm{tr}\left[H(f)H(f)^{\dagger}\right].$ (15)
Therefore, one is left with the statistics of $\mathrm{tr}\left[H(f)H(f)^{\dagger}\right]=\sum_{i,j=1}^{2}|H_{ij}(f)|^{2}$. In Figs. 17 and 18, we evaluate the first two statistics (mean and variance) of:
$\frac{1}{B}\int_{f_{min}}^{f_{max}}df\,|H_{xy}(f)|^{2}.$ (16)
Experiments confirm the monotonicity of its mean and variance as a function of absorption [25]. The appearance of two curves in Figs. 17 and 18 is due to the selective excitation of the transmit antennas while measuring the channel coefficients at the two receiver antennas.
Figure 13: Distribution of the channel capacity with no cones in the $25\,mm\times 25\,mm$ metal enclosure. Figure 14: Distribution of the channel capacity with $2$ cones in the $25\,mm\times 25\,mm$ metal enclosure. Figure 15: Distribution of the channel capacity with $4$ cones in the $25\,mm\times 25\,mm$ metal enclosure. Figure 16: Distribution of the channel capacity with $8$ cones in the $25\,mm\times 25\,mm$ metal enclosure. Figure 17: Mean of Eq. (16) for different cone settings in the $25\,mm\times 25\,mm$ metal enclosure. Figure 18: Variance of Eq. (16) for different cone settings in the $25\,mm\times 25\,mm$ metal enclosure.
## V Conclusion
A thorough statistical characterization of an SDR-based 2x2 MIMO wireless
system operating in a mode-stirred metal enclosure has been performed and
discussed. The MIMO channel measurements extracted from the USRP-based testbed
show the formation of two parallel MIMO channels. In particular, the creation
of an appropriate MIMO channel transfer matrix is supported by the analysis of
key performance indicators under severe dynamical multi-path conditions
emulated by a mechanical mode-stirrer process. Starting from the measured
complex-valued transfer functions of the MIMO wireless channel, the PDP, path
loss, coherence bandwidth, and Rician factor are obtained following definitions
grounded in statistical RC theory. The two channels show substantial gain but
are unbalanced, suggesting that equally strong channels can be achieved if
decoupling and matching networks are used at both the transmit and receive
sides, even in the presence of moderate losses. Additionally, MIMO channel capacity
results show strong non-Gaussianity, which is a function of the losses
introduced in the enclosure. In particular, multimodal probability
distributions are observed, and the multimodality is mitigated by increasing
the losses of the enclosure. Both the ergodic channel capacity and the
associated standard deviation scale monotonically with the losses.
## References
* [1] Z. M. Chen and Y. Zhang, “Inter-chip wireless communication channel: Measurement, characterization, and modeling,” _IEEE Transactions on Antennas and Propagation_ , vol. 55, pp. 978–986, 2007.
* [2] A. Zajic and P. Juyal, “Modeling of THz chip-to-chip wireless channels in metal enclosures,” in _12th European Conference on Antennas and Propagation (EuCAP 2018)_. IET, 2018, pp. 1–5.
* [3] J. Fu, P. Juyal, and A. Zajić, “300 GHz channel characterization of chip-to-chip communication in metal enclosure,” in _2019 13th European Conference on Antennas and Propagation (EuCAP)_. IEEE, 2019, pp. 1–5.
* [4] S. Kim and A. Zajić, “300 GHz path loss measurements on a computer motherboard,” in _2016 10th European Conference on Antennas and Propagation (EuCAP)_. IEEE, 2016, pp. 1–5.
* [5] J. Karedal, A. P. Singh, F. Tufvesson, and A. F. Molisch, “Characterization of a computer board-to-board ultra-wideband channel,” _IEEE Communications letters_ , vol. 11, no. 6, pp. 468–470, 2007.
* [6] S. Redfield, S. Woracheewan, H. Liu, P. Chiang, J. Nejedlo, and R. Khanna, “Understanding the ultrawideband channel characteristics within a computer chassis,” _IEEE Antennas and Wireless Propagation Letters_ , vol. 10, pp. 191–194, 2011.
* [7] S. Khademi, S. P. Chepuri, Z. Irahhauten, G. J. Janssen, and A.-J. van der Veen, “Channel measurements and modeling for a 60 GHz wireless link within a metal cabinet,” _IEEE Transactions on Wireless Communications_ , vol. 14, no. 9, pp. 5098–5110, 2015.
* [8] M. Ohira, T. Umaba, S. Kitazawa, H. Ban, and M. Ueba, “Experimental characterization of microwave radio propagation in ICT equipment for wireless harness communications,” _IEEE Transactions on Antennas and Propagation_ , vol. 59, no. 12, pp. 4757–4765, 2011.
* [9] N. Nakamoto, H. Ban, T. Oka, S. Kitazawa, K. Kobayashi, N. Kikuchi, H. Hatamoto, S. Shimizu, and M. Hara, “Wireless harness inside ICT equipment,” in _2013 15th International Conference on Advanced Communications Technology (ICACT)_. IEEE, 2013, pp. 135–143.
* [10] “MIMO 4x4 vs. MIMO 2x2 performance assessment of a real life LTE base station in a reverberation chamber,” _AEU - International Journal of Electronics and Communications_ , vol. 129, p. 153500, 2021.
* [11] W. Rayess, D. W. Matolak, S. Kaya, and A. K. Kodi, “Antennas and channel characteristics for wireless networks on chips,” _Wireless Personal Communications_ , vol. 95, no. 4, pp. 5039–5056, 2017.
* [12] C.-N. Chuah, D. N. C. Tse, J. M. Kahn, and R. A. Valenzuela, “Capacity scaling in MIMO wireless systems under correlated fading,” _IEEE Transactions on Information Theory_ , vol. 48, no. 3, pp. 637–650, 2002.
* [13] A. F. Molisch, M. Steinbauer, M. Toeltsch, E. Bonek, and R. S. Thoma, “Capacity of MIMO systems based on measured wireless channels,” _IEEE Journal on Selected Areas in Communications_ , vol. 20, no. 3, pp. 561–569, 2002.
* [14] D. Tse and P. Viswanath, _Fundamentals of wireless communication_. Cambridge university press, 2005.
* [15] S. Phang, M. T. Ivrlač, G. Gradoni, S. C. Creagh, G. Tanner, and J. A. Nossek, “Near-field MIMO communication links,” _IEEE Transactions on Circuits and Systems I: Regular Papers_ , vol. 65, no. 9, pp. 3027–3036, 2018.
* [16] G. T. et al., _FETOpen Project NEMF21_ , 2018, https://cordis.europa.eu/project/id/664828/results.
* [17] X. Chen, P.-S. Kildal, and S.-H. Lai, “Estimation of average Rician K-factor and average mode bandwidth in loaded reverberation chamber,” _IEEE Antennas and Wireless Propagation Letters_ , vol. 10, pp. 1437–1440, 2011.
* [18] O. Delangre, P. De Doncker, M. Lienard, and P. Degauque, “Delay spread and coherence bandwidth in reverberation chamber,” _Electronics letters_ , vol. 44, no. 5, pp. 328–329, 2008.
* [19] M. Kim, Y. Konishi, Y. Chang, and J.-i. Takada, “Large scale parameters and double-directional characterization of indoor wideband radio multipath channels at 11 GHz,” _IEEE Transactions on Antennas and Propagation_ , vol. 62, no. 1, pp. 430–441, 2013.
* [20] T. S. Rappaport _et al._ , _Wireless communications: principles and practice_. Prentice Hall PTR, New Jersey, 1996, vol. 2.
* [21] C. L. Holloway, D. A. Hill, J. M. Ladbury, P. F. Wilson, G. Koepke, and J. Coder, “On the use of reverberation chambers to simulate a Rician radio environment for the testing of wireless devices,” _IEEE Transactions on Antennas and Propagation_ , vol. 54, no. 11, pp. 3167–3177, 2006.
* [22] J. D. Sánchez-Heredia, J. F. Valenzuela-Valdés, A. M. Martínez-González, and D. A. Sanchez-Hernandez, “Emulation of MIMO Rician-fading environments with mode-stirred reverberation chambers,” _IEEE Transactions on Antennas and Propagation_ , vol. 59, no. 2, pp. 654–660, 2010.
* [23] C. Lemoine, P. Besnier, and M. Drissi, “Advanced method for estimating direct-to-scattered ratio of Rician channel in reverberation chamber,” _Electronics Letters_ , vol. 45, no. 4, pp. 194–196, 2009.
* [24] V. M. Primiani, M. Barazzetta, L. Bastianelli, D. Micheli, E. Moglie, R. Diamanti, and G. Gradoni, “Reverberation chambers for testing wireless devices and systems,” _IEEE Electromagnetic Compatibility Magazine_ , vol. 9, no. 2, pp. 45–55, 2020.
* [25] G. Gradoni, M. Richter, S. Phang, S. B. Fedeli, U. Kuhl, O. Legrand, and A. Ishimaru, “Statistical model for mimo propagation channel in cavities and random media,” in _2020 XXXIIIrd General Assembly and Scientific Symposium of the International Union of Radio Science_ , 2020, pp. 1–4.
# Disorder effects in topological insulator thin films
Yi Huang (黄奕) and B. I. Shklovskii
School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455, USA
(August 27, 2024)
###### Abstract
Thin films of topological insulators (TI) attract considerable attention
because of expected topological effects from the inter-surface hybridization
of Dirac points. However, these effects may be washed out by an unexpectedly
large energy smearing $\Gamma$ of the surface Dirac points by the random
potential of abundant Coulomb impurities. We show that in a typical TI film with large dielectric
constant $\sim 50$ sandwiched between two low dielectric constant layers, the
Rytova-Chaplik-Entin-Keldysh modification of the Coulomb potential of a charge
impurity allows a larger number of the film impurities to contribute to
$\Gamma$. As a result, $\Gamma$ is large and independent of the TI film
thickness $d$ for $d>5$ nm. In thinner films $\Gamma$ grows with decreasing
$d$ due to reduction of screening by the hybridization gap. We study the
surface conductivity away from the neutrality point and at the neutrality
point. In the latter case, we find the maximum TI film thickness at which the
hybridization gap is still able to make a TI film insulating and allow
observation of the quantum spin Hall effect, $d_{\max}\sim 7$ nm.
## I Introduction
Topological insulators (TI) continue to generate strong interest because
their surfaces host massless Dirac states on the background of the bulk energy
gap. Typically, as-grown TI crystals are heavily doped semiconductors with a
donor concentration $\sim 10^{19}$ cm-3. (For definiteness, we discuss the
n-type case, where the Fermi level is high in the conduction band.) However, to
employ Dirac states in transport, one has to move the Fermi level close to the
Dirac point. In bulk crystals, this is done by intentional compensation of
donors with an almost equal concentration of acceptors. With increasing degree
of compensation, the Fermi level shifts from the conduction band to inside the
gap and eventually arrives at the surface Dirac points.
This seemingly easy solution of the Fermi-level problem, however, comes with a
price Skinner _et al._ (2012). In fully compensated TI, all donors and
acceptors are charged, and these charges randomly distributed in space create
random potential fluctuations as large as the TI semiconductor gap. These
fluctuations create equal numbers of electron and hole puddles, and
substantially reduce the activation energy of the bulk transport Ren _et al._
(2011); Knispel _et al._ (2017). At the same time near the surface, the
random potential of charged impurities smears the Dirac point by the energy
$\Gamma$ self-consistently determined by the surface electrons screening
Skinner and Shklovskii (2013); Skinner _et al._ (2013). This smearing was
observed by scanning tunneling microscopy Beidenkopf _et al._ (2011). It
should also determine the width of the Landau levels of Dirac electrons and
the quantum relaxation time $\tau_{q}=\hbar/\Gamma$ as measured by Shubnikov-de-
Haas oscillations.
Recently, TI research has shifted to thin TI films with thicknesses in the
$d<20$ nm range
Zhang _et al._ (2010); Kim _et al._ (2013); Nandi _et al._ (2018); Chong
_et al._ (2020); Chaudhuri _et al._ (2021); Di Bernardo _et al._ (2021);
Wang _et al._ (2020). This interest is related to observations of the inter-
surface hybridization leading to the Dirac-point hybridization gaps
$\Delta(d)$ and related topological effects, including the quantum spin Hall
effect Chong _et al._ (2020). However, such observations are obscured by
unexpectedly large effects of disorder. One might think that the role of
disorder in thin TI films should be smaller than in bulk TI. Indeed, at a
given total 3D concentration of charged impurities $N$, their 2D
concentration $Nd$ in a thin TI film is quite small. In a thin film, the Fermi level
can be shifted to the Dirac point by the gate parallel to the TI film (see
Figure 1). Therefore, one might expect that the compensation by acceptors can
be avoided to get a much smaller $\Gamma(d)$. However, a distant gate can only
compensate the average charge density of donors. Local fluctuations of the
donor concentration and charge density still create a large random potential
that, after self-consistent screening by surface electrons, results in a large
Dirac point smearing energy $\Gamma(d)$.
In this paper, we show that in a typical TI film with large dielectric
constant $\sim 50$ sandwiched between two low dielectric constant layers, the
Rytova-Chaplik-Entin-Keldysh modification of the Coulomb potential of a charge
impurity slows down the potential decay in space and allows a larger number
of the film impurities to contribute to $\Gamma$. As a result, $\Gamma$ is
large and independent of the TI film thickness $d$ for $d>5$ nm. At smaller
thickness, $\Gamma$ grows with decreasing $d$ due to the reduction of
screening caused by the hybridization gap. We also study the surface
conductivity, both far from the neutrality point, where the kinetic energy is
much larger than $\Gamma$, and at the neutrality point. In the latter case, we
find the maximum thickness at which the hybridization gap makes a TI film
insulating and allows observation of the quantum spin Hall effect,
$d_{\max}\sim 7$ nm.
Contrary to the bulk case, what happens in a thin TI film strongly depends
on the average dielectric constant of the film environment,
$\kappa=(\kappa_{1}+\kappa_{2})/2$, where indices 1 and 2 refer to the two
sides of the film (see Figure 1). Below we consider three different cases:
$\kappa_{f}\gg\kappa$, $\kappa_{f}=\kappa$, and $\kappa_{f}\ll\kappa$.
Figure 1: TI thin film of thickness $d$ with dielectric constant $\kappa_{f}$
deposited on the substrate with dielectric constant $\kappa_{2}$. The top
metallic gate is separated from the film by a spacer of thickness $D$ with
dielectric constant $\kappa_{1}$. The topological surfaces are shown by blue
lines. In the case $\kappa_{f}\gg\kappa$, a typical charge impurity is shown
by a red circle with its electric field $\mathbf{E}$ (black) channeling
through the film for a distance $\lambda$ before exiting outside. In similar
topologically trivial films, the electric field exits at a larger distance
$r_{0}$.
In Section II we calculate $\Gamma$ at the neutrality point for the most
interesting first case, when the potential of a charge impurity is described
by Rytova Rytova (1967), Chaplik and Entin Chaplik and Entin (1971), and
Keldysh Keldysh (1979). In Section III we study the cases $\kappa_{f}=\kappa$
and $\kappa_{f}\ll\kappa$. In Section IV we comment on the role of the gate
when it is close to the film surface. In Section V we calculate the
conductivity of a TI film ignoring the hybridization gap. In Section VI we
concentrate on the effect of the hybridization gap on the conductivity at the
neutrality point and find the maximum thickness $d_{\max}$ at which a TI film
is still insulating, so that one can observe the quantum spin Hall effect.
## II Thin TI film in small dielectric constant environment
In this section, we calculate $\Gamma(d)$ in the case of
$\kappa_{f}\gg\kappa$. For example Wang _et al._ (2020); Chong _et al._
(2020), a BiSbTeSe2 (BSTS) thin film with $\kappa_{f}\sim 50$ can be
sandwiched between two h-BN layers with $\kappa_{1,2}\sim 5$ Laturia _et al._
(2018). In this case $\kappa\sim 5$ is 10 times smaller than $\kappa_{f}$. If
$\kappa_{f}\gg\kappa$, the electric field of a charged impurity inside the
thin film is trapped inside the film up to a distance
$r_{0}=(\kappa_{f}/2\kappa)d$, and only at $r>r_{0}$ does the electric field
exit to the environment. This leads to the effective Coulomb interaction with
asymptotic expressions Chaplik and Entin (1971),
$v_{0}(\mathbf{r})\approx\begin{cases}\frac{e^{2}}{\kappa r},\quad&r>r_{0},\\\
-\frac{e^{2}}{\kappa
r_{0}}\quantity[\ln(r/2r_{0})+\gamma],\quad&d<r<r_{0},\end{cases}$ (1)
where $\mathbf{r}$ is a 2D vector in the plane of TI film, and $\gamma=0.577$
is the Euler constant. The Fourier transform of $v_{0}(\mathbf{r})$ is
$v_{0}(q)=\frac{2\pi e^{2}}{\kappa q(1+qr_{0})},$ (2)
valid for $q<1/d$.
In a TI film, the electric field of a charged impurity experiences additional
screening by Dirac electrons living on the surfaces of the film. To describe
this screening, we start from the equation for the electric potential of
screened charged impurities $\phi(\mathbf{r})$
$\mu[n(\mathbf{r})]-e\phi(\mathbf{r})=E_{F},$ (3)
where $E_{F}={\rm const.}$ is the Fermi level (electro-chemical potential),
$\mu[n(\mathbf{r})]=\hbar v_{F}k_{F}[n(\mathbf{r})]$ is the (local) chemical
potential, $v_{F}$ is the velocity near the Dirac cone, and
$k_{F}[n(\mathbf{r})]=\sqrt{4\pi\absolutevalue{n(\mathbf{r})}}$ is the local
Fermi wave vector. If the average chemical potential $\mu$ is large enough, so
that $\mu^{2}\gg e^{2}\phi^{2}$, then $\mu[n(\mathbf{r})]$ can be linearized
in the local carrier density variation $\delta n(\mathbf{r})$
$\displaystyle\mu[n(\mathbf{r})]\approx\mu+\delta n(\mathbf{r})/\nu(\mu).$ (4)
where $\nu(\mu)=d{n}/d{\mu}=\mu/(2\pi(\hbar v_{F})^{2})$ is the thermodynamic
density of states (TDOS) at zero temperature. Introducing the effective fine
structure constant $\alpha=e^{2}/\kappa_{f}\hbar v_{F}$, we can write the TDOS
as
$\nu(\mu)=\frac{\kappa_{f}^{2}\alpha^{2}}{2\pi e^{4}}\mu.$ (5)
In the Thomas-Fermi (TF) approximation (justified as long as $\alpha\ll 1$;
see the related discussion in Ref. Skinner and Shklovskii, 2013), the
screening by surface electrons can be described by the dielectric
function
where the TF polarization bubble is $\Pi_{TF}=-\nu(\mu)$, and the bare
interaction $v_{0}(q)$ is given by Eq. (2). We arrive at the screened
potential of one charge impurity within a thin TI film
$v(q)=\frac{v_{0}(q)}{\epsilon(q)}=\frac{2\pi
e^{2}}{\kappa[q(1+qr_{0})+q_{s}]},$ (7)
where $q_{s}=2\pi e^{2}\nu/\kappa$ and $q<1/d$.
We see that if $q_{s}r_{0}\gg 1$, then, unlike in uniform 3D dielectrics,
inside the TI film strong screening happens at the distance
$\lambda=(r_{0}/q_{s})^{1/2}.$ (8)
Indeed, the behavior of $v(q)$ changes at $q=\lambda^{-1}$:
$v(q)\approx\begin{cases}\frac{2\pi e^{2}}{\kappa
q_{s}},\quad&q<\lambda^{-1},\\\ \frac{2\pi e^{2}}{\kappa
q^{2}r_{0}},\quad&\lambda^{-1}<q<d^{-1}.\end{cases}$ (9)
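The two regimes of Eq. (9) can be checked directly against the full screened potential of Eq. (7). A quick numerical sketch in dimensionless units with $2\pi e^{2}/\kappa=1$ (the values of $r_{0}$ and $q_{s}$ below are arbitrary illustrative choices satisfying $q_{s}r_{0}\gg 1$):

```python
import numpy as np

r0, qs = 100.0, 10.0      # illustrative choice with qs*r0 = 1000 >> 1
lam = np.sqrt(r0 / qs)    # screening length, Eq. (8)

def v(q):
    """Screened potential, Eq. (7), in units where 2*pi*e^2/kappa = 1."""
    return 1.0 / (q * (1.0 + q * r0) + qs)

q_lo = 0.01 / lam         # q << 1/lambda
q_hi = 10.0 / lam         # 1/lambda << q (still < 1/d for a thin film)

flat_ratio = v(q_lo) * qs              # ~1 when v ≈ 1/qs, first line of Eq. (9)
quad_ratio = v(q_hi) * q_hi**2 * r0    # ~1 when v ≈ 1/(q^2 r0), second line
```

Both ratios come out within a few percent of unity, confirming the crossover at $q=\lambda^{-1}$.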
Figure 2: Log-log plot of the screened interaction $v(r)$ for different
$q_{s}$. The gray lines are obtained by the Fourier transform of $2\pi
e^{2}/\kappa(q+q_{s})$, showing the $\sim r^{-3}$ behavior at large distance.
The behavior of $v(r)$ for different values of $q_{s}$ is shown in Figure 2.
At large distance, $r\gg\lambda$, we get $v(r)\simeq e^{2}/\kappa
q_{s}^{2}r^{3}$, as for a quadrupole. The difference between topological and
topologically trivial films is also schematically illustrated in Figure 1.
Assuming that impurities are randomly distributed inside the film, the mean
squared fluctuation of the potential is given by (here we drop the short-
distance contribution $r<d$ to the potential $\phi(\mathbf{r})$, which is the
standard Coulomb potential; this does not change the result substantially as
long as $d\ll\lambda$)
$\displaystyle\expectationvalue{\phi^{2}}$ $\displaystyle=\frac{1}{e^{2}}\int
d\mathbf{r}v^{2}(\mathbf{r})Nd$ $\displaystyle=\frac{2\pi
Nde^{2}}{\kappa^{2}}f(q_{s}r_{0}),$ (10)
where the function $f(x)$ reads
$\displaystyle f(x)$ $\displaystyle=\frac{2}{4x-1}$
$\displaystyle+\begin{cases}\frac{2}{(1-4x)^{3/2}}\tanh^{-1}{\sqrt{1-4x}},\quad&0<x<1/4,\\\
-\frac{2}{(4x-1)^{3/2}}\tan^{-1}{\sqrt{4x-1}},\quad&x>1/4.\end{cases}$ (11)
We are interested in two limiting cases of the dimensionless parameter
$q_{s}r_{0}=(r_{0}/\lambda)^{2}$:
$\expectationvalue{\phi^{2}}=\frac{2\pi
Nde^{2}}{\kappa^{2}}\begin{cases}(2q_{s}r_{0})^{-1},\quad&\lambda\ll r_{0},\\\
-2-\ln(q_{s}r_{0}),\quad&\lambda\gg r_{0}.\end{cases}$ (12)
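The two lines of Eq. (12) are the limits of $f(x)$ from Eq. (11) at large and small $x=q_{s}r_{0}$. A short numerical check (the probe values of $x$ are arbitrary):

```python
import math

def f(x):
    """f(x) from Eq. (11), entering the potential fluctuation of Eq. (10)."""
    base = 2.0 / (4.0 * x - 1.0)
    if x < 0.25:
        return base + 2.0 / (1.0 - 4.0 * x) ** 1.5 * math.atanh(math.sqrt(1.0 - 4.0 * x))
    return base - 2.0 / (4.0 * x - 1.0) ** 1.5 * math.atan(math.sqrt(4.0 * x - 1.0))

x_small, x_large = 1e-6, 1e4
lim_small = -2.0 - math.log(x_small)   # second line of Eq. (12), x << 1
lim_large = 1.0 / (2.0 * x_large)      # first line of Eq. (12), x >> 1
```

Evaluating, $f(10^{-6})$ matches $-2-\ln x$ to six digits, and $f(10^{4})$ matches $(2x)^{-1}$ to about a percent.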
There is a simple qualitative interpretation of the limiting expressions of
$\expectationvalue{\phi^{2}}$. In the case when $\lambda\ll r_{0}$ (or
$q_{s}r_{0}\gg 1$), screening by surface electrons cuts off the impurity
potential at distance $\lambda$ from the impurity center. The fluctuation of
the number of impurities inside radius $\lambda$ is equal to
$(Nd\lambda^{2})^{1/2}$. Since each charge impurity of this area contributes
to the potential $\sim e/\kappa r_{0}$ [see Eq. (1)], we get
$\expectationvalue{\phi^{2}}\sim(Nd\lambda^{2})(e/\kappa r_{0})^{2}$, namely
the first line of Eq. (12). On the other hand, at $\lambda\gg r_{0}$ (or
$q_{s}r_{0}\ll 1$) the potential of an impurity $v(\mathbf{r})$ follows Eq. (1)
with effective screening length $r_{0}$. Taking into account that the
fluctuation of the number of impurities inside radius $r_{0}$ is
$\sim\sqrt{Ndr_{0}^{2}}$, we arrive at the second line of Eq. (12).
We are interested in the charge neutrality point where $E_{F}=0$, and $\phi$
has the Gaussian distribution function with $\expectationvalue{\phi}=0$ and
$\expectationvalue{\phi^{2}}=\Gamma^{2}/e^{2}$. Next, we want to calculate the
average density of states $\expectationvalue{\nu}$ using the Gaussian
distribution function of $\phi$,
$\expectationvalue{\nu}=\int_{-\infty}^{\infty}d(e\phi)2\nu(e\phi)\frac{e^{-e^{2}\phi^{2}/2\Gamma^{2}}}{\sqrt{2\pi\Gamma^{2}}}=\frac{2\alpha^{2}\kappa_{f}^{2}\Gamma}{(2\pi^{3})^{1/2}e^{4}}.$
(13)
Here we multiply the density of states by a factor of 2 because the potential
at each surface is screened by electrons of both the top and bottom TI
surfaces. The above use of the potential $\sim e^{2}/\kappa r_{0}$ inside
the TI film at distances $d<r<\lambda$ from a Coulomb impurity is
valid only for $d<\lambda$. This condition is equivalent to $d\lesssim
d_{c}=\alpha^{-4/3}N^{-1/3}$ (the value of $d_{c}$ is determined after
$\lambda(d)$ is obtained self-consistently in Eq. (17)). For thicker films,
$d>d_{c}$, one should think about two separate surfaces, as in a bulk sample,
where each surface screens its own random potential Skinner and Shklovskii
(2013). One can then also identify the lower limit of applicability of the
large-$d$ theory Skinner and Shklovskii (2013), $d_{c}$, with the screening
radius $r_{s}$ of a single surface found in Ref. Skinner and Shklovskii (2013).
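The Gaussian average in Eq. (13) can be verified numerically. In units where $e=\kappa_{f}=\alpha=\Gamma=1$, the right-hand side is $2/\sqrt{2\pi^{3}}\approx 0.254$; a minimal sketch using grid integration:

```python
import numpy as np

Gamma = 1.0                                 # disorder amplitude, units e = kappa_f = alpha = 1
nu = lambda mu: np.abs(mu) / (2.0 * np.pi)  # TDOS of Eq. (5) in these units

x = np.linspace(-10.0 * Gamma, 10.0 * Gamma, 200001)  # grid in e*phi
dx = x[1] - x[0]
gauss = np.exp(-x**2 / (2.0 * Gamma**2)) / (np.sqrt(2.0 * np.pi) * Gamma)

# Left side of Eq. (13); the factor 2 accounts for the two TI surfaces
nu_avg = np.sum(2.0 * nu(x) * gauss) * dx

nu_pred = 2.0 * Gamma / np.sqrt(2.0 * np.pi**3)  # closed form of Eq. (13)
```

The grid integral reproduces the closed form to better than $10^{-4}$.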
At $d\lesssim d_{c}$, replacing $\nu$ by $\expectationvalue{\nu}$ in
$q_{s}=2\pi e^{2}\nu/\kappa$, we have
$q_{s}=\sqrt{\frac{8}{\pi}}\frac{\alpha^{2}\kappa_{f}^{2}\Gamma}{\kappa
e^{2}}.$ (14)
Now one can solve for $\Gamma$ and $q_{s}$ self-consistently using Eqs. (12)
and (14). If $\lambda\ll r_{0}$, then
$\displaystyle\Gamma$
$\displaystyle=\quantity(\frac{\pi^{3}}{2})^{1/6}\frac{e^{2}N^{1/3}}{\kappa_{f}\alpha^{2/3}},$
(15) $\displaystyle q_{s}$
$\displaystyle=2^{4/3}\alpha^{4/3}\frac{\kappa_{f}}{\kappa}N^{1/3},$ (16)
$\displaystyle\lambda$ $\displaystyle=2^{-7/6}\alpha^{-2/3}(Nd^{3})^{-1/6}d.$
(17)
The result for $\Gamma$ is independent of $d$ and, up to a numerical factor, is
the same as the result obtained earlier for bulk samples Skinner and
Shklovskii (2013). Therefore, our $\Gamma$ easily matches that of Ref. Skinner
and Shklovskii (2013) at $d=d_{c}$. To ensure self-consistency, one should
check whether the assumption $q_{s}r_{0}\gg 1$, with $q_{s}$ given by Eq. (16),
is correct. We find that Eqs. (15), (16) and (17) are valid if $d\gg
d_{1}=(\kappa/\kappa_{f})^{2}\alpha^{-4/3}N^{-1/3}$. Note that at the
neutrality point Eq. (17) provides the typical size of puddles, while the
concentration of electrons and holes in puddles is
$n_{p}\sim(Nd\lambda^{2})^{1/2}/\lambda^{2}\sim(\alpha N)^{2/3}$. This
concentration does not depend on $d$ and is the same as the puddle
concentration at the surface of a bulk sample Skinner and Shklovskii (2013).
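Eqs. (15)-(17) can be evaluated directly for the BSTS-like parameters used later in the text (a minimal sketch in Gaussian units with $e^{2}=1.44$ eV nm; the thickness $d=5$ nm is an illustrative choice):

```python
import numpy as np

e2 = 1.44                    # e^2 in Gaussian units, eV*nm
kappa_f, kappa = 50.0, 5.0   # scenario (a): BSTS film between h-BN layers
alpha = 1.0 / 7.0
N = 1e19 * 1e-21             # 10^19 cm^-3 expressed in nm^-3

Gamma_eV = ((np.pi**3 / 2.0)**(1.0 / 6.0)
            * e2 * N**(1.0 / 3.0) / (kappa_f * alpha**(2.0 / 3.0)))  # Eq. (15)
qs = 2.0**(4.0 / 3.0) * alpha**(4.0 / 3.0) * (kappa_f / kappa) * N**(1.0 / 3.0)  # Eq. (16), nm^-1

d = 5.0                                                              # film thickness, nm
lam = 2.0**(-7.0 / 6.0) * alpha**(-2.0 / 3.0) * (N * d**3)**(-1.0 / 6.0) * d  # Eq. (17), nm
r0 = kappa_f / (2.0 * kappa) * d                                     # field-channeling length, nm
```

This yields $\Gamma\approx 36$ meV, $\lambda/d\approx 1.6$, and $q_{s}r_{0}\approx 10\gg 1$, consistent with the self-consistency assumption and with the numbers quoted below.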
In the other limiting case, $\lambda\gg r_{0}$, to first approximation we
have
$\displaystyle\Gamma$ $\displaystyle\approx\quantity{\frac{2\pi
Nde^{4}}{\kappa^{2}}\ln\quantity[\quantity(\frac{\kappa}{\kappa_{f}})^{3}\frac{1}{2\alpha^{2}(Nd^{3})^{1/2}}]}^{1/2},$
(18) $\displaystyle q_{s}$ $\displaystyle\approx
4\quantity(\frac{\alpha\kappa_{f}}{\kappa})^{2}\sqrt{Nd}\quantity{\ln\quantity[\quantity(\frac{\kappa}{\kappa_{f}})^{3}\frac{1}{2\alpha^{2}(Nd^{3})^{1/2}}]}^{1/2}.$
(19)
Eqs. (18) and (19) are valid if $d\ll d_{1}$, i.e., the arguments of
logarithms are much larger than unity.
Figure 3: Log-log plot of the disorder potential amplitude $\Gamma$ as a
function of thickness $d$ for two examples of scenarios (a) and (b). We use
$N=10^{19}$ cm-3, $\kappa_{f}=50$, and $\alpha^{-1}=7$ in order to relate the
dimensionless left-vertical and bottom-horizontal axes to the right-vertical
and top-horizontal dimensional axes. The blue solid line corresponds to the
scenario (a): $\kappa_{f}/\kappa>\alpha^{-1}>1$ (in this example we use such
$\kappa=\kappa_{a}$ that $\kappa_{f}/\kappa_{a}=10$). The red dashed curve
corresponds to the alternative scenario (b):
$\alpha^{-1}>\kappa_{f}/\kappa\geq 1$ (for this example we use such
$\kappa=\kappa_{b}$ that $\kappa_{f}/\kappa_{b}=2$ ). On the left-vertical
axis, dimensionless potential amplitude $\Gamma$ in units of $\gamma
e^{2}N^{1/3}/\kappa_{f}$ where $\gamma=(\pi^{3}/2)^{1/6}$ is shown. On the
right-vertical axis, we show $\Gamma$ in units of meV. On the bottom-
horizontal axis we show the characteristic dimensionless TI film widths
$\tilde{d}_{1a,b}=(\kappa_{a,b}/\kappa_{f})^{2}\alpha^{-4/3}$,
$\tilde{d}_{2a,b}=(\kappa_{a,b}/\kappa_{f})^{2/3}$,
$\tilde{d}_{3}=\alpha^{2/3}$, and $\tilde{d}_{c}=\alpha^{-4/3}$. On the top-
horizontal axis, the film thickness $d$ in units of nm is shown. The numbers of
Eqs. (15) and (18), which describe the corresponding parts of the $\Gamma(d)$
plots, are shown next to them.
In order to derive the above results, we assumed that the electric potential
fluctuations follow a Gaussian distribution. This assumption is valid if the
number of impurities contributing substantially to the potential is $M\gg 1$. If
$\lambda>r_{0}$, or $q_{s}r_{0}<1$, then Eq. (19) yields $M=Ndr_{0}^{2}\sim
Nd^{3}(\kappa_{f}/\kappa)^{2}\gg 1$ when $d\gg
d_{2}=(\kappa/\kappa_{f})^{2/3}N^{-1/3}$. On the other hand, if
$\lambda<r_{0}$, or $q_{s}r_{0}>1$, using Eq. (17) we get that
$M=Nd\lambda^{2}\sim(Nd^{3})^{2/3}\alpha^{-4/3}\gg 1$ when $d\gg
d_{3}=\alpha^{2/3}N^{-1/3}$.
In Figure 3 we schematically summarize our results for $\Gamma(d)$ for two
scenarios covering the generic situation with $\alpha^{-1}>1$ and
$\kappa_{f}/\kappa>1$:
Scenario (a) is defined by the inequality $\kappa_{f}/\kappa>\alpha^{-1}>1$. In
this scenario the energy $\Gamma(d)$ is a constant given by Eq. (15) for
$d>\alpha^{2/3}N^{-1/3}$, while for $d<\alpha^{2/3}N^{-1/3}$ the Gaussian
approach fails.
Scenario (b) is defined by the inequality $\alpha^{-1}>\kappa_{f}/\kappa\geq 1$.
In this case the energy $\Gamma(d)$ is a constant given by Eq. (15) for
$d>d_{1}=(\kappa/\kappa_{f})^{2}\alpha^{-4/3}N^{-1/3}$, while $\Gamma(d)$
crosses over to Eq. (18) for
$(\kappa/\kappa_{f})^{2/3}N^{-1/3}=d_{2}<d<d_{1}=(\kappa/\kappa_{f})^{2}\alpha^{-4/3}N^{-1/3}$.
In this scenario, the Gaussian approach fails at $d<d_{2}$. Scenario (b)
includes the case $\kappa=\kappa_{f}$, for which $d_{2}=N^{-1/3}$ and
$d_{1}=d_{c}$, so that for thin films, $d<d_{c}$, the $d$-independent part of
the function $\Gamma(d)$ does not exist.
Let us see how these two scenarios work for TIs based on BSTS-like systems
with $v_{F}\sim 3\times 10^{5}$ m/s, $\kappa_{f}\sim 50$, $\alpha^{-1}\sim 7$,
and $N\simeq 10^{19}$ cm-3. If such a TI film is sandwiched between h-BN
layers, then $\kappa\simeq\kappa_{a}=5$ Laturia _et al._ (2018),
$\kappa_{f}/\kappa\simeq 10>\alpha^{-1}$, bringing us to scenario (a). If the
same film is sandwiched between two layers of HfO2 with
$\kappa\simeq\kappa_{b}=25$ Robertson (2004), then $\kappa_{f}/\kappa\simeq
2<\alpha^{-1}$, and we find ourselves in scenario (b). These two examples are
used in Figure 3 to plot functions $\Gamma(d)$ for both scenarios. In both
scenarios, $d_{c}\sim\alpha^{-4/3}N^{-1/3}\gg d_{1,2,3}$ is the largest length
scale.
In the first example, Eq. (15) obtained for bulk samples Skinner and
Shklovskii (2013) gives $\Gamma\sim 35$ meV, which remains valid down to very
small film thicknesses $d\sim d_{3}\simeq 2$ nm, in spite of the smaller
concentration of impurities $Nd$. Such an unexpectedly strong role of disorder
in thin BSTS-like TI films sandwiched between two low-$\kappa$ layers is a
result of the dielectric constant contrast between the TI film and its
environment, leading to a large contribution from distant Coulomb impurities
to the potential fluctuations.
Above, we ignored the concentration $N_{e}$ of charged impurities in the
environment outside the TI film. Let us now evaluate the role of such
impurities. To save electrostatic energy, the electric field lines of an
impurity at distance $z\lesssim r_{0}$ from the film surface first enter
the TI film and then spread radially inside the film to a distance $\sim
r_{0}$ before exiting the film to infinity. Thus, one can think that
effectively each outside impurity is represented inside the film by a
charge-$e$ disk with radius $z$ and thickness $d$. In the presence of
screening, only a small minority of the outside impurities, those with
$z<\lambda$, contribute to the fluctuating charge of the volume
$d\lambda^{2}$. As a result, the total effective concentration of impurities
projected from outside the film is
$N_{e}\lambda/d$. If $N_{e}\lambda/d<N$, where $\lambda$ is given by Eq. (17),
outside impurities can be ignored and our results are valid. For example, in
our scenario (a), for the BSTS TI film with $d\sim 5$ nm on a silicon oxide
substrate with Shklovskii (2007) $N_{e}\sim 10^{17}$ cm-3, $\lambda/d\sim 2$,
and our results are valid. (On the other hand, if $N_{e}\lambda/d>N$, the
screening length $\lambda$ should be recalculated self-consistently together
with $\Gamma$. Then, instead of Eqs. (15) and (17), we arrive at the new results
$\Gamma\sim(e^{2}N_{e}^{1/3}/\kappa_{f})\alpha^{-6/7}(N_{e}d^{3})^{-1/21}$ and
$\lambda\sim d\alpha^{-4/7}(N_{e}d^{3})^{-1/7}$.)
Let us now discuss the effect of the hybridization gap on the disorder
potential. In a thin enough clean TI film, the surface states of the two
opposite surfaces hybridize, and their Dirac spectra acquire the hybridization
gap
$\Delta(d)=\Delta_{0}\exp(-d/d_{0}),$ (20)
where $\Delta_{0}\sim 1$ eV and $d_{0}\sim 1.8$ nm for Bi0.7Sb1.3Te1.05Se1.95,
while $d_{0}\sim 1.2$ nm for BiSbTe1.5Se1.5 Chong _et al._ (2020). Such a
hybridization gap $\Delta(d)$ equals the scenario-(a) value $\Gamma(d)=35$ meV
at $d=4$ nm for $d_{0}=1.2$ nm, or $d=6$ nm for $d_{0}=1.8$ nm. Thus, our
results should be valid for $d\gtrsim 5$ nm.
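The crossover thicknesses quoted above follow from solving $\Delta(d)=\Gamma$ with Eq. (20); a minimal check (values in meV and nm, taken from the text):

```python
import math

Delta0 = 1000.0  # meV, hybridization gap prefactor Delta_0 ~ 1 eV
Gamma = 35.0     # meV, scenario-(a) disorder amplitude from Eq. (15)

# Thickness at which Delta(d) = Delta0 * exp(-d/d0) equals Gamma, Eq. (20)
d_cross = {d0: d0 * math.log(Delta0 / Gamma) for d0 in (1.2, 1.8)}
# about 4.0 nm for d0 = 1.2 nm and 6.0 nm for d0 = 1.8 nm
```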
More precisely, in the presence of the gap, the local dispersion relation
at the Fermi level becomes
$\mu(\mathbf{r})=\sqrt{\hbar^{2}v_{F}^{2}k_{F}^{2}(\mathbf{r})+\Delta^{2}/4}$,
and the corresponding thermodynamic density of states is given by Eq. (5)
multiplied by a Heaviside theta function $\Theta(\mu-\Delta/2)$. Below we use
$\Gamma_{\Delta}$ for the disorder potential amplitude in the presence of the
hybridization gap $\Delta$, while we continue to use $\Gamma(d)$ for the
result obtained above for $\Delta=0$. If $\Delta\ll\Gamma(d)$, or $d\gg 5$ nm,
one gets perturbatively Skinner _et al._ (2013)
$\Gamma_{\Delta}\approx\Gamma(d)[1+\Delta^{2}/24\Gamma^{2}(d)]$. On the other
hand, if $\Delta\gg\Gamma(d)$, or $d\ll 5$ nm, surface electrons and holes
screen the disorder potential in a nonlinear way, only when it exceeds
$\Delta/2$. Such nonlinear screening leads to
$\Gamma_{\Delta}\simeq\Delta/2>\Gamma(d)$.
## III Thin TI film in the same or larger dielectric-constant environment
In this section, we first consider the case when $\kappa_{f}=\kappa$ and the
Coulomb interaction with a charged impurity is $v_{0}(r)=e^{2}/\kappa r$. In
the TF approximation, the interaction screened by the surface electrons is
given by
$v(r,z)=\frac{e^{2}}{\kappa}\int_{0}^{\infty}dq\frac{J_{0}(qr)}{1+q_{s}/q}e^{-qz},$
(21)
where $J_{0}(x)$ is the zeroth Bessel function of the first kind. The
potential fluctuation squared reads
$\displaystyle\expectationvalue{\phi^{2}}$ $\displaystyle=\frac{1}{e^{2}}\int
Nd\mathbf{r}\int_{0}^{d}dzv^{2}(r,z)$ $\displaystyle=\frac{2\pi
Nde^{2}}{\kappa^{2}}[-e^{2q_{s}d}{\rm Ei}(-2q_{s}d)],$ (22)
where ${\rm Ei}(x)$ is the exponential integral function. Eq. (22) has the
following limits
$\expectationvalue{\phi^{2}}=\frac{2\pi
Nde^{2}}{\kappa^{2}}\begin{cases}(2q_{s}d)^{-1},\quad&q_{s}d\gg 1,\\\
-\gamma-\ln(2q_{s}d),\quad&q_{s}d\ll 1.\end{cases}$ (23)
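The two lines of Eq. (23) are the standard asymptotics of the exponential integral; a quick numerical confirmation (assuming SciPy is available for ${\rm Ei}$; the probe values of $x=2q_{s}d$ are arbitrary):

```python
import numpy as np
from scipy.special import expi

def g(x):
    """The bracket -e^x Ei(-x) of Eq. (22), with x = 2 q_s d."""
    return -np.exp(x) * expi(-x)

x_small, x_large = 1e-6, 100.0
lim_small = -np.euler_gamma - np.log(x_small)  # second line of Eq. (23)
lim_large = 1.0 / x_large                      # first line of Eq. (23)
```

$g(10^{-6})$ matches $-\gamma-\ln x$ to high accuracy, and $g(100)$ matches $1/x$ to about a percent.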
Next, we solve for $\Gamma$ and $q_{s}$ self-consistently, similarly to the
previous section. If $q_{s}d\gg 1$, one obtains the results for $\Gamma$ and
$q_{s}$ given by Eqs. (15) and (16) with $\kappa_{f}=\kappa$ and coefficients
smaller by $2^{1/3}$. On the other hand, if $q_{s}d\ll 1$, one gets the
results for $\Gamma$ and $q_{s}$ given by Eqs. (18) and (19), with
$\kappa_{f}=\kappa$. The first solution exists only at $d>d_{c}$, i.e., in the
bulk case, while the second corresponds to the thin-film case, $d<d_{c}$.
Thus, for $\kappa_{f}=\kappa$ we arrive at the same result as in Section II,
scenario (b).
For a BSTS film with $\kappa_{f}\sim 50$, $\alpha^{-1}\sim 7$, and impurity
concentration $N\sim 10^{19}$ cm$^{-3}$, surrounded by dielectrics with
$\kappa\sim\kappa_{f}$, we get $\Gamma\sim 30$ meV at $d>q_{s}^{-1}\sim 20$
nm.
Let us now briefly consider the case of a large-$\kappa$ environment,
$\kappa_{f}\ll\kappa_{1},\kappa_{2}$. For example, we can imagine a thin TI
film sandwiched between two STO layers, which have a very large dielectric
constant. These layers screen the random potentials of impurities $\Gamma_{1}$
and $\Gamma_{2}$ on both surfaces 1 and 2 and make $\Gamma_{1,2}\ll
e^{2}N^{1/3}/\kappa_{f}$.
If STO is only on side 2 of the TI film, it dramatically reduces $\Gamma_{2}$
on that side, while on the other side the potential is screened by STO only at
distances $r>d$. The number of impurities contributing to $\Gamma_{1}(d)$ is
$\sim\sqrt{Nd^{3}}$, so that
$\Gamma_{1}(d)\sim(e^{2}/\kappa_{f}d)\sqrt{Nd^{3}}$.
For a BSTS film with $\kappa_{f}\sim 50$, $\alpha^{-1}\sim 7$, and impurity
concentration $N\sim 10^{19}$ cm$^{-3}$ sitting on top of STO, we have
$\Gamma_{1}(d)\sim 3\sqrt{d}$ meV, where $d$ is measured in nm. For example,
if $d=10$ nm, then $\Gamma_{1}\sim 10$ meV.
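This estimate is easy to reproduce numerically. The sketch below assumes the Gaussian-units shortcut $e^{2}\equiv e^{2}/4\pi\varepsilon_{0}\approx 1.44$ eV$\cdot$nm; the remaining parameters are those stated in the text.

```python
import math

# Evaluate Gamma_1(d) = (e^2/kappa_f d) * sqrt(N d^3) = (e^2/kappa_f) * sqrt(N d)
# for a BSTS film on STO. e2 = 1.44 eV*nm is an assumed shortcut for
# e^2/(4*pi*eps0); kappa_f and N are taken from the text.
e2 = 1.44              # eV * nm
kappa_f = 50.0
N = 1e19 * 1e-21       # impurity concentration in nm^-3 (10^19 cm^-3)

def gamma1_meV(d_nm):
    return 1e3 * (e2 / kappa_f) * math.sqrt(N * d_nm)

print(gamma1_meV(1.0))   # coefficient of sqrt(d): close to 3 meV
print(gamma1_meV(10.0))  # close to 10 meV at d = 10 nm
```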
## IV Metallic gate
In this section, we return to the case $\kappa_{f}\gg\kappa_{1,2}$ and discuss
the effect on $\Gamma$ of a metallic gate on top of the low-dielectric-constant
layer of thickness $D$ (see Figure 1). To gain some intuition, we start from
the question of how such a gate affects the electric field of a point charge
inside the film, namely, how the gate modifies the Rytova-Keldysh potential
Eq. (2) for a topologically trivial semiconductor film without surface
electrons and their screening. This question was carefully studied in Ref.
Kondovych _et al._ (2017). The main result is that, at small enough separation
$D<4d\kappa_{f}\kappa_{1}/\kappa_{2}^{2}\sim r_{0}$, the large-distance part of
the potential Eq. (2) is truncated (screened) at the distance
$\Lambda=\sqrt{Dd\kappa_{f}/\kappa_{1}}\lesssim r_{0}$. This happens because
electric field lines exit the film toward the gate at distances
$r\gtrsim\Lambda$ (see Note (5)).
Let us now recall what surface-electron screening does to the point charge
potential in a TI film without a gate. We saw in Section II that the TI
surface-electron screening length is $\lambda=\sqrt{r_{0}/q_{s}}$. For a TI
film with a gate, both the gate and the surface electrons screen together.
Comparing the expressions for $\Lambda$ and $\lambda$ we see, as one could
expect, that the distance $D(2\kappa/\kappa_{1})\sim D$, which is essentially
the distance to the gate, should play the role of $q_{s}^{-1}$ (see Note (6)).
This means that if $D\gtrsim q_{s}^{-1}$ the gate plays only a perturbative
role, while in the case $D\lesssim q_{s}^{-1}$ the distance $D$ replaces
$q_{s}^{-1}$ in the final result for $\Gamma$. Replacing $q_{s}^{-1}$ by
$D(2\kappa/\kappa_{1})$ in the case $\lambda\ll r_{0}$ in Eq. (12) yields
$\Gamma=2\quantity(\frac{\pi e^{4}ND}{\kappa_{f}\kappa_{1}})^{1/2}.$ (24)
This result is valid when it is smaller than that of Eq. (15), i.e. at
$D\lesssim D_{c}=N^{-1/3}\alpha^{-4/3}(\kappa_{1}/\kappa_{f})$.
In most experiments $D>D_{c}$, so screening by the gate is negligible
compared to screening by the surface electrons. For example, in Ref. Chong
_et al._ (2020) the gate separation is $D\simeq 20$ nm, while $D_{c}\sim 5$ nm
assuming $\kappa_{f}/\kappa_{1}\sim 10$ and $N\sim 10^{19}$ cm$^{-3}$.
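The quoted value of $D_{c}$ can be reproduced directly; the sketch below uses the parameters stated in the text ($\kappa_{f}/\kappa_{1}\sim 10$, $\alpha^{-1}\sim 7$, $N\sim 10^{19}$ cm$^{-3}$).

```python
# D_c = N^{-1/3} * alpha^{-4/3} * (kappa_1/kappa_f); parameters from the text.
N = 1e19 * 1e-21            # impurity concentration in nm^-3 (10^19 cm^-3)
alpha_inv = 7.0             # alpha^{-1} ~ 7
kappa1_over_kappaf = 0.1    # kappa_f/kappa_1 ~ 10

D_c = N ** (-1 / 3) * alpha_inv ** (4 / 3) * kappa1_over_kappaf
print(D_c)  # a few nm, consistent with D_c ~ 5 nm
```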
## V Conductivity
In this section, we calculate the conductivity of the surface for scenario (a)
of Section II, assuming $D>D_{c}$. In the linear-screening region
$\mu^{2}\gg e^{2}\expectationvalue{\phi^{2}}$, where the electron density is
only weakly perturbed by impurities, the Boltzmann kinetic equation for Dirac
electrons gives the conductivity of a single surface Culcer and Winkler (2008),
$\sigma=\frac{e^{2}}{h}\frac{\mu\tau}{4\hbar}.$ (25)
Here $\tau$ is the transport relaxation time, whose inverse is given by
$\frac{1}{\tau}=\frac{\alpha\kappa_{f}Ndk_{F}}{\pi\hbar
e^{2}}\int_{0}^{\pi}d\theta\,v^{2}(q)(1-\cos\theta)\frac{1}{2}(1+\cos\theta),$
(26)
where $v(q)$ is given by Eq. (7) with $q=2k_{F}\sin(\theta/2)$ and
$q_{s}=k_{F}\alpha\kappa_{f}/\kappa$. The factor $(1+\cos\theta)/2$ in Eq.
(26) arises because backscattering is suppressed as a consequence of the spin
texture at the Dirac point, as in Weyl semimetals Burkov _et al._ (2011).
Changing the integration variable from $\theta$ to $q$, Eq. (26) can be
rewritten as
$\frac{1}{\tau}=\frac{4\pi e^{2}\kappa_{f}\alpha Nd}{\kappa^{2}\hbar
k_{F}}\int_{0}^{2k_{F}}\frac{dq}{2k_{F}}\frac{q^{2}\sqrt{1-(q/2k_{F})^{2}}}{\quantity[q(1+qr_{0})+q_{s}]^{2}}$
(27)
Using $x=q/2k_{F}$, the integral in Eq. (27) can be expressed in a
dimensionless form
$I=\int_{0}^{1}dx\frac{x^{2}\sqrt{1-x^{2}}}{[x(1+2k_{F}r_{0}x)+(\alpha
r_{0}/d)]^{2}}.$ (28)
In scenario (a) we consider $d>d_{3}=\alpha^{2/3}N^{-1/3}$ and
$d<d_{c}=\alpha^{-4/3}N^{-1/3}$. The first inequality means that $k_{F}d>\alpha$,
because $k_{F}>\Gamma/\hbar v_{F}\simeq(\alpha N)^{1/3}$. Therefore we are
interested in Eq. (28) in the limit $k_{F}d\gg\alpha$. In this case
$k_{F}r_{0}\gg\alpha\kappa_{f}/\kappa>1$, so the integral in Eq. (28) is
approximated by
$I\approx\int_{0}^{1}dx\frac{x^{2}\sqrt{1-x^{2}}}{[2k_{F}r_{0}x^{2}+(\alpha
r_{0}/d)]^{2}}$ (29)
The integrand peaks at
$x\simeq(\alpha/2k_{F}d)^{1/2}\simeq(2k_{F}\lambda)^{-1}$, which corresponds
to a momentum transfer $q_{\max}\simeq\lambda^{-1}\ll k_{F}$ [here we used
Eq. (17) for $\lambda$]. The peak value is
$(4q_{s}r_{0})^{-1}=(2k_{F}d\alpha\kappa_{f}^{2}/\kappa^{2})^{-1}$, and the
width of the peak is $\Delta x\sim(k_{F}\lambda)^{-1}$. As a result, in the
limit $k_{F}d\gg\alpha$ the integral is given by
$I\approx\frac{\pi}{2\sqrt{2}}\frac{\kappa^{2}/\kappa_{f}^{2}}{(k_{F}d)^{3/2}\alpha^{1/2}}.$
(30)
and Eq. (27) is
$\frac{1}{\tau}\approx\sqrt{2}\pi^{2}\frac{\alpha^{1/2}e^{2}N}{\kappa_{f}\hbar
k_{F}^{5/2}d^{1/2}}.$ (31)
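The asymptotic evaluation of the integral $I$ can be checked numerically. The sketch below integrates Eq. (28) directly and compares with Eq. (30); it assumes $r_{0}=\kappa_{f}d/2\kappa$ for the Rytova-Keldysh length, and the parameter values ($\alpha^{-1}=7$, $\kappa_{f}/\kappa=10$, $k_{F}d=100$) are illustrative choices deep in the regime $k_{F}d\gg\alpha$.

```python
import math

# Direct midpoint-rule evaluation of Eq. (28) versus the asymptote Eq. (30).
alpha = 1.0 / 7.0
kappa_ratio = 10.0              # kappa_f / kappa (assumed)
kFd = 100.0                     # k_F * d, deep in the regime k_F d >> alpha
r0_over_d = kappa_ratio / 2.0   # assumes r_0 = kappa_f d / (2 kappa)

kFr0 = kFd * r0_over_d          # k_F * r_0
b = alpha * r0_over_d           # the term alpha * r_0 / d in Eq. (28)

def integrand(x):
    return x ** 2 * math.sqrt(1.0 - x ** 2) / (x * (1.0 + 2.0 * kFr0 * x) + b) ** 2

n = 200_000
h = 1.0 / n
I_num = sum(integrand((i + 0.5) * h) for i in range(n)) * h

# Eq. (30): I ~ (pi/(2 sqrt 2)) * (kappa/kappa_f)^2 / ((k_F d)^{3/2} alpha^{1/2})
I_asym = (math.pi / (2.0 * math.sqrt(2.0))) / (kappa_ratio ** 2 * kFd ** 1.5 * alpha ** 0.5)

print(I_num, I_asym)  # the two agree at the ~10% level for these parameters
```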
Substituting Eq. (31) into Eq. (25) with $k_{F}=\sqrt{4\pi n}$, we obtain the
conductivity
$\sigma\approx\frac{e^{2}}{h}\frac{2}{\pi^{1/4}}\frac{n^{7/4}d^{1/2}}{\alpha^{3/2}N},$
(32)
where $d_{3}\lesssim d\lesssim d_{c}$. At $d\sim d_{c}\sim n^{-1/2}/\alpha$ or
$n\sim(\alpha d)^{-2}$, our conductivity Eq. (32) becomes of the order of
$\sigma\sim\frac{e^{2}}{h}\frac{n^{3/2}}{N\alpha^{2}}$ (33)
and with logarithmic accuracy crosses over to the bulk one Skinner and
Shklovskii (2013). At the charge neutrality point $n=n_{p}\sim(\alpha
N)^{2/3}$, we get the minimum conductivity
$\sigma_{\min}\sim(e^{2}/h)(Nd^{3}/\alpha^{2})^{1/6}$ (34)
which is larger than $e^{2}/h$ in the range of its validity $d\gtrsim
d_{3}=\alpha^{2/3}N^{-1/3}$. At $d\sim d_{c}=\alpha^{-4/3}N^{-1/3}$ our
$\sigma_{\min}\sim e^{2}/(h\alpha)$ and with logarithmic accuracy crosses over
to the bulk one Skinner and Shklovskii (2013).
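The two crossovers quoted for $\sigma_{\min}$ follow directly from Eq. (34); the sketch below (with the illustrative $\alpha^{-1}=7$ and $N=10^{19}$ cm$^{-3}$ used earlier) checks that $\sigma_{\min}/(e^{2}/h)\sim(Nd^{3}/\alpha^{2})^{1/6}$ reaches $1$ at $d=d_{3}$ and $1/\alpha$ at $d=d_{c}$.

```python
# Crossovers of sigma_min in Eq. (34) at d_3 and d_c; parameters from the text.
N = 1e19 * 1e-21     # impurity concentration in nm^-3 (10^19 cm^-3)
alpha = 1.0 / 7.0

d3 = alpha ** (2 / 3) * N ** (-1 / 3)   # lower validity limit
dc = alpha ** (-4 / 3) * N ** (-1 / 3)  # crossover to the bulk result

ratio = lambda d: (N * d ** 3 / alpha ** 2) ** (1 / 6)   # sigma_min / (e^2/h)

print(d3, ratio(d3))  # ratio -> 1 at d = d_3
print(dc, ratio(dc))  # ratio -> 1/alpha = 7 at d = d_c
```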
It is remarkable that, due to the Rytova-Chaplik-Entin-Keldysh modification of
the Coulomb potential of a charged impurity, over the large range of electron
concentrations $(\alpha N)^{2/3}\lesssim n\lesssim(\alpha d)^{-2}$ not only
$\Gamma$ but also the conductivity Eq. (32) is determined by the long-range
potential with $q\sim\lambda^{-1}\ll k_{F}$. Only at large $n$ and $d$ is the
conductivity Eq. (33) determined by large-momentum $q\sim k_{F}$ scattering on
the standard Coulomb potentials of impurities located at distances smaller
than $k_{F}^{-1}$ from the TI film surface Skinner and Shklovskii (2013).
The condition of validity of the above conductivity theory is that the local
kinetic energy is larger than $\hbar/\tau$. In the worst case, when $\mu=0$
and the local kinetic energy is of the order of $\Gamma(d)$, using Eqs. (15)
and (31) we get $\Gamma\tau/\hbar\sim\alpha^{-1/3}(Nd^{3})^{1/6}>1$, because
$d>\alpha^{2/3}N^{-1/3}$. This justifies our conductivity results.
## VI Conductivity and hybridization gap
So far we have ignored the effect of the hybridization gap on the conductivity
near the neutrality point $\mu=0$. In the absence of a disorder potential,
with the Fermi level located inside the hybridization gap, the TI film becomes
an insulator. There is great interest in realizing such an insulator in the
BSTS system Chong _et al._ (2020); Nandi _et al._ (2018), which is expected to
show the quantum spin Hall effect and a corresponding four-probe resistance
$h/2e^{2}$ König _et al._ (2008). Below, we suggest a modification of the
theory Nandi _et al._ (2018) of disorder effects on the conductivity of a very
thin TI film near the neutrality point $\mu=0$, using our theory of the random
potential developed in Section II.
We consider the case $\Gamma(d)>\Delta$, so that
$\Gamma_{\Delta}\simeq\Gamma(d)$. The potential of charged impurities
$\phi(\mathbf{r})$ then bends both bands on each surface up and down with
characteristic scale $a$, creating at the Fermi level large electron and hole
puddles with diameter $\sim a(\Gamma/\Delta)^{4/3}$ Polyakov and Shklovskii
(1995). These puddles are separated by thin insulating stripes of width
$x=a\Delta/\Gamma$, which form an infinite insulating cluster (see Fig. 4 in
Ref. Nandi _et al._ (2018)) residing at the percolation level
$\phi(\mathbf{r})=0$ of the potential.
At low temperatures, this system can conduct only if electrons can easily
tunnel across these insulating stripes. In a relatively thick TI film, where
the hybridization gap $\Delta$ is small enough to allow easy tunneling, the
conductivity of the surface states is still metallic. Let us find the upper
limit $\Delta_{c}$ of $\Delta$ for such metallic films. The probability of
Zener-like tunneling across a thin insulating stripe of width $x$ is Nandi
_et al._ (2018)
$\displaystyle P\propto\exp(-\frac{x\Delta}{\hbar
v_{F}})=\exp(-\frac{a\Delta^{2}}{\Gamma\hbar v_{F}}).$ (35)
For the case $\kappa_{f}/\kappa>\alpha^{-1}\gg 1$ studied in Section II, using
$a=\lambda$ and substituting Eqs. (15) and (17) into Eq. (35), we find that
the critical value of the hybridization gap at which $P$ loses its
exponentially small factor is
$\Delta_{c}(d)=\frac{e^{2}}{\kappa_{f}d}\frac{(Nd^{3})^{1/4}}{\alpha^{1/2}}.$
(36)
Substituting Eq. (20) into Eq. (36), we solve for the largest thickness
$d_{\max}$ at which TI films can be considered insulating. We obtain
$d_{\max}=5$ nm for $d_{0}=1.2$ nm and $d_{\max}=8$ nm for $d_{0}=1.8$ nm. In
both cases $\Delta_{c}(d_{\max})<\Gamma(d_{\max})$, so our theory is
applicable.
For the STO case studied in Section III,
$\Gamma=\Gamma_{1}(d)\sim(e^{2}/\kappa_{f}d)\sqrt{Nd^{3}}$ and $a=d$.
Substituting these values into Eq. (35), we arrive at the same
$\Delta_{c}(d)$, Eq. (36) (see Note (7)). Thus, our neutrality-point
conductivity is valid if $d>d_{\max}$, while the spin Hall effect can be
achieved only at $d<d_{\max}$. This result is in qualitative agreement with
the results of Ref. Chong _et al._ (2020). On the other hand, the fact that in
Ref. Nandi _et al._ (2018) the TI film thickness $d=4$ nm was not sufficiently
small still remains unexplained.
###### Acknowledgements.
We are grateful to S.K. Chong, V.V. Deshpande, B. Skinner, and D. Weiss for
useful discussions. Y.H. was partially supported by the William I. Fine
Theoretical Physics Institute.
## References
* Skinner _et al._ (2012) B. Skinner, T. Chen, and B. I. Shklovskii, Phys. Rev. Lett. 109, 176801 (2012).
* Ren _et al._ (2011) Z. Ren, A. A. Taskin, S. Sasaki, K. Segawa, and Y. Ando, Phys. Rev. B 84, 165311 (2011).
* Knispel _et al._ (2017) T. Knispel, W. Jolie, N. Borgwardt, J. Lux, Z. Wang, Y. Ando, A. Rosch, T. Michely, and M. Grüninger, Phys. Rev. B 96, 195135 (2017).
* Skinner and Shklovskii (2013) B. Skinner and B. I. Shklovskii, Phys. Rev. B 87, 075454 (2013).
* Skinner _et al._ (2013) B. Skinner, T. Chen, and B. Shklovskii, Journal of Experimental and Theoretical Physics 117, 579 (2013).
* Beidenkopf _et al._ (2011) H. Beidenkopf, P. Roushan, J. Seo, L. Gorman, I. Drozdov, Y. San Hor, R. J. Cava, and A. Yazdani, Nature Physics 7, 939 (2011).
* Zhang _et al._ (2010) Y. Zhang, K. He, C.-Z. Chang, C.-L. Song, L.-L. Wang, X. Chen, J.-F. Jia, Z. Fang, X. Dai, W.-Y. Shan, _et al._ , Nature Physics 6, 584 (2010).
* Kim _et al._ (2013) D. Kim, P. Syers, N. P. Butch, J. Paglione, and M. S. Fuhrer, Nature communications 4, 2040 (2013).
* Nandi _et al._ (2018) D. Nandi, B. Skinner, G. H. Lee, K.-F. Huang, K. Shain, C.-Z. Chang, Y. Ou, S.-P. Lee, J. Ward, J. S. Moodera, P. Kim, B. I. Halperin, and A. Yacoby, Phys. Rev. B 98, 214203 (2018).
* Chong _et al._ (2020) S. K. Chong, L. Liu, T. D. Sparks, F. Liu, and V. V. Deshpande, “Topological phase transitions in a hybridized three-dimensional topological insulator,” (2020), arXiv:2004.04870 [cond-mat.mes-hall] .
* Chaudhuri _et al._ (2021) D. Chaudhuri, M. Salehi, S. Dasgupta, M. Mondal, J. Moon, D. Jain, S. Oh, and N. P. Armitage, Phys. Rev. B 103, L081110 (2021).
* Di Bernardo _et al._ (2021) I. Di Bernardo, J. Hellerstedt, C. Liu, G. Akhgar, W. Wu, S. A. Yang, D. Culcer, S.-K. Mo, S. Adam, M. T. Edmonds, and M. S. Fuhrer, Advanced Materials 33, 2005897 (2021).
* Wang _et al._ (2020) J. Wang, C. Gorini, K. Richter, Z. Wang, Y. Ando, and D. Weiss, Nano Letters 20, 8493 (2020).
* Rytova (1967) N. S. Rytova, Moscow University Physics Bulletin 3, 18 (1967).
* Chaplik and Entin (1971) A. Chaplik and M. Entin, Journal of Experimental and Theoretical Physics 34, 1335 (1971).
* Keldysh (1979) L. V. Keldysh, Soviet Journal of Experimental and Theoretical Physics Letters 29, 658 (1979).
* Laturia _et al._ (2018) A. Laturia, M. L. Van de Put, and W. G. Vandenberghe, npj 2D Materials and Applications 2, 1 (2018).
* Note (1) TF approximation is justified as long as $\alpha\ll 1$. See related discussion in Ref. skinner2013a.
* Note (2) Here we drop the short distance contribution $r<d$ to the potential $\phi(\mathbf{r})$ which is standard Coulomb potential. This does not change the result substantially as long as $d\ll\lambda$.
* Note (3) The value of $d_{c}$ will be determined after we obtained $\lambda(d)$ self-consistently in Eq. (17).
* Robertson (2004) J. Robertson, The European physical journal applied physics 28, 265 (2004).
* Shklovskii (2007) B. I. Shklovskii, Phys. Rev. B 76, 233411 (2007).
* Note (4) On the other hand, if $N_{e}\lambda/d>N$ the screening length $\lambda$ should be recalculated self-consistently together with $\Gamma$. Then, instead of Eqs. (15) and (17), we arrive at new results $\Gamma\sim(e^{2}N_{e}^{1/3}/\kappa_{f})\alpha^{-6/7}(N_{e}d^{3})^{-1/21}$ and $\lambda\sim d\alpha^{-4/7}(N_{e}d^{3})^{-1/7}$.
* Kondovych _et al._ (2017) S. Kondovych, I. Luk’yanchuk, T. I. Baturina, and V. M. Vinokur, Scientific Reports 7, 42770 (2017).
* Note (5) The length $\Lambda$ can be found also via the following simple variational estimate. Let us assume that electric field lines are confined inside the TI film within radius $\Lambda\ll r_{0}$ from the point charge and exit from the film to the gate in the area $\sim\pi\Lambda^{2}$ . The total electrostatic energy consists of two major contributions, one is the energy of the field inside the film $\sim(e^{2}/\kappa_{f}d)\ln(\Lambda/d)$, and the other one is the field energy in the gate dielectric $\sim e^{2}D/\kappa_{1}\Lambda^{2}$. Minimizing the total energy yields $\Lambda=\sqrt{Dd\kappa_{f}/\kappa_{1}}\sim\sqrt{r_{0}D}$.
* Note (6) The factor $(2\kappa/\kappa_{1})$ is of order unity if $\kappa_{1}\simeq\kappa_{2}$ are not quite different.
* Culcer and Winkler (2008) D. Culcer and R. Winkler, Phys. Rev. B 78, 235417 (2008).
* Burkov _et al._ (2011) A. A. Burkov, M. D. Hook, and L. Balents, Phys. Rev. B 84, 235126 (2011).
* König _et al._ (2008) M. König, H. Buhmann, L. W. Molenkamp, T. Hughes, C.-X. Liu, X.-L. Qi, and S.-C. Zhang, Journal of the Physical Society of Japan 77, 031007 (2008).
* Polyakov and Shklovskii (1995) D. G. Polyakov and B. I. Shklovskii, Phys. Rev. Lett. 74, 150 (1995).
* Note (7) This universal value is larger than the estimate of Ref. Nandi _et al._ (2018), where the bulk TI surface screening radius Skinner and Shklovskii (2013) was used for $a$.